# swirlspy.ver package

## swirlspy.ver.crosstab module

`swirlspy.ver.crosstab.contingency(threshold, forecast, observed)`

Generates contingency table statistics for traditional binary verification.

Parameters:
- threshold (float) – Threshold value of the variable for verification.
- forecast (xarray.DataArray) – xarray containing forecasted values of the variable.
- observed (xarray.DataArray) – xarray containing observed values of the variable.

Returns:
- contingency (tuple) – Tuple with structure (hit, miss, false_alarm, corrneg).

Notes

- hit (int) – number of hits
- miss (int) – number of misses
- false_alarm (int) – number of false alarms
- corrneg (int) – number of correct negatives
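As a rough illustration, the four counts can be reproduced with plain NumPy. This is a local sketch, not the library's implementation: the real function operates on xarray objects, and whether the threshold comparison is `>=` or `>` is an assumption here.

```python
import numpy as np

# Sketch of binary contingency counting (>= at the threshold is assumed).
threshold = 0.5
forecast = np.array([0.8, 0.2, 0.9, 0.1, 0.7])
observed = np.array([1.0, 0.0, 0.0, 0.9, 0.6])

f_yes = forecast >= threshold
o_yes = observed >= threshold

hit = int(np.sum(f_yes & o_yes))           # forecast yes, observed yes
miss = int(np.sum(~f_yes & o_yes))         # forecast no,  observed yes
false_alarm = int(np.sum(f_yes & ~o_yes))  # forecast yes, observed no
corrneg = int(np.sum(~f_yes & ~o_yes))     # forecast no,  observed no

contingency = (hit, miss, false_alarm, corrneg)
```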

## swirlspy.ver.metric module

`swirlspy.ver.metric.FSS(threshold, forecast, observed, winsize)`

Generates the Fractions Skill Score (FSS).

Parameters:
- threshold (float) – Threshold value of the variable for verification.
- forecast (numpy array) – Array containing forecasted values of the variable.
- observed (numpy array) – Array containing observed values of the variable.
- winsize (int) – Window size.

Returns:
- fss (float) – FSS score.
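A minimal sketch of the FSS computation, comparing event fractions in `winsize` × `winsize` neighbourhoods (Roberts and Lean's form). The "valid"-windows-only boundary handling and the `fss` helper are assumptions of this example, not the library's documented behaviour:

```python
import numpy as np
from numpy.lib.stride_tricks import sliding_window_view

def fss(threshold, forecast, observed, winsize):
    # Event fractions in winsize x winsize neighbourhoods; only fully
    # interior ("valid") windows are used, which is an assumption.
    f_frac = sliding_window_view(
        (forecast >= threshold).astype(float), (winsize, winsize)
    ).mean(axis=(2, 3))
    o_frac = sliding_window_view(
        (observed >= threshold).astype(float), (winsize, winsize)
    ).mean(axis=(2, 3))
    mse = np.mean((f_frac - o_frac) ** 2)
    mse_ref = np.mean(f_frac ** 2) + np.mean(o_frac ** 2)
    return 1.0 - mse / mse_ref

rng = np.random.default_rng(0)
field = rng.random((8, 8))
perfect = fss(0.5, field, field, 3)  # identical fields give FSS = 1
```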
`swirlspy.ver.metric.accuracy(cont)`

Calculates accuracy.

Parameters:
- cont (tuple) – Contingency tuple (refer to swirlspy.ver.crosstab).

Returns:
- accuracy (float) – Accuracy.
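In terms of the contingency tuple, accuracy is the fraction of all forecasts that were correct. A local sketch with hypothetical counts (the helper mirrors the API name but is a reimplementation):

```python
def accuracy(cont):
    # (hits + correct negatives) / total number of forecasts
    hit, miss, false_alarm, corrneg = cont
    return (hit + corrneg) / (hit + miss + false_alarm + corrneg)

score = accuracy((25, 5, 10, 60))  # (25 + 60) / 100 = 0.85
```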
`swirlspy.ver.metric.brier_skill_score(forecast, observation)`

Calculates the Brier Skill Score. This function is a wrapper around sklearn.metrics.brier_score_loss.

Parameters:
- forecast (xarray.DataArray) – xarray populated with forecasted probabilities ~ (0, 1).
- observation (xarray.DataArray) – xarray containing binary observation data.

Returns:
- bss (float) – Brier skill score.
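The usual skill-score form compares the forecast's Brier score against a reference; the sketch below assumes a climatological reference (the observed base rate), which the source does not confirm, and reimplements the score in plain NumPy rather than calling sklearn:

```python
import numpy as np

def brier_skill_score(forecast, observation):
    # Brier score of the forecast vs. that of a constant
    # climatology forecast (assumed reference).
    bs = np.mean((forecast - observation) ** 2)
    climatology = observation.mean()
    bs_ref = np.mean((climatology - observation) ** 2)
    return 1.0 - bs / bs_ref

p = np.array([0.9, 0.1, 0.8, 0.3])
o = np.array([1.0, 0.0, 1.0, 0.0])
bss = brier_skill_score(p, o)  # 1 - 0.0375 / 0.25 = 0.85
```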
`swirlspy.ver.metric.csi(cont)`

Calculates the Critical Success Index.

Parameters:
- cont (tuple) – Contingency tuple (refer to swirlspy.ver.crosstab).

Returns:
- critical success index (float) – Critical success index.
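The CSI scores hits against all forecast or observed events, ignoring correct negatives. A local sketch with hypothetical counts:

```python
def csi(cont):
    # hits / (hits + misses + false alarms); correct negatives ignored
    hit, miss, false_alarm, _ = cont
    return hit / (hit + miss + false_alarm)

score = csi((25, 5, 10, 60))  # 25 / 40 = 0.625
```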
`swirlspy.ver.metric.ets(cont)`

Returns the Equitable Threat Score [0 = no skill, 1 = perfect].

Parameters:
- cont (tuple) – Contingency tuple (refer to swirlspy.ver.crosstab).

Returns:
- equitable threat score (float) – Equitable threat score.
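The ETS is the CSI corrected for hits expected by random chance. A reimplemented sketch of the standard formula, with hypothetical counts:

```python
def ets(cont):
    # hits corrected for those expected by chance
    hit, miss, false_alarm, corrneg = cont
    total = hit + miss + false_alarm + corrneg
    hits_random = (hit + miss) * (hit + false_alarm) / total
    return (hit - hits_random) / (hit + miss + false_alarm - hits_random)

score = ets((25, 5, 10, 60))  # (25 - 10.5) / (40 - 10.5)
```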
`swirlspy.ver.metric.f1Score(forecast, observation, average='binary')`

Computes the F1 score.

The F1 score can be seen as a weighted average of precision and recall. It reaches its best value at 1 and its worst at 0. The formula is:

$f1 = 2 \cdot \frac{precision \cdot recall}{precision + recall}$

Parameters:
- forecast (xarray.DataArray) – xarray populated with binary forecast data.
- observation (xarray.DataArray) – xarray containing binary observation data.
- average (string) – Required for multiclass/multilabel targets. If None, the scores for each class are returned. Otherwise, this determines the type of averaging performed on the data:
  - 'binary': Only report results for the class specified by pos_label. Applicable only if targets (y_{true,pred}) are binary.
  - 'micro': Calculate metrics globally by counting the total true positives, false negatives and false positives.
  - 'macro': Calculate metrics for each label, and find their unweighted mean. This does not take label imbalance into account.
  - 'weighted': Calculate metrics for each label, and find their average weighted by support (the number of true instances for each label). This alters 'macro' to account for label imbalance; it can result in an F-score that is not between precision and recall.
  - 'samples': Calculate metrics for each instance, and find their average (only meaningful for multilabel classification, where this differs from accuracy_score).

Returns:
- f1 (float or array of float, shape = [n_unique_labels]) – The F1 score.
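The formula above worked through with hypothetical binary counts (25 hits, 5 misses, 10 false alarms), computed directly rather than via the library:

```python
# Hypothetical counts for a binary forecast.
hit, miss, false_alarm = 25, 5, 10

precision = hit / (hit + false_alarm)  # 25 / 35
recall = hit / (hit + miss)            # 25 / 30
f1 = 2 * precision * recall / (precision + recall)
```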
`swirlspy.ver.metric.far(cont)`

Calculates the False Alarm Ratio.

Parameters:
- cont (tuple) – Contingency tuple (refer to swirlspy.ver.crosstab).

Returns:
- false alarm ratio (float) – False alarm ratio.
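The FAR is the fraction of "yes" forecasts that failed to verify. A local sketch:

```python
def far(cont):
    # false alarms / all "yes" forecasts
    hit, _, false_alarm, _ = cont
    return false_alarm / (hit + false_alarm)

score = far((25, 5, 10, 60))  # 10 / 35
```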
`swirlspy.ver.metric.freq_bias(cont)`

Calculates the Frequency Bias.

Parameters:
- cont (tuple) – Contingency tuple (refer to swirlspy.ver.crosstab).

Returns:
- frequency bias (float or int) – Frequency bias.
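Frequency bias compares how often an event was forecast with how often it was observed; values above 1 indicate over-forecasting. A reimplemented sketch:

```python
def freq_bias(cont):
    # forecast "yes" count relative to observed "yes" count
    hit, miss, false_alarm, _ = cont
    return (hit + false_alarm) / (hit + miss)

score = freq_bias((25, 5, 10, 60))  # 35 / 30, i.e. over-forecasting
```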
`swirlspy.ver.metric.hss(cont)`

Calculates the Heidke Skill Score [0 = no skill, 1 = perfect].

Parameters:
- cont (tuple) – Contingency tuple (refer to swirlspy.ver.crosstab).

Returns:
- Heidke Skill Score (int or float) – The Heidke Skill Score.
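The HSS measures accuracy relative to that expected from random chance. A sketch of the standard two-category formula, which is an assumption about the library's exact form:

```python
def hss(cont):
    # accuracy relative to random-chance accuracy
    hit, miss, false_alarm, corrneg = cont
    numerator = 2 * (hit * corrneg - miss * false_alarm)
    denominator = ((hit + miss) * (miss + corrneg)
                   + (hit + false_alarm) * (false_alarm + corrneg))
    return numerator / denominator

score = hss((25, 5, 10, 60))  # 2900 / 4400
```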
`swirlspy.ver.metric.pod(cont)`

Calculates the Probability of Detection.

Parameters:
- cont (tuple) – Contingency tuple (refer to swirlspy.ver.crosstab).

Returns:
- probability of detection (float) – Probability of detection.
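POD is the fraction of observed events that were correctly forecast. A local sketch:

```python
def pod(cont):
    # hits / all observed events
    hit, miss, _, _ = cont
    return hit / (hit + miss)

score = pod((25, 5, 10, 60))  # 25 / 30
```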
`swirlspy.ver.metric.pofd(cont)`

Calculates the Probability of False Detection.

Parameters:
- cont (tuple) – Contingency tuple (refer to swirlspy.ver.crosstab).

Returns:
- probability of false detection (float) – The probability of false detection.
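POFD is the fraction of observed non-events that were incorrectly forecast as events. A reimplemented sketch:

```python
def pofd(cont):
    # false alarms / all observed non-events
    _, _, false_alarm, corrneg = cont
    return false_alarm / (false_alarm + corrneg)

score = pofd((25, 5, 10, 60))  # 10 / 70
```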
`swirlspy.ver.metric.precision_recall(forecast, observation, average=None)`

Computes precision-recall pairs for different probability thresholds. This function is a wrapper around sklearn.metrics.precision_recall_curve.

Parameters:
- forecast (xarray.DataArray) – xarray populated with forecasted probabilities ~ (0, 1).
- observation (xarray.DataArray) – xarray containing binary observation data.
- average (list of strings) – Decides the averaging in calculating the average precision score. If None, the scores for each class are returned. Otherwise, this determines the type of averaging performed on the data:
  - 'micro': Calculate metrics globally by considering each element of the label indicator matrix as a label.
  - 'macro': Calculate metrics for each label, and find their unweighted mean. This does not take label imbalance into account.
  - 'weighted': Calculate metrics for each label, and find their average, weighted by support (the number of true instances for each label).
  - 'samples': Calculate metrics for each instance, and find their average.
  - 'binary': Only report results for the class specified by pos_label. Applicable only if both forecast and observation arrays are binary.

Returns:
- precision_recall_data (dict) – Contains precision and recall data as keys "precision" and "recall", required for plotting a precision-recall curve. The average precision score, area under the curve and thresholds are included as keys "ap", "auc" and "thresholds" respectively.
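A hypothetical pure-NumPy sketch of how precision-recall pairs arise from thresholding probabilities (the real function delegates to sklearn; `precision_recall_pairs` is a local name, and defining precision as 1 when no positives are predicted is an assumption):

```python
import numpy as np

def precision_recall_pairs(forecast, observation, thresholds):
    # For each threshold, binarise the probabilities and count
    # true positives, false positives and false negatives.
    precision, recall = [], []
    for t in thresholds:
        pred = forecast >= t
        tp = np.sum(pred & (observation == 1))
        fp = np.sum(pred & (observation == 0))
        fn = np.sum(~pred & (observation == 1))
        precision.append(tp / (tp + fp) if tp + fp > 0 else 1.0)
        recall.append(tp / (tp + fn))
    return precision, recall

p = np.array([0.9, 0.7, 0.4, 0.2])
o = np.array([1, 1, 0, 0])
prec, rec = precision_recall_pairs(p, o, [0.5, 0.8])
```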
`swirlspy.ver.metric.reliability(forecast, observation, normalize=False, n_bins=5, strategy='uniform')`

Generates the data required for plotting reliability diagrams. Forecast and observation xarrays are assumed to be geographically aligned. This function is a wrapper around sklearn.calibration.calibration_curve.

Parameters:
- forecast (xarray.DataArray) – xarray populated with forecasted probabilities ~ (0, 1).
- observation (xarray.DataArray) – xarray containing binary observation data.
- normalize (bool) – Whether the forecast values need to be normalized into the interval [0, 1], i.e. they are not proper probabilities. If True, the smallest value is mapped onto 0 and the largest onto 1. Defaults to False.
- n_bins (int) – Number of bins. A bigger number requires more data. Bins with no data points will not be returned, so there may be fewer than n_bins in the return value.
- strategy (str) – Strategy used to define the widths of the bins. Options are 'uniform' and 'quantile'. Defaults to 'uniform'.

Returns:
- reliability_data (xarray.Dataset) – Contains data useful for plotting reliability diagrams. Data variables are observed_rf (observed relative frequency) and nforecasts (number of forecasts). Climatology is included as an attribute.
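A sketch of the binning behind observed_rf and nforecasts, assuming the 'uniform' strategy (the real function wraps sklearn.calibration.calibration_curve; `reliability_points` is a local stand-in):

```python
import numpy as np

def reliability_points(forecast, observation, n_bins=5):
    # Uniform-width bins over [0, 1]; empty bins are dropped,
    # mirroring the documented behaviour.
    edges = np.linspace(0.0, 1.0, n_bins + 1)
    bin_idx = np.digitize(forecast, edges[1:-1])
    observed_rf, nforecasts = [], []
    for b in range(n_bins):
        mask = bin_idx == b
        if mask.any():
            observed_rf.append(float(observation[mask].mean()))
            nforecasts.append(int(mask.sum()))
    return observed_rf, nforecasts

p = np.array([0.05, 0.15, 0.85, 0.95])
o = np.array([0.0, 0.0, 1.0, 1.0])
rf, n = reliability_points(p, o)
```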
`swirlspy.ver.metric.roc(forecast, observation, average=None)`

Computes the Receiver Operating Characteristic. This function is a wrapper around sklearn.metrics.roc_curve and sklearn.metrics.roc_auc_score.

Parameters:
- forecast (xarray.DataArray) – xarray populated with forecasted probabilities ~ (0, 1).
- observation (xarray.DataArray) – xarray containing binary observation data.
- average (string, one of None, 'micro', 'macro' (default), 'samples', 'weighted') – If None, the scores for each class are returned. Otherwise, this determines the type of averaging performed on the data; ignored when observation is binary:
  - 'micro': Calculate metrics globally by considering each element of the label indicator matrix as a label.
  - 'macro': Calculate metrics for each label, and find their unweighted mean. This does not take label imbalance into account.
  - 'weighted': Calculate metrics for each label, and find their average, weighted by support (the number of true instances for each label).
  - 'samples': Calculate metrics for each instance, and find their average.

Returns:
- roc_data (dict) – Contains Probability of Detection and Probability of False Detection as keys "pod" and "pofd", required for plotting ROC curves. Area under the ROC curve and thresholds are included as keys "auc" and "thresholds" respectively.
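A pure-NumPy sketch of how the "pod" and "pofd" curves arise from sweeping a probability threshold (the real function delegates to sklearn; `roc_points` is a local illustration):

```python
import numpy as np

def roc_points(forecast, observation, thresholds):
    # POD (hit rate) and POFD (false alarm rate) per threshold.
    pod, pofd = [], []
    for t in thresholds:
        pred = forecast >= t
        hit = np.sum(pred & (observation == 1))
        miss = np.sum(~pred & (observation == 1))
        false_alarm = np.sum(pred & (observation == 0))
        corrneg = np.sum(~pred & (observation == 0))
        pod.append(hit / (hit + miss))
        pofd.append(false_alarm / (false_alarm + corrneg))
    return pod, pofd

p = np.array([0.9, 0.8, 0.3, 0.1])
o = np.array([1, 1, 0, 0])
pod, pofd = roc_points(p, o, [0.5])
```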