bob.ip.binseg.utils.measure

Functions

- auc(x, y) – Calculates the area under the precision-recall curve (AUC)
- base_measures(tp, fp, tn, fn) – Calculates measures from true/false positive and negative counts
- tricky_division(n, d) – Divides n by d.

Classes

- SmoothedValue(window_size=20) – Track a series of values and provide access to smoothed values over a window or the global series average.
class bob.ip.binseg.utils.measure.SmoothedValue(window_size=20)

    Bases: object

    Track a series of values and provide access to smoothed values over a window or the global series average.
    property median

    property avg
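Only the properties above appear in this reference; a minimal sketch of how such a tracker could be implemented follows. The `update` method, the internal attributes, and the `global_avg` property are assumptions for illustration, not part of the documented API:

```python
from collections import deque
from statistics import median as _median


class SmoothedValue:
    """Tracks a series of values; exposes windowed and global statistics.

    Hypothetical re-implementation sketch; not the library's source.
    """

    def __init__(self, window_size=20):
        self.values = deque(maxlen=window_size)  # sliding window of recent values
        self.total = 0.0  # running sum over the whole series
        self.count = 0  # number of values seen so far

    def update(self, value):
        self.values.append(value)  # old values fall off once the window is full
        self.total += value
        self.count += 1

    @property
    def median(self):
        # median over the current window
        return _median(self.values)

    @property
    def avg(self):
        # average over the current window
        return sum(self.values) / len(self.values)

    @property
    def global_avg(self):
        # average over every value ever seen, not just the window
        return self.total / self.count
```

With a window of 3, pushing 1, 2, 3, 4 leaves [2, 3, 4] in the window, so the windowed average differs from the global one.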
bob.ip.binseg.utils.measure.tricky_division(n, d)

    Divides n by d. Returns 0.0 in case of a division by zero.
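The documented behaviour fits in a couple of lines; a hedged sketch (not the library's source):

```python
def tricky_division(n, d):
    """Divides n by d, returning 0.0 in case of a division by zero."""
    return n / d if d != 0 else 0.0
```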
-
bob.ip.binseg.utils.measure.base_measures(tp, fp, tn, fn)

    Calculates measures from true/false positive and negative counts.

    This function can return standard machine learning measures from true and false positive counts of positives and negatives. For a thorough look into these and alternate names for the returned values, please check Wikipedia's entry on Precision and Recall.
    Parameters

        tp (int) – number of true positives
        fp (int) – number of false positives
        tn (int) – number of true negatives
        fn (int) – number of false negatives
    Returns

        precision (float) – P, AKA positive predictive value (PPV). It corresponds arithmetically to tp/(tp+fp). In the case tp+fp == 0, this function returns zero for precision.

        recall (float) – R, AKA sensitivity, hit rate, or true positive rate (TPR). It corresponds arithmetically to tp/(tp+fn). In the special case where tp+fn == 0, this function returns zero for recall.

        specificity (float) – S, AKA selectivity or true negative rate (TNR). It corresponds arithmetically to tn/(tn+fp). In the special case where tn+fp == 0, this function returns zero for specificity.

        accuracy (float) – A, see Accuracy. It is the proportion of correct predictions (both true positives and true negatives) among the total number of pixels examined. It corresponds arithmetically to (tp+tn)/(tp+tn+fp+fn). This measure includes both true negatives and true positives in the numerator, which makes it sensitive to data or regions without annotations.

        jaccard (float) – J, see Jaccard Index or Similarity. It corresponds arithmetically to tp/(tp+fp+fn). In the special case where tp+fp+fn == 0, this function returns zero for the Jaccard index. The Jaccard index depends on a TP-only numerator, similarly to the F1 score. For regions where there are no annotations, the Jaccard index will always be zero, irrespective of the model output. Accuracy may be a better proxy if one needs to consider the true absence of annotations in a region as part of the measure.

        f1_score (float) – F1, see F1-score. It corresponds arithmetically to 2*P*R/(P+R) or 2*tp/(2*tp+fp+fn). In the special case where P+R == (2*tp+fp+fn) == 0, this function returns zero for the F1-score. The F1 or Dice score depends on a TP-only numerator, similarly to the Jaccard index. For regions where there are no annotations, the F1-score will always be zero, irrespective of the model output. Accuracy may be a better proxy if one needs to consider the true absence of annotations in a region as part of the measure.
bob.ip.binseg.utils.measure.auc(x, y)

    Calculates the area under the precision-recall curve (AUC).

    This function requires a minimum of 2 points and will use the trapezoidal method to calculate the area under a curve bound between [0.0, 1.0]. It interpolates missing points if required. The input x should be continuously increasing or decreasing.

    Parameters

        x (numpy.ndarray) – A 1D numpy array containing continuously increasing or decreasing values for the X coordinate.

        y (numpy.ndarray) – A 1D numpy array containing the Y coordinates of the X values provided in x.
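A minimal sketch of the trapezoidal computation described above. This version only handles a continuously decreasing x by reversing both arrays; the interpolation of missing points mentioned in the docstring is omitted:

```python
import numpy


def auc(x, y):
    """Area under the curve defined by (x, y), via the trapezoidal rule.

    Illustrative sketch only; the library version may do more.
    """
    x = numpy.asarray(x, dtype=float)
    y = numpy.asarray(y, dtype=float)
    if x[0] > x[-1]:
        # accept a continuously decreasing x by reversing both arrays
        x, y = x[::-1], y[::-1]
    # each trapezoid: segment width times the mean of its endpoint heights
    return float(numpy.sum((x[1:] - x[:-1]) * (y[1:] + y[:-1]) / 2.0))
```

For instance, a flat curve y = 1 over x in [0, 1] gives an area of 1.0, and a triangle peaking at (0.5, 1.0) gives 0.5.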