bob.ip.binseg.utils.measure

Functions

  • auc(x, y) – Calculates the area under the precision-recall curve (AUC)

  • base_measures(tp, fp, tn, fn) – Calculates measures from true/false positive and negative counts

  • tricky_division(n, d) – Divides n by d

Classes

  • SmoothedValue([window_size]) – Track a series of values and provide access to smoothed values over a window or the global series average.

class bob.ip.binseg.utils.measure.SmoothedValue(window_size=20)[source]

Bases: object

Track a series of values and provide access to smoothed values over a window or the global series average.

update(value)[source]
property median
property avg
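
For illustration, here is a minimal sketch of how such a tracker can be implemented. The names update, median and avg follow the interface documented above; the deque-backed internals are an assumption made for this sketch, not the package’s actual code:

  # illustrative sketch, not the actual bob.ip.binseg implementation
  import collections
  import numpy

  class SmoothedValue:
      """Tracks a series of values, smoothed over a fixed-size window."""

      def __init__(self, window_size=20):
          # only the last ``window_size`` values enter the smoothed statistics
          self.deque = collections.deque(maxlen=window_size)

      def update(self, value):
          self.deque.append(value)

      @property
      def median(self):
          # median over the current window
          return float(numpy.median(numpy.asarray(self.deque)))

      @property
      def avg(self):
          # arithmetic mean over the current window
          return float(numpy.asarray(self.deque).mean())

Typical usage: create v = SmoothedValue(window_size=5), call v.update(loss) once per training iteration, and read v.avg or v.median for the loss smoothed over the last 5 iterations.
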
bob.ip.binseg.utils.measure.tricky_division(n, d)[source]

Divides n by d. Returns 0.0 in case of a division by zero.
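
In code, the documented contract amounts to something like the following (a sketch of the behaviour described above, not necessarily the package’s exact source):

  def tricky_division(n, d):
      """Divides n by d, returning 0.0 instead of raising on a zero divisor."""
      return n / d if d != 0 else 0.0

This zero-on-empty-denominator behaviour is what lets base_measures() below return zero for precision, recall and the other ratios whenever their denominators vanish.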

bob.ip.binseg.utils.measure.base_measures(tp, fp, tn, fn)[source]

Calculates measures from true/false positive and negative counts

This function computes standard machine learning measures from true/false positive and true/false negative counts. For a thorough look into these and alternate names for the returned values, please check Wikipedia’s entry on Precision and Recall.

Parameters
  • tp (int) – True positive count, AKA “hit”

  • fp (int) – False positive count, AKA “false alarm”, or “Type I error”

  • tn (int) – True negative count, AKA “correct rejection”

  • fn (int) – False negative count, AKA “miss”, or “Type II error”

Returns

  • precision (float) – P, AKA positive predictive value (PPV). It corresponds arithmetically to tp/(tp+fp). In the case tp+fp == 0, this function returns zero for precision.

  • recall (float) – R, AKA sensitivity, hit rate, or true positive rate (TPR). It corresponds arithmetically to tp/(tp+fn). In the special case where tp+fn == 0, this function returns zero for recall.

  • specificity (float) – S, AKA selectivity or true negative rate (TNR). It corresponds arithmetically to tn/(tn+fp). In the special case where tn+fp == 0, this function returns zero for specificity.

  • accuracy (float) – A, see Accuracy. It is the proportion of correct predictions (both true positives and true negatives) among the total number of pixels examined. It corresponds arithmetically to (tp+tn)/(tp+tn+fp+fn). This measure includes both true negatives and true positives in the numerator, which makes it sensitive to data or regions without annotations.

  • jaccard (float) – J, see Jaccard Index or Similarity. It corresponds arithmetically to tp/(tp+fp+fn). In the special case where tp+fp+fn == 0, this function returns zero for the Jaccard index. The Jaccard index depends on a TP-only numerator, similarly to the F1 score. For regions where there are no annotations, the Jaccard index will always be zero, irrespective of the model output. Accuracy may be a better proxy if one needs to consider the true absence of annotations in a region as part of the measure.

  • f1_score (float) – F1, see F1-score. It corresponds arithmetically to 2*P*R/(P+R) or 2*tp/(2*tp+fp+fn). In the special case where 2*tp+fp+fn == 0, this function returns zero for the F1-score. The F1 (or Dice) score depends on a TP-only numerator, similarly to the Jaccard index. For regions where there are no annotations, the F1-score will always be zero, irrespective of the model output. Accuracy may be a better proxy if one needs to consider the true absence of annotations in a region as part of the measure.
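
Putting the formulas above together, the computation reduces to something like the sketch below (illustrative only; it assumes the zero-on-empty-denominator behaviour is delegated to tricky_division() and that the values are returned in the documented order):

  def base_measures(tp, fp, tn, fn):
      precision = tricky_division(tp, tp + fp)     # PPV
      recall = tricky_division(tp, tp + fn)        # sensitivity, TPR
      specificity = tricky_division(tn, tn + fp)   # TNR
      accuracy = tricky_division(tp + tn, tp + fp + tn + fn)
      jaccard = tricky_division(tp, tp + fp + fn)
      f1_score = tricky_division(2 * tp, 2 * tp + fp + fn)
      return precision, recall, specificity, accuracy, jaccard, f1_score

For example, base_measures(tp=50, fp=10, tn=930, fn=10) yields precision ≈ 0.833, recall ≈ 0.833, specificity ≈ 0.989, accuracy = 0.980, jaccard ≈ 0.714 and f1_score ≈ 0.833.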

bob.ip.binseg.utils.measure.auc(x, y)[source]

Calculates the area under the precision-recall curve (AUC)

This function requires a minimum of 2 points and uses the trapezoidal method to calculate the area under a curve bound between [0.0, 1.0]. It interpolates missing points if required. The input x should be monotonically increasing or decreasing.

Parameters
  • x (numpy.ndarray) – A 1D numpy array containing monotonically increasing or decreasing values for the X coordinate.

  • y (numpy.ndarray) – A 1D numpy array containing the Y coordinates of the X values provided in x.
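
As a rough picture of what the calculation amounts to, the sketch below computes the trapezoidal area after re-ordering the points so that x increases (illustrative only; the input validation and interpolation mentioned above are omitted):

  import numpy

  def auc(x, y):
      """Trapezoidal area under y(x), with x sorted into increasing order."""
      x = numpy.asarray(x, dtype=float)
      y = numpy.asarray(y, dtype=float)
      if x.size < 2:
          raise ValueError("auc requires at least 2 points")
      order = numpy.argsort(x)  # make x monotonically increasing
      return float(numpy.trapz(y[order], x[order]))

For a precision-recall curve, x holds the recall values and y the matching precision values, e.g. auc(numpy.array([0.0, 0.5, 1.0]), numpy.array([1.0, 0.8, 0.6])) ≈ 0.8.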