tsseg.metrics package
Submodules
tsseg.metrics.base module
tsseg.metrics.change_point_detection module
- class tsseg.metrics.change_point_detection.Covering(convert_labels_to_segments=False, **kwargs)[source]
Bases: BaseMetric
Computes the Covering score for a segmentation.
The Covering metric evaluates how well the predicted segments cover the ground truth segments. It is calculated as a weighted sum of the maximum Intersection over Union (IoU) for each ground truth segment, where the weight is the length of the ground truth segment. This implementation is based on the logic proposed in various segmentation evaluation studies.
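The weighted best-IoU computation described above can be sketched in a few lines. This is an illustrative standalone version, not the tsseg implementation; the function names and the half-open `(start, end)` interval convention are assumptions:

```python
def segment_iou(a, b):
    """Jaccard index (IoU) of two half-open integer intervals (start, end)."""
    inter = max(0, min(a[1], b[1]) - max(a[0], b[0]))
    union = (a[1] - a[0]) + (b[1] - b[0]) - inter
    return inter / union if union else 0.0

def covering(true_segments, pred_segments):
    """Length-weighted sum over true segments of their best IoU with any prediction."""
    total = sum(end - start for start, end in true_segments)
    return sum(
        (end - start) / total * max(segment_iou((start, end), p) for p in pred_segments)
        for start, end in true_segments
    )
```

A perfect segmentation scores 1.0; merging the two halves of `[(0, 50), (50, 100)]` into a single predicted segment `[(0, 100)]` halves the score, since each true segment's best IoU drops to 0.5.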
- class tsseg.metrics.change_point_detection.F1Score(margin=0.01, convert_labels_to_segments=False, **kwargs)[source]
Bases: BaseMetric
Computes the F1-score for change point detection.
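A margin-based F1 pairs each predicted change point with at most one true change point inside a tolerance window, then combines precision and recall as usual. The sketch below is a hypothetical standalone version; in particular, interpreting the default `margin=0.01` as a fraction of the series length `n` is an assumption about tsseg's convention:

```python
def f1_with_margin(true_cps, pred_cps, n, margin=0.01):
    """F1 where a prediction is a true positive if it lies within
    margin * n of a not-yet-matched true change point (assumed semantics)."""
    tol = margin * n
    unmatched = list(true_cps)
    tp = 0
    for p in sorted(pred_cps):
        hit = next((t for t in unmatched if abs(t - p) <= tol), None)
        if hit is not None:
            unmatched.remove(hit)  # each true change point is matched at most once
            tp += 1
    precision = tp / len(pred_cps) if pred_cps else 0.0
    recall = tp / len(true_cps) if true_cps else 0.0
    if precision + recall == 0:
        return 0.0
    return 2 * precision * recall / (precision + recall)
```

The one-to-one matching matters: without it, several predictions clustered around a single true change point would all count as true positives and inflate precision.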
- class tsseg.metrics.change_point_detection.HausdorffDistance(**kwargs)[source]
Bases: BaseMetric
Computes the Hausdorff distance between two sets of change points.
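The Hausdorff distance is the larger of the two directed distances, i.e. the worst-case gap between a change point in one set and its nearest neighbour in the other. A minimal standalone sketch (assuming both sets are non-empty; the tsseg class may handle empty inputs differently):

```python
def hausdorff(a, b):
    """Symmetric Hausdorff distance between two non-empty change-point sets."""
    def directed(xs, ys):
        # worst case over xs of the distance to the nearest point in ys
        return max(min(abs(x - y) for y in ys) for x in xs)
    return max(directed(a, b), directed(b, a))
```

Note that a single badly placed change point dominates the score, which makes the metric sensitive to outliers but easy to interpret.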
tsseg.metrics.gaussian_f1 module
Experimental fuzzy F1 metric for change point detection.
This module introduces a differentiable alternative to the classic F1-score. Instead of relying on a hard margin around each change point, it evaluates predictions with a Gaussian reward that decays smoothly as the predicted change point drifts away from the ground truth. The default configuration uses the same Gaussian width for every change point, derived from a single fraction of the series length so that no event is implicitly favoured.
- class tsseg.metrics.gaussian_f1.GaussianF1Score(*, sigma_fraction=0.01, min_sigma=1.0, adaptive_sigma=False, convert_labels_to_segments=False)[source]
Bases: BaseMetric
Gaussian-weighted alternative to the classic F1 score.
The metric operates in three conceptual steps:
- Preparation – convert optional label sequences into change point lists, remove boundary markers, and infer the series length.
- Gaussian matching – every true change point is associated with a Gaussian of width sigma_fraction * n (clamped below by min_sigma). Predictions are rewarded according to that shared kernel and a greedy assignment keeps the best non-overlapping pairs.
- Soft precision & recall – derive precision and recall from the sum of Gaussian rewards, yielding a fuzzy F1 in [0, 1].
Special cases are handled explicitly:
- No ground-truth change point – if the data really is stationary and no change points are predicted either, we return the perfect score 1.0. Conversely, predicting spurious changes yields a zero score.
- Single change point – the Gaussian spread still follows the global fraction, ensuring a consistent reward scale across all events.
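The steps above can be sketched as follows. This is a hypothetical standalone reading of the matching logic, not the tsseg implementation; the exact greedy order and the normalisation of soft precision and recall are assumptions:

```python
import math

def gaussian_f1(true_cps, pred_cps, n, sigma_fraction=0.01, min_sigma=1.0):
    # Special cases: stationary data, or missing predictions
    if not true_cps:
        return 1.0 if not pred_cps else 0.0
    if not pred_cps:
        return 0.0
    # One shared Gaussian width derived from the series length
    sigma = max(sigma_fraction * n, min_sigma)
    # Greedy matching: consider all (true, pred) pairs by descending reward,
    # keeping only non-overlapping assignments
    pairs = sorted(
        ((math.exp(-((p - t) ** 2) / (2 * sigma ** 2)), t, p)
         for t in true_cps for p in pred_cps),
        reverse=True,
    )
    used_t, used_p, reward = set(), set(), 0.0
    for r, t, p in pairs:
        if t not in used_t and p not in used_p:
            used_t.add(t)
            used_p.add(p)
            reward += r
    # Soft precision/recall from the summed Gaussian rewards
    precision = reward / len(pred_cps)
    recall = reward / len(true_cps)
    if precision + recall == 0:
        return 0.0
    return 2 * precision * recall / (precision + recall)
```

Unlike the hard-margin F1, a prediction 5 samples away from the truth still earns partial credit here, and the score degrades smoothly as predictions drift.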
tsseg.metrics.bidirectional_covering module
Bidirectional Covering metric for change-point segmentation.
- class tsseg.metrics.bidirectional_covering.BidirectionalCovering(*, convert_labels_to_segments=False, aggregation='harmonic', **kwargs)[source]
Bases: BaseMetric
Bidirectional extension of the classical Covering metric.
The classical Covering score only evaluates how well predicted segments cover the ground-truth segmentation. However, this directionality means that long predicted segments that cover the truth sparsely may still obtain a high score, even when the prediction introduces substantial over-segmentation.
The bidirectional variant evaluates coverage in both directions:
- ground_truth_covering mirrors the traditional definition where each ground-truth interval is weighted by its duration and matched to the best overlapping predicted interval via Intersection over Union (IoU).
- prediction_covering swaps the roles. Each predicted segment is weighted by its duration and matched to the best ground-truth overlap.
The two directional scores are then aggregated using an F1-style harmonic mean by default. Alternative aggregation strategies (geometric, arithmetic or min) can be selected via the aggregation argument. The resulting metric rewards segmentations that both cover the truth and avoid excessive over-segmentation.
- Parameters:
  - convert_labels_to_segments (bool) – When True, the inputs are interpreted as label sequences and will be converted to change points via tsseg.metrics.change_point_detection.labels_to_change_points().
  - aggregation (str) – Name of the aggregation strategy used to combine the two directional covering scores. Supported values are "harmonic" (default), "geometric", "arithmetic" and "min".
  - kwargs – Forwarded to tsseg.metrics.base.BaseMetric.
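The two directional scores and the aggregation strategies can be sketched as follows. This is an illustrative standalone version, assuming half-open `(start, end)` segments; the helper names are hypothetical and the tsseg implementation may differ:

```python
def _iou(a, b):
    # Jaccard index of two half-open (start, end) intervals
    inter = max(0, min(a[1], b[1]) - max(a[0], b[0]))
    union = (a[1] - a[0]) + (b[1] - b[0]) - inter
    return inter / union if union else 0.0

def _directional(ref, other):
    # Weight each reference segment by its length, take its best IoU match
    total = sum(e - s for s, e in ref)
    return sum((e - s) / total * max(_iou((s, e), o) for o in other)
               for s, e in ref)

def bidirectional_covering(true_segments, pred_segments, aggregation="harmonic"):
    gt = _directional(true_segments, pred_segments)  # ground_truth_covering
    pr = _directional(pred_segments, true_segments)  # prediction_covering
    if aggregation == "harmonic":
        return 2 * gt * pr / (gt + pr) if gt + pr else 0.0
    if aggregation == "geometric":
        return (gt * pr) ** 0.5
    if aggregation == "arithmetic":
        return (gt + pr) / 2
    if aggregation == "min":
        return min(gt, pr)
    raise ValueError(f"unknown aggregation: {aggregation!r}")
```

The harmonic mean follows the F1 logic: the combined score stays low whenever either direction is low, so neither under- nor over-segmentation can be compensated by the other direction.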
tsseg.metrics.state_detection module
- class tsseg.metrics.state_detection.AdjustedMutualInformation(**kwargs)[source]
Bases: BaseMetric
Computes the Adjusted Mutual Information (AMI).
- class tsseg.metrics.state_detection.AdjustedRandIndex(**kwargs)[source]
Bases: BaseMetric
Computes the Adjusted Rand Index (ARI).
- class tsseg.metrics.state_detection.NormalizedMutualInformation(**kwargs)[source]
Bases: BaseMetric
Computes the Normalized Mutual Information (NMI).
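ARI, AMI and NMI follow the standard clustering definitions applied to per-timestep state labels. For reference, ARI can be computed directly from the contingency table; the standalone sketch below (not the tsseg code) agrees with scikit-learn's `adjusted_rand_score`:

```python
import numpy as np
from math import comb

def adjusted_rand_index(labels_true, labels_pred):
    # Contingency table: co-occurrence counts of true classes vs. predicted clusters
    _, ti = np.unique(labels_true, return_inverse=True)
    _, pi = np.unique(labels_pred, return_inverse=True)
    C = np.zeros((ti.max() + 1, pi.max() + 1), dtype=int)
    np.add.at(C, (ti, pi), 1)
    # Pair counts inside cells, rows, and columns
    sum_ij = sum(comb(int(x), 2) for x in C.ravel())
    sum_a = sum(comb(int(x), 2) for x in C.sum(axis=1))
    sum_b = sum(comb(int(x), 2) for x in C.sum(axis=0))
    n = len(labels_true)
    # Chance-adjusted index: (index - expected) / (max - expected)
    expected = sum_a * sum_b / comb(n, 2)
    max_index = (sum_a + sum_b) / 2
    return (sum_ij - expected) / (max_index - expected)
```

ARI is invariant to label permutation, which is why these metrics suit state detection: the predicted state ids need not coincide with the ground-truth ids, only the grouping matters.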
- class tsseg.metrics.state_detection.StateMatchingScore(weights=None, **kwargs)[source]
Bases: BaseMetric
Computes the State Matching Score (SMS).
- DEFAULT_WEIGHTS = {'delay': 0.1, 'isolation': 0.8, 'missing': 0.5, 'transition': 0.3}
- class tsseg.metrics.state_detection.WeightedAdjustedRandIndex(distance_func='linear', alpha=0.1, **kwargs)[source]
Bases: BaseMetric
Computes the Weighted Adjusted Rand Index (WARI).
- class tsseg.metrics.state_detection.WeightedNormalizedMutualInformation(distance_func='linear', alpha=0.1, average_method='arithmetic', **kwargs)[source]
Bases: BaseMetric
Computes the Weighted Normalized Mutual Information (WNMI).
- tsseg.metrics.state_detection.weighted_adjusted_rand_score(labels_true, labels_pred, weights)[source]
Compute the Weighted Adjusted Rand Index (WARI).
- tsseg.metrics.state_detection.weighted_contingency_matrix(labels_true, labels_pred, weights, *, eps=None, sparse=False, dtype=numpy.float64)[source]
Build a weighted contingency matrix.
- tsseg.metrics.state_detection.weighted_entropy(labels, weights)[source]
Compute the weighted entropy of a labeling.
- tsseg.metrics.state_detection.weighted_mutual_info_score(labels_true, labels_pred, weights)[source]
Compute the Weighted Mutual Information (WMI).
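One consistent reading of these weighted helpers: per-sample weights replace unit counts in the contingency table, and entropy and mutual information are then computed from the weighted cell masses. The sketch below is an assumption about the definitions, not the tsseg code; with uniform weights it reduces to the unweighted quantities:

```python
import numpy as np

def weighted_contingency(labels_true, labels_pred, weights):
    """W[i, j] = total weight of samples with true class i and predicted cluster j."""
    _, ti = np.unique(labels_true, return_inverse=True)
    _, pi = np.unique(labels_pred, return_inverse=True)
    W = np.zeros((ti.max() + 1, pi.max() + 1))
    np.add.at(W, (ti, pi), weights)
    return W

def weighted_entropy(labels, weights):
    """Entropy of the label distribution, with mass = summed sample weights."""
    _, inv = np.unique(labels, return_inverse=True)
    w = np.zeros(inv.max() + 1)
    np.add.at(w, inv, weights)
    p = w / w.sum()
    return float(-(p * np.log(p)).sum())

def weighted_mutual_info(labels_true, labels_pred, weights):
    """Mutual information from the normalized weighted contingency matrix."""
    W = weighted_contingency(labels_true, labels_pred, weights)
    P = W / W.sum()
    pi = P.sum(axis=1, keepdims=True)  # weighted marginal of true labels
    pj = P.sum(axis=0, keepdims=True)  # weighted marginal of predictions
    nz = P > 0
    return float((P[nz] * np.log(P[nz] / (pi @ pj)[nz])).sum())
```

For identical labelings the mutual information equals the entropy, weighted or not, which gives a quick sanity check on any weighting scheme.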
Module contents
- class tsseg.metrics.AdjustedMutualInformation(**kwargs)[source]
Bases: BaseMetric
Computes the Adjusted Mutual Information (AMI).
- class tsseg.metrics.AdjustedRandIndex(**kwargs)[source]
Bases: BaseMetric
Computes the Adjusted Rand Index (ARI).
- class tsseg.metrics.BaseMetric(**kwargs)[source]
Bases: ABC
Base class for all metrics.
- class tsseg.metrics.BidirectionalCovering(*, convert_labels_to_segments=False, aggregation='harmonic', **kwargs)[source]
Bases: BaseMetric
Bidirectional extension of the classical Covering metric.
The classical Covering score only evaluates how well predicted segments cover the ground-truth segmentation. However, this directionality means that long predicted segments that cover the truth sparsely may still obtain a high score, even when the prediction introduces substantial over-segmentation.
The bidirectional variant evaluates coverage in both directions:
- ground_truth_covering mirrors the traditional definition where each ground-truth interval is weighted by its duration and matched to the best overlapping predicted interval via Intersection over Union (IoU).
- prediction_covering swaps the roles. Each predicted segment is weighted by its duration and matched to the best ground-truth overlap.
The two directional scores are then aggregated using an F1-style harmonic mean by default. Alternative aggregation strategies (geometric, arithmetic or min) can be selected via the aggregation argument. The resulting metric rewards segmentations that both cover the truth and avoid excessive over-segmentation.
- Parameters:
  - convert_labels_to_segments (bool) – When True, the inputs are interpreted as label sequences and will be converted to change points via tsseg.metrics.change_point_detection.labels_to_change_points().
  - aggregation (str) – Name of the aggregation strategy used to combine the two directional covering scores. Supported values are "harmonic" (default), "geometric", "arithmetic" and "min".
  - kwargs – Forwarded to tsseg.metrics.base.BaseMetric.
- class tsseg.metrics.Covering(convert_labels_to_segments=False, **kwargs)[source]
Bases: BaseMetric
Computes the Covering score for a segmentation.
The Covering metric evaluates how well the predicted segments cover the ground truth segments. It is calculated as a weighted sum of the maximum Intersection over Union (IoU) for each ground truth segment, where the weight is the length of the ground truth segment. This implementation is based on the logic proposed in various segmentation evaluation studies.
- class tsseg.metrics.F1Score(margin=0.01, convert_labels_to_segments=False, **kwargs)[source]
Bases: BaseMetric
Computes the F1-score for change point detection.
- class tsseg.metrics.GaussianF1Score(*, sigma_fraction=0.01, min_sigma=1.0, adaptive_sigma=False, convert_labels_to_segments=False)[source]
Bases: BaseMetric
Gaussian-weighted alternative to the classic F1 score.
The metric operates in three conceptual steps:
- Preparation – convert optional label sequences into change point lists, remove boundary markers, and infer the series length.
- Gaussian matching – every true change point is associated with a Gaussian of width sigma_fraction * n (clamped below by min_sigma). Predictions are rewarded according to that shared kernel and a greedy assignment keeps the best non-overlapping pairs.
- Soft precision & recall – derive precision and recall from the sum of Gaussian rewards, yielding a fuzzy F1 in [0, 1].
Special cases are handled explicitly:
- No ground-truth change point – if the data really is stationary and no change points are predicted either, we return the perfect score 1.0. Conversely, predicting spurious changes yields a zero score.
- Single change point – the Gaussian spread still follows the global fraction, ensuring a consistent reward scale across all events.
- class tsseg.metrics.HausdorffDistance(**kwargs)[source]
Bases: BaseMetric
Computes the Hausdorff distance between two sets of change points.
- class tsseg.metrics.NormalizedMutualInformation(**kwargs)[source]
Bases: BaseMetric
Computes the Normalized Mutual Information (NMI).
- class tsseg.metrics.StateMatchingScore(weights=None, **kwargs)[source]
Bases: BaseMetric
Computes the State Matching Score (SMS).
- DEFAULT_WEIGHTS = {'delay': 0.1, 'isolation': 0.8, 'missing': 0.5, 'transition': 0.3}
- class tsseg.metrics.WeightedAdjustedRandIndex(distance_func='linear', alpha=0.1, **kwargs)[source]
Bases: BaseMetric
Computes the Weighted Adjusted Rand Index (WARI).
- class tsseg.metrics.WeightedNormalizedMutualInformation(distance_func='linear', alpha=0.1, average_method='arithmetic', **kwargs)[source]
Bases: BaseMetric
Computes the Weighted Normalized Mutual Information (WNMI).