tsseg.algorithms.time2state package
Time2State — unsupervised latent state inference.
Description
Time2State infers latent states in time series data using a Causal CNN-based encoder and a novel unsupervised loss function (LSE-Loss). A sliding window extracts subsequences; the encoder maps them to a low-dimensional latent space where a Dirichlet Process Gaussian Mixture Model (DPGMM) clusters the embeddings into states without requiring the number of states a priori.
The framework drastically reduces computational cost compared to operating on raw time series by compressing the representation before clustering.
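The windowing-then-clustering pipeline can be sketched with scikit-learn's `BayesianGaussianMixture` standing in for the DPGMM. The `encode` function below is a hypothetical stand-in for the trained Causal CNN encoder (in Time2State the encoder is learned with LSE-Loss); everything else is illustrative toy data, not the library's API.

```python
import numpy as np
from sklearn.mixture import BayesianGaussianMixture

# Hypothetical stand-in for the trained Causal CNN encoder: here each window
# is simply averaged. The real encoder is learned with LSE-Loss.
def encode(window):
    return window.mean(axis=0)

rng = np.random.default_rng(0)
# Toy multivariate series with two regimes (different means).
X = np.concatenate([rng.normal(0, 1, (500, 3)), rng.normal(5, 1, (500, 3))])

# Sliding window: extract overlapping subsequences and embed each one.
window_size, step = 64, 16
embeddings = np.array([
    encode(X[i:i + window_size])
    for i in range(0, len(X) - window_size + 1, step)
])

# DPGMM: the Dirichlet-process prior prunes unused components, so only an
# upper bound on the number of states is needed, not the exact count.
dpgmm = BayesianGaussianMixture(
    n_components=10,  # upper bound on the number of states
    weight_concentration_prior_type="dirichlet_process",
    weight_concentration_prior=1e-2,
    random_state=0,
).fit(embeddings)
states = dpgmm.predict(embeddings)  # one state label per window
```

Because clustering operates on the low-dimensional embeddings rather than the raw windows, the DPGMM step stays cheap even for long series.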
Parameters
| Name | Type | Default | Description |
|---|---|---|---|
| `window_size` | int | | Sliding window size. |
| | int | | Step size of the sliding window. |
| `n_states` | int | | Maximum number of states for the DPGMM. |
| | float | | DPGMM concentration parameter. |
| | int | | Training batch size. |
| | int | | Number of training optimisation steps. |
| | float | | Learning rate. |
| | int | | Depth of the Causal CNN. |
| | int | | Encoder output channels. |
| | int | | CNN output dimension before the final linear layer. |
| | int | | Convolution kernel size. |
| | bool / None | | Force GPU use if `True`; auto-detect if `None`. |
| | int / None | | Random seed. |
| | int | | Time axis of the input array. |
Usage
```python
from tsseg.algorithms import Time2StateDetector

detector = Time2StateDetector(window_size=128, n_states=10)
states = detector.fit_predict(X)
```
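Assuming `fit_predict` returns one state label per timestep (an assumption about the return format, not confirmed by this page), segment boundaries fall wherever the label changes. A minimal post-processing sketch:

```python
import numpy as np

# Example per-timestep state sequence; in practice this would come from
# detector.fit_predict(X).
states = np.array([0, 0, 0, 1, 1, 2, 2, 2, 0, 0])

# A change point is an index where the label differs from its predecessor.
change_points = np.flatnonzero(np.diff(states)) + 1
segments = np.split(states, change_points)

print(change_points)                   # [3 5 8]
print([int(s[0]) for s in segments])   # [0, 1, 2, 0]
```

Note that states can recur (state 0 appears in two separate segments), which is exactly what distinguishes state inference from plain change-point detection.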
Implementation: Adapted from the original Time2State code.
Reference: Wang, Wu, Zhou & Cai (2023), Time2State: An Unsupervised Framework for Inferring the Latent States in Time Series Data, SIGMOD.