diff --git a/docs/anomaly-detection/Overview.md b/docs/anomaly-detection/Overview.md
index d08940120..d164fa4c8 100644
--- a/docs/anomaly-detection/Overview.md
+++ b/docs/anomaly-detection/Overview.md
@@ -88,10 +88,6 @@ Currently, vmanomaly ships with a set of built-in models:
 
     See [statsmodels.org documentation](https://www.statsmodels.org/dev/examples/notebooks/generated/stl_decomposition.html) for LOESS STD.
 
-1. [**ARIMA**](/anomaly-detection/components/models.html#arima)
-
-    Commonly used forecasting model. See [statsmodels.org documentation](https://www.statsmodels.org/stable/generated/statsmodels.tsa.arima.model.ARIMA.html) for ARIMA.
-
 1. [**Rolling Quantile**](/anomaly-detection/components/models.html#rolling-quantile)
 
     A simple moving window of quantiles. Easy to use, easy to understand, but not as powerful as
diff --git a/docs/anomaly-detection/components/models.md b/docs/anomaly-detection/components/models.md
index e47d245e6..47e11626c 100644
--- a/docs/anomaly-detection/components/models.md
+++ b/docs/anomaly-detection/components/models.md
@@ -135,19 +135,19 @@ Introduced in [1.13.0](/anomaly-detection/CHANGELOG/#1130), `detection_direction
 
 Here's how default (backward-compatible) behavior looks like - anomalies will be tracked in `both` directions (`y > yhat` or `y < yhat`). This is useful when there is no domain expertise to filter the required direction.
 
-<img ...>
+<img ...>
 
 When set to `above_expected`, anomalies are tracked only when `y > yhat`.
 
 *Example metrics*: Error rate, response time, page load time, number of failed transactions - metrics where *lower values are better*, so **higher** values are typically tracked.
 
-<img ...>
+<img ...>
 
 When set to `below_expected`, anomalies are tracked only when `y < yhat`.
 
 *Example metrics*: Service Level Agreement (SLA) compliance, conversion rate, Customer Satisfaction Score (CSAT) - metrics where *higher values are better*, so **lower** values are typically tracked.
 
-<img ...>
+<img ...>
 
 Config with a split example:
@@ -193,11 +193,11 @@ Introduced in [v1.13.0](/anomaly-detection/CHANGELOG/#1130), the `min_dev_from_e
 
 Visualizations below demonstrate this concept; the green zone defined as the `[yhat - min_dev_from_expected, yhat + min_dev_from_expected]` range excludes actual data points (`y`) from generating anomaly scores if they fall within that range.
 
-<img ...>
+<img ...>
 
-<img ...>
+<img ...>
 
-<img ...>
+<img ...>
 
 ## Model types
@@ -224,7 +224,7 @@ If during an inference, you got a series having **new labelset** (not present in
 
 **Examples:** [Prophet](#prophet), [Holt-Winters](#holt-winters)<br>
 
-<img ...>
+<img ...>
 
 ### Multivariate Models
@@ -239,7 +239,7 @@ If during an inference, you got a **different amount of series** or some series
 
 **Examples:** [IsolationForest](#isolation-forest-multivariate)
 
-<img ...>
+<img ...>
 
 ### Rolling Models
@@ -256,7 +256,7 @@ Such models put **more pressure** on your reader's source, i.e. if your model sh
 
 **Examples:** [RollingQuantile](#rolling-quantile)
 
-<img ...>
+<img ...>
 
 ### Non-Rolling Models
@@ -273,7 +273,7 @@ Produced model instances are **stored in-memory** between consecutive re-fit cal
 
 **Examples:** [Prophet](#prophet)
 
-<img ...>
+<img ...>
 
 ## Built-in Models
@@ -312,7 +312,7 @@ Tuning hyperparameters of a model can be tricky and often requires in-depth know
 
 - `n_trials` (int) - How many trials to sample from hyperparameter search space. The higher, the longer it takes but the better the results can be. Defaults to 128.
 - `timeout` (float) - How many seconds in total can be spent on each model to tune hyperparameters. The higher, the longer it takes, allowing to test more trials out of defined `n_trials`, but the better the results can be.
 
-<img ...>
+<img ...>
 
 ```yaml
 # ...
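
For quick reference while reviewing, here is a minimal sketch of how the two model-level parameters discussed in the `models.md` hunks above (`detection_direction` and `min_dev_from_expected`) might be combined in a vmanomaly config. It is illustrative only and not part of this diff: the model alias, class path, query name, and threshold value are assumptions rather than values taken from the documentation being changed.

```yaml
models:
  prophet_errors:                            # hypothetical model alias
    class: 'model.prophet.ProphetModel'      # assumption: Prophet model, referenced in the docs above
    queries: ['error_rate']                  # assumption: a query alias defined in the reader section
    detection_direction: 'above_expected'    # track anomalies only when y > yhat (e.g. error rate, response time)
    min_dev_from_expected: 0.01              # assumption: suppress anomaly scores when |y - yhat| <= 0.01
```

With such a setup, only points that exceed the forecast by more than the `min_dev_from_expected` band are eligible to produce anomaly scores, while dips below the forecast are ignored, matching the `above_expected` behavior described in the hunks above.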