Time-Series of Information Technology Operating Analytics Part 3


In Part 3 of the Time-Series of Information Technology Operating Analytics article series, I cover the literature I have reviewed and my current trial experiments.

First, the literature defines the types of raw-data anomalies to target: additive outliers, temporal changes, level shifts, and seasonal level shifts. This step clarifies what should be treated as an anomaly. The detection approaches are then enumerated in detail:

  • STL decomposition: seasonal-trend decomposition based on LOESS (locally estimated scatterplot smoothing). It splits the time-series signal into three components: seasonal, trend, and residual (see the STL sketch after this list).

  • Classification and regression trees (CART): First, use supervised learning to train trees to classify data points as anomalous or normal; this requires labelled anomaly data points. Second, use unsupervised learning to train CART to predict the next data point in the series, together with a confidence interval or prediction error, as in the STL decomposition approach. If a neural network is applied instead, an LSTM can model the most sophisticated dependencies in a time series, including advanced seasonal dependencies.

  • Auto-Regressive Integrated Moving Average (ARIMA): based on the idea that several points from the past generate a forecast of the next point, with the addition of a random term that is usually white noise (much like a forward-moving window). Seasonal ARIMA adds two important factors: trend and seasonality.

  • Auto ARIMA: returns the best set of ARIMA parameters within a specified search range (see the Auto ARIMA sketch after this list).
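
To make the STL bullet concrete, here is a minimal sketch using the STL class from statsmodels on a synthetic CPU-utilization series. The synthetic data, the 5-minute sampling rate, and the 3-sigma residual threshold are illustrative assumptions, not the project's actual dataset or settings.

```python
import numpy as np
import pandas as pd
from statsmodels.tsa.seasonal import STL

# Synthetic CPU-utilization series sampled every 5 minutes (illustrative only).
rng = np.random.default_rng(0)
idx = pd.date_range("2020-05-12", periods=288 * 7, freq="5min")
cpu = pd.Series(
    50 + 10 * np.sin(2 * np.pi * np.arange(len(idx)) / 288) + rng.normal(0, 2, len(idx)),
    index=idx,
)

# STL splits the signal into trend + seasonal + residual components.
result = STL(cpu, period=288).fit()   # 288 five-minute samples = one day

# Flag points whose residual lies more than 3 standard deviations from the mean.
resid = result.resid
anomalies = cpu[np.abs(resid - resid.mean()) > 3 * resid.std()]
print(anomalies.head())
```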
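
For the Auto ARIMA bullet, the following sketch uses the auto_arima function from the pmdarima library; the synthetic hourly series, the daily seasonal period m=24, and the parameter ranges are assumptions chosen for illustration, not the settings used in the project.

```python
import numpy as np
import pandas as pd
import pmdarima as pm

# Synthetic hourly CPU-utilization series (illustrative only).
rng = np.random.default_rng(0)
idx = pd.date_range("2020-05-12", periods=24 * 10, freq="h")
cpu = pd.Series(
    50 + 10 * np.sin(2 * np.pi * np.arange(len(idx)) / 24) + rng.normal(0, 2, len(idx)),
    index=idx,
)

# auto_arima searches (p, d, q)(P, D, Q, m) within the given ranges and keeps
# the model with the best information criterion (AIC by default).
model = pm.auto_arima(
    cpu,
    seasonal=True,
    m=24,                # daily seasonality for hourly data
    max_p=3, max_q=3,
    suppress_warnings=True,
)

# Forecast the next 24 points with a confidence interval; later observations
# that fall outside the interval can be treated as candidate anomalies.
forecast, conf_int = model.predict(n_periods=24, return_conf_int=True)
print(model.summary())
```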

In addition, exponential smoothing techniques are closely related to the ARIMA approach: the basic exponential smoothing model is equivalent to an ARIMA(0, 1, 1) model. The most interesting method from an anomaly-detection perspective is the Holt-Winters seasonal method, where the seasonal period needs to be defined explicitly; a minimal sketch follows.
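
Below is a rough Holt-Winters sketch using the ExponentialSmoothing class from statsmodels with an explicitly defined seasonal period. The synthetic series and the additive trend/seasonal settings are assumptions for illustration, not choices taken from the project.

```python
import numpy as np
import pandas as pd
from statsmodels.tsa.holtwinters import ExponentialSmoothing

# Synthetic hourly CPU-utilization series with a daily seasonal cycle (illustrative only).
rng = np.random.default_rng(0)
idx = pd.date_range("2020-05-12", periods=24 * 14, freq="h")
cpu = pd.Series(
    50 + 10 * np.sin(2 * np.pi * np.arange(len(idx)) / 24) + rng.normal(0, 2, len(idx)),
    index=idx,
)

# Holt-Winters (triple exponential smoothing): the seasonal period is given explicitly.
model = ExponentialSmoothing(
    cpu, trend="add", seasonal="add", seasonal_periods=24
).fit()

# Points that deviate strongly from the fitted values are candidate anomalies.
resid = cpu - model.fittedvalues
anomalies = cpu[np.abs(resid) > 3 * resid.std()]
print(anomalies.head())
```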

 

Current Progress

The first approach I implemented is based on ARIMA, using a rolling time window over historical data to calculate boundaries outside of which anomalies can be identified. A trial of the rolling time window is illustrated as follows:

Here, the CPU-utilization dataset is taken as an example. The start date is assumed to be 2020-05-12, and the rolling window is one day long. Therefore, the rolling-window calculations start to output values from the 13th of May. In this example, the rolling window moves from 2020-05-12 to 2020-05-22, while the output values cover the 13th to the 22nd. In the figure, the mean and geometric mean values are shown in yellow and blue. The boundary-calculation algorithm is based on the mean and standard deviation of the data inside the real-time rolling window. If any data point stays outside these boundaries for more than 120 seconds, it is labeled as an anomaly.
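
As a minimal sketch of such a rolling-window boundary check (not the project's actual implementation): the synthetic one-minute series, the one-day window, and the 3-sigma multiplier below are illustrative assumptions; only the rule that a breach must persist for more than 120 seconds follows the description above.

```python
import numpy as np
import pandas as pd

# Synthetic CPU-utilization series sampled once per minute (illustrative only,
# not the actual project dataset).
rng = np.random.default_rng(0)
idx = pd.date_range("2020-05-12", "2020-05-22", freq="min")
cpu = pd.Series(50 + rng.normal(0, 3, len(idx)), index=idx)

# One-day rolling window: mean +/- k standard deviations form the boundaries.
# min_periods=1440 means outputs only appear once a full day of history exists,
# i.e. from 13 May onward.
k = 3
roll = cpu.rolling("1D", min_periods=1440)
upper = roll.mean() + k * roll.std()
lower = roll.mean() - k * roll.std()

# Flag points outside the band; require roughly three consecutive one-minute
# breaches to approximate the "more than 120 seconds" persistence rule.
outside = (cpu > upper) | (cpu < lower)
persistent = outside & outside.shift(1, fill_value=False) & outside.shift(2, fill_value=False)
anomalies = cpu[persistent]
print(anomalies.head())
```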

 

Summary

Thank you for your time and for reading. In this blog, I have included some of the literature I found and my current trial experiments. This content will be updated and extended throughout the project, so if you have any thoughts, please don't hesitate to contact me. I will be very happy to hear them so I can improve the outcome of this project.

The next and last blog will include evaluations of the other anomaly detection algorithms and the final results of my project. In case you want to see how my work has progressed so far, take a look at Part 1 & Part 2 of the series.

