Temporal anomaly detection: calibrating the surprise
May 29, 2017 · Entered Twilight · AAAI Conference on Artificial Intelligence
"Last commit was 7.0 years ago (โฅ5 year threshold)"
Evidence collected by the PWNC Scanner
Repo contents: TDA_dataset, TLR
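For context on the badge above, this is roughly what such a staleness check amounts to. The sketch below is hypothetical: the PWNC Scanner's actual implementation is not shown on this page, and the function name, repo path, and threshold handling are assumptions.

```python
# Hypothetical staleness check in the spirit of the evidence above:
# read the timestamp of the most recent commit in a local clone and
# compare it against a 5-year threshold.
import subprocess
import time

def years_since_last_commit(repo_path: str) -> float:
    # %ct = committer date of the latest commit as a Unix timestamp
    out = subprocess.run(
        ["git", "-C", repo_path, "log", "-1", "--format=%ct"],
        capture_output=True, text=True, check=True,
    )
    return (time.time() - int(out.stdout.strip())) / (365.25 * 24 * 3600)

if years_since_last_commit("TLR_anomaly_detection") >= 5:
    print("Ghosted: last commit is past the 5-year threshold")
```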
Authors
Eyal Gutflaish, Aryeh Kontorovich, Sivan Sabato, Ofer Biller, Oded Sofer
arXiv ID
1705.10085
Category
cs.CR: Cryptography & Security
Cross-listed
cs.LG
Citations
17
Venue
AAAI Conference on Artificial Intelligence
Repository
https://github.com/eyalgut/TLR_anomaly_detection.git
⭐ 4
Last Checked
2 months ago
Abstract
We propose a hybrid approach to temporal anomaly detection in access data of users to databases, or, more generally, any kind of subject-object co-occurrence data. We consider a high-dimensional setting that also requires fast computation at test time. Our methodology identifies anomalies based on a single stationary model, instead of requiring a full temporal one, which would be prohibitive in this setting. We learn a low-rank stationary model from the training data, and then fit a regression model for predicting the expected likelihood score of normal access patterns in the future. The disparity between the predicted likelihood score and the observed one is used to assess the 'surprise' at test time. This approach enables calibration of the anomaly score, so that time-varying normal behavior patterns are not considered anomalous. We provide a detailed description of the algorithm, including a convergence analysis, and report encouraging empirical results. One of the data sets that we tested, TDA, is new for the public domain. It consists of two months' worth of database access records from a live system. Our code is publicly available at https://github.com/eyalgut/TLR_anomaly_detection.git. The TDA data set is available at https://www.kaggle.com/eyalgut/binary-traffic-matrices.
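The pipeline the abstract describes (a low-rank stationary model, a regression predicting the expected likelihood score over time, and a surprise score equal to predicted minus observed likelihood) can be sketched briefly. The following is a minimal illustration on synthetic binary traffic matrices, not the authors' TLR implementation; the Bernoulli likelihood, the truncated-SVD factorization, and all sizes and names are assumptions made for the sketch.

```python
# Minimal sketch of the calibrated-surprise idea, under the assumptions above.
import numpy as np
from sklearn.decomposition import TruncatedSVD
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(0)
n_users, n_tables, n_windows, rank = 200, 50, 60, 5

# Synthetic binary access matrices, one per time window (a stand-in for TDA).
base = rng.random((n_users, n_tables)) < 0.05
windows = np.array([base ^ (rng.random(base.shape) < 0.01)
                    for _ in range(n_windows)])

# 1) Low-rank stationary model fit on the training windows.
train = windows[:40].mean(axis=0)           # empirical co-occurrence rates
svd = TruncatedSVD(n_components=rank, random_state=0)
U = svd.fit_transform(train)                # (n_users, rank)
V = svd.components_                         # (rank, n_tables)
P = np.clip(U @ V, 1e-6, 1 - 1e-6)          # stationary access probabilities

def log_likelihood(X):
    """Bernoulli log-likelihood of a binary window under the stationary model."""
    return float((X * np.log(P) + (1 - X) * np.log(1 - P)).sum())

# 2) Regression predicting the expected likelihood score of normal traffic.
t_train = np.arange(40).reshape(-1, 1)
scores = np.array([log_likelihood(w) for w in windows[:40]])
reg = LinearRegression().fit(t_train, scores)

# 3) Calibrated anomaly score at test time: predicted minus observed.
for t in range(40, n_windows):
    surprise = reg.predict([[t]])[0] - log_likelihood(windows[t])
    print(f"window {t}: surprise = {surprise:.1f}")
```

The calibration step is what keeps slow, benign drift in overall traffic from registering as anomalous: the regression absorbs the trend in the likelihood score, so only deviations from the predicted score count as surprise.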
Community Contributions
Found the code? Know the venue? Think something is wrong? Let us know!
Similar Papers
In the same crypt: Cryptography & Security
Membership Inference Attacks against Machine Learning Models · R.I.P. 👻 Ghosted
The Limitations of Deep Learning in Adversarial Settings · R.I.P. 👻 Ghosted
Practical Black-Box Attacks against Machine Learning · R.I.P. 👻 Ghosted
Distillation as a Defense to Adversarial Perturbations against Deep Neural Networks · R.I.P. 👻 Ghosted