SYNC: A Copula based Framework for Generating Synthetic Data from Aggregated Sources
September 20, 2020 · Declared Dead · 2020 International Conference on Data Mining Workshops (ICDMW)
"No code URL or promise found in abstract"
Evidence collected by the PWNC Scanner
Authors
Zheng Li, Yue Zhao, Jialin Fu
arXiv ID
2009.09471
Category
stat.AP
Cross-listed
cs.DB, cs.LG, stat.ML
Citations
35
Venue
2020 International Conference on Data Mining Workshops (ICDMW)
Last Checked
2 months ago
Abstract
A synthetic dataset is a data object that is generated programmatically, and it can be valuable for creating a single dataset from multiple sources when direct collection is difficult or costly. Although synthetic data generation is a fundamental step for many data science tasks, an efficient and standard framework is absent. In this paper, we study a specific synthetic data generation task called downscaling, a procedure to infer high-resolution, harder-to-collect information (e.g., individual-level records) from many low-resolution, easy-to-collect sources, and propose a multi-stage framework called SYNC (Synthetic Data Generation via Gaussian Copula). Given low-resolution datasets, the central idea of SYNC is to fit a Gaussian copula model to each dataset in order to correctly capture its dependencies and marginal distributions, and then to sample from the fitted models to obtain the desired high-resolution subsets. Predictive models are then used to merge the sampled subsets into one, and finally the sampled datasets are scaled according to low-resolution marginal constraints. We make four key contributions in this work: 1) propose a novel framework for generating individual-level data from aggregated data sources by combining state-of-the-art machine learning and statistical techniques, 2) perform simulation studies to validate SYNC's performance as a synthetic data generation algorithm, 3) demonstrate, through two real-world datasets, its value as a feature engineering tool and as an alternative to data collection in situations where gathering is difficult, and 4) release an easy-to-use framework implementation for reproducibility and scalability at the production level that easily incorporates new data.
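The copula-fitting stage described in the abstract can be sketched in a few lines. The code below is a minimal illustration of the general Gaussian-copula technique (fit normal-score dependence plus empirical marginals, then sample), not the authors' released implementation; all function names are ours and the toy data is invented.

```python
# Minimal Gaussian-copula sampler: fit marginals and a Gaussian dependence
# structure to a dataset, then draw synthetic rows preserving both.
import numpy as np
from scipy import stats

def fit_gaussian_copula(X):
    """Estimate the copula correlation matrix from X (n_samples, n_features)."""
    n, d = X.shape
    ranks = np.argsort(np.argsort(X, axis=0), axis=0) + 1
    u = ranks / (n + 1)                  # pseudo-observations in (0, 1)
    z = stats.norm.ppf(u)                # normal scores
    return np.corrcoef(z, rowvar=False)  # Gaussian copula correlation

def sample_gaussian_copula(corr, marginals, n_samples, rng):
    """Draw samples with Gaussian dependence and given inverse-CDF marginals."""
    d = corr.shape[0]
    z = rng.multivariate_normal(np.zeros(d), corr, size=n_samples)
    u = stats.norm.cdf(z)                # uniform margins, copula dependence kept
    return np.column_stack([marginals[j](u[:, j]) for j in range(d)])

rng = np.random.default_rng(0)
# Toy "low-resolution" source: two correlated columns.
X = rng.multivariate_normal([0, 0], [[1, 0.8], [0.8, 1]], size=2000)
corr = fit_gaussian_copula(X)
# Use each column's empirical quantile function as its marginal.
marginals = [lambda u, c=X[:, j]: np.quantile(c, u) for j in range(2)]
synth = sample_gaussian_copula(corr, marginals, 2000, rng)
```

In the full SYNC pipeline this sampling step would be followed by the predictive-model merge and the scaling against low-resolution marginal constraints that the abstract describes.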
Community Contributions
Found the code? Know the venue? Think something is wrong? Let us know!
Similar Papers

In the same crypt (stat.AP), all Ghosted:
Sequence-to-point learning with neural networks for nonintrusive load monitoring
Predictive Business Process Monitoring with LSTM Neural Networks
Forecasting: theory and practice
Accurate estimation of influenza epidemics using Google search data via ARGO
Survey of resampling techniques for improving classification performance in unbalanced datasets
Died the same way (Ghosted):
Language Models are Few-Shot Learners
PyTorch: An Imperative Style, High-Performance Deep Learning Library
XGBoost: A Scalable Tree Boosting System