Exploring the potential of transfer learning for metamodels of heterogeneous material deformation
October 28, 2020 · Declared Dead · Journal of The Mechanical Behavior of Biomedical Materials
"No code URL or promise found in abstract"
Evidence collected by the PWNC Scanner
Authors
Emma Lejeune, Bill Zhao
arXiv ID
2010.16260
Category
q-bio.TO
Cross-listed
cs.LG, physics.data-an
Citations
20
Venue
Journal of The Mechanical Behavior of Biomedical Materials
Last Checked
2 months ago
Abstract
From the nano-scale to the macro-scale, biological tissue is spatially heterogeneous. Even when tissue behavior is well understood, the exact subject specific spatial distribution of material properties is often unknown. And, when developing computational models of biological tissue, it is usually prohibitively computationally expensive to simulate every plausible spatial distribution of material properties for each problem of interest. Therefore, one of the major challenges in developing accurate computational models of biological tissue is capturing the potential effects of this spatial heterogeneity. Recently, machine learning based metamodels have gained popularity as a computationally tractable way to overcome this problem because they can make predictions based on a limited number of direct simulation runs. These metamodels are promising, but they often still require a high number of direct simulations to achieve an acceptable performance. Here we show that transfer learning, a strategy where knowledge gained while solving one problem is transferred to solving a different but related problem, can help overcome this limitation. Critically, transfer learning can be used to leverage both low-fidelity simulation data and simulation data that is the outcome of solving a different but related mechanical problem. In this paper, we extend Mechanical MNIST, our open source benchmark dataset of heterogeneous material undergoing large deformation, to include a selection of low-fidelity simulation results that require 2-4 orders of magnitude less CPU time to run. Then, we show that transferring the knowledge stored in metamodels trained on these low-fidelity simulation results can vastly improve the performance of metamodels used to predict the results of high-fidelity simulations.
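The abstract's recipe can be illustrated with a hedged sketch (not the authors' code: the paper's metamodels are trained on Mechanical MNIST, while this uses a synthetic linear stand-in). The idea shown is the same: pretrain on many cheap, biased low-fidelity simulation results, then fine-tune on a handful of expensive high-fidelity results by shrinking toward the pretrained weights, and compare against training on the high-fidelity data alone.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-in for a simulation: input is a flattened "material
# property" field, output is a scalar quantity of interest. The low-fidelity
# solver is modeled as a biased, noisier version of the high-fidelity one.
d = 16
w_true = rng.normal(size=d)

def simulate(X, fidelity):
    noise = {"low": 0.3, "high": 0.05}[fidelity]
    bias = 0.8 if fidelity == "low" else 1.0
    return bias * (X @ w_true) + noise * rng.normal(size=len(X))

def fit_ridge(X, y, w0=None, lam=1.0):
    # Ridge regression shrunk toward w0; passing the pretrained weights as
    # w0 is the "transfer" step. w0=None means shrink toward zero.
    if w0 is None:
        w0 = np.zeros(X.shape[1])
    A = X.T @ X + lam * np.eye(X.shape[1])
    return np.linalg.solve(A, X.T @ y + lam * w0)

# Many cheap low-fidelity runs, very few expensive high-fidelity runs
# (fewer high-fidelity samples than parameters, so the prior matters).
X_lo = rng.normal(size=(2000, d)); y_lo = simulate(X_lo, "low")
X_hi = rng.normal(size=(10, d));   y_hi = simulate(X_hi, "high")
X_test = rng.normal(size=(500, d)); y_test = X_test @ w_true

w_pre = fit_ridge(X_lo, y_lo)            # pretrain on low fidelity
w_tl  = fit_ridge(X_hi, y_hi, w0=w_pre)  # fine-tune toward pretrained weights
w_hi  = fit_ridge(X_hi, y_hi)            # baseline: high fidelity only

def mse(w):
    return float(np.mean((X_test @ w - y_test) ** 2))

print(f"high-fidelity only: {mse(w_hi):.3f}, with transfer: {mse(w_tl):.3f}")
```

With only 10 high-fidelity samples in 16 dimensions, the baseline is underdetermined and the warm start supplied by the low-fidelity pretraining fills in the missing directions, so the transferred model generalizes better despite the low-fidelity bias. This mirrors the abstract's claim in miniature; the actual paper uses neural-network metamodels and finite-element simulation data.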
Community Contributions
Found the code? Know the venue? Think something is wrong? Let us know!
Similar Papers
In the same crypt – q-bio.TO
R.I.P. 👻 Ghosted · Early Cancer Detection in Blood Vessels Using Mobile Nanosensors
R.I.P. 👻 Ghosted · Relationship between brain injury criteria and brain strain across different types of head impacts can be different
R.I.P. 👻 Ghosted · Interpretable Classification from Skin Cancer Histology Slides Using Deep Learning: A Retrospective Multicenter Study
R.I.P. 👻 Ghosted · Towards Machine Learning-based Quantitative Hyperspectral Image Guidance for Brain Tumor Resection
R.I.P. 👻 Ghosted · SDF4CHD: Generative Modeling of Cardiac Anatomies with Congenital Heart Defects
Died the same way – 👻 Ghosted
R.I.P. 👻 Ghosted · Language Models are Few-Shot Learners
R.I.P. 👻 Ghosted · PyTorch: An Imperative Style, High-Performance Deep Learning Library
R.I.P. 👻 Ghosted · XGBoost: A Scalable Tree Boosting System
R.I.P. 👻 Ghosted