Differentiable Rule Induction from Raw Sequence Inputs

February 14, 2026 · Grace Period · International Conference on Learning Representations

⏳ Grace Period
This paper is less than 90 days old. We give authors time to release their code before passing judgment.
Authors: Kun Gao, Katsumi Inoue, Yongzhi Cao, Hanpin Wang, Feng Yang
arXiv ID: 2602.13583
Category: cs.AI: Artificial Intelligence
Cross-listed: cs.LG
Citations: 2
Venue: International Conference on Learning Representations
Abstract
Rule learning-based models are widely used in highly interpretable settings due to their transparent structures. Inductive logic programming (ILP), a form of machine learning, induces rules from facts while maintaining interpretability. Differentiable ILP models enhance this process by leveraging neural networks to improve robustness and scalability. However, most differentiable ILP methods rely on symbolic datasets and struggle to learn directly from raw data: they cannot map continuous inputs to symbolic variables without explicit supervision of the input feature labels (explicit label leakage). In this work, we address this issue by integrating a self-supervised differentiable clustering model with a novel differentiable ILP model, enabling rule learning from raw data without explicit label leakage. The learned rules describe the raw data through its features. We demonstrate that our method intuitively and precisely learns generalized rules from time series and image data.
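The pipeline the abstract describes, raw inputs softly grounded into symbolic variables by a differentiable clustering step, then combined by a differentiable rule layer, can be sketched as a forward pass. This is a minimal illustration, not the authors' implementation: the soft-assignment formula, the product t-norm conjunction, and names like `centroids` and `rule_weights` are assumptions for the sketch.

```python
import numpy as np

rng = np.random.default_rng(0)

def soft_cluster(x, centroids, temperature=1.0):
    """Softly assign raw inputs to K clusters (acting as symbolic
    predicates). Returns per-input probabilities summing to 1."""
    d2 = ((x[:, None, :] - centroids[None, :, :]) ** 2).sum(-1)  # (N, K)
    logits = -d2 / temperature
    e = np.exp(logits - logits.max(axis=1, keepdims=True))
    return e / e.sum(axis=1, keepdims=True)

def soft_rule(predicates, rule_weights):
    """Differentiable conjunction over predicate truth values using a
    product t-norm; a weight of 0 makes that predicate irrelevant."""
    gated = 1.0 - rule_weights * (1.0 - predicates)
    return gated.prod(axis=1)

x = rng.normal(size=(4, 2))          # raw (continuous) inputs
centroids = rng.normal(size=(3, 2))  # K=3 learnable cluster centers
p = soft_cluster(x, centroids)       # soft symbolic grounding, shape (4, 3)
w = np.array([1.0, 0.0, 1.0])        # hypothetical rule body: clusters 0 and 2
truth = soft_rule(p, w)              # rule truth value per input, in [0, 1]
```

Because every step is differentiable, gradients from a downstream loss can flow back through the rule layer into the cluster centers, which is what lets the symbolic grounding be learned without explicit feature labels.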
