Are Neural Networks Collision Resistant?

September 24, 2025 · Declared Dead · IACR Cryptology ePrint Archive

👻 CAUSE OF DEATH: Ghosted
No code link whatsoever

"No code URL or promise found in abstract"

Evidence collected by the PWNC Scanner

Authors: Marco Benedetti, Andrej Bogdanov, Enrico M. Malatesta, Marc Mézard, Gianmarco Perrupato, Alon Rosen, Nikolaj I. Schwartzbach, Riccardo Zecchina
arXiv ID: 2509.20262
Category: cond-mat.dis-nn
Cross-listed: cs.CR, math.PR
Citations: 1
Venue: IACR Cryptology ePrint Archive
Last Checked: 2 months ago
Abstract
When neural networks are trained to classify a dataset, one finds a set of weights from which the network produces a label for each data point. We study the algorithmic complexity of finding a collision in a single-layer neural net, where a collision is defined as two distinct sets of weights that assign the same labels to all data. For binary perceptrons with oscillating activation functions, we establish the emergence of an overlap gap property in the space of collisions, a topological property believed to be a barrier to the performance of efficient algorithms. The hardness is supported by numerical experiments with approximate message passing algorithms, which stop finding collisions well below the threshold predicted by our analysis. Neural networks thus provide a new category of candidate collision-resistant functions, which for some parameter settings depart from constructions based on lattices. Beyond its relevance to cryptography, our work uncovers new forms of computational hardness emerging in large neural networks, which may be of independent interest.
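The abstract's notion of a collision — two distinct weight vectors that assign the same label to every data point — can be illustrated with a brute-force toy. The sketch below uses a plain sign-activation perceptron with ±1 weights (an illustrative simplification, not the paper's oscillating-activation setting; the names `labels` and `find_collision` are hypothetical helpers, not the authors' code):

```python
from itertools import product

def labels(w, data):
    # Sign-activation output for each data point; a zero pre-activation
    # is arbitrarily mapped to +1.
    return tuple(1 if sum(wi * xi for wi, xi in zip(w, x)) >= 0 else -1
                 for x in data)

def find_collision(data, n):
    # Enumerate all 2^n binary weight vectors and return the first pair
    # that labels every data point identically.
    seen = {}
    for w in product((-1, 1), repeat=n):
        lab = labels(w, data)
        if lab in seen:
            return seen[lab], w
        seen[lab] = w
    return None

# Two data points in 4 dimensions: 16 weight vectors but at most 4
# distinct labelings, so a collision must exist by pigeonhole.
data = [(1, -1, 1, 1), (-1, 1, 1, -1)]
pair = find_collision(data, 4)
```

The paper's point is that when the number of weight vectors far exceeds the number of labelings, collisions exist in abundance, yet (for the activations they study) efficient algorithms provably struggle to find one.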
Community shame:
Not yet rated
Community Contributions

Found the code? Know the venue? Think something is wrong? Let us know!

📜 Similar Papers

In the same crypt — cond-mat.dis-nn

Died the same way — 👻 Ghosted