AI Summary of Peer-Reviewed Research

This page presents an AI-generated summary of a published research paper. The original authors did not write or review this article. [See full disclosure ↓]

Research area: Computer Science · Artificial Intelligence · Artificial neural network
Publishing process signals: MODERATE (reflects the venue and review process).

Label noise changes information in neural representations

What the study found

Hidden representations in neural networks change with label noise in ways that depend on how many parameters the network has. The study found a double descent pattern in the information content of these representations, and it found that overparameterized networks are robust to label noise.

Why the authors say this matters

The authors conclude that the relationship between information imbalance (a computationally efficient proxy for conditional mutual information) and test error offers a new perspective on generalization. They also say this helps explain how training objectives shape internal representations.

What the researchers tested

The researchers compared hidden representations from networks of varying sizes trained on datasets with controlled levels of label noise. They used information imbalance to compare representations, and they studied how this measure changed across the hidden layers, including the penultimate and pre-softmax layers, as well as under random-label training.
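The information imbalance Δ(A→B) between two representations is, roughly, the average rank that the nearest neighbour of each point in space A attains in space B: it is near 0 when neighbourhoods in A predict neighbourhoods in B, and near 1 when they carry no information about B. The paper's own implementation is not shown here; the following is a minimal numpy sketch under the assumption of Euclidean distances, with the function name `information_imbalance` chosen for illustration.

```python
import numpy as np

def information_imbalance(A, B):
    """Estimate Delta(A -> B) for two representations of the same N points.

    A, B: arrays of shape (N, d_A) and (N, d_B).
    Returns a value near 0 if neighbours in A predict neighbours in B,
    and near 1 if A is uninformative about neighbourhoods in B.
    """
    n = len(A)
    # Pairwise Euclidean distance matrices in each space.
    d_a = np.linalg.norm(A[:, None, :] - A[None, :, :], axis=-1)
    d_b = np.linalg.norm(B[:, None, :] - B[None, :, :], axis=-1)
    np.fill_diagonal(d_a, np.inf)  # exclude self-matches
    np.fill_diagonal(d_b, np.inf)
    # Nearest neighbour of each point in space A.
    nn_a = d_a.argmin(axis=1)
    # Rank (1 = nearest) of every point in space B.
    ranks_b = d_b.argsort(axis=1).argsort(axis=1) + 1
    # Average, over points, of the B-rank of the A-nearest neighbour,
    # normalised so that an uninformative A gives a value around 1.
    return 2.0 / n * ranks_b[np.arange(n), nn_a].mean()
```

Comparing a layer's activations against the input features, the labels, or another layer in this way is how one can track where information appears or disappears across a network.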

What worked and what didn't

In the underparameterized regime, representations learned with noisy labels were more informative than those learned with clean labels; in the overparameterized regime, they were equally informative. Label noise decreased the information content between the penultimate and pre-softmax layers. Representations learned from random labels performed worse than random features when network parameters and training samples were scaled proportionally with a fixed ratio.

What to keep in mind

The abstract does not describe specific experimental limits beyond the controlled label-noise setting and the network-size comparisons. It also does not provide details on the datasets, model architectures, or the exact range of noise levels used.

Key points

  • The study found a double descent pattern in the information content of hidden representations as network size changed.
  • Overparameterized networks were described as robust to label noise.
  • Noisy labels made representations more informative in the underparameterized regime, but not in the overparameterized regime.
  • Label noise reduced the information content between the penultimate and pre-softmax layers.
  • Representations learned from random labels performed worse than random features under proportional scaling of parameters and training samples.

Disclosure

Research title:
Label noise changes information in neural representations
Authors:
Ali Hussaini Umar, Franky Kevin Nando Tezoh, Jean Barbier, S. Acevedo, Alessandro Laio
Institutions:
Scuola Internazionale Superiore di Studi Avanzati, International Centre for Theoretical Sciences, Centre de Physique Théorique
Publication date:
2026-04-22
AI provenance: This post was generated by OpenAI. The original authors did not write or review this post.