On the Relationship Between Representation Geometry and Generalization in Deep Neural Networks
Abstract
Effective dimension, an unsupervised geometric metric, strongly predicts neural network performance across architectures and domains, and causal interventions show that the link between representation geometry and accuracy runs in both directions.
We investigate the relationship between representation geometry and neural network performance. Analyzing 52 pretrained ImageNet models across 13 architecture families, we show that effective dimension -- an unsupervised geometric metric -- strongly predicts accuracy. Output effective dimension achieves partial r=0.75 (p < 10^(-10)) after controlling for model capacity, while total compression achieves partial r=-0.72. These findings replicate across ImageNet and CIFAR-10, and generalize to NLP: effective dimension predicts performance for 8 encoder models on SST-2/MNLI and 15 decoder-only LLMs on AG News (r=0.69, p=0.004), while model size does not (r=0.07). We establish bidirectional causality: degrading geometry via noise causes accuracy loss (r=-0.94, p < 10^(-9)), while improving geometry via PCA maintains accuracy across architectures (-0.03 percentage points when 95% of variance is retained). This relationship is noise-type agnostic -- Gaussian, uniform, dropout, and salt-and-pepper noise all show |r| > 0.90. These results show that effective dimension provides domain-agnostic predictive and causal information about neural network performance, computed entirely without labels.
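For readers who want a concrete handle on the headline metric: the abstract does not spell out how effective dimension is estimated, but a common label-free estimator in the representation-geometry literature is the participation ratio of the feature covariance eigenvalues. The sketch below is a minimal illustration under that assumption; the function name and the participation-ratio definition are ours, not necessarily the authors' implementation.

```python
# Minimal sketch, assuming effective dimension is estimated as the
# participation ratio of the covariance eigenvalues of a layer's features.
# The abstract does not specify the paper's exact estimator, so treat this
# definition as an illustrative assumption.
import numpy as np

def effective_dimension(features: np.ndarray) -> float:
    """Participation ratio ED = (sum_i lam_i)^2 / sum_i lam_i^2, where
    lam_i are the eigenvalues of the covariance of `features`
    (shape: n_samples x n_features). No labels are needed."""
    centered = features - features.mean(axis=0, keepdims=True)
    # Singular values of the centered data give the covariance eigenvalues:
    # lam_i = s_i^2 / (n_samples - 1)
    s = np.linalg.svd(centered, compute_uv=False)
    lam = s**2 / (features.shape[0] - 1)
    return float(lam.sum() ** 2 / (lam**2).sum())

# Isotropic features spread variance over all 64 directions (ED near 64);
# rank-4 features concentrate it (ED at most 4, the true rank).
rng = np.random.default_rng(0)
print(effective_dimension(rng.standard_normal((1000, 64))))
print(effective_dimension(rng.standard_normal((1000, 4)) @ rng.standard_normal((4, 64))))
```

Because such an estimate is entirely label-free, it can be computed at any layer of a network, which is what allows comparisons like the abstract's output effective dimension versus total compression across the whole model.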
Community
Similar discovery: matrix-based entropy tightens the generalization upper bound: https://arxiv.org/abs/2505.08727
This is an automated message from the Librarian Bot. I found the following papers similar to this paper.
The following papers were recommended by the Semantic Scholar API.
- The Inductive Bottleneck: Data-Driven Emergence of Representational Sparsity in Vision Transformers (2025)
- High-Dimensional Search, Low-Dimensional Solution: Decoupling Optimization from Representation (2025)
- Understanding Generalization from Embedding Dimension and Distributional Convergence (2026)
- Local Intrinsic Dimension of Representations Predicts Alignment and Generalization in AI Models and Human Brain (2026)
- GLaD: Geometric Latent Distillation for Vision-Language-Action Models (2025)
- Next-Embedding Prediction Makes Strong Vision Learners (2025)
- Beyond Real Weights: Hypercomplex Representations for Stable Quantization (2025)
You can directly ask Librarian Bot for paper recommendations by tagging it in a comment: @librarian-bot recommend