I AI-Generated 100 Top Computer Vision Papers

Community Article Published December 30, 2025

What happens when you don’t just use AI to help write parts of a research paper, but let it generate the entire thing?


arxAIv (https://allisonandreyev.github.io/arxaiv.github.io/) is a speculative installation of 100 AI-generated computer vision papers that parodies the accelerating fusion of generative AI and academic publishing. In an era where drafting abstracts or smoothing prose with large language models has become mundane, this work imagines a near-future endpoint — one where titles, figures, affiliations, citations, and formatting are all conjured from prompts alone.

From a distance, the illusion is remarkably polished. Paper titles sound technical. Layouts mimic top-tier conferences. Affiliation lines blend real institutions with subtle fakes. Diagrams appear structured and data-driven. At first glance, it’s disturbingly easy to mistake these for real submissions.

But upon closer inspection, the cracks emerge. Figures, while plausible, unravel into visual noise. Affiliations contradict themselves. References cite ghosts. And though the structure is convincing, the content falters, revealing how far we still are from AI fully imitating scientific rigor.

That gap is the point.

arxAIv visualizes this gap through a series of semantic and visual embeddings. Title clusters were formed from sentence embeddings (all-MiniLM-L6-v2) and animated in 3D with 3d-force-graph, forming orbiting clouds of thematic similarity. Figure embeddings, generated with OpenCLIP ViT-B-32, were mapped to show how real CVPR figures tend to cluster tightly while AI-generated ones drift further apart. The contrast is visual and visceral: human-created figures show internal logic, while the synthetic ones often slide into abstraction or contradiction.
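The tight-versus-drifting contrast described above can be quantified with a simple dispersion measure: the mean pairwise cosine similarity within each set of figure embeddings. The sketch below is a minimal illustration of that idea, using random toy vectors in place of the project's actual OpenCLIP ViT-B-32 outputs (the data, set sizes, and dimensionality here are hypothetical):

```python
import numpy as np

def mean_pairwise_cosine(embeddings: np.ndarray) -> float:
    """Average cosine similarity over all distinct pairs of row vectors.

    A value near 1.0 means the embeddings point in similar directions
    (a tight cluster); a value near 0.0 means they are scattered.
    """
    normed = embeddings / np.linalg.norm(embeddings, axis=1, keepdims=True)
    sims = normed @ normed.T
    n = len(embeddings)
    # Exclude the diagonal (each vector's similarity with itself).
    return float((sims.sum() - n) / (n * (n - 1)))

# Toy stand-ins: "real" figure embeddings share a common direction with
# small noise, while "synthetic" ones are drawn independently at random.
rng = np.random.default_rng(0)
real = rng.normal(loc=1.0, scale=0.1, size=(50, 512))
synthetic = rng.normal(loc=0.0, scale=1.0, size=(50, 512))

print(mean_pairwise_cosine(real))       # high: a tight cluster
print(mean_pairwise_cosine(synthetic))  # near zero: dispersed
```

In practice one would feed each set of figures through the same OpenCLIP image encoder and compare the two scores; a lower within-set similarity for the generated figures is the numerical counterpart of the visual "drift" the installation displays.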

By layering false papers, hallucinated figures, and surreal affiliations into graphs, stats, and animations, arxAIv doesn’t just parody academic aesthetics — it warns of how quickly those aesthetics could be mimicked. It proposes a future where the boundary between rigorous work and visual imitation becomes harder to detect.

This is not a celebration of generative AI — it’s a speculative checkpoint. If we already accept AI-generated fragments in scientific work, how far until we let it author the whole? And when we get there, how will we know what to trust?
