Visual Elevator Music: What AI Might Be Doing to Human Creativity
- Ben Mazza
- Apr 7
- 3 min read
There's a study you should know about.
Researchers linked a text-to-image AI with an image-to-text AI and let them run in a loop. Image, caption, image, caption. Regardless of how diverse the starting prompts were, the outputs collapsed onto the same narrow themes every time. Atmospheric cityscapes. Grandiose buildings. Pastoral landscapes.
The researchers called it "visual elevator music."
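The dynamic is easy to see in miniature. Here is a toy simulation of that feedback loop -- the real study wired an actual text-to-image model to an image-to-text model, but these stand-in functions (invented for illustration) capture the mechanism: the "captioner" snaps any description onto the nearest high-probability theme, and the "generator" simply renders what the caption says.

```python
# Toy simulation of the image -> caption -> image feedback loop.
# Both "models" below are crude stand-ins, not real AI systems.

COMMON_THEMES = ["atmospheric cityscape", "grandiose building", "pastoral landscape"]

def caption(image_desc: str) -> str:
    """Pick the common theme sharing the most words with the image,
    i.e. choose the probable over the possible."""
    words = set(image_desc.lower().split())
    return max(COMMON_THEMES, key=lambda theme: len(words & set(theme.split())))

def generate(text: str) -> str:
    """Stand-in generator: renders exactly what the caption says."""
    return text

def run_loop(prompt: str, steps: int = 5) -> str:
    image = generate(prompt)
    for _ in range(steps):
        image = generate(caption(image))
    return image

# Wildly different starting prompts converge onto the same narrow outputs.
print(run_loop("a crab playing chess in a grandiose ballroom"))
print(run_loop("neon atmospheric rain over a cityscape at night"))
```

However diverse the starting prompt, a few iterations land on one of the stock themes and stay there: each theme is a fixed point of the loop.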

No bad decisions were made. Nobody got lazy. The system just converged, repeatedly choosing the probable over the possible until the range was gone.
Researchers at Oxford and Cambridge have a name for this: model collapse.
Training AI on content produced by earlier AI causes irreversible defects: the tails of the original content distribution simply disappear (UCLA Anderson Review). The outliers. The weird stuff. The things that don't fit the pattern. Gone.
It gets more personal than that. A study published in Science Advances put writers into two groups -- one with AI assistance, one without. The AI-assisted writers produced stories rated as more creative, better written, and more enjoyable. But those stories were also significantly more similar to each other than the ones written without AI. Better individually. Blander collectively. The researchers called it a social dilemma, and that framing is exactly right.
Then there's the one that should really land.
A longitudinal study tracked what happened when AI assistance was taken away. Creative performance dropped sharply upon withdrawal, while content homogeneity kept climbing. Generative AI augments performance without fostering real creative ability -- a creativity illusion.
They called it a creative scar.

We are not just producing blander work. We may be training ourselves out of the capacity to produce anything else.
Here is where I am going to be direct.
A lot of the people most enthusiastically deploying AI content are not doing it to free up creative bandwidth. They are doing it to manufacture the appearance of having something to say. The LinkedIn content mills, the newsletter factories, the "thought leaders" publishing five posts a week who have not field-tested an original idea since 2019 -- AI did not create that impulse. But it handed it a printing press.
The problem was never the tool. It was always the motivation.
Greed optimizes for shortcuts. That is not a moral indictment of any one person; it is just what incentive structures do when left unsupervised. Right now the incentive structure rewards volume, velocity, and the performance of expertise over the actual thing. AI is exceptionally good at producing the performance of expertise. So we are about to get buried in it.
What do we actually do about it?
Develop a better nose for it.
Visual elevator music is not just an AI problem; it is a quality signal. Content that provokes nothing, challenges nothing, and could have been written by anyone probably was. Stop rewarding it with your attention. Stop sharing it. Stop feeling obligated to engage just because someone in your network posted it and the algorithm noticed.
Make the process visible again. The same researchers who found the creative scar also found that teams staying genuinely engaged with their own process, using AI as a tool rather than a replacement, kept their range intact. The difference was not the technology. It was the intention. Show your thinking. Publish the draft. Let the uncertainty be part of it. That is how your audience knows a real person actually wrestled with something here.
Be honest about what you are optimizing for. If it is reach, fine, own it. If it is actual contribution, the shortcuts are working against you. There is no faster route to the kind of creativity built from real experience and genuine engagement with hard problems. Just a lot of people selling you a map to somewhere they have never been.
I genuinely believe the good stuff surfaces. It always has. People feel the difference between something that was made and something that was manufactured, even when they cannot explain why.
But "eventually" is not good enough when the manufactured stuff is this loud.
So make something real. Something specific to you. Your actual experience. Your genuine uncertainty. Your point of view that no model trained on the median of human output is going to replicate.
Not because AI cannot approximate it.
Because approximation was never the point.