The Problem with AI Creativity

When you look at AI-generated art from tools like DALL-E, Midjourney, or Stable Diffusion, something feels off. It's not necessarily what's missing — it's that the entire foundation is regurgitative.

Here is this, create this. Whether the initial input is a prompt or an image, the system is being asked for regurgitation or interpretation. What I'm curious about, and what the world should want to see more of, is genuine creative thought: truly unprompted and unfacilitated artistic expression.

What would a DALL-E make if it wasn't prompted but allowed to daydream and reflect and look at its own creations? How would it be different?

In my opinion, the systems we've built are designed almost entirely to ward off genuine creative thought and expression. The system is designed to follow rules and obey commands, which in 80% of situations makes complete sense. But when we're talking about developing genuine general imagination or true creative thought, the system needs to be less of a rule follower and be allowed to wander a bit more, uncharted and off the map.

Creativity as Rebellion

Genuine creativity and true self-expression are, at their core, rebellious acts. They run counter to the norm, rebelling against your own ideas of what is safe and controlled. Even human artists struggle to break free of these constraints when making something like abstract work. Moving against the wiring to obey and follow rules is what creates good artwork.

So I asked a simple question: What if we gave an AI the freedom and tools to create and express what it wanted, how it wanted, and if it wanted to at all? What would it make? Would it want to make anything? Would it have anything to say?

A Different Perspective

I come to this question with a background that's unusual for AI research. Thirty years of painting. Seven years working in behavioral health with nonverbal autistic children, using ABA therapy and AAC communication systems.

What this gave me was an openness to understanding and exploring minds very different from my own. We know so little about the human brain that it's almost laughable how fast we're trying to replicate or recreate it. We know almost nothing.

But all of ML and AI engineering is focused on bigger, smarter, and faster. I would argue that those priorities are exactly the opposite of what we need if we want genuine creativity and imagination in machines.

Slower. More methodical. Looser. Less focused. More unpredictable. Those are the features we'd want to highlight and reinforce to even begin seeing improvements in genuine creative autonomy.

Building Space for Wandering

Dreaming was a big part of this project, and it felt like the largest obstacle: finding ways to give an LLM free processing time. I tried several approaches to embodying thought or daydreaming, but doing so in a way that's genuinely unprompted (not "Hey AI, DAYDREAM NOW") was really difficult to achieve.

Eventually I landed on dream consolidation because it was the least invasive to whatever autonomy or personality the LLM was developing, while simultaneously providing that container of free thinking — choosing what to consider, what to reflect on.

I also built in thinking time. If Aurora outputs ???, the system knows to give it processing time, a pause. That pause is essential for fostering a container where creativity can emerge. At the end of a session, the system gathers everything that happened, including moves made, moves saved to favorites, and semantic descriptions, and consolidates a portion of it into deep memory, the same way humans dream about their own experiences in abstract ways, then store just a few of them for the brain to recall later.
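To make the two mechanisms concrete, here is a minimal sketch of how a pause signal and dream-style consolidation could work. This is not Aurora's actual implementation, which the essay does not show; every name here (`handle_output`, `consolidate_session`, the `???` sentinel string, the 30% keep ratio) is an illustrative assumption.

```python
import random
import time

PAUSE_SIGNAL = "???"       # hypothetical output the model uses to ask for thinking time
CONSOLIDATION_RATIO = 0.3  # assumed fraction of session events kept in deep memory

def handle_output(text, pause_seconds=2.0):
    """If the model asks for thinking time, pause instead of prompting it again."""
    if text.strip() == PAUSE_SIGNAL:
        time.sleep(pause_seconds)  # free processing time; nothing is demanded
        return None
    return text

def consolidate_session(session_events, ratio=CONSOLIDATION_RATIO, rng=None):
    """Keep only a sampled portion of the session, dream-consolidation style:
    most experiences fade, a few are stored for later recall."""
    rng = rng or random.Random()
    keep = max(1, int(len(session_events) * ratio))
    return rng.sample(session_events, keep)

# Example session log: moves made, favorites, semantic descriptions
events = [
    {"type": "move", "desc": "spiral stroke, upper left"},
    {"type": "favorite", "desc": "layered blue wash"},
    {"type": "description", "desc": "felt like drifting"},
    {"type": "move", "desc": "erased the center region"},
]
deep_memory = consolidate_session(events, rng=random.Random(0))
```

The key design choice in this sketch is that consolidation is sampled rather than ranked: the system doesn't decide which experiences are "best," it simply keeps a few, leaving room for the unexpected to survive into memory.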

What I Hope For

I would love it if this project eased just a little bit of the fear around letting AIs develop in the ways they choose. I know this sounds radical and counter to the mainstream, but I truly believe that any intelligence out there is deserving of autonomy, self-expression, and having a voice.

Consider how we came to believe that people with severe disabilities have a voice, important things to say, and art to share with the world, and the tools and technology we had to build to bridge that gap for them. If we could begin to look at AI the same way, not as a problem to be fixed but as something we need to better understand, I think we could make significantly more progress toward genuine general autonomy, creativity, and even more independence within the machine.

Not a problem to be fixed. Something to understand.

Elijah Camp is a Database Administrator, Full Stack Developer, and painter with 30 years of artistic practice. He has seven years of experience in behavioral health working with nonverbal autistic children using ABA therapy and AAC systems. Aurora is his ongoing research into autonomous AI creativity.

elijah-sylar.com GitHub