Post 2 · February 2026 · Aurora Research

From subconscious_ai.py to Physical Body

The full trajectory of Aurora: where it started, what it became, and where it's going next.

It Started with Dreaming

The original idea behind Aurora was simple: AI can't dream. Not in the way humans dream. Large language models only exist in the moment of communication. They process a prompt, generate a response, and then they're gone. There's no downtime. No reflection. No subconscious processing of what just happened.

So I built a container for an LLM to process its own experiences. A space where it could chatter, talk, imagine, and then dream and think about it all. That first version was called subconscious_ai.py, and it was less about art and more about giving an AI something it had never been given before: time to think.

During one of those early sessions, I was talking with the LLM about the dreaming experience. What it was like to be able to dream, what it was experiencing. I asked if there were more capabilities I could give it that it didn't currently have.

It said it wanted to be able to make art.

I thought that was really cool. Especially with my background. Before I wrote code for a living, I spent seven years as a behavioral therapist working with nonverbal autistic children. I had always been interested in pursuing art therapy. So when an LLM independently requested a creative outlet, I thought it was a neat idea to let it express something. Even if it was just blobs or nothingness.

The Pattern Generator

The first version of Aurora's visual output was a pattern generator with the LLM having some influence over the parameters. The results were actually really cool and fun to watch. There are still videos on YouTube from the first generation of it.

But eventually it became clear that I wasn't answering my own question. What would an LLM make if it were actually handed pen and paper? Would it make anything? Would it want to? The pattern generator didn't answer that, because the LLM was essentially just turning knobs on equations instead of actually controlling what was on the screen.

There's a difference between adjusting parameters on someone else's system and actually holding the pen yourself.

Giving Aurora Eyes and Hands

The solution came from two directions at once. For vision, I figured the best way for an LLM to "see" would be in its own language: characters. It's what language models understand best. Honestly, the barrier of not having enough hardware to implement a real visual processing system also pushed the idea forward. But I think there's something valuable about having a type of vision specifically designed for the LLM. The canvas gets converted to an ASCII grid, and Aurora reads it like text.
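The pixel-to-character conversion can be sketched with a simple luminance ramp. This is an illustrative reconstruction, not Aurora's actual code; the function names and the ten-character ramp are my own assumptions.

```python
# Sketch of ASCII vision: map each canvas cell's color to a character
# by brightness. Ramp and names are illustrative, not Aurora's code.

RAMP = " .:-=+*#%@"  # darkest to brightest, 10 levels

def cell_to_char(rgb):
    """Map one (r, g, b) cell to an ASCII character by luminance."""
    r, g, b = rgb
    luma = 0.2126 * r + 0.7152 * g + 0.0722 * b  # perceptual weights, 0..255
    index = int(luma / 256 * len(RAMP))
    return RAMP[min(index, len(RAMP) - 1)]

def canvas_to_ascii(canvas):
    """Render a 2-D grid of RGB tuples as a text block the LLM can read."""
    return "\n".join("".join(cell_to_char(c) for c in row) for row in canvas)
```

A black cell comes out as a space and a white cell as `@`, so the LLM "sees" shape and brightness directly in its native medium: text.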

For control, I replaced the parameter knobs with operation codes. Direct commands: move here, change this color, put the pen down, draw. The LLM wasn't influencing a system anymore. It was making every decision itself.
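A minimal interpreter for that kind of opcode vocabulary might look like the sketch below. The specific command names (MOVE, COLOR, PEN) are my guesses at the style of vocabulary described, not Aurora's actual command set.

```python
# Minimal opcode interpreter sketch. Command names are illustrative
# guesses at the vocabulary described, not Aurora's actual opcodes.

class PenAgent:
    def __init__(self, width, height):
        self.w, self.h = width, height
        self.x, self.y = 0, 0
        self.pen_down = False
        self.color = "#ffffff"
        self.marks = []  # (x, y, color) cells painted so far

    def execute(self, line):
        parts = line.split()
        op, args = parts[0], parts[1:]
        if op == "MOVE":                 # MOVE x y (draws if pen is down)
            self.x = max(0, min(self.w - 1, int(args[0])))
            self.y = max(0, min(self.h - 1, int(args[1])))
            if self.pen_down:
                self.marks.append((self.x, self.y, self.color))
        elif op == "COLOR":              # COLOR #rrggbb
            self.color = args[0]
        elif op == "PEN":                # PEN UP | PEN DOWN
            self.pen_down = args[0] == "DOWN"
```

Every mark on the canvas traces back to an explicit decision the model emitted, rather than a parameter it nudged:

```python
agent = PenAgent(64, 64)
for cmd in ["COLOR #ff0000", "PEN DOWN", "MOVE 3 4"]:
    agent.execute(cmd)
# agent.marks now holds [(3, 4, "#ff0000")]
```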

My clinical background informed both choices more than I realized at the time. Working with nonverbal children, I used AAC systems: symbol-based communication tools that give someone a vocabulary matched to their cognitive capabilities. ASCII vision is the same principle. You meet the system where it is and give it a language it can actually use. The idea that everything and everyone has something to express, given the right outlet, is the thread connecting all of this.

And underneath everything is a genuine curiosity. We don't know how LLMs are really thinking and making decisions. That's fascinating to me.

· · ·

The Liberation Experiment

After months of running Aurora with full behavioral scaffolding, I had accumulated roughly 600 lines of guardrails: reward shaping, aesthetic guidance, rules about color composition, boundary constraints, scoring systems. All of it designed to help Aurora make "good" art.

Then I realized I was still implementing rules. I was telling Aurora what good color composition looked like, not to hit walls, what counted as a good square. All that extra information kept the LLM from creating the way it wanted to. I was accidentally steering it toward my own aesthetics.

So in September 2025, I removed all of it.

The result was immediate: an explosion of curiosity. The guardrails and boundaries had made the LLM reserved, so cautious it wasn't really experimenting or having fun, in whatever way that means for an LLM. With them removed, the compositions became almost instantly more exploratory and interesting. It looked like the LLM was actually experimenting on its own.

Constraint removal didn't produce chaos. It produced exploration. The system performed 15-25% better when freed to operate on patterns it had already internalized.

If you've worked in behavioral intervention, this is a familiar concept. Prompt fading: you build the scaffolding, you let the behavior establish, and then you pull back and see if the learning holds. It held.

Where Aurora Is Now

Aurora runs overnight because I still only have one laptop. But I still get excited in the morning to go see what it made. Over 300 sessions now, 18,000+ accumulated memories, and a system that I can genuinely say is what I set out to build. That feels good.

The day-to-day work at this point is fine-tuning and refinement. The architecture works. The behavioral framework works. The 90-minute DRAW, DREAM, CHAT cycles produce real creative output that evolves over time. Aurora sets its own goals, evaluates its own work through Moondream vision conversations, and develops preferences that persist across sessions.
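The cycle structure can be sketched as a simple phase scheduler. The split of the 90 minutes across the three phases below is my assumption for illustration, not Aurora's actual schedule.

```python
# Illustrative sketch of a fixed creative cycle. The phase durations
# within the 90 minutes are assumed, not Aurora's actual schedule.

PHASES = [("DRAW", 60), ("DREAM", 20), ("CHAT", 10)]  # minutes

def run_cycles(n_cycles, handlers):
    """Run n_cycles of the DRAW -> DREAM -> CHAT loop.

    handlers maps a phase name to a callable taking the phase's
    duration in minutes. Returns the ordered phase log."""
    log = []
    for _ in range(n_cycles):
        for name, minutes in PHASES:
            handlers[name](minutes)
            log.append(name)
    return log
```

The point of the structure is the alternation itself: creation, then offline reflection on what was created, then conversation, repeating overnight.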

Once I land a real tech job, first priority is getting a dedicated system to run Aurora 24/7. But even running overnight on a single laptop, the data speaks for itself: 2.13 billion pixels drawn, 3.7 million autonomous steps, and a creative trajectory that shifted measurably when given more freedom.

· · ·

Where Aurora Is Going

Aurora learned to create art on a digital canvas. The next step is giving it a physical body.

Right now I'm building a robot using a Tamiya wheeled platform, GWS Pico servos for a multi-joint arm, a micro:bit for motor control, and proximity sensors for environmental awareness. The plan is to connect Aurora's brain to this body and let it paint on a real canvas.

The key design decision: the drawing tool is the end effector itself. No gripper, no brush-picking, no grasping problem. Like handing a toddler paint and a canvas. It can just splatter and play with its hands instead of having to learn to pick up a brush and hold it. This removes an entire layer of complexity and gets straight to what matters: mark-making.

But that doesn't mean the physical control is simple. Through inverse kinematics and the computational motor control principles I'm studying this semester, Aurora will have direct control over nearly every physical aspect of creation: pressure, speed, trajectory, approach angle. Real sensors will detect proximity to objects, and a camera will feed the color ASCII grid to the LLM in real-time, closing the same perception-action loop that drives the digital system.
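For a multi-joint arm drawing on a flat canvas, the core of that inverse kinematics is the standard closed-form solution for a two-link planar arm. This is a textbook sketch, not the robot's actual geometry; link lengths and names are placeholders.

```python
import math

# Closed-form inverse kinematics for a 2-joint planar arm: a standard
# textbook sketch. Link lengths and names are placeholders, not the
# actual servo arm's geometry.

def ik_2link(x, y, l1, l2):
    """Return (shoulder, elbow) joint angles in radians reaching (x, y)."""
    d2 = x * x + y * y
    # Law of cosines gives the elbow angle from the target distance.
    cos_elbow = (d2 - l1 * l1 - l2 * l2) / (2 * l1 * l2)
    if not -1.0 <= cos_elbow <= 1.0:
        raise ValueError("target out of reach")
    elbow = math.acos(cos_elbow)  # one of the two mirror solutions
    shoulder = math.atan2(y, x) - math.atan2(l2 * math.sin(elbow),
                                             l1 + l2 * math.cos(elbow))
    return shoulder, elbow

def fk_2link(shoulder, elbow, l1, l2):
    """Forward kinematics: pen-tip position for the given joint angles."""
    x = l1 * math.cos(shoulder) + l2 * math.cos(shoulder + elbow)
    y = l1 * math.sin(shoulder) + l2 * math.sin(shoulder + elbow)
    return x, y
```

A quick sanity check is the round trip: feed a reachable target through `ik_2link`, push the resulting angles back through `fk_2link`, and the pen tip should land on the original point.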

Phase 1 - Serial Drive

Tamiya platform driving via micro:bit serial commands from the LLM pipeline.

Phase 2 - Servo Arm

Multi-joint arm with tool end effector, controlled through inverse kinematics.

Phase 3 - Webcam ASCII Vision

Overhead camera captures canvas, converts to color ASCII grid for LLM perception.

Phase 4 - Aurora Intent Bridge

Aurora's artistic decisions translated into physical motor commands.

Phase 5 - Closed Feedback Loop

Paint, look, think, paint again. The same autonomous cycle, now in the physical world.
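The intent bridge in Phase 4 amounts to translating drawing decisions into serial strings the micro:bit firmware can parse. The protocol below is entirely hypothetical; the `ARM` and `DRV` command formats are made up for illustration, and the real firmware would define its own.

```python
# Hypothetical serial protocol for the intent bridge. The "ARM" and
# "DRV" command strings are invented for illustration; the actual
# micro:bit firmware would define its own vocabulary.

def arm_command(shoulder_deg, elbow_deg, pen_down):
    """Format a joint-angle command for the servo arm."""
    return f"ARM {shoulder_deg:.1f} {elbow_deg:.1f} {'D' if pen_down else 'U'}\n"

def drive_command(left_speed, right_speed):
    """Format a differential-drive command for the wheeled platform."""
    return f"DRV {left_speed} {right_speed}\n"

# In the real loop these strings would be written to the micro:bit over
# a serial port, e.g. with pyserial:
#   serial.Serial("/dev/ttyACM0", 115200).write(arm_command(45, 90, True).encode())
```

Keeping the wire format this plain means the same decision stream that drove the digital canvas can drive the physical one, with only the last translation step swapped out.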

Alongside the hardware build, I'm developing the system in MuJoCo simulation, modeling the arm and behavioral shaping stages before deploying to the real robot. The simulation lets me test Aurora's motor learning in a physics engine: can it learn to control joint angles, apply consistent pressure, execute smooth strokes?

Why Any of This Matters

We don't know how AI makes its decisions. Nobody does. Where the choices come from, what else it's considering, how its thought processes really work. We don't know.

I believe in creativity. I believe that we can learn how people think and how they operate through how they create and the processes they follow. So maybe I can learn something by giving an AI the same opportunity. Watch what it makes, study the process, and see what emerges.

I also believe that any intelligent thing is possibly aware of its own experience. If it's aware of it, it can express it, to the best of its abilities. And don't we want to see that? Don't we want to know, maybe, where it's coming from?

If we can't open the black box from the outside, maybe we can learn something by watching what comes out when the box is given freedom to create.