Scene in a Bottle
I have this bottle full of clear fluid next to my desk. Inside you see
a ship complete with ocean and tiny sailors moving about on deck. Tap
on the bottle and the scene changes. Tap. The Captain's Ready Room.
Tap-Tap. The Main Hall. Tap. Crew's Quarters.
I like to imagine it that way anyway.
I'm actually talking about the end result of filling a bottle with
trillions upon trillions of tiny machines that emit colors as they
float about in the clear liquid. I've been wondering how to program
them. I've been wondering how complicated each one has to be in order
to pull off the illusion. I've been trying to prove that I can hold a
3D matrix of tiny volumes of color in place without the scene
dissolving into a fuzzy cloud.
That's just one of the problems along the path to bringing this system
to life. I have to initialise the 3D grid of volumes as well as load
the images. Holding the image in place seemed like the simplest
problem to tackle.
So, I decided to start there.
It was a Sunday when I was just getting some DNA simulation software
to compile. I realized I was spending a lot of time with software
written by other folks and I wanted to try my hand at computational
science. After some thought, I recalled that old idea I had years ago
about drawing scenes in 3D space. I could just use little robots
changing color throughout that space... I might be able to make some
pretty neat stuff.
They would act like 3D pixels holding color at 3D points. The trick
was to devise a system that could preserve a region of color despite
parts that flow freely in and out of the region. Are there simple
rules I could prescribe to the robots that would prevent the colors
from blurring into one another?
Monday came and I wrote the initial code that defines the structure
for the simulated bots. I sat there at first, trying to decide what to
name these things. And I wrote down, "Coid." It's a play on Craig W.
Reynolds's Boids, the software agents he used to simulate flocking
behavior. Since these just floated around in a cloud, I called them
Coids. (You know: Cloud-Boids.)
By Tuesday, I had a working simulation of 2 regions of Coids
interacting. (You can see them in the picture up there.) It was slow,
though: it takes nearly 14 seconds on my 2.8 GHz machine to properly
compute 1 frame. I went to bed that night resolving to speed the thing
up the next day.
Ironically, the next day I wrote something much slower and, in fact,
broken. That's a shame, because my scheme sounded so wonderfully
elaborate. You see, each Coid has to keep track of the other Coids
nearest to it. The simplest way is to just run the distance formula
between a selected Coid's position and those of all 4000 others. You
have to do this for each of the 4000 Coids, so it works out to 16
million distance computations per frame (4000 × 4000).
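In Python (the language I'm writing this in), that brute-force pass looks roughly like the following. It's a minimal sketch of the idea, not my actual code, and the names are just illustrative:

```python
import math

def nearest_neighbors(positions, k=6):
    """Brute force: measure the distance from every Coid to every
    other Coid and keep the k closest. That's O(n^2) distance checks,
    which is where the 16 million computations per frame come from."""
    n = len(positions)
    neighbors = []
    for i in range(n):
        xi, yi, zi = positions[i]
        dists = []
        for j in range(n):
            if i == j:
                continue  # a Coid is not its own neighbor
            xj, yj, zj = positions[j]
            d = math.sqrt((xi - xj) ** 2 + (yi - yj) ** 2 + (zi - zj) ** 2)
            dists.append((d, j))
        dists.sort()
        neighbors.append([j for _, j in dists[:k]])
    return neighbors
```

With 4000 Coids, the inner loop runs about 4000 × 4000 times every frame, which is exactly why each frame takes so long.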
Since my "clever" speed up method failed...
I began researching anything that would cut down the 16 million
computations per frame and wound up finding out about alternate
geometry systems and some articles on solving the Nearest Neighbor
problem.
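One idea those articles keep coming back to is binning space into cells, so each Coid only gets compared against the handful of Coids in nearby cells instead of all 4000. I haven't settled on an approach yet, so this is just a rough sketch of the idea; the names and the cell size are placeholders, not anything from my code:

```python
from collections import defaultdict

def build_grid(positions, cell_size):
    """Bin each Coid into a cubic cell keyed by integer coordinates."""
    grid = defaultdict(list)
    for idx, (x, y, z) in enumerate(positions):
        key = (int(x // cell_size), int(y // cell_size), int(z // cell_size))
        grid[key].append(idx)
    return grid

def candidates_near(grid, position, cell_size):
    """Return indices of Coids in the 3x3x3 block of cells around a
    point -- the only ones worth running the distance formula on."""
    cx, cy, cz = (int(c // cell_size) for c in position)
    found = []
    for dx in (-1, 0, 1):
        for dy in (-1, 0, 1):
            for dz in (-1, 0, 1):
                found.extend(grid.get((cx + dx, cy + dy, cz + dz), []))
    return found
```

If the Coids are spread out fairly evenly, each cell only holds a few of them, so the per-Coid work drops from 4000 comparisons to a few dozen.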
You see, to preserve the boundary between two regions of Coids, say
between the reds and the blues, I told each Coid to look at the
closest Coids around itself: if more of them are blue, turn blue; if
more of them are red, turn red; and if the counts are equal, be
indecisive and turn green. They're told to ignore green (i.e.,
indecisive) Coids. Also, if a Coid doesn't have enough neighbors, it
keeps its current color.
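Written out as code, that rule is roughly the following. It's a sketch: the color labels and the "enough neighbors" threshold are stand-ins, since I'm still tuning the real numbers:

```python
RED, BLUE, GREEN = "red", "blue", "green"

def next_color(current, neighbor_colors, min_neighbors=3):
    """Majority vote among non-green neighbors. A tie turns the Coid
    green (indecisive); too few neighbors means keep the old color."""
    votes = [c for c in neighbor_colors if c != GREEN]  # ignore indecisive Coids
    if len(votes) < min_neighbors:
        return current  # not enough neighbors: stay the same
    reds = votes.count(RED)
    blues = votes.count(BLUE)
    if reds > blues:
        return RED
    if blues > reds:
        return BLUE
    return GREEN  # equal counts: be indecisive
```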
In reality, a Coid could easily tell which Coids are closest to it.
But in my simulation I have to go through the xyz coordinates of every
one and determine which Coids are close together via math alone. So,
this simulation has to work hard to simulate a system that would never
need to make that calculation. Such is the world of writing
simulations: full of intellectual twists and turns and logical puzzles
that keep you guessing. These days I smile, half amused by my failed
speed-up attempt.
Mild amusement is a far better response than frustration. These days
I recognize the attempt itself is a valuable learning experience as I
doubt I would have learned about nearest neighbor algorithms any other
way. Passive learning would never have yielded the same maddening
result but it takes a calm and patient demeanor to accomplish these
things. In the past, I would be hurried and frustrated to death. After
3 nights, I would ask myself why in the world I was doing this and
then promptly shelve it.
But not this time.
Or more aptly, I have good reasons for spending
my time on this and other questions like it.
For one, there is much more to learn about the Python programming
language. It's clear from the errors in the "optimized code" that
there's something I don't understand in there. I'm debating how
beneficial hunting it down really is but all that aside it's high time
I became an expert in something... even if it takes another 10 years.
Secondly, simulating "technologies to come" is a big part of the web
series "Power and Magic," something whose ideas I've been working out
for a long while now. A lot of the weight of the series comes from
asking what-if questions and then extrapolating and visualising the
results. I'm working hard to make realistic portraits of the
promise and the perils of the Nanotech Revolution.
Finally, I want a damn simulation in a bottle! I have a feeling the
tech will come along and I wanted to devise some useful applications.
I needed to point it in a direction in order to ask, "What
capabilities and logic do the simplest machines need in order to
create meaningful things?" What should the simplest machines be
designed to do?
All useful enough.
Of course, I have no clear conclusions yet.
I do entertain mild aspirations for the future. There's just something
thrilling about attempting to design with the technologies of
tomorrow. I close my eyes at night and see molecular structures
dancing about as I fall asleep. Sometimes I see code that I can't
really read play back in glowing, shifting shapes. Ah, to sleep,
perchance to dream... But what dreams may come?