
The Matrix: Where Are We Now?

#1 on Google!

The release of The Matrix Reloaded this past May created an amazing wave of analytical energy among its fans. This year I contributed some material to the most epic discussion thread I’ve ever seen on any topic, which you can visit here. Within a couple of months this thread became something extraordinary, attracting considerable traffic and eventually becoming the #1 result on google.com for “matrix reloaded discussion”. Also, have a peek @ the thread host’s entertaining notes about the thread. Some day I aim to reassemble everything from these discussions into a single comprehensive write-up. For now you can peruse that blog, which is in two parts. You’ll find tons of good mind candy on all things Matrix. Look for Spoon Boy.

So where are we now? We’re down to the final week of our wait for Revolutions, and my mind is once again drifting off to that special place.

I received an email from a mathematician who came across my original May 15 Matrix article on the Net. He offers some insight into how we construct statistical error models in order to program complex computer systems that simulate real-world randomness. This ties directly into the topic of A.I. in general, and the Matrix story in particular. Check it out here.

I spent some time thinking about the insight this guy offered regarding how we construct statistical error models in complex computer systems in order to simulate reality. As he illustrates, programmers use sensors to feed data into a computer, which then “learns” how the real world behaves. As more data is fed into the program, the accuracy of the simulation increases. Over time, as long as your model is free of error, things begin to stabilize and the simulation becomes increasingly like the real world.
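If you want to see that stabilizing behavior for yourself, here’s a quick Python sketch of the idea. The sensor here is fake, just a random function with a hidden bias standing in for real hardware, and all the names are mine, but the point survives: the running estimate wobbles early on and then settles toward the true value as observations pile up.

    import random

    # Fake "sensor": fires with some true probability that the
    # learner doesn't know in advance. A stand-in for real-world
    # input arriving through actual hardware.
    TRUE_P = 0.3

    def read_sensor():
        return random.random() < TRUE_P

    hits = 0
    for n in range(1, 1_000_001):
        hits += read_sensor()
        if n in (100, 10_000, 1_000_000):
            # Early estimates wobble; with more data they settle
            # near the true value of 0.3.
            print(n, hits / n)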

Here’s something that occurred to me:

Consider a deck of cards. We know that the natural odds of drawing an Ace off the top of the deck are 1 in 13 (four Aces out of 52 cards).
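(For the arithmetic: four Aces in a 52-card deck gives 4/52, which reduces to 1/13, or roughly 7.7%. A two-line Python check, if you’re so inclined:)

    from fractions import Fraction

    print(Fraction(4, 52))         # 1/13
    print(float(Fraction(4, 52)))  # 0.0769..., i.e. about 1 in 13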

Now let’s imagine that we want to create a computer program that simulates this real-world fact. With something as simple as cards, we could easily program a computer to calculate 1 out of 13 and use that as our model. However, for the purpose of this example, let’s say that the only way we can input the data is through some sort of sensor, where the computer can “observe” its real-world counterpart and then attempt to simulate it with the data given. To construct our error model, we as programmers must repeatedly shuffle a 52-card deck, flip over the top card, and input the result into the computer. Odds tell us that we’ll have an Ace 1 out of 13 times, 2 out of 26 times, 3 out of 39 times, and so on. The more times we do this and enter the result into the computer, the more accurate the model becomes. Odds tell us that if we did this 13 million times, we’d get an Ace about 1 million times.
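Here’s that observe-and-learn loop as a Python sketch. The deck, the shuffling, and the tally are all simulated in software here (no actual sensor), and the helper names are my own invention, but it shows the convergence: the observed Ace rate drifts toward 1 in 13.

    import random

    # Build a 52-card deck; we only care about ranks, so 0 stands
    # for Ace and 1..12 for the other twelve ranks, four of each.
    DECK = list(range(13)) * 4

    def draw_top_card():
        random.shuffle(DECK)   # fresh shuffle before every draw
        return DECK[0]

    aces = 0
    trials = 130_000
    for _ in range(trials):
        if draw_top_card() == 0:   # drew an Ace
            aces += 1

    # Expect roughly 10,000 Aces out of 130,000 draws (1 in 13).
    print(aces, "Aces in", trials, "draws:", aces / trials)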

However, as Rivas (the mathematician who emailed me) points out, there are occasions in the real world when highly improbable things happen. If these highly improbable events occur too often, especially early on, the error model breaks down and becomes corrupted, leading to an inaccurate simulation. When this happens, we as programmers need to wipe the slate clean and start over, erasing all the instances of coincidence that the computer has been basing its simulation on.

Back to our card example. We shuffle the deck, pull the top card, record it in the computer, and repeat. Repeat. Repeat. But what if, just by chance, we pull four Aces in a row? Highly improbable (each fresh draw is an independent 1-in-13 shot, so four in a row is about 1 in 28,561), but not impossible. This “integral anomaly” throws our model off and corrupts our results. We therefore start over and reapply the equation to a clean slate. After all, we don’t want to give the computer any wrong ideas.
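Here’s the clean-slate rule bolted onto the same sketch. The four-in-a-row trigger and every name in it are my own framing of the idea, not anything from Rivas or the film: when the improbable run shows up, the tallies get wiped and the equation is reapplied from zero.

    import random

    def drew_ace():
        # Fresh shuffle each draw, so every draw is an independent
        # 1-in-13 shot; four Aces in a row is about 1 in 28,561.
        return random.randrange(13) == 0

    aces = total = streak = 0
    for _ in range(1_000_000):
        hit = drew_ace()
        total += 1
        aces += hit
        streak = streak + 1 if hit else 0
        if streak == 4:
            # The "integral anomaly": wipe the slate, reload.
            aces = total = streak = 0

    print(aces, "Aces in", total, "draws since the last reload")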

Neo is that weird coincidence, that integral anomaly, the four Aces in a row that throws the model off and necessitates a “re-insertion of the prime program”, or a “reapplying of the equation”. A Reload of the Matrix.

I reloaded again the other night with all this in mind. I now wonder if what we’re looking @ here *is* Artificial Intelligence, or simply our relentlessly feeble *attempt at it*. Perhaps, as is the case with our real-world efforts to create A.I., the attempt never quite succeeds due to the persistent integral anomaly (i.e., a factor that cannot be simulated artificially). In the case of our story, that integral anomaly is Choice, manifested in the Neo character. It makes perfect sense to me now, particularly after carefully reviewing the Architect scene again.

Coming down the stretch here. Have fun with it!

“To an artificial mind, all reality is virtual.”
–Matriculated
