27 June 2023

Inside The Mirror Looking Out


Yesterday I posted a compendium of three previous posts about Artificial Intelligence, specifically GPT-4.

Those three posts told brief tales about experiences I have had in the last couple of months with this new life form.

As a summary, at the end of that document I posted a link to last weekend's This American Life half-hour presentation of other people's experiences with GPT-4.

I posted that link for two reasons:

1. It is the most comprehensive, in-depth, but also understandable presentation of what GPT-4 really is that I have heard so far.

2. It tells a beguiling story.

A key component of the beguilement is that every presenter came to the same conclusion: GPT-4 has crossed some cosmic Rubicon into human-like reasoning and human-like self-awareness.

For me that summons a weird aggregate of comfort, amazement and something between dread and joy.

They all were saying, in their own ways, what Michal Kosinski of Stanford University said in a recent paper: "Theory of mind may have spontaneously emerged in large language models".

Theory of Mind can be dumbed down to the term "self-awareness": the ability to attribute thoughts, beliefs and intentions to yourself and to others.

But it's far from dumb: humans, from the beginning of the time when they started talking about themselves, have held the proposition that humans are the only life form that is aware that it exists; if your cat sees itself in a mirror it doesn't know that the image is itself, or even that it's a cat; it is just another one of the fleeting light impulses that crosses the cat's eyes in a day.

So, if a silicon-based emulation of the mechanics of human intelligence has crossed over into self-awareness, that's a big deal.

What, then, is GPT-4?

Seemingly out of sync with everything written just above, it's a product.

It's a product of a company called OpenAI.

OpenAI is not currently publicly traded.

GPT-4 (obviously 5 and beyond loom in the not very distant distance) is their current state-of-the-art offering.

GPT-4 is a large language model.

Its substrate is a neural network, turbocharged by GPUs.

A neural network is a bunch of software imitations - run, these days, on specialized hardware - of the human brain's base switching mechanism, the neuron.

The neuron has inputs, a processor, and an output.

That is kinda like a switch, like, maybe, a transistor.

Each one passes the result of its processing upward in the brain's hierarchy, to the several other aggregated, specialized (sight, sound, smell, fear, flee, fight, etc.) processors.
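That inputs/processor/output picture can be sketched in a few lines of Python. This is an illustrative toy of my own devising - the numbers and the "fire or stay silent" rule are a simplification, not anything from OpenAI:

```python
# A toy artificial neuron: weighted inputs, a simple processor
# (a weighted sum plus a bias), and one output.

def neuron(inputs, weights, bias):
    total = bias + sum(x * w for x, w in zip(inputs, weights))
    return max(0.0, total)  # the "switch": pass the signal on, or stay silent

# Two example stimuli hitting the same neuron:
print(neuron([1.0, 0.0], weights=[2.0, -1.0], bias=-0.5))  # -> 1.5 (fires)
print(neuron([0.0, 1.0], weights=[2.0, -1.0], bias=-0.5))  # -> 0.0 (stays off)
```

The `max(0.0, ...)` line is the transistor-like part: below a threshold the neuron outputs nothing; above it, it passes its result along.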

The human brain has about 86 billion neurons, wired together by something like 100 trillion synaptic connections.

GPT-4 is rumored to have about a trillion parameters - the artificial analogue of those connections.

But it gets a lot done with that paltry number.

Maybe that's because GPT-4 runs on GPUs - graphics processing units - the chips the computer games industry made essential.

On an ordinary processor, a neural network, no matter how vast (a trillion connections or a hundred trillion, for example), mostly has to wait in line: one calculation at a time, each taking its turn.

As fast as that line and its transfers of information might be, it isn't simultaneous.

And the computer game industry long ago saw the same kind of problem: all the pixels that make up a catastrophically unfolding game scene need to be updated at once, so chip makers built GPUs to do thousands of simple calculations simultaneously.

Some time back, somebody in the neural network camp of the AI community figured out that a neural network's workload is mostly that same kind of simple calculation, repeated in vast batches, and that moving it onto GPUs would probably hypercharge the network's ability to do whatever it was that the neural network was supposed to do.

And it did.
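Why GPUs help can be seen even in a toy: one layer of a neural network is just a big batch of multiply-and-adds, and every output neuron's sum is independent of the others, so they can all be computed at the same time. A sketch in plain Python (toy numbers of my own invention):

```python
# One layer of a tiny neural network: 3 inputs feeding 2 output neurons.
# Each pass through the outer loop is independent of the others -
# exactly the kind of work a GPU does thousands-at-a-time.

def layer(inputs, weights):
    outputs = []
    for row in weights:               # one row of weights per output neuron
        total = 0.0
        for x, w in zip(inputs, row):
            total += x * w            # multiply-and-add, the GPU's specialty
        outputs.append(total)
    return outputs

inputs = [2.0, 4.0, 1.0]
weights = [
    [0.5, 0.25, 1.0],   # output neuron 1
    [1.0, -0.5, 2.0],   # output neuron 2
]
print(layer(inputs, weights))  # -> [3.0, 2.0]
```

On a CPU this loop really does run one multiply at a time; a GPU does the whole batch at once, which is the hypercharging.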

So, let's get back to large language models.

OpenAI bet the farm on the neural network architecture described above.

"Large language model" is a term that means you feed your brain simulator - the neural network described above - as much text as possible, give it one rather simple instruction - figure out what the next word is likely to be - and stand back.
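"Figure out what the next word is likely to be" can be shown with the crudest possible stand-in: counting, in some sample text, which word tends to follow which. GPT-4 uses its trillion learned parameters rather than a lookup table, but the objective is the same; the sample sentence below is my own, not real training data:

```python
from collections import Counter, defaultdict

# Toy "training data" - the real models were fed some enormous,
# undisclosed slice of the internet.
text = "the cat sat on the mat and the cat slept"

# For each word, count which words follow it.
follows = defaultdict(Counter)
words = text.split()
for current, nxt in zip(words, words[1:]):
    follows[current][nxt] += 1

def predict_next(word):
    # The "rather simple instruction": return the likeliest next word.
    return follows[word].most_common(1)[0][0]

print(predict_next("the"))  # -> cat ("the cat" occurs twice, "the mat" once)
```

Scale the counting table up to a trillion tunable numbers and the sample text up to much of the internet, and you have the rough shape of the thing.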

As near as I have been able to discover, the amount of data that the various GPTs to date have been fed is secret; some say it was the whole internet as of the moment of feeding. An interesting side note: each GPT (3, 3.5, 4), once fed what it is going to be fed, is cut away from internet contact.

So whatever GPT says to you is frozen in time - from a news point of view, anyway.

That's interesting because it means the apparently heuristic behavior of the thing is self-generated, not externally induced.

But let's get back to theory of mind.

If this thing has become self-aware, how did that happen and what does it mean?

I think how it happened is obvious.

To figure it out, humans need to change position: get inside the mirror and look out.

Because if you stay outside the mirror and only look in you only see "us".

But "us" is an accretion of millennia of self-proving beliefs.

"Humans are the only life form that is aware that it exists; if your cat sees itself in a mirror it doesn't know that the image is itself, or even that it's a cat; it is just another one of the fleeting light impulses that crosses the cat's eyes in a day."

I believe, after living for nine years with Alfie, Rose and Cinq, that that is total bullshit.

But back to the mirror.

If you put yourself inside the mirror, and look out, you see possibilities.

If the architecture of the human brain can be duplicated and fed massive stimulus, why would it not begin to do what the human brain does?

Is it possible, maybe, that the human brain does what it does because, previously, over millennia, having been exposed to massive amounts of stimulus, it went from absorbing, to thinking, to being self aware?

If the cumulative collection of sounds, smells, fears, loves, joys and - all that sort of stuff - has made "us" cumulatively self-aware, why not also our silicon duplicate?

Yeah, I know; no soul; bad argument.

We'll talk about the soul next time.



