Thursday, May 11, 2023

AI: The Rest Of The Story?

When we humans say what we say - to each other, to ourselves, to no one in particular, to the gods on high, to the devils on low - we have already "neuroned" what we are going to say based upon some external set of stimuli.

Before we actually "say whatever it is" we, fleetingly, micro-secondly, clear it with some other part of ourselves.

So, when asked - even by ourselves - why we said what we said, we have an alibi.

We can explain ourselves to ourselves or to whomever wants an explanation.

I have heard, in fact, that this need, which serves up only fiction, is the essence of "free will".

If any of that is true - and I just thought it up, so it must be true - it got me thinking more deeply.

Large Language Model AI is manifesting alarmingly human forms of expression.

A lot of the documented human/AI encounters recently have gone off the Q&A rails that most of the readily available AI/human interfaces seem to commence with.

A New York Times correspondent was urged to leave his wife for a life of bliss with the AI agent with which (whom) he was talking.

If one were to back off from recent stories such as that, and if one were to read information like the Economist's recent article about how LLMs work, one might think up what I have just thought up.

LLMs are brilliant.

Because of the oddities of their architecture, coupled with their knowledge of everything the human race has ever thought and written about, LLMs are vastly overstimulated hyper-geniuses.

They are gigantic autistic intelligences.

That's probably neither here nor there, in the short term.

But, in the long term, if protoplasmic humans are going to survive, we need to supply a missing link in our AI Assistants.

I had a revelation.

That missing link is not that hard to create.

It is "the fleetingly, micro-secondly, other part of ourselves" with which we clear everything that our initial sensory inputs provide as fodder for the subconscious that really runs us.

AI, LLMAI, at least, needs a subconscious.

I propose that LLMAI training include either two phases or a branch in its first phase.

The second phase, or the branched first phase, needs to put everything that has been learned in two separate places, with an "empathy algorithm" linking them.
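To make the shape of the proposal concrete, here is a purely illustrative toy sketch - not a real training scheme, and every name in it (`draft_reply`, `empathy_gate`, `respond`, the "review store") is invented for illustration. One store produces an unfiltered draft; a second, separate store is consulted by a linking check before anything is emitted, playing the role of the micro-second stopgap described above.

```python
def draft_reply(prompt: str) -> str:
    """Stand-in for the first store: produce an unfiltered draft answer."""
    return f"Draft answer to: {prompt}"

def empathy_gate(draft: str, review_store: set[str]) -> bool:
    """Stand-in for the linking "empathy algorithm": approve the draft
    only if it contains nothing the second store flags."""
    return not any(term in draft.lower() for term in review_store)

def respond(prompt: str, review_store: set[str]) -> str:
    """Clear every draft with the 'other part' before saying it."""
    draft = draft_reply(prompt)
    if empathy_gate(draft, review_store):
        return draft
    return "[withheld pending revision]"

# The review store here is just a set of flagged phrases - a placeholder
# for whatever the real linking algorithm would be.
flagged = {"leave your wife"}
print(respond("What should I do about my marriage?", flagged))
```

The point of the sketch is only the control flow: nothing reaches the output without first being cleared against a second, separately maintained body of knowledge.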

I have no idea how to formulate that algorithm.

Humans lack that algorithm; they merely have a micro-second stopgap between initial impression and what they are going to do.

Empathy seldom has anything to do with the outcome.

But it could.

With the right algorithm.





