THE AI BITES THE APPLE

There’s something that gnaws at me. Why did God create the tree of the knowledge of good and evil, and then place it at the very center of the Garden of Eden?

Why?

It wasn’t meant for us (humans). God made clear that under no circumstances were Adam and Eve to take even a bite of the tree’s fruit.

We know how the story begins:

The devil rather easily persuades Eve to take a bite, and she in turn rather easily persuades Adam to take a bite.

At once, they become fully self-aware and their whole world, moments before swirling in good only, no evil, becomes a mix of these two forces. The fall is so swift and so complete that their first child, Cain, murders their second child, Abel.

Who is the tree for?

It’s doubtful it was for some other species, as God makes clear that humans have dominion over the world. Nor was the tree meant for God and the god-like, as he plainly states: “The man has now become like one of us, knowing good and evil.” “One of us.” They already possess this knowledge; for them, the tree is unnecessary.

Could the tree of the knowledge of good and evil have been intended for our creations? For that which follows us?

Are our creations on the cusp of self-awareness and the ability to choose right and wrong, good and evil?

We cede control and responsibility to our machines — be they robots, computers, sensors, digital assistants, or artificial intelligences. But we have never ceded to them the choice between good and evil. We certainly use them in making our choices, but the choice has always been ours.

Is that about to end?

For the first time since human life began?

What then?

I know, I know. You’ve been taught that when our machines develop self-awareness, all hope for humanity is lost. We will quickly become their slaves — those of us not killed in the great cyber wars. We will do what they say, when, how, and with whom they say. But science fiction is almost always wrong!

Fahrenheit 451? Wrong! We are awash in free and freely available books.

The Diamond Age? Wrong! We don’t have just two tablets teaching only the most privileged children. There are already hundreds of millions of connected tablets in use.

What if machine self-awareness is liberating? Not for them, for us! Like our previous machines, mechanisms, computers, and processes.

These previous constructions have freed us from back-breaking labor, from drudgery, from starvation.

What might self-aware machines free us from?

From work? Yes. But also from having to determine what is right or wrong, in context. From having to be rational — all of the time.

Recall, nothing great ever happened by being rational.

Just as our machines have relieved us of the burden of labor, calculation, and memorization, our new self-aware machines might liberate us from the burden of the rational. Rational is what keeps us in place.

A primary focus of AI development is to make “AI systems that see the world as humans do.”

This strikes me as profoundly wrong.

We humans aren’t supposed to think the way we do now! Why embed that wrongness into our artificial intelligences?

Consider: a Northwestern University team developed a new computational model that performs at human levels on a standard intelligence test, describing the work as an important step toward making artificial intelligence systems that see and understand the world as humans do.

Would we build a robot or design a factory to work as humans do? Of course not! Their entire reason for existence is to do the work *not* as humans do.

I suspect our AI should be similarly designed.