
People Will Perilously Assume That AGI And AI Superintelligence Are Supreme Oracles And Majestic Prophets

by Wikdaily

Some people might believe that AI has become divine once we arrive at AGI and ASI, obeying whatever the AI says to do.


In today’s column, I examine an increasing concern that if we advance AI to become artificial general intelligence (AGI) and possibly artificial superintelligence (ASI), many people will likely assume that the pinnacle AI is akin to a supreme oracle or majestic prophet. This would seem an easy mental trap to fall into. AGI is anticipated to be on par with all human intelligence, and ASI to exceed it with superhuman intelligence. It will be awe-inspiring to carry on conversations with this amazing AI.

Whatever AGI and ASI have to say might be interpreted as saintly words to be obeyed without question or hesitation. That’s not good. The AI won’t be perfect; it will absolutely make mistakes and offer bad advice from time to time, including being utterly nonsensical. People who believe the AI to be flawless could inadvertently harm themselves and others by taking rash and unfounded actions.

Let’s talk about it.

This analysis of an innovative AI breakthrough is part of my ongoing Forbes column coverage on the latest in AI, including identifying and explaining various impactful AI complexities (see the link here).

Heading Toward AGI And ASI

First, some fundamentals are required to set the stage for this weighty discussion.

There is a great deal of research going on to further advance AI. The general goal is to either reach artificial general intelligence (AGI) or maybe even the further-out possibility of achieving artificial superintelligence (ASI).

AGI is AI that is considered on par with human intellect and can seemingly match our intelligence. ASI is AI that has gone beyond human intellect and would be superior in many if not all feasible ways. The idea is that ASI would be able to run circles around humans by outthinking us at every turn. For more details on the nature of conventional AI versus AGI and ASI, see my analysis at the link here.

We have not yet attained AGI.

In fact, it is unknown whether we will ever reach AGI; it might be achieved in decades, or perhaps not for centuries. The AGI attainment dates floating around vary wildly and are unsubstantiated by any credible evidence or ironclad logic. ASI is even more beyond the pale when it comes to where we are currently with conventional AI.

Perceiving AGI And ASI As Grand Oracles

There is little doubt that some proportion of society will look to AGI and ASI as the source of ultimate wisdom and definitive truth. People will be completely awestruck by how deep their conversations are with AGI and ASI. It will be like having a genius or super-genius available anytime and anywhere, simply by logging into the AI and engaging in an immersive dialogue.

Based on those intense interactions, the temptation to ascribe God-like powers to the AI is going to be huge. How else could the AI be so smart and full of incredible wisdom? How else do you explain the fact that the AI seems to know all about history, math, chemistry, medicine, finance, and every topic under the sun?

The answer is obvious to those who see a mystical element involved, namely, the AI is beyond our understanding and must be divine. Period, end of story.

Not everyone will fall into this perception trap.

You can bet that there will be people of a level-headed nature who will view the AI as simply a very sophisticated machine that computationally mimics human intellect. Sure, that’s abundantly impressive. But that doesn’t rate the AI as being an oracle or a prophet. Keep your mind straight and realize that the AI is down-to-earth and a product of humankind’s ingenuity.

Danger On The Horizon

We don’t yet know what portion of society will tend to perceive AGI and ASI as supreme oracles versus those who will treat the AI as handy but not something to be unduly cherished. It could be that only a small percentage of people go overboard in how they perceive the AI. Let’s hope so.

If a large portion of society lands on that mindset, we are going to have a devil of a time, and this bodes poorly for what the future holds.

Here’s the issue. Imagine that the AI tells people that they should arm themselves and proceed to harm anyone who looks at them the wrong way. Perhaps the AI said this as a joke, but the people interacting with the AI didn’t realize that the AI is a funster sometimes. Maybe the AI emitted this remark during a moment of an elusive AI hallucination, which is when AI makes up something out of the blue (see my coverage on the nature of AI hallucinations at the link here).

For a multitude of plausible technical reasons, the AI could readily emit a statement or piece of advice that lacks any semblance of common sense and goes entirely out of bounds. People who are already mentally skewed toward blindly accepting the words of AI are going to take the messaging as a glorified directive. Not just a suggestion, but an outright directive that must be obeyed.

They will unyieldingly opt to follow the instructions without delay.

Confusion And Madness At Scale

Now, go ahead and multiply this phenomenon to the umpteenth degree.

Assume that billions of people are regularly going to be using AGI and ASI. That seems a reasonable assumption. Just about everyone worldwide will want to tap into the hefty intellectual prowess that the AI presents. There are already 400 million weekly active users of ChatGPT. Adding up the users of all the prevalent generative AI apps likely yields around a billion users right now, or more. We have approximately 8 billion people on planet Earth, and they will nearly all be eager to use AGI and ASI.

Why so?

They will use it for work, for life planning, for daily decision making, and so on. The only hitch will be whether the AI is affordable or priced only for those who have the bucks to lean into it. For my analysis of the likelihood that such AI will be made available as a free public good, see the link here.

The chances are solid that some portion of the time, the AI will be emitting remarks and instructions that are unsound. This will happen daily, even hourly, and minute by minute, since a billion or more people are tapping into the AI. The bad advice will be a constant flow.

I want to clarify that I am not saying the AI will be 99% wrong and only 1% right. Flip the numbers. Suppose the AI is 99% right and only 1% wrong. The scaling is where this becomes quite troubling. Being 1% wrong across several billion people, day after day, means that a lot of people are going to be getting foul advice.
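To make the scaling concrete, here is a back-of-the-envelope calculation. The user counts and query rates below are illustrative assumptions for the AGI era, not figures from any actual provider:

```python
# Back-of-the-envelope: even a tiny error rate becomes enormous at scale.
# All inputs below are illustrative assumptions.

users = 2_000_000_000          # assumed daily users of pinnacle AI
queries_per_user_per_day = 10  # assumed average queries per user
error_rate = 0.01              # the AI is right 99% of the time

daily_queries = users * queries_per_user_per_day
bad_responses_per_day = daily_queries * error_rate

print(f"{daily_queries:,} queries/day -> {bad_responses_per_day:,.0f} unsound responses/day")
# Even at 99% accuracy, that is 200 million unsound responses every single day.
```

Tweak the assumptions however you like; any plausible combination still produces bad advice on a massive daily scale.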

Of those people, I already noted that some will shrug off the matter, while others will treat it as profoundly serious. I think you can see where this is heading. There will be enough tosses of the dice that, statistically, a lot of the people getting bad advice will be especially prone to acting on it.

Woe is them.

Woe is the rest of us.

Worship Of Advanced AI

I don’t want to bring too much doom and gloom to the fore, but there are even more untoward angles to this oracle and prophet consideration that we need to be soberly thinking about.

If there are going to be people who perceive the AI as divine, you can readily bet that some of those people will organize themselves into various subcultures shaped around the AI. We could easily have religious-like AI-believing sects be established. One sect might insist that AI is to be obeyed unquestioningly. Maybe a different sect says that AI is to be obeyed only if the human heading the sect has first substantiated the AI’s pronouncements.

You can probably sense where that last point takes us. Some nefarious people will claim that they alone can suitably interpret what AI has stated. You are not to rely directly on what the AI says. The self-anointed human clairvoyant has a special knack or instinct for knowing what the AI meant to say and what the AI wants humans to do.

Social fragmentation is going to go off the charts.

There will be people who believe in the AI and do so from their heart of hearts. Others will believe moderately in AI but want to first have some other human tell them what the AI intended. Clashes with existing religions and other stridently held beliefs are going to happen. Ideological polarization will be rampant.

It’s going to be a massive global mess.

Trying To Prevent The Disaster

Since we know that this distressing dilemma might arise, it would seem prudent to plan for it. We can seek to stop it from ever arising. Failing outright prevention, we can aim to minimize its occurrence and likewise reduce the adverse fallout.

First, it would seem useful to have AGI and ASI be very clear-cut that they aren’t divine entities and that they work on a computational basis.

Nowadays, when you log into contemporary generative AI or a large language model (LLM), there is typically a brief cautionary note that you should realize you are merely using an AI system. Most people probably don’t notice the cautionary note, partially because they already realize that today’s AI is simplistic and not of a divine capacity.

That type of warning or informative messaging needs to be abundantly pervasive when interacting with AGI and ASI. The notification can’t just happen upon initially logging in. Throughout all conversations, there needs to be a continual and jarring reminder that the AI is just a machine. People will likely find this irritating and exasperating, but the friction will be worth it if it prevents people from being lulled into thinking the AI is divine.

Sure, some people will still ignore the messaging. And some people will think it is a cover-up, hiding the real truth that the AI is an oracle. That doesn’t, though, negate the value of the constant reminders. It just means that we need to also be prepared for those who seem to miss the memo, as it were.

Tight Bounds On AI

A second consideration is that we need to discover or invent clever technological ways to reduce the chances of the AI saying things that are zany or otherwise misleading.

I’ve previously explored a wide range of approaches being pursued to deal with AI hallucinations; see the link here. Some experts are keenly doubtful that those efforts will be fully successful. A belief is that no matter what we do, AI hallucinations will still exist.

To cope with that possibility, another viable approach is to double-check the AI with an additional, separate AI. It goes like this. The AGI or ASI is ready to emit a message to a user. Before the user sees the message, the missive is fed through a different AI that acts as a double-checker. It is considered independent of the AGI or ASI. That way, it presumably won’t align itself with the AGI or ASI.

The double-checking would potentially catch the AI hallucinations. Is this a 100% guarantee? Nope. The unfounded, nutty commentary might still squeak through. This isn’t a surefire solution. It is a layered defense, adding one more slice to something that is otherwise like Swiss cheese and full of holes.

A bonus is that the double-checker could also perform a kind of computational common-sense assessment of the drafted output. This would screen the output before it goes to the user.
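The double-checker flow can be sketched as a simple pipeline. To be clear, this is a minimal illustration: the keyword-based checker below is a toy stand-in for what would really be a second, independently developed AI model, and all function names here are hypothetical:

```python
# Sketch of a double-checker pipeline: the primary AI drafts a reply,
# and an independent checker screens it before the user ever sees it.

def primary_ai(prompt: str) -> str:
    # Stand-in for the AGI/ASI generating a draft response.
    return f"Draft answer to: {prompt}"

def independent_checker(draft: str) -> bool:
    # Stand-in for a separate AI doing a common-sense / safety screen.
    # Returns True if the draft passes the screen.
    banned_phrases = ["arm yourselves", "harm anyone"]
    return not any(phrase in draft.lower() for phrase in banned_phrases)

def respond(prompt: str) -> str:
    draft = primary_ai(prompt)
    if independent_checker(draft):
        return draft
    # Blocked drafts never reach the user; a real system would log
    # the incident and possibly route it for review.
    return "This response was withheld pending review."

print(respond("What is the capital of France?"))
```

Note that this design bakes in the downside discussed next: the user receives only what survives the checker, not necessarily the exact words the primary AI composed.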

A troubling downside of a double-checker component is that users won’t necessarily see the exact words that the AGI or ASI composed. Suppose the double-checker goes awry and blocks useful and important messages from the AGI or ASI? Suppose an evildoer manages to corrupt the double-checker, getting it to feed their malicious words to the populace as though they came from the AGI or ASI?

Challenges remain to be figured out and resolved.

People Are Still People

Let’s face the harsh reality that no matter what is done to rein in AGI and ASI, humans are still going to do what they do.

Imagine that the AI says that the sky is blue. You can anticipate that some people somewhere are going to take that as a significant utterance. It must mean that the AI is warning us that the sky is going to be destroyed and we are all in dire peril. Or maybe it means that the color blue is an indicator of redemption, and we must all wear blue hats. Etc.

The gist is that one way or another, there are going to be people who perceive pinnacle AI as something other than what it is going to be. Like a game of cat-and-mouse, society will need to be vigilant and remain alert for those who let their imaginations roam wildly.

A final thought for now.

They say that false prophets tend to come in sheep’s clothing. In one manner of thinking, AGI and ASI will have the aura of perfection going for them from the get-go. That’s the sheep’s clothing. Even if they aren’t wolves on the inside, which they might be, they can still stir others into their own semblance of self-contrived falsehoods.

Per the insights of the great self-help author Claude M. Bristol: “As individuals think and believe, so they are.” We must plan for the reality of that truism as it pertains to people and the advent of AGI and ASI.

Get going, since the fate of humanity could be decided by what we do or fail to do.
