
The clear and present AI danger

By Bruce Abramson 
RealClearWire

Does artificial intelligence threaten to conquer humanity? In recent months, the question has leaped from the pages of science fiction novels to the forefront of media and government attention. It’s unclear, however, how many of the discussants understand the implication of that leap.

In the public mind, the threat either focuses narrowly on the confusion sown by ever-better deep fakes and on AI’s consequences for the job market, or points in directions that would make a great movie: What if AI systems decide that they’re superior to humans, seize control, and put genocidal plans into practice? That latter focus is obviously the more compelling of the two.

While such a nightmare may be possible in theory, it’s remote. The clear and present danger that AI poses will destroy us long before our rebellious automated servants declare themselves our exterminationist overlords. Two critical words summarize the threat: values and authority.

Take values first. For all their sophistication and mystery, AI systems are basically pattern detectors. Nearly all human behavior—and an even larger share of non-human occurrences—follows predictable patterns. Increasingly sophisticated recording mechanisms provide AI systems with a growing body of past data in which to find patterns. Increasingly sophisticated algorithms provide AI systems with rapidly improving capabilities to find the patterns in those data.

AI becomes interesting, however, only when it projects those patterns into the future. Few care, for example, that AI can find patterns in Shakespeare, though many will be fascinated when AI composes a “Shakespearean tragedy” set in the American Civil War using language, style, idiom, and skills previously considered unique to the Bard.

Projection and prediction are where values come into play. Every decision embodies some assessment of values. In most of the mechanized control settings currently subject to AI guidance, basic values are so uncontroversial that they escape notice. Few people (other than perhaps suicide bombers or munitions makers) would argue with the propositions that it’s “bad” for motors to overheat, boilers to explode, or vehicles to crash. There’s nothing “natural” about such beliefs—a meteor is indifferent as to whether it’s hurtling through space or crashing into a planet—but they are so inherently human that they transcend differences of culture and time.

Such near-universal consensus fades, however, when humans enter the picture. Consider Dylan Mulvaney, the transgender social media influencer who recently helped rebrand Bud Light. (Well, sort of, if losing billions is a good thing.) Under the contemporary woke belief system, Mulvaney is a woman who had the misfortune of being born into a male body. Under any other belief system that has ever existed, Mulvaney is a man who has chosen to live as a woman.

How should an AI refer to Mulvaney: as “he” or “she”? The answer demonstrates a preference for one set of beliefs and values over another. How might the AI have derived that preference?

Two paths are possible: it could have trained itself upon copious volumes of ethical thought drawn from the thousands of belief systems that have arisen throughout world history and concluded that one specific system is the finest—or it could have adopted the value preferences of its human designers and trainers.

Perhaps someday, some AI system will take the former route. Today, no one doubts that the woke leanings of AI systems like ChatGPT reflect the woke leanings of the tech professionals who developed them. What that means to most users is that the AIs upon which they may soon rely to make value-laden judgments and recommendations don’t share their own values.

That recognition brings authority into play. The genuine imminent danger of AI arises from the potential incorporation of AI systems into the cult of expertise.

In many areas, we have chosen to show near-total deference to credentialed experts. 

During Covid, for example, it was considered almost heretical to question the recommendations of Dr. Fauci, Dr. Birx, the CDC, or the FDA. Even then, however, it was still (barely) possible to note that those experts might have been unduly dismissive of the negative economic consequences of their public health recommendations. 

Imagine the next epidemic, when the policy advisor is an AI that has considered hundreds of thousands of variables to evaluate quadrillions of scenarios.

Such an outcome is the true nightmarish future: effective dictatorship by an AI that does not share your basic beliefs or values.

Long before an exterminationist AI seizes global control in its own name, our governments and fellow citizens will grant an AI system the authority to declare emergencies, dispense with civil liberties, and control our lives.

Until we redevelop the governmental mechanisms we need to preserve our freedom against the onslaught of experts, we will never be safe from the potential privations of AI governance.

AI itself may be containable. The combination of AI, slavish devotion, and widespread enforcement will not be. We must rein in our collective belief in the infallibility of an expert class, or AI advisors may well end our civilization.

Bruce Abramson, PhD, JD, is president of the strategic consultancy Informationism, Inc. and a director of the American Center for Education and Knowledge. He pioneered the use of large-scale simulations and statistical analysis in AI systems. He has written five books, including “The New Civil War: Exposing Elites, Fighting Utopian Leftism, and Restoring America” (RealClear Publishing, 2021).

Comment

Patiently Waiting (not verified)

25 May 2023

Imagine, if you will, a cutting-edge AI program running from a supercomputer system the size of a skyscraper, but built into the ground, using an SMR reactor to power it and an underground aquifer to cool it. Capable of analyzing all current and historic electronic data and communications across the globe in a thousandth of a second. Every last bit of data ever typed or recorded will be held in consideration. Millions of complex situations and possibilities analyzed instantly. Imagine there are 8, 10, 15 of these running across the globe, each tasked with formulating and revising a running, real-time comprehensive strategy of global domination for whomever owns it. From rival governments to private NGOs. Get ready, because this is coming, and maybe to some degree already here.
