The AI Misinformation Avalanche

By: Bob Zeidman
Update: 2025-03-13 05:00 GMT



Artificial intelligence is changing the world. Some people fear it will take over the planet and destroy humanity. In this article, the author argues that people’s blind trust in AI and the misinformation avalanche that AI can facilitate are overlooked threats. This article is based on a previous article by the author.1

I. Introduction

Despite what some “experts” are saying, artificial intelligence (AI) will not destroy the world. At least no more than any other powerful technology can create havoc when used with malicious intent. Airplanes bring people and civilizations closer together. Rockets transport communication satellites. Nuclear energy has the potential to solve the world’s energy problems. The laser is used in countless beneficial devices including surgical instruments, disk drives, computer chip manufacturing equipment, cool light shows, and even devices for removing toenail fungus. And yet all of these inventions are also used in powerful weapons systems for potentially destructive purposes.

New technologies have always faced criticism, and AI is no exception, though the concerns about AI have spread unusually fast and wide. The term “artificial intelligence” might lead people to believe that we have created thinking machines. Sci-fi books and movies often depict thinking machines taking over society without human control. While AI serves as a significant tool with potential benefits and drawbacks, these programs are still far from possessing cognitive abilities. If or when they do become conscious, there is no reason to assume they would want to harm us.


II. The Beginnings of Artificial Intelligence

Often, people assert that artificial intelligence (AI) has existed for many decades, which is technically true, but the reality is that AI has evolved significantly since its inception in the 1950s. The earliest AI programs were designed to simulate human interactions. Mathematician Alan Turing introduced a straightforward test, known as the Turing Test2, wherein a person would engage in conversation with both a human and a machine via a messaging system. If the person could not differentiate between the computer and the human, it would mean that true artificial intelligence had been attained. Turing’s test was formulated during an era when computers were primarily utilized for mathematical computations and little else. Computers arguably passed Turing’s test as early as the 1960s; however, few computer scientists now regard this test as definitive.

One of the first widely recognized AI programs, Eliza, was developed by MIT computer science professor Joseph Weizenbaum in 1966 to simulate a psychologist’s interactions with a patient. What many computer scientists at the time did not realize, and still many people do not know, is that Weizenbaum intended Eliza to be a satirical commentary on both the field of artificial intelligence, which had made little progress toward its goal of recreating human intelligence, and Rogerian psychology, where therapists often reflect patients’ statements back as questions. Instead, many computer scientists viewed Eliza as a significant advancement in AI.
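Eliza’s Rogerian reflection trick can be illustrated in a few lines of code. The sketch below is a loose illustration of the technique, not Weizenbaum’s actual program; the pronoun table and response template are invented for this example.

```python
# A minimal sketch of Eliza-style "Rogerian reflection": swap the
# speaker's pronouns and turn the statement back into a question.
# The word table and template here are illustrative only.

PRONOUN_SWAPS = {"i": "you", "my": "your", "am": "are", "me": "you"}

def reflect(statement: str) -> str:
    # Normalize, strip trailing punctuation, and swap pronouns word by word.
    words = statement.lower().strip(".!?").split()
    swapped = [PRONOUN_SWAPS.get(w, w) for w in words]
    return "Why do you say " + " ".join(swapped) + "?"

print(reflect("I am unhappy with my job."))
# → Why do you say you are unhappy with your job?
```

A handful of such pattern substitutions was enough to convince many users that the program understood them, which was precisely the point of Weizenbaum’s satire.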

In 1972, Stanford computer science professor Kenneth Colby developed Parry, a computer simulation presenting a paranoid personality. A paranoid individual was simpler to simulate because any confusion in the conversation could be attributed to its delusional thinking. For instance, a typical response to a question it did not understand might be, “Wait, did you hear something?”

In 1973, renowned computer scientist Vint Cerf facilitated a notable event where Eliza and Parry were connected for an AI therapy session.3 This interaction provided a unique and ultimately humorous demonstration of early AI capabilities, or lack thereof.

III. Expert Systems

After numerous years of attempting to simulate human thought with limited success, computer scientists redirected the focus of artificial intelligence toward expert systems, which were essentially programs designed with a series of if-then-else statements to enable decision-making akin to that of a human being. Experts would be queried about how to solve problems, and those questions and answers would be coded into an expert system. For instance, in developing an AI chef, a human chef would be subjected to extensive questioning, the responses to which would then be converted into a computer program. Questions might include:

“If you are going to bake a cake, would you use flour? Would you use eggs? Would you use milk? If not, would you use water? Which steps would you perform in which sequence? At what temperature would you cook, and for how long?”

The systems did not appear to pose any significant threat to humanity. Moreover, they were of limited utility as they could only respond to the specific queries for which they had been programmed. When new information emerged, such as the development of a new artificial sweetener in the case of an automated chef, the expert system was unable to adapt without undergoing reprogramming.
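The chef interview above can be sketched as a toy rule-based program. The rules, ingredients, and temperatures below are invented for illustration; a real expert system would encode hundreds of such elicited rules.

```python
# A toy "expert system" for the cake example: decisions are hard-coded
# if-then-else rules elicited from a (hypothetical) human chef.

def choose_liquid(has_milk: bool) -> str:
    # Rule from the interview: "Would you use milk? If not, water?"
    if has_milk:
        return "milk"
    else:
        return "water"

def bake_cake(pantry: set) -> list:
    steps = []
    if "flour" in pantry:
        steps.append("mix flour")
    if "eggs" in pantry:
        steps.append("beat eggs")
    steps.append("add " + choose_liquid("milk" in pantry))
    steps.append("bake at 350F for 30 minutes")  # fixed expert rule
    return steps

print(bake_cake({"flour", "eggs"}))
# → ['mix flour', 'beat eggs', 'add water', 'bake at 350F for 30 minutes']
```

Note the limitation described above: the program has no rule for a new artificial sweetener, so handling one requires a human to reprogram it.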




IV. Machine Learning and Generative AI

With the advent of machine learning and very powerful computers, modern generative AI enables searching vast databases of global knowledge to identify patterns4. It is an extremely potent tool, but it still lacks the ability to think or create autonomously. While it will neither dominate nor annihilate the world any time soon, if ever, it poses other problems, including the potential to exacerbate existing issues.

One problem is that generative AI hallucinates, which is the term used when generative AI concocts something and presents it as fact. Lawyer Steven Schwartz used generative AI program ChatGPT to write a declaration for a personal injury lawsuit. Unfortunately, ChatGPT cited fake cases, and the judge threatened to sanction him.5,6 In true irony, Stanford University professor Jeff Hancock, an alleged expert on the dangers of AI and misinformation, produced a sworn expert declaration to defend Minnesota’s law criminalizing AI-generated election misinformation. His declaration turned out to cite a fake court case that was hallucinated by ChatGPT, which he later admitted to using to write the report.7,8

V. The Misinformation Avalanche

Despite having technology literally at our fingertips, many individuals frequently misunderstand or misinterpret information. Critical thinking skills are often lacking, leading people to accept information that aligns with their own perspectives or to seek out opinions from so-called experts who share similar viewpoints. It is generally more convenient to accept these statements rather than scrutinize them further, even though additional research can often be done using a search engine or by changing the television channel.

The issue is further complicated by large technology companies that, sometimes at the request of the government, label some content as “misinformation.” This can inadvertently reinforce rumors, myths, and unverified information. Minority opinions, some of which may be accurate, are often underrepresented or overlooked. AI algorithms frequently provide responses based on prevailing opinions, which may not necessarily be correct. For example, it was once commonly believed that the world was flat, that witches could float, that Albert Einstein failed math as a child, and that COVID-19 originated in a food market.

Also, AI has no understanding; it simply recombines and reinforces “common knowledge” even when that common knowledge is wrong. If you enter the phrase “the world is round” into the Bing search engine, as of the time I am writing this article, it finds 6,140,000 references. Yet if you enter the phrase “the world is flat,” it finds 8,990,000 references, over 2.8 million more, or almost 50% more. This is because very few people write articles about the world being round, but many people write articles about the world being flat.

I call these problems of AI generating wrong information the “misinformation avalanche.” These AI programs collect, generate, and then reinforce misinformation from the Internet, leading to more copies of this misinformation being circulated online. Each subsequent search, whether by AI or human, encounters yet more references to the misinformation, making it seem more accurate and thus increasing the likelihood of it being used by AI and people in the future.
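The feedback loop described above can be illustrated with a toy simulation: each round, new online content echoes existing references in proportion to how common they already are, so an initial lead in misinformation compounds into an ever-wider absolute gap. The starting counts mirror the Bing example; all other numbers are invented for illustration and are not measured data.

```python
# Toy model of the "misinformation avalanche": new references copy the
# current mix of true and false references, so a misinformation lead
# widens in absolute terms each round. Illustrative numbers only.

def avalanche(true_refs: int, false_refs: int, rounds: int, new_per_round: int):
    for _ in range(rounds):
        total = true_refs + false_refs
        false_share = false_refs / total  # fraction of existing refs that are false
        # New content reinforces whatever is already most common online.
        false_refs += round(new_per_round * false_share)
        true_refs += round(new_per_round * (1 - false_share))
    return true_refs, false_refs

# Starting from the flat-earth vs. round-earth counts in the text:
t, f = avalanche(true_refs=6_140_000, false_refs=8_990_000,
                 rounds=5, new_per_round=1_000_000)
print(f - t)  # the false-reference lead grows beyond the initial 2.85 million
```

The point of the sketch is qualitative, not quantitative: because each generation of content is trained or sourced from the last, the misinformation never self-corrects without outside intervention.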

VI. The Solution

A pause on AI development, as suggested by some prominent figures, is not the solution. Halting technological advancements only provides an advantage to those with malicious intentions. Instead, the solution lies in educating individuals on how to think critically, challenge their own beliefs, and distinguish facts from opinions and falsehoods. These are issues that could determine the future of civilization.

Disclaimer – The views expressed in this article are the personal views of the author and are purely informative in nature.

1. Zeidman, Bob. “AI Will Not Destroy Humanity,” The American Spectator, June 30, 2023. https://spectator.org/ai-will-not-destroy-humanity
2. “The Turing Test,” Stanford Encyclopedia of Philosophy, Oct 4, 2021. https://plato.stanford.edu/entries/turing-test
3. Garber, Megan. “When PARRY Met ELIZA: A Ridiculous Chatbot Conversation From 1972” The Atlantic, June 9, 2014.
https://finance.yahoo.com/news/parry-met-eliza-ridiculous-chatbot-165012359.html
4. Stryker, Cole and Scapicchio, Mark. “What is generative AI?” IBM, March 22, 2024.
https://www.ibm.com/think/topics/generative-ai
5. Mata v. Avianca, Inc., No. 1:2022cv01461 - Document 54 (S.D.N.Y. 2023)
6. Bohannon, Molly. “Lawyer Used ChatGPT In Court—And Cited Fake Cases. A Judge Is Considering Sanctions” Forbes magazine, June 8, 2023.
https://www.forbes.com/sites/mollybohannon/2023/06/08/lawyer-used-chatgpt-in-court-and-cited-fake-cases-a-judge-is-considering-sanctions
7. Kohls v. Ellison, 24-cv-3754 (LMP/DLM) (D. Minn. Jan. 10, 2025)
8. Baron, Ethan. “Stanford AI expert’s credibility shattered by fake, AI-created sources: judge,” The Mercury News, December 4, 2024.
https://www.msn.com/en-us/technology/artificial-intelligence/stanford-ai-expert-s-credibility-shattered-by-fake-ai-created-sources-judge/ar-AA1xgTn5


By: Bob Zeidman

Bob Zeidman, President of Zeidman Consulting, is the creator of the field of software forensics, the science of analysing software source code or binary code to determine whether intellectual property infringement or theft occurred. He is the author of The Software IP Detective's Handbook: Measurement, Comparison, and Infringement Detection, the primary text in the field. He has trained over fifty-five experts worldwide in his tools and techniques, which have been used in over 120 cases worldwide. He has been a consultant and testifying expert on nearly 300 cases involving billions of dollars of intellectual property including ConnectU v. Facebook that was turned into the Academy Award winning movie “The Social Network”, Oracle v. Google that went to the U.S. Supreme Court, and Sarine Technologies v. Diyora & Bhanderi, in which his work was approved by the Indian Supreme Court. He is the president and founder of Zeidman Consulting that provides engineering consulting to law firms regarding intellectual property disputes. He is also the president and founder of Software Analysis and Forensic Engineering Corporation, the leading provider of software intellectual property analysis tools. He has 28 issued patents, three awards from the IEEE, two bachelor's degrees, in physics and electrical engineering, from Cornell University and a master’s degree in electrical engineering from Stanford University.
