Fighting Against Misinformation in AI Tools

AI tools such as ChatGPT, Google Search, Google Gemini, Bing, Copilot and Perplexity are recommendation engines. They are far and away the biggest influencers on the planet, making trillions of recommendations about niche topics to billions of people every day.

How and where these tools present you when someone is researching you, or looking for a solution you can provide, is fundamentally important to you and your business.

In this article, I will address how often these tools get information wrong, why they get it wrong, and how you can fight back (and win).

AI Gets a Lot Wrong

We use these tools because, generally, they get things right. They provide a helpful solution to our problem. But they provide incorrect or misleading information all too often. 

At Kalicube, we have collected over 3 billion datapoints from Google since 2015, and that data indicates that, in 2024, Google provides information about people that is substantively wrong about 6% of the time. For companies, that figure is about 2.5%. The rate at which it gets smaller details wrong is closer to 50%.

Every time AI tools get information about you or your company wrong, they are misrepresenting you to your audience and potentially damaging your reputation. Every time the AI tools don’t know a pertinent piece of information about you that would help your audience, that is a lost opportunity. 

You are faced with a huge problem, however you look at it.

Where Does the Misinformation in AI Tools Come From?

Inaccurate, false and misleading information in AI tools such as ChatGPT, Google Gemini, Bing, Copilot and Perplexity comes from three major sources:

  1. Disinformation (i.e. lies spread by humans with the intent to mislead).
  2. Inaccurate information online (i.e. honest factual mistakes made by humans).
  3. Misinformation created by the machines themselves (i.e. they have simply misunderstood).

They are trying to understand everything about everybody in the world. An impossible task: they cannot possibly get it right all the time. 

However, you can easily help them get it right about you. The majority of the information Big Tech uses for its AI comes from the web, and the web is a huge mess. All you need to do is clean up your corner of the Internet.

How You Can Stay in Control: A Human-Centric Approach

You can avoid the vast majority of issues with AI-spread misinformation about you if you take (and keep) control of your digital footprint. You can teach the AI tools simply by optimising your corner of the Internet.


The Simple System for staying in control 

At Kalicube, we have a system for helping the AI Search and Assistive Engines give our clients preferential treatment in their results. It is simple to implement (anyone can do it) and it consists of three phases: Understandability, Credibility and Deliverability. Successfully completing those three phases will give you control, influence and visibility respectively. 

You can download a free 60-page guide that shows you how to implement The Kalicube Process here >>


To prevent misinformation in the AI and ensure the tools get the facts about you correct (to stay in control), you’ll only need the Understandability phase.

You Can Teach Google, ChatGPT and Other AI with Clarity, Consistency and Repetition

Think of it like this: You’re building a control center for your online identity. You do that by building a solid hub-spoke-wheel structure. 


Step 1: create a central hub on a website you own. This is your Entity Home – a single go-to page for Google, ChatGPT, Bing, and other AI tools where they know they will find your (honest) version of the facts about you or your company. It’s like a command center where you explain to the AI: who you are, what you do, and who you serve.

Fun fact: Google engineers use the term “point of reconciliation” rather than Entity Home. 

Step 2: identify the online resources that make up your wheel. These are all the other places on the web where information about you exists, for example, your social media profiles, articles you’ve been featured in, videos by or about you, or even mentions on other websites. Find them all and ensure that the information is factually correct and consistent with the information you present on your Entity Home hub.
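The audit in step 2 can be sketched as a short script. This is a minimal illustration of the idea, not a tool Kalicube prescribes; the field names and profile data below are illustrative assumptions, and in practice you would gather the facts by reviewing each profile yourself:

```python
# Minimal consistency audit: compare the facts listed on each "spoke"
# profile against the canonical facts on your Entity Home hub.
# All data here is illustrative -- substitute your own facts and URLs.

canonical = {
    "name": "Jane Doe",
    "job_title": "Founder and CEO",
    "company": "Example Ltd",
}

spokes = [
    {"url": "https://linkedin.com/in/janedoe",
     "name": "Jane Doe", "job_title": "Founder and CEO", "company": "Example Ltd"},
    {"url": "https://x.com/janedoe",
     "name": "Jane Doe", "job_title": "CEO", "company": "Example Ltd"},
]

def find_inconsistencies(canonical, spokes):
    """Return (url, field, found, expected) for every fact that differs
    from the Entity Home version."""
    issues = []
    for spoke in spokes:
        for field, expected in canonical.items():
            found = spoke.get(field)
            if found is not None and found != expected:
                issues.append((spoke["url"], field, found, expected))
    return issues

for url, field, found, expected in find_inconsistencies(canonical, spokes):
    print(f"{url}: {field} is '{found}', expected '{expected}'")
```

In this sketch, the second profile would be flagged because its job title ("CEO") does not match the Entity Home version ("Founder and CEO"): exactly the kind of small inconsistency that confuses the machines.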

Step 3: connect the wheel to the hub with spokes. Add a hyperlink from your Entity Home hub to all these other resources, and if possible, link back from them to your Entity Home. This creates a network of information that proves your central message is the truth.
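The hub can also state these facts in a machine-readable form. One common way to do that (my suggestion, not a step the Kalicube Process prescribes by name) is schema.org JSON-LD on the Entity Home page, where the `sameAs` array plays the role of the spokes. The snippet below generates such markup with purely illustrative names and URLs:

```python
import json

def entity_home_jsonld(name, job_title, home_url, same_as):
    """Build schema.org Person markup for an Entity Home page.

    The `sameAs` array points AI and search engines from the hub
    to every corroborating profile (the spokes of the wheel)."""
    return {
        "@context": "https://schema.org",
        "@type": "Person",
        "name": name,
        "jobTitle": job_title,
        "url": home_url,
        "sameAs": same_as,
    }

# Illustrative details only -- substitute your own facts and profiles.
markup = entity_home_jsonld(
    name="Jane Doe",
    job_title="Founder and CEO",
    home_url="https://example.com/about",
    same_as=[
        "https://linkedin.com/in/janedoe",
        "https://x.com/janedoe",
    ],
)

# Embed the output on the page inside a <script type="application/ld+json"> tag.
print(json.dumps(markup, indent=2))
```

The point of the structured markup is the same as the visible hyperlinks: it tells the machines, unambiguously, which profiles around the web refer to the same entity as your Entity Home.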

That’s it! So simple.

Kalicube’s super-simple hub-spoke-wheel model for managing information about you online puts you in control. The AI will understand you, and search and assistive engines such as Google, ChatGPT and Microsoft Bing will get the facts about you correct in their recommendations to the subset of their billions of users who are your audience.

Wrapping All That Up

Misinformation in AI search and assistive technology poses a significant threat to your business and to you personally.

My advice is to take back control, and do it now. 

Create a solid, reliable and controlled digital footprint. Start by creating an Entity Home: a central hub on your website where the AI finds the definitive information about you or your company. Ensure the information about you around the web is factually correct and consistent. Then simply connect your Entity Home hub to that corroborating content with two-way links, forming a cohesive network that reinforces the truth about your brand.

By cleaning up your digital footprint today, you take control of how you’re represented in the AI and digital world, reducing the risk of AI-generated misinformation. Stay ahead of the curve by optimising your corner of the Internet – your business future depends on it!