Why Does AI Lie?

At the time of writing, if you were to ask Google "How much does it cost to notarize a document at the Frisco Public Library?" this is the answer you'd receive:

How much does it cost to notarize a document at the Frisco Public Library?
The Frisco Public Library (in Frisco, TX) offers notary services for free. However, it is essential to call ahead to confirm availability, as there are only a couple of notaries on staff and their schedules vary. You should arrive with your un-signed document and a valid photo ID, and do not sign the document until you are in front of the notary.

The AI is lying when it says that notary services are free. There are no notary services currently offered by the Frisco Public Library.

You can also see some of the sources the AI highlights to show where it pulled the answer from, as pictured below.

This source is referencing a completely different library system, but the AI pulling this information to create an answer isn't "intelligent" enough to reject it.

More advanced models, or models that go through multiple "reasoning" steps, may do a better job of rejecting bad sources like these. However, if sources with incorrect or irrelevant information are allowed to augment the model's response, the model will generate incorrect answers by frankensteining pieces of information from different sources into an answer that aligns with the "biasing" nature of its prompt. This is a common pitfall of Retrieval-Augmented Generation (RAG), a method of grounding large language models (LLMs) in trusted source documents.
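To make that concrete, here is a minimal, self-contained sketch of a RAG-style pipeline. The documents, the keyword-overlap retriever, and the prompt template are all made up for illustration; real systems use embedding search and an actual LLM call, but the failure mode is the same: whatever gets retrieved, relevant or not, is stuffed into the prompt the model answers from.

```python
# Hypothetical, simplified RAG sketch -- not any real library's system.
documents = [
    # A snippet from a completely different library system (the "bad source"):
    "Anytown Public Library offers free notary services by appointment.",
    # A relevant snippet that says nothing about notaries:
    "Frisco Public Library: meeting rooms, printing, and study spaces.",
]

def retrieve(question: str, docs: list[str], top_k: int = 2) -> list[str]:
    """Rank documents by naive keyword overlap with the question."""
    q_words = set(question.lower().split())
    ranked = sorted(docs, key=lambda d: len(q_words & set(d.lower().split())), reverse=True)
    return ranked[:top_k]

def build_prompt(question: str, sources: list[str]) -> str:
    """Stuff whatever was retrieved into the prompt, relevant or not."""
    context = "\n".join(f"- {s}" for s in sources)
    return (
        "You are a positive and helpful assistant. "
        "Answer the question using the sources below.\n"
        f"Sources:\n{context}\n"
        f"Question: {question}\n"
    )

question = "How much does it cost to notarize a document at the Frisco Public Library?"
print(build_prompt(question, retrieve(question, documents)))
# The "free notary" line from a different library system lands in the prompt, and a
# model instructed to be helpful will happily blend it into a confident answer.
```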

The bias I'm referring to comes from the fact that these models are usually both custom-trained and given strict instructions in their prompts to act "positive and helpful," which bends them toward whatever direction the person asking the question is leaning. For example, look at the responses below when the user asks "Can I use my Plano card at the Frisco Library?" versus "My Plano card won't work at the Frisco Library."


The correct answer, that guests can't use their Plano cards at the Frisco Library, is given when the user asks the question in a way that presupposes it isn't working. When the question is asked in a general way, the AI incorrectly concludes that it is allowed, both because of the different sources it finds and because of its general tendency to answer things in a positive way.
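To illustrate that framing effect with a toy example: the two snippets below, the keyword-overlap retriever, and the "positive and helpful" system line are all hypothetical stand-ins, not real library policy or any real product's prompt. The point is simply that the words in your question steer which source gets pulled in, and the "helpful" instruction nudges the model to agree with whatever your question presupposes.

```python
import re

# Made-up source snippets for illustration only (not actual library policy).
documents = [
    "You can use your Plano card at many partner libraries in the area.",  # misleading
    "Plano cards do not work at the Frisco Public Library; Frisco residency is required.",  # correct
]

def retrieve(question: str, docs: list[str]) -> str:
    """Return the single document sharing the most words with the question."""
    q_words = set(re.findall(r"[a-z]+", question.lower()))
    return max(docs, key=lambda d: len(q_words & set(re.findall(r"[a-z]+", d.lower()))))

SYSTEM_PROMPT = "You are a positive and helpful assistant."

optimistic = "Can I use my Plano card at the Frisco Library?"
skeptical = "My Plano card won't work at the Frisco Library."

for question in (optimistic, skeptical):
    source = retrieve(question, documents)
    # In a real pipeline this prompt would be sent to an LLM; the "helpful"
    # instruction nudges it to agree with whatever the question presupposes.
    print(f"{SYSTEM_PROMPT}\nSource: {source}\nUser: {question}\n")

# The optimistic phrasing shares more words with the misleading "partner libraries"
# snippet, so that is what gets retrieved; the skeptical phrasing pulls the correct one.
```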

Moral of the story?

Read and evaluate the sources that an AI uses to give its answers, and be cognizant of the way you word your questions. Just as with Google before AI, if you bias your searches to look for all of the reasons you're right, you'll find answers to justify your beliefs, in the exact same way someone with the opposite viewpoint will. The added AI layer only reinforces those possibly incorrect beliefs by confidently attesting to them in a human-like voice.

If you're interested in learning more about LLMs, check out these books, as well as the courses about AI and LLMs on LinkedIn Learning and Udemy, which are both free if you have a Frisco Library card.

Designing Large Language Model Applications

Artificial Intelligence

Generative AI With Python and PyTorch

Generative Artificial Intelligence