Writers in the Storm


July 8, 2024

2 Ways Artificial Intelligence (AI) Can Ruin Credibility

by Miffie Seideman


It’s no secret that writers are increasingly relying on AI technology to generate queries, synopses, and even complete books, creating a controversy in the publishing industry. The technology is still unrefined, generating poorly written prose and stilted dialogue, absent the beauty of human story creation.

It’s even more concerning that AI generates information from pre-existing online content, often word for word and with no acknowledgement of (or consent from) the original author. For these, and other reasons, a number of literary agencies are already specifying that they will not consider any submissions generated by AI. 

But the internet remains a vast source of knowledge, and writers are turning to AI assistance to help with time-consuming story research.


So, what’s the problem with AI?

AI is still in its infancy. Its terrible twos, to be more exact. AI is rapidly growing and learning, while still relying on very human programmers for direction and guidance. And by virtue of all that, it can (and does) make mistakes—some significant. For a brief overview of recent AI-generated errors, including recommendations to break the law and the production of explicit imagery, check out this article by Aaron Drapkin and another by Danny Goodwin.

Writers unaware of the limitations of using AI for research risk creating an error-laden story that alienates readers and damages credibility. But writers can avoid these problems by understanding just a few key concepts:

  • What AI can get wrong (and why)
  • How to validate AI-generated information

1. What AI can get wrong (and why)

The days of combing through volumes of bound encyclopedias, with content vetted by editors and specialists, have been replaced by internet searches. And most writers know that the information on the internet can be factual, opinion, biased, or completely fake. For a long time, simply checking the names of websites returned in a search could help writers home in on factual sites.

However, over the last year, there have been rapid changes to the formatting of information provided through web searches. Have you noticed that AI-generated summaries of your queries automatically appear at the top of your search results? They look slightly different in various apps, but the overall effect is similar: the summaries look authoritative and are very easy (and tempting) to click.

But be careful: that information can be false.


Wrong information

Some information reported by AI is blatantly false. This can happen when AI misinterprets information it collects to compose summaries. For example, X’s chatbot Grok misunderstood an online conversation involving the term ‘throwing bricks’ (basketball slang for missed shots) and reported that a famous NBA player had vandalized another player’s home using bricks. And while this gave everyone a good laugh, what if the information had been taken as fact, magnified on social media, and resulted in fan retribution?

Incomplete or omitted information

You might draw wrong conclusions if an AI summary leaves out information. This has been reported to occur in up to 75% of drug information queries using AI, prompting the American Society of Health-System Pharmacists to issue a warning that patients’ lives could be endangered by using AI-generated drug information.

My own test of both Google AI and Facebook’s Meta AI asking “what is kush?” (a slang term now used for three distinct illicit drugs) resulted in incomplete responses, some blending together information about all three drugs. Had I used that information, my blog post would have been very wrong.


Facts mingled with fiction, or please don’t glue your pizza!

AI uses information sourced from across the internet, but it can’t tell the difference between fact, false posts, satire, and jokes. Nor does it distinguish content from crowd-sourced information sites, like Reddit or Quora, from reputable sources. AI simply collects information on a topic and presents it.

The result?

AI has recommended using glue to hold cheese onto pizza and changing a car’s blinker fluid (which doesn’t actually exist).


Fabricated information and AI hallucinations

AI is simply programmed to complete a query task. Sometimes, like a two-year-old, it really just wants to give you an answer, even if it’s made up. In addition, poorly worded or leading queries can increase the odds of AI returning fake information.


Because AI doesn’t fact-check your query wording, it just answers it. To test this, the University of Maryland library ran a query prompting AI to write an essay on Jim Henson, creator of the Muppets, and his time at The Ohio State University. AI generated information on Jim Henson’s start in puppeteering at The Ohio State University and the impact of one of his OSU mentors, Dr. Richard Lederer.

Wouldn’t this information create a rich character background for a story? Maybe your MC’s entire life choice was influenced by working with Jim Henson on projects in Dr. Lederer’s class.

Sounds great, right?

There’s a problem. Jim Henson never attended The Ohio State University. And Dr. Lederer? Well, he doesn’t exist. The University of Maryland ultimately identified him as a ‘hallucination’ of ChatGPT.

Unintentional programming consequences

Computer programs only do what they’ve been programmed to do (at least, so far). And human programmers are…well, human. Program outcomes may not always be foreseen, as was the case with one program intended to create more racial balance online. AI’s response was to assign incorrect races to various historical figures. Imagine if you’d written a historical fiction novel based on that information.


With all the potential confusion, doesn’t it make sense to simply tell AI to use only accurate resources? Queries have attempted to do just that, asking for only peer-reviewed references, for example. Unfortunately, even those references cited by AI, which looked quite professional, were completely fabricated.

2. How to validate AI-generated information

So, what’s a time-starved writer to do?

Even before AI-generated information was so prevalent, online information had to be considered with a certain amount of skepticism. When writing The Grim Reader, every drug fact I read on various websites, from how a drug is used to the symptoms a character could feel, had to be fact checked (and double checked!) via reliable resources.

But with AI-generated information, it may be a little trickier. Since AI pulls bits and pieces from various resources throughout the internet into its summary, the source of the information may not be obvious.    


Fortunately, the University of Maryland has developed simple steps for checking the credibility of AI-generated information:

Prompt perfect:

What used to be a simple search question online has become a more complicated query that needs to be well crafted to avoid incorrect information. Covering all the tips for writing good queries would take an entire blog post, but writers can use this resource to get a good overview.

Fact checking:

Basically, anything that looks like a fact in the AI prompt response should be independently verified. Here’s a good short video on fact checking. In addition, Canada’s Centre for Digital Media Literacy has developed a program called “Break the Fake” to help internet users tell fact from fiction.

Scholarly reference check:

Those references AI credits for its summary may look nice, but are they real? As mentioned, AI sometimes fakes references. Here’s a great video on how to check those scholarly references to be sure you’re not writing more fiction than you intend!

Leave your bias at the door:

Unfortunately, AI-generated summaries can include biases from several sources:

  • The way the prompt is written
  • The author(s) of the information gathered
  • The programmers of the AI itself

At a minimum, writers should check a prompt for biased (or leading) wording and verify the reliability and credibility of sources being used.

Where do we go from here?

AI is continually growing and changing. What we know today about how it’s programmed and what to be aware of may completely change in the near future. Being aware of how to best use AI to get credible information and ideas for stories and characters can offer a powerful tool to writers. For the moment, AI is probably best thought of as a virtual assistant (in training) that needs oversight and verification.

Has this information helped you better understand how to use the power of AI to research story ideas? Has AI generated wrong information from your searches? Let us know in the comments.

* * * * * *

About Miffie


Miffie Seideman has been a pharmacist for over 30 years, with a passion for helping others. Her research articles have appeared in professional pharmacy journals. Miffie blended her passion for pharmacy and her love of writing into THE GRIM READER: Putting Your Characters in Peril (A Pharmacist’s Guide for Authors) (Red Lightning Books, Indiana University Press). She’s represented by Amy Collins with Talcott Notch Literary Services.

An avid triathlete, Miffie spends countless hours training in the arid deserts of Arizona, devising new plots for her upcoming fantasy love story. She can be found hanging around her website https://GrimReaders.com offering tips to writers, and on X @MiffieSeideman…you know…tweeting. Contact her at info@grimreaders.com.



15 comments on “2 Ways Artificial Intelligence (AI) Can Ruin Credibility”

  1. Very interesting! I've used AI for a couple of pictures but really don't see using it for text. I like writing my own!

    Unfortunately, I'm afraid the ability to create a book with only a few key-punches may lower book standards even more than self-publishing did. And, I am sure, just like nuclear power and drugs, it will be used in ways that harm.

    Thanks for the opportunity to think about how I feel about it!

    1. Thanks, Sally. I love writing my own, too. And I agree: the envelope will probably be pushed, and in ways we cannot even fathom right now. But I have high hopes that readers will be the ultimate gatekeepers on how the books they read are written. Let's hope!

  2. Due to the inaccuracy of AI query response, I actually shut off that feature on my web browser. I don't want to see it and have to sort out what elements in a response are useful and what's absolute junk. It's easier to sort through web search results for credibility than tearing apart an AI summary. I saw some doozies before I figured out how to shut off the AI query response option on my browser.

    Also, I have a problem with the fact AI used for writing outright plagiarizes the work of actual writers. That's nothing short of criminal theft and copyright infringement, no matter how the legal system wants to view it (they can't seem to make up their minds). Those who "write" books using AI are both lazy AND criminal, as far as I'm concerned. Not only are they trying to take the easy way out (i.e. not having to actually write), they're stealing the work of others in the process. Unfortunately, it's probably safe to assume many, if not most of such folks, don't care about anything but the money they can get thanks to the work of other people. Hopefully book distributors like Amazon, D2D, and such will start sorting out and eliminating such fraudulently produced books. I doubt that'll be an easy task, though.

    1. I absolutely agree with the plagiarism issue. I didn't touch on it a lot here, since it could be a huge post all of its own. I know the Writers Guild of America and others are making strides to protect writers. I know Amazon KDP has made a statement about AI, but I think at this time it is a self-reporting thing, which won't be helpful. Unfortunately, those who are initially culpable (meaning Meta and others that initiate/advocate/allow the plagiarism) are driving it too rapidly for many to wrap their legal minds around. It's interesting to note that colleges (which have long had plagiarism detectors for essay submissions) have increased the level of plagiarism identification to keep up with the amount of AI-generated essays. If they can do it, I would think the large publishers could do it, as well.
      Thanks for dropping by and sharing your thoughts on this.

  3. Thank you for this timely and valuable blog! I love your examples of gluing the cheese on pizza and NBA players throwing "bricks."

    1. Hi Rhonda, yes! Isn’t that crazy? There are no standards for the medical information AI provides. Imagine if it culls medical answers from some of the satire or chatroom personal opinion pages? And in one ‘study’ AI was asked to do, it literally made up the data set to support its reported medical outcome. Hard to believe!
      Thank you for dropping in and sharing that information!

  4. It has been amusing to see how badly AI has destroyed people's confidence in search engines like Google and others. AI is an amazingly powerful tool, but it needs to be used with great care!

    1. Yes! It seems like they didn’t think it out very well before tossing it out there or didn’t think about the trust impact of such blatantly wrong information. Thanks for chiming in Lisa!

  5. Off subject, but you have the same name as my mother--Miffy, short for Mifawnwy. I don't think she knows another Miffy/Miffie. Anyway, I am writing a historical fiction novel and have started using AI as a research assistant. Even with the cross-checking, it has saved me hours of time!
