Comment: How well do LLMs answer SRS questions?

Dan Russell • April 21, 2023
Republished with permission from SearchReSearch

For all their apparent competence,

P/C DALL-E. Prompt: computational oracles answering questions rendered as an expressive oil painting set on a sweeping landscape


... when you get down to asking specific, verifiable questions about people, the LLMs are not doing a great job.

As a friend once said to me about LLMs: "it's all cybernetic mansplaining."

When I asked ChatGPT-4 to "write a 500-word biography of Daniel M. Russell, computer scientist from Google," I got a blurb about me that's about 50% correct. (See below for the annotated version.)

When I tried again, modifying the prompt to include "... Would you please be as accurate as possible and give citations?" the response did not improve. It was different (lots of the "facts" had changed), and there were references to different works, but often the cited works didn't actually support the claims.
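
If you want to rerun this two-prompt comparison yourself, here is a minimal sketch. It assumes the OpenAI Python client and GPT-4 API access; the helper function, variable names, and model string are illustrative, not part of the original experiment.

# Minimal sketch: ask for the bio twice, once plainly and once with the
# accuracy/citation nudge. Assumes openai >= 1.0 and an OPENAI_API_KEY
# set in the environment.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def ask(prompt: str) -> str:
    """Send a single prompt to GPT-4 and return the text of the reply."""
    response = client.chat.completions.create(
        model="gpt-4",
        messages=[{"role": "user", "content": prompt}],
    )
    return response.choices[0].message.content

base_prompt = ("Write a 500-word biography of Daniel M. Russell, "
               "computer scientist from Google.")

# Attempt 1: the plain request.
bio_plain = ask(base_prompt)

# Attempt 2: the same request, plus the accuracy/citation request.
bio_cited = ask(base_prompt +
                " Would you please be as accurate as possible and give citations?")

# The fact-checking step is still manual: each claim and each cited work
# has to be checked against sources you already trust.
print(bio_plain)
print("-" * 40)
print(bio_cited)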

So that's pretty disappointing.

But even worse, when I asked Bard for the same thing, the reply was

"I do not have enough information about that person to help with your request. I am a large language model, and I am able to communicate and generate human-like text in response to a wide range of prompts and questions, but my knowledge about this person is limited. "

That's odd, because when I do a Google search for

[ Daniel M. Russell computer scientist ]

I show up in the first 26 positions. (And no, I didn't do any special SEO on my content.)

But to say "I do not have enough information about that person..." is just wrong.

I tested the "write a 500-word biography" prompt on Bard--it only generates them for REALLY well-known people. Even then, when I asked for a bio of Kara Swisher, the very well-known reporter and podcaster, several of the details were wrong. I asked for a few other short bios of people I know well. Same behavior every single time. Out of the 5 bios I tried, none of them were blunder-free.

Bottom line: Don't trust an LLM to give you accurate information about a person. At this point, it's not just wrong, it's confidently wrong. You have to fact-check every single thing.


Here's what ChatGPT-4 says about me. (Sigh.)

[Annotated ChatGPT-4 biography image from the original post]

Keep searching. Really.



About the Author

Dan Russell

I study the way people search and research. I guess that makes me an anthropologist of search. I am FIA's Future-ist in Residence.
