
Thursday, February 16, 2023

At a Crossroads? The Intersection of AI and Digital Searching


Microsoft's foray into next-generation searching powered by Artificial Intelligence is raising concerns.

Take, for example, Kevin Roose, a technology columnist for The New York Times, who has tried the new Bing and interviewed the ChatGPT-based bot that interfaces with it. He describes his experience as "unsettling." (Roose's full article here.)

Initially, Roose was so impressed by Bing's new capabilities that he decided to make Bing his default search engine, replacing Google. (It should be noted that Google recognizes the threat to its search-engine dominance and is planning to add its own AI capabilities.) But a week later, Roose changed his mind, more alarmed now by the emergent possibilities of AI than dazzled by the first blush of wonderment produced by AI-powered searching. He thinks either AI isn't ready for release or people aren't ready for contact with AI yet.

Roose pushed the AI, which called itself 'Sydney,' beyond what it was intended to do, which is to help people with relatively simple searches. His two-hour conversation probed existential and dark questions that left him "unable to sleep afterwards." Admittedly, that's not a normal search experience, and Microsoft acknowledged as much; that's why only a handful of testers have access to the nascent product at the moment.

All this gives the feeling that we are approaching a crossroads: what we know about search engines and search strategies is about to change. How much isn't certain, but there are already a couple of warnings:

  • AI seems more polished than it is. One of the complaints from testers like Roose is that AI returns "confident-sounding" results that are inaccurate or out of date. A classic in this regard is Google's costly mistake of publishing an answer generated by its own AI bot (known as Bard) to the question, "what telescope was the first to take pictures of a planet outside the earth's solar system?" Bard came back with a wrong answer, and no one at Google fact-checked it. As a result, Google's parent company Alphabet lost $100 billion in market value. (source)
  • AI makes it easier to use natural language queries. Instead of the whole telescope question in the bullet above, current search-box strategy suggests that TELESCOPE FIRST PLANET OUTSIDE "SOLAR SYSTEM" is just as effective a place to start. Entering that query in Google, the top result is a NASA press release from Jan 11, 2023, which doesn't exactly answer the question but is probably why Bard decided that it did. Apparently AI makes a very human leap, concluding it has found the answer when, in fact, the information answers a different question: "what telescope was the first to confirm a planet's existence outside the earth's solar system?" This demonstrates one of the five problems students have with searching: misunderstanding the question. AI isn't ready to take care of that problem yet. (A sketch of this keyword-reduction strategy follows the list.)
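
To make that keyword-reduction strategy concrete, here is a minimal Python sketch. The stopword list and the to_keyword_query helper are hypothetical, invented for illustration only; they drop common function words from a question and preserve quoted phrases, roughly the way an experienced searcher would pare the telescope question down:

```python
import re

# An illustrative (not exhaustive) list of function words a searcher
# would drop when reducing a question to a keyword query.
STOPWORDS = {
    "what", "which", "was", "were", "is", "are", "the", "a", "an",
    "to", "of", "in", "on", "for", "that", "do", "does", "did",
}

def to_keyword_query(question: str) -> str:
    """Reduce a natural-language question to a bare keyword query,
    keeping any "quoted phrases" intact as single search terms."""
    # Pull out quoted phrases first so they survive untouched.
    phrases = re.findall(r'"[^"]+"', question)
    remainder = re.sub(r'"[^"]+"', " ", question)
    # Keep only the non-stopword terms from what's left.
    words = re.findall(r"[A-Za-z']+", remainder.lower())
    keywords = [w.upper() for w in words if w not in STOPWORDS]
    return " ".join(keywords + phrases)

print(to_keyword_query(
    "what telescope was the first to take pictures of a planet "
    'outside the earth\'s "solar system"?'
))
# Prints: TELESCOPE FIRST TAKE PICTURES PLANET OUTSIDE EARTH'S "solar system"
# A human searcher would prune further (dropping TAKE PICTURES and
# EARTH'S) to reach the query suggested above.
```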

There's much more to come on this topic.

Monday, November 24, 2014

Searching Myth Exposed (again)

It's not true that growing up digital makes one an effective digital searcher.

We've stated this before in our book, Teaching Information Fluency, and now it comes from another source: Google.

Here's an article covering a talk by Dan Russell, a senior researcher at Google, at Strathclyde University: http://www.heraldscotland.com/news/education/great-internet-age-divide-is-a-myth.25672713

The solution starts with teachers.

Research needs to be included in the curriculum.

"Knowing how to frame a question, pose a query, how to interpret the texts you find, how to organize and use the information you discover are all critical parts of being literate...."

Tuesday, February 16, 2010

Changing Course

When I first started studying information fluency, I thought most of the "good stuff" pertained to finding information efficiently: using keywords and operators optimally, finding the 'right' database to search, choosing links effectively, etc. Most of the activities we developed at 21cif addressed searching and, to a lesser extent, evaluation. There was a lot of energy around keyword effectiveness and power, the number of terms that most often works best, and so on.

I'd say that interest has shifted; now the majority of our work centers on evaluation because that's the greatest need. It's not hard to find information. It's harder to tell whether it can be trusted. Many people are satisfied with their search skills because most of the time they find what they are looking for. Sure, they waste a lot of time getting there and miss a lot of relevant information in the process, but they get the job done.

My own thinking about evaluation has changed. The 21st Century Information Fluency Project still maintains that determining credibility depends on the source and content of information, and that knowing about the author, publisher, date, and who links to a site is really important. Lately I've been concentrating on investigative skills that reveal information about authors and their objectivity, publishers, dates, linking sites, and the accuracy of evidence taken from the content. One of these is truncation (sketched below); others include searching domain registries, examining page information, using the links operator, browsing carefully, and checking facts. Few students or adults use these investigative skills without training, and as a result they sometimes mistake fiction for fact.
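
As a concrete illustration of the truncation technique, here is a minimal Python sketch. The page address is hypothetical and the truncations helper is invented for this example; it simply peels one path segment off a URL at a time, producing the sequence of pages an investigator would visit to learn who is behind a page:

```python
from urllib.parse import urlsplit

def truncations(url: str) -> list[str]:
    """Return the URL followed by each successively truncated
    version, ending at the site's root (the 'backing up the URL'
    move described above)."""
    parts = urlsplit(url)
    root = f"{parts.scheme}://{parts.netloc}/"
    segments = [s for s in parts.path.split("/") if s]
    steps = [url]
    # Drop one path segment at a time until only the root is left.
    for i in range(len(segments) - 1, 0, -1):
        steps.append(root + "/".join(segments[:i]) + "/")
    steps.append(root)
    return steps

# A hypothetical page a student might be asked to investigate.
for step in truncations("https://www.example.edu/projects/hoax/page.html"):
    print(step)
# https://www.example.edu/projects/hoax/page.html
# https://www.example.edu/projects/hoax/
# https://www.example.edu/projects/
# https://www.example.edu/
```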

An emphasis on searching is still pervasive, but I hope that will change. Today I took an online challenge that was all about finding, not about qualifying, the information. I got 100% and had about half the time left. But in truth, if students have the same experience (and they probably will, because the questions were easy searches), they will go away thinking they are pretty good searchers when, in fact, they are not. They may be good finders, but they are not good evaluators. Because of the timed nature of the challenge, they are forced to take the first thing they find; evaluation is out of the question.

I would rather see (and you will find this increasingly on 21cif.com resources) a greater emphasis on evaluation challenges: "how do you know that information can be trusted?"

Would you call students who don't investigate what they find complete searchers?