Lots of articles have been written about generative AI, and this is one of them.
I started scribbling this short post as a bit of a prediction, but I think instead I’ll offer it as a small hope: the increased use of generative AI will lead folks to reevaluate what trust, scholarship, and expertise look like.
Even before Google started providing AI-driven search results, I didn’t fully trust the top 3 entries it returned. Though they were generally good, I’d still check the second and third pages for additional content, just in case. I don’t immediately accept the output of a single search engine query, and I’m certainly not going to trust guidance from disembodied software acting as an agent, especially when it offers potentially life-threatening suggestions about the edibility of wildlife.
Broadening the set of information I consider helps me locate both corroborating and contrarian content I can use to build knowledge. To ask better questions. To refine scope. To improve my research. This is scholarship. An important point here is that the use of AI does not, by itself, provide understanding. Coupled with a bit of rigor and critical thinking, though, it can support the development of understanding.
For a long time, I think people have implicitly trusted technology companies’ products and services, to the point of deference. My evidence here is anecdotal: friends and colleagues who have told me, for example, that they don’t look at Google search results “below the fold”. But the more folks experience the limits of generative AI, the more they will seek additional sources of information. Inevitably, they will identify human experts.
It’s here that a bit of hope (optimism?) sneaks in: we will increasingly seek out local sources we can trust (generative AI being the global foil) and build strong communities founded on expertise.