Event Recap: The Future of Search in the Age of AI

May 17, 2024 · News

How do search engines influence the information landscape — and what role could AI play going forward? We convened experts from academia, journalism, and industry to discuss these questions and more.


For all the public discussion about the impact of social media on people’s information diets and belief formation, there has been comparatively little focus on search engines, even though Google is the most visited website in the world. For the last 20 years, search engines have been many people’s entry point to the web. They index information at an unimaginable scale and make it accessible at unimaginable speed.

But there are also tensions, contradictions, and challenges. How can search engines feed people reliable information? How can they sustain the financial models of publishers to support a robust web? How can they innovate and adapt to competitive pressures, such as generative AI, while remaining a stable part of our digital infrastructure?

Last week, we were thrilled to bring together three experts — Google’s Zoe Darmé, The Verge’s Mia Sato, and UNC CITAP’s Francesca Tripodi — to discuss these questions and more. The following is a brief recap of the event, which was moderated by CSMaP’s Zeve Sanderson.

The Evolution of Google Search

To kick us off, Darmé, a Senior Manager of Knowledge and Information on Google’s Trust team, discussed the evolution of Google Search over the years. What started as a list of 10 blue links gradually grew to include more features — such as Maps, Translate, and the Knowledge Graph — and many of the changes were made in response to how people used the product.

“Search evolved in large part due to the changes in how people came to approach their search journey,” Darmé said. “But there is a critical part of search that has very much stayed the same, and will stay the same, and that’s the investment in search quality.”

Darmé explained the fundamentals of how search engines actually work. First, she detailed how Google uses advances in machine learning to better understand language and be more responsive to user queries. Then, she discussed how the company assesses page quality, emphasizing how its systems shift to favor higher quality content when dealing with certain sensitive topics, such as health and safety.

How Google Changed the Web

Next, Sato, a Platforms & Communities Reporter at The Verge, discussed her reporting on how creators have optimized their content and websites around Google’s search algorithms. 

To illustrate how this works, Sato created a fictional website and employed common search engine optimization (SEO) practices to improve the site’s standing in search results. These SEO tools led her to change the site’s structure and writing by targeting keywords, using shorter paragraphs, and adding headers, resulting in a site that lacked creativity and started to feel like everything else on the web.

“The most consequential and important version of this change happens when Google and search end up influencing what kind of content and work is created in the first place,” Sato said. 

For example, she spoke with writers who emphasized that they don’t pursue projects if they can’t outrank bigger sites in search engines. This means that consumers of information end up seeing a lot of the same things, which might not be of the best quality but are optimized in a way that’s beneficial for the creators.

How Bad Actors Game Search

Panelists also discussed the problem of “data voids,” when low-quality content rises to the top of search results because there’s little high-quality information available on a topic. Nefarious actors take advantage of these voids, manipulating certain keywords in a way that surfaces misinformation aligned with their political interests, according to Tripodi, a sociologist and information scholar at UNC-Chapel Hill.

The data void itself isn’t necessarily the problem, she said, but the rapid filling of that void with untrustworthy content. Google recently released an update to combat this type of low-quality content, following a spate of reports on AI-generated spam and “obituary spam.” But it’s unclear how search engines will handle this issue as they continue to integrate AI features. 

“Data voids are particularly problematic in AI-enabled search because the environment of results comes in this conversational style where it just seems more trustworthy,” Tripodi said.

How Will AI Change Search?

Moving into the Q&A portion of the conversation, moderated by Sanderson, panelists discussed two main themes: how AI-enabled search will affect news publishing, and how it will impact public trust in information.

This week, Google announced it will now feature AI-written summaries at the top of search results, above the traditional list of links. This follows similar moves by other companies and startups. This change, however, could threaten news publishers, ranging from small bloggers to large media companies, who rely on search engines for a majority of their traffic. If people get all the information they need from an AI-written summary, they will have little reason to visit the website itself, critics say.

To respond, publishers will likely rethink their work, similar to how they did when optimizing for search engines. But that could backfire, according to Sato. 

“For a long time, not just on Google search but also Facebook and Twitter, we over-indexed how much these platforms can do for us,” she said. “As we’ve done that, we’ve spent more and more time adjusting our work for those platforms instead of building something on our own.”

In an era of eroding trust in democratic institutions, the move to AI-augmented search also presents challenges to building public trust in the information environment.

In a recent paper, Tripodi and colleagues explored this dynamic, focusing on how propagandists use features of a given technology — or technological “affordances” — to garner support and instill trust in a lie. The trust doesn’t come from a specific person, but from interacting with the technology itself. “When audiences are primed to ‘do their own research,’ disinformation becomes a more entangled, participatory process,” the authors write.

This concept of “doing your own research” can also be activated in a positive way, Tripodi said, since we all search online and try to find information for ourselves. 

“Done well, and informed by evidence-based practices and information literacy, doing your own research can be really powerful,” added Darmé. For its part, Google has integrated information literacy practices, such as the SIFT method, into search results, creating features allowing users to explore more information about a result or image.

AI continues to develop at a rapid pace. This week, for example, OpenAI unveiled its latest model, which can hold human-like conversations while processing text, audio, and visuals. The company is negotiating with Apple to possibly integrate ChatGPT into iOS. And there are even rumors about an OpenAI search engine. CSMaP will continue analyzing this space and exploring how advances in AI impact digital media and democracy.

Watch the Event