Friday, March 6, 2026

SearchResearch (3/6/26): Why you STILL need to know how to search... perhaps more than ever.

 It's been an interesting few weeks.  


Surprise! I found my stone twin hiding in an architectural sculpture at Yale. 

I was there in February to give a lecture on Human-Centered AI, during one of the colder Februaries on record, with temperatures dropping to -5°F (-21°C). Fortunately, after years of living in upstate New York, I own the right kind of clothing.

But, as Regular SRS Readers know, I’ve also been writing about the changes underway at the intersection of AI and search. And there are a few things we need to keep in mind...


Search behavior is evolving rapidly, with several key changes:

Increased Complexity: Queries are becoming longer and more complex, with a significant rise in conversational and long-tail queries. Not a surprise, really, but it suggests that more people are shifting to an AI model of search (and are tackling more complex search tasks). 

Visual Searches: Visual searches have grown sharply, up 65% year over year, with more than 100 billion visual searches already this year (2026). 

Multimodal Searches: Users are embracing new ways to search, including voice, text, circle, scribble, and humming.  What's interesting is that they're combining different inputs like images and text. You know video isn't far behind. 

AI-Driven Searches: AI is driving an explosion of complex and long-tail (i.e., rare) searches, with AI-mode users asking much longer questions, sometimes 2-3 times the length of traditional searches.

We're really not in the Kansas of Search any longer.  


2.  All of the metrics point to one thing: people want visual, browsable, and snackable search experiences:

That's fine, but we also live in a world where it is now 1,000x cheaper to generate a plausible hypothesis than it is to verify it by looking up and wading through rigorous evidence. A desire for more "snackable" presentation of results isn't going to encourage deeper research and careful analysis.  

In fields like sociology or history, where nuance is everything, AI 'hallucinations' are becoming more sophisticated and harder to spot.

Research is shifting from a labor-intensive process to a judgment-intensive one.

The advent of 'Deep Research' agents means we can now summarize 500 papers in 5 minutes. This doesn't make research easier; it makes it harder. It moves the bottleneck from information gathering to critical evaluation.  How do we build trust in the 'marketplace of ideas' when the primary tool for research is also the primary generator of high-quality misinformation, much of which is inherently snack-food for the mind?


3. Intellectual Vertigo: 

We are currently living through a moment of collective intellectual vertigo. For the last twenty years, search was an act of retrieval. We were librarians looking for a book on a shelf. Today, search has become an act of synthesis. We ask a question, and a machine doesn't just find the book; it reads ten books, summarizes them, and hands us a neat, three-paragraph answer, as if from a vertiginous height.  

The AI magic is so good that it creates a dangerous illusion: that the labor of research has been eliminated. But in reality, the labor hasn't disappeared—it has shifted.

In the age of AI, the quality of your answer is strictly capped by the quality of your query. If you ask a "lazy" question—"What is the consensus on climate migration?"—the AI will give you the most probable, middle-of-the-road, and often outdated "average" of the internet. As Ted Chiang brilliantly wrote, AI answers are often like a blurry JPEG of the internet. Be careful.  

Knowing how to search now means knowing how to probe the AI model to bypass that "average." 

It means knowing how to construct searches that force the AI to look at the edges of a field, to find the dissenters, and to cite the specific data that doesn't fit the neat narrative. If you don't know how to search, you are essentially letting an algorithm decide the boundaries of your world.

AI provides a beautiful map of human knowledge. But as any researcher knows, the map is not the territory. The actual "territory" is the messy, footnote-heavy, peer-reviewed primary sources.

When we lose the skill of searching—the ability to find the original source, to verify the DOI, to check the methodology of the paper being cited—we lose our connection to the territory. We become "Map-Readers" who are vulnerable to every hallucination and every bias baked into the system. Knowing how to search is the only way to verify that the ground beneath our feet is actually solid.


4. What do we do?  

It's not going to be a surprise, but we need to develop new research habits that assume "hallucination by default" and use adversarial validation techniques.

To move beyond simple "Q&A" and into high-quality AI searching, you have to treat the AI as a sophisticated but fallible partner. Practicing critical analysis and adversarial methods ensures you are extracting the most accurate information while guarding against "hallucinations" or biased patterns.

Here are three practical ways to level up your search game:

A. The "Devil’s Advocate" Cross-Examination

Instead of asking for the truth, ask the AI to defend a counter-intuitive or unpopular position. This forces the model to bypass its "standard" consensus-based response and reveals the complexity of a topic. You might well discover something that nobody else has noticed. 

The Method: Once the AI provides an answer, ask it to: 

"Identify the three strongest arguments against the conclusion you just provided, citing specific potential data gaps."

Why it works: It breaks the "echo chamber" effect. If the AI struggles to find counter-arguments, you know you need to switch to a different search tool to find dissenting views.

Adversarial Twist: Tell the AI: "I believe [X] is true. Your job is to convince me I am wrong using only verifiable historical or scientific data."
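If you want to keep these prompts handy, here's a tiny Python sketch that packages the Devil's Advocate method as reusable templates. The function names are mine (purely illustrative, not part of any library); the wording just mirrors the prompts quoted above, so you can paste the output into whatever AI chat tool you use.

```python
# A minimal sketch: the "Devil's Advocate" method as reusable
# prompt templates. Nothing here calls an AI service; the functions
# just build the strings you would send.

def devils_advocate_followup() -> str:
    """The follow-up prompt to send after the AI's first answer."""
    return ("Identify the three strongest arguments against the "
            "conclusion you just provided, citing specific potential "
            "data gaps.")

def adversarial_twist(claim: str) -> str:
    """Ask the AI to argue against a belief you hold."""
    return (f"I believe {claim} is true. Your job is to convince me "
            "I am wrong using only verifiable historical or "
            "scientific data.")

# Example use with a sample claim of your own choosing:
print(adversarial_twist("remote work lowers productivity"))
```

The point of writing it down this way is repeatability: you run the same cross-examination every time, instead of improvising it (or forgetting it) mid-session.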

B. The "Triangulation & Source Stress-Test"

High-quality searching involves verifying that the results the AI gives you aren't just "sounding" right. You can use adversarial prompting to make the AI audit its own logic.

The Method: After an AI search, use a Multi-Step Verification prompt:

"Summarize the consensus on [Topic]."

"Now, provide the names of three specific experts or organizations that would disagree with that summary."

"Explain why those experts might claim your previous summary is oversimplified."

Why it works: It forces the AI to look for "friction" in the data rather than just the smoothest path to an answer. (And as my Yale students will attest, you need friction in your intellectual work. If you're just gliding along, you're probably not learning anything.)
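The three-step sequence above is easy to script, too. This sketch (again, the function name is my own invention) returns the steps as an ordered list; you'd send each one in turn, in the same conversation, reading the answer before moving on.

```python
# A minimal sketch: the "Triangulation & Source Stress-Test" as an
# ordered chain of prompt templates. Fill in a topic, then send the
# prompts one at a time in a single conversation.

def stress_test_prompts(topic: str) -> list[str]:
    return [
        f"Summarize the consensus on {topic}.",
        ("Now, provide the names of three specific experts or "
         "organizations that would disagree with that summary."),
        ("Explain why those experts might claim your previous "
         "summary is oversimplified."),
    ]

# Example use with a sample topic:
for step, prompt in enumerate(stress_test_prompts("urban heat islands"), 1):
    print(f"Step {step}: {prompt}")
```

Keeping the steps in a list makes the ordering explicit: the disagreement prompt only makes sense after the consensus prompt, which is the whole trick.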

C. The "Constraint-Based Fact-Checking" (Adversarial Prompting)

This method involves setting "traps" or strict rules to see if the AI relies on generic templates rather than actual search data.

The Method: Use a Negative Constraint prompt: 

"Explain the impact of [Event/Policy], but do not use any information or talking points that appeared in major news headlines in the last 48 hours. Focus only on academic or niche industry-specific data."

Why it works: By forbidding the most "obvious" or "available" information (that is, working to avoid the Availability Bias effect), you force the AI to dig deeper into its training data or actively search live results for more nuanced, less-discussed facts.

Practical Tip: Ask the AI to: "Compare these two perspectives and point out any logical fallacies present in either side."
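And one last sketch, rounding out the set: the Negative Constraint prompt and the fallacy-check follow-up as templates. As before, the helper names are illustrative, and the example topic is just a placeholder for whatever event or policy you're researching.

```python
# A minimal sketch: "Constraint-Based Fact-Checking" templates.
# The negative constraint forbids the most readily available
# information, forcing a deeper dig.

def negative_constraint_prompt(event: str) -> str:
    return (f"Explain the impact of {event}, but do not use any "
            "information or talking points that appeared in major "
            "news headlines in the last 48 hours. Focus only on "
            "academic or niche industry-specific data.")

def fallacy_check_prompt() -> str:
    """Follow-up after the AI lays out two perspectives."""
    return ("Compare these two perspectives and point out any "
            "logical fallacies present in either side.")

# Example use with a sample event:
print(negative_constraint_prompt("a new municipal congestion charge"))
```

Notice that the constraint is stated as a prohibition, not a request; in my experience, telling the model what it may *not* use is what actually pushes it off the well-worn path.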


Face it, we aren't librarians anymore; we are navigators in a high-speed, synthetic storm, and the rain isn't letting up. 

Knowing how to search is no longer a technical skill you delegate to a junior researcher. It is the primary way we exercise our agency. It is how we ensure that as the machines get smarter, the humans don't get lazier. Don't be lazy; look for the friction.  

In 2026, the most powerful person in the room isn't the one with the best AI; it’s the one who knows how to ask the question that the AI didn't expect.

So... Keep searching!