Friday, March 27, 2026

Answer: Who designed this stained glass?

 This should have been easy... 


... but it wasn't.  If you've been around the SearchResearch Rancho for a while, your first instinct would have been to just use Google Lens to search for the image.  That's what I did.  

But... as usual, there's more to the story... 

Here were our presenting questions for the week:  

1. Where is this stained glass? 

2. Who designed it? 

Let's tackle both questions at once. 

IF you use Google Lens with a right-click (then "Search this image with Google Lens"), you might get a result like this, telling us that it's in the Church of St. Mary, Slough, in England.  

Google Lens search on window; first result is completely wrong. 

Nice. HOWEVER... If you click through to that image of stained glass in the Church of St. Mary, you see that it's NOT a match. 

If, on the other hand, you click the "AI Mode" button on the image search panel (shown in bold in the image below), you get a different answer.  Here, the result shows that it's the window "Land is Bright" in the Washington National Cathedral, designed by John Piper.   


That's an interesting answer, but wrong.  If you do a regular Image search for ["Land is bright" Washington National Cathedral stained glass] you see this image, which clearly is NOT our target.  Oops.  

"Land is bright" window at the National Cathedral, DC.
P/C Wikimedia

It's a beautiful window, but as I always say, CHECK YOUR ANSWERS!  This one is clearly wrong.  

On the other hand, another thing I always say is that "incorrect answers can sometimes give you a clue..."  

In the very first Google Lens result, the second image points to a window at the Washington National Cathedral in DC.  If you click on that result, it takes you to a page about Pentecost in 2022, but with an image of our target window.  This is a big hint: We're getting closer!  Stay on the trail!

Even though this isn't the final result, it does suggest we should check the windows at the National Cathedral.  A quick search for [stained glass windows of the National Cathedral] takes us to the Wikipedia Category for this topic.  A Category page is a collection of all the windows at the cathedral.  Simply paging through the collection takes you rapidly to this page:  "Founding of a New Nation," which tells us that this is the answer we seek. 

Answer:  This is "a stained glass clerestory window above the George Washington Bay in the south nave of the Washington National Cathedral. It was designed by Robert Pinart and fabricated by Dieter Goldkuhle, and dedicated in 1976."

But, since we ALWAYS check (right?), I went to the National Cathedral home site and found a nice video about the windows, "January 25 2022 Docent Spotlight: Sacred Stories in Light & Color."  At 26:11 you'll find this slide that confirms our finding and tells us more about the window: 

From YouTube video "Sacred Stories in Light & Color"

I tried all of the obvious AIs--none of them got it right.  Bing didn't get it, and the clue to the right answer was fairly hidden in the Google results.  This is genuinely a hard search task.  

SearchResearch Lessons 

1. Keep searching.  I had a strong suspicion that someone would have documented this kind of thing.  There are books on this topic, and if the online searches hadn't worked, I would have gotten one of them via interlibrary loan.  In this case, I found a good result AND a high-quality confirmation from the Cathedral's own site.  

2. Verify everything!  Even though we got lots of positive-sounding answers, if you check carefully, you'll see that the confident answer was, in fact, not correct. 

3. Follow those other trails.  In this case, the second result actually led us to the correct answer.  Don't give up on seemingly incorrect results... you might find something useful on the trail to your answer.  



Keep searching.  

Wednesday, March 18, 2026

SearchResearch Challenge (3/18/26): Who designed this stained glass?

 I was out for a walk yesterday... 


... and took this photo.  As you know, I love stained glass, and this was an especially beautiful example. I took the shot and then realized that it might make a great SRS Challenge.  

I know where I was when I took the photo, and I know who designed it... but can YOU figure this out? 

1. Where is this stained glass? 

2. Who designed it? 

I tried a couple of AI tricks that failed.  Can you figure it out? 

Let us know what you did!  (And also tell us about any methods you tried that did NOT work out!)  

Keep searching.  

Wednesday, March 11, 2026

SearchResearch (3/11/26): How to do long term research with an AI partner

The Art of Long-Term AI Triangulation

Surveyor triangulating on a construction site (1920s) P/C USC and California Historical Society

In the previous post, we looked at the reality of modern search, recognizing that the world now is very different from what it was 5 years ago. 

With the explosion of multimodal inputs and AI-driven queries, we’ve traded the role of the quiet librarian for that of a navigator in a high-speed, synthetic storm.

We also confronted a dangerous paradox. At the exact moment search is becoming infinitely richer and more complex, users are demanding a "snackable," frictionless experience. We live in a world where it is now much cheaper for an AI to generate a plausible hypothesis than it is for us to wade through rigorous evidence to verify it. 

To combat this, I mentioned the necessity of friction—using a method like Constraint-Based Fact-Checking to set intellectual traps, force the AI out of its lazy defaults, and avoid the "average" of the internet.

But there is a catch.

Constraint-based prompting is a good survival tactic for a single search session. But what happens when your research spans weeks or months? This is the world I live in: my research often isn’t done in one day, but takes weeks to search, accumulate evidence, and understand what I’m trying to do. 

In fields where nuance is everything, an adversarial prompt that sparks brilliant friction on Day 1 can slowly degrade into an intellectual echo chamber by Day 30. If you are using AI to synthesize hundreds of documents over a long-term project, relying on one-off Q&A tricks leaves you highly vulnerable to compounding hallucinations.

Knowing how to search is the primary way we exercise our agency, and for serious researchers, that means evolving past the single prompt. We have to move from one-off trap-setting to a continuous, iterative methodology.

What we need is a way to do Long-Term Triangulation by treating the AI as a partner in the research. 

If you want to ensure that as the machines get smarter, we don't get lazier, you have to design an environment that treats the AI not as an answering machine, but as a sustained intellectual sparring partner. 

Here is a step-by-step breakdown of how a researcher can build and maintain this longitudinal friction over a sustained period of research.

Here are the four steps you can use to support your long-term research projects with AI-augmented search and analysis tools.  Let’s call these the Four Pillars of Long-Term AI Triangulation.


1. Build and use a Persistent Memory 

You cannot have a long-term sparring partner if the AI forgets everything every time you close the tab. The foundation of this method is establishing a persistent context window.

The Action: Instead of starting new chats every time, use long-context workspaces (like Gemini Advanced, NotebookLM, or custom project threads) that hold the entire history of the project.

The Routine: At the end of every research sprint (say, at the end of your research day), create a "State of the Thesis" summary within that workspace. (Save this summary—you’ll need it later.)  

The Prompt: [Synthesize our current working hypothesis based on the last 24 hours of inputs. List the three strongest pieces of evidence we have, and identify the single weakest link in our current logic.]
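To make that end-of-sprint routine concrete, here's a minimal Python sketch of a persistent "State of the Thesis" log kept as a local JSON file. This isn't part of any particular AI tool; the filename and the fields are my own assumptions, just one way to make sure the summaries survive between sessions.

```python
import json
from datetime import date
from pathlib import Path

# Hypothetical filename -- use whatever fits your project workspace.
DEFAULT_LOG = Path("state_of_the_thesis.json")

def append_state(hypothesis, strongest_evidence, weakest_link, log_path=DEFAULT_LOG):
    """Append one end-of-sprint 'State of the Thesis' entry to a JSON log.

    hypothesis: current working hypothesis (string)
    strongest_evidence: list of (up to three) strongest pieces of evidence
    weakest_link: the single weakest link in the current logic
    Returns the full list of entries so far.
    """
    entries = json.loads(log_path.read_text()) if log_path.exists() else []
    entries.append({
        "date": date.today().isoformat(),
        "hypothesis": hypothesis,
        "strongest_evidence": strongest_evidence,
        "weakest_link": weakest_link,
    })
    log_path.write_text(json.dumps(entries, indent=2))
    return entries
```

At the end of each research day, paste the AI's synthesis into one call to `append_state`; the growing file becomes the raw material for the weekly reconciliation.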


2. Track the Shifts

When dealing with complex topics, the danger isn't just hallucination; it's the subtle shifting of goalposts. As you feed the AI more data, it will naturally try to smooth out the narrative to keep it "snackable." You need to learn to track the deltas—the differences between last week's consensus and this week's. Things change, and that’s okay, but plan to track that.  Use these changes for better triangulation.

The Action: Create a "Friction Log." Whenever new, messy primary sources are introduced, do not simply ask the AI to summarize them. Ask it to compare the new information to its own previous conclusions.

The Routine: The weekly reconciliation.

The Prompt: [I am uploading three new peer-reviewed papers and my previous “State of the Thesis.”  Do not just summarize the new papers. Compare their findings against the “State of the Thesis”. Highlight every specific point where this new data contradicts our previous assumptions. Force a reconciliation.]

And then, naturally, include the shifts in your weekly “State of the Thesis.”  
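If you save each week's "State of the Thesis" as plain text, you can compute part of the delta yourself before asking the AI to reconcile anything. Here's a small sketch using Python's standard difflib module (the function name and the "last week / this week" convention are my own, not any tool's feature):

```python
import difflib

def thesis_delta(last_week: str, this_week: str) -> str:
    """Return a unified diff between two 'State of the Thesis' summaries,
    showing exactly which claims shifted week over week."""
    return "\n".join(difflib.unified_diff(
        last_week.splitlines(),
        this_week.splitlines(),
        fromfile="last_week",
        tofile="this_week",
        lineterm="",
    ))
```

Lines prefixed with `-` dropped out of the thesis; lines prefixed with `+` are new. Those shifted lines are exactly the points worth logging in your Friction Log.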


3. Active Critiquing

An intellectual sparring partner must be allowed to throw punches. Be cautious: If you only reuse data that confirms your biases, the AI will happily build an echo chamber. Triangulation requires intentionally breaking the model's consensus.

The Action: Dedicate 20% of your daily research time to actively hunting for contradictory, fringe, or highly niche data that challenges the dominant narrative of your research, and force the AI to grapple with it.

The Routine: A "Red Team" injection.

The Prompt: [We have spent three weeks building a case file for <Concept X>. I want you to act as a hostile, highly skeptical peer reviewer. Imagine a critique from a dissenting academic. Try to critically break down the current thesis. Where is our argument most likely to fail peer review?] 


4. The Meta-Audit (Check your blind spots)

Eventually, your research process will settle into a rhythm, and that rhythm can create blind spots. The final step in long-term triangulation is stepping back to audit the process of the research, rather than just the facts.

The Action: Periodically ask the AI to evaluate the shape of the data it has been fed, looking for structural biases in the researcher's own search behavior.

The Routine: Do a monthly audit looking for gaps.

The Prompt: [Analyze the <N> sources we have processed in this thread over the last month. What academic disciplines, geographic regions, or ideological perspectives are entirely missing from our dataset? What search queries should I be running today to cover those blind spots?]
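You can also pre-compute part of that audit yourself. If you keep a simple record of each source's discipline and region as you go, a few lines of Python will tell you which expected categories are missing entirely. (The field names and category lists below are assumptions for illustration, not any tool's schema.)

```python
from collections import Counter

def coverage_gaps(sources, expected_regions, expected_disciplines):
    """Tally the regions and disciplines of the sources fed to the AI,
    then report which expected categories are entirely absent."""
    regions = Counter(s["region"] for s in sources)
    disciplines = Counter(s["discipline"] for s in sources)
    return {
        "missing_regions": sorted(set(expected_regions) - set(regions)),
        "missing_disciplines": sorted(set(expected_disciplines) - set(disciplines)),
        "region_counts": dict(regions),
        "discipline_counts": dict(disciplines),
    }
```

The "missing" lists are a ready-made starting point for the blind-spot queries the monthly audit asks for.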


By structuring your workflow this way, you come to realize that real research in the AI era isn't about getting the machine to write the final paper or asking the cleverest prompt--it’s about building a system of continuous, productive friction that allows both the human and the machine to think harder.

Keep searching.  Keep the friction.  


 

Friday, March 6, 2026

SearchResearch (3/6/26): Why you STILL need to know how to search... perhaps more than ever.

 It's been an interesting few weeks.  


Surprise! I found my stone twin hiding in an architectural sculpture at Yale. 

I was there in February to give a lecture on Human-Centered AI, during one of the colder Februaries on record, with temperatures dropping to -5°F (-15°C). Fortunately, after years of living in upstate New York, I own the right kind of clothing.

But, as Regular SRS Readers know, I’ve also been writing about the changes underway at the intersection of AI and search. And there are a few things we need to keep in mind...


1. Search behavior is evolving rapidly, with several key changes:

Increased Complexity: Queries are becoming longer and more complex, with a significant rise in conversational and long-tail queries. Not a surprise, really, but it suggests that more people are shifting to an AI model of search (and are tackling more complex search tasks). 

Visual Searches: Visual searches have grown significantly, with a 65% year-over-year increase and more than 100 billion visual searches already this year (2026). 

Multimodal Searches: Users are embracing new ways to search, including voice, text, circle, scribble, and humming.  What's interesting is that they're combining different inputs like images and text. You know video isn't far behind. 

AI-Driven Searches: AI is driving an explosion of complex and long-tail (i.e., rare) searches, with AI Mode users asking much longer questions, sometimes 2-3 times the length of traditional searches.

We're really not in the Kansas of Search any longer.  


2.  All of the metrics point to one thing: people want visual, browsable, and snackable search experiences. 

That's fine, but we also live in a world where it is now 1,000x cheaper to generate a plausible hypothesis than it is to verify it by looking up and wading through rigorous evidence. A desire for more "snackable" presentation of results isn't going to encourage deeper research and careful analysis.  

In fields like sociology or history, where nuance is everything, AI 'hallucinations' are becoming more sophisticated and harder to spot.

Research is shifting from a labor-intensive process to a judgment-intensive one.

The advent of 'Deep Research' agents means we can now summarize 500 papers in 5 minutes. This doesn't make research easier; it makes it harder. It moves the bottleneck from information gathering to critical evaluation.  How do we build trust in the 'market of ideas' when the primary tool for research is also the primary generator of high-quality misinformation, much of which is inherently snack-food for the mind?


3. Intellectual Vertigo: 

We are currently living through a moment of collective intellectual vertigo. For the last twenty years, search was an act of retrieval. We were librarians looking for a book on a shelf. Today, search has become an act of synthesis. We ask a question, and a machine doesn't just find the book; it reads ten books, summarizes them, and hands us a neat, three-paragraph answer, as if from a vertiginous height.  

The AI magic is so good that it creates a dangerous illusion: that the labor of research has been eliminated. But in reality, the labor hasn't disappeared—it has shifted.

In the age of AI, the quality of your answer is strictly capped by the quality of your query. If you ask a "lazy" question—"What is the consensus on climate migration?"—the AI will give you the most probable, middle-of-the-road, and often outdated "average" of the internet. As Ted Chiang brilliantly wrote, AI answers are often like a blurry JPEG of the internet.  Be Careful.  

Knowing how to search now means knowing how to probe the AI model to bypass that "average." 

It means knowing how to construct searches that force the AI to look at the edges of a field, to find the dissenters, and to cite the specific data that doesn't fit the neat narrative. If you don't know how to search, you are essentially letting an algorithm decide the boundaries of your world.

AI provides a beautiful map of human knowledge. But as any researcher knows, the map is not the territory. The actual "territory" is the messy, footnote-heavy, peer-reviewed primary sources.

When we lose the skill of searching—the ability to find the original source, to verify the DOI, to check the methodology of the paper being cited—we lose our connection to the territory. We become "Map-Readers" who are vulnerable to every hallucination and every bias baked into the system. Knowing how to search is the only way to verify that the ground beneath our feet is actually solid.


4. What do we do?  

It's not going to be a surprise, but we need to develop new research habits that assume "hallucination by default" and use adversarial validation techniques.

To move beyond simple "Q&A" and into high-quality AI searching, you have to treat the AI as a sophisticated but fallible partner. Practicing critical analysis and adversarial methods ensures you are extracting the most accurate information while guarding against "hallucinations" or biased patterns.

Here are three practical ways to level up your search game:

A. The "Devil’s Advocate" Cross-Examination

Instead of asking for the truth, ask the AI to defend a counter-intuitive or unpopular position. This forces the model to bypass its "standard" consensus-based response and reveals the complexity of a topic. You might well discover something that nobody else has noticed. 

The Method: Once the AI provides an answer, ask it to: 

"Identify the three strongest arguments against the conclusion you just provided, citing specific potential data gaps."

Why it works: It breaks the "echo chamber" effect. If the AI struggles to find counter-arguments, you know you need to switch to a different search tool to find dissenting views.

Adversarial Twist: Tell the AI: "I believe [X] is true. Your job is to convince me I am wrong using only verifiable historical or scientific data."

B. The "Triangulation & Source Stress-Test"

High-quality searching involves verifying that the results the AI gives you aren't just "sounding" right. You can use adversarial prompting to make the AI audit its own logic.

The Method: After an AI search, use a Multi-Step Verification prompt:

"Summarize the consensus on [Topic]."

"Now, provide the names of three specific experts or organizations that would disagree with that summary."

"Explain why those experts might claim your previous summary is oversimplified."

Why it works: It forces the AI to look for "friction" in the data rather than just the smoothest path to an answer.  (And as my Yale students will attest, one needs friction in the intellectual work you're doing.  If you're just gliding along, you're probably not learning anything.)  

C. The "Constraint-Based Fact-Checking" (Adversarial Prompting)

This method involves setting "traps" or strict rules to see if the AI relies on generic templates rather than actual search data.

The Method: Use a Negative Constraint prompt: 

"Explain the impact of [Event/Policy], but do not use any information or talking points that appeared in major news headlines in the last 48 hours. Focus only on academic or niche industry-specific data."

Why it works: By forbidding the most "obvious" or "available" information (that is, working to avoid the Availability Bias effect), you force the AI to dig deeper into its training data or actively search live results for more nuanced, less-discussed facts.

Practical Tip: Ask the AI to: "Compare these two perspectives and point out any logical fallacies present in either side."


Face it, we aren't librarians anymore; we are navigators in a high-speed, synthetic storm, and the rain isn't letting up. 

Knowing how to search is no longer a technical skill you delegate to a junior researcher. It is the primary way we exercise our agency. It is how we ensure that as the machines get smarter, the humans don't get lazier. Don't be lazy; look for the friction.  

In 2026, the most powerful person in the room isn't the one with the best AI; it’s the one who knows how to ask the question that the AI didn't expect.

So... Keep searching!