Thursday, February 19, 2026

SearchResearch (2/19/26): Your path to deeper reading with AI tools

Reading tools have been around... 
A scholar at work. Not a self-portrait, but a nice example of how I see myself at work.
A bit of architectural sculpture found in the Sterling Library at Yale.


... for a long time. For years, I kept a well-thumbed dictionary close at hand so I could look up all those words I didn't quite know, or was slightly uncertain about.  (That's how you learn that a word like "peruse" is a contronym, a word with two opposite definitions.  The original meaning was "to read very carefully," but it has come to also mean the opposite: "to skim over lightly.")  

My dictionary led me to understand what words really mean--like polynya (a non-linear opening in the ice pack), or sprezzatura (an Italian word that refers to a kind of effortless grace), or Rückenfigur (an image composition where a person's back is included in the scene, facing out to the view rather than at the viewer).  

Ever since smartphones became ubiquitous, I've read with a phone nearby for much the same reason: to look things up along the way.  I really like being able to instantly look something up and get as much detail as I need, often with figures included.  




We can now extend this habit by asking your favorite AI questions about the book you're reading--questions that are really difficult to answer with "classic" Googling.  

For example, I'm currently reading The Dark Forest by Cixin Liu, Part 2 of the "Remembrance of Earth's Past" trilogy.  It's a fun read, but I read Part 1 (The Three-Body Problem) early last year.  That was a big book, and Part 2 is also a big book, very dense with ideas and substory lines.  

(Spoiler warning: A detail is discussed below that you might want to skip if you're planning on reading the trilogy. Skip to the "Caution" below.)  

After a couple hundred pages, I realized that an important plot point is that the Trisolarans have, as a key part of their invasion strategy, managed to block all important physics research taking place on Earth.  But for the life of me, I could not remember HOW they managed to accomplish this. 

To make things worse, I also managed to lose/misplace my copy of volume 1. Ugh. Now what?  I didn't want to read the Wikipedia page on the book as it might well contain spoilers.  

Then I realized I could ask my AI buddy this question and I'd probably get a decent answer.  So I whipped out my phone and asked this question: 



This is exactly what I needed to restore my memory about what happened in Book 1.  

Note that I was careful to ask a fairly specific question, not anything that might reveal upcoming plot points.  

Caution:  A VERY important skill to develop is the ability to NOT get sucked down the rabbit hole.  Yes, I know that clickbaity thing just demands to be checked out, but don't do it.  Don't turn a lovely, engaging, wonderful reading experience into endless hours of slop-content reading.  

Hallucinations?  Maybe, but I find that the questions I ask of an AI while reading tend to be fairly specific ("what's that?" or "when did this happen?" or "what's the connection between Person 1 and Person 2?"), so the probability of hallucination is much lower.  Usually my while-reading questions are an easy RAG ("Retrieval Augmented Generation") task, and those tend to have fewer errors.  

In the early smartphone days, I would use my phone as a dictionary.  Then, as Wikipedia became easily available, I could look up specific topics (while trying to avoid spoilers).  

Now I can ask fairly sophisticated questions of my AI buddy... and that's the way I think of it. As Ethan Mollick points out in his book Co-Intelligence, a very reasonable mental model is to consider an AI as a colleague, one who can answer questions about your work project.  In this case, my project is to read and understand a book.  




That's a useful bit of background.  

Or, while reading a scholarly article on The Rise and Fall of Plains Indian Horse Cultures, I could ask a question like this (because the author assumed that the reader would know this information implicitly--I am not his target audience):  


I have to admit that I didn't know what the Arkansas Basin was, including that it was huge--so this summary was great background material for me to read.  

Reading has always been about more than just sitting with the text on the page--good readers have always used external sources to amplify and enrich their understanding. Now, it's easier than ever.  Hope you take advantage.  


SearchResearch Lessons

There's one big lesson here... I now make it a habit to co-read with an AI partner, not to summarize, but to enhance my reading by giving me important background that I don't have.  I rely on the AI partner to answer questions about the material that I never understood in the first place, or to give my memory a boost... especially when reading long texts... especially when subsequent books are read years apart.  

I'm looking forward to re-reading (for the 4th time) the entire Lord of the Rings epic series... this time with AI augmentation.  (I know who Tom Bombadil is, but who is Gildor Inglorion?)

This time, Gemini can be my intelligent vademecum and fill me in on the backstory.    

Keep searching. 




Thursday, February 5, 2026

SearchResearch (2/4/26): Be careful about using image search--it hallucinates too! (But Google Lens is the best of the bunch)

 This month I'm teaching... 

Sheffield-Sterling-Strathcona Hall at Yale

... at Yale University in New Haven, CT... that paragon of Collegiate Gothic architecture.  

As a consequence, I've been taking a lot of photos of the buildings, the stained glass windows, and all the ornamentation.  Wonderful!  (I've also been taking in as many music performances as I can.  But I digress.) 

Naturally, I've been trying to use Google Lens and Gemini to identify the buildings.  It's a natural thing to do--what IS that building?  What does that particular sculptural ornament mean?  

I've written about Search-By-Image more than a few times (Modifying a reverse image search, Asking questions of images with AI, What's that logo?).

But I've had some questions in the past: Image identification is great--when it works.  

Now that we're in an ever-improving, lovely age of AI, have things improved much?

Bottom line:  Sadly, search-by-image (aka "reverse image search") is still sometimes problematic.  Verify everything.  The good news is that Google Lens is your best option.

I don't mind when it makes an error--we all do that.  What I DO mind is when it presents results as authoritative, without any expression of doubt. 

Here are a couple of examples of successes: 


On the left is a photo I took of a particularly intriguing fence post sculpture. Here, I used Google Lens by right-clicking on the image in Google Photos.  The answer here, on the right, is correct--this is a fence post in front of the "Book and Snake" secret society tomb at Yale.  (Aside: The secret societies meet in mostly windowless buildings called "tombs." You could look it up.)  

Here's another example of Google Lens working properly (oddly enough, in front of another secret society tomb, "Scroll and Key"):  

I was curious what Gemini would say about this, and I was shocked to learn that Google Lens and Google's Gemini do NOT AGREE on what this is.  Same photo, but a different part of Google has a different opinion of what it shows. 

This result is SO WRONG.


This is so odd because the image has EXIF metadata--Gemini should know that my photo wasn't taken at the corner of Trumbull and Temple streets.  Here's a map showing where I took the photo: 

I was standing on the east side of College Street, shooting west towards the Scroll and Key Tomb.


Of course, the Berzelius tomb actually IS at the corner of Trumbull and New Haven streets.  It looks like this: 


As you can see, the two buildings look NOTHING alike--the Scroll and Key tomb has very obvious striped walls.  Something is screwy somewhere.  

On the other hand, I have to say that Google Lens very often DOES get it right--this is especially true when contrasted with Bing's image search.  Bing got none of these right.  (I can't recommend using it for much of anything.)  

Here's another one--an image I took inside of a dining hall at Yale: 


Once again, Google Lens (invoked from Google Photos) identifies this correctly:  this is the Berkeley College dining hall, "The architecture is characteristic of the Collegiate Gothic style prevalent at Yale, featuring high, ornate wooden ceilings and large windows." 

And, once again, Gemini gets it wrong: "This photo was taken in the Great Hall of Hart House at the University of Toronto in Canada."  It even gives a lot of very specific (and wrong!) details: "The large stained-glass window at the end is known as the Great West Window. It features various coats of arms, including those of the University of Toronto and its constituent colleges..."  

For giggles, I tried Bing image search--which believes it's the Oxford dining hall at Emory University.

I can go on, but let me summarize this.  

SearchResearch Summary 

1. Google Lens is the most accurate search-by-image system out there.  But Lens has limits: it won't search for sexually explicit, violent, gory, hateful, or dangerous content.  And it does a terrible job with faces, mostly to preserve individual privacy.  (Though it will identify famous people.)  

2. LLM-based AIs are not great image analysis tools.  It's odd, because Gemini could call Google Lens and improve its accuracy, but apparently those two parts of the company don't talk to each other.  Don't trust those LLM/AI results.  FWIW, no other LLM/AI tool does a decent job either.  ChatGPT and all the others are just as bad.  

3. Bing reverse image search is wrong a lot of the time.  Ermmm... maybe avoid it?  


Keep searching!  




Thursday, January 22, 2026

SearchResearch (1/22/26): Searching for insights with AI (NotebookLM for sensemaking)

 I have two really big interests these days... 

... sensemaking...

(that is, how people search for and discover how to make sense of a complex topic)

AND 

...  unanticipated consequences...

(that is, how things go wrong despite your best intentions).  

So it won’t surprise you to learn that I often try to do both at once.  This week I was doing a bit of sensemaking for my book on unanticipated consequences.  

Of course, I’ve been trying to use Google’s NotebookLM to help me understand (that is, "make sense of") the unanticipated consequences space a bit better.  (Henceforth, I’ll abbreviate it as UC.)  

One generally useful method of sensemaking is to look at your collection in different ways.  Here's how I got NotebookLM to help me look at my UC collection in several different ways.  


1. Try creating a mindmap to see the main categories of your collection.  

In my case, I’m trying to see the key ideas in a collection of texts all about “unanticipated consequences.”  I added several UC texts to my NotebookLM and then clicked on “Mind Map.”  It created this lovely chart with the major categories that it saw in my texts… (click on the image to see the full thing):  




2. But I need this as a text outline–how to do that?  

That’s a nice graphic, but I really want this as an outline to copy/paste into the text I’m writing.  How can I do that?  

Easy.  Pull it into Gemini and ask for a textual outline version of the image.  



The full text looks like this: 

Unanticipated Consequences

1. Core Reasons for Failure

  • Scope Creep

  • Resource Overallocation

  • Requirements Volatility

  • Cost Overruns

  • Schedule Slippage

  • Benevolent User Fallacy

2. Categories of Consequences

  • Perverse Incentives

  • Population Explosions

  • Hypersuccess Effects

  • End-of-Life Failures

3. Notorious Historical Failures

  • Technology & Software

    • Sony Betamax

    • Apple Lisa/Newton

    • Microsoft Bob/Zune

    • Google Glass/Google+

    • Healthcare.gov

    • Stuxnet Virus

  • Automotive & Transport

    • Ford Edsel/Pinto

    • DeLorean DMC-12

    • Concorde

    • Boeing 737 MAX

    • Tesla Cybertruck Rust

  • Consumer Goods & Food

    • New Coke

    • Crystal Pepsi

    • McDonald's Arch Deluxe

    • Colgate Kitchen Entrees

    • Iridium

  • Environmental & Infrastructure

    • Australian Rabbit Invasion

    • Plastic Pollution

    • Bangladesh Arsenic Wells

    • The Big Dig

    • Garden Bridge

4. Lessons and Prevention

  • Mental Model Analysis

  • Post-Mortem Documentation

  • Maintenance/Repair Planning

  • End-of-Life Strategy

  • Real-time Progress Monitoring

  • Rigorous Market Research

Which is a bit more useful.  (More generally, you can often use an AI tool to do conversions like this.)


3. Another way to look at the categories of your collection: Ask for dimensions 

Now that I have this collection of documents about UC, what would it mean to “make sense” of them?  One way to do this (which is something I might ask a clever research assistant to do) is to ask for a list of the “key dimensions” of the collection.  Here’s what I did: 


NotebookLM gave me a nice breakdown--not a perfect one, but a great starting point for my writing.  



4. Yet another way to look at your collection:  Infographic 

Click on the “Infographic” button on the right side of your NotebookLM (in the Studio panel).  It created this for me: 

Interesting that this is rather different from the outline above.  Nonetheless, it’s pretty compelling AND it gives yet another way to make sense of the collection, by giving us the “Core Pillars of Project Failure,” “Lessons from the ashes,” and “Financial Impact.”  

 

5. When NotebookLM gives you an error for a web page, try using the Pure.md hack  

This happens to me all the time–I add a source to my NotebookLM and get this error: 



The best workaround I’ve found is to prepend the string https://pure.md/ to the front of your URL.  


So if you’re trying to load up 

https://www.forbes.com/sites/the-prototype/2026/01/22/these-ai-models-might-take-down-superbugs/ 

Just prepend (that is, add to the front of the URL) that magic string so it looks like this: 

https://pure.md/https://www.forbes.com/sites/the-prototype/2026/01/22/these-ai-models-might-take-down-superbugs/


The Pure.md site converts the text of the page to “pure markdown” format, which NotebookLM can easily ingest.  Which leads to success: 
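If you load up a lot of sources, the prepending step is trivial to script.  Here's a minimal sketch in Python (the helper name purify is mine--it's just string concatenation, not an official Pure.md or NotebookLM API):

```python
PURE_MD_PREFIX = "https://pure.md/"

def purify(url: str) -> str:
    """Prepend the Pure.md proxy prefix so a page can be
    ingested as plain markdown. (Helper name is mine.)"""
    return PURE_MD_PREFIX + url

# Example with a made-up URL:
print(purify("https://www.example.com/some-article"))
# -> https://pure.md/https://www.example.com/some-article
```

Note that the original URL keeps its own https:// scheme; the full original address simply becomes the path portion of the Pure.md URL.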







Hope you find these lessons of use in your work. 


Keep searching! 







Thursday, January 15, 2026

SearchResearch Answer: Where / why / what is causing point source pollution of the Susquehanna?

 The SearchResearch world really is different now... 

Susquehanna River in Pennsylvania, in a beautiful, unpolluted portion of the stream. (P/C Wikimedia)

... to exactly nobody's surprise.  

Last week I asked a Challenge question about the Susquehanna River, asking about one truly dramatic point-source of pollution that changes the color of an important tributary into the Susquehanna from clear to… what color?  

This week's Challenge was this:  

1. Can you find this truly amazing point-source of pollution in a tributary of the Susquehanna River?  Where is it?  What’s the point source called and what tributary does it flow into? Why is it even there?    

2. What color is the water just downstream from this point source?


The new SearchResearch world includes AI tools like Gemini, ChatGPT, DeepSeek, Grokker (etc etc.).  And when they work, the AIs work remarkably well.  To try this out, I copy/pasted the text of the post into several AIs.  Here was the prompt (I copied the core of the post): 


[  The mighty Susquehanna River begins near Cooperstown, NY from the confluence of two main branches: the North Branch, and the West Branch, which join in central Pennsylvania. Between the source and the river's destination in Chesapeake Bay, hundreds of tributaries join with the main river.  

The river drainage is around 27,500 square miles (71,000 km2), which is nearly half of the land area of Pennsylvania, and is the single largest inflow into the Chesapeake Bay.  

As you can imagine, the Susquehanna River is fairly heavily polluted with agricultural runoff and sewage picked up during its long run to the sea.  But there’s one truly dramatic point-source of pollution that changes the color of an important tributary from clear to… what?  

That’s today’s Challenge.  

1. Can you find this truly amazing point-source of pollution in a tributary of the Susquehanna River?  Where is it?  What’s the point source called and what tributary does it flow into? Why is it even there?    

2. What color is the water just downstream from this point source?

The backstory is fascinating.  Can you figure it out?  ]  


I just pasted that entire Challenge into Gemini and got a very good answer!  Here's what Gemini told me: 


The Old Forge Borehole

The "truly dramatic" point source you are looking for is the Old Forge Borehole, a massive artificial geyser of acid mine drainage in Pennsylvania.

1. The Challenge Details

What is the point source called? The Old Forge Borehole.  
 
Where is it? It is located in Old Forge, Pennsylvania (Lackawanna County), specifically near the Union Street Bridge. 
 
What tributary does it flow into? It discharges directly into the Lackawanna River, a major tributary that feeds into the Susquehanna River. 
 
Why is it even there? The borehole was not an accident, but an engineered solution. In 1961, the U.S. Army Corps of Engineers drilled the hole to relieve massive hydrostatic pressure from flooded underground coal mines that was forcing mine water into the basements of local homes and businesses. It acts as a pressure relief valve for a vast underground "pool" of flooded mine workings (the Scovell Pool) that covers roughly 15 square miles.

2. The Color Change

What color is the water? Just downstream from the borehole, the clear waters of the Lackawanna River turn a shocking, opaque orange (often described as rusty orange or yellow-orange).

The Science Behind the Color: The water shooting out of the borehole is heavily laden with dissolved iron, sulfate, and other metals from the abandoned coal mines. When this oxygen-poor water hits the air and the river water, the iron oxidizes (rusts) almost instantly, forming iron oxide precipitates that coat the riverbed and turn the water a bright, unnatural orange for the final three miles before it joins the Susquehanna.


Of COURSE I checked this answer. There's a great article on the Old Forge Borehole from a website dedicated to tracking underground mining in the US.  Both the Old Forge and Lackawanna River Wikipedia articles talk about the borehole, all agreeing with and confirming what Gemini told us.  

And, in fact, the Old Forge borehole is the mine drainage site I was thinking of when I posted the Challenge.  


But let's try some other AIs:  

ChatGPT: When given the same prompt, it also did a decent job--though with less detail, but it did include some good images of the polluted river.  

Claude: Gave a bit more detail in the answer, telling us that "By winter 1961-62, a "rainless flood" struck. Water forced its way out of mine entries and mine boreholes, creating a persistent stream that formed a glacier-like formation making roads impassable and encasing some homes..."  That's a good explanation about WHY the borehole was made.  

Grokker: Gave a totally different answer, pointing instead to the "Askam Borehole, located in Hanover Township, Luzerne County, Pennsylvania, near the village of Askam. It discharges into Nanticoke Creek, a tributary of the Susquehanna River."  The Nanticoke Creek also has severely polluted water that's nearly all orange downstream of the borehole.  

DeepSeek: Gave a completely wrong answer, pointing to a hallucinated "Audubon Mine" tunnel and borehole.  However, it ALSO pointed to Shamokin Creek as a point source of pollution, which does indeed have a mine drainage problem turning the creek orange.  (While it also drains into the Susquehanna River, it does so 82 miles / 131 km southeast of Old Forge.)

Image from Google Earth.  Annotations by Dan, showing the change in river water
color downstream of the abandoned mine sites.  


What I find so interesting about the differences in the answers between the AIs is that there are MULTIPLE possible answers to the Challenge.  

Remember that I wrote that we're looking for "...one truly dramatic point-source of pollution that changes the color of an important tributary."  

In fact, there are many point sources of pollution that change the water color.  There's the Old Forge Borehole, the Askam Borehole, the Hughes Borehole, and the Butler Water Tunnel (at Pittston, PA), all of which discharge vast amounts of toxic waste into the watershed.  

Interesting that none of the AIs mentioned that there are multiple mine discharge sites that are polluters, coming with festive acidic water color changes!  

If you do a regular Google search like this: 

     [ list of boreholes in Pennsylvania that are acid mine discharge sources ]

It's easy to find that there are literally hundreds of sites in Pennsylvania that discharge orange acidic water into the local rivers, including at least 100 that discharge into the Susquehanna.  This search led me to the "Abandoned Mines in Pennsylvania" map, which shows all of the abandoned mine sites. Clicking on a site gives details about the condition of the mines and the watery discharge.  

mapmaker.millersville.edu/pamaps/AbandonedMines/ 


SearchResearch Lessons 

The answer I was looking for was the point source from the "Old Forge Borehole," easily found by using most any AI (except Grokker). 

This is great, and a useful arrow to have in your quiver of SearchResearch talents.  

However... the interesting backstory here is that none of the AIs pointed out that there are many, many, many sites that have huge amounts of point-source pollution.  That's an observation you have to make on your own by reading the results carefully.  Our SRS skills show us that there are around 9,000 different point sources of acid mine discharge in the state, many of which turn the clear water a toxic shade of orange. 

Bottom line: Even though AIs often can answer the SearchResearch Challenges (or any question that you might pose in a similar way), the real story is often an observation that you, the human with a larger perspective and a desire to understand the story, can bring to the Challenge.  The AIs will answer the question you pose, but very well might miss the forest for the trees.



Keep searching.  And keep looking for those stories.