Saturday, March 15, 2025

The road to 5 million blog views on SearchResearch!

 Without much notice... 

SearchResearch Overview, as imagined by Gemini

... SearchResearch just passed a major milestone.  We are now officially well over 5 million blog views!  (The actual number today is 5.4 million--I somehow missed the last 400,000 views by not paying attention.)

If you remember, back in mid-December 2015, we crossed over 2 million views.  

I started the blog on January 30th, 2010 (the very first post) and we quickly became a community of interested searchers sharing tips about the obvious and the (incredibly) obscure.  (In the Obscure Hall of Fame: How flowers rotate--March 25, 2010; Zouave uniforms in the Civil War--October 27, 2011; too many others to list here.)  

And, as I'd hoped, I rewrote several of my favorite posts into a book, The Joy of Search (now available in paperback).  It was a real joy to see my book in airports: 

At San Francisco airport

It was also wonderful to be able to visit bookstores and find it on the shelves: 

Found in Kramer's book shop in Washington DC

Or even MORE fun, to visit bookstores and libraries (including the Library of Congress) to speak about The Joy of Search.  Every time I spoke I mentioned our SRS blog community and how incredibly wonderful the experience has been.  

The announcement for my book talk at Books Inc. Thanks, folks!  


In a very real sense, congratulations to you all.  Without your devoted readership, 5.4 million views would not have been possible. 

As you know, I'm working on a new book (working title: "Unanticipated Consequences").  If you want to follow along in that work, subscribe to my Unanticipated Consequences substack.  Maybe I'll get the book out this year.  When I do, you'll be the first to hear about it right here in SearchResearch.  

Forward, to 10 million views!   



Hasta Luego from the SearchResearch Rancho where I'm taking the weekend off to celebrate.  (And work on my book...)  

Keep searching.  

Another view of the SRS Rancho as envisioned by Gemini. Imagine I'm relaxing here.




Friday, March 14, 2025

Answer: Mimicry in plants?

It's a simple question... 


The question this week was pretty straightforward:  


And you probably also know about some insects that mimic plants: 

 

Leaf insect. P/C Wikipedia.


Mimicry is a fairly common trick in the world of living things--mussels mimic fish, flies mimic spiders, fish mimic their environment, walking stick insects mimic sticks, and Viceroy butterflies mimic Monarch butterflies. This is all well known. But...


1. Can a plant mimic another plant?  Can you find an example of one plant that does this? 


Somewhat unexpectedly, the simple query: 

     [ plant that mimics another plant ] 

teaches us that Boquila trifoliolata, a vine common in the temperate rain forests of Chile (and adjacent parts of South America), can somehow mimic the leaves of the plant that it grows on.  It's also called the pilpil; it produces edible fruit, and its stems can be used for making rope.  

P/C Wikimedia image of B. trifoliolata vine mimicking the leaves of the host plant



But knowing THAT leads immediately to a much harder question:  

2. How does the mimicking plant come to be a mimic?  What’s the mechanism by which Plant A comes to look like Plant B?  


This is a bit of a mystery. The pilpil is the only plant species reported to engage not JUST in mimicry, but also in mimetic polymorphism--that is, the ability to mimic multiple host species simultaneously. This doesn't seem to happen in animals--each animal mimics only one other animal at a time. But somehow, the pilpil manages to mimic multiple species at once.


As Wikipedia tells us, this is a form of Batesian mimicry, in which a harmless species mimics a harmful one to ward off predators.


But how does it do the mimicking?


There are a lot of hypotheses about the mechanism (e.g., microbially mediated horizontal gene transfer, volatile organic compound sensing, and the use of eye-like structures), but none of them seems to have panned out.


On the other hand, looking for:


[ leaf variation on single plant ]


leads us to learn about two concepts new to me: heteroblasty and heterophylly.


Heteroblasty is a significant and abrupt change in form and function that occurs over the lifespan of certain plants--like the pilpil changing leaf shape to match its host.


Heterophylly is the production of multiple leaf shapes on a single plant in response to its environment.


As an example, Sassafras (Sassafras albidum) is well known for bearing three distinct leaf shapes on the same plant: oval (unlobed), mitten-shaped (two-lobed), and three-lobed. It's the best-known example of heterophylly.

More relevant to our discussion, holly (Ilex aquifolium) can also make different types of leaves at the same time, even on the same branch--some with prickles, others without.


P/C after Herrera 

But the mechanism of holly heterophylly is pretty well understood. The prickly variations are a response to deer eating the leaves of the plant. When the leaves are damaged (say, by a passing hungry deer), the tissue damage triggers methylation changes in the DNA of the leaves. (Side note: methylation is the process of adding methyl groups onto pieces of large molecules, like DNA, to modify their behavior. This is the way much of epigenetics works.)


By comparing the DNA of prickly leaves vs. smooth leaves, it turns out the prickly ones were significantly less methylated than the smooth ones, suggesting that methylation changes are ultimately responsible for leaf shape changes: less methylation = more prickly leaves. What's more, the methylation change has an effect on nearby leaves: other holly leaves nearby will also develop the prickles, with the effect diminishing with distance.


While the variation in leaf shape has been known for a while, it's now clear that changes in leaf type are associated with differences in DNA methylation patterns--that is, with epigenetic changes that don't depend on changes in the DNA sequence, but result from trauma to the plant.


What does this mean for our friend the shape-shifting pilpil? It demonstrates that changes to leaf shape can be epigenetic (that is, the plant doesn't have to modify its DNA sequence, but just tack on a few extra methyl groups here and there).


That doesn't fully explain the way that pilpil leaves can mimic the host plant, but it does suggest a mechanism for changing the leaf structure.


Another intriguing hypothesis is that there is some kind of "visual sensing" that's going on with the vine. What makes this idea particularly interesting is that Boquila can mimic different hosts on the same vine without direct contact with the model leaves, suggesting some form of distance sensing. If the vine is truly using visual cues, it would be amazing--and completely novel. The big problem here is that nobody seems to be able to (pardon the pun) see any such organs!


A more probable hypothesis is that there is some kind of individual plant recognition, perhaps by sensing volatile organic compounds released by nearby plants.


Kudu grazing on an acacia tree, causing the tree to put out a cloud of ethylene, telling other
nearby acacias that the browsers are here--increase your tannin load.  

Acacia trees, for instance, can detect ethylene emissions from neighboring damaged trees, triggering increased tannin production in their own leaves as a defense against grazing kudu. [Heil, 2010] Other plants do similar things: Arabidopsis thaliana (a small plant in the mustard family) can also detect volatile compounds like methyl jasmonate from neighboring plants that are injured, which triggers its own defensive responses.


So it's not much of a leap to imagine that as a Boquila trifoliolata vine grows from tree to tree, each part of the plant might sense a different host that it's growing on, and invoke different responses--on each different part of the vine--depending on what chemical signals that part of the vine senses. It's also true that mimetic changes appear to be very localized, primarily affecting the leaves within 60 centimeters of the host plant. That's perfect chemical sensing range.


This effect would be mimetic polymorphism at a very fine level of detail.


To broaden my search I asked Claude for:


[ any plant that grows differentially depending on the chemical signals it senses ]


I learned about Centaurea maculosa (spotted knapweed), which detects specific root compounds from competing plants and responds by increasing production of allelopathic compounds (deadly poisons for the competition), essentially tailoring its chemical warfare based on which neighbor it detects.


Obviously, I did a Google Scholar search to verify that claim, and found a wonderfully detailed paper [Kong, et al., 2024] about how the spotted knapweed senses the competition and then emits specific poisons to kill it off!


Just as obviously, I don't know if this hypothesis is correct--we need a good field botanist to do some studies--but it's not crazy.  All of the mechanisms are there and could be the product of evolution.  (It's also very similar to the mechanism proposed by [Gianoli, 2014].)  


It's remarkable what you can learn (and hypothesize about) with some desk research!   


SearchResearch Lessons 


1. As with most complex searches, you have to learn as you go.  Note the new terms we had to learn to answer this question (mimetic polymorphism, methylation, heteroblasty, heterophylly).  Learn as you go in order to get more deeply into the topic.  


2. Interleaving "regular search" with LLMs (e.g., Claude, Perplexity, ChatGPT, etc.) can be really useful.  I was able to learn new terms and concepts by working with the AIs.  As always, be sure to CHECK their work. It's like reading an unreliable narrator in a novel--they're useful, but can't be trusted.  


 


Keep searching!



----

Citations:


Gianoli, E., & Carrasco-Urra, F. (2014). Leaf mimicry in a climbing plant protects against herbivory. Curr Biol, 24(9), 984-987.


Heil, M., & Karban, R. (2010). Explaining evolution of plant communication by airborne signals. Trends in Ecology & Evolution, 25(3), 137-144.


Herrera, C. M., & Bazaga, P. (2013). Epigenetic correlates of plant phenotypic plasticity: DNA methylation differs between prickly and nonprickly leaves in heterophyllous Ilex aquifolium (Aquifoliaceae) trees. Botanical Journal of the Linnean Society, 171(3), 441-452.


Kong, C. H., Li, Z., Li, F. L., Xia, X. X., & Wang, P. (2024). Chemically mediated plant–plant interactions: Allelopathy and allelobiosis. Plants, 13(5), 626.



Thursday, March 6, 2025

SearchResearch Challenge (3/6/25): Mimicry in plants?

 We’ll return to Deep Research next time... 


But for this week, we’ll do a “traditional” SRS Challenge–one that asks a question about the world, leading to a surprising result.


If you’ve been reading SearchResearch for a while you know I’ve got several topics that seem to recur–Egypt is one, fish is another… but another repeating topic is mimicry.  


As you know, mimicry is the ability of a plant or animal to disguise itself as another plant or animal.  Sometimes you see plants looking like insects as we see in the above images.  Here, a Bee orchid (Ophrys apifera) looks enough like a female bumblebee that males get confused.  They try to mate with the floral fake (so-called pseudo-copulation) and get pollen all over their nether regions.  An enthusiastic bumblebee then distributes pollen widely in the area.  


And you probably also know about some insects that mimic plants: 

 

Leaf insect. P/C Wikipedia.


In these virtual pages we’ve talked about mussels mimicking fish, flies mimicking spiders, and fish mimicking their environment. The list goes on and on.


But I wonder… can a plant mimic another plant?  That seems unlikely… how would it manage such a trick? 


1. Can a plant mimic another plant?  Can you find an example of one plant that does this? 


2. How does the mimicking plant come to be a mimic?  What’s the mechanism by which Plant A comes to look like Plant B?  


As always, let us know HOW you found the answers by leaving a comment in the blog.  


Keep searching!     



Thursday, February 27, 2025

Answer: Asking questions of images with AI?

 Image searches are great... 


...  until they don't work. Since skilled researchers use Search-by-Image a fair bit (at least *I* do), it's always useful to understand just how well it works.  And since the LLMs have introduced multimodal capabilities, it's just as important to see how well the new search tools are working.  
 

Last week I gave you 4 full-resolution images that you can download to your heart's delight (with links so you can get the originals, if you really want them).  Here, taken on a recent trip, are 1. my hand; 2. a bottle of wine; 3. a piece of pastry; and 4. a beautiful flower.  So... what ARE these things?  

Our Challenges for this week are: 

1. How good, really, are the different AI systems at telling you what each of these images are?  

2. What kinds of questions can the AI systems answer reliably? What kinds of questions CAN you ask?  (And how do you know that the answers you find are correct?)  

I did several AI-tool "searches" with each of these images.  For my testing, I used ChatGPT, Gemini 2.0 Flash, Meta's Llama, and Anthropic's Claude (3.7 Sonnet).  I'm not using anything other than what comes up as the default when accessed over the web.  (I didn't pay any additional money to get special super-service.)  

I started with a simple [ please describe what you see in this image ], running this query for each image on each of the four LLMs.  Here's what the first row of the resulting spreadsheet looks like (and here's a link so you can see the full details):  

Click to expand to readable size or click the link above to see the entire sheet.

Overall, the LLMs did better than I expected, but there are clear differences between them.  

ChatGPT gave decent answers, getting the name of the pastry correct (including the spelling!), and getting much of the wine info correct. The flower's genus was given, but not the species.  

Gemini gave the most details of all, often 3 or 4X the length of the other AIs. The hand was described in excruciating detail ("no immediately obvious signs of deformity"), and Gemini also got the name of the pastry correct (although misspelled: it's Kremšnita, not Kremsnita).  Again, immense amounts of detail in the description of the pastry, and definitely a ton of information about the wine photo.  Oddly, while Gemini describes the flower, it does NOT identify it.  

Llama does okay, but doesn't identify the pastry or the flower.  For the wine image, it just extracts text, with little other description.  

Claude is fairly similar to ChatGPT's performance, though with a bit longer description.  It also doesn't identify the pastry or the flower.  


You can see the differences in style between the systems by looking at this side-by-side of their answers.  Gemini tends to go on-and-on-and-on... 

Click to see at full-size. This is at the bottom of the sheet.

It's pretty clear that Gemini tries hard to be all-inclusive--a one-query stop for all your information needs.  

Interestingly, if you ask follow-up questions about the flower, all of the systems will make a good effort at identifying it--they all agree it's a Helleborus, but disagree on the species (is it H. orientalis or H. niger?).  

By contrast, regular Search-by-image does a good job with the flower (saying it's Helleborus niger), an okay job with the wine bottle, a good job with the pastry (identifying it as a "Bled cream cake," which is acceptable), and a miserable job with the hand. 

On the other hand...asking an LLM to describe an image is a very different thing than doing Search-by-Image.  

Asking for an image-description in an LLM is like asking different people on the street to describe a random image that you pop in front of them--you get very different answers depending on the person and what they think is a good answer.  

Gemini does a good job on the wine image, telling us details about the wine labels and listing the prices shown on the list.  By contrast, Claude gives much the same information, but somehow thinks the prices are in British pounds, quoting prices such as "prices ranging from approximately £12.50 to £36.00."  (I assure you, the prices were in Swiss Francs, not pounds Sterling!)  So that bit seems to be hallucinated.  

I included the hand image to see what the systems would do with a very vanilla, ordinary image... and to their credit, they said just plain, vanilla, ordinary things without hallucinating much.  (Although Claude did say "...The fingernails appear to have a purple or bluish tint, which could be nail polish or possibly a sign of cyanosis..." I assure you, I'm just fine--neither cyanotic nor consorting with fingernail polish!  It didn't seem to consider that the lighting might have had something to do with its perception.)

And, oddly enough, as Regular Reader Arthur Weiss pointed out, the AIs don't seem to know how to extract the EXIF metadata with GPS lat/long from the image.  If you download the image, you can get that data yourself and find out that the pic of the pastry was in fact taken near Lake Bled in Slovenia.  This isn't just a random cubical cake--it's a Kremšnita!  

Here's what GPS info I see when I download the photo and open it in Apple's Preview app, then ask for "More info."  


Not so far from Lake Bled itself.  
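If you'd rather script the EXIF check than click through Preview, it helps to know that EXIF stores GPS coordinates as (degrees, minutes, seconds) tuples plus an N/S/E/W reference letter, which you have to convert into the decimal degrees that Google Maps expects. Here's a minimal sketch of that conversion--the coordinate values below are hypothetical stand-ins for a spot near Lake Bled, not the photo's actual EXIF, and actually pulling the tags out of a JPEG would need a library such as Pillow:

```python
# Convert EXIF-style GPS coordinates (degrees, minutes, seconds + reference)
# into the decimal degrees that mapping sites expect.
# Sample values below are hypothetical, not taken from the blog's photo.

def dms_to_decimal(dms, ref):
    """dms: (degrees, minutes, seconds) as stored in the EXIF GPS IFD.
    ref: 'N', 'S', 'E', or 'W' (GPSLatitudeRef / GPSLongitudeRef)."""
    degrees, minutes, seconds = dms
    value = degrees + minutes / 60 + seconds / 3600
    # South latitudes and west longitudes are negative in decimal form.
    return -value if ref in ("S", "W") else value

# Hypothetical coordinates in the Lake Bled neighborhood:
lat = dms_to_decimal((46, 21, 45.0), "N")   # -> 46.3625
lon = dms_to_decimal((14, 5, 37.0), "E")
print(f"{lat:.4f}, {lon:.4f}")
```

Paste the resulting "lat, lon" pair into Google Maps and you'll land on the spot where the photo was taken.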


SearchResearch Lessons 

1. No surprise--but keep checking the details--they might be right, but maybe not. I was impressed with the overall accuracy, but errors do creep in.  (The prices are nowhere noted in British pounds.)  

2. If you're looking for something specific, you'll have to ask for it.  The prompt I gave ("describe the image...") was intentionally generic, just to see what the top-level description would be. Overall I was impressed with the AI's ability to answer follow-up questions.  I asked [what was the price of Riveria 2017] and got correct answers from all of them.  That's a nice capability.    

Overall, we now have another way to figure out what's in those photos beyond just Search-by-image.  Try it out and let us know how well it works for you. 


Keep searching! 



Wednesday, February 19, 2025

SearchResearch Challenge (2/19/25): Asking questions of images with AI?

 At the SearchResearch Rancho... 


... we're always asking questions.  Pictures that are taken while traveling are a rich source of questions--mostly, what IS that thing? 

But the questing minds of SRS Regulars always want more.  So today, a question about asking questions--in particular, about the limits of using AI systems to tell you what you're looking at.  Most of the current AIs are multimodal, meaning they can handle images in addition to text.  Let's check this out!  

Below I've attached 4 full-resolution images that you can download to your heart's delight (with links so you can get the originals, if you really want them).  Here, taken on a recent trip, are 1. my hand; 2. a bottle of wine; 3. a piece of pastry; and 4. a beautiful flower.  So... what ARE these things?  


Our Challenge for this week is this: 

1. How good, really, are the different AI systems at telling you what each of these images are?  

2. What kinds of questions can the AI systems answer reliably? What kinds of questions CAN you ask?  (And how do you know that the answers you find are correct?)  

We're really interested in what answers you find, but just as importantly, what answers you do NOT find!  Are there limits on what you can ask?  What are those limits? 

Let us know in the comments section.  

Keep searching! 


Thursday, February 13, 2025

SearchResearch Commentary (2/13/25): Using NotebookLM to help with DeepResearch

 Let me tell you a short story... 

A red-stained schistosoma worm. The male is the small one,
spending his entire life within the groove of the female.  P/C CDC.

As you know, I’m writing another book–this one is about Unanticipated Consequences of the choices we made.  (Here’s my substack that I’m using to try out various sections.) 

And, as you also know, I have a deep interest in Egypt.  So, on a recent visit there, I visited the Aswan High Dam (which we discussed earlier in SRS in conjunction with its creation of Lake Nasser) and thought about what were some of the Unanticipated Consequences of building that massive dam?  This led to our SRS Research Challenge about “What has been the effect of the creation of Lake Nasser on the incidence of schistosomiasis in Egypt?”  


A little background: Schistosomiasis (aka snail fever, bilharziasis) is a disease caused by several species of Schistosoma flatworms--chiefly Schistosoma haematobium, S. japonicum, and S. mansoni. Transmission can occur when humans are exposed to freshwater sources that are contaminated with Schistosoma parasites.


People infected with Schistosoma flatworms shed eggs in their urine or stool. In fresh water, immature flatworms called miracidia hatch from eggs. The miracidia find and infect the tissues of freshwater snails, where they mature into an infective stage known as cercariae. The free-swimming cercariae are released from the infected snail into the surrounding water. Cercariae enter their human host by burrowing through skin that is exposed to contaminated water. Male and female parasites migrate to the liver, the lower intestines, or the bladder, where they release eggs into feces or urine.

Symptoms of schistosomiasis are a result of the body’s immune response to the flatworms and their eggs. Symptoms include itching at the cercariae entry sites, fever, cough, abdominal pain, diarrhea, and enlargement of the liver and spleen. In some cases, infection may lead to lesions in the central nervous system.



In that previous post we did a little comparison of the different Deep Research tools out there.  At least, we looked at how Deep Research search tools might work.  I promised to look at some other analysis tools.  This is my briefing on using Google’s NotebookLM.  

Full disclosure: Some of my Googler friends currently work on NotebookLM. I've tried to not let that influence my writing here.


How NotebookLM works:  The core idea of NotebookLM (NLM) is that you create a notebook on a given topic, download a bunch of sources into it, and then “have a conversation” with your sources.  


Think of it as though you’ve got a research assistant–you give them a pile of content (Google Docs, Slides, PDFs, raw text notes, web pages, YouTube videos, and audio files).  Then you can chat with them about the stuff that’s in there.  It’s as though NLM has just read everything and understands it reasonably well.  This is handy if you’ve got a lot of content to use.  (In the AI biz this is called “Retrieval Augmented Generation,” which means that the text generation process is driven primarily by the contents of the sources you’ve uploaded.)  
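The "Retrieval Augmented Generation" pipeline mentioned above can be sketched in a few lines: retrieve the source chunks most relevant to the question, then build a prompt that forces the model to answer from them. This is only a toy illustration--real systems like NotebookLM rank chunks with learned embeddings and a proper vector index, not the naive word-overlap scoring used here, and the sample sources are made up:

```python
# Toy sketch of Retrieval Augmented Generation (RAG):
# 1) rank source chunks by relevance to the question,
# 2) stuff the best-matching chunks into the prompt,
# so the LLM answers from YOUR sources rather than from its training data.
# (Word-overlap scoring is a stand-in for real embedding-based retrieval.)

def tokenize(text):
    # Naive tokenizer: lowercase and split on whitespace.
    return set(text.lower().split())

def retrieve(question, sources, k=2):
    """Return the k source chunks sharing the most words with the question."""
    q = tokenize(question)
    return sorted(sources, key=lambda s: len(q & tokenize(s)), reverse=True)[:k]

def build_prompt(question, sources):
    # "Augmentation" step: retrieved chunks become the model's context.
    context = "\n".join(retrieve(question, sources))
    return f"Answer using ONLY these sources:\n{context}\n\nQuestion: {question}"

sources = [
    "The Aswan High Dam changed irrigation patterns along the Nile.",
    "Schistosomiasis incidence fell after public health campaigns in Egypt.",
    "Lake Bled is a glacial lake in Slovenia.",
]
prompt = build_prompt("Did schistosomiasis increase after the Aswan dam was built?",
                      sources)
```

Only the two on-topic chunks end up in the prompt; the Lake Bled chunk is left out. That filtering step is why a RAG answer stays grounded in the sources you uploaded.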


So… by using our standard SRS methods, I searched for good online content (mostly PDFs of medical articles, mostly found by using Google Scholar) and uploaded it to NLM.  


I’ll spare you the long story here… but I spent WAY too much time finding low-quality content that initially looked good, but turned out to be puff pieces published in low-quality journals.  (I assume that you, our SRS regulars, know how to find this out!) 


I ended up with a screen that looked like this (the red rounded-rectangles and circled numbers are mine):

Click to enlarge for details

In panel 1, I’ve uploaded 9 different sources–one of which was my SRS post from last month about Egypt.  I picked the other 8 papers as being high-quality studies of the effects of building the Aswan Dam on the incidence of schistosomiasis in Egypt.  

Once I loaded them up, I chatted with NLM about the papers a bit in panel 2 and then asked my key question:  

    [ did the incidence of schistosomiasis increase or decrease after the Aswan Dam was built? ] 

And then: 

    [ What were the predicted effects of the Aswan High Dam on schistosomiasis? ] 

NLM then gave me a quite reasonable answer based on the documents I’d uploaded as sources.  As you can see in the image below, when you roll over a gray citation number, NLM shows the places where it extracted the supporting text.  


Rolling over one of the citation numbers causes a pop-up that shows exactly where the supporting text is in the list of sources.  

Now, I have to admit that I read these 9 papers, so I have a good idea what’s in them.  And I’d written a draft of the answer to that question.  Somewhat to my surprise, the answer was quite good–it’s very much what I would have written (although perhaps not in the style that you see).  Here’s the start of the NLM reply to my question (and a link to the full doc): 


After the Aswan High Dam (AHD) was built, the incidence of schistosomiasis in Egypt decreased overall, despite initial concerns that the dam would cause it to increase [1-4].

  • Initial Concerns: It was predicted that the impoundment of the Nile would create ideal conditions for snails, which are intermediate hosts for the schistosomiasis parasite [5-7]. It was also thought that steady water flow downstream from the dam would increase human contact with the water, leading to more contamination [5, 6].


  • Public Health Measures: The implementation of public health programs played a critical role in reducing schistosomiasis [4, 8-10]. These programs included:

  • Installation of clean fresh water pumps [11, 12].

  • Treatment of canals and drainage ditches with copper sulfate [11, 12].

  • Mass distribution of the drugs metrifonate and praziquantel [11-14].

  • Mollusciciding to reduce snail populations [15].


… etc … 


This is pretty good, but I have a couple of quibbles. 


The citations are both wonderful and annoying.  The wonderful part comes from being a very specific citation.  As you see in the above image, NLM tells you the paper it found AND specifically the supporting text.  That’s really nice. 


On the other hand, you have to go through the note citation-by-citation, one at a time, to figure out that 1, 3, 5, 6, and 8 all refer to the paper “Effects of the Aswan Dam.pdf” in the sources.  And, at this time, I haven’t been able to figure out how to get NLM to give me a full citation list–you know, something that looks like this: 


Abd-El Monsef, H., Smith, S. E., & Darwish, K. (2015). Impacts of the Aswan high dam after 50 years. Water Resources Management, 29, 1873-1885.


I hope that gets added to the list of things NLM will do, but at the moment, it’s kind of a laborious process to figure out the mapping from citation numbers to actual papers. 


But perhaps the most impressive piece of this analysis by NLM was the summary (emphasis by NLM): 


In summary, while the Aswan High Dam did cause a change in the types of schistosomiasis seen in Egypt, and there were initial fears the disease would increase, overall schistosomiasis decreased significantly because of public health interventions [9]


That’s the conclusion I came to as well, and it’s definitely NOT the first thing you’d find with just a simple Google search.  In fact, before the dam was built, an increase in schistosomiasis was predicted by biologist Henry van der Schalie, who wrote: “...there is evidence that the high incidence of the human blood fluke schistosomiasis in the area may well cancel out the benefits the construction of the Aswan High Dam may yield” (Farvar and Milton 1972). It was widely thought that schistosomiasis would increase as farmers converted from basin irrigation to perennial irrigation and so had more water to irrigate with.

But with such predictions widely known, the Egyptian government began a far-ranging program of killing snails, cleaning up waterways, and giving much of the population chemotherapy against schistosomiasis.  


In 1978, soon after the AHD was commissioned, a baseline epidemiological survey of over 15,000 rural Egyptians from three geographical regions of Egypt (Nile Delta, Middle Egypt, and Upper Egypt), plus the resettled Nubian population, showed that the prevalence of schistosomiasis was 42% in the North Central Delta region, 27% in Middle Egypt, and 25% in Upper Egypt.  That sounds bad, but it’s a massive reduction from earlier disease rates.

So, overall, the control program worked pretty well, with infection rates dropping to record lows.  

The thing is, the predictions about the consequences of building the dam were right–but Egypt counteracted the expected increase by anticipating those consequences and being proactive about fixing them. 


Wait!  There’s more: 

In addition to this chat / query process, you can also ask NLM to create a study guide, an overall summary, or a briefing doc.  NLM gives you prompts to use (roughly “Create a study guide from these sources…” or “Summarize these sources in a readable format.”)  

As an expert SearchResearcher, you can also chat with just one document to get JUST that document’s perspective (and, implicitly, that author’s point-of-view).  

Give this a try!


SearchResearch NotebookLM tips:

  1. Make sure the content you upload as sources is high quality. If you upload low-quality content, that will surface in the answers NLM gives you.

  2. If you lose track of where a source you uploaded to NLM came from, it’s a pain to restore the connection–you have to remember it.  Be sure the uploaded source has the identifying info in it.  (I always put the citation and the original source URL at the top of the source text.) 

  3. If you ask a question about something that’s NOT covered by the sources, the quality will drop.  Stay on topic and try to not overdrive your headlights.   

  4. I haven't tried using NLM without carefully selecting the sources that go into the collection, but I can imagine a use-case where you add multiple sources in and then ask for the plus-and-minus analysis. That's an interesting experiment--let us know if you try it.





Keep searching.