Wednesday, May 31, 2023

SearchResearch Challenge (5/31/23): Did they really burn ancient Roman statues?

 Can that be true? 

A scene from 18th century Rome by Giovanni Battista Piranesi. Veduta della Piazza di Monte Cavallo (View of the Piazza del Quirinale with the Statues of the Horse Tamers in side view), from Piranesi's Vedute di Roma. Note the marble columns just ready to be rolled away and repurposed.  These statues are still standing in Rome, notably not broken up, although there's now an obelisk between them.  


Sometimes you read something so astounding that you have to wonder, "Can that possibly be true?" 

I have to admit that this happens to me on a daily basis, and not always about current events.  

Earlier in the week I read this off-the-cuff comment in a recent article in the Atlantic Monthly magazine (My Night in the Sistine Chapel, by Cullen Murphy): 

"For centuries, the bountiful supply of ancient statuary unearthed in Rome had been burned for lime to make mortar..."  

The author makes the point that for centuries, ancient Roman statues were more valuable as a source of raw marble than as beautiful works of art.  (Key insight: marble can be burned at temperatures above 840°C to convert its calcium carbonate into calcium oxide, commonly called quicklime, which is an essential ingredient in mortar and concrete.)  
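(A quick aside for the chemically curious: the reaction is called calcination, and it looks like the first equation below. The second step, slaking, is my addition for context--it's how quicklime becomes the lime that actually binds mortar--and isn't mentioned in the Atlantic article.)

\[
\mathrm{CaCO_3} \;\xrightarrow{\;\approx 840^{\circ}\mathrm{C}\;}\; \mathrm{CaO} + \mathrm{CO_2}\uparrow
\qquad \text{(calcination: marble to quicklime)}
\]
\[
\mathrm{CaO} + \mathrm{H_2O} \;\longrightarrow\; \mathrm{Ca(OH)_2}
\qquad \text{(slaking: quicklime to the lime used in mortar)}
\]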

That threw me.  The image of folks just tossing works of art into the kiln to make quicklime just killed me.  It's the kind of thing that makes you say "really?" 

I did a little SRS and found the answer.  It was a fascinating journey that I thought you might enjoy.  

1. Is that sentence true?  Once upon a time did people in Rome just burn ancient marble statuary in order to make quicklime for construction purposes?  

2. (Just for fun...)  I know of at least one other surprising use of ancient materials for the most prosaic of purposes--can you figure out what that other ancient material is (was)?  

As always, let us know your thought process.  HOW did you figure out your answer?  Let us know so we can learn to be better investigators of the mysterious and puzzling!  

Keep searching! 





Monday, May 22, 2023

SearchResearch on a podcast--"The Informed Life"--Listen now

Podcasts are great! 


You can be out running, walking the dog, or getting in your 10,000 steps, and still listen and learn about the world.  

A few weeks ago I was interviewed by Jorge Arango, host of the podcast "The Informed Life."  He has just dropped the episode on his channel as Episode 114, Dan Russell on the Joy of Search.  (The transcript can be found here.)  

In this cast, we talk about SearchResearch and my book.  I tell a few stories (some of which SRS Regulars will recognize), and wax wise about what makes someone a good searcher (and what doesn't).  

Take a listen, and let us know what you think!  


Keep searching.  




Wednesday, May 10, 2023

Taking a bit of a break...

 It's May... 

P/C Dan, from a lovely trip to Norway.

... and it has been a very full few months since the start of the year.  I'm feeling in dire need of a break, so I'm temporarily going on hiatus for a little R&R.  

My plan is to catch up on a bunch of reading (mostly non-work), go on walkabout, become a bit of a flâneur.  Eat some great food, travel hither and yon... mostly yon.  

If something remarkable catches my eye, I might pop back into SRS for a quick note--but the current plan is to come back here on May 31st with some new observations about the world of online research and the general joy of finding out.   

Have a great month of May!  (A little musical commentary...)  

Keep Searching!  

P/C Dan. The master state of mind being sought. 


Wednesday, May 3, 2023

Answer: How well do LLMs answer SRS questions?

 Remember this? 

P/C Dall-E. Prompt: [ happy robots answering questions rendered in a ukiyo-e style on a sweeping landscape, cheerful ]


Our Challenge was this:  

1.  I'd like you to report on YOUR experiences in trying to get ChatGPT or Bard (or whichever LLM you'd like to use) to answer your curious questions.  What was the question you were trying to answer?  How well did it turn out?  

Hope you had a chance to read my comments from the previous week.  

On April 21 I wrote about why LLMs are all cybernetic mansplaining--and I mean that in the most negative way possible.  If mansplaining is a kind of condescending explanation about something the man has only incomplete knowledge of (made with the mistaken assumption that he knows more about it than the person he's talking to), then that's what's going on, cybernetically.  

On April 23 I wrote another post about how LLMs seem to know things, but when you question them closely, they don't actually know much at all.  

Fred/Krossbow made the excellent point that it's not clear that Bard is learning.  After asking a question, then asking a follow-up and getting a changed response: "Bard corrected the response. What I now wonder: will Bard keep that correction if I ask later today? Will Bard give the same response to someone else?" 

It's unclear.  I'm sure this kind of memory (and gradual learning) will become part of the LLMs.  But at the moment, it's not happening.   

And that's a big part of the problem with LLMs: We just don't know what they're doing, why, or how.  

As several people have pointed out, that's true of humans as well.  I have no idea what you (my dear reader) are capable of doing, or whether you're learning or not... but I have decades of experience dealing with other humans of your make and model, and I have a pretty good idea of what a human's performance characteristics are.  I don't have anything similar for an LLM.  Even if I spent a lot of time developing one, it might well change tomorrow when a new model is pushed out to the servers.  Which LLM are you talking to now?  

P/C Dall-E. Prompt: [ twenty robots, all slightly different from each other, trying to answer questions in a hyperrealistic style 3d rendering ]

What happens when the fundamental LLM question-answering system changes moment by moment?  

Of course, that's what happens with Google's index.  It's varying all the time as well, and it's why you sometimes get different answers to the same query from day to day--the underlying data has changed.  

And perhaps we'll get used to the constant evolution of our tools.  It's an interesting perspective to have.  

mateojose1 wonders: if LLMs are complemented by deep knowledge components (e.g., grafting on Wolfram Alpha to handle the heavy math chores), will we THEN get citations?  

I think that's part of the goal.  I've been playing around with Scite.ai, an LLM for the scholarly literature (think of it as ChatGPT trained on the contents of Google Scholar).  It's been working really well for me when I ask it questions that are "reasonably scholarly," that is, questions with papers that might address the topic at hand.  I've been impressed with the quality of the answers, along with the lack of hallucination AND the presence of accurate citations.  

This LLM (scite.ai) is so interesting that I'll devote an entire post to it soon.  (Note that I'm not getting any funding from them to talk about their service.  I've just been impressed.)  

As usual, remmij has a plethora of interesting links for us to consider.  You have to love remmij's "robots throwing an LLM into space" Dall-E images. Wonderful.  (Worth a click.) 

But I also really agree with the link that points to Beren Millidge's blog post about how LLMs "confabulate not hallucinate."  

This is a great point--the term "hallucination" really means that one experiences an apparent sensory perception of something not actually present.  "Confabulation," on the other hand, happens when someone is unable to explain or answer a question correctly, but answers anyway.  The confabulator (that's a real word, BTW) literally doesn't know whether what they're saying is true, but goes ahead regardless.  That's much more like what's going on with LLMs.  


Thanks to everyone for their thoughts.  It's been fun to read them the past week.  Sorry about the delay.  I was at a conference in Hamburg, Germany.  As usual, I thought I would have the time to post my reply, but instead I was completely absorbed in what was happening.  As you can imagine, we all spent a lot of time chatting about LLMs and how humans would understand them and grow to use them.  

The consensus was that we're just at the beginning of the LLM arms race--all of the things we worry about (truth, credibility, accuracy, etc.) are being challenged in new and slightly askew ways.  

I feel like one of the essential messages of SearchResearch has always been that we need to understand what our tools are and how they operate.  The ChatGPTs and LLMs of the world are clearly new tools with great possibilities--and we still need to understand them and their limits.  

We'll do our best, here in the little SRS shop on the prairie.  

Keep searching, my friends.