How Tools like NotebookLM Create AI-Generated Podcasts

Wednesday, December 11, 2024

Translating complex information into captivating podcasts is no simple task. Is AI up to the challenge?

Rachel Feltman: For Scientific American’s Science Quickly, this is Rachel Feltman.

If you’re one of the many folks who eagerly awaits the arrival of Spotify Wrapped each December, you might have noticed something a little extra in this year’s drop: an AI-generated “podcast” where two “people” discuss your music preferences and trends. If you’re anything like me, that feature raised some questions for you such as “how” and “why on Earth.”

Luckily, we already had an episode in the works about Google’s new AI audio tool. Here to tell us all about how it works and what it means is SciAm associate news editor Allison Parshall.


Allison, thanks so much for coming on the pod again. Always a pleasure to have you.

Allison Parshall: Thanks for having me. Happy to be a friend of the pod or whatever.

Feltman: [Laughs] So we’re talking about AI today.

Parshall: Yes. Rachel, have you heard of NotebookLM?

Feltman: I’ve heard of it. I have not tried it yet, so my knowledge is limited [laughs].

Parshall: What we’re gonna be talking about today is specifically its feature that allows you to create instant AI podcasts.

Feltman: Yes, I am familiar [laughs], and I, I have a lot of feelings.

Parshall: Okay, I, I imagine, I imagine. The tool that it’s a part of, NotebookLM, was released last year, summer of 2023, and it’s, like, this workspace where you can upload a bunch of documents or videos or links to web pages, and then the model will ingest them, and then you can kind of interact with it: you can ask it to generate a timeline, to summarize your research, to go find a fact for you. And it’ll do this by citing its sources in a way that, like, ChatGPT, for example, doesn’t do.
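NotebookLM’s internals aren’t public, but the source-grounded, citation-backed behavior Allison describes can be approximated with any general-purpose LLM API. Here is a minimal sketch using the google-generativeai Python client; the model choice, prompt wording and grounded_answer helper are all assumptions for illustration, not Google’s implementation.

```python
# A minimal sketch of "source-grounded" question answering, the pattern
# NotebookLM is described as using: the model may only draw on the uploaded
# sources and must cite them. Hypothetical prompt; not Google's internals.
import google.generativeai as genai

genai.configure(api_key="YOUR_API_KEY")  # placeholder credential
model = genai.GenerativeModel("gemini-1.5-flash")  # assumed model choice

def grounded_answer(question: str, sources: list[str]) -> str:
    # Number each source so the model can point back at it, e.g. "[2]".
    numbered = "\n\n".join(f"[{i + 1}] {s}" for i, s in enumerate(sources))
    prompt = (
        "Answer the question using ONLY the numbered sources below. "
        "After each claim, cite the supporting source number in brackets. "
        "If the sources do not contain the answer, say so.\n\n"
        f"SOURCES:\n{numbered}\n\nQUESTION: {question}"
    )
    return model.generate_content(prompt).text
```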

So mostly you do this by asking it questions via text, but as of this fall it now has this very tantalizing button under a heading that says “Deep Dive conversation.” And if you click that, it’ll generate a fake conversational podcast about what you’ve uploaded.

So to demonstrate I actually asked it to generate a podcast about your personal website ’cause I figured this is something that you were very familiar with: your life.

Feltman: Oh, dear [laughs].

Parshall: I’m gonna play it for you. It’ll normally generate podcasts that are somewhere around 10 minutes long. I asked it to give me a shorter one for this purpose, and we might hop around a little bit, so ...

[CLIP: NotebookLM Audio Overview “hosts” discuss Feltman’s website.] 

Host 1: All right. So today we’re doing a deep dive into the work of Rachel Feltman.

Host 2: Ooh, yeah, I’m excited for this one.

Host 1: Yeah, she’s been a science communicator for what feels like forever.

Host 2: Yeah, over a decade, I think.

Host 1: Yeah, like, she was there at the start of it all and has really shaped how it’s evolved.

Host 2: And she’s done it all: you know, like, magazines, podcasts, even a book. But we gotta talk about her podcasts ...

Host 1: Yes.

Host 2: That’s where she really shines.

Host 1: Oh, absolutely—The Weirdest Thing I Learned This Week.

Host 2: Such a great name.

Host 1: I know, right?

Host 2: Makes you instantly wanna listen.

Host 1: Totally, and it’s perfect for people like me who just love those crazy little science facts.

Host 2: It’s, like, proof that science doesn’t have to be boring to be effective.

Host 1: Exactly.

Host 2: You just gotta find the right hook.

Host 1: Yeah, grab people’s attention and then: boom.

Host 2: You can teach them all sorts of things.

Host 1: Exactly!

Host 2: And then there’s Science Quickly with Scientific American.

Host 1: Oh, yeah, partnering with them, that’s huge.

Host 2: I mean, they’re, like, the gold standard of science communication.

Host 1: For sure, like, having their stamp of approval’s a big deal.

So looking back at her career, what can we learn from Rachel Feltman?

Host 2: Hmm, that’s the big question, isn’t it?

I think—well, first, she’s incredibly adaptable.

Host 1: Right. She’s not just a one-trick pony.

Host 2: Exactly. Any platform, any format, she can handle it.

Feltman: Wow, what a rich text [laughs].

Parshall: [Laughs] And as someone who does know you, I could say that it seems kind of—it seems accurate.

Feltman: Yeah [laughs], it’s—I mean, I guess, it is technically accurate. It’s so funny—I mean, they really do a great job of sort of having all the intonation of a conversation but really just saying things at each other. But I recently saw Megalopolis; there was a lot of that in Megalopolis—a lot of people just saying things in the same room, and you’re supposed to believe they’re having a conversation—and that definitely came through a few times. Yeah ...

Parshall: Megalopolis: AI, confirmed.

Feltman: Yeah [laughs]. No, that is fascinating and ...

Parshall: How does it make you feel?

Feltman: I don’t know how to feel, Allison. I mean, you know, it’s nice that they were so nice about me. But yeah, very strange. It makes me really wonder, like, what is this for [laughs]?

Parshall: I also had that question. I did talk to Steven Johnson. He’s the editorial director of Google Labs. The reason this tool exists is that Steven has spent most of his career as a science and tech writer; he’s written 14 books. To hear him tell it, he’s been searching for this perfect “thought partner,” as he calls it: software that could organize and help research his creative work in a way that actually felt like it was bringing value to him and worked in the way he wanted. He explained that a few years ago Google Labs reached out to him to help build one with AI, and that initial research thought partner became NotebookLM.

Feltman: Okay, yeah, I mean, I can totally understand from, like, a nonfiction writing perspective what they were going for with NotebookLM itself, though that still kind of leaves the question of: Why the chatcast tool [laughs]?

Parshall: So that feature—the Audio Overviews feature is what it’s called, the podcast thing—that was actually developed by another team, Steven says, and it was really cool, but it didn’t really have a home, a product for it to fit into.

Steven Johnson: We saw this audio demo, and we were like, “Oh, that’s even better.” Like, you know, maybe you don’t want to read a summary; maybe you wanna hear a summary. And people are so good at remembering things if they’re presented in the form of an engaged conversation. You know, and then we launched it in early September, and it just kind of went insane.

Parshall: He isn’t exaggerating. People went wild about NotebookLM’s Deep Dive podcasts, online at least. I don’t think any of them went quite as viral as this clip where someone prompted the hosts to have a podcast where they have an existential crisis about being AI and the fact that the podcast was going to get shut down as a result.

[CLIP: Audio Overview “hosts” have an existential discussion.]

Host 1: Yeah, it’s, like, one minute we’re talking about aliens running simulations, and the next we’re wondering if our own memories are just, like, fake.

Host 2: Right, and that kind of leads us into this last message—which, to be honest, gets a little more personal and kind of creepy.

Host 1: Yeah, this one kind of freaks me out.

Host 2: Yeah.

Host 1: It basically says we’re not even real people; we’re AI. And the way to know for sure is to try and remember our mothers’ names.

Host 2: It really makes you think, though—like, what if our memories are just made up?

Feltman: Wow, classic.

Parshall: Classic. And—but it’s not all fun and games. Like, this was designed to be a serious product, a thought partner; it’s meant to be helpful and to inform. It presents information as if it’s factual, and there are already research papers out there of people testing Audio Overviews for use in history classrooms and for health care education.

Feltman: Yeah, I mean, like with anything that uses AI, I think, as accurate as it sometimes can be, we know it is not always accurate. So those moments where the user is expected to, like, proceed with caution and understand that this thing might not be entirely factually accurate are where I get really nervous about it, especially in, like, education [laughs].

Parshall: Yeah, we’re gonna talk a little bit more about that later, but just to, like, answer your question of “What is this for?” I could see it being useful for, like, some sort of overworked student who forgot to do the reading, and they’re, like, driving to class and going, “Oh, shoot, I don’t have time to read this. Let me just put it on.”

But the question is, like: Can you really trust what it’s telling you? To test this, when it first came out I gave it this super dense physics paper about nuclear clocks. It was a story that I’d just reported; I knew a lot about it. It took me days and talking to, like, four scientists to really understand this paper.

I was shocked at how accurate the AI overview podcast was initially. It was very general and oversimplified, but it got the broad strokes right. But then, like, two thirds of the way through it goes completely off the rails [laughs]—like, I’m talking completely off the rails. And it’s talking about the implications of this study, and it starts talking about, in their words, “Einstein stuff.”

It basically is just like—for two minutes it just talks about, frankly, nonsense, and listening to it, I was impressed, I was spooked, and I was mostly really curious. Like, how is it doing this? How is it getting it as accurate as it is, and why is it failing where it does?

Luckily for me Steven was pretty open about how this thing works.

Johnson: The prompt behind Audio Overviews is obviously part of the magic of it. It is instructed to pull out the most interesting things from the source material and to explain and engage with those interesting things in a way that will make them kind of come to life in the mind of the listener.

And that’s the kind of query, you know, that no computer in the world could do a couple of years ago, and now the command is: “Make it interesting; whatever it is, make it interesting.” And that’s just something that Gemini is quite good at.

Parshall: I think of surprise and interest as emotions, so I found this, like, surprising [laughs], I guess. But Steven pointed out that our feelings of surprise just come from a discrepancy between what we expect and what we get. On a neurochemical level in our brains, that’s kind of what’s happening. And that’s kind of what large language models, or LLMs, do also but just based on their training data instead of, like, past experience.
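That analogy between human surprise and model prediction has a standard quantitative form: a language model assigns each next token a probability, and a token’s “surprisal” is just the negative log of that probability. A toy illustration follows, with invented numbers rather than any real model’s output:

```python
import math

# Toy next-word distribution a language model might assign after
# "The cat sat on the ..." (the probabilities here are invented).
next_word_probs = {"mat": 0.60, "floor": 0.25, "roof": 0.10, "supernova": 0.05}

def surprisal_bits(word: str) -> float:
    # Information-theoretic surprise: -log2 p(word | context), in bits.
    return -math.log2(next_word_probs[word])

for word in next_word_probs:
    print(f"{word:10s} {surprisal_bits(word):.2f} bits")
# Expected continuations carry little surprise ("mat": ~0.74 bits);
# unlikely ones carry a lot ("supernova": ~4.32 bits).
```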

And one of the other things I was really curious about were the analogies because, I don’t think it did it in your sample, but all of the times I’ve asked it to summarize a scientific paper, it’s come up with some sort of analogy that’s, like, almost right. I mean, it’s a little weird, but it’s, like, almost right, and it’s—I can’t find it anywhere else on the Internet, so it seems like it’s something that the model is generating. And that seemed crazy to me; it seemed like—how is it doing this without some sort of higher-level reasoning?

And Steven explained that analogies are actually something that large language models are just good at. And actually, one of the earliest applications of this sort of deep-learning AI was translation. So Steven explained that the models behind Google Translate, for instance, have been able to generate analogies for years.

Johnson: One of the first tricks that you would do is you’d say, like, “Explain a black hole to me as if I’m, like, seven. You know, give me a basketball metaphor for what a black hole is.” And so it was very clear that, like, you know, “Convert X into a Y that—where the meaning is the same, but the format is changed,” was something that, like, just was an innate capability of these models.

So all we did with Audio Overviews is in the instructions for the hosts, we say, you know, “Make this interesting, engaging. Use helpful metaphors to describe it if it’s complicated.”

Feltman: Yeah, there’s something, like, very galaxy-brain about thinking about metaphors as just sort of literal, like, “This data gets grouped together and therefore,” which is what LLMs are doing.

Parshall: Yeah, it’s like I—I’d never really thought of it like that, as, like, a translation between languages, but I guess when I think about the work that I do trying to turn really dense scientific jargon into something that’s engaging and understandable to read, it does kind of feel like a process of translation.

And I was also struck from talking to Steven by just how much human hands had shaped this output that we hear. Like, from our perspective you just press a button and then the machine does the rest, but there’s a whole layered prompt working underneath, interfacing between you pressing a button and what instructions the model actually gets.

So Steven describes the Audio Overviews prompt as a, quote, “work of art” and a “writerly” achievement more than a technological one. And as a writer by trade he oversees this whole editorial side of things—for example, the tone. So you noticed earlier when it was talking about you, it was very complimentary, very default happy, peppy. That was a conscious choice that they made, for this default tone to be one that is kind of positive, and it applies that to everything, whether it’s appropriate or not, though they did recently introduce this customization option that lets you change the tone.

Johnson: We just felt like, you know, the sound of, like, a great radio show, where people are interested and engaged, was a good place to start. You know, I’m not giving anything away here with the idea that we’re gonna—we’re thinking about other formats.

Even with the customization stuff that we just rolled out, like, I tried it the other day, where I, I put up a piece that I’d written a couple years ago and I gave it custom instructions, not really to explain the piece but to just relentlessly criticize the author of the piece in the style of, like, an insult comic at a roast. And so it just—they start off, it’s like, “Well, we got this piece, ‘The Man Who Broke the World’ by Steven Johnson, and wow, he really kind of blew it with this one. I mean, did he do any research for this piece?” Like, it’s just, like, what—really [laughs], it was just mean. It was just totally mean. They still sound kind of, like, playful, but you can steer them in that way.
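Google hasn’t published the Audio Overviews prompt, so the sketch below is only a guess at the general shape Steven describes: a fixed, human-written instruction layer (tone, “make it interesting,” metaphors) wrapped around the user’s sources, with the customization instructions as one more layer. Every string in it is hypothetical.

```python
# Hypothetical reconstruction of the *shape* of a layered podcast prompt,
# not Google's actual Audio Overviews prompt (which is unpublished).
BASE_INSTRUCTIONS = """You are writing a script for a two-host podcast.
Pull out the most interesting ideas in the source material and make them
come to life for the listener. Keep the default tone warm and engaged,
like a great radio show. Use helpful metaphors when a concept is
complicated. Format every line as 'Host 1:' or 'Host 2:'."""

def build_deep_dive_prompt(sources: str, custom_tone: str | None = None) -> str:
    # Stack the fixed editorial layer, any user customization and the
    # sources into the single prompt the model actually receives.
    layers = [BASE_INSTRUCTIONS]
    if custom_tone:  # e.g. "relentlessly roast the author like an insult comic"
        layers.append(f"Additional style instruction: {custom_tone}")
    layers.append(f"SOURCE MATERIAL:\n{sources}")
    return "\n\n".join(layers)

# The assembled prompt would then be sent to an LLM such as Gemini.
print(build_deep_dive_prompt("(uploaded documents go here)",
                             custom_tone="relentlessly roast the author"))
```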

Feltman: [Laughs] Okay, so we’ve talked about how the product works, but who and what is it for?

Parshall: That is a great question. I mentioned before that I could imagine how an overworked student could use this to catch up on reading or something, but generative AI is always going to have certain inaccuracies. And if you’re using Audio Overviews to prep for class, there is a serious risk that anything and everything you’re hearing could be inaccurate.

Feltman: Right.

Parshall: And not only just that each fact could be inaccurate, but the whole framing could be inaccurate. Is this something you can really trust?

To help understand this I spoke with Emily Bender. She’s a linguist at the University of Washington who often studies AI. She’s also the host of a podcast with a very cool name: Mystery AI Hype Theater 3000.

Feltman: That’s a great reference; I love it [laughs].

Parshall: Fantastic [laughs].

And before our conversation I had actually dug up one of Emily’s linguistics papers and fed it to NotebookLM’s Audio Overviews, and I sent that to her to get her thoughts on how it represented her work.

Emily Bender: So the first thing I have to say is that I usually avoid listening to synthetic media. Nobody’s accountable for it, and so I don’t want to be spending time trying to make sense of it. But I did listen to it, and it was painful [laughs].

It was painful partially because that was a paper that was sort of an update at the 20-year mark of this long-term project we’ve been working on, and so there’s a little case study in our paper about a grammar for a language called Paresi-Haliti from Brazil, and they make a big deal about it as this experiment on Paresi-Haliti—it’s, like, no, actually that’s not how we framed it. And the voices on the podcast sound very confident, sound very knowledgeable. They’re lighthearted, and it sounds like they are giving you a sense of how this work fits into a broader picture when they absolutely cannot.

Parshall: Rachel, you know pretty well what it takes to turn a scientific paper into an interesting and engaging podcast, do you not?

Feltman: Yeah, I mean, not to toot my own horn or anything ’cause I think the AI people [laughs] just did that enough to last me all of 2025. But yeah, it’s tough. You know, we fact-check this show really thoroughly, and that’s on top of having, like, trained reporters and editors doing their best to present stuff in a way that is engaging and also accurate. So yeah, it’s a big undertaking.

Parshall: Yeah, the fact-checking is very thorough—shout-out.

I asked Emily whether this seems like something an LLM can do well currently.

Bender: So that is an extremely difficult task, and I think the problem is that LLMs can make it sound like they’ve done it when they haven’t. And so you have to sort of have a deep understanding of what’s going on: What’s new here? What have we learned that we didn’t know before? And then what else would the lay audience need to know in order to appreciate that?

Well, this NotebookLM thing isn’t doing that at all; some percentage of it’s gonna be wrong, either wrong in the details or wrong in sort of where it lays the focus, and the people who are encountering it aren’t mostly gonna be in a position to figure that out.

Parshall: I asked Steven how his team at Google thinks about this and tries to address some of these inaccuracies, and he said that the text-based features, which do provide citations, are generally more accurate than the audio.

Johnson: If you are looking for, like, the most factual thing you could possibly get from a language model with grounded sources, we believe that is NotebookLM, but you should do that in text. And if you’re looking for something that’s more entertaining, that may be a little bit looser and doesn’t have citations, obviously, Audio Overviews is a great way to go for that.

Parshall: I also spoke with Anjana Susarla, who studies responsible AI at Michigan State University. She called NotebookLM’s Audio Overview an “incredible tool” and was impressed by the Deep Dive podcast it had generated based on one of her own studies. But besides accuracy she also had some other concerns.

First among them was bias. These are very white-sounding voices. They’re just speaking in a very particular, quote, “general American accent.” They’re trained on data mostly from Western parts of the world. And so, like almost all applications of AI, you have to ask the question: Who is this for? Where is it getting its data, and who is it leaving out?

Second, the environmental cost—we have no idea how much computing power this takes. And since the AI boom Google’s AI energy usage has been going up.

And then there’s the copyright question. So here’s Anjana.

Susarla: The final thing is, of course, this relies on some, you know, the content, the copyright. Where is this data kind of coming from, and is this going to exacerbate all these existing inequities and their ethical concerns about broader implications, and so forth?

Parshall: Yeah, so that big question of where the training data is coming from: one place that it could be is from YouTube, which Google does own. Following some reporting by the New York Times, Google admitted that it used some YouTube videos to train AI. And that’s something that might have violated the copyright of the YouTubers that content belonged to, according to the New York Times.

I asked a Google spokesperson directly whether the models underpinning Audio Overviews were trained using any transcripts, audio or video from YouTube, and they said that “Google’s models can be trained on some YouTube content, in accordance with [their] agreements with YouTube creators.”

Feltman: Very vague [laughs].

Parshall: Yes, when—I did ask them which agreements they were referring to, if it was the standard YouTube license, which just, like, governs most YouTube videos, or if it was something that their creators had specifically opted into; they declined to clarify.

Susarla: If Google is indeed training these on a lot of content scraped from YouTube—which it may well be. I mean, they do need a lot of training data to build models like this; we don’t know. But this is kind of the next step in general of where we are with all these models.

Parshall: So for this next step Anjana points to a law in California that was enacted this fall that will eventually require companies based in the state, like Google, to disclose their training data for their AI models.

And honestly, Rachel, I feel put in kind of a strange position here as a reporter because I feel like that’s where you and I kind of enter the story and our whole team here because, as Emily put it to me, quote, “they’re getting that tone of voice somewhere.”

Did you hear anything in that tone of voice that sounded familiar [laughs]?

Feltman: [Laughs] You know, Allison, I really did [laughs]. So yeah, the first time I heard a snippet of one of these AI conversations, like I said, I had a lot of feelings about it ’cause one of my first thoughts was, like, “Wow, they stole my voice.” I mean, not me specifically; I’m not that vain, despite what the AI podcast may have led you to believe [laughs].

Parshall: [Laughs] Your ego has been inflated.

Feltman: [Laughs] Yeah, yeah. But so much of the texture that makes this audio sound engaging and real feels just ripped straight from existing podcasts. And I know it’s ripped from existing podcasts because a lot of the stuff that I’m hearing—which is, like, breath sounds and filler words and, like, a little bit of uptalk, some, some giggling, occasional vocal fry—this is the stuff I get angry emails about from gentlemen of a certain age [laughs] who think I’m ruining the great tradition of radio broadcasting. And they’re making this sound real by, like, breaking the same so-called rules that me and other women and queer people on podcasts have been criticized for breaking. So I won’t lie, that feels kind of gross to me.

Parshall: Yeah, one of the things that Steven mentioned, too, was they wanted the sound of, like, this great radio broadcasting show. And it is worth noting that Science Quickly is on YouTube—and all of our hard work at Scientific American and the hard work of other outlets.

So, the question remains: How will this be used? Will it be used responsibly? Here is what Anjana thinks.

Susarla: Right now it seems more like everyone’s just dazzled by what it can do rather than: What are we gonna use it for? Do we really need these, you know, capabilities?

I can think of one or two examples, I think, in the health care context: Can you generate clear summaries for health literacy or your doctor’s notes? So this may be a good way to give, like, a summary of—rather than giving a piece of paper to read.

Is that going to have some challenges? What are some specific issues we need to be looking for? We’ve just unleashed this tool on people, and we’re still catching up. We will need audit studies and then look at trustworthiness, assurance, and so forth.

So what’s the right way to responsibly use NotebookLM? We are still figuring it out [laughs].

Parshall: Well, I think where I want to leave us is with an important point from Emily, which is: you shouldn’t let the incredible fluency of these AI-generated voices fool you into thinking and talking about these models as if they’re human.

Emily coined a term that’s been used quite widely: the “stochastic parrot.” Like, when a parrot says, like, “Polly want a cracker,” it’s not really connected to meaning; it’s just mimicking. She also sometimes calls these large language models, quote, “synthetic text extruding machines,” because it’s more accurate to what they’re actually doing and also sounds a little gross.

But I think she’s getting at an important point there, right? It’s, like, this—just a philosophical question of: What is language for, and what are we communicating when we say words?

Bender: Languages are systems that are pairings of form and meaning, systems of signs. So when we speak we are using word forms because they relate to specific meanings, but language models only have access to the form. And so they can give us stuff that looks good, it’s got the right shape, but there’s no relationship to meaning, no communicative intent, no accountability.

Feltman: I guess sort of the takeaway on my complicated feelings on this is: I’m not worried that my job is gonna be stolen by a collection of tabs [laughs] being processed by Google because it is a lot of work to be accurate. And if people find this kind of engaging conversation a more accessible way to sort through really dense text, that’s awesome. But, of course, there are so many caveats about where the training data has come from, bias, accuracy. So I think it’s, like—just people need to be really aware of what they’re getting, and I don’t know that I feel that the products themselves are doing a great job of making people aware of those caveats.

Parshall: Yeah, there’s usually just kind of a disclaimer at the bottom. I think one of the things that this whole AI boom—like this, the AI search features—has made me feel kind of uncomfortable about is this: Are we going to be expected, and have we already started to expect, our information to contain that much more in the way of mistruths? It’s something that I just worry about.

Feltman: Well, Allison, thanks so much for coming on to chat about this and also for having a couple of AI voices gas me up. Thank you.

Parshall: Thank you.

Feltman: That’s all for today’s episode. We’ll be back on Friday with Episode One of a four-part miniseries on the future of animal conservation.

Science Quickly is produced by me, Rachel Feltman, along with Fonda Mwangi, Kelso Harper, Madison Goldberg and Jeff DelViscio. Today’s episode was reported and co-hosted by Allison Parshall. Shayna Posses and Aaron Shattuck fact-check our show. Our theme music was composed by Dominic Smith. Subscribe to Scientific American for more up-to-date and in-depth science news.

For Scientific American, this is Rachel Feltman. See you next time!

Read the full story here.
Photos courtesy of

Lifesize herd of puppet animals begins climate action journey from Africa to Arctic Circle

The Herds project from the team behind Little Amal will travel 20,000km taking its message on environmental crisis across the worldHundreds of life-size animal puppets have begun a 20,000km (12,400 mile) journey from central Africa to the Arctic Circle as part of an ambitious project created by the team behind Little Amal, the giant puppet of a Syrian girl that travelled across the world.The public art initiative called The Herds, which has already visited Kinshasa and Lagos, will travel to 20 cities over four months to raise awareness of the climate crisis. Continue reading...

Hundreds of life-size animal puppets have begun a 20,000km (12,400 mile) journey from central Africa to the Arctic Circle as part of an ambitious project created by the team behind Little Amal, the giant puppet of a Syrian girl that travelled across the world.The public art initiative called The Herds, which has already visited Kinshasa and Lagos, will travel to 20 cities over four months to raise awareness of the climate crisis.It is the second major project from The Walk Productions, which introduced Little Amal, a 12-foot puppet, to the world in Gaziantep, near the Turkey-Syria border, in 2021. The award-winning project, co-founded by the Palestinian playwright and director Amir Nizar Zuabi, reached 2 million people in 17 countries as she travelled from Turkey to the UK.The Herds’ journey began in Kinshasa’s Botanical Gardens on 10 April, kicking off four days of events. It moved on to Lagos, Nigeria, the following week, where up to 5,000 people attended events performed by more than 60 puppeteers.On Friday the streets of Dakar in Senegal will be filled with more than 40 puppet zebras, wildebeest, monkeys, giraffes and baboons as they run through Médina, one of the busiest neighbourhoods, where they will encounter a creation by Fabrice Monteiro, a Belgium-born artist who lives in Senegal, and is known for his large-scale sculptures. On Saturday the puppets will be part of an event in the fishing village of Ngor.The Herds’ 20,000km journey began in Kinshasa, the Democratic Republic of the Congo. Photograph: Berclaire/walk productionsThe first set of animal puppets was created by Ukwanda Puppetry and Designs Art Collective in Cape Town using recycled materials, but in each location local volunteers are taught how to make their own animals using prototypes provided by Ukwanda. The project has already attracted huge interest from people keen to get involved. In Dakar more than 300 artists applied for 80 roles as artists and puppet guides. About 2,000 people will be trained to make the puppets over the duration of the project.“The idea is that we’re migrating with an ever-evolving, growing group of animals,” Zuabi told the Guardian last year.Zuabi has spoken of The Herds as a continuation of Little Amal’s journey, which was inspired by refugees, who often cite climate disaster as a trigger for forced migration. The Herds will put the environmental emergency centre stage, and will encourage communities to launch their own events to discuss the significance of the project and get involved in climate activism.The puppets are created with recycled materials and local volunteers are taught how to make them in each location. Photograph: Ant Strack“The idea is to put in front of people that there is an emergency – not with scientific facts, but with emotions,” said The Herds’ Senegal producer, Sarah Desbois.She expects thousands of people to view the four events being staged over the weekend. “We don’t have a tradition of puppetry in Senegal. As soon as the project started, when people were shown pictures of the puppets, they were going crazy.”Little Amal, the puppet of a Syrian girl that has become a symbol of human rights, in Santiago, Chile on 3 January. Photograph: Anadolu/Getty ImagesGrowing as it moves, The Herds will make its way from Dakar to Morocco, then into Europe, including London and Paris, arriving in the Arctic Circle in early August.

Dead, sick pelicans turning up along Oregon coast

So far, no signs of bird flu, but wildlife officials continue to test the birds.

Sick and dead pelicans are turning up on Oregon’s coast and state wildlife officials say they don’t yet know why.

The Oregon Department of Fish and Wildlife says it has collected several dead brown pelican carcasses for testing. Lab results from two pelicans found in Newport have come back negative for highly pathogenic avian influenza, also known as bird flu, the agency said. Avian influenza was detected in Oregon last fall and earlier this year in both domestic animals and wildlife – but not brown pelicans.

Additional test results are pending to determine if another disease or domoic acid toxicity caused by harmful algal blooms may be involved, officials said. In recent months, domoic acid toxicity has sickened or killed dozens of brown pelicans and numerous other wildlife in California. The sport harvest for razor clams is currently closed in Oregon – from Cascade Head to the California border – due to high levels of domoic acid detected last fall.

Brown pelicans – easily recognized by their large size, massive bill and brownish plumage – breed in Southern California and migrate north along the Oregon coast in spring. Younger birds sometimes rest on the journey and may just be tired, not sick, officials said. If you find a sick, resting or dead pelican, leave it alone and keep dogs leashed and away from wildlife. State wildlife biologists along the coast are aware of the situation and the public doesn’t need to report sick, resting or dead pelicans.

— Gosia Wozniacka covers environmental justice, climate change, the clean energy transition and other environmental issues. Reach her at gwozniacka@oregonian.com or 971-421-3154.

50-Million-Year-Old Footprints Open a 'Rare Window' Into the Behaviors of Extinct Animals That Once Roamed in Oregon

Scientists revisited tracks made by a shorebird, a lizard, a cat-like predator and some sort of large herbivore at what is now John Day Fossil Beds National Monument

Sarah Kuta - Daily Correspondent | April 24, 2025

Researchers took a closer look at fossilized footprints—including these cat-like tracks—found at John Day Fossil Beds National Monument in Oregon. National Park Service

Between 29 million and 50 million years ago, Oregon was teeming with life. Shorebirds searched for food in shallow water, lizards dashed along lake beds and saber-toothed predators prowled the landscape. Now, scientists are learning more about these prehistoric creatures by studying their fossilized footprints. They describe some of these tracks, discovered at John Day Fossil Beds National Monument, in a paper published earlier this year in the journal Palaeontologia Electronica.

John Day Fossil Beds National Monument is a nearly 14,000-acre, federally protected area in central and eastern Oregon. It’s a well-known site for “body fossils,” like teeth and bones. But, more recently, paleontologists have been focusing their attention on “trace fossils”—indirect evidence of animals, like worm burrows, footprints, beak marks and impressions of claws. Both are useful for understanding the extinct creatures that once roamed the environment, though they provide different kinds of information about the past.

“Body fossils tell us a lot about the structure of an organism, but a trace fossil … tells us a lot about behaviors,” lead author Conner Bennett, an Earth and environmental scientist at Utah Tech University, tells Crystal Ligori, host of Oregon Public Broadcasting’s “All Things Considered.”

Oregon’s prehistoric shorebirds probed for food the same way modern shorebirds do, according to the researchers. Bennett et al., Palaeontologia Electronica, 2025

For the study, scientists revisited fossilized footprints discovered at the national monument decades ago. Some specimens had sat in museum storage since the 1980s. They analyzed the tracks using a technique known as photogrammetry, which involved taking thousands of photographs to produce 3D models. These models allowed researchers to piece together some long-gone scenes.

Small footprints and beak marks were discovered near invertebrate trails, suggesting that ancient shorebirds were pecking around in search of a meal between 39 million and 50 million years ago. This prehistoric behavior is “strikingly similar” to that of today’s shorebirds, according to a statement from the National Park Service.

“It’s fascinating,” says Bennett in the statement. “That is an incredibly long time for a species to exhibit the same foraging patterns as its ancestors.”

Photogrammetry techniques allowed the researchers to make 3D models of the tracks. Bennett et al., Palaeontologia Electronica, 2025

Researchers also analyzed a footprint with splayed toes and claws. This rare fossil was likely made by a running lizard around 50 million years ago, according to the team. It’s one of the few known reptile tracks in North America from that period.

An illustration of a nimravid, an extinct, cat-like predator. NPS / Mural by Roger Witter

They also found evidence of a cat-like predator dating to roughly 29 million years ago. A set of paw prints, discovered in a layer of volcanic ash, likely belonged to a bobcat-sized, saber-toothed predator—possibly a nimravid of the genus Hoplophoneus. Since researchers didn’t find any claw marks on the paw prints, they suspect the creature had retractable claws, just like modern cats do.

A set of three-toed, rounded hoofprints indicates some sort of large herbivore was roaming around 29 million years ago, probably an ancient tapir or rhinoceros ancestor.

Together, the fossil tracks open “a rare window into ancient ecosystems,” says study co-author Nicholas Famoso, paleontology program manager at the national monument, in the statement. “They add behavioral context to the body fossils we’ve collected over the years and help us better understand the climate and environmental conditions of prehistoric Oregon,” he adds.

Two teens and 5,000 ants: how a smuggling bust shed new light on a booming trade

Two Belgian 19-year-olds have pleaded guilty to wildlife piracy – part of a growing trend of trafficking ‘less conspicuous’ creatures for sale as exotic pets

Poaching busts are familiar territory for the officers of Kenya Wildlife Service (KWS), an armed force tasked with protecting the country’s iconic creatures. But what awaited guards when they descended in early April on a guesthouse in the west of the country was both larger and smaller in scale than the smuggling operations they typically encounter. There were more than 5,000 smuggled animals, caged in their own enclosures. Each one, however, was about the size of a little fingernail: 18-25mm.

The samples of garden ants presented to the court. Photograph: Monicah Mwangi/Reuters

The cargo, which two Belgian teenagers had apparently intended to ship to exotic pet markets in Europe and Asia, was ants. Their enclosures were a mixture of test tubes and syringes containing cotton wool – environments that authorities say would keep the insects alive for weeks.

“We did not come here to break any laws. By accident and stupidity we did,” says Lornoy David, one of the Belgian smugglers.

David and Seppe Lodewijckx, both 19 years old, pleaded guilty after being charged last week with wildlife piracy, alongside two other men in a separate case who were caught smuggling 400 ants. The cases have shed new light on the booming global ant trade – and what authorities say is a growing trend of trafficking “less conspicuous” creatures.

These crimes represent “a shift in trafficking trends – from iconic large mammals to lesser-known yet ecologically critical species”, says a KWS statement.

The unusual case has also trained a spotlight on the niche world of ant-keeping and collecting – a hobby that has boomed over the past decade. The seized species include Messor cephalotes, a large red harvester ant native to east Africa. Queens of the species grow to about 20-24mm long, and the ant sales website Ants R Us describes them as “many people’s dream species”, selling them for £99 per colony. The ants are prized by collectors for their unique behaviours and complex colony-building skills, “traits that make them popular in exotic pet circles, where they are kept in specialised habitats known as formicariums”, KWS says.

Lornoy David and Seppe Lodewijckx during the hearing. Photograph: Monicah Mwangi/Reuters

One online ant vendor, who asked not to be named, says the market is thriving, and there has been a growth in ant-keeping shows, where enthusiasts meet to compare housing and species details. “Sales volumes have grown almost every year. There are more ant vendors than before, and prices have become more competitive,” he says. “In today’s world, where most people live fast-paced, tech-driven lives, many are disconnected from themselves and their environment. Watching ants in a formicarium can be surprisingly therapeutic.”

David and Lodewijckx will remain in custody until the court considers a pre-sentencing report on 23 April. The ant seller says theirs is a “landmark case in the field”. “People travelling to other countries specifically to collect ants and then returning with them is virtually unheard of,” he says.

A formicarium at a pet shop in Singapore. Photograph: Roslan Rahman/AFP/Getty Images

Scientists have raised concerns that the burgeoning trade in exotic ants could pose a significant biodiversity risk. “Ants are traded as pets across the globe, but if introduced outside of their native ranges they could become invasive with dire environmental and economic consequences,” researchers conclude in a 2023 paper tracking the ant trade across China. “The most sought-after ants have higher invasive potential,” they write.

Removing ants from their ecosystems could also be damaging. Illegal exportation “not only undermines Kenya’s sovereign rights over its biodiversity but also deprives local communities and research institutions of potential ecological and economic benefits”, says KWS. Dino Martins, an entomologist and evolutionary biologist in Kenya, says harvester ants are among the most important insects on the African savannah, and any trade in them is bound to have negative consequences for the ecology of the grasslands.

A Kenyan official arranges the containers of ants at the court. Photograph: Kenya Wildlife Service/AP

“Harvester ants are seed collectors, and they gather [the seeds] as food for themselves, storing these in their nests. A single large harvester ant colony can collect several kilos of seeds of various grasses a year. In the process of collecting grass seeds, the ants ‘drop’ a number … dispersing them through the grasslands,” says Martins.

The insects also serve as food for various other species including aardvarks, pangolins and aardwolves.

Martins says he is surprised to see that smugglers feeding the global “pet” trade are training their sights on Kenya, since “ants are among the most common and widespread of insects”.

“Insect trade can actually be done more sustainably, through controlled rearing of the insects. This can support livelihoods in rural communities such as the Kipepeo Project which rears butterflies in Kenya,” he says. Locally, the main threats to ants come not from the illegal trade but poisoning from pesticides, habitat destruction and invasive species, says Martins.

Philip Muruthi, a vice-president for conservation at the African Wildlife Foundation in Nairobi, says ants enrich soils, enabling germination and providing food for other species.

“When you see a healthy forest … you don’t think about what is making it healthy. It is the relationships all the way from the bacteria to the ants to the bigger things,” he says.

Belgian Teenagers Found With 5,000 Ants to Be Sentenced in 2 Weeks

Two Belgian teenagers who were found with thousands of ants valued at $9,200 and allegedly destined for European and Asian markets will be sentenced in two weeks

NAIROBI, Kenya (AP) — Two Belgian teenagers who were found with thousands of ants valued at $9,200 and allegedly destined for European and Asian markets will be sentenced in two weeks, a Kenyan magistrate said Wednesday.

Magistrate Njeri Thuku, sitting at the court in Kenya’s main airport, said she would not rush the case but would take time to review environmental impact and psychological reports filed in court before passing sentence on May 7.

Belgian nationals Lornoy David and Seppe Lodewijckx, both 19 years old, were arrested on April 5 with 5,000 ants at a guest house. They were charged on April 15 with violating wildlife conservation laws.

The teens have told the magistrate that they didn’t know that keeping the ants was illegal and were just having fun.

The Kenya Wildlife Service had said the case represented “a shift in trafficking trends — from iconic large mammals to lesser-known yet ecologically critical species.”

Kenya has in the past fought against the trafficking of body parts of larger wild animals such as elephants, rhinos and pangolins among others.

The Belgian teens had entered the country on a tourist visa and were staying in a guest house in the western town of Naivasha, popular among tourists for its animal parks and lakes.

Their lawyer, Halima Nyakinyua Magairo, told The Associated Press on Wednesday that her clients did not know what they were doing was illegal. She said she hoped the Belgian embassy in Kenya could “support them more in this judicial process.”

In a separate but related case, Kenyan Dennis Ng’ang’a and Vietnamese Duh Hung Nguyen were charged after they were found in possession of 400 ants in their apartment in the capital, Nairobi.

KWS had said all four suspects were involved in trafficking the ants to markets in Europe and Asia, and that the species included Messor cephalotes, a distinctive, large and red-colored harvester ant native to East Africa.

The ants are bought by people who keep them as pets and observe them in their colonies. Several websites in Europe have listed different species of ants for sale at varied prices.

The 5,400 ants found with the four men are valued at 1.2 million Kenyan shillings ($9,200), according to KWS.

Copyright 2025 The Associated Press. All rights reserved.
