
Advancing technology for aquaculture

News Feed
Thursday, April 18, 2024


MIT Sea Grant students apply machine learning to support local aquaculture hatcheries.

According to the National Oceanic and Atmospheric Administration, aquaculture in the United States represents a $1.5 billion industry annually. Like land-based farming, shellfish aquaculture requires healthy seed production in order to maintain a sustainable industry. Aquaculture hatchery production of shellfish larvae — seeds — requires close monitoring to track mortality rates and assess health from the earliest stages of life. 

Careful observation is necessary to inform production scheduling, determine effects of naturally occurring harmful bacteria, and ensure sustainable seed production. This is an essential step for shellfish hatcheries but is currently a time-consuming manual process prone to human error. 

With funding from MIT’s Abdul Latif Jameel Water and Food Systems Lab (J-WAFS), MIT Sea Grant is working with Associate Professor Otto Cordero of the MIT Department of Civil and Environmental Engineering, Professor Taskin Padir and Research Scientist Mark Zolotas at the Northeastern University Institute for Experiential Robotics, and others at the Aquaculture Research Corporation (ARC), and the Cape Cod Commercial Fishermen’s Alliance, to advance technology for the aquaculture industry. Located on Cape Cod, ARC is a leading shellfish hatchery, farm, and wholesaler that plays a vital role in providing high-quality shellfish seed to local and regional growers.

Two MIT students have joined the effort this semester, working with Robert Vincent, MIT Sea Grant’s assistant director of advisory services, through the Undergraduate Research Opportunities Program (UROP). 

First-year student Unyime Usua and sophomore Santiago Borrego are using microscopy images of shellfish seed from ARC to train machine learning algorithms that will help automate the identification and counting process. The resulting user-friendly image recognition tool aims to aid aquaculturists in differentiating and counting healthy, unhealthy, and dead shellfish larvae, improving accuracy and reducing time and effort.
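
The students' actual code and model are not described in this story, but for readers curious what a baseline version of such an image classifier can look like, here is a minimal, hypothetical sketch in PyTorch. The folder layout, class names, model choice, and hyperparameters are illustrative assumptions only, not the project's implementation.

```python
import torch
import torch.nn as nn
from torch.utils.data import DataLoader
from torchvision import datasets, models, transforms

# Assumed (hypothetical) folder layout: larvae_images/healthy, larvae_images/unhealthy, larvae_images/dead
transform = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
])
dataset = datasets.ImageFolder("larvae_images", transform=transform)
loader = DataLoader(dataset, batch_size=32, shuffle=True)

# Fine-tune a pretrained backbone to distinguish the three larval classes.
model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
model.fc = nn.Linear(model.fc.in_features, 3)

criterion = nn.CrossEntropyLoss()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)

model.train()
for epoch in range(5):
    for images, labels in loader:
        optimizer.zero_grad()
        loss = criterion(model(images), labels)
        loss.backward()
        optimizer.step()

# Counting then reduces to tallying predicted classes over new microscopy images.
```

In practice, counting overlapping larvae in a full microscope field would likely call for a detection or segmentation model rather than whole-image classification; the sketch above only illustrates the classification step the article describes.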

Vincent explains that AI is a powerful tool for environmental science, one that enables researchers, industry, and resource managers to address long-standing pinch points in data collection, analysis, and prediction, and to streamline processes. “Funding support from programs like J-WAFS enable us to tackle these problems head-on,” he says. 

ARC faces challenges with manually quantifying larvae classes, an important step in their seed production process. “When larvae are in their growing stages they are constantly being sized and counted,” explains Cheryl James, ARC larval/juvenile production manager. “This process is critical to encourage optimal growth and strengthen the population.” 

Developing an automated identification and counting system will help to improve this step in the production process with time and cost benefits. “This is not an easy task,” says Vincent, “but with the guidance of Dr. Zolotas at the Northeastern University Institute for Experiential Robotics and the work of the UROP students, we have made solid progress.” 

The UROP program benefits both researchers and students. Involving MIT UROP students in developing these types of systems gives them insight into AI applications they might not otherwise have considered, along with opportunities to explore, learn, and apply themselves while contributing to solving real challenges.

Borrego saw this project as an opportunity to apply what he’d learned in class 6.390 (Introduction to Machine Learning) to a real-world issue. “I was starting to form an idea of how computers can see images and extract information from them,” he says. “I wanted to keep exploring that.”

Usua decided to pursue the project because of the direct industry impacts it could have. “I’m pretty interested in seeing how we can utilize machine learning to make people’s lives easier. We are using AI to help biologists make this counting and identification process easier.” While Usua wasn’t familiar with aquaculture before starting this project, she explains, “Just hearing about the hatcheries that Dr. Vincent was telling us about, it was unfortunate that not a lot of people know what’s going on and the problems that they’re facing.”

On Cape Cod alone, aquaculture is an $18 million per year industry. But the Massachusetts Division of Marine Fisheries estimates that hatcheries are only able to meet 70–80 percent of seed demand annually, which impacts local growers and economies. Through this project, the partners aim to develop technology that will increase seed production, advance industry capabilities, and help understand and improve the hatchery microbiome.

Borrego explains the initial challenge of having limited data to work with. “Starting out, we had to go through and label all of the data, but going through that process helped me learn a lot.” In true MIT fashion, he shares his takeaway from the project: “Try to get the best out of what you’re given with the data you have to work with. You’re going to have to adapt and change your strategies depending on what you have.”

Usua describes her experience going through the research process, communicating in a team, and deciding what approaches to take. “Research is a difficult and long process, but there is a lot to gain from it because it teaches you to look for things on your own and find your own solutions to problems.”

In addition to increasing seed production and reducing the human labor required in the hatchery process, the collaborators expect this project to contribute to cost savings and technology integration to support one of the most underserved industries in the United States. 

Borrego and Usua both plan to continue their work for a second semester with MIT Sea Grant. Borrego is interested in learning more about how technology can be used to protect the environment and wildlife. Usua says she hopes to explore more projects related to aquaculture. “It seems like there’s an infinite amount of ways to tackle these issues.”


Aligning economic and regulatory frameworks for today’s nuclear reactor technology

Today’s regulations for nuclear reactors are unprepared for how the field is evolving. PhD student Liam Hines wants to ensure that policy keeps up with the technology.

Liam Hines ’22 didn’t move to Sarasota, Florida, until high school, but he’s a Floridian through and through. He jokes that he’s even got a floral shirt, what he calls a “Florida formal,” for every occasion.

Which is why it broke his heart when toxic red algae devastated the Sunshine State’s coastline, including at his favorite beach, Caspersen. The outbreak made headline news during his high school years, with the blooms destroying marine wildlife and adversely impacting the state’s tourism-driven economy.

In Florida, Hines says, environmental awareness is pretty high because everyday citizens are being directly impacted by climate change. After all, it’s hard not to worry when beautiful white sand beaches are covered in dead fish. Ongoing concerns about the climate cemented Hines’ resolve to pick a career that would have a strong “positive environmental impact.” He chose nuclear, as he saw it as “a green, low-carbon-emissions energy source with a pretty straightforward path to implementation.”

Undergraduate studies at MIT

Knowing he wanted a career in the sciences, Hines applied and got accepted to MIT for undergraduate studies in fall 2018. An orientation program hosted by the Department of Nuclear Science and Engineering (NSE) sold him on the idea of pursuing the field. “The department is just a really tight-knit community, and that really appealed to me,” Hines says.

During his undergraduate years, Hines realized he needed a job to pay part of his bills. “Instead of answering calls at the dorm front desk or working in the dining halls, I decided I’m going to become a licensed nuclear operator onsite,” he says. “Reactor operations offer so much hands-on experience with real nuclear systems. It doesn’t hurt that it pays better.” Becoming a licensed nuclear reactor operator is hard work, however, involving a year-long training process studying maintenance, operations, and equipment oversight. A bonus: the job, supervising the MIT Nuclear Reactor Laboratory, taught him the fundamentals of nuclear physics and engineering.

Always interested in research, Hines got an early start by exploring the regulatory challenges of advanced fusion systems. There have been questions related to licensing requirements and the safety consequences of the onsite radionuclide inventory. Hines’ undergraduate research work involved studying precedent for such fusion facilities and comparing them to experimental facilities such as Princeton University’s Tokamak Fusion Test Reactor.

Doctoral focus on legal and regulatory frameworks

When scientists want to make technologies as safe as possible, they have to do two things in concert: first, evaluate the safety of the technology, and then make sure legal and regulatory structures take into account the evolution of these advanced technologies. Hines is taking such a two-pronged approach to his doctoral work on nuclear fission systems.

Under the guidance of Professor Koroush Shirvan, Hines is conducting systems modeling of various reactor cores that include graphite, and simulating operations over long time spans. He then studies radionuclide transport from low-level waste facilities — the consequences of offsite storage after 50 or 100 or even 10,000 years. The work has to hit safety and engineering margins, but it also has to tread a fine line.

“You want to make sure you’re not over-engineering systems and adding undue cost, but also making sure to assess the unique hazards of these advanced technologies as accurately as possible,” Hines says.

On a parallel track, under Professor Haruko Wainwright’s advisement, Hines is applying the current science on radionuclide geochemistry to track radionuclide wastes and map their hazard profile. One of the challenges fission reactors face is that existing low-level waste regulations were fine-tuned to old reactors. Regulations have not kept up: “Now that we have new technologies with new wastes, some of the hazards of the new waste are completely missed by existing standards,” Hines says. He is working to close these gaps.

A philosophy-driven outlook

Hines is grateful for the dynamic learning environment at NSE. “A lot of the faculty have that go-getter attitude,” he points out, impressed by the entrepreneurial spirit on campus. “It’s made me confident to really tackle the things that I care about.”

An ethics class as an undergraduate made Hines realize there were discussions in class he could apply to the nuclear realm, especially when it came to teasing apart the implications of the technology — where the devices would be built and who they would serve. He eventually went on to double-major in NSE and philosophy.

The framework style of reading and reasoning involved in studying philosophy is particularly relevant in his current line of work, where he has to extract key points regarding nuclear regulatory issues. Much like philosophy discussions today, which revisit material that has been debated for centuries and frame it through new perspectives, nuclear regulatory issues need to take the long view.

“In philosophy, we have to insert ourselves into very large conversations. Similarly, in nuclear engineering, you have to understand how to take apart the discourse that’s most relevant to your research and frame it,” Hines says. This technique is especially necessary because nuclear regulatory issues can seem like wading in the weeds of nitty-gritty technical matters, yet they can have a huge impact on the public and public perception, Hines adds.

As for Florida, Hines visits every chance he can get. The red tide still surfaces, but not as consistently as it once did. And since he started his job as a nuclear operator in his undergraduate days, Hines has progressed to senior reactor operator. This time around he gets to sign off on the checklists. “It’s much like when I was shift lead at Dunkin’ Donuts in high school,” Hines says. “Everyone is kind of doing the same thing, but you get to be in charge for the afternoon.”

How Germany outfitted half a million balconies with solar panels

Meet balkonkraftwerk, the simple technology putting solar power in the hands of renters.

Matthias Weyland loves having people ask about his balcony. A pair of solar panels hang from the railing, casting a sheen of dark blue against the red brick of his apartment building. They’re connected to a microinverter plugged into a wall outlet and feed electricity directly into his home. On a sunny day, he’ll produce enough power to supply up to half of his family’s daily needs.

Weyland is one of hundreds of thousands of people across Germany who have embraced balkonkraftwerk, or balcony solar. Unlike rooftop photovoltaics, the technology doesn’t require users to own their home, and anyone capable of plugging in an appliance can set it up. Most people buy the simple hardware online or at the supermarket for about $550 (500 euros).

The ease of installation and a potent mix of government policies to encourage adoption have made the wee arrays hugely popular. More than 550,000 of them dot cities and towns nationwide, half of which were installed in 2023. During the first half of this year, Germany added 200 megawatts of balcony solar. Regulations limit each system to just 800 watts, enough to power a small fridge or charge a laptop, but the cumulative effect is nudging the country toward its clean energy goals while giving apartment dwellers, who make up more than half of the population, an easy way to save money and address the climate crisis.

“I love the feeling of charging the bike when the sun is shining, or having the washing machine run when the sun is shining, and to know that it comes directly from the sun,” Weyland said. “It’s a small step you can take as a tenant” and an act of “self-efficacy, to not just sit and wait until the climate crisis gets worse.”

Balcony solar emerged around a decade ago, but didn’t catch on until four or five years ago, thanks in part to years of lobbying by solar and clean energy advocates for policies to foster its adoption. The German government enacted the first technical regulations for plug-in solar devices in 2019, allowing balcony solar systems to use standard electrical plugs and feed into the grid. That prompted an influx of plug-in devices and advocates to promote the technology. The pandemic helped fuel the surge in popularity as people spent time at home, working on DIY projects. More recently, the escalating energy prices that followed Russia’s invasion of Ukraine led more Germans to consider balcony solar. “People just did anything they could to reduce their energy bills,” said Wolfgang Gründinger, who works with the clean energy company Enpal.

Federal and local policymakers have redoubled their efforts to make the technology more accessible. In April, the government simplified permitting and registration requirements, and in July, federal lawmakers passed renter protections that prevent landlords from arbitrarily blocking installations. Cities throughout Germany, including Berlin and Weyland’s home city of Kiel, have offered millions of euros in subsidies to install balcony solar. Gründinger and experts at the German Solar Industry Association noted that the devices don’t generate enough power to strain the grid, and their standardized design and safety features allow them to integrate smoothly and easily.

Solar panels are connected to a microinverter that is plugged into a wall outlet and feeds electricity directly into the home. German regulations limit balcony solar systems to 800 watts, enough to power a small fridge or charge a laptop. Photo courtesy Matthias Weyland.
Despite the hype, most users concede that balcony solar provides modest cost and energy savings. Weyland spent around $530 for his 600-watt-capacity system. While he’s happy with how his south-facing panels perform during balmy weather, such days are rare in northern Germany. He estimates that he’ll save around $100 in annual electricity costs and recoup his investment in about five years.

That’s fairly typical, although advocates of the technology say a system’s efficacy — and, therefore, payback timeline — varies widely depending upon the number of panels, their location and direction, and how much shade surrounds them. A household with a “comparatively large well-positioned balcony system in a sunny spot facing south” can produce 15 percent of its electricity with balcony solar, according to Peter Stratmann, head of renewables at the German Federal Network Agency, the country’s utility regulator.

While that can put a dent in a household’s utility bill, its impact on Germany’s consumption is far smaller. “Even if we attached panels to all suitable balconies across the country, we’d still only manage to meet 1 percent or less of our overall energy needs,” Stratmann told Deutsche Welle.

So if balcony solar doesn’t generate a lot of power or save a lot of money, why are so many people flocking to it? Many of them like the idea of producing energy at home and gaining a bit of independence from the grid. It also provides a tangible way to take climate action. “It makes the energy transition feel a little more concrete and not so abstract,” said Helena Holenweger of the nonprofit Deutsche Umwelthilfe, or Environmental Action Germany. She installed a balcony solar system on top of her garage about a year ago. “You can literally do something about it.”

Holenweger and others who have tapped the sun said balcony solar led them to reevaluate their understanding of electricity consumption and take steps to reduce it. “For lots of people, energy is just something that comes out of your socket,” Holenweger said. “You never think about how it gets there or how it works.” The systems don’t include battery storage, so the juice they generate must be used immediately, leading people to plan the best time to, say, run the washing machine to ensure they’re using renewable energy.

In that way, it becomes something of a game. Many balcony solar kits feature an app to track daily energy generation, providing what has, for many people, become a scorecard. “They screenshot that, they send it around to their Facebook groups, family WhatsApp groups. They’re super proud,” Gründinger said.

Germany is unique in its rabid embrace of the tech. Although increasingly popular in Austria, the Netherlands, France, and elsewhere in Europe, plug-in solar devices aren’t viable in the United States due to costly permitting requirements and other local regulations. Beyond that, most systems are designed to European electrical standards, making them incompatible with U.S. power systems.

But even in Germany, balcony solar still faces hurdles, including fierce resistance from landlords worried about electrical fires or put off by the aesthetics of the panels. Last year, Weyland sued his building’s property management company for imposing what he deemed unreasonable requirements to install a system, including a formal inspection of the building’s electrical system. A court sided with him in October 2023, but similar cases pop up regularly.
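
As a back-of-the-envelope illustration of the payback arithmetic quoted earlier in this story, the calculation is simply the system cost divided by the estimated annual savings. Both figures below come from Weyland's reported numbers and should be treated as illustrative, since yields vary with panel placement and local electricity prices.

```python
# Rough payback estimate using the figures quoted in this story (illustrative only).
system_cost_usd = 530        # reported cost of Weyland's 600-watt system
annual_savings_usd = 100     # his estimated yearly reduction in electricity costs

payback_years = system_cost_usd / annual_savings_usd
print(f"Estimated payback period: {payback_years:.1f} years")  # roughly 5 years
```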
Weyland hopes that as more people adopt balcony solar, that resistance from landlords will fade. Already, people in his life regularly ask him about his panels, and two friends are buying systems of their own.

“So many people talk to me in our neighborhood and ask about the system when they see it,” Weyland said. “It’s kind of like a snowball that gets bigger and bigger.”

This story was originally published by Grist with the headline “How Germany outfitted half a million balconies with solar panels” on Sep 26, 2024.

How Forensic Scientists Continue to Identify 9/11 Victims 23 Years after the Attacks

Forensic scientists are still working to identify victims of the 9/11 attacks using advancements in technology and techniques developed over the past two decades.

Rachel Feltman: Twenty-three years ago, a series of coordinated terrorist attacks killed nearly 3,000 people and turned Manhattan’s iconic World Trade Center into Ground Zero. Most of you probably remember seeing footage and photos of the long, complicated process of looking for victims in the smoldering debris. But you might not realize that for forensic scientists, that work is far from finished even today.

For Scientific American’s Science Quickly, I’m Rachel Feltman. I’m joined today by Kathleen Corrado, the forensics executive director at Syracuse University College of Arts & Sciences. She’s here to tell us how the staggering scale of 9/11’s mass casualty event presented forensic scientists with new challenges—and how the lessons they learned are helping them identify wildfire victims, suspected criminals and the many remaining casualties of 9/11 itself.

Thank you so much for joining us today.

Kathleen Corrado: My pleasure.

Feltman: So broadly speaking, what kind of impact did 9/11 have on the forensic science community?

Corrado: Well, the event that happened in 9/11 in the World Trade Center was basically the first time that DNA analysis was used to identify victims on such a large scale. So while there were about 2,700 victims or so, due to the fire, the explosion, the building collapse, there were a lot of very small samples. A lot of the bodies were degraded.... Really that’s the first time that we really had to think about: How do we deal with this many samples, this many people?

Feltman: Mm.

Corrado: We had to, you know, look at how we store the samples, how we track the samples. We had to think about software in terms of inventorying the samples, in terms of analyzing the DNA. We had to automate. And then again, with the samples being so degraded, it really affected the way that we process the samples.

Feltman: Tell me more about some of those, you know, unique forensic challenges.

Corrado: Right, so when we have natural disasters—whether it’s a fire, a flood—or something more accidental, like a plane crash, or a terrorist event, like a bombing, typically the way that bodies are identified are by different methods such as fingerprints and dental records and physical attributes, like tattoos, or if there’s some kind of a medical device, like if someone has a pacemaker or an artificial knee or hip, they have serial numbers on ’em.

So that’s the typical way that bodies are identified. But in this instance, in 9/11, because of the jet fuel, there was a really large amount of fire, the building collapsed, a lot of the bodies were really, really degraded and compromised. And so that left us with a lot of really small fragments of bone and other items that you really couldn’t use any other identification method other than DNA.

So one of the challenges, first off, was to basically determine what was bone, what wasn’t bone. And if it was bone, was it human bone? And the second challenge is: How do we get the DNA out of such a compromised sample?

Feltman: Well, and I think that’s a great segue into talking about the new technologies that emerged. What are we doing differently now because of what forensic scientists learned after 9/11?

Corrado: Right, so a lot of the samples were degraded, and so we had to come up with new ways of extracting the DNA: so basically taking the DNA out of the cell and then processing it. There also was—just due to the large volume of samples, everything was done manually, and it took quite a long time. It could take weeks or months to get through the process. And so we basically had automatic robotics that we could put in to process the samples. So those are some of the innovations that came out of that.

In addition, one of the other things that we had to think about was the reference samples. So when you have bodies that we’re trying to identify, there’s two different ways we can identify them. One would be with what we call antemortem samples, which is when we’re taking a direct sample from the victim and comparing it: something like a toothbrush or a razor or earbuds—something like that that might have the victim’s DNA on it that we can do a direct comparison.

And then a second type of comparison that we would do is where we compare the victim’s DNA to relatives. And so that would be first-degree relatives—we’re looking for parents, children, sometimes siblings. So basically there were a lot of challenges in 9/11 with just, you know, determining: How do you get the message out to these families that we need these samples? How do we tell them which family members we need to collect and what samples we need to collect?

You know, when 9/11 happened, after 9/11 happened, it really was a wake-up call, saying: We need to have policies and procedures for this type of mass disaster. You know, we need to know who’s in charge, who’s collecting the samples, who’s gonna be the voice speaking to the families.

There’s a lot of new policies and procedures in place that we have now so we know how to do this: we know how to put the message out and how to make sure that we’re getting the right samples.

Feltman: Yeah. Can we talk a little bit more about the technological leaps that have happened? You know, I think some of our, our listeners might not know what the process of DNA extraction looked like in 2000 and what it looks like now, so I would love to get a little bit of an overview.

Corrado: Yeah, so—absolutely. One of the biggest changes that’s happened is what we call the rapid DNA instruments—basically [they’re] a game changer. So [a] rapid DNA instrument, how it’s different is: previously what would happen is the samples would have to be collected at the site, they’d have to be shipped to the laboratory, and then the laboratory would manually process the samples—so they’d have to extract the DNA, and then take that DNA and generate a DNA profile, and then do the interpretation. And that could take weeks or months.

Feltman: Mm.

Corrado: With rapid DNA instruments now, all of those processes are done inside the instrument, so it’s one step. So you take the sample, whether that’s a swab of blood or perhaps a sample from bone that we can extract, we put it into the instrument, it does all of those processes within the instrument, and it does it in about 90 minutes ...

Feltman: Wow.

Corrado: Which truly is a game changer. So something that would take weeks or months before, we now can do quite quickly.

Other benefits are, [two], that these instruments can be placed directly at the site. So we don’t have to send the samples to a lab; we can set up a makeshift lab, put these instruments right in the area where the disaster occurred and process the samples right there.

And then the third reason why they’re very helpful is that we don’t need a DNA analyst—we don’t need an expert to run these samples. So as before, every sample had to be run by a DNA expert in the lab and interpreted by a DNA expert, these results are spit out in 90 minutes, and you don’t need to be a DNA expert to run it to get the results.

And ... these types of instruments were used in the 2018 Camp Fire in California. So I think there were about 100 victims of that fire, and I think something close to, like, 80 percent of those samples were ID’d through DNA, which is really high. So prior to that it was—usually it was about 20 percent of samples were—we would use DNA to identify.

Now we can use it not only just for the samples of the victims but also the family reference samples. So even before, all those family reference samples had to go to a lab. Now they can all be processed on-site in these instruments.

And I believe it was also used in the Maui wildfires, and also it’s used in things like the war in Ukraine—I mean, these instruments have a lot of other uses besides mass disaster victim identification.

Feltman: Yeah, well, and tell me more about the policies that emerged and changed because of 9/11. You mentioned that it was really a wake-up call in terms of needing systems in place. What are some of those systems?

Corrado: Well, we have to make sure that we have a good policy in terms of what samples to collect, how those samples are stored, what will happen to those samples after they’re used and the data after they’re used. We also have to make sure that we have a single point person that can go ahead and give the information out to the public as well as to the families. We have to have safety. You know, we have to worry about hazards—biohazards. So all of those policies are in place.

Additionally, with the reference samples, something that’s really important now is the informed consent. So we wanna make sure that the relatives that are giving their samples know what it is that they’re giving, know why they’re giving it and also they know what’s gonna happen to that sample and to that data afterwards—you know, is it gonna go into a database, or is it gonna be destroyed? So there’s informed consent now, which is really important in terms of protecting people’s privacy.

Feltman: So are there any new technologies that actually emerged from the 9/11 investigation specifically?

Corrado: Well, specifically from the 9/11 investigation there were new technologies in terms of how to analyze degraded samples. And particularly when we have these samples, they’re very small fragments of DNA, and previous to 9/11 we really weren’t able to get data from such small samples. And so after 9/11 and continuously we’ve been able to improve the extraction technologies for small samples.

There’s also a new technology called next-generation sequencing that’s at the forefront right now. That technology will allow us to analyze samples that are even smaller. So when the DNA is broken up into small, small pieces, this technology will allow us to analyze even smaller samples, and then it allows us to build them together into a bigger, contiguous DNA profile or sequence, and that will allow us to have more sensitivity, so we’ll be able to analyze samples that are even smaller. And that technology is starting to be used even to identify more of the remains from 9/11 because only about 60 percent of the victims have been identified from the 9/11 event.

Feltman: Wow. And outside of the 9/11 investigations, you know, how is that technology changing forensic science?

Corrado: In the criminal justice system, similar to things like mass disasters, where we have degradation of samples, we have a lot of samples in crime scenes that are exposed to environmental conditions. There’s old samples, cold cases where there’s not a lot of DNA left. So all of these technologies that allow us to generate a DNA profile from a very small sample or a very degraded sample have really made leaps and bounds in terms of us being able to identify perpetrators of crimes.

Another technology that’s out there that I think is being used in criminal and in identification is SNPs, single nucleotide polymorphisms, and, in particular, that’s using externally visible characteristics, or EVCs. So, say we have a victim of a mass disaster that no one’s really looking for them—they don’t have family members that are looking for ’em, or there are no family reference samples ...

Feltman: Mm-hmm.

Corrado: What we can do with externally visible characteristics is: it can give us clues about the person’s eye color, their hair color, if they had freckles, their skin tone and their biogeographical ancestry. So if we don’t have something to compare to, we might be able to get information as to how this person looked—you know, what their external characteristics were—that might help us identify them.

Feltman: And I assume that’s quite useful in forensic science for lots of other kinds of investigations, too.

Corrado: It can be. It’s relatively new. And quite honestly it’s a little controversial because it’s not clear that we should be using externally visible characteristics to identify suspects, but there are companies out there that offer that service.

Feltman: Sure, yeah, no, I can, I can see the potential issues in, in using it for suspect identification specifically.

Well, are there any challenges related to mass casualty events that forensic scientists are still figuring out how to tackle?

Corrado: Yeah, absolutely. So, you know, when it comes down to the mass disasters, certainly the environment still plays a huge effect. So, you know, like I said, if we have a fire, that can cause degradation. But also if we think about something like a flood—like think about the tsunami in 2004 in South Asia.

First there was this flood, so all of the bodies were submerged underwater, and then they were scattered in such a large area, and it was really hot there; the sun’s beating down on these bodies. And so all of that causes the remains to degrade, and unfortunately there were so many victims in that mass disaster that they couldn’t collect everything quickly enough. And so in that instance the temperature and the heat really affected the ability to use DNA. So in those instances they really had to rely more on other types of mechanisms to identify a body, such as odontology or dental records or fingerprints. So in, in that instance I think DNA was used in very small numbers of the identifications.

And secondly, I think another challenge that we faced in 9/11 that still happens to this day is getting the message out to the family members and collecting reference samples. So you can imagine—let’s use Maui as an example—it’s a little bit difficult when people are faced with all of these really traumatic experiences to say, “Hey, by the way, we need to collect a DNA sample from you.”

In addition to that, there are sometimes a lot of reluctance for families to give a reference sample. There’s somewhat of a distrust of the government, and particularly in Maui, again, there’s some cultural issues to that. A lot of Indigenous people there had some concerns. They had issues in the past where the government was collecting their DNA to determine, you know, who had rights to land and things like that. So they had a lot of distrust. And so it’s hard to think about: How are we going to explain to these families why it’s so important for them to give their DNA samples if they want to identify their loved one?

Another challenge that we still have, again, that we had in 9/11, we have in all of these situations is: no matter how good our identification methods are, whether it’s DNA or dental records or fingerprints, we still have to identify the remains. So it’s still gonna take the very first person, the anthropologist coming in, sifting through all the debris, saying, “Yeah, that’s a bone. No, it’s not a bone. Yeah, that’s human. No, it’s not human.” It’s a time-comprehensive process. So that’s sort of a limiting factor. So we still have to think about: Are there ways that we could perhaps move that part of the process a little bit faster?

Feltman: Hmm, well, and just going back to something you mentioned earlier: you know, the fact that so many of the victims of 9/11 have not yet been identified. Could you tell me more about how that process is going?

Corrado: Yes, so that project is still being worked [on] by the Office of the Chief Medical Examiner of New York City. So they have staff that are dedicated to that project. They have committed to identifying every one of those last remains if they can. And so they continue to analyze those, and basically they’re doing work in terms of: What are the new technologies out there?

So we’ve talked about the new extraction technologies. They also use different types of DNA: they don’t just use nuclear DNA; they use mitochondrial DNA, which is another type of DNA that’s found in cells in higher copy numbers. So oftentimes in very degraded samples or in bone, there’s more mitochondrial DNA left than there is nuclear DNA. So that’s another process that they can use.

And again they’re looking at this new technology called next-generation sequencing, which is a very different process than we currently use. And this is where we’re sequencing the base pairs of DNA, and next-generation sequencing has a promise of—it’s a lot more sensitive because it—we’re able to sequence a lot smaller fragments, and we can sequence the smaller fragments and then put them together into one larger fragment to read the sample and generate information. And so as this technology progresses, the labs are picking up this technology, validating it and using it in the hopes of identifying more of those remains.

Feltman: Thank you so much for coming on. This was really fascinating.

Corrado: Well, thank you so much for having me. I really appreciate it. And it was my pleasure.

Feltman: That’s all for today’s episode. Tune in on Friday for something very special: a chat with an astronaut—from actual space—about how his time on the ISS is helping him take his photography hobby to new heights.

In the meantime, do us a favor and leave us a quick rating or review or comment or whatever your podcast platform of choice lets you do to tell them that you like us. You can also send us any questions or comments you have at ScienceQuickly@sciam.com.

Science Quickly is produced by me, Rachel Feltman, along with Fonda Mwangi, Kelso Harper, Madison Goldberg and Jeff DelViscio. Shayna Posses and Aaron Shattuck fact-check our show. Our theme music was composed by Dominic Smith. Subscribe to Scientific American for more up-to-date and in-depth science news.

For Scientific American, this is Rachel Feltman. See you next time!

UCLA Unveils Breakthrough 3D Imaging Technology to Peer Inside Objects

All-optical multiplane quantitative phase imaging design eliminates the need for digital phase recovery algorithms. UCLA researchers have introduced a breakthrough in 3D quantitative phase imaging...

Artistic depiction of a wavelength-multiplexed diffractive optical processor for 3D quantitative phase imaging. Credit: Ozcan Lab @ UCLA

All-optical multiplane quantitative phase imaging design eliminates the need for digital phase recovery algorithms.

UCLA researchers have introduced a breakthrough in 3D quantitative phase imaging that utilizes a wavelength-multiplexed diffractive optical processor to enhance imaging efficiency and speed. This method enables label-free, high-resolution imaging across multiple planes and has significant potential applications in biomedical diagnostics, material characterization, and environmental analysis.

Introduction to Quantitative Phase Imaging

Light waves, as they propagate through a medium, experience a temporal delay. This delay can unveil crucial information about the underlying structural and compositional characteristics. Quantitative phase imaging (QPI) is a cutting-edge optical technique that reveals variations in optical path length as light moves through biological samples, materials, and other transparent structures. Unlike traditional imaging methods that rely on staining or labeling, QPI allows researchers to visualize and quantify phase variations by generating high-contrast images, enabling noninvasive investigations crucial to fields such as biology, materials science, and engineering.

A recent study reported on July 25 in Advanced Photonics introduces a new approach to 3D QPI using a wavelength-multiplexed diffractive optical processor. The approach, developed by researchers at the University of California, Los Angeles (UCLA), offers an effective solution to a bottleneck posed by traditional 3D QPI methods, which can be time-consuming and computationally intensive.

UCLA researchers report a new method for quantitative phase imaging of a 3D phase-only object using a wavelength-multiplexed diffractive optical processor. Utilizing multiple spatially engineered diffractive layers trained through deep learning, this diffractive processor can optically transform the phase distributions of multiple 2D objects at various axial positions into intensity patterns, each encoded at a unique wavelength channel. These wavelength-multiplexed patterns are projected onto a single field-of-view (FOV) at the output plane of the diffractive processor, enabling the capture of quantitative phase distributions of input objects located at different axial planes using an intensity-only image sensor, eliminating the need for digital phase recovery algorithms. Credit: C. Shen et al., doi 10.1117/1.AP.6.5.056003

The UCLA Innovation in Optical Processing

The UCLA team developed a wavelength-multiplexed diffractive optical processor capable of all-optically transforming phase distributions of multiple 2D objects at various axial positions into intensity patterns, each encoded at a unique wavelength channel. The design allows for the capture of quantitative phase images of input objects located at different axial planes using an intensity-only image sensor, eliminating the need for digital phase recovery algorithms.

“We are excited about the potential of this new approach for biomedical imaging and sensing,” said Aydogan Ozcan, lead researcher and Chancellor’s Professor at UCLA. 
“Our wavelength-multiplexed diffractive optical processor offers a novel solution for high-resolution, label-free imaging of transparent specimens, which could greatly benefit biomedical microscopy, sensing, and diagnostics applications.”

Multiplane Imaging and Its Applications

The innovative multiplane QPI design incorporates wavelength multiplexing and passive diffractive optical elements that are collectively optimized using deep learning. By performing phase-to-intensity transformations that are spectrally multiplexed, this design enables rapid quantitative phase imaging of specimens across multiple axial planes. The system’s compactness and all-optical phase recovery capability make it a competitive analog alternative to traditional digital QPI methods.

A proof-of-concept experiment validated the approach, showcasing successful imaging of distinct phase objects at different axial positions in the terahertz spectrum. The scalable nature of the design also allows adaptation to different parts of the electromagnetic spectrum, including the visible and IR bands, using appropriate nano-fabrication methods, paving the way for new phase imaging solutions integrated with focal plane arrays or image sensor arrays for efficient on-chip imaging and sensing devices.

Implications for Science and Technology

This research has significant implications for various fields, including biomedical imaging, sensing, materials science, and environmental analysis. By providing a faster, more efficient method for 3D QPI, this technology can enhance the diagnosis and study of diseases, the characterization of materials, and the monitoring of environmental samples, among other applications.

Reference: “Multiplane quantitative phase imaging using a wavelength-multiplexed diffractive optical processor” by Che-Yung Shen, Jingxi Li, Yuhang Li, Tianyi Gan, Langxing Bai, Mona Jarrahi and Aydogan Ozcan, 25 July 2024, Advanced Photonics. DOI: 10.1117/1.AP.6.5.056003
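
As general background (a standard textbook relation, not something specific to the UCLA processor), the phase delay that QPI quantifies for light of wavelength λ passing through a specimen of refractive index n(x, y, z) surrounded by a medium of index n₀ is

\[
\phi(x, y) = \frac{2\pi}{\lambda} \int \left[ n(x, y, z) - n_{0} \right] \, dz
\]

QPI methods reconstruct φ(x, y) quantitatively; the contribution reported here is recovering it all-optically for several axial planes at once, with each plane encoded at its own wavelength.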

Want to buy an electric car but unsure you can justify it? Here’s how the arguments against EVs stack up

You’ve probably heard the arguments against electric cars, but most of them are getting weaker as the technology, markets and infrastructure mature.

So you’re thinking of buying an electric car. Perhaps you want to save money on fuel, or reduce your greenhouse gas emissions, or both. After all, for Australia to reach net zero it needs to electrify vehicles (and expand public transport use).

But you’ve heard arguments against electric cars: they have limited range and many owners can’t easily charge at home. They cost too much, resale values are poor and insurance costs are higher than for other cars. They’re also heavier and cause more damage to our roads. Alarmingly, the mining of some minerals used to make them involves modern-day slavery.

Are these concerns warranted? Let’s walk through them.

Driving range

In 2014, an electric vehicle’s top driving range was between 160 and 210 kilometres. Today, most new models can travel 300–600km under real-world conditions. In Australia, the average privately owned car travels 12,100km a year. That’s about 33.2km a day. Current models have more than enough battery capacity to cover most trips.

Access to chargers

What about longer trips? Many drivers still worry about finding a public charger. It’s common to see long queues at public charging stations (when they are working) or owners searching for a charger. Public charging infrastructure is struggling to keep up with rising demand. While not an issue for short trips (90% of owners charge at home or work), it’s a challenge for longer travel.

Private home chargers are getting cheaper but not everyone has off-street parking. Some resort to the legally questionable strategy of running power cables over sidewalks or through trees. Apartment block residents typically have requests to install private chargers rejected for safety reasons (mainly fire risks). Many also can’t install solar panels, which would greatly reduce charging costs.

Purchase costs

While electric vehicles cost more than petrol or diesel vehicles today, this won’t be true in future. In 2023, the average price of a new petrol car in Australia was A$40,916, compared to $117,785 for battery electric vehicles. But the problem with averages is they’re skewed by outliers, and there are lots of very expensive outliers on the electric vehicle market. You can own a Porsche Taycan Turbo S for $374,000, or a Mercedes-AMG EQS 53 for $327,000.

Three models account for about 70% of electric vehicle sales in Australia: the Tesla Model Y (from $60,900), Tesla Model 3 (from $58,900) and the BYD Atto 3 (from $48,011). The Model 3 entered our market in 2019 at $66,000, so it’s clear prices are dropping, and dropping fast. You can buy the GWM ORA or MG4 Excite MY23 for $39,990. Falling prices are common for most new technology; we just notice it more with electric vehicles because they cost more than most technology we buy, including phones and TVs.

Secondhand value

Concerns about resale value may be justified. In the year to January 2024, the value of used electric vehicles fell 21%, which was more than for fossil fuel vehicles. A higher initial price does not necessarily carry over to the second-hand market. Early adopters valued EV technology, but most buyers have different priorities. As the technology improves and misconceptions fade, resale values could rebound.

Insurance costs

Insurance costs are also higher than for other vehicles – typically around 20% more. The vehicles generally cost more to buy in the first place and newer technology is more costly to produce and replace. 
The supply chain for parts is still developing, with fewer trained technicians and service centres to maintain these vehicles. As the market grows and service infrastructure improves, insurance costs should fall.

Access to charging and service infrastructure will improve as electric vehicles become mainstream. Darunrat Wongsuvan/Shutterstock

Environmental damage?

One recent study suggests electric vehicles are actually more environmentally damaging than petrol and diesel vehicles. They are typically heavier, resulting in more tyre wear and heavier braking. As this produces small particulate matter with a diameter of 10 microns (PM10) or less (a typical human hair is 50–70 microns wide), the suggestion is electric vehicles will produce more of it.

But such studies often compare particulate emissions from EVs to tailpipe emissions from their fossil fuel counterparts. They ignore the latter’s tyre and brake emissions, which means comparing apples to oranges. More rigorous studies suggest electric vehicles, particularly smaller ones, produce less PM10 from non-exhaust sources than their non-electric equivalents.

Slavery in the supply chain

Unfortunately, the modern-day slavery concern is very real. Electric vehicle batteries require cobalt, and about 70% of the world’s supply comes from the Democratic Republic of Congo. About 20% of this mining activity involves small, informal, subsistence mines with little or no mechanisation and often using child labour. The minerals from such mines are scattered throughout the world’s supply chains. Those who raise slavery concerns against electric vehicles are usually silent on other affected products such as phones and laptops. Much more must be done to address these concerns about battery supply chains.

The good outweighs the bad

On balance, you’re justified in buying an electric vehicle, assuming you want one. Overall operating costs are far lower than for other vehicles. Public charger issues affect a small percentage of trips. While prices are dropping quickly, this doesn’t mean the bottom is falling out of the market. Price reductions simply reflect a greater supply of cheaper electric vehicles; previous market-leading manufacturers can no longer charge hefty premiums for their products. And demand isn’t decreasing: the share of electric vehicles on the road continues to increase.

Further, the technology is evolving. Trials of vehicle-to-grid charging, where vehicles return power to the grid or directly to a person’s house, have been taking place across Australia. This ability to power your house will help reduce energy bills, saving owners even more money.

Aside from justifiable concerns about human rights abuses, most of the perceived barriers to EV uptake aren’t really barriers at all, or soon won’t be.

The authors do not work for, consult, own shares in or receive funding from any company or organisation that would benefit from this article, and have disclosed no relevant affiliations beyond their academic appointment.


Join us to forge
a sustainable future

Our team is always growing.
Become a partner, volunteer, sponsor, or intern today.
Let us know how you would like to get involved!

CONTACT US

Sign up for our mailing list to stay informed on the latest films and environmental headlines.

Subscribers receive a free day pass for streaming Cinema Verde.