The night sky has been brightening faster than researchers realized, thanks to the use of artificial lights at night. A study of more than 50,000 observations of stars by citizen scientists reveals that the night sky grew about 10 percent brighter, on average, every year from 2011 to 2022.
In other words, a baby born in a region where roughly 250 stars were visible every night would see only 100 stars on their 18th birthday, researchers report in the Jan. 20 Science. The perils of light pollution go far beyond not being able to see as many stars. Too much brightness at night can harm people’s health, send migrating birds flying into buildings, disrupt food webs by drawing pollinating insects toward lights instead of plants and may even interrupt fireflies trying to have sex (SN: 8/2/17; SN: 8/12/15).
“In a way, this is a call to action,” says astronomer Connie Walker of the National Optical-Infrared Astronomy Research Laboratory in Tucson. “People should consider that this does have an impact on our lives. It’s not just astronomy. It impacts our health. It impacts other animals who cannot speak for themselves.”
Walker works with the Globe at Night campaign, which began in the mid-2000s as an outreach project to connect students in Arizona and Chile and now has thousands of participants worldwide. Contributors compare the stars they can see with maps of what stars would be visible at different levels of light pollution, and enter the results on an app.
“I’d been quite skeptical of Globe at Night” as a tool for precision research, admits physicist Christopher Kyba of the GFZ German Research Centre for Geosciences in Potsdam. But the power is in the sheer numbers: Kyba and colleagues analyzed 51,351 individual data points collected from 2011 to 2022.
“The individual data are not precise, but there’s a whole lot of them,” he says. “This Globe at Night project is not just a game; it’s really useful data. And the more people participate, the more powerful it gets.”
Those data, combined with a global atlas of sky luminance published in 2016, allowed the team to conclude that the night sky’s brightness increased by an average 9.6 percent per year from 2011 to 2022 (SN: 6/10/16).
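To get a feel for how fast a 9.6 percent annual increase compounds, here is a back-of-the-envelope sketch (assuming the rate stays constant over the whole period; the study's actual star-count estimates also model how a brighter sky washes out fainter stars):

```python
# Compound growth of sky brightness at 9.6 percent per year.
annual_increase = 0.096
years = 18  # the hypothetical child's lifetime from the example above

brightness_factor = (1 + annual_increase) ** years
print(f"After {years} years the sky is about {brightness_factor:.1f} times brighter")
```

A sky roughly five times brighter drowns out many of the fainter stars, consistent with the reported drop from about 250 visible stars to about 100.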
Most of that increase was missed by satellites that collect brightness data across the globe. Those measurements showed just a 2 percent increase in brightness per year over the last decade. There are several reasons for that, Kyba says. Since the early 2010s, many outdoor lights have switched from high-pressure sodium lightbulbs to LEDs. LEDs are more energy efficient, which brings environmental benefits and cost savings.
But LEDs also emit more short-wavelength blue light, which scatters off particles in the atmosphere more than sodium bulbs’ orange light, creating more sky glow. Existing satellites are not sensitive to blue wavelengths, so they underestimate the light pollution coming from LEDs. And satellites may miss light that shines toward the horizon, such as light emitted by a sign or from a window, rather than straight up or down.
Astronomer and light pollution researcher John Barentine was not surprised that satellites underestimated the problem. But “I was still surprised by how much of an underestimate it was,” he says. “This paper is confirming that we’ve been undercounting light pollution in the world.”
The good news is that no major technological breakthroughs are needed to help fix the problem. Scientists and policy makers just need to convince people to change how they use light at night — easier said than done.
“People sometimes say light pollution is the easiest pollution to solve, because you just have to turn a switch and it goes away,” Kyba says. “That’s true. But it’s ignoring the social problem — that this overall problem of light pollution is made by billions of individual decisions.”
Some simple solutions include dimming or turning off lights overnight, especially floodlighting or lights in empty parking lots.
Kyba shared a story about a church in Slovenia that switched from four 400-watt floodlights to a single 58-watt LED, shining behind a cutout of the church to focus the light on its facade. The result was a 96 percent reduction in energy use and much less wasted light, Kyba reported in the International Journal of Sustainable Lighting in 2018. The church was still lit up, but the grass, trees and sky around it remained dark.
“If it was possible to replicate that story over and over again throughout our society, it would suggest you could really drastically reduce the light in the sky, still have a lit environment and have better vision and consume a lot less energy,” he says. “This is kind of the dream.”
Barentine, who leads a private dark-sky consulting firm, thinks widespread awareness of the problem — and subsequent action — could be imminent. For comparison, he points to a highly publicized oil slick fire on the Cuyahoga River, in Cleveland, in 1969 that fueled the environmental movement of the 1960s and ’70s, and prompted the U.S. Congress to pass the Clean Water Act.
“I think we’re on the precipice, maybe, of having the river-on-fire moment for light pollution,” he says.
Our modern lives depend on rare earth elements, and someday soon we may not have enough to meet growing demand.
Because of their special properties, these 17 metallic elements are crucial ingredients in computer screens, cell phones and other electronics, compact fluorescent lamps, medical imaging machines, lasers, fiber optics, pigments, polishing powders, industrial catalysts – the list goes on and on (SN Online: 1/16/23). Notably, rare earths are an essential part of the high-powered magnets and rechargeable batteries in the electric vehicles and renewable energy technologies needed to get the world to a low- or zero-carbon future. In 2021, the world mined 280,000 metric tons of rare earths — roughly 32 times as much as was mined in the mid-1950s. And demand is only going to increase. By 2040, experts estimate, we’ll need up to seven times as many rare earths as we use today.
Satisfying that appetite won’t be easy. Rare earth elements are not found in concentrated deposits. Miners must excavate huge amounts of ore, subject it to physical and chemical processes to concentrate the rare earths, and then separate them. The transformation is energy intensive and dirty, requiring toxic chemicals and often generating radioactive waste that must be safely disposed of. Another concern is access: China has a near monopoly on both mining and processing; the United States has just one active mine (SN Online: 1/1/23).
For most of the jobs rare earths do, there are no good substitutes. So to help meet future demand and diversify who controls the supply — and perhaps even make rare earth recovery “greener” — researchers are looking for alternatives to conventional mining.
Proposals include everything from extracting the metals from coal waste to really out-there ideas like mining the moon. But the approach most likely to make an immediate dent is recycling. “Recycling is going to play a very important and central role,” says Ikenna Nlebedim, a materials scientist at Ames National Laboratory in Iowa and the Department of Energy’s Critical Materials Institute. “That’s not to say we’re going to recycle our way out of the critical materials challenge.”
Still, in the rare earth magnets market, for instance, recycling could satisfy as much as a quarter of the demand for rare earths a decade from now, based on some estimates. “That’s huge,” he says.
But before the rare earths in an old laptop can be recycled as regularly as the aluminum in an empty soda can, there are technological, economic and logistical obstacles to overcome.
Why are rare earths so challenging to extract?

Recycling seems like an obvious way to get more rare earths. It’s standard practice in the United States and Europe to recycle from 15 to 70 percent of other metals, such as iron, copper, aluminum, nickel and tin. Yet today, only about 1 percent of rare earth elements in old products are recycled, says Simon Jowitt, an economic geologist at the University of Nevada, Las Vegas.
“Copper wiring can be recycled into more copper wiring. Steel can just be recycled into more steel,” he says. But a lot of rare earth products are “inherently not very recyclable.” Rare earths are often blended with other metals in touch screens and similar products, making removal difficult. In some ways, recycling rare earths from tossed-out items resembles the challenge of extracting them from ore and separating them from each other. Traditional rare earth recycling methods also require hazardous chemicals such as hydrochloric acid and a lot of heat, and thus a lot of energy. On top of the environmental footprint, the cost of recovery may not be worth the effort given the small yield of rare earths. A hard disk drive, for instance, might contain just a few grams; some products offer just milligrams.
Chemists and materials scientists, though, are trying to develop smarter recycling approaches. Their techniques put microbes to work, ditch the acids of traditional methods or attempt to bypass extraction and separation.
Microbial partners can help recycle rare earths

One approach leans on microscopic partners. Gluconobacter bacteria naturally produce organic acids that can pull rare earths, such as lanthanum and cerium, from spent catalysts used in petroleum refining or from fluorescent phosphors used in lighting. The bacterial acids are less environmentally harmful than hydrochloric acid or other traditional metal-leaching acids, says Yoshiko Fujita, a biogeochemist at Idaho National Laboratory in Idaho Falls. Fujita leads research into reuse and recycling at the Critical Materials Institute. “They can also be degraded naturally,” she says.
In experiments, the bacterial acids can recover only about a quarter to half of the rare earths from spent catalysts and phosphors. Hydrochloric acid can do much better — in some cases extracting as much as 99 percent. But bio-based leaching might still be profitable, Fujita and colleagues reported in 2019 in ACS Sustainable Chemistry & Engineering.
In a hypothetical plant recycling 19,000 metric tons of used catalyst a year, the team estimated annual revenues to be roughly $1.75 million. But feeding the bacteria that produce the acid on-site is a big expense. In a scenario in which the bacteria are fed refined sugar, total costs for producing the rare earths are roughly $1.6 million a year, leaving just $150,000 in profits. Switching from sugar to corn stalks, husks and other harvest leftovers, however, would slash costs by about $500,000, raising profits to about $650,000.

Other microbes can also help extract rare earths and take the process even further. A few years ago, researchers discovered that some bacteria that metabolize rare earths produce a protein that preferentially grabs onto these metals. This protein, lanmodulin, can separate rare earths from each other, such as neodymium from dysprosium — two components of rare earth magnets. A lanmodulin-based system might eliminate the need for the many chemical solvents typically used in such separation. And the waste left behind — the protein — would be biodegradable. But whether the system will pan out on a commercial scale is unknown.
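The plant economics described above reduce to simple arithmetic; a sketch using the round numbers quoted here (the paper’s exact figures differ slightly):

```python
# Hypothetical bio-leaching plant recycling 19,000 metric tons of catalyst a year.
revenue = 1_750_000               # ~$1.75 million in annual revenue

costs_with_sugar = 1_600_000      # feeding the bacteria refined sugar
profit_with_sugar = revenue - costs_with_sugar            # ~$150,000

savings_from_crop_waste = 500_000  # switching feedstock to corn stalks, husks, etc.
costs_with_crop_waste = costs_with_sugar - savings_from_crop_waste
profit_with_crop_waste = revenue - costs_with_crop_waste  # ~$650,000
```

The feedstock swap quadruples the margin without touching revenue, which is why the choice of what to feed the bacteria dominates the economics.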
How to pull rare earths from discarded magnets

Another approach already being commercialized skips the acids and uses copper salts to pull the rare earths from discarded magnets, a valuable target. Neodymium-iron-boron magnets are about 30 percent rare earth by weight and the single largest application of the metals in the world. One projection suggests that recovering the neodymium in magnets from U.S. hard disk drives alone could meet about 5 percent of the world’s demand outside of China before the end of the decade.
Nlebedim led a team that developed a technique that uses copper salts to leach rare earths out of shredded electronic waste that contains magnets. Dunking the e-waste in a copper salt solution at room temperature dissolves the rare earths in the magnets. Other metals can be scooped out for their own recycling, and the copper can be reused to make more salt solution. Next, the rare earths are solidified and, with the help of additional chemicals and heating, transformed into powdered minerals called rare earth oxides. The process, which has also been used on material left over from magnet manufacturing that typically goes to waste, can recover 90 to 98 percent of the rare earths, and the material is pure enough to make new magnets, Nlebedim’s team has demonstrated.
In a best-case scenario, using this method to recycle 100 tons of leftover magnet material might produce 32 tons of rare earth oxides and net more than $1 million in profits, an economic analysis of the method suggests.
That study also evaluated the approach’s environmental impacts. Compared with producing one kilogram of rare earth oxide via one of the main types of mining and processing currently used in China, the copper salt method has less than half the carbon footprint. It produces an average of about 50 kilograms of carbon dioxide equivalent per kilogram of rare earth oxide versus 110, Nlebedim’s team reported in 2021 in ACS Sustainable Chemistry & Engineering. But it’s not necessarily greener than all forms of mining. One sticking point is that the process requires toxic ammonium hydroxide and roasting, which consumes a lot of energy, and it still releases some carbon dioxide. Nlebedim’s group is now tweaking the technique. “We want to decarbonize the process and make it safer,” he says.
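The “less than half” comparison is easy to check from the reported per-kilogram numbers; a quick sketch:

```python
# Carbon footprint per kilogram of rare earth oxide produced.
co2_recycling = 50    # kg CO2-equivalent, copper salt method (reported average)
co2_mining = 110      # kg CO2-equivalent, one main Chinese mining/processing route

ratio = co2_recycling / co2_mining
print(f"Recycling emits {ratio:.0%} of the mining footprint")  # under 50 percent
```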
Meanwhile, the technology seems promising enough that TdVib, an Iowa company that designs and manufactures magnetic materials and products, has licensed it and built a pilot plant. The initial aim is to produce two tons of rare earth oxides per month, says Daniel Bina, TdVib’s president and CEO. The plant will recycle rare earths from old hard disk drives from data centers.
Noveon Magnetics, a company in San Marcos, Texas, is already making recycled neodymium-iron-boron magnets. In typical magnet manufacturing, the rare earths are mined, transformed into metal alloys, milled into a fine powder, magnetized and formed into a magnet. Noveon knocks out those first two steps, says company CEO Scott Dunn.
After demagnetizing and cleaning discarded magnets, Noveon directly mills them into a powder before building them back up as new magnets. Unlike with other recycling methods, there’s no need to extract and separate the rare earths out first. The final product can be more than 99 percent recycled magnet, Dunn says, with a small addition of virgin rare earth elements — the “secret sauce,” as he puts it — that allows the company to fine-tune the magnets’ attributes.
Compared with traditional magnet mining and manufacturing, Noveon’s method cuts energy use by about 90 percent, Miha Zakotnik, Noveon’s chief technology officer, and other researchers reported in 2016 in Environmental Technology & Innovation. Another 2016 analysis estimated that for every kilogram of magnet produced via Noveon’s method, about 12 kilograms of carbon dioxide equivalent are emitted. That’s about half as much of the greenhouse gas as conventional magnets.
Dunn declined to share what volume of magnets Noveon currently produces or how much its magnets cost. But the magnets are being used in some industrial applications, for pumps, fans and compressors, as well as some consumer power tools and other electronics.

Rare earth recycling has logistical hurdles

Even as researchers clear technological hurdles, there are still logistical barriers to recycling. “We don’t have the systems for collecting end-of-life products that have rare earths in them,” Fujita says, “and there’s the cost of dismantling those products.” For a lot of e-waste, before rare earth recycling can begin, you have to get to the bits that contain those precious metals.
Noveon has a semiautomated process for removing magnets from hard disk drives and other electronics.
Apple is also trying to automate the recycling process. The company’s Daisy robot can dismantle iPhones. And in 2022, Apple announced a pair of robots called Taz and Dave that facilitate the recycling of rare earths. Taz can gather magnet-containing modules that are typically lost during the shredding of electronics. Dave can recover magnets from taptic engines, Apple’s technology for providing users with tactile feedback when, say, tapping an iPhone screen.
Even with robotic aids, it would still be a lot easier if companies just designed products in a way that made recycling easy, Fujita says.
No matter how good recycling gets, Jowitt sees no getting around the need to ramp up mining to feed our rare earth–hungry society. But he agrees recycling is necessary. “We’re dealing with intrinsically finite resources,” he says. “Better we try and extract what we can rather than just dumping it in the landfill.”
Shape-shifting liquid metal robots might not be limited to science fiction anymore.
Miniature machines can switch from solid to liquid and back again to squeeze into tight spaces and perform tasks like soldering a circuit board, researchers report January 25 in Matter.
This phase-shifting property, which can be controlled remotely with a magnetic field, is thanks to the metal gallium. Researchers embedded the metal with magnetic particles to direct the metal’s movements with magnets. This new material could help scientists develop soft, flexible robots that can shimmy through narrow passages and be guided externally.

Scientists have been developing magnetically controlled soft robots for years. Most existing materials for these bots are made of either stretchy but solid materials, which can’t pass through the narrowest of spaces, or magnetic liquids, which are fluid but unable to carry heavy objects (SN: 7/18/19).
In the new study, researchers blended both approaches after finding inspiration from nature (SN: 3/3/21). Sea cucumbers, for instance, “can very rapidly and reversibly change their stiffness,” says mechanical engineer Carmel Majidi of Carnegie Mellon University in Pittsburgh. “The challenge for us as engineers is to mimic that in the soft materials systems.”
So the team turned to gallium, a metal that melts at about 30° Celsius — slightly above room temperature. Rather than connecting a heater to a chunk of the metal to change its state, the researchers expose it to a rapidly changing magnetic field to liquefy it. The alternating magnetic field generates electricity within the gallium, causing it to heat up and melt. The material resolidifies when left to cool to room temperature.
Since magnetic particles are sprinkled throughout the gallium, a permanent magnet can drag it around. When the material is solid, a magnet can move it at a speed of about 1.5 meters per second. The upgraded gallium can also carry about 10,000 times its weight.
External magnets can still manipulate the liquid form, making it stretch, split and merge. But controlling the fluid’s movement is more challenging, because the particles in the gallium can freely rotate and have unaligned magnetic poles as a result of melting. Because of their various orientations, the particles move in different directions in response to a magnet.
Majidi and colleagues tested their strategy in tiny machines that performed different tasks. In a demonstration straight out of the movie Terminator 2, a toy person escaped a jail cell by melting through the bars and resolidifying in its original form using a mold placed just outside the bars.

On the more practical side, one machine removed a small ball from a model human stomach by melting slightly to wrap itself around the foreign object before exiting the organ. But gallium on its own would turn to goo inside a real human body, since the metal is a liquid at body temperature, about 37° C. A few more metals, such as bismuth and tin, would be added to the gallium in biomedical applications to raise the material’s melting point, the authors say. In another demonstration, the material liquefied and rehardened to solder a circuit board.

Although this phase-shifting material is a big step in the field, questions remain about its biomedical applications, says biomedical engineer Amir Jafari of the University of North Texas in Denton, who was not involved in the work. One big challenge, he says, is precisely controlling magnetic forces inside the human body that are generated from an external device.
“It’s a compelling tool,” says robotics engineer Nicholas Bira of Harvard University, who was also not involved in the study. But, he adds, scientists who study soft robotics are constantly creating new materials.
“The true innovation to come lies in combining these different innovative materials.”
The Arctic today is a hostile place for most primates. But a series of fossils found since the 1970s suggest that wasn’t always the case.
Dozens of fossilized teeth and jaw bones unearthed in northern Canada belonged to two species of early primates — or at least close relatives of primates — that lived in the Arctic around 52 million years ago, researchers report January 25 in PLOS ONE. These remains are the first primate-like fossils ever discovered in the Arctic and tell of a groundhog-sized animal that may have skittered across trees in a swamp that once existed above the Arctic Circle.

The Arctic was significantly warmer during that time. But creatures still had to adapt to extreme conditions such as long winter months without sunlight. These challenges make the presence of primate-like creatures in the Arctic “incredibly surprising,” says coauthor Chris Beard, a paleontologist at the University of Kansas in Lawrence. “No other primate or primate relative has ever been found this far north.”
Between frigid temperatures, limited plant growth and months of perpetual darkness, living in the modern Arctic isn’t easy. This is especially true for primates, which evolved from small, tree-dwelling creatures that largely fed on fruit (SN: 6/5/13). To this day, most primates — humans and a few other outliers like Japan’s snow monkeys excepted — tend to stick to tropical and subtropical forests, largely found around the equator.
But these forests haven’t always been confined to their present location. During the early Eocene Epoch, which started around 56 million years ago, the planet underwent a period of intense warming that allowed forests and their warm-loving residents to expand northward (SN: 11/3/15).
Scientists know about this early Arctic climate in part because of decades of paleontological work on Ellesmere Island in northern Canada. These digs revealed that the area was once dominated by swamps not unlike those found in the southeastern United States today. This ancient, warm, wet Arctic environment was home to a wide array of heat-loving animals, including giant tapirs and crocodile relatives.

For the new study, Beard and his colleagues examined dozens of teeth and jawbone fossils found in the area, concluding that they belong to two species, Ignacius mckennai and Ignacius dawsonae. These two species belonged to a now-extinct genus of small mammals that was widespread across North America during the Eocene. The Arctic variants probably made their way north as the planet warmed, taking advantage of the new habitat opening up near the poles.
Scientists have long debated whether this lineage can be considered true primates or whether they were simply close relatives. Regardless, it’s still “really weird and unexpected” to find primates or their relatives in the area, says Mary Silcox, a vertebrate paleontologist at the University of Toronto Scarborough.
For one thing, Ellesmere Island was already north of the Arctic Circle 52 million years ago. So while conditions may have been warmer and wetter, the swamp was plunged into continuous darkness during the winter months.
Newly arrived Ignacius would have had to adapt to these conditions. Unlike their southern kin, the Arctic Ignacius had unusually strong jaws and teeth suited to eating hard foods, the researchers found. This may have helped these early primates feed on nuts and seeds over the winter, when fruit wasn’t as readily available.
This research can shed light on how animals can adapt to live in extreme conditions. “Ellesmere Island is arguably the best deep time analog for a mild, ice-free Arctic,” says Jaelyn Eberle, a vertebrate paleontologist at the University of Colorado Boulder.
Studying how plants and animals adapted to this remarkable period in Arctic history, Beard says, could offer clues to the Arctic’s future residents.
Birds that dive underwater — such as penguins, loons and grebes — may be more likely to go extinct than their nondiving kin, a new study finds.
Many water birds have evolved highly specialized bodies and behaviors that facilitate diving. Now, an analysis of the evolutionary history of more than 700 water bird species shows that once a bird group gains the ability to dive, the change is irreversible. That inflexibility could help explain why diving birds have an elevated extinction rate compared with nondiving birds, researchers report in the Dec. 21 Proceedings of the Royal Society B.

“There are substantial morphological adaptations for diving,” says Catherine Sheard, an evolutionary biologist at the University of Bristol in England, who was not involved with the study. For instance, birds that plunge into the water from the air, such as gannets and some pelicans, may have tweaks to the neck muscles and the bones in the chest.
It’s possible that some diving birds are evolving under an evolutionary “ratchet,” where adaptations to exploit a certain food source or habitat unlock some new opportunities, but also encourage ever more specialized evolutionary tailoring. These birds may become trapped in their ways, increasing their risk of extinction. That’s especially true if their habitat rapidly changes in some negative way, possibly because of human-caused climate change (SN: 1/16/20).
Evolutionary biologists Josh Tyler and Jane Younger investigated the evolution of diving in Aequorlitornithes, a collection of 727 water bird species across 11 bird groups. The team divided species into either nondiving birds, or one of three diving types: foot-propelled pursuit (such as loons and grebes), wing-propelled pursuit (like penguins and auks) and the plunge divers.
Diving has evolved at least 14 separate times in the water birds, but there were no instances where diving birds reverted to a nondiving form, the researchers found.
The scientists also explored the link between diving and the development of new species, or their demise, in various bird lineages. Among 236 diving bird species, 75, or 32 percent, belong to lineages in which species are going extinct faster than new ones are emerging, by 0.02 species per million years. This elevated extinction rate was more common in the wing-propelled and foot-propelled pursuit divers than in the plunge divers. In bird lineages that don’t dive, by contrast, new species are emerging faster than old ones are dying out, by 0.1 species per million years.
“The more specialized you become, the more reliant you are on a particular diet, foraging strategy or environment,” says Tyler, of the University of Bath in England. “The range of environments available for foraging is much larger for the nondiving birds than for the specialist divers, and this may play into their ability to adapt and thrive.”
Within diving bird groups, the less specialized, the better. Take penguins, a group that has become the subject of a fair share of conservation concern (SN: 8/1/18). The researchers point out that gentoo penguins (Pygoscelis papua) — which have a broad diet — have larger population sizes than related chinstrap penguins (P. antarcticus) that eat mostly krill, and may actually be as many as four very recently diverged species. The International Union for the Conservation of Nature considers both penguin species to be of “least concern” in terms of imminent extinction risk. But chinstrap numbers are declining in some areas, while gentoo population numbers remain generally stable.
If some diving birds are being trapped in their environments by their own adaptations, that doesn’t bode well for their long-term survival, say Tyler and Younger, who is at the University of Tasmania in Hobart.
According to the IUCN, 156 species, or about one-fifth, of the 727 species of water birds are considered vulnerable, endangered or critically endangered. The researchers calculate that of the 75 diving bird species from lineages with heightened extinction rates, 24 species, or nearly one-third, are already listed as threatened.
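The fractions in this paragraph work out as stated; a quick check:

```python
# IUCN threat status among the 727 water bird species in the analysis.
water_bird_species = 727
threatened_species = 156
threatened_share = threatened_species / water_bird_species   # about one-fifth

# Diving species from lineages with heightened extinction rates.
divers_in_at_risk_lineages = 75
already_threatened = 24
threatened_diver_share = already_threatened / divers_in_at_risk_lineages  # nearly one-third
```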
We spend so much time making sure wildlife stays away from us, whether that’s setting traps, building fences or putting out poisons. Sure, unwanted guests are annoying. But why do we consider some animals “pests”? It’s all about perspective, says science journalist Bethany Brookshire. “We can put poison out for rats and protest their use as laboratory animals. We can shoot deer in the fall and show their adorable offspring to our children in the spring,” she writes in her new book, Pests: How Humans Create Animal Villains.

Brookshire argues that we deem animals “pests” when we fear them (like snakes). Or when they thrive in a niche we unintentionally created for them (think rats in the New York subway). Or when they find a way to live in a habitat now dominated by humans (all those deer in the suburbs). Sometimes we demonize an animal if we feel like it’s threatening our ability to control the landscape (like coyotes that attack our livestock, pets and even children).
Through the lens of science, history, culture, religion, personal anecdotes and a big dose of humor, Brookshire breaks down how our perspective shapes our relationships with our animal neighbors. She also goes into the field — trailing rats, hunting pythons, taming feral cats, tracking drugged-up bears — to see firsthand how pests are treated.
Science News spoke with Brookshire, a former staff writer for Science News for Students (now Science News Explores), about what we can learn from pests and how we can coexist with them. The following conversation has been edited for clarity and brevity.
SN: What inspired you to write this book?
Brookshire: I wrote a news story that was about mice living with humans (SN: 4/19/17). [It was based on a study] showing that we’ve had house mice since we’ve had houses. I love the fact that humans have had these other animals taking advantage of the ecosystems that we create basically since we started living settled life. Every location that has humans has their “rat.” Sometimes that’s a rat, and sometimes it’s a pigeon or a cockatoo or a lizard or a horse. It’s not about what these animals are doing. Animals live in ecosystems that we create, and we hate animals that live too close.
SN: What surprised you during your research?
Brookshire: The reflexiveness of people’s responses [to pests]. People respond emotionally. When you make them pause and think about it, they go, “Oh wow, that doesn’t make any sense. I should not be caught trying to kill a raccoon with a sword.” But in the moment, you’re so wrapped up in the violation of what you see as your personal space.
The other thing is the extent to which our disdain of pests is wrapped up in social justice. A lot of times we see this hatred and disgust for animals that we see as “low class.” High-class people don’t have rats. And that’s really about social justice, about infrastructure and the ability of people to live in clean houses, store their food properly or even have a house at all.
Also, the way we deal with these animals often has vestiges of colonialism, as in the chapter on elephants. [In Kenya, European colonists] made people grow corn and sugarcane, which elephants love. Colonization created national park systems that assumed that humans had no place in wilderness, shoving out Indigenous pastoralists. Colonization created the market for poached ivory. And colonizing people assumed that Indigenous people did not like elephants or know their benefits. We are living with the consequences. Many modern efforts at elephant protection are spearheaded by Western people, and they assume the biggest issue with elephants is poaching and that Indigenous people don’t know what’s best for themselves or the elephants. In fact, human-elephant conflict [which includes elephant crop raids] is the far bigger problem, and Indigenous people have a long history of coexisting with elephants.
SN: In the book, you looked at many different cultures and included Indigenous voices.
Brookshire: It’s important to realize there’s more than one way to look at the world. Learning from other cultures helps us understand our biases. It’s only when you get outside of your own beliefs that you realize that’s not just the way things are.
SN: That shows up when you write about the Karni Mata Temple in India, also known as the Temple of Rats. Temple rats are not treated as pests, but a rat in a house would be.
Brookshire: That’s the result of context. And you see that in Western cultures all the time. People love squirrels. Well, they’re basically rats with better PR. Then you have people who have pet rats, who would probably scream if a sewer rat ran by.
SN: Are there any animals that you consider a pest?
Brookshire: No. The animal that I’ve probably come away with the most negative impression of is humans. It’s funny because we think we can extinct anything. And I love how these animals have gone: “Oh, poison? That’s cute.” “Oh, a trap? You’re funny.” We’ve tried to use electric fences on elephants [to stop them from eating crops]. And elephants are like, “Guess what? Ivory doesn’t conduct electricity.” Even if they don’t have tusks, elephants just pick up a log [to destroy the fence].
SN: Are you hoping to change people’s minds about pests?
Brookshire: I hope that they will ask why they respond to pests the way they do. Instead of just going, “This animal bothers me,” ask why, and whether that response makes sense. I also hope it opens more curiosity about the animals around us. I learned from Indigenous groups just how much knowledge they have of the animals in their ecosystem. I hope more people learn. A world that you know a lot about is just a better world to live in.
In the battle against the invasive house mouse on islands, scientists are using the rodent’s own genes against it.
With the right tweaks, introducing a few hundred genetically altered mice could drive an island’s invasive mouse population to extinction in about 25 years, researchers report in the Nov. 15 Proceedings of the National Academy of Sciences. The trick is adding the changes to a section of mouse DNA that gets inherited far more often than it should. Scientists have been creating similar extra-inheritable genes — called gene drives — in the lab. The chunks are designed to get passed on to most or all of an animal’s offspring instead of the usual half, and make those offspring infertile in the bargain. Scientists have used gene drives to reduce populations of mosquitoes and fruit flies (SN: 12/17/18).
But mammals are a different story. Scientists have previously synthesized a gene drive that gets passed on in mice about 80 percent of the time (SN: 1/23/19). But the drive isn’t strong enough to stop a population quickly.
Luckily, nature has it handled. A haplotype is a naturally occurring group of genes that gets passed on as a unit during reproduction. The genome of the house mouse (Mus musculus) has a particular haplotype, called the t haplotype, that gets passed on to offspring more than 95 percent of the time, instead of the typical 50 percent.
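That transmission distortion is what makes a gene drive spread. As a rough illustration — a toy model of our own, not the study’s simulation — the sketch below tracks a drive allele’s frequency under random mating, comparing fair Mendelian inheritance with 95 percent transmission. It ignores sterility effects and population size:

```python
def next_gen_drive_freq(q, tau):
    """One generation of random mating.

    Genotype frequencies: q^2 drive homozygotes, 2q(1-q) heterozygotes,
    (1-q)^2 wild type. Heterozygotes transmit the drive allele with
    probability tau (0.5 = fair Mendelian inheritance); homozygotes
    always transmit it.
    """
    return q * q + 2 * q * (1 - q) * tau

def simulate(q0, tau, generations):
    """Return the allele-frequency trajectory over the given generations."""
    q = q0
    history = [q]
    for _ in range(generations):
        q = next_gen_drive_freq(q, tau)
        history.append(q)
    return history

# Start with the drive allele in just 0.1 percent of the gene pool.
mendelian = simulate(0.001, tau=0.50, generations=15)
t_haplo = simulate(0.001, tau=0.95, generations=15)

print(f"Mendelian allele after 15 generations:  {mendelian[-1]:.4f}")
print(f"95%-transmission drive after 15:        {t_haplo[-1]:.4f}")
```

With fair inheritance the rare allele’s frequency never budges, while the 95-percent drive sweeps from 0.1 percent to near fixation within about 15 generations — which is why piggybacking CRISPR onto the t haplotype is so attractive.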
This natural gene drive has benefits, says Anna Lindholm, a biologist at the University of Zurich who was not involved in the study. It “evolved naturally and continues to be present in the wild, and we have as yet not found resistance to it in wild populations,” she says. It’s also not found in species besides M. musculus, meaning it probably won’t spread to other, noninvasive mouse species.
Molecular biologist Paul Thomas and his colleagues decided to target the t haplotype with the cut-and-paste molecular tool called CRISPR/Cas9 (SN: 8/24/16). They used CRISPR to insert the gene sequence for the CRISPR tool itself into the t haplotype. When a male mouse carrying the altered t haplotype mates with a female, the inserted genes for the CRISPR tool spring into action. The tool uses a special genetic guide to target and inactivate the gene for the hormone prolactin — rendering any baby female mice infertile.
The best part is that the natural t haplotype can also sterilize males, says Thomas, of the University of Adelaide in Australia. Males with two copies — homozygous males — won’t reproduce at all.
“If you could get a t to spread through a population, you could get homozygous males being sterile,” he says. “And with the addition of the CRISPR element on top of that, we get homozygous females that are also sterile.” To find out how well the t haplotype mice do on an island where mice are wreaking havoc on biodiversity, the scientists used a computer simulation of an island with 200,000 mice. The team found that adding just 256 mice with the CRISPR-altered t haplotype could successfully drive the mouse population to zero in around 25 years. Even without CRISPR, adding mice with the normal t haplotype could tank the population in about 43 years.
But models aren’t mice. In a final test, Thomas and his colleagues made the model reality. The team altered the t haplotype in a small group of mice in the lab and used genetic tests to show that those mice would pass on their new genetics 95 percent of the time.
“This is a clever idea, to build on the t haplotype natural drive system and use CRISPR, not for spreading the construct, but for damaging genes necessary for female fertility,” Lindholm says. “This is a big advance in the development of new tools to control invasive mouse populations.”
The next step, Thomas says, will be to test the effects in real populations of mice in secure enclosures, to find out if the genetically tweaked t can stop mice from reproducing. The scientists also want to ensure that any engineered mice released into the wild have some safety mechanism in place, so other mice elsewhere remain unaffected.
The final version might target tiny mutations that only occur on one island where the pest population is isolated, Thomas suggests. If the mouse escaped onto the mainland, its altered genes would have no effect on the local mice. The scientists also want to consult with people living in the area, as officials did when genetically modified mosquitoes were released in Florida (SN: 5/14/21).
Finally, he notes, 25 years is a long wait for some endangered island populations. “We would love to see CRISPR work faster,” he says. “It’s still a work in progress.”
Just how hot is your chili pepper? A new chili-shaped device could quickly signal whether adding the pepper to a meal might set your mouth ablaze.
Called the Chilica-pod, the device detects capsaicin, a chemical compound that helps give peppers their sometimes painful kick. In general, the more capsaicin a pepper has, the hotter it tastes. The Chilica-pod is sensitive, capable of detecting extremely low levels of the fiery molecule, researchers report in the Oct. 23 ACS Applied Nano Materials.
The device could someday be used to test cooked meals or fresh peppers, says analytical chemist Warakorn Limbut of Prince of Songkla University in Hat Yai, Thailand. People with a capsaicin allergy could use the gadget to avoid the compound, or farmers could test harvested peppers to better indicate their spiciness, he says.
A pepper’s relative spiciness typically is conveyed in Scoville heat units — an imperfect measurement determined by a panel of human taste testers. Other, more precise methods for determining spiciness are time-intensive and involve expensive equipment, making them unsuitable for a quick answer.
Enter the portable, smartphone-compatible Chilica-pod. Built by Limbut and colleagues, the instrument’s sensor is composed of stacks of graphene sheets. When a drop of a chili pepper and ethanol solution is added to the sensor, the capsaicin from the pepper triggers the movement of electrons among the graphene atoms. The more capsaicin the solution has, the stronger the electrical current through the sheets.
The Chilica-pod registers that electrical activity and, once its “stem” is plugged into a smartphone, sends the information to an app for analysis. The device can detect capsaicin levels as low as 0.37 micromoles per liter of solution, equivalent to the amount in a pepper with no heat, one test showed.
Limbut’s team used the Chilica-pod to individually measure six dried chili peppers from a local market. The peppers’ capsaicin concentrations ranged from 7.5 to 90 micromoles per liter of solution, the team found. When translated to Scoville heat units, that range corresponds to the spice of peppers like serrano or cayenne — mild varieties compared to the blazing hot Carolina reaper, one of the world’s hottest peppers (SN: 4/9/18).
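For a sense of the unit conversions involved, here is a small back-of-the-envelope sketch — our own arithmetic, not a calculation from the study. The molar-mass conversion is standard; the 16-SHU-per-ppm pungency factor for capsaicin is the figure commonly used in instrument-based Scoville estimates; and the extraction ratio is a purely hypothetical placeholder, since the study’s actual dilution isn’t given here:

```python
# Molar mass of capsaicin (C18H27NO3) in g/mol -- a textbook value.
CAPSAICIN_G_PER_MOL = 305.4

# Instrument-based Scoville estimates commonly multiply capsaicinoid
# content in ppm (mg per kg of pepper) by roughly 16, capsaicin's
# approximate pungency factor (pure capsaicin ~= 16 million SHU).
SHU_PER_PPM = 16

def umol_per_l_to_mg_per_l(umol_per_l):
    """Convert a solution concentration from micromoles/L to mg/L."""
    return umol_per_l * CAPSAICIN_G_PER_MOL / 1000.0

def rough_shu(umol_per_l, g_pepper_per_l):
    """Very rough Scoville estimate for a pepper extract.

    g_pepper_per_l is the mass of pepper extracted per liter of
    ethanol -- a HYPOTHETICAL ratio, not a figure from the study.
    """
    mg_capsaicin = umol_per_l_to_mg_per_l(umol_per_l)        # mg per L of extract
    ppm_in_pepper = mg_capsaicin / (g_pepper_per_l / 1000.0) # mg per kg of pepper
    return ppm_in_pepper * SHU_PER_PPM

# The detection limit and the hottest market sample reported above:
for conc in (0.37, 90.0):
    print(f"{conc:6.2f} umol/L -> {umol_per_l_to_mg_per_l(conc):7.3f} mg/L")
```

Under an assumed 10 g of pepper per liter of solvent, the 90-micromolar sample works out to roughly 44,000 SHU — in the cayenne range, consistent with the article’s characterization.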
Paul Bosland, a plant geneticist and chili breeder at New Mexico State University in Las Cruces who wasn’t involved in the study, notes that capsaicin is just one of at least 24 related compounds that give peppers heat. “I would hope that [the device] could read them all,” he says.
A mucus-wicking robotic pill may offer a new way to deliver meds.
The multivitamin-sized device houses a motor and a cargo hold for drugs, including ones that are typically given via injections or intravenously, such as insulin and some antibiotics. If people could take such drugs orally, they could potentially avoid daily shots or a hospital stay, which would be “a huge game changer,” says MIT biomedical engineer Shriya Srinivasan.
But drugs that enter the body via the mouth face a tough journey. They encounter churning stomach acid, raging digestive enzymes and sticky slicks of mucus in the gut. Intestinal mucus “sort of acts like Jell-O,” Srinivasan says. The goo can trap drug particles, preventing them from entering the bloodstream.
The new device, dubbed RoboCap, whisks away this problem. The pill uses surface grooves, studs and torpedo-inspired fins to scrub away intestinal mucus like a miniature brush whirling inside a bottle. In experiments in pigs, RoboCap tunneled through mucus lining the walls of the small intestine, depositing insulin or the IV antibiotic vancomycin along the way, Srinivasan and colleagues report September 28 in Science Robotics. After churning for about 35 minutes, the pill continued its trip through the gut and eventually out of the body.
RoboCap is the latest pill-like gadget made to be swallowed. In 2019, some of the same researchers who developed RoboCap debuted a different device — one that injects drugs by pricking the inside of the stomach (SN: 2/7/19). That pea-sized injector was not designed to work in the small intestine, where some drugs are most easily absorbed. The RoboCap may also be able to deliver larger drug payloads, Srinivasan says.
Great scientists become immortalized in various ways.
Some through names for obscure units of measurement (à la Hertz, Faraday and Curie). Others in elements on the periodic table (Mendeleev, Seaborg, Bohr, among many others). A few become household names symbolizing genius — like Newton in centuries past and nowadays, Einstein. But only one has been honored on millions and millions of cartons of milk: the French chemist, biologist and evangelist for experimental science Louis Pasteur.
Pasteur was born 200 years ago this December, the most significant scientist birthday bicentennial since Charles Darwin’s in 2009. And Pasteur ranked behind only Darwin among the most exceptional biological scientists of the 19th century.
Pasteur not only made milk safe to drink, but also rescued the beer and wine industry. He established the germ theory of disease, saved the French silkworm population, confronted the scourges of anthrax and rabies, and transformed the curiosity of vaccination against smallpox into a general strategy for treating and preventing human diseases. He invented microbiology and established the foundations for immunology. Had he been alive after 1901, when Nobel Prizes were first awarded, he would have deserved one every year for a decade. No other single scientist demonstrated more dramatically the benefit of science for humankind.
He was not, however, exactly a saint. A Pasteur biographer, Hilaire Cuny, called him “a mass of contradictions.” Pasteur was ambitious and opportunistic, sometimes arrogant and narrow-minded, immodest, undiplomatic and uncompromising. In the scientific controversies he engaged in (and there were many), he was pugnacious and belligerent. He did not suffer criticism silently and was often acerbic in his responses. To his laboratory assistants, he was demanding, dictatorial and aloof. Despite his revolutionary spirit in pursuing science, in political and social matters, he was conformist and deferential to authority.
And yet he was a tireless worker, motivated by service to humankind, faithful to his family and unwaveringly honest. He was devoted to truth, and therefore also to science.
In his youth, Pasteur did not especially excel as a student. His interests inclined toward art rather than science, and he did display exceptional skill at drawing and painting. But in light of career considerations (his father wanted him to be a scholar), Pasteur abandoned art for science and so applied to the prestigious École Normale Supérieure in Paris for advanced education. He finished 15th in the competitive entrance examination, good enough to secure admission. But not good enough for Pasteur. He spent another year on further studies emphasizing physical sciences and then took the École Normale exam again, finishing fourth. That was good enough, and he entered the school in 1843. There he earned his doctoral degree, in physics and chemistry, in 1847.
Among his special interests at the École Normale was crystallography. In particular he was drawn to investigate tartaric acid. It’s a chemical found in grapes responsible for tartar, a potassium compound that collects on the surfaces of wine vats. Scientists had recently discovered that tartaric acid possesses the intriguing power of twisting light — that is, rotating the orientation of light waves’ vibrations. In light that has been polarized (by passing it through certain crystals, filters or some sunglasses), the waves are all aligned in a single plane. Polarized light that enters a tartaric acid solution vibrating in one plane emerges vibrating in a different one.
Even more mysteriously, another acid (paratartaric acid, or racemic acid), with the exact same chemical composition as tartaric acid, did not twist light at all. Pasteur found that suspicious. He began a laborious study of the crystals of salts derived from the two acids. He discovered that racemic acid crystals could be sorted into two asymmetric mirror-image shapes, like pairs of right-handed and left-handed gloves. All the tartaric acid crystals, on the other hand, had shapes with identical asymmetry, analogous to gloves that were all right-handed. Pasteur deduced that the asymmetry in the crystals reflected the asymmetric arrangement of atoms in their constituent molecules. Tartaric acid twisted light because of the asymmetry of its molecules, while in racemic acid, the two opposite shapes canceled out each other’s twisting effects.
Pasteur built the rest of his career on this discovery. His research on tartaric acid and wine led eventually to profound realizations about the relationship between microbes and human disease. Before Pasteur, most experts asserted that fermentation was a natural nonbiological chemical process. Yeast, a necessary ingredient in the fermenting fluid, was supposedly a lifeless chemical acting as a catalyst. Pasteur’s experiments showed yeast to be alive, a peculiar kind of “small plant” (now known to be a fungus) that caused fermentation by biological activity.
Pasteur demonstrated that, in the absence of air, yeast acquired oxygen from sugar, converting the sugar to alcohol in the process. “Fermentation by yeast,” he wrote, is “the direct consequence of the processes of nutrition,” a property of a “minute cellular plant … performing its respiratory functions.” Or more succinctly, he proclaimed that “fermentation … is life without air.” (Later scientists found that yeast accomplished fermentation by emitting enzymes that catalyzed the reaction.)
Pasteur also noticed that additional microorganisms present during fermentation could be responsible for the process going awry, a problem threatening the viability of French winemaking and beer brewing. He solved that problem by developing a method of heating that eliminated the bad microorganisms while preserving the quality of the beverages. This method, called “pasteurization,” was later applied to milk, eliminating the threat of illness from drinking milk contaminated by virulent microorganisms. Pasteurization became standard public health practice in the 20th century.
Incorporating additional insights from studies of other forms of fermentation, Pasteur summarized his work on microbial life in a famous paper published in 1857. “This paper can truly be regarded as the beginning of scientific microbiology,” wrote the distinguished microbiologist René Dubos, who called it “one of the most important landmarks of biochemical and biological sciences.”
The germ theory of disease is born
Pasteur’s investigations of the growth of microorganisms in fermentation collided with another prominent scientific issue: the possibility of spontaneous generation of life. Popular opinion even among many scientists held that microbial life self-generated under the proper conditions (spoiled meat, for example). Demonstrations by the 17th century Italian scientist Francesco Redi challenged that belief, but the case against spontaneous generation was not airtight. In the early 1860s Pasteur undertook a series of experiments that should have left no doubt that spontaneous generation, under conditions encountered on Earth today, was an illusion. Yet he was nevertheless accosted by critics, such as the French biologist Charles-Philippe Robin, to whom he returned verbal fire. “We trust that the day will come when M. Robin will … acknowledge that he has been in error on the subject of the doctrine of spontaneous generation, which he continues to affirm, without adducing any direct proofs in support of it,” Pasteur remarked.
It was his work on spontaneous generation that led Pasteur directly to the development of the germ theory of disease.
For centuries people had suspected that some diseases must be transmitted from person to person by close contact. But determining exactly how that happened seemed beyond the scope of scientific capabilities. Pasteur, having discerned the role of germs in fermentation, saw instantly that something similar to what made wine go bad might also harm human health.
After disproving spontaneous generation, he realized that there must exist “transmissible, contagious, infectious diseases of which the cause lies essentially and solely in the presence of microscopic organisms.” For some diseases, at least, it was necessary to abandon “the idea of … an infectious element suddenly originating in the bodies of men or animals.” Opinions to the contrary, he wrote, gave rise “to the gratuitous hypothesis of spontaneous generation” and were “fatal to medical progress.”
His first foray into applying the germ theory of disease came during the late 1860s in response to a decline in French silk production because of diseases afflicting silkworms. After success in tackling the silkworms’ maladies, he turned to anthrax, a terrible illness for cattle and humans alike. Many medical experts had long suspected that some form of bacteria caused anthrax, but it was Pasteur’s series of experiments that isolated the responsible microorganism, verifying the germ theory beyond doubt. (Similar work by Robert Koch in Germany around the same time provided further confirmation.)
Understanding anthrax’s cause led to the search for a way to prevent it. In this case, a fortuitous delay in Pasteur’s experiments with cholera in chickens produced a fortunate surprise. In the spring of 1879 he had planned to inject chickens with cholera bacteria he had cultured, but he didn’t get around to it until after his summer vacation. When he injected his chickens in the fall, they unexpectedly failed to get sick. So Pasteur prepared a fresh bacterial culture and brought in a new batch of chickens.
When both the new chickens and the previous batch were given the fresh bacteria, the new ones all died, while nearly all of the original chickens still remained healthy. And so, Pasteur realized, the original culture had weakened in potency over the summer and was unable to cause disease, while the new, obviously potent culture did not harm the chickens previously exposed to the weaker culture. “These animals have been vaccinated,” he declared.
Vaccination, of course, had been invented eight decades earlier, when British physician Edward Jenner protected people from smallpox by first exposing them to cowpox, a similar disease acquired from cows. (Vaccination comes from cowpox’s medical name, vaccinia, from vacca, Latin for cow.) Pasteur realized that the chickens surprisingly displayed a similar instance of vaccination because he was aware of Jenner’s discovery. “Chance favors the prepared mind,” Pasteur was famous for saying.
Because of his work on the germ theory of disease, Pasteur’s mind was prepared to grasp the key role of microbes in the prevention of smallpox, something Jenner could not have known. And Pasteur instantly saw that the specific idea of vaccination for smallpox could be generalized to other diseases. “Instead of depending on the chance finding of naturally occurring immunizing agents, as cowpox was for smallpox,” Dubos observed, “it should be possible to produce vaccines at will in the laboratory.”
Pasteur cultured the anthrax microbe and weakened it for tests in farm animals. Success in such tests not only affirmed the correctness of the germ theory of disease, but also allowed it to gain a foothold in devising new medical practices.
Later Pasteur confronted an even more difficult microscopic foe, the virus that causes rabies. He had begun intense experiments on rabies, a horrifying disease that’s almost always fatal, caused usually by the bites of rabid dogs or other animals. His experiments failed to find any bacterial cause for rabies, leading him to realize that it must be the result of some agent too small to see with his microscope. He could not grow cultures in lab dishes of what he could not see. So instead he decided to grow the disease-causing agent in living tissue — the spinal cords of rabbits. He used dried-out strips of spinal cord from infected rabbits to vaccinate other animals that then survived rabies injections.
Pasteur hesitated to test his rabies treatment on humans. Still, in 1885 when a mother brought to his lab a 9-year-old boy who had been badly bitten by a rabid dog, Pasteur agreed to administer the new vaccine. After a series of injections, the boy recovered fully. Soon more requests came for the rabies vaccine, and by early the next year over 300 rabies patients had received the vaccine and survived, with only one death among them.
Popularly hailed as a hero, Pasteur was also vilified by some hostile doctors, who considered him an uneducated interloper in medicine. Vaccine opponents complained that his vaccine was an untested method that might itself cause death. But of course, critics had also rejected Pasteur’s view of fermentation, the germ theory of disease and his disproof of spontaneous generation. Pasteur stood his ground and eventually prevailed (although he did not turn out to be right about everything). His attitude and legacy of accomplishments inspired 20th century scientists to develop vaccines for more than a dozen deadly diseases. Still more diseases succumbed to antibiotics, following the discovery of penicillin by Alexander Fleming — who declared, “Without Pasteur I would have been nothing.”
Even in Pasteur’s own lifetime, thanks to his defeat of rabies, his public reputation was that of a genius.
Pasteur’s scientific legacy
As geniuses go, Pasteur was the opposite of Einstein. To get inspiration for his theories, Einstein imagined riding alongside a light beam or daydreamed about falling off a ladder. Pasteur stuck to experiments. He typically initiated his experiments with a suspected result in mind, but he was scrupulous in verifying the conclusions he drew from them. Preconceived ideas, he said, can guide the experimenter’s interrogation of nature but must be abandoned in light of contrary evidence. “The greatest derangement of the mind,” he declared, “is to believe in something because one wishes it to be so.”
So even when Pasteur was sure his view was correct, he insisted on absolute proof, conducting many experiments over and over with variations designed to rule out all but the true interpretation.
“If Pasteur was a genius, it was not through ethereal subtlety of mind,” wrote Pasteur scholar Gerald Geison. Rather, he exhibited “clear-headedness, extraordinary experimental skill and tenacity — almost obstinacy — of purpose.” His tenacity, or obstinacy, helped him persevere through several personal tragedies, such as the deaths of three of his daughters, in 1859, 1865 and 1866. And then in 1868 he suffered a cerebral hemorrhage that left him paralyzed on his left side. But that did not slow his pace or keep him from continuing his investigations.
“Whatever the circumstances in which he had to work, he never submitted to them, but instead molded them to the demands of his imagination and his will,” Dubos wrote. “He was probably the most dedicated servant that science ever had.”
To the end of his life, Pasteur remained dedicated to science and the scientific method, stressing the importance of experimental science for the benefit of society. Laboratories are “sacred institutions,” he asserted. “Demand that they be multiplied and adorned; they are the temples of wealth and of the future.”
Three years before his death in 1895, Pasteur further extolled the value of science and asserted his optimism that the scientific spirit would prevail. In an address, delivered for him by his son, at a ceremony at the Sorbonne in Paris, he expressed his “invincible belief … that science and peace will triumph over ignorance and war, that nations will unite, not to destroy, but to build, and that the future will belong to those who will have done most for suffering humanity.”
Two hundred years after his birth, ignorance and war remain perniciously prominent, as ineradicable as the microbes that continue to threaten public health, with the virus causing COVID-19 the latest conspicuous example. Vaccines, though, have substantially reduced the risks from COVID-19, extending the record of successful vaccines that have already tamed not only smallpox and rabies, but also polio, measles and a host of other once deadly maladies.
Yet even though vaccines have saved countless millions of lives, some politicians and so-called scientists who deny or ignore overwhelming evidence continue to condemn vaccines as more dangerous than the diseases they prevent. True, some vaccines can induce bad reactions, even fatal in a few cases out of millions of vaccinations. But shunning vaccines today, as advocated in artificially amplified social media outrage, is like refusing to eat because some people choke to death on sandwiches.
Today, Pasteur would be vilified just as he was in his own time, probably by some people who don’t even realize that they can safely drink milk because of him. Nobody knows exactly what Pasteur would say to these people now. But it’s certain that he would stand up for truth and science, and would be damn sure to tell everybody to get vaccinated.