Here’s how polar bears might get traction on snow

Tiny “fingers” can help polar bears get a grip.

Like the rubbery nubs on the bottom of baby socks, microstructures on the bears’ paw pads offer some extra friction, scientists report November 1 in the Journal of the Royal Society Interface. The pad protrusions may keep polar bears from slipping on snow, says Ali Dhinojwala, a polymer scientist at the University of Akron in Ohio who has also studied the sticking power of gecko feet (SN: 8/9/05).

Nathaniel Orndorf, a materials scientist at Akron who focuses on ice, adhesion and friction, was interested in the work Dhinojwala’s lab did on geckos, but “we can’t really put geckos on the ice,” he says. So he turned to polar bears.

Orndorf teamed up with Dhinojwala and Austin Garner, an animal biologist now at Syracuse University in New York, and compared the paws of polar bears, brown bears, American black bears and a sun bear. All but the sun bear had paw pad bumps. But the polar bears’ bumps looked a little different. For a given diameter, their bumps tend to be taller, the team found. That extra height translates to more traction on lab-made snow, experiments with 3-D printed models of the bumps suggest.

Until now, scientists didn’t know that bump shape could make the difference between gripping and slipping, Dhinojwala says.

Polar bear paw pads are also ringed with fur and are smaller than those of other bears, the team reports, adaptations that might let the Arctic animals conserve body heat as they tread upon ice. Smaller pads generally mean less real estate for grabbing the ground. So extra-grippy pads could help polar bears make the most of what they’ve got, Orndorf says.

Along with bumpy pads, the team hopes to study polar bears’ fuzzy paws and short claws, which might also give the animals a nonslip grip.

Astronomers have found the closest known black hole to Earth

The closest black hole yet found is just 1,560 light-years from Earth, a new study reports. The black hole, dubbed Gaia BH1, is about 10 times the mass of the sun and orbits a sunlike star.

Most known black holes steal and eat gas from massive companion stars. That gas forms a disk around the black hole and glows brightly in X-rays. But hungry black holes are not the most common ones in our galaxy. Far more numerous are the tranquil black holes that are not mid-meal, which astronomers have dreamed of finding for decades. Previous claims of finding such black holes have so far not held up (SN: 5/6/20; SN: 3/11/22).

So astrophysicist Kareem El-Badry and colleagues turned to newly released data from the Gaia spacecraft, which precisely maps the positions of billions of stars (SN: 6/13/22). A star orbiting a black hole at a safe distance won’t get eaten, but it will be pulled back and forth by the black hole’s gravity. Astronomers can detect the star’s motion and deduce the black hole’s presence.
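
The underlying deduction rests on little more than Kepler’s third law: the period and size of the star’s orbit give the total mass of the pair, and whatever the visible star can’t account for belongs to the dark companion. Here is a minimal sketch of that arithmetic in Python, using illustrative round numbers in the ballpark of those reported for Gaia BH1, not the study’s fitted values:

```python
# Kepler's third law in solar units (a in AU, P in years) gives the
# total mass of a two-body system in solar masses:
#     M_star + M_companion = a**3 / P**2
# Illustrative values only, roughly matching Gaia BH1.
a_au = 1.4                  # size (semimajor axis) of the relative orbit, AU
period_yr = 186 / 365.25    # orbital period, years
m_star = 1.0                # mass of the visible sunlike star, solar masses

m_total = a_au**3 / period_yr**2    # total mass of the pair
m_companion = m_total - m_star      # mass of the unseen companion

print(f"Unseen companion: ~{m_companion:.1f} solar masses")
# ~9.6 solar masses, far too heavy to be any plausible unseen star,
# which is what points to a black hole.
```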

Out of hundreds of thousands of stars that looked like they were tugged by an unseen object, just one seemed like a good black hole candidate. Follow-up observations with other telescopes support the black hole idea, the team reports November 2 in Monthly Notices of the Royal Astronomical Society.

Gaia BH1 is the nearest black hole to Earth ever discovered — the next closest is around 3,200 light-years away. But it’s probably not the closest that exists, or even the closest we’ll ever find. Astronomers think there are about 100 million black holes in the Milky Way, but almost all of them are invisible. “They’re just isolated, so we can’t see them,” says El-Badry, of the Harvard-Smithsonian Center for Astrophysics in Cambridge, Mass.

The next data release from Gaia is due out in 2025, and El-Badry expects it to bring more black hole bounty. “We think there are probably a lot that are closer,” he says. “Just finding one … suggests there are a bunch more to be found.”

Snails trace Stone Age trek from Iberia to Ireland

Stone Age people may have carried land snails on a voyage from the Pyrenees to Ireland, an examination of the snails’ DNA reveals.

Scientists have struggled to explain why Ireland shares some plant and animal species with the Iberian Peninsula, but not with the rest of Europe or the British Isles. For example, Cepaea nemoralis land snails on Ireland’s western coast and in the southern Pyrenees share unique white-lipped shells.

To find out if the two populations of white-lipped snails are related, Angus Davison and Adele Grindon of the University of Nottingham in England took DNA samples from the species all over Europe. The researchers found that snails in Ireland and the Pyrenees share a variation in one gene that distinguishes them from other European specimens.

The simplest explanation, Davison and Grindon report June 19 in PLOS ONE, is that humans journeying to Ireland about 8,000 years ago brought along escargot as a food source. “Other explanations get quite convoluted,” Davison says.

Battle of the deer and eagle

When a deer carcass appeared a few meters from a camera trap without obvious predator prints, scientists were a bit puzzled.

The mystery was solved only when the team reviewed 2-week-old footage from their camera and saw images of an adult golden eagle tearing into the back of a young sika deer. It was a rare sight and, the scientists suggest, the first such attack reported in the Russian Far East.

They published the images and a paper on the predator-prey interaction in the September Journal of Raptor Research.

How scientists are shifting their search for links between diet and dementia

The internet is rife with advice for keeping the brain sharp as we age, and much of it is focused on the foods we eat. Headlines promise that oatmeal will fight off dementia. Blueberries improve memory. Coffee can slash your risk of Alzheimer’s disease. Take fish oil. Eat more fiber. Drink red wine. Forgo alcohol. Snack on nuts. Don’t skip breakfast. But definitely don’t eat bacon.

One recent diet study got media attention, with one headline claiming, “Many people may be eating their way to dementia.” The study, published last December in Neurology, found that people who ate a diet rich in anti-inflammatory foods like fruits, vegetables, beans and tea or coffee had a lower risk of dementia than those who ate foods that boost inflammation, such as sugar, processed foods, unhealthy fats and red meat.

But the study, like most research on diet and dementia, couldn’t prove a causal link. And that’s not good enough to make recommendations that people should follow. Why has it proved such a challenge to pin down whether the foods we eat can help stave off dementia?

First, dementia, like most chronic diseases, is the result of a complex interplay of genes, lifestyle and environment that researchers don’t fully understand. Diet is just one factor. Second, nutrition research is messy. People struggle to recall the foods they’ve eaten, their diets change over time, and modifying what people eat — even as part of a research study — is exceptionally difficult.

For decades, researchers devoted little effort to trying to prevent or delay Alzheimer’s disease and other types of dementia because they thought there was no way to change the trajectory of these diseases. Dementia seemed to be the result of aging and an unlucky roll of the genetic dice.

While scientists have identified genetic variants that boost risk for dementia, researchers now know that people can cut their risk by adopting a healthier lifestyle: avoiding smoking, keeping weight and blood sugar in check, exercising, managing blood pressure and avoiding too much alcohol — the same healthy behaviors that lower the risk of many chronic diseases.

Diet is wrapped up in several of those healthy behaviors, and many studies suggest that diet may also directly play a role. But what makes for a brain-healthy diet? That’s where the research gets muddled.

Despite loads of studies aimed at dissecting the influence of nutrition on dementia, researchers can’t say much with certainty. “I don’t think there’s any question that diet influences dementia risk or a variety of other age-related diseases,” says Matt Kaeberlein, who studies aging at the University of Washington in Seattle. But “are there specific components of diet or specific nutritional strategies that are causal in that connection?” He doubts it will be that simple.

Worth trying
In the United States, an estimated 6.5 million people, the vast majority of whom are over age 65, are living with Alzheimer’s disease and related dementias. Experts expect that by 2060, as the senior population grows, nearly 14 million residents over age 65 will have Alzheimer’s disease. Despite decades of research and more than 100 drug trials, scientists have yet to find a treatment for dementia that does more than curb symptoms temporarily (SN: 7/3/21 & 7/17/21, p. 8). “Really what we need to do is try and prevent it,” says Maria Fiatarone Singh, a geriatrician at the University of Sydney.

Forty percent of dementia cases could be prevented or delayed by modifying a dozen risk factors, according to a 2020 report commissioned by the Lancet. The report doesn’t explicitly call out diet, but some researchers think it plays an important role. After years of fixating on specific foods and dietary components — things like fish oil and vitamin E supplements — many researchers in the field have started looking at dietary patterns.

That shift makes sense. “We do not have vitamin E for breakfast, vitamin C for lunch. We eat foods in combination,” says Nikolaos Scarmeas, a neurologist at National and Kapodistrian University of Athens and Columbia University. He led the study on dementia and anti-inflammatory diets published in Neurology. But a shift from supplements to a whole diet of myriad foods complicates the research. A once-daily pill is easier to swallow than a new, healthier way of eating.

Earning points
Suspecting that inflammation plays a role in dementia, many researchers posit that an anti-inflammatory diet might benefit the brain. In Scarmeas’ study, more than 1,000 older adults in Greece completed a food frequency questionnaire and earned a score based on how “inflammatory” their diet was. The lower the score, the better. For example, fatty fish, which is rich in omega-3 fatty acids, was considered an anti-inflammatory food and earned negative points. Cheese and many other dairy products, high in saturated fat, earned positive points.

During the next three years, 62 people, or 6 percent of the study participants, developed dementia. People with the highest dietary inflammation scores were three times as likely to develop dementia as those with the lowest. Scores ranged from –5.83 to 6.01. Each point increase was linked to a 21 percent rise in dementia risk.
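
A per-point association like that compounds multiplicatively across a score gap. The sketch below shows how a 21-percent-per-point figure scales; it is an illustration of the compounding, not a reconstruction of the study’s statistical model:

```python
# How a "21 percent more risk per point" association compounds across
# a score gap -- illustrative arithmetic, not the study's model.
per_point_rr = 1.21   # reported risk increase per one-point rise

for score_gap in (1, 3, 6, 11.84):   # 11.84 spans the full reported range
    rr = per_point_rr ** score_gap
    print(f"score gap {score_gap:>5}: ~{rr:.1f} times the risk")
# A gap of about six points already compounds to roughly a threefold
# difference, echoing the highest-versus-lowest comparison above.
```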

Such epidemiological studies make connections, but they can’t prove cause and effect. Perhaps people who eat the most anti-inflammatory diets also are those least likely to develop dementia for some other reason. Maybe they have more social interactions. Or it could be, Scarmeas says, that people who eat more inflammatory diets do so because they’re already experiencing changes in their brain that lead them to consume these foods and “what we really see is the reverse causality.”

To sort all this out, researchers rely on randomized controlled trials, the gold standard for providing proof of a causal effect. But in the arena of diet and dementia, these studies have challenges.

Dementia is a disease of aging that takes decades to play out, Kaeberlein says. To show that a particular diet could reduce the risk of dementia, “it would take two-, three-, four-decade studies, which just aren’t feasible.” Many clinical trials last less than two years.

As a work-around, researchers often rely on some intermediate outcome, like changes in cognition. But even that can be hard to observe. “If you’re already relatively healthy and don’t have many risks, you might not show much difference, especially if the duration of the study is relatively short,” says Sue Radd-Vagenas, a nutrition scientist at the University of Sydney. “The thinking is if you’re older and you have more risk factors, it’s more likely we might see something in a short period of time.” Yet older adults might already have some cognitive decline, so it might be more difficult to see an effect.

Many researchers now suspect that intervening earlier will have a bigger impact. “We now know that the brain is stressed from midlife and there’s a tipping point at 65 when things go sour,” says Hussein Yassine, an Alzheimer’s researcher at the Keck School of Medicine of the University of Southern California in Los Angeles. But intervene too early, and a trial might not show any effect. Offering a healthier diet to a 50- or 60-year-old might pay off in the long run but fail to make a difference in cognition that can be measured during the relatively short length of a study.

And it’s not only the timing of the intervention that matters, but also the duration. Do you have to eat a particular diet for two decades for it to have an impact? “We’ve got a problem of timescale,” says Kaarin Anstey, a dementia researcher at the University of New South Wales in Sydney.

And then there are all the complexities that come with studying diet. “You can’t isolate it in the way you can isolate some of the other factors,” Anstey says. “It’s something that you’re exposed to all the time and over decades.”

Food as medicine?
In a clinical trial, researchers often test the effectiveness of a drug by offering half the study participants the medication and half a placebo pill. But when the treatment being tested is food, studies become much more difficult to control. First, food doesn’t come in a pill, so it’s tricky to hide whether participants are in the intervention group or the control group.

Imagine a trial designed to test whether the Mediterranean diet can help slow cognitive decline. The participants aren’t told which group they’re in, but the control group sees that they aren’t getting nuts or fish or olive oil. “What ends up happening is a lot of participants will start actively increasing the consumption of the Mediterranean diet despite being on the control arm, because that’s why they signed up,” Yassine says. “So at the end of the trial, the two groups are not very dissimilar.”

Second, we all need food to live, so a true placebo is out of the question. But what diet should the control group consume? Do you compare the diet intervention to people’s typical diets (which may differ from person to person and country to country)? Do you ask the comparison group to eat a healthy diet but avoid the food expected to provide brain benefits? (Offering them an unhealthy diet would be unethical.)

And tracking what people eat during a clinical trial can be a challenge. Many of these studies rely on food frequency questionnaires to tally up all the foods in an individual’s diet. An ongoing study is assessing the impact of the MIND diet (which combines part of the Mediterranean diet with elements of the low-salt DASH diet) on cognitive decline. Researchers track adherence to the diet by asking participants to fill out a food frequency questionnaire every six to 12 months. But many of us struggle to remember what we ate a day or two ago. So some researchers also rely on more objective measures to assess compliance. For the MIND diet assessment, researchers are also tracking biomarkers in the blood and urine — vitamins such as folate, B12 and vitamin E, plus levels of certain antioxidants.

Another difficulty is that these surveys often don’t account for variables that could be really important, like how the food was prepared and where it came from. Was the fish grilled? Fried? Slathered in butter? “Those things can matter,” says dementia researcher Nathaniel Chin of the University of Wisconsin–Madison.

Plus there are the things researchers can’t control. For example, how does the food interact with an individual’s medications and microbiome? “We know all of those factors have an interplay,” Chin says.

The few clinical trials looking at dementia and diet seem to measure different things, so it’s hard to make comparisons. In 2018, Radd-Vagenas and her colleagues looked at all the trials that had studied the impact of the Mediterranean diet on cognition. There were five at the time. “What struck me even then was how variable the interventions were,” she says. “Some of the studies didn’t even mention olive oil in their intervention. Now, how can you run a Mediterranean diet study and not mention olive oil?”

Another tricky aspect is recruitment. The kind of people who sign up for clinical trials tend to be more educated, more motivated and have healthier lifestyles. That can make differences between the intervention group and the control group difficult to spot. And if the study shows an effect, whether it will apply to the broader, more diverse population comes into question. To sum up, these studies are difficult to design, difficult to conduct and often difficult to interpret.

Kaeberlein studies aging, not dementia specifically, but he follows the research closely and acknowledges that the lack of clear answers can be frustrating. “I get the feeling of wanting to throw up your hands,” he says. But he points out that there may not be a single answer. Many diets can help people maintain a healthy weight and avoid diabetes, and thus reduce the risk of dementia. Beyond that obvious fact, he says, “it’s hard to get definitive answers.”

A better way
In July 2021, Yassine gathered with more than 30 other dementia and nutrition experts for a virtual symposium to discuss the myriad challenges and map out a path forward. The speakers noted several changes that might improve the research.

One idea is to focus on populations at high risk. For example, one clinical trial is looking at the impact of low- and high-fat diets on short-term changes in the brain in people who carry the genetic variant APOE4, a risk factor for Alzheimer’s. One small study suggested that a high-fat Western diet actually improved cognition in some individuals. Researchers hope to get clarity on that surprising result.

Another possible fix is redefining how researchers measure success. Hypertension and diabetes are both well-known risk factors for dementia. So rather than running a clinical trial that looks at whether a particular diet can affect dementia, researchers could look at the impact of diet on one of these risk factors. Plenty of studies have assessed the impact of diet on hypertension and diabetes, but Yassine knows of none launched with dementia prevention as the ultimate goal.

Yassine envisions a study that recruits participants at risk of developing dementia because of genetics or cardiovascular disease and then looks at intermediate outcomes. “For example, a high-salt diet can be associated with hypertension, and hypertension can be associated with dementia,” he says. If the study shows that the diet lowers hypertension, “we achieved our aim.” Then the study could enter a legacy period during which researchers track these individuals for another decade to determine whether the intervention influences cognition and dementia.

One way to amplify the signal in a clinical trial is to combine diet with other interventions likely to reduce the risk of dementia. The Finnish Geriatric Intervention Study to Prevent Cognitive Impairment and Disability, or FINGER, trial, which began in 2009, did just that. Researchers enrolled more than 1,200 individuals ages 60 to 77 who were at an elevated risk of developing dementia and had average or slightly impaired performance on cognition tests. Half received nutritional guidance, worked out at a gym, engaged in online brain-training games and had routine visits with a nurse to talk about managing dementia risk factors like high blood pressure and diabetes. The other half received only general health advice.

After two years, the control group had a 25 percent greater cognitive decline than the intervention group. It was the first trial, reported in the Lancet in 2015, to show that targeting multiple risk factors could slow the pace of cognitive decline.

Now researchers are testing this approach in more than 30 countries. Christy Tangney, a nutrition researcher at Rush University in Chicago, is one of the investigators on the U.S. arm of the study, enrolling 2,000 people ages 60 to 79 who have at least one dementia risk factor. The study is called POINTER, or U.S. Study to Protect Brain Health Through Lifestyle Intervention to Reduce Risk. The COVID-19 pandemic has delayed the research — organizers had to pause the trial briefly — but Tangney expects to have results in the next few years.

This kind of multi-intervention study makes sense, Chin says. “One of the reasons why things are so slow in our field is we’re trying to address a heterogeneous disease with one intervention at a time. And that’s just not going to work.” A trial that tests multiple interventions “allows for people to not be perfect,” he adds. Maybe they can’t follow the diet exactly, but they can stick to the workout program, which might have an effect on its own. The drawback in these kinds of studies, however, is that it’s impossible to tease out the contribution of each individual intervention.

Preemptive guidelines
Two major reports came out in recent years addressing dementia prevention. The first, from the World Health Organization in 2019, recommends a healthy, balanced diet for all adults, and notes that the Mediterranean diet may help people who have normal to mildly impaired cognition.

The 2020 Lancet Commission report, however, does not include diet in its list of modifiable risk factors, at least not yet. “Nutrition and dietary components are challenging to research with controversies still raging around the role of many micronutrients and health outcomes in dementia,” the report notes. The authors point out that a Mediterranean or the similar Scandinavian diet might help prevent cognitive decline in people with intact cognition, but “how long the exposure has to be or during which ages is unclear.” Neither report recommends any supplements.

Plenty of people are waiting for some kind of advice to follow. Improving how these studies are done might enable scientists to finally sort out what kinds of diets can help hold back the heartbreaking damage that comes with Alzheimer’s disease. For some people, that knowledge might be enough to create change.

“Inevitably, if you’ve had Alzheimer’s in your family, you want to know, ‘What can I do today to potentially reduce my risk?’ ” says molecular biologist Heather Snyder, vice president of medical and scientific relations at the Alzheimer’s Association.

But changing long-term dietary habits can be hard. The foods we eat aren’t just fuel; our diets represent culture and comfort and more. “Food means so much to us,” Chin says.

“Even if you found the perfect diet,” he adds, “how do you get people to agree to and actually change their habits to follow that diet?” The MIND diet, for example, suggests people eat less than one serving of cheese a week. In Wisconsin, where Chin is based, that’s a nonstarter, he says.

But it’s not just about changing individual behaviors. Radd-Vagenas and other researchers hope that if they can show the brain benefits of some of these diets in rigorous studies, policy changes might follow. For example, research shows that lifestyle changes can have a big impact on type 2 diabetes. As a result, many insurance providers now pay for coaching programs that help participants maintain healthy diet and exercise habits.

“You need to establish policies. You need to change cities, change urban design. You need to do a lot of things to enable healthier choices to become easier choices,” Radd-Vagenas says. But that takes meatier data than exist now.

How to build better ice towers for drinking water and irrigation

There’s a better way to build a glacier.

During winter in India’s mountainous Ladakh region, some farmers use pipes and sprinklers to construct building-sized cones of ice. These towering, humanmade glaciers, called ice stupas, slowly release water as they melt during the dry spring months for communities to drink or irrigate crops. But the pipes often freeze when conditions get too cold, stifling construction.

Now, preliminary results show that an automated system can erect an ice stupa while avoiding frozen pipes, using local weather data to control when and how much water is spouted. What’s more, the new system uses roughly a tenth the amount of water that the conventional method uses, researchers reported June 23 at the Frontiers in Hydrology meeting in San Juan, Puerto Rico.

“This is one of the technological steps forward that we need to get this innovative idea to the point where it’s realistic as a solution,” says glaciologist Duncan Quincey of the University of Leeds in England, who was not involved in the research. Automation could help communities build larger, longer-lasting ice stupas that provide more water during dry periods, he says.

Ice stupas emerged in 2014 as a means for communities to cope with shrinking alpine glaciers due to human-caused climate change (SN: 5/29/19). Typically, high-mountain communities in India, Kyrgyzstan and Chile pipe glacial meltwater into gravity-driven fountains that sprinkle continuously in the winter. Cold air freezes the drizzle, creating frozen cones that can store millions of liters of water.

The process is simple, though inefficient. More than 70 percent of the spouted water may flow away instead of freezing, says glaciologist Suryanarayanan Balasubramanian of the University of Fribourg in Switzerland.

So Balasubramanian and his team outfitted an ice stupa’s fountain with a computer that automatically adjusted the spout’s flow rate based on local temperatures, humidity and wind speed. Then the scientists tested the system by building two ice stupas in Guttannen, Switzerland — one using a continuously spraying fountain and one using the automated system.

After four months, the team found that the continuously sprinkling fountain had spouted about 1,100 cubic meters of water and amassed 53 cubic meters of ice, with pipes freezing once. The automated system sprayed only around 150 cubic meters of water but formed 61 cubic meters of ice, without any frozen pipes.
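
Those totals imply a striking difference in efficiency, easy to verify with a little arithmetic (volumes as reported; the small density difference between water and ice is ignored):

```python
# Water-to-ice efficiency implied by the reported Guttannen totals.
trials = {
    "continuous fountain": {"water_m3": 1100, "ice_m3": 53},
    "automated system":    {"water_m3": 150,  "ice_m3": 61},
}
for name, t in trials.items():
    efficiency = t["ice_m3"] / t["water_m3"] * 100
    print(f"{name}: ~{efficiency:.0f}% of sprayed water kept as ice")
# continuous fountain: ~5%; automated system: ~41%. The automated
# build also used roughly a tenth the water (150 vs. 1,100 m^3).
```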

The researchers are now trying to simplify their prototype to make it more affordable for high-mountain communities around the world. “We eventually want to reduce the cost so that it is within two months of salary of the farmers in Ladakh,” Balasubramanian says. “Around $200 to $400.”

North America’s oldest skull surgery dates to at least 3,000 years ago

A man with a hole in his forehead, who was interred in what’s now northwest Alabama between around 3,000 and 5,000 years ago, represents North America’s oldest known case of skull surgery.

Damage around the man’s oval skull opening indicates that someone scraped out that piece of bone, probably to reduce brain swelling caused by a violent attack or a serious fall, said bioarchaeologist Diana Simpson of the University of Nevada, Las Vegas. Either scenario could explain fractures and other injuries above the man’s left eye and to his left arm, leg and collarbone.

Bone regrowth on the edges of the skull opening indicates that the man lived for up to one year after surgery, Simpson estimated. She presented her analysis of the man’s remains on March 28 at a virtual session of the annual meeting of the American Association of Biological Anthropologists.

Skull surgery occurred as early as 13,000 years ago in North Africa (SN: 8/17/11). Until now, the oldest evidence of this practice in North America dated to no more than roughly 1,000 years ago.

In his prime, the new record holder likely served as a ritual practitioner or shaman. His grave included items like those found in shamans’ graves at nearby North American hunter-gatherer sites dating to between about 3,000 and 5,000 years ago. Ritual objects buried with him included sharpened bone pins and modified deer and turkey bones that may have been tattooing tools (SN: 5/25/21).

Investigators excavated the man’s grave and 162 others at the Little Bear Creek Site, a seashell-covered burial mound, in the 1940s. Simpson studied the man’s museum-held skeleton and grave items in 2018, shortly before the discoveries were returned to local Native American communities for reburial.

Here are the Top 10 times scientific imagination failed

Science, some would say, is an enterprise that should concern itself solely with cold, hard facts. Flights of imagination should be the province of philosophers and poets.

On the other hand, as Albert Einstein so astutely observed, “Imagination is more important than knowledge.” Knowledge, he said, is limited to what we know now, while “imagination embraces the entire world, stimulating progress.”

So with science, imagination has often been the prelude to transformative advances in knowledge, remaking humankind’s understanding of the world and enabling powerful new technologies.
And yet while sometimes spectacularly successful, imagination has also frequently failed in ways that retard the revealing of nature’s secrets. Some minds, it seems, are simply incapable of imagining that there’s more to reality than what they already know.

On many occasions scientists have failed to foresee ways of testing novel ideas, ridiculing them as unverifiable and therefore unscientific. Consequently it is not too challenging to come up with enough failures of scientific imagination to compile a Top 10 list, beginning with:

  1. Atoms
    By the middle of the 19th century, most scientists believed in atoms. Chemists especially. John Dalton had shown that the simple ratios of different elements making up chemical compounds strongly implied that each element consisted of identical tiny particles. Subsequent research on the weights of those atoms made their reality pretty hard to dispute. But that didn’t deter physicist-philosopher Ernst Mach. Even as late as the beginning of the 20th century, he and a number of others insisted that atoms could not be real, as they were not accessible to the senses. Mach believed that atoms were a “mental artifice,” convenient fictions that helped in calculating the outcomes of chemical reactions. “Have you ever seen one?” he would ask.

Apart from the fallacy of defining reality as “observable,” Mach’s main failure was his inability to imagine a way that atoms could be observed. Even after Einstein proved the existence of atoms by indirect means in 1905, Mach stood his ground. He was unaware, of course, of the 20th century technologies that quantum mechanics would enable, and so did not foresee powerful new microscopes that could show actual images of atoms (and allow a certain computing company to drag them around to spell out IBM).

  2. Composition of stars
    Mach’s views were similar to those of Auguste Comte, a French philosopher who originated the idea of positivism, which denies reality to anything other than objects of sensory experience. Comte’s philosophy led (and in some cases still leads) many scientists astray. His greatest failure of imagination was an example he offered for what science could never know: the chemical composition of the stars.

Unable to imagine anybody affording a ticket on some entrepreneur’s space rocket, Comte argued in 1835 that the identity of the stars’ components would forever remain beyond human knowledge. We could study their size, shapes and movements, he said, “whereas we would never know how to study by any means their chemical composition, or their mineralogical structure,” or for that matter, their temperature, which “will necessarily always be concealed from us.”

Within a few decades, though, a newfangled technology called spectroscopy enabled astronomers to analyze the colors of light emitted by stars. And since each chemical element emits (or absorbs) precise colors (or frequencies) of light, each set of colors is like a chemical fingerprint, an infallible indicator for an element’s identity. Using a spectroscope to observe starlight therefore can reveal the chemistry of the stars, exactly what Comte thought impossible.
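
The logic of the fingerprint is simple enough to capture in a toy program: compare wavelengths picked out of starlight against a table of laboratory-measured lines. The known wavelengths below are standard values; the “observed” spectrum is invented for the demonstration:

```python
# Toy version of spectroscopic fingerprinting: an element counts as
# present if all of its known lines appear in the observed spectrum.
KNOWN_LINES_NM = {
    "hydrogen": [656.3, 486.1, 434.0],   # Balmer series
    "sodium":   [589.0, 589.6],          # sodium D doublet
    "helium":   [587.6, 667.8],
}

def identify(observed_nm, tolerance=0.5):
    """Return elements whose known lines all match some observed line."""
    matches = []
    for element, lines in KNOWN_LINES_NM.items():
        if all(any(abs(obs - line) <= tolerance for obs in observed_nm)
               for line in lines):
            matches.append(element)
    return matches

starlight = [656.4, 486.0, 434.1, 589.1, 589.5]   # invented measurement
print(identify(starlight))   # ['hydrogen', 'sodium']
```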

  3. Canals on Mars
    Sometimes imagination fails because of its overabundance rather than absence. In the case of the never-ending drama over the possibility of life on Mars, that planet’s famous canals turned out to be figments of overactive scientific imagination.

First “observed” in the late 19th century, the Martian canals showed up as streaks on the planet’s surface, described as canali by Italian astronomer Giovanni Schiaparelli. Canali is, however, Italian for channels, not canals. So in this case something was gained (rather than lost) in translation — the idea that Mars was inhabited. “Canals are dug,” remarked British astronomer Norman Lockyer in 1901, “ergo there were diggers.” Soon astronomers imagined an elaborate system of canals transporting water from Martian poles to thirsty metropolitan areas and agricultural centers. (Some observers even imagined seeing canals on Venus and Mercury.)

With more constrained imaginations, aided by better telescopes and translations, belief in the Martian canals eventually faded. It was merely the Martian winds blowing dust (bright) and sand (dark) around the surface in ways that occasionally made bright and dark streaks line up in a deceptive manner — to eyes attached to overly imaginative brains.

  4. Nuclear fission
    In 1934, Italian physicist Enrico Fermi bombarded uranium (atomic number 92) and other elements with neutrons, the particle discovered just two years earlier by James Chadwick. Fermi found that among the products was an unidentifiable new element. He thought he had created element 93, heavier than uranium. He could not imagine any other explanation. In 1938 Fermi was awarded the Nobel Prize in physics for demonstrating “the existence of new radioactive elements produced by neutron irradiation.”

It turned out, however, that Fermi had unwittingly demonstrated nuclear fission. His bombardment products were actually lighter, previously known elements — fragments split from the heavy uranium nucleus. Of course, the scientists later credited with discovering fission, Otto Hahn and Fritz Strassmann, didn’t understand their results either. Hahn’s former collaborator Lise Meitner was the one who explained what they’d done. Another woman, chemist Ida Noddack, had imagined the possibility of fission to explain Fermi’s results, but for some reason nobody listened to her.

  5. Detecting neutrinos
    In the 1920s, most physicists had convinced themselves that nature was built from just two basic particles: positively charged protons and negatively charged electrons. Some had, however, imagined the possibility of a particle with no electric charge. One specific proposal for such a particle came in 1930 from Austrian physicist Wolfgang Pauli. He suggested that a no-charge particle could explain a suspicious loss of energy observed in beta-particle radioactivity. Pauli’s idea was worked out mathematically by Fermi, who named the neutral particle the neutrino. Fermi’s math was then examined by physicists Hans Bethe and Rudolf Peierls, who deduced that the neutrino would zip through matter so easily that there was no imaginable way of detecting its existence (short of building a tank of liquid hydrogen 6 million billion miles wide). “There is no practically possible way of observing the neutrino,” Bethe and Peierls concluded.

But they had failed to imagine the possibility of finding a source of huge numbers of high-energy neutrinos, so that a few could be captured even if almost all escaped. No such source was known until nuclear fission reactors were invented. In the 1950s, Frederick Reines and Clyde Cowan used reactors to definitively establish the neutrino’s existence. Reines later said he sought a way to detect the neutrino precisely because everybody had told him it couldn’t be done.
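
The scale of the problem, and of the reactor work-around, is easy to estimate. The back-of-envelope sketch below uses order-of-magnitude textbook values, assumed here rather than taken from Bethe and Peierls, for the density of liquid hydrogen and the cross section of an MeV-scale neutrino:

```python
# Mean free path of a low-energy neutrino in liquid hydrogen:
# 1 / (number density * cross section). Order-of-magnitude inputs.
N_A = 6.022e23      # Avogadro's number, atoms per mole
rho = 0.07          # liquid hydrogen density, g/cm^3
sigma = 1e-44       # rough MeV-scale neutrino cross section, cm^2

n = rho * N_A / 1.0           # atoms per cm^3 (atomic mass ~1 g/mol)
mfp_cm = 1 / (n * sigma)
mfp_miles = mfp_cm / 1.609e5
print(f"mean free path ~ {mfp_miles:.0e} miles")
# ~1e16 miles, the same "millions of billions of miles" scale that
# led Bethe and Peierls to write off detection. Reactors changed the
# odds by sheer flux: with roughly 1e13 neutrinos crossing each
# square centimeter every second, even absurdly rare interactions
# add up to a detectable few events.
```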

  6. Nuclear energy
    Ernest Rutherford, one of the 20th century’s greatest experimental physicists, was not exactly unimaginative. He imagined the existence of the neutron a dozen years before it was discovered, and he figured out that a weird experiment conducted by his assistants had revealed that atoms contained a dense central nucleus. It was clear that the atomic nucleus packed an enormous quantity of energy, but Rutherford could imagine no way to extract that energy for practical purposes. In 1933, at a meeting of the British Association for the Advancement of Science, he noted that although the nucleus contained a lot of energy, it would also require energy to release it. Anyone saying we can exploit atomic energy “is talking moonshine,” Rutherford declared. To be fair, Rutherford qualified the moonshine remark by saying “with our present knowledge,” so in a way he perhaps was anticipating the discovery of nuclear fission a few years later. (And some historians have suggested that Rutherford did imagine the powerful release of nuclear energy, but thought it was a bad idea and wanted to discourage people from attempting it.)

  7. Age of the Earth
    Rutherford’s reputation for imagination was bolstered by his inference that radioactive matter deep underground could solve the mystery of the age of the Earth. In the mid-19th century, William Thomson (later known as Lord Kelvin) calculated the Earth’s age to be something a little more than 100 million years, and possibly much less. Geologists insisted that the Earth must be much older — perhaps billions of years — to account for the planet’s geological features.

Kelvin calculated his estimate assuming the Earth was born as a molten rocky mass that then cooled to its present temperature. But following the discovery of radioactivity at the end of the 19th century, Rutherford pointed out that it provided a new source of heat in the Earth’s interior. While giving a talk (in Kelvin’s presence), Rutherford suggested that Kelvin had basically prophesied a new source of planetary heat.

While Kelvin’s neglect of radioactivity is the standard story, a more thorough analysis shows that adding that heat to his math would not have changed his estimate very much. Rather, Kelvin’s mistake was assuming the interior to be rigid. John Perry (one of Kelvin’s former assistants) showed in 1895 that the flow of heat deep within the Earth’s interior would alter Kelvin’s calculations considerably — enough to allow the Earth to be billions of years old. It turned out that the Earth’s mantle is fluid on long time scales, which not only explains the age of the Earth, but also plate tectonics.
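
The shape of Kelvin’s argument can be reproduced in a few lines. For a planet cooling purely by conduction, the measured surface temperature gradient pins down the cooling time; the inputs below are illustrative values, not Kelvin’s own:

```python
# Kelvin-style cooling age for a conducting half-space: the surface
# gradient G after time t is G = T0 / sqrt(pi * kappa * t), so
# t = T0**2 / (pi * kappa * G**2). Inputs are illustrative.
import math

T0 = 2000.0      # assumed initial temperature excess, kelvin
kappa = 1.2e-6   # thermal diffusivity of rock, m^2/s
G = 0.025        # geothermal gradient, ~25 K per km, in K/m

t_seconds = T0**2 / (math.pi * kappa * G**2)
t_years = t_seconds / 3.156e7
print(f"conductive cooling age ~ {t_years / 1e6:.0f} million years")
# ~54 million years, in Kelvin's ballpark. Perry's fix: if the deep
# interior flows and convects instead of merely conducting, heat keeps
# reaching the surface and the same gradient allows a far older Earth.
```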

  8. Charge-parity violation
    Before the mid-1950s, nobody imagined that the laws of physics gave a hoot about handedness. The same laws should govern matter in action when viewed straight-on or in a mirror, just as the rules of baseball applied equally to Ted Williams and Willie Mays, not to mention Mickey Mantle. But in 1956 physicists Tsung-Dao Lee and Chen Ning Yang suggested that perfect right-left symmetry (or “parity”) might be violated by the weak nuclear force, and experiments soon confirmed their suspicion.

Restoring sanity to nature, many physicists thought, required antimatter. If you just switched left with right (mirror image), some subatomic processes exhibited a preferred handedness. But if you also replaced matter with antimatter (switching electric charge), left-right balance would be restored. In other words, reversing both charge (C) and parity (P) left nature’s behavior unchanged, a principle known as CP symmetry. CP symmetry had to be perfectly exact; otherwise nature’s laws would change if you went backward (instead of forward) in time, and nobody could imagine that.

In the early 1960s, James Cronin and Val Fitch tested CP symmetry’s perfection by studying subatomic particles called kaons and their antimatter counterparts. Kaons and antikaons both have zero charge but are not identical, because they are made from different quarks. Thanks to the quirky rules of quantum mechanics, kaons can turn into antikaons and vice versa. If CP symmetry is exact, each should turn into the other equally often. But Cronin and Fitch found that antikaons turn into kaons more often than the other way around. And that implied that nature’s laws allowed a preferred direction of time. “People didn’t want to believe it,” Cronin said in a 1999 interview. Most physicists do believe it today, but the implications of CP violation for the nature of time and other cosmic questions remain mysterious.

  9. Behaviorism versus the brain
    In the early 20th century, the dogma of behaviorism, initiated by John Watson and championed a little later by B.F. Skinner, ensnared psychologists in a paradigm that literally excised imagination from science. The brain — site of all imagination — is a “black box,” the behaviorists insisted. Rules of human psychology (mostly inferred from experiments with rats and pigeons) could be scientifically established only by observing behavior. It was scientifically meaningless to inquire into the inner workings of the brain that directed such behavior, as those workings were in principle inaccessible to human observation. In other words, activity inside the brain was deemed scientifically irrelevant because it could not be observed. “When what a person does [is] attributed to what is going on inside him,” Skinner proclaimed, “investigation is brought to an end.”

Skinner’s behaviorist BS brainwashed a generation or two of followers into thinking the brain was beyond study. But fortunately for neuroscience, some physicists foresaw methods for observing neural activity in the brain without splitting the skull open, exhibiting imagination that the behaviorists lacked. In the 1970s Michel Ter-Pogossian, Michael Phelps and colleagues developed PET (positron emission tomography) scanning technology, which uses radioactive tracers to monitor brain activity. PET scanning is now complemented by magnetic resonance imaging, based on ideas developed in the 1930s and 1940s by physicists I.I. Rabi, Edward Purcell and Felix Bloch.

  10. Gravitational waves
    Nowadays astrophysicists are all agog about gravitational waves, which can reveal all sorts of secrets about what goes on in the distant universe. All hail Einstein, whose theory of gravity — general relativity — explains the waves’ existence. But Einstein was not the first to propose the idea. In the 19th century, James Clerk Maxwell devised the math explaining electromagnetic waves, and speculated that gravity might similarly induce waves in a gravitational field. He couldn’t figure out how, though. Later other scientists, including Oliver Heaviside and Henri Poincaré, speculated about gravity waves. So the possibility of their existence certainly had been imagined.

But many physicists doubted that the waves existed, or if they did, could not imagine any way of proving it. Shortly before Einstein completed his general relativity theory, German physicist Gustav Mie declared that “the gravitational radiation emitted … by any oscillating mass particle is so extraordinarily weak that it is unthinkable ever to detect it by any means whatsoever.” Even Einstein had no idea how to detect gravitational waves, although he worked out the math describing them in a 1918 paper. In 1936 he decided that general relativity did not predict gravitational waves at all. But the paper rejecting them was simply wrong.

As it turned out, of course, gravitational waves are real and can be detected. At first they were verified indirectly, by the diminishing distance between mutually orbiting pulsars. And more recently they were directly detected by huge experiments relying on lasers. Nobody had been able to imagine detecting gravitational waves a century ago because nobody had imagined the existence of pulsars or lasers.

All these failures show how prejudice can sometimes dull the imagination. But they also show how an imagination failure can inspire the quest for a new success. And that’s why science, so often detoured by dogma, still manages somehow, on long enough time scales, to provide technological wonders and cosmic insights beyond philosophers’ and poets’ wildest imagination.

We can do better than what was ‘normal’ before the pandemic

It’s a weird time in the pandemic. COVID-19 cases are once again climbing in some parts of the United States, but still falling from the January surge in other places. The omicron subvariant BA.2 is now dominant in the country, accounting for more than 50 percent of new cases in the week ending March 26, according to the U.S. Centers for Disease Control and Prevention.

BA.2 has already taken parts of the world by storm, spurring large outbreaks in Europe and Asia. With the rising spread of the subvariant in the United States, signs are pointing to another COVID-19 wave here, although it’s unclear how big it could be. There is a good amount of immunity from vaccination and infections from other omicron siblings to help flatten the next peak. But the highly transmissible subvariant is advancing at a time when many have tossed masks aside.

I can’t help but feel that we’re sitting ducks. There’s no movement yet to reinstate protective measures to prepare for the coming wave. Instead, there are loud calls to “return to normal.” But even though it’s been two years, this pandemic isn’t over, no matter how much we wish it were. And when people talk about “normal,” I am struck by what can’t be “normal” again.

For millions and millions of people who have lost children, partners, parents and friends, life won’t be the same. One study reported that the “mortality shock” of COVID-19 has left nine people bereaved for every one U.S. death. So for the more than 975,000 who have died of COVID-19 in the United States, there are close to 9 million who are grieving. Although the study didn’t calculate the ratio globally, more than 6 million have died worldwide, undoubtedly leaving tens of millions bereaved. At the global scale, researchers estimate that through October 2021, more than 5 million children have lost a parent or caregiver to COVID-19, putting these children’s health, development and future education at risk (SN: 2/24/22).

Adding to the loss, the pandemic robbed many people of the chance to be with their loved ones as they died or to gather for a funeral. Psychiatrists are concerned that cases of prolonged grief disorder could rise, considering the scale of this mass mourning event.

Many millions who weathered an infection with SARS-CoV-2 went on to develop debilitating symptoms from long COVID, preventing their return to “normal.” A recent report from the U.S. Government Accountability Office estimates that 7.7 to 23 million people in the United States may have developed the condition. Worldwide, an estimated 100 million people currently have, or have had, long-term symptoms from COVID-19, researchers reported in a preprint study last November. Many with long COVID can no longer work and are struggling to get financial assistance in the United States. Some have lost their homes.

And as masking and vaccine mandates have fallen away, people with compromised immune systems have no choice but to fend for themselves and remain vigilant about restricting their interactions. People taking drugs that suppress the immune system or who have immune system disorders can’t muster much protection, if any, following vaccination against the coronavirus. And if they get COVID-19, their weakened defenses put them at risk of more severe disease.

With all that people have endured — and continue to endure — during the pandemic, it would be a colossal missed opportunity to throw aside what we’ve learned from this experience. The pandemic brought wider attention to how racism fuels health disparities in the United States and renewed calls to make more progress dismantling inequities. With remote work and virtual school, many people with disabilities have gained important accommodations. The argument that internet access is a social determinant of health has been reinforced: Places that had limited access to broadband internet were associated with higher COVID-19 mortality rates in the United States, researchers reported this month in JAMA Network Open.

The pandemic could also provide the push to bring indoor air under public health’s wing, joining common goods like water and food. The recognition that the virus that causes COVID-19 is primarily spread through the air has also been a reminder of the airborne risk posed by other respiratory diseases, including influenza and tuberculosis (SN: 12/16/21). Improved ventilation — bringing fresh outdoor air inside — can temper an influenza outbreak, and it helped to control a real-world tuberculosis outbreak in Taiwan.

Paying attention to indoor air quality has also paid dividends during the COVID-19 pandemic. Schools that combined better ventilation with high-efficiency filters reduced the incidence of COVID-19 by 48 percent compared with schools that didn’t, researchers reported last year. This month, the Fondazione David Hume released not-yet-peer-reviewed results of an experiment in the Marche Region in Italy that looked at schools with and without controlled mechanical ventilation and the impact of different rates of air exchange. Replacing the air in a classroom 2.4 times per hour reduced the risk of COVID-19 spread by a factor of 1.7. More frequent exchanges reduced the risk even more, up to a factor of 5.7 with replacement 6 times an hour.
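
The direction of those results is what a simple well-mixed-room model predicts: at steady state, the concentration of an airborne contaminant falls roughly in proportion to the air exchange rate. The sketch below is that idealized model, with assumed numbers, not the Italian study’s method:

```python
# Steady-state concentration in a single well-mixed room scales as
# emission rate / (air changes per hour * room volume). Idealized.
def steady_state_concentration(emission_rate, ach, volume_m3):
    """Steady-state contaminant concentration (arbitrary units)."""
    return emission_rate / (ach * volume_m3)

# Assumed baseline: a poorly ventilated classroom at 0.5 ACH.
baseline = steady_state_concentration(1.0, ach=0.5, volume_m3=200)
for ach in (2.4, 6.0):
    c = steady_state_concentration(1.0, ach=ach, volume_m3=200)
    print(f"ACH {ach}: concentration down {baseline / c:.1f}x")
# 4.8x lower at 2.4 ACH and 12x lower at 6 ACH. The measured risk
# reductions (1.7x and 5.7x) are smaller, as infection risk depends
# on more than the room's average concentration.
```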

British scientists who advise the U.K. government would like buildings to display signs to inform the public of the status of the air inside. They have developed prototype placards with different icons and color-coding schemes to convey information such as whether a room is mechanically ventilated, uses filtration or monitors carbon dioxide, which is a proxy for how much fresh air a room gets. The group is testing options in a pilot program.

In the United States, the White House has launched the Clean Air in Buildings Challenge as part of the National COVID-19 Preparedness Plan. The Biden administration and Congress have made federal funding available to improve air quality in schools, public buildings and other structures. The Environmental Protection Agency has released recommendations on how to plan for and take action on indoor air quality.

“Healthy and clean indoor air should become an expectation for all of us,” Alondra Nelson, head of the White House Office of Science and Technology Policy, said at a webinar about the new initiatives on March 29. “It’s just as important as the food we eat and the water we drink.”

Making clean indoor air a public health priority, and putting in the work and money to make it a reality across the country, would go a long way to helping us prepare for infectious disease outbreaks to come. It also reinforces the public in public health, a commitment to protecting as many people as possible, just as masking mandates at appropriate times do. It’s how we get to some kind of “normal” that everyone can share in.

Caribou gut parasites indirectly create a greener tundra

Gut parasites in large plant eaters like caribou thrive out of sight and somewhat out of mind. But these tiny tummy tenants can have big impacts on the landscape that their hosts travel through.

Digestive tract parasites in caribou can reduce the amount that their hosts eat, allowing for more plant growth in the tundra where the animals live, researchers report in the May 17 Proceedings of the National Academy of Sciences. The finding reveals that even nonlethal infections can have reverberating effects through ecosystems.

Interactions between species have long been known to ripple through ecosystems, indirectly impacting other parts of the food web. When predators eat herbivores, for example, a reduction in plant-eating mouths leads to changes in the plant community. This is how sea otters can encourage kelp growth by feeding on herbivorous urchins (SN: 3/29/21).

“Anytime you have a change in species interactions that changes what the animals are doing on the landscape, it can influence their impact on the ecosystem,” says Amanda Koltz, an ecologist at Washington University in St. Louis.

When parasites and pathogens kill their hosts, it can have a similar effect to predators on ecosystems. A prime example is the rinderpest virus, which in the late 19th century devastated populations of ruminants — buffalo, antelope, cattle — in sub-Saharan Africa. Once wildebeest populations in East Africa were spared further infection following the vaccination of cattle and the eradication of the virus, their exploding numbers trimmed the grass back in the Serengeti and led to other landscape changes.

But unlike rinderpest, most infections aren’t lethal. Nonlethal parasite infections are pervasive in ruminants — plant eaters that play key roles in shaping vegetation on land. Koltz and her team wondered if changes to a ruminant’s overall health or behavior from a chronic parasitic infection could also induce changes in the surrounding plant community.

The researchers looked to caribou (Rangifer tarandus). Using data from published studies, Koltz and her team developed a series of mathematical simulations to test how caribou survival, reproduction and feeding rate could be influenced by stomach worm (Ostertagia spp.) infections.

The scientists then calculated how these effects could alter the total mass of, and population changes in, the caribou, parasites and plants. The simulations predict that lethal infections could trigger a cascade leading to more plant mass, and that nonlethal infections could have just as large an effect. Sickened caribou that ate less or experienced a drop in reproduction rate led to an increase in plant mass when compared with a scenario with no parasites.
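
The logic of that cascade fits in a few lines of code. The toy model below, with invented parameters and far less detail than the team’s simulations, shows how merely trimming the herbivores’ feeding rate leaves more standing plant mass:

```python
# Toy plant-herbivore model: plants grow logistically, caribou graze
# them, and a chronic infection scales the feeding rate down.
# All parameters are invented for illustration.
def plant_biomass_after(years, feeding_rate, plants=500.0,
                        growth=0.8, capacity=1000.0, herbivores=10.0):
    for _ in range(years):
        plants += growth * plants * (1 - plants / capacity)  # regrowth
        plants -= feeding_rate * herbivores                  # grazing
        plants = max(plants, 0.0)
    return plants

healthy = plant_biomass_after(20, feeding_rate=15.0)
infected = plant_biomass_after(20, feeding_rate=10.0)  # parasites cut intake
print(f"plant biomass: healthy herd {healthy:.0f}, infected herd {infected:.0f}")
# Lower per-animal intake leaves more standing plant mass (~750 vs.
# ~850 here) even though no caribou died -- the nonlethal pathway.
```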

The team also analyzed data from 59 studies on 18 species of ruminants and their parasites, gathering information on how the parasites impact host feeding rates and body mass. The analysis found that chronic parasitic infections generally cause many types of herbivores to eat less, also reducing their body mass and fat reserves.

Indirect ecological ramifications from parasitic infections could be common in ruminants all over the world, the researchers conclude.

The study “highlights that there are widespread interactions that we’re not considering in ecosystem contexts quite yet, but we should be,” Koltz says.

Globally, parasites face an uncertain future with fast environmental changes — like climate change and habitat loss from changes in land use — altering relationships with their hosts, potentially leading to many parasite extinctions. “How such changes in host-parasite interactions might disrupt the structure and functioning of ecosystems is a topic that we should be thinking about,” Koltz says.

The findings also are “going to change how we think about what controls ecosystems,” says Oswald Schmitz, a population ecologist at Yale University who was not involved in the research. “Maybe it isn’t predators that are necessarily controlling the ecosystem, maybe the parasites are more important,” he says. “And so, what we really need to do is more research that disentangles [this].”

Scientists are rapidly gaining a better understanding of parasites’ ubiquity and abundance, says Joshua Grinath, an ecologist at Idaho State University in Pocatello. “Now we are challenged with understanding the roles of parasites within ecological communities and ecosystems.”