How scientists are shifting their search for links between diet and dementia

The internet is rife with advice for keeping the brain sharp as we age, and much of it is focused on the foods we eat. Headlines promise that oatmeal will fight off dementia. Blueberries improve memory. Coffee can slash your risk of Alzheimer’s disease. Take fish oil. Eat more fiber. Drink red wine. Forgo alcohol. Snack on nuts. Don’t skip breakfast. But definitely don’t eat bacon.

One recent diet study drew media attention, with a headline claiming, “Many people may be eating their way to dementia.” The study, published last December in Neurology, found that people who ate a diet rich in anti-inflammatory foods like fruits, vegetables, beans and tea or coffee had a lower risk of dementia than those who ate foods that boost inflammation, such as sugar, processed foods, unhealthy fats and red meat.

But the study, like most research on diet and dementia, couldn’t prove a causal link. And an association alone isn’t a solid enough basis for recommendations that people should follow. Why has it proved such a challenge to pin down whether the foods we eat can help stave off dementia?

First, dementia, like most chronic diseases, is the result of a complex interplay of genes, lifestyle and environment that researchers don’t fully understand. Diet is just one factor. Second, nutrition research is messy. People struggle to recall the foods they’ve eaten, their diets change over time, and modifying what people eat — even as part of a research study — is exceptionally difficult.

For decades, researchers devoted little effort to trying to prevent or delay Alzheimer’s disease and other types of dementia because they thought there was no way to change the trajectory of these diseases. Dementia seemed to be the result of aging and an unlucky roll of the genetic dice.

While scientists have identified genetic variants that boost risk for dementia, researchers now know that people can cut their risk by adopting a healthier lifestyle: avoiding smoking, keeping weight and blood sugar in check, exercising, managing blood pressure and avoiding too much alcohol — the same healthy behaviors that lower the risk of many chronic diseases.

Diet is wrapped up in several of those healthy behaviors, and many studies suggest that diet may also directly play a role. But what makes for a brain-healthy diet? That’s where the research gets muddled.

Despite loads of studies aimed at dissecting the influence of nutrition on dementia, researchers can’t say much with certainty. “I don’t think there’s any question that diet influences dementia risk or a variety of other age-related diseases,” says Matt Kaeberlein, who studies aging at the University of Washington in Seattle. But “are there specific components of diet or specific nutritional strategies that are causal in that connection?” He doubts it will be that simple.

Worth trying
In the United States, an estimated 6.5 million people, the vast majority of whom are over age 65, are living with Alzheimer’s disease and related dementias. Experts expect that by 2060, as the senior population grows, nearly 14 million residents over age 65 will have Alzheimer’s disease. Despite decades of research and more than 100 drug trials, scientists have yet to find a treatment for dementia that does more than curb symptoms temporarily (SN: 7/3/21 & 7/17/21, p. 8). “Really what we need to do is try and prevent it,” says Maria Fiatarone Singh, a geriatrician at the University of Sydney.

Forty percent of dementia cases could be prevented or delayed by modifying a dozen risk factors, according to a 2020 report commissioned by the Lancet. The report doesn’t explicitly call out diet, but some researchers think it plays an important role. After years of fixating on specific foods and dietary components — things like fish oil and vitamin E supplements — many researchers in the field have started looking at dietary patterns.

That shift makes sense. “We do not have vitamin E for breakfast, vitamin C for lunch. We eat foods in combination,” says Nikolaos Scarmeas, a neurologist at National and Kapodistrian University of Athens and Columbia University. He led the study on dementia and anti-inflammatory diets published in Neurology. But a shift from supplements to a whole diet of myriad foods complicates the research. A once-daily pill is easier to swallow than a new, healthier way of eating.

Earning points
Suspecting that inflammation plays a role in dementia, many researchers posit that an anti-inflammatory diet might benefit the brain. In Scarmeas’ study, more than 1,000 older adults in Greece completed a food frequency questionnaire and earned a score based on how “inflammatory” their diet was. The lower the score, the better. For example, fatty fish, which is rich in omega-3 fatty acids, was considered an anti-inflammatory food and earned negative points. Cheese and many other dairy products, high in saturated fat, earned positive points.

During the next three years, 62 people, or 6 percent of the study participants, developed dementia. People with the highest dietary inflammation scores were three times as likely to develop dementia as those with the lowest. Scores ranged from –5.83 to 6.01. Each point increase was linked to a 21 percent rise in dementia risk.
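
Those two numbers connect through simple compounding. The sketch below, in Python, is purely illustrative: it assumes the reported 21 percent per-point increase acts multiplicatively, as hazard ratios from such models typically do, and it makes no claim about any individual’s actual risk.

```python
# Back-of-envelope compounding of the reported per-point association.
# Assumes (for illustration only) that each one-point rise in the dietary
# inflammation score multiplies dementia risk by 1.21, per the study.
PER_POINT_HAZARD = 1.21

def relative_risk(score_difference: float) -> float:
    """Relative dementia risk between diets separated by score_difference points."""
    return PER_POINT_HAZARD ** score_difference

for diff in (1, 3, 5):
    print(f"{diff}-point difference -> {relative_risk(diff):.2f} times the risk")
# 1-point difference -> 1.21 times the risk
# 3-point difference -> 1.77 times the risk
# 5-point difference -> 2.59 times the risk
```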

Such epidemiological studies make connections, but they can’t prove cause and effect. Perhaps people who eat the most anti-inflammatory diets also are those least likely to develop dementia for some other reason. Maybe they have more social interactions. Or it could be, Scarmeas says, that people who eat more inflammatory diets do so because they’re already experiencing changes in their brain that lead them to consume these foods and “what we really see is the reverse causality.”

To sort all this out, researchers rely on randomized controlled trials, the gold standard for providing proof of a causal effect. But in the arena of diet and dementia, these studies have challenges.

Dementia is a disease of aging that takes decades to play out, Kaeberlein says. To show that a particular diet could reduce the risk of dementia, “it would take two-, three-, four-decade studies, which just aren’t feasible.” Many clinical trials last less than two years.

As a work-around, researchers often rely on some intermediate outcome, like changes in cognition. But even that can be hard to observe. “If you’re already relatively healthy and don’t have many risks, you might not show much difference, especially if the duration of the study is relatively short,” says Sue Radd-Vagenas, a nutrition scientist at the University of Sydney. “The thinking is if you’re older and you have more risk factors, it’s more likely we might see something in a short period of time.” Yet older adults might already have some cognitive decline, so it might be more difficult to see an effect.

Many researchers now suspect that intervening earlier will have a bigger impact. “We now know that the brain is stressed from midlife and there’s a tipping point at 65 when things go sour,” says Hussein Yassine, an Alzheimer’s researcher at the Keck School of Medicine of the University of Southern California in Los Angeles. But intervene too early, and a trial might not show any effect. Offering a healthier diet to a 50- or 60-year-old might pay off in the long run but fail to make a difference in cognition that can be measured during the relatively short length of a study.

And it’s not only the timing of the intervention that matters, but also the duration. Do you have to eat a particular diet for two decades for it to have an impact? “We’ve got a problem of timescale,” says Kaarin Anstey, a dementia researcher at the University of New South Wales in Sydney.

And then there are all the complexities that come with studying diet. “You can’t isolate it in the way you can isolate some of the other factors,” Anstey says. “It’s something that you’re exposed to all the time and over decades.”

Food as medicine?
In a clinical trial, researchers often test the effectiveness of a drug by offering half the study participants the medication and half a placebo pill. But when the treatment being tested is food, studies become much more difficult to control. First, food doesn’t come in a pill, so it’s tricky to hide whether participants are in the intervention group or the control group.

Imagine a trial designed to test whether the Mediterranean diet can help slow cognitive decline. The participants aren’t told which group they’re in, but the control group sees that they aren’t getting nuts or fish or olive oil. “What ends up happening is a lot of participants will start actively increasing the consumption of the Mediterranean diet despite being on the control arm, because that’s why they signed up,” Yassine says. “So at the end of the trial, the two groups are not very dissimilar.”

Second, we all need food to live, so a true placebo is out of the question. But what diet should the control group consume? Do you compare the diet intervention to people’s typical diets (which may differ from person to person and country to country)? Do you ask the comparison group to eat a healthy diet but avoid the food expected to provide brain benefits? (Offering them an unhealthy diet would be unethical.)

And tracking what people eat during a clinical trial can be a challenge. Many of these studies rely on food frequency questionnaires to tally up all the foods in an individual’s diet. An ongoing study is assessing the impact of the MIND diet (which combines part of the Mediterranean diet with elements of the low-salt DASH diet) on cognitive decline. Researchers track adherence to the diet by asking participants to fill out a food frequency questionnaire every six to 12 months. But many of us struggle to remember what we ate a day or two ago. So some researchers also rely on more objective measures to assess compliance. For the MIND diet assessment, researchers are also tracking biomarkers in the blood and urine — vitamins such as folate, B12 and vitamin E, plus levels of certain antioxidants.

Another difficulty is that these surveys often don’t account for variables that could be really important, like how the food was prepared and where it came from. Was the fish grilled? Fried? Slathered in butter? “Those things can matter,” says dementia researcher Nathaniel Chin of the University of Wisconsin–Madison.

Plus there are the things researchers can’t control. For example, how does the food interact with an individual’s medications and microbiome? “We know all of those factors have an interplay,” Chin says.

The few clinical trials looking at dementia and diet seem to measure different things, so it’s hard to make comparisons. In 2018, Radd-Vagenas and her colleagues looked at all the trials that had studied the impact of the Mediterranean diet on cognition. There were five at the time. “What struck me even then was how variable the interventions were,” she says. “Some of the studies didn’t even mention olive oil in their intervention. Now, how can you run a Mediterranean diet study and not mention olive oil?”

Another tricky aspect is recruitment. The kind of people who sign up for clinical trials tend to be more educated, more motivated and have healthier lifestyles. That can make differences between the intervention group and the control group difficult to spot. And if the study shows an effect, whether it will apply to the broader, more diverse population comes into question. To sum up, these studies are difficult to design, difficult to conduct and often difficult to interpret.

Kaeberlein studies aging, not dementia specifically, but he follows the research closely and acknowledges that the lack of clear answers can be frustrating. “I get the feeling of wanting to throw up your hands,” he says. But he points out that there may not be a single answer. Many diets can help people maintain a healthy weight and avoid diabetes, and thus reduce the risk of dementia. Beyond that obvious fact, he says, “it’s hard to get definitive answers.”

A better way
In July 2021, Yassine gathered with more than 30 other dementia and nutrition experts for a virtual symposium to discuss the myriad challenges and map out a path forward. The speakers noted several changes that might improve the research.

One idea is to focus on populations at high risk. For example, one clinical trial is looking at the impact of low- and high-fat diets on short-term changes in the brain in people who carry the genetic variant APOE4, a risk factor for Alzheimer’s. One small study suggested that a high-fat Western diet actually improved cognition in some individuals. Researchers hope to get clarity on that surprising result.

Another possible fix is redefining how researchers measure success. Hypertension and diabetes are both well-known risk factors for dementia. So rather than running a clinical trial that looks at whether a particular diet can affect dementia, researchers could look at the impact of diet on one of these risk factors. Plenty of studies have assessed the impact of diet on hypertension and diabetes, but Yassine knows of none launched with dementia prevention as the ultimate goal.

Yassine envisions a study that recruits participants at risk of developing dementia because of genetics or cardiovascular disease and then looks at intermediate outcomes. “For example, a high-salt diet can be associated with hypertension, and hypertension can be associated with dementia,” he says. If the study shows that the diet lowers hypertension, “we achieved our aim.” Then the study could enter a legacy period during which researchers track these individuals for another decade to determine whether the intervention influences cognition and dementia.

One way to amplify the signal in a clinical trial is to combine diet with other interventions likely to reduce the risk of dementia. The Finnish Geriatric Intervention Study to Prevent Cognitive Impairment and Disability, or FINGER, trial, which began in 2009, did just that. Researchers enrolled more than 1,200 individuals ages 60 to 77 who were at an elevated risk of developing dementia and had average or slightly impaired performance on cognition tests. Half received nutritional guidance, worked out at a gym, engaged in online brain-training games and had routine visits with a nurse to talk about managing dementia risk factors like high blood pressure and diabetes. The other half received only general health advice.

After two years, the control group had a 25 percent greater cognitive decline than the intervention group. It was the first trial, reported in the Lancet in 2015, to show that targeting multiple risk factors could slow the pace of cognitive decline.

Now researchers are testing this approach in more than 30 countries. Christy Tangney, a nutrition researcher at Rush University in Chicago, is one of the investigators on the U.S. arm of the study, enrolling 2,000 people ages 60 to 79 who have at least one dementia risk factor. The study is called POINTER, or U.S. Study to Protect Brain Health Through Lifestyle Intervention to Reduce Risk. The COVID-19 pandemic has delayed the research — organizers had to pause the trial briefly — but Tangney expects to have results in the next few years.

This kind of multi-intervention study makes sense, Chin says. “One of the reasons why things are so slow in our field is we’re trying to address a heterogeneous disease with one intervention at a time. And that’s just not going to work.” A trial that tests multiple interventions “allows for people to not be perfect,” he adds. Maybe they can’t follow the diet exactly, but they can stick to the workout program, which might have an effect on its own. The drawback in these kinds of studies, however, is that it’s impossible to tease out the contribution of each individual intervention.

Preemptive guidelines
Two major reports came out in recent years addressing dementia prevention. The first, from the World Health Organization in 2019, recommends a healthy, balanced diet for all adults, and notes that the Mediterranean diet may help people who have normal to mildly impaired cognition.

The 2020 Lancet Commission report, however, does not include diet in its list of modifiable risk factors, at least not yet. “Nutrition and dietary components are challenging to research with controversies still raging around the role of many micronutrients and health outcomes in dementia,” the report notes. The authors point out that a Mediterranean or the similar Scandinavian diet might help prevent cognitive decline in people with intact cognition, but “how long the exposure has to be or during which ages is unclear.” Neither report recommends any supplements.

Plenty of people are waiting for some kind of advice to follow. Improving how these studies are done might enable scientists to finally sort out what kinds of diets can help hold back the heartbreaking damage that comes with Alzheimer’s disease. For some people, that knowledge might be enough to create change.

“Inevitably, if you’ve had Alzheimer’s in your family, you want to know, ‘What can I do today to potentially reduce my risk?’ ” says molecular biologist Heather Snyder, vice president of medical and scientific relations at the Alzheimer’s Association.

But changing long-term dietary habits can be hard. The foods we eat aren’t just fuel; our diets represent culture and comfort and more. “Food means so much to us,” Chin says.

“Even if you found the perfect diet,” he adds, “how do you get people to agree to and actually change their habits to follow that diet?” The MIND diet, for example, suggests people eat less than one serving of cheese a week. In Wisconsin, where Chin is based, that’s a nonstarter, he says.

But it’s not just about changing individual behaviors. Radd-Vagenas and other researchers hope that if they can show the brain benefits of some of these diets in rigorous studies, policy changes might follow. For example, research shows that lifestyle changes can have a big impact on type 2 diabetes. As a result, many insurance providers now pay for coaching programs that help participants maintain healthy diet and exercise habits.

“You need to establish policies. You need to change cities, change urban design. You need to do a lot of things to enable healthier choices to become easier choices,” Radd-Vagenas says. But that takes meatier data than exist now.

How to build better ice towers for drinking water and irrigation

There’s a better way to build a glacier.

During winter in India’s mountainous Ladakh region, some farmers use pipes and sprinklers to construct building-sized cones of ice. These towering, humanmade glaciers, called ice stupas, slowly release water as they melt during the dry spring months for communities to drink or irrigate crops. But the pipes often freeze when conditions get too cold, stifling construction.

Now, preliminary results show that an automated system can erect an ice stupa while avoiding frozen pipes, using local weather data to control when and how much water is spouted. What’s more, the new system uses roughly a tenth the amount of water that the conventional method uses, researchers reported June 23 at the Frontiers in Hydrology meeting in San Juan, Puerto Rico.

“This is one of the technological steps forward that we need to get this innovative idea to the point where it’s realistic as a solution,” says glaciologist Duncan Quincey of the University of Leeds in England, who was not involved in the research. Automation could help communities build larger, longer-lasting ice stupas that provide more water during dry periods, he says.

Ice stupas emerged in 2014 as a means for communities to cope with shrinking alpine glaciers due to human-caused climate change (SN: 5/29/19). Typically, high-mountain communities in India, Kyrgyzstan and Chile pipe glacial meltwater into gravity-driven fountains that sprinkle continuously in the winter. Cold air freezes the drizzle, creating frozen cones that can store millions of liters of water.

The process is simple, though inefficient. More than 70 percent of the spouted water may flow away instead of freezing, says glaciologist Suryanarayanan Balasubramanian of the University of Fribourg in Switzerland.

So Balasubramanian and his team outfitted an ice stupa’s fountain with a computer that automatically adjusted the spout’s flow rate based on local temperatures, humidity and wind speed. Then the scientists tested the system by building two ice stupas in Guttannen, Switzerland — one using a continuously spraying fountain and one using the automated system.

After four months, the team found that the continuously sprinkling fountain had spouted about 1,100 cubic meters of water and amassed 53 cubic meters of ice, with pipes freezing once. The automated system sprayed only around 150 cubic meters of water but formed 61 cubic meters of ice, without any frozen pipes.
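
The efficiency gap is easy to check from those figures. Here is a minimal sketch of the arithmetic, assuming only that ice occupies roughly 9 percent more volume than the liquid water that froze to form it:

```python
# Rough freezing efficiency from the reported volumes (cubic meters).
# Assumes ice is about 92 percent as dense as liquid water.
ICE_TO_WATER = 0.92  # m^3 of water locked up per m^3 of ice

def efficiency(water_sprayed_m3: float, ice_formed_m3: float) -> float:
    """Fraction of sprayed water captured as ice, in water-equivalent volume."""
    return ice_formed_m3 * ICE_TO_WATER / water_sprayed_m3

print(f"continuous fountain: {efficiency(1100, 53):.0%}")  # roughly 4%
print(f"automated system:    {efficiency(150, 61):.0%}")   # roughly 37%
```

By this arithmetic, the automated system froze roughly nine times the fraction of sprayed water that the continuous fountain did, consistent with Balasubramanian’s point that well over 70 percent of conventionally spouted water simply flows away.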

The researchers are now trying to simplify their prototype to make it more affordable for high-mountain communities around the world. “We eventually want to reduce the cost so that it is within two months of salary of the farmers in Ladakh,” Balasubramanian says. “Around $200 to $400.”

North America’s oldest skull surgery dates to at least 3,000 years ago

A man with a hole in his forehead, who was interred in what’s now northwest Alabama between around 3,000 and 5,000 years ago, represents North America’s oldest known case of skull surgery.

Damage around the man’s oval skull opening indicates that someone scraped out that piece of bone, probably to reduce brain swelling caused by a violent attack or a serious fall, said bioarchaeologist Diana Simpson of the University of Nevada, Las Vegas. Either scenario could explain fractures and other injuries above the man’s left eye and to his left arm, leg and collarbone.

Bone regrowth on the edges of the skull opening indicates that the man lived for up to one year after surgery, Simpson estimated. She presented her analysis of the man’s remains on March 28 at a virtual session of the annual meeting of the American Association of Biological Anthropologists.

Skull surgery occurred as early as 13,000 years ago in North Africa (SN: 8/17/11). Until now, the oldest evidence of this practice in North America dated to no more than roughly 1,000 years ago.

In his prime, the new record holder likely served as a ritual practitioner or shaman. His grave included items like those found in shamans’ graves at nearby North American hunter-gatherer sites dating to between about 3,000 and 5,000 years ago. Ritual objects buried with him included sharpened bone pins and modified deer and turkey bones that may have been tattooing tools (SN: 5/25/21).

Investigators excavated the man’s grave and 162 others at the Little Bear Creek Site, a seashell-covered burial mound, in the 1940s. Simpson studied the man’s museum-held skeleton and grave items in 2018, shortly before the discoveries were returned to local Native American communities for reburial.

Here are the Top 10 times scientific imagination failed

Science, some would say, is an enterprise that should concern itself solely with cold, hard facts. Flights of imagination should be the province of philosophers and poets.

On the other hand, as Albert Einstein so astutely observed, “Imagination is more important than knowledge.” Knowledge, he said, is limited to what we know now, while “imagination embraces the entire world, stimulating progress.”

So with science, imagination has often been the prelude to transformative advances in knowledge, remaking humankind’s understanding of the world and enabling powerful new technologies.

And yet while sometimes spectacularly successful, imagination has also frequently failed in ways that retard the revealing of nature’s secrets. Some minds, it seems, are simply incapable of imagining that there’s more to reality than what they already know.

On many occasions scientists have failed to foresee ways of testing novel ideas, ridiculing them as unverifiable and therefore unscientific. Consequently it is not too challenging to come up with enough failures of scientific imagination to compile a Top 10 list, beginning with:

  1. Atoms
    By the middle of the 19th century, most scientists believed in atoms. Chemists especially. John Dalton had shown that the simple ratios of different elements making up chemical compounds strongly implied that each element consisted of identical tiny particles. Subsequent research on the weights of those atoms made their reality pretty hard to dispute. But that didn’t deter physicist-philosopher Ernst Mach. Even as late as the beginning of the 20th century, he and a number of others insisted that atoms could not be real, as they were not accessible to the senses. Mach believed that atoms were a “mental artifice,” convenient fictions that helped in calculating the outcomes of chemical reactions. “Have you ever seen one?” he would ask.

Apart from the fallacy of defining reality as “observable,” Mach’s main failure was his inability to imagine a way that atoms could be observed. Even after Einstein proved the existence of atoms by indirect means in 1905, Mach stood his ground. He was unaware, of course, of the 20th century technologies that quantum mechanics would enable, and so did not foresee powerful new microscopes that could show actual images of atoms (and allow a certain computing company to drag them around to spell out IBM).

  2. Composition of stars
    Mach’s views were similar to those of Auguste Comte, a French philosopher who originated the idea of positivism, which denies reality to anything other than objects of sensory experience. Comte’s philosophy led (and in some cases still leads) many scientists astray. His greatest failure of imagination was an example he offered for what science could never know: the chemical composition of the stars.

Unable to imagine anybody affording a ticket on some entrepreneur’s space rocket, Comte argued in 1835 that the identity of the stars’ components would forever remain beyond human knowledge. We could study their size, shapes and movements, he said, “whereas we would never know how to study by any means their chemical composition, or their mineralogical structure,” or for that matter, their temperature, which “will necessarily always be concealed from us.”

Within a few decades, though, a newfangled technology called spectroscopy enabled astronomers to analyze the colors of light emitted by stars. And since each chemical element emits (or absorbs) precise colors (or frequencies) of light, each set of colors is like a chemical fingerprint, an infallible indicator for an element’s identity. Using a spectroscope to observe starlight therefore can reveal the chemistry of the stars, exactly what Comte thought impossible.
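
The fingerprint logic is simple enough to sketch. The wavelengths below, for hydrogen’s Balmer lines and the sodium doublet, are well established; the matching code itself is a toy illustration, not an astronomer’s pipeline:

```python
# Toy spectral fingerprinting: match observed emission-line wavelengths
# (in nanometers) against known lines for a few elements.
KNOWN_LINES = {
    "hydrogen": [656.3, 486.1, 434.0],  # Balmer series: H-alpha, H-beta, H-gamma
    "sodium":   [589.0, 589.6],         # the sodium D doublet
}

def identify(observed_nm: list[float], tolerance: float = 0.5) -> set[str]:
    """Return elements whose known lines all appear in the observed spectrum."""
    found = set()
    for element, lines in KNOWN_LINES.items():
        if all(any(abs(obs - line) <= tolerance for obs in observed_nm)
               for line in lines):
            found.add(element)
    return found

# A starlight spectrum showing the Balmer lines implies hydrogen is present.
print(identify([656.3, 486.1, 434.0, 517.2]))  # {'hydrogen'}
```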

  3. Canals on Mars
    Sometimes imagination fails because of its overabundance rather than absence. In the case of the never-ending drama over the possibility of life on Mars, that planet’s famous canals turned out to be figments of overactive scientific imagination.

First “observed” in the late 19th century, the Martian canals showed up as streaks on the planet’s surface, described as canali by Italian astronomer Giovanni Schiaparelli. Canali is, however, Italian for channels, not canals. So in this case something was gained (rather than lost) in translation — the idea that Mars was inhabited. “Canals are dug,” remarked British astronomer Norman Lockyer in 1901, “ergo there were diggers.” Soon astronomers imagined an elaborate system of canals transporting water from Martian poles to thirsty metropolitan areas and agricultural centers. (Some observers even imagined seeing canals on Venus and Mercury.)

With more constrained imaginations, aided by better telescopes and translations, belief in the Martian canals eventually faded. It was merely the Martian winds blowing dust (bright) and sand (dark) around the surface in ways that occasionally made bright and dark streaks line up in a deceptive manner — to eyes attached to overly imaginative brains.

  4. Nuclear fission
    In 1934, Italian physicist Enrico Fermi bombarded uranium (atomic number 92) and other elements with neutrons, the particle discovered just two years earlier by James Chadwick. Fermi found that among the products was an unidentifiable new element. He thought he had created element 93, heavier than uranium. He could not imagine any other explanation. In 1938 Fermi was awarded the Nobel Prize in physics for demonstrating “the existence of new radioactive elements produced by neutron irradiation.”

It turned out, however, that Fermi had unwittingly demonstrated nuclear fission. His bombardment products were actually lighter, previously known elements — fragments split from the heavy uranium nucleus. Of course, the scientists later credited with discovering fission, Otto Hahn and Fritz Strassmann, didn’t understand their results either. Hahn’s former collaborator Lise Meitner was the one who explained what they’d done. Another woman, chemist Ida Noddack, had imagined the possibility of fission to explain Fermi’s results, but for some reason nobody listened to her.

  5. Detecting neutrinos
    In the 1920s, most physicists had convinced themselves that nature was built from just two basic particles: positively charged protons and negatively charged electrons. Some had, however, imagined the possibility of a particle with no electric charge. One specific proposal for such a particle came in 1930 from Austrian physicist Wolfgang Pauli. He suggested that a no-charge particle could explain a suspicious loss of energy observed in beta-particle radioactivity. Pauli’s idea was worked out mathematically by Fermi, who named the neutral particle the neutrino. Fermi’s math was then examined by physicists Hans Bethe and Rudolf Peierls, who deduced that the neutrino would zip through matter so easily that there was no imaginable way of detecting its existence (short of building a tank of liquid hydrogen 6 million billion miles wide). “There is no practically possible way of observing the neutrino,” Bethe and Peierls concluded.

But they had failed to imagine the possibility of finding a source of huge numbers of high-energy neutrinos, so that a few could be captured even if almost all escaped. No such source was known until nuclear fission reactors were invented. In the 1950s, Frederick Reines and Clyde Cowan used reactors to definitively establish the neutrino’s existence. Reines later said he sought a way to detect the neutrino precisely because everybody had told him it wasn’t possible.

  6. Nuclear energy
    Ernest Rutherford, one of the 20th century’s greatest experimental physicists, was not exactly unimaginative. He imagined the existence of the neutron a dozen years before it was discovered, and he figured out that a weird experiment conducted by his assistants had revealed that atoms contained a dense central nucleus. It was clear that the atomic nucleus packed an enormous quantity of energy, but Rutherford could imagine no way to extract that energy for practical purposes. In 1933, at a meeting of the British Association for the Advancement of Science, he noted that although the nucleus contained a lot of energy, it would also require energy to release it. Anyone saying we can exploit atomic energy “is talking moonshine,” Rutherford declared. To be fair, Rutherford qualified the moonshine remark by saying “with our present knowledge,” so in a way he perhaps was anticipating the discovery of nuclear fission a few years later. (And some historians have suggested that Rutherford did imagine the powerful release of nuclear energy, but thought it was a bad idea and wanted to discourage people from attempting it.)

  7. Age of the Earth
    Rutherford’s reputation for imagination was bolstered by his inference that radioactive matter deep underground could solve the mystery of the age of the Earth. In the mid-19th century, William Thomson (later known as Lord Kelvin) calculated the Earth’s age to be something a little more than 100 million years, and possibly much less. Geologists insisted that the Earth must be much older — perhaps billions of years — to account for the planet’s geological features.

Kelvin calculated his estimate assuming the Earth was born as a molten rocky mass that then cooled to its present temperature. But following the discovery of radioactivity at the end of the 19th century, Rutherford pointed out that it provided a new source of heat in the Earth’s interior. While giving a talk (in Kelvin’s presence), Rutherford suggested that Kelvin had basically prophesied a new source of planetary heat.

While Kelvin’s neglect of radioactivity is the standard story, a more thorough analysis shows that adding that heat to his math would not have changed his estimate very much. Rather, Kelvin’s mistake was assuming the interior to be rigid. John Perry (one of Kelvin’s former assistants) showed in 1895 that the flow of heat deep within the Earth’s interior would alter Kelvin’s calculations considerably — enough to allow the Earth to be billions of years old. It turned out that the Earth’s mantle is fluid on long time scales, which explains not only the age of the Earth but also plate tectonics.

  8. Charge-parity violation
    Before the mid-1950s, nobody imagined that the laws of physics gave a hoot about handedness. The same laws should govern matter in action when viewed straight-on or in a mirror, just as the rules of baseball applied equally to Ted Williams and Willie Mays, not to mention Mickey Mantle. But in 1956 physicists Tsung-Dao Lee and Chen Ning Yang suggested that perfect right-left symmetry (or “parity”) might be violated by the weak nuclear force, and experiments soon confirmed their suspicion.

Restoring sanity to nature, many physicists thought, required antimatter. If you just switched left with right (mirror image), some subatomic processes exhibited a preferred handedness. But if you also replaced matter with antimatter (switching electric charge), left-right balance would be restored. In other words, reversing both charge (C) and parity (P) left nature’s behavior unchanged, a principle known as CP symmetry. CP symmetry had to be perfectly exact; otherwise nature’s laws would change if you went backward (instead of forward) in time, and nobody could imagine that.

In the early 1960s, James Cronin and Val Fitch tested CP symmetry’s perfection by studying subatomic particles called kaons and their antimatter counterparts. Kaons and antikaons both have zero charge but are not identical, because they are made from different quarks. Thanks to the quirky rules of quantum mechanics, kaons can turn into antikaons and vice versa. If CP symmetry is exact, each should turn into the other equally often. But Cronin and Fitch found that antikaons turn into kaons more often than the other way around. And that implied that nature’s laws allowed a preferred direction of time. “People didn’t want to believe it,” Cronin said in a 1999 interview. Most physicists do believe it today, but the implications of CP violation for the nature of time and other cosmic questions remain mysterious.

  9. Behaviorism versus the brain
    In the early 20th century, the dogma of behaviorism, initiated by John Watson and championed a little later by B.F. Skinner, ensnared psychologists in a paradigm that literally excised imagination from science. The brain — site of all imagination — is a “black box,” the behaviorists insisted. Rules of human psychology (mostly inferred from experiments with rats and pigeons) could be scientifically established only by observing behavior. It was scientifically meaningless to inquire into the inner workings of the brain that directed such behavior, as those workings were in principle inaccessible to human observation. In other words, activity inside the brain was deemed scientifically irrelevant because it could not be observed. “When what a person does [is] attributed to what is going on inside him,” Skinner proclaimed, “investigation is brought to an end.”

Skinner’s behaviorist BS brainwashed a generation or two of followers into thinking the brain was beyond study. But fortunately for neuroscience, some physicists foresaw methods for observing neural activity in the brain without splitting the skull open, exhibiting imagination that the behaviorists lacked. In the 1970s Michel Ter-Pogossian, Michael Phelps and colleagues developed PET (positron emission tomography) scanning technology, which uses radioactive tracers to monitor brain activity. PET scanning is now complemented by magnetic resonance imaging, based on ideas developed in the 1930s and 1940s by physicists I.I. Rabi, Edward Purcell and Felix Bloch.

  10. Gravitational waves
    Nowadays astrophysicists are all agog about gravitational waves, which can reveal all sorts of secrets about what goes on in the distant universe. All hail Einstein, whose theory of gravity — general relativity — explains the waves’ existence. But Einstein was not the first to propose the idea. In the 19th century, James Clerk Maxwell devised the math explaining electromagnetic waves, and speculated that gravity might similarly induce waves in a gravitational field. He couldn’t figure out how, though. Later other scientists, including Oliver Heaviside and Henri Poincaré, speculated about gravity waves. So the possibility of their existence certainly had been imagined.

But many physicists doubted that the waves existed, or if they did, could not imagine any way of proving it. Shortly before Einstein completed his general relativity theory, German physicist Gustav Mie declared that “the gravitational radiation emitted … by any oscillating mass particle is so extraordinarily weak that it is unthinkable ever to detect it by any means whatsoever.” Even Einstein had no idea how to detect gravitational waves, although he worked out the math describing them in a 1918 paper. In 1936 he decided that general relativity did not predict gravitational waves at all. But the paper rejecting them was simply wrong.

As it turned out, of course, gravitational waves are real and can be detected. At first they were verified indirectly, by the diminishing distance between mutually orbiting pulsars. And more recently they were directly detected by huge experiments relying on lasers. Nobody had been able to imagine detecting gravitational waves a century ago because nobody had imagined the existence of pulsars or lasers.

All these failures show how prejudice can sometimes dull the imagination. But they also show how an imagination failure can inspire the quest for a new success. And that’s why science, so often detoured by dogma, still manages somehow, on long enough time scales, to provide technological wonders and cosmic insights beyond philosophers’ and poets’ wildest imagination.

A global warming pause that didn’t happen hampered climate science

It was one of the biggest climate change questions of the early 2000s: Had the planet’s rising fever stalled, even as humans pumped more heat-trapping gases into Earth’s atmosphere?

By the turn of the century, the scientific understanding of climate change was on firm footing. Decades of research showed that carbon dioxide was accumulating in Earth’s atmosphere, thanks to human activities like burning fossil fuels and cutting down carbon-storing forests, and that global temperatures were rising as a result. Yet weather records seemed to show that global warming slowed between around 1998 and 2012. How could that be?

After careful study, scientists found the apparent pause to be a hiccup in the data. Earth had, in fact, continued to warm. This hiccup, though, prompted an outsize response from climate skeptics and scientists. It serves as a case study for how public perception shapes what science gets done, for better or worse.

The mystery of what came to be called the “global warming hiatus” arose as scientists built up, year after year, data on the planet’s average surface temperature. Several organizations maintain their own temperature datasets; each relies on observations gathered at weather stations and from ships and buoys around the globe. The actual amount of warming varies from year to year, but overall the trend is going up, and record-hot years are becoming more common. The 1995 Intergovernmental Panel on Climate Change report, for instance, noted that recent years had been among the warmest recorded since 1860.

And then came the powerful El Niño of 1997–1998, a weather pattern that transferred large amounts of heat from the ocean into the atmosphere. The planet’s temperature soared as a result — but then, according to the weather records, it appeared to slacken dramatically. Between 1998 and 2012, the global average surface temperature rose at less than half the rate it did between 1951 and 2012. That didn’t make sense. Global warming should be accelerating over time as people ramp up the rate at which they add heat-trapping gases to the atmosphere.
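
The “rate” here is just the slope of a straight-line fit to annual temperature anomalies over a chosen window. A minimal sketch with synthetic data (standing in for a real record such as NOAA’s) shows the calculation, and hints at why a 15-year window is noisy enough to masquerade as a pause:

```python
# Sketch: compare warming rates over two periods by least-squares fit.
# The anomaly series is synthetic, for illustration only.
import numpy as np

rng = np.random.default_rng(0)
years = np.arange(1951, 2013)
anomaly = 0.012 * (years - 1951) + rng.normal(0, 0.1, years.size)  # fake data

def rate_per_decade(start: int, end: int) -> float:
    """Least-squares warming rate over [start, end], in deg C per decade."""
    mask = (years >= start) & (years <= end)
    slope = np.polyfit(years[mask], anomaly[mask], 1)[0]
    return slope * 10

print(f"1951-2012: {rate_per_decade(1951, 2012):+.3f} C/decade")
print(f"1998-2012: {rate_per_decade(1998, 2012):+.3f} C/decade")
```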

By the mid-2000s, climate skeptics had seized on the narrative that “global warming has stopped.” Most professional climate scientists were not studying the phenomenon, believing that the apparent pause fell within the range of natural temperature variability. But public attention soon caught up with them, and researchers began investigating whether the pause was real. It was a high-profile shift in scientific focus.

“In studying that anomalous period, we learned a lot of lessons about both the climate system and the scientific process,” says Zeke Hausfather, a climate scientist now with the technology company Stripe.

By the early 2010s, scientists were busily working to explain why the global temperature records seemed to be flatlining. Ideas included the contribution of cooling sulfur particles emitted by coal-burning power plants and heat being taken up by the Atlantic and Southern oceans. Such studies were the most focused attempt ever to understand the factors that drive year-to-year temperature variability. They revealed how much natural variability can be expected when factors such as a powerful El Niño are superimposed onto a long-term warming trend.

Scientists spent years investigating the purported warming pause — devoting more time and resources than they otherwise might have. So many papers were published on the apparent pause that scientists began joking that the journal Nature Climate Change should change its name to Nature Hiatus.

Then in 2015, a team led by researchers at the U.S. National Oceanic and Atmospheric Administration published a jaw-dropping conclusion in the journal Science. The rise in global temperatures had not plateaued; rather, incomplete data had obscured ongoing global warming. When more Arctic temperature records were included and biases in ocean temperature data were corrected, the NOAA dataset showed the heat-up continuing. With the newly corrected data, the apparent pause in global warming vanished. A 2017 study led by Hausfather confirmed and extended these findings, as did other reports.

Even after these studies were published, the hiatus remained a favored topic among climate skeptics, who used it to argue that concern over global warming was overblown. Congressman Lamar Smith, a Republican from Texas who chaired the House of Representatives’ science committee in the mid-2010s, was particularly incensed by the 2015 NOAA study. He demanded to see the underlying data while also accusing NOAA of altering it. (The agency denied fudging the data.)

“In retrospect, it is clear that we focused too much on the apparent hiatus,” Hausfather says. Figuring out why global temperature records seemed to plateau between 1998 and 2012 is important — but so is keeping a big-picture view of the broader understanding of climate change. The hiccup represented a short fluctuation in a much longer and much more important trend.

Science relies on testing hypotheses and questioning conclusions, but this is a case where probing an anomaly was arguably taken too far. It caused researchers to doubt their conclusions and spend large amounts of time questioning their well-established methods, says Stephan Lewandowsky, a cognitive scientist at the University of Bristol who has studied climate scientists’ response to the hiatus. Scientists studying the hiatus could instead have spent that effort providing clear information to policy makers about the reality of global warming and the urgency of addressing it.

The debates over whether the hiatus was real or not fed public confusion and undermined efforts to convince people to take aggressive action to reduce climate change’s impacts. That’s an important lesson going forward, Lewandowsky says.

“My sense is that the scientific community has moved on,” he says. “By contrast, the political operatives behind organized denial have learned a different lesson, which is that the ‘global warming has stopped’ meme is very effective in generating public complacency, and so they will use it at every opportunity.”

Already, some climate deniers are talking about a new “pause” in global warming because not every one of the past five years has set a new record, he notes. Yet the big-picture trend remains clear: Global temperatures have continued to rise in recent years. The warmest seven years on record have all occurred since 2015, and each decade since the 1980s has been warmer than the one before.

Unexplained hepatitis cases in kids offer more questions than answers

As health officials continue their investigation of unexplained cases of liver inflammation in children, what is known is still outpaced by what isn’t.

At least 500 cases of hepatitis from an unknown cause have been reported in children in roughly 30 countries, according to health agencies in Europe and the United States. As of May 18, 180 cases are under review in 36 U.S. states and territories.

Many of the children have recovered. But some cases have been severe, with more than two dozen of the kids needing liver transplants. At least a dozen children have died, including five in the United States.

The illnesses have mainly been seen in children under age 5. So far, health agencies have ruled out common causes of hepatitis, while reporting that some of the children have tested positive for adenovirus. That pathogen — which infects basically everyone, usually without serious issues — is not known as a primary cause of liver damage. For some children who are positive, officials have identified the particular adenovirus: type 41.

But there are several reasons why pinning an adenovirus as the sole hepatitis culprit doesn’t fully add up, researchers say. Nor is it clear whether the recent cases indicate an uptick in hepatitis illnesses, or just more attention. Though the cases seem to have popped up out of nowhere, “we’ve seen similar rare severe liver disease like this in children,” says Anna Peters, a pediatric transplant hepatologist at the Cincinnati Children’s Hospital Medical Center.

Most of all, it’s important for parents to remember that the cases described so far “are a rare phenomenon,” Peters says. “Parents shouldn’t panic.”

Hepatitis in children
Hepatitis is an inflammation of the liver that can interfere with the organ’s many functions, including filtering blood and regulating clotting. Three hepatitis viruses, called hepatitis A, B and C, are common causes of the illness in the United States. Hepatitis A spreads when infected fecal material reaches the mouth. Children can contract hepatitis B or C when the virus passes from a pregnant person to an infant. There are vaccines available for A and B but not C. An excessive dose of acetaminophen can also cause hepatitis in children.

The signs of hepatitis can include nausea, fatigue, a yellow tinge to skin and eyes, urine that’s darker than usual and stools that are light-colored, among other symptoms. Hepatitis that arises quickly usually resolves, whereas some cases progress more slowly and lead to liver damage over time.

It’s rare for a child to develop sudden liver failure. An estimated 500 to 600 cases occur each year in the United States, and around 30 percent of those are “indeterminate,” meaning a cause isn’t found, according to the North American Society for Pediatric Gastroenterology, Hepatology and Nutrition.

The indeterminate category of sudden liver failure has been known for some time, Peters says, and that subset of cases has similarities to the hepatitis under investigation. There hasn’t been data reported yet on whether the recent cases represent an increase over what’s been seen in prior years, Peters says. “Maybe this is just increased recognition of something that’s been going on.”

Adenovirus as a suspect
Not all of the children with hepatitis have been positive for adenovirus, nor have they all been tested. The European Centre for Disease Prevention and Control, or ECDC, has reported that of 151 cases tested, 90 were positive, or 60 percent. The last dispatch from the U.K. Health Security Agency, from early May, noted that 126 samples out of 163 had been tested, with 91, or 72 percent, positive. Further analysis of 18 cases identified adenovirus type 41.

Adenoviruses commonly infect people, typically causing colds, bronchitis or other respiratory illnesses. Two types, adenovirus 40 and 41, target the intestines, leading to gastrointestinal symptoms such as vomiting and diarrhea.

“All of these types, including this prime suspect type 41, have been detected everywhere continuously,” says virologist Adriana Kajon of the Lovelace Biomedical Research Institute in Albuquerque. “All of them have existed and have been reported continuously for decades.”

People usually recover from an adenovirus infection. The exception is those whose immune systems aren’t functioning properly — then, an infection can be serious. There have been cases of hepatitis from adenovirus in immunocompromised children, but the kids under investigation are not immunocompromised.

There are several curious details about the adenovirus findings. For example, the children who have tested positive for the virus had low levels in their blood. In cases of hepatitis from adenovirus, “the virus levels are very, very high,” Peters says.

Nor has adenovirus been found in the liver. In a study of nine children in Alabama who had the unexplained hepatitis and tested positive for adenovirus in blood samples, researchers examined liver tissue from six of the kids. There was no sign of the virus in the liver, the researchers report May 6 in Morbidity and Mortality Weekly Report.

“It’s very hard to implicate a virus that you cannot find in the crime scene,” Kajon said May 3 at a symposium for clinical virology in West Palm Beach, Fla.

Another oddity: There doesn’t seem to be a path of viral spread from one location to another. That’s unlike SARS-CoV-2, the virus that causes COVID-19, “where there was quite clearly a spread from some epicenter originally,” says virologist and clinician Andrew Tai of the University of Michigan Medical School in Ann Arbor, who treats patients with liver disease.

An adenovirus culprit is not out of the realm of possibility, but “virus associations with diseases are always hard to really nail down and prove,” says virologist Katherine Spindler, also of the University of Michigan Medical School. “We’re going to be hard pressed to say this is due to adenovirus 41, let alone adenovirus.”

Considering COVID-19
Looming over all of this is the possibility that a many-magnitudes-larger infectious disease outbreak, COVID-19, could play a part.

Researchers have found that SARS-CoV-2 affects the liver in both mild and severe cases of COVID-19. There is evidence that the liver becomes inflamed in children and adults during an infection. Liver failure can occur with a severe bout of COVID-19. And children who develop multisystem inflammatory syndrome in children, or MIS-C, after COVID-19 can have hepatitis as part of that syndrome.

Peters and her colleagues have described yet another way SARS-CoV-2 could put the liver at risk. The team reported the case of a young female patient from the fall of 2020, who had sudden liver failure about three weeks after a SARS-CoV-2 infection. She did not have MIS-C. A liver biopsy showed signs of autoimmune hepatitis, a type in which the body attacks its own liver, Peters and colleagues report in the May Journal of Pediatric Gastroenterology and Nutrition Reports. The patient recovered after treatment with anti-inflammatory medication.

Some of the children with hepatitis have tested positive for SARS-CoV-2, but more haven’t. The ECDC has reported that 20 of 173 cases tested were positive for SARS-CoV-2, while the U.K. Health Security Agency detected the virus in 24 of 132 samples tested.

However, very little data has been reported on whether the children have antibodies to SARS-CoV-2, which would be evidence of a past infection. (Vaccination hasn’t been available to most of these young children.) The ECDC found that of 19 cases tested, 14 were positive for antibodies to the virus.

One theory is that an earlier SARS-CoV-2 infection has set the stage for an unexpected response to an adenovirus or other infection. With people no longer minimizing contact, the spread of adenoviruses and other respiratory viruses is returning to prepandemic levels.

“We are possibly seeing the return of these forgotten pathogens, so to speak, aggravating disease or eliciting severe inflammation resulting from some kind of preexisting condition,” which could be COVID-19, Kajon said on May 3.

“I cannot think of anything else that has had a worldwide impact that can explain cases of hepatitis in places as distant as the U.K. and Argentina,” Kajon says.

With SARS-CoV-2, researchers have a good sense of how it causes disease during an active infection, Peters says. But for the longer-term effects, “everybody is still sorting things out.”

A very specific kind of brain cell dies off in people with Parkinson’s

Deep in the human brain, a very specific kind of cell dies during Parkinson’s disease.

For the first time, researchers have sorted large numbers of human brain cells in the substantia nigra into 10 distinct types. Just one is especially vulnerable in Parkinson’s disease, the team reports May 5 in Nature Neuroscience. The result could lead to a clearer view of how Parkinson’s takes hold, and perhaps even ways to stop it.

The new research “goes right to the core of the matter,” says neuroscientist Raj Awatramani of Northwestern University Feinberg School of Medicine in Chicago. Pinpointing the brain cells that seem to be especially susceptible to the devastating disease is “the strength of this paper,” says Awatramani, who was not involved in the study.

Parkinson’s disease steals people’s ability to move smoothly, leaving balance problems, tremors and rigidity. In the United States, nearly 1 million people are estimated to have Parkinson’s. Scientists have known for decades that these symptoms come with the death of nerve cells in the substantia nigra. Neurons there churn out dopamine, a chemical signal involved in movement, among other jobs (SN: 9/7/17).

But those dopamine-making neurons are not all equally vulnerable in Parkinson’s, it turns out.

“This seemed like an opportunity to … really clarify which kinds of cells are actually dying in Parkinson’s disease,” says Evan Macosko, a psychiatrist and neuroscientist at Massachusetts General Hospital in Boston and the Broad Institute of MIT and Harvard.

The tricky part was that dopamine-making neurons in the substantia nigra are rare. In samples of postmortem brains, “we couldn’t survey enough of [the cells] to really get an answer,” Macosko says. But Abdulraouf Abdulraouf, a researcher in Macosko’s laboratory, led experiments that sorted these cells, figuring out a way to selectively pull the cells’ nuclei out from the rest of the cells present in the substantia nigra. That enrichment ultimately led to an abundance of nuclei to analyze.

By studying over 15,000 nuclei from the brains of eight formerly healthy people, the researchers further sorted dopamine-making cells in the substantia nigra into 10 distinct groups. Each of these cell groups was defined by a specific brain location and certain combinations of genes that were active.
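
The grouping itself comes from standard single-cell analysis: cluster nuclei by which genes are active. The sketch below is a generic workflow using the scanpy library under assumed defaults, not the authors’ actual pipeline, and "nuclei.h5ad" is a hypothetical input file of per-nucleus gene counts:

```python
# Generic single-nucleus RNA-seq clustering sketch (not the study's pipeline).
import scanpy as sc

adata = sc.read_h5ad("nuclei.h5ad")           # counts matrix: nuclei x genes
sc.pp.normalize_total(adata, target_sum=1e4)  # normalize library sizes
sc.pp.log1p(adata)                            # log-transform counts
sc.pp.highly_variable_genes(adata, n_top_genes=2000)
sc.pp.pca(adata, n_comps=50)                  # reduce dimensionality
sc.pp.neighbors(adata)                        # build nearest-neighbor graph
sc.tl.leiden(adata, resolution=1.0)           # graph-based clustering
print(adata.obs["leiden"].value_counts())     # cluster sizes, e.g. 10 groups
```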

When the researchers looked at substantia nigra neurons in the brains of people who died with either Parkinson’s disease or the related Lewy body dementia, the team noticed something curious: One of these 10 cell types was drastically diminished.

These missing neurons were identified by their location in the lower part of the substantia nigra and an active AGTR1 gene, lab member Tushar Kamath and colleagues found. That gene was thought to serve simply as a good way to identify these cells, Macosko says; researchers don’t know whether the gene has a role in these dopamine-making cells’ fate in people.

The new finding points to possible ways to counter the debilitating diseases. Scientists have been keen to replace the missing dopamine-making neurons in the brains of people with Parkinson’s. The new study shows what those cells would need to look like, Awatramani says. “If a particular subtype is more vulnerable in Parkinson’s disease, maybe that’s the one we should be trying to replace,” he says.

In fact, Macosko says that stem cell scientists have already been in contact, eager to make these specific cells. “We hope this is a guidepost,” Macosko says.

The new study involved only a small number of human brains. Going forward, Macosko and his colleagues hope to study more brains, and more parts of those brains. “We were able to get some pretty interesting insights with a relatively small number of people,” he says. “When we get to larger numbers of people with other kinds of diseases, I think we’re going to learn a lot.”

How some sunscreens damage coral reefs

One common chemical in sunscreen can have devastating effects on coral reefs. Now, scientists know why.

Sea anemones, which are closely related to corals, and mushroom coral can turn oxybenzone — a chemical that protects people against ultraviolet light — into a deadly toxin that’s activated by light. The good news is that algae living inside the creatures can soak up the toxin and blunt its damage, researchers report in the May 6 Science.

But that also means that bleached coral reefs lacking algae may be more vulnerable to death. Heat-stressed corals and anemones can eject helpful algae that provide oxygen and remove waste products, which turns reefs white. Such bleaching is becoming more common as a result of climate change (SN: 4/7/20).

The findings hint that sunscreen pollution and climate change combined could be a greater threat to coral reefs and other marine habitats than either would be separately, says Craig Downs. He is a forensic ecotoxicologist with the nonprofit Haereticus Environmental Laboratory in Amherst, Va., and was not involved with the study.

Previous work suggested that oxybenzone can kill young corals or prevent adult corals from recovering after tissue damage. As a result, some places, including Hawaii and Thailand, have banned oxybenzone-containing sunscreens.

In the new study, environmental chemist Djordje Vuckovic of Stanford University and colleagues found that glass anemones (Exaiptasia pallida) exposed to oxybenzone and UV light add sugars to the chemical. While such sugary add-ons would typically help organisms detoxify chemicals and clear them from the body, the oxybenzone-sugar compound instead becomes a toxin that’s activated by light.

Anemones exposed to either simulated sunlight or oxybenzone alone survived the full 21 days of the experiment, the team showed. But all anemones exposed to fake sunlight while submerged in water containing the chemical died within 17 days.

The anemones’ algal friends absorbed much of the oxybenzone and the toxin that the animals were exposed to in the lab. Anemones lacking algae died days sooner than anemones with algae.

In similar experiments, algae living inside mushroom coral (Discosoma sp.) also soaked up the toxin, a sign that algal relationships are a safeguard against its harmful effects. The coral’s algae seem to be particularly protective: Over eight days, no mushroom corals died after being exposed to oxybenzone and simulated sunlight.

It’s still unclear what amount of oxybenzone might be toxic to coral reefs in the wild. Another lingering question, Downs says, is whether other sunscreen components that are similar in structure to oxybenzone might have the same effects. Pinning that down could help researchers make better, reef-safe sunscreens.

Joggers naturally pace themselves to conserve energy even on short runs

For many recreational runners, taking a jog is a fun way to stay fit and burn calories. But it turns out an individual tends to settle into the same comfortable pace on short and long runs — and that pace is the one that minimizes their body’s energy use over a given distance.

“I was really surprised,” says Jessica Selinger, a biomechanist at Queen’s University in Kingston, Canada. “Intuitively, I would have thought people run faster at shorter distances and slow their pace at longer distances.”

Selinger and colleagues combined data from more than 4,600 runners, who went on 37,201 runs while wearing a fitness device called the Lumo Run, with lab-based physiology data. The analysis, described April 28 in Current Biology, also shows that it takes more energy for someone to run a given distance if they run faster or slower than their optimum speed.

“There is a speed that for you is going to feel the best,” Selinger says. “That speed is the one where you’re actually burning fewer calories.”
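
A little math shows why such an optimum exists. (This is a simplified illustration, not the study’s actual model; a and b are placeholder constants.) Suppose a runner’s metabolic power at speed $v$ is roughly $P(v) = a + b v^2$. The energy spent per unit distance is then

$$C(v) = \frac{P(v)}{v} = \frac{a}{v} + b\,v,$$

which is large at very slow speeds (the run takes a long time) and at very fast speeds (power climbs steeply). Setting $dC/dv = -a/v^2 + b = 0$ gives a single energy-minimizing speed, $v^* = \sqrt{a/b}$, in between.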

The runners ranged in age from 16 to 83, and had body mass indices spanning from 14.3 to 45.4. But no matter participants’ age, weight or sex — or whether they ran only a narrow range of distances or runs of varying lengths — the same pattern showed up in the data repeatedly.

Researchers have thought that running was performance-driven, says Melissa Thompson, a biomechanist at Fort Lewis College in Durango, Colo., who was not involved in the new study. This new research, she says, is “talking about preference, not performance.”

Most related research, Selinger says, has been done in university laboratories, with study subjects who are generally younger and healthier than the general population. By using wearable devices, the researchers could track many more runs, across more real-life conditions than is possible in a lab. That allowed the scientists to look at a “much broader cross section of humanity,” she says. To determine optimum energy-efficient speeds, the team ran treadmill tests that measured energy use at different paces in people representative of those in the fitness tracker data.

Because the study includes a wide range of conditions and doesn’t control for things like fasting before running, it’s messier than data gathered in labs. Still, the sheer volume of real-world runs recorded by the wearable devices supports a convincing general rule about how humans run, says Rodger Kram, a physiologist at the University of Colorado Boulder not involved with the study. “I think the rule’s right.”

The results don’t apply to very long runs when fatigue starts to set in, or to race performance by elite athletes or others consciously training for speed. And a runner’s optimum pace can change over time, with training or age, for instance.

There are quick tricks for runners who want to speed up and burn a few more calories, temporarily overriding the body’s natural inclination: Listen to upbeat music or jog alongside someone with a faster pace, Selinger says. “But it seems like your preference is actually to sink back into that optimum.”

The results match observations of optimum pacing from animals like horses and wildebeests, and also correspond to the way humans tend to walk at a speed that minimizes their individual energy use (SN: 9/10/15).

It does make sense that humans would be adapted to run at an optimum speed for minimizing energy use, says coauthor Scott Delp, a biomechanist at Stanford University. Imagine being an early human ancestor going out to hunt difficult prey. “It might be days before I get my next food,” he says. “So I want to spend the least energy en route to getting that food.”

These male spiders catapult away to avoid being cannibalized after sex

An act of acrobatics keeps males of one orb-weaving spider species from becoming their mates’ post-sex snack.

After mating, Philoponella prominens males catapult away from females at speeds up to nearly 90 centimeters per second, researchers report April 25 in Current Biology. Other spiders jump to capture prey or avoid predators (SN: 3/16/19). But P. prominens is unique among spiders in that males soar through the air to avoid sexual cannibalism, the researchers say.

P. prominens is a social species that’s native to countries such as Japan and Korea. Up to 300 individual spiders can come together to weave an entire neighborhood of webs. While studying P. prominens’ sexual behavior, arachnologist Shichang Zhang and colleagues noticed that sex seemed to always end with a catapulting male. But the movement was “so fast that common cameras could not record the details,” says Zhang, of Hubei University in Wuhan, China.

High-resolution video of mating partners clocked the male arachnids’ speed from around 32 cm/s to 88 cm/s, the researchers report. That’s equal to just under 1 mile per hour to nearly 2 mph.
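
The mph figures follow from simple unit arithmetic (a check on the numbers above, not new data): one mile per hour is 1,609 meters per 3,600 seconds, or about 44.7 cm/s, so

$$\frac{32\ \text{cm/s}}{44.7\ \text{cm/s per mph}} \approx 0.7\ \text{mph} \qquad\text{and}\qquad \frac{88\ \text{cm/s}}{44.7\ \text{cm/s per mph}} \approx 2\ \text{mph}.$$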

The jump looks a little like the start of a backstroke swimming race, Zhang says. Males hold the tips of their front legs against a female’s body. The spiders then use hydraulic pressure to extend a joint in those legs, quickly launching a male off a female before she can capture and eat him.

Of the 155 successful mating rituals that the researchers observed, 152 ended with the male catapulting to survival; the remaining three males fell victim to their partners. Female spiders also ate all 30 males that the team stopped from jumping to freedom with a paintbrush.

These male orb weavers probably acquired their jumping abilities to counter females’ cannibalistic tendencies, Zhang says. The spiders’ leap to survival is a “fantastic kinetic performance.”