A century ago, Alexander Friedmann envisioned the universe’s expansion

For millennia, the universe did a pretty good job of keeping its secrets from science.

Ancient Greeks thought the universe was a sphere of fixed stars surrounding smaller spheres carrying planets around the central Earth. Even Copernicus, who in the 16th century correctly replaced the Earth with the sun, viewed the universe as a single solar system encased by the star-studded outer sphere.

But in the centuries that followed, the universe revealed some of its vastness. It contained countless stars agglomerated in huge clusters, now called galaxies.

Then, at the end of the 1920s, the cosmos disclosed its most closely held secret of all: It was getting bigger. Rather than static and stable, an everlasting and ever-the-same entity encompassing all of reality, the universe continually expanded. Observations of distant galaxies showed them flying apart from each other, suggesting the current cosmos to be just the adult phase of a universe born long ago in the burst of a tiny blotch of energy.

It was a surprise that shook science at its foundations, undercutting philosophical preconceptions about existence and launching a new era in cosmology, the study of the universe. But even more surprising, in retrospect, is that such a deep secret had already been suspected by a mathematician whose specialty was predicting the weather.
A century ago this month (May 1922), Russian mathematician-meteorologist Alexander Friedmann composed a paper, based on Einstein’s general theory of relativity, that outlined multiple possible histories of the universe. One such possibility described cosmic expansion, starting from a singular point. In essence, even without considering any astronomical evidence, Friedmann had anticipated the modern Big Bang theory of the birth and evolution of the universe.

“The new vision of the universe opened by Friedmann,” writes Russian physicist Vladimir Soloviev in a recent paper, “has become a foundation of modern cosmology.”

Friedmann was not well known at the time. He had graduated in 1910 from St. Petersburg University in Russia, having studied math along with some physics. In graduate school he investigated the use of math in meteorology and atmospheric dynamics. He applied that expertise in aiding the Russian air force during World War I, using math to predict the optimum release point for dropping bombs on enemy targets.

After the war, Friedmann learned of Einstein’s general theory of relativity, which describes gravity as a manifestation of the geometry of space (or more accurately, spacetime). In Einstein’s theory, mass distorts spacetime, producing spacetime “curvature,” which makes masses appear to attract each other.

Friedmann was especially intrigued by Einstein’s 1917 paper (and a similar paper by Willem de Sitter) applying general relativity to the universe as a whole. Einstein found that his original equations allowed the universe to grow or shrink. But he considered that unthinkable, so he added a term representing a repulsive force that (he thought) would keep the size of the cosmos constant. Einstein concluded that space had a positive spatial curvature (like the surface of a ball), implying a “closed,” or finite universe.

Friedmann accepted the new term, called the cosmological constant, but pointed out that for various values of that constant, along with other assumptions, the universe might exhibit very different behaviors. Einstein’s static universe was a special case; the universe might also expand forever, or expand for a while, then contract to a point and then begin expanding again.
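
In the notation physicists use today (not Friedmann’s original 1922 symbols), the relationship at the heart of his analysis is usually written as a single equation linking the universe’s expansion rate to its contents, its spatial curvature and Einstein’s cosmological constant:

$$\left(\frac{\dot{a}}{a}\right)^{2} = \frac{8\pi G}{3}\,\rho - \frac{kc^{2}}{a^{2}} + \frac{\Lambda c^{2}}{3}$$

Here a(t) is the cosmic scale factor, ρ is the average density of matter and energy, k encodes whether space is positively curved, flat or negatively curved, and Λ is the cosmological constant. Different choices of ρ, k and Λ yield Einstein’s delicately balanced static universe, perpetual expansion, or expansion followed by eventual recollapse: the menu of possible histories Friedmann described.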

Friedmann’s paper describing dynamic universes, titled “On the Curvature of Space,” was accepted for publication in the prestigious Zeitschrift für Physik on June 29, 1922.

Einstein objected. He wrote a note to the journal contending that Friedmann had committed a mathematical error. But the error was Einstein’s. He later acknowledged that Friedmann’s math was correct, while still denying that it had any physical validity.

Friedmann insisted otherwise.

He was not just a pure mathematician, oblivious to the physical meanings of his symbols on paper. His in-depth appreciation of the relationship between equations and the atmosphere persuaded him that the math meant something physical. He even wrote a book (The World as Space and Time) delving deeply into the connection between the math of spatial geometry and the motion of physical bodies. Physical bodies “interpret” the “geometrical world,” he declared, enabling scientists to test which of the various possible geometrical worlds humans actually inhabit. Because of the physics-math connection, he averred, “it becomes possible to determine the geometry of the geometrical world through experimental studies of the physical world.”

So when Friedmann derived solutions to Einstein’s equations, he translated them into the possible physical meanings for the universe. Depending on various factors, the universe could be expanding from a point, or from a finite but smaller initial state, for instance. In one case he envisioned, the universe began to expand at a decelerating rate, but then reached an inflection point, whereupon it began expanding at a faster and faster rate. At the end of the 20th century, astronomers measuring the brightness of distant supernovas concluded that the universe had taken just such a course, a shock almost as surprising as the expansion of the universe itself. But Friedmann’s math had already forecast such a possibility.
No doubt Friedmann’s deep appreciation for the synergy of abstract math and concrete physics prepared his mind to consider the notion that the universe could be expanding. But maybe he had some additional help. Although he was the first scientist to seriously propose an expanding universe, he wasn’t the first person. Almost 75 years before Friedmann’s paper, the poet Edgar Allan Poe had published an essay (or “prose poem”) called Eureka. In that essay Poe described the history of the universe as expanding from the explosion of a “primordial particle.” Poe even described the universe as growing and then contracting back to a point again, just as envisioned in one of Friedmann’s scenarios.

Although Poe had studied math during his brief time as a student at West Point, he had used no equations in Eureka, and his essay was not recognized as a contribution to science. At least not directly. It turns out, though, that Friedmann was an avid reader, and among his favorite authors were Dostoevsky and Poe. So perhaps that’s why Friedmann was more receptive to an expanding universe than other scientists of his day.

Today Friedmann’s math remains at the core of modern cosmological theory. “The fundamental equations he derived still provide the basis for the current cosmological theories of the Big Bang and the accelerating universe,” Israeli mathematician and historian Ari Belenkiy noted in a 2013 paper. “He introduced the fundamental idea of modern cosmology — that the universe is dynamic and may evolve in different manners.”

Friedmann emphasized that astronomical knowledge in his day was insufficient to reveal which of the possible mathematical histories the universe has chosen. Now scientists have much more data, and have narrowed the possibilities in a way that confirms the prescience of Friedmann’s math.

Friedmann did not live to see the triumphs of his insights, though, or even the early evidence that the universe really does expand. He died in 1925 from typhoid fever, at the age of 37. But he died knowing that he had deciphered a secret about the universe deeper than any suspected by any scientist before him. As his wife remembered, he liked to quote a passage from Dante: “The waters I am entering, no one yet has crossed.”

A very specific kind of brain cell dies off in people with Parkinson’s

Deep in the human brain, a very specific kind of cell dies during Parkinson’s disease.

For the first time, researchers have sorted large numbers of human brain cells in the substantia nigra into 10 distinct types. Just one is especially vulnerable in Parkinson’s disease, the team reports May 5 in Nature Neuroscience. The result could lead to a clearer view of how Parkinson’s takes hold, and perhaps even ways to stop it.

The new research “goes right to the core of the matter,” says neuroscientist Raj Awatramani of Northwestern University Feinberg School of Medicine in Chicago. Pinpointing the brain cells that seem to be especially susceptible to the devastating disease is “the strength of this paper,” says Awatramani, who was not involved in the study.

Parkinson’s disease steals people’s ability to move smoothly, leaving balance problems, tremors and rigidity. In the United States, nearly 1 million people are estimated to have Parkinson’s. Scientists have known for decades that these symptoms come with the death of nerve cells in the substantia nigra. Neurons there churn out dopamine, a chemical signal involved in movement, among other jobs (SN: 9/7/17).

But those dopamine-making neurons are not all equally vulnerable in Parkinson’s, it turns out.

“This seemed like an opportunity to … really clarify which kinds of cells are actually dying in Parkinson’s disease,” says Evan Macosko, a psychiatrist and neuroscientist at Massachusetts General Hospital in Boston and the Broad Institute of MIT and Harvard.
The tricky part was that dopamine-making neurons in the substantia nigra are rare. In samples of postmortem brains, “we couldn’t survey enough of [the cells] to really get an answer,” Macosko says. But Abdulraouf Abdulraouf, a researcher in Macosko’s laboratory, led experiments that sorted these cells, figuring out a way to selectively pull the cells’ nuclei out from the rest of the cells present in the substantia nigra. That enrichment ultimately led to an abundance of nuclei to analyze.

By studying over 15,000 nuclei from the brains of eight formerly healthy people, the researchers further sorted dopamine-making cells in the substantia nigra into 10 distinct groups. Each of these cell groups was defined by a specific brain location and certain combinations of genes that were active.

When the researchers looked at substantia nigra neurons in the brains of people who died with either Parkinson’s disease or the related Lewy body dementia, the team noticed something curious: One of these 10 cell types was drastically diminished.

These missing neurons were identified by their location in the lower part of the substantia nigra and an active AGTR1 gene, lab member Tushar Kamath and colleagues found. That gene was thought to serve simply as a good way to identify these cells, Macosko says; researchers don’t know whether the gene has a role in these dopamine-making cells’ fate in people.

The new finding points to ways to perhaps counter the debilitating diseases. Scientists have been keen to replace the missing dopamine-making neurons in the brains of people with Parkinson’s. The new study shows what those cells would need to look like, Awatramani says. “If a particular subtype is more vulnerable in Parkinson’s disease, maybe that’s the one we should be trying to replace,” he says.

In fact, Macosko says that stem cell scientists have already been in contact, eager to make these specific cells. “We hope this is a guidepost,” Macosko says.

The new study involved only a small number of human brains. Going forward, Macosko and his colleagues hope to study more brains, and more parts of those brains. “We were able to get some pretty interesting insights with a relatively small number of people,” he says. “When we get to larger numbers of people with other kinds of diseases, I think we’re going to learn a lot.”

How some sunscreens damage coral reefs

One common chemical in sunscreen can have devastating effects on coral reefs. Now, scientists know why.

Sea anemones, which are closely related to corals, and mushroom coral can turn oxybenzone — a chemical that protects people against ultraviolet light — into a deadly toxin that’s activated by light. The good news is that algae living alongside the creatures can soak up the toxin and blunt its damage, researchers report in the May 6 Science.

But that also means that bleached coral reefs lacking algae may be more vulnerable to death. Heat-stressed corals and anemones can eject helpful algae that provide oxygen and remove waste products, which turns reefs white. Such bleaching is becoming more common as a result of climate change (SN: 4/7/20).
The findings hint that sunscreen pollution and climate change combined could be a greater threat to coral reefs and other marine habitats than either would be separately, says Craig Downs. He is a forensic ecotoxicologist with the nonprofit Haereticus Environmental Laboratory in Amherst, Va., and was not involved with the study.

Previous work suggested that oxybenzone can kill young corals or prevent adult corals from recovering after tissue damage. As a result, some places, including Hawaii and Thailand, have banned oxybenzone-containing sunscreens.

In the new study, environmental chemist Djordje Vuckovic of Stanford University and colleagues found that glass anemones (Exaiptasia pallida) exposed to oxybenzone and UV light add sugars to the chemical. While such sugary add-ons would typically help organisms detoxify chemicals and clear them from the body, the oxybenzone-sugar compound instead becomes a toxin that’s activated by light.

Anemones exposed to either simulated sunlight or oxybenzone alone survived the experiment’s full 21 days, the team showed. But all anemones exposed to simulated sunlight while submerged in water containing the chemical died within 17 days.

The anemones’ algal friends absorbed much of the oxybenzone and the toxin that the animals were exposed to in the lab. Anemones lacking algae died days sooner than anemones with algae.

In similar experiments, algae living inside mushroom coral (Discosoma sp.) also soaked up the toxin, a sign that algal relationships are a safeguard against its harmful effects. The coral’s algae seem to be particularly protective: Over eight days, no mushroom corals died after being exposed to oxybenzone and simulated sunlight.

It’s still unclear what amount of oxybenzone might be toxic to coral reefs in the wild. Another lingering question, Downs says, is whether other sunscreen components that are similar in structure to oxybenzone might have the same effects. Pinning that down could help researchers make better, reef-safe sunscreens.

Replacing some meat with microbial protein could help fight climate change

“Fungi Fridays” could save a lot of trees — and take a bite out of greenhouse gas emissions. Eating one-fifth less red meat and instead munching on microbial proteins derived from fungi or algae could cut annual deforestation in half by 2050, researchers report May 5 in Nature.

Raising cattle and other ruminants contributes methane and nitrous oxide to the atmosphere, while clearing forests for pasture lands adds carbon dioxide (SN: 4/4/22; SN: 7/13/21). So the hunt is on for environmentally friendly substitutes, such as lab-grown hamburgers and cricket farming (SN: 9/20/18; SN: 5/2/19).

Another alternative is microbial protein, made from cells cultivated in a laboratory and nurtured with glucose. Fermented fungal spores, for example, produce a dense, doughy substance called mycoprotein, while fermented algae produce spirulina, a dietary supplement.
Cell-cultured foods do require sugar from croplands, but studies show that mycoprotein produces fewer greenhouse gas emissions and uses less land and water than raising cattle, says Florian Humpenöder, a climate modeler at Potsdam Institute for Climate Impact Research in Germany. However, a full comparison of foods’ future environmental impacts also requires accounting for changes in population, lifestyle, dietary patterns and technology, he says.

So Humpenöder and colleagues incorporated projected socioeconomic changes into computer simulations of land use and deforestation from 2020 through 2050. Then they simulated four scenarios, substituting microbial protein for 0 percent, 20 percent, 50 percent or 80 percent of the global red meat diet by 2050.
A little substitution went a long way, the team found: Just 20 percent microbial protein substitution cut annual deforestation rates — and associated CO2 emissions — by 56 percent from 2020 to 2050.

Eating more microbial proteins could be part of a portfolio of strategies to address the climate and biodiversity crises — alongside measures to protect forests and decarbonize electricity generation, Humpenöder says.

Latin America defies cultural theories based on East-West comparisons

When Igor de Almeida moved to Japan from Brazil nine years ago, the transition should have been relatively easy. Both Japan and Brazil are collectivist nations, where people tend to value the group’s needs over their own. And research shows that immigrants adapt more easily when the home and new country’s cultures match.

But to de Almeida, a cultural psychologist now at Kyoto University, the countries’ cultural differences were striking. Japanese people prioritize formal relationships, such as with coworkers or members of the same “bukatsu,” or extracurricular club, for instance, while Brazilian people prioritize friends in their informal social network. “Sometimes I try to find [cultural] similarities but it’s really hard,” de Almeida says.

Now, new research helps explain that disconnect. For decades, psychologists have studied how culture shapes the mind, or people’s thoughts and behaviors, by comparing Eastern and Western nations. But two research groups working independently in Latin America are finding that a cultural framework that splits the world in two is overly simplistic, obscuring nuances elsewhere in the world.

Due to differences in methodology and interpretation, the teams’ findings about how people living in the collectivist nations of Latin America think are also contradictory. And that raises a larger question: Will overarching cultural theories based on East-West divisions hold up over time, or are new theories needed?

However this debate unfolds, cultural psychologists argue that the field must expand. “If you make most of the cultures of the world … invisible,” says Vivian Vignoles, a cultural psychologist at the University of Sussex in England, “you will get all sorts of things wrong.”

Such misconceptions can jeopardize political alliances, business relationships, public health initiatives and general theories for how people find happiness and meaning. “Culture shapes what it means to be a person,” says Stanford University behavioral scientist Hazel Rose Markus. “What it means to be a person guides all of our behavior, how we think, how we feel, what motivates us [and] how we respond to other individuals and groups.”

Culture and the mind

Until four decades ago, most psychologists believed that culture had little bearing on the mind. That changed in 1980. Surveys of IBM employees taken across some 70 countries showed that attitudes toward work largely depended on workers’ home country, IBM organizational psychologist Geert Hofstede wrote in Culture’s Consequences.

Markus and Shinobu Kitayama, a cultural psychologist at the University of Michigan in Ann Arbor, subsequently fleshed out one of Hofstede’s four cultural principles: individualism versus collectivism. Culture does influence thinking, the duo claimed in a now widely cited 1991 paper in Psychological Review. By comparing people mostly in the East and West, they surmised that living in individualist countries (i.e., Western ones) led people to think independently, while living in collectivist countries (the East) led people to think interdependently.

That paper was pioneering at the time, Vignoles says. Before that, with psychological research based almost exclusively in the West, the Western mind had become the default mind. Now, “instead of being only one kind of person in the world, there [were] two kinds of persons in the world.”

Latin America: A case study

How individualism/collectivism shape the mind now undergirds the field of cross-cultural psychology. But researchers continue to treat the East and West, chiefly Japan and the United States, as prototypes, Vignoles and colleagues say.

To expand beyond that narrow lens, the team surveyed 7,279 participants in 33 nations and 55 cultures. Participants read such statements as “I prefer to turn to other people for help rather than solely rely on myself” and “I consider my happiness separate from the happiness of my friends and family.” They then rated how well those statements reflected their values on a scale from 1 for “not at all” to 9 for “exactly.”

That analysis allowed the researchers to identify seven dimensions of independence/interdependence, including self-reliance versus dependence on others and emphasis on self-expression versus harmony. Strikingly, Latin Americans were as independent as, or more independent than, Westerners on six of the seven dimensions, the team reported in 2016 in the Journal of Experimental Psychology: General.

The researchers’ subsequent analysis of four studies comprising 17,255 participants across 53 nations largely reaffirmed that surprising finding. For instance, Latin Americans are more expressive than even Westerners, Vignoles, de Almeida and colleagues report in February in Perspectives on Psychological Science. That finding violates the common view that people living in collectivist societies suppress their emotions to foster harmony, while people in individualistic countries emote as a form of self-expression.

Latin American nations are collectivist, as defined by Hofstede and others, but the people think and behave independently, the team concludes.

Kitayama’s team has a different take: Latin Americans are interdependent, just in a wholly different way than East Asians. Rather than suppressing emotions, Latin Americans tend to express positive, socially engaging emotions to communicate with others, says cultural psychologist Cristina Salvador of Duke University. That fosters interdependence, unlike the way Westerners express emotions to show their personal feelings. Westerners’ feelings can be negative or positive and often have little to do with their social surroundings — a sign of independence.

Salvador, Kitayama and colleagues had more than 1,000 respondents in Chile, Colombia, Mexico, Japan and the United States reflect on various social scenarios, rather than asking explicit questions as Vignoles’ team did. For instance, respondents were asked to imagine winning a prize. They then picked what emotions — such as shame, guilt, anger, friendliness or closeness to others — they would express with family and friends.

Respondents from Latin America and the United States both expressed strong emotions, Salvador reported in February at the Society for Personality and Social Psychology conference in San Francisco. But people in the United States expressed egocentric emotions, such as pride, while people in Latin America expressed emotions that emphasize connection with others.

Because Latin America’s high ethnic and linguistic diversity made communication with words difficult, people learned how to communicate in other ways, Kitayama says. “Emotion became a very important means of social communication.”

Decentering the West

More research is needed to reconcile those findings. But how should that research proceed? Though a shift to a broader framework has begun, research in cultural psychology still hinges on the East-West binary, researchers from both teams say.

Psychologists who peer review studies for acceptance into scientific journals still “want a mainstream, white, U.S. comparison sample,” Salvador says. “[Often] you need an Asian sample, as well.”

The primacy of the East and West means that psychological differences between those two regions dominate research and discussions. But both teams are expanding the scope of their research despite those challenges.

In a theory paper also presented at the San Francisco conference, for instance, Kitayama’s team maps out how interdependence, which it argues precedes the emergence of independence, might have morphed as it spread around the globe (SN: 11/7/19). Besides diversity giving way to “expressive interdependence” in Latin America, the team describes “self-effacing interdependence” in East Asia stemming from the communal nature of rice farming, “self-assertive interdependence” in Arab regions arising from nomadic life and “argumentative interdependence” in South Asia arising from the region’s central role in trade (SN: 7/14/14).

This research started with a “West and the rest” mentality, Kitayama says. His work with Markus created an “East-West and the rest” mentality. Now finally, psychologists are grappling with “the rest,” he says. “The time is really ready to expand this [research] to cover the rest of the world.”

De Almeida imagines decentering the West yet further. What if researchers had started off by comparing Japan and Brazil instead of Japan and the United States, he wonders. Instead of the current laser focus on individualism/collectivism, some other defining facet of culture would have likely risen to prominence. “I would say emotional expression, that’s the most important thing,” de Almeida says.

He sees a straightforward solution. “We could increase the number of studies not involving the United States,” he says. “Then we could develop new paradigms.”

Oat and soy milks are planet friendly, but not as nutritious as cow milk

If you’ve got milk, you’ve got options. You can lighten your coffee or soak a cookie, ferment a cheese or bestow yourself a mustache. You can float some cereal or mix a shake. Replacing such a versatile substance is a tall order. And yet there is ample reason to pursue alternatives.

Producing a single liter of cow’s milk requires about 9 square meters of land and about 630 liters of water. That’s the area of two king-size beds and the volume of 10.5 beer kegs. The process of making a liter of dairy milk also generates about 3.2 kilograms of greenhouse gases.

With milk’s global popularity, those costs are enormous. In 2015, the dairy sector generated 1.7 billion metric tons of greenhouse gases, roughly 3 percent of human-related greenhouse gas emissions, according to the Food and Agriculture Organization of the United Nations.

Making plant-based milks — including oat, almond, rice and soy — generates about one-third of the greenhouse gases and uses far less land and water than producing dairy milk, according to a 2018 report in Science.

Fueled by a growing base of environmentally conscious consumers, a slew of plant-based milks has entered the market. According to SPINS, a company that collects data on natural and organic products, plant-based milk sales in the United States totaled $2.6 billion in 2021. That’s a 33 percent increase in dollar sales since 2019. “Food industries have realized that consumers… want change,” says food scientist David McClements of the University of Massachusetts Amherst.

Although plant milks by and large are better for the environment and the climate, they don’t provide the same nutrition. As the iconic dairy campaign of the 1980s said, “Milk, it does a body good.” The creamy beverage contains 13 essential nutrients, including muscle-building protein, immune-boosting vitamin A and zinc, and bone-strengthening calcium and vitamin D. Plant-based milks tend to contain smaller amounts of these nutrients, and even when plant milks are fortified, researchers aren’t yet sure how well the body absorbs those nutrients.

Dairy is very challenging to try and replace, says Leah Bessa, chief science officer of De Novo Dairy, a biotechnology company in Cape Town, South Africa, that produces dairy proteins without the animals. “You don’t really have a good alternative that’s sustainable and has the same nutritional profile and functionality.”

Room for improvement

What even is milk?

By its classic definition, milk is a fluid that comes from the mammary gland of a female mammal. But Eva Tornberg, a food scientist at Lund University in Sweden who has developed a potato milk, prefers to focus on milk’s chemical structure. That is the essence of its nourishing nature, she says. “It’s an emulsion…many tiny oil droplets that are dispersed in water.”

That emulsion imbues milk with its signature creaminess and makes milk the ideal vehicle for transporting nutrients, McClements notes. The duality of oil and water means milk can carry both water-soluble nutrients, such as riboflavin and vitamin B12, and oil-soluble ones, such as vitamins A and D.

And with the fat content separated into a multitude of oil droplets — rather than a single layer — human digestive enzymes have a vast amount of surface area to react with. This makes the nutrients packed inside the droplets easy and quick to absorb.

Most plant-based milks are also emulsions, McClements says, so they too have the potential to serve as excellent nutrient-delivery systems. But for the most part, plant-based milk producers have focused much more on providing the right flavor and mouthfeel to appeal to consumers’ tastes, he says. “We need much more work with the nutritional aspects.”

What’s missing?

When it comes to nutrition, the closest competitor among the plant-based milks available today is probably soy milk, says Megan Lott, a registered dietitian with Healthy Eating Research, a Durham, N.C.-based program of the Robert Wood Johnson Foundation. Soy milk contains almost as much protein as cow milk and that protein is similarly complete — containing all the essential amino acids. “It’s actually approved by the USDA in child nutrition programs and school meal programs as a substitute for dairy milk,” she says.

But soy milks and other plant-based milks fall short on other important nutrients. Parents often think they can swap a cup of plant-based milk for a cup of cow’s milk and their children will be getting everything they need, Lott says. “That’s just not the case.”
Vitamin D and calcium — especially important for a growing child — are the hardest nutrients to get when dropping dairy. Most of milk’s other important components can be obtained from a healthy diet of whole grains, vegetables, fruits and lean meats, Lott says. “If you’re a parent looking to find an alternative for your child, it’s probably the calcium and vitamin D … where you should focus your decision.”

Many producers fortify plant-based milks with vitamin D and calcium to rival or exceed the level in dairy milk. But whether the body can absorb those added nutrients is another story. What consumers read on the Nutrition Facts label does not necessarily reflect how much their body will actually be able to absorb and use, Lott says.

That’s because plant-based milks may contain naturally occurring plant molecules that hinder the absorption of nutrients. For example, some plant milks, including oat and soy milks, contain phytic acid, which binds to calcium, iron and zinc and reduces the body’s absorption of these nutrients.

And adding too much of one good thing can backfire. For instance, introducing high levels of calcium into almond milk may interfere with the body’s absorption of vitamin D, McClements and colleagues reported in 2021 in the Journal of Agricultural and Food Chemistry.

More research is needed to better understand how compounds interact in plant milks and how those interactions affect nutrient absorption in the body, McClements says. Homing in on the ideal balance of ingredients will help producers of plant-based milks craft more nutritious products that taste good too, he says. “What we’re trying to do is find that sweet spot.”

50 years ago, scientists had hints of a planet beyond Pluto

There have been suggestions that our solar system might have a tenth planet…. In the April Publications of the Astronomical Society of the Pacific, a mathematician … presents what he says is “some very interesting evidence of a planet beyond Pluto.” The evidence comes from calculations of the orbit of Halley’s comet.

Update
The 1972 evidence never yielded a planet, but astronomers haven’t stopped looking — though it became a search for Planet 9 with Pluto’s 2006 switch to dwarf planet status. In the mid-2010s, scientists hypothesized that the tug of a large planet 500 to 600 times as far from the sun as Earth could explain the peculiar orbits of some objects in the solar system’s debris-filled Kuiper Belt (SN: 7/23/16, p. 7). But that evidence might not stand up to further scrutiny (SN: 3/13/21, p. 9). Researchers who used the Atacama Cosmology Telescope in Chile to scan nearly 90 percent of the Southern Hemisphere’s sky reported in December 2021 that they had no luck finding the planet.

Joggers naturally pace themselves to conserve energy even on short runs

For many recreational runners, taking a jog is a fun way to stay fit and burn calories. But it turns out an individual has a tendency to settle into the same, comfortable pace on short and long runs — and that pace is the one that minimizes their body’s energy use over a given distance.

“I was really surprised,” says Jessica Selinger, a biomechanist at Queen’s University in Kingston, Canada. “Intuitively, I would have thought people run faster at shorter distances and slow their pace at longer distances.”

Selinger and colleagues combined data from more than 4,600 runners, who went on 37,201 runs while wearing a fitness device called the Lumo Run, with lab-based physiology data. The analysis, described April 28 in Current Biology, also shows that it takes more energy for someone to run a given distance if they run faster or slower than their optimum speed.
“There is a speed that for you is going to feel the best,” Selinger says. “That speed is the one where you’re actually burning fewer calories.”
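
The underlying idea can be sketched in a few lines of code. The snippet below is only an illustration of the concept, not the study’s actual model: it assumes a made-up quadratic relationship between metabolic power and running speed, then finds the speed that minimizes energy per meter traveled, the so-called cost of transport. Every number and name in it is a hypothetical placeholder.

```python
import numpy as np

# Toy cost-of-transport calculation. The power curve below is an invented
# placeholder, not fitted to the study's wearable or treadmill data.
def metabolic_power(speed_m_per_s):
    """Energy burned per second (watts per kilogram) at a given running speed."""
    return 5.0 + 1.5 * speed_m_per_s + 0.9 * speed_m_per_s**2

speeds = np.linspace(1.5, 5.0, 500)                    # plausible jogging speeds, m/s
cost_of_transport = metabolic_power(speeds) / speeds   # joules per kg per meter

optimal_speed = speeds[np.argmin(cost_of_transport)]
print(f"Energy-optimal pace for this toy runner: {optimal_speed:.2f} m/s")
# Running faster or slower than this speed costs more energy per meter,
# which is the pattern the study reports in real runners.
```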

The runners ranged in age from 16 to 83, and had body mass indices spanning from 14.3 to 45.4. But no matter participants’ age, weight or sex — or whether they ran only a narrow range of distances or runs of varying lengths — the same pattern showed up in the data repeatedly.

Researchers have thought that running was performance-driven, says Melissa Thompson, a biomechanist at Fort Lewis College in Durango, Colo., who was not involved in the new study. This new research, she says, is “talking about preference, not performance.”

Most related research, Selinger says, has been done in university laboratories, with study subjects who are generally younger and healthier than the general population. By using wearable devices, the researchers could track many more runs, across more real-life conditions than is possible in a lab. That allowed the scientists to look at a “much broader cross section of humanity,” she says. To determine optimum energy-efficient speeds, the researchers ran treadmill tests measuring energy use at different paces in people representative of those included in the fitness tracker data.

Because the study includes a wide range of conditions and doesn’t control for things like fasting before running, it’s messier than data gathered in labs. Still, the sheer volume of real-world runs recorded by the wearable devices supports a convincing general rule about how humans run, says Rodger Kram, a physiologist at the University of Colorado Boulder not involved with the study. “I think the rule’s right.”

The results don’t apply to very long runs when fatigue starts to set in, or to race performance by elite athletes or others consciously training for speed. And a runner’s optimum pace can change over time, with training or age for instance.

There are quick tricks for those who want to speed up and go for a little more calorie burn to temporarily trump their body’s natural inclinations: Listen to upbeat music or jog alongside someone with a faster pace, Selinger says. “But it seems like your preference is actually to sink back into that optimum.”

The results match observations of optimum pacing from animals like horses and wildebeests, and also correspond to the way humans tend to walk at a speed that minimizes their individual energy use (SN: 9/10/15).

It does make sense that humans would be adapted to run at an optimum speed for minimizing energy use, says coauthor Scott Delp, a biomechanist at Stanford University. Imagine being an early human ancestor going out to hunt difficult prey. “It might be days before I get my next food,” he says. “So I want to spend the least energy en route to getting that food.”

Mom’s voice holds a special place in kids’ brains. That changes for teens

Young kids’ brains are especially tuned to their mothers’ voices. Teenagers’ brains, in their typical rebellious glory, are most decidedly not.

That conclusion, described April 28 in the Journal of Neuroscience, may seem laughably obvious to parents of teenagers, including neuroscientist Daniel Abrams of Stanford University School of Medicine. “I have two teenaged boys myself, and it’s a kind of funny result,” he says.

But the finding may reflect something much deeper than a punch line. As kids grow up and expand their social connections beyond their family, their brains need to be attuned to that growing world. “Just as an infant is tuned into a mom, adolescents have this whole other class of sounds and voices that they need to tune into,” Abrams says.
He and his colleagues scanned the brains of 7- to 16-year-olds as they heard the voices of either their mothers or unfamiliar women. To simplify the experiment down to just the sound of a voice, the words were gibberish: teebudieshawlt, keebudieshawlt and peebudieshawlt. As the children and teenagers listened, certain parts of their brains became active.

Previous experiments by Abrams and his colleagues have shown that certain regions of the brains of kids ages 7 to 12 — particularly those parts involved in detecting rewards and paying attention — respond more strongly to mom’s voice than to a voice of an unknown woman. “In adolescence, we show the exact opposite of that,” Abrams says.

In these same brain regions in teens, unfamiliar voices elicited greater responses than the voices of their own dear mothers. The shift from mother to other seems to happen between ages 13 and 14.

It’s not that these adolescent brain areas stop responding to mom, Abrams says. Rather, the unfamiliar voices become more rewarding and worthy of attention.

And that’s exactly how it should be, Abrams says. Exploring new people and situations is a hallmark of adolescence. “What we’re seeing here is just purely a reflection of this phenomenon.”

Voices can carry powerful signals. When stressed-out girls heard their moms’ voices on the phone, the girls’ stress hormones dropped, biological anthropologist Leslie Seltzer of the University of Wisconsin–Madison and colleagues found in 2011 (SN: 8/12/11). The same was not true for texts from their mothers.

The current results support the idea that the brain changes to reflect new needs that come with time and experience, Seltzer says. “As we mature, our survival depends less and less on maternal support and more on our group affiliations with peers.”

It’s not clear how universal this neural shift is. The finding might change across various mother-child relationships, including those that have different parenting styles, or even a history of neglect or abuse, Seltzer says.

So while teenagers and parents may sometimes feel frustrated by missed messages, take heart, Abrams says. “This is the way the brain is wired, and there’s a good reason for it.”

Dog breed is a surprisingly poor predictor of individual behavior

Turns out we may be unfairly stereotyping dogs.

Modern breeds are shaped around aesthetics: Chihuahuas’ batlike ears, poodles’ curly fur, dachshunds’ hot dog shape. But breeds are frequently associated with certain behaviors, too. For instance, the American Kennel Club describes border collies as “affectionate, smart, energetic” and beagles as “friendly, curious, merry.”

Now, genetic information from more than 2,000 dogs, paired with self-reported surveys from dog owners, indicates that a dog’s breed is a poor predictor of its behavior. On average, breed explains only 9 percent of the behavioral differences between individual dogs, researchers report April 28 in Science.
“Everybody was assuming that breed was predictive of behavior in dogs,” geneticist Elinor Karlsson of the University of Massachusetts Chan Medical School in Worcester said in an April 26 news briefing. But “that had never really been asked particularly well.”

Geneticists had asked the question before in different ways. One study in 2019 looked at whether genetics might explain collective variation between breeds and found that genes could explain some of the differences between, say, poodles and chihuahuas (SN: 10/1/19). But Karlsson and her colleagues wanted to learn how much breed can predict variation in individual dogs’ behavior.
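
To make the idea of variance explained concrete, here is a small, entirely hypothetical sketch (invented breed effects and noise, not the study’s data or statistical method): it simulates a behavior score for dogs of a few breeds, then asks what fraction of the total spread in scores is accounted for by breed averages alone.

```python
import numpy as np
import pandas as pd

# Entirely made-up example of "fraction of variance explained by breed."
rng = np.random.default_rng(0)
breeds = rng.choice(["border collie", "beagle", "greyhound", "mutt"], size=2000)

# Small invented breed-average shifts plus large dog-to-dog variation.
breed_effect = {"border collie": 0.5, "beagle": 0.2, "greyhound": -0.35, "mutt": -0.05}
score = np.array([breed_effect[b] for b in breeds]) + rng.normal(0.0, 1.0, size=2000)

df = pd.DataFrame({"breed": breeds, "score": score})
breed_means = df.groupby("breed")["score"].transform("mean")  # each dog's breed average
share = breed_means.var() / df["score"].var()                 # between-breed / total variance
print(f"Share of behavioral variance explained by breed in this toy data: {share:.1%}")
```

A small share here means most of the dog-to-dog differences sit within breeds rather than between them, which is the sense in which a figure like the study’s 9 percent should be read.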

To study variation at the individual level, the team needed genetic and behavior data from a lot of dogs. So they developed Darwin’s Ark, an open-source database where more than 18,000 pet owners responded to surveys about their dog’s traits and behavior. The survey asked over 100 questions about observable behaviors, which the researchers grouped into eight “behavioral factors,” including human sociability (how comfortable a dog is around humans) and biddability (how responsive it is to commands).

The researchers also collected genetic data from 2,155 purebred and mixed-breed dogs, including 1,715 dogs from Darwin’s Ark whose owners sent in dog saliva swabs. The inclusion of mixed-breed dogs, or mutts, shed light on how ancestry affects behavior while removing the purebred stereotypes that could affect the way the dog is treated — and thus behaves.

Studying mutts also makes it easier to decouple traits from one another, says Kathleen Morrill, a geneticist in Karlsson’s lab. “And that means on an individual basis, you’re going to have a better shot at mapping a gene that is actually tied to the question you’re asking.”

Then the team combined the genetic and survey data for the individual dogs to identify genes associated with particular traits. The new study revealed that the most heritable behavioral factor for dogs is human sociability, and that motor patterns — such as howling and retrieving — are generally more heritable than other behaviors.

That makes sense, Kathryn Lord, an evolutionary canine biologist in Karlsson’s lab, said during the briefing. Before modern breeding started within the last couple hundred years or so, dogs were selected for the functional roles they could provide, such as hunting or herding (SN: 4/26/17). Today, these selections still show up in breed groups. For instance, herding dogs on average tend to be more biddable and interested in toys. It also follows that, within breed groups, individual breeds are more likely to display certain motor patterns: Retrievers, unsurprisingly, are more likely to retrieve.

Still, even though breed was associated with certain behaviors, it was not a reliable predictor of individual behavior. While retrievers are less likely to howl, some owners reported that their retrievers howled often; greyhounds rarely bury toys, except some do.

The research solidifies what people have observed: Dog breeds differ on average in behavior, but there’s a lot of variation within breeds, says Adam Boyko, a canine geneticist at Cornell University who was not involved in the study.

Surprisingly, size had even less of an effect — as in, virtually none — on an individual’s behavior, despite the yappiness commonly associated with small dogs. Boyko points out that small dogs may often behave worse than large dogs, but rather than that being built into their genetics, “I think it’s that we typically tolerate poor behavior more in small dogs than we do in big dogs.”

As a dog trainer, Curtis Kelley of Pet Parent Allies in Philadelphia says that he meets a dog where it’s at. “Dogs are as individual as people are,” he says. Breed gives a loose guideline for what kind of behaviors to expect, “but it’s certainly not a hard-and-fast rule.”

If a person is looking to buy a dog, he says, they shouldn’t put too much stock in the dog’s breed. Even within a litter, dogs can show very different personalities. “A puppy will show you who they are at eight weeks old,” Kelley says. “It’s just our job to believe them.”