How some sunscreens damage coral reefs

One common chemical in sunscreen can have devastating effects on coral reefs. Now, scientists know why.

Sea anemones, which are closely related to corals, and mushroom coral can turn oxybenzone — a chemical that protects people against ultraviolet light — into a deadly toxin that’s activated by light. The good news is that algae living alongside the creatures can soak up the toxin and blunt its damage, researchers report in the May 6 Science.

But that also means that bleached coral reefs lacking algae may be more vulnerable to death. Heat-stressed corals and anemones can eject helpful algae that provide oxygen and remove waste products, which turns reefs white. Such bleaching is becoming more common as a result of climate change (SN: 4/7/20).
The findings hint that sunscreen pollution and climate change combined could be a greater threat to coral reefs and other marine habitats than either would be separately, says Craig Downs. He is a forensic ecotoxicologist with the nonprofit Haereticus Environmental Laboratory in Amherst, Va., and was not involved with the study.

Previous work suggested that oxybenzone can kill young corals or prevent adult corals from recovering after tissue damage. As a result, some places, including Hawaii and Thailand, have banned oxybenzone-containing sunscreens.

In the new study, environmental chemist Djordje Vuckovic of Stanford University and colleagues found that glass anemones (Exaiptasia pallida) exposed to oxybenzone and UV light add sugars to the chemical. While such sugary add-ons would typically help organisms detoxify chemicals and clear them from the body, the oxybenzone-sugar compound instead becomes a toxin that’s activated by light.

Anemones exposed to either simulated sunlight or oxybenzone alone survived the experiment’s full 21 days, the team showed. But all anemones exposed to the fake sunlight while submerged in water containing the chemical died within 17 days.
The anemones’ algal friends absorbed much of the oxybenzone and the toxin that the animals were exposed to in the lab. Anemones lacking algae died days sooner than anemones with algae.

In similar experiments, algae living inside mushroom coral (Discosoma sp.) also soaked up the toxin, a sign that algal relationships are a safeguard against its harmful effects. The coral’s algae seem to be particularly protective: Over eight days, no mushroom corals died after being exposed to oxybenzone and simulated sunlight.

It’s still unclear what amount of oxybenzone might be toxic to coral reefs in the wild. Another lingering question, Downs says, is whether other sunscreen components that are similar in structure to oxybenzone might have the same effects. Pinning that down could help researchers make better, reef-safe sunscreens.

Replacing some meat with microbial protein could help fight climate change

“Fungi Fridays” could save a lot of trees — and take a bite out of greenhouse gas emissions. Eating one-fifth less red meat and instead munching on microbial proteins derived from fungi or algae could cut annual deforestation in half by 2050, researchers report May 5 in Nature.

Raising cattle and other ruminants contributes methane and nitrous oxide to the atmosphere, while clearing forests for pasture lands adds carbon dioxide (SN: 4/4/22; SN: 7/13/21). So the hunt is on for environmentally friendly substitutes, such as lab-grown hamburgers and cricket farming (SN: 9/20/18; SN: 5/2/19).

Another alternative is microbial protein, made from cells cultivated in a laboratory and nurtured with glucose. Fermented fungal spores, for example, produce a dense, doughy substance called mycoprotein, while fermented algae produce spirulina, a dietary supplement.
Cell-cultured foods do require sugar from croplands, but studies show that mycoprotein production generates fewer greenhouse gas emissions and uses less land and water than raising cattle, says Florian Humpenöder, a climate modeler at the Potsdam Institute for Climate Impact Research in Germany. However, a full comparison of foods’ future environmental impacts also requires accounting for changes in population, lifestyle, dietary patterns and technology, he says.

So Humpenöder and colleagues incorporated projected socioeconomic changes into computer simulations of land use and deforestation from 2020 through 2050. Then they simulated four scenarios, substituting microbial protein for 0 percent, 20 percent, 50 percent or 80 percent of the global red meat diet by 2050.
A little substitution went a long way, the team found: Just 20 percent microbial protein substitution cut annual deforestation rates — and associated CO2 emissions — by 56 percent from 2020 to 2050.

Eating more microbial proteins could be part of a portfolio of strategies to address the climate and biodiversity crises — alongside measures to protect forests and decarbonize electricity generation, Humpenöder says.

Latin America defies cultural theories based on East-West comparisons

When Igor de Almeida moved to Japan from Brazil nine years ago, the transition should have been relatively easy. Both Japan and Brazil are collectivist nations, where people tend to value the group’s needs over their own. And research shows that immigrants adapt more easily when the home and new country’s cultures match.

But to de Almeida, a cultural psychologist now at Kyoto University, the countries’ cultural differences were striking. Japanese people prioritize formal relationships, such as those with coworkers or members of the same “bukatsu,” or extracurricular club, while Brazilian people prioritize friends in their informal social network. “Sometimes I try to find [cultural] similarities but it’s really hard,” de Almeida says.

Now, new research helps explain that disconnect. For decades, psychologists have studied how culture shapes the mind, or people’s thoughts and behaviors, by comparing Eastern and Western nations. But two research groups working independently in Latin America are finding that a cultural framework that splits the world in two is overly simplistic, obscuring nuances elsewhere in the world.

Due to differences in methodology and interpretation, the teams’ findings about how people living in the collectivist nations of Latin America think are also contradictory. And that raises a larger question: Will overarching cultural theories based on East-West divisions hold up over time, or are new theories needed?

However this debate unfolds, cultural psychologists argue that the field must expand. “If you make most of the cultures of the world … invisible,” says Vivian Vignoles, a cultural psychologist at the University of Sussex in England, “you will get all sorts of things wrong.”

Such misconceptions can jeopardize political alliances, business relationships, public health initiatives and general theories for how people find happiness and meaning. “Culture shapes what it means to be a person,” says Stanford University behavioral scientist Hazel Rose Markus. “What it means to be a person guides all of our behavior, how we think, how we feel, what motivates us [and] how we respond to other individuals and groups.”
Culture and the mind
Until four decades ago, most psychologists believed that culture had little bearing on the mind. That changed in 1980. Surveys of IBM employees taken across some 70 countries showed that attitudes toward work largely depended on workers’ home country, IBM organizational psychologist Geert Hofstede wrote in Culture’s Consequences.

Markus and Shinobu Kitayama, a cultural psychologist at the University of Michigan in Ann Arbor, subsequently fleshed out one of Hofstede’s four cultural principles: individualism versus collectivism. Culture does influence thinking, the duo claimed in a now widely cited 1991 paper in Psychological Review. By comparing people mostly in the East and West, they surmised that living in individualist countries (i.e., Western ones) led people to think independently, while living in collectivist countries (the East) led people to think interdependently.

That paper was pioneering at the time, Vignoles says. Before that, with psychological research based almost exclusively in the West, the Western mind had become the default mind. Now, “instead of being only one kind of person in the world, there [were] two kinds of persons in the world.”
Latin America: A case study
How individualism/collectivism shape the mind now undergirds the field of cross-cultural psychology. But researchers continue to treat the East and West, chiefly Japan and the United States, as prototypes, Vignoles and colleagues say.

To expand beyond that narrow lens, the team surveyed 7,279 participants in 33 nations and 55 cultures. Participants read statements such as “I prefer to turn to other people for help rather than solely rely on myself” and “I consider my happiness separate from the happiness of my friends and family.” They then rated how well each statement reflected their values on a scale from 1, for “not at all,” to 9, for “exactly.”

Analyzing those responses allowed the researchers to identify seven dimensions of independence/interdependence, including self-reliance versus dependence on others and emphasis on self-expression versus harmony. Strikingly, Latin Americans were as independent as, or more independent than, Westerners in six of the seven dimensions, the team reported in 2016 in the Journal of Experimental Psychology: General.
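(For the statistically curious: dimensions like these are typically distilled from rating data with latent-factor methods. The sketch below is purely illustrative — it uses random placeholder ratings and a hypothetical 38-item survey, not the team’s actual data or pipeline — but it shows the mechanics of pulling seven dimensions out of a respondents-by-items matrix.)

```python
# Illustrative sketch only: the ratings are random placeholders, so the
# recovered "dimensions" are meaningless here. This shows the mechanics
# of factor extraction, not the study's actual analysis.
import numpy as np
from sklearn.decomposition import FactorAnalysis

rng = np.random.default_rng(0)
# Hypothetical matrix: 7,279 respondents x 38 survey items, each rated 1 to 9.
ratings = rng.integers(1, 10, size=(7279, 38)).astype(float)

fa = FactorAnalysis(n_components=7, random_state=0)
scores = fa.fit_transform(ratings)   # each person's position on 7 latent dimensions
loadings = fa.components_            # how strongly each item taps each dimension
print(scores.shape, loadings.shape)  # (7279, 7) and (7, 38)
```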

The researchers’ subsequent analysis of four studies comprising 17,255 participants across 53 nations largely reaffirmed that surprising finding. For instance, Latin Americans are more expressive than even Westerners, Vignoles, de Almeida and colleagues report in February in Perspectives on Psychological Science. That finding violates the common view that people living in collectivist societies suppress their emotions to foster harmony, while people in individualistic countries emote as a form of self-expression.
Latin American nations are collectivist, as defined by Hofstede and others, but the people think and behave independently, the team concludes.

Kitayama’s team has a different take: Latin Americans are interdependent, just in a wholly different way than East Asians. Rather than suppressing emotions, Latin Americans tend to express positive, socially engaging emotions to communicate with others, says cultural psychologist Cristina Salvador of Duke University. That fosters interdependence, unlike the way Westerners express emotions to show their personal feelings. Westerners’ feelings can be negative or positive and often have little to do with their social surroundings — a sign of independence.

Rather than asking explicit questions, as Vignoles’ team did, Salvador, Kitayama and colleagues had more than 1,000 respondents in Chile, Colombia, Mexico, Japan and the United States reflect on various social scenarios. For instance, respondents were asked to imagine winning a prize. They then picked what emotions — such as shame, guilt, anger, friendliness or closeness to others — they would express with family and friends.

Respondents from Latin America and the United States both expressed strong emotions, Salvador reported in February at the Society for Personality and Social Psychology conference in San Francisco. But people in the United States expressed egocentric emotions, such as pride, while people in Latin America expressed emotions that emphasize connection with others.

Because Latin America’s high ethnic and linguistic diversity made communication with words difficult, people learned how to communicate in other ways, Kitayama says. “Emotion became a very important means of social communication.”

Decentering the West
More research is needed to reconcile those findings. But how should that research proceed? Though a shift to a broader framework has begun, research in cultural psychology still hinges on the East-West binary, researchers from both teams say.

Psychologists who peer review studies for acceptance into scientific journals still “want a mainstream, white, U.S. comparison sample,” Salvador says. “[Often] you need an Asian sample, as well.”

The primacy of the East and West means that psychological differences between those two regions dominate research and discussions. But both teams are expanding the scope of their research despite those challenges.

Kitayama’s team, for instance, maps out how interdependence, which it argues precedes the emergence of independence, might have morphed as it spread around the globe, in a theory paper also presented at the San Francisco conference (SN: 11/7/19). Besides the diversity that gave rise to “expressive interdependence” in Latin America, the team describes “self-effacing interdependence” in East Asia stemming from the communal nature of rice farming, “self-assertive interdependence” in Arab regions arising from nomadic life and “argumentative interdependence” in South Asia arising from the region’s central role in trade (SN: 7/14/14).
This research started with a “West and the rest” mentality, Kitayama says. His work with Markus created an “East-West and the rest” mentality. Now finally, psychologists are grappling with “the rest,” he says. “The time is really ready to expand this [research] to cover the rest of the world.”

De Almeida imagines decentering the West yet further. What if researchers had started off by comparing Japan and Brazil instead of Japan and the United States, he wonders. Instead of the current laser focus on individualism/collectivism, some other defining facet of culture would have likely risen to prominence. “I would say emotional expression, that’s the most important thing,” de Almeida says.

He sees a straightforward solution. “We could increase the number of studies not involving the United States,” he says. “Then we could develop new paradigms.”

Oat and soy milks are planet friendly, but not as nutritious as cow milk

If you’ve got milk, you’ve got options. You can lighten your coffee or soak a cookie, ferment a cheese or bestow yourself a mustache. You can float some cereal or mix a shake. Replacing such a versatile substance is a tall order. And yet there is ample reason to pursue alternatives.

Producing a single liter of cow’s milk requires about 9 square meters of land and about 630 liters of water. That’s the area of two king-size beds and the volume of 10.5 beer kegs. The process of making a liter of dairy milk also generates about 3.2 kilograms of greenhouse gases.

With milk’s global popularity, those costs are enormous. In 2015, the dairy sector generated 1.7 billion metric tons of greenhouse gases, roughly 3 percent of human-related greenhouse gas emissions, according to the Food and Agriculture Organization of the United Nations.

Making plant-based milks — including oat, almond, rice and soy — generates about one-third of the greenhouse gases and uses far less land and water than producing dairy milk, according to a 2018 report in Science.
Fueled by a growing base of environmentally conscious consumers, a slew of plant-based milks has entered the market. According to SPINS, a company that collects data on natural and organic products, $2.6 billion worth of plant-based milk was sold in the United States in 2021. That’s 33 percent growth in dollar sales since 2019. “Food industries have realized that consumers… want change,” says food scientist David McClements of the University of Massachusetts Amherst.

Although plant milks by and large are better for the environment and the climate, they don’t provide the same nutrition. As the iconic dairy campaign of the 1980s said, “Milk, it does a body good.” The creamy beverage contains 13 essential nutrients, including muscle-building protein, immune-boosting vitamin A and zinc, and bone-strengthening calcium and vitamin D. Plant-based milks tend to contain smaller amounts of these nutrients, and even when plant milks are fortified, researchers aren’t yet sure how well the body absorbs those nutrients.

Dairy is very challenging to try and replace, says Leah Bessa, chief science officer of De Novo Dairy, a biotechnology company in Cape Town, South Africa, that produces dairy proteins without the animals. “You don’t really have a good alternative that’s sustainable and has the same nutritional profile and functionality.”
Room for improvement
What even is milk?

By its classic definition, milk is a fluid that comes from the mammary gland of a female mammal. But Eva Tornberg, a food scientist at Lund University in Sweden who has developed a potato milk, prefers to focus on milk’s chemical structure. That is the essence of its nourishing nature, she says. “It’s an emulsion…many tiny oil droplets that are dispersed in water.”

That emulsion imbues milk with its signature creaminess and makes milk the ideal vehicle for transporting nutrients, McClements notes. The duality of oil and water means milk can carry both water-soluble nutrients, such as riboflavin and vitamin B12, and oil-soluble ones, such as vitamins A and D.

And with the fat content separated into a multitude of oil droplets — rather than a single layer — human digestive enzymes have a vast amount of surface area to react with. This makes the nutrients packed inside the droplets easy and quick to absorb.
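A back-of-the-envelope calculation shows why droplet size matters so much. Split a fixed fat volume $V$ into $N$ droplets of radius $r$, and the total droplet surface area is

\[ A = N \cdot 4\pi r^{2} = \frac{3V}{r}, \qquad \text{since } V = N \cdot \tfrac{4}{3}\pi r^{3}, \]

so halving the droplet radius doubles the surface that enzymes can attack. Taking, purely for illustration, about 4 cubic centimeters of fat dispersed as 1-micrometer droplets gives roughly $3 \times (4\times10^{-6}\ \mathrm{m^3}) / (10^{-6}\ \mathrm{m}) \approx 12$ square meters of surface, versus mere tens of square centimeters if the same fat floated as a single layer atop a glass.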

Most plant-based milks are also emulsions, McClements says, so they too have the potential to serve as excellent nutrient-delivery systems. But for the most part, plant-based milk producers have focused much more on providing the right flavor and mouthfeel to appeal to consumers’ tastes, he says. “We need much more work with the nutritional aspects.”

What’s missing?
When it comes to nutrition, the closest competitor among the plant-based milks available today is probably soy milk, says Megan Lott, a registered dietitian with Healthy Eating Research, a Durham, N.C.-based program of the Robert Wood Johnson Foundation. Soy milk contains almost as much protein as cow milk and that protein is similarly complete — containing all the essential amino acids. “It’s actually approved by the USDA in child nutrition programs and school meal programs as a substitute for dairy milk,” she says.

But soy milks and other plant-based milks fall short on other important nutrients. Parents often think they can swap a cup of plant-based milk for a cup of cow’s milk and their children will get everything they need, Lott says. “That’s just not the case.”
Vitamin D and calcium — especially important for a growing child — are the hardest nutrients to get when dropping dairy. Most of milk’s other important components can be obtained from a healthy diet of whole grains, vegetables, fruits and lean meats, Lott says. “If you’re a parent looking to find an alternative for your child, it’s probably the calcium and vitamin D … where you should focus your decision.”

Many producers fortify plant-based milks with vitamin D and calcium to rival or exceed the level in dairy milk. But whether the body can absorb those added nutrients is another story. What consumers read on the Nutrition Facts label does not necessarily reflect how much their body will actually be able to absorb and use, Lott says.

That’s because plant-based milks may contain naturally occurring plant molecules that hinder the absorption of nutrients. For example, some plant milks, including oat and soy milks, contain phytic acid, which binds to calcium, iron and zinc and reduces the body’s absorption of these nutrients.

And adding too much of one good thing can backfire. For instance, introducing high levels of calcium into almond milk may interfere with the body’s absorption of vitamin D, McClements and colleagues reported in 2021 in the Journal of Agricultural and Food Chemistry.

More research is needed to better understand how compounds interact in plant milks and how those interactions affect nutrient absorption in the body, McClements says. Homing in on the ideal balance of ingredients will help producers of plant-based milks craft more nutritious products that taste good too, he says. “What we’re trying to do is find that sweet spot.”

50 years ago, scientists had hints of a planet beyond Pluto

There have been suggestions that our solar system might have a tenth planet…. In the April Publications of the Astronomical Society of the Pacific, a mathematician … presents what he says is “some very interesting evidence of a planet beyond Pluto.” The evidence comes from calculations of the orbit of Halley’s comet.

Update
The 1972 evidence never yielded a planet, but astronomers haven’t stopped looking — though it became a search for Planet 9 after Pluto’s 2006 demotion to dwarf planet status. In the mid-2010s, scientists hypothesized that the tug of a large planet 500 to 600 times as far from the sun as Earth could explain the peculiar orbits of some objects in the solar system’s debris-filled Kuiper Belt (SN: 7/23/16, p. 7). But that evidence might not stand up to further scrutiny (SN: 3/13/21, p. 9). Researchers scanning nearly 90 percent of the Southern Hemisphere’s sky with the Atacama Cosmology Telescope in Chile had no luck finding the planet, they reported in December 2021.

Joggers naturally pace themselves to conserve energy even on short runs

For many recreational runners, taking a jog is a fun way to stay fit and burn calories. But it turns out an individual tends to settle into the same comfortable pace on short and long runs — and that pace is the one that minimizes the body’s energy use over a given distance.

“I was really surprised,” says Jessica Selinger, a biomechanist at Queen’s University in Kingston, Canada. “Intuitively, I would have thought people run faster at shorter distances and slow their pace at longer distances.”

Selinger and colleagues combined data from more than 4,600 runners, who went on 37,201 runs while wearing a fitness device called the Lumo Run, with lab-based physiology data. The analysis, described April 28 in Current Biology, also shows that it takes more energy for someone to run a given distance if they run faster or slower than their optimum speed.
“There is a speed that for you is going to feel the best,” Selinger says. “That speed is the one where you’re actually burning fewer calories.”
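A toy model makes that logic concrete. Suppose, purely for illustration (the study’s measured energy curves are more detailed than this), that metabolic power while running grows with speed $v$ as $\dot{E}(v) = a + b v^{2}$ for positive constants $a$ and $b$. The energy burned per unit distance is then

\[ C(v) = \frac{\dot{E}(v)}{v} = \frac{a}{v} + b v, \]

which balloons at a shuffle (the overhead term $a/v$ dominates) and at a sprint (the $bv$ term dominates), with a single minimum at $v^{*} = \sqrt{a/b}$. Running any faster or slower than that optimum costs more calories per kilometer — the pattern the runners’ data show.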

The runners ranged in age from 16 to 83, and had body mass indices spanning from 14.3 to 45.4. But no matter participants’ age, weight or sex — or whether they ran only a narrow range of distances or runs of varying lengths — the same pattern showed up in the data repeatedly.

Researchers have thought that running was performance-driven, says Melissa Thompson, a biomechanist at Fort Lewis College in Durango, Colo., who was not involved in the new study. This new research, she says, is “talking about preference, not performance.”

Most related research, Selinger says, has been done in university laboratories, with study subjects who are generally younger and healthier than the general population. By using wearable devices, the researchers could track many more runs, across more real-life conditions, than is possible in a lab. That allowed the scientists to look at a “much broader cross section of humanity,” she says. To pin down optimum energy-efficient speeds, the team ran treadmill tests measuring energy use at different paces in people representative of those in the fitness tracker data.

Because the study includes a wide range of conditions and doesn’t control for things like fasting before running, it’s messier than data gathered in labs. Still, the sheer volume of real-world runs recorded by the wearable devices supports a convincing general rule about how humans run, says Rodger Kram, a physiologist at the University of Colorado Boulder not involved with the study. “I think the rule’s right.”

The results don’t apply to very long runs when fatigue starts to set in, or to race performance by elite athletes or others consciously training for speed. And a runner’s optimum pace can change over time, with training or age for instance.

There are quick tricks for those who want to speed up and go for a little more calorie burn to temporarily trump their body’s natural inclinations: Listen to upbeat music or jog alongside someone with a faster pace, Selinger says. “But it seems like your preference is actually to sink back into that optimum.”

The results match observations of optimum pacing from animals like horses and wildebeests, and also correspond to the way humans tend to walk at a speed that minimizes their individual energy use (SN: 9/10/15).

It does make sense that humans would be adapted to run at an optimum speed for minimizing energy use, says coauthor Scott Delp, a biomechanist at Stanford University. Imagine being an early human ancestor going out to hunt difficult prey. “It might be days before I get my next food,” he says. “So I want to spend the least energy en route to getting that food.”

Mom’s voice holds a special place in kids’ brains. That changes for teens

Young kids’ brains are especially tuned to their mothers’ voices. Teenagers’ brains, in their typical rebellious glory, are most decidedly not.

That conclusion, described April 28 in the Journal of Neuroscience, may seem laughably obvious to parents of teenagers, including neuroscientist Daniel Abrams of Stanford University School of Medicine. “I have two teenaged boys myself, and it’s a kind of funny result,” he says.

But the finding may reflect something much deeper than a punch line. As kids grow up and expand their social connections beyond their family, their brains need to be attuned to that growing world. “Just as an infant is tuned into a mom, adolescents have this whole other class of sounds and voices that they need to tune into,” Abrams says.
He and his colleagues scanned the brains of 7- to 16-year-olds as they heard the voices of either their mothers or unfamiliar women. To simplify the experiment down to just the sound of a voice, the words were gibberish: teebudieshawlt, keebudieshawlt and peebudieshawlt. As the children and teenagers listened, certain parts of their brains became active.

Previous experiments by Abrams and his colleagues have shown that certain regions of the brains of kids ages 7 to 12 — particularly those parts involved in detecting rewards and paying attention — respond more strongly to mom’s voice than to a voice of an unknown woman. “In adolescence, we show the exact opposite of that,” Abrams says.

In these same brain regions in teens, unfamiliar voices elicited greater responses than the voices of their own dear mothers. The shift from mother to other seems to happen between ages 13 and 14.

It’s not that these adolescent brain areas stop responding to mom, Abrams says. Rather, the unfamiliar voices become more rewarding and worthy of attention.

And that’s exactly how it should be, Abrams says. Exploring new people and situations is a hallmark of adolescence. “What we’re seeing here is just purely a reflection of this phenomenon.”

Voices can carry powerful signals. When stressed-out girls heard their moms’ voices on the phone, the girls’ stress hormones dropped, biological anthropologist Leslie Seltzer of the University of Wisconsin–Madison and colleagues found in 2011 (SN: 8/12/11). The same was not true for texts from their mothers.

The current results support the idea that the brain changes to reflect new needs that come with time and experience, Seltzer says. “As we mature, our survival depends less and less on maternal support and more on our group affiliations with peers.”

It’s not clear how universal this neural shift is. The finding might change across various mother-child relationships, including those that have different parenting styles, or even a history of neglect or abuse, Seltzer says.

So while teenagers and parents may sometimes feel frustrated by missed messages, take heart, Abrams says. “This is the way the brain is wired, and there’s a good reason for it.”

Dog breed is a surprisingly poor predictor of individual behavior

Turns out we may be unfairly stereotyping dogs.

Modern breeds are shaped around aesthetics: Chihuahuas’ batlike ears, poodles’ curly fur, dachshunds’ hot dog shape. But breeds are frequently associated with certain behaviors, too. For instance, the American Kennel Club describes border collies as “affectionate, smart, energetic” and beagles as “friendly, curious, merry.”

Now, genetic information from more than 2,000 dogs, paired with self-reported surveys from dog owners, indicates that a dog’s breed is a poor predictor of its behavior. On average, breed explains only 9 percent of the behavioral differences between individual dogs, researchers report April 28 in Science.
“Everybody was assuming that breed was predictive of behavior in dogs,” geneticist Elinor Karlsson of the University of Massachusetts Chan Medical School in Worcester said in an April 26 news briefing. But “that had never really been asked particularly well.”
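That 9 percent is a statement about variance explained. A toy calculation — with made-up numbers, not the study’s actual model — shows how breeds can differ on average while breed still accounts for only a sliver of dog-to-dog variation:

```python
# Toy illustration with made-up numbers: modest breed-level shifts in a
# behavior score, swamped by large within-breed spread. The fraction of
# variance "explained by breed" is between-breed variance over total.
import numpy as np

rng = np.random.default_rng(1)
n_breeds, dogs_per_breed = 10, 200
breed_effects = rng.normal(0, 0.3, size=n_breeds)  # small average differences
scores = np.concatenate(
    [rng.normal(m, 1.0, dogs_per_breed) for m in breed_effects]  # big individual spread
)
labels = np.repeat(np.arange(n_breeds), dogs_per_breed)

grand_mean = scores.mean()
between = sum(
    dogs_per_breed * (scores[labels == b].mean() - grand_mean) ** 2
    for b in range(n_breeds)
)
total = ((scores - grand_mean) ** 2).sum()
# Typically prints a single-digit percentage for these settings.
print(f"share of variance explained by breed: {between / total:.1%}")
```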

Geneticists had asked the question before in different ways. One study in 2019 looked at whether genetics might explain collective variation between breeds and found that genes could explain some of the differences between, say, poodles and chihuahuas (SN: 10/1/19). But Karlsson and her colleagues wanted to learn how much breed can predict variation in individual dogs’ behavior.

To study variation at the individual level, the team needed genetic and behavior data from a lot of dogs. So they developed Darwin’s Ark, an open-source database where more than 18,000 pet owners responded to surveys about their dog’s traits and behavior. The survey asked over 100 questions about observable behaviors, which the researchers grouped into eight “behavioral factors,” including human sociability (how comfortable a dog is around humans) and biddability (how responsive it is to commands).

The researchers also collected genetic data from 2,155 purebred and mixed-breed dogs, including 1,715 dogs from Darwin’s Ark whose owners sent in dog saliva swabs. The inclusion of mixed-breed dogs, or mutts, shed light on how ancestry affects behavior while removing the purebred stereotypes that could affect the way the dog is treated — and thus behaves.

Studying mutts also makes it easier to decouple traits from one another, says Kathleen Morrill, a geneticist in Karlsson’s lab. “And that means on an individual basis, you’re going to have a better shot at mapping a gene that is actually tied to the question you’re asking.”

Then the team combined the genetic and survey data for the individual dogs to identify genes associated with particular traits. The new study revealed that the most heritable behavioral factor for dogs is human sociability, and that motor patterns — such as howling and retrieving — are generally more heritable than other behaviors.

That makes sense, Kathryn Lord, an evolutionary canine biologist in Karlsson’s lab, said during the briefing. Before modern breeding started within the last couple hundred years or so, dogs were selected for the functional roles they could provide, such as hunting or herding (SN: 4/26/17). Today, these selections still show up in breed groups. For instance, herding dogs on average tend to be more biddable and interested in toys. It also follows that, within breed groups, individual breeds are more likely to display certain motor patterns: Retrievers, unsurprisingly, are more likely to retrieve.

Still, even though breed was associated with certain behaviors, it was not a reliable predictor of individual behavior. While retrievers are less likely to howl, some owners reported that their retrievers howled often; greyhounds rarely bury toys, except some do.

The research solidifies what people have observed: Dog breeds differ on average in behavior, but there’s a lot of variation within breeds, says Adam Boyko, a canine geneticist at Cornell University who was not involved in the study.

Surprisingly, size had even less of an effect — as in, virtually none — on an individual’s behavior, despite the yappiness commonly associated with small dogs. Boyko points out that small dogs may often behave worse than large dogs, but rather than that being built into their genetics, “I think it’s that we typically tolerate poor behavior more in small dogs than we do in big dogs.”

As a dog trainer, Curtis Kelley of Pet Parent Allies in Philadelphia says that he meets a dog where it’s at. “Dogs are as individual as people are,” he says. Breed gives a loose guideline for what kind of behaviors to expect, “but it’s certainly not a hard-and-fast rule.”

If a person is looking to buy a dog, he says, they shouldn’t put too much stock in the dog’s breed. Even within a litter, dogs can show very different personalities. “A puppy will show you who they are at eight weeks old,” Kelley says. “It’s just our job to believe them.”

Pterosaurs may have had brightly colored feathers on their heads

Pterosaurs not only had feathers, but also were flamboyantly colorful, scientists say.
That could mean that feathers — and vibrant displays of mate-seeking plumage — may have originated as far back as the common ancestor of dinosaurs and pterosaurs, during the early Triassic Period around 250 million years ago.
Analyses of the partial skull of a 113-million-year-old pterosaur fossil revealed that the flying reptile had two types of feathers, paleontologist Aude Cincotta of University College Cork in Ireland and colleagues report April 20 in Nature. On its head, the creature, thought to be Tupandactylus imperator, had whiskerlike, single filaments and more complicated branching structures akin to those of modern bird feathers.
Because the fossil’s soft tissues were also well-preserved, the team identified a variety of different shapes of pigment-bearing melanosomes in both feathers and skin. Those shapes ranged from “very elongate cigar shapes to flattened platelike disks,” says Maria McNamara, a paleobiologist also at University College Cork.
Different melanosome shapes have been linked to different colors. Short, stubby spheroidal melanosomes are usually associated with yellow to reddish-brown colors, while the longer shapes are linked to darker colors, McNamara says.
The range of melanosome geometries found in this Tupandactylus specimen suggests that the creature may have been quite colorful, the team says. And that riot of color, in turn, hints that the feathers weren’t there just to keep the creatures warm, but may have been used for visual signaling, such as displays to attract a mate.
Scientists have wrangled over whether pterosaurs, Earth’s first true vertebrate flyers, had true feathers, or whether their bodies were covered in something more primitive and hairlike, dubbed “pycnofibers” (SN: 7/22/21). If the flying reptiles did have feathers, they weren’t needed for flying: Pterosaurs flew on fibrous membranes stretched from their long, tapering wing fingers to their bodies, much like modern bats (SN: 10/22/20).
In 2018, a team of researchers including McNamara reported that some of the fuzz covering two fossilized pterosaur specimens wasn’t just simple pycnofibers but showed distinct, complex, branching patterns similar to those seen in modern feathers (SN: 12/21/18). But some researchers have disputed this, saying that the branching observed in the fossils was an artifact of preservation, the appearance of branching created by overlapping fibers.
The new pterosaur specimen has “turned all that on its head,” McNamara says. In this fossil, “it’s very clear. We see feathers that are separated, isolated — you can’t say it’s an overlap of structures.” The fossilized feathers show successive branches of consistent length, extending all the way along a feather’s shaft.
And though the previous pterosaur fossils described in 2018 did have some preserved melanosomes, those were “middle-of-the-road shapes, little short ovoids,” McNamara says. In Tupandactylus, “for the first time we see melanosomes of different geometries” in the feathers. That all adds up to bright, colorful plumage.
“To me, these fossils close the case. Pterosaurs really had feathers,” says Stephen Brusatte, a vertebrate paleontologist at the University of Edinburgh who was not involved in the study. “Not only were many famous dinosaurs actually big fluffballs,” he says, but so were many pterosaurs.
Many dinosaurs, particularly theropod dinosaurs, also had colorful feathers (SN: 7/24/14). What this study shows is that feathers aren’t merely a bird thing, or even just a dinosaur thing, but that feathers evolved even deeper in time, Brusatte adds. And, as pterosaurs had wing membranes for flying, their feathers must have served other purposes, such as for insulation and communication.
It’s possible that dinosaurs and pterosaurs evolved this colorful plumage independently, McNamara says. But the shared structural complexity of the pigments in both groups of reptiles makes it “much more likely that it was derived from a common ancestor in the early Triassic.”
“That’s a big new implication,” says Michael Benton, a paleontologist at the University of Bristol in England.
Benton, a coauthor on the 2018 paper, wrote a separate commentary on the new study in the same issue of Nature. If feathers arose in a common ancestor, Benton says, that would push back the origin of feathers by about 100 million years, to roughly 250 million years ago.
And that might have other interesting implications, Benton writes. The early Triassic was a rough time for life on Earth; it was the aftermath of the mass extinction at the end of the Permian that killed off more than 90 percent of the planet’s species (SN: 12/6/18). If feathers did evolve during that time, the insulating fuzz, as well as warm-bloodedness, may have been part of an early arms race between reptilian mammal ancestors called synapsids and the pterosaur-dinosaur ancestor.

How I decided on a second COVID-19 booster shot

Booster shots against COVID-19 are once again on my mind. The U.S. Food and Drug Administration says that older people and immunocompromised people are eligible for a second booster shot provided it has been at least four months since their last shot. After I got over the shock of the FDA calling me “older” — meaning anyone 50 and up — I’ve been pondering whether to get a second booster (otherwise known as a fourth dose of an mRNA vaccine, or third dose of any vaccine if you initially got the Johnson & Johnson vaccine), and if so, when.

Peter, a 60-year-old acquaintance who asked me not to use his last name to protect his privacy, told me he’s going to get a second booster, but not now. He’s holding out for fall and hoping for a variant-specific version of the vaccine. Right now, he and his wife “are vaxxed out,” he says. And he worries that getting boosted too often could hurt his immune system’s ability to respond to new variants. “I just think it’s the law of diminishing returns,” he says.

Lots of scientists and policy makers are thinking about these issues, too. For instance, last week an advisory committee to the U.S. Centers for Disease Control and Prevention met to discuss boosters. And a bevy of studies about how well boosters work and how they affect the immune system have come out in recent weeks, some of them peer-reviewed, some still preliminary.
In making my own decision, I wanted to know several things. First, does a second booster really provide additional protection from the coronavirus beyond what I got from my first booster (SN: 11/8/21)? Second, are there downsides to getting boosted again? And finally, if I’m going to do it, when should that be and which vaccine will I get?

To get a handle on the first question, I need to know how much protection the first booster actually gave me. I’m not immunocompromised, so there’s no reason for me to get an antibody test to see if I have enough of those defenders to fend off the coronavirus. I just have to assume that my immune system is behaving normally and that what’s true for others in my age group also goes for me.

How long does COVID-19 booster immunity last?
Although the exact numbers vary, several studies have found that a third dose of the Pfizer COVID-19 vaccine gave higher levels of protection against the omicron variant than two doses did (SN: 3/1/22). But that protection wanes after a few months.

Data from Israel, where some people have been getting fourth doses for months, suggest that a second booster does indeed bolster protection, but again only temporarily. In health care workers who got a fourth dose, antibody levels shot up above levels achieved after the third jab, researchers reported April 7 in the New England Journal of Medicine. Vaccine effectiveness against infection was 30 percent with the Pfizer shot and 11 percent with Moderna. Both were better at preventing symptomatic disease, with Pfizer weighing in at 43 percent and Moderna at 31 percent. But those who did get infected produced high levels of the virus, suggesting they were contagious to others.
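(A note on what such percentages mean: in studies like these, vaccine effectiveness is generally computed as

\[ \mathrm{VE} = \left(1 - \frac{\text{infection rate in the fourth-dose group}}{\text{infection rate in the comparison group}}\right) \times 100\%, \]

so “30 percent effective” means the extra dose cut the infection rate by roughly a third relative to the comparison group — here, people who had stopped at three doses — not that 70 percent of recipients got infected.)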

In a separate study published in the same journal, researchers looking at people 60 and older found that a fourth dose gave protection against both infection and severe disease, but the protection against infection began to decline after about five weeks.

There’s more data on protection against severe illness from a study of more than 11,000 people admitted for COVID-19 to a hospital or emergency department in the Kaiser Permanente Southern California health care system. At nine months after the second shot, two doses of the Pfizer vaccine were 31 percent effective at keeping people out of the emergency room with omicron, researchers reported April 22 in Lancet Respiratory Medicine. The shots were 41 percent effective at preventing more severe illness resulting in hospitalizations from the omicron variant.

The third dose (first booster) bumped the effectiveness way up to 85 percent against hospitalization and 77 percent against ER visits, the team found. But the effect was temporary. By three months after the booster, effectiveness had declined to 55 percent against hospitalization and 53 percent against emergency room visits. The same jump in protection and quick waning from the first booster has also been noted in the United Kingdom and Qatar.

It’s been about six months since my first booster shot, so any extra protection I got from it is probably gone by now. But will a fourth dose restore protection?

The CDC calculates that for every million people 50 and older who get a fourth dose of vaccine, 830 hospitalizations, 183 intensive care unit admissions and 85 deaths could be prevented. Those are impressive numbers, but many people think efforts should be focused more on getting still-unvaccinated people immunized instead of worrying about additional shots for the already vaxxed. CDC’s numbers support that. Because unvaccinated people are so vulnerable to the coronavirus, you would need to vaccinate just 135 people aged 50 and older with two shots to prevent one hospitalization. But already vaccinated people still have quite a bit of immunity, so you’d need to vaccinate 1,205 older people with a fourth dose to prevent one hospitalization.
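Those number-needed-to-vaccinate figures drop straight out of the per-million estimates:

\[ \mathrm{NNV} = \frac{1{,}000{,}000 \text{ fourth doses}}{830 \text{ hospitalizations prevented}} \approx 1{,}205, \]

while the same arithmetic for first-time vaccination (1,000,000 / 135, or roughly 7,400 hospitalizations prevented per million people) shows why immunizing the unvaccinated delivers vastly more benefit per shot.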
How does my health factor in?
Of course, that’s data concerning populations. I and millions of others are trying to make individual calculations. “People need to make decisions based on their health condition as well as their exposure levels,” says Prakash Nagarkatti, an immunologist at the University of South Carolina School of Medicine Columbia. For instance, people whose jobs or other activities put them in contact with lots of people have higher exposure risks than someone who works at home. People who are older or have underlying health conditions, such as diabetes, obesity, high blood pressure, or lung, kidney, liver and heart diseases are all at higher risk. Those people might benefit from a shot now. “But if you’re 50 to 60 and very healthy, I don’t know if you need it right away,” Nagarkatti says. “You could maybe wait a few months.”

I’ve got some health risks that may make me more likely to get severely ill, and I have a couple of big events coming up this summer where I could get exposed to the virus. So getting boosted now to get a little bump in immunity that should last for a few months seems like a good idea. I’m also basing that decision about when to get a booster on what’s happening with the virus.

Case counts in my county are on the upswing. Nationally, BA.2.12.1, a potentially even wilier subvariant of the already slippery BA.2 omicron variant, is on the rise, making up almost 29 percent of cases in the week ending April 23. South Africa is experiencing a rise in cases caused by the omicron subvariants BA.4 and BA.5. It could be the start of a fifth wave of infection in that country, something researchers thought wouldn’t happen because so many people there were previously infected and vaccinated, Jacob Lemieux, an infectious disease researcher at Massachusetts General Hospital in Boston, said April 26 in a news briefing. “It has the flavor of, ‘Here we go, again,’” he said. “So much for the idea of herd immunity.”

Are there any downsides to a second booster?
But would I be harming my immune system if I get a booster shot now? Previous experience with vaccines against other viruses suggests repeated boosting isn’t always a good thing, Galit Alter, codirector of the Harvard University Center for AIDS Research, said in the news briefing. For instance, in one HIV vaccine trial, people were boosted six times with the same protein. Each time their antibody levels went up, but the researchers found that the immune system was making nonfunctional, unhelpful antibodies that blocked the action of good ones. So far, that hasn’t happened with the COVID-19 vaccines, but it could be important to space out doses to prevent such a scenario.

Another worry for immunologists is original antigenic sin. That has nothing to do with apples, serpents and gardens. Instead it happens when the immune system sees a virus or portion of the virus for the first time and trains memory cells to make antibodies against the virus. The next time the person encounters the virus or another version of it, instead of adding to the antibody arsenal, it continues to make only those original antibodies.

With the coronavirus, though, “what’s happened is the opposite of antigenic sin,” says Michel Nussenzweig, an immunologist and Howard Hughes Medical Institute investigator at Rockefeller University in New York City. He and colleagues examined what happens to the immune response after a third dose of vaccine, focusing especially on very long-lived immune cells called memory B cells. Those memory cells still made new antibodies when they got a third look at the vaccine, Nussenzweig and colleagues reported April 21 in Nature. That wouldn’t happen if antigenic sin were a problem. And it’s great news since an ever-growing repertoire of antibodies may help defend against future variants.

A separate Nature Immunology study found that other immune cells called T cells also learn new tricks after a booster dose or a breakthrough infection. Those and other studies seem to indicate that getting a booster isn’t bad for my immune system and could help me against future variants.

Is it okay to mix and match COVID-19 booster shots?
Now the question is, which booster to get? Mixing vaccines doesn’t seem to push the immune system toward making the unhelpful antibodies, Alter said. It “tantalizes the immune system with different flavors of vaccines, and seems to reawaken it,” she said. “Even mixing and matching mRNAs may be highly advantageous to the immune system.” She and colleagues found that the Moderna vaccine may make more IgA antibodies, the type that help protect mucous membranes in the nose, mouth and other slick surfaces in the body from infection, than the Pfizer vaccine does. Pfizer’s makes more of the IgM and IgG antibodies that circulate in the blood, data published March 29 in Science Translational Medicine show.

Since I got the Pfizer vaccine for my first three doses, it seems wise to shake things up with Moderna this time. I’ve already booked my shot.

As for Peter, after I laid out the evidence, he said he was convinced that he should probably get a shot now, as his doctor recommends. But he admitted he might just wait to see if Moderna comes out with an updated version of its vaccine.

What’s really needed, all the experts tell me, is to better understand how the immune system operates so researchers can build better vaccines with longer-lasting protection so we won’t be facing needles multiple times per year.