King Tut’s tomb still has secrets to reveal 100 years after its discovery

One hundred years ago, archaeologist Howard Carter stumbled across the tomb of ancient Egypt’s King Tutankhamun. Carter’s life was never the same. Neither was the young pharaoh’s afterlife.

Newspapers around the world immediately ran stories about Carter’s discovery of a long-lost pharaoh’s grave and the wonders it might contain, propelling the abrasive Englishman to worldwide acclaim. A boy king once consigned to ancient obscurity became the most famous of pharaohs (SN: 12/18/76).

It all started on November 4, 1922, when excavators led by Carter discovered a step cut into the valley floor of a largely unexplored part of Egypt’s Valley of the Kings. By November 23, the team had uncovered stairs leading down to a door. A hieroglyphic seal on the door identified what lay beyond: King Tutankhamun’s tomb.
Tutankhamun assumed power around 1334 B.C., when he was about 10 years old. His reign lasted nearly a decade until his untimely demise. Although a minor figure among Egyptian pharaohs, Tutankhamun is one of the few whose richly appointed burial place was found largely intact.

An unusually meticulous excavator for his time, Carter organized a 10-year project to document, conserve and remove more than 6,000 items from Tutankhamun’s four-chambered tomb. While some objects, like Tut’s gold burial mask, are now iconic, many have been in storage and out of sight for decades. But that’s about to change. About 5,400 of Tutankhamun’s well-preserved tomb furnishings are slated to go on display soon, when the new Grand Egyptian Museum, near the Pyramids of Giza, opens.

“The [Tut] burial hoard is something very unique,” Shirin Frangoul-Brückner, managing director of Atelier Brückner in Stuttgart, Germany, the firm that designed the museum’s Tutankhamun Gallery, said in an interview released by her company. Among other items, the exhibit will include the gold burial mask, musical instruments, hunting equipment, jewelry and six chariots.

Even as more of Tut’s story is poised to come to light, here are four things to know on the 100th anniversary of his tomb’s discovery.

  1. Tut may not have been frail.
    Tutankhamun has a reputation as a fragile young man who limped on a clubfoot. Some researchers suspect a weakened immune system set him up for an early death.

But “recent research suggests it’s wrong to portray Tut as a fragile pharaoh,” says Egyptologist and mummy researcher Bob Brier, who is an expert on King Tut. His new book Tutankhamun and the Tomb That Changed the World chronicles how 100 years of research have shaped both Tut’s story and archaeology itself.

Clues from Tutankhamun’s mummy and tomb items boost his physical standing, says Brier, of Long Island University in Brookville, N.Y. The young pharaoh might even have participated in warfare.

Military chariots, leather armor and archery equipment buried with Tutankhamun show that he wanted to be viewed as a hunter and a warrior, Brier says. Inscribed blocks from Tutankhamun’s temple, which were reused in later building projects before researchers identified them, portray the pharaoh leading charioteers in undated battles.

If more blocks turn up showing battle scenes marked with dates, it would suggest Tutankhamun probably participated in those conflicts, Brier says. Pharaohs typically recorded dates of actual battles depicted in their temples, though inscribed scenes may have exaggerated their heroism.

The frail story line has been built in part on the potential discovery of a deformity in Tut’s left foot, along with 130 walking sticks found in his tomb. But ancient Egyptian officials were often depicted with walking sticks as signs of authority, not infirmity, Brier says. And researchers’ opinions vary about whether images of Tut’s bones reveal serious deformities.

X-rays of Tut’s mummy taken in the 1960s show no signs of a misshapen ankle that would have caused a limp. Neither did CT images examined in 2005 by the Egyptian Mummy Project, headed by Egyptologist and former Egyptian Minister of Antiquities Zahi Hawass.

Then a 2009 reexamination of the CT images by the same researchers indicated that Tutankhamun had a left-foot deformity generally associated with walking on the ankle or the side of the foot, the team reported. The team’s radiologist, Sahar Saleem of Egypt’s Cairo University, says the CT images show that Tutankhamun experienced a mild left clubfoot, bone tissue death at the ends of two long bones that connect to the second and third left toes and a missing bone in the second left toe.
Those foot problems would have “caused the king pain when he walked or pressed his weight on his foot, and the clubfoot must have caused limping,” Saleem says. So a labored gait, rather than an appeal to royal authority, could explain the many walking sticks placed in Tutankhamun’s tomb, she says.

Brier, however, doubts that scenario. Tutankhamun’s legs appear to be symmetrical in the CT images, he says, indicating that any left-foot deformity was too mild to have made the pharaoh habitually shift his weight onto his right side as he walked.

Whether or not the boy king limped through life, the discovery and study of his mummy made it clear that he died around age 19, on the cusp of adulthood. Yet Tut’s cause of death still proves elusive.

In a 2010 analysis of DNA extracted from the pharaoh’s mummy, Hawass and colleagues contended that malaria, as well as the tissue-destroying bone disorder cited by Saleem from the CT images, hastened Tutankhamun’s death. But other researchers, including Brier, disagree with that conclusion. Further ancient DNA studies using powerful new tools for extracting and testing genetic material from the mummy could help solve that mystery.

  2. Tut’s initial obscurity led to his fame.
    After Tutankhamun’s death, ancient Egyptian officials did their best to erase historical references to him. His reign was rubbed out because his father, Akhenaten, was a “heretic king” who alienated his own people by banishing the worship of all Egyptian gods save for one.

“Akhenaten is the first monotheist recorded in history,” Brier says. Ordinary Egyptians who had prayed to hundreds of gods suddenly could worship only Aten, a sun god formerly regarded as a minor deity.

Meeting intense resistance to his banning of cherished religious practices, Akhenaten — who named himself after Aten — moved to an isolated city, Amarna, where he lived with his wife Nefertiti, six girls, one boy and around 20,000 followers. After Akhenaten died, residents of the desert outpost returned to their former homes. Egyptians reclaimed their old-time religion. Akhenaten’s son, Tutankhaten — also originally named after Aten — became king, and his name was changed to Tutankhamun in honor of Amun, the most powerful of the Egyptian gods at the time.

Later pharaohs omitted from written records any mentions of Akhenaten and Tutankhamun. Tut’s tomb was treated just as dismissively. Huts of craftsmen working on the tomb of King Ramses VI nearly 200 years after Tut’s death were built over the stairway leading down to Tutankhamun’s nearby, far smaller tomb. Limestone chips from the construction littered the site.
The huts remained in place until Carter showed up. While Carter found evidence that the boy king’s tomb had been entered twice after it was sealed, whoever had broken in took no major objects.

“Tutankhamun’s ignominy and insignificance saved him” from tomb robbers, says UCLA Egyptologist Kara Cooney.

  3. Tut’s tomb was a rushed job.
    Pharaohs usually prepared their tombs over decades, building many rooms to hold treasures and extravagant coffins. Egyptian traditions required the placement of a mummified body in a tomb about 70 days after death. That amount of time may have allowed a mummy to dry out sufficiently while retaining enough moisture to fold the arms across the body inside a coffin, Brier suspects.

Because Tutankhamun died prematurely, he had no time for extended tomb preparations. And the 70-day burial tradition gave craftsmen little time to finish crucial tomb items, many of which required a year or more to make. Those objects include a carved stone sarcophagus that encased three nested coffins, four shrines, hundreds of servant statues, a gold mask, chariots, jewelry, beds, chairs and an alabaster chest that contained four miniature gold coffins for Tutankhamun’s internal organs removed during mummification.

Evidence points to workers repurposing many objects from other people’s tombs for Tutankhamun. Even then, time ran out.

Consider the sarcophagus. Two of the four goddesses on the stone container lack fully carved jewelry; workers simply painted on the missing pieces. Pillars carved on the sarcophagus were also left unfinished.

Tutankhamun’s granite sarcophagus lid, a mismatch for the quartzite bottom, provides another clue to workers’ frenzied efforts. Something must have happened to the original quartzite lid, so workers carved a new lid from available granite and painted it to look like quartzite, Brier says.

Repairs on the new lid indicate that it broke in half during the carving process. “Tutankhamun was buried with a cracked, mismatched sarcophagus lid,” Brier says.

Tutankhamun’s sarcophagus may originally have been made for Smenkare, a mysterious individual who some researchers identify as the boy king’s half brother. Little is known about Smenkare, who possibly reigned for about a year after Akhenaten’s death, just before Tutankhamun, Brier says. But Smenkare’s tomb has not been found, leaving the sarcophagus puzzle unsolved.

Objects including the young king’s throne, three nested coffins and the shrine and tiny coffins for his internal organs also contain evidence of having originally belonged to someone else before being modified for reuse, says Harvard University archaeologist Peter Der Manuelian.
Even Tutankhamun’s tomb may not be what it appears. Egyptologist Nicholas Reeves of the University of Arizona Egyptian Expedition in Tucson has argued since 2015 that the boy king’s burial place was intended for Nefertiti. He argues that Nefertiti briefly succeeded Akhenaten as Egypt’s ruler and was the one given the title Smenkare.

No one has found Nefertiti’s tomb yet. But Reeves predicts that one wall of Tutankhamun’s burial chamber blocks access to a larger tomb where Nefertiti lies. Painted scenes and writing on that wall depict Tutankhamun performing a ritual on Nefertiti’s mummy, he asserts. And the gridded structure of those paintings was used by Egyptian artists years before Tutankhamun’s burial but not at the time of his interment.

But four of five remote sensing studies conducted inside Tutankhamun’s tomb have found no evidence of a hidden tomb. Nefertiti, like Smenkare, remains a mystery.

  4. Tut’s tomb changed archaeology and the antiquities trade.
    Carter’s stunning discovery occurred as Egyptians were protesting British colonial rule and helped fuel that movement. Among the actions that enraged Egyptian officials: Carter and his financial backer, a wealthy British aristocrat named Lord Carnarvon, sold exclusive newspaper coverage of the excavation to The Times of London. Things got so bad that Egypt’s government locked Carter out of the tomb for nearly a year, starting in early 1924.

Egyptian nationalists wanted political independence — and an end to decades of foreign adventurers bringing ancient Egyptian finds back to their home countries. Tutankhamun’s resurrected tomb pushed Egyptian authorities toward enacting laws and policies that helped to end the British colonial state and reduce the flow of antiquities out of Egypt, Brier says, though it took decades. Egypt became a nation fully independent of Britain in 1953. A 1983 law decreed that antiquities could no longer be taken out of Egypt (though those removed before 1983 are still legal to own and can be sold through auction houses).

In 1922, however, Carter and Lord Carnarvon regarded many objects in Tutankhamun’s tomb as theirs for the taking, Brier says. That was the way that Valley of the Kings excavations had worked for the previous 50 years, in a system that divided finds equally between Cairo’s Egyptian Museum and an expedition’s home institution. Taking personal mementos was also common.

Evidence of Carter’s casual pocketing of various artifacts while painstakingly clearing the boy king’s tomb continues to emerge. “Carter didn’t sell what he took,” Brier says. “But he felt he had a right to take certain items as the tomb’s excavator.”
Recently recovered letters of English Egyptologist Alan Gardiner from the 1930s, described by Brier in his book, recount how Carter gave Gardiner several items from Tutankhamun’s tomb, including an ornament used as a food offering for the dead. French Egyptologist Marc Gabolde of Paul-Valéry Montpellier 3 University has tracked down beads, jewelry, a headdress fragment and other items taken from Tutankhamun’s tomb by Carter and Carnarvon.

Yet it is undeniable that one of Tutankhamun’s greatest legacies, thanks to Carter, is the benchmark his tomb’s excavation set for future digs, Brier says. Carter started his career as an artist who copied painted images on the walls of Egyptian tombs for excavators. He later learned excavation techniques in the field working with an eminent English Egyptologist, Flinders Petrie. Carter took tomb documentation to a new level, rounding up a crack team consisting of a photographer, a conservator, two draftsmen, an engineer and an authority on ancient Egyptian writing.

Their decade-long effort also made possible the new Tutankhamun exhibition at the Grand Egyptian Museum. Now, not only museum visitors but also a new generation of researchers will have unprecedented access to the pharaoh’s tomb trove.

“Most of Tutankhamun’s [tomb] objects have been given little if any study beyond what Carter was able to do,” says UCLA’s Cooney.

That won’t be true for much longer, as the most famous tomb in the Valley of the Kings enters the next stage of its public and scientific afterlife.

Here’s how polar bears might get traction on snow

Tiny “fingers” can help polar bears get a grip.

Like the rubbery nubs on the bottom of baby socks, microstructures on the bears’ paw pads offer some extra friction, scientists report November 1 in the Journal of the Royal Society Interface. The pad protrusions may keep polar bears from slipping on snow, says Ali Dhinojwala, a polymer scientist at the University of Akron in Ohio who has also studied the sticking power of gecko feet (SN: 8/9/05).
Nathaniel Orndorf, a materials scientist at Akron who focuses on ice, adhesion and friction, was interested in the work Dhinojwala’s lab did on geckos, but “we can’t really put geckos on the ice,” he says. So he turned to polar bears.

Orndorf teamed up with Dhinojwala and Austin Garner, an animal biologist now at Syracuse University in New York, and compared the paws of polar bears, brown bears, American black bears and a sun bear. All but the sun bear had paw pad bumps. But the polar bears’ bumps looked a little different. For a given diameter, their bumps tend to be taller, the team found. That extra height translates to more traction on lab-made snow, experiments with 3-D printed models of the bumps suggest.

Until now, scientists didn’t know that bump shape could make the difference between gripping and slipping, Dhinojwala says.
Polar bear paw pads are also ringed with fur and are smaller than those of other bears, the team reports, adaptations that might let the Arctic animals conserve body heat as they tread on ice. Smaller pads generally mean less real estate for grabbing the ground. So extra-grippy pads could help polar bears make the most of what they’ve got, Orndorf says.

Along with bumpy pads, the team hopes to study polar bears’ fuzzy paws and short claws, which might also give the animals a nonslip grip.

Astronomers have found the closest known black hole to Earth

The closest black hole yet found is just 1,560 light-years from Earth, a new study reports. The black hole, dubbed Gaia BH1, is about 10 times the mass of the sun and orbits a sunlike star.

Most known black holes steal and eat gas from massive companion stars. That gas forms a disk around the black hole and glows brightly in X-rays. But hungry black holes are not the most common ones in our galaxy. Far more numerous are the tranquil black holes that are not mid-meal, which astronomers have dreamed of finding for decades. Previous claims of finding such black holes have so far not held up (SN: 5/6/20; SN: 3/11/22).
So astrophysicist Kareem El-Badry and colleagues turned to newly released data from the Gaia spacecraft, which precisely maps the positions of billions of stars (SN: 6/13/22). A star orbiting a black hole at a safe distance won’t get eaten, but it will be pulled back and forth by the black hole’s gravity. Astronomers can detect the star’s motion and deduce the black hole’s presence.
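
To get a feel for that deduction step, here is a rough sketch in code of the standard binary mass function, which turns an orbital period and the star’s velocity swing into a lower limit on the unseen companion’s mass. The input numbers are illustrative placeholders, not Gaia BH1’s published measurements.

```python
# Illustrative sketch (not the study's pipeline): the binary mass function
# turns an orbital period P and a radial-velocity semi-amplitude K into a
# lower limit on the unseen companion's mass. All input values below are
# placeholders, not Gaia BH1's published measurements.
import math

G = 6.674e-11        # gravitational constant, m^3 kg^-1 s^-2
M_SUN = 1.989e30     # solar mass, kg

P = 185 * 86400      # assumed orbital period: ~185 days, in seconds
K = 65e3             # assumed velocity semi-amplitude: 65 km/s, in m/s
e = 0.45             # assumed orbital eccentricity

# f(M) = (M2 sin i)^3 / (M1 + M2)^2 = P * K^3 * (1 - e^2)^1.5 / (2 * pi * G)
f = P * K**3 * (1 - e**2) ** 1.5 / (2 * math.pi * G)
print(f"mass function: {f / M_SUN:.1f} solar masses")

# The hidden companion must be at least this massive. A dark object well above
# the ~2 solar-mass ceiling for neutron stars, orbiting a sunlike star, leaves
# a black hole as the simplest explanation.
```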

Out of hundreds of thousands of stars that looked like they were tugged by an unseen object, just one seemed like a good black hole candidate. Follow-up observations with other telescopes support the black hole idea, the team reports November 2 in Monthly Notices of the Royal Astronomical Society.

Gaia BH1 is the nearest black hole to Earth ever discovered — the next closest is around 3,200 light-years away. But it’s probably not the closest that exists, or even the closest we’ll ever find. Astronomers think there are about 100 million black holes in the Milky Way, but almost all of them are invisible. “They’re just isolated, so we can’t see them,” says El-Badry, of the Harvard-Smithsonian Center for Astrophysics in Cambridge, Mass.
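
That expectation can be made concrete with a back-of-the-envelope estimate (an illustration, not a calculation from the study): if roughly one star in a thousand near the sun has a black hole counterpart, the nearest one should sit only tens of light-years away.

```python
# Back-of-the-envelope estimate, not from the study: if the galaxy's ~1e8
# black holes are sprinkled through the disk like its ~1e11 stars, how far
# away should the nearest one be?
import math

local_star_density = 0.1            # stars per cubic parsec near the sun (approximate)
bh_per_star = 1e8 / 1e11            # about one black hole per 1,000 stars
n_bh = local_star_density * bh_per_star   # black holes per cubic parsec

# Radius of a sphere expected to contain one black hole on average
r_pc = (3 / (4 * math.pi * n_bh)) ** (1 / 3)
print(f"expected nearest black hole: ~{r_pc:.0f} parsecs (~{3.26 * r_pc:.0f} light-years)")
# Tens of light-years, in other words: far closer than Gaia BH1's 1,560
# light-years, which is why astronomers expect nearer ones await discovery.
```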

The next data release from Gaia is due out in 2025, and El-Badry expects it to bring more black hole bounty. “We think there are probably a lot that are closer,” he says. “Just finding one … suggests there are a bunch more to be found.”

Cabbage circadian clocks tick even after picking

Cabbages with jet lag are less nutritious and more vulnerable to insect pests.
Fruits and vegetables have an internal clock that can be reset by a daily cycle of light and dark, but storing produce in darkened refrigerators could disrupt this natural rhythm, researchers report June 20 in Current Biology.

Plants, even after being cropped from the stalk, are much more responsive to their external environment than we give them credit for, says Janet Braam, a plant biologist at Rice University. “When we harvest them they’re still metabolizing,” she says. “They’re still alive.”
Braam normally studies circadian rhythms in plants that are growing, but an offhand comment by her son inspired her to turn to the grocery store for new research subjects.

She and her colleagues had previously found that the plant Arabidopsis thaliana schedules production of insect-repelling chemical defenses to match caterpillar feeding peaks. These defenses include compounds called glucosinolates, which are thought to have anticancer and antimicrobial properties in addition to their caterpillar-discouraging ones.

When Braam told her son about these experiments, he joked that now he knew the best time to eat his vegetables. She realized that cabbages — which also produce glucosinolates — might have similar daily cycles even after being picked, packed and shipped.

“So we went to the grocery store, bought some cabbage and put them under dark/light cycles that were either in phase or out of phase with our insects, and then asked whether the insects could tell the difference,” says Braam.

Like Arabidopsis, the cabbage leaves had daily glucosinolate cycles if the vegetables were exposed to alternating 12-hour periods of light and dark. Caterpillars on a cycle offset by 12 hours from the cabbages’ (so the cabbages’ dawn was the caterpillars’ dusk) ate about 20 times more than did caterpillars on a schedule synchronized to their food. Caterpillars also ate twice as much cabbage if the vegetable had been kept either in constant light or constant darkness.

It’s not just cabbages that adjust daily rhythm to better fend off caterpillars; the team found similar results for spinach, zucchini, sweet potatoes, carrots and blueberries. These fruits and vegetables don’t produce glucosinolates, so they must make some other kind of defenses on a daily cycle, says Braam.

The researchers suggest that we might improve the health benefits and pest resistance of fruits and vegetables by storing them under lighting conditions that mimic day and night. But Cathie Martin, a plant biologist at the John Innes Centre in England, is skeptical. She says most postharvest vegetable losses are from fungal infections, not the insects that eat vegetables in the field. And cabbages are sometimes cold-stored for months in the dark before being sold. Cabbages lose the clock-regulated pest resistance about a week after harvesting, the new study shows.

“But maybe I’ll be proven completely wrong,” says Martin. “Maybe one day we’ll all have little LEDs in the fridge.”

How scientists are shifting their search for links between diet and dementia

The internet is rife with advice for keeping the brain sharp as we age, and much of it is focused on the foods we eat. Headlines promise that oatmeal will fight off dementia. Blueberries improve memory. Coffee can slash your risk of Alzheimer’s disease. Take fish oil. Eat more fiber. Drink red wine. Forgo alcohol. Snack on nuts. Don’t skip breakfast. But definitely don’t eat bacon.

One recent diet study got media attention, with one headline claiming, “Many people may be eating their way to dementia.” The study, published last December in Neurology, found that people who ate a diet rich in anti-inflammatory foods like fruits, vegetables, beans and tea or coffee had a lower risk of dementia than those who ate foods that boost inflammation, such as sugar, processed foods, unhealthy fats and red meat.
But the study, like most research on diet and dementia, couldn’t prove a causal link. And that’s not good enough to make recommendations that people should follow. Why has it proved such a challenge to pin down whether the foods we eat can help stave off dementia?

First, dementia, like most chronic diseases, is the result of a complex interplay of genes, lifestyle and environment that researchers don’t fully understand. Diet is just one factor. Second, nutrition research is messy. People struggle to recall the foods they’ve eaten, their diets change over time, and modifying what people eat — even as part of a research study — is exceptionally difficult.

For decades, researchers devoted little effort to trying to prevent or delay Alzheimer’s disease and other types of dementia because they thought there was no way to change the trajectory of these diseases. Dementia seemed to be the result of aging and an unlucky roll of the genetic dice.

While scientists have identified genetic variants that boost risk for dementia, researchers now know that people can cut their risk by adopting a healthier lifestyle: avoiding smoking, keeping weight and blood sugar in check, exercising, managing blood pressure and avoiding too much alcohol — the same healthy behaviors that lower the risk of many chronic diseases.

Diet is wrapped up in several of those healthy behaviors, and many studies suggest that diet may also directly play a role. But what makes for a brain-healthy diet? That’s where the research gets muddled.

Despite loads of studies aimed at dissecting the influence of nutrition on dementia, researchers can’t say much with certainty. “I don’t think there’s any question that diet influences dementia risk or a variety of other age-related diseases,” says Matt Kaeberlein, who studies aging at the University of Washington in Seattle. But “are there specific components of diet or specific nutritional strategies that are causal in that connection?” He doubts it will be that simple.

Worth trying
In the United States, an estimated 6.5 million people, the vast majority of whom are over age 65, are living with Alzheimer’s disease and related dementias. Experts expect that by 2060, as the senior population grows, nearly 14 million residents over age 65 will have Alzheimer’s disease. Despite decades of research and more than 100 drug trials, scientists have yet to find a treatment for dementia that does more than curb symptoms temporarily (SN: 7/3/21 & 7/17/21, p. 8). “Really what we need to do is try and prevent it,” says Maria Fiatarone Singh, a geriatrician at the University of Sydney.

Forty percent of dementia cases could be prevented or delayed by modifying a dozen risk factors, according to a 2020 report commissioned by the Lancet. The report doesn’t explicitly call out diet, but some researchers think it plays an important role. After years of fixating on specific foods and dietary components — things like fish oil and vitamin E supplements — many researchers in the field have started looking at dietary patterns.

That shift makes sense. “We do not have vitamin E for breakfast, vitamin C for lunch. We eat foods in combination,” says Nikolaos Scarmeas, a neurologist at National and Kapodistrian University of Athens and Columbia University. He led the study on dementia and anti-inflammatory diets published in Neurology. But a shift from supplements to a whole diet of myriad foods complicates the research. A once-daily pill is easier to swallow than a new, healthier way of eating.

Earning points
Suspecting that inflammation plays a role in dementia, many researchers posit that an anti-inflammatory diet might benefit the brain. In Scarmeas’ study, more than 1,000 older adults in Greece completed a food frequency questionnaire and earned a score based on how “inflammatory” their diet was. The lower the score, the better. For example, fatty fish, which is rich in omega-3 fatty acids, was considered an anti-inflammatory food and earned negative points. Cheese and many other dairy products, high in saturated fat, earned positive points.

During the next three years, 62 people, or 6 percent of the study participants, developed dementia. People with the highest dietary inflammation scores were three times as likely to develop dementia as those with the lowest. Scores ranged from –5.83 to 6.01. Each point increase was linked to a 21 percent rise in dementia risk.
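
For a sense of scale, treating that 21 percent figure as a multiplicative risk ratio per point (a simplifying assumption, not the study’s exact statistical model) shows how quickly score differences compound:

```python
# Simple illustration, assuming the reported 21 percent per point acts as a
# multiplicative risk ratio (a simplification of the study's statistics).
per_point_ratio = 1.21

for gap in (1, 3, 6):
    print(f"a {gap}-point higher inflammation score -> "
          f"~{per_point_ratio ** gap:.1f} times the dementia risk")
# 1 point  -> ~1.2x
# 3 points -> ~1.8x
# 6 points -> ~3.1x (about half the full -5.83-to-6.01 score range)
```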

Such epidemiological studies make connections, but they can’t prove cause and effect. Perhaps people who eat the most anti-inflammatory diets also are those least likely to develop dementia for some other reason. Maybe they have more social interactions. Or it could be, Scarmeas says, that people who eat more inflammatory diets do so because they’re already experiencing changes in their brain that lead them to consume these foods and “what we really see is the reverse causality.”

To sort all this out, researchers rely on randomized controlled trials, the gold standard for providing proof of a causal effect. But in the arena of diet and dementia, these studies have challenges.

Dementia is a disease of aging that takes decades to play out, Kaeberlein says. To show that a particular diet could reduce the risk of dementia, “it would take two-, three-, four-decade studies, which just aren’t feasible.” Many clinical trials last less than two years.

As a work-around, researchers often rely on some intermediate outcome, like changes in cognition. But even that can be hard to observe. “If you’re already relatively healthy and don’t have many risks, you might not show much difference, especially if the duration of the study is relatively short,” says Sue Radd-Vagenas, a nutrition scientist at the University of Sydney. “The thinking is if you’re older and you have more risk factors, it’s more likely we might see something in a short period of time.” Yet older adults might already have some cognitive decline, so it might be more difficult to see an effect.

Many researchers now suspect that intervening earlier will have a bigger impact. “We now know that the brain is stressed from midlife and there’s a tipping point at 65 when things go sour,” says Hussein Yassine, an Alzheimer’s researcher at the Keck School of Medicine of the University of Southern California in Los Angeles. But intervene too early, and a trial might not show any effect. Offering a healthier diet to a 50- or 60-year-old might pay off in the long run but fail to make a difference in cognition that can be measured during the relatively short length of a study.

And it’s not only the timing of the intervention that matters, but also the duration. Do you have to eat a particular diet for two decades for it to have an impact? “We’ve got a problem of timescale,” says Kaarin Anstey, a dementia researcher at the University of New South Wales in Sydney.

And then there are all the complexities that come with studying diet. “You can’t isolate it in the way you can isolate some of the other factors,” Anstey says. “It’s something that you’re exposed to all the time and over decades.”

Food as medicine?
In a clinical trial, researchers often test the effectiveness of a drug by offering half the study participants the medication and half a placebo pill. But when the treatment being tested is food, studies become much more difficult to control. First, food doesn’t come in a pill, so it’s tricky to hide whether participants are in the intervention group or the control group.

Imagine a trial designed to test whether the Mediterranean diet can help slow cognitive decline. The participants aren’t told which group they’re in, but the control group sees that they aren’t getting nuts or fish or olive oil. “What ends up happening is a lot of participants will start actively increasing the consumption of the Mediterranean diet despite being on the control arm, because that’s why they signed up,” Yassine says. “So at the end of the trial, the two groups are not very dissimilar.”

Second, we all need food to live, so a true placebo is out of the question. But what diet should the control group consume? Do you compare the diet intervention to people’s typical diets (which may differ from person to person and country to country)? Do you ask the comparison group to eat a healthy diet but avoid the food expected to provide brain benefits? (Offering them an unhealthy diet would be unethical.)

And tracking what people eat during a clinical trial can be a challenge. Many of these studies rely on food frequency questionnaires to tally up all the foods in an individual’s diet. An ongoing study is assessing the impact of the MIND diet (which combines part of the Mediterranean diet with elements of the low-salt DASH diet) on cognitive decline. Researchers track adherence to the diet by asking participants to fill out a food frequency questionnaire every six to 12 months. But many of us struggle to remember what we ate a day or two ago. So some researchers also rely on more objective measures to assess compliance. For the MIND diet assessment, researchers are also tracking biomarkers in the blood and urine — vitamins such as folate, B12 and vitamin E, plus levels of certain antioxidants.
Another difficulty is that these surveys often don’t account for variables that could be really important, like how the food was prepared and where it came from. Was the fish grilled? Fried? Slathered in butter? “Those things can matter,” says dementia researcher Nathaniel Chin of the University of Wisconsin–Madison.

Plus there are the things researchers can’t control. For example, how does the food interact with an individual’s medications and microbiome? “We know all of those factors have an interplay,” Chin says.

The few clinical trials looking at dementia and diet seem to measure different things, so it’s hard to make comparisons. In 2018, Radd-Vagenas and her colleagues looked at all the trials that had studied the impact of the Mediterranean diet on cognition. There were five at the time. “What struck me even then was how variable the interventions were,” she says. “Some of the studies didn’t even mention olive oil in their intervention. Now, how can you run a Mediterranean diet study and not mention olive oil?”

Another tricky aspect is recruitment. The kind of people who sign up for clinical trials tend to be more educated, more motivated and have healthier lifestyles. That can make differences between the intervention group and the control group difficult to spot. And if the study shows an effect, whether it will apply to the broader, more diverse population comes into question. To sum up, these studies are difficult to design, difficult to conduct and often difficult to interpret.

Kaeberlein studies aging, not dementia specifically, but he follows the research closely and acknowledges that the lack of clear answers can be frustrating. “I get the feeling of wanting to throw up your hands,” he says. But he points out that there may not be a single answer. Many diets can help people maintain a healthy weight and avoid diabetes, and thus reduce the risk of dementia. Beyond that obvious fact, he says, “it’s hard to get definitive answers.”

A better way
In July 2021, Yassine gathered with more than 30 other dementia and nutrition experts for a virtual symposium to discuss the myriad challenges and map out a path forward. The speakers noted several changes that might improve the research.

One idea is to focus on populations at high risk. For example, one clinical trial is looking at the impact of low- and high-fat diets on short-term changes in the brain in people who carry the genetic variant APOE4, a risk factor for Alzheimer’s. One small study suggested that a high-fat Western diet actually improved cognition in some individuals. Researchers hope to get clarity on that surprising result.
Another possible fix is redefining how researchers measure success. Hypertension and diabetes are both well-known risk factors for dementia. So rather than running a clinical trial that looks at whether a particular diet can affect dementia, researchers could look at the impact of diet on one of these risk factors. Plenty of studies have assessed the impact of diet on hypertension and diabetes, but Yassine knows of none launched with dementia prevention as the ultimate goal.

Yassine envisions a study that recruits participants at risk of developing dementia because of genetics or cardiovascular disease and then looks at intermediate outcomes. “For example, a high-salt diet can be associated with hypertension, and hypertension can be associated with dementia,” he says. If the study shows that the diet lowers hypertension, “we achieved our aim.” Then the study could enter a legacy period during which researchers track these individuals for another decade to determine whether the intervention influences cognition and dementia.

One way to amplify the signal in a clinical trial is to combine diet with other interventions likely to reduce the risk of dementia. The Finnish Geriatric Intervention Study to Prevent Cognitive Impairment and Disability, or FINGER, trial, which began in 2009, did just that. Researchers enrolled more than 1,200 individuals ages 60 to 77 who were at an elevated risk of developing dementia and had average or slightly impaired performance on cognition tests. Half received nutritional guidance, worked out at a gym, engaged in online brain-training games and had routine visits with a nurse to talk about managing dementia risk factors like high blood pressure and diabetes. The other half received only general health advice.

After two years, the control group had a 25 percent greater cognitive decline than the intervention group. It was the first trial, reported in the Lancet in 2015, to show that targeting multiple risk factors could slow the pace of cognitive decline.

Now researchers are testing this approach in more than 30 countries. Christy Tangney, a nutrition researcher at Rush University in Chicago, is one of the investigators on the U.S. arm of the study, enrolling 2,000 people ages 60 to 79 who have at least one dementia risk factor. The study is called POINTER, or U.S. Study to Protect Brain Health Through Lifestyle Intervention to Reduce Risk. The COVID-19 pandemic has delayed the research — organizers had to pause the trial briefly — but Tangney expects to have results in the next few years.

This kind of multi-intervention study makes sense, Chin says. “One of the reasons why things are so slow in our field is we’re trying to address a heterogeneous disease with one intervention at a time. And that’s just not going to work.” A trial that tests multiple interventions “allows for people to not be perfect,” he adds. Maybe they can’t follow the diet exactly, but they can stick to the workout program, which might have an effect on its own. The drawback in these kinds of studies, however, is that it’s impossible to tease out the contribution of each individual intervention.

Preemptive guidelines
Two major reports came out in recent years addressing dementia prevention. The first, from the World Health Organization in 2019, recommends a healthy, balanced diet for all adults, and notes that the Mediterranean diet may help people who have normal to mildly impaired cognition.

The 2020 Lancet Commission report, however, does not include diet in its list of modifiable risk factors, at least not yet. “Nutrition and dietary components are challenging to research with controversies still raging around the role of many micronutrients and health outcomes in dementia,” the report notes. The authors point out that a Mediterranean or the similar Scandinavian diet might help prevent cognitive decline in people with intact cognition, but “how long the exposure has to be or during which ages is unclear.” Neither report recommends any supplements.

Plenty of people are waiting for some kind of advice to follow. Improving how these studies are done might enable scientists to finally sort out what kinds of diets can help hold back the heartbreaking damage that comes with Alzheimer’s disease. For some people, that knowledge might be enough to create change.
“Inevitably, if you’ve had Alzheimer’s in your family, you want to know, ‘What can I do today to potentially reduce my risk?’ ” says molecular biologist Heather Snyder, vice president of medical and scientific relations at the Alzheimer’s Association.

But changing long-term dietary habits can be hard. The foods we eat aren’t just fuel; our diets represent culture and comfort and more. “Food means so much to us,” Chin says.

“Even if you found the perfect diet,” he adds, “how do you get people to agree to and actually change their habits to follow that diet?” The MIND diet, for example, suggests people eat less than one serving of cheese a week. In Wisconsin, where Chin is based, that’s a nonstarter, he says.

But it’s not just about changing individual behaviors. Radd-Vagenas and other researchers hope that if they can show the brain benefits of some of these diets in rigorous studies, policy changes might follow. For example, research shows that lifestyle changes can have a big impact on type 2 diabetes. As a result, many insurance providers now pay for coaching programs that help participants maintain healthy diet and exercise habits.

“You need to establish policies. You need to change cities, change urban design. You need to do a lot of things to enable healthier choices to become easier choices,” Radd-Vagenas says. But that takes meatier data than exist now.

How to build better ice towers for drinking water and irrigation

There’s a better way to build a glacier.

During winter in India’s mountainous Ladakh region, some farmers use pipes and sprinklers to construct building-sized cones of ice. These towering, humanmade glaciers, called ice stupas, slowly release water as they melt during the dry spring months for communities to drink or irrigate crops. But the pipes often freeze when conditions get too cold, stifling construction.

Now, preliminary results show that an automated system can erect an ice stupa while avoiding frozen pipes, using local weather data to control when and how much water is spouted. What’s more, the new system uses roughly a tenth the amount of water that the conventional method uses, researchers reported June 23 at the Frontiers in Hydrology meeting in San Juan, Puerto Rico.
“This is one of the technological steps forward that we need to get this innovative idea to the point where it’s realistic as a solution,” says glaciologist Duncan Quincey of the University of Leeds in England, who was not involved in the research. Automation could help communities build larger, longer-lasting ice stupas that provide more water during dry periods, he says.

Ice stupas emerged in 2014 as a means for communities to cope with shrinking alpine glaciers due to human-caused climate change (SN: 5/29/19). Typically, high-mountain communities in India, Kyrgyzstan and Chile pipe glacial meltwater into gravity-driven fountains that sprinkle continuously in the winter. Cold air freezes the drizzle, creating frozen cones that can store millions of liters of water.

The process is simple, though inefficient. More than 70 percent of the spouted water may flow away instead of freezing, says glaciologist Suryanarayanan Balasubramanian of the University of Fribourg in Switzerland.

So Balasubramanian and his team outfitted an ice stupa’s fountain with a computer that automatically adjusted the spout’s flow rate based on local temperatures, humidity and wind speed. Then the scientists tested the system by building two ice stupas in Guttannen, Switzerland — one using a continuously spraying fountain and one using the automated system.
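
The article doesn’t spell out the team’s control logic, but the general idea can be sketched conceptually: spray hard when cold, dry, windy air will freeze droplets quickly, and throttle back to a trickle when a hard freeze threatens the pipes. Every function name, threshold and coefficient below is a made-up placeholder, not the researchers’ published controller.

```python
# Purely conceptual sketch; not the researchers' published controller.
# All thresholds and coefficients here are invented placeholders.
def fountain_flow_rate(air_temp_c: float, wind_speed_ms: float,
                       humidity_pct: float) -> float:
    """Return a target flow rate in liters per minute (illustrative units)."""
    if air_temp_c > 0:
        return 0.0                       # too warm: sprayed water just drains away

    # Colder, windier, drier air pulls heat out of droplets faster,
    # so more of the sprayed water freezes before it can run off.
    freezing_power = (-air_temp_c) * (1 + 0.1 * wind_speed_ms) * (1 - humidity_pct / 200)

    if air_temp_c < -15:
        return 2.0                       # hard freeze: bare minimum to keep pipes from clogging
    return min(60.0, 5.0 * freezing_power)   # otherwise scale flow with freezing conditions

# Example: a calm evening at -8 deg C and 60 percent humidity
print(fountain_flow_rate(air_temp_c=-8.0, wind_speed_ms=3.0, humidity_pct=60.0))
```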

After four months, the team found that the continuously sprinkling fountain had spouted about 1,100 cubic meters of water and amassed 53 cubic meters of ice, with pipes freezing once. The automated system sprayed only around 150 cubic meters of water but formed 61 cubic meters of ice, without any frozen pipes.
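
Put another way, those volumes imply a large jump in freezing efficiency. The arithmetic below simply reworks the reported numbers with a rough ice-density correction; it is not a calculation from the study.

```python
# Freezing efficiency from the reported volumes, correcting roughly for ice
# being less dense than water (about 0.92 metric tons of water per cubic
# meter of ice).
ICE_TO_WATER = 0.92

for label, water_sprayed_m3, ice_formed_m3 in [("continuous fountain", 1100, 53),
                                               ("automated system", 150, 61)]:
    efficiency = ice_formed_m3 * ICE_TO_WATER / water_sprayed_m3
    print(f"{label}: ~{efficiency:.0%} of sprayed water retained as ice")
# continuous fountain: ~4 percent
# automated system:    ~37 percent
```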

The researchers are now trying to simplify their prototype to make it more affordable for high-mountain communities around the world. “We eventually want to reduce the cost so that it is within two months of salary of the farmers in Ladakh,” Balasubramanian says. “Around $200 to $400.”

North America’s oldest skull surgery dates to at least 3,000 years ago

A man with a hole in his forehead, who was interred in what’s now northwest Alabama between around 3,000 and 5,000 years ago, represents North America’s oldest known case of skull surgery.

Damage around the man’s oval skull opening indicates that someone scraped out that piece of bone, probably to reduce brain swelling caused by a violent attack or a serious fall, said bioarchaeologist Diana Simpson of the University of Nevada, Las Vegas. Either scenario could explain fractures and other injuries above the man’s left eye and to his left arm, leg and collarbone.

Bone regrowth on the edges of the skull opening indicates that the man lived for up to one year after surgery, Simpson estimated. She presented her analysis of the man’s remains on March 28 at a virtual session of the annual meeting of the American Association of Biological Anthropologists.
Skull surgery occurred as early as 13,000 years ago in North Africa (SN: 8/17/11). Until now, the oldest evidence of this practice in North America dated to no more than roughly 1,000 years ago.

In his prime, the new record holder likely served as a ritual practitioner or shaman. His grave included items like those found in shamans’ graves at nearby North American hunter-gatherer sites dating to between about 3,000 and 5,000 years ago. Ritual objects buried with him included sharpened bone pins and modified deer and turkey bones that may have been tattooing tools (SN: 5/25/21).

Investigators excavated the man’s grave and 162 others at the Little Bear Creek Site, a seashell-covered burial mound, in the 1940s. Simpson studied the man’s museum-held skeleton and grave items in 2018, shortly before the discoveries were returned to local Native American communities for reburial.

Here are the Top 10 times scientific imagination failed

Science, some would say, is an enterprise that should concern itself solely with cold, hard facts. Flights of imagination should be the province of philosophers and poets.

On the other hand, as Albert Einstein so astutely observed, “Imagination is more important than knowledge.” Knowledge, he said, is limited to what we know now, while “imagination embraces the entire world, stimulating progress.”

So it is with science: imagination has often been the prelude to transformative advances in knowledge, remaking humankind’s understanding of the world and enabling powerful new technologies.
And yet, while sometimes spectacularly successful, imagination has also frequently failed in ways that have slowed the revelation of nature’s secrets. Some minds, it seems, are simply incapable of imagining that there’s more to reality than what they already know.

On many occasions scientists have failed to foresee ways of testing novel ideas, ridiculing them as unverifiable and therefore unscientific. Consequently it is not too challenging to come up with enough failures of scientific imagination to compile a Top 10 list, beginning with:

  1. Atoms
    By the middle of the 19th century, most scientists believed in atoms. Chemists especially. John Dalton had shown that the simple ratios of different elements making up chemical compounds strongly implied that each element consisted of identical tiny particles. Subsequent research on the weights of those atoms made their reality pretty hard to dispute. But that didn’t deter physicist-philosopher Ernst Mach. Even as late as the beginning of the 20th century, he and a number of others insisted that atoms could not be real, as they were not accessible to the senses. Mach believed that atoms were a “mental artifice,” convenient fictions that helped in calculating the outcomes of chemical reactions. “Have you ever seen one?” he would ask.

Apart from the fallacy of defining reality as “observable,” Mach’s main failure was his inability to imagine a way that atoms could be observed. Even after Einstein proved the existence of atoms by indirect means in 1905, Mach stood his ground. He was unaware, of course, of the 20th century technologies that quantum mechanics would enable, and so did not foresee powerful new microscopes that could show actual images of atoms (and allow a certain computing company to drag them around to spell out IBM).

  2. Composition of stars
    Mach’s views were similar to those of Auguste Comte, a French philosopher who originated the idea of positivism, which denies reality to anything other than objects of sensory experience. Comte’s philosophy led (and in some cases still leads) many scientists astray. His greatest failure of imagination was an example he offered for what science could never know: the chemical composition of the stars.

Unable to imagine anybody affording a ticket on some entrepreneur’s space rocket, Comte argued in 1835 that the identity of the stars’ components would forever remain beyond human knowledge. We could study their size, shapes and movements, he said, “whereas we would never know how to study by any means their chemical composition, or their mineralogical structure,” or for that matter, their temperature, which “will necessarily always be concealed from us.”

Within a few decades, though, a newfangled technology called spectroscopy enabled astronomers to analyze the colors of light emitted by stars. And since each chemical element emits (or absorbs) precise colors (or frequencies) of light, each set of colors is like a chemical fingerprint, an infallible indicator for an element’s identity. Using a spectroscope to observe starlight therefore can reveal the chemistry of the stars, exactly what Comte thought impossible.
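
The fingerprint-matching logic is simple enough to sketch. The toy example below checks a few made-up “observed” lines against a handful of genuine laboratory wavelengths; real stellar spectroscopy fits thousands of lines with detailed physical models.

```python
# Toy spectral fingerprinting: match observed absorption lines against a few
# well-known laboratory wavelengths (in nanometers). Real analyses use
# thousands of lines and careful physical models.
REFERENCE_LINES_NM = {
    "hydrogen (Balmer series)": [656.3, 486.1, 434.0],
    "sodium (D lines)":         [589.0, 589.6],
    "helium (D3 line)":         [587.6],
}

observed_lines_nm = [656.2, 589.1, 486.2, 434.1]   # hypothetical measurements
TOLERANCE_NM = 0.3

for element, lab_lines in REFERENCE_LINES_NM.items():
    matches = [lab for lab in lab_lines
               if any(abs(obs - lab) <= TOLERANCE_NM for obs in observed_lines_nm)]
    if matches:
        print(f"{element}: matched lines near {matches} nm")
# Hydrogen and sodium show up in this made-up spectrum; helium does not.
```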

  3. Canals on Mars
    Sometimes imagination fails because of its overabundance rather than absence. In the case of the never-ending drama over the possibility of life on Mars, that planet’s famous canals turned out to be figments of overactive scientific imagination.

First “observed” in the late 19th century, the Martian canals showed up as streaks on the planet’s surface, described as canali by Italian astronomer Giovanni Schiaparelli. Canali is, however, Italian for channels, not canals. So in this case something was gained (rather than lost) in translation — the idea that Mars was inhabited. “Canals are dug,” remarked British astronomer Norman Lockyer in 1901, “ergo there were diggers.” Soon astronomers imagined an elaborate system of canals transporting water from Martian poles to thirsty metropolitan areas and agricultural centers. (Some observers even imagined seeing canals on Venus and Mercury.)
With more constrained imaginations, aided by better telescopes and translations, belief in the Martian canals eventually faded. It was merely the Martian winds blowing dust (bright) and sand (dark) around the surface in ways that occasionally made bright and dark streaks line up in a deceptive manner — to eyes attached to overly imaginative brains.

  4. Nuclear fission
    In 1934, Italian physicist Enrico Fermi bombarded uranium (atomic number 92) and other elements with neutrons, the particle discovered just two years earlier by James Chadwick. Fermi found that among the products was an unidentifiable new element. He thought he had created element 93, heavier than uranium. He could not imagine any other explanation. In 1938 Fermi was awarded the Nobel Prize in physics for demonstrating “the existence of new radioactive elements produced by neutron irradiation.”

It turned out, however, that Fermi had unwittingly demonstrated nuclear fission. His bombardment products were actually lighter, previously known elements — fragments split from the heavy uranium nucleus. Of course, the scientists later credited with discovering fission, Otto Hahn and Fritz Strassmann, didn’t understand their results either. Hahn’s former collaborator Lise Meitner was the one who explained what they’d done. Another woman, chemist Ida Noddack, had imagined the possibility of fission to explain Fermi’s results, but for some reason nobody listened to her.

  5. Detecting neutrinos
    In the 1920s, most physicists had convinced themselves that nature was built from just two basic particles: positively charged protons and negatively charged electrons. Some had, however, imagined the possibility of a particle with no electric charge. One specific proposal for such a particle came in 1930 from Austrian physicist Wolfgang Pauli. He suggested that a no-charge particle could explain a suspicious loss of energy observed in beta-particle radioactivity. Pauli’s idea was worked out mathematically by Fermi, who named the neutral particle the neutrino. Fermi’s math was then examined by physicists Hans Bethe and Rudolf Peierls, who deduced that the neutrino would zip through matter so easily that there was no imaginable way of detecting its existence (short of building a tank of liquid hydrogen 6 million billion miles wide). “There is no practically possible way of observing the neutrino,” Bethe and Peierls concluded.
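
That “tank of liquid hydrogen” figure follows from a mean-free-path estimate. The version below is an order-of-magnitude illustration, assuming an interaction cross section of about 10^-44 square centimeters, roughly the upper bound physicists had in mind in the 1930s.

```python
# Order-of-magnitude check (an illustration, not Bethe and Peierls' exact
# numbers): mean free path = 1 / (target density x cross section), with an
# assumed cross section of ~1e-44 cm^2 for a few-MeV neutrino hitting a proton.
AVOGADRO = 6.022e23
lh2_density_g_per_cm3 = 0.071                 # liquid hydrogen
protons_per_cm3 = lh2_density_g_per_cm3 * AVOGADRO / 1.008
sigma_cm2 = 1e-44                             # assumed interaction cross section

mean_free_path_cm = 1.0 / (protons_per_cm3 * sigma_cm2)
mean_free_path_miles = mean_free_path_cm / 1.609e5   # centimeters per mile
print(f"mean free path: ~{mean_free_path_miles:.0e} miles")
# ~1e16 miles, the same ballpark as the "6 million billion miles" quoted above.
# A reactor flips the problem: instead of a thicker target, it supplies an
# enormous neutrino flux, so a modest tank still registers occasional hits.
```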

But they had failed to imagine the possibility of finding a source of huge numbers of high-energy neutrinos, so that a few could be captured even if almost all escaped. No such source was known until nuclear fission reactors were invented. In the 1950s, Frederick Reines and Clyde Cowan used reactors to definitively establish the neutrino’s existence. Reines later said he sought a way to detect the neutrino precisely because everybody had told him it couldn’t be done.

  6. Nuclear energy
    Ernest Rutherford, one of the 20th century’s greatest experimental physicists, was not exactly unimaginative. He imagined the existence of the neutron a dozen years before it was discovered, and he figured out that a weird experiment conducted by his assistants had revealed that atoms contained a dense central nucleus. It was clear that the atomic nucleus packed an enormous quantity of energy, but Rutherford could imagine no way to extract that energy for practical purposes. In 1933, at a meeting of the British Association for the Advancement of Science, he noted that although the nucleus contained a lot of energy, it would also require energy to release it. Anyone saying we can exploit atomic energy “is talking moonshine,” Rutherford declared. To be fair, Rutherford qualified the moonshine remark by saying “with our present knowledge,” so in a way he perhaps was anticipating the discovery of nuclear fission a few years later. (And some historians have suggested that Rutherford did imagine the powerful release of nuclear energy, but thought it was a bad idea and wanted to discourage people from attempting it.)

  7. Age of the Earth
    Rutherford’s reputation for imagination was bolstered by his inference that radioactive matter deep underground could solve the mystery of the age of the Earth. In the mid-19th century, William Thomson (later known as Lord Kelvin) calculated the Earth’s age to be something a little more than 100 million years, and possibly much less. Geologists insisted that the Earth must be much older — perhaps billions of years — to account for the planet’s geological features.

Kelvin calculated his estimate assuming the Earth was born as a molten rocky mass that then cooled to its present temperature. But following the discovery of radioactivity at the end of the 19th century, Rutherford pointed out that it provided a new source of heat in the Earth’s interior. While giving a talk (in Kelvin’s presence), Rutherford suggested that Kelvin had basically prophesied a new source of planetary heat.

While Kelvin’s neglect of radioactivity is the standard story, a more thorough analysis shows that adding that heat to his math would not have changed his estimate very much. Rather, Kelvin’s mistake was assuming the interior to be rigid. John Perry (one of Kelvin’s former assistants) showed in 1895 that the flow of heat deep within the Earth’s interior would alter Kelvin’s calculations considerably — enough to allow the Earth to be billions of years old. It turned out that the Earth’s mantle is fluid on long time scales, which not only explains the age of the Earth, but also plate tectonics.
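
The arithmetic behind Kelvin’s number is worth seeing, because it shows why the assumptions, not the algebra, were the problem. The sketch below uses round parameter values of the sort Kelvin worked with; they are illustrative, not his exact figures.

```python
# Conduction-only cooling of a molten half-space: the surface temperature
# gradient falls as dT/dz = T0 / sqrt(pi * kappa * t), so
#     t = T0^2 / (pi * kappa * (dT/dz)^2)
# Parameter values below are round, illustrative numbers, not Kelvin's exact ones.
import math

T0 = 3900.0       # assumed initial temperature, deg C (roughly 7,000 deg F)
kappa = 1.2e-6    # thermal diffusivity of rock, m^2 per second
gradient = 0.036  # geothermal gradient, ~1 deg F per 50 feet, in deg C per meter

t_seconds = T0**2 / (math.pi * kappa * gradient**2)
print(f"conduction-only age: ~{t_seconds / 3.15e7 / 1e6:.0f} million years")
# Roughly 100 million years. A convecting (fluid) mantle keeps the surface
# gradient steep far longer than conduction alone, which is how Perry's fix
# stretched the same observations to billions of years.
```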

  8. Charge-parity violation
    Before the mid-1950s, nobody imagined that the laws of physics gave a hoot about handedness. The same laws should govern matter in action when viewed straight-on or in a mirror, just as the rules of baseball applied equally to Ted Williams and Willie Mays, not to mention Mickey Mantle. But in 1956 physicists Tsung-Dao Lee and Chen Ning Yang suggested that perfect right-left symmetry (or “parity”) might be violated by the weak nuclear force, and experiments soon confirmed their suspicion.

Restoring sanity to nature, many physicists thought, required antimatter. If you just switched left with right (mirror image), some subatomic processes exhibited a preferred handedness. But if you also replaced matter with antimatter (switching electric charge), left-right balance would be restored. In other words, reversing both charge (C) and parity (P) left nature’s behavior unchanged, a principle known as CP symmetry. CP symmetry had to be perfectly exact; otherwise nature’s laws would change if you went backward (instead of forward) in time, and nobody could imagine that.

In the early 1960s, James Cronin and Val Fitch tested CP symmetry’s perfection by studying subatomic particles called kaons and their antimatter counterparts. Kaons and antikaons both have zero charge but are not identical, because they are made from different quarks. Thanks to the quirky rules of quantum mechanics, kaons can turn into antikaons and vice versa. If CP symmetry is exact, each should turn into the other equally often. But Cronin and Fitch found that antikaons turn into kaons more often than the other way around. And that implied that nature’s laws allowed a preferred direction of time. “People didn’t want to believe it,” Cronin said in a 1999 interview. Most physicists do believe it today, but the implications of CP violation for the nature of time and other cosmic questions remain mysterious.

  9. Behaviorism versus the brain
    In the early 20th century, the dogma of behaviorism, initiated by John Watson and championed a little later by B.F. Skinner, ensnared psychologists in a paradigm that literally excised imagination from science. The brain — site of all imagination — is a “black box,” the behaviorists insisted. Rules of human psychology (mostly inferred from experiments with rats and pigeons) could be scientifically established only by observing behavior. It was scientifically meaningless to inquire into the inner workings of the brain that directed such behavior, as those workings were in principle inaccessible to human observation. In other words, activity inside the brain was deemed scientifically irrelevant because it could not be observed. “When what a person does [is] attributed to what is going on inside him,” Skinner proclaimed, “investigation is brought to an end.”

Skinner’s behaviorist BS brainwashed a generation or two of followers into thinking the brain was beyond study. But fortunately for neuroscience, some physicists foresaw methods for observing neural activity in the brain without splitting the skull open, exhibiting imagination that the behaviorists lacked. In the 1970s Michel Ter-Pogossian, Michael Phelps and colleagues developed PET (positron emission tomography) scanning technology, which uses radioactive tracers to monitor brain activity. PET scanning is now complemented by magnetic resonance imaging, based on ideas developed in the 1930s and 1940s by physicists I.I. Rabi, Edward Purcell and Felix Bloch.

  1. Gravitational waves
    Nowadays astrophysicists are all agog about gravitational waves, which can reveal all sorts of secrets about what goes on in the distant universe. All hail Einstein, whose theory of gravity — general relativity — explains the waves’ existence. But Einstein was not the first to propose the idea. In the 19th century, James Clerk Maxwell devised the math explaining electromagnetic waves, and speculated that gravity might similarly induce waves in a gravitational field. He couldn’t figure out how, though. Later other scientists, including Oliver Heaviside and Henri Poincaré, speculated about gravity waves. So the possibility of their existence certainly had been imagined.

But many physicists doubted that the waves existed, or if they did, could not imagine any way of proving it. Shortly before Einstein completed his general relativity theory, German physicist Gustav Mie declared that “the gravitational radiation emitted … by any oscillating mass particle is so extraordinarily weak that it is unthinkable ever to detect it by any means whatsoever.” Even Einstein had no idea how to detect gravitational waves, although he worked out the math describing them in a 1918 paper. In 1936 he decided that general relativity did not predict gravitational waves at all. But the paper rejecting them was simply wrong.

As it turned out, of course, gravitational waves are real and can be detected. At first they were verified indirectly, through the gradually shrinking orbit of a binary pulsar system. More recently they were detected directly by huge experiments relying on lasers. Nobody had been able to imagine detecting gravitational waves a century ago because nobody had imagined the existence of pulsars or lasers.

All these failures show how prejudice can sometimes dull the imagination. But they also show how an imagination failure can inspire the quest for a new success. And that’s why science, so often detoured by dogma, still manages somehow, on long enough time scales, to provide technological wonders and cosmic insights beyond philosophers’ and poets’ wildest imagination.

Binary stars keep masquerading as black holes

As astronomy datasets grow larger, scientists are scouring them for black holes, hoping to better understand the exotic objects. But the drive to find more black holes is leading some astronomers astray.

“You say black holes are like a needle in a haystack, but suddenly we have way more haystacks than we did before,” says astrophysicist Kareem El-Badry of the Harvard-Smithsonian Center for Astrophysics in Cambridge, Mass. “You have better chances of finding them, but you also have more opportunities to find things that look like them.”

Two more claimed black holes have turned out to be the latter: weird things that look like them. Both are actually double-star systems at never-before-seen stages in their evolution, El-Badry and his colleagues report March 24 in Monthly Notices of the Royal Astronomical Society. The key to understanding the systems is figuring out how to interpret the light coming from them, the researchers say.

In early 2021, astronomer Tharindu Jayasinghe of Ohio State University and his colleagues reported finding a star system — affectionately named the Unicorn — about 1,500 light-years from Earth that they thought held a giant red star in its senior years orbiting an invisible black hole. Some of the same researchers, including Jayasinghe, later reported a second similar system, dubbed the Giraffe, found about 12,000 light-years away.

But other researchers, including El-Badry, weren’t convinced that the systems harbored black holes. So Jayasinghe, El-Badry and others combined forces to reanalyze the data.

To verify each star system’s nature, the researchers turned to stellar spectra, the rainbows that are produced when starlight is split up into its component wavelengths. Any star’s spectrum will have lines where atoms in the stellar atmosphere have absorbed particular wavelengths of light. A slow-spinning star has very sharp lines, but a fast-spinning one has blurred and smeared lines.

“If the star spins fast enough, basically all the spectral features become almost invisible,” El-Badry says. “Normally, you detect a second star in a spectrum by looking for another set of lines,” he adds. “And that’s harder to do if a star is rapidly rotating.”
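
To illustrate the effect El-Badry describes, here is a small, purely illustrative Python sketch (not the team’s analysis code) that convolves a sharp absorption line with a rotational broadening profile. With made-up numbers, the line keeps most of its depth for a slowly spinning star but nearly vanishes for a rapidly spinning one.

```python
# Illustrative only: how fast rotation smears an absorption line.
# The line shape, depth and rotation widths below are invented for the demo.
import numpy as np

wav = np.linspace(-5.0, 5.0, 2001)                    # wavelength offset, angstroms
line = 1.0 - 0.5 * np.exp(-0.5 * (wav / 0.2) ** 2)    # sharp line, 50 percent deep

def rot_kernel(step, dlam_max):
    """Rotational broadening profile (no limb darkening), half-width dlam_max."""
    x = np.arange(-dlam_max, dlam_max + step, step)
    k = np.sqrt(np.clip(1.0 - (x / dlam_max) ** 2, 0.0, None))
    return k / k.sum()

step = wav[1] - wav[0]
slow = np.convolve(line - 1.0, rot_kernel(step, 0.3), mode="same") + 1.0
fast = np.convolve(line - 1.0, rot_kernel(step, 3.0), mode="same") + 1.0

print(f"line depth, slow rotator: {1 - slow.min():.2f}")  # still obvious
print(f"line depth, fast rotator: {1 - fast.min():.2f}")  # nearly washed out
```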

That’s why Jayasinghe and colleagues misunderstood each of these systems initially, the team found.

“The problem was that there was not just one star, but a second one that was basically hiding,” says astrophysicist Julia Bodensteiner of the European Southern Observatory in Garching, Germany, who was not involved in the new study. The second star in each system spins very fast, which makes it hard to pick out in the spectra.

What’s more, the lines in the spectrum of a star orbiting something will shift back and forth, El-Badry says. If one assumes the spectrum shows just one average, slow-spinning star in an orbit — which is what appeared to be happening in these systems at first glance — that assumption then leads to the erroneous conclusion that the star is orbiting an invisible black hole.
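
The danger in that chain of reasoning can be made concrete with the standard “mass function” astronomers compute from an orbit’s period and velocity swing. The short Python sketch below uses round, illustrative numbers (not the published values for the Unicorn or the Giraffe) to show how a modest period and velocity amplitude already imply an unseen companion of more than a solar mass, which reads as a black hole if a second star’s light goes unnoticed.

```python
# Illustrative sketch of the inference, not the values from the actual study.
# From an orbital period P and radial-velocity semi-amplitude K, the binary
# mass function f = P * K**3 / (2 * pi * G) is a strict lower limit on the
# mass of the unseen companion.
import math

G = 6.674e-11        # gravitational constant, m^3 kg^-1 s^-2
M_SUN = 1.989e30     # solar mass, kg

P = 60 * 86400       # assumed orbital period: about 60 days, in seconds
K = 65e3             # assumed velocity semi-amplitude: about 65 km/s, in m/s

f = P * K**3 / (2 * math.pi * G)
print(f"mass function ~ {f / M_SUN:.1f} solar masses")

# More than a solar mass of unseen companion looks like a black hole, unless
# a second, rapidly spinning star is hiding in the spectrum and biasing the
# interpretation.
```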

Instead, the Unicorn and Giraffe each hold two stars, caught in a never-before-seen stage of stellar evolution, the researchers found after reanalyzing the data. Both systems contain an older red giant star with a puffy atmosphere and a “subgiant,” a star on its way to that late-life stage. The subgiants are near enough to their companion red giants that they are gravitationally stealing material from them. As these subgiants accumulate more mass, they spin faster, El-Badry says, which is what made them undetectable initially.

“Everyone was looking for really interesting black holes, but what they found is really interesting binaries,” Bodensteiner says.

These are not the only systems to trick astronomers recently. What was thought to be the nearest black hole to Earth also turned out to be a pair of stars in a rarely seen stage of evolution (SN: 3/11/22).

“Of course, it’s disappointing that what we thought were black holes were actually not, but it’s part of the process,” Jayasinghe says. He and his colleagues are still looking for black holes, he says, but with a greater awareness of how pairs of interacting stars might trick them.

These dolphins may turn to corals for skin care

On her deep-sea dives, wildlife biologist Angela Ziltener of the University of Zurich often noticed Indo-Pacific bottlenose dolphins doing something intriguing. The dolphins (Tursiops aduncus) would line up to take turns brushing their bodies against corals or sea sponges lining the seafloor. After more than a decade as an “adopted” member of the pod — a status that let Ziltener get up close without disturbing the animals — she and her team may have figured out why the animals behave this way: The dolphins may use corals and sea sponges as their own private pharmacies.

The invertebrates make antibacterial compounds — as well as others with antioxidant or hormonal properties — that are probably released into the waters of the Northern Red Sea when dolphins make contact, Ziltener and colleagues report May 19 in iScience. So the rubbing could help dolphins maintain healthy skin.

Ziltener captured video showing members of the pod using corals as if they were a bath brush, swimming through to rub various parts of their bodies. Oftentimes it’s a peaceful social gathering. “It’s not like they’re fighting each other for the turn,” Ziltener says. “No, they wait and then they go through.” Other times, an individual dolphin will arrive at a patch of coral on its own.

But the dolphins won’t buff their bodies against just any corals, Ziltener says. They’re picky, primarily rubbing up against gorgonian corals (Rumphella aggregata) and leather corals (Sarcophyton sp.), as well as a kind of sea sponge (Ircinia sp.).

Ziltener and colleagues analyzed one-centimeter slices taken from wild corals and sponges. The team identified 17 compounds overall, including 10 with antibacterial or antimicrobial activity. It’s possible that as the dolphins swim through the corals, the compounds help protect the animals from skin irritations or infections, says coauthor Gertrud Morlock, an analytical chemist at Justus Liebig University Giessen in Germany.

Other animals, including chimpanzees, can self-medicate (SN: 11/3/90). Marine biologist Jeremy Kiszka of Florida International University in Miami says the new study convinces him that the dolphins are using corals and sea sponges for that purpose. But, he says, additional experiments are necessary to prove the link. Lab tests, for instance, could help identify the types of bacteria that the compounds might work against.

Ziltener agrees there’s more to be done. For instance, it’s possible that, in addition to prevention, dolphins use corals and sea sponges to treat active skin infections, she says, but the team has yet to see proof of a coral cure. Next up, though, Ziltener says, is figuring out whether dolphins prefer to rub specific body parts on specific corals in such an “underwater spa.”