The closest black hole yet found is just 1,560 light-years from Earth, a new study reports. The black hole, dubbed Gaia BH1, is about 10 times the mass of the sun and orbits a sunlike star.
Most known black holes steal and eat gas from massive companion stars. That gas forms a disk around the black hole and glows brightly in X-rays. But hungry black holes are not the most common ones in our galaxy. Far more numerous are the tranquil black holes that are not mid-meal, which astronomers have dreamed of finding for decades. Previous claims of finding such black holes have so far not held up (SN: 5/6/20; SN: 3/11/22). So astrophysicist Kareem El-Badry and colleagues turned to newly released data from the Gaia spacecraft, which precisely maps the positions of billions of stars (SN: 6/13/22). A star orbiting a black hole at a safe distance won’t get eaten, but it will be pulled back and forth by the black hole’s gravity. Astronomers can detect the star’s motion and deduce the black hole’s presence.
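That deduction comes down to Kepler's third law: the star's orbital period and the size of the orbit together pin down the total mass of the system, and whatever mass the visible star cannot account for must belong to its dark companion. Here is a minimal back-of-the-envelope sketch in Python; the period, orbit size and stellar mass below are illustrative placeholders, not the study's measured values.

```python
# Rough sketch of the mass bookkeeping behind an astrometric black hole search.
# In solar units, Kepler's third law reads M_total = a^3 / P^2
# (a = semimajor axis of the relative orbit in AU, P = period in years).
# All numbers below are hypothetical, for illustration only.

def total_mass_solar(semimajor_axis_au, period_years):
    """Total mass of a two-body system, in solar masses."""
    return semimajor_axis_au**3 / period_years**2

a_au = 1.4       # illustrative semimajor axis of the relative orbit, in AU
p_years = 0.5    # illustrative orbital period, in years
m_star = 1.0     # mass of the visible sunlike star, in solar masses

m_companion = total_mass_solar(a_au, p_years) - m_star
print(f"Unseen companion: about {m_companion:.0f} solar masses")
# A dark companion weighing several solar masses, too heavy to be a faint
# star or a neutron star, is a strong black hole candidate.
```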
Out of hundreds of thousands of stars that looked like they were tugged by an unseen object, just one seemed like a good black hole candidate. Follow-up observations with other telescopes support the black hole idea, the team reports November 2 in Monthly Notices of the Royal Astronomical Society.
Gaia BH1 is the nearest black hole to Earth ever discovered — the next closest is around 3,200 light-years away. But it’s probably not the closest that exists, or even the closest we’ll ever find. Astronomers think there are about 100 million black holes in the Milky Way, but almost all of them are invisible. “They’re just isolated, so we can’t see them,” says El-Badry, of the Harvard-Smithsonian Center for Astrophysics in Cambridge, Mass.
The next data release from Gaia is due out in 2025, and El-Badry expects it to bring more black hole bounty. “We think there are probably a lot that are closer,” he says. “Just finding one … suggests there are a bunch more to be found.”
A new 3-D map of the brain is the best thing since sliced cold cuts, at least to some neuroscientists. “It’s a remarkable tour-de-force to reconstruct an entire human brain with such accuracy,” says David Van Essen, a neuroscientist at Washington University in St. Louis.
Using a high-tech deli slicer and about 100,000 computer processors, researchers shaved a human brain into thousands of thin slivers and then digitally glued them together. The result is the most detailed brain atlas ever published. Dubbed BigBrain, the digital model has a resolution 50 times greater in each of the three spatial dimensions than currently available maps, researchers report in the June 21 Science. The difference is like zooming from a satellite view of a city down to the street level, says coauthor Alan Evans, a neuroimaging scientist at McGill University in Montreal.
BigBrain allows researchers to navigate the landscape of the human cortex, the rugged outer layer of the brain. And unlike previous maps, the tool also lets scientists burrow beneath the surface, tunnel through the brain’s hemispheres and step slice-by-slice through high-res structural data.
Around 100 years ago, neuroscientists relied on thick slabs of brain tissue to crudely chart out neural regions. More recently, imaging tools such as MRI have let researchers take a more detailed look. But even the very best MRI maps are still a little fuzzy, says Hanchuan Peng, a computational biologist at the Allen Institute for Brain Science in Seattle.
In 2010, a team of Chinese researchers constructed a digital map of the mouse brain using techniques similar to the ones that produced BigBrain. But until now, no one had done it in humans. Because the human brain is thousands of times bigger than the mouse brain, Evans and colleagues had to massively scale up slicing and computing methods. First, Katrin Amunts and colleagues at the Jülich Research Center in Germany carved the donated brain of a 65-year-old woman into 7,404 ultrathin sheets, each about the thickness of plastic wrap.
Next, researchers stained the sheets to boost contrast, took pictures of each sheet with a flatbed scanner, and then harnessed the processing power from seven supercomputing facilities across Canada to digitally stitch together the images. In all, the researchers analyzed about one terabyte, or 1,000 gigabytes, of image data. That’s about the same amount of data as 250,000 MP3 songs.
“Your laptop would choke if it tried to run a typical image-processing program to look at this dataset,” Evans says.
His team designed a software program that lets researchers dig into BigBrain’s data. Users will be able to pick up the brain, rotate it in any direction and cut through any plane they want. “It’s like a video game,” he says.
Evans hopes BigBrain will provide a digital scaffold for other researchers to layer on different kinds of brain data. Scientists could stack on information about chemical concentrations or electrophysiological signals, just as climate and traffic data can be layered onto a geographical map.
The 3-D map could also help researchers interpret data from lower-resolution brain-scanning techniques such as MRI and PET, study coauthor Karl Zilles of the Jülich Research Center said during a press briefing June 19. Overlaying images from these scans onto BigBrain might give neuroimagers a better idea of where exactly damaged tissue lies in diseased brains.
And neurosurgeons might use BigBrain to guide placement of electrodes during deep-brain stimulation for Alzheimer’s or Parkinson’s diseases, he said.
Though all human brains have largely similar architecture, Evans says, every person has subtle shape variations. As a result, he’d like to make maps of more brains for comparison.
Now that the teams have ironed out BigBrain’s technical kinks, the researchers think they can compile a map of a second brain in about a year. “The computational tools are all largely in place now,” Evans says.
The internet is rife with advice for keeping the brain sharp as we age, and much of it is focused on the foods we eat. Headlines promise that oatmeal will fight off dementia. Blueberries improve memory. Coffee can slash your risk of Alzheimer’s disease. Take fish oil. Eat more fiber. Drink red wine. Forgo alcohol. Snack on nuts. Don’t skip breakfast. But definitely don’t eat bacon.
One recent diet study got media attention, with one headline claiming, “Many people may be eating their way to dementia.” The study, published last December in Neurology, found that people who ate a diet rich in anti-inflammatory foods like fruits, vegetables, beans and tea or coffee had a lower risk of dementia than those who ate foods that boost inflammation, such as sugar, processed foods, unhealthy fats and red meat. But the study, like most research on diet and dementia, couldn’t prove a causal link. And that’s not good enough to make recommendations that people should follow. Why has it proved such a challenge to pin down whether the foods we eat can help stave off dementia?
First, dementia, like most chronic diseases, is the result of a complex interplay of genes, lifestyle and environment that researchers don’t fully understand. Diet is just one factor. Second, nutrition research is messy. People struggle to recall the foods they’ve eaten, their diets change over time, and modifying what people eat — even as part of a research study — is exceptionally difficult.
For decades, researchers devoted little effort to trying to prevent or delay Alzheimer’s disease and other types of dementia because they thought there was no way to change the trajectory of these diseases. Dementia seemed to be the result of aging and an unlucky roll of the genetic dice.
While scientists have identified genetic variants that boost risk for dementia, researchers now know that people can cut their risk by adopting a healthier lifestyle: avoiding smoking, keeping weight and blood sugar in check, exercising, managing blood pressure and avoiding too much alcohol — the same healthy behaviors that lower the risk of many chronic diseases.
Diet is wrapped up in several of those healthy behaviors, and many studies suggest that diet may also directly play a role. But what makes for a brain-healthy diet? That’s where the research gets muddled.
Despite loads of studies aimed at dissecting the influence of nutrition on dementia, researchers can’t say much with certainty. “I don’t think there’s any question that diet influences dementia risk or a variety of other age-related diseases,” says Matt Kaeberlein, who studies aging at the University of Washington in Seattle. But “are there specific components of diet or specific nutritional strategies that are causal in that connection?” He doubts it will be that simple.
Worth trying
In the United States, an estimated 6.5 million people, the vast majority of whom are over age 65, are living with Alzheimer’s disease and related dementias. Experts expect that by 2060, as the senior population grows, nearly 14 million residents over age 65 will have Alzheimer’s disease. Despite decades of research and more than 100 drug trials, scientists have yet to find a treatment for dementia that does more than curb symptoms temporarily (SN: 7/3/21 & 7/17/21, p. 8). “Really what we need to do is try and prevent it,” says Maria Fiatarone Singh, a geriatrician at the University of Sydney.
Forty percent of dementia cases could be prevented or delayed by modifying a dozen risk factors, according to a 2020 report commissioned by the Lancet. The report doesn’t explicitly call out diet, but some researchers think it plays an important role. After years of fixating on specific foods and dietary components — things like fish oil and vitamin E supplements — many researchers in the field have started looking at dietary patterns.
That shift makes sense. “We do not have vitamin E for breakfast, vitamin C for lunch. We eat foods in combination,” says Nikolaos Scarmeas, a neurologist at National and Kapodistrian University of Athens and Columbia University. He led the study on dementia and anti-inflammatory diets published in Neurology. But a shift from supplements to a whole diet of myriad foods complicates the research. A once-daily pill is easier to swallow than a new, healthier way of eating.

Earning points
Suspecting that inflammation plays a role in dementia, many researchers posit that an anti-inflammatory diet might benefit the brain. In Scarmeas’ study, more than 1,000 older adults in Greece completed a food frequency questionnaire and earned a score based on how “inflammatory” their diet was. The lower the score, the better. For example, fatty fish, which is rich in omega-3 fatty acids, was considered an anti-inflammatory food and earned negative points. Cheese and many other dairy products, high in saturated fat, earned positive points.
During the next three years, 62 people, or 6 percent of the study participants, developed dementia. People with the highest dietary inflammation scores were three times as likely to develop dementia as those with the lowest. Scores ranged from –5.83 to 6.01. Each point increase was linked to a 21 percent rise in dementia risk.
Such epidemiological studies make connections, but they can’t prove cause and effect. Perhaps people who eat the most anti-inflammatory diets also are those least likely to develop dementia for some other reason. Maybe they have more social interactions. Or it could be, Scarmeas says, that people who eat more inflammatory diets do so because they’re already experiencing changes in their brain that lead them to consume these foods and “what we really see is the reverse causality.”
To sort all this out, researchers rely on randomized controlled trials, the gold standard for providing proof of a causal effect. But in the arena of diet and dementia, these studies have challenges.
Dementia is a disease of aging that takes decades to play out, Kaeberlein says. To show that a particular diet could reduce the risk of dementia, “it would take two-, three-, four-decade studies, which just aren’t feasible.” Many clinical trials last less than two years.
As a work-around, researchers often rely on some intermediate outcome, like changes in cognition. But even that can be hard to observe. “If you’re already relatively healthy and don’t have many risks, you might not show much difference, especially if the duration of the study is relatively short,” says Sue Radd-Vagenas, a nutrition scientist at the University of Sydney. “The thinking is if you’re older and you have more risk factors, it’s more likely we might see something in a short period of time.” Yet older adults might already have some cognitive decline, so it might be more difficult to see an effect.
Many researchers now suspect that intervening earlier will have a bigger impact. “We now know that the brain is stressed from midlife and there’s a tipping point at 65 when things go sour,” says Hussein Yassine, an Alzheimer’s researcher at the Keck School of Medicine of the University of Southern California in Los Angeles. But intervene too early, and a trial might not show any effect. Offering a healthier diet to a 50- or 60-year-old might pay off in the long run but fail to make a difference in cognition that can be measured during the relatively short length of a study.
And it’s not only the timing of the intervention that matters, but also the duration. Do you have to eat a particular diet for two decades for it to have an impact? “We’ve got a problem of timescale,” says Kaarin Anstey, a dementia researcher at the University of New South Wales in Sydney.
And then there are all the complexities that come with studying diet. “You can’t isolate it in the way you can isolate some of the other factors,” Anstey says. “It’s something that you’re exposed to all the time and over decades.”
Food as medicine?
In a clinical trial, researchers often test the effectiveness of a drug by offering half the study participants the medication and half a placebo pill. But when the treatment being tested is food, studies become much more difficult to control. First, food doesn’t come in a pill, so it’s tricky to hide whether participants are in the intervention group or the control group.
Imagine a trial designed to test whether the Mediterranean diet can help slow cognitive decline. The participants aren’t told which group they’re in, but the control group sees that they aren’t getting nuts or fish or olive oil. “What ends up happening is a lot of participants will start actively increasing the consumption of the Mediterranean diet despite being on the control arm, because that’s why they signed up,” Yassine says. “So at the end of the trial, the two groups are not very dissimilar.”
Second, we all need food to live, so a true placebo is out of the question. But what diet should the control group consume? Do you compare the diet intervention to people’s typical diets (which may differ from person to person and country to country)? Do you ask the comparison group to eat a healthy diet but avoid the food expected to provide brain benefits? (Offering them an unhealthy diet would be unethical.)
And tracking what people eat during a clinical trial can be a challenge. Many of these studies rely on food frequency questionnaires to tally up all the foods in an individual’s diet. An ongoing study is assessing the impact of the MIND diet (which combines part of the Mediterranean diet with elements of the low-salt DASH diet) on cognitive decline. Researchers track adherence to the diet by asking participants to fill out a food frequency questionnaire every six to 12 months.

But many of us struggle to remember what we ate a day or two ago. So some researchers also rely on more objective measures to assess compliance. For the MIND diet assessment, researchers are also tracking biomarkers in the blood and urine — vitamins such as folate, B12 and vitamin E, plus levels of certain antioxidants.

Another difficulty is that these surveys often don’t account for variables that could be really important, like how the food was prepared and where it came from. Was the fish grilled? Fried? Slathered in butter? “Those things can matter,” says dementia researcher Nathaniel Chin of the University of Wisconsin–Madison.
Plus there are the things researchers can’t control. For example, how does the food interact with an individual’s medications and microbiome? “We know all of those factors have an interplay,” Chin says.
The few clinical trials looking at dementia and diet seem to measure different things, so it’s hard to make comparisons. In 2018, Radd-Vagenas and her colleagues looked at all the trials that had studied the impact of the Mediterranean diet on cognition. There were five at the time. “What struck me even then was how variable the interventions were,” she says. “Some of the studies didn’t even mention olive oil in their intervention. Now, how can you run a Mediterranean diet study and not mention olive oil?”
Another tricky aspect is recruitment. The kind of people who sign up for clinical trials tend to be more educated, more motivated and have healthier lifestyles. That can make differences between the intervention group and the control group difficult to spot. And if the study shows an effect, whether it will apply to the broader, more diverse population comes into question. To sum up, these studies are difficult to design, difficult to conduct and often difficult to interpret.
Kaeberlein studies aging, not dementia specifically, but he follows the research closely and acknowledges that the lack of clear answers can be frustrating. “I get the feeling of wanting to throw up your hands,” he says. But he points out that there may not be a single answer. Many diets can help people maintain a healthy weight and avoid diabetes, and thus reduce the risk of dementia. Beyond that obvious fact, he says, “it’s hard to get definitive answers.”
A better way
In July 2021, Yassine gathered with more than 30 other dementia and nutrition experts for a virtual symposium to discuss the myriad challenges and map out a path forward. The speakers noted several changes that might improve the research.
One idea is to focus on populations at high risk. For example, one clinical trial is looking at the impact of low- and high-fat diets on short-term changes in the brain in people who carry the genetic variant APOE4, a risk factor for Alzheimer’s. One small study suggested that a high-fat Western diet actually improved cognition in some individuals. Researchers hope to get clarity on that surprising result.

Another possible fix is redefining how researchers measure success. Hypertension and diabetes are both well-known risk factors for dementia. So rather than running a clinical trial that looks at whether a particular diet can affect dementia, researchers could look at the impact of diet on one of these risk factors. Plenty of studies have assessed the impact of diet on hypertension and diabetes, but Yassine knows of none launched with dementia prevention as the ultimate goal.
Yassine envisions a study that recruits participants at risk of developing dementia because of genetics or cardiovascular disease and then looks at intermediate outcomes. “For example, a high-salt diet can be associated with hypertension, and hypertension can be associated with dementia,” he says. If the study shows that the diet lowers hypertension, “we achieved our aim.” Then the study could enter a legacy period during which researchers track these individuals for another decade to determine whether the intervention influences cognition and dementia.
One way to amplify the signal in a clinical trial is to combine diet with other interventions likely to reduce the risk of dementia. The Finnish Geriatric Intervention Study to Prevent Cognitive Impairment and Disability, or FINGER, trial, which began in 2009, did just that. Researchers enrolled more than 1,200 individuals ages 60 to 77 who were at an elevated risk of developing dementia and had average or slightly impaired performance on cognition tests. Half received nutritional guidance, worked out at a gym, engaged in online brain-training games and had routine visits with a nurse to talk about managing dementia risk factors like high blood pressure and diabetes. The other half received only general health advice.
After two years, the control group had a 25 percent greater cognitive decline than the intervention group. It was the first trial, reported in the Lancet in 2015, to show that targeting multiple risk factors could slow the pace of cognitive decline.
Now researchers are testing this approach in more than 30 countries. Christy Tangney, a nutrition researcher at Rush University in Chicago, is one of the investigators on the U.S. arm of the study, enrolling 2,000 people ages 60 to 79 who have at least one dementia risk factor. The study is called POINTER, or U.S. Study to Protect Brain Health Through Lifestyle Intervention to Reduce Risk. The COVID-19 pandemic has delayed the research — organizers had to pause the trial briefly — but Tangney expects to have results in the next few years.
This kind of multi-intervention study makes sense, Chin says. “One of the reasons why things are so slow in our field is we’re trying to address a heterogeneous disease with one intervention at a time. And that’s just not going to work.” A trial that tests multiple interventions “allows for people to not be perfect,” he adds. Maybe they can’t follow the diet exactly, but they can stick to the workout program, which might have an effect on its own. The drawback in these kinds of studies, however, is that it’s impossible to tease out the contribution of each individual intervention.

Preemptive guidelines
Two major reports came out in recent years addressing dementia prevention. The first, from the World Health Organization in 2019, recommends a healthy, balanced diet for all adults, and notes that the Mediterranean diet may help people who have normal to mildly impaired cognition.
The 2020 Lancet Commission report, however, does not include diet in its list of modifiable risk factors, at least not yet. “Nutrition and dietary components are challenging to research with controversies still raging around the role of many micronutrients and health outcomes in dementia,” the report notes. The authors point out that a Mediterranean or the similar Scandinavian diet might help prevent cognitive decline in people with intact cognition, but “how long the exposure has to be or during which ages is unclear.” Neither report recommends any supplements.
Plenty of people are waiting for some kind of advice to follow. Improving how these studies are done might enable scientists to finally sort out what kinds of diets can help hold back the heartbreaking damage that comes with Alzheimer’s disease. For some people, that knowledge might be enough to create change. “Inevitably, if you’ve had Alzheimer’s in your family, you want to know, ‘What can I do today to potentially reduce my risk?’ ” says molecular biologist Heather Snyder, vice president of medical and scientific relations at the Alzheimer’s Association.
But changing long-term dietary habits can be hard. The foods we eat aren’t just fuel; our diets represent culture and comfort and more. “Food means so much to us,” Chin says.
“Even if you found the perfect diet,” he adds, “how do you get people to agree to and actually change their habits to follow that diet?” The MIND diet, for example, suggests people eat less than one serving of cheese a week. In Wisconsin, where Chin is based, that’s a nonstarter, he says.
But it’s not just about changing individual behaviors. Radd-Vagenas and other researchers hope that if they can show the brain benefits of some of these diets in rigorous studies, policy changes might follow. For example, research shows that lifestyle changes can have a big impact on type 2 diabetes. As a result, many insurance providers now pay for coaching programs that help participants maintain healthy diet and exercise habits.
“You need to establish policies. You need to change cities, change urban design. You need to do a lot of things to enable healthier choices to become easier choices,” Radd-Vagenas says. But that takes meatier data than exist now.
During winter in India’s mountainous Ladakh region, some farmers use pipes and sprinklers to construct building-sized cones of ice. These towering, humanmade glaciers, called ice stupas, slowly release water as they melt during the dry spring months for communities to drink or irrigate crops. But the pipes often freeze when conditions get too cold, stifling construction.
Now, preliminary results show that an automated system can erect an ice stupa while avoiding frozen pipes, using local weather data to control when and how much water is spouted. What’s more, the new system uses roughly a tenth the amount of water that the conventional method uses, researchers reported June 23 at the Frontiers in Hydrology meeting in San Juan, Puerto Rico. “This is one of the technological steps forward that we need to get this innovative idea to the point where it’s realistic as a solution,” says glaciologist Duncan Quincey of the University of Leeds in England, who was not involved in the research. Automation could help communities build larger, longer-lasting ice stupas that provide more water during dry periods, he says.
Ice stupas emerged in 2014 as a means for communities to cope with shrinking alpine glaciers due to human-caused climate change (SN: 5/29/19). Typically, high-mountain communities in India, Kyrgyzstan and Chile pipe glacial meltwater into gravity-driven fountains that sprinkle continuously in the winter. Cold air freezes the drizzle, creating frozen cones that can store millions of liters of water.
The process is simple, though inefficient. More than 70 percent of the spouted water may flow away instead of freezing, says glaciologist Suryanarayanan Balasubramanian of the University of Fribourg in Switzerland.
So Balasubramanian and his team outfitted an ice stupa’s fountain with a computer that automatically adjusted the spout’s flow rate based on local temperatures, humidity and wind speed. Then the scientists tested the system by building two ice stupas in Guttannen, Switzerland — one using a continuously spraying fountain and one using the automated system.
After four months, the team found that the continuously sprinkling fountain had spouted about 1,100 cubic meters of water and amassed 53 cubic meters of ice, with pipes freezing once. The automated system sprayed only around 150 cubic meters of water but formed 61 cubic meters of ice, without any frozen pipes.
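Those volumes imply a big jump in freezing efficiency. Here is a quick, rough check in Python using the figures reported above and an approximate ice-to-water density ratio of 0.92; the exact ratio varies, so treat the percentages as ballpark values.

```python
# Ballpark freezing efficiency from the reported trial volumes.
# Ice is roughly 0.92 times as dense as liquid water, so convert the ice
# volume back to an equivalent volume of liquid water before comparing.
ICE_TO_WATER = 0.92  # approximate density ratio, ice vs. liquid water

def freezing_efficiency(water_sprayed_m3, ice_formed_m3):
    """Fraction of the sprayed water that ended up frozen in the stupa."""
    return (ice_formed_m3 * ICE_TO_WATER) / water_sprayed_m3

conventional = freezing_efficiency(1100, 53)  # continuously spraying fountain
automated = freezing_efficiency(150, 61)      # weather-controlled fountain

print(f"Conventional fountain: about {conventional:.0%} of the water froze")
print(f"Automated fountain: about {automated:.0%} of the water froze")
# Roughly 4 to 5 percent versus roughly 37 percent, consistent with
# Balasubramanian's point that most conventionally spouted water flows away.
```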
The researchers are now trying to simplify their prototype to make it more affordable for high-mountain communities around the world. “We eventually want to reduce the cost so that it is within two months of salary of the farmers in Ladakh,” Balasubramanian says. “Around $200 to $400.”
A man with a hole in his forehead, who was interred in what’s now northwest Alabama between around 3,000 and 5,000 years ago, represents North America’s oldest known case of skull surgery.
Damage around the man’s oval skull opening indicates that someone scraped out that piece of bone, probably to reduce brain swelling caused by a violent attack or a serious fall, said bioarchaeologist Diana Simpson of the University of Nevada, Las Vegas. Either scenario could explain fractures and other injuries above the man’s left eye and to his left arm, leg and collarbone.
Bone regrowth on the edges of the skull opening indicates that the man lived for up to one year after surgery, Simpson estimated. She presented her analysis of the man’s remains on March 28 at a virtual session of the annual meeting of the American Association of Biological Anthropologists. Skull surgery occurred as early as 13,000 years ago in North Africa (SN: 8/17/11). Until now, the oldest evidence of this practice in North America dated to no more than roughly 1,000 years ago.
In his prime, the new record holder likely served as a ritual practitioner or shaman. His grave included items like those found in shamans’ graves at nearby North American hunter-gatherer sites dating to between about 3,000 and 5,000 years ago. Ritual objects buried with him included sharpened bone pins and modified deer and turkey bones that may have been tattooing tools (SN: 5/25/21).
Investigators excavated the man’s grave and 162 others at the Little Bear Creek Site, a seashell-covered burial mound, in the 1940s. Simpson studied the man’s museum-held skeleton and grave items in 2018, shortly before the discoveries were returned to local Native American communities for reburial.
Science, some would say, is an enterprise that should concern itself solely with cold, hard facts. Flights of imagination should be the province of philosophers and poets.
On the other hand, as Albert Einstein so astutely observed, “Imagination is more important than knowledge.” Knowledge, he said, is limited to what we know now, while “imagination embraces the entire world, stimulating progress.”
So with science, imagination has often been the prelude to transformative advances in knowledge, remaking humankind’s understanding of the world and enabling powerful new technologies. And yet while sometimes spectacularly successful, imagination has also frequently failed in ways that retard the revealing of nature’s secrets. Some minds, it seems, are simply incapable of imagining that there’s more to reality than what they already know.
On many occasions scientists have failed to foresee ways of testing novel ideas, ridiculing them as unverifiable and therefore unscientific. Consequently it is not too challenging to come up with enough failures of scientific imagination to compile a Top 10 list, beginning with:
Atoms
By the middle of the 19th century, most scientists believed in atoms. Chemists especially. John Dalton had shown that the simple ratios of different elements making up chemical compounds strongly implied that each element consisted of identical tiny particles. Subsequent research on the weights of those atoms made their reality pretty hard to dispute. But that didn’t deter physicist-philosopher Ernst Mach. Even as late as the beginning of the 20th century, he and a number of others insisted that atoms could not be real, as they were not accessible to the senses. Mach believed that atoms were a “mental artifice,” convenient fictions that helped in calculating the outcomes of chemical reactions. “Have you ever seen one?” he would ask.
Apart from the fallacy of defining reality as “observable,” Mach’s main failure was his inability to imagine a way that atoms could be observed. Even after Einstein proved the existence of atoms by indirect means in 1905, Mach stood his ground. He was unaware, of course, of the 20th century technologies that quantum mechanics would enable, and so did not foresee powerful new microscopes that could show actual images of atoms (and allow a certain computing company to drag them around to spell out IBM).
Composition of stars
Mach’s views were similar to those of Auguste Comte, a French philosopher who originated the idea of positivism, which denies reality to anything other than objects of sensory experience. Comte’s philosophy led (and in some cases still leads) many scientists astray. His greatest failure of imagination was an example he offered for what science could never know: the chemical composition of the stars.
Unable to imagine anybody affording a ticket on some entrepreneur’s space rocket, Comte argued in 1835 that the identity of the stars’ components would forever remain beyond human knowledge. We could study their size, shapes and movements, he said, “whereas we would never know how to study by any means their chemical composition, or their mineralogical structure,” or for that matter, their temperature, which “will necessarily always be concealed from us.”
Within a few decades, though, a newfangled technology called spectroscopy enabled astronomers to analyze the colors of light emitted by stars. And since each chemical element emits (or absorbs) precise colors (or frequencies) of light, each set of colors is like a chemical fingerprint, an infallible indicator for an element’s identity. Using a spectroscope to observe starlight therefore can reveal the chemistry of the stars, exactly what Comte thought impossible.
Canals on Mars
Sometimes imagination fails because of its overabundance rather than absence. In the case of the never-ending drama over the possibility of life on Mars, that planet’s famous canals turned out to be figments of overactive scientific imagination.
First “observed” in the late 19th century, the Martian canals showed up as streaks on the planet’s surface, described as canali by Italian astronomer Giovanni Schiaparelli. Canali is, however, Italian for channels, not canals. So in this case something was gained (rather than lost) in translation — the idea that Mars was inhabited. “Canals are dug,” remarked British astronomer Norman Lockyer in 1901, “ergo there were diggers.” Soon astronomers imagined an elaborate system of canals transporting water from Martian poles to thirsty metropolitan areas and agricultural centers. (Some observers even imagined seeing canals on Venus and Mercury.)

With more constrained imaginations, aided by better telescopes and translations, belief in the Martian canals eventually faded. It was merely the Martian winds blowing dust (bright) and sand (dark) around the surface in ways that occasionally made bright and dark streaks line up in a deceptive manner — to eyes attached to overly imaginative brains.
Nuclear fission
In 1934, Italian physicist Enrico Fermi bombarded uranium (atomic number 92) and other elements with neutrons, the particle discovered just two years earlier by James Chadwick. Fermi found that among the products was an unidentifiable new element. He thought he had created element 93, heavier than uranium. He could not imagine any other explanation. In 1938 Fermi was awarded the Nobel Prize in physics for demonstrating “the existence of new radioactive elements produced by neutron irradiation.”
It turned out, however, that Fermi had unwittingly demonstrated nuclear fission. His bombardment products were actually lighter, previously known elements — fragments split from the heavy uranium nucleus. Of course, the scientists later credited with discovering fission, Otto Hahn and Fritz Strassmann, didn’t understand their results either. Hahn’s former collaborator Lise Meitner was the one who explained what they’d done. Another woman, chemist Ida Noddack, had imagined the possibility of fission to explain Fermi’s results, but for some reason nobody listened to her.
Detecting neutrinos
In the 1920s, most physicists had convinced themselves that nature was built from just two basic particles: positively charged protons and negatively charged electrons. Some had, however, imagined the possibility of a particle with no electric charge. One specific proposal for such a particle came in 1930 from Austrian physicist Wolfgang Pauli. He suggested that a no-charge particle could explain a suspicious loss of energy observed in beta-particle radioactivity. Pauli’s idea was worked out mathematically by Fermi, who named the neutral particle the neutrino. Fermi’s math was then examined by physicists Hans Bethe and Rudolf Peierls, who deduced that the neutrino would zip through matter so easily that there was no imaginable way of detecting its existence (short of building a tank of liquid hydrogen 6 million billion miles wide). “There is no practically possible way of observing the neutrino,” Bethe and Peierls concluded.
But they had failed to imagine the possibility of finding a source of huge numbers of high-energy neutrinos, so that a few could be captured even if almost all escaped. No such source was known until nuclear fission reactors were invented. In the 1950s, Frederick Reines and Clyde Cowan used reactors to definitively establish the neutrino’s existence. Reines later said he sought a way to detect the neutrino precisely because everybody had told him it wasn’t possible.
Nuclear energy
Ernest Rutherford, one of the 20th century’s greatest experimental physicists, was not exactly unimaginative. He imagined the existence of the neutron a dozen years before it was discovered, and he figured out that a weird experiment conducted by his assistants had revealed that atoms contained a dense central nucleus. It was clear that the atomic nucleus packed an enormous quantity of energy, but Rutherford could imagine no way to extract that energy for practical purposes. In 1933, at a meeting of the British Association for the Advancement of Science, he noted that although the nucleus contained a lot of energy, it would also require energy to release it. Anyone saying we can exploit atomic energy “is talking moonshine,” Rutherford declared. To be fair, Rutherford qualified the moonshine remark by saying “with our present knowledge,” so in a way he perhaps was anticipating the discovery of nuclear fission a few years later. (And some historians have suggested that Rutherford did imagine the powerful release of nuclear energy, but thought it was a bad idea and wanted to discourage people from attempting it.)
Age of the Earth
Rutherford’s reputation for imagination was bolstered by his inference that radioactive matter deep underground could solve the mystery of the age of the Earth. In the mid-19th century, William Thomson (later known as Lord Kelvin) calculated the Earth’s age to be something a little more than 100 million years, and possibly much less. Geologists insisted that the Earth must be much older — perhaps billions of years — to account for the planet’s geological features.
Kelvin calculated his estimate assuming the Earth was born as a molten rocky mass that then cooled to its present temperature. But following the discovery of radioactivity at the end of the 19th century, Rutherford pointed out that it provided a new source of heat in the Earth’s interior. While giving a talk (in Kelvin’s presence), Rutherford suggested that Kelvin had basically prophesied a new source of planetary heat.
While Kelvin’s neglect of radioactivity is the standard story, a more thorough analysis shows that adding that heat to his math would not have changed his estimate very much. Rather, Kelvin’s mistake was assuming the interior to be rigid. John Perry (one of Kelvin’s former assistants) showed in 1895 that the flow of heat deep within the Earth’s interior would alter Kelvin’s calculations considerably — enough to allow the Earth to be billions of years old. It turned out that the Earth’s mantle is fluid on long time scales, a property that not only resolved the puzzle of the Earth’s age but also helps explain plate tectonics.
Charge-parity violation
Before the mid-1950s, nobody imagined that the laws of physics gave a hoot about handedness. The same laws should govern matter in action when viewed straight-on or in a mirror, just as the rules of baseball applied equally to Ted Williams and Willie Mays, not to mention Mickey Mantle. But in 1956 physicists Tsung-Dao Lee and Chen Ning Yang suggested that perfect right-left symmetry (or “parity”) might be violated by the weak nuclear force, and experiments soon confirmed their suspicion.
Restoring sanity to nature, many physicists thought, required antimatter. If you just switched left with right (mirror image), some subatomic processes exhibited a preferred handedness. But if you also replaced matter with antimatter (switching electric charge), left-right balance would be restored. In other words, reversing both charge (C) and parity (P) left nature’s behavior unchanged, a principle known as CP symmetry. CP symmetry had to be perfectly exact; otherwise nature’s laws would change if you went backward (instead of forward) in time, and nobody could imagine that.
In the early 1960s, James Cronin and Val Fitch tested CP symmetry’s perfection by studying subatomic particles called kaons and their antimatter counterparts. Kaons and antikaons both have zero charge but are not identical, because they are made from different quarks. Thanks to the quirky rules of quantum mechanics, kaons can turn into antikaons and vice versa. If CP symmetry is exact, each should turn into the other equally often. But Cronin and Fitch found that antikaons turn into kaons more often than the other way around. And that implied that nature’s laws allowed a preferred direction of time. “People didn’t want to believe it,” Cronin said in a 1999 interview. Most physicists do believe it today, but the implications of CP violation for the nature of time and other cosmic questions remain mysterious.
Behaviorism versus the brain
In the early 20th century, the dogma of behaviorism, initiated by John Watson and championed a little later by B.F. Skinner, ensnared psychologists in a paradigm that literally excised imagination from science. The brain — site of all imagination — is a “black box,” the behaviorists insisted. Rules of human psychology (mostly inferred from experiments with rats and pigeons) could be scientifically established only by observing behavior. It was scientifically meaningless to inquire into the inner workings of the brain that directed such behavior, as those workings were in principle inaccessible to human observation. In other words, activity inside the brain was deemed scientifically irrelevant because it could not be observed. “When what a person does [is] attributed to what is going on inside him,” Skinner proclaimed, “investigation is brought to an end.”
Skinner’s behaviorist BS brainwashed a generation or two of followers into thinking the brain was beyond study. But fortunately for neuroscience, some physicists foresaw methods for observing neural activity in the brain without splitting the skull open, exhibiting imagination that the behaviorists lacked. In the 1970s Michel Ter-Pogossian, Michael Phelps and colleagues developed PET (positron emission tomography) scanning technology, which uses radioactive tracers to monitor brain activity. PET scanning is now complemented by magnetic resonance imaging, based on ideas developed in the 1930s and 1940s by physicists I.I. Rabi, Edward Purcell and Felix Bloch.
Gravitational waves
Nowadays astrophysicists are all agog about gravitational waves, which can reveal all sorts of secrets about what goes on in the distant universe. All hail Einstein, whose theory of gravity — general relativity — explains the waves’ existence. But Einstein was not the first to propose the idea. In the 19th century, James Clerk Maxwell devised the math explaining electromagnetic waves, and speculated that gravity might similarly induce waves in a gravitational field. He couldn’t figure out how, though. Later other scientists, including Oliver Heaviside and Henri Poincaré, speculated about gravity waves. So the possibility of their existence certainly had been imagined.
But many physicists doubted that the waves existed, or if they did, could not imagine any way of proving it. Shortly before Einstein completed his general relativity theory, German physicist Gustav Mie declared that “the gravitational radiation emitted … by any oscillating mass particle is so extraordinarily weak that it is unthinkable ever to detect it by any means whatsoever.” Even Einstein had no idea how to detect gravitational waves, although he worked out the math describing them in a 1918 paper. In 1936 he decided that general relativity did not predict gravitational waves at all. But the paper rejecting them was simply wrong.

As it turned out, of course, gravitational waves are real and can be detected. At first they were verified indirectly, by the gradually shrinking orbit of a binary pulsar system. And more recently they were directly detected by huge experiments relying on lasers. Nobody had been able to imagine detecting gravitational waves a century ago because nobody had imagined the existence of pulsars or lasers.
All these failures show how prejudice can sometimes dull the imagination. But they also show how an imagination failure can inspire the quest for a new success. And that’s why science, so often detoured by dogma, still manages somehow, on long enough time scales, to provide technological wonders and cosmic insights beyond philosophers’ and poets’ wildest imagination.
A young, massive planet is orbiting in an unusual place in its star system, and it’s leading researchers to revive a long-debated view of how giant planets can form.
The protoplanet, nine times the mass of Jupiter, is too far away from its star to have formed by accreting matter piece by piece, images suggest. Instead, the massive world probably formed all at once in a violent implosion of gas and dust, researchers report April 4 in Nature Astronomy.
“My first reaction was, there’s no way this can be true,” says Thayne Currie, an astrophysicist at the Subaru Telescope headquartered in Hilo, Hawaii.
For years, astronomers have debated the ways in which giant planets might form (SN: 12/3/10). In the “core accretion” story, a planet starts out as small bits of matter within a disk of gas, dust and ice swirling around a young star. The clumps continue to accrete other matter, growing to become the core of the planet. Out past a certain distance from the star, that core then accumulates a thick blanket of hydrogen and helium, turning it into a bloated, gassy world.
But the new planet, orbiting a star called AB Aurigae, is in the outskirts of its system, where there’s less matter to gather into a core. In this position, the core can’t become massive enough to create its gaseous envelope. The planet’s remote location, Currie and colleagues argue, makes it more likely to form via “disk instability,” where the disk around the star breaks into planet-sized fragments. The fragments then rapidly collapse in on themselves, drawn together by their own gravity, and clump together, forming a giant planet.

Using the Subaru Telescope atop Mauna Kea, Currie and colleagues observed AB Aurigae periodically from 2016 to 2020. NASA’s Hubble Space Telescope also observed the star repeatedly over 13 years. Looking at all these images, the team saw a bright spot next to the star. The bright clump was a clear protoplanet, named AB Aur b, orbiting nearly 14 billion kilometers from its star — roughly 3 times as far as Neptune is from the sun.
In the images, AB Aur b looked like it was straight out of a simulation of planet formation by disk instability, Currie says. Except it was real. “For the longest time, I never believed that planet formation by disk instability could actually work,” he says.
Because AB Aur b is still growing, embedded in the young star’s disk, it could help to explain how the handful of known massive planets orbiting far from their stars formed.
“We only know maybe a few dozen total of these types of planets,” says Quinn Konopacky, an astrophysicist at the University of California, San Diego, who was not involved in the research. “Every single one that we find is basically precious.”
It’s difficult to distinguish whether a planet formed by core accretion or disk instability through observations alone, Konopacky says. The fact that AB Aur b is at such a wide separation from its star is “good evidence” that disk instability is what’s happening, she says. Still, “I think that there’s a lot more work to be done and other ways that we can try to assess if that’s what’s going on in the system.”
Both Konopacky and Currie say this research represents only the second direct observation of a protoplanet (SN: 7/2/18). Oftentimes, researchers have trouble distinguishing an actual forming planet from a planetary disk.
The recently launched James Webb Space Telescope could help researchers understand these anomalous gas giants that orbit far from their stars by studying the AB Aurigae system and others like it, Currie says (SN: 1/24/22). “I think this will spur a lot of debates and follow-up studies by other researchers.”
That’s one small stem for a plant, one giant leap for plant science.
In a tiny, lab-grown garden, the first seeds ever sown in lunar dirt have sprouted. This small crop, planted in samples returned by Apollo missions, offers hope that astronauts could someday grow their own food on the moon.
But plants potted in lunar dirt grew more slowly and were scrawnier than others grown in volcanic material from Earth, researchers report May 12 in Communications Biology. That finding suggests that farming on the moon would take a lot more than a green thumb. “Ah! It’s so cool!” says University of Wisconsin–Madison astrobotanist Richard Barker of the experiment.
“Ever since these samples came back, there’s been botanists that wanted to know what would happen if you grew plants in them,” says Barker, who wasn’t involved in the study. “But everyone knows those precious samples … are priceless, and so you can understand why [NASA was] reluctant to release them.”
Now, NASA’s upcoming plans to send astronauts back to the moon as part of its Artemis program have offered a new incentive to examine that precious dirt and explore how lunar resources could support long-term missions (SN: 7/15/19).
The dirt, or regolith, that covers the moon is basically a gardener’s worst nightmare. This fine powder of razor-sharp bits is full of metallic iron, rather than the oxidized kind that is palatable to plants (SN: 9/15/20). It’s also full of tiny glass shards forged by space rocks pelting the moon. What it is not full of is nitrogen, phosphorus or much else plants need to grow. So, even though scientists have gotten pretty good at coaxing plants to grow in fake moon dust made of earthly materials, no one knew whether newborn plants could put down their delicate roots in the real stuff.
To find out, a trio of researchers at the University of Florida in Gainesville ran experiments with thale cress (Arabidopsis thaliana). This well-studied plant is in the same family as mustards and can grow in just a tiny clod of material. That was key because the researchers had only a little bit of the moon to go around.
The team planted seeds in tiny pots that each held about a gram of dirt. Four pots were filled with samples returned by Apollo 11, another four with Apollo 12 samples and a final four with dirt from Apollo 17. Another 16 pots were filled with earthly volcanic material used in past experiments to mimic moon dirt. All were grown under LED lights in the lab and watered with a broth of nutrients. “Nothing really compared to when we first saw the seedlings as they were sprouting in the lunar regolith,” says Anna-Lisa Paul, a plant molecular biologist. “That was a moving experience, to be able to say that we’re watching the very first terrestrial organisms to grow in extraterrestrial materials, ever. And it was amazing. Just amazing.”
Plants grew in all the pots of lunar dirt, but none grew as well as those cultivated in earthly material. “The healthiest ones were just smaller,” Paul says. The sickliest moon-grown plants were tiny and had purplish pigmentation — a red flag for plant stress. Plants grown in Apollo 11 samples, which had been exposed on the lunar surface the longest, were most stunted.
Paul and colleagues also inspected the genes in their mini alien Eden. “By seeing what kind of genes are turned on and turned off in response to a stress, that shows you what tools plants are pulling out of their metabolic toolbox to deal with that stress,” she says. All plants grown in moon dirt pulled out genetic tools typically seen in plants struggling with stress from salt, metals or reactive oxygen species (SN: 9/8/21).
Apollo 11 seedlings had the most severely stressed genetic profile, offering more evidence that regolith exposed to the lunar surface longer — and therefore littered with more impact glass and metallic iron — is more toxic to plants.
Future space explorers could choose the site for their lunar habitat accordingly. Perhaps lunar dirt could also be modified somehow to make it more comfortable for plants. Or plants could be genetically engineered to feel more at home in alien soil. “We can also choose plants that do better,” Paul says. “Maybe spinach plants, which are very salt-tolerant, would have no trouble growing in lunar regolith.”
Barker isn’t daunted by the challenges promised by this first attempt at lunar gardening. “There’s many, many steps and pieces of technology to be developed before humanity can really engage in lunar agriculture,” he says. “But having this particular dataset is really important for those of us that believe it’s possible and important.”
For millennia, the universe did a pretty good job of keeping its secrets from science.
Ancient Greeks thought the universe was a sphere of fixed stars surrounding smaller spheres carrying planets around the central Earth. Even Copernicus, who in the 16th century correctly replaced the Earth with the sun, viewed the universe as a single solar system encased by the star-studded outer sphere.
But in the centuries that followed, the universe revealed some of its vastness. It contained countless stars agglomerated in huge clusters, now called galaxies.
Then, at the end of the 1920s, the cosmos disclosed its most closely held secret of all: It was getting bigger. Rather than static and stable, an everlasting and ever-the-same entity encompassing all of reality, the universe continually expanded. Observations of distant galaxies showed them flying apart from each other, suggesting the current cosmos to be just the adult phase of a universe born long ago in the burst of a tiny blotch of energy.
It was a surprise that shook science at its foundations, undercutting philosophical preconceptions about existence and launching a new era in cosmology, the study of the universe. But even more surprising, in retrospect, is that such a deep secret had already been suspected by a mathematician whose specialty was predicting the weather.

A century ago this month (May 1922), Russian mathematician-meteorologist Alexander Friedmann composed a paper, based on Einstein’s general theory of relativity, that outlined multiple possible histories of the universe. One such possibility described cosmic expansion, starting from a singular point. In essence, even without considering any astronomical evidence, Friedmann had anticipated the modern Big Bang theory of the birth and evolution of the universe.
“The new vision of the universe opened by Friedmann,” writes Russian physicist Vladimir Soloviev in a recent paper, “has become a foundation of modern cosmology.”
Friedmann was not well known at the time. He had graduated in 1910 from St. Petersburg University in Russia, having studied math along with some physics. In graduate school he investigated the use of math in meteorology and atmospheric dynamics. He applied that expertise in aiding the Russian air force during World War I, using math to predict the optimum release point for dropping bombs on enemy targets.
After the war, Friedmann learned of Einstein’s general theory of relativity, which describes gravity as a manifestation of the geometry of space (or more accurately, spacetime). In Einstein’s theory, mass distorts spacetime, producing spacetime “curvature,” which makes masses appear to attract each other.
Friedmann was especially intrigued by Einstein’s 1917 paper (and a similar paper by Willem de Sitter) applying general relativity to the universe as a whole. Einstein found that his original equations allowed the universe to grow or shrink. But he considered that unthinkable, so he added a term representing a repulsive force that (he thought) would keep the size of the cosmos constant. Einstein concluded that space had a positive spatial curvature (like the surface of a ball), implying a “closed,” or finite universe.
Friedmann accepted the new term, called the cosmological constant, but pointed out that for various values of that constant, along with other assumptions, the universe might exhibit very different behaviors. Einstein’s static universe was a special case; the universe might also expand forever, or expand for a while, then contract to a point and then begin expanding again.
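In modern notation (not Friedmann's own symbols), the relation at the heart of his analysis is usually written in terms of a scale factor a(t) that tracks the overall size of the universe over time:

\[
\left(\frac{\dot a}{a}\right)^2 \;=\; \frac{8\pi G}{3}\,\rho \;-\; \frac{kc^2}{a^2} \;+\; \frac{\Lambda c^2}{3}
\]

Here ρ is the average density of matter and energy, k encodes the curvature of space and Λ is Einstein's cosmological constant. Different choices of these quantities yield Einstein's static cosmos, a universe that expands forever, or one that expands and then recollapses.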
Friedmann’s paper describing dynamic universes, titled “On the Curvature of Space,” was accepted for publication in the prestigious Zeitschrift für Physik on June 29, 1922.
Einstein objected. He wrote a note to the journal contending that Friedmann had committed a mathematical error. But the error was Einstein’s. He later acknowledged that Friedmann’s math was correct, while still denying that it had any physical validity.
Friedmann insisted otherwise.
He was not just a pure mathematician, oblivious to the physical meanings of his symbols on paper. His in-depth appreciation of the relationship between equations and the atmosphere persuaded him that the math meant something physical. He even wrote a book (The World as Space and Time) delving deeply into the connection between the math of spatial geometry and the motion of physical bodies. Physical bodies “interpret” the “geometrical world,” he declared, enabling scientists to test which of the various possible geometrical worlds humans actually inhabit. Because of the physics-math connection, he averred, “it becomes possible to determine the geometry of the geometrical world through experimental studies of the physical world.”
So when Friedmann derived solutions to Einstein’s equations, he translated them into the possible physical meanings for the universe. Depending on various factors, the universe could be expanding from a point, or from a finite but smaller initial state, for instance.

In one case he envisioned, the universe began to expand at a decelerating rate, but then reached an inflection point, whereupon it began expanding at a faster and faster rate. At the end of the 20th century, astronomers measuring the brightness of distant supernovas concluded that the universe had taken just such a course, a shock almost as surprising as the expansion of the universe itself. But Friedmann’s math had already forecast such a possibility.

No doubt Friedmann’s deep appreciation for the synergy of abstract math and concrete physics prepared his mind to consider the notion that the universe could be expanding. But maybe he had some additional help. Although he was the first scientist to seriously propose an expanding universe, he wasn’t the first person. Almost 75 years before Friedmann’s paper, the poet Edgar Allan Poe had published an essay (or “prose poem”) called Eureka. In that essay Poe described the history of the universe as expanding from the explosion of a “primordial particle.” Poe even described the universe as growing and then contracting back to a point again, just as envisioned in one of Friedmann’s scenarios.
Although Poe had studied math during his brief time as a student at West Point, he had used no equations in Eureka, and his essay was not recognized as a contribution to science. At least not directly. It turns out, though, that Friedmann was an avid reader, and among his favorite authors were Dostoevsky and Poe. So perhaps that’s why Friedmann was more receptive to an expanding universe than other scientists of his day.
Today Friedmann’s math remains at the core of modern cosmological theory. “The fundamental equations he derived still provide the basis for the current cosmological theories of the Big Bang and the accelerating universe,” Israeli mathematician and historian Ari Belenkiy noted in a 2013 paper. “He introduced the fundamental idea of modern cosmology — that the universe is dynamic and may evolve in different manners.”
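The accelerating behavior mentioned above likewise falls out of the companion acceleration equation (again written in modern textbook form rather than Friedmann’s original notation):

$$\frac{\ddot a}{a} = -\frac{4\pi G}{3}\left(\rho + \frac{3p}{c^2}\right) + \frac{\Lambda c^2}{3}$$

When matter and its pressure $p$ thin out enough that the $\Lambda$ term dominates, $\ddot a$ turns positive and the expansion speeds up; that is the inflection point Friedmann contemplated and the course the supernova measurements later revealed.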
Friedmann emphasized that astronomical knowledge in his day was insufficient to reveal which of the possible mathematical histories the universe has chosen. Now scientists have much more data, and have narrowed the possibilities in a way that confirms the prescience of Friedmann’s math.
Friedmann did not live to see the triumphs of his insights, though, or even the early evidence that the universe really does expand. He died in 1925 from typhoid fever, at the age of 37. But he died knowing that he had deciphered a secret about the universe deeper than any suspected by any scientist before him. As his wife remembered, he liked to quote a passage from Dante: “The waters I am entering, no one yet has crossed.”
Deep in the human brain, a very specific kind of cell dies during Parkinson’s disease.
For the first time, researchers have sorted large numbers of human brain cells in the substantia nigra into 10 distinct types. Just one is especially vulnerable in Parkinson’s disease, the team reports May 5 in Nature Neuroscience. The result could lead to a clearer view of how Parkinson’s takes hold, and perhaps even ways to stop it.
The new research “goes right to the core of the matter,” says neuroscientist Raj Awatramani of Northwestern University Feinberg School of Medicine in Chicago. Pinpointing the brain cells that seem to be especially susceptible to the devastating disease is “the strength of this paper,” says Awatramani, who was not involved in the study.
Parkinson’s disease steals people’s ability to move smoothly, leaving balance problems, tremors and rigidity. In the United States, nearly 1 million people are estimated to have Parkinson’s. Scientists have known for decades that these symptoms come with the death of nerve cells in the substantia nigra. Neurons there churn out dopamine, a chemical signal involved in movement, among other jobs (SN: 9/7/17).
But those dopamine-making neurons are not all equally vulnerable in Parkinson’s, it turns out.
“This seemed like an opportunity to … really clarify which kinds of cells are actually dying in Parkinson’s disease,” says Evan Macosko, a psychiatrist and neuroscientist at Massachusetts General Hospital in Boston and the Broad Institute of MIT and Harvard. The tricky part was that dopamine-making neurons in the substantia nigra are rare. In samples of postmortem brains, “we couldn’t survey enough of [the cells] to really get an answer,” Macosko says. But Abdulraouf Abdulraouf, a researcher in Macosko’s laboratory, led experiments that sorted these cells, figuring out a way to selectively pull the neurons’ nuclei out from the rest of the cells in the substantia nigra. That enrichment ultimately yielded an abundance of nuclei to analyze.
By studying over 15,000 nuclei from the brains of eight formerly healthy people, the researchers further sorted dopamine-making cells in the substantia nigra into 10 distinct groups. Each of these cell groups was defined by a specific brain location and certain combinations of genes that were active.
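For readers curious how that kind of sorting works in practice, the sketch below shows a generic single-nucleus clustering workflow in Python using the scanpy toolkit. It illustrates the general technique, not the authors’ actual pipeline; the file name, parameter values and resulting cluster count are hypothetical placeholders.

    # Minimal, generic sketch of grouping single-nucleus expression profiles
    # into cell types. Not the study's actual pipeline; the input file and
    # parameter choices here are hypothetical.
    import scanpy as sc

    # Load a nuclei-by-genes count matrix (hypothetical file name).
    adata = sc.read_h5ad("substantia_nigra_nuclei.h5ad")

    # Normalize each nucleus, log-transform, and keep the most variable genes.
    sc.pp.normalize_total(adata, target_sum=1e4)
    sc.pp.log1p(adata)
    sc.pp.highly_variable_genes(adata, n_top_genes=2000, subset=True)

    # Reduce dimensionality, build a nearest-neighbor graph, and cluster.
    sc.pp.pca(adata, n_comps=30)
    sc.pp.neighbors(adata, n_neighbors=15)
    sc.tl.leiden(adata, resolution=0.5, key_added="cell_group")

    # Rank the genes whose activity best distinguishes each group, which is
    # how a marker such as AGTR1 can be used to label a cluster.
    sc.tl.rank_genes_groups(adata, groupby="cell_group", method="wilcoxon")

In workflows like this one, the groups emerge from patterns of gene activity rather than being defined in advance, which matches how the 10 cell groups in the study were characterized by location and active genes.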
When the researchers looked at substantia nigra neurons in the brains of people who died with either Parkinson’s disease or the related Lewy body dementia, the team noticed something curious: One of these 10 cell types was drastically diminished.
These missing neurons were identified by their location in the lower part of the substantia nigra and an active AGTR1 gene, lab member Tushar Kamath and colleagues found. That gene was thought to serve simply as a good way to identify these cells, Macosko says; researchers don’t know whether the gene has a role in these dopamine-making cells’ fate in people.
The new finding points to possible ways to counter the debilitating diseases. Scientists have been keen to replace the missing dopamine-making neurons in the brains of people with Parkinson’s. The new study shows what those cells would need to look like, Awatramani says. “If a particular subtype is more vulnerable in Parkinson’s disease, maybe that’s the one we should be trying to replace,” he says.
In fact, Macosko says that stem cell scientists have already been in contact, eager to make these specific cells. “We hope this is a guidepost,” Macosko says.
The new study involved only a small number of human brains. Going forward, Macosko and his colleagues hope to study more brains, and more parts of those brains. “We were able to get some pretty interesting insights with a relatively small number of people,” he says. “When we get to larger numbers of people with other kinds of diseases, I think we’re going to learn a lot.”