A man with a hole in his forehead, who was interred in what’s now northwest Alabama between around 3,000 and 5,000 years ago, represents North America’s oldest known case of skull surgery.
Damage around the man’s oval skull opening indicates that someone scraped out that piece of bone, probably to reduce brain swelling caused by a violent attack or a serious fall, said bioarchaeologist Diana Simpson of the University of Nevada, Las Vegas. Either scenario could explain fractures and other injuries above the man’s left eye and to his left arm, leg and collarbone.
Bone regrowth on the edges of the skull opening indicates that the man lived for up to one year after surgery, Simpson estimated. She presented her analysis of the man’s remains on March 28 at a virtual session of the annual meeting of the American Association of Biological Anthropologists. Skull surgery occurred as early as 13,000 years ago in North Africa (SN: 8/17/11). Until now, the oldest evidence of this practice in North America dated to no more than roughly 1,000 years ago.
In his prime, the new record holder likely served as a ritual practitioner or shaman. His grave included items like those found in shamans’ graves at nearby North American hunter-gatherer sites dating to between about 3,000 and 5,000 years ago. Ritual objects buried with him included sharpened bone pins and modified deer and turkey bones that may have been tattooing tools (SN: 5/25/21).
Investigators excavated the man’s grave and 162 others at the Little Bear Creek Site, a seashell-covered burial mound, in the 1940s. Simpson studied the man’s museum-held skeleton and grave items in 2018, shortly before the remains and artifacts were returned to local Native American communities for reburial.
Science, some would say, is an enterprise that should concern itself solely with cold, hard facts. Flights of imagination should be the province of philosophers and poets.
On the other hand, as Albert Einstein so astutely observed, “Imagination is more important than knowledge.” Knowledge, he said, is limited to what we know now, while “imagination embraces the entire world, stimulating progress.”
So it is with science: imagination has often been the prelude to transformative advances in knowledge, remaking humankind’s understanding of the world and enabling powerful new technologies. Yet while sometimes spectacularly successful, imagination has also frequently failed in ways that delayed the revelation of nature’s secrets. Some minds, it seems, are simply incapable of imagining that there’s more to reality than what they already know.
On many occasions scientists have failed to foresee ways of testing novel ideas, ridiculing them as unverifiable and therefore unscientific. Consequently it is not too challenging to come up with enough failures of scientific imagination to compile a Top 10 list, beginning with:
Atoms

By the middle of the 19th century, most scientists believed in atoms. Chemists especially. John Dalton had shown that the simple ratios of different elements making up chemical compounds strongly implied that each element consisted of identical tiny particles. Subsequent research on the weights of those atoms made their reality pretty hard to dispute. But that didn’t deter physicist-philosopher Ernst Mach. Even as late as the beginning of the 20th century, he and a number of others insisted that atoms could not be real, as they were not accessible to the senses. Mach believed that atoms were a “mental artifice,” convenient fictions that helped in calculating the outcomes of chemical reactions. “Have you ever seen one?” he would ask.
Apart from the fallacy of defining reality as “observable,” Mach’s main failure was his inability to imagine a way that atoms could be observed. Even after Einstein proved the existence of atoms by indirect means in 1905, Mach stood his ground. He was unaware, of course, of the 20th century technologies that quantum mechanics would enable, and so did not foresee powerful new microscopes that could show actual images of atoms (and allow a certain computing company to drag them around to spell out IBM).
Composition of stars

Mach’s views were similar to those of Auguste Comte, a French philosopher who originated the idea of positivism, which denies reality to anything other than objects of sensory experience. Comte’s philosophy led (and in some cases still leads) many scientists astray. His greatest failure of imagination was an example he offered for what science could never know: the chemical composition of the stars.
Unable to imagine anybody affording a ticket on some entrepreneur’s space rocket, Comte argued in 1835 that the identity of the stars’ components would forever remain beyond human knowledge. We could study their size, shapes and movements, he said, “whereas we would never know how to study by any means their chemical composition, or their mineralogical structure,” or for that matter, their temperature, which “will necessarily always be concealed from us.”
Within a few decades, though, a newfangled technology called spectroscopy enabled astronomers to analyze the colors of light emitted by stars. And since each chemical element emits (or absorbs) precise colors (or frequencies) of light, each set of colors is like a chemical fingerprint, an infallible indicator for an element’s identity. Using a spectroscope to observe starlight therefore can reveal the chemistry of the stars, exactly what Comte thought impossible.
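For a concrete (illustrative) example of such a fingerprint, consider hydrogen: the wavelengths of its visible emission lines follow the Rydberg formula,

$$\frac{1}{\lambda} = R_H\left(\frac{1}{n_1^2} - \frac{1}{n_2^2}\right), \qquad R_H \approx 1.097\times10^{7}\,\mathrm{m^{-1}}.$$

Setting $n_1 = 2$ and $n_2 = 3$ gives $\lambda \approx 656$ nanometers, the red H-alpha line; spotting that precise shade of red in a star’s spectrum is a telltale sign of hydrogen.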
Canals on Mars

Sometimes imagination fails because of its overabundance rather than absence. In the case of the never-ending drama over the possibility of life on Mars, that planet’s famous canals turned out to be figments of overactive scientific imagination.
First “observed” in the late 19th century, the Martian canals showed up as streaks on the planet’s surface, described as canali by Italian astronomer Giovanni Schiaparelli. Canali is, however, Italian for channels, not canals. So in this case something was gained (rather than lost) in translation — the idea that Mars was inhabited. “Canals are dug,” remarked British astronomer Norman Lockyer in 1901, “ergo there were diggers.” Soon astronomers imagined an elaborate system of canals transporting water from Martian poles to thirsty metropolitan areas and agricultural centers. (Some observers even imagined seeing canals on Venus and Mercury.) With more constrained imaginations, aided by better telescopes and translations, belief in the Martian canals eventually faded. The streaks turned out to be merely the work of Martian winds blowing bright dust and dark sand around the surface, occasionally lining up in ways that deceived eyes attached to overly imaginative brains.
Nuclear fission

In 1934, Italian physicist Enrico Fermi bombarded uranium (atomic number 92) and other elements with neutrons, the particle discovered just two years earlier by James Chadwick. Fermi found that among the products was an unidentifiable new element. He thought he had created element 93, heavier than uranium. He could not imagine any other explanation. In 1938 Fermi was awarded the Nobel Prize in physics for demonstrating “the existence of new radioactive elements produced by neutron irradiation.”
It turned out, however, that Fermi had unwittingly demonstrated nuclear fission. His bombardment products were actually lighter, previously known elements — fragments split from the heavy uranium nucleus. Of course, the scientists later credited with discovering fission, Otto Hahn and Fritz Strassmann, didn’t understand their results either. Hahn’s former collaborator Lise Meitner was the one who explained what they’d done. Another woman, chemist Ida Noddack, had imagined the possibility of fission to explain Fermi’s results, but for some reason nobody listened to her.
Detecting neutrinos

In the 1920s, most physicists had convinced themselves that nature was built from just two basic particles: positively charged protons and negatively charged electrons. Some had, however, imagined the possibility of a particle with no electric charge. One specific proposal for such a particle came in 1930 from Austrian physicist Wolfgang Pauli. He suggested that a no-charge particle could explain a suspicious loss of energy observed in beta-particle radioactivity. Pauli’s idea was worked out mathematically by Fermi, who named the neutral particle the neutrino. Fermi’s math was then examined by physicists Hans Bethe and Rudolf Peierls, who deduced that the neutrino would zip through matter so easily that there was no imaginable way of detecting its existence (short of building a tank of liquid hydrogen 6 million billion miles wide). “There is no practically possible way of observing the neutrino,” Bethe and Peierls concluded.
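A rough modern version of their estimate (a back-of-the-envelope sketch with rounded numbers, not Bethe and Peierls’ own arithmetic): a few-million-electron-volt neutrino has an interaction cross section of order $\sigma \sim 10^{-44}\,\mathrm{cm^2}$, and liquid hydrogen contains roughly $n \approx 4\times10^{22}$ protons per cubic centimeter, so a neutrino’s mean free path is

$$\ell = \frac{1}{n\sigma} \sim \frac{1}{(4\times10^{22}\,\mathrm{cm^{-3}})(10^{-44}\,\mathrm{cm^{2}})} \approx 2\times10^{21}\,\mathrm{cm},$$

a couple of thousand light-years of liquid hydrogen, the same staggering order of magnitude as their tank.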
But they had failed to imagine the possibility of finding a source of huge numbers of high-energy neutrinos, so that a few could be captured even if almost all escaped. No such source was known until nuclear fission reactors were invented. In the 1950s, Frederick Reines and Clyde Cowan used reactors to definitively establish the neutrino’s existence. Reines later said he sought a way to detect the neutrino precisely because everybody had told him it couldn’t be done.
Nuclear energy

Ernest Rutherford, one of the 20th century’s greatest experimental physicists, was not exactly unimaginative. He imagined the existence of the neutron a dozen years before it was discovered, and he figured out that a weird experiment conducted by his assistants had revealed that atoms contained a dense central nucleus. It was clear that the atomic nucleus packed an enormous quantity of energy, but Rutherford could imagine no way to extract that energy for practical purposes. In 1933, at a meeting of the British Association for the Advancement of Science, he noted that although the nucleus contained a lot of energy, it would also require energy to release it. Anyone saying we can exploit atomic energy “is talking moonshine,” Rutherford declared. To be fair, Rutherford qualified the moonshine remark by saying “with our present knowledge,” so in a way he perhaps was anticipating the discovery of nuclear fission a few years later. (And some historians have suggested that Rutherford did imagine the powerful release of nuclear energy, but thought it was a bad idea and wanted to discourage people from attempting it.)
Age of the Earth

Rutherford’s reputation for imagination was bolstered by his inference that radioactive matter deep underground could solve the mystery of the age of the Earth. In the mid-19th century, William Thomson (later known as Lord Kelvin) calculated the Earth’s age to be something a little more than 100 million years, and possibly much less. Geologists insisted that the Earth must be much older — perhaps billions of years — to account for the planet’s geological features.
Kelvin calculated his estimate assuming the Earth was born as a molten rocky mass that then cooled to its present temperature. But following the discovery of radioactivity at the end of the 19th century, Rutherford pointed out that it provided a new source of heat in the Earth’s interior. While giving a talk (in Kelvin’s presence), Rutherford diplomatically suggested that Kelvin had in effect prophesied the discovery of a new source of planetary heat.
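The heart of Kelvin’s calculation fits in one line (a simplified sketch of his conduction argument, with illustrative round numbers): for a half-space cooling by conduction from an initial temperature $T_0$, the surface temperature gradient decays with time as

$$\left.\frac{\partial T}{\partial z}\right|_{z=0} = \frac{T_0}{\sqrt{\pi\kappa t}}, \qquad\text{so}\qquad t = \frac{T_0^{2}}{\pi\kappa\,(\partial T/\partial z)^{2}}.$$

Plugging in a molten-rock starting temperature of a few thousand kelvins, a measured gradient of about one kelvin per 30 meters and a thermal diffusivity $\kappa \sim 10^{-6}\,\mathrm{m^2/s}$ yields an age of order $10^8$ years, roughly Kelvin’s figure.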
While Kelvin’s neglect of radioactivity is the standard story, a more thorough analysis shows that adding that heat to his math would not have changed his estimate very much. Rather, Kelvin’s mistake was assuming the Earth’s interior to be rigid. John Perry (one of Kelvin’s former assistants) showed in 1895 that the flow of heat deep within the Earth’s interior would alter Kelvin’s calculations considerably — enough to allow the Earth to be billions of years old. It turned out that the Earth’s mantle does flow on long time scales, a fact that not only permits the planet’s great age but also underlies plate tectonics.
Charge-parity violation

Before the mid-1950s, nobody imagined that the laws of physics gave a hoot about handedness. The same laws should govern matter in action when viewed straight-on or in a mirror, just as the rules of baseball applied equally to Ted Williams and Willie Mays, not to mention Mickey Mantle. But in 1956 physicists Tsung-Dao Lee and Chen Ning Yang suggested that perfect right-left symmetry (or “parity”) might be violated by the weak nuclear force, and experiments soon confirmed their suspicion.
Restoring sanity to nature, many physicists thought, required antimatter. If you just switched left with right (mirror image), some subatomic processes exhibited a preferred handedness. But if you also replaced matter with antimatter (switching electric charge), left-right balance would be restored. In other words, reversing both charge (C) and parity (P) left nature’s behavior unchanged, a principle known as CP symmetry. CP symmetry had to be perfectly exact; otherwise nature’s laws would change if you went backward (instead of forward) in time, and nobody could imagine that.
In the early 1960s, James Cronin and Val Fitch tested CP symmetry’s perfection by studying subatomic particles called kaons and their antimatter counterparts. Kaons and antikaons both have zero charge but are not identical, because they are made from different quarks. Thanks to the quirky rules of quantum mechanics, kaons can turn into antikaons and vice versa. If CP symmetry is exact, each should turn into the other equally often. But Cronin and Fitch found that antikaons turn into kaons more often than the other way around. And that implied that nature’s laws allowed a preferred direction of time. “People didn’t want to believe it,” Cronin said in a 1999 interview. Most physicists do believe it today, but the implications of CP violation for the nature of time and other cosmic questions remain mysterious.
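In symbols, what exact CP symmetry forbids is a rate imbalance between the two transformations:

$$\Gamma\left(\bar K^0 \to K^0\right) \neq \Gamma\left(K^0 \to \bar K^0\right).$$

Later dedicated experiments pinned the imbalance at a few parts per thousand (a modern, approximate figure): tiny, but enough to give time’s arrow a toehold in particle physics.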
Behaviorism versus the brain

In the early 20th century, the dogma of behaviorism, initiated by John Watson and championed a little later by B.F. Skinner, ensnared psychologists in a paradigm that effectively excised imagination from science. The brain — site of all imagination — is a “black box,” the behaviorists insisted. Rules of human psychology (mostly inferred from experiments with rats and pigeons) could be scientifically established only by observing behavior. It was scientifically meaningless to inquire into the inner workings of the brain that directed such behavior, as those workings were in principle inaccessible to human observation. In other words, activity inside the brain was deemed scientifically irrelevant because it could not be observed. “When what a person does [is] attributed to what is going on inside him,” Skinner proclaimed, “investigation is brought to an end.”
Skinner’s behaviorist BS brainwashed a generation or two of followers into thinking the brain was beyond study. But fortunately for neuroscience, some physicists foresaw methods for observing neural activity in the brain without splitting the skull open, exhibiting imagination that the behaviorists lacked. In the 1970s Michel Ter-Pogossian, Michael Phelps and colleagues developed PET (positron emission tomography) scanning technology, which uses radioactive tracers to monitor brain activity. PET scanning is now complemented by magnetic resonance imaging, based on ideas developed in the 1930s and 1940s by physicists I.I. Rabi, Edward Purcell and Felix Bloch.
Gravitational waves

Nowadays astrophysicists are all agog about gravitational waves, which can reveal all sorts of secrets about what goes on in the distant universe. All hail Einstein, whose theory of gravity — general relativity — explains the waves’ existence. But Einstein was not the first to propose the idea. In the 19th century, James Clerk Maxwell devised the math explaining electromagnetic waves, and speculated that gravity might similarly induce waves in a gravitational field. He couldn’t figure out how, though. Later other scientists, including Oliver Heaviside and Henri Poincaré, speculated about gravity waves. So the possibility of their existence certainly had been imagined.
But many physicists doubted that the waves existed, or if they did, could not imagine any way of proving it. Shortly before Einstein completed his general relativity theory, German physicist Gustav Mie declared that “the gravitational radiation emitted … by any oscillating mass particle is so extraordinarily weak that it is unthinkable ever to detect it by any means whatsoever.” Even Einstein had no idea how to detect gravitational waves, although he worked out the math describing them in a 1918 paper. In 1936 he decided that general relativity did not predict gravitational waves at all. But the paper rejecting them was simply wrong. As it turned out, of course, gravitational waves are real and can be detected. At first they were verified indirectly, through the shrinking orbit of a pulsar and its companion neutron star. More recently they have been detected directly by huge experiments relying on lasers. Nobody had been able to imagine detecting gravitational waves a century ago because nobody had imagined the existence of pulsars or lasers.
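Mie’s pessimism had a quantitative basis. The 1918 math gives the radiated power through what is now called the quadrupole formula (shown here schematically),

$$P = \frac{G}{5c^5}\left\langle \dddot{Q}_{ij}\,\dddot{Q}_{ij} \right\rangle,$$

where $Q_{ij}$ is the source’s mass quadrupole. The factor of $c^5/G \approx 3.6\times10^{52}$ watts sitting in the denominator means that anything short of an astrophysical cataclysm, such as a pair of merging neutron stars, radiates immeasurably little.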
All these failures show how prejudice can sometimes dull the imagination. But they also show how an imagination failure can inspire the quest for a new success. And that’s why science, so often detoured by dogma, still manages somehow, on long enough time scales, to provide technological wonders and cosmic insights beyond philosophers’ and poets’ wildest imagination.
As astronomy datasets grow larger, scientists are scouring them for black holes, hoping to better understand the exotic objects. But the drive to find more black holes is leading some astronomers astray.
“You say black holes are like a needle in a haystack, but suddenly we have way more haystacks than we did before,” says astrophysicist Kareem El-Badry of the Harvard-Smithsonian Center for Astrophysics in Cambridge, Mass. “You have better chances of finding them, but you also have more opportunities to find things that look like them.”
Two more claimed black holes have turned out to be the latter: weird things that look like them. They both are actually double-star systems at never-before-seen stages in their evolutions, El-Badry and his colleagues report March 24 in Monthly Notices of the Royal Astronomical Society. The key to understanding the systems is figuring out how to interpret light coming from them, the researchers say.

In early 2021, astronomer Tharindu Jayasinghe of Ohio State University and his colleagues reported finding a star system — affectionately named the Unicorn — about 1,500 light-years from Earth that they thought held a giant red star in its senior years orbiting an invisible black hole. Some of the same researchers, including Jayasinghe, later reported a second similar system, dubbed the Giraffe, found about 12,000 light-years away.
But other researchers, including El-Badry, weren’t convinced that the systems harbored black holes. So Jayasinghe, El-Badry and others combined forces to reanalyze the data.
To verify each star system’s nature, the researchers turned to stellar spectra, the rainbows that are produced when starlight is split up into its component wavelengths. Any star’s spectrum will have lines where atoms in the stellar atmosphere have absorbed particular wavelengths of light. A slow-spinning star has very sharp lines, but a fast-spinning one has blurred and smeared lines.
“If the star spins fast enough, basically all the spectral features become almost invisible,” El-Badry says. “Normally, you detect a second star in a spectrum by looking for another set of lines,” he adds. “And that’s harder to do if a star is rapidly rotating.”
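The smearing is ordinary Doppler shifting (a rough illustrative estimate, not a figure from the study): light from the approaching limb of a spinning star is shifted blueward and light from the receding limb redward, spreading each line over a range of roughly

$$\frac{\Delta\lambda}{\lambda} \approx \frac{2\,v\sin i}{c}.$$

For a star spinning at a projected speed of 300 kilometers per second, a line at 500 nanometers smears across about 1 nanometer, broad and shallow enough to nearly vanish into the surrounding starlight.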
That’s why Jayasinghe and colleagues misunderstood each of these systems initially, the team found.
“The problem was that there was not just one star, but a second one that was basically hiding,” says astrophysicist Julia Bodensteiner of the European Southern Observatory in Garching, Germany, who was not involved in the new study. That second star in each system spins very fast, which makes them difficult to see in the spectra.
What’s more, the lines in the spectrum of a star orbiting something will shift back and forth, El-Badry says. If one assumes the spectrum shows just one average, slow-spinning star in an orbit — which is what appeared to be happening in these systems at first glance — that assumption then leads to the erroneous conclusion that the star is orbiting an invisible black hole.
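That chain of reasoning typically runs through the binary mass function, the standard tool for weighing an unseen companion (quoted here as the textbook relation, not necessarily either team’s exact analysis). From the orbital period $P$ and the visible star’s velocity amplitude $K$,

$$f(M) = \frac{P K^{3}}{2\pi G} = \frac{M_2^{3}\sin^{3} i}{(M_1 + M_2)^{2}},$$

which sets a hard lower limit on the companion’s mass $M_2$. If the measured lines really belong to one slow-spinning giant, $f(M)$ can come out at several suns, and a dark object that heavy looks like a black hole; a hidden, rapidly rotating second star invalidates the premise.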
Instead, the Unicorn and Giraffe each hold two stars, caught in a never-before-seen stage of stellar evolution, the researchers found after reanalyzing the data. Both systems contain an older red giant star with a puffy atmosphere and a “subgiant,” a star on its way to that late-life stage. The subgiants are near enough to their companion red giants that they are gravitationally stealing material from them. As these subgiants accumulate more mass, they spin faster, El-Badry says, which is what made them undetectable initially.
“Everyone was looking for really interesting black holes, but what they found is really interesting binaries,” Bodensteiner says.
These are not the only systems to trick astronomers recently. What was thought to be the nearest black hole to Earth also turned out to be a pair of stars in a rarely seen stage of evolution (SN: 3/11/22).
“Of course, it’s disappointing that what we thought were black holes were actually not, but it’s part of the process,” Jayasinghe says. He and his colleagues are still looking for black holes, he says, but with a greater awareness of how pairs of interacting stars might trick them.
On her deep-sea dives, wildlife biologist Angela Ziltener of the University of Zurich often noticed Indo-Pacific bottlenose dolphins doing something intriguing. The dolphins (Tursiops aduncus) would line up to take turns brushing their bodies against corals or sea sponges lining the seafloor. After more than a decade as an “adopted” member of the pod — a status that let Ziltener get up close without disturbing the animals — she and her team may have figured out why the animals behave this way: The dolphins may use corals and sea sponges as their own private pharmacies.
The invertebrates make antibacterial compounds — as well as others with antioxidant or hormonal properties — that are probably released into the waters of the Northern Red Sea when dolphins make contact, Ziltener and colleagues report May 19 in iScience. So the rubbing could help dolphins maintain healthy skin. Ziltener captured video showing members of the pod using corals as if they were a bath brush, swimming through to rub various parts of their bodies. Oftentimes it’s a peaceful social gathering. “It’s not like they’re fighting each other for the turn,” Ziltener says. “No, they wait and then they go through.” Other times, an individual dolphin will arrive at a patch of coral on its own.
But the dolphins won’t buff their bodies against just any corals, Ziltener says. They’re picky, primarily rubbing up against gorgonian corals (Rumphella aggregata) and leather corals (Sarcophyton sp.), as well as a kind of sea sponge (Ircinia sp.).
Ziltener and colleagues analyzed one-centimeter slices taken from wild corals and sponges. The team identified 17 compounds overall, including 10 with antibacterial or antimicrobial activity. It’s possible that as the dolphins swim through the corals, the compounds help protect the animals from skin irritations or infections, says coauthor Gertrud Morlock, an analytical chemist at Justus Liebig University Giessen in Germany. Other animals, including chimpanzees, can self-medicate (SN: 11/3/90). Marine biologist Jeremy Kiszka of Florida International University in Miami says the new study convinces him that the dolphins are using corals and sea sponges for that purpose. But, he says, additional experiments are necessary to prove the link. Lab tests, for instance, could help identify the types of bacteria that the compounds might work against.
Ziltener agrees there’s more to be done. For instance, it’s also possible that in addition to prevention, dolphins use corals and sea sponges to treat active skin infections, she says, but the team has yet to see proof of a coral cure. Next up, though, Ziltener says, is figuring out whether dolphins prefer to rub specific body parts on specific corals in such an “underwater spa.”
Deep in the human brain, a very specific kind of cell dies during Parkinson’s disease.
For the first time, researchers have sorted large numbers of human brain cells in the substantia nigra into 10 distinct types. Just one is especially vulnerable in Parkinson’s disease, the team reports May 5 in Nature Neuroscience. The result could lead to a clearer view of how Parkinson’s takes hold, and perhaps even ways to stop it.
The new research “goes right to the core of the matter,” says neuroscientist Raj Awatramani of Northwestern University Feinberg School of Medicine in Chicago. Pinpointing the brain cells that seem to be especially susceptible to the devastating disease is “the strength of this paper,” says Awatramani, who was not involved in the study.
Parkinson’s disease steals people’s ability to move smoothly, leaving balance problems, tremors and rigidity. In the United States, nearly 1 million people are estimated to have Parkinson’s. Scientists have known for decades that these symptoms come with the death of nerve cells in the substantia nigra. Neurons there churn out dopamine, a chemical signal involved in movement, among other jobs (SN: 9/7/17).
But those dopamine-making neurons are not all equally vulnerable in Parkinson’s, it turns out.
“This seemed like an opportunity to … really clarify which kinds of cells are actually dying in Parkinson’s disease,” says Evan Macosko, a psychiatrist and neuroscientist at Massachusetts General Hospital in Boston and the Broad Institute of MIT and Harvard. The tricky part was that dopamine-making neurons in the substantia nigra are rare. In samples of postmortem brains, “we couldn’t survey enough of [the cells] to really get an answer,” Macosko says. But Abdulraouf Abdulraouf, a researcher in Macosko’s laboratory, led experiments that sorted these cells, figuring out a way to selectively pull the cells’ nuclei out from the rest of the cells present in the substantia nigra. That enrichment ultimately led to an abundance of nuclei to analyze.
By studying over 15,000 nuclei from the brains of eight formerly healthy people, the researchers further sorted dopamine-making cells in the substantia nigra into 10 distinct groups. Each of these cell groups was defined by a specific brain location and certain combinations of genes that were active.
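For readers curious about the mechanics, grouping nuclei this way is usually done by clustering single-nucleus RNA profiles. Below is a generic sketch using the scanpy toolkit (a standard workflow with a hypothetical filename, not the authors’ published pipeline):

```python
import scanpy as sc

# Generic single-nucleus RNA-seq clustering sketch (standard scanpy workflow,
# not the study's exact pipeline). Assumes a counts matrix of nuclei x genes
# stored in an .h5ad file; the filename is hypothetical.
adata = sc.read_h5ad("substantia_nigra_nuclei.h5ad")

sc.pp.normalize_total(adata, target_sum=1e4)   # depth-normalize each nucleus
sc.pp.log1p(adata)                             # variance-stabilizing log transform
sc.pp.highly_variable_genes(adata, n_top_genes=2000)
adata = adata[:, adata.var.highly_variable]    # keep the informative genes

sc.pp.pca(adata, n_comps=50)                   # compress profiles to 50 components
sc.pp.neighbors(adata)                         # build a nearest-neighbor graph
sc.tl.leiden(adata, resolution=0.5)            # graph clustering into cell groups

# Rank genes that distinguish each cluster; a marker such as AGTR1 is how
# a cluster comes to be labeled as a distinct cell type.
sc.tl.rank_genes_groups(adata, groupby="leiden")
```

Clusters found this way are then tagged by their most distinctive active genes, which is how a gene like AGTR1 can come to define one vulnerable cell type.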
When the researchers looked at substantia nigra neurons in the brains of people who died with either Parkinson’s disease or the related Lewy body dementia, the team noticed something curious: One of these 10 cell types was drastically diminished.
These missing neurons were identified by their location in the lower part of the substantia nigra and an active AGTR1 gene, lab member Tushar Kamath and colleagues found. That gene was thought to serve simply as a good way to identify these cells, Macosko says; researchers don’t know whether the gene has a role in these dopamine-making cells’ fate in people.
The new finding points to possible ways of countering the debilitating diseases. Scientists have been keen to replace the missing dopamine-making neurons in the brains of people with Parkinson’s. The new study shows what those cells would need to look like, Awatramani says. “If a particular subtype is more vulnerable in Parkinson’s disease, maybe that’s the one we should be trying to replace,” he says.
In fact, Macosko says that stem cell scientists have already been in contact, eager to make these specific cells. “We hope this is a guidepost,” Macosko says.
The new study involved only a small number of human brains. Going forward, Macosko and his colleagues hope to study more brains, and more parts of those brains. “We were able to get some pretty interesting insights with a relatively small number of people,” he says. “When we get to larger numbers of people with other kinds of diseases, I think we’re going to learn a lot.”
“Fungi Fridays” could save a lot of trees — and take a bite out of greenhouse gas emissions. Eating one-fifth less red meat and instead munching on microbial proteins derived from fungi or algae could cut annual deforestation in half by 2050, researchers report May 5 in Nature.
Raising cattle and other ruminants contributes methane and nitrous oxide to the atmosphere, while clearing forests for pasture lands adds carbon dioxide (SN: 4/4/22; SN: 7/13/21). So the hunt is on for environmentally friendly substitutes, such as lab-grown hamburgers and cricket farming (SN: 9/20/18; SN: 5/2/19).
Another alternative is microbial protein, made from cells cultivated in a laboratory and nurtured with glucose. Fermented fungal spores, for example, produce a dense, doughy substance called mycoprotein, while fermented algae produce spirulina, a dietary supplement. Cell-cultured foods do require sugar from croplands, but studies show that mycoprotein produces fewer greenhouse gas emissions and uses less land and water than raising cattle, says Florian Humpenöder, a climate modeler at Potsdam Institute for Climate Impact Research in Germany. However, a full comparison of foods’ future environmental impacts also requires accounting for changes in population, lifestyle, dietary patterns and technology, he says.
So Humpenöder and colleagues incorporated projected socioeconomic changes into computer simulations of land use and deforestation from 2020 through 2050. Then they simulated four scenarios, substituting microbial protein for 0 percent, 20 percent, 50 percent or 80 percent of the global red meat diet by 2050. A little substitution went a long way, the team found: Just 20 percent microbial protein substitution cut annual deforestation rates — and associated CO2 emissions — by 56 percent from 2020 to 2050.
Eating more microbial proteins could be part of a portfolio of strategies to address the climate and biodiversity crises — alongside measures to protect forests and decarbonize electricity generation, Humpenöder says.
Young kids’ brains are especially tuned to their mothers’ voices. Teenagers’ brains, in their typical rebellious glory, are most decidedly not.
That conclusion, described April 28 in the Journal of Neuroscience, may seem laughably obvious to parents of teenagers, including neuroscientist Daniel Abrams of Stanford University School of Medicine. “I have two teenaged boys myself, and it’s a kind of funny result,” he says.
But the finding may reflect something much deeper than a punch line. As kids grow up and expand their social connections beyond their family, their brains need to be attuned to that growing world. “Just as an infant is tuned into a mom, adolescents have this whole other class of sounds and voices that they need to tune into,” Abrams says. He and his colleagues scanned the brains of 7- to 16-year-olds as they heard the voices of either their mothers or unfamiliar women. To simplify the experiment down to just the sound of a voice, the words were gibberish: teebudieshawlt, keebudieshawlt and peebudieshawlt. As the children and teenagers listened, certain parts of their brains became active.
Previous experiments by Abrams and his colleagues have shown that certain regions of the brains of kids ages 7 to 12 — particularly those parts involved in detecting rewards and paying attention — respond more strongly to mom’s voice than to a voice of an unknown woman. “In adolescence, we show the exact opposite of that,” Abrams says.
In these same brain regions in teens, unfamiliar voices elicited greater responses than the voices of their own dear mothers. The shift from mother to other seems to happen between ages 13 and 14.
It’s not that these adolescent brain areas stop responding to mom, Abrams says. Rather, the unfamiliar voices become more rewarding and worthy of attention.
And that’s exactly how it should be, Abrams says. Exploring new people and situations is a hallmark of adolescence. “What we’re seeing here is just purely a reflection of this phenomenon.”
Voices can carry powerful signals. When stressed-out girls heard their moms’ voices on the phone, the girls’ stress hormones dropped, biological anthropologist Leslie Seltzer of the University of Wisconsin–Madison and colleagues found in 2011 (SN: 8/12/11). The same was not true for texts from their mothers.
The current results support the idea that the brain changes to reflect new needs that come with time and experience, Seltzer says. “As we mature, our survival depends less and less on maternal support and more on our group affiliations with peers.”
It’s not clear how universal this neural shift is. The finding might change across various mother-child relationships, including those that have different parenting styles, or even a history of neglect or abuse, Seltzer says.
So while teenagers and parents may sometimes feel frustrated by missed messages, take heart, Abrams says. “This is the way the brain is wired, and there’s a good reason for it.”
Roughly 400 million years before founding father Benjamin Franklin invented bifocals, the now extinct trilobite Dalmanitina socialis already had a superior version (SN: 2/2/74). Not only could the sea critter see things both near and far, it could also see both distances in focus at the same time — an ability that eludes most eyes and cameras.
Now, a new type of camera sees the world the way this trilobite did. Inspired by D. socialis’s eyes, the camera can simultaneously focus on two points anywhere between three centimeters and nearly two kilometers away, researchers report April 19 in Nature Communications. “In optics, there was a problem,” says Amit Agrawal, a physicist at the National Institute of Standards and Technology in Gaithersburg, Md. If you wanted to focus a single lens to two different points, you simply could not do it, he says.
If a camera could see like a trilobite, Agrawal figured, it could capture high-quality images with a greater depth of field. A large depth of field — the distance between the nearest and farthest points that a camera can bring into focus — is important for the relatively new technique of light-field photography, which uses many tiny lenses to produce 3-D photos.
To mimic the trilobite’s ability, the team constructed a metalens, a type of flat lens made up of millions of differently sized rectangular nanopillars arranged like a cityscape — if skyscrapers were one two-hundredth the width of a human hair. The nanopillars act as obstacles that bend light in different ways depending on their shape, size and arrangement. The researchers arranged the pillars so some light traveled through one part of the lens and some light through another, creating two different focal points. To use the device in a light-field camera, the team then built an array of identical metalenses that could capture thousands of tiny images. When combined, the result is an image that’s in focus close-up and far away, but blurry in between. The blurry bits are then sharpened with a type of machine learning computer program.
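One simple way to get the flavor of a bifocal metalens numerically (a minimal sketch with made-up parameters, not the NIST team’s actual nanopillar design) is to interleave two ideal flat-lens phase profiles across a single aperture:

```python
import numpy as np

# Bifocal flat-lens sketch: alternate fine radial zones between two ideal
# hyperbolic lens phase profiles, one focusing at f1, the other at f2.
# All parameters are illustrative, not the published design.
wavelength = 550e-9                 # meters (green light)
f1, f2 = 3e-2, 2e3                  # two focal distances: 3 cm and ~2 km
r = np.linspace(0, 0.5e-3, 2001)    # radial positions across a 1-mm aperture

def lens_phase(r, f):
    """Ideal phase (radians) for a flat lens of focal length f."""
    return -2 * np.pi / wavelength * (np.sqrt(r**2 + f**2) - f)

# Even-indexed zones implement the near focus, odd-indexed zones the far one.
zones = np.arange(r.size) % 2 == 0
phase = np.where(zones, lens_phase(r, f1), lens_phase(r, f2))

# A nanopillar can only impose a phase between 0 and 2*pi, so wrap the
# profile; the pillar shape at each position would be chosen to realize it.
phase_wrapped = np.mod(phase, 2 * np.pi)
print(phase_wrapped[:4])
```

Splitting the aperture between two profiles sacrifices some light at each focus, but it yields the two sharp focal planes whose in-between blur the machine learning step then cleans up.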
Achieving a large depth of field can help the program recover depth information, says Ivo Ihrke, a computational imaging scientist at the University of Siegen in Germany who was not involved with this research. Standard images don’t contain information about the distances to objects in the photo, but 3-D images do. So the more depth information that can be captured, the better.
The trilobite approach isn’t the only way to boost the range of visual acuity. Other cameras using a different method have accomplished a similar depth of field, Ihrke says. For instance, a light-field camera made by the company Raytrix contains an array of tiny glass lenses of three different types that work in concert, with each type tailored to focus light from a particular distance. The trilobite way also uses an array of lenses, but all the lenses are the same, each one capable of doing all the depth-of-focus work on its own — which helps achieve a slightly higher resolution than using different types of lenses.
Regardless of how it’s done, all the recent advances in capturing depth with light-field cameras will improve imaging techniques that depend on that depth, Agrawal says. These techniques could someday help self-driving cars to track distances to other vehicles, for example, or Mars rovers to gauge distances to and sizes of landmarks in their vicinity.
Leonardo da Vinci, the multitalented Renaissance genius, wrote down his “rule of trees” more than 500 years ago. It described the way he thought that trees branch. Though it was a brilliant insight that helped him draw realistic landscapes, Leonardo’s rule breaks down for many types of trees. Now, a new branching rule — dubbed “Leonardo-like” — works for virtually any leafy tree, researchers report in a paper accepted April 13 in Physical Review E.
“The older Leonardo rule describes the thickness of the branches, while the length of the branch was not taken into account,” says physicist Sergey Grigoriev of the Petersburg Nuclear Physics Institute in Gatchina, Russia. “Therefore, the description using the older rule is not complete.” Leonardo’s rule says that the thickness of a limb before it branches into smaller ones is the same as the combined thickness of the limbs sprouting from it (SN: 6/1/11). But according to Grigoriev and his colleagues, it’s the surface area that stays the same.
Using surface area as a guide, the new rule incorporates limb widths and lengths, and predicts that long branches end up being thinner than short ones. Unlike Leonardo’s guess, the updated rule works for slender birches as well as it does for sturdy oaks, the team reports.
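Written out (with the new rule given as an illustrative reading of the surface-area claim, since the paper’s exact formulation may differ): for a parent limb of diameter $D$ and length $L$ splitting into daughter branches of diameters $d_i$ and lengths $\ell_i$,

$$\text{Leonardo:}\quad D^{2} = \sum_i d_i^{2}, \qquad\qquad \text{Leonardo-like:}\quad D\,L = \sum_i d_i\,\ell_i.$$

The first conserves cross-sectional area; the second conserves the lateral (bark) surface area, so a daughter branch that runs long must stay thin to keep the budget balanced.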
The connection between the surface area of branches and overall tree structure shows that it’s the living, outer layers that guide tree structure, the researchers say. “The life of a tree flows according to the laws of conservation of area in two-dimensional space,” the authors write in their study, “as if the tree were a two-dimensional object.” In other words, it’s as if just two dimensions — the width of each limb and the distance between branchings on a limb — determine any tree’s structure. As a result, when trees are rendered in two dimensions in a painting or on a screen, the new rule describes them particularly well.

The new Leonardo-like rule is an improvement, says Katherine McCulloh, a botanist at the University of Wisconsin–Madison who was not involved with this study. But she has her doubts about the Russian group’s rationale for it. In most trees, she says, the living portion extends much deeper than the thin surface layer.
“It’s really species-dependent, and even age-dependent,” McCulloh says. “A giant, old oak tree might have a centimeter of living wood … [but] there are certainly tropical tree species that have very deep sapwood and may have living wood for most of their cross sections.”
Still, the fact that the Leonardo-like rule appears to hold for many trees intrigues McCulloh. “To me, it drives home the question of why are [trees] conserving this geometry for their external tissue, and how is that related to the microscopic level differences that we observe in wood,” she says. “It’s a really interesting question.”
To test their rule, Grigoriev and colleagues took photographs of trees from a variety of species and analyzed the branches to confirm that the real-world patterns matched the predictions. The photos offer “a direct measurement of the characteristics of a tree without touching it, which can be important when dealing with a living object,” Grigoriev says.
Though the team hasn’t studied evergreens yet, the rule holds for all of the deciduous trees that the researchers have looked at. “We have applied our methodology to maple, linden, apple,” Grigoriev says, in addition to oak, birch and chestnut. “They show the same general structure and obey the Leonardo-like rule.”
While it’s possible to confirm the rule by measuring branches by hand, it would require climbing into trees and checking all the limbs — a risky exercise for trees and scientists alike. “Note,” the researchers write, “that not a single tree was harmed during these experiments.”