A man with a hole in his forehead, who was interred in what’s now northwest Alabama between around 3,000 and 5,000 years ago, represents North America’s oldest known case of skull surgery.
Damage around the man’s oval skull opening indicates that someone scraped out that piece of bone, probably to reduce brain swelling caused by a violent attack or a serious fall, said bioarchaeologist Diana Simpson of the University of Nevada, Las Vegas. Either scenario could explain fractures and other injuries above the man’s left eye and to his left arm, leg and collarbone.
Bone regrowth on the edges of the skull opening indicates that the man lived for up to one year after surgery, Simpson estimated. She presented her analysis of the man’s remains on March 28 at a virtual session of the annual meeting of the American Association of Biological Anthropologists. Skull surgery occurred as early as 13,000 years ago in North Africa (SN: 8/17/11). Until now, the oldest evidence of this practice in North America dated to no more than roughly 1,000 years ago.
In his prime, the new record holder likely served as a ritual practitioner or shaman. His grave included items like those found in shamans’ graves at nearby North American hunter-gatherer sites dating to between about 3,000 and 5,000 years ago. Ritual objects buried with him included sharpened bone pins and modified deer and turkey bones that may have been tattooing tools (SN: 5/25/21).
Investigators excavated the man’s grave and 162 others at the Little Bear Creek Site, a seashell-covered burial mound, in the 1940s. Simpson studied the man’s museum-held skeleton and grave items in 2018, shortly before the remains and artifacts were returned to local Native American communities for reburial.
Science, some would say, is an enterprise that should concern itself solely with cold, hard facts. Flights of imagination should be the province of philosophers and poets.
On the other hand, as Albert Einstein so astutely observed, “Imagination is more important than knowledge.” Knowledge, he said, is limited to what we know now, while “imagination embraces the entire world, stimulating progress.”
So it is with science: imagination has often been the prelude to transformative advances in knowledge, remaking humankind’s understanding of the world and enabling powerful new technologies. And yet while sometimes spectacularly successful, imagination has also frequently failed in ways that slow the uncovering of nature’s secrets. Some minds, it seems, are simply incapable of imagining that there’s more to reality than what they already know.
On many occasions scientists have failed to foresee ways of testing novel ideas, ridiculing them as unverifiable and therefore unscientific. Consequently it is not too challenging to come up with enough failures of scientific imagination to compile a Top 10 list, beginning with:
Atoms
By the middle of the 19th century, most scientists believed in atoms. Chemists especially. John Dalton had shown that the simple ratios of different elements making up chemical compounds strongly implied that each element consisted of identical tiny particles. Subsequent research on the weights of those atoms made their reality pretty hard to dispute. But that didn’t deter physicist-philosopher Ernst Mach. Even as late as the beginning of the 20th century, he and a number of others insisted that atoms could not be real, as they were not accessible to the senses. Mach believed that atoms were a “mental artifice,” convenient fictions that helped in calculating the outcomes of chemical reactions. “Have you ever seen one?” he would ask.
Apart from the fallacy of defining reality as “observable,” Mach’s main failure was his inability to imagine a way that atoms could be observed. Even after Einstein proved the existence of atoms by indirect means in 1905, Mach stood his ground. He was unaware, of course, of the 20th century technologies that quantum mechanics would enable, and so did not foresee powerful new microscopes that could show actual images of atoms (and allow a certain computing company to drag them around to spell out IBM).
Composition of stars
Mach’s views were similar to those of Auguste Comte, a French philosopher who originated the idea of positivism, which denies reality to anything other than objects of sensory experience. Comte’s philosophy led (and in some cases still leads) many scientists astray. His greatest failure of imagination was an example he offered for what science could never know: the chemical composition of the stars.
Unable to imagine anybody affording a ticket on some entrepreneur’s space rocket, Comte argued in 1835 that the identity of the stars’ components would forever remain beyond human knowledge. We could study their size, shapes and movements, he said, “whereas we would never know how to study by any means their chemical composition, or their mineralogical structure,” or for that matter, their temperature, which “will necessarily always be concealed from us.”
Within a few decades, though, a newfangled technology called spectroscopy enabled astronomers to analyze the colors of light emitted by stars. And since each chemical element emits (or absorbs) precise colors (or frequencies) of light, each set of colors is like a chemical fingerprint, an infallible indicator for an element’s identity. Using a spectroscope to observe starlight therefore can reveal the chemistry of the stars, exactly what Comte thought impossible.
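To make the fingerprint idea concrete, here is a minimal sketch of how measured line positions in a spectrum can be matched against known elements. The line list is a tiny, illustrative subset of well-known laboratory wavelengths, and the observed values are hypothetical; real analyses use catalogs of thousands of lines and account for Doppler shifts and blended features.

```python
# Minimal sketch of the "chemical fingerprint" idea: compare wavelengths of
# absorption lines measured in a star's spectrum with laboratory wavelengths
# of known elements. Line lists here are tiny, illustrative subsets (in nm).
KNOWN_LINES_NM = {
    "hydrogen": [656.3, 486.1, 434.0],   # Balmer lines
    "sodium":   [589.0, 589.6],          # sodium D doublet
    "calcium":  [393.4, 396.8],          # Ca II K and H
}

def identify_elements(observed_lines_nm, tolerance_nm=0.3):
    """Return elements whose laboratory lines all appear among the observed lines."""
    found = []
    for element, lab_lines in KNOWN_LINES_NM.items():
        if all(any(abs(obs - lab) < tolerance_nm for obs in observed_lines_nm)
               for lab in lab_lines):
            found.append(element)
    return found

# Hypothetical line positions measured from a stellar spectrum:
print(identify_elements([656.4, 589.1, 589.5, 486.0, 434.1]))  # ['hydrogen', 'sodium']
```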
Canals on Mars
Sometimes imagination fails because of its overabundance rather than absence. In the case of the never-ending drama over the possibility of life on Mars, that planet’s famous canals turned out to be figments of overactive scientific imagination.
First “observed” in the late 19th century, the Martian canals showed up as streaks on the planet’s surface, described as canali by Italian astronomer Giovanni Schiaparelli. Canali is, however, Italian for channels, not canals. So in this case something was gained (rather than lost) in translation — the idea that Mars was inhabited. “Canals are dug,” remarked British astronomer Norman Lockyer in 1901, “ergo there were diggers.” Soon astronomers imagined an elaborate system of canals transporting water from Martian poles to thirsty metropolitan areas and agricultural centers. (Some observers even imagined seeing canals on Venus and Mercury.) With more constrained imaginations, aided by better telescopes and translations, belief in the Martian canals eventually faded. It was merely the Martian winds blowing dust (bright) and sand (dark) around the surface in ways that occasionally made bright and dark streaks line up in a deceptive manner — to eyes attached to overly imaginative brains.
Nuclear fission
In 1934, Italian physicist Enrico Fermi bombarded uranium (atomic number 92) and other elements with neutrons, the particle discovered just two years earlier by James Chadwick. Fermi found that among the products was an unidentifiable new element. He thought he had created element 93, heavier than uranium. He could not imagine any other explanation. In 1938 Fermi was awarded the Nobel Prize in physics for demonstrating “the existence of new radioactive elements produced by neutron irradiation.”
It turned out, however, that Fermi had unwittingly demonstrated nuclear fission. His bombardment products were actually lighter, previously known elements — fragments split from the heavy uranium nucleus. Of course, the scientists later credited with discovering fission, Otto Hahn and Fritz Strassmann, didn’t understand their results either. Hahn’s former collaborator Lise Meitner was the one who explained what they’d done. Another woman, chemist Ida Noddack, had imagined the possibility of fission to explain Fermi’s results, but for some reason nobody listened to her.
Detecting neutrinos
In the 1920s, most physicists had convinced themselves that nature was built from just two basic particles: positively charged protons and negatively charged electrons. Some had, however, imagined the possibility of a particle with no electric charge. One specific proposal for such a particle came in 1930 from Austrian physicist Wolfgang Pauli. He suggested that a no-charge particle could explain a suspicious loss of energy observed in beta-particle radioactivity. Pauli’s idea was worked out mathematically by Fermi, who named the neutral particle the neutrino. Fermi’s math was then examined by physicists Hans Bethe and Rudolf Peierls, who deduced that the neutrino would zip through matter so easily that there was no imaginable way of detecting its existence (short of building a tank of liquid hydrogen 6 million billion miles wide). “There is no practically possible way of observing the neutrino,” Bethe and Peierls concluded.
But they had failed to imagine the possibility of finding a source of huge numbers of high-energy neutrinos, so that a few could be captured even if almost all escaped. No such source was known until nuclear fission reactors were invented. In the 1950s, Frederick Reines and Clyde Cowan used reactors to definitively establish the neutrino’s existence. Reines later said he sought a way to detect the neutrino precisely because everybody had told him it couldn’t be done.
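A rough back-of-the-envelope estimate shows why Bethe and Peierls were so pessimistic, and why flooding a detector with reactor neutrinos changes the odds. This is a sketch with assumed, order-of-magnitude numbers, not their original calculation; the cross section in particular is only illustrative.

```python
# Sketch: a neutrino's mean free path in matter is 1 / (n * sigma), where n is
# the number density of target protons and sigma the interaction cross section.
# The cross section below is an assumed order-of-magnitude value for MeV-scale
# neutrinos, not a precise figure.
AVOGADRO = 6.022e23
density_lh2_g_cm3 = 0.071                            # liquid hydrogen
n_protons_per_cm3 = density_lh2_g_cm3 * AVOGADRO     # about 4e22 per cm^3
sigma_cm2 = 1e-44                                    # assumed cross section

mean_free_path_cm = 1.0 / (n_protons_per_cm3 * sigma_cm2)
light_year_cm = 9.46e17
print(f"mean free path: roughly {mean_free_path_cm / light_year_cm:,.0f} "
      "light-years of liquid hydrogen")
```

With a mean free path measured in thousands of light-years, any individual neutrino almost certainly passes through a detector untouched; the workaround Reines and Cowan exploited was to send astronomical numbers of them through at once.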
Nuclear energy
Ernest Rutherford, one of the 20th century’s greatest experimental physicists, was not exactly unimaginative. He imagined the existence of the neutron a dozen years before it was discovered, and he figured out that a weird experiment conducted by his assistants had revealed that atoms contained a dense central nucleus. It was clear that the atomic nucleus packed an enormous quantity of energy, but Rutherford could imagine no way to extract that energy for practical purposes. In 1933, at a meeting of the British Association for the Advancement of Science, he noted that although the nucleus contained a lot of energy, it would also require energy to release it. Anyone saying we can exploit atomic energy “is talking moonshine,” Rutherford declared. To be fair, Rutherford qualified the moonshine remark by saying “with our present knowledge,” so in a way he perhaps was anticipating the discovery of nuclear fission a few years later. (And some historians have suggested that Rutherford did imagine the powerful release of nuclear energy, but thought it was a bad idea and wanted to discourage people from attempting it.)
Age of the Earth
Rutherford’s reputation for imagination was bolstered by his inference that radioactive matter deep underground could solve the mystery of the age of the Earth. In the mid-19th century, William Thomson (later known as Lord Kelvin) calculated the Earth’s age to be something a little more than 100 million years, and possibly much less. Geologists insisted that the Earth must be much older — perhaps billions of years — to account for the planet’s geological features.
Kelvin calculated his estimate assuming the Earth was born as a molten rocky mass that then cooled to its present temperature. But following the discovery of radioactivity at the end of the 19th century, Rutherford pointed out that it provided a new source of heat in the Earth’s interior. While giving a talk (in Kelvin’s presence), Rutherford suggested that Kelvin had basically prophesied a new source of planetary heat.
While Kelvin’s neglect of radioactivity is the standard story, a more thorough analysis shows that adding that heat to his math would not have changed his estimate very much. Rather, Kelvin’s mistake was assuming the interior to be rigid. John Perry (one of Kelvin’s former assistants) showed in 1895 that the flow of heat deep within the Earth’s interior would alter Kelvin’s calculations considerably — enough to allow the Earth to be billions of years old. It turned out that the Earth’s mantle flows on long time scales, which not only resolves the age puzzle but also underlies plate tectonics.
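For readers who want to see where a number like Kelvin’s comes from, here is a minimal sketch of the classic conductive-cooling estimate. The inputs are round, illustrative values chosen to land near Kelvin’s figure, not his actual numbers.

```python
import math

# A half-space of rock cooling from a uniform initial temperature T0 develops a
# surface temperature gradient of T0 / sqrt(pi * kappa * t). Measuring today's
# geothermal gradient therefore gives an age t = T0^2 / (pi * kappa * gradient^2).
T0 = 3000.0        # assumed initial temperature of molten rock, kelvin
kappa = 1e-6       # thermal diffusivity of rock, m^2/s
gradient = 0.03    # present-day near-surface geothermal gradient, K/m (30 K per km)

age_seconds = T0**2 / (math.pi * kappa * gradient**2)
age_myr = age_seconds / (3.156e7 * 1e6)       # seconds per year, times a million
print(f"conductive-cooling age estimate: about {age_myr:.0f} million years")
```

Perry’s point, in modern terms, is that a slowly convecting mantle keeps ferrying heat toward the surface, so a steep near-surface gradient can persist for billions of years; the purely conductive formula above then wildly underestimates the planet’s true age.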
Charge-parity violation
Before the mid-1950s, nobody imagined that the laws of physics gave a hoot about handedness. The same laws should govern matter in action when viewed straight-on or in a mirror, just as the rules of baseball applied equally to Ted Williams and Willie Mays, not to mention Mickey Mantle. But in 1956 physicists Tsung-Dao Lee and Chen Ning Yang suggested that perfect right-left symmetry (or “parity”) might be violated by the weak nuclear force, and experiments soon confirmed their suspicion.
Restoring sanity to nature, many physicists thought, required antimatter. If you just switched left with right (mirror image), some subatomic processes exhibited a preferred handedness. But if you also replaced matter with antimatter (switching electric charge), left-right balance would be restored. In other words, reversing both charge (C) and parity (P) left nature’s behavior unchanged, a principle known as CP symmetry. CP symmetry had to be perfectly exact; otherwise nature’s laws would change if you went backward (instead of forward) in time, and nobody could imagine that.
In the early 1960s, James Cronin and Val Fitch tested CP symmetry’s perfection by studying subatomic particles called kaons and their antimatter counterparts. Kaons and antikaons both have zero charge but are not identical, because they are made from different quarks. Thanks to the quirky rules of quantum mechanics, kaons can turn into antikaons and vice versa. If CP symmetry is exact, each should turn into the other equally often. But Cronin and Fitch found that antikaons turn into kaons more often than the other way around. And that implied that nature’s laws allowed a preferred direction of time. “People didn’t want to believe it,” Cronin said in a 1999 interview. Most physicists do believe it today, but the implications of CP violation for the nature of time and other cosmic questions remain mysterious.
Behaviorism versus the brain
In the early 20th century, the dogma of behaviorism, initiated by John Watson and championed a little later by B.F. Skinner, ensnared psychologists in a paradigm that literally excised imagination from science. The brain — site of all imagination — is a “black box,” the behaviorists insisted. Rules of human psychology (mostly inferred from experiments with rats and pigeons) could be scientifically established only by observing behavior. It was scientifically meaningless to inquire into the inner workings of the brain that directed such behavior, as those workings were in principle inaccessible to human observation. In other words, activity inside the brain was deemed scientifically irrelevant because it could not be observed. “When what a person does [is] attributed to what is going on inside him,” Skinner proclaimed, “investigation is brought to an end.”
Skinner’s behaviorist BS brainwashed a generation or two of followers into thinking the brain was beyond study. But fortunately for neuroscience, some physicists foresaw methods for observing neural activity in the brain without splitting the skull open, exhibiting imagination that the behaviorists lacked. In the 1970s Michel Ter-Pogossian, Michael Phelps and colleagues developed PET (positron emission tomography) scanning technology, which uses radioactive tracers to monitor brain activity. PET scanning is now complemented by magnetic resonance imaging, based on ideas developed in the 1930s and 1940s by physicists I.I. Rabi, Edward Purcell and Felix Bloch.
Gravitational waves
Nowadays astrophysicists are all agog about gravitational waves, which can reveal all sorts of secrets about what goes on in the distant universe. All hail Einstein, whose theory of gravity — general relativity — explains the waves’ existence. But Einstein was not the first to propose the idea. In the 19th century, James Clerk Maxwell devised the math explaining electromagnetic waves, and speculated that gravity might similarly induce waves in a gravitational field. He couldn’t figure out how, though. Later other scientists, including Oliver Heaviside and Henri Poincaré, speculated about gravity waves. So the possibility of their existence certainly had been imagined.
But many physicists doubted that the waves existed, or if they did, could not imagine any way of proving it. Shortly before Einstein completed his general relativity theory, German physicist Gustav Mie declared that “the gravitational radiation emitted … by any oscillating mass particle is so extraordinarily weak that it is unthinkable ever to detect it by any means whatsoever.” Even Einstein had no idea how to detect gravitational waves, although he worked out the math describing them in a 1918 paper. In 1936 he decided that general relativity did not predict gravitational waves at all. But the paper rejecting them was simply wrong. As it turned out, of course, gravitational waves are real and can be detected. At first they were verified indirectly, by the diminishing distance between mutually orbiting pulsars. And more recently they were directly detected by huge experiments relying on lasers. Nobody had been able to imagine detecting gravitational waves a century ago because nobody had imagined the existence of pulsars or lasers.
All these failures show how prejudice can sometimes dull the imagination. But they also show how an imagination failure can inspire the quest for a new success. And that’s why science, so often detoured by dogma, still manages somehow, on long enough time scales, to provide technological wonders and cosmic insights beyond philosophers’ and poets’ wildest imagination.
It was one of the biggest climate change questions of the early 2000s: Had the planet’s rising fever stalled, even as humans pumped more heat-trapping gases into Earth’s atmosphere?
By the turn of the century, the scientific understanding of climate change was on firm footing. Decades of research showed that carbon dioxide was accumulating in Earth’s atmosphere, thanks to human activities like burning fossil fuels and cutting down carbon-storing forests, and that global temperatures were rising as a result. Yet weather records seemed to show that global warming slowed between around 1998 and 2012. How could that be? After careful study, scientists found the apparent pause to be a hiccup in the data. Earth had, in fact, continued to warm. This hiccup, though, prompted an outsize response from climate skeptics and scientists. It serves as a case study for how public perception shapes what science gets done, for better or worse.
The mystery of what came to be called the “global warming hiatus” arose as scientists built up, year after year, data on the planet’s average surface temperature. Several organizations maintain their own temperature datasets; each relies on observations gathered at weather stations and from ships and buoys around the globe. The actual amount of warming varies from year to year, but overall the trend is going up, and record-hot years are becoming more common. The 1995 Intergovernmental Panel on Climate Change report, for instance, noted that recent years had been among the warmest recorded since 1860.
And then came the powerful El Niño of 1997–1998, a weather pattern that transferred large amounts of heat from the ocean into the atmosphere. The planet’s temperature soared as a result — but then, according to the weather records, it appeared to slacken dramatically. Between 1998 and 2012, the global average surface temperature rose at less than half the rate it did between 1951 and 2012. That didn’t make sense. Global warming should be accelerating over time as people ramp up the rate at which they add heat-trapping gases to the atmosphere. By the mid-2000s, climate skeptics had seized on the narrative that “global warming has stopped.” Most professional climate scientists were not studying the phenomenon, since most believed the apparent pause fell within the range of natural temperature variability. But public attention soon caught up to them, and researchers began investigating whether the pause was a real thing. It was a high-profile shift in scientific focus.
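The “pause” claim ultimately rests on comparing least-squares warming rates over different time windows. Here is a minimal sketch of that comparison; the temperature series is synthetic placeholder data standing in for a real annual global mean surface temperature record.

```python
import numpy as np

# Synthetic stand-in for annual global mean temperature anomalies (deg C):
# a steady warming trend plus year-to-year noise. Swap in a real dataset to
# reproduce the published comparisons.
years = np.arange(1951, 2013)
rng = np.random.default_rng(0)
anomaly_c = 0.012 * (years - 1951) + rng.normal(0.0, 0.1, years.size)

def trend_per_decade(years, temps, start, end):
    """Least-squares warming rate over [start, end], in degrees C per decade."""
    mask = (years >= start) & (years <= end)
    slope_per_year = np.polyfit(years[mask], temps[mask], 1)[0]
    return 10.0 * slope_per_year

print(f"1951-2012 trend: {trend_per_decade(years, anomaly_c, 1951, 2012):.2f} C/decade")
print(f"1998-2012 trend: {trend_per_decade(years, anomaly_c, 1998, 2012):.2f} C/decade")
```

Fitted over a short, noisy window like 1998 to 2012, the rate can swing well above or below the long-term trend from chance alone, which is one reason many scientists initially chalked the apparent slowdown up to natural variability.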
“In studying that anomalous period, we learned a lot of lessons about both the climate system and the scientific process,” says Zeke Hausfather, a climate scientist now with the technology company Stripe.
By the early 2010s, scientists were busily working to explain why the global temperature records seemed to be flatlining. Ideas included the contribution of cooling sulfur particles emitted by coal-burning power plants and heat being taken up by the Atlantic and Southern oceans. Such studies were the most focused attempt ever to understand the factors that drive year-to-year temperature variability. They revealed how much natural variability can be expected when factors such as a powerful El Niño are superimposed onto a long-term warming trend.
Scientists spent years investigating the purported warming pause — devoting more time and resources than they otherwise might have. So many papers were published on the apparent pause that scientists began joking that the journal Nature Climate Change should change its name to Nature Hiatus. Then in 2015, a team led by researchers at the U.S. National Oceanic and Atmospheric Administration published a jaw-dropping conclusion in the journal Science. The rise in global temperatures had not plateaued; rather, incomplete data had obscured ongoing global warming. When more Arctic temperature records were included and biases in ocean temperature data were corrected, the NOAA dataset showed the heat-up continuing. With the newly corrected data, the apparent pause in global warming vanished. A 2017 study led by Hausfather confirmed and extended these findings, as did other reports.
Even after these studies were published, the hiatus remained a favored topic among climate skeptics, who used it to argue that concern over global warming was overblown. Congressman Lamar Smith, a Republican from Texas who chaired the House of Representatives’ science committee in the mid-2010s, was particularly incensed by the 2015 NOAA study. He demanded to see the underlying data while also accusing NOAA of altering it. (The agency denied fudging the data.)
“In retrospect, it is clear that we focused too much on the apparent hiatus,” Hausfather says. Figuring out why global temperature records seemed to plateau between 1998 and 2012 is important — but so is keeping a big-picture view of the broader understanding of climate change. The hiccup represented a short fluctuation in a much longer and much more important trend. Science relies on testing hypotheses and questioning conclusions, but here’s a case where probing an anomaly was arguably taken too far. It caused researchers to doubt their conclusions and spend large amounts of time questioning their well-established methods, says Stephan Lewandowsky, a cognitive scientist at the University of Bristol who has studied climate scientists’ response to the hiatus. Scientists studying the hiatus could have been working instead on providing clear information to policy makers about the reality of global warming and the urgency of addressing it.
The debates over whether the hiatus was real or not fed public confusion and undermined efforts to convince people to take aggressive action to reduce climate change’s impacts. That’s an important lesson going forward, Lewandowsky says.
“My sense is that the scientific community has moved on,” he says. “By contrast, the political operatives behind organized denial have learned a different lesson, which is that the ‘global warming has stopped’ meme is very effective in generating public complacency, and so they will use it at every opportunity.”
Already, some climate deniers are talking about a new “pause” in global warming because not every one of the past five years has set a new record, he notes. Yet the big-picture trend remains clear: Global temperatures have continued to rise in recent years. The warmest seven years on record have all occurred since 2015, and each decade since the 1980s has been warmer than the one before.
As astronomy datasets grow larger, scientists are scouring them for black holes, hoping to better understand the exotic objects. But the drive to find more black holes is leading some astronomers astray.
“You say black holes are like a needle in a haystack, but suddenly we have way more haystacks than we did before,” says astrophysicist Kareem El-Badry of the Harvard-Smithsonian Center for Astrophysics in Cambridge, Mass. “You have better chances of finding them, but you also have more opportunities to find things that look like them.”
Two more claimed black holes have turned out to be the latter: weird things that look like them. Both are actually double-star systems at never-before-seen stages in their evolution, El-Badry and his colleagues report March 24 in Monthly Notices of the Royal Astronomical Society. The key to understanding the systems is figuring out how to interpret light coming from them, the researchers say. In early 2021, astronomer Tharindu Jayasinghe of Ohio State University and his colleagues reported finding a star system — affectionately named the Unicorn — about 1,500 light-years from Earth that they thought held a giant red star in its senior years orbiting an invisible black hole. Some of the same researchers, including Jayasinghe, later reported a second similar system, dubbed the Giraffe, found about 12,000 light-years away.
But other researchers, including El-Badry, weren’t convinced that the systems harbored black holes. So Jayasinghe, El-Badry and others combined forces to reanalyze the data.
To verify each star system’s nature, the researchers turned to stellar spectra, the rainbows that are produced when starlight is split up into its component wavelengths. Any star’s spectrum will have lines where atoms in the stellar atmosphere have absorbed particular wavelengths of light. A slow-spinning star has very sharp lines, but a fast-spinning one has blurred and smeared lines.
“If the star spins fast enough, basically all the spectral features become almost invisible,” El-Badry says. “Normally, you detect a second star in a spectrum by looking for another set of lines,” he adds. “And that’s harder to do if a star is rapidly rotating.”
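A minimal sketch of that smearing effect: convolving one sharp, made-up absorption line with a simple rotational broadening kernel (no limb darkening; all numbers illustrative) shows how the line flattens as the spin rate grows.

```python
import numpy as np

# Wavelength grid (nm) around a single absorption line at 550 nm (illustrative).
wav = np.linspace(549.0, 551.0, 2001)
line = 1.0 - 0.6 * np.exp(-0.5 * ((wav - 550.0) / 0.01) ** 2)   # sharp line, slow rotator

def rot_broaden(wavelength, flux, vsini_kms, line_center_nm):
    """Convolve a normalized spectrum with a simple rotational kernel (no limb darkening)."""
    c = 299_792.458                                             # speed of light, km/s
    dv = (wavelength[1] - wavelength[0]) / line_center_nm * c   # velocity step per pixel
    v = np.arange(-vsini_kms, vsini_kms + dv, dv)
    kernel = np.sqrt(np.clip(1.0 - (v / vsini_kms) ** 2, 0.0, None))
    kernel /= kernel.sum()
    return np.convolve(flux - 1.0, kernel, mode="same") + 1.0

broad = rot_broaden(wav, line, vsini_kms=250.0, line_center_nm=550.0)
print(f"line depth, slow rotator:       {1 - line.min():.2f}")
print(f"line depth at vsini = 250 km/s: {1 - broad.min():.2f}")
```

The total absorption stays the same, but the line’s depth collapses at high spin rates, which is how a rapidly rotating second star can hide in plain sight.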
That’s why Jayasinghe and colleagues misunderstood each of these systems initially, the team found.
“The problem was that there was not just one star, but a second one that was basically hiding,” says astrophysicist Julia Bodensteiner of the European Southern Observatory in Garching, Germany, who was not involved in the new study. The second star in each system spins very fast, which makes it difficult to see in the spectra.
What’s more, the lines in the spectrum of a star orbiting something will shift back and forth, El-Badry says. If one assumes the spectrum shows just one average, slow-spinning star in an orbit — which is what appeared to be happening in these systems at first glance — that assumption then leads to the erroneous conclusion that the star is orbiting an invisible black hole.
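That chain of reasoning runs through the standard binary mass function, which turns an orbital period and a velocity amplitude into a minimum mass for the unseen companion. Here is a sketch with illustrative numbers, not the published parameters of the Unicorn or the Giraffe.

```python
import numpy as np
from scipy.optimize import brentq

G = 6.674e-11        # gravitational constant, SI
M_SUN = 1.989e30     # solar mass, kg
DAY = 86_400.0       # seconds per day

def mass_function(period_days, k_kms):
    """Binary mass function f = P K^3 / (2 pi G) for a circular orbit, in solar masses."""
    P = period_days * DAY
    K = k_kms * 1e3
    return P * K**3 / (2.0 * np.pi * G) / M_SUN

def min_companion_mass(period_days, k_kms, m_visible):
    """Minimum companion mass (edge-on orbit, sin i = 1), in solar masses."""
    f = mass_function(period_days, k_kms)
    # Solve m2^3 / (m_visible + m2)^2 = f for m2.
    return brentq(lambda m2: m2**3 / (m_visible + m2)**2 - f, 1e-3, 100.0)

# Illustrative inputs: a 60-day orbit, a 45 km/s velocity amplitude, and a
# visible giant assumed to weigh 1 solar mass.
print(f"mass function: {mass_function(60, 45):.2f} solar masses")
print(f"minimum unseen companion: {min_companion_mass(60, 45, 1.0):.2f} solar masses")
```

If all the measured motion belongs to a single ordinary giant, the math demands a massive, dark companion. But if some of the light comes from a hidden, fast-spinning second star, the same observations are consistent with two ordinary stars, which is what the reanalysis found.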
Instead, the Unicorn and Giraffe each hold two stars, caught in a never-before-seen stage of stellar evolution, the researchers found after reanalyzing the data. Both systems contain an older red giant star with a puffy atmosphere and a “subgiant,” a star on its way to that late-life stage. The subgiants are near enough to their companion red giants that they are gravitationally stealing material from them. As these subgiants accumulate more mass, they spin faster, El-Badry says, which is what made them undetectable initially.
“Everyone was looking for really interesting black holes, but what they found is really interesting binaries,” Bodensteiner says.
These are not the only systems to trick astronomers recently. What was thought to be the nearest black hole to Earth also turned out to be a pair of stars in a rarely seen stage of evolution (SN: 3/11/22).
“Of course, it’s disappointing that what we thought were black holes were actually not, but it’s part of the process,” Jayasinghe says. He and his colleagues are still looking for black holes, he says, but with a greater awareness of how pairs of interacting stars might trick them.
It’s a weird time in the pandemic. COVID-19 cases are once again climbing in some parts of the United States, but still falling from the January surge in other places. The omicron subvariant BA.2 is now dominant in the country, accounting for more than 50 percent of new cases in the week ending March 26, according to the U.S. Centers for Disease Control and Prevention.
BA.2 has already taken parts of the world by storm, spurring large outbreaks in Europe and Asia. With the rising spread of the subvariant in the United States, signs are pointing to another COVID-19 wave here, although it’s unclear how big it could be. There is a good amount of immunity from vaccination and infections from other omicron siblings to help flatten the next peak. But the highly transmissible subvariant is advancing at a time when many have tossed masks aside.
I can’t help but feel that we’re sitting ducks. There’s no movement yet to reinstate protective measures to prepare for the coming wave. Instead, there are loud calls to “return to normal.” But even though it’s been two years, this pandemic isn’t over, no matter how much we wish it were. And when people talk about “normal,” I am struck by what can’t be “normal” again. For millions and millions of people who have lost children, partners, parents and friends, life won’t be the same. One study reported that the “mortality shock” of COVID-19 has left nine people bereaved for every one U.S. death. So for the more than 975,000 who have died of COVID-19 in the United States, there are close to 9 million who are grieving. Although the study didn’t calculate the ratio globally, more than 6 million have died worldwide, undoubtedly leaving tens of millions bereaved. At the global scale, researchers estimate that through October 2021, more than 5 million children have lost a parent or caregiver to COVID-19, putting these children’s health, development and future education at risk (SN: 2/24/22).
Adding to the loss, the pandemic robbed many people of the chance to be with their loved ones as they died or to gather for a funeral. Psychiatrists are concerned that cases of prolonged grief disorder could rise, considering the scale of this mass mourning event.
Many millions who weathered an infection with SARS-CoV-2 went on to develop debilitating symptoms from long COVID, preventing their return to “normal.” A recent report from the U.S. Government Accountability Office estimates that 7.7 to 23 million people in the United States may have developed the condition. Worldwide, an estimated 100 million people currently have, or have had, long-term symptoms from COVID-19, researchers reported in a preprint study last November. Many with long COVID can no longer work and are struggling to get financial assistance in the United States. Some have lost their homes.
And as masking and vaccine mandates have fallen away, people with compromised immune systems have no choice but to fend for themselves and remain vigilant about restricting their interactions. People taking drugs that suppress the immune system or who have immune system disorders can’t muster much protection, if any, following vaccination against the coronavirus. And if they get COVID-19, their weakened defenses put them at risk of more severe disease.
With all that people have endured — and continue to endure — during the pandemic, it would be a colossal missed opportunity to throw aside what we’ve learned from this experience. The pandemic brought wider attention to how racism fuels health disparities in the United States and renewed calls to make more progress dismantling inequities. With remote work and virtual school, many people with disabilities have gained important accommodations. The argument that internet access is a social determinant of health has been reinforced: Places that had limited access to broadband internet were associated with higher COVID-19 mortality rates in the United States, researchers reported this month in JAMA Network Open.
The pandemic could also provide the push to bring indoor air under public health’s wing, joining common goods like water and food. The recognition that the virus that causes COVID-19 is primarily spread through the air has also been a reminder of the airborne risk posed by other respiratory diseases, including influenza and tuberculosis (SN: 12/16/21). Improved ventilation — bringing fresh outdoor air inside — can temper an influenza outbreak, and it helped to control a real-world tuberculosis outbreak in Taiwan.
Paying attention to indoor air quality has also paid dividends during the COVID-19 pandemic. Schools that combined better ventilation with high-efficiency filters reduced the incidence of COVID-19 by 48 percent compared with schools that didn’t, researchers reported last year.
This month, the Fondazione David Hume released not-yet-peer-reviewed results of an experiment in the Marche Region in Italy that looked at schools with and without controlled mechanical ventilation and the impact of different rates of air exchange. Replacing the air in a classroom 2.4 times per hour reduced the risk of COVID-19 spread by a factor of 1.7. More frequent exchanges reduced the risk even more, up to a factor of 5.7 with replacement 6 times an hour.
British scientists who advise the U.K. government would like buildings to display signs to inform the public of the status of the air inside. They have developed prototype placards with different icons and color-coding schemes to convey information such as whether a room is mechanically ventilated, uses filtration or monitors carbon dioxide, which is a proxy for how much fresh air a room gets. The group is testing options in a pilot program.
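One common way to reason about results like the Italian classroom experiment is the Wells-Riley model of airborne infection, in which risk falls as ventilation dilutes exhaled infectious “quanta.” The sketch below uses assumed parameter values purely for illustration; it is not the analysis used in any of the studies above.

```python
import math

def wells_riley_risk(n_infectors, quanta_per_hour, breathing_m3_per_hour,
                     hours, air_changes_per_hour, room_volume_m3):
    """Wells-Riley estimate of one susceptible person's airborne infection probability."""
    ventilation_m3_per_hour = air_changes_per_hour * room_volume_m3
    dose = (n_infectors * quanta_per_hour * breathing_m3_per_hour * hours
            / ventilation_m3_per_hour)
    return 1.0 - math.exp(-dose)

# Assumed classroom scenario: one infector, a 200 m^3 room, a 5-hour school day,
# 25 quanta/hour emitted, 0.5 m^3/hour breathed in. All values are illustrative.
for ach in (0.5, 2.4, 6.0):
    risk = wells_riley_risk(1, 25, 0.5, 5, ach, 200)
    print(f"{ach:>4} air changes per hour -> estimated risk {risk:.1%}")
```

The absolute numbers depend heavily on the assumed emission and breathing rates, but the qualitative message matches the studies above: more air changes per hour, lower risk.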
In the United States, the White House has launched the Clean Air in Buildings Challenge as part of the National COVID-19 Preparedness Plan. The Biden administration and Congress have made federal funding available to improve air quality in schools, public buildings and other structures. The Environmental Protection Agency has released recommendations on how to plan for and take action on indoor air quality.
“Healthy and clean indoor air should become an expectation for all of us,” Alondra Nelson, head of the White House Office of Science and Technology Policy, said at a webinar about the new initiatives on March 29. “It’s just as important as the food we eat and the water we drink.”
Making clean indoor air a public health priority, and putting in the work and money to make it a reality across the country, would go a long way to helping us prepare for infectious disease outbreaks to come. It also reinforces the public in public health, a commitment to protecting as many people as possible, just as masking mandates at appropriate times do. It’s how we get to some kind of “normal” that everyone can share in.