In many realms of science today, “statistical wisdom” seems to be in short supply. Misuse of statistics in scientific research has contributed substantially to the widespread “reproducibility crisis” afflicting many fields (SN: 4/2/16, p. 8; SN: 1/24/15, p. 20). Recently the American Statistical Association produced a list of principles warning against multiple misbeliefs about drawing conclusions from statistical tests. Statistician Stephen Stigler has now issued a reminder that there is some wisdom in the science of statistics. He identifies seven “pillars” that collectively provide a foundation for understanding the scope and depth of statistical reasoning. Stigler’s pillars include methods for measuring or representing aggregation (measures, such as averages, that represent a collection of data); information (quantifying it and assessing how it changes); likelihood (coping with probabilities); intercomparison (involving measures of variation within datasets); regression (analyzing data to draw inferences); design (of experiments, emphasizing randomization); and residual (identifying the unexplained “leftovers” and comparing scientific models).
His approach is to identify the historical origins of these seven key pillars, providing some idea of what they are and how they can assist in making sense of numerical data. His explanations are engaging but not thorough (it’s not a textbook), and while mostly accessible, his writing often assumes a nontrivial level of mathematical knowledge. You’ll have to cope with expressions such as L(θ) = L(θ | X) and Cov(L, W) = E{Cov(L, W | S)} + Cov(E{L | S}, E{W | S}) every now and then.
While Stigler defends statistics from some of the criticisms against it — noting, for instance, that specific misuses should not be grounds for condemning the generic enterprise — he acknowledges that some issues are still a source of concern, especially in the new era of “big data” (SN: 2/7/15, p. 22). Using common statistical tests when many comparisons are made at once, or applying tests at multiple stages of an experimental process, introduces problems that the seven pillars do not accommodate. Stigler notes that there is room, therefore, for an eighth pillar. “The pillar may well exist,” he writes, “but no overall structure has yet attracted the general assent needed for recognition.”
Overuse of antibiotics in livestock can spread drug-resistant microbes — via farm workers or even breezy weather. But there’s more than one reason to stay upwind of drugged cattle.
Dung beetles (Aphodius fossor) make their living on cattle dung pats, which are rich in nutritious microbes. To investigate the effects of cattle antibiotics on this smaller scale, Tobin Hammer of the University of Colorado at Boulder and his colleagues studied the tiny communities around tetracycline-dosed and undosed cows. Microbes in dung from treated cows were less diverse than those in untreated cows’ dung and were dominated by a genus with documented resistance, the researchers report May 25 in the Proceedings of the Royal Society B.
Beetles typically reduce methane gas wafting off dung, but pats from treated cows showed a 1.8-fold increase in methane output. How this might figure into cattle’s overall methane emissions remains to be studied, but Hammer and company speculate that the antibiotics may wipe out bacteria that compete with the dung’s methane-producing microbes.
Editor’s note: On May 3, 2017, Science retracted the study described in this article. Based on findings from a review board at Uppsala University, Science cites three reasons for pulling the study: The experiments lacked ethical approval, the original data do not appear in the paper and questions emerged about experimental methods.
Microscopic pieces of plastic litter Earth’s oceans, numbering in the billions, possibly trillions. These tiny plastic rafts provide homes to microbes (SN: 2/20/16, p. 20), but their ecological effects remain murky. In a lab at Uppsala University in Sweden, researchers exposed European perch (Perca fluviatilis) larvae to microscopic particles of the plastic polystyrene to see how they might react. The exposure triggered a slew of potentially negative effects: Fewer eggs hatched, growth rates dropped and feeding habits changed, with some larvae preferring polystyrene to more nutritious food options. Exposed larvae were also sluggish in responding to scents that signal approaching predators in the wild, the team reports in the June 3 Science.
European perch, a keystone species in the Baltic Sea, have recently experienced a population dive. Because the drop has been linked to juvenile feeding issues, the researchers argue that microplastics could be to blame.
SAN DIEGO — A remote planet — the first with hints of a limestone shell — has been shredded by its dead sun, a new study suggests.
A generous heaping of carbon is raining down on a white dwarf, the exposed core of a dead star, astrophysicist Carl Melis of the University of California, San Diego, said June 13 at a meeting of the American Astronomical Society. The carbon — along with a dash of other elements such as calcium, silicon and iron — is probably all that remains of a rocky planet, torn apart by its dying sun’s gravity. Many other white dwarfs show similar signs of planetary cannibalism (SN Online: 10/21/15), but none are as flooded with carbon atoms as this one.
A planet slathered in calcium carbonate, a mineral found in limestone, could explain the shower of carbon as well as the relative amounts of other elements, said Melis. He and Patrick Dufour, an astrophysicist at the University of Montreal, estimate that calcium carbonate could have accounted for up to 9 percent of the doomed world’s mass.
While a limestone-encrusted world is a first, it’s not shocking, says Melis. The recipe for calcium carbonate is just carbon and calcium in the presence of water. “If you have those conditions, it’s going to form,” he says.
“The real interesting thing is the carbon,” Melis adds. Carbon needs to be frozen — most likely as carbon dioxide — to be incorporated into a forming planet. But CO2 freezes far from a star, beyond where researchers suspect rocky planets are assembled. A limestone planet could have formed in an unexpected place and later wandered in while somehow retaining its carbon stores in the warm environs closer to its sun. Or the carbon might have been delivered to the world after it formed. But, Melis says, it’s not clear how either would happen.
The Rev. Thomas Bayes was, as the honorific the Rev. suggests, a clergyman. Too bad he wasn’t a lawyer. Maybe if he had been, lawyers today wouldn’t be so reluctant to enlist his mathematical insights in the pursuit of justice.
In many sorts of court cases, from whether talcum powder causes ovarian cancer to The People v. O.J. Simpson, statistics play (or ought to play) a vital role in evaluating the evidence. Sometimes the evidence itself is statistical, as with the odds of a DNA match or the strength of a scientific research finding. Even more often the key question is how evidence should be added up to assess the probability of guilt. In either circumstance, the statistical methods devised by Bayes are often the only reasonable way of drawing an intelligent conclusion.
Yet the courts today seem suspicious of statistics of any sort, and not without reason. In several famous cases, flawed statistical reasoning has sent innocent people to prison. But in most such instances the statistics applied in court have been primarily the standard type that scientists use to test hypotheses (producing numbers for gauging “statistical significance”). These are the same approaches that have been so widely criticized for rendering many scientific results irreproducible. Many experts believe Bayesian statistics, the legacy of a paper by Bayes published posthumously in 1763, offers a better option.
“The Bayesian approach is especially well suited for a broad range of legal reasoning,” write mathematician Norman Fenton and colleagues in a recent paper in the Annual Review of Statistics and Its Application.
But Bayes has for the most part been neglected by the legal system. “Outside of paternity cases its impact on legal practice has been minimal,” say Fenton, Martin Neil and Daniel Berger, all of the School of Electronic Engineering and Computer Science at Queen Mary University of London.
That’s unfortunate, they contend, because non-Bayesian statistical methods have severe shortcomings when applied in legal contexts. Most famously, the standard approach is typically misinterpreted in a way known as the “prosecutor’s fallacy.”
In formal logical terms, the prosecutor’s fallacy is known as “the error of the transposed conditional,” as British pharmacologist David Colquhoun explains in a recent blog post. Consider a murder on a hypothetical island, populated by 1,000 people. Police find a DNA fragment at the crime scene, a fragment that would be found in only 0.4 percent of the population. For no particular reason, the police arrest Jack and give him a DNA test. Jack’s DNA matches the crime scene fragment, so he is charged and sent to trial. The prosecutor proclaims that since only 0.4 percent of innocent people have this DNA fragment, it is 99.6 percent certain that Jack is the killer — evidence beyond reasonable doubt. But that reasoning is fatally (for Jack) flawed. Unless there was some good reason to suspect Jack in the first place, he is just one of 1,000 possible suspects. Among those 1,000, four people (0.4 percent) should have the same DNA fragment found at the crime scene. Jack is therefore just one of four possibilities to be the murderer — so the probability that he’s the killer is merely 25 percent, not 99.6 percent.
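For readers who want to check that arithmetic, here is a minimal sketch in Python, using only the numbers from the island scenario above (the variable names are mine, for illustration):

```python
# A minimal sketch of the island scenario, using only numbers from the text.

population = 1000        # everyone on the island is a possible suspect
match_rate = 0.004       # 0.4 percent of people carry the DNA fragment

# The prosecutor's (fallacious) reading: 1 minus the match rate.
fallacious = 1 - match_rate                  # 0.996, i.e. "99.6 percent certain"

# The sound reading: the killer is just one of everyone who matches.
expected_matches = population * match_rate   # 4 people share the fragment
posterior = 1 / expected_matches             # 0.25, i.e. 25 percent

print(f"Prosecutor's fallacy: {fallacious:.1%}")
print(f"Probability that Jack is the killer: {posterior:.1%}")
```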
Bayesian reasoning averts this potential miscarriage of justice by including the “prior probability” of guilt when calculating the probability of guilt after the evidence is in.
Suppose, for instance, that the crime in question is not murder, but theft of cupcakes from a bakery employing 100 people. Security cameras reveal 10 employees sneaking off with the cupcakes but without a good view of their identities. So the prior probability of any given employee’s guilt is 10 percent. Police sent to investigate choose an employee at random and conduct a frosting residue test known to be accurate 90 percent of the time. If the employee tests positive, the police might conclude there is therefore a 90 percent probability of guilt. But that’s another example of the prosecutor’s fallacy — it neglects the prior probability. Well-trained Bayesian police would use the formula known as Bayes’ theorem to calculate that given a 10 percent prior probability, 90 percent reliable evidence yields an actual probability of guilt of only 50 percent.
You don’t even need to know Bayes’ formula to reason out that result. If the test is 90 percent accurate, it will erroneously identify nine of the 90 innocent employees as guilty, and it will correctly identify only nine of the 10 truly guilty employees. If the police tested all 100 people, 18 would appear guilty, but nine of those 18 (half of them) would actually be innocent. So a positive frosting test means only a 50 percent chance of guilt. Bayesian math would in this case (and in many real-life cases) prevent a rush to injustice.
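Both routes to the 50 percent answer, Bayes’ theorem and the head count, can be written out in a few lines of Python (again, only the numbers from the cupcake example are used):

```python
# Bayes' theorem applied to the cupcake example, followed by the same
# head count as in the paragraph above.

prior = 0.10       # 10 of the 100 employees were seen stealing
accuracy = 0.90    # the frosting test is right 90 percent of the time

# P(positive) = P(positive | guilty) P(guilty) + P(positive | innocent) P(innocent)
p_positive = accuracy * prior + (1 - accuracy) * (1 - prior)

# Bayes' theorem: P(guilty | positive) = P(positive | guilty) P(guilty) / P(positive)
posterior = accuracy * prior / p_positive
print(f"P(guilty | positive test) = {posterior:.0%}")         # 50%

# The same answer by counting all 100 employees:
true_positives = round(accuracy * 10)          # 9 of the 10 guilty test positive
false_positives = round((1 - accuracy) * 90)   # 9 of the 90 innocent also test positive
print(true_positives / (true_positives + false_positives))    # 0.5
```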
“Unfortunately, people without statistical training — and this includes most highly respected legal professionals — find Bayes’ theorem both difficult to understand and counterintuitive,” Fenton and colleagues lament.
One major problem is that real criminal cases are rarely as simple as the cupcake example. “Practical legal arguments normally involve multiple hypotheses and pieces of evidence with complex causal dependencies,” Fenton and colleagues note. Adapting Bayes’ formula to complex situations is not always straightforward. Combining testimony and various other sorts of evidence requires mapping out a network of interrelated probabilities; the math quickly can become much too complicated for pencil and paper — and, until relatively recently, even for computers.
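To give a flavor of what that complexity looks like, here is a toy sketch of combining two pieces of evidence by brute-force enumeration. The network and every probability in it are hypothetical, invented purely for illustration and not drawn from Fenton and colleagues’ paper; real forensic networks are far larger, which is exactly why naive enumeration breaks down:

```python
# A toy, hypothetical two-evidence network; all probabilities are invented.

p_guilt = 0.10                           # prior probability of guilt
p_dna = {True: 0.99, False: 0.004}       # P(DNA match | guilty / innocent)
p_eye = {True: 0.60, False: 0.20}        # P(eyewitness ID | guilty / innocent)

def joint(guilty, dna_match, eyewitness):
    """Joint probability of one full assignment of the three variables."""
    p = p_guilt if guilty else 1 - p_guilt
    p *= p_dna[guilty] if dna_match else 1 - p_dna[guilty]
    p *= p_eye[guilty] if eyewitness else 1 - p_eye[guilty]
    return p

# Posterior of guilt given both pieces of evidence, by brute-force
# enumeration. With two evidence variables the sum has two terms; with
# n interdependent variables the work grows exponentially, which is why
# efficient algorithms were needed.
numerator = joint(True, dna_match=True, eyewitness=True)
denominator = sum(joint(g, dna_match=True, eyewitness=True) for g in (True, False))
print(f"P(guilty | DNA match and eyewitness ID) = {numerator / denominator:.1%}")
```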
“Until the late 1980s there were no known efficient computer algorithms for doing the calculations,” Fenton and colleagues point out.
But nowadays, better computers — and more crucially, better algorithms — are available to compute the probabilities in just the sorts of complicated Bayesian networks that legal cases present. So Bayesian math now provides the ideal method for weighing competing evidence in order to reach a sound legal judgment. Yet the legal system seems unimpressed.
“Although Bayes is the perfect formalism for this type of reasoning, it is difficult to find any well-reported examples of the successful use of Bayes in combining diverse evidence in a real case,” Fenton and coauthors note. “There is a persistent attitude among some members of the legal profession that probability theory has no role in the courtroom.”
In one case in England, in fact, an appeals court denounced the use of Bayesian calculations, asserting that members of the jury should apply “their individual common sense and knowledge of the world” to the evidence presented.
Apart from the obvious idiocy of using common sense to resolve complex issues, the court’s call to apply “knowledge of the world” to the evidence is exactly what Bayesian math does. Bayesian reasoning provides guidance for applying prior knowledge properly in assessing new knowledge (or evidence) to reach a sound conclusion. Which is what the judicial system is supposed to do.
Bayesian statistics offers a technical tool for avoiding fallacious reasoning. Lawyers should learn to use it. So should scientists. And then maybe someday justice will be done, and science and the law can work more seamlessly together. But as Fenton and colleagues point out, there remain “massive cultural barriers between the fields of science and law” that “will only be broken down by achieving a critical mass of relevant experts and stakeholders, united in their objectives.”
The surveillance video shows a peaceful city streetscape: People walking, cars driving, birds chirping.
“Then, abruptly, there’s the sound of gunfire,” said electrical engineer Robert Maher. “A big bang followed by another bang.”
Witnesses saw two shooters facing off, a few meters apart — one aiming north, the other south. But no one knew who shot first. That’s where Maher comes in. His specialty is gunshot acoustics, and he’s helping shore up the science behind a relatively new forensics field. In the case of the two shooters, surveillance cameras missed the action, but the sounds told a story that was loud and clear.
A distinctive echo followed the first gunshot but not the second. The first gunshot’s sound probably bounced off a big building to the north, causing the echo, Maher concluded. So the first person to shoot was the person facing north, he reported May 24 in Salt Lake City at a meeting of the Acoustical Society of America.
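The physics behind that inference is simple round-trip timing. Here is a rough sketch with illustrative numbers (the article does not report the actual distances in the case, so the 85 meters below is purely hypothetical):

```python
# Back-of-the-envelope echo timing with made-up numbers for illustration.

SPEED_OF_SOUND = 343.0  # meters per second in air at about 20 degrees Celsius

def echo_delay(distance_to_wall_m: float) -> float:
    """Seconds between a gunshot and its echo off a wall straight ahead:
    the sound travels to the wall and back."""
    return 2 * distance_to_wall_m / SPEED_OF_SOUND

# A building 85 meters north of the shooter would return an echo roughly
# half a second after the muzzle blast, a gap easy to spot in a recording.
print(f"{echo_delay(85):.2f} s")   # about 0.50 s
```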
Maher has analyzed the booming echoes of gunshots in dozens of cases, but he’s also studying the millisecond-long sound of a bullet blasting out of the barrel — and finding differences from one type of gun to the next.
He and colleagues at Montana State University in Bozeman erected a semicircular aluminum frame studded with 12 microphones, evenly spaced and raised 3 meters off the ground. When someone standing on a raised platform in the center of the contraption shoots a gun — a 12-gauge shotgun, for example, or a .38 Special handgun — the microphones pick up the sound.
“Each of the different firearms has a distinctive signal,” he says. His team is building a database of sounds made by 20 different guns. To the ear, the gunshots seem alike, but Maher can chart out differences in the sound waves. One day, investigators might be able to use the information to figure out what kind of guns were fired at a crime scene. Of course, Maher says, most crime scene recordings aren’t high quality — they often come from cellphones or surveillance systems. But his team will compare those recordings with ones made in his outdoor “lab” and try to figure out which aspects of crime scene audio they can analyze.
Maher, a music lover who plays the cello and sings in a choir, didn’t intend this career. “If I were really talented at music, that’s what I’d be doing full time,” he says. Instead, he has applied his skills in math and science to problems involving sound: studying humans’ contribution to noise in national parks, for example, and now, gunshot acoustics.
For him, it’s “a nice way to bridge the gap between the science and the sound.”
The brain doesn’t really go out like a light when anesthesia kicks in. Nor does neural activity gradually dim, a new study in monkeys reveals. Rather, intermittent flickers of brain activity appear as the effects of an anesthetic take hold.
Some synchronized networks of brain activity fall out of step as the monkeys gradually drift from wakefulness, the study showed. But those networks resynchronized when deep unconsciousness set in, researchers reported in the July 20 Journal of Neuroscience. That the two networks behave so differently during the drifting-off stage is surprising, says study coauthor Yumiko Ishizawa of Harvard Medical School and Massachusetts General Hospital. It isn’t clear what exactly is going on, she says, except that the anesthetic’s effects are a lot more complex than previously thought.
Most studies examining how anesthesia works use electroencephalograms, or EEGs, which record brain activity using electrodes on the scalp. The new study offers unprecedented surveillance by eavesdropping via electrodes implanted inside macaque monkeys’ brains. This new view provides clues to how the brain loses and gains consciousness.
“It’s a very detailed description of something we know very little about,” says cognitive neuroscientist Tristan Bekinschtein of the University of Cambridge, who was not involved with the work. Although the study is elegant, it isn’t clear what to make of the findings, he says. “These are early days.”
Researchers from Massachusetts General, Harvard and MIT recorded the activity of small populations of nerve cells in two interconnected brain networks: one that deals with incoming sensory information and one involved with some kinds of movement and with merging different kinds of information. Before the anesthetic propofol kicked in, brain activity in the two regions was similar and synchronized. But as the monkeys drifted off, the networks dropped out of sync, even though each network’s own nerve cells kept working together.
Around the moment when the monkeys went unconscious, there was a surge in a particular kind of nerve cell activity in the movement network, followed by a different surge in the sensory network about two minutes later. The two networks then began to synchronize again, becoming more in lockstep as the anesthetic state deepened.
CHICAGO — Cooling stars could shine some light on the nature of dark matter.
Certain types of stars are cooling faster than scientists expect. New research suggests that the oddity could hint at the presence of hypothetical particles known as axions. Such particles have also been proposed as a candidate for dark matter, the unknown substance that makes up most of the matter in the universe.
Researchers analyzed previous measurements of white dwarf variable stars, which periodically grow dimmer and brighter at a rate that indicates how fast the star is cooling. For all five stars measured, the cooling was faster than predicted. Likewise, red giant stars have also shown excess cooling. Considering each result on its own, “each one is not that interesting,” says physicist Maurizio Giannotti of Barry University in Miami Shores, Fla., who presented the result at the International Conference on High Energy Physics on August 4. But taken together, the consistent pattern could indicate something funny is going on.
After evaluating several possible explanations for the cooling of the stars, the researchers concluded that the axion explanation was most likely — barring some more mundane explanation like measurement error. Axions produced within the star stream outward, carrying energy away as they go, and cooling the star.
Although it may be more likely that the phenomenon will eventually be chalked up to measurement errors, it’s important to take note when something doesn’t add up, Giannotti says. “We can’t ignore the small hints.”
Some guys really know how to kill a moment. Among Mediterranean fish called ocellated wrasse (Symphodus ocellatus), single males sneak up on mating pairs in their nest and release a flood of sperm in an effort to fertilize some of the female’s eggs. But female fish may safeguard against such skullduggery through their ovarian fluid, a gooey film that covers fish eggs.
Suzanne Alonzo, a biologist at Yale University, and her colleagues exposed sperm from both types of males to ovarian fluid from female ocellated wrasse in the lab. Nesting males release speedier sperm in lower numbers (about a million per spawn), while sneaking males release a lot of slower sperm (about four million per spawn). Experiments showed that ovarian fluid enhanced sperm velocity and motility and favored speed over volume. Thus, the fluid gives a female’s chosen mate an edge in the race to the egg, the researchers report August 16 in Nature Communications.
While methods to thwart unwanted sperm are common in species that fertilize within the body, evidence from Chinook salmon previously hinted that external fertilizers don’t have that luxury. However, these new results suggest otherwise: Some female fish retain a level of control over who fathers their offspring even after laying their eggs.
WASHINGTON — To human thinking, songbird nests now seem to have evolved backwards: The most distant ancestor probably built complex, roofed structures. Simple open-top cup nests came later.
More than 70 percent of songbird species today build some form of that iconic open cup, evolutionary biologist Jordan Price said August 18 at the North American Ornithological Conference. Yet looking at patterns of nest style across recent bird family trees convinced him that the widespread cup style probably isn’t just a leftover from deepest bird origins. Old bird lineages thought to have branched out near the base of the avian family tree tend to have plentiful roof-builders. Price, of St. Mary’s College of Maryland, and coauthor Simon Griffith of Macquarie University in Sydney reconstructed probable nest styles for various branching points in the tree. That reconstruction suggests that open cups showed up independently four times among songbirds, such as in bowerbirds and honeyeaters, the scientists conclude. Also, here and there, some of the earlier cup builders reverted to roofs.
Price said he began musing about nest history while reveling in Australia’s birds during a sabbatical with Griffith. Evolutionary biologists have proposed that the broader Australasia region was probably the starting point for the rise of songbirds. Price said that it isn’t clear what drove a switch from protective roofs to what looks like the quick and dirty alternative of open cups.