Scientists may work to prevent bias, but they don’t always say so

For a scientist, conducting a scientific study means walking into a minefield of potential biases that could detonate all over the results. Are the mice in the study randomly distributed among treatment groups? Does the person evaluating an animal’s behavior know what treatment the mouse got, and thus have an expectation for the outcome? Are there enough subjects in each group to reduce the odds that the results are due to chance?

“I think we’re getting increasingly better at identifying these risks and identifying clever and practical solutions,” says Hanno Würbel, an applied ethologist at the University of Bern in Switzerland. “But it’s not all obvious, and if you look back at the history of science you find that these methods have accumulated through a learning process.”

In theory, every time scientists design an experiment, they keep an eye out for these and other potential sources of bias. Then, when scientists submit the study design for approval or write journal articles about the work, they share that research design with their colleagues.

But scientists may be leaving some rather key bits out of their reports. Few animal research applications and published research reports include specific mentions of key factors used to eliminate bias in research studies, Würbel and colleagues Lucile Vogt, Thomas Reichlin and Christina Nathues report December 2 in PLOS Biology. The results suggest that the officials who approve animal research studies — and the scientists who peer review studies before publication — are trusting that researchers have accounted for potential biases, whether or not there is evidence to support that trust.

The team gained access to 1,277 applications for animal experiments submitted to the Swiss government in 2008, 2010 and 2012. The researchers examined the applications for seven common measures used to prevent bias: randomization; calculations to make sure sample sizes were large enough; concealing from the experimenter which treatment the next animal would receive; blinding the experimenter during testing to which animals got which treatment; criteria for including or excluding subjects (say, an animal’s age or sex); an explicitly stated primary outcome to be measured; and a plan for statistical analysis of the data.
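
Two of those safeguards lend themselves to a quick, concrete illustration. Below is a minimal sketch in Python of what a sample size calculation and a randomized, blinded allocation can look like in practice; the effect size, significance level and power are illustrative assumptions, not figures drawn from the Swiss applications or the PLOS Biology paper.

```python
# A minimal sketch, not the study's own code: a sample size calculation
# plus randomized, blinded allocation. The effect size, alpha and power
# below are illustrative assumptions.
import random
from scipy.stats import norm

def sample_size_per_group(effect_size, alpha=0.05, power=0.8):
    """Normal-approximation sample size per group for comparing two means,
    given a standardized effect size (Cohen's d)."""
    z_alpha = norm.ppf(1 - alpha / 2)  # two-sided test
    z_beta = norm.ppf(power)
    return int(round(2 * ((z_alpha + z_beta) / effect_size) ** 2))

n = sample_size_per_group(effect_size=0.8)  # assumes a large effect
print(f"{n} mice per group")                # about 25 per group

# Randomly assign mice to treatment or control, then hide the assignment
# behind neutral codes so whoever scores behavior stays blinded; the key
# linking codes to treatments would be held by someone else.
mice = [f"mouse_{i:02d}" for i in range(2 * n)]
random.shuffle(mice)
assignment = {m: ("treatment" if i < n else "control") for i, m in enumerate(mice)}
blinded_labels = {m: ("group_A" if g == "treatment" else "group_B")
                  for m, g in assignment.items()}
```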

Most of the time, the applications didn’t mention how or whether any of those measures were considered. Scientists included statistical plans 2.4 percent of the time and sample size calculations only 8 percent of the time. Even the primary outcome variable — the main objective measured in the study — was mentioned in only 18.5 percent of the applications.

Würbel’s group also looked at 50 publications that came out of some of those animal research applications that were ultimately approved. Here, scientists were better about reporting their efforts to stave off bias in their studies. If scientists mentioned one of the seven efforts to combat bias in their animal experiment applications, they were more likely to mention it when their final papers were published. But they still reported the statistical plan only 34 percent of the time, and none of the 50 papers reported sample size calculations.

Switzerland’s animal research application process didn’t actually require that any of these bias-prevention measures be disclosed, Würbel notes. But “unless [the licensing officials] know the studies have been designed rigorously, they can’t assess the benefit.” The implication, the researchers suggest, is that the authorities approving the studies trusted that the scientists knew what they were doing, and that peer reviewers and editors trusted that the authors of journal articles took those forms of bias into account.

Was that trust well placed? To find out, Reichlin, Vogt and Würbel surveyed 302 Swiss scientists who do experiments on living organisms, asking about the efforts they made to combat bias, and how often they reported those efforts. When asked directly, scientists said that of course they control their studies for certain risks of bias. The vast majority — 90 percent — reported that they included the primary outcome variable, and 82 percent included a statistical analysis plan. A full 69 percent reported that they calculated their sample sizes. Most also reported that they wrote these antibias efforts into their latest published research article.

But when the team probed deeper, asking scientists specific questions about what methods they used to combat bias, “you find out they don’t know much about methods,” Würbel notes. Only about 44 percent of the researchers knew about the guidelines for how to report animal experiments, even though 51 percent of them had published in a journal that endorsed those guidelines, the researchers report December 2 in PLOS ONE.

“This is a type of empirical work that we need, to see how people think and what they do,” says John Ioannidis, a methods researcher at Stanford University in California.

Just because scientists aren’t reporting certain calculations or plans doesn’t mean that their research will be subject to those biases. But without rigorous efforts to prevent it, bias can sneak in: subjects that aren’t properly randomized, or an experimenter who unconsciously leans toward one result or another. With too small a sample size, a researcher could detect a difference that disappears in a larger group. If researchers know which animals got which treatment, they may unconsciously focus more carefully on some aspects of the treated animals’ behavior, ignoring similar behavior in the control animals. And that can result in studies that are tougher to replicate, or that can’t be replicated at all.

None of this means that scientists are ill-educated or performing science badly, notes Malcolm Macleod, a neurologist at the University of Edinburgh. “I think there’s a temptation to make this binary, [to say] people don’t know so we need to train them,” he says. “The fact is most scientists know a bit … [but] everyone has something they can do better.”

How do you get scientists to take more action against bias, and then report what they’ve done? Journals, funding bodies and agencies that approve animal research projects could require more information from scientists. Journals could require checklists for reporting methods, for example. Journals or funding bodies could also require full preregistration of animal studies, in which a scientist spells out all the details of a study and how it will be analyzed before the experiments are ever performed. (Such preregistration became mandatory for clinical studies in humans in the United States in 2000.) Detailed reporting isn’t complete insurance against an irreproducible result, but “the more information you have, the easier it is to reproduce,” says Vogt, who studies animal welfare at the University of Bern.

Some scientists might worry that preregistration is too onerous, or that it could straitjacket researchers into unproductive studies. It would be frustrating, after all, to be stuck with a hypothesis that is clearly not bearing out, when the data provide tantalizing hints of another path to pursue. But it’s possible to provide the flexibility to pursue interesting questions, while still making sure the studies are rigorous, Ioannidis says.

But when scientists don’t know about sources of bias in the first place, more education might be a good place to start. “When I first came into the lab for my master’s thesis, I had a lot of information [about research design] but I wasn’t ready to apply it,” Vogt explains. “I needed guidance through the steps of how to plan an experiment, and how to plan to report the experiment afterward.” Education doesn’t stop when graduate students leave the classroom, and more continuing education might help scientists, from students to emeriti, recognize unfamiliar sources of bias and give them tools to combat them.

Sea creatures’ sticky ‘mucus houses’ catch ocean carbon really fast

Never underestimate the value of a disposable mucus house.

Filmy, see-through envelopes of mucus, called “houses,” get discarded daily by the largest of the sea creatures that exude them. The old houses, often more than a meter across, sink toward the ocean bottom carrying with them plankton and other biological tidbits snagged in their goo.

Now, scientists have finally caught the biggest of these soft and fragile houses in action, filtering particles out of seawater for the animal to eat. The observations, courtesy of a new deepwater laser-and-camera system, could start to fill in a missing piece of the picture of how marine life sequesters carbon in the deep ocean, researchers say May 3 in Science Advances.

The houses come from sea animals called larvaceans, not exactly a household name. Their bodies are diaphanous commas afloat in the oceans: a blob of a head attached to a long tail that swishes water through the house. From millimeter-scale dots in surface waters to relative giants in the depths, larvaceans have jellyfish-translucent bodies but a cordlike structure (called a notochord) reminiscent of very ancient ancestors of vertebrates. “They’re more closely related to us than to jellyfish,” says bioengineer Kakani Katija of the Monterey Bay Aquarium Research Institute in Moss Landing, Calif.

The giants among larvaceans, with bodies in the size range of candy bars, don’t form their larger, enveloping houses when brought into the lab. So Katija and colleagues borrowed a standard engineering strategy, tracking particle movement to measure flow rates in fluids, and reengineered the equipment to watch giant houses at work deep in the ocean.

Getting the hardware right was challenging, and so was deploying it remotely from a research ship at the surface of Monterey Bay. “This is a 1-millimeter-thick laser sheet bisecting an animal that’s about 2 centimeters wide that is 400 meters below the surface vessel,” Katija says.

The rig managed to capture measurements of water flow through the houses of larvaceans belonging to two Bathochordaeus species. The top rate for B. mcnutti, more than 20 milliliters per second, broke the record, previously held by salps, for the fastest filtration rate recorded for any zooplankton. If the maximum population of giant larvaceans in Monterey Bay pumped water that fast, they would clean all the particles out of their home depth in about 13 days.
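
For a sense of how such a clearance estimate works, the arithmetic amounts to dividing a volume of water by the animals’ combined pumping rate. The sketch below uses the article’s figure of roughly 20 milliliters per second per animal, but the water volume and animal count are made-up placeholders, not the study’s Monterey Bay numbers.

```python
# Back-of-envelope clearance-time sketch. The per-animal filtration rate
# comes from the article (~20 milliliters per second for B. mcnutti); the
# water volume and number of larvaceans are HYPOTHETICAL placeholders.
SECONDS_PER_DAY = 86_400

def clearance_days(water_volume_liters, n_animals, rate_ml_per_s=20.0):
    """Days for n_animals, each filtering rate_ml_per_s, to pass the
    entire water volume through their houses once."""
    liters_per_day_each = rate_ml_per_s / 1000 * SECONDS_PER_DAY  # ~1,728 L per day
    return water_volume_liters / (n_animals * liters_per_day_each)

# With 10 billion liters of habitat water and 500,000 animals, the water
# would be filtered once in roughly 11-12 days.
print(round(clearance_days(water_volume_liters=1e10, n_animals=5e5), 1))
```
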
Larvacean feeding rates matter because the sea creatures send organic matter, including carbon, to the deep ocean in two ways, explains biological oceanographer Stephanie Wilson of Bangor University in Wales. Larvaceans discard houses that become clogged with the particles pumped in. (Small species can secrete a replacement in minutes, though giants take longer. “Imagine that you have a head full of snot, and you sneeze your house,” Wilson says.)

Larvaceans also send carbon to the seafloor in football-shaped excrement. That’s an American football, Wilson clarifies. The tiny plankton that larvaceans near the surface eat wouldn’t sink far on their own, but once an animal ingests them and excretes a dense pellet, the carbon in the meal sinks better.

If carbon-containing fallout from the upper ocean falls fast enough, it bypasses diversions by other creatures and reaches depths where nothing much happens to it for a long time, says Sari Giering of the National Oceanography Centre in Southampton, England, where she studies oceanic carbon. “The faster a particle sinks, the more likely its carbon will be stored in the ocean for centuries,” she says.

Giering enthusiastically welcomes the new laser-and-camera system. She points out that researchers thought giant larvaceans could be important in sequestering carbon. But the fragile houses have been hard to study in action until now.

Brain chemical lost in Parkinson’s may contribute to its own demise

The brain chemical missing in Parkinson’s disease may have a hand in its own death. Dopamine, the neurotransmitter that helps keep body movements fluid, can kick off a toxic chain reaction that ultimately kills the nerve cells that make it, a new study suggests.

By studying lab dishes of human nerve cells, or neurons, derived from Parkinson’s patients, researchers found that a harmful form of dopamine can inflict damage on cells in multiple ways. The result, published online September 7 in Science, “brings multiple pieces of the puzzle together,” says neuroscientist Teresa Hastings of the University of Pittsburgh School of Medicine.

The finding also hints at a potential treatment for the estimated 10 million people worldwide with Parkinson’s: Less cellular damage occurred when some of the neurons were treated early on with antioxidants, molecules that can scoop up harmful chemicals inside cells.

Study coauthor Dimitri Krainc, a neurologist and neuroscientist at Northwestern University Feinberg School of Medicine in Chicago, and colleagues took skin biopsies from healthy people and from people with one of two types of Parkinson’s disease, inherited or spontaneously arising. The researchers then coaxed these skin cells into becoming dopamine-producing neurons. These cells were similar to those found in the substantia nigra, the movement-related region of the brain that degenerates in Parkinson’s.

After neurons carrying a mutation that causes the inherited form of Parkinson’s had grown in a dish for 70 days, the researchers noticed some worrisome changes in the cells’ mitochondria. Levels of a harmful form of dopamine known as oxidized dopamine began rising in these energy-producing organelles, reaching high levels by day 150. Neurons derived from people with the more common, sporadic form of Parkinson’s showed a similar increase, but later, beginning at day 150. Cells derived from healthy people didn’t accumulate oxidized dopamine.

This dangerous form of dopamine seemed to kick off other types of cellular trouble. Defects in the cells’ lysosomes, cellular cleanup machines, soon followed. So did the accumulation of a protein called alpha-synuclein, which is known to play a big role in Parkinson’s disease.

Those findings are “direct experimental evidence from human cells that the very chemical lost in Parkinson’s disease contributes to its own demise,” says analytical neurochemist Dominic Hare of the Florey Institute of Neuroscience and Mental Health in Melbourne, Australia. Because these cells churn out dopamine, they are more susceptible to dopamine’s potentially destructive effects, he says.

When researchers treated neurons carrying a mutation that causes inherited Parkinson’s with several different types of antioxidants, the damage was lessened. To work in people, antioxidants would need to cross the blood-brain barrier, a difficult task, and reach the mitochondria in the brain. And this would need to happen early, probably even before symptoms appear, Krainc says.

“Without this human model, we would not have been able to untangle the pathway,” Krainc says. In dishes of mouse neurons with Parkinson’s-related mutations, dopamine didn’t kick off the same toxic cascade, a difference that might be due to human neurons containing more dopamine than mouse neurons do. Dopamine-producing neurons in mice and people “have some very fundamental differences,” Krainc says. And those differences might help explain why discoveries in mice haven’t translated to treatments for people with Parkinson’s, he says.

Over the past few decades, scientists have been accumulating evidence that oxidized dopamine can contribute to Parkinson’s disease, Hastings says. Given that knowledge, the new results are expected, she says, but they are still a welcome confirmation of the idea.

These toxic cellular events occurred in lab dishes, not actual brains. “Cell cultures aren’t the perfect re-creation of what’s going on in the human brain,” Hare cautions. But these types of experiments are “the next best thing for monitoring the chemical changes” in these neurons, he says.

3-D scans of fossils suggest new fish family tree

When it comes to some oddball fish, looks can be deceiving.

Polypterus, today found only in Africa, and its close kin have generally been considered some of the most primitive ray-finned fishes alive, thanks in part to skeletal features that resemble those on some ancient fish. Now a new analysis of fossils of an early polypterid relative called Fukangichthys, unearthed in China, suggests that those features aren’t so old. The finding shakes up the evolutionary tree of ray-finned fishes, making the group as a whole about 20 million to 40 million years younger than thought, researchers propose online August 30 in Nature.

Ray-finned fishes (named for the spines, or rays, that support their fins) are the largest group of vertebrates, making up about half of all backboned animals. They include 30,000 living species, such as gars, bowfins and salmon. The group was thought to have originated about 385 million years ago, in the Devonian Period. But the new research, using 3-D CT scans of the previously discovered fossils, shifts the fishes’ apparent origin to the start of the Carboniferous Period, some 360 million years ago, says study coauthor Matt Friedman, a paleontologist at the University of Michigan in Ann Arbor.

One of the largest extinction events in Earth’s history marks the boundary between the Devonian and Carboniferous. “We know that many groups of backboned animals were hard hit by the event,” Friedman says. But after the massive die-off, ray-finned fishes popped up and, according to previous fossil evidence, their diversity exploded. The new finding “brings the origin of the modern ray-finned fish group in line with this conspicuous pattern that we see in the fossil record,” Friedman says. It suggests these vertebrates weren’t survivors of the extinction after all; the modern group arose in its aftermath, then flourished.

Quantum computing steps forward with 50-qubit prototype

Bit by qubit, scientists are edging closer to the realm where quantum computers will reign supreme.

IBM is testing a prototype quantum processor with 50 quantum bits, or qubits, the company announced November 10. That’s about the number needed to meet a sought-after milestone: demonstrating that quantum computers can perform specific tasks that are beyond the reach of traditional computers (SN: 7/8/17, p. 28).

Unlike standard bits, which represent either 0 or 1, qubits can indicate a combination of the two, using what’s called quantum superposition. This property allows quantum computers to perform certain kinds of calculations more quickly. But because qubits are finicky, scaling up is no easy task. Previously, IBM’s largest quantum processor boasted 17 qubits.
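
A rough way to see why roughly 50 qubits marks a threshold, as a back-of-envelope sketch rather than IBM’s own benchmark: simulating an n-qubit processor on an ordinary computer means tracking 2^n complex amplitudes, and the memory needed explodes somewhere around 50 qubits.

```python
# Back-of-envelope sketch: an n-qubit state is described by 2**n complex
# amplitudes, so brute-force classical simulation blows up quickly.
import numpy as np

# A single qubit in an equal superposition of 0 and 1 (these are amplitudes;
# squaring them gives a 50/50 chance of measuring either value).
one_qubit = np.array([1.0, 1.0]) / np.sqrt(2)
print(one_qubit)  # ~[0.707 0.707]

def state_vector_bytes(n_qubits, bytes_per_amplitude=16):
    """Memory needed to store the full state of an n-qubit register,
    assuming complex128 amplitudes (16 bytes each)."""
    return (2 ** n_qubits) * bytes_per_amplitude

print(state_vector_bytes(17) / 1e6)   # IBM's previous 17-qubit chip: ~2 megabytes
print(state_vector_bytes(50) / 1e15)  # 50 qubits: roughly 18 petabytes
```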

IBM also announced a 20-qubit processor that the company plans to make commercially available by the end of the year.