Scientific American recapped a recent conversation at a major scientific meeting about whether or not there is really a “war on science.”
As a communications practitioner who has helped many scientists deal with political attacks on their work, I don’t think there is a war on science itself. Rather, I think we’re dealing with politicians and advocates who see science as one of many fronts in a series of ongoing battles. Or, to drop the war metaphor for a second, science is simply caught up in a bunch of different democratic disputes.
Science historian Mark Largent told meeting attendees that scientists shouldn’t approach this topic from a position of defensiveness, even when they are being attacked. Instead, he argued, researchers should recognize that science remains an incredibly powerful institution and that with great power comes great responsibility.
— becca harrison (@_beccaharrison) February 13, 2016
It’s a provocative thought and my first instinct was to reject it. I know a lot of scientists who have felt absolutely helpless when politicians, advocates and straight-up trolls have attacked their research and their integrity. But as I thought about Largent’s point a bit more, I asked myself why people attack science in the first place. It’s usually because everybody in a political debate wants to be seen as having science on their side.
So even when individual researchers feel unfairly targeted by politicians or advocates, it’s helpful to understand that that is happening precisely because outsiders rightfully see the scientific community as a powerful arbiter of credibility in many democratic debates.
In response to a series of Tweets I posted about this topic, sociology professor Aaron McCright pointed me to a helpful distinction he and his colleagues have made between various fields of science that have been caught up in public controversies.
“Production science,” they say, creates new things. That includes new medicines, new ways to harness and produce energy, new chemical compounds and new crops and food products. “Impact science,” meanwhile, focuses on what’s happening to our health, our air, our water, our atmosphere, and the world’s built and natural environments.
These distinctions help explain how someone like Rep. Lamar Smith (R-Texas), the chairman of the House science committee, can be bullish on NASA’s production science, such as looking for life on Europa or the James Webb Space Telescope, but eager to cut NASA’s impact science, namely the agency’s Earth monitoring and climate research.
Conversely, anti-vaccine advocates think the pharmaceutical industry – a behemoth in production science – is poisoning children. So when they saw a piece of impact science by Andrew Wakefield, long-since retracted, which linked early-childhood vaccines to autism, they ran with it. And when public health impact scientists tell them they’re wrong – that vaccines save lives – they try to box those public health researchers in with the pharmaceutical industry, too.
Similar dynamics also play out in the scorched-earth world of disputes about agriculture and biotech. It strikes me that many people involved in that debate brand themselves as pro-science and are often tacitly arguing about whether agricultural production science or environmental and public health impact science should determine public policy outcomes on everything from crop subsidies to GMO labeling.
There are plenty of other useful frameworks for examining why people accept or reject certain types of science. Cultural cognition research offers another lens, based on political ideology, for understanding challenges to everything from nuclear power to gun violence research. Conspiratorial thinking also helps explain the persistence of misinformation on many of these topics, which can stem from a simple distrust of elites, including scientists. And political science offers many lessons in how corporate spending on front groups, scientific consultants, and political speech can warp democratic debates, as well as the practice of science itself.
None of these frameworks offers a holistic explanation for why people reject science on a given topic, but they are all useful.
If we want to resolve any of these “wars” or “battles” or “disputes,” it’s important to understand why people outside the scientific community accept or reject specific scientific findings. We need to be able to accurately and effectively communicate with audiences about these topics on their terms rather than hoping, or insisting, that people outside the scientific community will learn to love and appreciate science for the same reasons we do.
The Northeast Conference on Science and Skepticism has dropped Richard Dawkins as its keynote speaker this year. That’s a good thing.
Dawkins is a gifted writer and speaker and many of us enjoy a greater understanding of biology and evolution thanks to his work. He’s also written beautifully about the positive influence science and reason can have in our lives, arguing passionately that our scientific understanding of light, for instance, does not “unweave the rainbow,” but only makes it more gorgeous to behold.
That’s why it’s so disappointing to see him engage in such negative, boorish behaviors online, from criticizing Rebecca Watson for speaking out about rampant sexism among skeptics, to attacking “clock kid,” to his latest – sharing a terribly stupid, juvenile video titled, I kid you not, “Feminists love Islamists.”
(Caution from personal experience: If you find it and watch it, the tune will be stuck in your head while you’re making eggs in the morning…in a bad way! And you’ll be flummoxed as you try to figure out why a respected scientist would share such a thing online with his 1.3 million followers.)
Naturally, this will become part of the broader conversation about self-censorship and free speech. Dawkins would certainly like to frame it that way and has been sharing messages like this on Twitter:
@RichardDawkins Stand strong, Prof. They’re good at deliberately misinterpreting arguments, claiming hate speech and demanding apologies
— Lyle Yiannopoulos (@Lyde15) January 28, 2016
But I don’t really buy it. There’s something more basic and important happening here. Dawkins often acts like a jerk on social media. And Twitter is such an open, free-wheeling platform that he winds up alienating audiences for whom he never intended his original messages. As a result, Dawkins’s online posts feed into bad stereotypes about aloof, arrogant scientists and, at least in my opinion, only set him back as a public intellectual.
Additionally, no government is stepping in to censor Dawkins or his words. No one is shutting down his Twitter account. It’s a conference saying, “You know what? This guy’s not for us or our audience.” I’m sure Dawkins has plenty of other conferences at which he can speak and I’m sure there are plenty of speakers who are a better fit for NECSS and its members.
My completely unsolicited advice for Dawkins is that he should ask himself what his goals really are with Twitter and other social media. If it’s to piss people off and distract from science communication and humanism, he’s doing a bang-up job. He might even have some lessons to learn from Alec Baldwin and other celebrities who have given the world a little too much access to their internal brain musings.
He might also follow some tips from other successful scientists on Twitter and try being a little more positive, inspirational and constructive, while saving the vicious takedowns and hyper-critical thought for blog posts, books and the lecture circuit, where those messages can be better understood in context by their intended audiences.
NECSS’s statement is below. And as a longtime fan of the Skeptics’ Guide to the Universe podcast, I have to say that Steve Novella and crew have always approached their work with heart. They have empathy for people who disagree with them as well as people who are duped by charlatans and misinformers. That’s all too rare these days, but it’s critically important for good science communication. I appreciate their stand and the open way they’re communicating with members.
A Statement Concerning Richard Dawkins
The Northeast Conference on Science & Skepticism has withdrawn its invitation to Richard Dawkins to participate at NECSS 2016. We have taken this action in response to Dr. Dawkins’ approving re-tweet of a highly offensive video.
We believe strongly in freedom of speech and freedom to express unpopular, and even offensive, views. However, unnecessarily divisive, counterproductive, and even hateful speech runs contrary to our mission and the environment we wish to foster at NECSS. The sentiments expressed in the video do not represent the values of NECSS or its sponsoring organizations.
We will issue a full refund to any NECSS attendee who wishes to cancel their registration due to this announcement.
The NECSS Team
Feb. 2 update: I just read Steve Novella’s 1/30 explanation for dropping Dawkins. I agree with all of it, including his rebuttal to claims that the decision is somehow anti-free-speech given Dawkins’s already expansive platform. He also addresses what he considers a few valid criticisms of how he and his colleagues handled things. That’s classy and it’s in keeping with good skeptical thinking. The whole post is well worth a read.
Nuclear power was a science issue until it wasn’t. In the 1950s, policymakers agreed that nuclear energy could harness the destructive power the American military and scientific establishment unleashed in Hiroshima and Nagasaki to produce electricity instead. We would tame the atom and use it for peace. Ford even toyed around with the Nucleon, a nuclear-powered concept car.
But nuclear power policy shifted in the 1960s and 70s. Citizens wanted more of a say in how nuclear power plants were sited and operated. The environmental and peace movements questioned the utility of nuclear technology itself, especially as the Soviet Union and United States adopted positions of mutually assured destruction. Eventually, Congress sundered the duties of the Atomic Energy Commission – which promoted and regulated nuclear power and made nuclear weapons, too – and created the Nuclear Regulatory Commission. After the Three Mile Island accident, U.S. nuclear power plant construction ground to a halt.
Two political scientists argue in a seminal work on policy change in the United States that this change-over is one of many examples of the punctuated equilibrium model of evolution playing out in U.S. politics, rather than in the Galapagos Islands. As more people found they had a stake in the nuclear power debate, the more diverse and chaotic the debate became, until a new order was established. Along with that shift, the influence of science reporting on the topic diminished as political debates intensified and as more people focused on the running and regulation of nuclear power rather than the promise of nuclear technology.
That history has long informed my skepticism when I talk to advocates and scientists who are enthusiastic about nuclear power. I sympathize with them, though: I grew up in a nuke plant town; the Oyster Creek Nuclear Generating Station was part of the landscape along Highway 9. We paid little mind to the sporadic evacuation drills at school in the event of a nuclear accident. And while a few parents pulled their children out of our high school on September 11th because they had heard false reports that nuclear power plants would be targeted next, I remember shrugging it off with a few of my buddies – why would anyone attack South Jersey when they were that close to Philly and New York?
I have classmates who’ve worked as security guards at the plant – obviously security stepped up after the attacks. Years after graduating high school, when reviewing an NRC safety incident report from the plant, I recognized the name of another classmate’s father. In nuke plant towns, it’s our neighbors who are in charge of keeping the plants safe.
My home town seal even features an atom:
So my problems with nuclear power plants that exist today are narrow: let’s make sure the safety engineers and inspectors at the plants can do their jobs, by all means. And let’s figure out the waste problem, at some point, please.
But as far as climate change goes, there is something kind of nice about nuclear power plants that are operating today: they’re producing low-carbon electricity and they’re already paid for. But when we talk about the future of nuclear power, it’s not science or even safety that dominates the debate – it’s economics.
To no one’s surprise, it remains stupendously expensive to create a facility that uses controlled fission reactions to boil water and make electricity. And, in an essay for Gullies.org, I argue that that’s where op-eds in favor of nuclear power should focus. It’s too easy to assume that green skepticism about nukes is what’s holding the technology back – that’s the sort of simplistic “but you’re a hypocrite” rhetoric that plagues our politics.
When it comes to curing energy production of its carbon blues, we need cogent arguments on economics and, indeed, economies of scale. Such arguments are becoming easier to make for wind, solar and renewables. And in a world with many energy options, the renewable success story weakens the case for nuclear power.
You can read the essay here. Enjoy!
Sarah Myhre and Tessa Hill, two scientists who study the ocean and climate, published an interesting and, at times, challenging conversation on Medium yesterday that touched on an overlooked aspect of professional development in science communication.
As Hill puts it:
What is missing in many of these discussions and documents is how engaging in science communication will impact the scientists themselves. How will scientists walk the line between relaying scientific information and expressing personal views? How will researchers weigh the impact on their career — both positive & negative — that arise from speaking publicly about their work? How can universities and research institutes provide support to scientists who chose to spend time engaging and communicating?
Myhre agrees, noting:
We have almost no conversation within our community about how science communication and media exposure might impact individual scientists. I think this is where much of the moral quandaries exist.
They go on to discuss the hard work scientists have to do examining their own values and, indeed, their ultimate goals, when it comes to communicating to the public, policymakers and media. They also critically examine the practical trade-offs scientists have to make when they prioritize communications work.
These are questions every scientist who does research of public import has had to grapple with, but it’s clear that a new generation of scientists is making a significant argument that Myhre articulates succinctly and powerfully:
Our institutions are responsible for evolving along with us.
Absolutely. The communications landscape has radically shifted since I earned my degree in the field. It will continue to do so under our feet and fingertips. Scientific societies, universities and training programs have to embrace constantly shifting communications best practices and effectively convey them to scientists.
But let’s not let these concerns hold us back, Myhre and Hill argue. All these changes mean we also have room to experiment, to figure out new things and to do so knowing that science has so much to tell us about our world and about ourselves. Myhre and Hill conclude with a hopeful message suitable for framing and desktop backgrounds:
Be brave: there has never been a more important time to be a well-spoken member of the scientific community.
In fact, I found their message so inspiring, I went ahead and made a desktop background out of it. You can download it by clicking on the image below.
(It’s 1600 x 1200 and the base layer image is from NASA — naturally! — and was taken by Apollo astronaut William Anders.)
You can follow Myhre and Hill on Twitter. Their conversation is well worth a full read; it also includes a discussion of routine sexism in media coverage focused on female scientists that will ring true for many readers, too. Myhre also has another excellent Medium piece in which she guides readers through her process for carefully developing main messages around her research.
Tickets are available here and, naturally, you can buy tickets in person at DC9, too.
If you’re wondering what to expect, check out Story Collider’s podcast. Or, as they inspiringly put it:
At the Story Collider, we believe that everyone has a story about science—a story about how science made a difference, affected them, or changed them on a personal and emotional level. We find those stories and share them in live shows and on our podcast. Sometimes, it’s even funny.
What you’re getting into: 1200 words, a 4 to 6 minute read.
Paul Thacker argues in the New York Times that scientists should cough up their emails when politicians, advocacy groups and investigators request them. It’s an interesting thought experiment, but Thacker’s op-ed downplays the value of preventing scientific harassment and fails to make the case that disclosure is actually suffering as scientists defend themselves from various attacks on their work and reputations.
When research is paid for by the public, the public has a right to demand transparency and to have access to documents related to the research.
That’s true, but figuring out what constitutes “documents related to the research” gets to the heart of political and legal disputes on this topic. Unfortunately, Thacker’s piece doesn’t delve into these distinctions, despite a growing body of legal rulings on this topic.
For instance, it’s hard to think of any reason taxpayer-funded data and research shouldn’t be public, except for narrow cases like protecting patient privacy or national security. There’s also broad agreement among scientists, advocates and journalists that correspondence with a funder about the scope and nature of a project should be subject to disclosure.
But a public university scientist’s correspondence with a colleague in which they criticize a peer’s ideas or rate the quality of a grad student’s work, for instance, should not be disclosed, scientists and academic groups have argued. Making such correspondence public, they say, harms researchers’ ability to freely bat around ideas, thus infringing on their ability to do their jobs and their right to free inquiry.
For these reasons, among others, the Virginia Supreme Court blocked a fossil fuel funded non-profit from accessing years of scientific correspondence among climate researchers. The Court said that these exemptions would prevent “harm to university-wide research efforts…and impairment of free thought and expression.”
Exemptions to disclosure laws vary greatly by state, of course – Texas specifically exempts scientific data related to oil exploration – so what gets fairly exempted in one state might not in another, or at the federal level. But it’s clear that courts recognize that there’s significant public interest in preventing harassment and protecting academic freedom at public universities.
Thacker writes that:
the harassment argument should not be used as an excuse to bar access to scientific research that the public is paying for and has a legitimate interest in seeing.
I can’t think of any scientific or academic society or group that has attempted to bar access to taxpayer-funded “scientific research.” We need to be clear here: these arguments are very rarely about access to things like scientific data. They are usually about things like funding and email correspondence.
Of course, we should sympathize with watchdog groups and journalists who already have far too tough a time getting public agencies to comply with FOIA requests. The Society of Professional Journalists, for instance, has a guide to helping reporters rebuff the many silly excuses they get, including from universities, that don’t want to disclose information which should obviously be public. But those problems with FOIA compliance are far broader than the narrow circumstances under which scientific societies and academic groups have asked for exemptions.
Further, it’s not clear that these narrow exemptions are causing the problems Thacker worries about. For instance, Thacker links to a recent freedom of information request that was rejected by a university on harassment grounds. Fair enough, but the rejection is from a British university, where freedom of information (and libel) laws are quite a bit different than they are in the United States. Further, the example involves a dispute among researchers for access to a data set, not an attempt by a politician, watchdog group or media outlet to get access to scientists’ inboxes.
Thacker also cites many examples of disclosure requests revealing corporate interference in science. Again, fair enough, but he doesn’t make the case that the narrow academic freedom exemptions scientists have asked for would have prevented any of those investigations from succeeding. Maybe they could, but there’s at least one high-profile example of an academic who tried to hide suspect financial ties by appealing to such exemptions and lost.
Thacker says that scientists contradict themselves when they embrace transparency on one front, but not another. For instance, scientists have objected to Rep. Lamar Smith (R-Texas) demanding correspondence from NOAA scientists who authored a study that torpedoed a climate contrarian talking point. Thacker’s strongest argument is to point to another notable set of Congressional and FOIA-based inquiries targeting NOAA:
About 10 years ago, the agency released emails showing that officials in the administration of George W. Bush squashed a NOAA statement and that Bush political appointees were selecting which NOAA scientists could speak to the media based on their willingness to deny connections between climate change and hurricane activity.
Is this really a contradiction, though? Those investigations targeted political appointees in the administration who were silencing scientists. Disclosure of that political interference was clearly in the public interest. Rep. Smith’s investigation, by contrast, is much more muddled, especially since it started with questioning the validity of scientific research itself. In each case, scientists have supported efforts that prevent political interference in the scientific process.
These issues are complex and it makes sense that watchdogs like Thacker want to draw a hard line on disclosure laws. In fact, they absolutely should. We benefit when transparency advocates push for more sunlight. But trying to paint scientists as hypocritical on these issues does little to advance transparency. For his part, Thacker concludes his piece with this admonition:
Scientists who profess agreement with transparency only when it is on their terms are really not for transparency at all. The public should be alarmed.
Scientists would argue that the public should be alarmed when politicians and advocates attempt to stymie scientific research they don’t like. The argument scientists and scientific societies have made, repeatedly, is that there is a public interest in disclosure and a public interest in protecting scientists from political interference and harassment. Thacker only acknowledges the former point, arguing that harassment is a price worth paying for fuller transparency.
Transparency advocates could do more to recognize that scientists are right to stand up against political interference in their work. The Climate Science Legal Defense Fund detailed some of the costs of dealing with harassing requests in response to Thacker’s op-ed, too.
At the same time, scientists can do more to be transparent, too. As the public demands greater transparency from legacy institutions – including government agencies and universities – scientists are in a position to push their institutions toward proactive disclosure, including data, methods, funding sources and funding agreements.
Regardless, these disputes over harassment, funding and email disclosure won’t stop any time soon. The best way for scientists – and the public – to enjoy the benefits of transparency and freedom from political interference is to embrace proactive disclosure. If everyone in science were more transparent, the outliers would stick out like sore thumbs, and scientists would be smart to get out ahead of public demands for more transparency.
(I wrote about these issues last year when I was working at – and blogging for – the Union of Concerned Scientists.)
I finally caught up on my reading over the holidays and was pleased to examine a rich presentation of views on science communication from the University of Michigan.
The report is based on a conference the university held called “Academic Engagement in Public and Political Discourse,” which featured many of the leading lights in science communication, such as Dietram Scheufele, who smartly acknowledged that most debates involving science aren’t about facts, but about the “messy space” where science and values intersect.
Similarly, former Rep. Brian Baird (D-Wash.) challenged the participants to consider what flipping the conference’s title might mean and why the idea of “public and political engagement in academic discourse” tends to give us pause. It’s a thought worth contemplating: technology and democracy are making all institutions, including universities, more open to public participation – and public scrutiny. Academics are increasingly embracing that openness, along with greater transparency about their own values.
Andrew Hoffman, who directs the university’s Erb Institute, organized the conference, which also included keynotes from NOAA administrator Jane Lubchenco and the always-positive, always-inspiring glaciologist Richard Alley. Hoffman’s book on climate communication, which I’ve reviewed previously, is an excellent resource for scientists, students and citizens on how people think about climate science and climate policy.
John Abraham has a nice writeup in the Guardian about the American Geophysical Union’s science communication work. The organization’s Sharing Science initiative, in particular, is a growing hub for Earth scientists who are looking to share their work with everyone from kindergartners to cabinet members.
I’ve worked with AGU staff for several years on member workshops and I was particularly struck this year by how ready scientists were to think through tough communications problems.
Like a lot of people who have run workshops with scientists, I’ve often found that I need to lead off by explaining why science communication is a good thing, why it doesn’t have to involve dumbing down your message and why it’s not up to somebody else (the media, the education system) to do it for you. More than once, I’ve had scientists ask me very critical questions about the very premise of doing science communication in the first place. Not that I minded – critical thinking and openness are among the things I love about the scientific community.
But over the past year…I just haven’t had to do that. Scientists increasingly see and feel the need for better, stronger, faster, cooler science communication. And I think it’s easier than ever – thanks to the Internet – to see what happens when ignorance wins out over reason and conspiracy theories, misinformation and just plain goofiness on science-related topics proliferate.
Other societies are doing great work, too, of course, but I suspect AGU has been out front on a lot of communications work, in part, because Earth scientists are used to dealing with public controversies on two big hot-button topics: evolution and climate change. Importantly, it’s a society that’s open to lessons from other fields, too, including epidemiologists, tobacco researchers and historians of science.
There’s a lot to learn when PhDs take on science communication. For scientists, societies are often the very first place they turn to for help. AGU is right to create a “positive feedback” effect of its own when it comes to fostering accurate, effective science communication.
What you’re getting into: 900 words, a 3 to 5 minute read.
Framing is one of the most important concepts in public communication. The term can get thrown around loosely, but in my mind, framing comes down to how we define problems and, as a consequence, how we think about potential solutions.
Most scientists and technical experts tend to define problems on a spectrum, whether it’s the risk of ecosystem collapse, temperature ranges for a warming planet, or the potential side effects of medication. When policymakers and members of the public approach these same issues, though, they often think of such risks in binary terms: Can we save these wetlands? Will we blow past the 1.5 to 2 C warming goal? Does this pill need a warning label?
Often, scientists wish they could help people see things their way: with the risks on a nuanced spectrum. In order to do so, they may have to speak binary first.
In 2009, Malcolm Gladwell wrote, Toyota engineers were having a lot of frustrating conversations with customers who thought their cars had undergone “sudden acceleration.” In some rare cases, there were problems with people’s accelerators. But most of the time, the problem was human error: people were unconsciously hitting their accelerators, something drivers do with much more regularity than we tend to assume. Normally we just tap the brakes to slow down, get our feet back where they’re supposed to go, and go on driving. But Toyota drivers were worried, likely as a result of extensive media reporting about possible problems with the vehicles.
Gladwell elegantly captured the disconnect:
The public…didn’t think about the necessary compromises inherent in the design process. They didn’t understand that a car was engineered to be tolerant of things like sticky pedals. They looked at the part in isolation, saw that it did not work as they expected it to work—and foresaw the worst. What if an inexperienced driver found his car behaving unexpectedly and panicked? To the engineer, a car sits somewhere on the gradient of acceptability. To the public, a car’s status is binary: it is either broken or working, flawed or functional.
Toyota had to help their employees “reframe” their message. Yes, they could talk to customers about sticky pedals and design tolerance, but they first had to acknowledge that customers simply wanted to feel safe in their cars. Toyota went so far as to offer to replace perfectly fine vehicles if people felt unsafe in them. According to a management expert Gladwell interviewed, it completely turned things around. Instead of feeling ignored, customers started sending “love letters” to the company. (Gladwell doesn’t address this, but I’m assuming Toyota didn’t have to actually replace thousands of cars unnecessarily. As with many other such offers, it’s the thought that counts.)
A former colleague was testifying before Congress about fisheries once. He told a committee that there was a 95 percent chance a certain fishery would collapse over the next several years without intervention. A Congressman responded by asking him to come back when he was 100 percent certain.
If you’re a scientist reading this, I know you’re shaking your head. One of the bedrock truths in science is that nothing is 100% certain. Even if the fishery collapsed, perhaps scientists would cautiously state that there was 99% certainty that all the fish were gone based on available data.
The Congressman was demanding a binary answer: tell me if it will collapse or not, yes or no. But science often doesn’t do binary, especially on topics that the public and policymakers see as controversial.
In 2013, the Intergovernmental Panel on Climate Change made big news when it announced that scientists were 95 percent certain that industrial carbon burning and other activities were causing global warming.
In attempting to explain where this basic conclusion of climate science sits on the certainty spectrum, the AP’s Seth Borenstein asked researchers what else in science enjoys that same level of certainty. He got some interesting answers, including the link between smoking and lung disease.
Most people, myself included, have not internalized certainty levels and percentages the same way scientists have. What made Borenstein’s article particularly effective was that it translated a spectrum frame into a binary one:
Importantly, both of these questions are useful and both of these answers are accurate. We don’t have to choose between them and, in fact, people might ultimately need both frames to understand scientific evidence about societal risks.
What you’re getting into: about 3500 words, a 12-18 minute read
Scientists often assume that journalists are on their side when it comes to educating the public about scientific topics. That’s true for a lot of basic science, like, say, when journalists write about the discovery of a new exoplanet or explain the work of a scientist who just won a major prize. Those typically aren’t controversial topics, so scientists and journalists alike are simply trying their best to explain some cool science.
The second we start talking about anything perceived as controversial outside the lab, though, the rules of engagement can dramatically shift. It’s incredibly easy for scientists, science communicators and journalists to talk past one another when we’re dealing with topics like climate change, vaccines, evolution and genetic engineering, as well as science funding. And it can happen when journalists hold scientists and scientific institutions accountable, too.
The good news, I think, is that we can do better. And doing so requires being clearer about when we’re talking about science and when we’re talking about competing values and how science fits into societal debates.
Below, I offer a story, some observations and suggestions. I’d love to hear more.
I talked past a reporter pretty badly back in 2011. Members of Congress had invited several scientists to testify about whether or not the Environmental Protection Agency’s efforts to reduce heat-trapping emissions were justified. One member — a lawyer by training — used his time to pepper scientists with loaded questions while demanding simple yes or no answers, a standard tactic at such hearings. Of course, that’s anathema to any scientist.
Here’s how a major news outlet ended an article about the hearing:
Mr. Griffith also wanted to know why the ice caps on Mars were melting and why he had been taught 40 years ago in middle school that Earth was entering a cooling period.
“What is the optimum temperature for man?” he asked. “Have we looked at that? These are questions that, believe it or not, I lay awake at night trying to figure out.”
The scientists promised to provide written answers.
Like a lot of folks working on climate science communication at the time, I thought this was a problematic ending. To a reader unfamiliar with these issues, it could sound like these were mysterious questions for which science had no good answers. (Briefly, here are answers on Mars, 1970s climate science, and why rates of change are more worrisome than absolute temperature.)
I fired off an email to the reporter, arguing — quite well I thought — that his reporting was unfair to the scientists who testified and detrimental to public understanding of science.
He told me, in so many words, that edifying the public about Martian climate variance wasn’t the point of his article.
First of all, I wasn’t the first person to contact him, so he felt like he was getting pressured (reporters hate that). Second, his reporting on the hearing was accurate. That was, in fact, what happened at the hearing, and an informed reader, he argued, would know exactly where the politicians and scientists stood in relation to one another. Further, his story also focused on an exchange in which a representative made it clear that climate science — and risks from industrially driven climate change — were well-established in the scientific literature.
I realized that in his mind, my complaint wasn’t really about science; my complaint was that he hadn’t beaten up a member of Congress for giving scientists a hard time.
We also had different audiences in mind. My complaint was based on the assumption that the article’s audience would be otherwise uninformed about climate science or policy. He assumed that readers would be well-armed enough to draw their own conclusions.
Maybe I was right, but that and $3.25 will get you a Chai Latte at Starbucks. The point is that I was telling him to do science communication and he was reminding me that he was doing political reporting. In the ensuing years, I think journalists have done a better job reminding readers where climate science stands when politicians challenge or reject the evidence, but the exchange taught me a broader lesson: just because a story has a lot of science in it doesn’t mean it’s going to get treated like a science story.
Journalists and scientists do both care deeply about accuracy and credibility. It’s tempting to say that it’s because the noble ideals of both professions rest on uncovering the truth and boldly going where the facts lead, regardless of one’s beliefs or biases. And, yeah, okay that’s true, but the day-to-day is a lot more brass tacks: in both professions, credibility is currency and too many errors over time can sink a career.
Real errors are a problem, of course. And scientists and journalists are both sometimes guilty of intransigence when people point out errors in their work. Regardless, both professions benefit from the self-correcting nature of the larger enterprises around them. A bad story will get fact-checked by other outlets in ways that are similar to how a bunktastic scientific paper will fail replication by other scientists.
The problem I’m writing about isn’t really about factual errors, though; it’s about what happens when science-related stories move out of the lab, into the world, and yes, into the political arena. We need to be careful about how we think and talk about accuracy in that context, because it’s easy to talk past each other based on assumptions about what audiences know and what role journalism is playing in a given debate.
This is important to get right because science is still the best tool we have for learning about the world and journalism is still the best tool we have for informing the public about what those scientific tools have uncovered.
Scientists care deeply about what policymakers and the public think about their fields, especially on issues that are perceived as controversial. When politicians and interest groups seek to highlight, inflate and manufacture controversies, scientists’ desire for accuracy often puts them in the position of wanting journalists to downplay or actively challenge those outside attempts at influencing the public and focus on what is well-established among scientists.
But when those same outside interest groups focus on controversies, it’s journalists’ job to report on them. Their commitment to fairness means bringing in all the stakeholders in a debate and reporting what they believe and why, even when it cuts against the science.
So sometimes, when scientists are demanding accurate reporting, what they’re really asking is for journalists to critically assess inaccurate views from outside the scientific community. Journalists can’t always do that, especially on deadline when they’re covering noisy policy fights. I’d argue that this often puts the onus — rightly or wrongly — on scientists to repeatedly make their views clear to journalists and media outlets. That means consistently reminding journalists what scientists have to say about these topics and why prevalent misinformation is wrong.
Of course, journalists have a responsibility, too. They can’t pass on inaccurate information simply because there are quote marks around it. Journalism professor Jay Rosen, for instance, describes several ways reporters can handle political disputes about established climate science ranging from explaining the ideological roots of rejecting climate science to simply noting what the science does say in their own journalistic voice. Additionally, media outlets have a special responsibility to report on industry attempts to influence the public and policymaking, whether on climate change or toxic chemicals.
The bottom line is that scientists and science communicators shouldn’t conflate their disappointment with some media reporting with their deeper disappointment in a society that is often simply out of step with scientists on a host of topics. It’s journalists’ job to report on science-related societal controversies accurately, but it’s not journalists’ job to actively push the public toward established science. That also means that science communicators and scientists need to think more about how they can help journalists do effective, accurate reporting around contentious societal debates.
There’s another type of complaint scientists often have with reporting on and around science: the story is going to be abused by people who want to attack the broader scientific field.
For instance, scientists understandably gripe about the “Darwin was wrong” trope that regularly pops up in biology reporting. In 2009, New Scientist even used it as the title for a cover story. Scientists bemoaned the choice, noting that creationists quickly hopped on the article as “evidence” that mainstream biology was in shambles.
Of course, anyone motivated enough to pick up a copy of New Scientist probably already has their mind made up about the theory of evolution, but scientists rightfully worry about how groups outside the scientific mainstream will use — and more often, abuse — reporting on scientific topics. It can happen with any scientific finding, even seemingly routine ones, on vaccination, industrial agriculture, dietary and nutrition choices, and anything anyone wants to pick a fight about for reasons that usually have nothing at all to do with actual science. Because scientists enjoy so much public trust, advocates always want to have science on their side, so they’ll comb through literature, trade reports, and science-related press releases and media coverage hunting for anything they can use (and dismissing what they can’t).
Ideally, media outlets should anticipate this sort of thing.
Here’s that New Scientist cover.
And here’s how National Geographic arguably handled it better with a clear message for people who bothered to crack the magazine open.
Of course, science communicators and scientists would probably much rather see something like this.
To which a science journalist might say: love the Warhol thing, but where’s the conflict for a good story?
Scientists and journalists had to artfully deal with a rather odd combination of substance and perception recently when a NASA-sponsored study — by all accounts, an outlier — found that Antarctica is gaining ice mass overall even as the West Antarctic ice sheet continues to melt, as sea levels continue to rise, and as global warming goes on broadly in line with what scientists have been saying about it for decades.
At first blush, the study’s findings are a head-turner that runs counter to the simple main message the public has heard from scientists for decades: global warming melts ice and raises sea levels pretty much everywhere. Of course, there are a lot of nuances under that statement, which scientists have talked about repeatedly, especially when it comes to the rate of melting and the geographic differences between places like Greenland and Antarctica. But at the level of headlines and broad public awareness, this was surprising news.
Predictably, ideological media outlets that routinely criticize mainstream climate science used the study to try to throw cold water on climate science. Here’s an opinion writer taking a fat, sloppy swing at it in the UK’s Express:
Nothing like ALL CAPS to make the CREDIBILITY OF YOUR ARGUMENT clear.
Some mainstream outlets jumped on it as a surprising study. From their perspective, it wasn’t their main job to beat the public over the head with the basic science on global warming and melting ice sheets or to correct what those ideological sources have said: it was their main job to report on a new and interesting “man bites dog” science story.
USA Today, with its incredibly broad audience, probably captured that reaction best:
Other journalists and outlets, notably Chris Mooney at the Washington Post, went out of their way to put the study in deep scientific and policy context. They and their editors even used valuable headline space to address potential misinformation about the study, something that almost never happens when outlier studies get big coverage.
Andrew Freedman at Mashable took a similar approach in his reporting, while the headline took on the inaccurate narrative about the study directly.
Of course, Mooney and Freedman are well-versed beat reporters with arguably more engaged audiences. That’s the exception, not the norm, and the onus is still on scientists and scientific institutions to anticipate inaccurate takes on new research and plan their communications accordingly.
For its part, NASA’s social media account tried to squeeze as much nuance as it could into 140 characters:
— NASA (@NASA) October 30, 2015
Still, the agency’s press release might have done more to emphasize what is known about long-term ice loss and sea-level rise globally. Interestingly, the study’s lead author was pretty blunt, in an interview with Nature, about how people outside the scientific community would misrepresent his research.
“I know some of the climate deniers will jump on this, and say this means we don’t have to worry as much as some people have been making out,” he says. “It should not take away from the concern about climate warming.” As global temperatures rise, Antarctica is expected to contribute more to sea-level rise, though when exactly that effect will kick in, and to what extent, remains unclear.
Such awareness is common among scientists working in controversial fields and they should be open about it, just as public health researchers devote plenty of time and thought to how their own studies are received. It’s all about helping audiences — and reporters — enjoy an accurate view of the science.
Buzzfeed’s Brooke Borel recently wrote about controversies surrounding biologist Kevin Folta and communications work he did related to GMOs, some of which was done in coordination with biotech companies running anti-labeling campaigns. Naturally, pro- and anti-GMO forces attempted to assign ideological positions to Borel’s article, but there was another thread of more interesting criticism (at least for me). Some scientists complained that the article would 1) provide more ammo for anti-GMO groups attacking Folta and other scientists and 2) discourage other researchers from doing science communication.
Borel’s response was straightforward and sensible. In a series of Twitter messages she wrote: “As science writers/journalists/etc, we hold a strange position sometimes. I love science. I admire scientists. But it’s also my job to think about both critically. My job as a science journalist is not to advocate for science and scientists at all times, no matter what.”
Indeed, scientists are often public figures who can and should face public criticism from time to time: many enjoy taxpayer support and they are often trusted, powerful figures in society. So even while scientists and science communicators rightfully condemn politicized attacks on researchers, they should also expect and even welcome journalistic scrutiny. Another journalist, Rose Eveleth, put it well, too:
Journalists don’t work for some vast “Science Is Awesome” campaign. Our job is to report, the good and the bad. To hold folks accountable. — Rose Eveleth (@roseveleth) October 21, 2015
When scientists get involved — or unwillingly find themselves involved — in public communication on controversial science-related issues, we’re not in the world of pure science reporting any more. In these debates, scientists are just one of many actors pushing for their voices to be heard above the democratic din.
Even science education and science funding choices aren’t purely about science. As climate scientist Gavin Schmidt has argued, any societal debate that involves science also involves value judgments.
Science gets inserted into these debates in perfectly accurate as well as questionable ways all the time. It can be tough for journalists and scientists to figure out how to best respond. But I think we can all do better.
Far be it from me to pontificate about a host of complex problems without at least suggesting some solutions. Here are a few ideas for how scientists, journalists and media outlets, as well as press officers at scientific institutions can help address these issues. (I’d love to hear feedback and talk about additional ideas.)
For journalists (and media outlets)
For institutions and press officers
A few additional thoughts based on some feedback from Lexi Shultz, director of public affairs at the American Geophysical Union (and a former colleague):
On climate specifically, the University Corporation for Atmospheric Research curated a lot of great resources as part of their Climate Voices initiative. If you’re interested in helping audiences sort through how their values relate to scientific findings, I strongly recommend this presentation by Jeff Kiehl, who not only has degrees in natural science, but who is also a licensed analyst. (Pretty cool, huh!)