Science gets shared all over social media these days, and
thanks to the wonder of the comments section, people get to have their say on it.
Sometimes comments are a valuable part of the discussion, and scrolling down will reveal extra details, well-thought-out arguments and
interesting angles on the information you’ve just read. On the other end of the
scale, we have the cesspit that often forms underneath articles about any
science considered “controversial” by some laypeople: think vaccines, GMOs and
climate change. Great entertainment if you have a taste for melodrama. Not
great if you get a headache from READING CAPSLOCK AND EXCLAMATION MARKS!!!! I'm not going to touch on those here, as plenty of other bloggers have dissected the strange and often illogical arguments used by those with a penchant for denying science. In the
middle ground, we have articles about new scientific discoveries on news
websites or popular science social media pages. It’ll be on something like animal
behaviour, or healthy habits, or a promising new disease treatment. It’s great
that people can interact with research and respond to it in this way, and
negative comments aren’t always unjustified: the internet is full of bad
science and badly-communicated science alike. However, even the best bits of
research seem to attract whiny commenters, who often bring up the same old
complaints. Here I’ve compiled some moans that I see very frequently underneath
science articles, and some tips on how you can tell if they’re valid. Are you
really battling against pointless, badly-done science, or have you simply
misunderstood the intentions of some pretty decent research?
1) “That’s so obvious! We didn’t need a study to
tell us that!”
Why we write it: We’ve all done it: a science headline pops
up proclaiming some apparently cutting-edge research, and your overwhelming
reaction is “No shit, Sherlock”. Something like High Heels Are Bad For Your Feet, or Cats Recognise Their Owner's Voice. What next, a study on
the faith of the Pope?
The other side of the story: We like to think that our
everyday observations about the world’s workings are pretty accurate. Our brains
are fantastic at making connections between different things, inferring
cause-and-effect, grouping things into categories and predicting stuff. Trouble
is, these connections are often wrong, and it takes science and all its tinkering
under controlled conditions to untangle what’s real and what’s not. What if cats actually recognised us by smell, not sound, and they simply came to investigate the noise when we shout "I'm home!"? What if our experiences of having sore feet after wearing high heels led us to blame them for long-term damage, even if there was no true link? For every study that we roll our eyes at,
there will be another that surprises us and challenges our established knowledge. For example, a recent analysis of research into human brain size found that it has no meaningful connection to IQ- an idea that flies in the face of most people's “common sense”. It’s always worth checking!
How to approach the issue: If you’re not sure whether that study with the obvious
outcome was a lazy attempt at science or not, think about the bigger picture. One
of the most important steps in scientific research is testing your assumptions.
For example, if I want to examine wild crocodile poo to find out whether crocodiles prefer to eat puppies or kittens, I first need to make damn sure that
puppies and kittens both live in crocodile habitat, or my interpretation of my results
will be wrong and my study will be a big waste of time.
| "She's the fattest! Eat her!" |
Sometimes, the assumptions made by a field of research are so big that they need entire studies to themselves. If we didn’t check the things we consider to be common knowledge and ploughed on, only to later find out our assumption was wrong… all the work that we built upon it would become useless, or would at least need reinterpretation. Does the study sound like it could fit into a bigger scheme of research, or does it seem more like a publicity stunt?
2) “X was good for us last week, and now they’re
saying it’s bad! This is why you can’t trust science”
Why we write it: It seems like science cannot make its mind
up about whether red wine and dark chocolate are edible angels or the Antichrist. One
minute we’re dipping a whole bar of Green & Black’s into a pint glass of wine,
fuelled by the promise of a long life and freedom from heart disease, the next
we’re tipping the Pinot Grigio down the drain and sobbing because we’re told
we’re going to die fat and diabetic and covered in tumours.
The other side of the story: You can (mostly) blame the media for
this one. Yes, there is a grain of truth in it: there are a lot of conflicting
studies about the health benefits or drawbacks of particular foods and
lifestyles, scientists don’t always agree on the components of a
healthy diet, and nutrition science has its fair share of problems. If you look closer, however, each identical headline probably has a
slightly different study behind it, even the ones that appear to come to the
same conclusion. One might be comparing antioxidant content (another group of substances
with mixed evidence behind them), without looking at how this affects real
people. Another might follow a group of people with varying chocolate
consumption over time, and see what happens to them. A third might also follow
a group of people, but build on the study design in some way (e.g. increasing
the sample size). All of these studies will have their own advantages and
flaws, and cannot paint a conclusive picture on their own. Yet the media will treat them all the same, jumping on their conclusions and sensationalising them, because we love that
bollocks and will eat it up without thinking. Just look how excited the media
got over a fake study that suggested chocolate was an effective weight-loss aid.
How to approach the issue: To find out whether the
conclusions you’re reading are worth listening to or just an excuse to print a
stock photo of a sexy lady eating chocolate, look for:
1) Quotes from the scientists
If you’re lucky, the article will have printed something
that the actual researchers said about their study. Usually this is stuck right
at the end, because it’s something boring like “Yeah, we found wine can
be good for you, but only if you drink ONE of those small glasses like twice a
week” or “These results are only preliminary, and we’ll need more work to
determine whether these patterns are real or not”. If not, or if you’re unsure
whether the article just lifted a tiny, out-of-context soundbite, see if you
can find the actual scientific paper, or at least an analysis that isn’t in the
Daily Mail.
2) The type of study
It’s also important to see whether these conclusions are
from an individual study, or the Optimus Prime of scientific papers-- a
meta-analysis. After a lot of papers have come out on a particular issue, an
expert in the field will review all those studies. He or she will take the
differences in method, sample size and any flaws into account, discard
badly-done studies and compare similar ones. Say we’re doing a meta-analysis
into the risk of heart disease and its association with consuming red wine
mixed with melted chocolate. The researcher will compare the size of the
positive and negative effects found in individual studies, and using lots and
lots of statistics, will pull out an overall figure; let’s say that in this
case, the risk of heart disease falls by 8% for every extra 100ml of choco-wine
consumed in an average week (we can but dream). Of course, this method has its
own flaws, and different meta-analyses may come up with different conclusions,
but a news story about this kind of study will probably be a lot more reliable
than one about a lousy study on 5 people done by students at the Université de
Champagne.
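For the statistically curious, here’s a very rough sketch of the simplest version of that number-crunching: a fixed-effect, inverse-variance meta-analysis. The study effects and standard errors below are completely invented for the choco-wine example, so don’t read anything into them.

```python
import numpy as np

# Hypothetical results from four choco-wine studies: change in heart-disease
# risk (%) per extra 100ml consumed per week, each with its standard error.
effects = np.array([-10.0, -5.0, -9.0, -7.5])   # invented numbers
std_errors = np.array([4.0, 2.5, 3.0, 1.5])     # invented numbers

# Fixed-effect pooling: weight each study by the inverse of its variance,
# so precise (usually bigger) studies count for more than noisy ones.
weights = 1.0 / std_errors**2
pooled = np.sum(weights * effects) / np.sum(weights)
pooled_se = np.sqrt(1.0 / np.sum(weights))

print(f"pooled effect: {pooled:.1f}% (SE {pooled_se:.1f})")
# -> pooled effect: -7.4% (SE 1.1)
```

Real meta-analyses are far more involved (random-effects models, bias checks, judgements about study quality), but the core idea is exactly this: combine many imperfect estimates into one better one.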
3) “Correlation does not equal causation”
Why we write it: There’s a time and a place for this
complaint. If you’re having a great time shooting down crazy conspiracy
theorists in the aforementioned cesspit-type comment sections and they pull
out an impressive-looking graph of two unrelated variables rising and falling
in lockstep, remembering that correlation does not equal causation allows you
to see the flimsiness of their point and counter it in kind. 2000 internet
points to you, ma’am.
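Such graphs are alarmingly easy to manufacture: give any two unrelated quantities a shared trend over time and they’ll correlate beautifully. Here’s a quick Python sketch with invented series, just to show how little a big correlation coefficient can mean on its own.

```python
import numpy as np

rng = np.random.default_rng(42)
years = np.arange(2000, 2020)

# Two completely unrelated quantities that both happen to drift upward.
cheese_eaten = 30 + 0.5 * (years - 2000) + rng.normal(0, 0.3, years.size)
phds_awarded = 1000 + 40 * (years - 2000) + rng.normal(0, 25, years.size)

# Pearson correlation is huge purely because both series share a time trend,
# not because cheese consumption produces doctorates (or vice versa).
r = np.corrcoef(cheese_eaten, phds_awarded)[0, 1]
print(f"correlation: {r:.2f}")  # typically > 0.95
```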
The other side of the story: I get it, this phrase is really
important, and something that we should teach kids in schools to keep in their
bullshit-spotting toolkit forever and ever. It starts to get grating, however,
when you get a keyboard scientist writing this under every science article in a
lazy attempt to sound like a sexy, intellectual sceptic. Yes, correlations are not very solid evidence
on their own, but they DO have a place in scientific research. People often
expect scientists to start working on a problem and not stop until they’ve
solved it, tied off the loose ends and ironed out any conceivable problems that
could ever arise. However, science doesn’t work like that. Funding tends to
arrive in packages that allow a certain amount of research to be done: money
for a PhD student to work for 3 years, for example. What the lab has to show
after that money runs out will depend on the field and the subject. If it’s a
particularly intensive and difficult-to-study topic in an early stage of
investigation, you might just end up with little more than some hard-won but interesting
correlations. IF BACKED UP WITH SUFFICIENT EVIDENCE- i.e. other explanations
for the pattern have been properly explored and discarded- a correlation isn’t
as useless as you might believe. It certainly doesn’t PROVE a causal link
between two factors, but it’s a good first step and can highlight
associations that might be worth investigating more thoroughly. A nice,
well-argued correlation is good grounds for obtaining that next batch of
funding.
How to approach the issue: Is anybody claiming
that this correlation = causation? If so, who? Scientists and research
institution press offices are definitely not blameless when it comes to
exaggerating the benefits or future reach of their findings, but even the most
cautiously optimistic quote can be spun out of proportion by the media and overexcited
internet personalities. Plus, scientists who work on obscure or early-stage
research may have to search a long way from their actual work to find something
that seems immediately relevant to the general public: bear this in mind when a
wild headline turns out to be founded on a much less exciting, flimsy-looking correlational
study. This may just be the early stages of their research. So yes, absolutely
remember the old c≠c rule and apply it when deciding what to take away from a
science story. But let’s add a new phrase to our repertoire: correlation does
not (necessarily) equal a crappy study.
4) “What a dumb/pointless piece of research, why
are my taxes funding this?”
Why we write it: People often have strong opinions on the
kind of science that should be funded. Cancer treatments for children? Yes
please! But spending $880,000 on a study of the mating behaviour of New
Zealand mud snails? Or those stupid news stories that appear when some academics
claim to have found the perfect formula for making a bloody cup of tea? And my
taxes are paying them to do this?! How is this benefitting humanity?! How?!
The other side of the story: Those two despicable examples are from quite different sources. The tea formula is real and has been thought up
several times by different people. It was of course widely reported in the
media each time, because the concept of tea doesn’t take much explaining. Those
crazy scientists, spinning on their spinny chairs in their fancy expensive science
labs and thinking of rehashed publicity stunts to waste money and get
attention. What are they like? Well this is what Daily Mail readers think of them:
| Yeah, Roberto, fight the power! You should join forces with Steve from Nottingham, he has a real problem with tea science. |
With the help of Google, I can find at least 3 different “tea
formulas” that have been devised over the years. The first appeared in 1980,
with involvement from the Ministry of Agriculture, Fisheries and Food, to
standardise the procedure for professional tea tasters (now there’s a job to
aspire to). The second was in 2011, by researchers at the University of
Northumbria, and was sponsored by Cravendale (a milk company). The third, in
2014, was the result of a competition run among maths undergraduates at the
University of Leicester, this time sponsored by the building company Jelson
Holmes (because builders love tea. Get it?). Cheap, unoriginal publicity stunts that prey on
our country’s addiction to tea? In the latter two cases, yes. Waste of public
funds? No, seeing as the latter two had no public money input, and the first
was a legitimate attempt at industrial quality control. As with the Leicester formula,
you’ll often find that silly headlines originate from work done by science
students. There’s no harm in that: fun research projects are a good way for
students to hone their skills, especially at the stage when they’re not skilled
enough to get involved in the university’s main research areas. I fed squirrels
in the park in my second year to learn about their caching behaviour: not worth
a headline, but I’m sure if an article appeared in the paper calling me a
“scientist” rather than a student, people would get cross about the apparent
waste of money.
Concerns about the "mating snails" example are all too common with
austerity on the rise, and it’s understandable. Dig a little deeper, however,
and blue-skies research often isn’t as silly as it sounds. Mucking about to
see how things work is a very important part of science, even though we might
not see the benefits for some years. It’s called basic research, and it sets
the scene for more complex (and hopefully applicable) research in future.
Naturally, not every bit of basic research leads to a new medicine or piece of
technology, but it’s almost impossible to tell which studies will! Rich
Victorians with nothing better to do, like Charles Darwin, gave us some
incredibly important basic research. Without Darwin and friends pootling around
with dead birds and barnacles, we wouldn’t be able to comprehend how bacteria
evolve to resist antibiotics, or understand the contents of our own genome. In the case of the
aforementioned snail sex study, there are several ways it could go. Voyeuristically
watching New Zealand mud snails make babies could inform methods to control them:
they’re an invasive species in many regions, with effects all the way up the food chain.
This species is also a good candidate for studying schistosomiasis, an extremely nasty
human disease caused by parasitic worms that snails carry in tropical
regions. In essence, this grumble is linked to my first point: science is a
process that moves in incremental and non-linear steps, and if you take a
snapshot of it at any one point in time, you’re going to capture different
investigations at different stages. It’s then easy to point the finger at
apparently “pointless” or “useful” research, when in fact it’s all part of
the same machine. Cut out one part and soon many other vital parts will grind
to a halt. And trust me- scientists spend an enormous amount of their time
writing proposals in order to get funding. It’s not easy: public funding bodies don’t
hand out grants like they’re sweeties. Shoddy science obviously exists and gets
funded on occasion, but we shouldn’t assume that every study is a waste of time
just because its merits aren’t immediately obvious to us.
How to approach the issue: Again, because the bigger picture
of science can be so confusing as to make good science look pointless and vice
versa, it’s often hard for laypeople to figure out whether a piece of research
is to be respected or ignored. If you still suspect that a science headline is a heap of decaying bollocks, chances are that actual
scientists will think so too. Nothing riles up a scientist more than their
field being misrepresented in the media, or a bad piece of research being
paraded around without question. Go and look at blogs and articles written by or quoting (well-respected) scientists and science bloggers, and see what comments they have on that
research. Hell, drop them a message and ASK for their point of view, they’ll be
flattered! Here we have takedowns of:
1) The widespread misinterpretation of the WHO's categorisation of red meat as carcinogenic
2) A study that supposedly found that tardigrade genomes contain an enormous amount of foreign DNA
3) A study widely reported as showing vegetarianism to be more environmentally unfriendly than meat eating
----------------------------------------------------------------------------------------------------------------
I'll go back to talking about individual bits of interesting science soon- this was just something that struck me when looking at a popular Facebook science page and the reactions of many of its visitors! As long as it is done by fallible humans, the funding, execution, and reporting of science can never be perfect, but knowing a little more about how contemporary science is done can go a long way in understanding the research in the news, why it's there, and how truthfully it's being represented. Please don't be "that person" making a fool of themselves in comments sections!