Tuesday 27 March 2012

Reductionist studies and controlled trials are a waste of time

We need answers to wicked problems 
Reductionist research methods don't work when applied to real-world, wicked problems that are highly complex. By reductionist methods, I mean experiments where one part of the problem is isolated from the rest and studied ad infinitum. For example, a lot of medical research involves isolating individual proteins, growing them in bacteria or human cells, and then shining light at them, throwing chemicals at them, generally doing stuff to them. But what does it really mean?


Does caffeine cause cancer?
For example, I read a research paper published in 2006 that found that caffeine inhibits DNA repair when you grow cells in media containing caffeine. If you're a caffeine drinker, this sounds a little bit scary. You don't want your DNA repair to be inhibited or else you might wind up with cancer!

But is this finding actually meaningful? There are so many confounding variables. The human body isn't just a collection of individual cells. It's this dazzling choreography of different cell types, signalling molecules, chemicals, even different organisms (all those bacteria in your gut)! How can you realistically infer whether caffeine consumption is really linked to cancer?

Would epidemiology work?
You could do epidemiological research. You could look at a large group of people and ask them to complete a survey that quizzes them on their caffeine consumption, their health, their nutrition and other variables. And you might find that caffeine consumption is significantly correlated with cancer rates, therefore PROVING that caffeine consumption causes cancer. But even then, can you really authoritatively say that it's the caffeine that's doing it? What about the variables that are linked to caffeine consumption that you're not measuring? Maybe it's not caffeine itself, but a personality type associated with it: overachievement. Maybe people who drink too much caffeine are genetically earmarked to 'live fast, die young'. Or maybe it's not the caffeine, but the milk that they have with the caffeine. Or maybe it's not the caffeine but how hot the liquid is. Maybe they're getting throat cancer because they're in such a rush that they drink their coffee while it's scalding hot. Or maybe it's not the coffee but the kettle. Maybe the kettle is made out of plastic that leaches BPA when the water is boiled. The things you least expect to matter could very well be the things that matter.
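To see how easily a confounder can manufacture a correlation, here's a toy simulation (entirely made-up numbers, just to illustrate the point): a hidden 'rushed lifestyle' factor drives both coffee drinking and cancer risk, so the two end up correlated even though caffeine does nothing at all in this little model.

```python
# Toy simulation (made-up numbers): a hidden confounder creates a
# caffeine-cancer correlation even though caffeine has no effect here.
import random

random.seed(0)
n = 10_000
caffeine, cancer = [], []
for _ in range(n):
    rushed = random.random()                          # hidden 'live fast' lifestyle factor
    cups = 1 + 4 * rushed + random.gauss(0, 0.5)      # rushed people drink more coffee
    risk = 0.01 + 0.05 * rushed                       # ...and have higher cancer risk
    caffeine.append(cups)
    cancer.append(1 if random.random() < risk else 0)

# Compare cancer rates in the bottom and top halves of caffeine consumption
pairs = sorted(zip(caffeine, cancer))
half = n // 2
low = sum(c for _, c in pairs[:half]) / half
high = sum(c for _, c in pairs[half:]) / half
print(f"cancer rate, low-caffeine half:  {low:.3f}")
print(f"cancer rate, high-caffeine half: {high:.3f}")
# The high-caffeine group shows more cancer, yet caffeine never entered
# the risk formula; the correlation comes entirely from 'rushed'.
```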

There are too many potential confounding variables
You could attempt to control for all of these factors. But at the end of the day, you can't do it. There's no way you can ever prove that caffeine causes cancer. Still, you can publish that research and say: look, we can't prove it, but we think caffeine gives you cancer. So if you want to reduce your risk of getting cancer, you could stop drinking caffeine. That's what scientists are doing with climate change. Sure, they can't prove that increased carbon pollution will lead to global warming. But they can give us their best guess. And given how dire the consequences are (sea level rise, mass extinction, ocean acidification, frozen methane melting in the Arctic), it makes sense to take action to mitigate the risk of climate change. Just as it makes sense to take action to reduce your risk of cancer. (As a side note, I've stopped drinking coffee partly for that reason and also because I find that when I drink too much coffee, I make bad decisions.)

Are controlled trials a suitable alternative?
So if you can't prove anything through reductionist research, what can you do? You can do controlled trials. Rather than studying a phenomenon in a lab or on your computer, you can test it in the real world. You can get a group of participants, force-feed them lots of caffeine, watch them for a decade or so and see if they get cancer. (Oops, wait, you probably can't get that past an ethics committee these days!) You can recruit a bunch of people, ask them to reduce their caffeine intake systematically for the next ten years, and ask them to PLEASE not change anything else in their life because that would affect the results. So if they're not exercising, they need to stay unfit, and if they're eating junk food every day, they need to keep up that lifestyle, otherwise they'll screw up the results of your perfect study. At the end of those ten years, you might be able to conclusively say "caffeine consumption beyond three cups per day increases the risk of cancer." Hooray for you! You've found out something really major.

Ethical considerations of controlled trials
You might have even helped some of your participants avoid getting cancer. Go you! But what about the people who weren't exercising and the people who were eating junk food at the start of the trial? The people you absolutely forbade from doing anything else that could make them healthy because it would mess up your results. What if one of them got cancer? How would you feel about that, especially if it had since been shown that exercise reduces the risk of cancer? You'd feel pretty bad. But you had to do it, otherwise your trial would have been messed up. Ah well, research has collateral damage sometimes, right?

An alternative to controlled trials: action research
Peter Checkland (a pioneer of action research) would say: wrong! Research doesn't need to have collateral damage. He advocates an alternative to reductionist research and controlled trials: action research. In action research, the emphasis is not on proving a hypothesis (caffeine consumption causes cancer), it's on solving a problem. So for our example, if you were to take an action research approach to the caffeine study, your research objective would be: stop people getting cancer.

Non-adherence is a good thing!
For the first year, the action research study would look pretty similar to the controlled trial. You'd recruit a bunch of people and ask them to stop drinking as much coffee. But here's where it would change: after the first year, you interview 100 of your subjects and ask them how they've been going. When you speak to John, he has some shocking news! He's been drinking MORE coffee but he's started going to the gym and has lost 10kg and boy is he feeling great!

Instead of jumping over the table, grabbing John by the throat and saying "You idiot! You've messed up your beautiful data! I'll kill you for this!", you would say, "John that's incredible! Well done! Why don't you tell all the other participants about this healthy lifestyle change you've made?"

Find out what really works, not what you think works
So you grab a video camera, film John telling his story and send it round to all the other participants. Pretty soon lots of people are coming back with healthy lifestyle changes that they have made. Susy has started climbing the stairs every day instead of taking the lift. Robert has stopped drinking alcohol except on weekends. Andrew has started going to Overeaters Anonymous.

Measure everything
Wow! Suddenly the scope of your study is getting much bigger. You were only interested in one thing at the beginning (caffeine consumption), but now you're tracking a whole bunch of different strategies. You need to keep track of who's following what regime and whether it's working for them, so you send out a survey and ask people to fill it out every three months. You ask them: how's your health right now (your weight, your energy levels, your happiness, your sleep - making it highly quantitative, of course) and what's working for you (how much caffeine are you drinking? how's your eating? how much are you exercising?).

Change the model during the study
After you've analysed the first survey results, 15 months into the study, you discover a pretty compelling trend: the people who cut their caffeine consumption and started exercising have lost a lot of weight, and are feeling better overall. Fascinating! People need to know about this. You publish your research immediately. And then when the government grants start rushing in, you recruit more study participants and this time you tell them: you need to cut back your caffeine intake and start doing some exercise.
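To give a feel for the number crunching, here's a rough sketch of how you might compare groups. The survey records and field names are invented for the example, not from any real study.

```python
# Rough sketch with invented survey records: group participants by the
# strategies they report and compare average wellbeing scores.
from collections import defaultdict
from statistics import mean

responses = [  # hypothetical quarterly survey data
    {"id": 1, "cut_caffeine": True,  "exercises": True,  "wellbeing": 8},
    {"id": 2, "cut_caffeine": True,  "exercises": False, "wellbeing": 6},
    {"id": 3, "cut_caffeine": False, "exercises": True,  "wellbeing": 7},
    {"id": 4, "cut_caffeine": False, "exercises": False, "wellbeing": 4},
    # ...hundreds more in a real study
]

groups = defaultdict(list)
for r in responses:
    key = (r["cut_caffeine"], r["exercises"])
    groups[key].append(r["wellbeing"])

for (cut, ex), scores in sorted(groups.items()):
    print(f"cut caffeine={cut}, exercises={ex}: "
          f"mean wellbeing {mean(scores):.1f} (n={len(scores)})")
```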

Rinse and repeat
After three months, you do the survey again, and find out that cutting caffeine, exercising and doing art classes are all highly correlated with greater wellbeing. So you tell everyone about it. You publish your research and you invite more participants into the program and tell them to cut their caffeine consumption, start exercising and go to art classes.

And that's it. You just keep on iterating. 

The action research model

You could break the hypothetical caffeine study you did (nice work!) into five steps:
1. Observe
What's happening in the world? At the beginning, this involves looking at what other people are doing. That's why you started by investigating caffeine consumption: you'd read a paper suggesting that caffeine consumption increases cancer risk.

Once you've started doing stuff, the observation phase is about looking at what's happening. That's why you did surveys and interviews: so you could find out how people are really doing.

2. Reflect
What do these data mean? This is about number crunching and critical thinking. Out of all the strategies people are using, what seems to work the best? This is why you did your literature review originally and why you analysed the data from your surveys and did interviews.

3. Plan
How will you refine your intervention based on what you've learnt so far? When John came back and told you that his exercise regime was working for him, you had a hard think and decided to change your model to include exercise. 

4. Act
Get out there and solve some real world problems! This is why you recruited participants to be part of your trial and gave them instructions for how to improve their health. 

5. Iterate
The motto of action research is 'fail fast and fail often'. If it turned out that caffeine consumption was actually good for health, you'd want to know that as early as possible so that you could change direction. The faster you can 'iterate' (go through the action research cycle again and again), the better your model will be. The whole cycle is sketched in code below.
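If you like to think in code, the cycle is essentially a loop. Here's a schematic sketch; each function is a placeholder for real fieldwork (surveys, interviews, rolling out the intervention), not something you could actually run a study with.

```python
# Schematic sketch of the action research cycle. Each function is a
# placeholder for real fieldwork, not a real research tool.

def observe(model):
    """Collect data: surveys, interviews, what participants are really doing."""
    return {"strategies": ["cut caffeine", "exercise"], "outcomes": [7, 8]}

def reflect(observations):
    """Crunch the numbers: which strategy seems to be working best?"""
    return max(zip(observations["outcomes"], observations["strategies"]))[1]

def plan(model, best_strategy):
    """Refine the intervention based on what you've learnt so far."""
    return model + [best_strategy] if best_strategy not in model else model

def act(model):
    """Roll the updated intervention out to participants."""
    print("Current intervention:", ", ".join(model))

model = ["cut caffeine"]          # the starting intervention
for cycle in range(3):            # iterate: fail fast and fail often
    observations = observe(model)
    best = reflect(observations)
    model = plan(model, best)
    act(model)
```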

Yeah but!
Isn't reductionist research still necessary so we know what to start with?
I argue that we already know enough to get started. We already know a lot about what works in theory for pretty much every problem out there. We know that with climate change, we can solve the problem by generating electricity through renewable sources. We know that to solve traffic congestion, we need to come up with ways to get cars off the road (like ridesharing). We know that to reduce obesity, we need to get people to eat less and exercise more.

We've got plenty of theories! What we don't have is action. We don't have practical ways of implementing these theories.

How can you prove anything if you can't control for confounding variables?
The concept of 'proving' something is relevant when you're working with relatively simple systems, like those in physics. Water boils at 100 degrees Celsius at one standard atmosphere of pressure. No-one is going to debate that. But in highly complex systems, there are too many variables to ever prove anything conclusively. What matters is the end result. Did you reduce carbon emissions? Did you reduce traffic congestion? Did you reduce obesity? If not, then your research is useless. It's just a piece of paper filed on the internet somewhere.

Lay people don't know anything. I'm the expert!
This is the attitude of controlled trials and reductionist research. The experimenter is the one who decides which variables are to be tweaked. The subjects are just 'experimental units'. They're not people. They don't have brains. They're just participants in a trial. You pay them to compensate them for their time and their discomfort, but you don't care about their ideas.

I believe this attitude closes you off to other hypotheses that are potentially more relevant and more useful than what you came up with originally.

This all sounds bollocks! Where's your evidence that action research works?
Here are some links to where action research has been successfully applied:
- in schools

I'm scared of the real world. I just want to work in my lab
If you're happy publishing theoretical papers, keep doing that. But if you feel like your research isn't having the impact you want to have in the world, then get out there and do some action research.

Ok ok I'll consider it. Where could I use action research?
Are you a teacher?
Action research can be used in schools. Kids can be researchers too. Instead of pumping their heads with information from the experts, why not respect them as intelligent human beings and involve them in research? Let them learn by doing rather than by listening. Read more about Action Research in the classroom.

Are you in government?
Are you considering rolling out a massive infrastructure project (*cough* National Broadband Network *cough*)? Rather than shelling out all that money for the project before you even know whether it will work, why not use action research to trial the project in a few locations first? That way, you can get the model right.

Are you in business/starting up a business?
Got an idea for a new product? Instead of spending millions of dollars creating a perfect prototype, create a prototype you're ashamed of. Go and test it with real users. Get their feedback. Fix the problems and go back for more feedback. Keep doing this until they're willing to pay for it. Read more about the Lean Startup philosophy.

1 comment:

John Baxter said...

I really rate this post Jeremy, definitely gels with some of my thoughts on the limits to academic research, but put so well (and with actual cred coming from a researcher!).

I've linked to these ideas, if tenuously, on a post (not yet posted, on http://jsbaxter.blogspot.com) on the need to focus on 'impetus' as a changemaker, rather than having the best idea or vision.