Evidence, evidence-based policy and IP

What constitutes evidence for copyright policy? Such is the title of the upcoming event at Bournemouth University as part of the ESRC Festival of Social Science. While evidence in IP is already a challenge, let's take a further look at evidence-based policy (EBP). To start, a handy definition: "...[EBP] mean(s) that policy initiatives are to be supported by research evidence and that policies introduced on a trial basis are to be evaluated in as rigorous a way as possible." (From this book as cited in this paper.)

The cat tried to eat the evidence
Economics tends to treat "evidence" in policy analysis as quantitative data: for example, using employment rates and wages to test the effectiveness of vocational training programmes. A criticism of this approach is that it may miss less measurable effects, and that qualitative data, often in the form of case studies or interviews, paints a more complete, nuanced picture. For example, qualitative evidence might reveal that vocational training programmes have limited effect because participants in such programmes are highly motivated individuals who would find jobs without intervention (a self-selection bias). Another concern is that evidence may measure correlation instead of causation.

Fat cats obsessed with measurements
Writing on the topic of EBP is riddled with pithy gems such as "rooting policy in evidence has all the appeal of motherhood and apple pie. The rhetoric is cheap and easy." (from this paper, via this paper by Australians Greg Marston and Robb Watts) EBP is appealing and echoes methods used in science. For example, medicinal treatments (akin to policy) are extensively tested and their impact on patient health is reviewed. However, evidence and policy in the social sciences lack the relative clarity associated with outcomes and treatments in other sciences. Furthermore, policy is inherently political, and evidence is therefore subject to politics. The risk that evidence is used selectively, or only in cases where it supports a political stance, limits the effectiveness of policy evaluation.

Another concern, noted by Ray Pawson of Leeds, is that "evaluation research is tortured by time constraints." Research time cycles do not sync with policy time cycles, which makes it difficult for evidence to have policy impact. Researchers often mention the elusive search for the "gold standard" of evidence and methodology. Pawson suggests using combinations of meta-analysis (an analysis of analyses) and narrative review (akin to a literature review) to mitigate the biases associated with particular methodologies or types of evidence.

Marston and Watts also have this handy graphic (on the right) of the fundamental elements in EBP research.  These elements include the question being asked, the available evidence, the knowledge this creates and the assumptions upon which the argument is based.  Each of these elements is a challenge to define and remains the subject of debate.

Pamela Samuelson of Berkeley takes the argument back a step and asks, "Should Economics Play a Role in Copyright Law and Policy?" Thankfully, the answer appears to be "yes." Samuelson notes that economics has had limited impact on copyright so far, which she attributes to a lack of economic expertise in the policy-making community, regulatory capture, poor communication on the part of economists, and cultural differences between lawyers, policymakers and economists. She also notes that the copyright industries have been successful in obtaining favourable policies without the aid of economics. Things have moved on since her paper, but there is still a lot of work to be done on the economics of copyright and evidence-based copyright policy. Indeed, Samuelson's prediction that groups of economists would infiltrate government IP offices is already coming true.

But what do IPKat readers think?  What actually constitutes evidence?  Will researchers eventually strike gold?