### Department of Biostatistics Seminar/Workshop Series

# Examining Accumulating Evidence: A Likelihood Approach

## Jeffrey D. Blume, PhD

### Associate Professor, Director of Graduate Studies

Department of Biostatistics, VUMC School of Medicine

### Wednesday, July 22, 1:30-2:30pm, MRBIII Conference Room 1220

### Intended Audience: Persons interested in applied statistics, statistical theory, epidemiology, health services research, clinical trials methodology, statistical computing, statistical graphics, R users or potential users

In many research ventures it is desirable to examine data as they accumulate. This is especially true in clinical trials, where the goal of interim monitoring is to detect early signs of efficacy or futility. However, the hypothesis testing framework of Neyman & Pearson discourages this activity, forcing investigators to pay a penalty for each time they examine (or plan to examine) the data. As a result, investigators shy away from examining accumulating data, fearing that the incurred penalty will prevent them from declaring strong evidence at the end of the experiment.

In this talk, I describe how the likelihood approach to measuring statistical evidence (i.e., replacing p-values with likelihood ratios) allows for, and even encourages, reexamination of accumulating evidence. Likelihood ratios are not penalized for multiple looks at the data and they are seldom misleading, even when the study is (statistically) rigged to produce evidence favoring the pet hypothesis over the true hypothesis. Moreover, despite the lack of a penalty, the probability of observing misleading evidence remains bounded and controllable, even when accumulating evidence is reexamined continuously. Thus, there is no longer a need to ‘penalize’ scientists for looking at their data; they need only use better tools to measure statistical evidence.
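The bound described above can be checked numerically. The sketch below is a hypothetical illustration (not from the talk): it simulates data under a true null hypothesis H0: mu = 0, monitors the likelihood ratio for a false alternative H1: mu = mu1 after every single observation, and counts how often the ratio ever exceeds a threshold k. The universal bound guarantees this probability is at most 1/k, no matter how often the data are examined; all parameter values here (k = 8, mu1 = 0.5, etc.) are arbitrary choices for the demonstration.

```python
import math
import random

def simulate_misleading_rate(k=8.0, mu1=0.5, n_max=200, reps=2000, seed=1):
    """Estimate the probability that the likelihood ratio favoring a
    false alternative (H1: mu = mu1) over the true null (H0: mu = 0)
    ever exceeds k when N(0,1) data are monitored continuously.
    The universal bound says this probability is at most 1/k."""
    rng = random.Random(seed)
    misleading = 0
    log_k = math.log(k)
    for _ in range(reps):
        log_lr = 0.0
        for _ in range(n_max):
            x = rng.gauss(0.0, 1.0)            # data generated under H0
            log_lr += mu1 * x - mu1 ** 2 / 2   # log LR increment: N(mu1,1) vs N(0,1)
            if log_lr >= log_k:                # a "look" after every observation
                misleading += 1                # misleading evidence observed
                break
    return misleading / reps

rate = simulate_misleading_rate()
print(f"empirical rate of misleading evidence: {rate:.3f} (bound 1/k = {1/8:.3f})")
```

Despite looking at the data after every observation, the empirical rate of misleading evidence stays below the 1/k bound; under the hypothesis testing framework, the analogous repeated-testing procedure would let the type I error rate grow without limit as the number of looks increases.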

I will illustrate these theoretical properties and discuss a real-world example of a study design where the primary endpoint is survival.