At Activate Care, we are transforming the lives of individuals and communities through our Path Assist social intervention program, a proactive model of social care that identifies and addresses unmet health-related social needs. Because we will not get every detail right the first time, and because healthcare is an ever-evolving landscape, continuously evaluating and improving the program is essential.
To do this effectively, we have to address the same fundamental question that anyone evaluating a social intervention program must consider: how do you know that the intervention is effective?
Understanding whether an intervention is effective requires differentiating between the positive changes that are attributable to the intervention and those that are not. A variety of considerations go into separating these two components. In today's post, we will focus on the role played by the rules and circumstances governing which individuals enroll in the program. While the specific rules and circumstances are unique to each intervention, they can be classified into three scenarios: experiments, natural experiments, and non-experimental settings.
The gold standard for evaluating an intervention is an experimental setting, typically a randomized controlled trial (RCT). In an RCT, individuals who meet the eligibility criteria are randomly assigned to either the treatment or control group. Randomization ensures that the two groups are similar on average, in both observed and unobserved characteristics. As a result, the effect of the intervention can be estimated by comparing the outcomes of individuals who were treated with those who were not. However, for a variety of practical or ethical reasons, randomizing treatment is not always feasible.
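To make the logic concrete, here is a minimal simulation sketch in Python. All of the numbers (sample size, effect size, noise) are hypothetical; the point is that, after a coin-flip assignment, a simple difference in means recovers the effect we built into the data.

```python
import numpy as np

rng = np.random.default_rng(seed=0)

# Hypothetical setup: 1,000 eligible individuals whose outcomes are
# partly driven by characteristics we never observe.
n = 1000
baseline = rng.normal(loc=50, scale=10, size=n)

# Randomize treatment with a coin flip. Randomization makes the two
# groups similar on average, in observed AND unobserved traits.
treated = rng.random(n) < 0.5

# Assume (for illustration) the intervention raises the outcome by 5.
true_effect = 5.0
outcome = baseline + true_effect * treated + rng.normal(scale=5, size=n)

# Because assignment was random, a simple difference in means is an
# unbiased estimate of the average treatment effect.
estimate = outcome[treated].mean() - outcome[~treated].mean()
print(f"Estimated effect: {estimate:.2f} (true effect: {true_effect})")
```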
In a natural experiment, circumstances or rules outside the evaluator's control lead to treatment assignment that is as good as random. There are three canonical situations in which natural experiments arise. The first is when some sort of lottery determines eligibility. For example, Angrist (1998) studied the economic returns to voluntarily serving in the military by comparing the outcomes of individuals with high draft lottery numbers (less likely to be called to serve) who elected to serve anyway to those with low numbers. The second is when a predefined threshold determines treatment. One example comes from Israeli public schools in the late 1990s, where classes exceeding 40 students were automatically split. The third situation arises when the timing or nature of how the treatment is introduced naturally produces comparable treatment and control groups.
For example, New Jersey and Pennsylvania had the same minimum wage of $4.25 in 1991, but in 1992 New Jersey raised its minimum wage to $5.05 while Pennsylvania did not. By comparing fast food restaurants in the two states, Card and Krueger (1994) were able to study the causal effect of increasing the minimum wage on a host of economic outcomes. This last situation is particularly relevant to evaluating social intervention programs, which are often made available only within a limited geographic region. Individuals who meet the intervention's eligibility criteria but reside outside of this region provide a natural comparison group.
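The comparison just described is an instance of a difference-in-differences design: subtracting each state's change over time removes both the pre-existing gap between the states and any trend they share. The sketch below, again with made-up numbers, shows the arithmetic.

```python
import numpy as np

rng = np.random.default_rng(seed=1)

# Hypothetical panel of 200 restaurants per state, observed before and
# after the policy change. Only the 'nj' group is exposed to it.
n = 200
nj_before = rng.normal(20, 3, n)   # employment per restaurant, period 1
pa_before = rng.normal(23, 3, n)   # PA starts from a different baseline

shared_trend = -1.0   # economy-wide shock hitting both states
true_effect = 2.0     # assumed (made-up) causal effect of the policy

nj_after = nj_before + shared_trend + true_effect + rng.normal(0, 1, n)
pa_after = pa_before + shared_trend + rng.normal(0, 1, n)

# Difference-in-differences: the change in NJ minus the change in PA
# cancels both the baseline gap and the shared time trend.
did = (nj_after.mean() - nj_before.mean()) - (pa_after.mean() - pa_before.mean())
print(f"DiD estimate: {did:.2f} (true effect: {true_effect})")
```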
In both scenarios discussed so far, the rules and circumstances surrounding program enrollment either directly (RCT) or indirectly (natural experiment) resulted in random treatment assignment. But what do you do when the program you are keen to evaluate matches neither scenario? In short, you do your best to emulate randomly assigned treatment through analytical methods. Various methods are available, but here we will simply touch on the key challenge underlying all of them.
The simplest approach is to compare the outcomes of individuals who received the intervention with those of individuals who did not. However, this comparison does not, in general, identify a causal effect, because there are likely important differences between the individuals who received treatment and those who did not. One situation in which such differences arise is when individuals can elect to receive the treatment. Individuals who opt in to a program likely do so because they believe the treatment will be effective, and they are therefore not comparable to individuals who choose not to participate. Another situation is when there are barriers to participating in the intervention that affect only certain individuals.
For example, a program that imposes a financial burden on participants (e.g., requiring time off from work or paid transportation to a clinic) is likely to enroll more financially privileged participants. Regardless of the underlying drivers, the result is the same: treated and untreated individuals are not comparable. To get around this problem, the evaluator must identify a set of characteristics that, once accounted for, allow them to make the case that those who received treatment are comparable to those who did not. While ideally grounded in sound theory about how the outcome of interest is determined, this set of characteristics is ultimately an assumption required to establish the desired causal relationship.
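To illustrate both the problem and the fix, here is one more hypothetical sketch. In it, enrollment depends on income, which also drives the outcome, so the naive treated-versus-untreated comparison is biased; adjusting for income (here with a simple regression, under the made-up assumption that income is the only characteristic driving enrollment) recovers the effect.

```python
import numpy as np

rng = np.random.default_rng(seed=2)

# Hypothetical setup: income affects both who enrolls (selection) and
# the outcome itself, so income is a confounder.
n = 5000
income = rng.normal(0, 1, n)
enroll_prob = 1 / (1 + np.exp(-(-0.5 + 1.5 * income)))  # richer, likelier to enroll
treated = rng.random(n) < enroll_prob

true_effect = 3.0
outcome = 10 + 4.0 * income + true_effect * treated + rng.normal(0, 1, n)

# Naive comparison: biased, because treated individuals have higher income.
naive = outcome[treated].mean() - outcome[~treated].mean()

# Regression adjustment: once income is accounted for, treated and
# untreated individuals are comparable (by construction in this simulation).
X = np.column_stack([np.ones(n), treated.astype(float), income])
coef, *_ = np.linalg.lstsq(X, outcome, rcond=None)

print(f"Naive estimate:    {naive:.2f}")
print(f"Adjusted estimate: {coef[1]:.2f} (true effect: {true_effect})")
```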
In this post, we have seen how the rules and circumstances that determine which individuals enroll in a program play a critical role in quantifying the causal effect of an intervention. These rules and circumstances can be categorized into one of three settings, each with its own analytical methods and assumptions. Addressing the fundamental question of whether an intervention is effective therefore starts with a deep understanding of these rules and circumstances. However, it must not stop there. Everyone evaluating a social intervention program should be willing to scrutinize both the analytical methods and the assumptions used to make a causal claim about the efficacy of the intervention. We do this regularly as we continually improve the Path Assist program.