Originally posted on The Horizons Tracker.
In the digital world, experiments have become commonplace ever since titans such as Google, Amazon, and Facebook underlined the importance of running an almost continuous stream of A/B tests to underpin decisions about things such as interface design and user experience.
Such tests allow organizations to move away from hunch-based decision-making and instead tap into real-world user data to drive their decisions. The approach seems intuitive, but interference can make it difficult to ascertain true cause and effect. Research from the University of Texas at Austin1 aims to overcome that challenge.
Cause and effect
The researchers believe their approach ably illustrates how interference affects the outcomes of randomized control trials, which they hope will enable researchers and practitioners to better account for interference when designing such experiments.
While it’s tempting to assume that experiments are free from outside interference, the reality is that whenever experiments can change the behavior of things outside of the treatment group, there will inevitably be interference, which will, in turn, muddy any inferences you draw from the experiment.
“This phenomenon occurs any time we run a randomized experiment and my experimental units are connected with each other in some way—in social networks, friend groups, households, trade networks, and city streets,” the researchers explain.
They highlight how easily this can happen: a social media experiment may place friends in both the treatment and control groups, just as an e-commerce experiment may split members of the same household between them.
“Without accounting for interference, estimates of causal effects will be very wrong,” they continue. “But if incorporated into a statistical analysis in the right way, interference can be a blessing. So we designed a method that has network interference directly baked in.”
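To see why ignoring interference biases estimates, consider a minimal simulation (the numbers and setup are hypothetical, not from the study): units come in connected pairs, treating one unit spills part of the effect over to its untreated partner, and the naive treated-versus-control comparison absorbs that contamination.

```python
import random

random.seed(0)

# Hypothetical simulation: pairs of connected units (e.g., friends).
# Treating one unit spills over to its untreated partner, so the naive
# treated-vs-control comparison understates the true effect.
TRUE_EFFECT = 2.0   # direct effect of treatment (assumed value)
SPILLOVER = 1.0     # effect leaking to the connected control unit

treated, control = [], []
for _ in range(10_000):
    # Each pair: one unit randomized to treatment, one to control.
    base_a, base_b = random.gauss(0, 1), random.gauss(0, 1)
    treated.append(base_a + TRUE_EFFECT)
    control.append(base_b + SPILLOVER)  # contaminated by the partner

naive_estimate = sum(treated) / len(treated) - sum(control) / len(control)
print(f"true effect: {TRUE_EFFECT}, naive estimate: {naive_estimate:.2f}")
# The naive estimate recovers roughly TRUE_EFFECT - SPILLOVER, not TRUE_EFFECT.
```

Here the naive difference in means converges to about 1.0 rather than the true direct effect of 2.0, which is exactly the "very wrong" estimate the researchers warn about.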
Testing for interference
The approach developed by the researchers directly tests for interference, focusing on so-called spatial interference. A large-scale policing experiment in Colombia served as the use case for testing the method.
The experiment, conducted by another team in 2015, aimed to measure crime on the streets of Medellin so that hotspots could be identified and extra police resources assigned to those areas.
The researchers constructed a graph to determine whether the experiment worked. The graph aimed to encode information about the specific interference structure of the various street segments in the city together with the possible combinations of police assignments.
The graph was then used to develop an algorithm that creates two suitable subsets: spillover units, which were not treated themselves but may be connected to treated units, and pure control units, which were neither treated nor connected to treated units.
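The partition into spillover and pure-control units can be sketched with a toy adjacency structure. This is a simplification under assumed data: the segment names and adjacency below are made up, not the Medellin network, and the one-hop rule stands in for the authors' full graph-theoretic algorithm.

```python
# Hypothetical street-segment adjacency (who borders whom); the real
# experiment used the actual street network of Medellin.
adjacency = {
    "s1": {"s2"},
    "s2": {"s1", "s3"},
    "s3": {"s2", "s4"},
    "s4": {"s3"},
    "s5": set(),        # isolated segment, no neighbors
}
treated = {"s1"}        # segments assigned extra policing

# Spillover units: untreated but adjacent to at least one treated segment.
spillover = {
    u for u in adjacency
    if u not in treated and adjacency[u] & treated
}
# Pure control units: untreated and not adjacent to any treated segment.
pure_control = set(adjacency) - treated - spillover

print(spillover)      # untreated neighbors of treated segments
print(pure_control)   # untreated and unconnected segments
```

In this toy network, "s2" lands in the spillover set while "s3", "s4", and "s5" are pure controls; only the latter are untouched by treatment under the assumed interference structure.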
They then performed a Fisher randomization test to ascertain whether there was indeed a spillover effect, measuring whether crime on streets near the policed streets differed from crime on streets farther away.
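The logic of such a test can be illustrated with a generic permutation test on made-up crime figures. Note the hedges: these numbers are invented, and the paper's actual procedure is a conditional randomization test over the graph, not the simple label-shuffling shown here.

```python
import random

random.seed(1)

# Hypothetical crime counts per street segment (invented data).
spillover_crime = [3.1, 2.8, 3.4, 2.9, 3.0]      # near policed streets
pure_control_crime = [4.2, 3.9, 4.5, 4.1, 4.0]   # far from policed streets

def stat(a, b):
    """Difference in mean crime between the two groups."""
    return sum(a) / len(a) - sum(b) / len(b)

observed = stat(spillover_crime, pure_control_crime)
pooled = spillover_crime + pure_control_crime
n = len(spillover_crime)

# Under the sharp null of no spillover, group labels are exchangeable,
# so we re-shuffle labels to build the null distribution.
draws, count = 10_000, 0
for _ in range(draws):
    random.shuffle(pooled)
    if stat(pooled[:n], pooled[n:]) <= observed:
        count += 1
p_value = count / draws
print(f"observed difference: {observed:.2f}, one-sided p = {p_value:.3f}")
```

A small p-value here would indicate that crime near policed streets is lower than chance relabeling can explain, i.e., evidence of a spillover effect.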
The analysis confirmed that additional policing in one area was also effective at reducing crime in adjacent areas. The researchers believe, however, that their approach is sufficiently robust to have applications outside of policing.
“It can also be used at Facebook and Google in all of the thousands of experiments that they’re conducting every single day, or at any other company conducting A/B tests and internal experiments to improve operations,” they conclude.
Article source: How Organizations Can Run Better Experiments.
- Puelz, D., Basse, G., Feller, A., & Toulis, P. (2022). A graph-theoretic approach to randomization tests of causal effects under general interference. Journal of the Royal Statistical Society Series B, 174–204. ↩