Testing Exclusion and Shape Restrictions in Potential Outcomes Models
Abstract
Exclusion and shape restrictions play a central role in defining causal effects and interpreting estimates in potential outcomes models. To date, the testable implications of such restrictions have been studied on a case-by-case basis in a limited set of models. In this paper, we develop a general framework for characterizing sharp testable implications of general support restrictions on the potential response functions, based on a novel graph-based representation of the model. The framework provides a unified and constructive method for deriving all observable implications of the modeling assumptions. We illustrate the approach in several popular settings, including instrumental variables, treatment selection, mediation, and interference. As an empirical application, we revisit the US Lung Health Study and test for the presence of spillovers between spouses, the specification of exposure maps, and the persistence of treatment effects over time.
Summary
This paper addresses the problem of testing exclusion and shape restrictions in potential outcomes models, which are fundamental for defining causal effects. Existing approaches handle these restrictions on a case-by-case basis in a limited set of models. The authors develop a general, graph-based framework that characterizes the *sharp* testable implications of general support restrictions on potential response functions, providing a unified and constructive method for deriving *all* observable implications of the modeling assumptions. The key idea is to encode the model's compatibility structure in a graph, from which testable implications can be computed with standard graph algorithms. A central contribution is the proof that the implied restrictions are *sharp*: they are both necessary and sufficient for consistency with the data. The paper illustrates the approach in several popular settings, including instrumental variables, treatment selection, mediation, and interference, and provides an empirical application to the US Lung Health Study that tests hypotheses about spillover effects and treatment persistence. This work matters because it gives a systematic and computationally feasible way to assess the validity of commonly imposed modeling assumptions across a wide range of causal inference problems, thereby improving the reliability of causal effect estimates.
Key Insights
- Novel Graph-Based Framework: The paper introduces a novel graph-based representation that encodes the compatibility structure of potential outcomes models, allowing for efficient computation of testable implications.
- Sharp Testable Implications: The framework guarantees *sharp* testable implications, meaning the derived restrictions are both necessary and sufficient for consistency with the imposed assumptions. This is a stronger result than simply finding valid tests.
- Maximal Independent Set Inequalities: The authors show that inequalities derived from maximal independent sets (MIS) of the graph provide valid testable implications. Theorem 1 formalizes this.
- Regularity Condition for Sharpness: The paper introduces a "regularity" condition (Assumption 3) on the support restrictions, under which the MIS inequalities alone are sufficient for *sharpness* (Theorem 3). This significantly simplifies the analysis in many common settings.
- Generalization of Artstein's Theorem: The authors extend Artstein's theorem (originally for two random elements) to a multi-marginal setting, enabling the handling of multi-valued instruments and alternative treatment-selection models.
- Handling Continuous Responses: The paper extends the analysis to settings with continuous response variables, generalizing existing results in instrumental variables models.
- Counterexample to Regularity: A notable exception to regularity is the IV model with multi-valued instruments (K ⩾ 3), where only exclusion is assumed without any additional shape restrictions.
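For context on the Artstein generalization, the classical two-element version of the theorem (here in its standard finite-support form, not the paper's multi-marginal extension) reads:

```latex
% Artstein's inequality (classical, finite-support form): a law $\mu$ on a
% finite set $S$ is the distribution of some measurable selection of a
% random set $Z$ if and only if
\mu(A) \;\le\; \mathbb{P}\bigl(Z \cap A \neq \emptyset\bigr)
\qquad \text{for every } A \subseteq S.
```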
Practical Implications
- Model Validation: Practitioners can use the framework to empirically assess the validity of commonly imposed modeling assumptions (e.g., exclusion restrictions, monotonicity) in their causal inference models.
- Improved Causal Inference: By testing and potentially refining modeling assumptions, researchers can improve the reliability of causal effect estimates.
- Computational Tools: The paper provides practical guidance on computation and inference, including the use of existing algorithms for finding maximal independent sets and testing moment inequalities. The empirical application demonstrates computational feasibility.
- Model Selection: The framework enables formal model selection by systematically comparing testable implications across different models, allowing practitioners to evaluate how changes in modeling assumptions affect the implied restrictions.
- Future Research Directions: The framework opens up avenues for future research, including the development of more efficient algorithms for computing testable implications and the extension of the framework to other types of restrictions on potential outcomes.
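To make the computational side concrete, here is a minimal sketch of the kind of pipeline the paper's guidance points to: enumerate the maximal independent sets of a compatibility graph, then check an Artstein-type probability inequality on each. The five-vertex graph and the probabilities below are made up for illustration; the paper's actual graphs and inequalities differ by model.

```python
from itertools import combinations

# Illustrative incompatibility graph on 5 vertices (a 5-cycle); in the
# paper's framework, edges would join vertices that cannot co-occur.
V = range(5)
E = {(0, 1), (1, 2), (2, 3), (3, 4), (0, 4)}

def independent(S):
    # no edge of E joins two members of S
    return all((a, b) not in E and (b, a) not in E for a, b in combinations(S, 2))

def maximal(S):
    # independent, and no vertex can be added without breaking independence
    return independent(S) and all(not independent(S | {v}) for v in V if v not in S)

# Brute-force enumeration of all maximal independent sets (fine for tiny
# graphs; larger instances need dedicated enumeration algorithms).
mis = [frozenset(S) for k in range(1, 6) for S in combinations(V, k) if maximal(set(S))]

# Generic Artstein-type check (hedged sketch): probabilities attached to
# the vertices of each maximal independent set must sum to at most one.
p = {0: 0.3, 1: 0.2, 2: 0.25, 3: 0.15, 4: 0.1}
violations = [S for S in mis if sum(p[v] for v in S) > 1]
print(len(mis), len(violations))  # prints "5 0": five MIS, no violations
```

For realistic graph sizes, brute force is infeasible; standard enumeration algorithms such as Bron–Kerbosch applied to the complement graph serve the same purpose.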