Falsification tests, based on empirically testable implications of causal identifying assumptions, are an essential part of the toolkit for justifying causal claims. Common practice incorrectly conflates a failure to reject evidence of a flawed design with evidence that the design is credible. Hartman and Hidalgo (2018) argue that equivalence tests are better suited to falsification testing and that they provide the evidence researchers seek when arguing for a credible design. This paper develops two such tests tailored to regression discontinuity designs, allowing researchers to provide statistical evidence that their designs are consistent with the observable implications of their identifying assumptions: one for continuity in the regression function of a pre-treatment covariate and one for continuity in the density function of the forcing variable. Simulation studies show the superior performance of equivalence tests over the tests of difference used in current practice. The tests are applied to the close-elections RD data of Eggers et al. (2015) and Caughey and Sekhon (2011).
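The distinction between difference testing and equivalence testing can be sketched with a generic two one-sided tests (TOST) procedure for a mean. This is a minimal illustration of the logic only, not the paper's RD-specific tests; the simulated placebo data and the equivalence tolerance `delta` are invented for the example.

```python
import numpy as np
from scipy import stats

def tost_one_sample(x, delta):
    """TOST for mean equivalence: H0: |mu| >= delta vs H1: |mu| < delta.

    Runs two one-sided t-tests (mu <= -delta and mu >= +delta);
    rejecting both supports equivalence within the tolerance delta.
    """
    n = len(x)
    m = np.mean(x)
    se = np.std(x, ddof=1) / np.sqrt(n)
    t_lower = (m + delta) / se            # one-sided test against mu <= -delta
    t_upper = (m - delta) / se            # one-sided test against mu >= +delta
    p_lower = 1.0 - stats.t.cdf(t_lower, df=n - 1)
    p_upper = stats.t.cdf(t_upper, df=n - 1)
    return max(p_lower, p_upper)          # small value -> evidence of equivalence

# Hypothetical placebo quantity that should be zero under a valid design
rng = np.random.default_rng(0)
x = rng.normal(0.0, 1.0, size=200)

# Conventional difference test: a large p-value here is only a failure
# to detect a flaw, not affirmative evidence that the design is credible.
p_diff = stats.ttest_1samp(x, 0.0).pvalue

# Equivalence test: a small p-value here is affirmative evidence that any
# discrepancy is smaller than the pre-specified tolerance.
p_equiv = tost_one_sample(x, delta=0.5)
```

The inversion of the null hypothesis is the key point: the equivalence test places the burden of proof on the researcher, so a significant result supports, rather than merely fails to undermine, the credibility of the design.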