It is not possible to know, at the moment a gap opens between theory and observation, whether it calls for positing a missing piece of the world or for revamping the theory itself. Let me explain.
The planet Neptune was discovered “with the point of a pen”: Le Verrier inferred, from the discrepancy between the observed orbit of Uranus and the Newtonian prediction, where an unseen planet must lie, and in 1846 Neptune was found almost exactly there.
Meanwhile, there was another known gravitational anomaly: the precession of Mercury’s perihelion, which advanced slightly faster than Newtonian theory allowed. By analogy with the Neptune story, some proposed a hidden ‘inner’ planet (it was even given a name, Vulcan). But no such planet was ever found.
Instead, explaining this discrepancy turned out to be one of the first major vindications of General Relativity: what was required was not a new planet but Einstein’s revolutionary reformulation of classical physics.
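For concreteness (this is the standard textbook result, quoted here rather than derived in the text): General Relativity predicts a perihelion advance per orbit of

$$
\Delta\phi \;=\; \frac{6\pi G M_\odot}{c^{2}\, a\, (1 - e^{2})},
$$

where $a$ is the orbit’s semi-major axis and $e$ its eccentricity. For Mercury this comes to about 43 arcseconds per century, matching the residual that Newtonian perturbation theory, with every known planet accounted for, could not explain.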
The wider lesson for science is this: there is no principled way to decide, in advance, whether a ‘bridging hypothesis’ introduced to reconcile observation with theory will turn out to correspond to a real but as-yet-unobserved entity, or whether the theory itself needs to be modified.
As Kuhn pointed out, no theory explains *all* of its anomalies at any given time. These may stem from experimental error as well as from the absence of a suitable bridging hypothesis. A theoretical revolution is, in this sense, a re-prioritization of empirical findings: anomalies once set aside become decisive, and vice versa.
This should be in the back of one’s mind whenever one reads about ‘dark matter’, ‘dark energy’, and other theoretical ‘curve-fitting parameters’. They *could* be signs of new entities… or signs that a new theory is needed. Ontic certainty is ruled out either way.