Science is about recipes; all the rest is literature. E=mc^2, F=Gm1m2/r^2, PV=nRT, etc. That's science, and good new models are bricks that are built upon. In 1969, when the Nobel Prize in economics was established, there was reasonable cause to believe that economics was just like physics and biology, in that game theory, macroeconomic modeling, and Arrow and Debreu's Theory of Value all seemed to suggest that economics was a system of logic and precision little different from physics.
Alas, it was not to be so. Most economic truths are found via common sense and experience. As Milton Friedman noted, it wasn't his arguments that convinced anyone of the supremacy of capitalism, but the manifest failure of the socialist economies. With hindsight, we can identify why socialism failed theoretically (the inability to decentralize incentives), but that failure had to happen conspicuously to convince anyone.
When I was in grad school, the top financial work consisted of applying Banach spaces or various GMM tests to the data. The focus was the technique, not the data. Shanken (1985), Gibbons (1982), and Gibbons, Ross, and Shanken (1989) tested the CAPM by testing whether the market portfolio is mean-variance efficient. They applied Wald, Maximum Likelihood, and Lagrange Multiplier tests, as if this mélange would be better than just one test.
The thought was that these approaches would be fruitful because they were so rigorous. Yet they did not lead to any real improvements in finance. They did not highlight what was wrong with the CAPM, only that it didn't work at some level of exactness no one cared about. Fama's rejection of the CAPM simply sorted companies by size and beta, and highlighted that beta was then uncorrelated with average returns.
Fama was considered unrigorous by my professors back then, part of the unsophisticated old guard. Who knew that he would subsequently be seen as both the theoretical and the empirical authority? The key was not some subtle distinction between the Wald and Lagrange multiplier tests, but rather a simple correlation in the data between size and beta that made the correlation between beta and returns seem plausible. When you adjusted for size, it was no longer plausible. This did not take advanced econometric techniques.
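The logic of that confound can be shown with a toy simulation. This is a hypothetical illustration, not Fama's data: the coefficients below are made up, with size driving expected returns, beta correlated with size, and no true beta premium at all. The raw beta-return correlation looks positive; residualizing on size makes it vanish.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 5000

# Made-up data-generating process: small firms have high betas,
# and only size (not beta) carries a return premium.
size = rng.normal(0.0, 1.0, n)                      # standardized log market cap
beta = 1.0 - 0.5 * size + rng.normal(0.0, 0.3, n)   # size-beta correlation
ret = -0.02 * size + rng.normal(0.0, 0.05, n)       # size premium, zero beta premium

def residualize(y, x):
    """Return y with the linear effect of x removed (OLS residuals)."""
    X = np.column_stack([np.ones_like(x), x])
    coef, *_ = np.linalg.lstsq(X, y, rcond=None)
    return y - X @ coef

# Unconditionally, beta looks priced...
raw = np.corrcoef(beta, ret)[0, 1]
# ...but controlling for size, the relation disappears.
partial = np.corrcoef(residualize(beta, size), residualize(ret, size))[0, 1]
print(f"raw corr(beta, ret) = {raw:.2f}, controlling for size = {partial:.2f}")
```

A double sort, like Fama's, gets at the same point without any regression machinery: within each size bucket, beta and average returns are unrelated.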
Many good things are combinations, and good financial insights come from combining good statistics with fundamental knowledge of the data. Adjusting the data for delisting bias (Shumway, 1997) and not using daily data dominated by low-priced stocks (Blume and Stambaugh, 1983) are key adjustments that totally change results, and though we look at them now with obvious hindsight, one must remember that for ten years people did not know about them.
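The low-price problem is worth seeing concretely. A minimal sketch, with an invented half-spread and a true price that never moves: when each daily close randomly lands on the bid or the ask, computed simple returns acquire a positive bias of roughly the squared proportional half-spread per period, which is material for low-priced stocks where the spread is proportionally large.

```python
import numpy as np

rng = np.random.default_rng(1)
T = 200_000  # number of simulated daily closes

def mean_computed_return(half_spread):
    """Mean daily simple return when the true price is constant at 1.0
    but each close is randomly the bid or the ask."""
    bounce = rng.choice([-half_spread, half_spread], size=T)
    observed = 1.0 + bounce                 # observed close around true price 1.0
    rets = observed[1:] / observed[:-1] - 1.0
    return rets.mean()

# A 5% proportional half-spread (plausible for a very low-priced stock)
# yields roughly half_spread**2 ~ 0.0025 of spurious mean return per day,
# even though the true return is exactly zero.
bias = mean_computed_return(0.05)
print(f"mean computed daily return with zero true return: {bias:.4f}")
```

This is the Jensen's-inequality effect behind the Blume-Stambaugh result: equal-weighted averages of noisy-price returns are biased upward, and the bias compounds when measured daily.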
Finance is like wine: while experts talk about sophisticated concepts, the basics are simple things like clean barrels and clean data. Getting economists back to simple stuff, basic data analysis that corrects for various omitted variables, is the best way to find important economic truths. It's the data, more than the technique, that matters. There simply aren't many economic equations, not enough to make the field a science, so it's more a matter of seeing empirical tendencies, as almost everything is 'an empirical issue' with offsetting theoretical effects. Many thought that powerful techniques could sidestep these issues, and in fact those concentrating on these adjustments, doing yeoman's work with the data, were considered less productive than those wielding the fancy machinery. The scientism of economics that Hayek warned about has been very counterproductive.