I didn’t want to clutter my post on PM Truss’s economics with detailed references, but in case anyone is interested:
A paper, ‘Do corporate tax cuts boost economic growth?’, in the August 2022 European Economic Review by Sebastian Gechert and Philipp Heimberger, found no evidence that they do. Instead the authors found evidence of publication bias, with positive results more likely to be published. They point out that the relationship is complex, so it is no surprise that a straightforward causal effect is hard to pin down. In the UK context, my view is that cutting our rate is very unlikely to have anything but a negative impact: our corporate tax rate is not high compared with other countries’, and cutting it in the midst of a full-blown fiscal crisis is unlikely to boost investment or growth.
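For anyone curious about the mechanics, a standard device for detecting this kind of bias in a collection of studies is a funnel-asymmetry (FAT-PET) meta-regression: each study’s reported effect is regressed on its standard error, and selective publication shows up as a slope well away from zero. Here is a minimal sketch in Python; all the “studies” are simulated purely for illustration, and I am not claiming this is the paper’s exact procedure:

```python
# Minimal sketch of a funnel-asymmetry (FAT-PET) test for publication
# bias. Everything below is simulated: the true effect of tax cuts on
# growth is set to exactly zero, but positive "significant" estimates
# are more likely to be published.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(1)
n = 500
se = rng.uniform(0.05, 0.5, n)        # studies differ in precision
est = rng.normal(0.0, se)             # point estimates around a true
                                      # effect of zero

# Positive, "significant" results always get published; the rest only
# one time in five.
signif = est / se > 1.96
published = signif | (rng.random(n) < 0.2)

# FAT-PET regression: estimate_i = b0 + b1 * SE_i + error.
# With no selection, b1 is roughly 0; under selective publication,
# imprecise studies need larger estimates to clear the t > 1.96 bar,
# so b1 comes out well above zero.
fit = sm.OLS(est[published], sm.add_constant(se[published])).fit()
print(fit.params)                     # [b0, b1]; b1 >> 0 flags bias
```

The intercept b0 is then read as a rough bias-corrected estimate of the underlying effect, which in this simulation sits close to the true value of zero.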
The evidence on personal tax is even clearer, as David Hope and Julian Limberg of the LSE argue in a December 2020 paper: ‘Keeping tax low for the rich does not boost the economy.’
On the publication bias point: apart from the glee with which any right-wing organ is likely to descend on the barest hint that one more widget was produced over a decade, I’m just as interested in why the centre/left (or anyone) might be less likely to publish negative assessments. Is there perhaps an element of ‘proving a negative’, or ‘the dog that didn’t bark’, at play here? To put it another way, it is far easier to celebrate the production of one more widget than to explain how it should have been a hundred more.
I think they obtained access to the original data sets for all of the studies they could find. There is a well-known bias in economics (and in many other fields, including medicine): with computers you can try hundreds of different specifications of the causal model and publish only the ones that produce positive correlations. For that reason it is important to give other researchers access to the data set, so they can try to replicate the findings and check that they are not just an artefact of one particular way of modelling the relationship.

There are plenty of marginally respectable ways of biasing the results: choosing a specific time period, dropping some data points as outliers, or specifying the lags between the change and its impact in ways that favour the result you want. Both sides of the argument are probably guilty of such tricks; it is too easy to mine the data in this way. The study I refer to tries to avoid this, or claims to, by constructing a larger database. I am not sure similar biases are entirely absent from their work, but it seems dull and academic enough to be serious research rather than a polemic in support of a position arrived at in advance.
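To make the specification-mining point concrete, here is a small self-contained simulation (hypothetical variable names, pure-noise data): it regresses “growth” on “tax changes” that are by construction unrelated, across many arbitrary choices of lag, start year and outlier trimming, and counts how many specifications happen to look significant.

```python
# Simulation of "specification searching": growth and tax data are
# independent noise, yet trying enough specifications still yields some
# "statistically significant" results. All names are hypothetical.
import numpy as np

rng = np.random.default_rng(0)
T = 60                                # 60 years of annual data
tax = rng.normal(size=T)              # tax-rate changes: pure noise
growth = rng.normal(size=T)           # GDP growth: independent noise

def t_stat(x, y):
    """t-statistic on the slope of an OLS regression of y on x."""
    x = x - x.mean()
    beta = (x @ y) / (x @ x)
    resid = y - y.mean() - beta * x
    se = np.sqrt(resid @ resid / (len(x) - 2) / (x @ x))
    return beta / se

significant, specs = 0, 0
for lag in range(0, 5):               # try different lags
    for start in range(0, 20, 5):     # try different start years
        for trim in (None, 2.0):      # optionally drop "outliers"
            x, y = tax[start:T - lag], growth[start + lag:]
            if trim is not None:
                keep = np.abs(x) < trim
                x, y = x[keep], y[keep]
            specs += 1
            if abs(t_stat(x, y)) > 2.0:   # roughly p < 0.05
                significant += 1

print(f"{significant} of {specs} specifications look 'significant'")
```

Each test alone has about a one-in-twenty chance of a false positive; run forty of them and report only the “hits”, and an effect appears out of pure noise. That is why access to the underlying dataset, and replication on it, matters.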