In this paper, we use local projections to investigate the impact of consolidation shocks on GDP growth, conditional on the fragility of government finances. Based on a database of fiscal plans in OECD countries, we show that spending-based consolidations are less detrimental to growth than tax-based ones. In times of fiscal fragility, our results strongly indicate that governments should consolidate through surprise policy changes rather than through announcements of consolidation at a later horizon.
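The local-projection step can be sketched in a few lines (a minimal illustration on simulated data; the shock series, persistence parameters, and variable names are our assumptions, not the paper's): one OLS regression per horizon h of GDP growth at t+h on the shock at t, whose slope coefficients trace out the impulse response.

```python
import numpy as np

rng = np.random.default_rng(0)
T, H = 300, 8

# Simulated data: a consolidation shock with a decaying negative effect on growth
shock = rng.normal(size=T)
true_irf = -0.8 * 0.6 ** np.arange(H + 1)
growth = rng.normal(scale=0.5, size=T)
for h in range(H + 1):
    growth[h:] += true_irf[h] * shock[:T - h]

# Jorda (2005) local projections: one OLS regression per horizon h,
# y_{t+h} = a_h + b_h * shock_t + e_{t+h}; the b_h trace out the IRF.
irf = np.empty(H + 1)
for h in range(H + 1):
    y = growth[h:]
    X = np.column_stack([np.ones(T - h), shock[:T - h]])
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    irf[h] = beta[1]

print(np.round(irf, 2))
```

In practice the regressions would also include lagged controls and heteroskedasticity-robust standard errors; the sketch keeps only the defining per-horizon structure.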
This paper investigates the propagation of technology news shocks within and across industrialized economies. We construct utilization-adjusted total factor productivity (TFP) and labor productivity measures for twelve OECD countries. Based on a structural VAR, we document that (i) the max-share identification recovers essentially the same technology news shock process within a country irrespective of which productivity measure is considered; and (ii) the identified technology shock series display a significant degree of correlation across several countries. Furthermore, we find that the US is not the only major source of technological innovations impacting other advanced economies. In terms of underlying transmission mechanisms, technology news propagates through endogenous monetary policy to the slope of the term structure of interest rates and also via trade-related variables. Our results imply that financial markets and trade are key channels for the dissemination of technology news shocks.
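A stylized version of the max-share identification can be sketched as follows (a simplified illustration on a simulated bivariate VAR(1); the dimensions, horizon, and parameter values are our assumptions, not the paper's specification): among all rotations of the reduced-form innovations, pick the shock that maximizes the forecast-error variance share of TFP at a long horizon.

```python
import numpy as np

rng = np.random.default_rng(1)
T, H = 500, 40          # sample size, truncation horizon for the variance share

# Simulate a bivariate VAR(1): x_t = [TFP measure, another macro variable]
A = np.array([[0.7, 0.1], [0.2, 0.5]])
x = np.zeros((T, 2))
for t in range(1, T):
    x[t] = A @ x[t - 1] + rng.normal(size=2)

# OLS estimate of the VAR(1) and the innovation covariance
X, Y = x[:-1], x[1:]
A_hat = np.linalg.lstsq(X, Y, rcond=None)[0].T
U = Y - X @ A_hat.T
Sigma = U.T @ U / len(U)
C = np.linalg.cholesky(Sigma)          # one admissible impact matrix

# Max-share identification: find the rotation vector q maximizing the share
# of TFP's (variable 0) forecast-error variance at horizon H attributable
# to the shock with impact vector C @ q.
e1 = np.array([1.0, 0.0])
S = np.zeros((2, 2))
B = np.eye(2)
for h in range(H + 1):
    row = e1 @ B @ C                   # loading of structural shocks on TFP at lag h
    S += np.outer(row, row)
    B = A_hat @ B
vals, vecs = np.linalg.eigh(S)
q = vecs[:, -1]                        # eigenvector with the largest eigenvalue
news_impact = C @ q                    # impact responses of the identified shock
print(np.round(q, 3))
```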
The objective of this project is to quantify the relative importance of credit demand and credit supply shocks in determining fluctuations in credit variables and the business cycle. Since the financial crisis, the volume of credit has been the linchpin of policymaking geared toward financial stability and macroprudential policy. For example, the credit-to-GDP gap is key in the Basel III regulation and the setting of countercyclical capital buffers, and private credit stocks and flows are among the scoreboard indicators employed for the European macroeconomic imbalances procedure. These two regulatory initiatives monitor the developments of credit volumes, but focus on different sides of the market. Countercyclical capital buffers are aimed at regulating credit supply, while the scoreboard has an eye on firm and household (over-)indebtedness and hence credit demand. Thus, understanding to what extent fluctuations are due to demand or supply is crucial for formulating policy and for assessing the macroeconomic implications. Using a novel approach to identification that combines multiple sources of information, this project aims at disentangling credit supply and credit demand shocks to examine their dynamic effects on the real economy and their contributions to historical fluctuations in credit volumes.
In this project, we investigate the sources of the increase in real house prices relative to other prices and rents since the 1960s across all advanced economies. We construct a theoretical model, including housing supply, demand, a no-arbitrage condition and a monetary policy rule. This model informs priors of a Bayesian VAR that helps to identify the sources of surging house prices.
Two contradictory strands of the rating literature criticize rating agencies: one holds that agencies merely follow the market, while the other emphasizes that rating changes themselves drive capital movements. Both strands focus on explaining rating levels rather than the timing of rating announcements. In contrast, we explicitly differentiate between the decision to assess a country and the actual rating decision. We show that this differentiation significantly improves the estimation of the rating function. The three major rating agencies treat economic fundamentals similarly, while differing in their response to other factors such as strategic considerations. This finding reconciles the conflicting literature.
In this project, we aim to make three contributions. First, we provide a detailed analysis of shocks to sovereign debt sustainability in the Euro area between 1999 and 2019. Second, we show how macroeconomic aggregates react to these shocks. Third, using textual analysis, we distinguish different dimensions of sovereign debt sustainability (election news, fiscal news, macroeconomic uncertainty). To achieve these goals, we link news ticker data to high-frequency interest rates on European sovereign bonds; these data are used to identify shocks to sovereign debt sustainability, and local projections provide the impulse-response functions of macroeconomic aggregates to these shocks.
The Swiss National Bank abolished the exchange rate floor versus the Euro in January 2015. Using a synthetic matching framework, we analyze the impact of this unexpected (and therefore exogenous) policy change on the stock market. The results reveal a significant level shift (decline) in asset prices following the discontinuation of the minimum exchange rate. As a novel finding in the literature, we document that the exchange-rate elasticity of Swiss asset prices is around -0.75. Differentiating between sectors of the Swiss economy, we find that the industrial, financial and consumer goods sectors are most strongly affected by the abolition of the minimum exchange rate.
This paper compares the out-of-sample predictive performance of different early warning models for systemic banking crises using a sample of advanced economies covering the past 45 years. We compare a benchmark logit approach to several machine learning approaches recently proposed in the literature. We find that while machine learning methods often attain a very high in-sample fit, they are outperformed by the logit approach in recursive out-of-sample evaluations. This result is robust to the choice of performance metric, crisis definition, preference parameter, and sample length, as well as to using different sets of variables and data transformations. Thus, our paper suggests that further enhancements to machine learning early warning models are needed before they are able to offer a substantial value-added for predicting systemic banking crises. Conventional logit models appear to use the available information already fairly efficiently, and would for instance have been able to predict the 2007/2008 financial crisis out-of-sample for many countries. In line with economic intuition, these models identify credit expansions, asset price booms and external imbalances as key predictors of systemic banking crises.
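The recursive out-of-sample exercise for the logit benchmark can be illustrated schematically (simulated data with a single stand-in indicator; all names and parameters are our assumptions, and the paper's actual models use richer variable sets): re-estimate on an expanding window, predict one step ahead, and score the real-time predictions.

```python
import numpy as np

def fit_logit(X, y, iters=50):
    """Logistic regression via Newton-Raphson (intercept included in X)."""
    b = np.zeros(X.shape[1])
    for _ in range(iters):
        p = 1.0 / (1.0 + np.exp(-X @ b))
        W = p * (1 - p)
        H = X.T @ (X * W[:, None]) + 1e-6 * np.eye(X.shape[1])  # ridge for stability
        b += np.linalg.solve(H, X.T @ (y - p))
    return b

rng = np.random.default_rng(2)
T = 400
credit_gap = rng.normal(size=T)                       # stand-in early-warning indicator
prob = 1 / (1 + np.exp(-(-2.0 + 1.5 * credit_gap)))   # rare-event crisis probability
crisis = (rng.uniform(size=T) < prob).astype(float)

# Recursive out-of-sample evaluation: re-estimate on an expanding window,
# then predict the next observation (mimicking real-time forecasting).
preds, outcomes = [], []
for t in range(200, T - 1):
    X = np.column_stack([np.ones(t), credit_gap[:t]])
    b = fit_logit(X, crisis[:t])
    preds.append(1 / (1 + np.exp(-(b[0] + b[1] * credit_gap[t]))))
    outcomes.append(crisis[t])
preds, outcomes = np.array(preds), np.array(outcomes)

# A simple AUC: probability that a crisis draws a higher score than a non-crisis
pos, neg = preds[outcomes == 1], preds[outcomes == 0]
auc = (pos[:, None] > neg[None, :]).mean()
print(round(auc, 2))
```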
Reserve requirements, as a tool of macroprudential policy, have been increasingly employed since the outbreak of the great financial crisis. Using Bayesian model averaging methods, we analyze the effect of reserve requirements in tranquil and crisis times on long-run growth rates of GDP per capita and of credit (in % of GDP). On average, regulation has a negative effect on GDP in tranquil times, which is only partly offset by a positive (but not robust) effect in crisis times. Credit over GDP is positively affected by higher requirements in the longer run.
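The model-averaging idea can be illustrated with a BIC-based approximation (a common shortcut to Bayesian model averaging; the regressors and data here are simulated stand-ins, not the paper's dataset): enumerate regressor subsets, weight each model by exp(-BIC/2), and report posterior inclusion probabilities.

```python
import numpy as np
from itertools import combinations

rng = np.random.default_rng(3)
n, names = 200, ["reserve_req", "inflation", "openness"]  # illustrative regressors
X = rng.normal(size=(n, 3))
growth = -0.5 * X[:, 0] + 0.3 * X[:, 1] + rng.normal(size=n)  # only two matter

# BIC-based approximation to Bayesian model averaging: enumerate all regressor
# subsets, weight each model by exp(-BIC/2), and sum the weights of models
# containing each variable (its posterior inclusion probability).
models, bics = [], []
for k in range(4):
    for subset in combinations(range(3), k):
        Z = np.column_stack([np.ones(n)] + [X[:, j] for j in subset])
        resid = growth - Z @ np.linalg.lstsq(Z, growth, rcond=None)[0]
        bic = n * np.log(resid @ resid / n) + Z.shape[1] * np.log(n)
        models.append(set(subset))
        bics.append(bic)
w = np.exp(-(np.array(bics) - min(bics)) / 2)
w /= w.sum()
pip = {names[j]: sum(wi for wi, m in zip(w, models) if j in m) for j in range(3)}
print({k: round(v, 2) for k, v in pip.items()})
```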
Recurring financial instabilities have led policymakers to rely on early-warning models to signal financial vulnerabilities. These models rely on ex-post optimization of signaling thresholds on crisis probabilities, accounting for preferences between forecast errors, but come with the crucial drawback of unstable thresholds in recursive estimations. We propose two alternatives for threshold setting with similar or better out-of-sample performance: (i) including preferences in the estimation itself and (ii) setting thresholds ex ante according to preferences only. Given probabilistic model output, it is intuitive that the decision rule should be independent of the data or model specification, as a threshold on probabilities represents the willingness to issue a false alarm vis-à-vis missing a crisis. We provide real-world and simulation evidence that this simplification results in stable thresholds while maintaining or improving out-of-sample performance. Our solution is not restricted to binary-choice models, but is directly transferable to the signaling approach and to all probabilistic early-warning models.
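Alternative (ii) can be illustrated with a short simulation (the preference parameter and the calibrated probabilities are our assumptions): with weight mu on missed crises and 1-mu on false alarms, expected loss is minimized by signaling whenever the crisis probability exceeds 1-mu, a threshold that depends on preferences only.

```python
import numpy as np

rng = np.random.default_rng(4)
mu = 0.8                      # preference weight on missed crises vs false alarms
p = rng.uniform(size=20000)   # crisis probabilities emitted by some model
crisis = (rng.uniform(size=p.size) < p).astype(float)  # well-calibrated outcomes

def expected_loss(tau):
    """Average loss: mu per missed crisis plus (1 - mu) per false alarm."""
    alarm = p >= tau
    missed = (~alarm) & (crisis == 1)
    false = alarm & (crisis == 0)
    return mu * missed.mean() + (1 - mu) * false.mean()

# The decision-theoretic rule: signal whenever mu*p > (1-mu)*(1-p), i.e.
# p > 1 - mu -- a threshold set by preferences only, not by data or model.
grid = np.linspace(0.01, 0.99, 99)
best = grid[np.argmin([expected_loss(t) for t in grid])]
print(best, 1 - mu)
```

The grid search recovers a minimizing threshold close to 1 - mu, illustrating why an ex-ante preference-based threshold is stable across recursive estimations by construction.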
Can a negative shock to sovereign ratings invoke a vicious cycle of increasing government bond yields and further downgrades, ultimately pushing a country toward default? The narratives of public and political discussions, as well as of some widely cited papers, suggest this possibility. In this paper, we investigate the possible existence of such a vicious cycle. We find no evidence of a bad long-run equilibrium and, for all but the very worst ratings, cannot confirm a feedback loop leading into default even as a transitory state. We use a bivariate semiparametric dynamic panel model to reproduce the joint dynamics of sovereign ratings and government bond yields. The individual equations resemble Pesaran-type cointegration models, which allow for valid inference regardless of whether the employed variables display unit-root behavior. To incorporate most of the empirical features previously documented (separately) in the literature, we allow for different long-run relationships in both equations, nonlinearities in the level effects of ratings, and asymmetric effects of changes in ratings and yields. Our finding of a single good equilibrium implies the slow convergence of ratings and yields toward this equilibrium. However, the persistence of ratings is sufficiently high that a rating shock can have substantial costs if it occurs at a highly speculative rating or lower. Rating shocks that drive the rating below this threshold can increase the interest rate sharply, and for a long time. Yet, simulation studies based on our estimations show that it is highly improbable that rating agencies can be held responsible for the most dramatic spikes in interest rates.
Due to the recent financial crisis, interest in econometric models that incorporate binary variables (such as the occurrence of a crisis) has surged. This paper evaluates the performance of the Qual VAR, originally proposed by Dueker (2005): a VAR model that includes a latent variable governing the behavior of an observable binary variable. While we find that the Qual VAR performs reasonably well in forecasting (outperforming a probit benchmark), there are substantial identification problems even in a simple VAR specification. Typically, identification in economic applications is far more difficult than in our simple benchmark. Therefore, when the economic interpretation of the dynamic behavior of the latent variable and the chain of causality matter, use of the Qual VAR is inadvisable.
Designers of MMOs such as Diablo 3 face economic problems much like policymakers in the real world, e.g. inflation and distributional issues. Solving economic problems through regular updates (patches) has become as important to those games as traditional gameplay issues. In this paper, we provide an agent-based framework inspired by the economic features of Diablo 3 and analyze the effect of monetary policy in the game. Our model reproduces a number of features known from the Diablo 3 economy: heterogeneous price developments driven almost exclusively by goods of high quality, a highly unequal wealth distribution, and strongly decreasing economic mobility. The basic framework presented in this paper is meant as a stepping stone to further research, where our evidence is used to deepen our understanding of the real-world counterparts of such problems. The advantage of our model is that it combines the simplicity inherent to model economies with a similarly simple observable counterpart (namely, the game environment in which real agents interact). By matching the dynamics of the game economy, we can easily verify that our behavioral assumptions are good approximations to reality.
The European debt crisis has revealed severe imbalances within the Euro area, sparking a debate about the magnitude of those imbalances, in particular concerning real effective exchange rate misalignments. We use synthetic matching to construct a counterfactual economy for each member state in order to identify the degree of these misalignments. We find that crisis countries are best described as a combination of advanced and emerging economies. Comparing the actual real effective exchange rate with those of the counterfactuals gives evidence of misalignments before the outbreak of the crisis: all peripheral countries appear strongly and significantly overvalued.
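The weighting step behind synthetic matching can be sketched as a constrained least-squares problem (the predictor values below are purely illustrative, not the paper's data): choose nonnegative donor weights summing to one so that the weighted donor pool matches the treated country's pre-crisis characteristics as closely as possible.

```python
import numpy as np
from scipy.optimize import minimize

# Hypothetical pre-crisis characteristics (rows: predictors) for a treated
# country and a pool of donor economies -- purely illustrative numbers.
treated = np.array([2.1, 0.8, 3.0])
donors = np.array([[1.0, 3.0, 2.5, 0.5],
                   [0.2, 1.1, 0.9, 0.4],
                   [1.0, 4.2, 3.1, 2.0]])
n = donors.shape[1]

# Synthetic matching: nonnegative donor weights summing to one that make
# the weighted donor pool resemble the treated unit as closely as possible.
loss = lambda w: np.sum((treated - donors @ w) ** 2)
res = minimize(loss, np.full(n, 1 / n), method="SLSQP",
               bounds=[(0, 1)] * n,
               constraints=[{"type": "eq", "fun": lambda w: w.sum() - 1}])
weights = res.x
print(np.round(weights, 3))
```

The counterfactual outcome path is then the weights applied to the donors' outcome series; the simplex constraints prevent extrapolation beyond the donor pool.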
After every major financial crisis, the question about the responsibility of the rating agencies resurfaces. Regarding government bonds, the most frequently voiced concern targets "unreasonably" bad ratings that might trigger capital flight and rising risk premia, which in turn justify further rating downgrades. In this paper, we develop a multivariate, nonparametric version of the Pesaran-type cointegration model that allows for nonlinearities, and show that a unique equilibrium between ratings and sovereign yields exists. We therefore reject the concern that there is an unholy cycle leading to certain default in the long run.
The current European debt crisis has led to a reinforced effort to identify the sources of risk and their influence on yields of European government bonds. Until now, the potentially nonlinear influence of explanatory variables and the theoretical need for interactions reflecting flight-to-quality and flight-to-liquidity have been widely disregarded. I estimate government bond yields of the Euro-12 countries (excluding Luxembourg) from May 2003 to December 2011. Using penalized spline regression, I find that the effect of most explanatory variables is highly nonlinear. These nonlinearities, together with flight-to-quality and flight-to-liquidity patterns, can explain the co-movement of bond yields until September 2008 as well as the strong differentiation during the financial and European debt crises, without the unnecessary assumption of a structural break. The main effects are credit risk and flight-to-liquidity, while the evidence for flight-to-quality and liquidity risk (the latter measured by the bid-ask spread and total turnover of bonds) is comparatively weak.
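The penalized-spline idea can be sketched with a truncated-power basis (one of several equivalent formulations; the data and smoothing parameter below are illustrative assumptions, not the paper's specification): knot coefficients receive a ridge penalty, so the penalty weight governs how nonlinear the fitted effect is allowed to be.

```python
import numpy as np

rng = np.random.default_rng(5)
n = 300
debt = np.sort(rng.uniform(0, 10, n))                   # stand-in regressor
yield_spread = np.sin(debt) + 0.3 * rng.normal(size=n)  # nonlinear true effect

# Penalized spline in truncated-power form: a linear part plus one basis
# function per knot, with a ridge penalty shrinking the knot coefficients
# toward zero (so lambda controls the degree of nonlinearity).
knots = np.quantile(debt, np.linspace(0.05, 0.95, 20))
Z = np.maximum(debt[:, None] - knots[None, :], 0.0)  # truncated lines (x - k)_+
X = np.column_stack([np.ones(n), debt, Z])
lam = 1.0
P = np.zeros(X.shape[1])
P[2:] = lam                                          # penalize only knot terms
beta = np.linalg.solve(X.T @ X + np.diag(P), X.T @ yield_spread)
fit = X @ beta
print(round(float(np.corrcoef(fit, np.sin(debt))[0, 1]), 2))
```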
The signals approach as an early-warning system has been fairly successful in detecting crises, but it has so far failed to gain popularity in the scientific community because it cannot distinguish between randomly achieved in-sample fit and true predictive power. To overcome this obstacle, we test the null hypothesis of no correlation between indicators and crisis probability in three applications of the signals approach to different crisis types. To that end, we propose bootstraps specifically tailored to the characteristics of the respective datasets. We find (1) that previous applications of the signals approach yield economically meaningful results; (2) that composite indicators aggregating information contained in individual indicators add value to the signals approach; and (3) that indicators which are found to be significant in-sample usually perform similarly well out-of-sample.
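The bootstrap logic can be sketched as follows (a simplified permutation variant on simulated data; the paper's bootstraps are tailored to the time-series structure of each dataset): permuting the crisis series imposes the null of no correlation, and the observed in-sample fit is compared against this null distribution.

```python
import numpy as np

rng = np.random.default_rng(6)
n = 250
indicator = rng.normal(size=n)
crisis = (rng.uniform(size=n) < 1 / (1 + np.exp(-(-1.5 + 2.0 * indicator)))).astype(int)

def best_signal_share(x, c):
    """Signals approach: grid over thresholds, return the highest share of
    correctly classified observations (signal iff indicator exceeds threshold)."""
    shares = [np.mean((x > t) == (c == 1))
              for t in np.quantile(x, np.linspace(0.1, 0.9, 33))]
    return max(shares)

observed = best_signal_share(indicator, crisis)

# Bootstrap under the null of no correlation: permuting the crisis series
# destroys any indicator-crisis link while preserving the crisis frequency.
null_dist = np.array([best_signal_share(indicator, rng.permutation(crisis))
                      for _ in range(500)])
p_value = np.mean(null_dist >= observed)
print(round(p_value, 3))
```

A small p-value indicates that the in-sample fit of the indicator is unlikely to be a product of chance, which is the distinction between random in-sample fit and true predictive power made in the abstract.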
European authorities and scholars have published proposals on which indicators of macroeconomic imbalances might be used to uncover risks to the sustainability of public debt in the European Union. In this article, the ability of four proposed sets of indicators to send early warnings of debt crises is tested using a signals approach, both for evaluating individual indicators and for constructing composite indicators. A broad composite indicator is found to have the highest predictive power. This result still holds if equal weights are used in the construction of the composite indicator in order to reflect uncertainty about the origin of future crises.