Instrumental variable estimation via a continuum of instruments with an application to estimating the elasticity of intertemporal substitution in consumption

This study proposes new instrumental variable (IV) estimators for linear models that exploit a continuum of instruments effectively. The effectiveness is attributed to the unique weighting function employed in the minimum distance objective functions, which has attractive properties for estimation efficiency. The proposed estimators have analytical formulas and are easy to compute. Inference is also straightforward, since the variance estimators used for parameter inference are available in analytical form. The proposed estimators are robust to weak instruments and to heteroskedasticity of unknown form. Further, they are robust to high dimensionality o..

Econometrics

Generalized Spectral Tests for High Dimensional Multivariate Martingale Difference Hypotheses

This study proposes new generalized spectral tests for multivariate martingale difference hypotheses, especially suitable for high-dimensional situations. The new tests are based on the martingale difference divergence (MDD) proposed by Shao and Zhang (2014). They consider block-wise serial dependence at all lags and are therefore consistent against general block-wise nonparametric Pitman local alternatives at the parametric rate n^{-1/2}, where n is the sample size, while being free of user-chosen parameters. In order to cope with high dimensionality, in the sense that the dimension of the time series is comparable to or even greater than the sample size, it is pivotal to employ a bias-..
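As an illustration of the building block, here is a minimal numpy sketch of the sample martingale difference divergence for scalar series, assuming the V-statistic form MDD_n^2 = -(1/n^2) sum_{i,j} (Y_i - Ybar)(Y_j - Ybar)|X_i - X_j|; the paper's block-wise, high-dimensional construction is not reproduced here:

```python
import numpy as np

def mdd_squared(x, y):
    """Sample (squared) martingale difference divergence of y given x.

    Assumed V-statistic form for scalar x, y (cf. Shao and Zhang, 2014):
        MDD_n^2 = -(1/n^2) * sum_{i,j} (y_i - ybar)(y_j - ybar) |x_i - x_j|
    A clearly positive value signals that E[y | x] depends on x.
    """
    x = np.asarray(x, dtype=float)
    yc = np.asarray(y, dtype=float) - np.mean(y)
    dist = np.abs(x[:, None] - x[None, :])        # pairwise |x_i - x_j|
    return -np.mean(np.outer(yc, yc) * dist)

rng = np.random.default_rng(0)
x = rng.normal(size=500)
y_dep = x**2 + 0.1 * rng.normal(size=500)   # E[y|x] depends on x
y_mds = rng.normal(size=500)                # martingale difference w.r.t. x
```

On the dependent pair the statistic is well away from zero, while on the independent pair it hovers near zero, which is what the test exploits.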

Econometrics

Double generative adversarial networks for conditional independence testing

In this article, we study the problem of high-dimensional conditional independence testing, a key building block in statistics and machine learning. We propose an inferential procedure based on double generative adversarial networks (GANs). Specifically, we first introduce a double GANs framework to learn two generators of the conditional distributions. We then integrate the two generators to construct a test statistic, which takes the form of the maximum of generalized covariance measures of multiple transformation functions. We also employ data-splitting and cross-fitting to minimize the conditions on the generators to achieve the desired asymptotic properties, and employ multiplier bootst..

Econometrics

Robustness of Detrended Cross-correlation Analysis Method Under Outlier Observations

The computation of the bivariate Hurst exponent is an important technique for testing the power-law cross-correlation of time series, and the detrended cross-correlation analysis method is the most widely used one for this purpose. In this article, we prove the robustness of the detrended cross-correlation analysis method, in which the trend is estimated by polynomial fitting, for estimating the bivariate Hurst exponent when the time series are corrupted by outlier observations. In addition, we give the exact polynomial order and a regression region for computing the detrended cross-correlation function so as to obtain a least-squares estimator of the bivariate Hurst exponent. Our theoretical re..
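A minimal numpy sketch of the detrended cross-correlation procedure the abstract refers to — integrated profiles, window-wise polynomial detrending, and a log-log regression of the fluctuation function to read off the bivariate Hurst exponent. The window sizes and polynomial order below are illustrative choices, not the paper's prescriptions:

```python
import numpy as np

def dcca_hurst(x, y, scales, order=1):
    """Detrended cross-correlation analysis with polynomial detrending.

    Returns the estimated bivariate Hurst exponent: the least-squares slope of
    log F_DCCA(s) on log s, where F_DCCA(s)^2 averages the covariance of the
    polynomial-detrending residuals over non-overlapping windows of size s.
    Illustrative sketch for positively cross-correlated series (F^2 > 0).
    """
    px = np.cumsum(x - np.mean(x))       # integrated profiles
    py = np.cumsum(y - np.mean(y))
    logF, logS = [], []
    for s in scales:
        t = np.arange(s)
        covs = []
        for start in range(0, len(px) - s + 1, s):
            seg_x = px[start:start + s]
            seg_y = py[start:start + s]
            rx = seg_x - np.polyval(np.polyfit(t, seg_x, order), t)
            ry = seg_y - np.polyval(np.polyfit(t, seg_y, order), t)
            covs.append(np.mean(rx * ry))
        logF.append(0.5 * np.log(np.mean(covs)))   # F(s) = sqrt(avg detrended cov)
        logS.append(np.log(s))
    return np.polyfit(logS, logF, 1)[0]

rng = np.random.default_rng(1)
z = rng.normal(size=4096)
h = dcca_hurst(z, z, scales=[16, 32, 64, 128, 256])
# for x = y the method reduces to DFA; white noise has exponent near 0.5
```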

Econometrics

Bandwidth selection for the Local Polynomial Double Conditional Smoothing under Spatial ARMA Errors*

Nonparametric estimation of the mean surface of spatial data usually relies on a bivariate regressor, which makes estimation computationally inefficient for large data sets. The Double Conditional Smoothing (DCS) increases computational efficiency by reducing the regression problem to one dimension. We apply the DCS scheme to two-dimensional functional or spatial time series and use local polynomial regression to estimate the regression surface and its derivatives. Asymptotic formulas for the expectation and variance are given, and formulas for the asymptotically optimal bandwidths are derived. We propose an iterative plug-in algorithm for estimating these optimal bandwidths under dependent errors. Spatia..

Econometrics

Dividend Momentum and Stock Return Predictability: A Bayesian Approach

A long tradition in macro-finance studies the joint dynamics of aggregate stock returns and dividends using vector autoregressions (VARs), imposing the cross-equation restrictions implied by the Campbell-Shiller (CS) identity to sharpen inference. We take a Bayesian perspective and develop methods to draw from any posterior distribution of a VAR that encodes a priori skepticism about large amounts of return predictability while imposing the CS restrictions. In doing so, we show how a common empirical practice of omitting dividend growth from the system amounts to imposing the extra restriction that dividend growth is not persistent. We highlight that persistence in dividend growth induces a ..

Econometrics

Autoregressive conditional duration modelling of high frequency data

This paper explores duration dynamics modelling under the Autoregressive Conditional Duration (ACD) framework (Engle and Russell 1998). I test different distributional assumptions for the durations. The empirical results suggest that the unconditional durations are well approximated by Gamma distributions. Moreover, compared with exponential and Weibull distributions, the ACD model with Gamma-distributed innovations provides the best fit to SPY durations.
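The ACD recursion is simple enough to sketch. Below is a minimal simulator and the exponential-innovation log-likelihood, assuming the standard ACD(1,1) parameterization of Engle and Russell (1998); the Gamma and Weibull variants discussed in the abstract would swap only the innovation draw and density:

```python
import numpy as np

def simulate_acd(n, omega=0.1, alpha=0.2, beta=0.7, rng=None):
    """Simulate an ACD(1,1) process:
        x_i = psi_i * eps_i,  psi_i = omega + alpha * x_{i-1} + beta * psi_{i-1},
    with unit-mean exponential innovations."""
    rng = rng or np.random.default_rng()
    x = np.empty(n)
    psi = omega / (1.0 - alpha - beta)          # unconditional mean duration
    for i in range(n):
        eps = rng.exponential(1.0)              # unit-mean innovation
        x[i] = psi * eps
        psi = omega + alpha * x[i] + beta * psi
    return x

def acd_exp_loglik(x, omega, alpha, beta):
    """Exponential-ACD log-likelihood: sum over i of -log(psi_i) - x_i / psi_i."""
    psi = omega / (1.0 - alpha - beta)
    ll = 0.0
    for xi in x:
        ll += -np.log(psi) - xi / psi
        psi = omega + alpha * xi + beta * psi
    return ll

x = simulate_acd(5000, rng=np.random.default_rng(2))
# unconditional mean duration should be near omega / (1 - alpha - beta) = 1.0
```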

Econometrics

Bootstrap inference for panel data quantile regression

This paper develops bootstrap methods for practical statistical inference in panel data quantile regression models with fixed effects. We consider random-weighted bootstrap resampling and formally establish its validity for asymptotic inference. The bootstrap algorithm is simple to implement in practice, requiring only a weighted quantile regression estimation for fixed-effects panel data. We provide results under conditions that allow for temporal dependence of observations within individuals, thus encompassing a large class of possible empirical applications. Monte Carlo simulations provide numerical evidence that the proposed bootstrap methods have good finite-sample properties. Finally, we provid..
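The random-weighted bootstrap idea can be illustrated on the simplest quantile problem — a location model rather than the paper's fixed-effects panel setting — where each bootstrap draw re-solves a weighted check-loss minimization under i.i.d. unit-mean exponential weights:

```python
import numpy as np

def weighted_quantile(y, w, tau):
    """Minimizer of sum_i w_i * rho_tau(y_i - b): the weighted tau-quantile."""
    order = np.argsort(y)
    cw = np.cumsum(w[order]) / np.sum(w)
    return y[order][np.searchsorted(cw, tau)]

def rw_bootstrap_quantile(y, tau=0.5, B=999, rng=None):
    """Random-weighted bootstrap draws: re-solve the weighted quantile problem
    under i.i.d. unit-mean exponential weights (location-model sketch only)."""
    rng = rng or np.random.default_rng()
    n = len(y)
    return np.array([weighted_quantile(y, rng.exponential(1.0, n), tau)
                     for _ in range(B)])

rng = np.random.default_rng(3)
y = rng.normal(loc=2.0, size=400)
draws = rw_bootstrap_quantile(y, tau=0.5, B=500, rng=rng)
lo, hi = np.quantile(draws, [0.025, 0.975])   # 95% percentile interval for the median
```

The percentile interval of the bootstrap draws then serves as the inferential object; in the paper this step is carried out with a full weighted quantile regression for the panel.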

Econometrics

Multiplicative Component GARCH Model of Intraday Volatility

This paper proposes a multiplicative component intraday volatility model. The intraday conditional volatility is expressed as the product of an intraday periodic component, an intraday stochastic volatility component, and a daily conditional volatility component. I extend the multiplicative component intraday volatility model of Engle (2012) and Andersen and Bollerslev (1998) by incorporating the durations between consecutive transactions. The model can be applied to both regularly and irregularly spaced returns. I also provide a nonparametric estimation technique for the intraday volatility periodicity. The empirical results suggest that the model can successfully capture the interdependency of intraday r..
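A minimal sketch of nonparametric periodicity estimation in a multiplicative setting, on simulated data; the bin counts, the shape of the diurnal pattern, and the simple averaging estimator are illustrative assumptions, not the paper's exact technique:

```python
import numpy as np

def estimate_periodicity(returns, n_days, n_bins):
    """Nonparametric intraday periodicity: average squared returns per intraday
    bin across days, normalized to unit mean (sketch of the multiplicative
    decomposition sigma^2_{t,i} = daily_t * periodic_i * stochastic_{t,i})."""
    r2 = returns.reshape(n_days, n_bins) ** 2
    s = r2.mean(axis=0)
    return s / s.mean()

rng = np.random.default_rng(4)
n_days, n_bins = 250, 78                    # e.g. 5-minute bins in a 6.5h session
i = np.arange(n_bins)
periodic = 0.5 + np.abs(i - n_bins / 2) / n_bins   # U-shaped: high at open/close
periodic /= periodic.mean()                        # normalize to unit mean
daily = rng.lognormal(mean=-0.125, sigma=0.5, size=n_days)  # daily variance, mean 1
sigma2 = daily[:, None] * periodic[None, :]
r = rng.normal(size=(n_days, n_bins)) * np.sqrt(sigma2)
s_hat = estimate_periodicity(r.ravel(), n_days, n_bins)
```

Averaging across days isolates the periodic factor because the daily and stochastic components have unit mean by construction.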

Econometrics

A Causality-based Graphical Test to obtain an Optimal Blocking Set for Randomized Experiments

Randomized experiments are often performed to study the causal effects of interest. Blocking is a technique for precisely estimating causal effects when the experimental material is not homogeneous. We formalize the problem of obtaining a statistically optimal set of covariates to be used to create blocks while performing a randomized experiment. We provide a graphical test to obtain such a set for a general semi-Markovian causal model. We also formulate, and provide ideas towards solving, a more general problem of obtaining an optimal blocking set that considers both the statistical and economic costs of blocking.

Econometrics

Correlation Estimation in Hybrid Systems

A simple method is proposed to estimate the instantaneous correlations between state variables in a hybrid system from the empirical correlations between observable market quantities such as the spot rate, stock price and implied volatility. The new algorithm is extremely fast since only 4x4 linear systems are involved. If the matrix resulting from the linear systems is not positive semidefinite, it can easily be converted using a method called shrinking, which requires only bisection-style iterations. The square of the short-term at-the-money implied volatility is suggested as a proxy for the unobservable stochastic variance. If the implied volatility is not available, a simple algorithm is ..
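The shrinking step lends itself to a short sketch: bisect on the shrinkage weight toward a PSD target (here the identity, which preserves the unit diagonal of a correlation matrix) until the smallest eigenvalue is nonnegative. The target choice and tolerance are illustrative assumptions:

```python
import numpy as np

def shrink_to_psd(C, target=None, tol=1e-8):
    """Shrinking: find the smallest alpha in [0, 1] such that
    (1 - alpha) * C + alpha * target is positive semidefinite, by bisection
    on alpha. The target (identity by default) is PSD, so alpha = 1 works."""
    target = np.eye(C.shape[0]) if target is None else target
    def min_eig(a):
        return np.linalg.eigvalsh((1 - a) * C + a * target).min()
    if min_eig(0.0) >= 0:
        return C, 0.0                  # already PSD, nothing to do
    lo_a, hi_a = 0.0, 1.0
    while hi_a - lo_a > tol:           # bisection-style iterations
        mid = 0.5 * (lo_a + hi_a)
        if min_eig(mid) >= 0:
            hi_a = mid
        else:
            lo_a = mid
    a = hi_a                           # smallest feasible alpha found
    return (1 - a) * C + a * target, a

# an invalid "correlation" matrix (unit diagonal but not PSD)
C = np.array([[ 1.0, 0.9, -0.9],
              [ 0.9, 1.0,  0.9],
              [-0.9, 0.9,  1.0]])
C_psd, alpha = shrink_to_psd(C)
```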

Econometrics

Finite- and Large-Sample Inference for Ranks using Multinomial Data with an Application to Ranking Political Parties

It is common to rank different categories by means of preferences that are revealed through data on choices. A prominent example is the ranking of political candidates or parties using the estimated share of support each one receives in surveys or polls about political attitudes. Since these rankings are computed using estimates of the share of support rather than the true share of support, there may be considerable uncertainty concerning the true ranking of the political candidates or parties. In this paper, we consider the problem of accounting for such uncertainty by constructing confidence sets for the rank of each category. We consider both the problem of constructing marginal confidenc..

Econometrics

Forecasting with VAR-teXt and DFM-teXt Models: exploring the predictive power of central bank communication

This paper explores the complementarity between traditional econometrics and machine learning and applies the resulting model – the VAR-teXt – to central bank communication. The VAR-teXt is a vector autoregressive (VAR) model augmented with information retrieved from text, turned into quantitative data via a Latent Dirichlet Allocation (LDA) model, whereby the number of topics (or textual factors) is chosen based on their predictive performance. A Markov chain Monte Carlo (MCMC) sampling algorithm for the estimation of the VAR-teXt that takes into account the fact that the textual factors are estimates is also provided. The approach is then extended to dynamic factor models (DFM) generat..

Econometrics

Incorporating Search and Sales Information in Demand Estimation

We propose an approach to modeling and estimating discrete choice demand that allows for a large number of zero sale observations, rich unobserved heterogeneity, and endogenous prices. We do so by modeling small market sizes through Poisson arrivals. Each of these arriving consumers then solves a standard discrete choice problem. We present a Bayesian IV estimation approach that addresses sampling error in product shares and scales well to rich data environments. The data requirements are traditional market-level data and measures of consumer search intensity. After presenting simulation studies, we consider an empirical application of air travel demand where product-level sales are sparse. ..

Econometrics

Coresets for Time Series Clustering

We study the problem of constructing coresets for clustering problems with time series data. This problem has gained importance across many fields including biology, medicine, and economics due to the proliferation of sensors for real-time measurement and rapid drop in storage costs. In particular, we consider the setting where the time series data on N entities is generated from a Gaussian mixture model with autocorrelations over k clusters in R^d. Our main contribution is an algorithm to construct coresets for the maximum likelihood objective for this mixture model. Our algorithm is efficient, and, under a mild assumption on the covariance matrices of the Gaussians, the size of the cor..

Econometrics

An online sequential test for qualitative treatment effects

Tech companies (e.g., Google or Facebook) often use randomized online experiments and/or A/B testing based primarily on average treatment effects to compare a new product with an old one. However, it is also critically important to detect qualitative treatment effects, whereby the new product may significantly outperform the existing one only under some specific circumstances. The aim of this paper is to develop a powerful testing procedure that efficiently detects such qualitative treatment effects. We propose a scalable online updating algorithm to implement our test procedure. It has three novel features: adaptive randomization, sequential monitoring, and online updating with guaran..

Econometrics

Two-stage multilevel latent class analysis with covariates in the presence of direct effects

In this article, we present a two-stage estimation approach for multilevel latent class analysis (LCA) with covariates. We separate the estimation of the measurement and structural models, which makes extending the structural model computationally efficient. We investigate the robustness against misspecification of the proposed two-stage approach and the classical one-stage approach for models in which a direct effect exists between indicators of the LC model and a covariate, and this direct effect is ignored.

Econometrics

Faster fiscal stimulus and a higher government spending multiplier in China: Mixed-frequency identification with SVAR

Motivated by two scenarios in which government spending in China reacted to output shocks within a quarter, this letter points out a downward bias in estimates of the Chinese government spending multiplier obtained under the classical lag restriction for shock identification in a quarterly SVAR framework à la Blanchard and Perotti (2002). By relaxing the lag-length restriction from one quarter to one month, we propose a mixed-frequency identification (MFI) strategy that takes the unexpected spending change in the first month of each quarter as an instrument. The estimation results show that the Chinese government reacts significantly and counter-cyclically to output shocks within a quarter, ..

Econometrics

Modelling Cycles in Climate Series: the Fractional Sinusoidal Waveform Process

The paper proposes a novel model for time series displaying persistent stationary cycles, the fractional sinusoidal waveform process. The underlying idea is to let the parameters that regulate the amplitude and phase evolve according to fractional noise processes. Its advantages with respect to popular alternative specifications, such as the Gegenbauer process, are twofold: first, the autocovariance function is available in closed form, which opens the way to exact maximum likelihood estimation; second, the model encompasses deterministic cycles, so that discrete spectra arise as a limiting case. A generalization of the process, featuring multiple components, an additive `red noise' component..

Econometrics

A quantile based dimension reduction technique

Partial least squares (PLS) is a dimensionality reduction technique used as an alternative to ordinary least squares (OLS) in situations where the data are collinear or high dimensional. Both PLS and OLS provide mean-based estimates, which are extremely sensitive to the presence of outliers or heavy-tailed distributions. In contrast, quantile regression is an alternative to OLS that computes robust quantile-based estimates. In this work, multivariate PLS is extended to the quantile regression framework, obtaining a theoretical formulation of the problem and a robust dimensionality reduction technique that we call fast partial quantile regression (fPQR), which provides quantile-based estimate..

Econometrics

A Dummy Test of Identification in Models with Bunching

We propose a simple test of the main identification assumption in models where the treatment variable takes multiple values and has bunching. The test consists of adding an indicator of the bunching point to the estimation model and testing whether the coefficient of this indicator is zero. Although similar in spirit to the test in Caetano (2015), the dummy test has important practical advantages: it is more powerful at detecting endogeneity, and it also detects violations of the functional form assumption. The test does not require exclusion restrictions and can be implemented in many approaches popular in empirical research, including linear, two-way fixed effects, and discrete choice mode..
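A minimal sketch of the dummy test in a linear model: add an indicator of the bunching point to the regression and inspect its t-statistic. Simulated data with homoskedastic standard errors; the variable names and data-generating process are illustrative:

```python
import numpy as np

def bunching_dummy_test(y, t, bunch_point=0.0):
    """Dummy test sketch: regress y on (1, t, 1{t == bunch_point}) and return
    the t-statistic of the bunching indicator. Under the identification (and
    functional form) assumptions its coefficient should be zero."""
    d = (t == bunch_point).astype(float)          # bunching-point indicator
    X = np.column_stack([np.ones_like(t), t, d])
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ beta
    dof = len(y) - X.shape[1]
    s2 = resid @ resid / dof                      # homoskedastic error variance
    cov = s2 * np.linalg.inv(X.T @ X)
    return beta[2] / np.sqrt(cov[2, 2])

rng = np.random.default_rng(5)
n = 2000
t = np.maximum(rng.normal(1.0, 1.0, n), 0.0)     # treatment bunches at 0
y_ok = 1.0 + 0.5 * t + rng.normal(size=n)        # no break at the bunch point
y_bad = y_ok + 0.8 * (t == 0.0)                  # bunchers differ systematically
```

A small t-statistic is consistent with the identification assumption; a large one flags endogeneity or a functional form violation.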

Econometrics

Efficient Nonparametric Estimation of Generalized Autocovariances

This paper provides a necessary and sufficient condition for asymptotic efficiency of a nonparametric estimator of the generalized autocovariance function of a stationary random process. The generalized autocovariance function is the inverse Fourier transform of a power transformation of the spectral density and encompasses the traditional and inverse autocovariance functions as particular cases. A nonparametric estimator is based on the inverse discrete Fourier transform of the power transformation of the pooled periodogram. The general result on the asymptotic efficiency is then applied to the class of Gaussian stationary ARMA processes and its implications are discussed. Finally, we illus..
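A minimal numpy sketch of the estimator's core mechanics — the inverse DFT of a power of the periodogram — using the raw rather than the pooled periodogram; p = 1 recovers the ordinary (circular) sample autocovariances:

```python
import numpy as np

def generalized_autocovariance(x, p, max_lag):
    """Estimate the generalized autocovariance function as the inverse DFT of
    the p-th power of the periodogram (sketch: raw periodogram in place of the
    paper's pooled periodogram). p = 1 gives the ordinary circular sample
    autocovariances; p = -1 targets the inverse autocovariances."""
    x = np.asarray(x, dtype=float) - np.mean(x)
    n = len(x)
    periodogram = np.abs(np.fft.fft(x)) ** 2 / n   # spectral density estimate
    gamma = np.real(np.fft.ifft(periodogram ** p))
    return gamma[:max_lag + 1]

rng = np.random.default_rng(6)
x = rng.normal(size=2048)
g1 = generalized_autocovariance(x, p=1, max_lag=5)
# g1[0] equals the sample variance; higher lags are near zero for white noise
```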

Econometrics

Conditional inference and bias reduction for partial effects estimation of fixed-effects logit models

We propose a multiple-step procedure to compute average partial effects (APEs) for fixed-effects panel logit models estimated by Conditional Maximum Likelihood (CML). As the individual effects are eliminated by conditioning on suitable sufficient statistics, we propose evaluating the APEs at the ML estimates of the unobserved heterogeneity, along with the fixed-T consistent estimator of the slope parameters, and then reducing the induced bias in the APEs by an analytical correction. The proposed estimator has bias of order O(T^{-2}); it performs well in finite samples and, when the dynamic logit model is considered, outperforms alternative plug-in strategies based on bias-corrected estimates fo..

Econometrics

MCMC Conditional Maximum Likelihood for the two-way fixed-effects logit

We propose a Markov chain Monte Carlo Conditional Maximum Likelihood (MCMC-CML) estimator for two-way fixed-effects logit models for dyadic data. The proposed MCMC approach, based on a Metropolis algorithm, allows us to overcome the computational issues of evaluating the probability of the outcome conditional on the nodes' in- and out-degrees, which are sufficient statistics for the incidental parameters. Under mild regularity conditions, the MCMC-CML estimator converges to the exact CML one and is asymptotically normal. Moreover, it is more efficient than the existing pairwise CML estimator. We study the finite-sample properties of the proposed approach by means of a simulation study and three em..

Econometrics

Kernel-based Time-Varying IV estimation: handle with care

Giraitis, Kapetanios, and Marcellino (Journal of Econometrics, 2020) proposed a kernel-based time-varying coefficients IV estimator. Using entirely different code, we broadly replicate the simulation results and the empirical application on the Phillips Curve, but we note that a small coding mistake might have affected some of the reported results. Further, we extend the results by using a different sample and many kernel functions; we find that the estimator is remarkably robust across a wide range of smoothing choices, but the effect of outliers may be less obvious than expected.
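A minimal sketch of a kernel-based time-varying IV estimator of this type, for a scalar regressor and instrument; the Gaussian kernel, the bandwidth, and the data-generating process are illustrative assumptions:

```python
import numpy as np

def tv_iv(y, x, z, bandwidth, kernel=lambda u: np.exp(-0.5 * u**2)):
    """Kernel-based time-varying IV estimator, scalar regressor/instrument:
        beta_t = [sum_s K((s-t)/H) z_s y_s] / [sum_s K((s-t)/H) z_s x_s]
    i.e. a locally weighted IV ratio around each time point t."""
    n = len(y)
    t_idx = np.arange(n)
    beta = np.empty(n)
    for t in range(n):
        w = kernel((t_idx - t) / bandwidth)   # smoothing weights around t
        beta[t] = (w * z * y).sum() / (w * z * x).sum()
    return beta

rng = np.random.default_rng(7)
n = 1000
beta_true = 1.0 + 0.5 * np.sin(2 * np.pi * np.arange(n) / n)  # smooth path
z = rng.normal(size=n)                      # instrument
u = rng.normal(size=n)                      # structural error
x = z + 0.5 * u + 0.5 * rng.normal(size=n)  # endogenous regressor
y = beta_true * x + u
beta_hat = tv_iv(y, x, z, bandwidth=80.0)
```

The bandwidth controls the effective local sample size, which is exactly the smoothing choice whose robustness the comment investigates.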

Econometrics

Beyond F-statistic - A General Approach for Assessing Weak Identification

We propose a new method to detect weak identification in instrumental variable (IV) models. The method is based on the asymptotic normality of the distributions of the estimated structural-equation coefficients of the endogenous variables under strong identification. The resulting test is therefore more flexible than previous tests, as it does not depend on a specific class of models but is applicable to a variety of linear and non-linear IV models, or mixtures of them, that can be estimated by the generalized method of moments (GMM). Moreover, our proposed test does not rely on assumptions of homoskedasticity or the absence of autocorrelation. For linear models..

Econometrics

Hierarchical Gaussian Process Models for Regression Discontinuity/Kink under Sharp and Fuzzy Designs

We propose nonparametric Bayesian estimators for causal inference exploiting Regression Discontinuity/Kink (RD/RK) under sharp and fuzzy designs. Our estimators are based on Gaussian Process (GP) regression and classification. The GP methods are powerful probabilistic modeling approaches that are advantageous in terms of derivative estimation and uncertainty quantification, facilitating RK estimation and inference in RD/RK models. These estimators are extended to hierarchical GP models with an intermediate Bayesian neural network layer and can be characterized as hybrid deep learning models. Monte Carlo simulations show that our estimators perform similarly to, and often better than, competing est..

Econometrics

Identification-Robust Nonparametric Inference in a Linear IV Model

For a linear IV regression, we propose two new inference procedures on parameters of endogenous variables that are robust to any identification pattern, do not rely on a linear first-stage equation, and account for heteroskedasticity of unknown form. Building on Bierens (1982), we first propose an Integrated Conditional Moment (ICM) type statistic constructed by setting the parameters to the value under the null hypothesis. The ICM procedure tests at the same time the value of the coefficient and the specification of the model. We then adopt a conditionality principle to condition on a set of ICM statistics that informs on identification strength. Our two procedures uniformly control size ir..

Econometrics

RieszNet and ForestRiesz: Automatic Debiased Machine Learning with Neural Nets and Random Forests

Many causal and policy effects of interest are defined by linear functionals of high-dimensional or non-parametric regression functions. $\sqrt{n}$-consistent and asymptotically normal estimation of the object of interest requires debiasing to reduce the effects of regularization and/or model selection. Debiasing is typically achieved by adding to the plug-in estimator of the functional a correction term derived from a functional-specific theoretical characterization of what is known as the influence function, which leads to properties such as double robustness and Neyman orthogonality. We instead implement an automatic debiasing procedure based on automat..

Econometrics

A Time-Varying Endogenous Random Coefficient Model with an Application to Production Functions

This paper proposes a random coefficient panel model where the regressors are correlated with the time-varying random coefficients in each period, a critical feature in many economic applications. We model the random coefficients as unknown functions of a fixed effect of arbitrary dimensions, a time-varying random shock that affects the choice of regressors, and an exogenous idiosyncratic shock. A sufficiency argument is used to control for the fixed effect, which enables one to construct a feasible control function for the random shock and subsequently identify the moments of the random coefficients. We propose a three-step series estimator and prove an asymptotic normality result. Simulati..

Econometrics