
Econometrics and Statistics



Part A: Econometrics. Emphasis is given to methodological and theoretical papers containing substantial econometric derivations or showing the potential for a significant impact in the broad area of econometrics. Topics of interest include the estimation of econometric models and associated inference, model selection, panel data, measurement error, Bayesian methods, and time series analyses. Simulations are considered when they involve an original methodology. Innovative papers in financial econometrics and its applications are considered. The covered topics include portfolio allocation, option pricing, quantitative risk management, systemic risk and market microstructure. Well-founded applied econometric studies that demonstrate the practicality of new procedures and models are also of interest. Such studies should involve the rigorous application of statistical techniques, including estimation, inference and forecasting. Topics include volatility and risk, credit risk, pricing models, portfolio management, and emerging markets. Innovative contributions in empirical finance and financial data analysis that use advanced statistical methods are encouraged. The results of the submissions should be replicable. Applications consisting only of routine calculations are not of interest to the journal.

Part B: Statistics. Papers providing important original contributions to methodological statistics inspired by applications are considered for this section. Papers dealing, directly or indirectly, with computational and technical elements are particularly encouraged. These cover developments concerning issues of high-dimensionality, re-sampling, dependence, robustness, filtering, and, in general, the interaction of mathematical methods, numerical implementations and the extra burden of analysing large and/or complex datasets with such methods in different areas such as medicine, epidemiology, biology, psychology, climatology and communication. Innovative algorithmic developments are also of interest, as are the computer programs and the computational environments that implement them as a complement.

Volume 8

Pages 1-250 (October 2018)

Part A: Econometrics and Part B: Statistics — Special issue on Quantile regression and semiparametric methods

Abstract

Heterogeneity in how some independent variables affect a dependent variable is pervasive in many phenomena. In this respect, this paper addresses the question of constant versus nonconstant effect through quantile regression modelling. For linear quantile regression under endogeneity, it is often believed that the fitted-value setting (i.e., replacing endogenous regressors with their exogenous fitted values) implies a constant effect (that is, the coefficients of the covariates do not depend on the considered quantile, except for the intercept). Here, it is shown that, under a weakened instrumental variable restriction, the fitted-value setting can allow for a nonconstant effect, even though only the constant-effect coefficients of the model can be identified. An application to food demand estimation in Egypt in 2012 shows the practical potential of this approach.
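
For intuition only, here is a minimal Python sketch of the fitted-value setting described above: the endogenous regressor is replaced by its fitted values from a first-stage regression on an instrument, and quantile regressions are then run at several quantiles. The data, variable names and parameter values are invented for illustration and are not taken from the paper.

```python
# Fitted-value setting for quantile regression under endogeneity:
# replace the endogenous regressor with its first-stage fitted values.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 2000
z = rng.normal(size=n)                    # instrument
u = rng.normal(size=n)                    # common shock (source of endogeneity)
x = 0.8 * z + u + rng.normal(size=n)      # endogenous regressor
y = 1.0 + 0.5 * x + u + rng.normal(size=n)

# First stage: project the endogenous regressor on the instrument.
first = sm.OLS(x, sm.add_constant(z)).fit()
x_hat = first.fittedvalues

# Second stage: quantile regressions of y on the exogenous fitted values.
X_hat = sm.add_constant(x_hat)
for tau in (0.25, 0.5, 0.75):
    fit = sm.QuantReg(y, X_hat).fit(q=tau)
    print(f"tau={tau:.2f}  slope={fit.params[1]:.3f}")
```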


Abstract

Continuous treatments (e.g., doses) arise often in practice. Methods for estimation and inference for quantile treatment effects models with a continuous treatment are proposed. Identification of the parameters of interest, the dose-response functions and the quantile treatment effects, is achieved under the assumption that selection into treatment is based on observable characteristics. An easy-to-implement semiparametric two-step estimator, where the first step is based on a flexible Box–Cox model, is proposed. Uniform consistency and weak convergence of this estimator are established. Practical statistical inference procedures are developed using the bootstrap. Monte Carlo simulations show that the proposed methods have good finite sample properties. Finally, the proposed methods are applied to a survey of Massachusetts lottery winners to estimate the unconditional quantile effects of the prize amount, as a proxy for non-labor income changes, on subsequent labor earnings from U.S. Social Security records. The empirical results reveal strong heterogeneity across unconditional quantiles. The study suggests that there is a threshold value in non-labor income that is high enough to make all individuals stop working, and that this applies uniformly for all quantiles. It also shows that the threshold value is monotonic in the quantiles.
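
The following Python sketch is a heavily simplified stand-in for such a two-step estimator: the first step uses a plain normal linear model for the conditional treatment density instead of the paper's flexible Box–Cox specification, and the second step runs a weighted quantile regression of the outcome on the dose using stabilized generalized-propensity-score weights. All data and names are illustrative assumptions, not the paper's implementation.

```python
# Two-step sketch: (1) generalized propensity score, (2) weighted quantile
# regression of the outcome on the continuous treatment (dose).
import numpy as np
from scipy.stats import norm
from sklearn.linear_model import QuantileRegressor

rng = np.random.default_rng(1)
n = 3000
x = rng.normal(size=n)                       # observed confounder
d = 0.7 * x + rng.normal(size=n)             # continuous treatment (dose)
y = 1.0 + 0.5 * d + 0.8 * x + rng.normal(size=n)

# Step 1: conditional treatment density f(d | x) from a normal linear model.
beta = np.polyfit(x, d, 1)
resid = d - np.polyval(beta, x)
f_d_given_x = norm.pdf(d, loc=np.polyval(beta, x), scale=resid.std())
f_d = norm.pdf(d, loc=d.mean(), scale=d.std())
w = f_d / f_d_given_x                        # stabilized weights

# Step 2: weighted quantile regression of y on the dose.
for tau in (0.25, 0.5, 0.75):
    qr = QuantileRegressor(quantile=tau, alpha=0.0, solver="highs")
    qr.fit(d.reshape(-1, 1), y, sample_weight=w)
    print(f"tau={tau:.2f}  dose-response slope={qr.coef_[0]:.3f}")
```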


Abstract

A new hyperplanes intersection simulated annealing (HISA) algorithm, based on a discrete representation of the search space as a combinatorial set of hyperplane intersections, is developed for maximum score estimation of the binary choice model. As a prerequisite of the discrete-space simulated annealing algorithm, a multi-start Hyperplanes Intersection Local Search (HILS) algorithm is also devised. Both the local search and the simulated annealing algorithm search the space of combinations of hyperplane intersections formed by the regression’s observations. A set of attributes equivalent to the hyperplanes whose intersections define potential maxima is selected as the solution representation. A swap move is introduced so that, starting from an arbitrary set of attributes, nearby sets of attributes are generated and evaluated using either steepest ascent or the Metropolis criterion. Applications include a work-trip mode choice application, for which the global optimum is known, and two labor force participation datasets with unknown global optima. Comparison is made to leading heuristic and metaheuristic approaches as well as to Mixed Integer Programming. Results show that multi-start HILS and especially HISA offer the best results for the two labor force participation datasets, and also discover the global optimum in the work-trip mode choice application.
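
As a rough illustration of the objective being maximized (not of HISA or HILS themselves), the sketch below runs a generic simulated annealing over normalized coefficient directions for the maximum score criterion of a binary choice model. The swap move over hyperplane intersections that defines the paper's algorithms is not reproduced; data and settings are invented.

```python
# Generic simulated annealing for the maximum score objective of a binary
# choice model, searching over directions on the unit sphere.
import numpy as np

rng = np.random.default_rng(2)
n, k = 500, 3
X = rng.normal(size=(n, k))
beta_true = np.array([1.0, -0.5, 0.75])
y = (X @ beta_true + rng.logistic(size=n) > 0).astype(int)

def score(b):
    """Fraction of observations whose predicted sign of X @ b matches y."""
    return np.mean(y == (X @ b >= 0))

b = rng.normal(size=k); b /= np.linalg.norm(b)
best_b, best_s = b, score(b)
temp = 1.0
for it in range(20000):
    prop = b + 0.1 * rng.normal(size=k)
    prop /= np.linalg.norm(prop)              # keep the scale normalization
    delta = score(prop) - score(b)
    if delta >= 0 or rng.random() < np.exp(delta / temp):   # Metropolis rule
        b = prop
    if score(b) > best_s:
        best_b, best_s = b, score(b)
    temp *= 0.9995                            # geometric cooling schedule
print("best score:", best_s, "direction:", np.round(best_b, 3))
```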


Abstract

Penalized quantile regressions are proposed for the combination of Value-at-Risk forecasts. The primary reason for regularizing the quantile regression estimator with the elastic net, lasso and ridge penalties is multicollinearity among the standalone forecasts, which results in poor forecast performance of the non-regularized estimator due to unstable combination weights. This new approach is applied to combining the Value-at-Risk forecasts of a wide range of frequently used risk models for stocks comprising the Dow Jones Industrial Average Index. In a thorough comparative analysis, the penalized quantile regressions perform better in terms of backtesting and tick losses than the standalone models and several competing forecast combination approaches. This is particularly evident during the global financial crisis of 2007–2008.
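
A minimal sketch of the idea, assuming simulated data and an L1 (lasso) penalty only (the paper also studies ridge and elastic-net penalties): standalone VaR forecasts are combined by a penalized quantile regression whose fitted tau-quantile serves as the combined forecast.

```python
# Combining standalone VaR forecasts with a lasso-penalized quantile regression.
import numpy as np
from sklearn.linear_model import QuantileRegressor

rng = np.random.default_rng(3)
tau = 0.05                                  # VaR level
T, m = 1000, 8                              # sample size, number of risk models
returns = rng.standard_t(df=5, size=T)

# Simulated, highly collinear standalone VaR forecasts (hence the penalty).
base = np.quantile(returns, tau) + 0.1 * rng.normal(size=(T, 1))
vars_standalone = base + 0.05 * rng.normal(size=(T, m))

# Penalized quantile regression of realized returns on the standalone forecasts:
# the fitted tau-quantile is the combined VaR forecast.
qr = QuantileRegressor(quantile=tau, alpha=0.1, solver="highs")
qr.fit(vars_standalone, returns)
combined_var = qr.predict(vars_standalone)
print("combination weights:", np.round(qr.coef_, 3))
print("violation rate:", np.mean(returns < combined_var))
```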


Abstract

A robust regression analysis in the presence of missing covariates is considered. The signed-rank estimator of the regression coefficients is studied, where the missing covariates are imputed under the assumption that they are missing at random. The consistency and asymptotic normality of the proposed estimator are established under mild conditions. Monte Carlo simulation experiments are carried out. They demonstrate that the signed-rank estimator is more efficient than the least squares and least absolute deviations estimators whenever the error distribution is heavy-tailed or contaminated. Under a standard normal error distribution with a well-specified conditional distribution of the missing covariates, the least squares and signed-rank methods provide similar results, while the least absolute deviations method is inefficient. Finally, the use of the proposed methodology is illustrated using economic and political data on nine developing countries in Asia from 1980 to 1999.
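
For orientation, the sketch below computes a Wilcoxon-type signed-rank regression estimate on complete data by minimizing the signed-rank dispersion of the residuals; the paper's imputation of covariates that are missing at random is omitted. Data and names are illustrative.

```python
# Signed-rank regression: minimize the rank-weighted dispersion of residuals.
import numpy as np
from scipy.optimize import minimize
from scipy.stats import rankdata

rng = np.random.default_rng(4)
n = 400
X = np.column_stack([np.ones(n), rng.normal(size=(n, 2))])
beta_true = np.array([1.0, 2.0, -1.0])
y = X @ beta_true + rng.standard_t(df=3, size=n)    # heavy-tailed errors

def signed_rank_dispersion(beta):
    e = y - X @ beta
    r = rankdata(np.abs(e))                 # ranks of absolute residuals
    return np.sum(r / (len(e) + 1) * np.abs(e))

start = np.linalg.lstsq(X, y, rcond=None)[0]        # least squares start
res = minimize(signed_rank_dispersion, start, method="Nelder-Mead")
print("signed-rank estimate:", np.round(res.x, 3))
```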


Abstract

An improved Bayesian smoothing spline (BSS) model is developed to estimate the term structure of Chinese Treasury yield curves. The developed BSS model has a flexible functional form which can accommodate various yield curve shapes. As a nonparametric method, and unlike Jarrow–Ruppert–Yu’s penalized splines, the BSS model does not need to choose the number and locations of knots. Instead, the smoothing parameter is obtained as a by-product and does not need to be estimated. Furthermore, a dimension reduction procedure is developed to calculate an inverse matrix when implementing this BSS model. Finally, simulation results and an application illustrate that the BSS model outperforms traditional parametric models and the penalized spline model.
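
As a purely classical point of comparison (not the Bayesian BSS model itself), the snippet below fits a smoothing spline to a made-up yield curve, with the smoothing factor set by hand rather than obtained as a by-product.

```python
# Classical smoothing-spline fit of a yield curve (yield vs. maturity).
import numpy as np
from scipy.interpolate import UnivariateSpline

maturity = np.array([0.25, 0.5, 1, 2, 3, 5, 7, 10, 15, 20, 30])     # years
yield_pct = np.array([1.8, 1.9, 2.0, 2.2, 2.35, 2.6, 2.75, 2.9, 3.05, 3.1, 3.15])

spline = UnivariateSpline(maturity, yield_pct, k=3, s=0.01)          # s: smoothing
grid = np.linspace(0.25, 30, 200)
fitted = spline(grid)                                                # smoothed curve
print("fitted 4-year yield:", round(float(spline(4.0)), 3), "%")
```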


Abstract

Heterogeneous effects are prevalent in many economic settings. As the functional form between outcomes and regressors is generally unknown a priori, a semiparametric negative binomial count data model is proposed which is based on the local likelihood approach and generalized product kernels. The local likelihood framework allows the functional form of the conditional mean to be left unspecified, while still exploiting basic assumptions of count data models (i.e., non-negativity). Since generalized product kernels allow discrete and continuous regressors to be modelled simultaneously, the curse of dimensionality is substantially reduced. Hence, the applicability of the proposed estimator is increased, for instance in the estimation of health service demand, where data are frequently mixed. An application of the semiparametric estimator to simulated data and real data from the Oregon Health Insurance Experiment provides results on its performance in terms of prediction and estimation of incremental effects.
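
A rough sketch of the local likelihood idea, under simplifying assumptions: at each evaluation point a negative binomial GLM with a fixed dispersion parameter is fitted with Gaussian kernel weights, giving a local estimate of the conditional mean. The paper's generalized product kernels for mixed discrete/continuous regressors are not implemented; data and names are invented.

```python
# Local-likelihood negative binomial fit via kernel-weighted GLMs.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(5)
n_obs = 1000
x = rng.uniform(-2, 2, size=n_obs)                  # continuous regressor
mu = np.exp(0.3 + np.sin(x))                        # nonlinear conditional mean
alpha = 1.0                                         # fixed NB2 overdispersion
y = rng.negative_binomial(1.0 / alpha, 1.0 / (1.0 + alpha * mu))

def local_fit(x0, bandwidth=0.4):
    w = np.exp(-0.5 * ((x - x0) / bandwidth) ** 2)  # Gaussian kernel weights
    X = sm.add_constant(x - x0)                     # local-linear log-mean
    fam = sm.families.NegativeBinomial(alpha=alpha)
    fit = sm.GLM(y, X, family=fam, var_weights=w).fit()
    return np.exp(fit.params[0])                    # local estimate of E[y | x0]

for x0 in (-1.0, 0.0, 1.0):
    print(f"x0={x0:+.1f}  E[y|x0] ~ {local_fit(x0):.3f}  (true {np.exp(0.3 + np.sin(x0)):.3f})")
```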


Abstract

Classical growth convergence regressions fail to account for various sources of heterogeneity and nonlinearity. Recent contributions advocating nonlinear dynamic factor models remedy these problems by identifying group-specific convergence paths. As with statistical clustering methods, those results are sensitive to choices made in the clustering/grouping mechanism. Classical models also do not allow for a time-varying influence of initial endowment on growth. A novel application of a nonparametric regression framework to time-varying, grouped heterogeneity and nonlinearity in growth convergence is proposed. The approach rests upon group-specific transition paths derived from a nonlinear dynamic factor model. Its fully nonparametric nature avoids problems of neglected nonlinearity while alleviating the problem of underspecification of growth convergence regressions. The proposed procedure is backed by an economic rationale for the leapfrogging and falling-back of countries due to the time-varying heterogeneity of the number, size, and composition of convergence groups. The approach is illustrated using a current Penn World Table dataset. An important aspect of the illustration is empirical evidence for the leapfrogging and falling-back of countries, as nonlinearities and heterogeneity in convergence regressions vary over time.
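
A minimal local-linear kernel regression of average growth on log initial income, the nonparametric building block of such a convergence regression, is sketched below on simulated data. The group-specific transition paths from the nonlinear dynamic factor model and the time variation central to the paper are not reproduced.

```python
# Local-linear kernel regression of growth on log initial income.
import numpy as np
from statsmodels.nonparametric.kernel_regression import KernelReg

rng = np.random.default_rng(6)
n = 150
log_y0 = rng.uniform(6, 11, size=n)                  # log initial income
growth = 5.0 - 0.3 * log_y0 + 0.02 * (log_y0 - 8.5) ** 3 + rng.normal(0, 0.4, n)

# 'c' = continuous regressor, 'll' = local-linear fit, bandwidth by cross-validation.
kr = KernelReg(endog=growth, exog=log_y0, var_type="c", reg_type="ll")
grid = np.linspace(6, 11, 50)
mean, _ = kr.fit(grid)
print("estimated growth near log(y0)=7 and log(y0)=10:",
      round(float(mean[10]), 2), round(float(mean[40]), 2))
```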


Part A: Econometrics — Special issue on Risk management

Abstract

The notions of a zenpath and a zenplot are introduced to search for and detect dependence in high-dimensional data for model building and statistical inference. Using any measure of dependence between two random variables (such as correlation, Spearman’s rho, Kendall’s tau, or tail dependence), a zenpath constructs a path through pairs of variables in different ways, which can then be laid out and displayed by a zenplot. The approach is illustrated by investigating tail dependence and model fit in constituent data of the S&P 500 during the financial crisis of 2007–2008. The corresponding Global Industry Classification Standard (GICS) sector information is also addressed.

Zenpaths and zenplots are useful tools for exploring dependence in high-dimensional data, for example, from the realm of finance, insurance and quantitative risk management. All presented algorithms are implemented using the R package zenplots and all examples and graphics in the paper can be reproduced using the accompanying demo SP500.
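
The reference implementation is the R package zenplots. Purely to convey the idea in Python, the sketch below builds a simplified, greedy zenpath-style ordering in which consecutive variables along the path have high pairwise dependence (absolute Kendall's tau); it is not the package's algorithm.

```python
# Greedy zenpath-style ordering of variables by pairwise Kendall's tau.
import numpy as np
from scipy.stats import kendalltau

rng = np.random.default_rng(7)
n, d = 500, 6
z = rng.normal(size=(n, d))
data = z + 0.8 * z[:, [0]]            # induce some cross-dependence

tau = np.zeros((d, d))
for i in range(d):
    for j in range(i + 1, d):
        tau[i, j] = tau[j, i] = abs(kendalltau(data[:, i], data[:, j])[0])

# Start at the overall strongest pair, then keep appending the unused variable
# most dependent on the current end of the path.
i, j = np.unravel_index(np.argmax(tau), tau.shape)
path, used = [i, j], {i, j}
while len(used) < d:
    end = path[-1]
    nxt = max(set(range(d)) - used, key=lambda k: tau[end, k])
    path.append(nxt); used.add(nxt)
print("zenpath-style variable order:", path)
```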


Abstract

A saddlepoint approximation for evaluating the expected shortfall of financial returns under realistic distributional assumptions is derived. This addresses a need that has arisen after the Basel Committee’s proposed move from Value at Risk to expected shortfall as the mandated risk measure in its market risk framework. Unlike earlier results, the approximation does not require the existence of a moment generating function, and is therefore applicable to the heavy-tailed distributions prevalent in finance. A link is established between the proposed approximation and mean-expected shortfall portfolio optimization. Numerical examples include the noncentral t, generalized error, and α-stable distributions. A portfolio of DJIA stocks is considered in an empirical application.
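
The saddlepoint approximation itself is not reproduced here. As a simple reference point, the snippet below computes the expected shortfall of a Student t distribution in two standard ways, via the closed-form tail-expectation formula and via numerical integration, which is the kind of benchmark such an approximation would be compared against.

```python
# Expected shortfall (ES) of a Student t distribution: closed form vs. integration.
import numpy as np
from scipy import stats
from scipy.integrate import quad

alpha, nu = 0.01, 5.0                        # ES level and degrees of freedom
q = stats.t.ppf(alpha, df=nu)                # alpha-quantile (left tail)

# Closed form: ES = f(q)/alpha * (nu + q^2)/(nu - 1), reported as a positive loss.
es_closed = stats.t.pdf(q, df=nu) / alpha * (nu + q**2) / (nu - 1)

# Numerical check: ES = -(1/alpha) * integral_{-inf}^{q} x f(x) dx.
es_numeric = -quad(lambda x: x * stats.t.pdf(x, df=nu), -np.inf, q)[0] / alpha

print(f"ES_{alpha:.0%} (closed form): {es_closed:.4f}")
print(f"ES_{alpha:.0%} (numerical):   {es_numeric:.4f}")
```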


Abstract

A simple method is proposed to estimate stochastic volatility models with Markov switching. It relies on a nested structure of filters (a Hamilton filter and several particle filters) to approximate unobserved regimes and state variables, respectively. Smooth resampling is used to keep the computational complexity constant over time and to implement standard likelihood-based inference on the parameters. A bootstrap and an adapted version of the filter are described, and their performance is assessed using simulation experiments. The volatility of the US and French markets is characterized over the last decade using a three-regime stochastic volatility model extended to include a leverage effect.
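
To fix ideas, here is a minimal bootstrap particle filter for a single-regime stochastic volatility model with known parameters. The paper's nesting of a Hamilton filter over Markov-switching regimes around several particle filters, and its smooth resampling, are not shown; all values are illustrative.

```python
# Bootstrap particle filter for a basic stochastic volatility model:
# y_t = exp(h_t / 2) * eps_t,  h_t = mu + phi * (h_{t-1} - mu) + sig * eta_t.
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(8)
T, N = 500, 2000                              # time steps, particles
mu, phi, sig = -1.0, 0.95, 0.2                # (assumed known) SV parameters

# Simulate data from the model.
h = np.empty(T); h[0] = mu
for t in range(1, T):
    h[t] = mu + phi * (h[t - 1] - mu) + sig * rng.normal()
y = np.exp(h / 2) * rng.normal(size=T)

# Filter: propagate, weight by the measurement density, resample,
# and accumulate the log-likelihood.
particles = rng.normal(mu, sig / np.sqrt(1 - phi**2), size=N)
loglik, h_filt = 0.0, np.empty(T)
for t in range(T):
    particles = mu + phi * (particles - mu) + sig * rng.normal(size=N)
    w = norm.pdf(y[t], scale=np.exp(particles / 2))
    loglik += np.log(w.mean())
    w /= w.sum()
    h_filt[t] = np.sum(w * particles)               # filtered E[h_t | y_1..t]
    particles = rng.choice(particles, size=N, p=w)  # multinomial resampling
print("log-likelihood:", round(loglik, 2))
```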


Abstract

The lapse risk arising from the termination of policies, due to a variety of causes, has a significant influence on the prices of contracts, the liquidity of an insurer, and the reserves necessary to meet regulatory capital requirements. The aim is to address in an integrated manner the problem of pricing and determining the capital requirements for a guaranteed annuity option when lapse risk is embedded in the modelling framework. In particular, two decrements are considered, in which death and policy lapse occurrences, together with their correlations to the financial risk, are explicitly modelled. A series of probability measure changes is employed and the corresponding forward, survival, and risk-endowment measures are constructed. This approach circumvents the rather slow “simulation-within-simulation” pricing procedure under a stochastic setting. Implementation results illustrate that the proposed approach cuts the Monte Carlo simulation technique’s average computing time by 99%. Risk measures are computed using the moment-based density method and benchmarked against the Monte Carlo-based numerical findings. Depending on the risk metric used (e.g., VaR, CVaR, various forms of distortion risk measures) and the correlation between the interest and lapse rates, the capital requirement may change substantially, either increasing or decreasing by up to 50%.
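
Only the final risk-measure step is illustrated below: empirical VaR and CVaR at a 99% level computed from a Monte Carlo loss sample. The guaranteed annuity option pricing, the measure changes, and the moment-based density method of the paper are not reproduced, and the lognormal loss sample is purely a stand-in.

```python
# Empirical VaR and CVaR from a simulated loss sample.
import numpy as np

rng = np.random.default_rng(9)
losses = rng.lognormal(mean=0.0, sigma=0.6, size=100_000)   # stand-in loss sample
level = 0.99

var_99 = np.quantile(losses, level)                 # Value-at-Risk
cvar_99 = losses[losses >= var_99].mean()           # CVaR / expected shortfall
print(f"VaR_{level:.0%}  = {var_99:.3f}")
print(f"CVaR_{level:.0%} = {cvar_99:.3f}")
```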
