What is it about?
A distributed lag model (DLM) is a regression model for time series data in which current values of a dependent variable are predicted from both the current value of an explanatory variable and its past (lagged) values. More precisely, a DLM is a dynamic model in which the effect of a regressor on the regressand is spread over time rather than occurring all at once. Non-constant variance of the error term (heteroscedasticity) in such a model leads to inefficient estimation of its unknown coefficients. This paper addresses that issue and proposes a more efficient estimation method.
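To make the setup concrete, the sketch below fits a simple finite distributed lag model with two lags and illustrates one common response to heteroscedasticity, feasible weighted least squares. The lag length, the assumed variance function, and the use of Python's statsmodels are illustrative choices only, not the estimation method proposed in the paper.

```python
# Minimal sketch of a finite distributed lag model
#   y_t = a + b0*x_t + b1*x_{t-1} + b2*x_{t-2} + e_t
# fitted by OLS and then by feasible weighted least squares (FWLS) when the
# error variance is not constant. All specifics here are illustrative assumptions.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
n, lags = 300, 2

x = rng.normal(size=n + lags)
# Heteroscedastic errors: variance grows with |x_t| (assumed form, for illustration).
e = rng.normal(size=n) * (0.5 + np.abs(x[lags:]))
beta = [1.0, 0.8, 0.5, 0.3]                        # intercept and lag coefficients
X = np.column_stack([x[lags:], x[1:-1], x[:-2]])   # x_t, x_{t-1}, x_{t-2}
y = beta[0] + X @ beta[1:] + e

X_const = sm.add_constant(X)
ols = sm.OLS(y, X_const).fit()   # unbiased but inefficient under heteroscedasticity

# Feasible WLS: regress log squared OLS residuals on the regressors to
# estimate the variance function, then weight observations by its inverse.
aux = sm.OLS(np.log(ols.resid ** 2), X_const).fit()
weights = 1.0 / np.exp(aux.fittedvalues)
fwls = sm.WLS(y, X_const, weights=weights).fit()

print("OLS estimates: ", ols.params)
print("FWLS estimates:", fwls.params)
```

With heteroscedastic errors of this kind, the weighted fit typically yields smaller standard errors than plain OLS, which is the sense in which "more efficient" estimation is meant here.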
Read the Original
This page is a summary of: Efficient estimation of distributed lag model in presence of heteroscedasticity of unknown form: A Monte Carlo evidence, Cogent Mathematics & Statistics, October 2018, Taylor & Francis,
DOI: 10.1080/25742558.2018.1538596.