Greene: Panel Data Lecture Notes, Part 2 (PPT, 41 pages)

Econometric Analysis of Panel Data
William Greene, Department of Economics, Stern School of Business
2. Econometric Methods

A Statistical Relationship
- A relationship of interest: number of hospital visits, H = 0, 1, 2, ...
- Covariates: x1 = Age, x2 = Sex, x3 = Income, x4 = Health

Causality and Covariation
- Theoretical implications of causation
- Comovement and association
- Intervention of omitted or latent variables

Models
- Conditional mean function: E[y | x]
- Other conditional characteristics: what is "the model"?
- Conditional variance function: Var[y | x]
- Conditional quantiles, e.g., median[y | x]

- Other conditional moments
- Conditional probabilities: P(y | x)
- What is the sense in which "y varies with x"?

Using the Model
- Understanding the relationship: estimation of quantities of interest such as elasticities
- Prediction of the outcome of interest
- Control of the path of the outcome of interest

Representing Covariation
- Conditional mean function: E[y | x] = g(x)
- Linear approximation to the conditional mean function: linear Taylor series
- The linear projection (linear regression?)

Projection and Regression
- The linear projection is not the regression, and is not the Taylor series.
- Example: derivation and demonstration in Problem Set 1. For the example, the two parameters are set to 1 and 2.
- [Figure: linear projection, conditional mean, and Taylor series plotted together]

What About the Linear Projection?
- What we do when we linearly regress a variable on a set of variables
- Assuming there exists a conditional mean
- There usually exists a linear projection; it requires finite variance of y.
- Approximation to the conditional mean
- If the conditional mean is linear:
  - the Taylor series equals the conditional mean;
  - the linear projection equals the conditional mean.

Application: Doctor Visits
- German individual health care data: N = 27,326
- Model for number of visits to the doctor: Poisson regression (fit by maximum likelihood)
  E[V | Income] = exp(1.412 - 0.0745*Income)
- Linear regression: g*(Income) = 3.917 - 0.208*Income

Conditional Mean and Projection
- Notice the problem with the linear projection: negative predictions.
- [Figure: the projection turns negative in a region outside the range of the data; most of the data lie where the two functions are close]

Partial Effects
- What did the model tell us?
- Covariation and partial effects: how does y "vary" with x?
- Marginal effects: effect on what?
  - For continuous variables: δ(x) = ∂E[y | x]/∂x
  - For dummy variables: the difference in E[y | x] at the two values of the dummy
- Elasticities: η(x) = δ(x) · x / E[y | x]

Average Partial Effects
- When δ(x) = ∂E[y | x]/∂x, the APE is E_x[δ(x)].
- Approximation: is δ(E[x]) = E_x[δ(x)]? (No.)
- Empirically: estimated APE = (1/n) Σ_i δ(x_i); the empirical approximation evaluates δ at the sample means.
- For the doctor visits model: δ(x) = β·exp(α + β·Income) = -0.0745·exp(1.412 - 0.0745·Income)
  - Sample APE = -0.2373
  - Approximation = -0.2354
  - Slope of the linear projection = -0.2083 (!)

APE and PE at the Mean
- Implication: computing the APE by averaging over observations (and counting on the LLN and the Slutsky theorem) vs. computing partial effects at the means of the data.
- In the earlier example: sample APE = -0.2373, approximation = -0.2354.

Marginal Effects at the Mean vs. Average Partial Effects
- y = 1[DocVis > 0]. E[y | x] = Λ(β′x) = 1/(1 + exp(-β′x)) for a binary logit model.
- N = 27,326 observations for the German health care data.

The Linear Model
- y = Xβ + ε, with N observations and K columns in X, including a column of ones.
- Standard assumptions about X
- Standard assumptions about ε | X: E[ε | X] = 0, E[ε] = 0, and Cov[ε, x] = 0
- Regression? If E[y | X] = Xβ. As an approximation, this is an LP, not a Taylor series.
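The APE / partial-effect-at-the-mean / projection-slope comparison above is easy to replicate numerically. A minimal sketch, assuming a made-up gamma income distribution (so the three numbers will not match the slide's -.2373 / -.2354 / -.2083, but the qualitative pattern survives):

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical income data; the German health-care incomes are not reproduced here.
income = rng.gamma(shape=2.0, scale=1.75, size=10_000)

# Conditional mean from the slide's Poisson fit: E[V | Income] = exp(1.412 - 0.0745*Income)
a, b = 1.412, -0.0745

def cond_mean(x):
    return np.exp(a + b * x)

# Partial effect for a continuous regressor: delta(x) = dE[y|x]/dx = b * exp(a + b*x)
delta = b * cond_mean(income)

ape = delta.mean()                         # average partial effect: E_x[delta(x)]
pe_at_mean = b * cond_mean(income.mean())  # partial effect at the mean: delta(E[x])

# Slope of the linear projection of y on (1, income): Cov[x, y] / Var[x]
y = cond_mean(income)
lp_slope = np.cov(income, y, bias=True)[0, 1] / income.var()

print(ape, pe_at_mean, lp_slope)  # generally three different numbers
```

Because exp is convex and b < 0, Jensen's inequality makes the APE strictly more negative than the partial effect at the mean, which is exactly the slide's -.2373 vs. -.2354 ordering.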

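The endogeneity section that follows derives the regression slope for the simultaneous pair y = βx + ε, x = δy + u. A short simulation (all parameter values below are arbitrary choices) checks that least squares recovers the projection coefficient Cov[x, y]/Var[x] rather than the structural β:

```python
import numpy as np

rng = np.random.default_rng(42)
n = 200_000

beta, delta = 0.5, 0.5     # arbitrary structural parameters
s_eps, s_u = 1.0, 1.0      # std devs of the two structural disturbances

eps = rng.normal(0.0, s_eps, n)
u = rng.normal(0.0, s_u, n)

# Reduced form: solve y = beta*x + eps and x = delta*y + u simultaneously.
d = 1.0 - beta * delta
y = (beta * u + eps) / d
x = (u + delta * eps) / d

# OLS slope of y on x (no constant needed; both variables have mean zero)
b_ols = (x @ y) / (x @ x)

# Population projection coefficient: Cov[x, y] / Var[x]
theta = (beta * s_u**2 + delta * s_eps**2) / (s_u**2 + delta**2 * s_eps**2)

print(b_ols, theta, beta)  # b_ols is close to theta, not to beta
```

With these numbers theta = 0.8 while beta = 0.5: the OLS estimate converges to the regression, not the structure.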
Endogeneity
- Definition: E[ε | x] ≠ 0
- Why not?
  - Omitted variables
  - Unobserved heterogeneity (equivalent to omitted variables)
  - Measurement error on the RHS (equivalent to omitted variables)
  - Simultaneity (?)

Structure and Regression
- Simultaneity? What if E[ε | x] ≠ 0: y = βx + ε, x = δy + u. Then Cov[x, ε] ≠ 0, so βx is not the regression. What is the regression?
- Reduced form (assume ε and u are uncorrelated):
  y = [β/(1 - βδ)]·u + [1/(1 - βδ)]·ε
  x = [1/(1 - βδ)]·u + [δ/(1 - βδ)]·ε
  Cov[x, y]/Var[x] = (β·σ_u² + δ·σ_ε²) / (σ_u² + δ²·σ_ε²)
- The regression is y = θx + v, where θ = Cov[x, y]/Var[x] and E[v | x] = 0.

Structure vs. Regression
- Supply = a + b·Price + c·Capacity
- Demand = A + B·Price + C·Income

Implications
- The structure is the theory.
- The regression is the conditional mean.
- There is always a conditional mean:
  - it may not equal the structure;
  - it may be linear in the same variables.
- What is the implication for least squares estimation?
  - LS estimates regressions.
  - LS does not necessarily estimate structures.
  - Structures may not be estimable; they may not be identified.

Estimation of the Parameters
- Least squares, LAD, other estimators; we will focus on least squares
- Classical vs. Bayesian estimation of β
- Properties
- Statistical inference: hypothesis tests
- Prediction (not this course)

Properties of Least Squares
- Finite sample properties: unbiased, etc. No longer interested in these.
- Asymptotic properties:
  - Consistent? Under what assumptions?
  - Efficient? In contemporary work, often not important.
  - Efficiency within a class: GMM
- Asymptotically normal: how is this established?
- Robust estimation: to be considered later

Least Squares Summary
[Formula slide]

Hypothesis Testing
- Nested vs. nonnested tests
- Parametric restrictions:
  - Linear: Rβ - q = 0, where R is J×K, J ≤ K, full row rank
  - General: r(β, q) = 0, where r is a vector of J functions and R(β, q) = ∂r(β, q)/∂β′. Full row rank implies functional independence.
- Use r(β, q) = 0 for both the linear and nonlinear cases.

Wald Tests
- Is r(b, q) close to zero?
- Wald distance function: W = r(b, q)′ {Var[r(b, q)]}⁻¹ r(b, q) → χ²[J]
- Use the delta method to estimate Var[r(b, q)]:
  Est.Asy.Var[b] = s²(X′X)⁻¹
  Est.Asy.Var[r(b, q)] = R(b, q) [s²(X′X)⁻¹] R(b, q)′

Likelihood Ratio Test
- Why the normality assumption? Does it work approximately?
- For any regression model y_i = h(x_i, β) + ε_i with ε_i ~ N[0, σ²] (linear or nonlinear), at the linear (or nonlinear) least squares estimator, however computed, with or without restrictions, the maximized log likelihood is logL = -(n/2)[1 + log 2π + log(e′e/n)].
- This forms the basis for likelihood ratio tests.

Score or LM Test: General
- Maximum likelihood (ML) estimation
- A hypothesis test. H0: the restrictions on the parameters are true; H1: the restrictions are not true.
- Basis for the test: b0 = parameter estimate under H0 (i.e., restricted), b1 = unrestricted.
- Derivative results for the likelihood function under H1:
  ∂logL1/∂β evaluated at β = b1 is 0 (exactly, by definition);
  ∂logL1/∂β evaluated at β = b0 is ≠ 0. Is it close to zero? If so, the restrictions look reasonable.

LM Test / LM Test (cont.)
[Formula slides]

Application of the Score Test
- Linear model: y = Xβ + Zδ + ε
- Test H0: δ = 0
- The restricted estimator is (b, 0).

Computing the LM Statistic
- The derivation on page 60 of Wooldridge's text is needlessly complex, and the second form of LM is actually incorrect because the first derivatives are not heteroscedasticity robust.

Example: Panel Data on Spanish Dairy Farms
- N = 247 farms, T = 6 years (1993-1998)

Application
- Spanish dairy farm data
- y = log output
- X = Cobb-Douglas production: 1, x1, x2, x3, x4 (a constant and the logs of the 4 inputs)
- Z = translog terms: x1², x2², etc., and all cross products x1·x2, x1·x3, x1·x4, x2·x3, etc.
- The null hypothesis is Cobb-Douglas; the alternative is the translog, i.e., Cobb-Douglas plus the second order terms.

Computing an LM Statistic

  Namelist ; X = a list ; Z = a list ; W = X,Z $
  Regress  ; Lhs = y ; Rhs = X ; Res = e $
  Create   ; e2 = e*e $
  Matrix   ; LM = e'W * <W'[e2]W> * W'e $

(Though the LM statistic can be written as a regression of a column of ones on the first derivatives of logL1 evaluated at b0, you usually don't want actually to compute it this way. This is just a unifying (and interesting) theoretical result. The practical application is much easier.)

Lagrange Multiplier Test for Omitted Variables

  ? Cobb-Douglas model
  Namelist ; X = One,x1,x2,x3,x4 $
  ? Translog second order terms: squares and cross products of logs
  Namelist ; Z = x11,x22,x33,x44,x12,x13,x14,x23,x24,x34 $
  ? Restricted regression. Short: has only the log terms
  Regress  ; Lhs = yit ; Rhs = X ; Res = e $
  Calc     ; LoglR = LogL ; RsqR = Rsqrd $
  Create   ; e2 = e*e $
  Namelist ; W = X,Z $
  ? LM statistic using basic matrix algebra
  Matrix   ; List ; LM = e'W * <W'[e2]W> * W'e $
  ? LR statistic uses the full, long regression with all quadratic terms
  Regress  ; Lhs = yit ; Rhs = W $
  Calc     ; LoglU = LogL ; RsqU = Rsqrd $
  Calc     ; List ; LR = 2*(LoglU - LoglR) $
  ? Wald statistic is just J*F for the translog terms
  Calc     ; List ; JF = col(Z) * ((RsqU-RsqR)/col(Z)) / ((1-RsqU)/(n-kreg)) $

Restricted Regression
[Output slide]

Derivatives for the LM Test
- Derivatives of the unrestricted (long) model, evaluated at the restricted (short model) coefficients. Are these "close" to zero?

   1    0.0000000
   2    0.0000000
   3    0.0000000
   4    0.0000000
   5    0.0000000
   6  110.38257
   7  -54.32128
   8   25.48239
   9  226.05471
  10   37.93753
  11  177.16378
  12  258.29646
  13   87.61102
  14   42.35517
  15  205.07842

Unrestricted Regression
[Output slide]

Model Selection
- Regression models: fit measure = R²
- Nested models: log likelihood, GMM criterion function (distance function)
- Nonnested models, nonlinear models:
  - Classical: Akaike information criterion = (logL - 2K)/N
  - Bayes (Schwarz) information criterion = (logL - K·logN)/N
  - Bayesian: Bayes factor = posterior odds / prior odds (for noninformative priors, BF = ratio of posteriors)

Remaining to Consider for the Linear Regression Model
- Failures of the standard assumptions: heteroscedasticity, autocorrelation
- Robust estimation
- Omitted variables
- Measurement error

How Do Panel Data Fit into This?
- We can use the usual models.
- We can use far more elaborate models.
- We can study effects through time.
- Observations are surely correlated:
  - the same individual is observed more than once;
  - unobserved heterogeneity that appears in the disturbance in a cross section remains persistent across observations (on the same unit).
- Procedures must be adjusted.
- Dynamic effects are likely to be present.
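For readers without NLOGIT, the LM/LR/Wald recipe in the omitted-variables script above can be transcribed into matrix algebra directly. This sketch uses simulated data in place of the Spanish dairy panel, so the variable names and statistic values are illustrative only:

```python
import numpy as np

rng = np.random.default_rng(7)
n, kx, kz = 500, 3, 2

# Simulated stand-in for the dairy-farm example: X are the maintained
# regressors (with a constant), Z the candidate omitted variables.
X = np.column_stack([np.ones(n), rng.normal(size=(n, kx - 1))])
Z = rng.normal(size=(n, kz))
y = X @ np.array([1.0, 0.5, -0.3]) + 0.4 * Z[:, 0] + rng.normal(size=n)

def ols_resid(A, b):
    coef, *_ = np.linalg.lstsq(A, b, rcond=None)
    return b - A @ coef

W = np.column_stack([X, Z])
e = ols_resid(X, y)   # restricted (short) regression: y on X only
v = ols_resid(W, y)   # unrestricted (long) regression: y on W = [X, Z]

# Heteroscedasticity-robust LM: e'W [W' diag(e^2) W]^{-1} W'e, asy. chi2(kz)
g = W.T @ e
LM = g @ np.linalg.solve(W.T @ (W * (e**2)[:, None]), g)

# LR from the concentrated normal log likelihood: LR = n * log(e'e / v'v)
ssr_r, ssr_u = e @ e, v @ v
LR = n * np.log(ssr_r / ssr_u)

# Wald = J * F, with F in its R-squared / sum-of-squares form
F = ((ssr_r - ssr_u) / kz) / (ssr_u / (n - W.shape[1]))
Wald = kz * F

print(LM, LR, Wald)  # three asymptotically equivalent chi2(kz) statistics
```

Because the simulated Z[:, 0] genuinely belongs in the model, all three statistics come out far above the chi-squared(2) critical value, mirroring the strong rejection of Cobb-Douglas in the slides.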
