% Specification File for Planner.m
% --------------------------------
%
% This file must be updated before running "Planner.m". (See "Planner.doc"
% for documentation and instructions on how to run the program. If you
% are only interested in estimation of this model, see McOfn.m first.)
% The problem being solved by Planner.m is:
%
% max E sum{ beta^t u(c1(t),c2(t),..,cN(t),v2(t),param2) }
% s.t.
% c1(t) + i1(t)*IC(:,1) <= f1(v11(t),param11)
% : : :
% : : :
% cN(t) + iN(t)*IC(:,N) <= fN(v1N(t),param1N)
% X(t+1) = A*X(t)+B*u(t)+g*epsilon(t), epsilon=N(0,Sigma)
% param=[param1; param2],
% param1=[param11; param12; ... param1N]
%
% Optional:
% 1. Hidden state case:
% y(t)=C*X(t)+w1(t) is a px1 vector of observables,
% excluding variables seen without noise (i.e. y(t,i)=X(t,j) or
% y(t,i)=u(t,j) is NOT included in this system of equations).
% hidden=a vector sequentially listing ELEMENTS of X that are not
% seen noiselessly. Confused? If so, see EXAMPLES below.
% [epsilon w1] =N(0,[Sigma V3; V3' V2])
% 2. Multiple decision case:
% u(t)=G*u1(t)+H*u2(t), where u1 (u2) are chosen first (second).
% y1(t)=C1*X(t)+w1(t) is observed before u1 chosen
% y2(t)=C2*X(t)+w2(t) is observed before u2 chosen
% These systems of equations exclude variables seen without noise
% (unless a variable is seen with noise when choosing u1 and then
% without when choosing u2). Thus, the case: y1(t,i)=X(t,j) AND
% y2(t,i)=X(t,j) is NOT included.
% hidden=a vector sequentially listing ELEMENTS of X that are not
% seen noiselessly (at all stages). Confused? If so, see EXAMPLES
% below.
%          [epsilon w1]=N(0,[Sigma V31; V31' V21])
%          [epsilon w2]=N(0,[Sigma V32; V32' V22])
%
% where beta is the discount factor, c(t)=[c1(t)..cN(t)]' is an Nx1 vector
% of commodity consumption, i(t)=[i1(t)..iN(t)]' is an N x (maxinv) matrix,
% (where maxinv= max no. of investments across the N technologies), IC is
% a (maxinv) x N coefficient matrix for i(t), v1j, j=1,..N, and v2 are
% variables appearing in the production (fj) and utility (u) functions,
% respectively, param1j and param2 are parameter vectors for fj(.) and u(.),
% j=1,..N. X(t) is a (ns x 1)-vector of state variables at time t, u(t)
% is a (nc x 1)-vector of controls at time t, and epsilon(t) is a (ns x 1)-
% vector of shocks at t that are normally distributed with mean 0 and
% variance Sigma. Thus, matrices A, g, Sigma have dimension: (ns x ns),
% and B has dimension (ns x nc). Without loss of generality,
% assume X(1)=1, which allows for a constant term. (This can be assumed
% whether or not your model has constant terms. It is necessary if you opt
% for certain solution procedures.) *** Important note: When
% specifying parameters, etc. below, set all vectors to be COLUMN VECTORS.
% For example, param1j, j=1,..N, param2 and param should be column vectors.
% Otherwise, I cannot guarantee that your results will be correct. ***
% (Note: a possible notational problem to avoid is confusion over the
% utility FUNCTION u(.) and controls VECTOR u(t)).
%
% Also, define x to be the (nx x 1)-vector of variables appearing in
% the utility function: u(f1(v11,param11)-i1*IC(:,1),...,fN(v1N,param1N)-iN*
% IC(:,N),v2,param2):
%
% x(t)=[v11(t), v12(t),...,v1N(t), i1(t), i2(t),...,iN(t), v2(t)]'.
%
% If any elements of v2(t) are redundant (i.e. appear in v11 or v12 ..or v1N
% or i1.. or iN), then drop the occurrence of the variables in question from
% v2 and rename v2 as v2~. Any redundancies in v1j or ij should be left in.
% (Example: N=1, maxinv=1, IC=1, v11(t)=[lam(t), k(t), y(t), n(t)], i1(t)=
% i(t), v2(t)=[i(t) a(t)]. Then, x(t)=[lam(t), k(t), y(t), n(t) i(t) a(t)]'.)
% Note that we can also write x in terms of states and controls as:
%
% x(t)=D*X(t)+E*u(t). D: (nx x ns), E: (nx x nc)
%
% This will be important for mapping a nonlinear problem into a linear-
% quadratic one.
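%
% For instance (a hypothetical mapping, using the one-technology example
% above with x(t)=[lam(t) k(t) y(t) n(t) i(t) a(t)]'), if the states and
% controls happened to be X(t)=[1 lam(t) k(t)]' and u(t)=[y(t) n(t) i(t)
% a(t)]', then D and E would simply pick off elements:
%
%      D=zeros(6,3); E=zeros(6,4);
%      D(1,2)=1;            % x(1)=lam(t), the 2nd element of X
%      D(2,3)=1;            % x(2)=k(t),   the 3rd element of X
%      E(3:6,1:4)=eye(4);   % x(3:6)=[y(t) n(t) i(t) a(t)]'=u(t)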
%
% EXAMPLES:
%
% 1. We can use the Hansen-Sargent, "Straight-time and Overtime" model to
% illustrate the more confusing aspects of the 'standard' and 'hidden
% state' cases discussed above. For the standard case, we would set:
%
% X(t)=[1 z(t) k(t) n1(t-1)]'; (z here is N(0,.0075^2))
% u(t)=[n1(t) n2(t) i(t)]'; (i=investment)
% x(t)=[(1+z(t)) k(t) n1(t) n1(t-1) n2(t) i(t)]';
%
%     Suppose capital is not observed directly by the agents. Instead,
% they see: y(t)=k(t)+w1(t), where w1, epsilon are uncorrelated. Then,
% we would set C=[0 0 1 0], V2=E[w1^2], V3=zeros(4,1), hidden=[3].
% See: hsdesc.m, hsp.m for more details.
%
% 2. We can use the Kydland-Prescott, "Time to Build" model to illustrate
% the confusing aspects of the 'multiple-decision' case. For this model,
% we set:
%
% X(t)=[1 lam1(t) lam2(t) k(t) y(t) a(t) s1(t) s2(t) s3(t)]'
% u(t)=[n(t) s4(t) y(t+1)]', u1(t)=[n(t) s4(t)], u2(t)=y(t+1)
% x(t)=[lam(t) k(t) y(t) n(t) i(t) a(t)]'
% G= [1 0; H=[0;
% 0 1; 0;
% 0 0] 1]
%
% Before choosing u1, agents see: y1(t)=C1*X(t)+w1(t)
% Before choosing u2, agents see: y2(t)=C2*X(t)+w2(t)
% where
%
% hidden=[2 3]';
% C1=[0 1 1 0 0 0 0 0 0]
% C2=[0 1 1 0 0 0 0 0 0]
% Sigma(2,2)=.009^2, Sigma(3,3)=.0018^2
%      V21=.009^2, V22=0, V31=V32=0
%
% For more details, see kpdesc.m, kpp.m
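%
%    As a check on the stacking u(t)=G*u1(t)+H*u2(t) (with the G and H
%    just given; the numerical values below are purely illustrative):
%
%      G=[1 0; 0 1; 0 0]; H=[0; 0; 1];
%      u1=[.31; .04];            % hypothetical n(t), s4(t)
%      u2=1.05;                  % hypothetical y(t+1)
%      u=G*u1+H*u2               % gives u=[.31; .04; 1.05]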
%
%
%
% SPECIFICATIONS TO BE UPDATED BEFORE SIMULATING:
%
% 1. Specify SOLUTION METHODS:
%    Fill in 'sol=[ ... ]', setting each element sol(i) to 1 or 2:
% a) general solution: linear-quadratic approx. (1) or stochastic
% simulation with the extended path algorithm (2) (if 2, skip c;
% sol can have length 2 or 3 in this case.)
% b) solving steady-states x, X, u: analytical (1) or numerical with
% "Steadyst.m" (2)
%     c) linear-quadratic solutions: Taylor expansion (1) or function
% evaluation as in Kydland-Prescott, "Time to Build", pp. 1356-7 (2)
%
% If sol(1)=2, set the 17 elements of solspec=[chtype,crit1,crit2,crit3,..
% damp,dampwt,lead,maxit1,maxit2,nbeg,nbep,ndraw,neep,nend,nfut1,...
% simtype]. (See Epspec.m, Instruction 3 for definitions.)
% *** NOTE: OPTION SOL(1)=2 IS NOT YET AVAILABLE. ***
%    If sol(2)=1, replace <ssfile> in '!copy <ssfile>.m ssxu.m' with the file
% giving steady-state values for states, X, controls, u, and x=D*X+E*u.
% x,X, and u should be COLUMN vectors.
% (Note: I advise setting x, X, u as functions of parameters.)
%    If sol(2)=2, give values for 'crit', the convergence criterion,
%      'dampval', the damping factor (i.e. a real # in [0,1] where 1=full
%      damping, 0=no damping), 'maxit', max # of iterations to get a solution
%      (e.g. 50 if you have a reasonable guess for X0,u0,mu0), and 'X0',
%      'u0','mu0', the initial vectors for states, controls and Lagrange
%      multipliers. If you have no guess for X0, u0, or mu0, set it to -9999.
% You must also specify 'overlap'. overlap=0 if no elements of u(t)
% coincide with elements of X(t+1) resulting in a trivial constraint
% (i.e. if u(1,t)=n1(t) and X(4,t)=n1(t-1), and X(4,t+1)=1*u(1,t) is
% a constraint in X(t+1)=A*X(t)+B*u(t), then we have a "trivial
% constraint"). overlap= a (roverl x 2) matrix if there are cases
% of X(j,t+1)=u(k,t) for some j,k. The ith row of overlap is [j k],
% where j is the element of X(t+1) and k is the element of u(t) such
%      that X(j,t+1)=u(k,t), and this occurrence of redundancy is the ith,
%      i=1,..,roverl (e.g. X(t)=[1 z(t) k(t) n1(t-1)]', u(t)=[n1(t) n2(t)
%      i(t)]' implies that overlap=[4 1]). The ordering of overlap is
%      important: the first column should be in ASCENDING order with the
%      second column corresponding correctly with the first. See
%      Steadyst.m for details. Also, try to avoid redundant or trivial
%      constraints not discussed here if possible; otherwise Steadyst.m
%      may have problems.
% If sol(3)=2, enter the (nx x 1)-vector 'perdev', percent deviations
% from the steady state used to calculate numerical derivatives (e.g.
%      [3 1 2 3 8 .5]' if nx=6). See Kydland and Prescott for details.
% An OPTIONAL (but recommended) specification is the setting of "check".
%    If sol(1)=1, then I use the doubling algorithm when solving the problem
%    (if a certain condition is satisfied). You can check to see if the
% doubling algorithm gives the same answer as a more "brute force"
% algorithm (given in dricc.m) by setting check=1. (In some cases,
% double.m will give the wrong decision rule.)
%
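% A minimal sketch of such a "brute force" check (iterating on the Riccati
% equation until convergence; all matrices and values below are
% illustrative, not taken from dricc.m):
%
%      A=[1 0; .1 .9]; B=[0; 1]; Q=-[1 0; 0 .5]; R=-.3; beta=.96;
%      P=zeros(2,2);
%      for it=1:5000
%        F=(R+beta*B'*P*B)\(beta*B'*P*A);   % decision rule u(t)=-F*X(t)
%        Pn=Q+beta*A'*P*(A-B*F);
%        if max(max(abs(Pn-P)))<1e-10, break, end
%        P=Pn;
%      end
%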
crit=.0001;
dampval=.8;
maxit=200;
overlap=0;
X0=[1 2.3 1];
u0=[.11];
mu0=-9999;
sol=[1 2 1];
check=1;
%
% 2. Specify INFORMATION STRUCTURE:
% There are 3 possible cases programmed with corresponding settings for
% the scalar 'info':
% (a) no hidden states, no multi-stage decision process; set info=1,
% (b) hidden states, no multi-stage decision process (see #1 under
% 'optional' at the top of this file); set info=2.
% (c) multi-stage decision process (see #2 under 'optional'); set info=3
%
% If info=2, you must also set 'ytknown'. If y(t) is known to the agents
% when computing u(t) (e.g. u(t)=-F*E[X(t)|y(t),y(t-1)..]), then set
% ytknown=1. If only y(t-j), j=1,2,.. are known, set ytknown=0.
%
% In the cases of (b) and (c), our decision rules will depend on the
% predicted state vector, Xhat, rather than the true state X. (Note,
% for (c), there will be Xhat1 and Xhat2--different predictions based
% on seeing y1 and y2, respectively.)
% For more details on (b),(c), see Sargent, "Lectures on Linear Optimal
% Control, Filtering, and Rat'l Expectations," and Kydland-Prescott,
% "Time to Build", respectively.
%
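% For cases (b) and (c), the predicted state comes from a Kalman filter.
% A minimal sketch of the stationary filter for case (b) with ytknown=0,
% assuming V3=0 (all matrices below are illustrative, not from a
% particular model):
%
%      A=[.9 0; .2 .7]; C=[1 0]; g=eye(2); Sigma=.01*eye(2); V2=.02;
%      S=eye(2);                        % state-prediction covariance
%      for it=1:1000                    % iterate to the stationary gain
%        K=A*S*C'/(C*S*C'+V2);          % Kalman gain
%        S=A*S*A'+g*Sigma*g'-K*(C*S*C'+V2)*K';
%      end
%      % predicted state (controls suppressed for brevity):
%      %   Xhat(:,t+1)=A*Xhat(:,t)+K*(y(t)-C*Xhat(:,t))
%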
info=1;
%
% 3. Specify PARAMETERS: A, B, D, E, g, beta, N, param11, param12, ..,
% param1N, param1 (=[param11; param12; ..; param1N]), param2, IC,
% Sigma, and nov1, which is an Nx1 vector where nov1(j)=length(v1j).
% If info=2, include C, V2, V3, and hidden in the parameter list. If
% info=3, include G, H, C1, C2, V21, V22, V31, V32, and hidden. These
% parameters can be specified either by:
%    (a) typing 'eval(<pfile>)', where <pfile>.m is the file in which you
%        specify them, or
%    (b) adding the lines to this file directly.
%    (With the former method, default parameters in <pfile>.m can be
%     changed by adding lines below eval(<pfile>).)
%
eval(brmip);
%
% 4. Using the order: v11,..,v1N,i1,..,iN,v2 (or v11, v12,...,iN, v2~ if
%    some elements of v2 were redundant) fill in the string:
% xstr='x=[ ... ]' with the appropriate VARIABLE
% NAMES. Then, fill in strings for states and controls ('Xstr'and
% 'ustr', respectively) that are consistent with the definition of x
% (i.e. x=D*X+E*u) and D, E from step 2.
%
xstr='where x = [k z i ]';
Xstr='where X = [1 k z ]';
ustr='where u = [i ]';
%
% 5. Specify the FUNCTIONS, u(.), f1(.),...,fN(.) to be used. To do this,
%    replace <ufile> and <ffilej>, j=1,..N, in the (system-level) commands:
%    '!copy <ufile>.m ufn.m' and '!copy <ffilej>.m ffnj.m', j=1,..N, with
% the matlab functions to be used. ufn.m should have INPUTS: x
% (corresponding to x given above) and param= [param1; param2].
% ffn.m should have INPUTS: x(m:m+nov1(j)-1), m=1+nov1(1)+nov1(2)+...
%    +nov1(j-1), and param1j. The OUTPUT of both files is of the form
% [z zj zh], where z, zj, zh are the function itself, the jacobian of
% the function, and the hessian of the function evaluated at the inputs,
% respectively. Make sure that ufn:
% (a) evaluates u(f1(v11,param11)-i1,..,fN(v1N,param1N)-iN,v2,param2)
% rather than u(c1,c2,..cN,v2,param2) (so that it has the correct #
% of arguments) and
% (b) uses the correct fj(.) functions.
%    (See kpu.m and kpf.m for examples of ufn.m and ffn1.m.)
%
!copy [mcgrattan.models]brmif.m ffn1.m
!copy [mcgrattan.models]brmiu.m ufn.m
%
% 6. If you want to have a DESCRIPTION of the model in the output file,
%    then replace <dfile> in the command '!copy <dfile>.m desc.m'
%    with your file. Otherwise, replace <dfile> with descoff.
%
!copy descoff.m desc.m
%
% 7. Give the TIME SERIES specifications by
% (a) defining the matrix getv: getv will be a matrix of 1's and 0's:
% 1 implies that the print/plot option for the variable in question is
%    on; 0 turns it off. The rows of getv correspond to X (states),
% Xhat1, Xhat2, u (controls), x (variables in u(.) when c's are
% replaced), and w, where w=[f1(.) c1 f2(.) c2... fN(.) cN].
% If info=1, Xhat1(t)=X(t), Xhat2(t)=X(t). If info=2, Xhat1(j,t)=
% E[X(j,t)|y(t-i),y(t-i-1),..], j=elements of X that are hidden,
% i=1-ytknown, Xhat1(j,t)=X(j,t), j=elements of X that agents see,
% and Xhat2(t)=X(t). If info=3, Xhat1(j,t)=E[X(j,t)|y1(t),y2(t-1),
% y1(t-1)..] if X(j,t) is a hidden variable and X(j,t) otherwise.
% Similarly, Xhat2(j,t)=E[X(j,t)|y2(t),y1(t),y2(t-1),y1(t-1)..],
% if X(j,t) is hidden and X(j,t) otherwise.
% Example: getv(6,2)=1 gives results for consumption of the first good.
% The number of columns of getv must equal max(length(X),length(u),
% length(w),length(Xhat1),length(Xhat2)). Therefore, add zeros on the
% end of short rows. The number of rows is always 6.
%   (b) specifying the variable names in 'zstr', where zstr='where z=[
%        ... ]' is the string of names corresponding to the variables
%       being printed/plotted,
% (c) specifying 'lts', the length of time series,
% (d) specifying 'nsim', the number of simulations,
% (e) specifying 'detr', the (m x 1)-vector (m=sum(sum(getv))+#vars
% given in (f),i.e. in morev.m) of 1's and 0's. If detr(i)=1, then
% the ith variable printed (reading across rows of getv) will have
% been detrended using the Hodrick-Prescott (1978) method.
% (To change the detrending method, revise "trend.m".)
%
% This results in tables of the model's standard deviations (that is,
% the means and standard deviations of standard deviations across
% simulations) and correlations with real output(s).
% You can get these results for additional variables that are functions
% of the variables specified in getv by
% (f) editing "Morev.m" and setting morevars=1. Instructions on editing
% "morev.m" will be provided in that file.
% You can also get additional user-defined results recursively (i.e. over
% simulations) by
% (g) editing "Recurs.m" and setting morefns=1. The inputs of "recurs.m"
% are: z, a (lts x #vars) (#vars=sum(sum(getv))+ #vars specified in
% morev.m) matrix of time series, dz, the detrended z (Note: if
% variable i has detr(i)=0, dz(:,i)=z(:,i)), and old, the last value
% computed by recurs. The output becomes 'old' at the next simulation.
% (If you are storing the code to be used in morev.m or recurs.m in
% other files, remember to copy them to "morev.m", "recurs.m".)
%
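% For example, a hypothetical getv setup (with length(X)=4, length(u)=3,
% length(x)=6, length(w)=2, so the number of columns is max(4,3,6,2)=6):
%
%      getv=zeros(6,6);        % rows: X, Xhat1, Xhat2, u, x, w
%      getv(5,1:6)=ones(1,6);  % print/plot every element of x
%      getv(6,1:2)=[1 1];      % print/plot f1(.) and c1; short rows
%                              % are padded with zeros
%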
lts=500;
nsim=1;
detr=zeros(4,1);
getv=[0 0 0;
0 0 0;
0 0 0;
0 0 0;
1 1 1;
1 1 0];
zstr='where z=[all x, f]';
morevars=0;
morefns=0;
%
% When finished, evaluate this file in MATLAB. It will then save your
% specifications in mat-file plspec.mat.
%
save plspec
%
% Good luck and Good Computing!!!