HARK.utilities

General purpose / miscellaneous functions. Includes functions to approximate continuous distributions with discrete ones, utility functions (and their derivatives), manipulation of discrete distributions, and basic plotting tools.

HARK.utilities.CARAutility(c, alpha)

Evaluates constant absolute risk aversion (CARA) utility of consumption c given risk aversion parameter alpha.

Parameters:
  • c (float) – Consumption value
  • alpha (float) – Risk aversion
Returns:

(unnamed) – Utility

Return type:

float
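
Example (a minimal usage sketch; the consumption level and risk-aversion value below are illustrative, and the checks rely only on the standard CARA properties that utility is increasing and marginal utility is positive):

    from HARK.utilities import CARAutility, CARAutilityP

    c, alpha = 2.0, 1.5          # illustrative consumption level and risk aversion
    u = CARAutility(c, alpha)    # utility of consuming c under CARA preferences
    uP = CARAutilityP(c, alpha)  # marginal utility at the same point

    # Utility should be increasing in consumption and marginal utility positive
    assert uP > 0.0
    assert CARAutility(c + 0.1, alpha) > u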

HARK.utilities.CARAutilityP(c, alpha)

Evaluates constant absolute risk aversion (CARA) marginal utility of consumption c given risk aversion parameter alpha.

Parameters:
  • c (float) – Consumption value
  • alpha (float) – Risk aversion
Returns:

(unnamed) – Marginal utility

Return type:

float

HARK.utilities.CARAutilityPP(c, alpha)

Evaluates constant absolute risk aversion (CARA) marginal marginal utility of consumption c given risk aversion parameter alpha.

Parameters:
  • c (float) – Consumption value
  • alpha (float) – Risk aversion
Returns:

(unnamed) – Marginal marginal utility

Return type:

float

HARK.utilities.CARAutilityPPP(c, alpha)

Evaluates constant absolute risk aversion (CARA) marginal marginal marginal utility of consumption c given risk aversion parameter alpha.

Parameters:
  • c (float) – Consumption value
  • alpha (float) – Risk aversion
Returns:

(unnamed) – Marginal marginal marginal utility

Return type:

float

HARK.utilities.CARAutilityP_inv(u, alpha)

Evaluates the inverse of the constant absolute risk aversion (CARA) marginal utility function at marginal utility level u given risk aversion parameter alpha.

Parameters:
  • u (float) – Marginal utility value
  • alpha (float) – Risk aversion
Returns:

(unnamed) – Consumption value corresponding to u

Return type:

float

HARK.utilities.CARAutility_inv(u, alpha)

Evaluates the inverse of the constant absolute risk aversion (CARA) utility function at utility level u given risk aversion parameter alpha.

Parameters:
  • u (float) – Utility value
  • alpha (float) – Risk aversion
Returns:

(unnamed) – Consumption value corresponding to u

Return type:

float

HARK.utilities.CARAutility_invP(u, alpha)

Evaluates the derivative of the inverse of the constant absolute risk aversion (CARA) utility function at utility level u given risk aversion parameter alpha.

Parameters:
  • u (float) – Utility value
  • alpha (float) – Risk aversion
Returns:

(unnamed) – Marginal consumption value corresponding to u

Return type:

float
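
Example (a hedged round-trip check for the CARA inverse functions above; it assumes they are exact inverses of CARAutility and CARAutilityP, as documented, and the input values are illustrative):

    import numpy as np
    from HARK.utilities import (CARAutility, CARAutility_inv,
                                CARAutilityP, CARAutilityP_inv)

    c, alpha = 1.25, 2.0  # illustrative values

    # Inverting the utility function should recover the consumption value
    assert np.isclose(CARAutility_inv(CARAutility(c, alpha), alpha), c)

    # Likewise for the marginal utility function and its inverse
    assert np.isclose(CARAutilityP_inv(CARAutilityP(c, alpha), alpha), c)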

HARK.utilities.CRRAutility(c, gam)

Evaluates constant relative risk aversion (CRRA) utility of consumption c given risk aversion parameter gam.

Parameters:
  • c (float) – Consumption value
  • gam (float) – Risk aversion
Returns:

(unnamed) – Utility

Return type:

float

Tests:

Test a value which should pass:

    >>> c, gamma = 1.0, 2.0  # Set two values at once with Python syntax
    >>> utility(c=c, gam=gamma)
    -1.0
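
Example (a short check of the documented test value; the closed form c**(1 - gamma)/(1 - gamma) is the standard CRRA formula and is consistent with that value, but verify it against your installed version):

    import numpy as np
    from HARK.utilities import CRRAutility

    c, gamma = 1.0, 2.0
    u = CRRAutility(c, gamma)
    assert np.isclose(u, -1.0)  # matches the documented test value above

    # The standard CRRA closed form gives the same number for these inputs;
    # gamma = 1 is the log-utility limit and is usually treated as a special case.
    assert np.isclose(u, c ** (1.0 - gamma) / (1.0 - gamma))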

HARK.utilities.CRRAutilityP(c, gam)

Evaluates constant relative risk aversion (CRRA) marginal utility of consumption c given risk aversion parameter gam.

Parameters:
  • c (float) – Consumption value
  • gam (float) – Risk aversion
Returns:

(unnamed) – Marginal utility

Return type:

float

HARK.utilities.CRRAutilityPP(c, gam)

Evaluates constant relative risk aversion (CRRA) marginal marginal utility of consumption c given risk aversion parameter gam.

Parameters:
  • c (float) – Consumption value
  • gam (float) – Risk aversion
Returns:

(unnamed) – Marginal marginal utility

Return type:

float

HARK.utilities.CRRAutilityPPP(c, gam)

Evaluates constant relative risk aversion (CRRA) marginal marginal marginal utility of consumption c given risk aversion parameter gam.

Parameters:
  • c (float) – Consumption value
  • gam (float) – Risk aversion
Returns:

(unnamed) – Marginal marginal marginal utility

Return type:

float

HARK.utilities.CRRAutilityPPPP(c, gam)

Evaluates constant relative risk aversion (CRRA) marginal marginal marginal marginal utility of consumption c given risk aversion parameter gam.

Parameters:
  • c (float) – Consumption value
  • gam (float) – Risk aversion
Returns:

(unnamed) – Marginal marginal marginal marginal utility

Return type:

float

HARK.utilities.CRRAutilityP_inv(uP, gam)

Evaluates the inverse of the CRRA marginal utility function (with risk aversion parameter gam) at a given marginal utility level uP.

Parameters:
  • uP (float) – Marginal utility value
  • gam (float) – Risk aversion
Returns:

(unnamed) – Consumption corresponding to given marginal utility value.

Return type:

float

HARK.utilities.CRRAutilityP_invP(uP, gam)

Evaluates the derivative of the inverse of the CRRA marginal utility function (with risk aversion parameter gam) at a given marginal utility level uP.

Parameters:
  • uP (float) – Marginal utility value
  • gam (float) – Risk aversion
Returns:

(unnamed) – Marginal consumption corresponding to given marginal utility value

Return type:

float

HARK.utilities.CRRAutility_inv(u, gam)

Evaluates the inverse of the CRRA utility function (with risk aversion parameter gam) at a given utility level u.

Parameters:
  • u (float) – Utility value
  • gam (float) – Risk aversion
Returns:

(unnamed) – Consumption corresponding to given utility value

Return type:

float

HARK.utilities.CRRAutility_invP(u, gam)

Evaluates the derivative of the inverse of the CRRA utility function (with risk aversion parameter gam) at a given utility level u.

Parameters:
  • u (float) – Utility value
  • gam (float) – Risk aversion
Returns:

(unnamed) – Marginal consumption corresponding to given utility value

Return type:

float
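
Example (a hedged round-trip check for the CRRA inverse functions above, assuming they are exact inverses of CRRAutility and CRRAutilityP as documented; the input values are illustrative):

    import numpy as np
    from HARK.utilities import (CRRAutility, CRRAutility_inv,
                                CRRAutilityP, CRRAutilityP_inv)

    c, gam = 1.5, 3.0  # illustrative values

    assert np.isclose(CRRAutility_inv(CRRAutility(c, gam), gam), c)
    assert np.isclose(CRRAutilityP_inv(CRRAutilityP(c, gam), gam), c)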

class HARK.utilities.NullFunc

A trivial class that acts as a placeholder “do nothing” function.

distance(other)

Trivial distance metric that only cares whether the other object is also an instance of NullFunc. Intentionally does not inherit from HARKobject as this might create dependency problems.

Parameters: other (any) – Any object for comparison to this instance of NullFunc.
Returns: (unnamed) – The distance between self and other. Returns 0 if other is also a NullFunc; otherwise returns an arbitrary high number.
Return type: float
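
Example (a brief usage sketch; NullFunc is mainly useful as a placeholder default for function-valued attributes):

    from HARK.utilities import NullFunc

    placeholder = NullFunc()
    print(placeholder.distance(NullFunc()))   # 0, since the other object is also a NullFunc
    print(placeholder.distance(lambda x: x))  # an arbitrary high number otherwise
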
HARK.utilities.addDiscreteOutcome(distribution, x, p, sort=False)

Adds a discrete outcome of x with probability p to an existing distribution, holding constant the relative probabilities of other outcomes.

Parameters:
  • distribution ([np.array]) – Two element list containing a list of probabilities and a list of outcomes.
  • x (float) – The new value to be added to the distribution.
  • p (float) – The probability of the discrete outcome x occurring.
Returns:

  • X (np.array) – Discrete points for discrete probability mass function.
  • pmf (np.array) – Probability associated with each point in X.

Written by Matthew N. White. Latest update: 11 December 2015.
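
Example (a hedged sketch; the input follows the documented [probabilities, outcomes] layout, and since the ordering of the returned arrays may differ across HARK versions, the result is printed rather than unpacked):

    import numpy as np
    from HARK.utilities import addDiscreteOutcome

    # A two-point distribution in the documented format: [probabilities, outcomes]
    dstn = [np.array([0.5, 0.5]), np.array([1.0, 2.0])]

    # Add an outcome of 0.0 with probability 0.1; the original outcomes keep their
    # relative weights and now carry the remaining 0.9 of probability mass.
    new_dstn = addDiscreteOutcome(dstn, x=0.0, p=0.1, sort=True)

    # Inspect the result rather than relying on a fixed unpacking order.
    print(new_dstn)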

HARK.utilities.addDiscreteOutcomeConstantMean(distribution, x, p, sort=False)

Adds a discrete outcome of x with probability p to an existing distribution, holding constant the relative probabilities of other outcomes and overall mean.

Parameters:
  • distribution ([np.array]) – Two element list containing a list of probabilities and a list of outcomes.
  • x (float) – The new value to be added to the distribution.
  • p (float) – The probability of the discrete outcome x occurring.
  • sort (bool) – Whether or not to sort X before returning it.
Returns:

  • X (np.array) – Discrete points for discrete probability mass function.
  • pmf (np.array) – Probability associated with each point in X.

Written by Matthew N. White. Latest update: 08 December 2015 by David Low.

HARK.utilities.approxBeta(N, a=1.0, b=1.0)

Calculate a discrete approximation to the beta distribution. May be quite slow, as it uses a rudimentary numeric integration method to generate the discrete approximation.

Parameters:
  • N (int) – Size of discrete space vector to be returned.
  • a (float) – First shape parameter (sometimes called alpha).
  • b (float) – Second shape parameter (sometimes called beta).
Returns:

  • X (np.array) – Discrete points for discrete probability mass function.
  • pmf (np.array) – Probability associated with each point in X.

HARK.utilities.approxLognormal(N, mu=0.0, sigma=1.0, tail_N=0, tail_bound=[0.02, 0.98], tail_order=2.718281828459045)

Construct a discrete approximation to a lognormal distribution with underlying normal distribution N(mu,sigma). Makes an equiprobable distribution by default, but user can optionally request augmented tails with exponentially sized point masses. This can improve solution accuracy in some models.

Parameters:
  • N (int) – Number of discrete points in the “main part” of the approximation.
  • mu (float) – Mean of underlying normal distribution.
  • sigma (float) – Standard deviation of underlying normal distribution.
  • tail_N (int) – Number of points in each “tail part” of the approximation; 0 = no tail.
  • tail_bound ([float]) – CDF boundaries of the tails vs main portion; tail_bound[0] is the lower tail bound, tail_bound[1] is the upper tail bound. Inoperative when tail_N = 0. Can make “one tailed” approximations with 0.0 or 1.0.
  • tail_order (float) – Factor by which consecutive point masses in a “tail part” differ in probability. Should be >= 1 for sensible spacing.
Returns:

  • pmf (np.ndarray) – Probabilities for discrete probability mass function.
  • X (np.ndarray) – Discrete values in probability mass function.

Written by Luca Gerotto. Based on the Matlab function “setup_workspace.m” from Chris Carroll’s [Solution Methods for Microeconomic Dynamic Optimization Problems](http://www.econ2.jhu.edu/people/ccarroll/solvingmicrodsops/) toolkit. Latest update: 11 February 2017 by Matthew N. White.
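
Example (a hedged usage sketch; the (pmf, X) unpacking follows the Returns order above, and the mean check uses the lognormal identity E[X] = exp(mu + sigma**2 / 2)):

    import numpy as np
    from HARK.utilities import approxLognormal

    pmf, X = approxLognormal(N=7, mu=0.0, sigma=0.1)

    assert np.isclose(np.sum(pmf), 1.0)   # probabilities sum to one
    print(np.dot(pmf, X))                 # discrete approximation of E[X]
    print(np.exp(0.0 + 0.1 ** 2 / 2.0))   # exact lognormal mean; close for modest sigma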

HARK.utilities.approxMeanOneLognormal(N, sigma=1.0, **kwargs)

Calculate a discrete approximation to a mean one lognormal distribution. Based on function approxLognormal; see that function’s documentation for further notes.

Parameters:
  • N (int) – Size of discrete space vector to be returned.
  • sigma (float) – standard deviation associated with underlying normal probability distribution.
Returns:

  • X (np.array) – Discrete points for discrete probability mass function.
  • pmf (np.array) – Probability associated with each point in X.

Written by Nathan M. Palmer. Based on the Matlab function “setup_shocks.m” from Chris Carroll’s [Solution Methods for Microeconomic Dynamic Optimization Problems](http://www.econ2.jhu.edu/people/ccarroll/solvingmicrodsops/) toolkit. Latest update: 01 May 2015.

HARK.utilities.approxUniform(N, bot=0.0, top=1.0)

Makes a discrete approximation to a uniform distribution, given its bottom and top limits and number of points.

Parameters:
  • N (int) – The number of points in the discrete approximation
  • bot (float) – The bottom of the uniform distribution
  • top (float) – The top of the uniform distribution
Returns:

(unnamed) – An equiprobable discrete approximation to the uniform distribution.

Return type:

np.array
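
Example (a short sketch; the documentation above describes a single array of equiprobable points, but some HARK releases return a (pmf, X) pair instead, so check the shape of the result against your installed version):

    from HARK.utilities import approxUniform

    approx = approxUniform(N=5, bot=0.0, top=10.0)

    # Per the documentation above, this is an array of 5 equiprobable points on [0, 10];
    # if your version returns (pmf, X), unpack accordingly.
    print(approx)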

HARK.utilities.calcSubpopAvg(data, reference, cutoffs, weights=None)

Calculates the average of (weighted) data between cutoff percentiles of a reference variable.

Parameters:
  • data (numpy.array) – A 1D array of float data.
  • reference (numpy.array) – A 1D array of float data of the same length as data.
  • cutoffs ([(float,float)]) – A list of (lower, upper) percentile bound pairs; each bound should be in [0,1].
  • weights (numpy.array) – A weighting vector for the data.
Returns:

slice_avg – The (weighted) average of data that falls within the cutoff percentiles of reference.

Return type:

float

HARK.utilities.calcWeightedAvg(data, weights)

Generates a weighted average of simulated data. The Nth row of data is averaged and then weighted by the Nth element of weights in an aggregate average.

Parameters:
  • data (numpy.array) – An array of data with N rows of J floats
  • weights (numpy.array) – A length N array of weights for the N rows of data.
Returns:

weighted_sum – The weighted sum of the data.

Return type:

float
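
Example (a small sketch of the row-wise weighting described above; the equivalence with the manual NumPy computation is an assumption based on that description):

    import numpy as np
    from HARK.utilities import calcWeightedAvg

    data = np.array([[1.0, 2.0, 3.0],    # row 0: mean 2.0
                     [4.0, 5.0, 6.0]])   # row 1: mean 5.0
    weights = np.array([0.25, 0.75])

    result = calcWeightedAvg(data, weights)

    # Per the description, each row is averaged and then weighted:
    manual = np.dot(data.mean(axis=1), weights)  # 0.25*2.0 + 0.75*5.0 = 4.25
    print(result, manual)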

HARK.utilities.combineIndepDstns(*distributions)

Given n lists (or tuples) whose elements represent n independent, discrete probability spaces (probabilities and values), construct a joint pmf over all combinations of these independent points. Can take multivariate discrete distributions as inputs.

Parameters: distributions ([np.array]) – Arbitrary number of distributions (pmfs). Each pmf is a list or tuple. For each pmf, the first vector is probabilities and all subsequent vectors are values. For each pmf, this should be true: len(X_pmf[0]) == len(X_pmf[j]) for j in range(1, len(X_pmf)).
Returns:

List of arrays, consisting of:

  • P_out (np.array) – Probability associated with each point in X_out.
  • X_out (np.array (as many as in *distributions)) – Discrete points for the joint discrete probability mass function.

Written by Nathan Palmer. Latest update: 5 July August 2017 by Matthew N. White.
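
Example (a hedged sketch combining two independent two-point distributions; the [probabilities, values] input layout and the P_out-first output ordering follow the documentation above):

    import numpy as np
    from HARK.utilities import combineIndepDstns

    # Two independent discrete distributions, each as [probabilities, values]
    dstn_A = [np.array([0.5, 0.5]), np.array([0.0, 1.0])]
    dstn_B = [np.array([0.3, 0.7]), np.array([10.0, 20.0])]

    joint = combineIndepDstns(dstn_A, dstn_B)
    P_out, X_A, X_B = joint[0], joint[1], joint[2]  # joint probabilities, then one value array per input

    assert np.isclose(np.sum(P_out), 1.0)
    assert len(P_out) == len(X_A) == len(X_B) == 4  # 2 x 2 combinations
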
HARK.utilities.epanechnikovKernel(x, ref_x, h=1.0)

The Epanechnikov kernel.

Parameters:
  • x (np.array) – Values at which to evaluate the kernel
  • ref_x (float) – The reference point
  • h (float) – Kernel bandwidth
Returns:

out – Kernel values at each value of x

Return type:

np.array
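
Example (a cross-check against the textbook Epanechnikov kernel K(u) = 0.75*(1 - u**2) for |u| <= 1 with u = (x - ref_x)/h; HARK's exact scaling convention is an assumption to verify, so the two arrays are printed side by side rather than asserted equal):

    import numpy as np
    from HARK.utilities import epanechnikovKernel

    x = np.linspace(-2.0, 2.0, 9)
    ref_x, h = 0.0, 1.0

    hark_vals = epanechnikovKernel(x, ref_x, h)

    # Textbook Epanechnikov kernel for comparison
    u = (x - ref_x) / h
    textbook = np.where(np.abs(u) <= 1.0, 0.75 * (1.0 - u ** 2), 0.0)
    print(hark_vals)
    print(textbook)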

HARK.utilities.getArgNames(function)

Returns a list of strings naming all of the arguments for the passed function.

Parameters: function (function) – A function whose argument names are wanted.
Returns: argNames – The names of the arguments of function.
Return type: [string]
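
Example (a quick usage sketch with an illustrative function):

    from HARK.utilities import getArgNames

    def example_function(a, b, c=1.0):
        return a + b + c

    print(getArgNames(example_function))  # expected: ['a', 'b', 'c']
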
HARK.utilities.getLorenzShares(data, weights=None, percentiles=[0.5], presorted=False)

Calculates the Lorenz curve at the requested percentiles of (weighted) data. Median by default.

Parameters:
  • data (numpy.array) – A 1D array of float data.
  • weights (numpy.array) – A weighting vector for the data.
  • percentiles ([float]) – A list of percentiles to calculate for the data. Each element should be in (0,1).
  • presorted (boolean) – Indicator for whether data has already been sorted.
Returns:

lorenz_out – The requested Lorenz curve points of the data.

Return type:

numpy.array

HARK.utilities.getPercentiles(data, weights=None, percentiles=[0.5], presorted=False)

Calculates the requested percentiles of (weighted) data. Median by default.

Parameters:
  • data (numpy.array) – A 1D array of float data.
  • weights (np.array) – A weighting vector for the data.
  • percentiles ([float]) – A list of percentiles to calculate for the data. Each element should be in (0,1).
  • presorted (boolean) – Indicator for whether data has already been sorted.
Returns:

pctl_out – The requested percentiles of the data.

Return type:

numpy.array
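
Example (a short sketch computing a weighted median and a higher percentile; the exact numbers depend on the interpolation convention for weighted percentiles, so they are printed rather than asserted):

    import numpy as np
    from HARK.utilities import getPercentiles

    data = np.array([1.0, 2.0, 3.0, 4.0, 5.0])
    weights = np.array([1.0, 1.0, 1.0, 1.0, 4.0])  # overweight the largest value

    median, p90 = getPercentiles(data, weights=weights, percentiles=[0.5, 0.9])
    print(median, p90)  # the weighted median should be pulled toward 5.0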

HARK.utilities.kernelRegression(x, y, bot=None, top=None, N=500, h=None)

Performs a non-parametric Nadaraya-Watson 1D kernel regression on given data with optionally specified range, number of points, and kernel bandwidth.

Parameters:
  • x (np.array) – The independent variable in the kernel regression.
  • y (np.array) – The dependent variable in the kernel regression.
  • bot (float) – Minimum value of interest in the regression; defaults to min(x).
  • top (float) – Maximum value of interest in the regression; defaults to max(x).
  • N (int) – Number of points to compute.
  • h (float) – The bandwidth of the (Epanechnikov) kernel. To-do: GENERALIZE.
Returns:

regression – A piecewise locally linear kernel regression: y = f(x).

Return type:

LinearInterp
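
Example (a sketch fitting a noisy sine curve; per the Return type above, the result is a LinearInterp object that can be called like a function, and the bandwidth below is an illustrative choice):

    import numpy as np
    from HARK.utilities import kernelRegression

    np.random.seed(0)
    x = np.random.uniform(0.0, 2.0 * np.pi, 500)
    y = np.sin(x) + 0.1 * np.random.randn(500)

    f_hat = kernelRegression(x, y, N=200, h=0.5)
    print(f_hat(np.pi / 2.0))  # should be roughly sin(pi/2) = 1.0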

HARK.utilities.makeGridExpMult(ming, maxg, ng, timestonest=20)

Make a multi-exponentially spaced grid.

Parameters:
  • ming (float) – Minimum value of the grid
  • maxg (float) – Maximum value of the grid
  • ng (int) – The number of grid points
  • timestonest (int) – the number of times to nest the exponentiation
Returns:

  • points (np.array) – A multi-exponentially spaced grid.

The original Matlab code can be found in Chris Carroll’s [Solution Methods for Microeconomic Dynamic Optimization Problems](http://www.econ2.jhu.edu/people/ccarroll/solvingmicrodsops/) toolkit. Latest update: 01 May 2015.
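
Example (a typical usage sketch; the bounds and nesting depth below are illustrative of the asset grids used in many HARK models, which are dense near the lower bound):

    from HARK.utilities import makeGridExpMult

    # 20 points between 0.001 and 20, with the exponentiation nested 3 times
    grid = makeGridExpMult(ming=0.001, maxg=20.0, ng=20, timestonest=3)

    # Spacing between consecutive points grows toward the top of the grid
    print(grid)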

HARK.utilities.makeMarkovApproxToNormal(x_grid, mu, sigma, K=351, bound=3.5)

Creates an approximation to a normal distribution with mean mu and standard deviation sigma, returning a stochastic vector p_vec corresponding to the values in x_grid. If an RV is distributed x ~ N(mu, sigma), then the expectation of a continuous function f() is E[f(x)] = numpy.dot(p_vec, f(x_grid)).

Parameters:
  • x_grid (numpy.array) – A sorted 1D array of floats representing discrete values that a normally distributed RV could take on.
  • mu (float) – Mean of the normal distribution to be approximated.
  • sigma (float) – Standard deviation of the normal distribution to be approximated.
  • K (int) – Number of points in the normal distribution to sample.
  • bound (float) – Truncation bound of the normal distribution, as +/- bound*sigma.
Returns:

p_vec – A stochastic vector with probability weights for each x in x_grid.

Return type:

numpy.array
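
Example (a sketch using the expectation identity given in the description, with f(x) = x**2 so the answer should be near mu**2 + sigma**2 = 1.25; the grid width of four standard deviations is an illustrative choice):

    import numpy as np
    from HARK.utilities import makeMarkovApproxToNormal

    mu, sigma = 1.0, 0.5
    x_grid = np.linspace(mu - 4.0 * sigma, mu + 4.0 * sigma, 61)  # sorted grid covering the support

    p_vec = makeMarkovApproxToNormal(x_grid, mu, sigma)

    print(np.sum(p_vec))               # a stochastic vector, so this should be 1.0
    print(np.dot(p_vec, x_grid ** 2))  # should be close to mu**2 + sigma**2 = 1.25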

HARK.utilities.makeMarkovApproxToNormalByMonteCarlo(x_grid, mu, sigma, N_draws=10000)

Creates an approximation to a normal distribution with mean mu and standard deviation sigma by Monte Carlo, returning a stochastic vector p_vec corresponding to the values in x_grid. If an RV is distributed x ~ N(mu, sigma), then the expectation of a continuous function f() is E[f(x)] = numpy.dot(p_vec, f(x_grid)).

Parameters:
  • x_grid (numpy.array) – A sorted 1D array of floats representing discrete values that a normally distributed RV could take on.
  • mu (float) – Mean of the normal distribution to be approximated.
  • sigma (float) – Standard deviation of the normal distribution to be approximated.
  • N_draws (int) – Number of draws to use in Monte Carlo.
Returns:

p_vec – A stochastic vector with probability weights for each x in x_grid.

Return type:

numpy.array

HARK.utilities.makeTauchenAR1(N, sigma=1.0, rho=0.9, bound=3.0)

Returns a discretized version of an AR1 process. See http://www.fperri.net/TEACHING/macrotheory08/numerical.pdf for details.

Parameters:
  • N (int) – Size of discretized grid
  • sigma (float) – Standard deviation of the error term
  • rho (float) – AR1 coefficient
  • bound (float) – The highest (lowest) grid point will be bound (-bound) multiplied by the unconditional standard deviation of the process
Returns:

  • y (np.array) – Grid points on which the discretized process takes values.
  • trans_matrix (np.array) – Markov transition array for the discretized process.

Written by Edmund S. Crawley. Latest update: 27 October 2017.
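
Example (a sketch; the (y, trans_matrix) unpacking follows the Returns order above, and the row-sum check assumes the usual row-stochastic convention for the transition matrix):

    from HARK.utilities import makeTauchenAR1

    y, trans_matrix = makeTauchenAR1(N=7, sigma=0.1, rho=0.9, bound=3.0)

    print(y)                         # grid spans roughly +/- 3 unconditional standard deviations
    print(trans_matrix.sum(axis=1))  # each row should sum to 1 under a row-stochastic convention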

HARK.utilities.memoize(obj)

A decorator to (potentially) make functions more efficient.

With this decorator, functions will “remember” if they have been evaluated with given inputs before. If they have, they will “remember” the outputs that have already been calculated for those inputs, rather than calculating them again.
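
Example (a short usage sketch of the decorator with an illustrative function):

    from HARK.utilities import memoize

    @memoize
    def slow_square(x):
        print("computing...")  # printed only the first time a given input is seen
        return x * x

    print(slow_square(4))  # computes and caches: prints "computing..." then 16
    print(slow_square(4))  # served from the cache: prints only 16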

HARK.utilities.plotFuncs(functions, bottom, top, N=1000, legend_kwds=None)

Plots 1D function(s) over a given range.

Parameters:
  • functions ([function] or function) – A single function, or a list of functions, to be plotted.
  • bottom (float) – The lower limit of the domain to be plotted.
  • top (float) – The upper limit of the domain to be plotted.
  • N (int) – Number of points in the domain to evaluate.
  • legend_kwds (None, or dictionary) – If not None, the keyword dictionary to pass to plt.legend
Returns:

Return type:

none
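
Example (a sketch plotting a CRRA utility function and its marginal utility over a positive consumption range; matplotlib must be installed, since plotFuncs draws with pyplot):

    from HARK.utilities import CRRAutility, CRRAutilityP, plotFuncs

    gam = 2.0
    u = lambda c: CRRAutility(c, gam)
    uP = lambda c: CRRAutilityP(c, gam)

    # Plot both functions for consumption between 0.5 and 5.0
    plotFuncs([u, uP], bottom=0.5, top=5.0, N=200)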

HARK.utilities.plotFuncsDer(functions, bottom, top, N=1000, legend_kwds=None)

Plots the first derivative of 1D function(s) over a given range.

Parameters:
  • functions ([function] or function) – A function or list of functions, the derivatives of which are to be plotted.
  • bottom (float) – The lower limit of the domain to be plotted.
  • top (float) – The upper limit of the domain to be plotted.
  • N (int) – Number of points in the domain to evaluate.
  • legend_kwds (None, or dictionary) – If not None, the keyword dictionary to pass to plt.legend
Returns:

Return type:

none