Stan 2.5.0: probability, sampling & optimization
For compiling models.
Classes

- class `prob_grad`: represents the basic parameter holders for a model.
- struct `model_functional`
Functions

- `template <bool jacobian_adjust_transform, class M> double log_prob_propto(const M& model, std::vector<double>& params_r, std::vector<int>& params_i, std::ostream* msgs = 0)`
  Helper function to calculate log probability for double scalars up to a proportion.
- `template <bool propto, bool jacobian_adjust_transform, class M> double log_prob_grad(const M& model, std::vector<double>& params_r, std::vector<int>& params_i, std::vector<double>& gradient, std::ostream* msgs = 0)`
  Compute the gradient using reverse-mode automatic differentiation, writing the result into the specified gradient vector.
- `template <bool jacobian_adjust_transform, class M> double log_prob_propto(const M& model, Eigen::VectorXd& params_r, std::ostream* msgs = 0)`
  Helper function to calculate log probability for double scalars up to a proportion.
- `template <bool propto, bool jacobian_adjust_transform, class M> double log_prob_grad(const M& model, Eigen::VectorXd& params_r, Eigen::VectorXd& gradient, std::ostream* msgs = 0)`
  Compute the gradient using reverse-mode automatic differentiation, writing the result into the specified gradient vector.
- `template <bool propto, bool jacobian_adjust_transform, class M> void finite_diff_grad(const M& model, std::vector<double>& params_r, std::vector<int>& params_i, std::vector<double>& grad, double epsilon = 1e-6, std::ostream* msgs = 0)`
  Compute the gradient using finite differences for the specified parameters, writing the result into the specified gradient vector, using the specified perturbation.
- `template <bool propto, bool jacobian_adjust_transform, class M> int test_gradients(const M& model, std::vector<double>& params_r, std::vector<int>& params_i, double epsilon = 1e-6, double error = 1e-6, std::ostream& o = std::cout, std::ostream* msgs = 0)`
  Test the log_prob_grad() function's ability to produce accurate gradients using finite differences.
- `template <bool propto, bool jacobian_adjust_transform, class M> double grad_hess_log_prob(const M& model, std::vector<double>& params_r, std::vector<int>& params_i, std::vector<double>& gradient, std::vector<double>& hessian, std::ostream* msgs = 0)`
  Evaluate the log probability, its gradient, and its Hessian at params_r.
- `template <class M> void gradient(const M& model, const Eigen::Matrix<double, Eigen::Dynamic, 1>& x, double& f, Eigen::Matrix<double, Eigen::Dynamic, 1>& grad_f, std::ostream* msgs = 0)`
- `template <class M> void hessian(const M& model, const Eigen::Matrix<double, Eigen::Dynamic, 1>& x, double& f, Eigen::Matrix<double, Eigen::Dynamic, 1>& grad_f, Eigen::Matrix<double, Eigen::Dynamic, Eigen::Dynamic>& hess_f, std::ostream* msgs = 0)`
- `template <class M> void gradient_dot_vector(const M& model, const Eigen::Matrix<double, Eigen::Dynamic, 1>& x, const Eigen::Matrix<double, Eigen::Dynamic, 1>& v, double& f, double& grad_f_dot_v, std::ostream* msgs = 0)`
- `template <class M> void hessian_times_vector(const M& model, const Eigen::Matrix<double, Eigen::Dynamic, 1>& x, const Eigen::Matrix<double, Eigen::Dynamic, 1>& v, double& f, Eigen::Matrix<double, Eigen::Dynamic, 1>& hess_f_dot_v, std::ostream* msgs = 0)`
- `template <class M> void grad_tr_mat_times_hessian(const M& model, const Eigen::Matrix<double, Eigen::Dynamic, 1>& x, const Eigen::Matrix<double, Eigen::Dynamic, Eigen::Dynamic>& X, Eigen::Matrix<double, Eigen::Dynamic, 1>& grad_tr_X_hess_f, std::ostream* msgs = 0)`
For compiling models.
`void stan::model::finite_diff_grad(const M& model, std::vector<double>& params_r, std::vector<int>& params_i, std::vector<double>& grad, double epsilon = 1e-6, std::ostream* msgs = 0)`
Compute the gradient using finite differences for the specified parameters, writing the result into the specified gradient, using the specified perturbation.
Template Parameters

- `propto`: True if calculation is up to proportion (double-only terms dropped).
- `jacobian_adjust_transform`: True if the log absolute Jacobian determinant of inverse parameter transforms is added to the log probability.
- `M`: Class of model.

Parameters

- `model`: Model.
- `params_r`: Real-valued parameters.
- `params_i`: Integer-valued parameters.
- [out] `grad`: Vector into which the gradient is written.
- `epsilon`: Finite-difference perturbation size; defaults to 1e-6.
- [in,out] `msgs`: Stream to which Stan programs write; defaults to 0.
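The scheme described above can be illustrated with a minimal standalone sketch (not Stan's implementation): a plain callable stands in for a model's log probability function, and each parameter is perturbed by `epsilon` in turn using a central difference.

```cpp
#include <cmath>
#include <functional>
#include <vector>

// Central-difference gradient sketch: grad[k] ~ (f(x + eps*e_k) - f(x - eps*e_k)) / (2*eps).
// A std::function stands in for a Stan model's log_prob here (an assumption
// for illustration only).
void finite_diff_grad_sketch(
    const std::function<double(const std::vector<double>&)>& f,
    std::vector<double> params_r,
    std::vector<double>& grad,
    double epsilon = 1e-6) {
  grad.resize(params_r.size());
  for (std::size_t k = 0; k < params_r.size(); ++k) {
    double orig = params_r[k];
    params_r[k] = orig + epsilon;    // forward perturbation
    double fp = f(params_r);
    params_r[k] = orig - epsilon;    // backward perturbation
    double fm = f(params_r);
    params_r[k] = orig;              // restore the parameter
    grad[k] = (fp - fm) / (2.0 * epsilon);
  }
}
```

Each component costs two function evaluations, so the full gradient costs 2 * params_r.size() evaluations, which is why autodiff is preferred in practice.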
`double stan::model::grad_hess_log_prob(const M& model, std::vector<double>& params_r, std::vector<int>& params_i, std::vector<double>& gradient, std::vector<double>& hessian, std::ostream* msgs = 0)`
Evaluate the log-probability, its gradient, and its Hessian at params_r.
This default version computes the Hessian numerically by finite-differencing the gradient, at a cost of O(params_r.size()^2).
Template Parameters

- `propto`: True if calculation is up to proportion (double-only terms dropped).
- `jacobian_adjust_transform`: True if the log absolute Jacobian determinant of inverse parameter transforms is added to the log probability.
- `M`: Class of model.

Parameters

- `model`: Model.
- `params_r`: Real-valued parameter vector.
- `params_i`: Integer-valued parameter vector.
- `gradient`: Vector to write the gradient to.
- `hessian`: Vector to write the Hessian to. `hessian[i*D + j]` gives the element at the ith row and jth column of the Hessian (where D = params_r.size()).
- `msgs`: Stream to which print statements in Stan programs are written; defaults to 0.
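The "finite-differencing the gradient" strategy from the description above can be sketched in isolation. This is a hedged illustration, not Stan's code: a callable returning an analytic gradient stands in for the model, and the row-major `hessian[i*D + j]` layout matches the parameter documentation.

```cpp
#include <cmath>
#include <functional>
#include <vector>

// Hessian by central-differencing a gradient function:
// hessian[i*D + j] ~ (grad_j(x + eps*e_i) - grad_j(x - eps*e_i)) / (2*eps).
// grad_f is a stand-in for a model's gradient (an illustrative assumption).
void grad_hess_sketch(
    const std::function<std::vector<double>(const std::vector<double>&)>& grad_f,
    std::vector<double> x,
    std::vector<double>& hessian,
    double epsilon = 1e-5) {
  std::size_t D = x.size();
  hessian.assign(D * D, 0.0);
  for (std::size_t i = 0; i < D; ++i) {
    double orig = x[i];
    x[i] = orig + epsilon;
    std::vector<double> gp = grad_f(x);
    x[i] = orig - epsilon;
    std::vector<double> gm = grad_f(x);
    x[i] = orig;
    for (std::size_t j = 0; j < D; ++j)
      hessian[i * D + j] = (gp[j] - gm[j]) / (2.0 * epsilon);  // row i of Hessian
  }
}
```

The loop over rows is what produces the documented O(params_r.size()^2) cost: D gradient evaluations, each of length D.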
`void stan::model::grad_tr_mat_times_hessian(const M& model, const Eigen::Matrix<double, Eigen::Dynamic, 1>& x, const Eigen::Matrix<double, Eigen::Dynamic, Eigen::Dynamic>& X, Eigen::Matrix<double, Eigen::Dynamic, 1>& grad_tr_X_hess_f, std::ostream* msgs = 0)`
`void stan::model::gradient(const M& model, const Eigen::Matrix<double, Eigen::Dynamic, 1>& x, double& f, Eigen::Matrix<double, Eigen::Dynamic, 1>& grad_f, std::ostream* msgs = 0)`
`void stan::model::gradient_dot_vector(const M& model, const Eigen::Matrix<double, Eigen::Dynamic, 1>& x, const Eigen::Matrix<double, Eigen::Dynamic, 1>& v, double& f, double& grad_f_dot_v, std::ostream* msgs = 0)`
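gradient_dot_vector() writes the function value into `f` and the dot product of the gradient with `v` into `grad_f_dot_v`. As a standalone sketch (not Stan's implementation), the same directional derivative can be approximated with a single central difference along `v`, which costs only two function evaluations regardless of dimension:

```cpp
#include <cmath>
#include <functional>
#include <vector>

// Directional derivative sketch: grad_f(x) . v ~ (f(x + eps*v) - f(x - eps*v)) / (2*eps).
// The callable f is a stand-in for a model's log probability (illustrative only).
void gradient_dot_vector_sketch(
    const std::function<double(const std::vector<double>&)>& f,
    const std::vector<double>& x,
    const std::vector<double>& v,
    double& fx, double& grad_f_dot_v,
    double epsilon = 1e-6) {
  std::vector<double> xp(x), xm(x);
  for (std::size_t k = 0; k < x.size(); ++k) {
    xp[k] += epsilon * v[k];   // step forward along v
    xm[k] -= epsilon * v[k];   // step backward along v
  }
  fx = f(x);
  grad_f_dot_v = (f(xp) - f(xm)) / (2.0 * epsilon);
}
```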
`void stan::model::hessian(const M& model, const Eigen::Matrix<double, Eigen::Dynamic, 1>& x, double& f, Eigen::Matrix<double, Eigen::Dynamic, 1>& grad_f, Eigen::Matrix<double, Eigen::Dynamic, Eigen::Dynamic>& hess_f, std::ostream* msgs = 0)`
`void stan::model::hessian_times_vector(const M& model, const Eigen::Matrix<double, Eigen::Dynamic, 1>& x, const Eigen::Matrix<double, Eigen::Dynamic, 1>& v, double& f, Eigen::Matrix<double, Eigen::Dynamic, 1>& hess_f_dot_v, std::ostream* msgs = 0)`
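A Hessian-vector product can be formed without building the full Hessian. As a hedged sketch of that idea (not Stan's implementation, which uses autodiff), central-differencing a gradient function along `v` gives H·v with just two gradient evaluations:

```cpp
#include <cmath>
#include <functional>
#include <vector>

// Hessian-vector product sketch:
// H(x) * v ~ (grad_f(x + eps*v) - grad_f(x - eps*v)) / (2*eps).
// grad_f is a stand-in for a model's gradient (illustrative assumption).
void hessian_times_vector_sketch(
    const std::function<std::vector<double>(const std::vector<double>&)>& grad_f,
    const std::vector<double>& x,
    const std::vector<double>& v,
    std::vector<double>& hess_f_dot_v,
    double epsilon = 1e-5) {
  std::vector<double> xp(x), xm(x);
  for (std::size_t k = 0; k < x.size(); ++k) {
    xp[k] += epsilon * v[k];
    xm[k] -= epsilon * v[k];
  }
  std::vector<double> gp = grad_f(xp), gm = grad_f(xm);
  hess_f_dot_v.resize(x.size());
  for (std::size_t k = 0; k < x.size(); ++k)
    hess_f_dot_v[k] = (gp[k] - gm[k]) / (2.0 * epsilon);
}
```

This matrix-free form is what makes Hessian-vector products usable in high dimensions, where storing the full D-by-D Hessian would be prohibitive.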
`double stan::model::log_prob_grad(const M& model, std::vector<double>& params_r, std::vector<int>& params_i, std::vector<double>& gradient, std::ostream* msgs = 0)`
Compute the gradient using reverse-mode automatic differentiation, writing the result into the specified gradient vector.
Template Parameters

- `propto`: True if calculation is up to proportion (double-only terms dropped).
- `jacobian_adjust_transform`: True if the log absolute Jacobian determinant of inverse parameter transforms is added to the log probability.
- `M`: Class of model.

Parameters

- [in] `model`: Model.
- [in] `params_r`: Real-valued parameters.
- [in] `params_i`: Integer-valued parameters.
- [out] `gradient`: Vector into which gradient is written.
- [in,out] `msgs`: Stream to which Stan programs write; defaults to 0.
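The reverse-mode strategy named above records operations on a tape during the forward evaluation, then propagates adjoints backward to obtain all partial derivatives in one sweep. The deliberately tiny tape sketch below is not `stan::agrad::var` (which is far more general); the `Node`, `leaf`, and `reverse_sweep` names are invented for illustration.

```cpp
#include <cmath>
#include <vector>

// One tape node: value, up to two parent indices, and the local partials
// of this node with respect to each parent.
struct Node { double val; int p1, p2; double d1, d2; };
static std::vector<Node> tape;

int leaf(double v) { tape.push_back({v, -1, -1, 0.0, 0.0}); return (int)tape.size() - 1; }
int mul(int a, int b) {
  tape.push_back({tape[a].val * tape[b].val, a, b, tape[b].val, tape[a].val});
  return (int)tape.size() - 1;
}
int add(int a, int b) {
  tape.push_back({tape[a].val + tape[b].val, a, b, 1.0, 1.0});
  return (int)tape.size() - 1;
}
int sin_(int a) {
  tape.push_back({std::sin(tape[a].val), a, -1, std::cos(tape[a].val), 0.0});
  return (int)tape.size() - 1;
}

// Reverse sweep: seed the output adjoint with 1 and push adjoints back to
// the leaves; adj[leaf] then holds the partial derivative of the output.
std::vector<double> reverse_sweep(int out) {
  std::vector<double> adj(tape.size(), 0.0);
  adj[out] = 1.0;
  for (int i = out; i >= 0; --i) {
    if (tape[i].p1 >= 0) adj[tape[i].p1] += adj[i] * tape[i].d1;
    if (tape[i].p2 >= 0) adj[tape[i].p2] += adj[i] * tape[i].d2;
  }
  return adj;
}
```

The key property, shared with Stan's autodiff, is that one forward evaluation plus one reverse sweep yields the gradient with respect to every input at once.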
`double stan::model::log_prob_grad(const M& model, Eigen::VectorXd& params_r, Eigen::VectorXd& gradient, std::ostream* msgs = 0)`
Compute the gradient using reverse-mode automatic differentiation, writing the result into the specified gradient vector.
Template Parameters

- `propto`: True if calculation is up to proportion (double-only terms dropped).
- `jacobian_adjust_transform`: True if the log absolute Jacobian determinant of inverse parameter transforms is added to the log probability.
- `M`: Class of model.

Parameters

- [in] `model`: Model.
- [in] `params_r`: Real-valued parameters.
- [out] `gradient`: Vector into which gradient is written.
- [in,out] `msgs`: Stream to which Stan programs write; defaults to 0.
`double stan::model::log_prob_propto(const M& model, std::vector<double>& params_r, std::vector<int>& params_i, std::ostream* msgs = 0)`
Helper function to calculate log probability for double scalars up to a proportion.
This implementation wraps the double values in stan::agrad::var and calls the model's log_prob() function with propto=true and the specified parameter for applying the Jacobian adjustment for transformed parameters.
Template Parameters

- `propto`: True if calculation is up to proportion (double-only terms dropped).
- `jacobian_adjust_transform`: True if the log absolute Jacobian determinant of inverse parameter transforms is added to the log probability.
- `M`: Class of model.

Parameters

- [in] `model`: Model.
- [in] `params_r`: Real-valued parameters.
- [in] `params_i`: Integer-valued parameters.
- [in,out] `msgs`: Stream to which Stan programs write; defaults to 0.
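"Up to a proportion" means additive terms that do not depend on the parameters may be dropped from the log density. A standalone illustration (not Stan's code) with a normal log density: the full version and the proportional version differ only by a constant, so their difference is the same at every parameter value.

```cpp
#include <cmath>

const double PI = 3.14159265358979323846;

// Full normal log density in mu (with y and sigma fixed).
double normal_lpdf_full(double y, double mu, double sigma) {
  return -0.5 * std::log(2.0 * PI) - std::log(sigma)
         - 0.5 * (y - mu) * (y - mu) / (sigma * sigma);
}

// Proportional version: the constant -0.5*log(2*pi) - log(sigma) is dropped,
// since it does not depend on the parameter mu.
double normal_lpdf_propto(double y, double mu, double sigma) {
  return -0.5 * (y - mu) * (y - mu) / (sigma * sigma);
}
```

Dropping such constants leaves gradients (and MCMC acceptance ratios, which use log-density differences) unchanged while saving computation.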
`double stan::model::log_prob_propto(const M& model, Eigen::VectorXd& params_r, std::ostream* msgs = 0)`
Helper function to calculate log probability for double scalars up to a proportion.
This implementation wraps the double values in stan::agrad::var and calls the model's log_prob() function with propto=true and the specified parameter for applying the Jacobian adjustment for transformed parameters.
Template Parameters

- `propto`: True if calculation is up to proportion (double-only terms dropped).
- `jacobian_adjust_transform`: True if the log absolute Jacobian determinant of inverse parameter transforms is added to the log probability.
- `M`: Class of model.

Parameters

- [in] `model`: Model.
- [in] `params_r`: Real-valued parameters.
- [in,out] `msgs`: Stream to which Stan programs write; defaults to 0.
`int stan::model::test_gradients(const M& model, std::vector<double>& params_r, std::vector<int>& params_i, double epsilon = 1e-6, double error = 1e-6, std::ostream& o = std::cout, std::ostream* msgs = 0)`
Test the log_prob_grad() function's ability to produce accurate gradients using finite differences.
This shouldn't be necessary when using autodiff, but is useful for finding bugs in hand-written code (or agrad).
Template Parameters

- `propto`: True if calculation is up to proportion (double-only terms dropped).
- `jacobian_adjust_transform`: True if the log absolute Jacobian determinant of inverse parameter transforms is added to the log probability.
- `M`: Class of model.

Parameters

- `model`: Model.
- `params_r`: Real-valued parameter vector.
- `params_i`: Integer-valued parameter vector.
- `epsilon`: Real-valued scalar saying how much to perturb. Defaults to 1e-6.
- `error`: Real-valued scalar saying how much error to allow. Defaults to 1e-6.
- `o`: Output stream for messages. Defaults to std::cout.
- `msgs`: Stream to which Stan programs write. Defaults to 0.
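The comparison test_gradients() performs can be sketched in isolation. This is a hedged, standalone illustration: plain callables stand in for the model and its gradient, and the `int` return value is treated here as a count of failing components, which is an assumption this documentation does not specify.

```cpp
#include <cmath>
#include <functional>
#include <vector>

// Compare an analytic (e.g. autodiff) gradient against central differences
// component by component; count the components whose discrepancy exceeds
// the error tolerance. Names and return semantics are illustrative.
int test_gradients_sketch(
    const std::function<double(const std::vector<double>&)>& f,
    const std::function<std::vector<double>(const std::vector<double>&)>& grad_f,
    std::vector<double> x,
    double epsilon = 1e-6,
    double error = 1e-6) {
  std::vector<double> g = grad_f(x);
  int num_failed = 0;
  for (std::size_t k = 0; k < x.size(); ++k) {
    double orig = x[k];
    x[k] = orig + epsilon;
    double fp = f(x);
    x[k] = orig - epsilon;
    double fm = f(x);
    x[k] = orig;
    double fd = (fp - fm) / (2.0 * epsilon);  // finite-difference reference
    if (std::fabs(g[k] - fd) > error) ++num_failed;
  }
  return num_failed;
}
```

As the description notes, this kind of check is mostly useful for hand-written gradients, since autodiff gradients are exact up to floating-point rounding.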