DRAM
The DRAM class is imported using the following command:
>>> from UQpy.sampling.mcmc.DRAM import DRAM
- class DRAM(pdf_target=None, log_pdf_target=None, args_target=None, burn_length=0, jump=1, dimension=None, seed=None, save_log_pdf=False, concatenate_chains=True, initial_covariance=None, covariance_update_rate=100, scale_parameter=None, delayed_rejection_scale=0.2, save_covariance=False, random_state=None, n_chains=None, nsamples=None, nsamples_per_chain=None)
Delayed Rejection Adaptive Metropolis algorithm [28] [10]
In this algorithm, the proposal density is Gaussian and its covariance C is updated from the samples as

C = scale_parameter * C_sample

where C_sample is the sample covariance. In addition, a delayed rejection scheme is applied, i.e., if a candidate is not accepted, a second candidate is generated from the proposal with covariance delayed_rejection_scale ** 2 * C.
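As an illustration of this adaptation rule, the following minimal NumPy sketch computes the adapted and delayed-rejection proposal covariances (this is not the UQpy source; the variable names simply mirror the parameters documented below, and the chain states are stand-in random draws):

```python
import numpy as np

dim = 2
scale_parameter = 2.38**2 / dim      # default scale_parameter
delayed_rejection_scale = 0.2        # default delayed_rejection_scale (1/5)

rng = np.random.default_rng(0)
chain = rng.normal(size=(500, dim))  # stand-in for the chain states so far

C_sample = np.cov(chain, rowvar=False)   # sample covariance of the chain
C = scale_parameter * C_sample           # adapted proposal covariance
C_dr = delayed_rejection_scale**2 * C    # second-stage (delayed rejection) covariance
```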
- Parameters:
pdf_target (Union[Callable, list[Callable], None]) – Target density function from which to draw random samples. Either pdf_target or log_pdf_target must be provided (the latter should be preferred for better numerical stability).
If pdf_target is a callable, it refers to the joint pdf to sample from; it must take at least one input x, which contains the point(s) at which to evaluate the pdf. Within MCMC the pdf_target is evaluated as p(x) = pdf_target(x, *args_target), where x is a numpy.ndarray of shape (nsamples, dimension) and args_target are additional positional arguments that are provided to MCMC via its args_target input.
If pdf_target is a list of callables, it refers to independent marginals to sample from. The marginal in dimension j is evaluated as p_j(xj) = pdf_target[j](xj, *args_target[j]), where x is a numpy.ndarray of shape (nsamples, dimension).
log_pdf_target (Union[Callable, list[Callable], None]) – Logarithm of the target density function from which to draw random samples. Either pdf_target or log_pdf_target must be provided (the latter should be preferred for better numerical stability). Same comments as for input pdf_target.
args_target (Optional[tuple]) – Positional arguments of the pdf / log-pdf target function. See pdf_target.
burn_length (int) – Length of burn-in, i.e., the number of samples at the beginning of the chain to discard (note: no thinning during burn-in). Default is 0 (no burn-in).
jump (int) – Thinning parameter, used to reduce correlation between samples. Setting jump=n corresponds to skipping n-1 states between accepted states of the chain. Default is 1 (no thinning).
dimension (Optional[int]) – A scalar value defining the dimension of the target density function. Either dimension and n_chains or seed must be provided.
seed – Seed of the Markov chain(s), shape (n_chains, dimension). Default: zeros(n_chains x dimension). If seed is not provided, both n_chains and dimension must be provided.
save_log_pdf (bool) – Boolean that indicates whether to save log-pdf values along with the samples. Default: False.
concatenate_chains (bool) – Boolean that indicates whether to concatenate the chains after a run, i.e., samples are stored as a numpy.ndarray of shape (nsamples * n_chains, dimension) if True, or (nsamples, n_chains, dimension) if False. Default: True.
n_chains (Optional[int]) – The number of Markov chains to generate. Either dimension and n_chains or seed must be provided.
initial_covariance (Optional[float]) – Initial covariance for the Gaussian proposal distribution. Default: I(dim).
covariance_update_rate (float) – Rate at which the covariance is updated, i.e., every k0 iterations. Default: 100.
scale_parameter (Optional[float]) – Scale parameter for covariance updating. Default: 2.38^2/dim.
delayed_rejection_scale (float) – Scale parameter for delayed rejection. Default: 1/5.
save_covariance (bool) – If True, the updated covariance is saved in the attribute adaptive_covariance. Default: False.
random_state (Union[None, int, RandomState]) – Random seed used to initialize the pseudo-random number generator. Default is None.
nsamples (Optional[int]) – Number of samples to generate.
nsamples_per_chain (Optional[int]) – Number of samples to generate per chain.
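The two mechanisms described above, covariance adaptation and delayed rejection, can be sketched together in a minimal, self-contained NumPy loop. This is an illustrative re-implementation under simplifying assumptions (a single chain, symmetric Gaussian proposals, and a simplified second-stage acceptance ratio rather than the full delayed-rejection formula), not the UQpy code; for real use, instantiate the DRAM class documented above.

```python
import numpy as np

def dram_sketch(log_pdf, x0, nsamples, scale_parameter=None,
                delayed_rejection_scale=0.2, covariance_update_rate=100,
                rng=None):
    """Minimal delayed-rejection adaptive Metropolis sketch (illustrative only)."""
    rng = np.random.default_rng(rng)
    x = np.asarray(x0, dtype=float)
    dim = x.size
    if scale_parameter is None:
        scale_parameter = 2.38**2 / dim          # default from the parameter list
    C = np.eye(dim)                              # initial_covariance default: I(dim)
    logp = log_pdf(x)
    samples = np.empty((nsamples, dim))
    for i in range(nsamples):
        # Stage 1: Gaussian proposal with the current covariance C.
        y = rng.multivariate_normal(x, C)
        logp_y = log_pdf(y)
        if np.log(rng.uniform()) < logp_y - logp:
            x, logp = y, logp_y
        else:
            # Stage 2 (delayed rejection): shrunken proposal covariance.
            # NOTE: the true delayed-rejection acceptance ratio also involves
            # the rejected first-stage candidate; this plain Metropolis ratio
            # is a simplification for illustration.
            z = rng.multivariate_normal(x, delayed_rejection_scale**2 * C)
            logp_z = log_pdf(z)
            if np.log(rng.uniform()) < logp_z - logp:
                x, logp = z, logp_z
        samples[i] = x
        # Adaptation: every covariance_update_rate iterations, rescale the
        # sample covariance of the chain so far.
        if (i + 1) % covariance_update_rate == 0:
            C = scale_parameter * np.cov(samples[:i + 1], rowvar=False)
            C += 1e-8 * np.eye(dim)              # jitter keeps C positive definite

    return samples

# Usage: sample a standard 2-d Gaussian from its log-pdf (up to a constant).
samples = dram_sketch(lambda v: -0.5 * np.sum(np.asarray(v)**2),
                      x0=[0.0, 0.0], nsamples=2000, rng=0)
```

The second-stage proposal uses covariance delayed_rejection_scale**2 * C, matching the rule stated in the class description; the jitter term is a common numerical safeguard, not part of the documented defaults.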