What to do when the model converges poorly

Created: June 11, 2023 9:26 AM Last edited by: Pan Wanke Last edited time: June 11, 2023 9:26 AM Owner: Pan Wanke

General remedies

1. More samples, more burn-in, more thinning

2. Simplify the model, e.g. drop inter-trial variability parameters

In the simplest case you just need to run a longer chain with more burn-in and more thinning. E.g.:

model.sample(10000, burn=5000, thin=5)

This will cause the first 5000 samples to be discarded. Of the remaining 5000 samples, only every 5th sample will be saved. Thus, after sampling, our trace will have a length of 1000 samples.
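The burn-in and thinning arithmetic above can be sketched with a small helper (hypothetical, not part of HDDM):

```python
def trace_length(n_samples, burn, thin):
    """Number of samples kept after discarding the burn-in
    and keeping only every `thin`-th remaining sample."""
    return (n_samples - burn) // thin

# 10000 drawn, first 5000 discarded, every 5th of the rest saved -> 1000
print(trace_length(10000, burn=5000, thin=5))
```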

You might also want to find a good starting point for running your chains. This is commonly achieved by finding the maximum a posteriori (MAP) estimate via optimization. Before sampling, simply call:

model.find_starting_values()

which will set the starting values to the MAP. Then sample as you would normally. This is a good idea in general.
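To see why starting at the MAP helps, here is a minimal, self-contained sketch of the idea in plain Python (a toy model, not HDDM internals): find the MAP of a simple one-parameter posterior by crude grid search, then initialize a Metropolis chain there so sampling begins inside the high-probability region.

```python
import math
import random

random.seed(0)
data = [random.gauss(2.0, 1.0) for _ in range(200)]

def log_post(mu):
    # Gaussian likelihood (known sigma = 1) with a flat prior on mu,
    # so the MAP coincides with the sample mean.
    return -0.5 * sum((x - mu) ** 2 for x in data)

# Crude "optimization": grid search for the maximum a posteriori value.
grid = [i / 100 for i in range(-500, 501)]
mu_map = max(grid, key=log_post)

# Metropolis chain initialized at the MAP instead of an arbitrary point,
# so no time is wasted wandering in from the tails.
chain = [mu_map]
for _ in range(2000):
    proposal = chain[-1] + random.gauss(0.0, 0.3)
    if math.log(random.random()) < log_post(proposal) - log_post(chain[-1]):
        chain.append(proposal)
    else:
        chain.append(chain[-1])
```

In real use the optimization step is done for you by `find_starting_values()`; the chain started at the MAP needs far less burn-in than one started at a random point.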

If that still does not work, you might want to consider simplifying your model. Certain parameters are just notoriously slow to converge, especially the inter-trial variability parameters. The reason is that individual subjects often do not provide enough information to meaningfully estimate these parameters on a per-subject basis.

A last resort: fit only group-level parameters

One way around this is to not even try to estimate individual subject parameters and instead use only group nodes. This can be achieved via the group_only_nodes keyword argument, which takes a list of the non-converging parameters, e.g. (assuming the inter-trial variability parameters sv, st and sz are the problematic ones):

model = hddm.HDDM(data, group_only_nodes=['sv', 'st', 'sz'])

A 2011 study could be cited here.
