Do we still need to transform the bias (z)?
Created: June 11, 2023 9:26 AM Last edited by: Pan Wanke Last edited time: June 11, 2023 9:26 AM Owner: Pan Wanke
Does the z (bias) parameter have to be transformed with a formula, and why can the inverse logit link function no longer be used in regression models? Answer: no transformation is needed anymore.
https://groups.google.com/g/hddm-users/c/k8dUBepPyl8/m/8HuUjLOBAAAJ
Thread: HDDMRegressor options (Google Groups)
An important point has come to light about estimating regression models on the starting point, which I believe led to the issue you identified in this thread. (Ultimately you corrected it by choosing a different condition for your baseline, but it turns out that there was an actual problem in the first place, in contrast to what I thought.)
The issue is that your regression used the **inverse logit transformation**. This is meant to constrain z between 0 and 1, which is sensible, and is what we had used in our tutorial example for stimulus-coded regression. But since then, HDDM now applies that transform to the **prior** for z (for all models with z, including the intercept of regression models), which constrains it. Applying the transform again in the link function leads to a bias (because the invlogit prior on an invlogit-transformed variable then forces the intercept to be > 0.5).
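The bias described above can be seen with a few lines of plain Python (no HDDM needed): the inverse logit maps any real number into (0, 1), so applying it a second time confines the result to invlogit((0, 1)), which lies strictly above 0.5. This is a minimal numeric sketch; the value z = 0.3 is just an illustrative example.

```python
import math

def invlogit(x: float) -> float:
    """Standard inverse logit (logistic) function: maps R -> (0, 1)."""
    return 1.0 / (1.0 + math.exp(-x))

# Suppose the true starting point is z = 0.3, i.e. a latent (logit-scale)
# value of log(0.3 / 0.7).
true_z = 0.3
latent = math.log(true_z / (1 - true_z))

once = invlogit(latent)             # single transform recovers 0.3
twice = invlogit(invlogit(latent))  # double transform: forced above 0.5

print(f"single invlogit: {once:.3f}")   # recovers the true z of 0.3
print(f"double invlogit: {twice:.3f}")  # always > 0.5, whatever the input
```

Because `invlogit(latent)` is already in (0, 1), the second application lands in (0.5, ~0.73), so no intercept below 0.5 can ever be recovered under the double transform.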
So, with recent versions of HDDM, one should not use an inverse logit transform on z in the link function for regression models (unless they change the prior in the guts of their version of HDDM). It is sufficient to use the constrained prior on the intercept with the regular linear link function (any extreme values of z that would go out of the [0, 1] bounds on the full regression would get rejected by the sampler anyway). This also makes z regression coefficients more comparable to those for other model parameters (which are all usually linear), and makes the coefficients easier to interpret.
I confirmed this was the culprit for your original case that you shared in this thread, by simply refitting the model to data in which the baseline condition has z < 0.5 and using the identity link function for z (i.e. lambda x: x), and it works properly (recovers z intercepts below and above 0.5). I also confirmed that the original tutorial code with model recovery on stim-coded regression would fail if run as originally specified (due to the altered prior), but that it works properly when the link function is fixed (in that case still needing to apply the 1-z swap). The upshot is that there is no bug in HDDM itself, but just that this particular tutorial code was outdated; I just updated it on GitHub (the old docs are still on ski, will fix that).
Note, for anyone who has performed a regression on starting points before and used invlogit in the link function: if you are concerned, you can check your results. If you got even a single z < 0.5 (after transforming back) for any subject/condition, your results should be OK (i.e., it would mean you did it in an earlier version of HDDM without that prior, so the invlogit was applied just once).
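The sanity check above can be sketched in a few lines. This is a hedged illustration, not HDDM code: the intercept values here are fabricated stand-ins for the link-scale z intercepts you would read off your own fitted model.

```python
import math

def invlogit(x: float) -> float:
    return 1.0 / (1.0 + math.exp(-x))

# Hypothetical per-subject z intercepts on the link scale (made-up values;
# in practice these would come from your fitted model's estimates).
z_intercepts_link_scale = [-0.4, 0.1, -0.9, 0.3]

# Transform back to the (0, 1) starting-point scale.
z_back_transformed = [invlogit(v) for v in z_intercepts_link_scale]

# The double-transform bias forces every z above 0.5, so finding even one
# back-transformed value below 0.5 means the bias could not have occurred.
any_below_half = any(z < 0.5 for z in z_back_transformed)
print("any z < 0.5:", any_below_half)
```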
h/t: Ian Krajbich and his lab for first noticing the problem with biased estimates. Thanks!
Indeed, z no longer needs to be transformed.
Regarding what link function to use for z: in the past, it was suggested to use an inverse logit, but that is now already incorporated in the prior. Therefore, one should instead just use the linear link function (lambda x: x; this just means the sampler will directly estimate z instead of a transform of it). See this thread. For v, the link function is usually also identity/linear, because v is not constrained.
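Putting this together, a regression on z (and v) would use identity link functions throughout. Below is a hedged sketch in the HDDMRegressor style of `{'model', 'link_func'}` descriptor dicts; the formula strings and condition name `cond` are illustrative, and the actual model fit is shown only in comments since it requires HDDM and real data.

```python
# Identity (linear) link for z: the sampler estimates z directly, and the
# constrained prior keeps it in [0, 1]. Do NOT wrap z in invlogit here.
z_reg = {'model': 'z ~ 1 + C(cond)', 'link_func': lambda x: x}

# v is unconstrained, so identity is the usual choice there as well.
v_reg = {'model': 'v ~ 1 + C(cond)', 'link_func': lambda x: x}

# With hddm installed and `data` loaded, the fit would look roughly like
# (not executed here):
# import hddm
# m = hddm.HDDMRegressor(data, [z_reg, v_reg], include='z')
# m.sample(2000, burn=200)

# The identity link simply passes values through unchanged:
print(z_reg['link_func'](0.42))
```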