This is the R code I used for my MPhil thesis. The script uses Hierarchical Bayesian Modelling (hBayesDM; Ahn et al., 2017) to fit data from a Probabilistic Reversal Learning (PRL) task to several versions of reinforcement learning models. Fit indices are compared to determine a winning model, which is then used to estimate each participant's parameters. These estimates serve as regressors in the linear models of the EEG signals. The candidate models are (a fitting and comparison sketch follows this list):
- Win-Stay-Loss-Switch Model (prl_wsls_multipleB)
- Rescorla-Wagner Model (prl_delta_multipleB)
- Fictitious Update Model (prl_fictitious_multipleB)
- Reward-Punishment Model (prl_rp_multipleB)
- Reward-Punishment Fictitious Update Model (prl_fictitious_rp_multipleB)
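
A minimal sketch of the fitting and comparison step, assuming the data live in a hypothetical tab-delimited file `prl_data.txt` in hBayesDM's long format for multipleB models (columns `subjID`, `block`, `choice`, `outcome`). Two of the built-in multipleB models are shown; the remaining models are fitted with the same call pattern (the WSLS and delta variants may be custom functions outside the released package). The sampler settings are illustrative, not the ones used in the thesis.

```r
library(hBayesDM)

# Fit two of the candidate models; the others follow the same call pattern
fit_fictitious <- prl_fictitious_multipleB(
  data = "prl_data.txt",   # hypothetical data file
  niter = 4000, nwarmup = 2000, nchain = 4, ncore = 4
)
fit_rp <- prl_rp_multipleB(
  data = "prl_data.txt",
  niter = 4000, nwarmup = 2000, nchain = 4, ncore = 4
)

# Compare fit indices across models; the lowest LOOIC marks the winning model
printFit(fit_fictitious, fit_rp, ic = "looic")
```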
The models estimate the following parameters (see the extraction sketch after this list):
- Learning rate (shared, or separate for reward and non-reward trials, depending on the model)
- Inverse Temperature
- Indecision point
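
Once a winning model is chosen, each participant's posterior-mean parameter estimates can be read from the fit object's `allIndPars` slot and exported for the EEG regression step. A sketch, assuming the fictitious update model won (its parameters are `eta`, `alpha`, and `beta`); the output filename is hypothetical:

```r
# One row per participant: subjID plus posterior means of eta (learning rate),
# alpha (indecision point), and beta (inverse temperature)
ind_pars <- fit_fictitious$allIndPars
head(ind_pars)

# Export the estimates for use as regressors in the linear models of the EEG signals
write.csv(ind_pars, "winning_model_parameters.csv", row.names = FALSE)
```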