Massachusetts Institute of Technology (MIT) neuroscientists studied mouse behavior in a reward-learning task, in which mice were trained to turn a wheel left or right to receive a reward.
In the task, the rewarded side switched every 15-25 turns, dividing each session into "blocks." The team observed that the mice were using more than one strategy within each block.
To disentangle which strategy was in use at any given moment, the team turned to a Hidden Markov Model (HMM), a statistical model that can infer which of several unobserved states is generating the observed data.
The team first had to adapt the HMM to explain choice transitions over the course of blocks.
Computational simulations of task performance using the adapted "blockHMM" showed that the algorithm can recover the true hidden states of an artificial agent.
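To illustrate the general idea behind this validation step, the sketch below simulates an artificial agent that switches between two hidden strategies (one mostly correct, one near-random) and then decodes the hidden state sequence from the observed trial outcomes with the standard Viterbi algorithm. This is a generic two-state HMM written for illustration, not the authors' blockHMM; all parameters and names are hypothetical.

```python
import math
import random

random.seed(0)

# Hypothetical parameters for illustration only (not from the study):
# hidden state 0 = a mostly-correct strategy, state 1 = near-random guessing.
TRANS = [[0.95, 0.05], [0.05, 0.95]]   # sticky state-transition probabilities
EMIT = [[0.1, 0.9], [0.5, 0.5]]        # P(outcome | state); outcome 1 = rewarded choice

def simulate(n_trials):
    """Generate a hidden strategy sequence and the observed trial outcomes."""
    states, obs, s = [], [], 0
    for _ in range(n_trials):
        states.append(s)
        obs.append(1 if random.random() < EMIT[s][1] else 0)
        if random.random() > TRANS[s][s]:  # occasionally switch strategy
            s = 1 - s
    return states, obs

def viterbi(obs):
    """Most likely hidden-state path given the observations (log-space Viterbi)."""
    k = len(TRANS)
    # Initialize with a uniform prior over states.
    logp = [math.log(0.5) + math.log(EMIT[s][obs[0]]) for s in range(k)]
    backptrs = []
    for o in obs[1:]:
        new_logp, ptrs = [], []
        for s in range(k):
            prev = max(range(k), key=lambda p: logp[p] + math.log(TRANS[p][s]))
            ptrs.append(prev)
            new_logp.append(logp[prev] + math.log(TRANS[prev][s])
                            + math.log(EMIT[s][o]))
        logp, backptrs = new_logp, backptrs + [ptrs]
    # Backtrack from the best final state.
    path = [max(range(k), key=lambda s: logp[s])]
    for ptrs in reversed(backptrs):
        path.append(ptrs[path[-1]])
    return path[::-1]

true_states, observations = simulate(500)
decoded = viterbi(observations)
accuracy = sum(t == d for t, d in zip(true_states, decoded)) / len(true_states)
```

Because the decoder is given the true generating parameters, it recovers most of the hidden sequence; in real data those parameters would themselves be fit, e.g. with expectation-maximization.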
The authors then used this technique to show the mice were persistently blending multiple strategies, achieving varied levels of performance.
From MIT News
Abstracts Copyright © 2023 SmithBucklin, Washington, D.C., USA