
Figure 4


Diagram showing the generative model (left) and the corresponding recognition (i.e., neuronal) model (right) used in the simulations. Left panel: the generative model, with a single cause v^(1), two dynamic states x_1^(1), x_2^(1) and four outputs y_1, …, y_4. The lines denote the dependencies of the variables on each other, summarised by the equations at the top (in this example, both equations were simple linear mappings). This is effectively a linear convolution model, mapping one cause to four outputs, which form the inputs to the recognition model (solid arrow). Right panel: the architecture of the corresponding recognition model. This mirrors the generative model, but here prediction-error units ε̃_u^(i) provide feedback. The combination of forward (red lines) and backward (black lines) influences enables recurrent dynamics that self-organise (according to the recognition equation μ̃_u^(i) = h(ε̃^(i), ε̃^(i+1))) to suppress and, ideally, eliminate prediction error, at which point the inferred causes and the real causes should correspond.
