The primary aim of this project is to develop large-scale brain-like machine learning algorithms.
Our work will focus on computational modelling of neural networks built using key brain-like principles derived from the Bayesian Confidence Propagation Neural Network (BCPNN) framework.
Our research group at KTH (Sweden) has over the years developed the BCPNN framework to offer mechanistic explanations for memory, learning, perception, and action selection, among other brain functions. In this project we are translating the insights and experience gained from this past cognitive and neuroscientific modelling into large-scale machine learning.
Our recent work in this direction has demonstrated that the feedforward version of BCPNN can perform representation learning, a key characteristic of most modern deep learning systems. The model was benchmarked on various machine learning datasets, where it compared favourably with many brain-like neural network models and performed on par with a backprop-trained multi-layer perceptron.
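To illustrate the kind of learning rule involved, the sketch below shows a minimal Hebbian-Bayesian update in the BCPNN style: unit and co-activation probabilities are estimated with exponentially decaying traces, and weights are formed as the log ratio of co-activation to the product of marginals. This is a simplified illustration only; the trace scheme, parameter names, and time constant are assumptions, not the project's actual implementation.

```python
import numpy as np

def bcpnn_update(x, y, p_i, p_j, p_ij, tau=10.0):
    """One incremental update of the probability traces.

    x, y : pre- and postsynaptic activations in [0, 1]
    p_i, p_j : running estimates of unit activation probabilities
    p_ij : running estimate of the co-activation probability matrix
    tau : trace time constant (sets the effective learning rate)
    """
    a = 1.0 / tau
    p_i = p_i + a * (x - p_i)
    p_j = p_j + a * (y - p_j)
    p_ij = p_ij + a * (np.outer(x, y) - p_ij)
    return p_i, p_j, p_ij

def bcpnn_weights(p_i, p_j, p_ij, eps=1e-6):
    # Weight = log odds of co-activation vs. statistical independence;
    # bias = log prior of the postsynaptic unit. eps avoids log(0).
    w = np.log((p_ij + eps) / (np.outer(p_i, p_j) + eps))
    b = np.log(p_j + eps)
    return w, b
```

With this rule, units that co-activate more often than chance develop positive weights, while units that never co-activate develop strongly negative ones, which is what allows the network to build useful internal representations from input statistics alone.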
This work has greatly benefitted from HPC resources, previously at PDC (KTH, Sweden) and currently GPU resources at Vega (IZUM, Slovenia). We expect this project to contribute in both scientific and engineering domains: to the scientific understanding of brain function, and, on the engineering side, to machine learning applications and the development of neuromorphic hardware.