
Design of a Neuromorphic Memory

Reference number 2018-00949
Coordinator Xenergic AB
Funding from Vinnova SEK 300 000
Project duration May 2018 - November 2018
Status Completed
Venture Innovative Startups

Important results from the project

The goal of this project was to explore the performance improvements enabled by our memories. The analysis was conducted by integrating our memory solution into a commercial processor-based machine learning platform. Through benchmarking, we obtained reliable figures for the improvements in performance and energy efficiency. The project outcome shows that we can drastically improve on the state of the art. Since the computational cost of processor-based machine learning grows exponentially with classification complexity, we expect even higher gains for larger applications.

Expected long term effects

The initial goal of implementing a small-scale convolutional neural network (CNN) as a hardware accelerator was accomplished. Moreover, we implemented the benchmark on a processor-based system dedicated to machine learning applications. Implementing the benchmark on a commercial platform gave us the advantage of being able to compare our improvement against the state of the art. The outcome thus proves that significant improvements in performance and energy efficiency can be achieved by using our memories.

Approach and implementation

Xenergic gained access to a commercial platform for processor-based, hardware-accelerated machine learning. This allowed us to work at a higher abstraction level and to explore our ideas on realistic use cases. New processor instructions were implemented with the goal of offloading computationally expensive operations to a dedicated hardware accelerator. Our analysis verified that the most hardware-expensive operation performed by convolutional neural networks (CNNs) is the convolution, which accounted for by far the largest share of processor cycles.
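The dominance of the convolution can be illustrated with a back-of-the-envelope count of multiply-accumulate (MAC) operations per layer. The sketch below is not Xenergic's benchmark; the layer sizes and helper functions are hypothetical, chosen only to show how convolutional layers dwarf a fully connected classifier in arithmetic cost.

```python
def conv_macs(h, w, c_in, c_out, k):
    """MACs for a k x k convolution over an h x w x c_in input ('same' padding)."""
    return h * w * c_out * (k * k * c_in)

def fc_macs(n_in, n_out):
    """MACs for a fully connected layer."""
    return n_in * n_out

# Hypothetical layer sizes for a small image classifier (assumed, for illustration).
layers = {
    "conv1": conv_macs(32, 32, 3, 16, 3),   # 442,368 MACs
    "conv2": conv_macs(16, 16, 16, 32, 3),  # 1,179,648 MACs
    "fc":    fc_macs(8 * 8 * 32, 10),       # 20,480 MACs
}

total = sum(layers.values())
for name, macs in layers.items():
    print(f"{name}: {macs:>9} MACs ({100 * macs / total:.1f}%)")
```

Even in this toy network, the two convolutional layers account for almost 99% of the arithmetic, which is consistent with the cycle profile observed in the project and motivates offloading the convolution to a dedicated accelerator.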

The project description has been provided by the project members themselves; the text has not been reviewed by our editors.

Last updated 3 December 2018

