Energy Efficient DNN Compaction for Edge Deployment

September 27, 2023


Authors: Bijin Elsa Baby, Dipika Deb, Benuraj Sharma, Kirthika Vijayakumar, Satyajit Das

Deep Neural Networks (DNNs) have gained popularity in deep learning thanks to their learnable parameters, which drive both training and inference. Deploying these models on mobile and edge devices, however, which often have constrained hardware resources and power budgets, poses a considerable challenge.

Achieving real-time performance and energy efficiency therefore becomes imperative, necessitating the compression of DNN models. This research paper introduces a fixed-partition compaction method that exploits runs of consecutive zero and non-zero weights (parameters) within sparse DNN models.
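To give a rough sense of the idea (the paper's exact encoding is not reproduced here), the sketch below partitions a sparse weight vector into fixed-size blocks and stores each block as a run-length-style encoding of its consecutive zeros together with its non-zero values. The function names, the partition size of 8, and the encoding layout are illustrative assumptions, not the paper's scheme.

```python
import numpy as np

def compact_fixed_partitions(weights, partition_size=8):
    """Split a 1-D weight vector into fixed-size partitions and encode each
    partition as (zero-run lengths, non-zero values). Runs of consecutive
    zeros are stored as counts, so highly sparse partitions shrink a lot.
    Illustrative only; not the encoding used in the paper."""
    compacted = []
    for start in range(0, len(weights), partition_size):
        partition = weights[start:start + partition_size]
        runs, values = [], []        # runs[i] = zeros before values[i]
        zero_run = 0
        for w in partition:
            if w == 0:
                zero_run += 1
            else:
                runs.append(zero_run)
                values.append(float(w))
                zero_run = 0
        runs.append(zero_run)        # trailing zeros in this partition
        compacted.append((runs, values))
    return compacted

def decompact(compacted):
    """Expand the per-partition encoding back into a dense weight vector."""
    dense = []
    for runs, values in compacted:
        for zeros, value in zip(runs, values):
            dense.extend([0.0] * zeros)
            dense.append(value)
        dense.extend([0.0] * runs[-1])
    return np.array(dense)

# Round-trip check on a toy sparse weight vector.
w = np.array([0, 0, 1.5, 0, 0, 0, -2.0, 0, 0.3, 0, 0, 0, 0, 0, 0, 0])
encoded = compact_fixed_partitions(w, partition_size=8)
assert np.allclose(decompact(encoded), w)
```

Because the partition size is fixed, each partition can be decoded independently, which is the kind of regularity hardware-oriented compaction schemes rely on; the actual storage format and partition size in the paper may differ.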

The paper is the result of the collective work of our MAGIC cluster team at IIT Palakkad and was published by the Association for Computing Machinery (ACM).

Explore More

Oct 25 2024

Multi-Objective Design and Optimization of Hardware-Friendly Grid-Based Sparse MIMO Arrays

A design framework is proposed to optimize sparse MIMO (multiple-input, multiple-output) arrays for enhanced multi-target detection.

Jul 7 2023

EvoPrunerPool from MulticoreWare – NEWK Workshop Paper 

EvoPrunerPool, an Evolutionary Pruner for Convolutional Neural Network compression, frames filter pruning as a search problem: it searches for the optimal set of pruners from a pool of pre-existing filter pruners.

