PhD Project: Training Methods for DNNs Under Computational Resource Constraints

PhD Student: Morten Østergaard Nielsen

Supervisors: Jan Østergaard, Zheng-Hua Tan, Jesper Jensen

Over the last decade, we have seen Deep Neural Networks (DNNs) achieve state-of-the-art performance in various scientific fields and applications, e.g., image recognition, object detection, keyword spotting, and speech enhancement. This has marked the arrival of the modern era of Deep Learning. In the pursuit of improving the performance of modern DNNs, the number of required parameters has increased almost exponentially. Hence, modern DNNs with billions of parameters are very costly in terms of computational complexity, which prevents them from running on smaller computational devices, e.g., microcontrollers, hearing aids, and mobile phones.

Different approaches to reducing either the memory footprint or the number of mathematical operations of a DNN have been proposed since the early days of DNNs, when redundant parameters were pruned. However, reducing the size of a DNN to comply with given computational resource constraints will also significantly reduce the original network's performance. Despite all the progress in this area, the majority of modern DNNs remain computationally complex, and only very few can run on smaller computational devices. Thus, this remains an ongoing and important research topic. In this PhD project, we will study the training of DNNs under resource constraints, combining knowledge from information theory, data compression, and existing parameter-pruning methods to improve the performance/complexity trade-off.
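To illustrate the pruning idea referred to above, the sketch below removes the smallest-magnitude weights of a single weight matrix. This is a generic magnitude-pruning example, not a method developed in this project; the function name and the 90% sparsity level are chosen purely for illustration.

    import numpy as np

    def magnitude_prune(weights: np.ndarray, sparsity: float) -> np.ndarray:
        """Zero out the smallest-magnitude entries so that roughly
        `sparsity` (a fraction in [0, 1]) of the weights become zero."""
        flat = np.abs(weights).ravel()
        k = int(sparsity * flat.size)
        if k == 0:
            return weights.copy()
        # Magnitude threshold below which weights are treated as redundant.
        threshold = np.partition(flat, k - 1)[k - 1]
        mask = np.abs(weights) > threshold
        return weights * mask

    # Example: prune 90% of a random 256x256 weight matrix.
    w = np.random.randn(256, 256)
    w_pruned = magnitude_prune(w, sparsity=0.9)
    print(f"Fraction of zero weights: {np.mean(w_pruned == 0):.2f}")

Simple magnitude criteria like this reduce storage and the number of multiply-accumulate operations, but, as noted above, aggressive pruning typically degrades the original network's performance, which is exactly the trade-off this project targets.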