Research Topic


Graph Processing & Neural Network Accelerators

Current Researchers: Dr. Jiajun Li



Implementing graph processing or neural network algorithms on hardware platforms incurs problems such as poor locality, long random-access latencies, and unbalanced workloads. These problems are even more pronounced in commercial environments, where natural graphs and data loads are both large-scale and irregular. Domain-specific accelerators for graph processing and neural networks are therefore needed to achieve high-performance computing in hardware.
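As a concrete illustration (a minimal sketch, not the lab's code), consider a pull-style vertex update over a graph stored in CSR form. The indirect read through the column-index array is data-dependent and jumps across memory, and high-degree vertices stretch the inner loop, which is exactly the poor locality and workload imbalance described above. All names here are illustrative.

#include <stddef.h>

/* Illustrative CSR graph: row_ptr has n+1 entries, col_idx holds the
 * neighbor ID of every edge. */
typedef struct {
    size_t n;                 /* number of vertices */
    const size_t *row_ptr;
    const size_t *col_idx;
} csr_graph;

/* Pull-style update: each vertex accumulates values from its in-neighbors. */
void pull_update(const csr_graph *g, const float *src, float *dst)
{
    for (size_t v = 0; v < g->n; ++v) {
        float acc = 0.0f;
        /* High-degree vertices make this inner loop much longer than
         * average, causing unbalanced workloads across vertices. */
        for (size_t e = g->row_ptr[v]; e < g->row_ptr[v + 1]; ++e)
            acc += src[g->col_idx[e]];   /* irregular, cache-unfriendly read */
        dst[v] = acc;
    }
}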

In this research, we are working on minimizing the computational complexity of graph processing and deep learning algorithms by employing novel dataflow and preprocessing frameworks, which reduce redundant operations and fully exploit parallelism at the hardware level. We are also exploring efficient memory-allocation approaches: improving data reuse, enhancing computation throughput under limited bandwidth, and directly increasing memory bandwidth through custom architectural layouts. Our ultimate goal is to design high-performance, energy-efficient accelerators on FPGAs and ASICs without sacrificing computation accuracy.
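One common preprocessing idea along these lines, shown below only as a generic sketch rather than the lab's design, is to bucket edges by destination interval so that each bucket updates only a small slice of the result vector. On an accelerator that slice could reside in an on-chip buffer, converting scattered writes into local ones and improving data reuse. The TILE size and data layout are assumptions for this sketch.

#include <stdlib.h>
#include <string.h>

/* One edge of the graph in coordinate (COO) form. */
typedef struct { size_t src, dst; } edge;

#define TILE 4096   /* destination-interval size; illustrative value */

/* Counting-sort edges into destination tiles.  Returns an offsets array
 * (ntiles + 1 entries, caller frees): tile t occupies
 * out[offsets[t] .. offsets[t+1]) and touches only dst values in
 * [t*TILE, (t+1)*TILE), a slice small enough for a fast local buffer. */
size_t *tile_edges(const edge *edges, size_t m, size_t n, edge *out)
{
    size_t ntiles = (n + TILE - 1) / TILE;
    size_t *offsets = calloc(ntiles + 1, sizeof *offsets);

    for (size_t e = 0; e < m; ++e)          /* count edges per tile  */
        ++offsets[edges[e].dst / TILE + 1];
    for (size_t t = 0; t < ntiles; ++t)     /* prefix sum -> offsets */
        offsets[t + 1] += offsets[t];

    size_t *cursor = malloc(ntiles * sizeof *cursor);
    memcpy(cursor, offsets, ntiles * sizeof *cursor);
    for (size_t e = 0; e < m; ++e)          /* scatter edges into tiles */
        out[cursor[edges[e].dst / TILE]++] = edges[e];

    free(cursor);
    return offsets;
}
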
HPCAT Lab
High Performance Computing Architectures & Technologies Lab

Department of Electrical and Computer Engineering
School of Engineering and Applied Science
The George Washington University


800 22nd Street NW
Washington, DC 20052
United States of America 

Contact

Ahmed Louri, IEEE Fellow
David and Marilyn Karlgaard Endowed Chair Professor of ECE
Director, HPCAT Lab


Email: louri@gwu.edu
Phone: +1 (202) 994 8241