Director
Ahmed Louri, IEEE Life Fellow

David and Marilyn Karlgaard Endowed Chair Professor of ECE
Director, HPCAT Laboratory
Editor-in-Chief, IEEE Transactions on Sustainable Computing


Prof. Louri Receives The George Washington University 2021 Office of the Vice President for Research Distinguished Researcher Award!

Congratulations to Prof. Ahmed Louri, who was selected to receive the GWU 2021 Office of the Vice President for Research (OVPR) Distinguished Researcher Award. The OVPR Distinguished Researcher Award recognizes GWU faculty members who have made significant contributions in research and scholarship to the university and society. The award is given annually to just one GWU faculty member.


Prof. Louri is the recipient of the 2020 IEEE Computer Society’s Edward J. McCluskey Technical Achievement Award

Prof. Ahmed Louri is the recipient of the IEEE Computer Society 2020 Edward J. McCluskey Technical Achievement Award, “for pioneering contributions to the solution of on-chip and off-chip communication problems for parallel computing and manycore architectures.” (Award Video)


Lab Mission

We rely on computing to design systems for energy, transportation, finance, education, health, defense, entertainment, and overall wellness. However, today's computing systems are facing major challenges at both the technology and application levels. At the technology level, traditional scaling of device sizes has slowed down, and the reduction of cost per transistor is plateauing, making it increasingly difficult to extract more computer performance by simply adding more transistors to a chip. Power limits and reduced semiconductor reliability are making device scaling increasingly difficult, if not impossible, to leverage for performance in the future and across all platforms, including mobile, embedded systems, laptops, servers, and data centers.

Simultaneously, at the application level, we are entering a new computing era that demands a shift from an algorithm-centric computing world to a learning-based, data-intensive computing paradigm in which human capabilities are scaled and magnified.

To meet these ever-increasing computing needs and to overcome power density limitations, the computing industry has embraced parallelism (parallel computing) as the primary method for improving computer performance. Today, computing systems are being designed with tens to hundreds of computing cores integrated into a single chip, and hundreds to thousands of computing servers based on these chips are connected in data centers and supercomputers. However, power consumption remains a significant design problem, and such highly parallel systems still face substantial challenges in energy efficiency, performance, reliability, and security.

Additionally, the rapid rise of deep learning and artificial intelligence (AI), driven by large language models (LLMs) like ChatGPT, requires unprecedented computing power. While deep learning and AI offer numerous benefits, they also pose new challenges. This insatiable demand for computation translates into equally vast energy consumption, as the data centers powering these models are projected to consume up to 9% of the total electricity generated in the United States by 2030.

A new era of chip design is needed for the Age of AI and LLMs. Existing approaches, such as CPUs and GPUs, will not be sufficient as scalable and affordable solutions. New domain-specific chips and architectures must be designed to meet the rigorous demands of deep neural networks and the computationally intensive tasks of model training and inference.

Professor Louri and his team investigate novel parallel computer architectures and technologies that deliver highly reliable, high-performance, and energy-efficient solutions for essential application domains and societal needs, spanning highly parallel systems, ML, and AI applications. The research has far-reaching impacts on the computing industry and society at large. Current research topics include: (1) design of computer architectures and high-performance computing with emphasis on energy efficiency, reliability, performance scalability, and security; (2) design of scalable, power-efficient, reliable, and secure Networks-on-Chip (NoCs) for parallel computing systems and AI accelerators; (3) hardware-software co-design of reconfigurable, adaptable, and reliable deep neural network (DNN) and graph neural network (GNN) accelerators for various AI applications; (4) design of emerging interconnect technologies (optical, wireless, RF) for multicores, AI accelerators, and data center architectures; (5) exploration of approximate computing and model reduction techniques for large-scale and energy-efficient LLMs; (6) exploration of hybrid technologies (optical, electrical) for sustainable computing.


Current Research

We Are Hiring!

The HPCAT Laboratory is looking for new postdoctoral research fellows and Ph.D. students!


HPCAT Members

Jiajun Li, Former Post-Doctoral Scientist
Lei Yang, Former Post-Doctoral Scientist
Hao Zheng, Former PhD Student
Ke Wang, Former PhD Student
Yuechen Chen, Former PhD Student
Yuan Li, Former PhD Student
Yingnan Zhao, PhD Student
Jiaqi Yang, PhD Student
Juliana Curry, PhD Student
Parveen Ayoubi, Graduate Student
Qian Cai, Graduate Student
Jasmine Pillarisetti, Graduate Student
Isaac Bilsel, Undergraduate Student
Jonathan He, Undergraduate Student
Marie-Laure Brossay, Undergraduate Student
Sebastian Foubert, Undergraduate Student
Sphia Martinez, Undergraduate Student
Parmvir Chahal, Undergraduate Student

News Highlights

Check out the latest news about the HPCAT Lab.

HPCAT students have published refereed papers in top conferences

1) Yingnan Zhao, Ke Wang, and Ahmed Louri, “An Efficient Hardware Accelerator Design for Dynamic Graph Convolutional Network (DGCN) Inference,” to appear in Proceedings of the ACM/IEEE Design Automation Conference (DAC), San Francisco, CA, USA, June 23-27, 2024.
2) Jiaqi Yang, Hao Zheng, and Ahmed Louri, “Aurora: A Versatile and Flexible Accelerator for Generic Graph Neural Networks,” to appear in Proceedings of the IEEE International Parallel and Distributed Processing Symposium (IPDPS), San Francisco, CA, May 27-31, 2024.



Academic Research Partners

Avinash Karanth, Joseph K. Jachinowski Professor, Ohio University
Razvan Bunescu, Associate Professor, University of North Carolina at Charlotte
Savas Kaya, Professor, Ohio University
Dongsheng Brian Ma, Professor, University of Texas at Dallas
Fabrizio Lombardi, ITC Endowed Professor, Northeastern University
Hao Xin, Professor, University of Arizona
Jean-Luc Gaudiot, Distinguished Professor, University of California, Irvine
Hao Zheng, Assistant Professor, University of Central Florida
Ke Wang, Assistant Professor, University of North Carolina at Charlotte


Related Organizations

HPCAT Lab
High Performance Computing Architectures & Technologies Lab

Department of Electrical and Computer Engineering
School of Engineering and Applied Science
The George Washington University


800 22nd Street NW
Washington, DC 20052
United States of America 

Contact

Ahmed Louri, IEEE Life Fellow
David and Marilyn Karlgaard Endowed Chair Professor of ECE
Director, HPCAT Lab


Email: louri@gwu.edu
Phone: +1 (202) 994 8241