
WORLD-CLASS SUPERCOMPUTERS

Dr. Tao's research has been fortunate to utilize several state-of-the-art production HPC systems (supercomputers) across the U.S. To date, Dr. Tao's group has used Frontera at the Texas Advanced Computing Center (TACC), Bridges at the Pittsburgh Supercomputing Center (PSC), Summit at the Oak Ridge Leadership Computing Facility (OLCF), Cori at the National Energy Research Scientific Computing Center (NERSC), and Mira at the Argonne Leadership Computing Facility (ALCF) for experimental research activities.

BIG RED 200 @ INDIANA UNIVERSITY

Dr. Tao's group has access to Big Red 200 at Indiana University. Big Red 200 is an HPE Cray EX supercomputer designed to support scientific and medical research, and advanced research in artificial intelligence, machine learning, and data analytics. Big Red 200 features

  • 640 compute nodes, each equipped with 256 GB of memory and two 64-core, 2.25 GHz, AMD EPYC 7742 processors;

  • 64 GPU nodes, each with 256 GB of memory, a single 64-core, 2.0 GHz, AMD EPYC 7713 processor, and four NVIDIA A100 GPUs;

  • A theoretical peak performance (Rpeak) of nearly 7 PFLOPS (see the estimate sketched below).
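The Rpeak figure can be sanity-checked from the node specifications above. The short Python sketch below estimates the CPU and GPU contributions; the per-core FLOPs-per-cycle value and the A100 FP64 rates are standard vendor figures rather than numbers from this page, so treat the result as a rough estimate.

```python
# Rough Rpeak estimate for Big Red 200 from the published node counts.
# Assumptions (not from this page): an EPYC 7742 core sustains 16 FP64 FLOPs/cycle
# (2 x 256-bit FMA units), and an A100 peaks at 9.7 TFLOPS FP64
# (19.5 TFLOPS if FP64 tensor cores are counted).

CPU_NODES, CPUS_PER_NODE, CORES_PER_CPU, GHZ = 640, 2, 64, 2.25
FLOPS_PER_CYCLE = 16                      # assumed FP64 FLOPs/cycle/core

GPU_NODES, GPUS_PER_NODE = 64, 4
A100_FP64_TFLOPS = 9.7                    # 19.5 with FP64 tensor cores

cpu_pflops = CPU_NODES * CPUS_PER_NODE * CORES_PER_CPU * GHZ * FLOPS_PER_CYCLE / 1e6
gpu_pflops = GPU_NODES * GPUS_PER_NODE * A100_FP64_TFLOPS / 1e3

print(f"CPU partition ~ {cpu_pflops:.2f} PFLOPS")   # ~ 2.95 PFLOPS
print(f"GPU partition ~ {gpu_pflops:.2f} PFLOPS")   # ~ 2.48 PFLOPS
print(f"Total         ~ {cpu_pflops + gpu_pflops:.2f} PFLOPS")
```

Counting base clocks and standard FP64 rates gives roughly 5.4 PFLOPS; including the A100 FP64 tensor-core rate roughly doubles the GPU contribution, so the estimate brackets the ~7 PFLOPS figure quoted above.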

KAMIAK CLUSTER @ WASHINGTON STATE UNIVERSITY

Dr. Tao's group has a dedicated node on the Kamiak cluster at Washington State University (WSU). This node is equipped with:

  • two 28-core Intel Xeon 6238 CPUs

  • two NVIDIA A100 GPUs (40 GB)

  • 384 GB memory

  • 2 TB SSD local storage

In addition, Dr. Tao's group can access other partitions and nodes on Kamiak. More details about these partitions and nodes can be found at https://hpc.wsu.edu/kamiak-hpc/queue-list/.
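Kamiak is managed with Slurm, so the available partitions and the resources of a given node can be inspected from a login node. The Python sketch below is a minimal example built on the standard `sinfo` and `scontrol` commands; the node name in the commented-out call is hypothetical and would need to be replaced with the group's dedicated node.

```python
# Minimal sketch for inspecting Kamiak partitions and nodes from a login node.
# Assumes Slurm's command-line tools (sinfo, scontrol) are on PATH.
import subprocess

def list_partitions() -> str:
    """Summarize partitions: name, node count, CPUs, memory, and GRES (e.g., GPUs)."""
    return subprocess.run(
        ["sinfo", "-o", "%P %D %c %m %G"],
        capture_output=True, text=True, check=True,
    ).stdout

def show_node(node_name: str) -> str:
    """Dump the Slurm view of one node (CPUs, RealMemory, Gres, State)."""
    return subprocess.run(
        ["scontrol", "show", "node", node_name],
        capture_output=True, text=True, check=True,
    ).stdout

if __name__ == "__main__":
    print(list_partitions())
    # print(show_node("cn-example"))  # hypothetical node name
```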

PANTARHEI CLUSTER @ UNIVERSITY OF ALABAMA

Dr. Tao's group operates an experimental cluster, PantaRhei, together with Dr. Grey Nearing (Department of Geoscience) at UA.

The PantaRhei cluster comprises 1 login node, 6 CPU nodes, 1 GPU node, and 1 FPGA node, equipped in total with 296 Intel Xeon Scalable processor cores, 2.8 TB of memory, 32 TB of local storage, and a 108 TB file system.

  • CPU node: 2 Intel Xeon Gold 6148 Processors, 384 GB memory, 4 TB NVMe.

  • GPU node: 4 NVIDIA Tesla V100 GPUs, 2 Intel Xeon Gold 6148 Processors, 384 GB memory, 3.2 TB NVMe.

  • FPGA node: 1 Intel Arria 10 FPGA, 2 Intel Xeon Gold 6148 Processors, 384 GB memory, 1.6 TB NVMe.

  • Interconnect: all nodes are connected via a Mellanox FDR (56 Gb/s) InfiniBand switch (a simple bandwidth check is sketched below).
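To illustrate how the FDR interconnect might be exercised, here is a minimal mpi4py ping-pong sketch that measures point-to-point bandwidth between two ranks placed on different nodes. It assumes mpi4py and NumPy are available on PantaRhei, which this page does not state, and the launch command in the comment is just one example (Open MPI syntax); the local launcher and host placement may differ.

```python
# Minimal MPI ping-pong bandwidth sketch (assumes mpi4py + NumPy are installed).
# Run with 2 ranks on different nodes, e.g.:
#   mpirun -np 2 --map-by node python pingpong.py
from mpi4py import MPI
import numpy as np

comm = MPI.COMM_WORLD
rank = comm.Get_rank()

nbytes = 64 * 1024 * 1024                 # 64 MiB message
buf = np.zeros(nbytes, dtype=np.uint8)
reps = 20

comm.Barrier()
t0 = MPI.Wtime()
for _ in range(reps):
    if rank == 0:
        comm.Send(buf, dest=1)            # rank 0 -> rank 1
        comm.Recv(buf, source=1)          # rank 1 -> rank 0
    elif rank == 1:
        comm.Recv(buf, source=0)
        comm.Send(buf, dest=0)
elapsed = MPI.Wtime() - t0

if rank == 0:
    # Two messages per iteration; report achieved bandwidth in Gb/s to compare
    # against the 56 Gb/s FDR link rate (expect less after protocol overhead).
    gbits = 2 * reps * nbytes * 8 / 1e9
    print(f"~{gbits / elapsed:.1f} Gb/s point-to-point")
```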

ICEBOX CLUSTER @ UNIVERSITY OF ALABAMA

Dr. Tao's group manages the Icebox cluster together with the Remote Sensing Center at UA. The Icebox cluster has 18 nodes, including 1 login node, 16 compute nodes, and 1 storage node, equipped in total with 564 Intel Xeon Scalable processor cores, 3.2 TB of DDR4 memory, 20 TB of local storage, and an 800+ TB file system.
