Lorena A. Barba group

New teraflop/s GPU cluster coming to BU

Prof. Claudio Rebbi, Prof. Richard Brower, Bryan Marler (HP), Prof. Lorena Barba, Mark Hamilton (HP), Glenn Bresnahan (IS&T), Daniel Kamalic (ENG IT). Photo by Saana McDaniel, 2012.

BU's College of Engineering, Center for Computational Science (CCS), and Information Systems & Technology (IS&T) announce a significant expansion of the University's research-computing facilities with the acquisition of a GPU-accelerated cluster. At 80 teraflop/s of peak double-precision performance, the system will be the fastest ever deployed at BU.

The new computing resource was made possible by an investment of IS&T funds, a contribution from the CCS, and a donation by Hewlett Packard of the 160 GPU cards, valued at $320,000.

It was also made possible by the catalyzing effect, and the community-building, started by a few researchers at Boston University. Over the past four years, experiments with GPU hardware were seeded by an NSF EAGER stimulus grant to build a prototype "Experimental GPU Cluster for Fundamental Physics," awarded to Professor Richard Brower as principal investigator, with Lorena Barba and Claudio Rebbi as co-principal investigators.

The new system represents a build-out of the EAGER-supported BUNGEE (Boston University Networked GPU Experimental Environment) cluster. The HP donation was motivated by the expertise developed over the past few years thanks to the EAGER seed grant and the BUNGEE cluster.

"We are extremely happy that our project brought us to the stage where we are able to expand it and partner with IS&T to bring GPU technology to the whole university," said Prof. Brower.

The system, to be named BUDGE (Boston University Distributed GPU Environment), is being installed now. It consists of 20 HP servers, each with 12 Intel Xeon cores, 48 GB of RAM (soon to be doubled to 96 GB), and eight NVIDIA Tesla M2070 GPU accelerators with 6 GB of memory per accelerator, for a total of 160 GPUs in the deployment. Networking is provided by a Cisco 3500 gigabit Ethernet switch on the front end and a Mellanox QDR InfiniBand 40-gigabit switch on the back end.
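For readers who plan to compute on the cluster, the per-node configuration described above can be checked directly from the CUDA runtime. The following is a minimal sketch, not part of the announcement itself, of a device query a BUDGE user might run on a compute node; it assumes only the standard CUDA runtime API and a CUDA-capable compiler.

```c
// Minimal device-query sketch (illustrative only): list the GPUs visible on a
// node, which on a fully populated BUDGE node should be eight Tesla M2070
// cards with roughly 6 GB of global memory each.
#include <stdio.h>
#include <cuda_runtime.h>

int main(void)
{
    int count = 0;
    cudaError_t err = cudaGetDeviceCount(&count);
    if (err != cudaSuccess) {
        fprintf(stderr, "cudaGetDeviceCount failed: %s\n", cudaGetErrorString(err));
        return 1;
    }
    printf("GPUs visible on this node: %d\n", count);

    for (int d = 0; d < count; ++d) {
        cudaDeviceProp prop;
        cudaGetDeviceProperties(&prop, d);
        printf("  device %d: %s, %.1f GB global memory, compute capability %d.%d\n",
               d, prop.name,
               prop.totalGlobalMem / (1024.0 * 1024.0 * 1024.0),
               prop.major, prop.minor);
    }
    return 0;
}
```

Compiled with `nvcc` and run on a node, this prints one line per visible GPU, a quick way to confirm the card count and memory before launching larger jobs.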

To receive updates about the status of BUDGE and other GPU initiatives at Boston University, please see our website at http://blogs.bu.edu/gpu, where you can also subscribe to the gpu@bu.edu mailing list.
