Wednesday, November 28, 2007

Blue Gene/L

History
In December 1999, IBM announced a $100 million research initiative, a five-year effort to build a massively parallel computer to be applied to the study of biomolecular phenomena such as protein folding. The project has two main goals: to advance our understanding of the mechanisms behind protein folding via large-scale simulation, and to explore novel ideas in massively parallel machine architecture and software. The project should enable biomolecular simulations that are orders of magnitude larger than current technology permits. Major areas of investigation include how to use this novel platform to meet its scientific goals effectively, how to make such massively parallel machines more usable, and how to achieve performance targets at reasonable cost through novel machine architectures.
In November 2001, Lawrence Livermore National Laboratory joined IBM as a research partner for Blue Gene.
On September 29, 2004, IBM announced that a Blue Gene/L prototype at IBM Rochester (Minnesota) had overtaken NEC's Earth Simulator as the fastest computer in the world, with a speed of 36.01 TFLOPS on the Linpack benchmark, beating Earth Simulator's 35.86 TFLOPS. This was achieved with an 8-cabinet system, each cabinet holding 1,024 compute nodes. Upon doubling this configuration to 16 cabinets, the machine reached a speed of 70.72 TFLOPS in November 2004, taking first place on the Top500 list.
On March 24, 2005, the US Department of Energy announced that the Blue Gene/L installation at LLNL had broken its speed record, reaching 135.5 TFLOPS. This was achieved by doubling the number of cabinets to 32.
On October 27, 2005, LLNL and IBM announced that Blue Gene/L had once again broken its speed record, reaching 280.6 TFLOPS on Linpack upon reaching its final configuration of 65,536 compute nodes (i.e., 2^16 nodes) and an additional 1,024 I/O nodes in 64 air-cooled cabinets.
BlueGene/L is also the first supercomputer ever to sustain over 100 TFLOPS on a real-world application, namely a three-dimensional molecular dynamics code (ddcMD) simulating the solidification (nucleation and growth processes) of molten metal under high pressure and temperature conditions. This work won the 2005 Gordon Bell Prize.
On June 22, 2006, NNSA and IBM announced that Blue Gene/L had achieved 207.3 TFLOPS on a quantum chemical application (Qbox). [1]
On the June 2006 Top500 list, Blue Gene/L installations across several sites worldwide took 3 of the top 10 positions and 13 of the top 64. Three racks of BlueGene/L are housed at the San Diego Supercomputer Center and are available for academic research.
On November 14, 2006, at Supercomputing 2006 (SC06), Blue Gene/L was awarded first place in all classes of the HPC Challenge awards. [2]
On Apr 27, 2007, a team from the IBM Almaden Research Lab and the University of Nevada ran a simulation of half a mouse brain for ten seconds. [3]

Major features
The Blue Gene/L supercomputer is unique in the following aspects:

Trading the speed of processors for lower power consumption.
Dual processors per node with two working modes: co-processor mode (one user process per node; computation and communication work is shared by the two processors) and virtual node mode (two user processes per node)
System-on-a-chip design
A large number of nodes (scalable in increments of 1024 up to at least 65,536)
Three-dimensional torus interconnect with auxiliary networks for global communications, I/O, and management
Lightweight OS per node for minimum system overhead (computational noise)
Roughly equivalent to the combined processing power of a 2.4-kilometre-high pile of laptop computers.

Architecture
Each compute or I/O node is a single ASIC with associated DRAM memory chips. The ASIC integrates two 700 MHz PowerPC 440 embedded processors, each with a double-pipeline, double-precision floating-point unit (FPU), a cache sub-system with a built-in DRAM controller, and the logic to support multiple communication sub-systems. The dual FPUs give each BlueGene/L node a theoretical peak performance of 5.6 GFLOPS. The node's CPUs are not cache coherent with one another.
By integrating all essential sub-systems on a single chip, each compute or I/O node dissipates little power (about 17 watts, including DRAM). This allows very aggressive packaging of up to 1,024 compute nodes, plus additional I/O nodes, in a standard 19" cabinet within reasonable limits of electrical power supply and air cooling. The resulting FLOPS per watt, FLOPS per m² of floor space, and FLOPS per unit cost allow scaling up to very high aggregate performance.
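These figures are easy to sanity-check. The sketch below (my own arithmetic, assuming each core's dual FPU retires one fused multiply-add, i.e., two flops, per pipeline per cycle) reproduces the quoted per-node peak and the implied per-cabinet numbers:

#include <stdio.h>

int main(void) {
    /* Assumed breakdown of the quoted 5.6 GFLOPS peak: two cores, each
       with a dual-pipeline FPU doing one fused multiply-add (2 flops)
       per pipeline per cycle. */
    double clock_ghz = 0.7;
    double flops_per_core_per_cycle = 2 /* pipelines */ * 2 /* flops per FMA */;
    double node_gflops = 2 /* cores */ * flops_per_core_per_cycle * clock_ghz;

    double nodes = 1024;       /* compute nodes per cabinet */
    double node_watts = 17.0;  /* per the text, including DRAM */

    printf("per node:      %.1f GFLOPS\n", node_gflops);               /* 5.6  */
    printf("per cabinet:   %.2f TFLOPS\n", node_gflops * nodes / 1e3); /* 5.73 */
    printf("cabinet power: ~%.1f kW\n", node_watts * nodes / 1e3);     /* 17.4 */
    printf("efficiency:    ~%.2f GFLOPS/W\n", node_gflops / node_watts);
    return 0;
}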
Each Blue Gene/L node is attached to three parallel communications networks: a 3D toroidal network for peer-to-peer communication between compute nodes, a collective network for collective communication, and a global interrupt network for fast barriers. The I/O nodes, which run the Linux operating system, provide communication with the outside world via an Ethernet network. Finally, a separate and private Ethernet network provides access to any node for configuration, booting, and diagnostics.
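To make the torus concrete: a node's neighbor along each axis is found with wraparound (modular) arithmetic, so every node has exactly six peers and the grid has no edges. A minimal illustration in C (the 8 x 8 x 16 shape is just an assumed example for a 1,024-node partition, not a documented configuration):

#include <stdio.h>

/* Neighbor along one axis of a torus: stepping past either edge wraps
   to the opposite edge (the second "+ dim" keeps the result positive). */
static int torus_neighbor(int coord, int step, int dim) {
    return ((coord + step) % dim + dim) % dim;
}

int main(void) {
    int dims[3] = {8, 8, 16};  /* assumed example shape for 1,024 nodes */
    int node[3] = {7, 0, 15};  /* a node on the corner of the grid */

    for (int axis = 0; axis < 3; axis++)
        printf("axis %d: -1 -> %d, +1 -> %d\n", axis,
               torus_neighbor(node[axis], -1, dims[axis]),
               torus_neighbor(node[axis], +1, dims[axis]));
    return 0;
}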
Blue Gene/L compute nodes use a minimal operating system supporting a single user program. Only a subset of POSIX calls is supported, and only one process may run at a time. Programmers must implement green threads to simulate local concurrency, as sketched below.
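A rough sketch of what such green threads look like: cooperative tasks multiplexed onto the single allowed process, each explicitly yielding control back to a scheduler. This uses the POSIX ucontext API for brevity; the text does not say whether the compute-node kernel exposes that API, so treat this purely as an illustration of the idea:

#include <stdio.h>
#include <ucontext.h>

/* Two cooperatively scheduled "green threads" inside one process. */
static ucontext_t main_ctx, task_ctx[2];
static char stacks[2][64 * 1024];

static void task(int id) {
    for (int step = 0; step < 3; step++) {
        printf("task %d, step %d\n", id, step);
        swapcontext(&task_ctx[id], &main_ctx); /* yield to the scheduler */
    }
}

int main(void) {
    for (int i = 0; i < 2; i++) {
        getcontext(&task_ctx[i]);
        task_ctx[i].uc_stack.ss_sp = stacks[i];
        task_ctx[i].uc_stack.ss_size = sizeof stacks[i];
        task_ctx[i].uc_link = &main_ctx; /* return here if the task ends */
        makecontext(&task_ctx[i], (void (*)(void))task, 1, i);
    }
    /* Round-robin scheduler: resume each task until all steps are done. */
    for (int round = 0; round < 3; round++)
        for (int i = 0; i < 2; i++)
            swapcontext(&main_ctx, &task_ctx[i]);
    return 0;
}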
Application development is usually performed in C, C++, or Fortran using MPI for communication. However, some scripting languages such as Ruby have been ported to the compute nodes.
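A typical Blue Gene/L-style program is therefore an ordinary MPI code. The sketch below uses only standard MPI calls (nothing BlueGene-specific), mapping the ranks onto a periodic 3D grid that mirrors the machine's torus:

#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv) {
    MPI_Init(&argc, &argv);

    int size;
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    /* Let MPI factor the ranks into a 3D grid; periods[] = 1 makes every
       dimension wrap around, i.e., a torus rather than a mesh. */
    int dims[3] = {0, 0, 0}, periods[3] = {1, 1, 1};
    MPI_Dims_create(size, 3, dims);

    MPI_Comm torus;
    MPI_Cart_create(MPI_COMM_WORLD, 3, dims, periods, 1, &torus);

    int rank, coords[3];
    MPI_Comm_rank(torus, &rank);            /* rank may be reordered */
    MPI_Cart_coords(torus, rank, 3, coords);

    /* Ranks of the two neighbors along the x axis, with wraparound. */
    int left, right;
    MPI_Cart_shift(torus, 0, 1, &left, &right);

    printf("rank %d at (%d,%d,%d): x-neighbors %d and %d\n",
           rank, coords[0], coords[1], coords[2], left, right);

    MPI_Comm_free(&torus);
    MPI_Finalize();
    return 0;
}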
To allow multiple programs to run concurrently, a Blue Gene/L system can be partitioned into electronically isolated sets of nodes. The number of nodes in a partition must be a positive integer power of 2, and must contain at least 2^5 = 32 nodes. The maximum partition is all nodes in the computer. To run a program on Blue Gene/L, a partition of the computer must first be reserved. The program is then run on all the nodes within the partition, and no other program may access nodes within the partition while it is in use. Upon completion, the partition nodes are released for future programs to use.
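The partition rule is simple to encode. A hypothetical validity check (the function name and structure are mine, not from the actual control system):

#include <stdbool.h>
#include <stdio.h>

/* A valid BlueGene/L partition size, per the text: a power of two no
   smaller than 2^5 = 32 nodes, up to the whole machine. */
static bool valid_partition(unsigned nodes, unsigned machine_nodes) {
    bool power_of_two = nodes != 0 && (nodes & (nodes - 1)) == 0;
    return power_of_two && nodes >= 32 && nodes <= machine_nodes;
}

int main(void) {
    unsigned machine = 65536;
    unsigned sizes[] = {16, 32, 48, 1024, 65536};
    for (int i = 0; i < 5; i++)
        printf("%6u nodes: %s\n", sizes[i],
               valid_partition(sizes[i], machine) ? "ok" : "invalid");
    return 0;
}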
With so many nodes, component failures are inevitable. The system is able to electrically isolate faulty hardware to allow the machine to continue to run.

BlueGene/L Plan 9 Support
A team with members from Bell Labs, IBM Research, Sandia National Laboratories, and Vita Nuova has completed a port of Plan 9 to Blue Gene/L. Plan 9 kernels run on both the compute nodes and the I/O nodes, and the Ethernet, torus, collective, barrier, and management networks are all supported. [4] [5]

Cyclops64 (BlueGene/C)

Main article: Cyclops64

Blue Gene/P
On June 26, 2007, IBM unveiled Blue Gene/P, the second generation of the Blue Gene supercomputer. Designed to run continuously at one petaflops, it can be configured to reach speeds in excess of three petaflops. Furthermore, it is at least seven times more energy efficient than any other supercomputer, an efficiency achieved by using many small, low-power chips connected through five specialized networks. Four 850 MHz PowerPC 450 processors are integrated on each Blue Gene/P chip. The one-petaflops Blue Gene/P configuration is a 294,912-processor, 72-rack system harnessed to a high-speed optical network. Blue Gene/P can be scaled to an 884,736-processor, 216-rack cluster to achieve three-petaflops performance. A standard Blue Gene/P configuration houses 4,096 processors per rack. [6]
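As a sanity check on those figures: at 4,096 processors per rack, 72 racks x 4,096 = 294,912 processors for the one-petaflops system, and 216 racks x 4,096 = 884,736 processors for the three-petaflops system, exactly as quoted. And if one assumes the PowerPC 450 cores sustain the same 4 flops per cycle as Blue Gene/L's dual FPUs (my assumption, not stated in the announcement), the peak works out to 294,912 x 850 MHz x 4 flops/cycle ≈ 1.0 petaflops, matching the design target.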

Blue Gene/Q

See also: IBM Roadrunner
