PARAM SANGANAK
"PARAM Sanganak", is established at IIT Kanpur under the build approach of the National Supercomputing Mission with a peak computing power of 1.3 peta FLOPS. PARAM Sanganak is designed and commissioned by C-DAC to cater to the computational needs of IIT Kanpur, and various Research and Engineering institutes of the region.
The cluster is configured with dense compute nodes based on Intel® Xeon® Scalable Family processors to achieve maximum computing performance per rack. The system serves the research needs of varied scientific domains such as weather and climate, oil and gas, seismic, life, and material sciences. The nodes are connected over a high-speed, low-latency 100 Gbps InfiniBand network for fast inter-node communication, and a dedicated 1 Gbps network is provided for cluster provisioning, management, and administration. The system has 2 petabytes of storage with a high-performance parallel file system and an archival subsystem.
C-DAC has indigenously designed and developed a system software stack that resides on the underlying supercomputing hardware to make usage of the system efficient and easy for users as well as administrators. The stack is based on open-source tools, customized and complemented with in-house software to suit the requirements, and is optimized for a high-performance computing environment to extract optimal performance from the computing resources. It provides a complete HPC environment comprising a Linux OS, cluster provisioning tools, parallel programming modules, scientific libraries, compilers, profilers, and debuggers.
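As an illustration of the parallel programming environment described above, the sketch below is a minimal MPI program in C. The compiler wrapper (mpicc) and the suggested module/run steps are assumptions about a typical MPI installation, not a description of the exact stack deployed on the cluster.

    /* hello_mpi.c - minimal MPI sketch (illustrative only).            */
    /* Assumed build and run steps, after loading an MPI module:        */
    /*   mpicc hello_mpi.c -o hello_mpi                                 */
    /*   mpirun -np 40 ./hello_mpi                                      */
    #include <mpi.h>
    #include <stdio.h>

    int main(int argc, char **argv)
    {
        int rank, size, name_len;
        char name[MPI_MAX_PROCESSOR_NAME];

        MPI_Init(&argc, &argv);                    /* start the MPI runtime     */
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);      /* id of this process        */
        MPI_Comm_size(MPI_COMM_WORLD, &size);      /* total number of processes */
        MPI_Get_processor_name(name, &name_len);   /* hostname of the node      */

        printf("Rank %d of %d running on %s\n", rank, size, name);

        MPI_Finalize();                            /* shut down the MPI runtime */
        return 0;
    }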
HPC2013
HPC2013 was initially a machine of 781 nodes acquired by IIT Kanpur; another 120 nodes were added in 2013-2014. The machine was ranked 118 in the Top500 list published at www.top500.org in June 2014.
In the initial ratings it had an Rpeak of 359.6 teraflops and an Rmax of 344.317 teraflops. Extensive testing of this machine was carried out, and we were able to achieve an efficiency of 96% on the LINPACK benchmark.
Each node has two Intel Xeon E5-2670 v2 (Ivy Bridge) CPUs at 2.5 GHz (10 cores each, i.e. 20 cores per node) on HP ProLiant SL230s Gen8 servers with 128 GB of RAM per node.
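For context, these figures hang together arithmetically, assuming the usual 8 double-precision floating-point operations per cycle per core for AVX-capable Ivy Bridge processors (an assumption, not a quoted specification):

\[
20\ \text{cores} \times 2.5\ \text{GHz} \times 8\ \tfrac{\text{FLOPs}}{\text{cycle}} = 400\ \text{GFLOPS per node},
\qquad
\approx 900\ \text{nodes} \times 400\ \text{GFLOPS} \approx 360\ \text{TFLOPS},
\]

which is in line with the quoted Rpeak of 359.6 teraflops, while
\( R_{\max}/R_{\text{peak}} = 344.317/359.6 \approx 0.957 \),
consistent with the roughly 96% LINPACK efficiency quoted above.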
The nodes are connected by Mellanox FDR InfiniBand chassis-based switches that provide 56 Gbps of throughput. The system also has 500 terabytes of storage with an aggregate performance of around 23 GB/s on writes and 15 GB/s on reads.
The storage is divided into a home file system (13 GB/s write / 7 GB/s read) and a scratch file system (22 GB/s write / 12 GB/s read). The home file system is around 169 terabytes and the scratch file system is around 332 terabytes.
HPC2010
HPC2010 is a machine of 376 nodes funded by DST, of which 368 nodes serve as compute nodes. The machine was ranked 369 in the Top500 list published at www.top500.org in June 2010.
In the initial ratings it had an Rpeak of 34.50 teraflops and an Rmax of 29.01 teraflops. Each node has two Intel Xeon X5570 (Nehalem) CPUs at 2.93 GHz (8 cores per node) on HP ProLiant BL280c Gen6 servers with 48 GB of RAM per node.
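These figures are consistent with a simple peak estimate, assuming 4 double-precision floating-point operations per cycle per core for Nehalem (an assumption, not a quoted specification):

\[
8\ \text{cores} \times 2.93\ \text{GHz} \times 4\ \tfrac{\text{FLOPs}}{\text{cycle}} \approx 93.8\ \text{GFLOPS per node},
\qquad
368 \times 93.8\ \text{GFLOPS} \approx 34.5\ \text{TFLOPS},
\]

matching the quoted Rpeak, while \( R_{\max}/R_{\text{peak}} = 29.01/34.50 \approx 84\% \) efficiency on the benchmark run.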
The nodes are connected by QLogic QDR InfiniBand federated switches that provide 40 Gbps of throughput. The system also has 100 terabytes of storage with an aggregate write performance of around 5 GB/s. The storage is divided into a home file system (1.7 GB/s write / 1.3 GB/s read) and a scratch file system (3.4 GB/s write / 2.4 GB/s read). The home file system is around 60 terabytes and the scratch file system is around 40 terabytes.
This cluster was later augmented with 96 nodes of two Intel Xeon E5-2670 (Sandy Bridge) CPUs at 2.6 GHz (16 cores per node) on HP ProLiant SL230s Gen8 servers with 64 GB of RAM per node, adding a theoretical 31 teraflops to the 2010 cluster. PBS Pro is the scheduler of choice. Though these nodes have FDR InfiniBand cards, they are connected seamlessly to the QDR InfiniBand fabric.
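As a rough check on the added capacity, again assuming 8 double-precision floating-point operations per cycle per core for AVX-capable Sandy Bridge processors (an assumption):

\[
96\ \text{nodes} \times 16\ \text{cores} \times 2.6\ \text{GHz} \times 8\ \tfrac{\text{FLOPs}}{\text{cycle}} \approx 32\ \text{TFLOPS},
\]

in the same ballpark as the quoted theoretical addition of about 31 teraflops.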