A High-Performance Computing (HPC) centre is about more than just supercomputers:
The ICHEC HPC Hub is a shared space where users are welcome to work with our staff.
ICHEC currently provides the following supercomputers:
|System||SGI ICE X / Hybrid / SGI UV 2000||Bull Novascale R422-E2|
|CPU||Intel Xeon E5-2695 / E5-2660 / E5-4640||Intel Xeon X5560|
|CPU Clock||2.4 GHz / 2.2 GHz / 2.4 GHz||2.8 GHz|
|CPU Cores||7680 / 640 / 112||496|
|Memory||23.7 TB||2976 GB|
|Accelerators||32 Intel Xeon Phi 5110P, 32 NVIDIA Tesla K20||48 NVIDIA Tesla M2090|
|Shared Memory||SGI UV 2000 with 1.7 TB RAM, 112 (14x8) Sandy Bridge CPU cores, 2 Xeon Phi 5110P||n/a|
|Peak Performance (Rpeak)||147.5 TFlop (SGI ICE X only)||5.73 TFlop (CPU only)|
|Sustained Performance (Rmax)||140.4 TFlop (SGI ICE X only)||5.14 TFlop (CPU only)|
|Interconnect||InfiniBand (FDR)||InfiniBand (DDR)|
|Storage||560 TB||21 TB|
|Operating System||SUSE Linux Enterprise Server 11||bullx Linux Server release 6.1 (V1)|
|Launched||Oct 2013||Jul 2009 (upgraded Jan 2012)|
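As a sanity check on the table above, a machine's theoretical peak (Rpeak) is usually cores × clock × floating-point operations per cycle. The sketch below assumes 8 double-precision FLOPs per core per cycle (4-wide AVX add plus 4-wide AVX multiply); that per-cycle rate is an assumption about the CPU generation, not a figure from this page.

```python
# Theoretical peak for the SGI ICE X partition, from the spec table:
cores = 7680          # CPU cores, SGI ICE X partition
clock_hz = 2.4e9      # 2.4 GHz
flops_per_cycle = 8   # assumed AVX double-precision rate (4 add + 4 mul)

rpeak_tflop = cores * clock_hz * flops_per_cycle / 1e12
print(f"Rpeak = {rpeak_tflop:.1f} TFlop")  # 147.5 TFlop
```

This reproduces the 147.5 TFlop figure quoted for the SGI ICE X, which suggests that number is a theoretical peak rather than a measured benchmark result.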
HPC often involves very large data sets. To avoid a data bottleneck, all of the ICHEC systems are connected to the HEAnet national research network, typically at 1 Gbit/s. This enables fast data transfer internally between the ICHEC systems, to and from the member institutions' networks, and externally to resources on other networks.
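To see why link capacity matters at these data volumes, here is a back-of-the-envelope lower bound on transfer time over a 1 Gbit/s link. The 10 TB data-set size is a made-up illustration, not an ICHEC figure, and real throughput will be lower because of protocol overhead and contention.

```python
# Best-case transfer time for a large data set over a 1 Gbit/s link.
data_bytes = 10e12        # hypothetical 10 TB data set
link_bits_per_s = 1e9     # 1 Gbit/s

seconds = data_bytes * 8 / link_bits_per_s
print(f"Best case: {seconds / 3600:.1f} hours")  # 22.2 hours
```

Nearly a full day in the best case for 10 TB is exactly the kind of traffic volume an institutional campus network may not be designed for.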
Researchers with particularly large data sets should note that their own institution's network may not be designed to cope with these traffic volumes.
Largely because of the high electricity costs of running HPC systems, it is not economical to keep them in production indefinitely, even if they are still working reliably: in the long term it is cheaper to replace them with new systems built around up-to-date processors. To date ICHEC has decommissioned five HPC platforms: Stokes, Walton, Hamilton, Lanczos and Schrodinger.