
Decommissioned High-Performance Computing Systems


The following information relates to previous generations of hardware operated by ICHEC that have since been decommissioned. It is provided for historical interest:

Stoney, July 2009 - October 2014

Stoney was funded under e-INIS, a project supported by PRTLI Cycle 4. The grant was awarded to NUI Galway, which in turn provided the system for national use, managed by ICHEC. As part of this agreement, NUI Galway owned a percentage of the system time.

Stoney was a Bull Novascale R422-E2 cluster with 64 compute nodes. Each compute node had two 2.8GHz Intel (Nehalem EP) Xeon X5560 quad-core processors and 48GB of RAM. The nodes were interconnected via a half-blocking fat-tree network of ConnectX Infiniband (DDR) and used the parallel Lustre filesystem. In January 2012, Stoney was upgraded by adding two NVIDIA Tesla M2090 GPGPU cards to 24 of the nodes. This enabled ICHEC to add a GPU platform to the National Service.

Schrödinger, January 2008 - January 2010

The Dublin Institute for Advanced Studies, in partnership with all the Irish Universities and major research institutions, and supported by the HEA under PRTLI cycles 3 and 4, led a national initiative in collaboration with ICHEC and HEAnet to provide the Irish research community with access to true capability high-performance computing. The outcome was an IBM solution based on local Blue Gene/L and Blue Gene/P supercomputers along with remote access to additional Blue Gene facilities abroad. Access to these systems was granted through the National Capability Service, which was operated by ICHEC between February 2008 and December 2010.

Schrödinger was a system based on a single cabinet of IBM Blue Gene/P. The system used PowerPC 450 processors running at 850 MHz, providing 4096 cores with 2TB of RAM.

It featured the powerful Blue Gene Tree/Torus network. The system achieved a peak Linpack performance of 11.11 TFlop and had a peak performance of 13.93 TFlop.
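
The quoted peak figure follows directly from the core count and clock speed stated above. A minimal sketch in Python (not part of the original page), assuming each PowerPC 450 core retires 4 double-precision FLOPs per cycle via its dual floating-point unit:

def peak_tflops(cores, clock_ghz, flops_per_cycle=4):
    # Theoretical peak = cores x clock (GHz) x FLOPs per core per cycle, in TFlops
    return cores * clock_ghz * flops_per_cycle / 1000.0

print(peak_tflops(4096, 0.85))  # ~13.93 TFlops, matching Schrödinger's quoted peak
print(peak_tflops(2048, 0.70))  # ~5.73 TFlops, matching Lanczos (Blue Gene/L) below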

Stokes pre-upgrade, December 2008 - August 2010

In August 2010 the original Stokes system was significantly upgraded. Prior to the upgrade it comprised 320 compute nodes, each providing eight Intel Xeon E5462 cores, with a total of 5120 GB of RAM. The system achieved a peak Linpack performance of 25.11 TFlop and had a peak performance of 28.67 TFlop. The upgrade was carried out by swapping the compute node blades; the network, storage and chassis equipment were retained.

Stokes post-upgrade, August 2010 - December 2013

The Stokes II system was an SGI Altix ICE 8200EX cluster with 320 compute nodes. Each compute node had two Intel (Westmere) Xeon E5650 hex-core processors and 24GB of RAM, giving a total of 3840 cores and 7680GB of RAM. The system achieved a peak Linpack performance of 36.6 TFlop and had a peak performance of 40.9 TFlop.
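
As a rough cross-check of those totals (a sketch, not from the original page; the 2.66 GHz clock for the Xeon E5650 and 4 double-precision FLOPs per core per cycle for Westmere are assumptions):

nodes = 320
cores_per_node = 2 * 6             # two hex-core Xeon E5650 processors per node
ram_per_node_gb = 24

total_cores = nodes * cores_per_node          # 3840 cores
total_ram_gb = nodes * ram_per_node_gb        # 7680 GB
peak_tflops = total_cores * 2.66 * 4 / 1000.0 # ~40.9 TFlops, matching the quoted peak

print(total_cores, total_ram_gb, round(peak_tflops, 1))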

Photo of Lanczos

Lanczos, January 2008 - July 2010

Lanczos was a system based on a single cabinet of IBM Blue Gene/L. The system used PowerPC 440 processors running at 700 MHz, providing 2048 cores with 1TB of RAM.

It featured the powerful Blue Gene Tree/Torus network. The system achieved a peak Linpack performance of 4.74 TFlop and had a peak performance of 5.73 TFlop.

Photo of Walton (Multiple Racks)

Walton, September 2005 - November 2008

Walton was an IBM Cluster 1350 consisting of 479 IBM e326 compute nodes. Each compute node had two single-core AMD Opteron 250 CPUs running at 2.4 GHz with 1 MB of level 2 cache. 415 of these nodes had 4 GB of RAM, while the remaining 64 nodes had 8 GB.

The nodes were interconnected with Gigabit Ethernet using a Force10 E600 switch. Storage was provided via the IBM GPFS parallel filesystem running on a set of dedicated storage servers connected to an IBM DS4500 storage controller. An additional 14 nodes provided load-balanced login, cluster management, scheduling, filesystem and other facilities for the cluster.

The system achieved a peak Linpack performance of 3.142 TFlop and had a peak performance of 4.464 TFlop.

Photo of Hamilton

Hamilton, September 2005 - November 2008

Hamilton was a Bull NovaScale 6320 that provided 32 Intel Itanium2 CPUs and 256 GB of RAM as a single system image. Each CPU ran at 1.5 GHz and had 6 MB of L3 cache.

The processors were connected using a three-tier NUMA topology, providing excellent inter-process communication via shared memory. Storage was provided by 9 TB of directly attached disks, which gave optimal performance for codes using large scratch files.

Hamilton also had a Bull NovaScale 4400 for use as a login node, with four additional Itanium2 CPUs and 8 GB of RAM for compilation, batch submission, and pre- and post-processing.