
Access to HPC systems outside of Ireland

Overview

There will always be limits to the scale of HPC systems available locally to the research community. However, insofar as possible, this should not hinder the ambitions of researchers. A number of access programmes exist whereby Irish-based researchers can gain access to very large-scale HPC resources in the US and Europe. The systems in question represent some of the world's fastest machines.

To assist users in this area, ICHEC is happy to discuss appropriate hardware and access programmes. On a number of such proposals ICHEC staff have acted as co-investigators, adding their HPC experience to the PI's scientific expertise to help create a multidisciplinary team which can effectively use such large-scale resources.

It is ICHEC's policy to offer support to its users on external systems in much the same fashion as we do on our own systems, with the obvious restriction that ICHEC does not have administrative control of those systems. Liaising with the local administration and support teams is a key part of this effort. A number of different access schemes, all of which ICHEC has experience of working with, are outlined below. If you have any questions, do not hesitate to get in contact.


PRACE Regular Access

PRACE, the Partnership for Advanced Computing in Europe, operates an access programme. Currently there are six systems available:

  • CURIE is based on x86 architecture CPUs with a mix of thin and fat nodes connected through a QDR InfiniBand network. It has a total of 92,160 processing cores with 4 GB of memory per core (360 TB in total). The peak performance of the fat node partition is 105 Teraflops, and the total peak performance (thin plus fat nodes) is 1.6 Petaflops.
  • FERMI is an IBM BlueGene/Q hosted by CINECA in Italy. This machine has a total of 163,840 cores and a peak performance of 2 Petaflops. It is composed of 10,240 PowerA2 sockets running at 1.6 GHz with 16 cores each, and has 1 GB of RAM per core (see the sketch after this list for how the peak figure follows from the core count and clock rate).
  • HERMIT is a Cray XE6 machine situated in Stuttgart, Germany. This machine has a total of 113,472 cores and a peak performance of 1 Petaflop. It comprises 3,552 dual-socket nodes equipped with AMD Interlagos processors and 32 GB or 64 GB of main memory per node.
  • JUQUEEN is an IBM BlueGene/Q located in Jülich, Germany. This machine has a total of 393,216 cores and a peak performance of 4 Petaflops. Each node has 16 cores and 16 GB of RAM.
  • MareNostrum is an IBM System X iDataplex machine housed in a deconsecrated chapel in Barcelona, Spain. This machine has a total of 33,664 cores and a peak performance of 0.6 Petaflops. It is based on Intel Sandy Bridge EP processors (eight cores, 2.6 GHz) with 32 GB per node. The nodes are connected through an InfiniBand FDR10 network.
  • SuperMUC is an IBM System X iDataplex machine based in Garching near Munich, Germany. This machine has a total of 147,456 cores and a peak performance of 3 Petaflops. SuperMUC is based on the Intel Xeon architecture and consists of 18 Thin Node Islands and one Fat Node Island, each island containing 8,192 cores. All compute nodes within an individual island are connected via a fully non-blocking InfiniBand network (FDR10 for the Thin Nodes, QDR for the Fat Nodes), and a pruned network connects the islands. The Thin Node Islands contain Sandy Bridge nodes, each with 16 cores and 32 GB of memory; the Fat Node Island contains Westmere nodes, each with 40 cores and 256 GB of memory.
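
The peak-performance figures quoted above follow from simple arithmetic: cores × clock rate × floating-point operations per core per cycle. The minimal Python sketch below checks FERMI's quoted figure; it assumes a BlueGene/Q PowerA2 core retires 8 double-precision flops per cycle (its 4-wide fused multiply-add unit), and the per-core rates of the other architectures differ.

    # Rough check of a quoted peak-performance figure.
    # Assumption: a BlueGene/Q PowerA2 core retires 8 double-precision
    # flops per cycle (4-wide FMA); other architectures differ.

    def peak_petaflops(cores, clock_ghz, flops_per_cycle):
        """Theoretical peak in Petaflops: cores * clock rate * flops per cycle."""
        return cores * clock_ghz * 1e9 * flops_per_cycle / 1e15

    fermi_cores = 10240 * 16  # 10,240 PowerA2 sockets x 16 cores = 163,840 cores
    print(peak_petaflops(fermi_cores, 1.6, 8))  # ~2.1, consistent with the ~2 Petaflop figure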

The next call for proposals is expected to open in September 2013. Full details of these calls can be found here: http://www.prace-ri.eu/Calls-for-Proposals. Irish-based researchers who are interested in applying to future calls are strongly encouraged to contact ICHEC to discuss their application.

Note: as of November 2013, the BlueGene/Q at FZJ (JUQUEEN) is the 5th fastest system in the world according to the Top500 list, with SuperMUC 6th, FERMI 9th, CURIE 11th, HERMIT 27th, and MareNostrum 36th. PRACE therefore awards access to six supercomputers ranked within the top 50 machines in the world.


PRACE Preparatory Access

PRACE also operates a Preparatory Access scheme. The aim of this scheme is to permit rapid access to modest levels of resources to aid porting and optimisation on the six systems described above. Preparatory access is intended for the testing or development of codes in preparation for applications for PRACE project access; standard production runs are not allowed under preparatory access. For Types B and C, a detailed description of code bottlenecks is very important for the assessment.

Preparatory access calls are rolling calls. There are 3 types of preparatory access:

  • A) Code scalability testing, to obtain scalability data which can be used as supporting information when responding to future PRACE project calls. This route provides an opportunity to demonstrate the scalability of codes within the set of parameters to be used for PRACE project calls, and to document that scalability. Assessment uses a lightweight application procedure, with applications evaluated at least every two months.
  • B) Code development and optimisation by the applicant using their own personnel resources (i.e. without PRACE support). Applicants need to describe the development plan in detail, together with the expert resources available to execute the project. Applications are assessed at least every two months.
  • C) Code development with support from PRACE experts. Applications are assessed at least every two months.

Resource Allocation (core hours) and Time Period (months)

Class  Time (months)  CURIE:Fat  CURIE:Thin  CURIE:Hybrid  HERMIT  FERMI  JUQUEEN  MareNostrum  SuperMUC
A      2              50k        50k         50k           50k     100k   100k     50k          100k
B      6              200k       200k        100k          50k     250k   250k     100k         250k
C      6              200k       200k        100k          50k     250k   250k     100k         250k
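
The allocations above are quoted in core hours, i.e. the number of cores in use multiplied by the wall-clock hours they run for. The short Python sketch below illustrates what an allocation translates to in practice; the 4,096-core job size is hypothetical, chosen only for illustration.

    # Core hours = cores in use x wall-clock hours.
    # The 4,096-core job size below is hypothetical, for illustration only.

    def wallclock_hours(core_hour_budget, cores_per_job):
        """Wall-clock hours a core-hour budget supports at a given job size."""
        return core_hour_budget / cores_per_job

    print(wallclock_hours(250_000, 4096))  # ~61 hours from a 250k core-hour award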
This information was last updated on 26 April 2013 and was taken from the following source.

Details of some successful Irish applications can be found here. For more information on PRACE see http://www.prace-ri.eu and http://www.prace-ri.eu/IMG/pdf/prace_preparatory_access_call.pdf, or contact us.


INCITE

INCITE is a US HPC access programme operated by the Department of Energy. It grants access to the "Leadership Computing Facilities" at both the Argonne and Oak Ridge National Laboratories. An annual call is made, though applications can be for multi-year programmes. The 2011 call will allocate 1.6 billion compute hours, with the average award being in excess of 20 million hours. This programme is open to applications from European researchers. More information can be found here.


Argonne Early Science Program

From time to time, one-off programmes are run, often corresponding to the commissioning of a new HPC system. The Argonne "Early Science Program" is one such programme: it aims to get researchers using the next-generation IBM Blue Gene being installed at Argonne National Laboratory in the US as early as possible. The system will be a 10 Petaflop machine with roughly 0.75 million cores, and roughly 2 billion compute hours were offered. While this call has now closed, it was open to European researchers. More details can be found here.


Distributed European Computing Initiative (DECI)

DEISA, the Distributed European Infrastructure for Supercomputing Applications, operated an access programme called DECI (DEISA Extreme Computing Initiative) from 2004 to 2009, issuing six calls. Now operated under the PRACE brand, DECI stands for the Distributed European Computing Initiative and still provides a single-project HPC access scheme within Europe. Under the PRACE projects (PRACE-2IP), DECI has been integrated into the PRACE ecosystem of HPC facilities and services. In order to provide seamless access to users, PRACE took over the organisation, operation and dissemination of DECI scientific results.

This initiative operated as follows:

  • There was a call for proposals biannually.
  • Proposals were evaluated by National Evaluation Committees; their recommendations were considered by the DECI executive council, which made the final decisions.
  • Decisions were based on potential for innovation, scientific merit and relevance to the DECI infrastructure.
  • Access to resources was given for defined time periods, normally 12 months.

For details of successful DECI applications see here. For more information see DEISA, DEISA-DECI, and PRACE-DECI.