vecma-testbed

  
===== Poznan Supercomputing and Networking Center (PSNC) =====
PSNC ([[http://www.man.poznan.pl/online/en/|www.man.poznan.pl]]) is providing its biggest HPC system, "Eagle", comprising 33,000 cores with 300 TB of memory and operating at 1.4 petaFLOPS, for a total of at least 5 million CPU hours over the 36-month duration of VECMA. A "vecma-members" team account has been created on Eagle, and a "plggvecma" shared storage directory has been created on Eagle (and on the QCG client machine). "vecma2018" is the active grant until 15.01.2019, allocating 2 million CPU hours and 1 TB of storage capacity. Subsequent grants "vecma2019" and "vecma2020" will allocate the remaining CPU hours over the respective years.
  
A detailed description of the Eagle cluster can be found here: [[https://wiki.man.poznan.pl/hpc/index.php?title=Eagle|Eagle cluster @ PSNC]]
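Jobs on Eagle are charged against the active grant and should write to the shared group directory. The following is a minimal sketch only, assuming a SLURM scheduler: the job parameters and the ''$PLG_GROUPS_STORAGE'' path are illustrative and must be checked against the Eagle documentation linked above.

```shell
#!/bin/bash
# Sketch of a batch job using the "vecma2018" grant from the text above.
# The scheduler directives and storage variable are assumptions, not
# verified Eagle settings.
#SBATCH --job-name=vecma-test
#SBATCH --account=vecma2018     # active grant until 15.01.2019
#SBATCH --nodes=1
#SBATCH --time=01:00:00

# Hypothetical location of the "plggvecma" shared group directory; falls
# back to /tmp so the sketch can be dry-run outside the cluster.
GROUP_DIR="${PLG_GROUPS_STORAGE:-/tmp}/plggvecma"
mkdir -p "$GROUP_DIR"
echo "vecma2018 job writing to $GROUP_DIR"
```

Keeping all job output under the shared directory means every member of the "vecma-members" team account can reach it.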
  
===== The Leibniz Rechenzentrum (LRZ) =====
LRZ ([[https://www.lrz.de/english/|lrz.de]]) houses SuperMUC, a PRACE Tier-0 system that supports per-node power monitoring and per-job energy-to-solution reporting. LRZ will devote a minimum of 6 million CPU hours over the 36-month duration of VECMA, with a storage capacity of 10 TB on $WORK and additional temporary storage on $SCRATCH (N.B.: $SCRATCH does not hold data permanently and is not safe for long-term storage).
  
Detailed information about the SuperMUC cluster can be found here: [[https://www.lrz.de/services/compute/supermuc/|SuperMUC @ LRZ]]
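Because $SCRATCH is purged and only $WORK offers durable project storage, results should be staged off $SCRATCH as soon as a job finishes. A minimal sketch (the directory layout is illustrative; on SuperMUC the real $WORK and $SCRATCH paths are provided by the site environment):

```shell
#!/bin/bash
# Sketch: stage results from temporary $SCRATCH into durable $WORK.
# Falls back to /tmp paths so the sketch can be run anywhere; on the
# cluster the site environment provides the real $WORK and $SCRATCH.
SCRATCH="${SCRATCH:-/tmp/demo_scratch}"
WORK="${WORK:-/tmp/demo_work}"

mkdir -p "$SCRATCH/run01" "$WORK/results"
echo "simulation output" > "$SCRATCH/run01/out.dat"

# Copy finished output to $WORK before the scratch purge removes it.
cp "$SCRATCH/run01/out.dat" "$WORK/results/"
echo "staged: $WORK/results/out.dat"
```

The 10 TB quota on $WORK applies to the whole project, so only final results, not intermediate files, should be staged there.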
  
===== The Centre for Excellence in Parallel Programming (CEPP) =====
CEPP at Atos Bull provides access to two supercomputers, "manny" and "genji". Manny is a large-scale system, while genji is a medium-scale, heterogeneous system offering 10 nodes equipped with 32-core Skylake Xeon processors. CEPP has allocated 640,000 CPU hours in total across both systems. Owing to confidential internal organizational constraints, it will be very difficult to give VECMA's partners direct access to these facilities, so this allocation will be used by CEPP members for the time being. This means that Paul Karlshoefer would be the only individual with access.
  
  
  
vecma-testbed.txt · Last modified: 2018/11/16 09:53 by piontek@man.poznan.pl