Awoonga
- About Awoonga
- Getting an Awoonga account
- Awoonga training
- Awoonga Users' Guide
- Awoonga support
- Awoonga technical specifications
About Awoonga
RCC, in collaboration with the Queensland Cyber Infrastructure Foundation (QCIF), built a new compute cluster called Awoonga, which became available for use in mid-2017.
Awoonga is a conventional high performance compute cluster built with standard ethernet networking. It supports the Nimrod parameter sweep and workflow tools, providing additional capacity to this powerful high-level computational environment.
Awoonga augments Tinaroo, a cluster for high performance parallel jobs, and FlashLite, a data-intensive supercomputer. Together these systems span a wide range of computational workloads. Awoonga shares filesystems, software and environment with Tinaroo and FlashLite, making migration of work between the three clusters straightforward.
Awoonga is for applications that:
- utilise a single core or multiple cores on a single server (shared memory mode)
- are run repeatedly for statistical sampling or parameter sweeps
- have relatively low data I/O requirements
- use message-passing techniques within a single node (a minimal single-node MPI job script is sketched below).
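For example, a single-node MPI job on Awoonga's TORQUE batch system (see the technical specifications below) might be submitted with a script like the following sketch. The module name and the `my_model` executable are illustrative placeholders; check the Awoonga User Guide for the modules actually installed.

```bash
#!/bin/bash
#PBS -N single-node-mpi
#PBS -l nodes=1:ppn=24        # one node, all 24 cores
#PBS -l walltime=02:00:00

cd $PBS_O_WORKDIR

# Load an MPI implementation (run `module avail` on Awoonga
# to see which modules and versions are installed).
module load openmpi

# Launch 24 MPI ranks, all on the same node, so message
# passing stays within shared memory.
mpirun -np 24 ./my_model
```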
Getting an Awoonga account
Access to Awoonga is provided through QRIScloud.
To open an Awoonga account, UQ staff and students should do the following:
- Register for a QRIScloud account if you do not already have one (register at QRIScloud).
- Click on "Account" and log in using your AAF credentials. Users associated with a university, CSIRO or most other research institutions in Australia can use their organisational ID (login name) and password as their AAF credential without any further process. If you have no AAF credential, one can be created via QRIScloud's Request an institutional account link.
- Order a new service by clicking on "Services".
- Under QRIScloud's "Services / Compute" section there is an option to register to use Awoonga.
- Complete the form with the following details:
- Provide your UQ username, contact details, and a short description of your project
- If you are a student, please also provide the name of your UQ supervisor.
Once your account has been created, your access details for Awoonga will be confirmed via email.
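Once access is confirmed, connecting is typically via SSH from a terminal. The username and hostname below are illustrative only; use the login details given in your confirmation email.

```bash
# Replace the username and hostname with the details from
# your account confirmation email.
ssh uqusername@awoonga.qriscloud.org.au
```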
Awoonga training
RCC conducts regular 'Introduction to HPC' training for UQ staff and students on the last Friday of each month. Please visit RCC's training webpage for further information.
Non-UQ users should request training via support@qriscloud.org.au.
Awoonga Users' Guide
The Awoonga User Guide on the QRIScloud portal (or via the RCC User Guides link on this page) provides useful information on getting started with Awoonga.
Awoonga support
UQ users of Awoonga should submit support requests to rcc-support@uq.edu.au.
Non-UQ users of Awoonga should submit support requests to support@qriscloud.org.au.
Awoonga technical specifications
Awoonga is a conventional high performance compute cluster interconnected by a 10 Gigabit Ethernet network. It provides a batch scheduler environment and is pre-populated with a range of application software.
Awoonga has 40 physical nodes each with 24 Intel cores and 256 GB of RAM.
Awoonga hosts four additional nodes with NVIDIA Tesla K20M GPUs.
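Assuming Awoonga's TORQUE configuration exposes the standard `gpus` resource (a site-specific assumption; confirm the exact resource and queue names in the Awoonga User Guide), a job could request a GPU node with directives like these:

```bash
#!/bin/bash
#PBS -N gpu-job
#PBS -l nodes=1:ppn=1:gpus=1   # one core plus one GPU on one node
#PBS -l walltime=01:00:00

cd $PBS_O_WORKDIR
./my_cuda_app                  # placeholder executable
```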
Awoonga is built on the following hardware:
- 1032 Intel cores across 43 compute nodes, each with 24 cores, 256 GB memory and 300 GB of disk
The cluster provides the following resources:
- 2 login nodes behind a load balancer.
- The open source PBS TORQUE batch system with the Maui scheduler.
- 500 TB of shared storage connected via GPFS and accessible across Awoonga, FlashLite, and Tinaroo clusters.
- Home directories with tight quotas and *no* backups. Backup is the responsibility of the user.
- Two temporary storage filesystems for staging data: /30days and /90days.
- 500 GB of local scratch storage on each compute node, available as $TMPDIR.
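As an illustration of how the batch system, the shared temporary filesystems and the node-local scratch fit together, the sketch below stages input from /30days to $TMPDIR, runs there, and copies results back. The directory layout and the `my_model` executable are assumptions; consult the Awoonga User Guide for the actual conventions.

```bash
#!/bin/bash
#PBS -N staging-example
#PBS -l nodes=1:ppn=1
#PBS -l walltime=04:00:00

# Stage input from shared temporary storage to the fast
# node-local scratch (up to 500 GB per node as $TMPDIR).
cp /30days/$USER/input.dat $TMPDIR/

# Run against the local copy (placeholder executable).
cd $TMPDIR
$HOME/bin/my_model input.dat > output.dat

# Copy results back before the job ends: $TMPDIR is cleared
# after the job, and home directories are not backed up.
cp output.dat /30days/$USER/
```

The script would be submitted with `qsub` and monitored with `qstat`, the standard TORQUE commands.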