The ARCHER Service is now closed and has been superseded by ARCHER2.

ARCHER Best Practice Guide

  • 1. Introduction
  • 2. System Architecture and Configuration
  • 3. Programming Environment
  • 4. Job Submission System
  • 5. Performance analysis
  • 6. Tuning
  • 7. Debugging
  • 8. I/O on ARCHER
  • 9. Tools

2. System Architecture and Configuration

  • 2.1 Processor architecture
    • 2.1.1 Vector-type instructions
  • 2.2 Memory architecture
  • 2.3 Available file systems
  • 2.4 Operating system (CLE)

This page is under construction.

For an overview of the Cray XC30 architecture used in ARCHER, please see the relevant About ARCHER section:

  • ARCHER Hardware Details

2.1 Processor architecture

2.1.1 Vector-type instructions

One of the keys to getting good performance out of the Xeon architecture is writing your code in such a way that the compiler can make use of the vector floating-point operations available on the processor. Two families of vector instructions are available and execute in a similar manner: SSE (Streaming SIMD Extensions) instructions and AVX (Advanced Vector eXtensions) instructions. Please note that AVX instructions are not available on the serial/PP nodes.

These instructions use the floating-point unit (FPU) to operate on multiple floating-point numbers simultaneously, provided the numbers are contiguous in memory. SSE provides a range of operations (for example arithmetic, comparison, and type conversion) that act on two operands held in 128-bit registers. AVX extends SSE by allowing three-operand forms and by widening the data path from 128 to 256 bits. In the E5-2600 architecture each core has a 256-bit floating-point pipeline.

2.2 Memory architecture

The two processors on a standard ARCHER compute node share 64 GB of DDR3 memory. There are a small number of high-memory nodes with 128 GB of memory shared between the two processors. ARCHER has 4544 standard-memory nodes, along with 376 high-memory nodes, bringing the total memory on ARCHER to over 300 TB.

Each node has a main memory bandwidth of 117 GB/s (59 GB/s per socket, 4.9 GB/s per core).

2.3 Available file systems

ARCHER has a number of different filesystems, each with its own purpose. Detailed information on ARCHER filesystems can be found in the User Guide:

  • Resource Management (ARCHER User Guide)

2.4 Operating system (CLE)

The operating system on ARCHER is the Cray Linux Environment (CLE), which is in turn based on SuSE Linux. CLE consists of two components: a full-featured Linux that runs on the service nodes, and Compute Node Linux (CNL), which runs on the compute nodes.

The service nodes, external login nodes, and post-processing nodes of ARCHER run a full-featured version of Linux.

The compute nodes of ARCHER run CNL. CNL is a stripped-down version of Linux that has been extensively modified to reduce both the memory footprint of the OS and the variation in compute node performance caused by OS overhead.

Copyright © Design and Content 2013-2019 EPCC. All rights reserved.
