2012 onwards. The Hartree Centre and Big Data.

The UK has aspirations to be at the forefront of the so-called ``big data revolution'' and the Hartree Centre established at Daresbury is a key STFC strength in this area. Hartree is the world's largest centre dedicated to high performance computing software development and home to Blue Joule, the most powerful supercomputer in the UK. The UK Government invested over £50 million in the Hartree Centre from 2011-2013 to support the development of power-efficient computing technologies designed for a range of industrial and scientific applications. This is because it is estimated that successful exploitation of high performance computing could increase Europe's GDP by 2-3% by 2020.

The Hartree Centre focuses on partnerships with business and academia to unlock the commercial opportunities offered by high performance computing systems. This has the benefit of contributing to the growth of UK skills in this area and has attracted IBM as the major partner to develop these opportunities. The Centre is working closely with a number of major companies, such as Unilever, with which we have established formal research partnerships.

The UK's e-Infrastructure

For some time before 2012 there had been growing opinion in the research community that a more integrated approach to computing was required in the UK. Reasons stated were to remain on par with similar OECD countries and to aid economic recovery. Discussions came to a head in early 2011.

Staff at Daresbury had already formulated a bid for funding for what became the Hartree Centre ... What we said was: The Hartree Centre will be a new kind of computational sciences institute for the UK. It will seek to bring together academic, government and industry communities and focus on multi-disciplinary, multi-scale, efficient and effective simulation. The goal is to provide a step change in modelling capabilities for strategic themes including energy, life sciences, the environment and materials.

A 2010 YouTube video of then CSED Director Richard Blake presenting HPCx and the Hartree Centre vision can be found here.

This proved to be very timely and the vision became part of the UK e-Infrastructure [41], for which Government announced funding following the Conservative Party Conference in Manchester, autumn 2011. See Events/37248.aspx.

From the DBIS press release of 1/12/2011, the proposed capital spend of £145M was to be as follows.

The latter became the £8M e-Infrastructure Connectivity Call from EPSRC and coined the term ``Tier-2'' for shared resources in regional centres of excellence.

Definition of Tiers

Tier-0:
Europe wide, with users from multiple countries, e.g. through PRACE, the Partnership for Advanced Computing in Europe. The UK currently does not have such an HPC machine. HECToR was suggested, but EPSRC was not able to commit the required resources. Tier-0 for the particle physics community is the HPC Data Centre at CERN.

Tier-1:
typically a national facility. For HPC users this is HECToR, and may shortly also include the Hartree Centre facility at Daresbury. For particle physics users it is the LHC Tier-1 Centre at RAL.

Tier-2:
with the funding of regional centres for the e-Infrastructure, this tier can be defined to be a resource shared by a number of participating institutions. It may be interesting to see how this fits with future plans for the National Grid Service (NGS) and whether there are any lessons to be learned.

Tier-3:
considered to be the main institutional computing service, whether for HPC users, data storage, particle physics or a combination of these. We could propose that it include all resources on a campus. It is the focus of the HPC-SIG and CG-SIG.

The Hartree Centre

This also seems a good time to recall some of the reasons Richard Blake chose the name Hartree Centre.

Douglas Rayner Hartree Ph.D., F.R.S. (b. 27/3/1897 - d. 12/2/1958) was a mathematician and physicist most famous for the development of numerical analysis and its application to the Hartree-Fock equations of atomic physics, and for the construction of the differential analyser, a working example of which is featured in the Manchester Museum of Science and Industry. The Web page here explains it.

In the mid 1920s he derived the Hartree equations, and later V. Fock published the ``equations with exchange'', now known as the Hartree-Fock equations, which are a foundation of computational chemistry.
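For reference, the closed-shell Hartree-Fock equations (a standard result, not spelled out in the original text) determine the one-electron orbitals $\phi_i$ self-consistently as eigenfunctions of the Fock operator:

```latex
\hat{F}\,\phi_i = \varepsilon_i\,\phi_i ,
\qquad
\hat{F} = \hat{h} + \sum_{j} \left( 2\hat{J}_j - \hat{K}_j \right)
```

where $\hat{h}$ is the one-electron (kinetic energy plus nuclear attraction) operator, $\hat{J}_j$ the Coulomb operator and $\hat{K}_j$ the exchange operator. Hartree's original equations omit the exchange term $\hat{K}_j$; it was Fock's addition of exchange that gave the method its modern form.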

In 1929 he became Professor of Applied Mathematics at the University of Manchester where, among other research, he built his own differential analyser from Meccano. During the Second World War the subsequent differential analyser at the University of Manchester was the only full-size 8-integrator unit in the country and was used to great effect to support the war effort.

In the 1930s he turned his research to radio wave propagation, which led to the Appleton-Hartree equation. In 1946 he moved to Cambridge, where he was involved in the early application of digital computers to a number of areas.

From his books [25] you can see that he was instrumental in establishing computational science.

Following a formal tendering process, on 30/3/2012 STFC and IBM announced a major collaboration to create one of the world's foremost centres in software development, see Events/38813.aspx. This collaboration is a key component, and marks the launch, of the International Centre of Excellence for Computational Science and Engineering (ICE-CSE) [now known as The Hartree Centre]. Located at STFC's Daresbury Laboratory in Cheshire, the Centre will establish high performance computing as a highly accessible and invaluable tool to UK industry, accelerating economic growth and helping to rebalance the UK economy.

High performance computing (HPC) has become essential in the modern world, aiding research and innovation, and enabling companies to compete effectively in a global market by providing solutions to extremely complex problems. Breakthroughs in HPC could result in finding cures for serious diseases or significantly improving the prediction of natural disasters such as earthquakes and floods. HPC will provide the ability to simulate very complex systems, such as mapping the human brain or modelling the Earth's climate - the data from which would overwhelm even today's most powerful supercomputers. By the year 2020, supercomputers are expected to be capable of a million trillion calculations per second (one exaflop) and will be thousands of times faster than the most powerful systems in use today.

The IDC report to the European Commission in August 2010 estimated that the successful exploitation of HPC could lead to an increase in European GDP of 2-3% within 10 years. In today's figures this translates into around £25 billion per year in additional revenue to the UK Treasury and more than half a million UK-based, high-value jobs.

The Department for Business, Innovation and Skills (DBIS) announced its e-Infrastructure initiative in October 2011, with £145 million of funding to create the computer and network facilities necessary for the UK to access this potential benefit. £30M of this was earmarked for HPC at Daresbury. This was in addition to an earlier government investment of £7.5M into HPC, announced with the creation of an Enterprise Zone at the Daresbury Science and Innovation Campus [now known as Sci-Tech Daresbury]. Following a rigorous tender process as a result of these investments, IBM was named as the successful bidder to form a unique research, development and business outreach collaboration with STFC.

Under the initial 3 year agreement, STFC will invest in IBM's most advanced hardware systems, most notably the BlueGene/Q and iDataPlex. With a peak performance of 1.4 petaflop/s, which is roughly the equivalent of 1,000,000 iPads, the BlueGene/Q system at Daresbury will be the UK's most powerful machine by a considerable margin. It is also the most energy efficient supercomputer in the UK, being 8 times more efficient than most other supercomputers.

These systems will help the Centre to develop the necessary software to run on the next generation of supercomputers, thus providing UK academic and industrial communities with the tools they will need to make full use of these systems both now and in the future.

The Centre will target the increasingly important area of data-driven science and continue to target software development for current and future computer systems; the latter are due within 5-10 years and will require entirely new software designs. STFC is already a world-leading provider of the software engineering skills required to exploit the future growth in available computing power, and this is a very exciting time to be collaborating in this way.

The investment into the Centre is being used to upgrade STFC's existing computing infrastructure to provide the capability to host the next generation of HPC systems which have much higher power densities than existing systems. It will also install an impressive series of internationally competitive computer systems as a software development and demonstration facility, along with a range of advanced visualisation capabilities.

Procurement in early 2012 included refurbishment of the existing HPCx computer room (now split into two, half with water cooling and half with conventional air cooling), and the purchase of equipment as follows.

IBM iDataPlex, known as Blue Wonder - 512 nodes with 2.6 GHz Intel Sandy Bridge processors, making 8,192 cores in total. Around half will run ScaleMP software to allow testing and development of large shared-memory applications. Blue Wonder started operation at Daresbury in 2012 as the 114th most powerful computer in the world (39th Top500 list).

IBM BlueGene/Q, known as Blue Joule - 6 racks with 98,304 1.6 GHz BG/Q cores and 96 TB RAM, plus Power7 management and login nodes. A further BlueGene/Q rack, as above, is planned to become a prototype data-intensive compute server. Blue Joule started operation at Daresbury in 2012 as the 13th most powerful computer in the world (39th Top500 list).

With data store, backup and visualisation facilities, the initial configuration of the centre is described as follows.

Hartree Base:
conventional Intel x86 cluster technology (IBM iDataPlex). Part of the Blue Wonder system.

Hartree Data Intensive:
data-intensive system. Conventional x86 cluster technology (IBM iDataPlex). Part of the Blue Wonder system, which will use advanced software from ScaleMP to aggregate nodes into large virtual SMP machines.

Hartree Advanced:
IBM BlueGene/Q architecture known as Blue Joule.

Hartree Data Store:
the data store. Uses 8x SFA10k disk arrays from DataDirect Networks, providing 5.7 PB usable disk space, with minimum 15 Gb/s throughput to any of the above compute systems.

Hartree Archive:
an IBM TS3500 tape library with 12x TS1140 tape drives and 3760 tape slots. This provides around 15 PB tape storage.

Hartree Viz and Training:
Refurbishment of other laboratory space to create a visualisation and training suite plus project space. This also involved the purchase of training PCs, top-end PCs for the development of visual applications, and a large 3D immersive visualisation system from Virtalis to complement the existing suites used by the Virtual Engineering Centre and the Laboratory lecture theatre. Additional equipment was also installed at ISIC and Atlas on the Harwell site to facilitate joint projects.

Blue Wonder

Blue Wonder is a 512-node IBM xSeries iDataPlex. It comprises 228 nodes, each with 2x 8-core Intel Sandy Bridge processors and 32 GB RAM; a further 24 nodes with the same specification plus 2x nVidia M2090 GPUs; 4 high-memory nodes with 256 GB RAM each; and a further 256 nodes, each with 2x 8-core Sandy Bridge processors and 128 GB RAM. There is an InfiniBand high-speed interconnect throughout.

Some more information and photos of iDataPlex clusters can be found here.

Blue Joule

When installed in 2012, Blue Joule was a 7 rack BlueGene/Q system. It was the 13th fastest computer in the world at that time. Each rack is around 200 Tflop/s peak performance, so 1.2 Pflop/s overall peak. Each rack has 1,024 16-core processors (16,384 cores). The seventh BlueGene/Q rack was used for more adventurous research projects and now forms part of the Blue Gene Active Storage system.
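The peak-performance figures quoted here can be checked with a quick back-of-envelope calculation. It relies on one architectural fact not stated in the text: each BlueGene/Q core has a four-wide double-precision floating-point unit with fused multiply-add, giving 8 floating-point operations per clock cycle.

```python
# Back-of-envelope check of the Blue Joule peak figures quoted above.
# Assumption: each BG/Q core does 8 flops/cycle (4-wide SIMD x 2 for FMA).
cores_per_rack = 1024 * 16          # 1,024 sixteen-core processors per rack
clock_hz = 1.6e9                    # 1.6 GHz
flops_per_cycle = 8

rack_peak = cores_per_rack * clock_hz * flops_per_cycle
print(f"per rack : {rack_peak / 1e12:.0f} Tflop/s")      # ~210 Tflop/s
print(f"6 racks  : {6 * rack_peak / 1e15:.2f} Pflop/s")  # ~1.26 Pflop/s
```

This reproduces the ``around 200 Tflop/s'' per rack and ``1.2 Pflop/s overall'' figures for the six production racks; the seventh rack brings the whole installation to roughly 1.47 Pflop/s, consistent with the 1.4 Pflop/s peak quoted earlier.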

Some more information and photos of BlueGene can be found here.


An IBM NextScale cluster was installed in April 2014. It has 360 nodes; each node has 2x 12-core Intel Xeon Ivy Bridge processors (E5-2697 v2, 2.7 GHz) and 64 GB RAM, making a total of 8,640 cores. The interconnect is a Mellanox InfiniBand high-speed network.

Energy Efficient Computing


50 Years of Big Data Impact

Every day [in 2014] the world creates and shares 2.5 quintillion bytes of data across an increasingly sophisticated global computer network. Fifty years ago it was hard to imagine how much computing would influence how we work, create and share information; and yet in general it is not public demand that drives computing advances but the requirements of researchers to collect, store and manipulate increasingly large and more complex data sets. The power of computing developed to analyse massive and mixed scientific data sets in turn transforms industry and everyday life. As we have seen, STFC and our predecessors have been at the forefront of computing know-how for the past 50 years. During this time, we have led the way across the whole spectrum of computing capabilities, from high performance computing facilities and digital curation, to graphics and software, from networking to grid infrastructure and the World Wide Web.

In the early 1960s, we developed ground-breaking computer graphics and animation technologies to help researchers visualise complex mathematical data sets. This innovative and pioneering approach using computer-generated imagery (CGI) caused the Financial Times at the time to pronounce RAL the home of computer animation in Britain. STFC's forebears continued to lead the UK's CGI field through the next two decades, most notably creating the computer images for the movie Alien in 1979, the first significant film to use CGI, which won an Oscar for best special effects. The success of this film spawned a new sector, with many new companies commercialising the CGI concepts and code developed by STFC and introducing them to new markets. The UK computer animation industry is currently worth £20 billion, including £2 billion generated by the video and computer games market.

Increasingly large data sets required not only new techniques but increasingly more powerful computers. In 1964, at RAL, we were one of three research establishments to host an Atlas 1 computer, then the world's most powerful computer [16]. In the following years we continued to play a pivotal role in the development and support of the UK's supercomputing hardware and software development capabilities. Today, STFC supercomputers such as Blue Joule and DiRAC are at the cutting edge of academic and industrial research, helping to model everything from cosmology to weather systems. Blue Joule, opened in 2013 and situated on the Sci-Tech Daresbury Campus, is the UK's most powerful supercomputer. It is the foundation of STFC's Hartree Centre, set up to harness the UK's leading academic high performance computing capabilities for the benefit of UK industry. It is estimated that successful exploitation of high performance computing could increase Europe's GDP by 2-3% by 2020. These activities have re-affirmed the UK's place as a world leader in high performance computing.

Another pillar in the world's computing revolution has been connectivity. STFC's predecessor organisations led the UK's networking effort many years before the invention of the internet. Twenty-five years ago the Web was established at CERN and is now a fundamental part of our lives: 33 million adults accessed the internet every day in the UK last year and it is worth over £121 billion to the UK economy every year. STFC manages the UK participation in CERN and underpinned the internet's development in the UK through its early computer networking deployments, hosting the first UK Web site, developing Web standards and protocols, supporting the evolution of the Grid, and spinning out some notable organisations. These include: Nominet, the internet registry which manages 10 million UK business domain names; JANET, which manages the computer network for all UK education, the domain and the JISCMail service used by 80% of UK academics (1.2 million users).

Big science projects such as those supported by STFC have consistently pushed the boundaries of data volumes and complexity, serving as ``stretch goals'' that drive technological innovation in computing capability. In the 1990s, the Large Hadron Collider (LHC) at CERN was the first project to require processing of petabyte scale datasets (a million gigabytes) on an international scale and this led to the development of grid computing. The LHC Grid makes use of computer resources distributed across the UK, Europe and worldwide to process the huge volumes of data produced by the LHC and to identify the tiny fraction of collisions in which a Higgs boson is produced. This technology development was supported by the RCUK e-Science programme and STFC's GridPP project and the expertise developed is now supporting the UK and European climate and earth system modelling community through the JASMIN facility and the Satellite Applications Catapult through the Climate and Environmental Monitoring from Space (CEMS) facility. This same approach is now widely used by business and academia as part of the Cloud Computing revolution.

As the world becomes increasingly digital, preservation of digital records becomes more and more important across all aspects of daily life; a major task given how quickly innovations in digital media occur. Maintaining access to digital data has been at the heart of STFC science for over 30 years. It is still possible to access the raw data recorded on the ISIS neutron source since its first experiments over 25 years ago. Working through the Consultative Committee for Space Data Systems (CCSDS), STFC helped to derive the standards which have been adopted as the de facto standard for building digital archives and the ISO standard for audit and certification of digital repositories. Working with partners such as the British Library and CERN, STFC has formed the Alliance for Permanent Access to the Record of Science to address issues with long-term preservation of digital data.

Looking to the future, the exploding volume of scientific data sets needed for fundamental science will continue to drive innovation. By 2023, the Square Kilometre Array project will generate 1.3 zettabytes of data each month - that's 1300 billion gigabytes, over 10 times the entire global internet traffic today. Processing such a flood of data will require computers over a thousand times faster than today's. This is a true stretch goal for the computing industry that may well require a transformative rethink of computer architectures. For this reason industry partners such as IBM and nVidia are closely involved in the current SKA project engineering phase. A related challenge is reducing the energy consumption of computers to well below current levels. Already, a University of Cambridge computer cluster, built to the SKA system design and supported by STFC, is one of the top two most energy efficient supercomputers in the world as ranked by the ``Green 500'' list. The close connection between SKA and the impact on electronic signal processing, computing and big data is one of the reasons why it is a high priority for STFC and why we are taking a strategic lead in the project.

Whilst we don't know exactly how these innovations will affect our daily lives, we can be confident of two things: the discovery science projects that STFC supports will continue to drive innovation in information technology; and the sheer pace of change in that industry means that these innovations will very quickly benefit the daily lives of everyone in the country.

In the UK Government's Autumn Statement of 3/12/2014 it was announced that there will be a further input of £113M to create a Cognitive Computing Research Centre at Daresbury.

Rob Allan 2016-09-20