Early Parallel Computers

This page contains information about several early parallel computers. We distinguish them from "clusters" because they do not have separate stand-alone nodes.

Meiko M10

Meiko Scientific Ltd. was formed in 1985 by Miles Chesney, David Alden, Eric Barton, Roy Bottomley, James Cownie and Gerry Talbot, formerly of INMOS, a British company that manufactured the Transputer CPU. In 1986 they produced a system based on 32-bit T414 transputers, known as the Meiko Computing Surface. The chassis could support a range of plug-in boards for computation or graphics processing.

A Meiko M10 was used at Daresbury c.1986 by the Transputer Initiative to investigate general-purpose parallel computing to support scientific research. Transputers were also used in dedicated systems to control experiments and collect data. This M10 had a mixture of T414 and T800 transputers, the latter on the dedicated 4-processor Mk009 board. The system had a total of 13 T800 processors. It was programmed using the Occam-2 and Fortran languages, together with Fortnet (written by R.J. Allan).

Meiko M60

The M60 was a larger chassis from Meiko Ltd., c.1990. The Mk086 parallel computer boards had 4x T800 transputers and 2x Intel i860 vector accelerators. Scalar mathematical operations were performed on the transputers and vector operations on the i860s, which gave added performance. However, these boards were expensive and hard to program. The M60 at Daresbury had 10x i860 processors and 4x T800s with a total of 7 MB of memory. It was programmed using CSTools, C and Fortran.

See also Jim Austin's Computer Museum.

Intel iPSC/2

Another era started with the Intel (USA) iPSC and i860 parallel computers from 1987. The British transputer market could not compete with the performance of American and Japanese rivals using less specialised RISC processors. Commodity processors were engineered for the mass market and so could be designed and produced quickly.

Our iPSC/2, a second-generation Intel hypercube, had 32x sx/vx nodes provided by the SERC Science Board in 1988. Each node had an 80386 plus Weitek 1167 floating-point processor set and an AMD vx vector co-processor, with a total of 5 MB of memory split between two processor boards (4 MB scalar and 1 MB vector memory). A further 32x sx nodes and a concurrent I/O system with 1.5 GB of disc were funded by ICI plc; the system was later reduced again to 32 nodes to fund ICI's stake in the iPSC/860. It was programmed using NX-2, C and Fortran. There was a total of 128 MB of distributed memory.
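For illustration, the flavour of NX-2 programming can be sketched in C. The calls below follow the classic Intel NX interface (mynode, numnodes, csend, crecv), but the header name and exact signatures are assumptions rather than being taken from this machine's documentation; the sketch has node 0 gather one value from every other node.

    /* Sketch of NX-style message passing: node 0 gathers one double
       from every other node in the cube.  Call names follow the classic
       Intel NX interface; the header and exact signatures are assumed. */
    #include <cube.h>             /* assumed NX header on iPSC systems */

    #define RESULT_TYPE 1L        /* application-chosen message type tag */

    void gather_results(void)
    {
        long me    = mynode();    /* this node's ID within the cube */
        long nodes = numnodes();  /* number of nodes allocated      */
        double local = (double)me;   /* stand-in for a computed result */

        if (me == 0) {
            double value;
            long src;
            for (src = 1; src < nodes; src++) {
                crecv(RESULT_TYPE, (char *)&value, sizeof value); /* blocking receive */
                /* ... accumulate value ... */
            }
        } else {
            /* blocking send to node 0; the final argument is the process type */
            csend(RESULT_TYPE, (char *)&local, sizeof local, 0L, 0L);
        }
    }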

The system became obsolete and too expensive to maintain in 1993; it was, however, still being used in 1994 for system code development.

Intel iPSC/860

The Intel iPSC/860 was a RISC-based distributed-memory system launched in 1990. It had 128 processing elements connected in a hypercube topology. The maximum dimension of the hypercube was 7. An eighth connection at every node was used for the concurrent I/O system. Each node element consisted of an Intel i860 processor with a 32-bit ALU (arithmetic logic unit) along with a 64-bit FPU (floating-point unit) that was itself built in three parts: an adder, a multiplier and a graphics processor. The Intel i860 had separate pipelines for the ALU, floating-point adder and multiplier, and could issue up to three operations per clock.
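The hypercube wiring itself is simple to state: in a cube of dimension d, two nodes are directly connected exactly when their binary node IDs differ in one bit. A minimal C sketch of this neighbour rule for the machine's maximum dimension of 7 (128 nodes):

    /* Hypercube neighbours: in a d-dimensional cube, node i is wired to
       the d nodes whose IDs differ from i in exactly one bit. */
    #include <stdio.h>

    #define DIM 7                /* iPSC/860 maximum: 2^7 = 128 nodes */

    int main(void)
    {
        unsigned node = 42;      /* example node ID in the range 0..127 */
        int bit;
        for (bit = 0; bit < DIM; bit++) {
            unsigned neighbour = node ^ (1u << bit);  /* flip one address bit */
            printf("link %d: node %u <-> node %u\n", bit, node, neighbour);
        }
        return 0;
    }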

Our system had 64x Intel i860 nodes and a total of 1 GB of distributed memory. Initially 32 nodes, each with 8 MB of memory, were bought with Science Board funding and money from the trade-in of ICI's iPSC/2 nodes. The system was installed in June 1990, upgraded to 16 MB of memory per node, and made generally available in Jan'1992. It was upgraded to 64 nodes by the Advisory Board to the Research Councils (ABRC) Supercomputer Management Committee (SMC) in 1993 following the first successful year of a national peer-reviewed service.

In addition to the processing nodes, a number of additional processors were provided to handle input and output to a collection of disks. Initially 4 disks of 750 MB each were installed, a configuration that was later upgraded to 8 disks. When the machine was extended to 64 nodes, a simultaneous disk upgrade replaced all 8 disks with 1.5 GB disks with improved performance and reliability (as well as doubling the capacity). The I/O subsystem also included an Ethernet port so that it could be accessed directly from the network using the ftp protocol.

[Photograph: Intel iPSC computers]

IBM SP2

The IBM SP2 parallel computer perhaps marks the transition to cluster technology, as it is composed of a number of separate nodes in a tower connected by a high-speed network. However, these nodes could not be purchased or used as stand-alone servers.

It originally had 2x "thick" nodes and 14x "thin" TN2 nodes; the TN2 nodes had Power2 processors running at 66 MHz. The thin nodes were upgraded in Aug'1997 to 24x TN5 P2SC nodes running at 120 MHz, each with 256 MB of memory.

These were in turn replaced in 1999 by 8x Winterhawk-II "wide" nodes, each a quad-SMP node with 4x Power3 CPUs running at 375 MHz.
