High Performance Computing (HPC) is just another term for what we usually call 'supercomputing'. Supercomputers are, essentially, machines that deliver performance far beyond what ordinary computers can provide. Their super capabilities come from a large number of processing units working in coordination on a single problem. There are two broad ways of achieving this: the first is to connect many machines over a network and have a main server distribute the work among them (distributed computing); the second is to cluster as many processing units as you need in one place and make them work cohesively. The latter approach is more common, because connecting nodes spread over a wide area adds both latency and cost.
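To make the 'main server hands out the work' idea concrete, here is a minimal sketch using MPI through the mpi4py library (my own choice of tool, not something this post prescribes): rank 0 splits a list of numbers into chunks, every rank sums its own chunk, and the partial sums are combined back at rank 0.

```python
# distribute_sum.py - run with: mpirun -n 4 python distribute_sum.py
# A minimal sketch of "main node distributes work to workers" using mpi4py.
from mpi4py import MPI

comm = MPI.COMM_WORLD
rank = comm.Get_rank()      # this process's id (0 .. size-1)
size = comm.Get_size()      # total number of processes in the job

if rank == 0:
    # The "main server": build the full problem and cut it into chunks.
    data = list(range(1_000_000))
    chunks = [data[i::size] for i in range(size)]
else:
    chunks = None

# Each rank receives its own chunk of the work...
my_chunk = comm.scatter(chunks, root=0)

# ...does its share of the computation...
partial = sum(my_chunk)

# ...and the partial results are combined back on rank 0.
total = comm.reduce(partial, op=MPI.SUM, root=0)

if rank == 0:
    print(f"total = {total}")   # 499999500000
```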
Many of us might imagine that building a supercomputer is as simple as connecting a lot of processors and controlling them with a master computer. You can indeed do this, but only if you can supply enough power and manage the massive amount of heat the CPUs give off.
In fact, any geek with free time and some money to spare can build a supercomputer in his garage. All you need is the required number of CPUs of the same kind, the software to set up an operating system (Linux is the obvious choice), and the network interconnections between them.
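The first sanity check on such a home-built cluster is simply confirming that every node joins a job. A hedged sketch, again assuming an MPI stack with mpi4py and a hypothetical nodes.txt host list, could be as short as this:

```python
# hello_cluster.py
# Run with: mpirun --hostfile nodes.txt -n 8 python hello_cluster.py
# (nodes.txt is a hypothetical list of your cluster's machines.)
# Each MPI rank reports which machine it landed on, confirming that the
# operating system, network interconnect and MPI stack all cooperate.
import socket
from mpi4py import MPI

comm = MPI.COMM_WORLD
print(f"Hello from rank {comm.Get_rank()} of {comm.Get_size()} "
      f"on node {socket.gethostname()}")
```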
Here is a simple 9-step procedure to build a supercomputer on your own ('simple' being a relative term here).
However, without sophisticated hardware and an operating system tailored to your machine, it is quite unlikely that you will match the performance of today's most powerful supercomputers. And that is not all: today's supercomputers do not necessarily rely on cluster parallelism alone (although most of the machines on the TOP500 list do use it); they also exploit multi-core parallelism, which is a big part of why they perform so much better. Apart from these, there are half a dozen other parallelism techniques that can be applied.
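To contrast with the cluster sketch earlier, multi-core parallelism keeps all the workers on one chip. A minimal illustration using Python's standard multiprocessing module (my choice, purely for illustration) might look like this:

```python
# multicore_sum.py - multi-core parallelism on a single machine.
from multiprocessing import Pool, cpu_count

def partial_sum(bounds):
    """Sum the integers in [start, stop) - one core's share of the work."""
    start, stop = bounds
    return sum(range(start, stop))

if __name__ == "__main__":
    n = 10_000_000
    cores = cpu_count()
    step = n // cores
    # Split [0, n) into one contiguous slice per core.
    slices = [(i * step, n if i == cores - 1 else (i + 1) * step)
              for i in range(cores)]
    with Pool(processes=cores) as pool:
        total = sum(pool.map(partial_sum, slices))
    print(f"{cores} cores -> total = {total}")  # sum of 0..n-1
```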
But about five decades back, in the 1960s, when Seymour Cray built the first supercomputer, supercomputers delivered only a fraction of what today's ordinary computers can achieve. Take the Cray-1, one of the most successful supercomputers in the history of computing: it operated at only 80 MHz, while even mobile phones today run at clock speeds of 1000 MHz and more. Still, your mobile phone isn't a supercomputer, because you obviously cannot simulate cyclone patterns on your smartphone, and that is the kind of thing today's supercomputers are used for.
The dramatic revolution in supercomputing performance came only after innovations in parallel computing and in processor speeds.
The history of supercomputing goes back to the 1960s, when Seymour Cray, the father of supercomputing, built the first-of-its-kind CDC 6600, which delivered a sustained speed of 500 kilo-FLOPS on standard mathematical operations, almost ten times faster than any other computer of its time. He later formed his own company and delivered the CRAY series of supercomputers, which made innovative use of parallelism in processing; the Cray Y-MP designed by Steve Chen, for example, could use eight vector processors at 167 MHz with a peak performance of 333 megaflops per processor. The 1990s saw the emergence of machines with thousands of processors. Twenty-first-century supercomputers have hundreds of thousands of cores and have reached sustained speeds measured in petaflops. The supercomputing world is now changing dramatically, with the top positions switching repeatedly from one machine to another.
Evolution of speed:
The supercomputers of the dino age (the 1960s)
were tens of billions of times slower than today's massive machines. Supercomputer
speeds are measured in FLOPS (FLoating-point Operations Per Second) rather than in
MIPS (Million Instructions Per Second), which is the popular measure for ordinary
systems; FLOPS better reflect calculation speed in scientific applications. The
evolution in FLOPS has been enormous: the first supercomputer, the CDC 6600,
delivered a sustained speed of 500 kiloflops, while today's champion Titan delivers
17.6 petaflops, roughly 35 billion times as fast as the CDC 6600. This has been
coupled with a dramatic reduction in the cost of processing: in 1961 the cost per
GFLOPS was around $1.1 trillion, while in 2012 it is down to about $0.75.
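That speed-up figure is easy to verify with a quick back-of-the-envelope calculation using the numbers quoted above:

```python
# flops_speedup.py - back-of-the-envelope comparison of the figures above.
cdc6600_flops = 500e3        # CDC 6600: ~500 kiloflops sustained
titan_flops   = 17.6e15      # Titan:    17.6 petaflops

speedup = titan_flops / cdc6600_flops
print(f"Titan is roughly {speedup:.2e} times faster")   # ~3.5e+10, i.e. ~35 billion

cost_per_gflops_1961 = 1.1e12   # ~$1.1 trillion per GFLOPS in 1961
cost_per_gflops_2012 = 0.75     # ~$0.75 per GFLOPS in 2012
print(f"Cost per GFLOPS fell by a factor of "
      f"{cost_per_gflops_1961 / cost_per_gflops_2012:.1e}")  # ~1.5e+12
```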
Evolution of power consumption and heat management:
A supercomputer today draws power in megawatts, which is what it takes to feed the
thousands of processors it uses. For instance, China's Tianhe-1A needs 4.04 megawatts
and Titan needs 8.2 megawatts for its operation. Most of this power ends up as heat,
which is why supercomputers generate super heat, and this is where the cooling system
comes in. Earlier supercomputers could use submerged liquid cooling: the Cray-2,
released in 1985, was a four-processor liquid-cooled machine totally immersed in a
tank of Fluorinert, with a Fluorinert "cooling waterfall" forced through the modules
under pressure. The liquid bubbled as the machine operated, which is why the Cray-2
was sometimes called 'bubbles'. Funny, isn't it?
Even now, submerged cooling is sometimes used, but it is impractical in most cases.
The Blue Gene series of supercomputers deliberately uses low-power processors so as
to generate manageable amounts of heat, which is why these machines sit at the top
of the Green500 lists. This also shows that heat generation is a serious issue when
designing the hardware of a supercomputer. To visualize this more clearly, think of
your i5 or i7 laptop: you already know how much heat your laptop can produce, and
now think about a supercomputer with hundreds of thousands of cores (Titan has
560,640). A simple cooling fan is not going to help in this case. IBM's Aquasar, in
fact, uses hot-water cooling, and the heated water is later used to warm the
buildings as well.
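Performance per watt is exactly what the Green500 list ranks machines by; plugging in the Titan figures quoted above gives a rough idea of the metric:

```python
# flops_per_watt.py - the Green500-style efficiency metric, using the figures above.
titan_flops = 17.6e15   # sustained performance, FLOPS
titan_watts = 8.2e6     # power draw, watts (8.2 MW)

efficiency = titan_flops / titan_watts
print(f"Titan: ~{efficiency / 1e9:.1f} GFLOPS per watt")   # roughly 2.1 GFLOPS/W
```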
[Image: the Cray-2, sometimes called 'bubbles']
Applications:
Supercomputers were initially meant for weather forecasting, aerodynamic research, brute-force code breaking and the like; with the achievement of higher performance capabilities, they are now used for even more complex tasks such as molecular dynamics simulations and quantum statistical calculations.