Hard to compute

Size matters as the US takes the lead in the race to conquer the supercomputer challenge

Remember how, when computers were first developed, they were the size of a large room and no one could imagine having one in their home? Over the decades, the technology shrank in size while ballooning in capacity – now, in 2018, you can pretty much run a whole business empire from a smartphone.

Well, just in case you thought those warehouses full of servers were a thing of the past, meet Summit. This is a supercomputer developed by the US and, as the name suggests, it could be the ultimate in hardware.

Until now, China held the supercomputer crown with TaihuLight, which had a processing power of 93 petaflops (a petaflop is equal to one thousand million million floating-point operations per second). With Summit, the US now boasts a 200-petaflop capability, or 200,000 trillion calculations per second.
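
For anyone who wants to check that arithmetic, here is a quick back-of-the-envelope sketch in Python (purely illustrative, using only the figures quoted above):

# One petaflop is 10^15 floating-point operations per second.
PETA = 10 ** 15
TRILLION = 10 ** 12

summit_flops = 200 * PETA       # Summit's quoted 200-petaflop capability
taihulight_flops = 93 * PETA    # TaihuLight's quoted 93 petaflops

# 200 petaflops expressed in trillions of calculations per second
print(summit_flops / TRILLION)          # 200000.0, i.e. 200,000 trillion
# Rough ratio between the two machines
print(summit_flops / taihulight_flops)  # roughly 2.15 times the previous leader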

This means that Summit could process 30 years’ worth of data in one hour. Potential applications for the supercomputer – which has 4,608 compute servers and is housed at the Oak Ridge National Laboratory in Tennessee – include cancer research, astrophysics and systems biology. Developed in partnership with IBM and Nvidia, Summit was already being used to run a comparative genomics code while it was still being built.

The potential for Summit to contribute to calculation-intensive research is clear, but is the supercomputer race simply a game of one-upmanship? Are the world’s superpowers concentrating more on the size of their technology than on putting it to good use? Let’s hope that this new giant will now start contributing to huge leaps in science, rather than proving to be a false summit.

Are you working on making technology more compact, or do you believe that bigger is better? Get in touch with the team on hello@techtalkshow.co.uk and let us know.
