IBM Goes Linux
IBM reportedly will invest $1bn in Linux and other open source technologies for its Power system servers.
The firm is expected to announce the news at the LinuxCon 2013 conference in New Orleans, pledging to spend $1bn over five years on Linux and related open source technologies.
The software technology will be used on IBM’s Power line of servers, which are based on the chip technology of the same name and used for running large scale systems in data centres.
Previously IBM Power systems have mostly run IBM’s proprietary AIX version of Unix, though some used in high performance computing (HPC) configurations have run Linux.
If true, this would mark the second time IBM has coughed up a $1bn investment in Linux; the firm gave the open source OS the same vote of confidence around 13 years ago.
According to the Wall Street Journal, IBM isn’t investing in Linux to convert its existing AIX customers, but instead Linux will help support data centre applications driving big data, cloud computing and analytics.
“We continue to take share in Unix, but it’s just not growing as fast as Linux,” said IBM VP of Power development Brad McCredie.
The $1bn is expected to go mainly towards facilities and staffing to help Power system users move to Linux, with a new centre opening in France specifically to help manage that transition.
Full details are expected to be announced at LinuxCon later today.
Last month, IBM swallowed Israeli security firm Trusteer to boost its customers’ cyber defences with the company’s anti-hacking technology.
Announcing that it had signed a definitive agreement with Trusteer to create a security lab in Israel, IBM said the lab would be staffed by 200 Trusteer and IBM researchers and focus on mobile and application security, counter-fraud measures and malware detection.
Dell Promises Exascale By 2015
Dell has claimed it will make exascale computing available by 2015, as the firm enters the high performance computing (HPC) market.
Speaking at the firm’s Enterprise Forum in San Jose, Sam Greenblatt, chief architect of Dell’s Enterprise Solutions Group, said the firm will have exascale systems by 2015, ahead of rival vendors. However, he added that development will not be boosted by a doubling in processor performance, saying Moore’s Law is no longer valid and is actually presenting a barrier for vendors.
“It’s not doubling every two years any more, it has flattened out significantly,” he said. According to Greenblatt, the only way firms can achieve exascale computing is through clustering. “We have to design servers that can actually get us to exascale. The only way you can do it is to use a form of clustering, which is getting multiple parallel processes going,” he said.
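As a rough sketch of the clustering arithmetic Greenblatt is describing, the snippet below estimates how many nodes it would take to aggregate an exaflop. The 10 teraflops-per-node figure is a hypothetical assumption chosen purely for illustration, not a Dell specification.

```python
# Back-of-the-envelope estimate of cluster size for exascale computing.
# The per-node performance figure is a hypothetical assumption, not a Dell spec.
EXAFLOP = 1e18        # 10^18 floating-point operations per second
NODE_FLOPS = 10e12    # assumed per-node performance: 10 teraflops (hypothetical)

nodes_needed = EXAFLOP / NODE_FLOPS
print(f"Nodes required at 10 TFLOPS each: {nodes_needed:,.0f}")  # 100,000
```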
Not only did Greenblatt warn that hardware will have to be packaged differently to reach exascale performance, he also said that programmers will need to change their approach. “This is going to be an area that’s really great, but the problem is you never programmed for this area, you programmed to that old Von Neumann machine.”
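To make that programming-model shift concrete, here is a minimal, illustrative sketch of the same job written first as a single sequential loop and then as work farmed out to parallel worker processes, the style Greenblatt alludes to; the function and variable names are our own, not anything Dell presented.

```python
# Illustrative contrast: one sequential pass versus multiple parallel workers.
from multiprocessing import Pool

def partial_sum(chunk):
    # Work each worker does independently on its own slice of the data.
    return sum(chunk)

if __name__ == "__main__":
    data = list(range(1_000_000))

    # Sequential "Von Neumann" style: one processor walks the whole dataset.
    serial_total = sum(data)

    # Clustered style: split the data, hand the pieces to parallel workers,
    # then combine the partial results.
    chunks = [data[i::4] for i in range(4)]
    with Pool(processes=4) as pool:
        parallel_total = sum(pool.map(partial_sum, chunks))

    assert serial_total == parallel_total
```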
According to Greenblatt, data movement will also be cut down, a change that he said will make network latency less of a performance issue. “Things are going to change very dramatically, your data is going to get bigger, processing power is going to get bigger and network latency is going to start to diminish, because we can’t move all this [data] through the pipe,” he said.
Greenblatt’s reference to data sitting closer to the processor is a nod to the increasing volume of data being handled. While HPC networking firms such as Mellanox and Emulex are increasing the bandwidth of their respective switch gear, those increases are being outpaced by the growth in the datasets used by firms deploying analytics workloads and by academic researchers.
Dell’s 2015 projection for the arrival of exascale clusters is at least a few years sooner than those of firms such as Intel, Cray and HP, all of which have put a “by 2020” timeframe on the challenge. However, what Greenblatt did not mention is the projected power efficiency of Dell’s 2015 exascale cluster, something that will be critical to its usability.
Quantum Computing Making Strides
Researchers at the University of Innsbruck in Austria have managed to transfer quantum information from an atom to a photon, a feat being seen as a breakthrough in the development of quantum computers.
According to Humans Invent, the breakthrough allows quantum computers to exchange data at the speed of light along optical fibres. Lead researcher on the project, Tracy Northup, said the method allows quantum information to be mapped faithfully from an ion onto a photon.
Northup’s team used mirrors and lasers in an “ion trap” to produce a single photon from a trapped calcium ion with its quantum state intact. No potential cats were injured in the experiment. The move enables boffins to start to play with thousands of quantum bits rather than just a dozen or so, meaning they can get a computer to do specific tasks, such as factoring large numbers or searching a database, faster.
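As a rough illustration of why more quantum bits matter for the search example, the sketch below compares the number of queries a classical linear scan needs against the standard textbook scaling for Grover's quantum search algorithm; the figures are generic scaling estimates, not results from the Innsbruck work.

```python
# Back-of-the-envelope comparison of classical versus quantum (Grover) search
# over an unstructured collection of N items; textbook scaling for illustration.
import math

for n in (1_000, 1_000_000, 1_000_000_000):
    classical = n / 2                      # expected queries for a linear scan
    grover = (math.pi / 4) * math.sqrt(n)  # iterations for Grover's search
    print(f"N={n:>13,}: classical ~{classical:,.0f}, Grover ~{grover:,.0f}")
```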