Nvidia NVLINK 2.0 Going In IBM Servers
On Monday, PCWorld reported that the first servers expected to use Nvidia’s second-generation NVLINK 2.0 technology will be arriving sometime next year using IBM’s upcoming Power9 chip family.
IBM launched its Power8 lineup of superscalar symmetric multiprocessors back in August 2013 at the Hot Chips conference, and the first systems became available in August 2014. The announcement was significant because it signaled the beginning of a continuing partnership between IBM and Nvidia to develop GPU-accelerated IBM server systems, beginning with the Tesla K40 GPU.
The result was an HPC “tag-team” in which IBM’s Power8 architecture, a 12-core chip with 96MB of embedded memory, would eventually host Nvidia’s next-generation Pascal GPUs, which debuted in April 2016 at the company’s GPU Technology Conference.
NVLINK, first announced in March 2014, uses a proprietary High-Speed Signaling interconnect (NVHS) developed by Nvidia. The company says NVHS transmits data over a differential pair running at up to 20Gbps, so eight of these differential 20Gbps connections will form a 160Gbps “Sub-Link” that sends data in one direction. Two sub-links—one for each direction—will form a 320Gbps, or 40GB/s bi-directional “Link” that connects processors together in a mesh framework (GPU-to-GPU or GPU-to-CPU).
NVLINK lanes upgrade from 20Gbps to 25Gbps
IBM is projecting its Power9 servers to be available beginning in the middle of 2017, with PCWorld reporting that the new processor lineup will include support for NVLINK 2.0 technology. Each NVLINK lane will communicate at 25Gbps, up from 20Gbps in the first iteration. With eight differential lanes, this translates to a 400Gbps (50GB/s) bi-directional link between CPUs and GPUs, or about 25 percent more performance if the information is correct.
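The arithmetic behind these figures is easy to check. A quick sketch in plain Python (no NVLINK-specific API is implied; the numbers come straight from the article):

```python
# A sub-link is 8 differential lanes sending in one direction; a link
# pairs two sub-links, one per direction.

def link_bandwidth_gbps(lane_rate_gbps, lanes=8, directions=2):
    """Total bi-directional link bandwidth in Gbps."""
    return lane_rate_gbps * lanes * directions

nvlink1 = link_bandwidth_gbps(20)       # 320 Gbps -> 40 GB/s per link
nvlink2 = link_bandwidth_gbps(25)       # 400 Gbps -> 50 GB/s per link

print(nvlink1 / 8, nvlink2 / 8)         # Gbps -> GB/s: 40.0 50.0
print((nvlink2 - nvlink1) / nvlink1)    # 0.25 -> the quoted 25 percent uplift
```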
NVLINK 2.0 capable servers arriving next year
Meanwhile, Nvidia has yet to release any NVLINK 2.0-capable GPUs, but a Korean-language company presentation slide suggests that the technology will first appear in Volta GPUs, which are also scheduled for release sometime next year. We were originally under the impression that the new GPU architecture would arrive in 2018, as per Nvidia’s roadmap. But a source hinted last month that Volta would be getting the 16nm FinFET treatment and may show up in roughly the same timeframe as AMD’s HBM 2.0-powered Vega sometime in 2017. After all, it is easier for Nvidia to launch sooner if the new architecture is built on the same node as the Pascal lineup.
Still ahead of PCI-Express 4.0
Nvidia claims that PCI-Express 3.0 (32GB/s with x16 bandwidth) significantly limits a GPU’s ability to access a CPU’s memory system and is about “four to five times slower” than its proprietary standard. Even PCI-Express 4.0, releasing later in 2017, is limited to 64GB/s on a slot with x16 bandwidth.
To put this in perspective, Nvidia’s Tesla P100 Accelerator uses four 40GB/s NVLINK ports to connect clusters of GPUs and CPUs, for a total of 160GB/s of bandwidth.
With a generational NVLINK upgrade from 40GB/s to 50GB/s bi-directional links, the company could release a future Volta-based GPU with four 50GB/s NVLINK ports, totaling 200GB/s of bandwidth, well above and beyond the specifications of the new PCI-Express standard.
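Putting the article’s bandwidth figures side by side (all values are bi-directional GB/s; the 200GB/s Volta number is a projection, not a shipped product):

```python
# All figures are bi-directional GB/s, taken from the article.
NVLINK1_LINK = 40     # per-link, first-generation NVLINK
NVLINK2_LINK = 50     # per-link, NVLINK 2.0
PCIE3_X16 = 32        # PCI-Express 3.0, x16 slot
PCIE4_X16 = 64        # PCI-Express 4.0, x16 slot

p100_total = 4 * NVLINK1_LINK    # 160 GB/s, as on the Tesla P100
volta_total = 4 * NVLINK2_LINK   # 200 GB/s, the projected Volta figure

print(p100_total / PCIE3_X16)    # 5.0   -> the "four to five times" claim
print(volta_total / PCIE4_X16)   # 3.125 -> still well ahead of PCIe 4.0
```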
Courtesy-Fud
IBM’s Watson To Power Self-Driving Cars
Olli, a self-driving passenger shuttle running IBM Watson Internet of Things technology, made its debut in a shopping area of the Washington, D.C. suburbs.
While some “fine-tuning” of the self-driving features is still needed, passengers should, by this fall, be able to ride around and speak directions to Olli on the private roads at the National Harbor shopping and entertainment area on the Maryland side of the Potomac River, according to a spokeswoman for Local Motors, Olli’s designer.
The vision is that Olli will be used in all kinds of venues, such as crowded urban areas, college and corporate campuses and theme parks. It could also become the “last mile” connection from a subway or bus stop to a job site. Miami-Dade County has ordered two of the vehicles for a pilot project there, said the Local Motors spokeswoman, Jacqueline Keidel.
Olli didn’t give any rides to reporters and bystanders at its Thursday debut, but the vehicle dropped off Local Motors CEO John Rogers with engineers standing by to offer assistance if needed.
“Olli offers a smart, safe and sustainable transportation solution that is long overdue,” Rogers said in a statement, adding that Olli with Watson “acts as our entry into the world of self-driving vehicles.”
Olli is the first vehicle to use cloud-based cognitive computing from IBM Watson Internet of Things to analyze and learn from 30 sensors embedded in the vehicle. Four Watson developer APIs were used that allow Olli to interact with passengers: speech to text, natural language classifier, entity extraction and text to speech.
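The four stages compose into a simple loop. The sketch below uses toy stand-ins, not the real Watson Developer Cloud APIs, purely to show how speech-to-text, classification, entity extraction and text-to-speech would chain together:

```python
# Toy stand-ins for the four stages described above. These are NOT the
# real Watson APIs; they only illustrate how the pipeline composes.

def speech_to_text(audio: bytes) -> str:
    return audio.decode()                  # toy: "audio" is just text bytes

def classify_intent(text: str) -> str:
    # toy natural language classifier
    return "navigate" if "take me" in text.lower() else "chat"

def extract_entities(text: str) -> dict:
    words = text.split()
    # toy entity extraction: any capitalized word after the first
    # is treated as a place name
    places = [w for w in words[1:] if w[:1].isupper()]
    return {"place": places[-1]} if places else {}

def text_to_speech(reply: str) -> bytes:
    return reply.encode()                  # toy: "speak" by encoding

def handle_utterance(audio: bytes) -> bytes:
    text = speech_to_text(audio)
    intent = classify_intent(text)
    entities = extract_entities(text)
    reply = f"{intent}: {entities.get('place', 'unknown')}"
    return text_to_speech(reply)

print(handle_utterance(b"Take me to Harbor"))
```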
Since Watson is web-enabled, Olli will also be able to answer questions about popular nearby restaurants or historical sites, at least according to how Local Motors and IBM have described the vehicle’s capabilities.
Green said IBM will expand its Watson research by helping develop and build additional Ollis at Local Motors’ headquarters near Phoenix and at IBM Watson IoT’s AutoLab, an incubator for cognitive mobility applications. “We have a long term vision with Watson,” Keidel added.
Courtesy-http://www.thegurureview.net/aroundnet-category/ibms-watson-powers-self-driving-shuttle-olli-debuts-in-d-c.html
IBM Acquires EZSource
The digital transformation revolution is already in full swing, but for companies with legacy mainframe applications, it’s not always clear how to get in the game. IBM announced an acquisition that could help.
The company will acquire Israel-based EZSource, it said, in the hopes of helping developers “quickly and easily understand and change mainframe code.”
EZSource offers a visual dashboard that’s designed to ease the process of modernizing applications. Essentially, it exposes application programming interfaces (APIs) so that developers can focus their efforts accordingly.
Developers must often manually check thousands or millions of lines of code, but EZSource’s software instead alerts them to the number of sections of code that access a particular entity, such as a database table, so they can check them to see if updates are needed.
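The kind of automation described can be illustrated in a few lines: scan source for references to a named entity (a database table, say) and report the hits instead of reading every line by hand. This is a sketch of the idea, not EZSource’s actual analysis:

```python
import re

def find_references(source_lines, entity):
    """Return (line_number, line) pairs that reference the given entity."""
    pattern = re.compile(r"\b" + re.escape(entity) + r"\b")
    return [(n, line.strip())
            for n, line in enumerate(source_lines, start=1)
            if pattern.search(line)]

# Hypothetical snippet of legacy code touching a CUSTOMERS table.
code = [
    "EXEC SQL SELECT NAME FROM CUSTOMERS",
    "MOVE 1 TO COUNTER",
    "EXEC SQL UPDATE CUSTOMERS SET NAME = :NEW-NAME",
]

hits = find_references(code, "CUSTOMERS")
print(len(hits))   # 2 sections of code access the CUSTOMERS table
```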
IBM’s purchase is expected to close in the second quarter of 2016. Terms of the deal were not disclosed.
Sixty-eight percent of the world’s production IT workloads run on mainframes, IBM said, amounting to roughly 30 billion business transactions processed each day.
“The mainframe is the backbone of today’s businesses,” said Ross Mauri, general manager for IBM z Systems. “As clients drive their digital transformation, they are seeking the innovation and business value from new applications while leveraging their existing assets and processes.”
EZSource will bring an important capability to the IBM ecosystem, said Patrick Moorhead, president and principal analyst with Moor Insights & Strategy.
“While IBM takes advantage of a legacy architecture with z Systems, it’s important that the software modernizes, and that’s exactly what EZSource does,” Moorhead said.
Large organizations still run a lot of mainframe systems, particularly within the financial-services sector, noted analyst Frank Scavo, president of Computer Economics.
“As these organizations roll out new mobile, social and other digital business experiences, they have no choice but to expose these mainframe systems via APIs,” Scavo said.
But in many large organizations, skilled mainframe developers are in short supply — especially those who really understand these legacy systems, he added.
“Anything to increase the productivity of these developers will go a long way to ensuring the success of their digital business initiatives,” Scavo said. “Automation tools to discover, expose and analyze the inner workings of these legacy apps are really needed.”
It’s a smart move for IBM, he added.
Source- http://www.thegurureview.net/computing-category/looking-to-transform-mainframe-business-ibm-acquires-ezsource.html
IBM’s Watson Goes Cybersecurity
IBM Security has announced a new year-long research project through which it will partner with eight universities to help train its Watson artificial intelligence system to tackle cybercrime.
Knowledge about threats is often hidden in unstructured sources such as blogs, research reports and documentation, said Kevin Skapinetz, director of strategy for IBM Security.
“Let’s say tomorrow there’s an article about a new type of malware, then a bunch of follow-up blogs,” Skapinetz explained. “Essentially what we’re doing is training Watson not just to understand that those documents exist, but to add context and make connections between them.”
Over the past year, IBM Security’s own experts have been working to teach Watson the “language of cybersecurity,” he said. That’s been accomplished largely by feeding it thousands of documents annotated to help the system understand what a threat is, what it does and what indicators are related, for example.
“You go through the process of annotating documents not just for nouns and verbs, but also what it all means together,” Skapinetz said. “Then Watson can start making associations.”
Now IBM aims to accelerate the training process. This fall, it will begin working with students at universities including California State Polytechnic University at Pomona, Penn State, MIT, New York University and the University of Maryland at Baltimore County along with Canada’s universities of New Brunswick, Ottawa and Waterloo.
Over the course of a year, the program aims to feed up to 15,000 new documents into Watson every month, including threat intelligence reports, cybercrime strategies, threat databases and materials from IBM’s own X-Force research library. X-Force represents 20 years of security research, including details on 8 million spam and phishing attacks and more than 100,000 documented vulnerabilities.
Watson’s natural language processing capabilities will help it make sense of those reams of unstructured data. Its data-mining techniques will help detect outliers, and its graphical presentation tools will help find connections among related data points in different documents, IBM said.
Ultimately, the result will be a cloud service called Watson for Cyber Security that’s designed to provide insights into emerging threats as well as recommendations on how to stop them.
Source-http://www.thegurureview.net/computing-category/ibms-watson-to-get-schooled-on-cybersecurity.html
Groupon Starts Fight With IBM
May 16, 2016 by admin
The online marketplace Groupon Inc has filed a lawsuit against IBM Corp, accusing it of infringing a patent on technology that helps businesses solicit customers based on where those customers are at a given moment.
Groupon filed its lawsuit on Monday with the federal court in its hometown of Chicago, two months after IBM accused Groupon of patent infringement in a separate lawsuit.
“IBM is trying to shed its status as a dial-up-era dinosaur” by infringing the rights of “current” technology companies such as Groupon, according to Groupon spokesman Bill Roberts.
The latest lawsuit concerns IBM’s WebSphere Commerce platform, which Groupon said lets merchants send messages to customers with GPS-enabled devices based on their real-time locations, and their use of social media including Facebook.
Groupon said the platform infringes a December 2010 patent, and that it deserves royalties based on the “billions of dollars” of revenue that Armonk, New York-based IBM has received through its infringement.
“IBM, a relic of once-great 20th Century technology firms, has now resorted to usurping the intellectual property of companies born this millennium,” Groupon said in its lawsuit.
On March 2, IBM accused Groupon in a federal lawsuit in Delaware of infringing four patents, including two related to Prodigy, a late-1980s forerunner to the Internet.
“Over the past three years, IBM has attempted to conclude a fair and reasonable patent license agreement with Groupon, and we are disappointed that Groupon is seeking to divert attention from its patent infringement by suing,” said IBM spokesman Shelton.
The Chicago case is Groupon Inc v International Business Machines Corp, U.S. District Court, Northern District of Illinois, No. 16-05064. The Delaware case is International Business Machines Corp v Groupon Inc, U.S. District Court, District of Delaware, No. 16-00122.
Source-http://www.thegurureview.net/aroundnet-category/groupon-gets-into-patent-fight-with-ibm.html
Are Quantum Computers On The Horizon?
Massachusetts Institute of Technology (MIT) and Austria’s University of Innsbruck claim to have put together a working quantum computer capable of solving a simple mathematical problem.
The architecture they have devised ought to be relatively easy to scale, and could therefore form the basis of workable quantum computers in the future – with a bit of “engineering effort” and “an enormous amount of money”, according to Isaac Chuang, professor of physics, electrical engineering and computer science at MIT.
Chuang’s team has put together a prototype comprising the first five quantum bits (or qubits) of a quantum computer. This is being tested on mathematical factoring problems, which could have implications for applications that use factoring as the basis for encryption to keep information, including credit card details, secure.
The proof-of-concept has been applied only to the number 15, but the researchers claim that this is the “first scalable implementation” of quantum computing to solve Shor’s algorithm, a quantum algorithm that can quickly calculate the prime factors of large numbers.
“The team was able to keep the quantum system stable by holding the atoms in an ion trap, where they removed an electron from each atom, thereby charging it. They then held each atom in place with an electric field,” explained MIT.
Chuang added: “That way, we know exactly where that atom is in space. Then we do that with another atom, a few microns away – [a distance] about 100th the width of a human hair.
“By having a number of these atoms together, they can still interact with each other because they’re charged. That interaction lets us perform logic gates, which allow us to realise the primitives of the Shor factoring algorithm. The gates we perform can work on any of these kinds of atoms, no matter how large we make the system.”
Chuang is a pioneer in the field of quantum computing. He designed a quantum computer in 2001 based on one molecule that could be held in ‘superposition’ and manipulated with nuclear magnetic resonance to factor the number 15.
The results represented the first experimental realisation of Shor’s algorithm. But the system wasn’t scalable as it became more difficult to control as more atoms were added.
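The classical skeleton of Shor’s algorithm for N = 15 fits in a few lines. The period-finding step below is brute force; that is precisely the part a quantum computer replaces with an exponentially faster routine:

```python
from math import gcd

def find_period(a, n):
    """Order of a modulo n, found classically by brute force.
    This is the step the quantum hardware accelerates."""
    x, r = a % n, 1
    while x != 1:
        x = (x * a) % n
        r += 1
    return r

def shor_classical(n, a):
    """Classical post-processing of Shor's algorithm for a chosen base a."""
    assert gcd(a, n) == 1           # a must be coprime to n
    r = find_period(a, n)
    if r % 2:
        return None                 # odd period: retry with another a
    f1 = gcd(pow(a, r // 2) - 1, n)
    f2 = gcd(pow(a, r // 2) + 1, n)
    return f1, f2

print(shor_classical(15, 7))        # (3, 5) -- the prime factors of 15
```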
However, the architecture that Chuang and his team have put together is, he believes, highly scalable and will enable the team to build quantum computing devices capable of solving much bigger mathematical factors.
“It might still cost an enormous amount of money to build, [and] you won’t be building a quantum computer and putting it on your desktop anytime soon, but now it’s much more an engineering effort and not a basic physics question,” said Chuang.
In other quantum computing news this week, the UK government has promised £200m to support engineering and physical sciences PhD students and fuel UK research into quantum technologies, although most of the cash will be spent on Doctoral Training Partnerships rather than trying to build workable quantum computing prototypes.
Courtesy-TheInq
Seagate Goes 8TB For Surveillance
Seagate has become the first hard drive company to create an 8TB unit aimed specifically at the surveillance market, targeting system integrators, end users and system installers.
The Seagate Surveillance HDD, as those wags in marketing have named it, is the highest capacity of any specialist drive for security camera set-ups, and Seagate cites its main selling points as maximizing uptime while removing the need for excess support.
“Seagate has worked closely with the top surveillance manufacturers to evolve the features of our Surveillance HDD products and deliver a customized solution that has precisely matched market needs in this evolving space for the last 10 years,” said Matt Rutledge, Seagate’s senior vice president for client storage.
“With HD recordings now standard for surveillance applications, Seagate’s Surveillance HDD product line has been designed to support these extreme workloads with ease and is capable of a 180TB/year workload, three times that of a standard desktop drive.
“It also includes surveillance-optimized firmware to support up to 64 cameras and is the only product in the industry that can support surveillance solutions, from single-bay DVRs to large multi-bay NVR systems.”
The 3.5in drive is designed to run 24/7 and is able to capture 800 hours of high-definition video from up to 64 cameras simultaneously, making it ideal for shopping centers, urban areas, industrial complexes and anywhere else you need to feel simultaneously safe and violated. Its capacity will allow 6PB in a 42U rack.
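The capacity claims are easy to sanity-check in decimal units, as drive makers use them (1TB = 10^12 bytes):

```python
# Sanity-checking the article's figures.
DRIVE_TB = 8        # one Surveillance HDD
RACK_PB = 6         # claimed capacity of a 42U rack

drives_per_rack = (RACK_PB * 1000) / DRIVE_TB
print(drives_per_rack)            # 750 drives to reach 6PB in a 42U rack

# Workload rating: 180TB/year, claimed to be three times a desktop drive's
surveillance_tb_year = 180
print(surveillance_tb_year / 3)   # 60 TB/year implied desktop rating
```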
Included in the deal is the Seagate Rescue Service, capable of restoring lost data in two weeks if circumstances permit, and sold with end users in mind for whom an IT support infrastructure is either non-existent or off-site. The service has a 90 percent success rate and is available as part of the drive cost for the first three years.
Seagate demonstrated the drive today at the China Public Security Expo. Where better than the home of civil liberty infringement to show off the new drive?
Earlier this year, Seagate announced a new co-venture with SSD manufacturer Micron, which will come as a huge relief after the recent merger announcement between WD and SanDisk.
Courtesy-http://www.thegurureview.net/computing-category/seagate-goes-8tb-for-surveillance.html
Oracle’s M7 Processor Has Security On Silicon
Oracle started shipping systems based on its latest Sparc M7 processor, which the firm said will go a long way to solving the world’s online security problems by building protection into the silicon.
The Sparc M7 chip was originally unveiled at last year’s Openworld show in San Francisco, and was touted at the time as a Heartbleed-prevention tool.
A year on, and Oracle announced the Oracle SuperCluster M7, along with Sparc T7 and M7 servers, at the show. The servers are all based on the 32-core, 256-thread M7 microprocessor, which offers Security in Silicon for better intrusion protection and encryption, and SQL in Silicon for improved database efficiency.
Along with built-in security, the SuperCluster M7 packs compute, networking and storage hardware with virtualisation, operating system and management software into one giant cloud infrastructure box.
Oracle CTO Larry Ellison was on hand at Openworld on Tuesday to explain why the notion of building security into the silicon is so important.
“We are not winning a lot of these cyber battles. We haven’t lost the war but we’re losing a lot of the battles. We have to rethink how we deliver technology especially as we deliver vast amounts of data to the cloud,” he told delegates.
Ellison said that Oracle’s approach to this cyber war is to take security as low down in the stack as possible.
“Database security is better than application security. You should always push security as low in the stack as possible. At the bottom of the stack is silicon. If all of your data in the database is encrypted, that’s better than having an application code that encrypts your data. If it’s in the database, every application that uses that database inherits that security,” he explained.
“Silicon security is better than OS security. Then every operating system that runs on that silicon inherits that security. And the last time I checked, even the best hackers have not figured out a way to download changes to your microprocessor. You can’t alter the silicon, that’s really tricky.”
Ellison’s big idea is to take software security features out of operating systems, VMs and even databases in some cases – because software can be changed – and instead push them into the silicon, which can’t be. He is also urging for security to be switched on as default, without an option to turn it back off again.
“The security features should always be on. We provide encryption in our databases but it can be switched off. That is a bad idea. There should be no way to turn off encryption. The idea of being able to turn on and off security features makes no sense,” he said.
Ellison referred back to a debate that took place at Oracle when it first came up with its backup system – should the firm have only encrypted backups. “We did a customer survey and customers said no, we don’t want to pay the performance penalty in some cases,” he recalled. “In that case customer choice is a bad idea. Maybe someone will forget to turn on encryption when it should have been turned on and you lose 10 million credit cards.”
The Sparc M7 is basically Oracle’s answer to this dire security situation. Ellison said that while the M7 has lots of software features built into the silicon, the most “charismatic” of these is Silicon Secured Memory, which is “deceptively simple” in how it works.
“Every time a computer program asks for memory, say you ask for 8MB of memory, we compute a key and assign this large number to that 8MB of memory,” he explained. “We take those bits and we lock that memory. We also assign that same number to the program. Every time the program accesses memory, we check that number to make sure it’s the memory you allocated earlier. That compare is done by the hardware.”
If a program tries to access memory belonging to another program, the hardware detects a mismatch and raises a signal, flagging up a possible breach or bug.
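Ellison’s description amounts to a tag-and-check scheme. The toy model below is a software analogy of the concept only, not how the M7 is implemented; on the chip, the key comparison happens in hardware on every memory access:

```python
import secrets

class TaggedMemory:
    """Toy model of Silicon Secured Memory: each allocation gets a
    random key, and every access must present the matching key."""

    def __init__(self):
        self._regions = {}      # address -> (key, buffer)
        self._next_addr = 0

    def allocate(self, size):
        key = secrets.randbits(32)            # the "large number"
        addr = self._next_addr
        self._next_addr += size
        self._regions[addr] = (key, bytearray(size))
        return addr, key                      # the program keeps both

    def access(self, addr, key):
        stored_key, buf = self._regions[addr]
        if key != stored_key:                 # hardware-style compare
            raise MemoryError("key mismatch: possible breach or bug")
        return buf

mem = TaggedMemory()
addr, key = mem.allocate(8)
mem.access(addr, key)                  # OK: keys match
try:
    mem.access(addr, key ^ 1)          # wrong key, like a Venom-style write
except MemoryError as e:
    print("blocked:", e)
```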
“We put always-on memory intrusion detection into the silicon. We’re always looking for Heartbleed and Venom-like violations. You cannot turn it off,” the CTO warned.
“We’ve also speeded up encryption and decompression, which is kind of related to encryption. It runs at memory speed; there’s zero cost in doing that. We turn it on, you can’t turn it off, it’s on all the time. It’s all built into the M7.”
Ellison claimed that running M7-based systems will stop threats like Heartbleed and Venom in their tracks.
“The way Venom worked, the floppy disc driver concealed this code. It’s the worst kind of situation, you’re writing into memory you’re not supposed to. You’re writing computer instructions into the memory and you’ve just taken over the whole computer,” he explained. “You can steal and change data. M7 – the second we tried to write that code into memory that didn’t belong to that program, where the keys didn’t match, that would have been detected real-time and that access would have been foiled.”
All well and good, except for the fact that hardly any current computer systems run on the M7 processor. Ellison claimed that even if only three or four percent of the servers in the cloud an organisation uses have this feature, the organisation will be protected, as those servers provide the early warning needed to deal with the issue across non-M7 systems.
“You don’t have to replace every microprocessor, you just have to replace a few so you get the information real-time,” he added.
“You’ll see us making more chips based on security, to secure our cloud and to sell to people who want to secure their clouds or who want to have secure computers in their datacentre. Pushing security down into silicon is a very effective way to do that and get ahead of bad guys.”
SuperCluster M7 and Sparc M7 servers are available now. Pricing has not been disclosed but based on normal Oracle hardware costs, expect to dig deep to afford one.
Source-http://www.thegurureview.net/computing-category/oracles-new-m7-processor-has-security-on-silicon.html
Can IBM Beat Moore’s Law?
Big Blue researchers have discovered a way to replace silicon semiconductors with carbon nanotube transistors, and they think the development will push the industry past the limits of Moore’s law.
IBM said its researchers successfully shrunk transistor contacts in a way that didn’t limit the power of carbon nanotube devices. The chips could be smaller and faster and significantly surpass what’s possible with today’s silicon semiconductors.
The chips are made from carbon nanotubes, which consist of single atomic sheets of carbon rolled up into tubes. This means that high-performance computers may well be capable of analysing big data faster, and that the battery life and power of mobile and connected devices will be better. The advance may enable cloud-based data centres to provide more efficient services, IBM claims.
Moore’s law, which for years has seen the semiconductor industry double the processing power of chips every 24 months, is starting to reach the limits of physics when it comes to silicon chips. This could mean a slowing of significant computing performance boosts unless someone comes up with something fast.
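The doubling cadence compounds quickly, which is why hitting its physical limits matters. A quick illustration of the 24-month rhythm the article cites:

```python
# Performance multiplier after n years of doubling every 24 months.
def moore_multiplier(years, doubling_months=24):
    return 2 ** (years * 12 / doubling_months)

print(moore_multiplier(10))   # 32.0   -> five doublings in a decade
print(moore_multiplier(20))   # 1024.0 -> ten doublings in two decades
```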
IBM researchers claim to have proved that carbon nanotube transistors can work as switches at widths 10,000 times thinner than a human hair, and less than half the size of the most advanced silicon technology.
The latest research has overcome “the other major hurdle in incorporating carbon nanotubes into semiconductor devices which could result in smaller chips with greater performance and lower power consumption,” IBM said.
Electrons found in carbon transistors move more efficiently than those that are silicon-based, even as the extremely thin bodies of carbon nanotubes offer more advantages at the atomic scale, IBM says.
The new research jump-starts the move to a post-silicon future and begins to pay off the $3 billion chip research and development investment that IBM announced in 2014.
Source-http://www.thegurureview.net/computing-category/can-ibm-beat-moores-law.html
Oracle’s New Processor Goes For The Cheap
Oracle is looking to expand the market for its Sparc-based servers with a new, low-cost processor which it curiously called Sonoma.
The company isn’t saying yet when the chip will be in the shops but the spec shows that could become a new rival for Intel’s Xeon chips and make Oracle’s servers more competitive.
Sonoma is named after a place where they make cheap, terrible Californian wine, and Oracle is aiming the chip at Sparc-based servers at “significantly lower price points” than today’s.
This means that companies can use them for smaller, less critical applications.
Oracle has not done much with its Sparc line-up for a couple of years, and Sonoma was one of a few new chips planned. The database maker will update its Sparc T5, used in its mid-range systems, and its high-end Sparc M7. The technology is expected to filter down to the lower-tier Sonoma servers.
The Sparc M7 will have technologies for encryption acceleration and memory protection built into the chip. It will include coprocessors to speed up database performance.
According to IDG, Sonoma will take those same technologies and bring them down to lower price points, meaning people can use them in cloud computing and for smaller applications.
Oracle didn’t talk about prices or say how much cheaper the new Sparc systems will be, and it could potentially be years before Sonoma comes to market.