Cisco Goes To The Cloud
April 4, 2014 by admin
Filed under Around The Net
Comments Off on Cisco Goes To The Cloud
Cisco Systems Inc will offer cloud computing services, pledging to spend $1 billion over the next two years to make a foray into a market currently dominated by the world’s biggest online retailer Amazon.com Inc, the Wall Street Journal reported.
Cisco said it will spend the amount to build data centers to help run the new service called Cisco Cloud Services, the Journal reported.
Cisco, which mainly deals in networking hardware, wants to take advantage of companies’ desire to rent computing services rather than buying and maintaining their own machines.
Enterprise hardware spending is dwindling across the globe as companies cope with shrinking budgets, slowing or uncertain economies and a fundamental migration to cloud computing, which reduces demand for equipment by outsourcing data management and computing needs.
“Everybody is realizing the cloud can be a vehicle for achieving better economics (and) lower cost,” the Journal quoted Rob Lloyd, Cisco’s president of development and sales as saying.
“It does not mean that we’re embarking on a strategy to go head-to-head with Amazon.”
Microsoft Corp last year said it was cutting prices for hosting and processing customers’ online data in an aggressive challenge to Amazon’s lead in the growing business of cloud computing.
Cisco could not immediately be reached for comment by Reuters outside regular U.S. business hours.
AMD To Focus On China
Advanced Micro Devices has relocated its desktop chip business operations from the U.S. to the growing market of China, adding to its research lab and testing plant there.
The desktop market in China is growing at a fast pace, and desktop and laptop shipments there are roughly equal, said Michael Silverman, an AMD spokesman, in an email. “The desktop market in China remains strong,” Silverman said.
The move of AMD’s desktop operations was first reported by technology news publication Digitimes, but the chip maker confirmed the news.
The company is also developing tailored products for users in China, Silverman said.
AMD’s move of its desktop operations to China brings it closer to key customers such as Lenovo, said Dean McCarron, principal analyst at Mercury Research.
“Not that they don’t have their sales in the U.S.,” but a significant number of those PCs are made in China and then shipped internationally, McCarron said.
AMD is the world’s second-largest x86 processor maker behind Intel. Many PC makers like HP and Dell get products made in China.
Being in China also solves some desktop supply chain issues because it moves AMD closer to motherboard suppliers like Asustek and MSI, which are based in Taiwan, but get parts made in China. Chips will be shipped to customers faster and at a lower cost, which would reduce the time it takes for PCs to come to market, McCarron said.
AMD already has a plant in Suzhou, which Silverman said “represents half of our global back-end testing capacity.” AMD’s largest research and development center outside the U.S. is in Shanghai.
Some recent products released by the company have been targeted at developing countries. AMD recently started shipping Sempron and Athlon desktop chips for the Asia-Pacific and Latin America markets, and those chips go into systems priced between $60 and $399. AMD is targeting the chips at users who typically build systems at home and shop for processors, memory and storage. The chips — built on the Jaguar microarchitecture — go into AMD’s new AM1 socket, which will be on motherboards and is designed for users to easily upgrade processors.
China is also big in gaming PCs, and remains a key market for AMD’s desktop chips, said Nathan Brookwood, principal analyst at Insight 64. “White box integrators play a big role in China,” he said.
Do Chip Makers Have Cold Feet?
It is starting to look like chip makers are getting cold feet about moving to the next technology for chipmaking. Fabricating chips on larger silicon wafers is the next cycle in that transition, but according to the Wall Street Journal chipmakers are mothballing their plans.
Companies have to make massive upfront outlays for plants and equipment, and they are balking: the latest change could boost the cost of a single high-volume factory to as much as $10 billion, from around $4 billion. Some companies have been reining in their investments, raising fears that the equipment needed to produce the new chips might be delayed by a year or more.
ASML, a maker of key machines used to define features on chips, recently said it had “paused” development of gear designed to work with the larger wafers. Intel said it has slowed some payments to the Netherlands-based company under a deal to help develop the technology.
Gary Dickerson, chief executive of Applied Materials, said that the move to larger wafers “has definitely been pushed out from a timing standpoint.”
IBM Breaks Big Data Record
IBM Labs claims to have broken a speed record for Big Data, which the company says could help boost internet speeds to 200 to 400Gbps using “extremely low power”.
The scientists achieved the speed record using a prototype device presented at the International Solid-State Circuits Conference (ISSCC) this week in San Francisco.
Apparently the device, which employs analogue-to-digital conversion (ADC) technology, could be used to improve the transfer speed of Big Data between clouds and data centres to four times faster than existing technology.
IBM said its device is fast enough that 160GB – the equivalent of a two-hour 4K ultra-high definition (UHD) movie or 40,000 music tracks – could be downloaded in a few seconds.
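A quick back-of-the-envelope check of that claim (my arithmetic, not IBM's): 160 gigabytes is 1,280 gigabits, which a 200 to 400Gbps link moves in single-digit seconds:

```python
# Back-of-the-envelope check: how long does 160 GB take at 200-400 Gbps?
payload_gbit = 160 * 8  # 160 gigabytes expressed in gigabits

for link_gbps in (200, 400):
    seconds = payload_gbit / link_gbps
    print(f"160 GB over {link_gbps} Gbps: {seconds:.1f} s")  # 6.4 s and 3.2 s
```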
The IBM researchers have been developing the technology in collaboration with Swiss research institution Ecole Polytechnique Fédérale de Lausanne (EPFL) to tackle the growing demands of global data traffic.
“As Big Data and internet traffic continue to grow exponentially, future networking standards have to support higher data rates,” the IBM researchers explained, comparing the 100GB of data transferred per day in 1992 to today’s two exabytes per day, a 20 million-fold increase.
“To support the increase in traffic, ultra-fast and energy efficient analogue-to-digital converter (ADC) technology [will] enable complex digital equalisation across long-distance fibre channels.”
An ADC device converts analogue signals to digital, estimating the right combination of zeros and ones to digitally represent the data so it can be stored on computers and analysed for patterns and predictive outcomes.
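As a toy illustration of that idea — a generic uniform quantiser, nothing to do with IBM's actual circuit design — here is an analogue sample mapped to the nearest of a handful of digital codes:

```python
import math

def quantize(sample, bits=3, vmax=1.0):
    """Map an analogue sample in [-vmax, vmax] to the nearest of 2**bits codes."""
    levels = 2 ** bits
    step = 2 * vmax / (levels - 1)        # voltage spacing between codes
    code = round((sample + vmax) / step)  # integer code in 0 .. levels-1
    return code, code * step - vmax       # code and its reconstructed voltage

# Digitise eight samples spanning one period of a sine wave.
for n in range(8):
    s = math.sin(2 * math.pi * n / 8)
    code, approx = quantize(s)
    print(f"analogue {s:+.3f} -> code {code} (reconstructed {approx:+.3f})")
```

A real high-speed ADC does this billions of times per second in hardware; the "estimating the right combination of zeros and ones" in the article is this rounding step.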
“For example, scientists will use hundreds of thousands of ADCs to convert the analogue radio signals that originate from the Big Bang 13 billion years ago to digital,” IBM said.
The ADC technology has been developed as part of an international project called Dome, a collaboration between the Netherlands Institute for Radio Astronomy (ASTRON), DOME-South Africa and IBM to build the Square Kilometer Array (SKA), which will be the world’s largest and most sensitive radio telescope when it’s completed.
“The radio data that the SKA collects from deep space is expected to produce 10 times the global internet traffic and the prototype ADC would be an ideal candidate to transport the signals fast and at very low power – a critical requirement considering the thousands of antennas which will be spread over 1,900 miles,” IBM explained.
IBM Research Systems department manager Dr Martin Schmatz said, “Our ADC supports Institute of Electrical and Electronics Engineers (IEEE) standards for data communication and brings together speed and energy efficiency at 32 nanometers, enabling us to start tackling the largest Big Data applications.”
He said that IBM is developing the technology for its own family of products, ranging from optical and wireline communications to advanced radar systems.
“We are bringing our previous generation of the ADC to market less than 12 months since it was first developed and tested,” Schmatz added, noting that the firm will develop the technology in communications systems such as 400Gbps optical links and advanced radars.
Samsung Joins OpenPower
Samsung has joined Google, Mellanox, Nvidia and other tech companies as part of IBM’s OpenPower Consortium. The OpenPower Consortium is working toward giving developers access to an expanded and open set of server technologies to improve data centre hardware using chip designs based on the IBM Power architecture.
Last summer, IBM announced the formation of the consortium, following its decision to license the Power architecture. The OpenPower Foundation, the actual entity behind the consortium, opened up the Power architecture technology, including specs, firmware and software under a license. Firmware is offered as open source. Originally, OpenPower was the brand of a range of System p servers from IBM that utilized the Power5 CPU. Samsung’s products currently utilize both x86 and ARM-based processors.
The intention of the consortium is to develop advanced servers, networking, storage and GPU-acceleration technology for new products. The four priority technical areas for development are system software, application software, open server development platform and hardware architecture. Along with its announcement of Samsung’s membership, the organization said that Gordon MacKean, Google’s engineering director of the platforms group, will now become chairman of the group. Nvidia has said it will use its graphics processors on Power-based hardware, and Tyan will be releasing a Power-based server, the first one outside IBM.
Amazon, Microsoft Cut Cloud Storage Prices
February 6, 2014 by admin
Filed under Around The Net
Comments Off on Amazon, Microsoft Cut Cloud Storage Prices
Last April, Microsoft agreed that it would match Amazon Web Services’ (AWS) prices for compute, storage and bandwidth.
So when Amazon announced last Thursday that it would cut its S3 (Simple Storage Service) and Elastic Block Store (EBS) prices by up to 22%, Microsoft followed suit the very next day.
“We are matching AWS’ lowest prices (US East Region) for S3 and EBS, reducing prices by up to 20% and making the lower prices available in all regions worldwide,” Microsoft posted in its official blog.
For Microsoft’s “Locally Redundant Disks/Page Blobs Storage,” the company is reducing prices by up to 28%. It is also reducing the price of Azure Storage service by 50%.
Amazon’s new prices take effect Feb. 1. Microsoft’s price cuts begin March 13.
“We’re also making the new prices effective worldwide, which means that Azure storage will be less expensive than AWS in many regions,” Microsoft said.
Amazon said it dropped its prices for its S3 storage by 22% and its EBS standard volume storage and I/O operations by up to 50%.
vmWare Buys Airwatch
VMware will buy mobile management and security startup Airwatch for $1.54 billion.
The firm announced today that the deal has been approved by both companies’ boards and is forecast to close by the end of this quarter.
The deal will see VMware, which also announced estimated revenue of $1.48bn for the fourth quarter of 2013, pay $1.175bn in cash and $365m in installment payments.
Airwatch has nine offices worldwide with a workforce of 1,600 people and lists over 10,000 global customers.
The acquisition, which will help redefine VMware’s product portfolio and bring it more up to date with the industry’s threat landscape, will see the integration of Airwatch staff into the company’s End-User Computing Group, with the team working from its Atlanta base. VMware said the team will continue to answer directly to Airwatch founder and CEO John Marshall, who will report to ex-Intel executive and VMware CEO Pat Gelsinger.
VMware EVP and GM of the End-User Computing group Sanjay Poonen said that the company plans to expand Airwatch’s Atlanta offices to become the centre of its mobile operations.
“Our vision is to provide a secure virtual workspace that allows end users to work at the speed of life,” he said. “The combination of Airwatch and VMware will enable us to deliver unprecedented value to our customers and partners across their desktop and mobile environments.”
Almost a year ago, VMWare announced a two percent increase in quarterly profits despite an impressive 22 percent increase in sales, and announced 900 job cuts.
The virtualization specialist is one of many firms to acquire security companies over the past year. Advanced threat specialist FireEye announced plans earlier in January to buy end-point protection firm Mandiant for $1bn.
Was Dropbox Really Hacked?
January 24, 2014 by admin
Filed under Around The Net
Comments Off on Was Dropbox Really Hacked?
Dropbox suffered a major outage over the weekend.
In one of the more bizarre recent incidents, after the service went down on Friday evening a group of hackers claimed to have infiltrated the service and compromised its servers.
However, on the Dropbox blog, Dropbox VP of engineering Ardita Ardwarl told users that hackers were not to blame.
Ardwarl said, “On Friday evening we began a routine server upgrade. Unfortunately, a bug installed this upgrade on several active servers, which brought down the entire service. Your files were always safe, and despite some reports, no hacking or DDOS attack was involved.”
The fault occurred when a bug in an upgrade script caused an operating system upgrade to be triggered on several live machines, rendering them inoperative. Although the fault was rectified in three hours, the knock-on effects led to problems that lasted through the weekend for some users.
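A minimal sketch of the kind of safeguard that guards against this failure mode — purely illustrative, with hypothetical host fields, not Dropbox's actual tooling — is to check that a machine has been drained of live traffic before an upgrade script is allowed to touch it:

```python
# Illustrative guard: refuse to run an OS upgrade on a host still serving traffic.
def safe_to_upgrade(host):
    """A host qualifies only if it is drained and has no active connections."""
    return host["drained"] and host["active_connections"] == 0

hosts = [
    {"name": "web-01", "drained": True,  "active_connections": 0},
    {"name": "web-02", "drained": False, "active_connections": 312},  # live server
]
targets = [h["name"] for h in hosts if safe_to_upgrade(h)]
print(targets)  # → ['web-01']
```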
Dropbox has assured users that there are no further problems and that all users should now be back online. It said that at no point were files in danger, adding that the affected machines didn’t host any user data. In other words, the “hackers” weren’t hackers at all, but attention-seeking trolls.
Dropbox claims to have over 200 million users, many of whom it has acquired through strategic partnerships with device manufacturers offering free storage with purchases.
The company is looking forward to an initial public offering (IPO) on the stock market, so the timing of such a major outage could not be worse. Dropbox, which includes Bono and The Edge from U2 amongst its investors, has recently enhanced its business offering to appeal to enterprise clients, and such a loss of uptime could affect its ability to attract customers.
IBM To Become Cloud Broker
IBM is developing software that will allow organizations to use multiple cloud storage services interchangeably, reducing dependence on any single cloud vendor and ensuring that data remains available even during service outages.
Although the software, called InterCloud Storage (ICStore), is still in development, IBM is inviting its customers to test it. Over time, the company will fold the software into its enterprise storage portfolio, where it can back up data to the cloud. The current test iteration requires an IBM Storwize storage system to operate.
ICStore was developed in response to customer inquiries, said Thomas Weigold, who leads the IBM storage systems research team in IBM’s Zurich, Switzerland, research facility, where the software was created. Customers are interested in cloud storage services but are worried about entrusting data to third-party providers, both in terms of security and the reliability of the service, he said.
The software provides a single interface that administrators can use to spread data across multiple cloud vendors. Administrators can specify which cloud providers to use through a point-and-click interface. Both file and block storage is supported, though not object storage. The software contains mechanisms for encrypting data so that it remains secure as it crosses the network and resides on the external storage services.
A number of software vendors offer similar cloud storage broker capabilities, all in various stages of completion, notably Red Hat’s DeltaCloud and Hewlett Packard’s Public Cloud.
ICStore is more “flexible,” than other approaches, said Alessandro Sorniotti, an IBM security and cloud system researcher who also worked on the project. “We give customers the ability to select what goes where, depending on the sensitivity and relevance of data,” he said. Customers can store one copy of their data on one provider and a backup copy on another provider.
ICStore supports a number of cloud storage providers, including IBM’s SoftLayer, Amazon S3 (Simple Storage Service), Rackspace, Microsoft Windows Azure and private instances of the OpenStack Swift storage service. More storage providers will be added as the software goes into production mode.
“Say, you are using SoftLayer and Amazon, and if Amazon suffers an outage, then the backup cloud provider kicks in and allows you to retrieve data,” from SoftLayer, Sorniotti said.
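The failover behaviour Sorniotti describes can be sketched generically (hypothetical classes and method names — this shows the pattern, not IBM's ICStore API):

```python
class ProviderDown(Exception):
    """Raised when a cloud storage provider cannot serve a request."""

def read_with_failover(key, providers):
    """Try each provider in order; return the first successful read."""
    last_err = None
    for provider in providers:
        try:
            return provider.get(key)
        except ProviderDown as err:
            last_err = err  # remember the failure and try the next mirror
    raise RuntimeError(f"all providers failed for {key!r}") from last_err

class FakeStore:
    """Stand-in for a cloud provider; up=False simulates an outage."""
    def __init__(self, name, data, up=True):
        self.name, self.data, self.up = name, data, up
    def get(self, key):
        if not self.up:
            raise ProviderDown(self.name)
        return self.data[key]

primary = FakeStore("softlayer", {"report": b"v1"}, up=False)  # simulated outage
mirror = FakeStore("s3", {"report": b"v1"})
print(read_with_failover("report", [primary, mirror]))  # → b'v1', via the mirror
```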
ICStore will also allow multiple copies of the software to work together within an enterprise, using a set of IBM patent-pending algorithms developed for data sharing. This ensures that the organization will not run into any upper limits on how much data can be stored.
IBM has about 1,400 patents that relate to cloud computing, according to the company.
App Stores For Supercomputers En Route
Comments Off on App Stores For Supercomputers En Route
A major problem facing supercomputing is that the firms that could benefit most from the technology aren’t using it. It is a dilemma.
Supercomputer-based visualization and simulation tools could allow a company to create, test and prototype products in virtual environments. Couple this virtualization capability with a 3-D printer, and a company would revolutionize its manufacturing.
But licensing fees for the software needed to simulate wind tunnels, ovens, welds and other processes are expensive, and the tools require large multicore systems and skilled engineers to use them.
One possible solution: taking an HPC process and converting it into an app.
This is how it might work: A manufacturer designing a part to reduce drag on an 18-wheel truck could upload a CAD file, plug in some parameters, hit start and let the job run on 128 cores of the Ohio Supercomputer Center’s (OSC) 8,500-core system. The cost would likely be anywhere from $200 to $500 for a 6,000 CPU-hour run, or about 48 hours, to simulate the process and package the results up in a report.
Testing that 18-wheeler in a physical wind tunnel could cost as much as $100,000.
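The per-CPU-hour arithmetic behind those figures, using the article's own numbers:

```python
cores, wall_hours = 128, 48
cpu_hours = cores * wall_hours  # 6,144 CPU-hours, roughly the 6,000 quoted

for total_cost in (200, 500):
    rate = total_cost / cpu_hours
    print(f"${total_cost} run works out to about ${rate:.3f} per CPU-hour")
```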
Alan Chalker, the director of the OSC’s AweSim program, uses that example to explain what his organization is trying to do. The new group has some $6.5 million from government and private groups, including consumer products giant Procter & Gamble, to find ways to bring HPC to manufacturers via an app store.
The app store is slated to open at the end of the first quarter of next year, with one app and several tools that have been ported for the Web. The plan is to eventually spin off AweSim into a private firm and populate the app store with thousands of apps.
Tom Lange, director of modeling and simulation in P&G’s corporate R&D group, said he hopes that AweSim’s tools will be used for the company’s supply chain.
The software industry model is based on selling licenses, which for an HPC application can cost $50,000 a year, said Lange. That price is well out of the reach of small manufacturers interested in fixing just one problem. “What they really want is an app,” he said.
Lange said P&G has worked with supply chain partners on HPC issues, but it can be difficult because of the complexities of the relationship.
“The small supplier doesn’t want to be beholden to P&G,” said Lange. “They have an independent business and they want to be independent and they should be.”
That’s one of the reasons he likes AweSim.
AweSim will use some open source HPC tools in its apps and is also working on agreements with major HPC software vendors to make parts of their tools available through an app.
Chalker said software vendors are interested in working with AweSim because it’s a way to get to a market that’s inaccessible today. The vendors could get some licensing fees for an app and a potential customer for larger, more expensive apps in the future.
AweSim is an outgrowth of the Blue Collar Computing initiative that started at OSC in the mid-2000s with goals similar to AweSim’s. But that program required that users purchase a lot of costly consulting work. The app store’s approach is to minimize cost, and the need for consulting help, as much as possible.
Chalker has a half dozen apps already built, including one used in the truck example. The OSC is building a software development kit to make it possible for others to build them as well. One goal is to eventually enable other supercomputing centers to provide compute capacity for the apps.
AweSim will charge users a fixed rate for CPUs, covering just the costs, and will provide consulting expertise where it is needed. Consulting fees may raise the bill for users, but Chalker said it usually wouldn’t be more than a few thousand dollars, a lot less than hiring a full-time computer scientist.
The AweSim team expects that many app users, a mechanical engineer for instance, will know enough to work with an app without the help of a computational fluid dynamics expert.
Lange says that manufacturers understand that producing domestically rather than overseas requires making products better, being innovative and not wasting resources. “You have to be committed to innovate what you make, and you have to commit to innovating how you make it,” said Lange, who sees HPC as a path to get there.