Intel Partners With VMware
Intel has teamed up with Microsoft’s rival VMware to deliver a platform for “trusted cloud.”
The technology will combine Intel’s Trusted Execution Technology (TXT) with VMware’s vSphere 5.1, a platform for building cloud infrastructures. Intel said its hardware-enhanced security capabilities, integrated directly into the processor and combined with vSphere 5.1, would provide a hardened, high-integrity platform for running business-critical applications in private and public cloud environments.
Intel thinks the biggest barrier to cloud adoption is that companies are worried about security. Jason Waxman, general manager of Intel’s Cloud Infrastructure Group, said in a statement that Intel TXT provides hardware enforcement to help overcome some of the most challenging aspects of cloud security, including the detection and prevention of BIOS attacks and evolving forms of stealthy malware, such as rootkits.
Dell’s Cloud Plans Fall Behind Schedule
Dell announced an aggressive schedule last year to roll out cloud-based application services, but it appears that the schedule was a little too aggressive.
Dell said last August that it planned to launch an online analytics service in the first half of this year for small and midsized businesses, but that service isn’t due now until early next year, a Dell executive said.
“Like a lot of development projects, it can take a bit longer than you think,” Paulette Altmaier, general manager of Dell’s Cloud Business Applications group, said in an interview Thursday.
Dell also said it would launch a platform-as-a-service offering this year based on Microsoft’s Azure platform. On Friday, a Dell spokeswoman said the company no longer has a delivery date for that service.
The delays are a setback for Dell, which is trying to reduce its dependence on PCs and build more profitable businesses in services and software. But a lot of companies are moving slowly to the cloud, so the hold-up isn’t a disaster, said Peter Ffoulkes, an industry analyst with 451 Research.
“The move to the cloud is not a fast journey and for most people it is still largely a future. I would not expect a quarter or two to make a big difference in practical terms,” he said.
Dell has also made a string of software acquisitions in the past year that might be causing it to rethink its software-as-a-service strategy. It updated press and analysts on its software plans Thursday.
When it does arrive, the analytics service will offer “cross-app” analytics, meaning customers will be able to import data from one or more applications to a data warehouse that Dell will host for them online, and then perform analysis on that data.
IBM Freezes Employee Salaries
IBM this year won’t be granting any pay raises to its executives or to many of its workers in its Global Technology Services division.
The company said it is only giving pay raises to workers with high-demand skills that the company needs.
IBM customarily issues pay raises during the mid-year period.
“There are targeted skill groups of employees that are eligible for salary increases in 2012,” said Trink Guarino, an IBM spokeswoman. “No executives will be eligible for salary increases.”
On Tuesday, Business Insider published an internal IBM memo, sent to employees by Global Technology Services executives, announcing the action.
One IBM employee, who didn’t want to be identified, said he believes the lack of pay raises “is part of IBM’s hyper-aggressive plan to meet its 2015 roadmap.”
That IBM roadmap lays out an aggressive growth strategy, which calls for increasing the company’s earnings per share to $20 by 2015.
The employee noted that the company has been spending billions on stock buybacks yet says it can’t afford pay increases.
Rather than reaching profit goals “the old-fashioned way by increasing market share, developing and selling new products,” the company is “maniacally focused on cutting labor costs and off-shoring work to low-cost countries,” the employee said.
AMD Gives Opteron A Boost
AMD has shown there is a little life left in its Bulldozer Opterons by bumping up the clock speed of five Opteron models.
AMD launched its Bulldozer Opteron processors last November amid widespread anticipation that its brand-new Bulldozer architecture would once again make it competitive with Intel. The new architecture failed to impress, but the company has managed to eke out another 100MHz from five Opteron processors in what is likely to be a last hurrah before the Piledriver Opterons make their appearance.
AMD bumped up the clocks by 100MHz on the 16-core Opteron 6284 SE and Opteron 6278 to 2.7GHz and 2.4GHz, respectively, while keeping TDPs the same as before, at 140W and 115W, respectively. The firm gave four Opteron 4200 series chips the same 100MHz bump, including the eight-core Opteron 4276HE to 2.6GHz, the six-core Opteron 4240 to 3.4GHz and the Opteron 4230 to 2.9GHz.
AMD was keen to point out that its speed-bumped Opteron chips have been picked by Dell and HP for 11 of their servers. Although the firm has not been able to compete with Intel’s Xeon chips on performance, its chips are considerably cheaper, a fact that AMD is using to win customers.
Although AMD’s 100MHz speed bump isn’t going to set the world on fire, every little bit of performance will help the firm as Intel ploughs on with its hugely impressive Sandy Bridge E and Ivy Bridge Xeon chips. AMD’s answer to Intel’s latest Xeon chips is expected to be the Piledriver Opterons.
Rackspace Goes OpenStack
Rackspace has finally deployed an OpenStack-based cloud, playing down claims that it benefits the most from the alliance.
Rackspace is one of the leaders of the OpenStack alliance, an open source cloud initiative that aims to break Amazon’s stranglehold on the industry by offering open application programming interfaces (APIs). Until now OpenStack has largely been all talk, but Rackspace has deployed a production OpenStack cloud that the firm claims will help it sell OpenStack to the enterprise.
Fabio Torlini, VP of cloud at Rackspace, said the firm has been “going flat out to make the code production ready”. Torlini said Rackspace’s decision to deploy an OpenStack-based cloud could be a tipping point in deployment. “It’s going to be the catalyst for many other companies deploying OpenStack,” said Torlini.
Rackspace has been the largest contributor to OpenStack, and the fact that it has the first major OpenStack deployment supports claims that Rackspace is getting the most out of OpenStack.
However, Torlini said, “For us, we’re able to be the first one to launch a large scale OpenStack compute platform because, yes, we are one of the main providers of the original code and we are a founder of OpenStack, so we have tried to develop OpenStack as a neutral foundation and it is a foundation to provide a service to all its members. But we’re lucky enough to be one of the founder members, to be able to drive it, and get there [deployment] first.”
Torlini defended Rackspace’s role in the OpenStack alliance, claiming the strong leadership shown by the firm is good for the community. Torlini said, “OpenStack is beneficial to the product itself but that’s the whole point. The whole idea of many more providers going onto OpenStack, helping develop the OpenStack cloud, helping advance the actual products and code, is the whole point of OpenStack. On the counter side of that argument, if it’s beneficial for us it is just as beneficial for any other member of OpenStack, because they have access to the same code and they are able to provide.”
Torlini admitted that OpenStack and its community are an advantage for the firm but claimed it wasn’t possible for Rackspace to dominate. “You have companies in OpenStack that are far larger than Rackspace and able to put much more resources into OpenStack as well; it’s impossible for us to dominate OpenStack – it’s an independent foundation. Is it advantageous from a product perspective? I should damn well hope so,” said Torlini.
AMD Aims For The Cloud
Advanced Micro Devices on Tuesday is expected to announce new Opteron 3200 series chips for low-end servers, which the company believes will give it a competitive edge over Intel in the cloud server arena.
The three Opteron 3200 chips are for use in single-socket servers for Web hosting and cloud applications, according to a company presentation. The chips have up to eight processor cores, clock speeds of up to 3GHz, and draw between 45 watts and 65 watts of power.
The new chips are based on the Bulldozer processor architecture, which is also in the Opteron 6200 16-core processors and FX-series gaming chips. The Opteron 3200 launch comes after AMD in late February announced it would pay US$334 million to acquire SeaMicro, which offers dense and power-efficient servers for cloud computing environments.
AMD’s chips will likely compete against Intel’s Xeon E3 series chips, which are used in SeaMicro’s SM10000-XE server. Intel worked with SeaMicro on the server, but analysts have said that AMD will ultimately replace Intel’s chips with its own.
AMD is pitching the Opteron 3200 as a “low-cost-per-core” product. The chips are priced between US$99 and $129, while Intel’s E3 chips are priced between $189 and $885. MSI, Tyan, Fujitsu and Dell are expected to launch Web servers and dense systems based on the chips.
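The “low-cost-per-core” pitch is easy to check against the list prices quoted above. A quick sketch in Python, with one caveat: the core counts are our assumptions for illustration (the article says only that the Opteron 3200 has up to eight cores; Xeon E3 parts of this generation were typically quad-core), not figures from AMD or Intel.

```python
# Cost per core at the quoted list prices. Core counts are assumed, not quoted.
parts = [
    ("Opteron 3200, top of range (assumed 8-core)", 129, 8),
    ("Opteron 3200, bottom of range (assumed 4-core)", 99, 4),
    ("Xeon E3, low end (assumed 4-core)", 189, 4),
    ("Xeon E3, high end (assumed 4-core)", 885, 4),
]

for name, price_usd, cores in parts:
    print(f"{name}: ${price_usd / cores:.2f} per core")
```

Even at the top $129 price, the Opteron works out to roughly $16 per core against roughly $47 for the cheapest Xeon E3, which is the gap AMD’s marketing leans on.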
AMD’s expanded product line provides an entry point to new markets, said Jim McGregor, chief technology strategist at In-Stat.
But the Opteron 3200 could be a misfit in servers where the contest is performance per watt rather than price, McGregor said. There is growing interest in deploying low-power servers in data centers to cut energy costs, and the Opteron 3200 chips are comparatively power-hungry for such installations.
IBM Scientists Unveil Terabit Optical Chip
IBM has designed a prototype optical chipset that transfers one terabit of data per second (1Tbit/s).
IBM scientists revealed today that the chipset, dubbed “Holey Optochip”, is the first parallel optical transceiver to transfer one trillion bits of information – or 500 HD movies – per second.
Speaking at the Optical Fiber Communication Conference taking place in Los Angeles today, the scientists reported that the chipset is eight times faster than other parallel optical components available today.
They estimate that the raw speed of one transceiver is equivalent to the bandwidth consumed by 100,000 users at today’s typical 10Mb/s broadband internet access speed. This means it would take only around an hour to transfer the entire US Library of Congress web archive through the transceiver.
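The 100,000-user figure follows directly from the numbers quoted, and the one-hour Library of Congress claim implies an archive size of roughly 450 TB. A quick sanity check in decimal (SI) units; the 450 TB figure is our back-calculation, not a number from IBM:

```python
# Sanity check of the bandwidth claims above, in decimal (SI) units.
transceiver_bps = 1e12   # 1 Tbit/s Holey Optochip throughput
broadband_bps = 10e6     # 10 Mb/s typical broadband line

users = transceiver_bps / broadband_bps
print(f"equivalent broadband users: {users:,.0f}")

# Archive size implied by the "around an hour" claim (our inference):
terabytes_per_hour = transceiver_bps * 3600 / 8 / 1e12
print(f"data moved in one hour: {terabytes_per_hour:,.0f} TB")
```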
According to the boffins, optical networking offers the potential to significantly improve data transfer rates by speeding the flow of data using light pulses instead of sending electrons over wires.
A single 90nm IBM CMOS transceiver IC with 24 receiver and 24 transmitter circuits becomes a Holey Optochip with the fabrication of 48 through-silicon holes, or “optical vias” – one for each transmitter and receiver channel. Simple post-processing on completed CMOS wafers with all devices and standard wiring levels results in an entire wafer populated with Holey Optochips.
The transceiver chip measures only 5.2×5.8mm. Twenty-four channel, industry-standard 850nm VCSEL (vertical cavity surface emitting laser) and photodiode arrays are directly flip-chip soldered to the Optochip. This direct packaging produces high-performance, chip-scale optical engines. The Holey Optochips are designed for direct coupling to a standard 48 channel multimode fibre array through a microlens optical system that can be assembled with conventional high-volume packaging tools.
Will Help Desks Become Extinct?
Tom Soderstrom, CTO at NASA’s Jet Propulsion Laboratory (JPL), views everything through the clouds.
NASA’s JPL uses 10 public or private clouds to store everything from photos of Mars for public viewing to top-secret data.
Pretty soon, Soderstrom told attendees of Computerworld‘s SNW conference, data stored by large enterprises like NASA will be measured in exabytes; one exabyte is equal to 1.5 billion CDs or a million terabytes.
And, he noted, the only place to store exabytes of data is on public and private clouds.
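Those scale comparisons check out with simple decimal arithmetic, assuming a standard 700 MB CD (the CD capacity is our assumption, not stated in the talk):

```python
# Checking the exabyte comparisons quoted above, in decimal units.
exabyte = 10**18          # bytes
terabyte = 10**12
cd_bytes = 700 * 10**6    # a standard CD holds roughly 700 MB

print(exabyte // terabyte)        # terabytes per exabyte
print(exabyte / cd_bytes / 1e9)   # billions of CDs per exabyte (~1.43, i.e. roughly 1.5)
```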
The good news is that with data in the cloud, people will be able to “work with anyone, from anywhere, with any data, using any device at any time,” he said.
And the not-so-bad news is that IT help desks, as we know them, will become a thing of the past, and IT workers in general will have to rethink how they approach application development and security.
“Now the workforce and consumers of IT are becoming mobile. Have you ever called a help desk for your mobile device? What do you do? Probably, the first thing you do is Google or Bing it. If you can’t get the answer there, you ask your kids. If you can’t get your answer there, you ask your friends who are like you. For us, that’s the workgroup,” Soderstrom said.
RIM Heads To The Cloud
August 31, 2011 by admin
Canada’s Research In Motion (RIM) will take the wraps off of a new cloud-based social music sharing service called BBM Music, as companies begin to bet on entertainment delivered over the Internet that incorporates social networking features.
Research In Motion, the maker of BlackBerry phones, said select music from Universal Music Group, Sony Music Entertainment, Warner Music and EMI would be available to users.
A closed beta trial of the BBM Music service starts today in Canada, the United States and the UK, the company stated.
The music service is expected to be commercially available to customers later this year for a monthly subscription of $4.99 in a number of countries, it said.
Lightning Took Down Amazon, Microsoft Clouds
August 12, 2011 by admin
A lightning strike in Dublin on Sunday caused a power outage in data centers belonging to Amazon and Microsoft, causing the companies’ cloud services to go offline.
Lightning hit a transformer, sparking an explosion and fire which caused the power outage at 10:41 AM PDT, according to preliminary information, Amazon wrote on its Service Health Dashboard. Under normal circumstances, backup generators would seamlessly kick in, but the explosion also managed to disable some of those generators.
By 1:56 PM PDT, power to the majority of network devices had been restored, allowing Amazon to focus on bringing EC2 (Elastic Compute Cloud) instances and EBS (Elastic Block Storage) volumes back online. But progress was slower than expected, Amazon said a couple of hours later.
“We know many of you are anxiously waiting for your instances and volumes to become available, and we want to give you more detail on why the recovery of the remaining instances and volumes is taking so long,” the company wrote at 11:04 PM PDT. “Due to the scale of the power disruption, a large number of EBS servers lost power and require manual operations before volumes can be restored … While many volumes will be restored over the next several hours, we anticipate that it will take 24-48 hours until the process is completed.”