Dell Promises ExaScale By 2015
Dell has claimed it will make exascale computing available by 2015, as the firm enters the high performance computing (HPC) market.
Speaking at the firm’s Enterprise Forum in San Jose, Sam Greenblatt, chief architect of Dell’s Enterprise Solutions Group, said the firm will have exascale systems by 2015, ahead of rival vendors. However, he added that development will not be boosted by a doubling in processor performance, saying Moore’s Law is no longer valid and is actually presenting a barrier for vendors.
“It’s not doubling every two years any more, it has flattened out significantly,” he said. According to Greenblatt, the only way firms can achieve exascale computing is through clustering. “We have to design servers that can actually get us to exascale. The only way you can do it is to use a form of clustering, which is getting multiple parallel processes going,” he said.
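As a toy illustration of the “multiple parallel processes” idea, and nothing like an actual exascale design, here is a minimal sketch that splits a workload across a pool of local worker processes; the worker function and data sizes are invented for illustration.

```python
# Toy sketch of spreading work across parallel processes. A real cluster
# distributes this across many nodes; here we only use local workers.
from multiprocessing import Pool

def partial_sum(chunk):
    """Work done independently by each process on its slice of the data."""
    return sum(x * x for x in chunk)

if __name__ == "__main__":
    data = list(range(1_000_000))
    n_workers = 4  # arbitrary choice for this sketch
    chunk_size = len(data) // n_workers
    chunks = [data[i:i + chunk_size] for i in range(0, len(data), chunk_size)]

    with Pool(processes=n_workers) as pool:
        results = pool.map(partial_sum, chunks)  # each chunk handled in parallel

    print("total:", sum(results))
```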
Not only did Greenblatt warn that hardware will have to be packaged differently to reach exascale performance, he said that programmers will also need to change. “This is going to be an area that’s really great, but the problem is you never programmed for this area, you programmed to that old Von Neumann machine.”
According to Greenblatt, the amount of data being shifted around will also be cut down, a move that he said will make network latency less of a performance issue. “Things are going to change very dramatically, your data is going to get bigger, processing power is going to get bigger and network latency is going to start to diminish, because we can’t move all this [data] through the pipe,” he said.
Greenblatt’s reference to data being closer to the processor is a nod to the increasing volume of data that is being handled. While HPC networking firms such as Mellanox and Emulex are increasing bandwidths on their respective switch gear, bandwidth increases are being outpaced by the growth in the size of datasets used by firms deploying analytics workloads or academic research.
That Dell is projecting 2015 for the arrival of exascale clusters is at least a few years sooner than firms such as Intel, Cray and HP, all of which have put a “by 2020” timeframe on the challenge. However, what Greenblatt did not mention is the projected power efficiency of Dell’s 2015 exascale cluster, something that will be critical to its usability.
IBM Buys SoftLayer
IBM has signed an agreement to purchase SoftLayer Technologies, as it looks to accelerate the build-out of its public cloud infrastructure. The company is also forming a services division to back up the push.
The financial details of the deal were not announced, but SoftLayer is the world’s largest privately held cloud computing infrastructure provider, according to IBM.
IBM already has an offering that includes private, public and hybrid cloud platforms. The acquisition of SoftLayer will give it a more complete in-house offering, as enterprises look to keep some applications in the data center, while others are moved to public clouds.
SoftLayer has about 21,000 customers and an infrastructure that includes 13 data centers in the U.S., Asia and Europe, according to IBM. SoftLayer allows enterprises to buy compute power on either dedicated or shared servers.
Following the close of the acquisition, which is expected in the third quarter, a new division will combine SoftLayer’s services with IBM’s SmartCloud portfolio. IBM expects to reach $7 billion annually in cloud revenue by the end of 2015, it said.
Success is far from certain: The public cloud market is becoming increasingly competitive as dedicated cloud providers, telecom operators and IT vendors such as Microsoft and Hewlett-Packard all want a piece. The growing competition should be a good thing for customers if it drives down prices. For example, Microsoft has already committed to matching Amazon Web Services prices for commodity services such as computing, storage and bandwidth.
Not all hardware vendors feel it’s necessary to have their own public cloud. Last month, Dell changed strategy and said it would work with partners including Joyent, instead of having its own cloud.
Apple Raising Prices In Japan
Apple Inc increased prices of iPads and iPods in Japan on Friday, becoming the highest-profile brand to join a growing list of foreign companies asking Japanese consumers to pay more as a weakening yen squeezes profit.
Some U.S. companies have inoculated themselves at least temporarily against the yen’s fall through financial hedging instruments, while others are charging customers more.
The yen has fallen more than 20 percent against the U.S. dollar since mid-November when then-opposition leader Shinzo Abe, who is now prime minister, prescribed a dose of radical monetary easing to reverse years of sliding consumer prices as part of a deflation-fighting policy, dubbed “Abenomics.”
The Bank of Japan, under a new Abe-backed governor, promised in April to inject $1.4 trillion into the economy over less than two years in order to achieve 2 percent inflation in roughly the same period.
Price rises are rare in Japan, which has suffered 15 years of low-grade deflation. A few other foreign brands have also raised prices on products, providing an early sign of inflation for Abe and an indication that these companies feel consumer demand is strong enough to withstand the increases.
Still, price rises would have to spread much more widely, especially to lower-end discretionary goods, to show that Abe’s aggressive policies are helping reinvigorate the economy.
Apple, one of the most visible foreign companies in Japan, raised the price of iPads by up to 13,000 yen ($130) at its local stores. The 64-gigabyte iPad will now cost 69,800 yen, up from 58,800 yen a day ago, an Apple store employee said. The 128-gigabyte model will cost 79,800 yen compared with 66,800 yen.
Apple also upped prices of its iPod music players by as much as 6,000 yen and its iPad Mini by 8,000 yen.
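For a sense of scale, the rises work out to just under 20 per cent, broadly in line with the yen’s slide against the dollar; a quick calculation from the iPad prices quoted above:

```python
# Percentage increases implied by the yen prices quoted above.
old_prices = {"iPad 64GB": 58_800, "iPad 128GB": 66_800}
new_prices = {"iPad 64GB": 69_800, "iPad 128GB": 79_800}

for model, old in old_prices.items():
    new = new_prices[model]
    pct = (new - old) / old * 100
    print(f"{model}: +{new - old:,} yen ({pct:.1f}%)")  # roughly 18.7% and 19.5%
```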
Will Arm/Atom CPUs Replace Xeon/Opteron?
Analysts are saying that smartphone chips could one day replace the Xeon and Opteron processors used in most of the world’s top supercomputers. In a paper titled “Are mobile processors ready for HPC?”, researchers at the Barcelona Supercomputing Center wrote that less expensive chips have a history of bumping out faster but higher-priced processors in high-performance systems.
In 1993, the list of the world’s fastest supercomputers, known as the Top500, was dominated by systems based on vector processors. They were nudged out by less expensive RISC processors. RISC chips were eventually replaced by cheaper commodity processors like Intel’s Xeon and AMD Opteron and now mobile chips are likely to take over.
The transitions had a common thread, the researchers wrote: microprocessors killed the vector supercomputers because they were “significantly cheaper and greener.” At the moment, low-power chips based on ARM designs fit the bill, but Intel is likely to catch up, so the shift is not likely to mean the death of x86.
The report compared Samsung’s 1.7GHz dual-core Exynos 5250, Nvidia’s 1.3GHz quad-core Tegra 3 and Intel’s 2.4GHz quad-core Core i7-2760QM, which is a laptop chip rather than a server part. The researchers said they found that the ARM processors were more power-efficient than the Intel processor on single-core performance, and that ARM chips can scale effectively in HPC environments. On a multi-core basis, the ARM chips were as efficient as Intel x86 chips at the same clock frequency, but Intel was more efficient at the highest performance level, the researchers said.
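To show how the efficiency comparison is usually framed, namely performance per watt, here is a toy calculation; the figures below are invented placeholders, not results from the Barcelona paper.

```python
# Illustrative performance-per-watt comparison. The numbers are made up
# and do NOT come from the Barcelona Supercomputing Center study.
chips = {
    "hypothetical ARM SoC": {"gflops": 6.0, "watts": 4.0},
    "hypothetical x86 CPU": {"gflops": 40.0, "watts": 45.0},
}

for name, c in chips.items():
    print(f"{name}: {c['gflops'] / c['watts']:.2f} GFLOPS per watt")
```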
Twitter’s Authentication Has Vulnerabilities
Twitter’s SMS-based, two-factor authentication feature could be abused to lock out users who have not enabled it for their accounts if attackers gain access to their log-in credentials, according to researchers from Finnish antivirus vendor F-Secure.
Twitter introduced two-factor authentication last week as an optional security feature in order to make it harder for attackers to hijack users’ accounts even if they manage to steal their usernames and passwords. If enabled, the feature introduces a second authentication factor in the form of secret codes sent via SMS.
According to Sean Sullivan, a security advisor at F-Secure, attackers could actually abuse this feature in order to prolong their unauthorized access to those accounts that don’t have two-factor authentication enabled. The researcher first described the issue Friday in a blog post.
An attacker who steals someone’s log-in credentials, via phishing or some other method, could associate a prepaid phone number with that person’s account and then turn on two-factor authentication, Sullivan said Monday. If that happens, the real owner won’t be able to recover the account by simply performing a password reset, and will have to contact Twitter support, he said.
This is possible because Twitter doesn’t use any additional method to verify that whoever has access to an account via Twitter’s website is also authorized to enable two-factor authentication.
When the two-factor authentication option called “Account Security” is first enabled on the account settings page, the site asks users if they successfully received a test message sent to their phone. Users can simply click “yes,” even if they didn’t receive the message, Sullivan said.
Instead, Twitter should send a confirmation link to the email address associated with the account for the account owner to click in order to confirm that two-factor authentication should be enabled, Sullivan said.
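A hypothetical sketch of the flow Sullivan describes, in which enrollment stays pending until the emailed link is clicked, might look like the following; the function names, token handling and URL are illustrative assumptions and have nothing to do with Twitter’s actual code.

```python
# Hypothetical enrollment flow: two-factor authentication only switches on
# after the account's email owner confirms via a one-time link.
import secrets

PENDING = {}  # token -> (account_id, phone_number); a real service would persist this

def request_two_factor(account_id, phone_number, send_email):
    """Step 1: record the request and email a confirmation link."""
    token = secrets.token_urlsafe(32)
    PENDING[token] = (account_id, phone_number)
    send_email(account_id, f"https://example.invalid/confirm-2fa?token={token}")

def confirm_two_factor(token, enable_2fa):
    """Step 2: only enable two-factor auth once the emailed token comes back."""
    entry = PENDING.pop(token, None)
    if entry is None:
        return False  # unknown or already-used token
    account_id, phone_number = entry
    enable_2fa(account_id, phone_number)
    return True
```

In a real service the pending requests would live in a database and the tokens would expire, but the point stands: an attacker holding only the stolen password never sees the email, so the switch never flips.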
As it is, the researcher is concerned that this feature could be abused by determined attackers like the Syrian Electronic Army, a hacker group that recently hijacked the Twitter accounts of several news organizations, in order to prolong their unauthorized access to compromised accounts.
Some security researchers have already argued that Twitter’s two-factor authentication feature, in its current implementation, is impractical for news organizations and companies with geographically dispersed social media teams, where several employees have access to the same Twitter account and cannot share a single phone number for authentication.
Twitter did not immediately respond to a request for comment regarding the issue described by Sullivan.
Google Updates Its SSL Certificates
Google has announced plans to upgrade its Secure Sockets Layer (SSL) certificates to 2048-bit keys by the end of 2013 to strengthen its SSL implementation.
Announcing the news in a blog post today, Google’s director of information security engineering Stephen McHenry said it will begin switching to the new 2048-bit certificates on 1 August, to ensure adequate time for a careful rollout before the end of the year.
“We’re also going to change the root certificate that signs all of our SSL certificates because it has a 1024-bit key,” McHenry said.
“Most client software won’t have any problems with either of these changes, but we know that some configurations will require some extra steps to avoid complications. This is more often true of client software embedded in devices such as certain types of phones, printers, set-top boxes, gaming consoles, and cameras.”
McHenry advised that, for a smooth upgrade, client software that makes SSL connections to Google (for example, over HTTPS) must: “perform normal validation of the certificate chain; include a properly extensive set of root certificates contained […]; and support Subject Alternative Names (SANs)”.
He also recommended that clients support the Server Name Indication (SNI) extension because they might need to make an extra API call to set the hostname on an SSL connection.
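As a minimal sketch of what those requirements look like from a client’s side, using Python’s standard ssl module (the hostname below is just an example endpoint, and embedded clients will obviously differ):

```python
# Minimal client check: full chain validation against the system root store,
# SAN-based hostname matching, and SNI via server_hostname.
import socket
import ssl

hostname = "www.google.com"
context = ssl.create_default_context()  # loads default roots, enables hostname checks

with socket.create_connection((hostname, 443)) as sock:
    # server_hostname both sends SNI and drives hostname verification against SANs
    with context.wrap_socket(sock, server_hostname=hostname) as tls:
        print("negotiated:", tls.version())
        print("peer certificate subject:", tls.getpeercert()["subject"])
```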
He pointed out some of the problems that the change might trigger, and pointed to a FAQ addressing certificate changes, as well as instructions for developers on how to adapt to certificate changes.
F-Secure security researcher Sean Sullivan said, “By updating its SSL standards, Google will make it easier to spot forged certificates.
“Certificate authorities have been abused and/or hacked in the past. I imagine it will be more difficult to forge one of these upgraded certs. Therefore, users can have more confidence.”
Is This A Mobile First World?
Judging from the number of people engrossed in activities with their smartphones on the sidewalk, in their cars and in public places, mobile seems to have stolen our attention away from the wired Internet and traditional TV.
However, there is still a way to go before mobile platforms become the primary place consumers turn to for entertainment and getting things done, players at the CTIA Wireless trade show said.
Nokia Siemens Networks announced new capabilities in its network software to make video streams run more smoothly over mobile networks. Among other things, the enhancements can reduce video stalling by 90 percent, according to the company. But even Sandro Tavares, head of marketing for NSN’s Mobile Core business, sees “mobile-first” viewing habits as part of the future.
“Now that the networks are providing a better capacity, a better experience with mobile broadband, mobile-first will come,” Tavares said. “Because the experiences they have with the devices are so good, these devices … start to be their preferred screen, their first screen.
“This is a trend, and this is something that will not change,” Tavares said. But he thinks it’s too early to build networks assuming consumers will turn to tablets and phones as their primary sources of entertainment. “Do you have to be prepared for mobile-first now? Probably not. You have to be able to keep the pace.”
For AT&T, mobile-first is a top priority for its own internal apps, ensuring employees can do their jobs wherever they are, said Kris Rinne, the carrier’s senior vice president of network technologies. But to make it possible over the network, a range of new technologies and relationships may have to come together, she said.
For example, giving the best possible performance for streaming video and other uses of mobile may require steering traffic to the right network if both cellular and Wi-Fi are available. AT&T is developing an “intelligent network selection” capability to do this, Rinne said. When AT&T starts to deliver voice over LTE, it will stay on the cellular network — at least in the early days — because the carrier has more control over quality of service on that system, she said.
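Purely as a hypothetical illustration of the kind of policy such a selection capability might apply, and not a reflection of AT&T’s actual implementation, a steering decision could weigh the traffic type against link quality:

```python
# Hypothetical traffic-steering policy; illustrative only.
def pick_network(service, wifi_quality, cellular_quality):
    """Return 'cellular' or 'wifi' for a service given link quality scores (0-1)."""
    if service == "volte":
        # Keep voice over LTE on cellular, where the carrier controls QoS.
        return "cellular"
    if wifi_quality >= 0.6 and wifi_quality >= cellular_quality:
        return "wifi"  # offload bulky traffic such as streaming video
    return "cellular"

print(pick_network("video", wifi_quality=0.8, cellular_quality=0.5))  # wifi
print(pick_network("volte", wifi_quality=0.9, cellular_quality=0.4))  # cellular
```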
Other issues raised by mobile-first include security of packets going over the air and rights for content that subscribers are consuming primarily on mobile devices instead of through TV and other traditional channels, Rinne said.
Should Investors Dump AMD?
If you have any old AMD shares lying around you might like to sell them as fast as you can, according to the bean counters at Goldman Sachs.
Despite the fact that the company is doing rather well, and its share price has gone up rapidly over recent months, Goldman Sachs analysts claim that the writing is on the wall for AMD. The bank thinks that AMD shares will be worth just $2.50 soon. The stock’s 50-day moving average is currently $2.98.
Goldman said that while AMD could clean up in the gaming market, even taking those figures into account the stock is trading at 22 times its 2014 calendar-year EPS estimate. In other words, the company’s core PC business is still shagged and will still generate 45 per cent of its 2013 revenue.
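For a rough sense of what that multiple implies, and using the $2.98 50-day moving average quoted above as a stand-in for the share price (an assumption, since Goldman’s exact reference price is not given here):

```python
# Back-of-the-envelope: EPS implied by a 22x multiple at the 50-day average.
price = 2.98       # 50-day moving average quoted above, in dollars
multiple = 22      # Goldman's stated 2014 CY EPS multiple
implied_eps = price / multiple
print(f"implied 2014 EPS estimate: ~${implied_eps:.2f}")  # about $0.14
```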
“We therefore believe this recent move in the stock is just the latest in a long history of unsustainable rallies, and we are downgrading the stock to Sell. We believe the current multiple is unjustified for any company with such significant exposure to the secularly declining PC market,” the firm’s analyst wrote.
Analysts at Sanford C. Bernstein think that the share price will settle at $2.00, while FBR Capital Markets thinks $3.00. In other words, if you want to know what is really happening at AMD you might as well ask the cat as any Wall Street expert.
Is Yahoo Really Back?
Yahoo has once again made the list as one of the world’s 100 most valuable brands.
The Internet company nabbed the 92nd spot in the annual list of global companies from multiple industries including technology, retail and service, released Tuesday by BrandZ, a brand equity database. The ranking gave Yahoo a “brand value” of US$9.83 billion, which is based on the opinions of current and potential users as well as actual financial data.
Apple occupied the number-one position on the list, with a brand value of $185 billion. Google was number two, with a value of roughly $114 billion.
The BrandZ ranking, commissioned by the advertising and marketing services group WPP, incorporates interviews with more than 2 million consumers globally about thousands of brands along with financial performance analysis to compile the list. Yahoo last appeared on the list in 2009 at number 81.
Yahoo’s inclusion on the 2013 list comes as the Internet company works to reinvent itself and win back users. Previously a formidable player in Silicon Valley, the company has struggled in recent years to compete against the likes of Google, Facebook and Twitter.
Improving its product offerings on mobile has been a focus. New mobile apps for email and weather have been unveiled, along with a new version of the main Yahoo app, featuring news summaries generated with technology the company acquired when it bought Summly.
Most notably, Monday the company announced it is acquiring the blogging site Tumblr for $1.1 billion in cash. Big changes to its Flickr photo sharing service were also announced.
Yahoo’s rebuilding efforts have picked up steam only during the last several months, but the 2013 BrandZ study was completed by March 1.
However, last July’s appointment of Marissa Mayer as CEO likely played a significant role in the company’s inclusion in the ranking, said Altimeter analyst Charlene Li. “Consumer perception has gone up since then,” she said.
“Yahoo’s leadership has a strong sense of what they want to do with the brand,” she added.
Yahoo’s 2012 total revenue was flat at $4.99 billion. However, after subtracting advertising fees and commissions paid to partners, net revenue was up 2 percent year-on-year.
nVidia Explains Tegra 4 Delays
Nvidia’s CEO Jen-Hsun Huang gave a concrete reason for the Tegra 4 delays during the company’s latest earnings call.
The chip was announced back in January, but Jensen told investors that Tegra 4 was delayed because of Nvidia’s decision to pull Grey, aka Tegra 4i, in by six months. Pulling Tegra 4i in and scheduling it for Q4 2013 was, claims Jensen, the reason for the three-month delay in Tegra 4 production. On the other hand, we heard that early versions of Tegra 4 were simply getting too hot, and frankly we don’t see why Nvidia would delay its flagship SoC for tactical reasons.
Engaging the LTE market as soon as possible was the main reason for pulling Tegra 4i in, claims Jensen. It looks to us as if Tegra 4 will end up more than three months late, but we have been promised Tegra 4 based devices in Q2 2013, that is, by the end of June 2013.
Nvidia claims Tegra 4i has many design wins and should be a very popular chip. Nvidia expects partners to announce devices based on this new LTE-capable chip in early 2014. Some of them might showcase devices as early as January, but we would be surprised if we don’t see Tegra 4i devices at Mobile World Congress next year, which kicks off on February 24th 2014.
Jensen described Tegra 4i as an incredibly well positioned product, saying that “it brings a level of capabilities and features of performance that that segment has just never seen”. The latter half of 2013 will definitely be interesting for Nvidia’s Tegra division and we are looking forward to seeing the first designs based on this new chip.