Syber Group

nVidia’s CUDA 5.5 Available

June 25, 2013 by  
Filed under Computing

Comments Off on nVidia’s CUDA 5.5 Available

Nvidia has made its CUDA 5.5 release candidate supporting ARM based processors available for download.

Nvidia has been aggressively pushing its CUDA programming language as a way for developers to exploit the floating point performance of its GPUs. Now the firm has announced the availability of a CUDA 5.5 release candidate, the first version of the language that supports ARM based processors.

Aside from ARM support, Nvidia has improved Hyper-Q support and now allows developers to prioritise MPI workloads. The firm also touted improved performance analysis and faster cross-compilation on x86 processors.
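For readers wondering what that prioritisation looks like in practice, here is a minimal sketch using the CUDA stream priority API, which lets latency-sensitive work jump ahead of bulk work on a capable GPU. It is an illustrative sketch of the general API rather than code from Nvidia's release notes, and the exact way CUDA 5.5 ties MPI workloads to stream priorities may differ.

    // Minimal sketch: create CUDA streams with different priorities so that
    // urgent work can overtake bulk background work on the GPU.
    #include <cuda_runtime.h>
    #include <cstdio>

    __global__ void busyKernel(float *data, int n)
    {
        int i = blockIdx.x * blockDim.x + threadIdx.x;
        if (i < n)
            data[i] = data[i] * 2.0f + 1.0f;
    }

    int main()
    {
        // Numerically lower values mean higher priority.
        int leastPriority, greatestPriority;
        cudaDeviceGetStreamPriorityRange(&leastPriority, &greatestPriority);

        cudaStream_t highPrio, lowPrio;
        cudaStreamCreateWithPriority(&highPrio, cudaStreamNonBlocking, greatestPriority);
        cudaStreamCreateWithPriority(&lowPrio,  cudaStreamNonBlocking, leastPriority);

        const int n = 1 << 20;
        float *d_a, *d_b;
        cudaMalloc(&d_a, n * sizeof(float));
        cudaMalloc(&d_b, n * sizeof(float));

        // Bulk work goes to the low-priority stream, urgent work to the high-priority one.
        busyKernel<<<(n + 255) / 256, 256, 0, lowPrio>>>(d_a, n);
        busyKernel<<<(n + 255) / 256, 256, 0, highPrio>>>(d_b, n);

        cudaDeviceSynchronize();
        cudaFree(d_a);
        cudaFree(d_b);
        cudaStreamDestroy(highPrio);
        cudaStreamDestroy(lowPrio);
        printf("done\n");
        return 0;
    }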

Ian Buck, GM of GPU Computing Software at Nvidia said, “Since developers started using CUDA in 2006, successive generations of better, exponentially faster CUDA GPUs have dramatically boosted the performance of applications on x86-based systems. With support for ARM, the new CUDA release gives developers tremendous flexibility to quickly and easily add GPU acceleration to applications on the broadest range of next-generation HPC platforms.”

Nvidia’s support for ARM processors in CUDA 5.5 is an indication that it will release CUDA enabled Tegra processors in the near future. However outside of the firm’s own Tegra processors, CUDA support is largely useless, as almost all other chip designers have chosen OpenCL as the programming language for their GPUs.

Nvidia did not say when it will release CUDA 5.5, but in the meantime the firm’s release candidate supports Windows, Mac OS X and just about every major Linux distribution.

Source

Should Investors Dump AMD?

May 29, 2013 by  
Filed under Computing

Comments Off on Should Investors Dump AMD?

If you have any old AMD shares lying around you might like to sell them as fast as you can, according to the bean counters at Goldman Sachs.

Despite the fact that the company is doing rather well, and its share price has risen rapidly over recent months, Goldman Sachs analysts claim that the writing is on the wall for AMD. The bank thinks that AMD shares will be worth just $2.50 soon. The stock’s 50-day moving average is currently $2.98.

The bank said that while AMD could clean up in the gaming market, even taking those figures into account the stock is trading at 22 times its 2014 calendar-year EPS estimate. In other words, the company’s core PC business is still shagged and will still generate 45 per cent of the company’s 2013 revenue.

“We therefore believe this recent move in the stock is just the latest in a long history of unsustainable rallies, and we are downgrading the stock to Sell. We believe the current multiple is unjustified for any company with such significant exposure to the secularly declining PC market,” the firm’s analyst wrote.

Analysts at Sanford C. Bernstein think that the share price will settle at $2.00, while FBR Capital Markets thinks $3.00. In other words, if you want to know what is really happening at AMD you might as well ask the cat as any Wall Street expert.

Source

nVidia Explains Tegra 4 Delays

May 23, 2013 by  
Filed under Computing

Comments Off on nVidia Explains Tegra 4 Delays

Nvidia’s CEO Jen-Hsun Huang gave a concrete reason for the Tegra 4 delays during the company’s latest earnings call.

The chip was announced back in January, but Jensen told investors that Tegra 4 was delayed because of Nvidia’s decision to pull Grey, aka Tegra 4i, in by six months. Pulling Tegra 4i in and scheduling it for Q4 2013 was, claims Jensen, the reason for the three-month delay in Tegra 4 production. On the other hand, we heard that early versions of Tegra 4 were simply getting too hot, and frankly we don’t see why Nvidia would delay its flagship SoC for tactical reasons.

Engaging the LTE market as soon as possible was the main reason for pulling Tegra 4i in, claims Jensen. It looks to us as if Tegra 4 will be delayed by more than three months, but we have been promised Tegra 4 based devices in Q2 2013, that is, by the end of June 2013.

Nvidia claims Tegra 4i has many design wins and should be a very popular chip. Nvidia expects partners to announce devices based on this new LTE-enabled chip in early 2014. Some of them might showcase devices as early as January, but we would be surprised if we did not see Tegra 4i devices at Mobile World Congress next year, which kicks off on February 24th 2014.

Jensen described Tegra 4i as an incredibly well positioned product, saying that “it brings a level of capabilities and features of performance that that segment has just never seen”. The latter half of 2013 will definitely be interesting for Nvidia’s Tegra division and we are looking forward to seeing the first designs based on this new chip.

Source

Yahoo On A Buying Spree

May 22, 2013 by  
Filed under Internet

Comments Off on Yahoo On A Buying Spree

Yahoo has purchased a mobile gaming company, Loki Studios, taking its total acquisitions this month to four.

The company said over the weekend it welcomed Loki, Astrid, GoPollGo and MileWise to its growing mobile team. “We recently added 22 entrepreneurs to our growing mobile team,” the company said in a Twitter message in a possible reference to some of the people from the four companies who have moved to Yahoo.

Loki’s flagship application is its location-aware game, Geomon. “We are thrilled to be joining the exceptional folks at Yahoo!. We believe fully in their commitment to creating outstanding mobile products,” the Loki team said on their website.

Earlier in the week, Yahoo also acquired GoPollGo, a social polling tool. The company’s founder and team said they were moving to Yahoo, and would no longer be supporting their offerings.

It is not clear whether Yahoo has bought all these companies for their products and technology or just to get their experienced staff in the area of mobile as it tries to build up its own mobile capabilities. The way the services are being shut down suggests that their user base did not particularly interest Yahoo. The company could not be immediately reached for comment.

Source

AMD Touts Its Memory Architecture

May 9, 2013 by  
Filed under Around The Net

Comments Off on AMD Touts Its Memory Architecture

AMD has said the memory architecture in its heterogeneous system architecture (HSA) will move management of CPU and GPU memory coherency from the developer’s hands down to the hardware.

While AMD has been churning out accelerated processing units (APUs) for the best part of two years now, the firm’s HSA is the technology that will really enable developers to make use of the GPU. The firm revealed some details of the memory architecture that will form one of the key parts of HSA and said that data coherency will be handled by the hardware rather than software developers.

AMD’s HSA chips, the first of which will be Kaveri, will allow both the CPU and GPU to access system memory directly. The firm said that this will eliminate the need to copy data to the GPU, an operation that adds significant latency and can wipe out any gains in performance from GPU parallel processing.

According to AMD, the memory architecture that it calls HUMA – heterogeneous uniform memory access, a play on non-uniform memory access (NUMA) – will handle coherency between the CPU and GPU at the silicon level. AMD corporate fellow Phil Rogers said that developers should not have to worry about whether the CPU or GPU is accessing a particular memory address, and similarly he claimed that operating system vendors prefer that memory coherency be handled at the silicon level.

Rogers also talked up the GPU’s ability to take page faults, and said that HUMA will allow GPUs to use memory pointers in the same way that CPUs dereference pointers to access memory. He said that the CPU will be able to pass a memory pointer to the GPU, just as a programmer may pass a pointer between threads running on a CPU.
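To put that in context, the sketch below shows the explicit copy dance that discrete GPU programming requires today and that HUMA is intended to remove; on an HSA chip the CPU would simply hand the GPU a pointer into the same system memory. It uses CUDA-style code purely as an illustration and is not AMD’s HSA programming interface.

    // The pattern HUMA aims to eliminate: data must be copied from system
    // memory into the GPU's own memory before a kernel can touch it, and
    // copied back afterwards. (Illustrative CUDA sketch, not AMD's HSA API.)
    #include <cuda_runtime.h>
    #include <vector>

    __global__ void scale(float *data, int n)
    {
        int i = blockIdx.x * blockDim.x + threadIdx.x;
        if (i < n)
            data[i] *= 0.5f;
    }

    int main()
    {
        const int n = 1 << 20;
        std::vector<float> host(n, 1.0f);            // lives in system (CPU) memory

        float *device = nullptr;
        cudaMalloc(&device, n * sizeof(float));      // separate GPU memory
        cudaMemcpy(device, host.data(), n * sizeof(float),
                   cudaMemcpyHostToDevice);          // the copy HUMA would remove

        scale<<<(n + 255) / 256, 256>>>(device, n);

        cudaMemcpy(host.data(), device, n * sizeof(float),
                   cudaMemcpyDeviceToHost);          // and the copy back
        cudaFree(device);
        return 0;
    }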

AMD has said that its first HSA-compliant chip codenamed Kaveri will tip up later this year. While AMD’s decision to give GPUs access to DDR3 memory will mean lower bandwidth than GPGPU accelerators that make use of GDDR5 memory, the ability to address hundreds of gigabytes of RAM will interest a great many developers. AMD hopes that they will pick up the Kaveri chip to see just what is possible.

Source

Will Zynga Survive?

May 6, 2013 by  
Filed under Around The Net

Comments Off on Will Zynga Survive?

Nobody expected Zynga’s results for this quarter to be great, so nobody was exactly surprised when the company announced a decline in almost every number that matters. It turned a small profit, but that’s a bright spot in an otherwise deeply unimpressive set of results. The really important figures – the number of people playing and, crucially, the number of people paying – are all down. Zynga’s business may not be hemorrhaging money, but it’s losing audience, and in a business so heavily focused on scale, that’s a really bad thing.

The company likes to present itself as being on the cusp of a turnaround, or perhaps already embarked upon a slow but steady turn. If so, it’s the oddest turnaround imaginable. The firm’s MAUs – Monthly Active Users – dropped from 292 million to 253 million year on year, so nearly 40 million people have simply stopped logging in to a Zynga game even once a month. Worse still, though, is the disproportionate fall in the number of Monthly Unique Payers – those who make at least one transaction during a month-long period. This number fell from 3.5 million to 2.5 million, a precipitous year-on-year drop of almost 30%.

It bears emphasising just how bad that actually is. For a social gaming business, MUPs are the real customers. There is huge value to having a large audience (MAUs), of course, and companies need to be very careful about not trying to force players into becoming paying customers before they’re good and ready – but ultimately, non-paying users are like footfall in a store. They’re not customers, in a strict business sense. Zynga’s not-quite-so-bad loss of 13% of its players (MAUs) is a side-show compared to the fact that it’s lost 30% of its paying customers (MUPs). Imagine, by comparison, a shop loudly announcing that the number of people walking past its window had fallen 13%, distracting from the fact that the number who came in and bought something had fallen 30%.

Of course, the two figures are related, and the disproportionately large drop in MUPs figures into that relationship to some degree. The process of encouraging players of a social game to spend money is focused around a number of principles, but the key temptation lies in buying items or currency that will give you the ability to match or overtake your friends’ progress, or to create a fantastic character, farm, castle or whatever which will “impress” the many friends who are also playing the same game.

For that psychology to work, of course, you actually need to have lots of friends playing the game. Most social games, as the name suggests, don’t work terribly well if you don’t have friends active in the game. “Active” is a key aspect here too – if you see that your friends are losing interest, logging in less often or spending less time tending to their farm, castle, town or whatever, then you also tend to lose interest rapidly. Hence, a game that gives the impression of being “in decline” – with players losing interest in some visible manner – will likely experience a precipitous decline in revenue, because even though lots of people are still playing, the sense of decline removes the key psychological drive to spend money on the game. (It doesn’t help, of course, that social game operators have established a pattern of shutting down unsuccessful games rapidly, which creates a feedback loop in which players are unwilling to spend money on a game they think might be in commercial trouble.)

The psychology of what Zynga is experiencing is clear enough, then, but the figures on the bottom line are still pretty dreadful. Whatever the reasons or the mechanism, the company is losing paying customers, and that kind of damage is extremely hard to recover from.

A stark contrast to Zynga’s woes can be found on the other side of the Pacific, where mobile developer GungHo this week topped a $9 billion valuation on the Osaka Stock Exchange, making it into a larger mobile gaming company than even fellow Japanese giants GREE and DeNA. GungHo’s valuation is ridiculous, a bubble that will inevitably pop in relatively short order, but there’s a genuine success driving the excitement – a single game, Puzzle and Dragons, which is the most successful mobile game in Japan (and is launching in other territories as well). Puzzle and Dragons reportedly makes about $2 million a day; it certainly makes enough to justify prime-time adverts in evening slots on Japanese TV.

GungHo is an extreme example of a phenomenon which is completely unavoidable in the social and casual game sphere. Mobile utterly dominates this sphere. Facebook, it turns out, was a flash in the pan in gaming terms. Smartphones, and to some extent tablets (though they’re arguably more “midcore”), are the social gaming platforms of today. Zynga, for all its cash (the company still has plenty of liquid assets), its clout and its former dominance, still hasn’t made a successful transition to being a mobile-first company. Clinging to the wreckage of the Facebook social gaming model which it so successfully exploited (and, in doing so, perhaps hastened that model’s downfall), Zynga is being overtaken time and again by smaller companies who have mobile gaming in their DNA from the outset. With this week’s results came a fresh claim that the company will be focusing more heavily on mobile, but a good, nimble firm would have accomplished that focus shift 12 months ago, at least. Zynga right now feels like it’s plodding along in everyone else’s wake.

The other great white hope for the company, of course, is gambling. It has cautiously launched gambling services – what it calls “real money gaming” – in the UK, and wants to expand into other territories. Plenty of pundits like to tap their noses sagely and suggest that Zynga will become a gambling giant down the line – although in doing so, they’re just following in the well-worn footsteps of a large number of video games industry pundits, executives and even developers who have regarded the gambling industry with something like the avaricious wonder of wannabe prospectors hearing about a new gold rush.

I don’t see any gold rush for Zynga in “real money gaming”. Investors and executives consistently overstate the allure and possibilities of this kind of gaming, because by dint of being investors and executives, they tend to be exactly the sort of person who is very attracted to gambling risks (you wouldn’t have an investment, or a career, anywhere within spitting distance of tech stocks otherwise). Moreover, by moving into the online gambling arena, Zynga is entering a market that’s already incredibly crowded with companies who are deeply, deeply expert in this field – not just in the customer-facing psychology of the casino, but also in the legal and regulatory minefield of operating a gambling enterprise online. Many major markets simply aren’t open to this kind of business; most others require you to jump through all manner of hoops simply in order to set up shop. The notion of Zynga having an open goal in “real money gaming” is born either from complete naivety or utter desperation – it could make money in the gambling business, but it has its work cut out for it.

It’s worth highlighting, all the same, that Zynga did make a small profit this quarter – it may only be one bright spot, but it’s bright all the same. The company’s scale still also arguably works in its favour, allowing it to buy talent and IP that smaller firms could never afford. Yet after several grim quarters, it’s also worth highlighting that talk of a “turnaround” is optimistic at best. Something about Zynga – its culture, its leadership or a combination of both – is blocking this company from moving in the agile, intelligent way a firm in its position desperately needs. Inventing fairy stories about the magical potential of gambling games or constantly reassuring the world that a pivot to mobile is definitely happening any day now won’t cover up the cracks for much longer. If Zynga wants the world to buy the “turnaround” story, it needs to start showing evidence; if not, it needs to start making big changes, starting right at the top.

Source

nVidia Wins With Tegra 4

April 30, 2013 by  
Filed under Computing

Comments Off on nVidia Wins With Tegra 4

Nvidia’s first Tegra 4 design win is here, apparently, and it doesn’t appear very impressive at all. Tegra 4 is late to the party, so it is a bit short on design wins, to put it mildly.

Now a new ZTE smartphone has been spotted by Chinese bloggers and it seems to be based on Nvidia’s first A15 chip. The ZTE 988 is a phablet, with a 5.7-inch 720p screen. It has 2GB of RAM, a 13-megapixel camera and a 6.9mm thin body. It weighs just 110g, which is pretty surprising. The spec is rather underwhelming, especially in the display department.

However, a grain of salt is advised. It is still unclear whether the phone features a Tegra 4 or a Qualcomm chipset. Also, it is rather baffling to see just a 720p screen on a Tegra 4 phablet; the chip seems like overkill for such a display.

Source

Are CUDA Applications Limited?

March 29, 2013 by  
Filed under Computing

Comments Off on Are CUDA Applications Limited?

Acceleware said at Nvidia’s GPU Technology Conference (GTC) today that most algorithms that run on GPGPUs are bound by GPU memory size.

Acceleware is partly funded by Nvidia to provide developer training for CUDA to help sell the language to those that are used to traditional C and C++ programming. The firm said that most CUDA algorithms are now limited by GPU local memory size rather than GPU computational performance.

Both AMD and Nvidia offer general purpose GPU (GPGPU) accelerator parts that deliver significantly faster computational processing than traditional CPUs; however, they have only between 6GB and 8GB of local memory, which constrains the size of the dataset the GPU can process. While developers can push more data from system main memory, the latency cost negates the raw performance benefit of the GPU.

Kelly Goss, training program manager at Acceleware, said that “most algorithms are memory bound rather than GPU bound” and “maximising memory usage is key” to optimising GPGPU performance.

She further said that developers need to understand and take advantage of the memory hierarchy of Nvidia’s Kepler GPU, and look at ways of reducing the number of memory accesses for every line of GPU code.

The point Goss was making is that GPU computation is cheap in terms of clock cycles compared with the time it takes to fetch data from local memory, let alone to load GPU memory from system main memory.
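As a rough illustration of the sort of optimisation Goss is describing, the sketch below stages a tile of input data in Kepler’s fast on-chip shared memory so that each value is fetched from slow global memory once per block and then reused by neighbouring threads. It is a standard CUDA tiling pattern rather than code from Acceleware’s talk, and the block size, radius and the assumption that the input length is a multiple of the block size are ours.

    // Standard CUDA tiling pattern: a thread block stages a tile of the input
    // (plus a halo) into on-chip shared memory, so neighbouring threads reuse
    // data instead of each issuing their own slow global-memory loads.
    #include <cuda_runtime.h>

    #define RADIUS     3
    #define BLOCK_SIZE 256

    __global__ void stencil1d(const float *in, float *out, int n)
    {
        __shared__ float tile[BLOCK_SIZE + 2 * RADIUS];

        int gindex = blockIdx.x * blockDim.x + threadIdx.x;  // assumes n is a multiple of BLOCK_SIZE
        int lindex = threadIdx.x + RADIUS;

        // One global-memory read per thread, plus a few halo reads per block.
        tile[lindex] = in[gindex];
        if (threadIdx.x < RADIUS) {
            tile[lindex - RADIUS]     = (gindex >= RADIUS)        ? in[gindex - RADIUS]     : 0.0f;
            tile[lindex + BLOCK_SIZE] = (gindex + BLOCK_SIZE < n) ? in[gindex + BLOCK_SIZE] : 0.0f;
        }
        __syncthreads();

        // All later reads hit fast shared memory instead of global memory.
        float sum = 0.0f;
        for (int offset = -RADIUS; offset <= RADIUS; ++offset)
            sum += tile[lindex + offset];
        out[gindex] = sum;
    }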

Goss, talking to a room full of developers, proceeded to outline some of the performance characteristics of the memory hierarchy in Nvidia’s Kepler GPU architecture, showing the level of detail that CUDA programmers need to pay attention to if they want to extract the full performance potential from Nvidia’s GPGPU computing architecture.

Given Goss’s observation that algorithms running on Nvidia’s GPGPUs are often constrained by local memory size rather than by the GPU itself, the firm might want to look at simplifying the tiers of memory involved and increasing the amount of GPU local memory so that CUDA software developers can process larger datasets.

Source

Will Tegra 4 Help nVidia’s Financials?

March 28, 2013 by  
Filed under Computing

Comments Off on Will Tegra 4 Help nVidia’s Financials?

Trefis analysts believe Nvidia’s Tegra business is likely to grow over the next few years, although Nvidia won’t become a mobile chip company anytime soon.

In a note published a couple of days ago, Trefis concluded that Nvidia has managed to offset the impact of the PC slump thanks to mobile revenue. The PC market took a massive hit in 2012, and although things are looking up, Tegra could still come in handy.

Nvidia currently earns about 18 percent of its revenue from Tegra processors, which is not bad for a product that was on the drawing board just a few years ago.

“We estimate Tegra sales to grow at a CAGR of 17% until 2016. While we believe that Nvidia will manage to expand its footprint in mobile computing, we think that the increasing competition will keep its growth rate lower than the industry average,” said Trefis.

However, Trefis went on to conclude that Nvidia has had more luck with tablets than smartphones. Last year it scored several big tablet design wins, but relatively few phone wins. The Tegra 4i, with integrated LTE, should lend a helping hand, but it won’t be ready for much of 2013. In addition, Nvidia is facing more pressure from Qualcomm and Samsung, while at the same time it was forced to push back the introduction of Tegra 4 due to technical issues.

Trefis believes Tegra’s contribution to Nvidia’s overall revenue could reach over 25 percent by 2019, although it does not expect the Tegra business to expand much beyond that in a mature smartphone market.

Nvidia has Tegra, AMD has consoles, so both outfits have something to fall back on in a slow PC market, at least for the time being.

Source

Facebook Goes DRAM

March 19, 2013 by  
Filed under Computing

Comments Off on Facebook Goes DRAM

Facebook has come up with a data cache which runs on flash memory instead of DRAM. Dubbed McDipper, it saves money while still delivering higher performance than disk.

The system is a Facebook-built implementation of the popular memcached key-value store; the only difference is that it runs on flash memory rather than pricier DRAM. Memcached is the open-source key-value store that caches frequently accessed data in memory so applications can access and serve it faster than if it were stored on hard disks.
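For readers unfamiliar with the pattern, here is a rough cache-aside sketch of how a memcached-style store is typically used: try the cache first and only fall back to the slow backing store on a miss, filling the cache so the next read is fast. An in-process map stands in for the real memcached or McDipper client; this is purely illustrative and not Facebook’s code.

    // Cache-aside sketch: check the cache, fall back to the slow backing store
    // on a miss, then populate the cache. A std::unordered_map stands in for a
    // memcached/McDipper client; illustrative only, not Facebook's implementation.
    #include <iostream>
    #include <string>
    #include <unordered_map>

    std::unordered_map<std::string, std::string> cache;   // stand-in for memcached

    std::string fetch_from_backing_store(const std::string &key)
    {
        // Pretend this is an expensive database or disk read.
        return "value-for-" + key;
    }

    std::string cached_get(const std::string &key)
    {
        auto hit = cache.find(key);
        if (hit != cache.end())
            return hit->second;                            // fast path: cache hit

        std::string value = fetch_from_backing_store(key); // slow path: cache miss
        cache[key] = value;                                // fill for next time
        return value;
    }

    int main()
    {
        std::cout << cached_get("user:42") << "\n";        // miss, then fill
        std::cout << cached_get("user:42") << "\n";        // hit
        return 0;
    }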

Facebook runs thousands of memcached servers to power its various applications. The only downside is that all that DRAM is expensive. McDipper can handle working sets that have very large footprints but moderate to low request rates. It provides up to 20 times the capacity per server and still supports tens of thousands of operations per second.

According to Gigaom, Facebook has deployed McDipper for a handful of these workloads. This has reduced the total number of deployed servers in some pools by as much as 90 per cent while still delivering more than 90 per cent of get responses with sub-millisecond latencies.

Source
