
AMD 2014 GPU Shaping Up

March 1, 2013 by  
Filed under Computing

Comments Off on AMD 2014 GPU Shaping Up

It looks like AMD has changed its mind about 8000 series graphics: parts of the Solar System range of products are slowly showing up in the market.

We understand that at some point in the latter part of 2013 there will be a new Sea Islands based graphics line from AMD. What we refer to as the HD 8000 series will come at some point, but AMD could actually end up branding some of these new parts as HD 7000 series products as well.

We were more interested in the big picture: when, and whether, we will see an entirely new generation in 2014; let’s call it the 9000 series for now. Multiple sources have told us that AMD will stick with discrete graphics for the foreseeable future. In 2014, we should see the new HSA generation as well as a steady roadmap for the years beyond.

Remember, AMD still makes quite a bit of money on graphics. It doesn’t make a lot, but it doesn’t build GPUs at a loss either. Its graphics integrated into CPUs, APUs if you will, also help AMD sell more cores, and this is why AMD will stick with making new graphics cores in the future. Technology developed for high-end discrete graphics will trickle down to APUs over time.

Winning all three consoles, including the already launched Nintendo Wii U as well as the soon-to-launch Xbox 720 (next) and PlayStation 4, will definitely help AMD to perform even better in the future and build closer relations with developers.

Nvidia will have someone to fight in this market and AMD will continue to make discrete graphics cards, as well as notebook chips. Both companies will fight for as much market share as they possibly can, and analysts who claim either of them is about to ditch the discrete market are dead wrong.

Source

AMD Delays Cards

February 19, 2013 by  
Filed under Computing

Comments Off on AMD Delays Cards

While some people are packing in smoking, AMD thinks it is a good idea to pack in releasing new graphics cards, at least for a year or so.

AMD marketing manager Robert Hallock told Megagames that the company has no intentions to release Radeon HD 8000 series cards for the foreseeable future.

“The HD 7000 Series will remain our primary focus for quite some time,” he said.

When pressed he said that AMD and its partners are happy with the HD 7000 Series, and it will continue to be the company’s emphasis in the channel for the foreseeable future. There had been some rumours that an HD 8000 Series was being developed. However, AMD has never confirmed it, and so far there has been no proof that any cards were ready to show up.

Hallock’s comments lend credibility to rumours that the HD 8000 family won’t be out before Q4 2013.

Source

AMD Releases Vishera

February 8, 2013 by  
Filed under Computing

Comments Off on AMD Releases Vishera

Although it was detailed back in August last year, AMD has just now officially released its new “affordable” Vishera based FX-4130 quad-core socket AM3+ CPU.

The new CPU is part of AMD’s 4100-series and is based on the Vishera core design with four Piledriver cores. It runs at a 3.8GHz base clock and can “turbo” up to 3.9GHz. It packs 4MB of L2 and 4MB of L3 cache and has a 125W TDP.

According to the slide over at Xbitlabs.com, the FX-4130 replaces the FX-4100 with the same US $101 price but should provide between 3 and 9 percent more performance.

As things get better with Globalfoundries and their 32nm process technology, AMD is expected to introduce new models based on cut-down versions of Vishera, according to the report.

Source…

Anonymous Attacks MIT

January 23, 2013 by  
Filed under Around The Net

Comments Off on Anonymous Attacks MIT

Anonymous went after the Massachusetts Institute of Technology (MIT) website after its president called for an internal investigation into what role the institution played in the prosecution of web activist Aaron Swartz.

MIT president Rafael Reif revealed the investigation in an email to staff that he sent out after hearing the news about Swartz’s death.

“I want to express very clearly that I and all of us at MIT are extremely saddened by the death of this promising young man who touched the lives of so many. It pains me to think that MIT played any role in a series of events that have ended in tragedy,” he wrote.

“I have asked Professor Hal Abelson to lead a thorough analysis of MIT’s involvement from the time that we first perceived unusual activity on our network in fall 2010 up to the present. I have asked that this analysis describe the options MIT had and the decisions MIT made, in order to understand and to learn from the actions MIT took. I will share the report with the MIT community when I receive it.”

Hacktivists from Anonymous defaced two MIT webpages in the wake of the announcement and turned them into memorials for Swartz.

Source…

Botnets Attack U.S. Banks

January 18, 2013 by  
Filed under Around The Net

Comments Off on Botnets Attack U.S. Banks

Evidence collected from a website that was recently used to flood U.S. banks with junk traffic suggests that the responsible parties behind the ongoing DDoS attack campaign against U.S. financial institutions — thought by some to be the work of Iran — are using botnets for hire.

The compromised website contained a PHP-based backdoor script that was regularly instructed to send numerous HTTP and UDP (User Datagram Protocol) requests to the websites of several U.S. banks, including PNC Bank, HSBC and Fifth Third Bank, Ronen Atias, a security analyst at Web security services provider Incapsula, said Tuesday in a blog post.

Atias described the compromised site as a “small and seemingly harmless general interest UK website” that recently signed up for Incapsula’s services.

An analysis of the site and the server logs revealed that attackers were instructing the rogue script to send junk traffic to U.S. banking sites for limited periods of time varying between seven minutes and one hour. The commands were being renewed as soon as the banking sites showed signs of recovery, Atias said.
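To illustrate the on/off pattern described above, here is a minimal sketch, assuming nothing more than a list of request timestamps parsed from an access log, of how an analyst might group traffic into bursts separated by quiet periods. The log data, gap threshold and timings are illustrative assumptions, not Incapsula's actual tooling.

from datetime import datetime, timedelta

# Hypothetical request timestamps, standing in for entries parsed from a log:
# a ten-minute burst, a quiet spell, then a later seven-minute burst.
timestamps = (
    [datetime(2013, 1, 15, 10, 0, 0) + timedelta(seconds=s) for s in range(0, 600, 2)]
    + [datetime(2013, 1, 15, 11, 30, 0) + timedelta(seconds=s) for s in range(0, 420, 2)]
)

def attack_windows(times, gap=timedelta(minutes=5)):
    """Group sorted timestamps into bursts separated by quiet periods longer than gap."""
    windows = []
    start = prev = times[0]
    for t in times[1:]:
        if t - prev > gap:          # quiet period: close the current burst
            windows.append((start, prev))
            start = t
        prev = t
    windows.append((start, prev))
    return windows

for begin, end in attack_windows(sorted(timestamps)):
    print("burst from", begin, "to", end, "lasting", end - begin)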

During breaks from attacking financial websites, the backdoor script was being instructed to attack unrelated commercial and e-commerce sites. “This all led us to believe that we were monitoring the activities of a Botnet for hire,” Atias said.

“The use of a Web Site as a Botnet zombie for hire did not surprise us,” the security analyst wrote. “After all, this is just a part of a growing trend we’re seeing in our DDoS prevention work.”

Source…

Passwords Continue As The Weakest Link

January 11, 2013 by  
Filed under Computing

Comments Off on Passwords Continue As The Weakest Link

Passwords aren’t the only failure point in many recent widely publicized intrusions by hackers.

But passwords played a part in the perfect storm of users, service providers and technology failures that can result in epic network disasters.  Password-based security mechanisms — which can be cracked, reset and socially engineered — no longer suffice in the era of cloud computing.

The problem is this: The more complex a password is, the harder it is to guess and the more secure it is. But the more complex a password is, the more likely it is to be written down or otherwise stored in an easily accessible location, and therefore the less secure it is. And the killer corollary: If a password is stolen, its relative simplicity or complexity becomes irrelevant.
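For a sense of scale, password strength is often estimated as entropy: a uniformly random password of length L drawn from an alphabet of N characters carries L × log2(N) bits. The small sketch below works that out for a few cases; it assumes truly random passwords, which human-chosen ones are not, so treat the figures as upper bounds.

import math

def entropy_bits(alphabet_size, length):
    """Entropy in bits of a uniformly random password: length * log2(alphabet size)."""
    return length * math.log2(alphabet_size)

print(round(entropy_bits(26, 8), 1))   # 8 lowercase letters      -> ~37.6 bits
print(round(entropy_bits(95, 8), 1))   # 8 printable ASCII chars  -> ~52.6 bits
print(round(entropy_bits(95, 12), 1))  # 12 printable ASCII chars -> ~78.8 bits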

Password security is the common cold of our technological age, a persistent problem that we can’t seem to solve. The technologies that promised to reduce our dependence on passwords — biometrics, smart cards, key fobs, tokens — have all thus far fallen short in terms of cost, reliability or other attributes. And yet, as ongoing news reports about password breaches show, password management is now more important than ever.

All of which makes password management a nightmare for IT shops. “IT faces competing interests,” says Forrester analyst Eve Maler. “They want to be compliant and secure, but they also want to be fast and expedient when it comes to synchronizing user accounts.”

Source…

AMD Offers More Radeon Chips

December 26, 2012 by  
Filed under Computing

Comments Off on AMD Offers More Radeon Chips

AMD has announced four Radeon HD 8000M series GPU chips sporting its latest Graphics Core Next (GCN) architecture.

AMD’s GCN architecture made its first appearance in the firm’s ultra high-end mobile chips; however, next month’s CES show will see the firm show off laptops featuring four Radeon HD 8000M series chips. The four Radeon HD 8000M chips are pitched at the mainstream and gaming laptop markets, and the company said that Asus has already announced a laptop that will use the chips in an ‘ultrathin’ design.

AMD’s Radeon HD 8000M series sees the firm offer three chips with the same number of stream processors and memory clock speeds that scale up to 1.25GHz, differentiated by their core speeds. The Radeon HD 8500M, Radeon HD 8600M and Radeon HD 8700M all have 384 stream processors but are clocked at up to 650MHz, up to 775MHz and between 650MHz and 850MHz, respectively.

Topping AMD’s present range is the Radeon HD 8800M, which has 640 stream processors and is clocked at between 650MHz and 700MHz, while its GDDR5 memory is also clocked at 1.25GHz. All of the firm’s chips, by virtue of being based on the GCN architecture, support DirectX 11.1.

AMD said it will launch three more chips in the Radeon HD 8000M series in the second quarter of 2013. According to the firm’s roadmap, two of those chips will sit above the Radeon HD 8800M in terms of performance while the third will slot in somewhere between the Radeon HD 8600M and Radeon HD 8700M.

AMD was tight-lipped on the power figures for its chips, saying that full details of its Radeon HD 8000M series chips will appear at CES, where its partners will turn up with laptops sporting the chips.

Source…

AMD Shows Piledriver Opteron

December 13, 2012 by  
Filed under Computing

Comments Off on AMD Shows Piledriver Opteron

AMD’s Piledriver rollout is all but complete. With Trinity in the mobile and desktop space, new 3300 and 4300 Opterons are bringing the new architecture to data centers.

The Opteron 4300 series offers six different parts, in quad-, six- and eight-core flavours. Stock clocks range between 2.2GHz and 3.5GHz, with TDPs in the 35W to 95W range. The cheapest Opteron 4334 costs $191, while the priciest 4332HE comes in at $501. The 3300 series consists of three quad- and eight-core SKUs, priced at $125 to $229. The pricing of both series is pretty aggressive.

But what’s next for AMD? Well things should be eerily quiet on the server front in 2013. Abu Dhabi, Seoul and Delhi/Orochi C should last throughout 2013 and even a good part of 2014. That’s when we can expect some major changes, as AMD transitions to 28nm and goes about transforming its Opteron lineup.

Future Low Power CPUs and APUs (as AMD calls them) should replace Delhi/Orochi-C in 1P and dense server markets, but AMD is also planning “Client APUs for market enablement,” and this sounds a lot like ARM-based low voltage parts. Of course, in the high end AMD plans to stick with big Steamroller cores, but mid-2014 is a long way off.

Source…

Radeon 8000 HD Expected In Q2

December 10, 2012 by  
Filed under Computing

Comments Off on Radeon 8000 HD Expected In Q2

Since there is only one month left in 2012, and graphics companies rarely announce anything big in December, it is obvious that Radeon HD 8000 has slipped to 2013.

Our well informed industry sources are confirming that the next generation, based on the Sea Islands architecture, is coming in 2013, and some of them dare to say that it will be Q2 2013 rather than Q1 2013. Some people were expecting to see the cards in Q1 2013, but even according to AMD’s own roadmap Sea Islands, the new GPU architecture with HSA features, was scheduled (or should we say delayed? Ed.) for 2013.

AMD had already communicated this schedule loud and clear in its February 2012 roadmap update, and even then it killed hopes that Sea Islands or HD 8000 cards were coming in very late Q3 or Q4 2012, as had previously been expected.

We won’t get into any specific details like the 8000 branding or die sizes, as we simply don’t know them at this time. It’s safe to say that these cards will end up faster than the 7000 series, at similar TDPs to the previous generation, all manufactured on 28nm.

Source…

Do Supercomputers Lead To Downtime?

December 3, 2012 by  
Filed under Computing

Comments Off on Do Supercomputers Lead To Downtime?

As supercomputers grow more powerful, they’ll also become more susceptible to failure, thanks to the increased amount of built-in componentry. A few researchers at the recent SC12 conference, held last week in Salt Lake City, offered possible solutions to this growing problem.

Today’s high-performance computing (HPC) systems can have 100,000 nodes or more, with each node built from multiple components of memory, processors, buses and other circuitry. Statistically speaking, all these components will fail at some point, and they halt operations when they do so, said David Fiala, a Ph.D. student at North Carolina State University, during a talk at SC12.

The problem is not a new one, of course. When Lawrence Livermore National Laboratory’s 600-node ASCI (Accelerated Strategic Computing Initiative) White supercomputer went online in 2001, it had a mean time between failures (MTBF) of only five hours, thanks in part to component failures. Later tuning efforts had improved ASCI White’s MTBF to 55 hours, Fiala said.

But as the number of supercomputer nodes grows, so will the problem. “Something has to be done about this. It will get worse as we move to exascale,” Fiala said, referring to how supercomputers of the next decade are expected to have 10 times the computational power that today’s models do.

Today’s techniques for dealing with system failure may not scale very well, Fiala said. He cited checkpointing, in which a running program is temporarily halted and its state is saved to disk. Should the program then crash, the system is able to restart the job from the last checkpoint.
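As a rough illustration of what checkpointing involves (a toy sketch, not the specific systems Fiala discussed), the snippet below periodically pickles a job's state to disk and resumes from the last saved step after a crash. The file name, interval and "work" are placeholder assumptions.

import os
import pickle

CHECKPOINT = "job.ckpt"                  # hypothetical checkpoint file name

def load_state():
    """Resume from the last checkpoint if one exists, otherwise start fresh."""
    if os.path.exists(CHECKPOINT):
        with open(CHECKPOINT, "rb") as f:
            return pickle.load(f)
    return {"step": 0, "total": 0}

def save_state(state):
    """Write the job's state to disk; this is the overhead that grows with scale."""
    with open(CHECKPOINT, "wb") as f:
        pickle.dump(state, f)

state = load_state()
for step in range(state["step"], 1000):
    state["total"] += step               # stand-in for real computation
    state["step"] = step + 1
    if step % 100 == 0:                  # checkpoint every 100 steps
        save_state(state)
save_state(state)
print("finished with total", state["total"])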

The problem with checkpointing, according to Fiala, is that as the number of nodes grows, the amount of system overhead needed to do checkpointing grows as well — and grows at an exponential rate. On a 100,000-node supercomputer, for example, only about 35 percent of the activity will be involved in conducting work. The rest will be taken up by checkpointing and — should a system fail — recovery operations, Fiala estimated.

Because of all the additional hardware needed for exascale systems, which could be built from a million or more components, system reliability will have to be improved by 100 times in order to keep to the same MTBF that today’s supercomputers enjoy, Fiala said.
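The arithmetic behind that claim is straightforward if one assumes independent component failures: system-level MTBF is roughly the per-component MTBF divided by the component count, so 100 times more components needs components (or recovery machinery) that are 100 times more reliable to hold MTBF constant. A back-of-envelope sketch, with an illustrative per-component figure:

component_mtbf_hours = 5_000_000         # hypothetical per-component MTBF

for n in (10_000, 100_000, 1_000_000):
    system_mtbf = component_mtbf_hours / n
    print(n, "components -> system MTBF of roughly", system_mtbf, "hours")

# Holding system MTBF constant while the component count grows 100x therefore
# requires per-component reliability (or recovery efficiency) to improve 100x.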

Fiala presented technology that he and fellow researchers developed that may help improve reliability. The technology addresses the problem of silent data corruption, when systems make undetected errors writing data to disk.

Source…
