Syber Group
Toll Free : 855-568-TSTG(8784)

Intel Buys Into Altera

April 15, 2014
Filed under Computing


Technology gossip columns are full of news that Intel and Altera have expanded their relationship. Apparently, Altera has been Intel’s shoulder to cry on as the chip giant seeks to move beyond the declining PC market and the breakup of the Wintel alliance. Intel took the breakup very hard, and there was talk that Altera might be just a rebound thing.

Last year Intel announced that it would manufacture Altera’s ARM-based quad-core Stratix 10 processors, as part of its efforts to grow its foundry business to make silicon products for third parties. Now the two vendors are expanding the relationship to include multi-die devices integrating Altera’s field-programmable gate arrays (FPGAs) and systems-on-a-chip (SoCs) with a range of other components, from memory to ASICs to processors.

Multi-die devices can drive down production costs and improve the performance and energy efficiency of chips for everything from high-performance servers to communications systems. The multi-die devices will take advantage of the Stratix 10 programmable chips that Intel is manufacturing for Altera with its 14-nanometer Tri-Gate process. Intel’s three-dimensional transistor architecture, combined with Altera’s FPGA redundancy technology, lets Altera create a highly dense, energy-efficient programmable chip die with better integration of components.

At the same time, Intel officials are looking for ways to make more cash from its manufacturing capabilities, including growing its foundry business by making chips for other vendors. CEO Brian Krzanich and other Intel executives have said they will manufacture third-party chips even if they are based on competing infrastructure, which is the case with Altera and its ARM-based chips.

Source

AMD, Intel & nVidia Go OpenGL

April 7, 2014
Filed under Computing


AMD, Intel and Nvidia teamed up to tout the advantages of the OpenGL multi-platform application programming interface (API) at this year’s Game Developers Conference (GDC).

Sharing a stage at the event in San Francisco, the three major chip designers explained how, with a little tuning, OpenGL can offer developers between seven and 15 times better performance, compared with the more widely recognised increases of around 1.3 times.

AMD manager of software development Graham Sellers, Intel graphics software engineer Tim Foley, Nvidia OpenGL engineer Cass Everitt and Nvidia senior software engineer John McDonald demonstrated their OpenGL techniques on real-world devices to show that they are suitable for use across multiple platforms.

During the presentation, Intel’s Foley talked up three techniques that can help OpenGL increase performance and reduce driver overhead: persistent-mapped buffers for faster streaming of dynamic geometry, MultiDrawIndirect (MDI) for faster submission of many draw calls, and packing 2D textures into arrays so that texture changes no longer break batches.
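As a rough illustration, the first of those techniques, persistent-mapped buffers, can be sketched in OpenGL-flavoured pseudocode. The triple-buffered layout, region size and helper names below are illustrative assumptions, not code from the talk:

```
// One-time setup: allocate an immutable buffer and map it once, permanently.
glBufferStorage(GL_ARRAY_BUFFER, 3 * REGION_SIZE, NULL,
                GL_MAP_WRITE_BIT | GL_MAP_PERSISTENT_BIT | GL_MAP_COHERENT_BIT)
ptr = glMapBufferRange(GL_ARRAY_BUFFER, 0, 3 * REGION_SIZE,
                       GL_MAP_WRITE_BIT | GL_MAP_PERSISTENT_BIT | GL_MAP_COHERENT_BIT)

// Per frame: write dynamic geometry into one third of the buffer, cycling
// through the regions so the CPU never writes a region the GPU is still
// reading (guarded by a fence sync). No map/unmap call per frame.
wait_for_fence(fence[region])
write_vertices(ptr + region * REGION_SIZE)
draw_from_region(region)
fence[region] = glFenceSync(GL_SYNC_GPU_COMMANDS_COMPLETE, 0)
region = (region + 1) % 3
```

The point of the pattern is that the expensive map/unmap driver work happens once at startup rather than every frame, which is where the claimed overhead reduction comes from.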

They also mentioned during the presentation that with proper implementations of these high-level OpenGL techniques, driver overhead could be reduced to almost zero. This is something that Nvidia’s software engineers have already claimed is impossible with Direct3D and only possible with OpenGL.

Nvidia’s VP of game content and technology, Ashu Rege, blogged his account of the GDC joint session on the Nvidia blog.

“The techniques presented apply to all major vendors and are suitable for use across multiple platforms,” Rege wrote.

“OpenGL can cut through the driver overhead that has been a frustrating reality for game developers since the beginning of the PC game industry. On desktop systems, driver overhead can decrease frame rate. On mobile devices, however, driver overhead is even more insidious, robbing both battery life and frame rate.”

The slides from the talk were entitled Approaching Zero Driver Overhead.

At GDC, Microsoft also unveiled the latest version of its graphics API, DirectX 12, with Direct3D 12 for more efficient gaming.

Showing off the new DirectX 12 API in a demo of the Xbox One racing game Forza 5 running on a PC with an Nvidia GeForce Titan Black graphics card, Microsoft said DirectX 12 gives applications the ability to directly manage resources and perform synchronisation. As a result, developers of advanced applications can control the GPU to build games that run more efficiently.

Source

Do Chip Makers Have Cold Feet?

March 27, 2014
Filed under Computing


It is starting to look like chip makers have cold feet about moving to the next technology for chipmaking. Fabricating chips on larger silicon wafers is the latest such transition, but according to the Wall Street Journal chipmakers are mothballing their plans.

Companies have to make massive upfront outlays for plants and equipment, and they are balking because the latest change could boost the cost of a single high-volume factory to as much as $10 billion, from around $4 billion today. Some companies have been reining in their investments, raising fears that the equipment needed to produce the new chips might be delayed for a year or more.

ASML, a maker of key machines used to define features on chips, recently said it had “paused” development of gear designed to work with the larger wafers. Intel said it has slowed some payments to the Netherlands-based company under a deal to help develop the technology.

Gary Dickerson, chief executive of Applied Materials, said that the move to larger wafers “has definitely been pushed out from a timing standpoint.”

Source

nVidia Outs CUDA 6

March 19, 2014
Filed under Computing


Nvidia has made the latest GPU programming language CUDA 6 Release Candidate available for developers to download for free.

The release arrives with several new features and improvements to make parallel programming “better, faster and easier” for developers creating next generation scientific, engineering, enterprise and other applications.

Nvidia has aggressively promoted its CUDA programming language as a way for developers to exploit the floating point performance of its GPUs. Available now, the CUDA 6 Release Candidate brings a major new update in unified memory access, which lets CUDA applications access CPU and GPU memory without the need to manually copy data from one to the other.

“This is a major time saver that simplifies the programming process, and makes it easier for programmers to add GPU acceleration in a wider range of applications,” Nvidia said in a blog post on Thursday.

There’s also the addition of “drop-in libraries”, which Nvidia said will accelerate applications by up to eight times.

“The new drop-in libraries can automatically accelerate your BLAS and FFTW calculations by simply replacing the existing CPU-only BLAS or FFTW library with the new, GPU-accelerated equivalent,” the chip designer added.

Multi-GPU scaling has also been added to the CUDA 6 platform, introducing redesigned BLAS and FFT GPU libraries that automatically scale performance across up to eight GPUs in a single node. Nvidia said this provides over nine teraflops of double-precision performance per node and supports workloads of up to 512GB in size, larger than the libraries have supported before.
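Taken at face value, Nvidia’s per-node figures work out to a little over one teraflop of double-precision throughput per GPU. A quick back-of-the-envelope check, using only the numbers quoted above:

```python
# Sanity check of Nvidia's CUDA 6 multi-GPU scaling figures.
gpus_per_node = 8        # maximum GPUs the scaled BLAS/FFT libraries span
node_dp_tflops = 9.0     # "over nine teraflops" double precision per node
max_workload_gb = 512    # largest supported workload per node

per_gpu_tflops = node_dp_tflops / gpus_per_node
per_gpu_gb = max_workload_gb / gpus_per_node   # share per GPU if split evenly

print(f"{per_gpu_tflops:.3f} TFLOPS and {per_gpu_gb:.0f} GB of workload per GPU")
# -> 1.125 TFLOPS and 64 GB of workload per GPU
```

Note that 64GB per GPU far exceeds any single GPU’s on-board memory of the era, which is consistent with the libraries spilling larger workloads across host memory.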

“In addition to the new features, the CUDA 6 platform offers a full suite of programming tools, GPU-accelerated math libraries, documentation and programming guides,” Nvidia said.

The previous CUDA 5.5 Release Candidate was issued last June and added support for ARM-based processors.

Aside from ARM support, Nvidia also improved Hyper-Q support in CUDA 5.5, which allowed developers to use MPI workload prioritisation. The firm also touted improved performance analysis and improved performance for cross-compilation on x86 processors.

Source

Is AMD Worried?

March 17, 2014
Filed under Computing


AMD’s Mantle has been a hot topic for quite some time and, despite its delayed birth, it has finally arrived and delivered performance gains in Battlefield 4. Microsoft is not sleeping, either: it has its own answer to Mantle, which we mentioned here.

Oddly enough, we heard some industry people calling it DirectX 12 or DirectX Next, and it looks like Microsoft is finally getting ready to update DirectX to the next generation. From what we have heard, the next generation of DirectX will fix some of the driver overhead problems that Mantle was designed to address, which is a good thing for the whole industry and of course for gamers.

AMD got back to us officially stating that “AMD would like you to know that it supports and celebrates a direction for game development that is aligned with AMD’s vision of lower-level, ‘closer to the metal’ graphics APIs for PC gaming. While industry experts expect this to take some time, developers can immediately leverage efficient API design using Mantle. “

AMD also told us that we can expect some information about this at the Game Developers Conference that starts on March 17th, or in less than two weeks from now.

We have a feeling that Microsoft is finally ready to talk about DirectX Next, DirectX 11.X, DirectX 12 or whatever it ends up calling it, and we would not be surprised to see Nvidia’s 20nm Maxwell chips support this API, as well as future GPUs from AMD, possibly again 20nm parts.

Source

Intel Outs New Xeon Chipset

March 4, 2014
Filed under Computing


Intel has released details about its new Xeon E7 v2 processor family. The Xeon processor E7 8800/4800/2800 v2 product family is designed to support up to 32-socket servers, with configurations of up to 15 processing cores and up to 1.5 terabytes of memory per socket.
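For scale, those headline limits multiply out to some striking totals. A quick arithmetic sketch using only the figures quoted above:

```python
# Maximum configuration implied by Intel's headline Xeon E7 v2 figures.
max_sockets = 32         # up to 32-socket servers
cores_per_socket = 15    # up to 15 processing cores per socket
tb_per_socket = 1.5      # up to 1.5TB of memory per socket

total_cores = max_sockets * cores_per_socket
total_memory_tb = max_sockets * tb_per_socket

print(f"up to {total_cores} cores and {total_memory_tb:.0f}TB of memory")
# -> up to 480 cores and 48TB of memory
```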

The chip is designed for the big-data end of the Internet of Things movement, which the processor maker projects will grow to at least 30 billion devices by 2020. Beyond up to two times better performance, Intel is promising a few other upgrades with this generation of the data-focused family, including triple the memory capacity, four times the I/O bandwidth and the potential to reduce total cost of ownership by up to 80 percent.

The 15-core variants with the largest thermal envelope (155W) run at 2.8GHz with 37.5MB of cache and 8 GT/s QuickPath connectivity. The lowest-power models in the list have 105W TDPs and run at 2.3GHz with 24MB of cache and 7.2 GT/s of QuickPath bandwidth. There was also talk of 40W, 1.4GHz models at ISSCC but they have not been announced yet.

Intel has signed on nearly two dozen hardware partners to support the platform, including Asus, Cisco, Dell, EMC, and Lenovo. On the software end, Microsoft, SAP, Teradata, Splunk, and Pivotal already support the new Xeon family. IBM and Oracle are among the few that support Xeon E7 v2 on both the hardware and software sides.

Source

App Stores For Supercomputers En Route

December 13, 2013
Filed under Computing


A major problem facing supercomputing is that the firms that could benefit most from the technology aren’t using it. It is a dilemma.

Supercomputer-based visualization and simulation tools could allow a company to create, test and prototype products in virtual environments. Couple this virtualization capability with a 3-D printer, and a company would revolutionize its manufacturing.

But licensing fees for the software needed to simulate wind tunnels, ovens, welds and other processes are expensive, and the tools require large multicore systems and skilled engineers to use them.

One possible solution: taking an HPC process and converting it into an app.

This is how it might work: A manufacturer designing a part to reduce drag on an 18-wheel truck could upload a CAD file, plug in some parameters, hit start and let it use 128 cores of the Ohio Supercomputer Center’s (OSC) 8,500 core system. The cost would likely be anywhere from $200 to $500 for a 6,000 CPU hour run, or about 48 hours, to simulate the process and package the results up in a report.
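The quoted price range implies a remarkably low per-core cost. A quick sketch of the arithmetic behind the example above:

```python
# Rough cost arithmetic for the 18-wheeler drag simulation example.
cores = 128                      # cores used on OSC's 8,500-core system
cpu_hours = 6000                 # quoted size of the run
low_cost, high_cost = 200, 500   # quoted price range in dollars

wall_clock_hours = cpu_hours / cores   # elapsed time if all cores run in parallel
rate_low = low_cost / cpu_hours        # implied dollars per CPU-hour
rate_high = high_cost / cpu_hours

print(f"~{wall_clock_hours:.0f} hours wall clock, "
      f"${rate_low:.3f}-${rate_high:.3f} per CPU-hour")
# -> ~47 hours wall clock, $0.033-$0.083 per CPU-hour
```

The roughly 47-hour result matches the article’s “about 48 hours,” and even the high end of the range is pennies per CPU-hour, which is the whole pitch against a $100,000 physical wind tunnel test.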

Testing that 18-wheeler in a physical wind tunnel could cost as much as $100,000.

Alan Chalker, the director of the OSC’s AweSim program, uses that example to explain what his organization is trying to do. The new group has some $6.5 million from government and private groups, including consumer products giant Procter & Gamble, to find ways to bring HPC to manufacturers via an app store.

The app store is slated to open at the end of the first quarter of next year, with one app and several tools that have been ported for the Web. The plan is to eventually spin off AweSim into a private firm and populate the app store with thousands of apps.

Tom Lange, director of modeling and simulation in P&G’s corporate R&D group, said he hopes that AweSim’s tools will be used for the company’s supply chain.

The software industry model is based on selling licenses, which for an HPC application can cost $50,000 a year, said Lange. That price is well out of the reach of small manufacturers interested in fixing just one problem. “What they really want is an app,” he said.

Lange said P&G has worked with supply chain partners on HPC issues, but it can be difficult because of the complexities of the relationship.

“The small supplier doesn’t want to be beholden to P&G,” said Lange. “They have an independent business and they want to be independent and they should be.”

That’s one of the reasons he likes AweSim.

AweSim will use some open source HPC tools in its apps, and is also working on agreements with major HPC software vendors to make parts of their tools available through apps.

Chalker said software vendors are interested in working with AweSim because it’s a way to get to a market that’s inaccessible today. The vendors could get some licensing fees for an app and a potential customer for larger, more expensive apps in the future.

AweSim is an outgrowth of the Blue Collar Computing initiative that started at OSC in the mid-2000s with goals similar to AweSim’s. But that program required that users purchase a lot of costly consulting work. The app store’s approach is to minimize cost, and the need for consulting help, as much as possible.

Chalker has a half dozen apps already built, including one used in the truck example. The OSC is building a software development kit to make it possible for others to build them as well. One goal is to eventually enable other supercomputing centers to provide compute capacity for the apps.

AweSim will charge users a fixed rate for CPUs, covering just the costs, and will provide consulting expertise where it is needed. Consulting fees may raise the bill for users, but Chalker said it usually wouldn’t be more than a few thousand dollars, a lot less than hiring a full-time computer scientist.

The AweSim team expects that many app users, a mechanical engineer for instance, will know enough to work with an app without the help of a computational fluid dynamics expert.

Lange says that manufacturers understand that producing domestically rather than overseas requires making products better, being innovative and not wasting resources. “You have to be committed to innovate what you make, and you have to commit to innovating how you make it,” said Lange, who sees HPC as a path to get there.

Source

Google Goes Quantum

October 22, 2013
Filed under Computing


When is a blink not a natural blink? For Google the question has such ramifications that it has devoted a supercomputer to solving the puzzle.

Slashgear reports that the internet giant is using its $10 million quantum computer to find out how products like Google Glass can differentiate between a natural blink and a deliberate blink used to trigger functionality.

The supercomputer, based at Google’s Quantum Artificial Intelligence Lab, is part of a joint venture with NASA and is being used to refine the algorithms behind new forms of control such as blinking. It uses D-Wave chips kept as near to absolute zero as possible, which makes it somewhat impractical for everyday wear but amazingly fast at solving brainteasers.

A Redditor reported earlier this year that Google Glass is capable of taking pictures in response to blinking; however, the feature was disabled in the software code because the technology had not advanced enough to differentiate between a natural impulse and an intentional request.

It is easy to see the potential of blink control. Imagine being able to capture your life as you live it, exactly the way you see it, without anyone ever having to stop and ask people to say “cheese”.

Google Glass is due for commercial release next year, but for the many beta testers and developers who already have one, this research could lead to an even richer seam of touchless functionality.

If nothing else you can almost guarantee that Q will have one ready for Daniel Craig’s next James Bond outing.

Source

Chip Makers Going After Cars

October 14, 2013
Filed under Around The Net


Chip makers including Broadcom and Renesas Electronics are putting more focus on in-car entertainment with faster processors and networks for wireless HD movies and navigation, aiming to keep drivers informed and passengers entertained.

With PC sales slipping and the mobile device market proving highly competitive, chip makers are looking for greener pastures in other sectors like in-car entertainment and information.

From Renesas comes the R-Car M2 automotive SoC (System-on-a-Chip), which has enough power to handle simultaneous high-definition navigation, video and voice-controlled browsing.

The SoC is meant for use in mid-range systems. It features two ARM Cortex-A15 cores running at up to 1.5GHz and Renesas’ own SH-4A processor, plus the PowerVR SGX544MP2 from Imagination Technologies for 3D graphics. This combination helps the M2 exceed the previous R-Car H1 with more than three times the CPU capacity and approximately six times better graphics performance.

Car makers that want to put a more advanced entertainment system in their upcoming models should go for the eight-core R-Car H2 SoC, which was announced earlier this year. It is based on ARM’s big.LITTLE architecture, and uses four Cortex-A15 cores and another four Cortex-A7 cores.

The H2 will be able to handle four streams of 1080p video, including Blu-Ray at 60 frames per second, according to Renesas. Mass production is scheduled for the middle of next year, while the M2 won’t arrive in larger volumes until June 2015.

Broadcom, on the other hand, is seeking to drive better networking on the road. The company’s latest line of wireless chipsets for in-car connectivity uses the fast 802.11ac Wi-Fi standard, which offers enough bandwidth for multiple displays and screen resolutions of up to 1080p. Use of the 5GHz band for video allows it to coexist with Bluetooth hands-free calls on 2.4GHz, according to Broadcom.

Broadcom has also implemented Wi-Fi Direct and Miracast. Wi-Fi Direct lets products such as smartphones, cameras and in this case in-car computers connect to one another without joining a traditional hotspot network, while Miracast lets users stream videos and share photos between smartphones, tablets and displays.

The BCM89335 Wi-Fi and Bluetooth Smart Ready combo chip and the BCM89071 Bluetooth and Bluetooth Smart Ready chip are now shipping in small volumes.

Source

AMD’s Richland Shows Up

September 26, 2013
Filed under Computing


Kaveri is coming in a few months, but before it ships AMD will apparently spice up the Richland line-up with a few low-power parts.

CPU World has come across an interesting listing, which points to two new 45W chips, the A8-6500T and the A10-6700T. Both are quads with 4MB of cache. The A8-6500T is clocked at 2.1GHz and can hit 3.1GHz on Turbo, while the A10-6700T’s base clock is 2.5GHz and it maxes out at 3.5GHz.

The prices are $108 and $155 for the A8 and A10 respectively, which doesn’t sound too bad although they are still significantly pricier than regular FM2 parts.

Source
