Is Nvidia Going Linux?
The dark satanic rumor mill has manufactured a hell on earth yarn claiming that Nvidia is working on its own Linux OS for gamers.
A slide has turned up showing a screen capture of an installer for the operating system, which supposedly goes by the "NLINUX" codename at Nvidia.
It is not much to go on, but it does appear that Nvidia is looking at creating a distribution for gamers similar to Valve's SteamOS.
It is hard to see what Nvidia would get out of it. Nvidia already has its SHIELD TV, which is powered by Tegra hardware and offers a variety of games over its cloud streaming "GeForce NOW" service.
So why would Nvidia need a full-blown Linux distribution? The only place it could use one is on the desktop, but that would just mean bringing another Linux distribution into a crowded market with little return for its efforts.
Nvidia already has control of Linux gaming systems and its cards do better on Linux than AMD's, so an "optimized" Linux OS is not going to sell more graphics cards to Linux gamers. It would have to add something better than Steam or Ubuntu, and what could that be?
Courtesy-Fud
Is The GPU Market Going Down?
The global GPU market has fallen by 20 per cent over the last year.
According to Digitimes, it fell to less than 30 million units in 2015, and the outfit suffering most was AMD. The largest graphics card player, Palit Microsystems, which has several brands including Palit and Galaxy, shipped 6.9-7.1 million graphics cards in 2015, down 10 per cent on the year. Asustek Computer shipped 4.5-4.7 million units in 2015, while Colorful shipped 3.9-4.1 million units and is aiming to raise its shipments by 10 per cent in 2016.
Micro-Star International (MSI) enjoyed healthy graphics card shipments at 3.45-3.55 million in 2015, up 15 per cent on year, and EVGA, which has tight partnerships with Nvidia, also saw a significant shipment growth, while Gigabyte suffered from a slight drop on year. Sapphire and PowerColor suffered dramatic drops in shipments in 2015.
There are fears that several of the smaller graphics card makers could be forced out of the market once AMD gets its act together with the arrival of Zen and Nvidia launches its next-generation GPU architecture later in 2016.
Courtesy-Fud
Qualcomm Jumps Into VR
Qualcomm has thrown its hat into the virtual reality (VR) ring with the launch of the Snapdragon VR SDK for Snapdragon-based smartphones and VR headsets.
The SDK gives developers access to advanced VR features, according to Qualcomm, allowing them to simplify development and attain improved performance and power efficiency with Qualcomm’s Snapdragon 820 processor, found in Android smartphones such as the Galaxy S7 and tipped to feature in upcoming VR headsets.
In terms of features, the development kit offers tools such as digital signal processing (DSP) sensor fusion, which allows devs to use the “full breadth” of technologies built into the Snapdragon 820 chip to create more responsive and immersive experiences.
It will help developers combine high-frequency inertial data from gyroscopes and accelerometers, and there is what the company calls "predictive head position processing" based on its Hexagon DSP. Qualcomm's Symphony System Manager, meanwhile, gives easier access to power and performance management for more stable frame rates in VR applications running on less powerful devices.
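Qualcomm has not published the internals of that sensor fusion, but the general idea is well established: blend the gyroscope's smooth but drifting rate signal with the accelerometer's noisy but gravity-referenced angle, then extrapolate slightly ahead to hide motion-to-photon latency. A minimal Python sketch of such a complementary filter follows; the function names, the blend factor and the 20ms lookahead are illustrative assumptions of ours, not part of the Snapdragon VR SDK.

import math

def fuse_orientation(pitch_prev, gyro_rate, accel_x, accel_y, accel_z, dt, alpha=0.98):
    """Complementary-filter fusion of gyroscope and accelerometer data.

    The gyroscope is smooth at high frequency but drifts over time; the
    accelerometer gives an absolute, gravity-referenced pitch but is noisy.
    Blending the two yields a stable head-orientation estimate.
    """
    pitch_gyro = pitch_prev + gyro_rate * dt                      # integrate gyro rate
    pitch_accel = math.atan2(-accel_x, math.sqrt(accel_y ** 2 + accel_z ** 2))
    return alpha * pitch_gyro + (1 - alpha) * pitch_accel         # weighted blend

def predict_pitch(pitch, gyro_rate, lookahead_s=0.020):
    """Naive 'predictive head position': extrapolate along the current
    angular rate to cover an assumed 20ms of rendering latency."""
    return pitch + gyro_rate * lookahead_s

# Example: one fusion step at 1kHz with the device lying flat.
pitch = fuse_orientation(0.0, gyro_rate=0.5, accel_x=0.0, accel_y=0.0, accel_z=9.81, dt=0.001)

A real headset runs this per axis at the sensor rate and tunes the prediction window to the display's actual motion-to-photon latency, which is where a dedicated DSP such as the Hexagon earns its keep.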
Fast motion-to-photon will offer single-buffer rendering to reduce latency by up to 50 percent, while stereoscopic rendering with lens correction offers support for 3D binocular vision with color correction and barrel distortion for improved visual quality of graphics and video, enhancing the overall VR experience.
Rounding off the features is VR layering, which improves overlays in a virtual world to reduce distortion.
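The lens correction mentioned above is, generically, a radial pre-warp: the renderer distorts each eye's image with a polynomial in the distance from the lens centre so that the headset lens's own distortion cancels it out, and colour correction typically applies slightly different coefficients per colour channel to counter chromatic aberration. The Python sketch below shows the idea only; it is not Qualcomm's implementation, and the k1 and k2 coefficients are placeholders that would normally come from lens calibration.

def barrel_distort(u, v, k1=0.22, k2=0.24):
    """Radial pre-warp of a normalised image coordinate (u, v) measured
    from the lens centre; k1 and k2 are placeholder calibration values."""
    r2 = u * u + v * v                       # squared radial distance
    scale = 1.0 + k1 * r2 + k2 * r2 * r2     # distortion polynomial
    return u * scale, v * scale

# Each eye is rendered separately (stereoscopic) and pre-warped before scan-out.
left_eye = [barrel_distort(u, v) for (u, v) in [(-0.5, 0.0), (0.3, 0.4)]]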
David Durnil, senior director of engineering at Qualcomm, said: “We’re providing advanced tools and technologies to help developers significantly improve the virtual reality experience for applications like games, 360 degree VR videos and a variety of interactive education and entertainment applications.
“VR represents a new paradigm for how we interact with the world, and we’re excited to help mobile VR developers more efficiently deliver compelling and high-quality experiences on upcoming Snapdragon 820 VR-capable Android smartphones and headsets.”
The Snapdragon VR SDK will be available to developers in the second quarter through the Qualcomm Developer Network.
The launch of Qualcomm’s VR SDK comes just moments after AMD also entered the VR arena with the launch of the Sulon Q, a VR-ready wearable Windows 10 PC.
Courtesy-TheInq
GM Buys Cruise Automation
General Motors has announced the acquisition of Cruise Automation for its deep software talent and rapid development capability, a move designed to further accelerate GM's development of autonomous vehicle technology.
Over the past two months, GM has entered into a $500 million alliance with ride-sharing company Lyft; formed Maven — its personal mobility brand for car-sharing fleets in many U.S. cities — and established a separate unit for autonomous vehicle development.
“This acquisition announcement clearly shows that GM is serious about developing the technology and controlling its own path to self-driving and driverless vehicles,” said Egil Juliussen, research director for IHS Automotive.
While GM did not disclose the financial details of the Cruise acquisition, reports estimated the purchase to be in the $1 billion range.
Founded in 2013, Cruise sells an aftermarket product that is positioned as a highway autopilot, according to IHS Automotive.
Vehicles using Cruise's software cannot automatically change lanes, but the technology does work at low speed and highway speed, meaning it's classified between Level 2 and Level 3 in the National Highway Traffic Safety Administration's levels of autonomous driving.
The NHTSA’s Level 3 includes limited self-driving automation and allows a driver to cede full control of all safety-critical functions under certain traffic or environmental conditions; Level 4 indicates a fully autonomous vehicle.
Cruise’s software was initially offered by Audi in its A4 and S4 vehicles as a $10,000 option that required installation work by Cruise. The product consisted of a sensor unit on top of the car and a computer in the trunk.
GM’s purchase of Cruise is likely to spur other carmakers “to react and determine what their strategy should be,” Juliussen said.
Other carmakers are likely to seek to become partners with Google and license Google’s self-driving and driverless software technology. Multiple manufacturers are likely to opt for a Google partnership, IHS said.
Source- http://www.thegurureview.net/aroundnet-category/gm-announces-acquisition-of-cruise-automation.html
Are Quantum Computers On The Horizon?
Researchers at the Massachusetts Institute of Technology (MIT) and Austria's University of Innsbruck claim to have put together a working quantum computer capable of solving a simple mathematical problem.
The architecture they have devised ought to be relatively easy to scale, and could therefore form the basis of workable quantum computers in the future – with a bit of “engineering effort” and “an enormous amount of money”, according to Isaac Chuang, professor of physics, electrical engineering and computer science at MIT.
Chuang’s team has put together a prototype comprising the first five quantum bits (or qubits) of a quantum computer. This is being tested on mathematical factoring problems, which could have implications for applications that use factoring as the basis for encryption to keep information, including credit card details, secure.
The proof-of-concept has been applied only to the number 15, but the researchers claim that this is the “first scalable implementation” of quantum computing to solve Shor’s algorithm, a quantum algorithm that can quickly calculate the prime factors of large numbers.
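For context, quantum hardware is only needed for one step of Shor's algorithm: finding the period of a^x mod N. Turning that period into the factors is ordinary classical arithmetic. The Python sketch below brute-forces the period for N = 15 purely to illustrate the reduction; the brute-force loop is exactly the part that becomes exponentially expensive for large numbers and that the five-qubit machine replaces.

from math import gcd
from random import randrange

def shor_classical_sketch(N=15):
    """Classical outline of Shor's factoring reduction. Only the
    period-finding loop would run on a quantum computer."""
    while True:
        a = randrange(2, N)
        d = gcd(a, N)
        if d > 1:                       # lucky guess already shares a factor
            return d, N // d
        r = 1                           # find the period r of a^x mod N
        while pow(a, r, N) != 1:        # (the quantum step, done naively here)
            r += 1
        if r % 2 == 0 and pow(a, r // 2, N) != N - 1:
            p, q = gcd(pow(a, r // 2) - 1, N), gcd(pow(a, r // 2) + 1, N)
            if p > 1 and p * q == N:
                return p, q

print(shor_classical_sketch(15))        # (3, 5) or (5, 3)

Run on 15, it returns 3 and 5, the same answer the prototype produces.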
“The team was able to keep the quantum system stable by holding the atoms in an ion trap, where they removed an electron from each atom, thereby charging it. They then held each atom in place with an electric field,” explained MIT.
Chuang added: “That way, we know exactly where that atom is in space. Then we do that with another atom, a few microns away – [a distance] about 100th the width of a human hair.
“By having a number of these atoms together, they can still interact with each other because they’re charged. That interaction lets us perform logic gates, which allow us to realise the primitives of the Shor factoring algorithm. The gates we perform can work on any of these kinds of atoms, no matter how large we make the system.”
Chuang is a pioneer in the field of quantum computing. He designed a quantum computer in 2001 based on one molecule that could be held in ‘superposition’ and manipulated with nuclear magnetic resonance to factor the number 15.
The results represented the first experimental realisation of Shor’s algorithm. But the system wasn’t scalable as it became more difficult to control as more atoms were added.
However, the architecture that Chuang and his team have put together is, he believes, highly scalable and will enable the team to build quantum computing devices capable of factoring much bigger numbers.
“It might still cost an enormous amount of money to build, [and] you won’t be building a quantum computer and putting it on your desktop anytime soon, but now it’s much more an engineering effort and not a basic physics question,” said Chuang.
In other quantum computing news this week, the UK government has promised £200m to support engineering and physical sciences PhD students and fuel UK research into quantum technologies, although most of the cash will be spent on Doctoral Training Partnerships rather than trying to build workable quantum computing prototypes.
Courtesy-TheInq
Triada Trojan Aims For Android Devices
Kaspersky has found another scary trojan to wave under our noses and make us consider getting off the internet.
This one is called Triada and it targets Android devices with Windows-style malware swagger. Anyone running Android 4.4.4 and earlier is in trouble, according to Kaspersky, as they face an opponent created by “very professional cyber criminals” that can allow for in-app purchase theft and all the problems that come with privilege escalation.
And guess what? Android users dangle themselves in the way of the Triada threat when they download things from untrusted sources. Does no one listen to anything these days? Does it even matter? Kaspersky said in a blog post that such apps can "sometimes" make their way onto the official Android store.
There is something different about this attack. Kaspersky reports on a lot of these things, but Triada exploits Zygote, and that is a first.
“A distinguishing feature of this malware is the use of Zygote, the parent of the application process on an Android device that contains system libraries and frameworks used by every application installed on the device. In other words, it’s a demon whose purpose is to launch Android applications,” Kaspersky explained.
“This is the first time technology like this has been seen in the wild. Prior to this, a trojan using Zygote was known only as a proof-of-concept. The stealth capabilities of this malware are very advanced.
“After getting into the user’s device Triada implements in nearly every working process and continues to exist in the short-term memory. This makes it almost impossible to detect and delete using anti-malware solutions.”
The security firm added that the complexity of Triada’s functionality proves that professional cyber criminals with a deep understanding of the targeted mobile platform are behind the creation of this malware.
Kaspersky reckons that it is nigh on impossible to rid a device of the malware, and suggested that you might as well nuke your phone and start again.
Courtesy-TheInq
Intel Putting RealSense Into VR
Intel is adapting its RealSense depth camera into an augmented reality headset design which it might license to other manufacturers.
The plan is not official yet but appears to have been leaked to the Wall Street Journal. Achin Bhowmik, who oversees RealSense as vice president and general manager of Intel’s perceptual computing group, declined to discuss unannounced development efforts.
But he said Intel has a tradition of creating prototypes for products like laptop computers to help persuade customers to use its components. "We have to build the entire experience ourselves before we can convince the ecosystem," Bhowmik said.
Intel already appeared to be working on an augmented-reality headset when it teamed up with IonVR on a design that could work with a variety of operating systems, including Android and iOS. Naturally, it had a front-facing RealSense camera.
The RealSense depth camera has been in development for several years and was shown as viable product technology at the Consumer Electronics Show in 2014. Since then, nothing much has happened, and Microsoft's Kinect sensor technology for use with Windows Hello in the Surface Pro 4 and Surface Book has knocked it aside.
Intel's biggest issue is that it is talking about making a consumer product, something it has never got the hang of.
RealSense technology is really good at translating real-world objects into virtual space, in fact a lot better than the HoloLens, because it can scan the user's hands and turn them into virtual objects that can manipulate other virtual objects.
Courtesy-Fud
Samsung Brings 15TB SSD To Market
Samsung has now officially announced and started to ship its new Samsung PM1633a line of solid state drives for enterprise storage systems, which includes the highest-capacity SSD ever made by Samsung, the 15.36TB PM1633a model.
Revealed back during the 2015 Flash Memory Summit in August last year, the now-available Samsung PM1633a enterprise SSD series is based on a standard 2.5-inch form factor and features a 12Gbps Serial Attached SCSI (SAS) interface. It also uses Samsung's new controller as well as Samsung's own 3rd-generation 256Gb 48-layer TLC V-NAND.
As noted, the Samsung PM1633a lineup is based on Samsung's 256Gb V-NAND flash chips. The 256Gb dies are stacked 16 high to form a single 512GB package, and a total of 32 such NAND packages adds up to the 15.36TB model. According to Samsung, the 3rd-generation 256Gb V-NAND will provide both significant performance and reliability improvements compared to the PM1633 drive, which used 2nd-generation 32-layer 128Gb V-NAND flash.
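As a quick back-of-the-envelope check of that arithmetic (decimal units), the raw flash total comes out slightly above the marketed 15.36TB; the difference is presumably set aside for over-provisioning and drive management. The Python below is only a worked example of the numbers quoted above.

die_gbit     = 256                  # one 3rd-generation V-NAND die, in gigabits
dies_per_pkg = 16                   # dies stacked per NAND package
packages     = 32                   # NAND packages on the drive

pkg_gbyte = die_gbit * dies_per_pkg / 8      # 512.0 GB per package
raw_tbyte = pkg_gbyte * packages / 1000      # 16.384 TB of raw flash on board
print(pkg_gbyte, raw_tbyte)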
The controller has also been upgraded to concurrently access large amounts of high-density NAND flash and the PM1633a 15.36TB model comes with no less than 16GB of cache.
When it comes to performance, the Samsung PM1633a provides sequential read and write performance of up to 1,200MB/s, while random 4K performance is set at up to 200,000 IOPS for reads and up to 32,000 IOPS for writes. The new Samsung PM1633a enterprise SSD also offers a high reliability rating of one DWPD (drive write per day), meaning the full 15.36TB can be written to it every day without failure, which is quite important in the enterprise market.
While the 15.36TB model of the Samsung PM1633a is already shipping to select enterprise customers, Samsung is also promising a wide range of capacities, including 480GB, 960GB, 1.92TB, 3.84TB and 7.68TB. According to Samsung, enterprise managers can now fit twice as many drives in a standard 19-inch 2U rack compared to a 3.5-inch storage drive.
Unfortunately, Samsung did not reveal any details regarding the price but we doubt that such high capacity and performance will have a low price tag.
Courtesy-Fud
IBM Goes After Groupon
IBM has filed suit against online deals marketplace Groupon for infringing four of its patents, including two that emerged from Prodigy, the online service launched by IBM and partners ahead of the World Wide Web.
Groupon has built its business model on the use of IBM’s patents, according to the complaint filed Wednesday in the federal court for the District of Delaware. “Despite IBM’s repeated attempts to negotiate, Groupon refuses to take a license, but continues to use IBM’s property,” according to the computing giant, which is asking the court to order Groupon to halt further infringement and pay damages.
IBM alleges that websites under Groupon’s control and its mobile applications use the technology claimed by the patents-in-suit for online local commerce marketplaces to connect merchants to consumers by offering goods and services at a discount.
About a year ago, IBM filed a similar lawsuit around the same patents against online travel company Priceline and three subsidiaries.
To develop the Prodigy online service that IBM launched with partners in the 1980s, the inventors of U.S. patents 5,796,967 and 7,072,849 developed new methods for presenting applications and advertisements in an interactive service that would take advantage of the computing power of each user’s PC and reduce demand on host servers, such as those used by Prodigy, IBM said in its complaint against Groupon.
“The inventors recognized that if applications were structured to be comprised of ‘objects’ of data and program code capable of being processed by a user’s PC, the Prodigy system would be more efficient than conventional systems,” it added.
Groupon is also accused of infringing U.S. Patent No. 5,961,601, which was developed to find a better way of preserving state information in Internet communications, such as between an online merchant and a customer, according to IBM. Online merchants can use the state information to keep track of a client's product and service selections while the client is shopping and then use that information when the client decides to make a purchase, something that stateless Internet communications protocols like HTTP cannot offer, it added.
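IBM's complaint does not spell out the mechanism, but the gap the patent addresses is easy to illustrate: HTTP itself carries no memory from one request to the next, so a merchant hands the browser a token that ties each request back to a server-side record such as a shopping cart. The Python sketch below shows a generic cookie-based session, not IBM's patented method; the names are purely illustrative.

import uuid
from http import cookies

carts = {}          # server-side store: session id -> shopping cart

def handle_request(cookie_header, add_item=None):
    """Generic cookie-based session state; HTTP stays stateless and only
    the session token travels with each request."""
    jar = cookies.SimpleCookie(cookie_header or "")
    session_id = jar["session"].value if "session" in jar else str(uuid.uuid4())
    cart = carts.setdefault(session_id, [])
    if add_item:
        cart.append(add_item)
    return "session=" + session_id, cart     # value to send back via Set-Cookie

cookie, cart = handle_request(None, add_item="discounted spa day")
cookie, cart = handle_request(cookie, add_item="restaurant deal")
print(cart)                                  # ['discounted spa day', 'restaurant deal']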
Source- http://www.thegurureview.net/aroundnet-category/ibm-files-patent-infringement-lawsuit-against-groupon.html
Will Intel Release Its 10nm Processors By 2017?
Intel has said that a job advert which implied that it would not be using the 10nm process for two years was inaccurate and confirmed that it is on track for a 2017 release.
The advert, which was spotted by the Motley Fool and has since been taken down, said the company's 10-nanometer chip manufacturing technology would begin mass production "approximately two years" from the posting date.
Intel has said that the advert was wrong and confirmed that its “first 10-nanometer product is planned for the second half of 2017.”
Intel is not expected to roll out 10-nanometer server chips in 2017. At the moment the plan appears to be to introduce its second-generation 14-nanometer server chip family in early to mid-2017, while Intel ramps the 10-nanometer process to high yields on the PC market so that 10-nanometer server processors will be ready for the first half of 2018.
This follows Intel's traditional pattern of having a few parts released as it experiments with the new tech. This is what happened in the first year of Intel's 14-nanometer availability.
Courtesy-Fud