Qualcomm Jumps Into VR
Qualcomm has thrown its hat into the virtual reality (VR) ring with the launch of the Snapdragon VR SDK for Snapdragon-based smartphones and VR headsets.
The SDK gives developers access to advanced VR features, according to Qualcomm, allowing them to simplify development and attain improved performance and power efficiency with Qualcomm’s Snapdragon 820 processor, found in Android smartphones such as the Galaxy S7 and tipped to feature in upcoming VR headsets.
In terms of features, the development kit offers tools such as digital signal processing (DSP) sensor fusion, which allows devs to use the “full breadth” of technologies built into the Snapdragon 820 chip to create more responsive and immersive experiences.
It will help developers combine high-frequency inertial data from gyroscopes and accelerometers, and adds what the company calls “predictive head position processing” based on its Hexagon DSP. Qualcomm’s Symphony System Manager, meanwhile, gives easier access to power and performance management for more stable frame rates in VR applications, even on less powerful devices.
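Qualcomm has not published the internals of its sensor fusion, but the underlying idea of combining fast, drift-prone gyroscope data with slower, absolute accelerometer data is commonly implemented as a complementary filter. The sketch below is purely illustrative, with made-up constants and sample values; it is not the SDK’s API.

```c
/* Illustrative complementary filter for one rotation axis (pitch).
 * This is NOT Qualcomm's algorithm, just a common way to fuse
 * high-rate gyroscope data with accelerometer data. */
#include <math.h>
#include <stdio.h>

/* prev_pitch: previous fused estimate, degrees
 * gyro_rate : angular velocity from the gyroscope, degrees/second
 * accel_x/y/z: accelerometer reading (any consistent unit)
 * dt        : time since the last sample, seconds */
static double fuse_pitch(double prev_pitch, double gyro_rate,
                         double accel_x, double accel_y, double accel_z,
                         double dt)
{
    const double alpha = 0.98;  /* trust the gyro short-term, the accel long-term */

    /* Integrate the gyro for a fast, low-latency estimate (drifts over time). */
    double gyro_pitch = prev_pitch + gyro_rate * dt;

    /* Derive an absolute, drift-free (but noisy) pitch from gravity. */
    double accel_pitch = atan2(accel_x,
                               sqrt(accel_y * accel_y + accel_z * accel_z))
                         * 180.0 / M_PI;

    /* Blend: the gyro dominates at high frequency, the accelerometer
     * slowly corrects the accumulated drift. */
    return alpha * gyro_pitch + (1.0 - alpha) * accel_pitch;
}

int main(void)
{
    /* One hypothetical sample: rotating at 10 deg/s, gravity mostly on Z. */
    double pitch = fuse_pitch(0.0, 10.0, 0.05, 0.0, 0.98, 0.002);
    printf("fused pitch estimate: %f degrees\n", pitch);
    return 0;
}
```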
Fast motion-to-photon will offer single buffer rendering to reduce latency by up to 50 percent, while stereoscopic rendering with lens correction supports 3D binocular vision with color correction and barrel distortion for improved visual quality of graphics and video, enhancing the overall VR experience.
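The lens correction step works by radially warping the rendered frame so that the headset’s optics cancel the warp out. A minimal sketch of a radial polynomial distortion is below; the coefficients are hypothetical, since real values are calibrated per lens and are not something Qualcomm has published.

```c
/* Illustrative radial distortion of a normalized, lens-centred image
 * coordinate, of the kind used to pre-warp a rendered frame so the
 * headset lens cancels the warp.  k1 and k2 are made up; real values
 * are calibrated for each lens, and their sign determines whether
 * points are pushed outward or pulled inward. */
#include <stdio.h>

static void radial_distort(double x, double y, double *out_x, double *out_y)
{
    const double k1 = 0.22, k2 = 0.24;      /* hypothetical lens coefficients */

    double r2    = x * x + y * y;           /* squared distance from the centre */
    double scale = 1.0 + k1 * r2 + k2 * r2 * r2;

    *out_x = x * scale;
    *out_y = y * scale;
}

int main(void)
{
    double dx, dy;
    radial_distort(0.5, 0.5, &dx, &dy);     /* a point halfway to the corner */
    printf("(0.50, 0.50) -> (%.3f, %.3f)\n", dx, dy);
    return 0;
}
```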
Rounding off the features is VR layering, which improves overlays in a virtual world to reduce distortion.
David Durnil, senior director of engineering at Qualcomm, said: “We’re providing advanced tools and technologies to help developers significantly improve the virtual reality experience for applications like games, 360 degree VR videos and a variety of interactive education and entertainment applications.
“VR represents a new paradigm for how we interact with the world, and we’re excited to help mobile VR developers more efficiently deliver compelling and high-quality experiences on upcoming Snapdragon 820 VR-capable Android smartphones and headsets.”
The Snapdragon VR SDK will be available to developers in the second quarter through the Qualcomm Developer Network.
The launch of Qualcomm’s VR SDK comes just moments after AMD also entered the VR arena with the launch of the Sulon Q, a VR-ready wearable Windows 10 PC.
Courtesy-TheInq
Microsoft Goes Quantum Computing
Software giant Microsoft is focusing a lot of its R&D money on quantum computing.
Peter Lee, corporate vice president of Microsoft Research, said that quantum computing is “stupendously exciting right now.”
Apparently it is Microsoft Research’s largest area of investment and Lee is pretty certain it is on the verge of some major scientific achievements.
“There’s just hope and optimism those scientific achievements will lead to practical outcomes. It’s hard to know when and where,” Lee said.
This is the first we have heard about Redmond’s quantum ambitions for a while. In 2014 the company revealed its “Station Q” group located on the University of California, Santa Barbara, campus, which has focused on quantum computing since its establishment a decade ago.
We sort of assumed that Microsoft would not get much work done on quantum states because, faced with a choice, most cats would rather die in a box than listen to Steve Ballmer. But we guess that with a more cat-friendly CEO it is moving ahead.
Lee said that he has explained quantum computing research to Microsoft chief executive Satya Nadella by comparing it with speech processing. In that field, Microsoft researchers worked “so hard for a decade with no practical improvement,” he said. Then deep learning brought about considerable leaps forward in speech recognition and Microsoft was in on the ground floor.
“With quantum, we’ve made just gigantic advancements making semiconductor interfacing, allowing semiconductor materials to operate as though they were superconducting. What that means is the possibility of semiconductors that can operate at extremely high clock rates with very, very little or no heat dissipation. It’s just really spectacular.”
Courtesy-Fud
Intel Putting RealSense Into VR
Intel is adapting its RealSense depth camera into an augmented reality headset design which it might license to other manufacturers.
The plan is not official yet but appears to have been leaked to the Wall Street Journal. Achin Bhowmik, who oversees RealSense as vice president and general manager of Intel’s perceptual computing group, declined to discuss unannounced development efforts.
But he said Intel has a tradition of creating prototypes for products like laptop computers to help persuade customers to use its components. “We have to build the entire experience ourselves before we can convince the ecosystem,” Bhowmik said.
Intel already appeared to be working on augmented reality when it teamed up with IonVR to work on a headset that could work with a variety of operating systems, including Android and iOS. Naturally, it had a front-facing RealSense camera.
The RealSense depth camera has been in development for several years and was shown as a viable product technology at the Consumer Electronics Show in 2014. Since then, little has happened, and Microsoft’s Kinect sensor technology, used for Windows Hello in the Surface Pro 4 and Surface Book, has knocked it aside.
Intel’s biggest issue is that it is talking about making a consumer product, which is something it has never really got the hang of.
RealSense technology is really good at translating real-world objects into virtual space. In fact, it is arguably better than HoloLens in this respect, because it can scan the user’s hands and translate them into virtual objects that can manipulate other virtual objects.
Courtesy-Fud
The Linux Foundation Goes Zephyr
The Linux Foundation has launched its Zephyr Project as part of a cunning plan to create an open source, small footprint, modular, scalable, connected, real-time OS for IoT devices.
While there have been cut-down Linux implementations before, the growing number of smart, connected devices has made something a little more specialized more important.
Zephyr is all about minimizing the power, space, and cost budgets of IoT hardware.
For example, even a cut-down Linux needs around 200KB of RAM and 1MB of flash, which is too much for IoT end points that will often be controlled by tiny microcontrollers.
Zephyr has a small-footprint “microkernel” and an even tinier “nanokernel.” This allows it to be CPU architecture independent and to run in as little as 10KB of memory while remaining scalable.
It can still support a broad range of wireless and wired technologies and, of course, is entirely open saucy, released under the Apache v2.0 License.
It works on Bluetooth, Bluetooth Low Energy, and IEEE 802.15.4 (6LoWPAN) at the moment and supports x86, ARM, and ARC architectures.
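For a sense of what targeting Zephyr looks like, here is a minimal application sketch in C. It follows the pattern of Zephyr’s own hello-world samples, but header paths and kernel APIs have shifted between releases, so treat the details as assumptions rather than a definitive example.

```c
/* Minimal Zephyr-style application sketch.  Built with Zephyr's own
 * build system (a prj.conf plus this main.c); exact header locations
 * differ between Zephyr releases, so this is illustrative only. */
#include <zephyr.h>        /* core kernel definitions */
#include <misc/printk.h>   /* printk(); later releases moved this header */

void main(void)
{
    /* On a nanokernel-class configuration, this application plus the
     * kernel is intended to fit within the tens-of-kilobytes budget
     * the project talks about. */
    printk("Hello from Zephyr on an IoT end point\n");
}
```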
Courtesy-Fud
U.S. Wants To Help Supercomputer Makers
Five of the top 12 high performance computing systems in the world are owned by U.S. national labs. But they are beyond reach, financially and technically, for many in the computing industry, even larger firms.
That’s according to U.S. Department of Energy (DOE) officials, who run the national labs. A new program aims to connect manufacturers with supercomputers and the expertise to use them.
This program provides $3 million, initially, for 10 industry projects, the DOE has announced. Whether the program extends into future fiscal years may well depend on Congress.
The projects are all designed to improve efficiency, product development and energy use.
For instance, Procter & Gamble will get help to reduce the paper pulp in products by 20%, “which could result in significant cost and energy savings” in this energy-intensive industry, according to the project description.
Another firm, ZoomEssence, which produces “powder ingredients that capture all the key sensory components of a liquid,” will work to optimize the design of a new drying method using HPC simulations, according to the award description.
Other projects in the initial implementation of what is being called HPC4Mfg (HPC for Manufacturing) include an effort to help GlobalFoundries optimize transistor design.
In another, the Ohio Supercomputer Center and the Edison Welding Institute will develop a welding simulation tool.
The national labs not only have the hardware; “more importantly the labs have deep expertise in using HPC to help solve complex problems,” said Donna Crawford, the associate director of computation at Lawrence Livermore National Laboratory, in a conference call. They have the applications as well, she said.
HPC can be used to design and prototype products virtually that otherwise might require physical prototypes. These systems can run simulations and visualizations to discover, for instance, new energy-efficient manufacturing methods.
Source-http://www.thegurureview.net/computing-category/u-s-wants-to-help-supercomputer-makers.html
Is Microsoft A Risk?
Hewlett Packard Enterprise (HPE) has shed light on what it believes to be the biggest risks facing enterprises, and included on that list is Microsoft.
We ain’t surprised, but it is quite a shocking and naked fact when you consider it. The naming and resulting shaming happens in the HPE Cyber Risk Report 2016, which HPE said “identifies the top security threats plaguing enterprises”.
Enterprises, it seems, have myriad problems, of which Microsoft is just one.
“In 2015, we saw attackers infiltrate networks at an alarming rate, leading to some of the largest data breaches to date, but now is not the time to take the foot off the gas and put the enterprise on lockdown,” said Sue Barsamian, senior vice president and general manager for security products at HPE.
“We must learn from these incidents, understand and monitor the risk environment, and build security into the fabric of the organisation to better mitigate known and unknown threats, which will enable companies to fearlessly innovate and accelerate business growth.”
Microsoft earned its place in the enterprise nightmare probably because of its ubiquity. Applications, malware and vulnerabilities are a real problem, and it is Windows that provides the platform for this havoc.
“Software vulnerability exploitation continues to be a primary vector for attack, with mobile exploits gaining traction. Similar to 2014, the top 10 vulnerabilities exploited in 2015 were more than one-year-old, with 68 percent being three years old or more,” explained the report.
“In 2015, Microsoft Windows represented the most targeted software platform, with 42 percent of the top 20 discovered exploits directed at Microsoft platforms and applications.”
It is not all bad news for Redmond, as the Google-operated Android is also put forward as a professional pain in the butt. So is iOS, before Apple users get any ideas.
“Malware has evolved from being simply disruptive to a revenue-generating activity for attackers. While the overall number of newly discovered malware samples declined 3.6 percent year over year, the attack targets shifted notably in line with evolving enterprise trends and focused heavily on monetisation,” added the firm.
“As the number of connected mobile devices expands, malware is diversifying to target the most popular mobile operating platforms. The number of Android threats, malware and potentially unwanted applications have grown to more than 10,000 new threats discovered daily, reaching a total year-over-year increase of 153 percent.
“Apple iOS represented the greatest growth rate with a malware sample increase of more than 230 percent.”
Courtesy-TheInq
Microsoft Goes Underwater
Technology giants are finding some of the strangest places for data centers these days.
Facebook, for example, built a data center in Lulea in Sweden because the icy cold temperatures there would help cut the energy required for cooling. A proposed Facebook data center in Clonee, Ireland, will rely heavily on locally available wind energy. Google’s data center in Hamina in Finland uses sea water from the Bay of Finland for cooling.
Now, Microsoft is looking at locating data centers under the sea.
The company is testing underwater data centers with an eye to reducing data latency for the many users who live close to the sea and also to enable rapid deployment of a data center.
Microsoft, which designed, built, and deployed its own subsea data center in the ocean in the space of about a year, started working on the project in late 2014, a year after Microsoft employee Sean James, who served on a U.S. Navy submarine, submitted a paper on the concept.
A prototype vessel, named the Leona Philpot after an Xbox game character, operated on the seafloor about 1 kilometer from the Pacific coast of the U.S. from August to November 2015, according to a Microsoft page on the project.
The subsea data center experiment, called Project Natick after a town in Massachusetts, is in the research stage and Microsoft warns it is “still early days” to evaluate whether the concept could be adopted by the company and other cloud service providers.
“Project Natick reflects Microsoft’s ongoing quest for cloud datacenter solutions that offer rapid provisioning, lower costs, high responsiveness, and are more environmentally sustainable,” the company said.
Using undersea data centers helps because they can serve the 50 percent of people who live within 200 kilometers of the ocean. Microsoft said in an FAQ that deployment in deep water offers “ready access to cooling, renewable power sources, and a controlled environment.” Moreover, a data center can be deployed from start to finish in 90 days.
Courtesy- http://www.thegurureview.net/aroundnet-category/microsoft-goes-deep-with-underwater-data-center.html
iOS Developers Warned About Taking Shortcuts
Slapdash developers have been advised not to use the open source JSPatch method of updating their wares because it is as vulnerable as a soft boiled egg, for various reasons.
It’s FireEye that is giving JSPatch the stink eye and providing the warning that it has rendered over 1,000 applications open to copy and paste theft of photos and other information. And it doesn’t end there.
FireEye’s report said that Remote Hot Patching may sound like a good idea at the time, but it really isn’t. It is so widely used that it has opened up a hole in Apple users’ security spanning 1,220 iOS applications. A better option, according to the security firm, is to stick with the Apple method, which should provide adequate and timely protection.
“Within the realm of Apple-provided technologies, the way to remediate this situation is to rebuild the application with updated code to fix the bug and submit the newly built app to the App Store for approval,” said FireEye.
“While the review process for updated apps often takes less time than the initial submission review, the process can still be time-consuming and unpredictable, and can cause loss of business if app fixes are not delivered in a timely and controlled manner.
“However, if the original app is embedded with the JSPatch engine, its behaviour can be changed according to the JavaScript code loaded at runtime. This JavaScript file is remotely controlled by the app developer. It is delivered to the app through network communication.”
Let’s not all make this JSPatch’s problem, because presumably it’s developers who are lacking.
FireEye spoke up for the open source security gear while looking down its nose at hackers. “JSPatch is a boon to iOS developers. In the right hands, it can be used to quickly and effectively deploy patches and code updates. But in a non-utopian world like ours, we need to assume that bad actors will leverage this technology for unintended purposes,” the firm said.
“Specifically, if an attacker is able to tamper with the content of a JavaScript file that is eventually loaded by the app, a range of attacks can be successfully performed against an App Store application.”
Courtesy-TheInq
Is Facebook Going Video?
Facebook is contemplating the development of a dedicated service or page where users will be able to watch videos and not be bothered by other content.
The social network continues to see surging interest in video. During one day last quarter, its users watched a combined 100 million hours of video. Roughly 500 million users watch at least some video each day.
That’s a lot of video and a lot of viewers, and Facebook wants to capitalize on it.
“We are exploring a dedicated place on Facebook for when they just want to watch videos,” CEO Mark Zuckerberg said Wednesday during a conference call to discuss Facebook’s quarterly financial results.
But he was tight-lipped on how the video might actually be presented.
Asked if a stand-alone video app is in the cards, he mentioned the success of Messenger and a Facebook app for managing Pages. “I do think there are additional opportunities for this and we’ll continue looking at them,” he said.
Facebook wants to encourage more video viewing because it keeps users on the site longer, helping it to sell more ads.
“Marketers also really love video and it’s a compelling way to reach consumers,” COO Sheryl Sandberg said during the call.
Zuckerberg has been watching the growth of video for some time. At a town hall meeting in November 2014, he predicted, “In five years, most of [Facebook] will be video.”
And it’s likely that most of that video will be consumed over mobile networks.
Among Facebook’s heaviest users — the billion people who access it on a daily basis — 90 percent use a mobile device, either solely or in addition to their PC.
Its financial results for the fourth quarter were strong. Revenue was $5.8 billion, up 52 percent from the same period in 2014, while net profit more than doubled to $1.6 billion.
Source-http://www.thegurureview.net/aroundnet-category/facebook-exploring-a-dedicated-video-service.html
Samsung And TSMC Battle It Out
Samsung and TSMC are starting to slug it out over Gen.3 14 and 16-nano FinFET system semiconductor processes, but the cost could mean that smartphone makers shy away from the technology in the short term.
It is starting to look like the sales teams for the pair are each trying to show that they can use the technology to deliver the biggest cuts in electricity consumption and production costs.
In its annual results for 2015, TSMC announced that it plans to begin mass production of chips on its 16-nano FinFET Compact (FFC) process during the first quarter of this year, having finished developing the process at the end of last year. During the announcement TSMC talked up the fact that the 16-nano FFC process focuses on cutting production costs further than before while delivering low power consumption.
TSMC is apparently ready for mass production on the 16-nano FFC process during the first half of this year and has secured Huawei’s affiliate HiSilicon as its first customer.
HiSilicon’s Kirin 950, used in Huawei’s premium smartphone the Mate 8, is produced on TSMC’s 16-nano FF process, while the A9 chip used in Apple’s iPhone 6S series is mass-produced on the 16-nano FinFET Plus (FF+) process announced in early 2015. By adding the FFC process, TSMC now has three 16-nano processes in action.
Samsung is not far behind: it has mass-produced its Gen.2 14-nano FinFET process, called LPP (Low Power Plus), which has 15 per cent lower electricity consumption than the Gen.1 14-nano process called LPE (Low Power Early).
Samsung Electronics’ 14-nano LPP process is found in the Exynos 8 OCTA series used in the Galaxy S7 and in Qualcomm’s Snapdragon 820. But Samsung Electronics is also preparing a Gen.3 14-nano FinFET process.
Vice-President Bae Young-chang of Samsung’s LSI Business Department’s Strategy Marketing Team said it will use a process similar to the Gen.2 14-nano process.
Both Samsung and TSMC might have a few problems. It is not clear what the yields of these processes are and this might increase the production costs.
Even if Samsung Electronics and TSMC finish developing their 10-nano processes at the end of this year and enter mass production next year, they will still have to upgrade their current 14 and 16-nano processes to make them more economical.
Even if the 10-nano process is commercialized, there will still be many fabless businesses that use 14 and 16-nano processes because they are cheaper. While we might see a few flagship phones using the higher-priced chips, we might not see 10-nano in the majority of phones for years.
Courtesy-Fud