Ubuntu Cross-Platform Delayed
Ubuntu will not offer cross-platform apps as soon as it had hoped.
Canonical had raised hopes that its plan for Ubuntu to span PCs and mobile devices would be realised with the upcoming Ubuntu 14.04 release, providing a write-once, run-on-many template similar to that planned by Google for its Chrome OS and Android app convergence.
On paper this is already possible, and the infrastructure is in place on the smartphone and tablet versions of Ubuntu through the new Unity 8 user interface.
However, Canonical has decided to postpone the rollout of Unity 8 for desktop machines, citing security concerns, and it will now not appear alongside the Mir display server this coming autumn.
This will apply only to apps in the Ubuntu store, and in the true spirit of open source, anyone choosing to step outside that ecosystem will be able to test the converged Ubuntu before then.
Ubuntu community manager Jono Bacon told Ars Technica, “We don’t plan on shipping apps in the new converged store on the desktop until Unity 8 and Mir lands.
“The reason is that we use app insulation to (a) run apps securely and (b) not require manual reviews (so we can speed up the time to get apps in the store). With our plan to move to Mir, our app insulation doesn’t currently insulate against X apps sniffing events in other X apps. As such, while Ubuntu SDK apps in click packages will run on today’s Unity 7 desktop, we don’t want to make them readily available to users until we ship Mir and have this final security consideration in place.
“Now, if a core-dev or motu wants to manually review an Ubuntu SDK app and ship it in the normal main/universe archives, the security concern is then taken care of with a manual review, but we are not recommending this workflow due to the strain of manual reviews.”
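To see why Canonical is nervous, consider how little isolation X provides: any client connected to the X server can poll the global keyboard state, so one app can quietly watch keys typed into another. A minimal sketch in C, assuming an X session and the Xlib development headers (build with cc sniff.c -lX11):

```c
/* sniff.c — any connected X client can poll the global keyboard
 * state; nothing stops one app from observing keys destined
 * for another, which is the weakness Bacon describes. */
#include <X11/Xlib.h>
#include <stdio.h>
#include <unistd.h>

int main(void) {
    Display *dpy = XOpenDisplay(NULL);       /* connect to the running X server */
    if (!dpy) {
        fprintf(stderr, "cannot open display\n");
        return 1;
    }
    char keys[32];                           /* 256-bit map, one bit per keycode */
    for (int tick = 0; tick < 200; tick++) { /* sample for roughly 10 seconds */
        XQueryKeymap(dpy, keys);
        for (int code = 0; code < 256; code++)
            if (keys[code / 8] & (1 << (code % 8)))
                printf("keycode %d is down\n", code);
        usleep(50000);                       /* poll 20 times a second */
    }
    XCloseDisplay(dpy);
    return 0;
}
```

Mir, like Wayland, delivers input only to the focused client, which is the isolation property Canonical wants in place before opening the converged store on the desktop.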
As well as the security issues outlined above, there are lingering concerns that cross-platform apps don't look quite as good on the desktop as native desktop versions, and the intervening six months will be used to polish the user experience.
Getting the holistic experience right is essential if Ubuntu is to attract OEMs to the converged operating system. Canonical's attempt to crowdfund its own Ubuntu handset fell short of its ambitious $32m target, despite raising $12.8 million, the single largest crowdfunding total to date.
Samsung Joins OpenPower
Samsung has joined Google, Mellanox, Nvidia and other tech companies in IBM's OpenPower Consortium, which is working to give developers access to an expanded and open set of server technologies for improving data centre hardware, using chip designs based on the IBM Power architecture.
Last summer IBM announced the formation of the consortium, following its decision to license the Power architecture. The OpenPower Foundation, the actual entity behind the consortium, opened up the Power architecture technology, including specifications, firmware and software, under a license, with the firmware offered as open source. Originally, OpenPower was the brand of a range of IBM System p servers that used the Power5 CPU. Samsung's products currently use both x86 and ARM-based processors.
The intention of the consortium is to develop advanced servers, networking, storage and GPU-acceleration technology for new products. The four priority technical areas for development are system software, application software, open server development platform and hardware architecture. Along with its announcement of Samsung’s membership, the organization said that Gordon MacKean, Google’s engineering director of the platforms group, will now become chairman of the group. Nvidia has said it will use its graphics processors on Power-based hardware, and Tyan will be releasing a Power-based server, the first one outside IBM.
Are Transparent Semiconductors Next?
Scientists have emerged from their smoke-filled labs with transparent thin-film organic semiconductors that could become the foundation for cheap, high-performance displays. Two university research teams have worked together to produce the world's fastest thin-film organic transistors, proving that this experimental technology has the potential to achieve the performance needed for high-resolution television screens and similar electronic devices.
According to the latest issue of Nature Communications, engineers from the University of Nebraska-Lincoln (UNL) and Stanford University show how they created thin-film organic transistors that could operate more than five times faster than previous examples of this experimental technology.
Research teams led by Zhenan Bao, professor of chemical engineering at Stanford, and Jinsong Huang, assistant professor of mechanical and materials engineering at UNL, used their new process to make organic thin-film transistors with electronic characteristics comparable to those found in expensive, curved-screen television displays based on a form of silicon technology.
At the moment the high-tech method is to drop a special solution, containing carbon-rich molecules and a complementary plastic, onto a spinning glass platter. The spinning action deposits a thin coating of the materials over the platter. The boffins worked out that if they spun the platter faster and coated only a tiny portion of the spinning surface, equivalent in size to a postage stamp, they could pack a denser concentration of the organic molecules into a more regular alignment. The result was a great improvement in carrier mobility, which measures how quickly electrical charges travel through the transistor.
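For reference, the carrier mobility the teams improved is defined as the drift velocity of charge carriers per unit of applied electric field:

```latex
% drift velocity v_d of carriers under an applied field E
v_d = \mu E
\qquad\Longrightarrow\qquad
\mu = \frac{v_d}{E}\ \ \left[\mathrm{cm^2\,V^{-1}\,s^{-1}}\right]
```

So a fivefold improvement in mobility means charges cross the transistor channel roughly five times faster for the same drive voltage, which is what makes faster switching, and hence higher-resolution displays, feasible.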
Red Hat Releases Enterprise Linux 7 Beta
Red Hat has made available a beta of Red Hat Enterprise Linux 7 (RHEL 7) for testers, just weeks after the final release of RHEL 6.5 to customers.
RHEL 7 is aimed at meeting the requirements of future applications as well as delivering scalability and performance to power cloud infrastructure and enterprise data centers.
Available to download now, the RHEL 7 beta introduces a number of enhancements, including better support for Linux Containers, in-place upgrades, XFS as the default file system, improved networking support and improved compatibility with Windows networks.
Inviting customers, partners, and members of the public to download the RHEL 7 beta and provide feedback, Red Hat is promoting the upcoming version as its most ambitious release to date. The code is based on Red Hat’s community developed Fedora 19 distribution of Linux and the upstream Linux 3.10 kernel, the firm said.
“Red Hat Enterprise Linux 7 is designed to provide the underpinning for future application architectures while delivering the flexibility, scalability, and performance needed to deploy across bare metal, virtual machines, and cloud infrastructure,” Senior Product Marketing Manager Kimberly Craven wrote on the Red Hat Enterprise Linux blog.
These improvements address a number of key areas, including virtualisation, management and interoperability.
Linux Containers, for example, were partially supported in RHEL 6.5, but this release enables applications to be created and deployed using Linux Container technology such as the Docker tool. Containers offer operating-system-level virtualisation, which provides isolation between applications without the overhead of virtualising an entire server.
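Under the hood, OS-level virtualisation is built from kernel namespaces and cgroups rather than a hypervisor. A minimal sketch, not Docker's or Red Hat's actual code, of one such primitive: a child process cloned into its own UTS namespace can change its hostname without affecting the host (Linux-only, run as root):

```c
/* ns_demo.c — demonstrates kernel namespaces, the isolation
 * primitive container runtimes build on. The child gets its own
 * UTS namespace, so its hostname change is invisible to the host.
 * Build: cc ns_demo.c -o ns_demo ; run as root. */
#define _GNU_SOURCE
#include <sched.h>
#include <signal.h>
#include <stdio.h>
#include <string.h>
#include <sys/wait.h>
#include <unistd.h>

static char stack[1024 * 1024];              /* child stack; clone takes its top */

static int child(void *arg) {
    sethostname("container", strlen("container")); /* visible only in this namespace */
    char name[64];
    gethostname(name, sizeof(name));
    printf("inside the namespace: hostname = %s\n", name);
    return 0;
}

int main(void) {
    /* CLONE_NEWUTS isolates the hostname; real runtimes add PID,
       mount, network and user namespaces plus cgroups for limits */
    pid_t pid = clone(child, stack + sizeof(stack), CLONE_NEWUTS | SIGCHLD, NULL);
    if (pid == -1) {
        perror("clone (are you root?)");
        return 1;
    }
    waitpid(pid, NULL, 0);
    char name[64];
    gethostname(name, sizeof(name));
    printf("back on the host:     hostname = %s\n", name);
    return 0;
}
```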
Red Hat said it is now supporting an in-place upgrade feature for common server deployment types. This will allow customers to migrate existing RHEL 6.5 systems to RHEL 7 without downtime.
RHEL 7 also makes the switch to XFS as its default file system, supporting file systems of up to 500TB, while ext4 file systems are now supported up to 50TB in size and B-tree file system (btrfs) implementations are available for users to test.
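If a program needs to confirm it is actually sitting on XFS, one quick check, a hypothetical helper rather than anything shipped by Red Hat, is to compare the superblock magic number reported by statfs(2):

```c
/* xfs_check.c — asks the kernel which filesystem a path lives on
 * and compares the reported magic against XFS's ("XFSB" in ASCII).
 * Linux-only. Build: cc xfs_check.c -o xfs_check */
#include <stdio.h>
#include <sys/vfs.h>

#define XFS_SUPER_MAGIC 0x58465342UL         /* ASCII "XFSB" */

int main(int argc, char **argv) {
    const char *path = argc > 1 ? argv[1] : "/";
    struct statfs fs;
    if (statfs(path, &fs) != 0) {
        perror("statfs");
        return 1;
    }
    printf("%s is %s XFS (f_type = 0x%lx)\n", path,
           (unsigned long)fs.f_type == XFS_SUPER_MAGIC ? "on" : "not on",
           (unsigned long)fs.f_type);
    return 0;
}
```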
Interoperability with Windows has also been improved, with Red Hat now including the ability to bridge Windows and Linux infrastructure by integrating RHEL 7 and Samba 4.1 with Microsoft Active Directory domains. Red Hat Enterprise Linux Identity Management can also be deployed in a parallel trust zone alongside Active Directory, the firm said.
On the networking side, RHEL 7 provides support for 40Gbps Ethernet, along with improved channel bonding, TCP performance improvements and support for low-latency socket polling.
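The low-latency socket poll support refers to the kernel's busy-polling mechanism, where a read spins briefly on the NIC queue instead of sleeping until an interrupt arrives. A sketch of how a socket might opt in, assuming a kernel that implements SO_BUSY_POLL (mainline 3.11, or a backport such as RHEL 7 would need to carry):

```c
/* busy_poll.c — opt a socket into kernel busy polling: reads spin
 * on the device queue for a short budget before blocking, trading
 * CPU for latency. Usually requires CAP_NET_ADMIN. */
#include <stdio.h>
#include <sys/socket.h>

#ifndef SO_BUSY_POLL
#define SO_BUSY_POLL 46                      /* value from asm-generic/socket.h */
#endif

int main(void) {
    int fd = socket(AF_INET, SOCK_DGRAM, 0);
    if (fd < 0) {
        perror("socket");
        return 1;
    }
    int usec = 50;                           /* spin up to 50 us before sleeping */
    if (setsockopt(fd, SOL_SOCKET, SO_BUSY_POLL, &usec, sizeof(usec)) != 0)
        perror("setsockopt(SO_BUSY_POLL)");
    else
        printf("busy polling enabled: %d us budget\n", usec);
    return 0;
}
```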
Other enhancements include support for very large scale storage configurations, including enterprise storage arrays, and uniform management tools for networking, storage, file systems, identities and security using the OpenLMI framework.
Google Goes Quantum
When is a blink not a natural blink? For Google the question has such ramifications that it has devoted a supercomputer to solving the puzzle.
Slashgear reports that the internet giant is using its $10 million quantum computer to find out how products like Google Glass can differentiate between a natural blink and a deliberate blink used to trigger functionality.
The supercomputer based at Google’s Quantum Artificial Intelligence Lab is a joint venture with NASA and is being used to refine the algorithms used for new forms of control such as blinking. The supercomputer uses D-Wave chips kept at as near to absolute zero as possible, which makes it somewhat impractical for everyday wear but amazingly fast at solving brainteasers.
A Redditor reported earlier this year that Google Glass is capable of taking pictures in response to blinking; however, the feature is disabled in the software code because the technology has not advanced enough to differentiate between a natural impulse and an intentional request.
It is easy to see the potential of blink control. Imagine being able to capture your life as you live it, exactly the way you see it, without anyone ever having to stop and ask people to say “cheese”.
Google Glass is due for commercial release next year, but for the many beta testers and developers who already have one, this research could lead to an even richer seam of touchless functionality.
If nothing else you can almost guarantee that Q will have one ready for Daniel Craig’s next James Bond outing.
AMD’s Richland Shows Up
Kaveri is coming in a few months, but before it ships AMD will apparently spice up the Richland line-up with a few low-power parts.
CPU World has come across an interesting listing, which points to two new 45W chips, the A8-6500T and the A10-6700T. Both are quad-cores with 4MB of cache. The A8-6500T is clocked at 2.1GHz and can hit 3.1GHz on Turbo, while the A10-6700T's base clock is 2.5GHz and it maxes out at 3.5GHz.
The prices are $108 and $155 for the A8 and A10 respectively, which doesn't sound too bad, although they are still significantly pricier than regular FM2 parts.
Intel Goes AI
Intel has written a check for the Spanish artificial intelligence technology startup Indisys.
The outfit focuses on natural language recognition, and the deal is worth $26 million. It follows Intel's recent acquisition of Omek, an Israeli startup specialising in gesture-based interfaces. Indisys employees have already joined Intel; apparently the deal was signed on May 31 and has since been completed.
Intel would not confirm how it will use the tech: “Indisys has a deep background in computational linguistics, artificial intelligence, cognitive science, and machine learning. We are not disclosing any details about how Intel might use the Indisys technologies at this time.”
AMD’s Kaveri Coming In Q4
AMD really needs to make up its mind and figure out how it interprets its own roadmaps. A few weeks ago the company said desktop Kaveri parts should hit the channel in mid-February 2014. The original plan called for a launch in late 2013, but AMD insists the chip was not delayed.
Now, though, it has told Computerbase.de that the first desktop chips will indeed appear in late 2013 rather than 2014, while mobile chips will be showcased at CES 2014 and will launch in late Q1 or early Q2 2014.
As we reported earlier, the first FM2+ boards are already showing up on the market, but at this point it’s hard to say when Kaveri desktop APUs will actually be available. The most logical explanation is that they will be announced sometime in Q4, with retail availability coming some two months later.
Kaveri is a much bigger deal than Richland, which was basically Trinity done right. Kaveri is based on new Steamroller cores, it packs GCN graphics and it’s a 28nm part. It is expected to deliver a significant IPC boost over Piledriver-based chips, but we don’t have any exact numbers to report.
Nvidia Launching New Cards
We weren’t expecting this and it is just a rumour, but reports are emerging that Nvidia is readying two new cards for the winter season. AMD, of course, is launching new cards four weeks from now, so it is possible that Nvidia is trying to counter them.
The big question is with what?
VideoCardz claims one of the cards is an Ultra, possibly the GTX Titan Ultra, while the second one is a dual-GPU job, the Geforce GTX 790. The Ultra is supposedly GK110-based, but with 2880 unlocked CUDA cores, somewhat more than the 2688 on the Titan; 2880 cores would mean a fully enabled GK110, with all 15 SMX clusters of 192 cores active, whereas the Titan ships with one cluster disabled.
The GTX 790 is said to feature two GK110 GPUs, but Nvidia will probably have to clip their wings to get a reasonable TDP.
We’re not entirely sure this is legit. It is plausible, but that doesn’t make it true. It would be good for Nvidia’s image, especially if the revamped GK110 products manage to steal the performance crown from AMD’s new Radeons. However, with such specs, they would end up quite pricey and Nvidia wouldn’t sell that many of them – most enthusiasts would probably be better off waiting for Maxwell.
ARM & Oracle Optimize Java
ARM’s upcoming ARMv8 architecture will form the basis for several processors that will end up in servers. Now the firm has announced that it will work with Oracle to optimise Java SE for the architecture to squeeze out as much performance as possible.
ARM’s chip licensees are looking to the 64-bit ARMv8 architecture to make a splash in the low-power server market and go up against Intel’s Atom processors. However, unlike Intel, which can make use of software already optimised for x86, ARM and its vendors need to work with software firms to ensure that the new architecture is supported at launch.
Oracle’s Java is a vital piece of software used by enterprise firms to run back-end systems, so poor performance from the Java virtual machine could be a serious problem for ARM and its licensees. To prevent that, ARM said it will work with Oracle to improve overall performance, boot-up time and power efficiency, and to optimise libraries.
Henrik Stahl, VP of Java Product Management at Oracle said, “The long-standing relationship between ARM and Oracle has enabled our mutual technologies to be deployed across a broad spectrum of products and applications.
“By working closely with ARM to enhance the JVM, adding support for 64-bit ARM technology and optimizing other aspects of the Java SE product for the ARM architecture, enterprise and embedded customers can reap the benefits of high-performance, energy-efficient platforms based on ARM technology.”
A number of ARM vendors, including x86 stalwart AMD, are expected to bring out 64-bit ARMv8 processors in 2014, though Applied Micro is thought likely to be first to market with an ARMv8 chip later this year.