Samsung Producing NVMe PCIe SSDs
Samsung Electronics has started mass production of what it claims is the industry’s first Non-Volatile Memory Express (NVMe) PCIe solid state drive (SSD), which has an M.2 form factor for use in PCs and workstations.
Samsung said in an announcement that it is “the first in the industry” to bring NVMe SSDs to OEMs for the PC market.
The SM951-NVMe operates at low power in standby mode and is the most compact of any NVMe SSD out there, according to the firm.
“Our new NVMe SSD will allow for faster, ultra-slim notebook PCs with extended battery use, while accelerating the adoption of NVMe SSDs in the consumer marketplace,” said SVP of memory marketing Jeeho Baek.
“Samsung will continue to stay a critical step ahead of others in the industry in introducing a diversity of next-generation SSDs that contribute to an enhanced user experience through rapid popularisation of ultra-fast, highly energy-efficient, compact SSDs.”
Samsung has added an NVMe version of the SM951 SSD, having made an AHCI-based PCIe 3.0 version available since early January. This, Samsung said, will form an even stronger SSD portfolio.
The new NVMe-based SM951 SSD boasts a sequential data read and write speed of up to 2,260MBps and 1,600MBps respectively, while taking advantage of the firm’s own controller technology.
“These performance figures are the industry’s most advanced, with speeds four and three times faster than those of a typical SATA-based M.2 SSD which usually moves data at up to 540MBps and 500MBps respectively,” Samsung added.
The drive achieves these high speeds by using four PCIe lanes, each carrying 8Gbps of simultaneous data flow. This allows for an aggregate transfer rate of 32Gbps and a maximum throughput of 4GBps, giving the new drive a huge advantage over SATA-based M.2 SSDs, which can only transfer data at up to 600MBps.
When it comes to random read operations, the SM951-NVMe can process 300,000 IOPS, more than twice the 130,000 IOPS of its AHCI-based predecessor, Samsung said, and more than three times the 97,000 IOPS of a SATA-based SSD.
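The quoted figures are simple arithmetic on the lane count and the vendors' IOPS numbers; a quick sanity check, a sketch using only the numbers in the article (the 4GBps figure ignores PCIe encoding overhead):

```python
# Back-of-the-envelope check of the throughput figures quoted above.
LANES = 4
GBPS_PER_LANE = 8                 # PCIe 3.0 raw signalling rate per lane

raw_gbps = LANES * GBPS_PER_LANE  # aggregate link rate in gigabits
raw_gbytes = raw_gbps / 8         # convert bits to bytes

print(raw_gbps)    # 32 (Gbps)
print(raw_gbytes)  # 4.0 (GBps maximum throughput)

# Quoted random-read rates (IOPS)
nvme_iops, ahci_iops, sata_iops = 300_000, 130_000, 97_000
print(round(nvme_iops / ahci_iops, 1))  # 2.3 - "more than twice" the AHCI part
print(round(nvme_iops / sata_iops, 1))  # 3.1 - "more than three times" SATA
```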
“Meeting all M.2 form factor requirements, the drive’s thickness does not exceed 4mm. [It] also weighs less than 7g, which is lighter than two nickels and only a tenth the weight of a 2.5in SSD. Capacities are 512GB, 256GB and 128GB,” Samsung explained.
Samsung said that the company plans to incorporate 3D V-NAND technology into its NVMe SSD line-up, which could see even higher densities and performance.
Earlier this week HP unveiled the HP Z Turbo Drive G2, a storage solution featuring Samsung’s NVMe SSDs to process large datasets.
The HP Z Turbo Drive G2 PCIe SSD is said to deliver four times the performance of a traditional SATA SSD at a similar cost to previous devices. This will allow workstation users to “super-charge” the productivity and creativity of workflows, according to HP.
MediaTek Developing Two SoCs for Tablets
MediaTek is working on two new tablet SoCs and one of them is rumored to be a $5 design.
The MT8735 looks like a tablet version of MediaTek’s smartphone SoCs based on ARM’s Cortex-A53 core. The chip can also handle LTE (FDD and TDD), along with 3G and dual-band WiFi, which means it should end up in affordable data-enabled tablets. There is no word yet on clock speeds or the GPU.
The MT8163 is supposed to be the company’s entry-level tablet part. Priced at around $5, the chip does not appear to feature a modem – it only has WiFi and Bluetooth on board. GPS is still there, but that’s about it.
Once again, details are sketchy, so we don’t know much about performance. However, this is an entry-level part, so we don’t expect miracles. It will have to slug it out with Allwinner’s $5 tablet SoC, which was announced a couple of months ago.
According to a slide published by Mobile Dad, the MT8735 will be available later this month, but we have no timeframe for the MT8163.
But there’s nothing to see here as far as Torvalds is concerned. It’s just another day in the office. And all this in “Back To The Future II” year, as well.
Meanwhile, under the bonnet, the community is already slaving away on Linux 4.1, which is expected to be a far more extensive release, with 100 code changes committed within hours of Torvalds’ announcement of 4.0.
But there is already some discord in the ranks, with concerns that some of the changes in 4.1 will damage the kernel’s x86 compatibility. Let’s let them sort that out amongst themselves.
After all, an anti-troll dispute resolution code was recently added to the Linux kernel in an effort to stop some of the more outspoken trolling that takes place, not least from Torvalds himself, according to some members of the community.
RadioShack Plans To Sell Customer Data
April 22, 2015 by admin
RadioShack plans to keep moving forward with its plan to sell its customer data, despite opposition from a number of states.
The company has asked a bankruptcy court for approval for a second auction of its assets, which includes the consumer data.
The state of Texas, which is leading the action by the states, opposed the sale of personally identifiable information (PII), citing the online and in-store privacy policies of the bankrupt consumer electronics retailer.
The state claimed that it found from a RadioShack deposition that the personal information of 117 million customers could be involved. But it learned later from testimony in court that the number of customer files offered for sale might be reduced to around 67 million.
In the first round of the sale, RadioShack sold about 1,700 stores to hedge fund Standard General, which entered into an agreement to set up 1,435 of these as co-branded stores with wireless operator Sprint. Some other assets were also sold in the auction.
The sale of customer data, including PII, was withdrawn from the previous auction, though RadioShack did not rule out that it could be put up for sale at a later date.
The case could have privacy implications for the tech industry as it could set a precedent, for example, for large Internet companies holding consumer data, if they happen to go bankrupt.
Texas has asked the U.S. Bankruptcy Court for the District of Delaware for a case management order to ensure that in any motion for sale of the PII, RadioShack should be required to provide information on the kind of personal data that is up for sale and the number of customers that will be affected.
On Monday, Texas asked the court that its motion be heard ahead of RadioShack’s motion for approval to auction more assets.
The court had ordered in March the appointment of a consumer privacy ombudsman in connection with the potential sale of the consumer data including PII. RadioShack said in a filing Friday that it intends to continue working with the ombudsman and the states with regard to any potential sale of PII, but did not provide details.
Did AMD Commit Fraud?
AMD must face claims that it committed securities fraud by hiding problems with the bungled 2011 launch of Llano that eventually led to a $100 million write-down, a US court has decided.
According to Techeye, US District Judge Yvonne Gonzales Rogers said the plaintiffs had a plausible case that AMD officials misled them with statements made in the spring of 2011, and the company will have to face a full trial.
The lawsuit was over the Llano chip, which AMD had claimed was “the most impressive processor in history.”
AMD originally said that the product launch would happen in the fourth quarter of 2010, but sales of the Llano were delayed because of problems at the company’s chip manufacturing plant.
The then Chief Financial Officer Thomas Seifert told analysts on an April 2011 conference call that problems with chip production for the Llano were in the past, and that the company would have ample product for a launch in the second quarter.
Press officers for AMD continued to insist that there were no problems with supply, concealing the fact that it was only shipping Llanos to top-tier computer manufacturers because it did not have enough chips.
By the time AMD ramped up Llano shipments in late 2011, no one wanted them any more, leading to an inventory glut.
AMD disclosed in October 2012 that it was writing down $100 million of Llano inventory as not shiftable.
Shares fell nearly 74 percent from a peak of $8.35 in March 2012 to a low of $2.18 in October 2012 when the market learned the extent of the problems with the Llano launch.
Toshiba And SanDisk Launch 3D Flash Chip
Toshiba has announced the world’s first 48-layer Bit Cost Scalable (BiCS) flash memory chip.
The BiCS is a two-bit-per-cell, 128Gb (16GB) flash device with a 3D-stacked cell structure that improves density and significantly reduces the overall size of the chip.
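The quoted capacity is straightforward unit arithmetic; a sketch using only the figures above:

```python
# 128 gigabits at 8 bits per byte gives the 16GB die capacity quoted.
GIGABITS = 128
BITS_PER_BYTE = 8
print(GIGABITS // BITS_PER_BYTE)  # 16 (GB per die)

# Two bits per cell (MLC) means each die needs half as many cells as bits.
cells = (GIGABITS * 10**9) // 2
print(cells)  # 64000000000 - 64 billion cells per die
```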
Toshiba is already using 15nm dies so, despite the layering, the finished product will be competitively thin.
Within 24 hours of the first announcement, SanDisk made one of its own. The two companies share a fabrication plant and usually make such announcements in close succession.
“We are very pleased to announce our second-generation 3D NAND, which is a 48-layer architecture developed with our partner Toshiba,” said Dr Siva Sivaram, executive vice president of memory technology at SanDisk.
“We used our first generation 3D NAND technology as a learning vehicle, enabling us to develop our commercial second-generation 3D NAND, which we believe will deliver compelling storage solutions for our customers.”
Samsung has been working on its own 3D stacked memory for some time and has released a number of iterations. Production began last May, following a 10-year research cycle.
Moving away from the more traditional design process, the BiCS uses a ‘charge trap’ which stops electrons leaking between layers, improving the reliability of the product.
The chips are aimed primarily at the solid state drive market, as the 48-layer stacking process is said to enhance reliability, write speed and read/write endurance. However, the BiCS is said to be adaptable to a number of other uses.
All storage manufacturers are facing a move to 3D because, unless you want your flash drives very long and flat, real estate on chips is getting more expensive per square inch than a bedsit in Soho.
Micron has been talking in terms of 3D NAND since an interview with The INQUIRER in 2013 and, after signing a deal with Intel, has predicted 10TB in a 2mm chip by the end of this year.
Production of the chips will roll out initially from Fab 5 before moving in early 2016 to Fab 2 at the firm’s Yokkaichi Operations plant.
This is in stark contrast to Intel, which mothballed its Fab 42 chip fabrication plant in Chandler, Arizona before it even opened, because demand for the computer chips it was due to produce had fallen so sharply.
The Toshiba and SanDisk BiCS chips are available for sampling from today.
Will Intel Challenge nVidia In The GPU Space?
Intel has released details of its next-generation Xeon Phi processor, and it is starting to look like Intel is gunning for a chunk of Nvidia’s GPU market.
According to a briefing from Avinash Sodani, Knights Landing Chief Architect at Intel, a product update by Hugo Saleh, Marketing Director of Intel’s Technical Computing Group, an interactive technical Q&A and a lab demo of a Knights Landing system running on an Intel reference-design system, Nvidia could be Intel’s target.
Knights Landing is leagues apart from prior Phi products, and more flexible for a wider range of uses. Unlike more specialised processors, Intel describes Knights Landing as taking a “holistic approach” to new breakthrough applications.
Unlike the current-generation Phi design, which operates as a coprocessor, Knights Landing incorporates x86 cores and can directly boot and run standard operating systems and application code without recompilation.
The test system had socketed CPU and memory modules and was running a stock Linux distribution. Modified Atom Silvermont x86 cores form the basis of a Knights Landing ’tile’, the chip’s basic design unit, consisting of dual x86 cores and vector execution units alongside cache memory and intra-tile mesh communication circuitry.
Each multi-chip package includes a processor with 30 or more tiles and eight high-speed memory chips.
Intel said the on-package memory, totaling 16GB, is made by Micron with custom I/O circuitry and might be a variant of Micron’s announced, but not yet shipping Hybrid Memory Cube.
The high-speed memory is similar to the GDDR5 devices used on GPUs like Nvidia’s Tesla.
It looks like Intel saw that Nvidia was making great leaps into the high performance arena with its GPU and thought “I’ll be having some of that.”
The internals of a GPU and Xeon Phi are different, but share common ideas.
Nvidia has a big head start. It has already announced the price and availability of a Titan X development box designed for researchers exploring GPU applications to deep learning. Intel has not done that yet for Knights Landing systems.
But Phi is also a hybrid that includes dozens of full-fledged 64-bit x86 cores. This could make it better at some parallelizable application categories that use vector calculations.
Panasonic Appears To Be On The Hunt
April 8, 2015 by admin
Japanese electronics giant Panasonic Corp said it is gearing up to spend 1 trillion yen ($8.4 billion) on acquisitions over the next four years, bolstered by a stronger profit outlook for its automotive and housing technology businesses.
Chief Executive Kazuhiro Tsuga said at a briefing on Thursday that Panasonic doesn’t have specific acquisition targets in mind for now. But he said the firm will spend around 200 billion yen on M&A in the fiscal year that kicks off in April alone, and pledged to improve on Panasonic’s patchy track record on big deals.
“With strategic investments, if there’s an opportunity to accelerate growth, you need funds. That’s the idea behind the 1 trillion yen figure,” he said. Tsuga has spearheaded a radical restructuring at the Osaka-based company that has made it one of the strongest turnaround stories in Japan’s embattled technology sector.
Tsuga previously told Reuters that the company was interested in M&A deals in the European white goods market, a sector where Panasonic has comparatively low brand recognition.
The firm said on Thursday it’s targeting operating profit of 430 billion yen in the next fiscal year, up nearly 25 percent from the 350 billion yen it expects for the year ending March 31.
Panasonic’s earnings have been bolstered by moving faster than peers like Sony Corp and Sharp Corp to overhaul business models squeezed by competition from cheaper Asian rivals and caught flat-footed in a smartphone race led by Apple Inc and Samsung Electronics. Out has gone reliance on mass consumer goods like TVs and smartphones, and in has come a focus on areas like automotive technology and energy-efficient home appliances.
Tsuga also sought to ease concerns that an expensive acquisition could set back its finances, which took years to recover from the deal agreed in 2008 to buy cross-town rival Sanyo for a sum equal to about $9 billion at the time.
Can MediaTek Bring The Cortex-A72 To Market?
MediaTek became the first chipmaker to publicly demo a SoC based on ARM’s latest Cortex-A72 CPU core, but the company’s upcoming chip still relies on the old 28nm manufacturing process.
We had a chance to see the upcoming MT8173 in action at the Mobile World Congress a couple of weeks ago.
The next step is to bring the new Cortex-A72 core to a new node and into mobiles. This is what MediaTek is planning to do by the end of the year.
Cortex-A72 smartphone parts coming in Q4
It should be noted that MediaTek’s 8000-series parts are designed for tablets, and the MT8173 is no exception. However, the new core will make its way to smartphone SoCs later this year, as part of the MT679x series.
According to Digitimes Research, MediaTek’s upcoming MT679x chips will utilize a combination of Cortex-A53 and Cortex-A57 cores. It is unclear whether MediaTek will use the planar 20nm node or 16nm FinFET for the new part.
By the looks of it, this chip will replace the 32-bit MT6595, which is MediaTek’s most successful high-performance part yet, with a few relatively big design wins, including Alcatel, Meizu, Lenovo and Zopo. The new chip will also supplement, and possibly replace, the recently introduced MT6795, a 64-bit part used in the HTC Desire 826.
More questions than answers
Digitimes also claims the MT679x Cortex-A72 parts may be the first MediaTek products to benefit from AMD technology, but details are scarce. We can’t say whether or not the part will use AMD GPU technology, or some HSA voodoo magic. Earlier this month we learned that MediaTek is working with AMD and the latest report appears to confirm our scoop.
The other big question is the node. The chip should launch toward the end of the year, so we probably won’t see any devices prior to Q1 2016. While 28nm is still alive and kicking, by 2016 it will be off the table, at least in this market segment. Previous MediaTek roadmap leaks suggested that the company would transition to 20nm on select parts by the end of the year.
However, we are not entirely sure 20nm will cut it for high-end parts in 2016. Huawei has already moved to 16nm with its latest Kirin 930 SoC, Samsung stunned the world with the 14nm Exynos 7420, and Qualcomm’s upcoming Snapdragon 820 will be a FinFET part as well.
It is obvious that TSMC’s and Samsung’s 20nm nodes will not be used for most, if any, high-end SoCs next year. With that in mind, it would be logical to expect MediaTek to use a FinFET node as well. On the other hand, depending on the cost, 20nm could still make sense for MediaTek, provided it ends up significantly cheaper than FinFET. While a 20nm chip wouldn’t deliver the same level of power efficiency and performance, with the right price it could find its way into more affordable mid-range devices, or flagships designed by smaller, value-oriented brands (especially those focusing on the Chinese and Indian markets).
Can Linux Succeed On The Desktop?
Every three years I install Linux and see if it is ready for prime time yet, and every three years I am disappointed. What is so disappointing is not so much that the operating system is bad, because it never has been; it is that whoever designs it refuses to think of the user.
To be clear, I will lay out the same rider I have used for my other three reviews. I am a Windows user, but not out of choice. One of the reasons I keep checking out Linux is the hope that it will have fixed its basic problems in the intervening years. Fortunately for Microsoft, it never has.
This time my main computer had a serious outage caused by a dodgy Corsair (which is now a c word) power supply, and I have been out of action for the last two weeks. In the meantime I had to run everything on a clapped-out Fujitsu notebook which took 20 minutes to download a webpage.
One Ubuntu Linux install later, it was behaving like a normal computer. This is where Linux has always been far better than Windows: making rubbish computers behave. So I could settle down to work, right? Well, not really.
This is where Linux has consistently disqualified itself from prime-time every time I have used it. Going back through my reviews, I have been saying the same sort of stuff for years.
Coming from Windows 7, where a user can install the system and start work with no learning curve, Ubuntu is impossible. There is a ton of stuff you have to download before you can get anything that passes for an ordinary service, and this downloading is far too tricky for anyone who is used to Windows.
It is not helped by the Ubuntu Software Centre, which is supposed to make life easier for you. Say you need a flash player. Adobe has a flash player you can download for Ubuntu. Click on it and Ubuntu asks if you want to open the file with the Ubuntu Software Centre to install it. You would think you would want this, right? The thing is, pressing yes opens the Software Centre but does not download Adobe’s flash player. The Centre then says it can’t find the software on your machine.
Here is the problem which I wrote about nearly nine years ago – you can’t download Flash or anything proprietary because that would mean contaminating your machine with something that is not Open Sauce.
Sure, Ubuntu will download all those proprietary drivers, but you have to know to ask – an issue which has been around for so long now that it is silly. The issue of proprietary drivers is only a problem for the hard-core open saucers, and there are not enough of them to justify keeping an operating system in the dark ages for a decade. However, they have managed it.
I downloaded LibreOffice and all those other things needed to get a basic “Windows experience” and discovered that all the typefaces you know and love are unavailable. They should have been in the proprietary pack, but Ubuntu has a problem installing them. This means that I can’t share documents in any meaningful way with Windows users, because all my formatting is screwed.
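In fairness, there is a known workaround for both the Flash and the fonts problems; a sketch of the usual commands, assuming the stock Ubuntu package names of the time (the Microsoft fonts ship in a EULA-gated installer package):

```shell
# Pull in Flash, media codecs and the Microsoft core fonts in one go.
# The installer prompts you to accept Microsoft's EULA along the way.
sudo apt-get update
sudo apt-get install ubuntu-restricted-extras

# Or install just the fonts (Arial, Times New Roman and friends):
sudo apt-get install ttf-mscorefonts-installer

# Rebuild the font cache so LibreOffice sees the new typefaces:
sudo fc-cache -f
```

Of course, the whole complaint stands: none of this is discoverable by someone coming from Windows.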
LibreOffice is not bad, but it really is not Microsoft Word and anyone who tries to tell you otherwise is lying.
I downloaded and configured Thunderbird for mail, and for a few good days it actually worked. However, yesterday it disappeared from the sidebar and I can’t find it anywhere. I am restricted to webmail and I am really hating Microsoft’s Outlook experience.
The only thing that is different between this review and the one I wrote three years ago is that there are now games which actually work, thanks to Steam. I have not tried this out yet because I am too stressed with the work backlog caused by having to work on Linux without my regular software, but there is a feeling that Linux is at last moving to a point where it can be a little bit useful.
So what are the main problems that Linux refuses to address? Usability, interface and compatibility.
I know Ubuntu is famous for its shit interface, and Gnome is supposed to be better, but both look and feel dated. I also hate Windows 8’s interface, which requires you to burn all your computing power navigating a touchscreen tablet interface when you have neither a touchscreen nor a tablet. It should have been an opportunity for open saucers to trump Windows with a nice interface – it wasn’t.
You would think that all the brains in the Linux community could come up with a simple, easy-to-use interface which lets you get at all the files you need without much trouble. The problem is that Linux fans like to tinker: they don’t want usability and they don’t have problems with command screens. Ordinary users, particularly the more recent generations, will not go near a command screen.
Compatibility issues with games have pretty much been resolved, but other key software is missing and Linux operators do not seem keen to get it on board.
I do a lot of layout and graphics work. When you complain about not being able to use Photoshop, Linux fanboys proudly point to GIMP and say it does the same things. You want to grab them by the throat, stuff their heads down the loo and flush. GIMP does less than a tenth of what Photoshop can do, and it does it very badly. There is nothing on Linux that can do what Creative Suite or any real desktop publisher can do.
Proprietary software designed for real people using a desktop tends to trump anything open saucy, even if it is producing a technology marvel.
So in all these years, Linux has not attempted to fix any of the problems which have effectively crippled it as a desktop product.
I will look forward to next week, when the new PC arrives and I will not need another Ubuntu desktop experience. Who knows, maybe they will have sorted it out in another three years’ time.
SUSE Goes OpenStack Cloud 5
SUSE has released OpenStack Cloud 5, the latest version of its infrastructure-as-a-service private cloud distribution.
Version 5 puts the OpenStack brand front and centre, and is based on the latest Juno build of the OpenStack open source platform.
This version includes enhanced networking flexibility, with additional plug-ins available and the addition of distributed virtual routing, which enables individual compute nodes to handle routing tasks themselves or, if need be, to cluster together.
Increased operational efficiency comes in the form of a new seamless integration with existing servers running outside the cloud. In addition, log collection is centralized into a single view.
As you would expect, SUSE OpenStack Cloud 5 is designed to fit perfectly alongside the company’s other products, including the recently launched SUSE Enterprise Storage and SUSE Linux Enterprise Server 12, as well as nodes from earlier versions.
Deployment has also been simplified as part of a move to standardise “as-a-service” models.
Also included is the OpenStack Sahara data processing project, designed to run Hadoop and Spark on top of OpenStack without degradation. MapR has released support for its own service by way of a co-branded plug-in.
“Furthering the growth of OpenStack enterprise deployments, Suse OpenStack Cloud makes it easier for customers to realise the benefits of a private cloud, saving them money and time they can use to better serve their own customers and business,” said Brian Green, managing director, UK and Ireland, at Suse.
“Automation and high availability features translate to simplicity and efficiency in enterprise data centers.”
Suse OpenStack Cloud 5 becomes generally available from today.