Toshiba To Offer A 20-megapixel Image Chip
Toshiba is gearing up to offer a 20-megapixel image sensor for digital cameras that it says will be the highest-resolution chip of its kind.
The Tokyo-based firm said the new chips will support capturing 30 frames per second at full resolution. They will also be able to shoot video at 60 frames per second at 1080p or 100 frames per second at 720p.
Toshiba said it will begin shipping samples of the new CMOS chips in January, with mass production of 300,000 units per month to begin in August. Toshiba is best known in components for its NAND flash memory, which it develops with partner SanDisk, but it is also a major manufacturer of LSI and other semiconductors.
Digital point-and-shoot cameras are steadily falling in price, squeezed between brutal competition among manufacturers and the increasing threat of smartphones and mobile devices. While the number of pixels a camera can capture is not always a direct measure of the overall quality of its images, it is a key selling point to consumers.
The image resolution of top-end smartphones now often meets or exceeds that of digital cameras. The Nokia 808 PureView, launched earlier this year, has a 41-megapixel image sensor.
The Japanese manufacturer said it has increased the amount of information each pixel in the new chip can store compared with its previous generation of CMOS sensors, producing better overall images. It has also reduced the size of the pixels: the new 20-megapixel version has individual pixels that measure 1.2 micrometers, down from 1.34 micrometers in its 16-megapixel product.
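A quick back-of-the-envelope check (not part of Toshiba’s announcement) shows what the pitch shrink buys: going from 1.34 to 1.2 micrometers lets roughly 25 percent more pixels fit into about the same active area. A minimal Python sketch, assuming square pixels and ignoring non-imaging area:

```python
# Rough estimate of imaging area for a sensor, assuming square pixels
# and ignoring wiring, microlens gaps and other non-imaging area.
def active_area_mm2(megapixels, pitch_um):
    return megapixels * 1e6 * (pitch_um * 1e-3) ** 2  # result in mm^2

print(active_area_mm2(16, 1.34))  # ~28.7 mm^2 -- previous 16-megapixel part
print(active_area_mm2(20, 1.20))  # ~28.8 mm^2 -- new 20-megapixel sensor
```

In other words, the arithmetic suggests the new part packs the extra pixels into roughly the same silicon footprint as the outgoing 16-megapixel sensor rather than growing the die.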
Is Intel Really Catching ARM?
A new report suggests that Intel is close to matching ARM on power efficiency.
The study by Bernstein Research analysts said that the days of Intel being mocked because its power-hungry chips shortened the battery life of mobile devices could be over. Bernstein noted that the ARM camp has such a commanding lead in phones and tablets that Intel probably won’t make much of a dent in those markets for a couple of years — even with its energy-efficient chips.
But it said that the two companies’ chip types “are very close in terms of power efficiency and processing power.” It said that the fight between the ARM and Intel camps will heat up meaningfully as early as 2013, with likely damage on both sides and no clear winner. For its study, Bernstein compared a Motorola RAZR phone using an Intel chip against a RAZR phone using an ARM chip. It also compared the two chips in similar tablets running Windows 8.
The bad news in the report for Intel was that ARM’s chips have become more powerful, making them “a very compelling choice” for consumers looking for low-end notebooks.
Toyota Goes Wireless Charging
January 2, 2013 by admin
Toyota is taking the smartphone boom quite seriously, and the car-maker hopes to offer its first wireless charging systems in select models as early as next year.
Toyota’s wireless system will be compatible with the Qi standard and will be introduced in the new Avalon sedan next year. Of course, it will be optional, offered as part of Toyota’s $1,950 “technology package” which includes other geeky goodies as well.
According to the BBC, Chrysler is also planning to offer a similar system in the Dodge Dart. Other car-makers will no doubt offer wireless charging functionality sooner rather than later.
The number of Qi-compatible phones is limited for the time being. Just 34 phones support the standard, including the Lumia 920, Nexus 4 and HTC Windows Phone 8X. However, some very popular devices, like Apple’s iPhone and Sammy’s Galaxy S series phones, don’t support wireless charging just yet.
Raspberry Pi Gets A Store
The Raspberry Pi Foundation has opened a store to let users easily download applications that run on the credit-card-sized computer.
The Raspberry Pi Foundation said it partnered with Indiecity and Velocix to create a store for applications that run on the Raspberry Pi computer. The Foundation said that the store itself is an application that runs under its Raspbian Linux distribution and at launch has 23 applications available for download.
The Raspberry Pi Store contains games such as Freeciv alongside applications such as LibreOffice and Asterisk. The Raspberry Pi Foundation said its store accepts compiled binaries, Python code, images, audio and video.
The Raspberry Pi Store will allow developers to charge for applications, with the Foundation saying that it hopes to see a mix of hobbyist and commercial software. The Foundation also asked users that download applications to review them in order to improve the results put out by its recommendations system.
While the Raspberry Pi was initially intended to help teach people how to program, the device has gained wider popularity due to the fact that its hardware can run many typical PC desktop applications. The Foundation’s Raspberry Pi Store will make it easier for users to find and install applications on the device, which can only be a good thing for the Raspberry Pi Foundation and Linux adoption.
Intel Details 22nm SoC
Thanks to a long spate of bad luck over at AMD, Intel now finds itself with a rather safe market lead, at least in the high-end and server markets. However, in the low-end and mobile segments, Intel has a lot of catching up to do.
ARM still dominates the mobile market and Intel is looking to take on the British chip designer with new 22nm SoCs of its own. Intel outlined its SoC strategy at the 2012 International Electron Devices Meeting in San Francisco the other day.
The cunning plan involves 3D tri-gate transistors and Intel’s 22nm fabrication process, or in other words it is a brute force approach. Intel can afford to integrate the latest tech in cheap and cheerful 22nm Atoms, thus making them more competitive in terms of power efficiency.
Since Intel leads the way with new manufacturing processes, it already has roughly a year of experience with 22nm chips, while ARM partners rely on 28nm, 32nm and, more often than not, 40nm processes. Intel’s next-generation SoCs will also benefit from off-the-shelf Intel tech such as its 3D tri-gate transistors.
Will Lenovo Go Public In 2K14?
Lenovo’s parent firm Legend Holdings could float an initial public offering (IPO) as soon as 2014, according to the firm’s chairman.
Liu Chuanzhi, chairman of Legend Holdings, told China Business News that the firm plans to list on the China A-share market between 2014 and 2016. Liu also reportedly said the company will invest $3.2bn by 2014 to develop its various businesses.
Legend Holdings is 36 percent owned by the Chinese state-controlled Academy of Sciences, with a further 20 percent owned by the private investment firm China Oceanwide Holdings Group.
Legend Holdings also has venture capital and real estate interests outside of Lenovo Group. The firm’s system-building operations, however, have gone from strength to strength since it bought IBM’s PC business back in 2005, and it is now heavily promoting its Yoga tablet-laptop hybrid device.
Earlier this year Gartner reported that Lenovo had overtaken HP to become the largest PC vendor, something that HP disputed by pointing to IDC’s figures. Regardless of HP’s protestations, Lenovo looks set to cement that lead as its PC business continues to grow while HP’s has been shrinking for some time.
Legend Holdings might want to cash in on Lenovo’s high-flying status, and a cash injection from an IPO could help the company invest in designing products for the smartphone and tablet markets.
TSMC To Boost 28nm Production
TSMC is able to make chips using its 28nm process technology at a speedier pace than it originally anticipated. This means the chipmaker will likely be able to meet demand for existing orders and start accepting new designs.
TSMC promised to increase its 28nm capacity to 68,000 300mm wafers per month by the end of the year, which it did by ramping fab 15/phase 2 up to 50,000 300mm wafers a month. According to the Taiwan Economic News, it looks like the outfit managed to beat its own projections, which should be good news for customers like AMD, Nvidia and Qualcomm. Well, not AMD of course; it just told Globalfoundries to stop making so many of its chips so it can save a bit of money.
But it looks like TSMC is flat out. In November, fab 15/phase 2 processed 52,000 wafers. Combined with fab 15/phase 1, TSMC should be able to process 75,000 to 80,000 300mm wafers using 28nm process technologies this month. TSMC produces the majority of its 28nm chips at fab 15, which will have a capacity of more than 100,000 300mm wafers per month when fully operational.
Qualcomm Throws $$ At Sharp
December 12, 2012 by admin
Qualcomm is set to make a $120m investment in troubled Japanese display maker Sharp.
Rumours had been floating around that Qualcomm was looking to make an investment in Sharp, and the display maker has confirmed the investment. Qualcomm initially will invest $60m in Sharp through its Pixtronix subsidiary by the end of 2012 to help develop Sharp’s IGZO display technology.
Qualcomm will make a further $60m investment in Sharp should the initial work on its IGZO displays seem promising. If Qualcomm completes the full $120m investment, it will become Sharp’s single largest shareholder with around five percent of the firm, largely because Sharp’s share price has fallen by almost 75 percent in 2012.
While Sharp said it will work with Qualcomm on further developing its display technology, the two firms will also look at working together on developing chip fabrication technologies.
It had been reported that Intel and Dell were also sniffing around Sharp, while Hon Hai is known to be looking to take a stake in the firm, though its demand for a seat on Sharp’s board is likely the main sticking point in negotiations. Sharp has warned that its future is in doubt if it cannot secure investment to repay the large debts it amassed during its LCD manufacturing push back in 2006 and 2007.
Nvidia Speaks On Performance Issue
Nvidia has said that most of the outlandish performance increase figures touted by GPGPU vendors were down to poor original code rather than the sheer brute force computing power provided by GPUs.
Both AMD and Nvidia have been using real-world code examples and projects to promote the performance of their respective GPGPU accelerators for years, but now it seems some of the eye-popping figures, including speed-ups of 100x or 200x, were not down to just the computing power of GPGPUs. Sumit Gupta, GM of Nvidia’s Tesla business, said that such figures were generally down to starting with unoptimized CPU code.
During Intel’s Xeon Phi pre-launch press conference call, the firm cast doubt on some of the orders-of-magnitude speed-up claims that had been bandied about for years. Now Gupta has told The INQUIRER that while those large speed-ups did happen, they were possible because the original code was poorly optimized to begin with, so the bar was set very low.
Gupta said, “Most of the time when you saw the 100x, 200x and larger numbers those came from universities. Nvidia may have taken university work and shown it and it has an 100x on it, but really most of those gains came from academic work. Typically we find when you investigate why someone got 100x [speed up] is because they didn’t have good CPU code to begin with. When you investigate why they didn’t have good CPU code you find that typically they are domain scientist’s not computer science guys – biologists, chemists, physics – and they wrote some C code and it wasn’t good on the CPU. It turns out most of those people find it easier to code in CUDA C or CUDA Fortran than they do to use MPI or Pthreads to go to multi-core CPUs, so CUDA programming for a GPU is easier than multi-core CPU programming.”
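Gupta’s point is easy to demonstrate with a toy example (the snippet below is purely illustrative and has nothing to do with Nvidia’s or the universities’ actual benchmarks): the “speed-up” you measure depends heavily on how good the CPU baseline is. This hypothetical Python sketch compares a naive element-by-element loop against a properly vectorized version of the same work; the gap between the two is the kind of headroom that can end up credited to an accelerator when the baseline is weak.

```python
import time
import numpy as np

N = 2_000_000
a = np.random.rand(N)
b = np.random.rand(N)

# "Bad CPU code": an element-by-element Python loop over the arrays.
start = time.perf_counter()
slow = [a[i] + b[i] for i in range(N)]
t_slow = time.perf_counter() - start

# A properly optimized CPU baseline: one vectorized call doing the same work.
start = time.perf_counter()
fast = a + b
t_fast = time.perf_counter() - start

print(f"naive loop: {t_slow:.3f} s")
print(f"vectorized: {t_fast:.4f} s")
print(f"'speed-up' from fixing the baseline alone: {t_slow / t_fast:.0f}x")
```

On typical hardware the vectorized baseline alone is well over an order of magnitude faster than the naive loop, which is exactly the sort of gap Gupta says got folded into the headline GPU numbers.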
Do Supercomputers Lead To Downtime?
As supercomputers grow more powerful, they’ll also become more susceptible to failure, thanks to the increased amount of built-in componentry. A few researchers at the recent SC12 conference, held last week in Salt Lake City, offered possible solutions to this growing problem.
Today’s high-performance computing (HPC) systems can have 100,000 nodes or more, with each node built from multiple components: memory, processors, buses and other circuitry. Statistically speaking, all of these components will fail at some point, and they halt operations when they do, said David Fiala, a Ph.D. student at North Carolina State University, during a talk at SC12.
The problem is not a new one, of course. When Lawrence Livermore National Laboratory’s 600-node ASCI (Accelerated Strategic Computing Initiative) White supercomputer went online in 2001, it had a mean time between failures (MTBF) of only five hours, thanks in part to component failures. Later tuning efforts had improved ASCI White’s MTBF to 55 hours, Fiala said.
But as the number of supercomputer nodes grows, so will the problem. “Something has to be done about this. It will get worse as we move to exascale,” Fiala said, referring to how supercomputers of the next decade are expected to have 10 times the computational power that today’s models do.
Today’s techniques for dealing with system failure may not scale very well, Fiala said. He cited checkpointing, in which a running program is temporarily halted and its state is saved to disk. Should the program then crash, the system is able to restart the job from the last checkpoint.
The problem with checkpointing, according to Fiala, is that as the number of nodes grows, the amount of system overhead needed to do checkpointing grows as well — and grows at an exponential rate. On a 100,000-node supercomputer, for example, only about 35 percent of the activity will be involved in conducting work. The rest will be taken up by checkpointing and — should a system fail — recovery operations, Fiala estimated.
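For readers unfamiliar with the technique, checkpointing at its simplest just means periodically serializing a job’s state so that a restart can resume from the last save point instead of starting over. The hypothetical Python sketch below is a minimal single-process illustration, nothing like the coordinated checkpointing Fiala is describing across 100,000 nodes, but the periodic write in the loop is exactly the overhead he says balloons as node counts grow.

```python
import json
import os

STATE_FILE = "checkpoint.json"  # hypothetical path, just for this sketch

def run(total_steps=1_000_000, checkpoint_every=100_000):
    # Resume from the last checkpoint if one exists, otherwise start fresh.
    if os.path.exists(STATE_FILE):
        with open(STATE_FILE) as f:
            state = json.load(f)
    else:
        state = {"step": 0, "accumulator": 0.0}

    while state["step"] < total_steps:
        state["accumulator"] += state["step"] * 1e-6  # stand-in for real work
        state["step"] += 1

        # Periodically halt and persist state to disk; on a real machine every
        # node does this, which is where the overhead comes from.
        if state["step"] % checkpoint_every == 0:
            with open(STATE_FILE + ".tmp", "w") as f:
                json.dump(state, f)
            os.replace(STATE_FILE + ".tmp", STATE_FILE)  # atomic swap

    return state["accumulator"]

if __name__ == "__main__":
    print(run())
```

On a real supercomputer the state is the memory image of every node and the save target is a shared parallel filesystem, which is why the cost grows so quickly with scale.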
Because of all the additional hardware needed for exascale systems, which could be built from a million or more components, system reliability will have to be improved by 100 times in order to keep to the same MTBF that today’s supercomputers enjoy, Fiala said.
Fiala presented technology, developed with fellow researchers, that may help improve reliability. It addresses the problem of silent data corruption, when systems make undetected errors writing data to disk.
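The article does not detail how the researchers’ technology works, but a common way to surface silent corruption of this kind is to write a checksum alongside the data and verify it on every read; anything that changed on the way to or from disk then shows up as a mismatch instead of slipping through unnoticed. A minimal, hypothetical Python sketch:

```python
import hashlib

def write_with_checksum(path: str, data: bytes) -> None:
    # Store the data and a SHA-256 digest of it side by side.
    with open(path, "wb") as f:
        f.write(data)
    with open(path + ".sha256", "w") as f:
        f.write(hashlib.sha256(data).hexdigest())

def read_and_verify(path: str) -> bytes:
    # Recompute the digest on read; a mismatch means the data was
    # silently corrupted somewhere between write and read.
    with open(path, "rb") as f:
        data = f.read()
    with open(path + ".sha256") as f:
        expected = f.read().strip()
    if hashlib.sha256(data).hexdigest() != expected:
        raise IOError(f"silent corruption detected in {path}")
    return data

if __name__ == "__main__":
    write_with_checksum("results.bin", b"simulation output goes here")
    print(read_and_verify("results.bin"))
```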