Hackers Going After Traffic Signs
After hackers played several high-profile pranks with traffic signs, including warning San Francisco drivers of a Godzilla attack, the U.S. government advised operators of electronic highway signs to take “defensive measures” to better secure their property.
Last month, signs on San Francisco’s Van Ness Ave were photographed flashing “Godzilla Attack! Turn Back” and highway signs across North Carolina were tampered with last week to read “Hack by Sun Hacker.”
The Department of Homeland Security’s Industrial Control Systems Cyber Emergency Response Team, or ICS-CERT, this week advised cities, highway operators and other customers of digital-sign maker Daktronics Inc to take “defensive measures” to minimize the possibility of similar attacks.
It said that information had been posted on the Internet advising hackers how to access those systems using default passwords coded into the company’s software. “ICS-CERT recommends entities review sign messaging, update access credentials and harden communication paths to the signs,” the agency said in an alert posted on Thursday.
Jody Huntimer, a representative for Daktronics, declined to say if the recent attacks involved the bug reported by ICS-CERT.
“We are working with the ICS-CERT team to clarify the current alert and will release a statement once we have assessed the situation and developed customer recommendations,” Huntimer said via email.
Krebs on Security, a widely read security blog, posted a confidential report from the Center for Internet Security, or CIS, which was sent to state security officials. It warned that the pranks created a public safety risk because drivers often slow or stop to view the signs and take pictures.
CIS also predicted that amateur hackers might attempt to hack into other systems in the coming weeks following the May 27 release of “Watch Dogs,” a video game from Ubisoft focused on hacking critical infrastructure.
Will Arm/Atom CPUs Replace Xeon/Opteron?
Analysts say smartphone chips could one day replace the Xeon and Opteron processors used in most of the world’s top supercomputers. In a paper titled “Are mobile processors ready for HPC?” researchers at the Barcelona Supercomputing Center wrote that the trend of less expensive chips bumping out faster but higher-priced processors is likely to continue in high-performance systems.
In 1993, the list of the world’s fastest supercomputers, known as the Top500, was dominated by systems based on vector processors. They were nudged out by less expensive RISC processors. RISC chips were eventually replaced by cheaper commodity processors like Intel’s Xeon and AMD’s Opteron, and now mobile chips appear poised to take over.
The transitions had a common thread, the researchers wrote: microprocessors killed the vector supercomputers because they were “significantly cheaper and greener.” At the moment, low-power chips based on ARM designs fit the bill, but Intel is likely to catch up, so the shift is unlikely to mean the death of x86.
The report compared Samsung’s 1.7GHz dual-core Exynos 5250, Nvidia’s 1.3GHz quad-core Tegra 3 and Intel’s 2.4GHz quad-core Core i7-2760QM – a laptop chip, rather than a server chip. The researchers found that the ARM processors were more power-efficient than the Intel chip on single-core performance, and that ARM chips can scale effectively in HPC environments. On a multi-core basis, the ARM chips were as efficient as Intel x86 chips at the same clock frequency, but Intel was more efficient at the highest performance levels, the researchers said.
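The efficiency metric behind comparisons like this is simply sustained throughput divided by power draw. The sketch below illustrates the calculation; the GFLOPS and wattage figures are hypothetical placeholders, not measurements from the Barcelona paper.

```python
# Illustrative performance-per-watt comparison. The gflops and watts
# values below are made-up placeholders, NOT figures from the
# Barcelona Supercomputing Center study.
chips = {
    "Exynos 5250 (2 x 1.7GHz)":    {"gflops": 6.8,  "watts": 4.0},
    "Tegra 3 (4 x 1.3GHz)":        {"gflops": 5.2,  "watts": 5.0},
    "Core i7-2760QM (4 x 2.4GHz)": {"gflops": 54.0, "watts": 45.0},
}

def gflops_per_watt(gflops, watts):
    """Energy efficiency: sustained GFLOPS divided by power draw in watts."""
    return gflops / watts

for name, c in chips.items():
    print(f"{name}: {gflops_per_watt(c['gflops'], c['watts']):.2f} GFLOPS/W")
```

With these (invented) numbers the mobile part wins on efficiency while the x86 part wins on absolute performance, which mirrors the trade-off the researchers described.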
Do Supercomputers Lead To Downtime?
As supercomputers grow more powerful, they’ll also become more susceptible to failure, thanks to the increased amount of built-in componentry. A few researchers at the recent SC12 conference, held last week in Salt Lake City, offered possible solutions to this growing problem.
Today’s high-performance computing (HPC) systems can have 100,000 nodes or more — with each node built from multiple components of memory, processors, buses and other circuitry. Statistically speaking, all these components will fail at some point, and they halt operations when they do so, said David Fiala, a Ph.D. student at North Carolina State University, during a talk at SC12.
The problem is not a new one, of course. When Lawrence Livermore National Laboratory’s 600-node ASCI (Accelerated Strategic Computing Initiative) White supercomputer went online in 2001, it had a mean time between failures (MTBF) of only five hours, thanks in part to component failures. Later tuning efforts improved ASCI White’s MTBF to 55 hours, Fiala said.
But as the number of supercomputer nodes grows, so will the problem. “Something has to be done about this. It will get worse as we move to exascale,” Fiala said, referring to how the supercomputers of the next decade are expected to be far more powerful than today’s models.
Today’s techniques for dealing with system failure may not scale very well, Fiala said. He cited checkpointing, in which a running program is temporarily halted and its state is saved to disk. Should the program then crash, the system is able to restart the job from the last checkpoint.
The problem with checkpointing, according to Fiala, is that as the number of nodes grows, the amount of system overhead needed to do checkpointing grows as well — and grows at an exponential rate. On a 100,000-node supercomputer, for example, only about 35 percent of the activity will be involved in conducting work. The rest will be taken up by checkpointing and — should a system fail — recovery operations, Fiala estimated.
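The checkpoint/restart idea Fiala describes can be sketched in a few lines: periodically persist the job’s state, and on restart resume from the last saved copy. This is a minimal, generic illustration (the file name and state layout are invented), not the interface of any real HPC checkpointing system.

```python
import os
import pickle

CHECKPOINT = "job.ckpt"  # hypothetical checkpoint file name

def save_checkpoint(state):
    """Briefly pause the job and persist its state to disk."""
    with open(CHECKPOINT, "wb") as f:
        pickle.dump(state, f)

def load_checkpoint():
    """Resume from the last saved state, or start fresh if none exists."""
    if os.path.exists(CHECKPOINT):
        with open(CHECKPOINT, "rb") as f:
            return pickle.load(f)
    return {"step": 0, "total": 0}

state = load_checkpoint()
for step in range(state["step"], 1000):
    state["total"] += step           # the "real" work of the job
    state["step"] = step + 1
    if state["step"] % 100 == 0:     # checkpoint interval: overhead vs. lost work
        save_checkpoint(state)
```

The checkpoint interval is the knob behind Fiala’s overhead estimate: checkpoint more often and more time goes to I/O instead of work; checkpoint less often and a crash discards more computation.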
Because of all the additional hardware needed for exascale systems, which could be built from a million or more components, system reliability will have to be improved by 100 times in order to keep to the same MTBF that today’s supercomputers enjoy, Fiala said.
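The scaling Fiala describes follows from the usual serial-reliability approximation: if any one of N independent components failing halts the system, failures arrive roughly N times as fast as for a single component, so system MTBF is about the component MTBF divided by N. A worked sketch (the component MTBF value is illustrative, not from the talk):

```python
def system_mtbf(component_mtbf_hours, n_components):
    """Serial-system approximation: any single component failure halts the
    whole machine, so system MTBF ~ component MTBF / number of components."""
    return component_mtbf_hours / n_components

# Illustrative component MTBF of 5 million hours:
m = 5_000_000.0
today = system_mtbf(m, 100_000)       # 100,000 components -> 50 hours
exascale = system_mtbf(m, 1_000_000)  # 1,000,000 components -> 5 hours
```

Holding system MTBF constant while the component count grows 10x already demands 10x more reliable parts; added per-node complexity pushes the required improvement further, toward the 100x figure Fiala cited.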
Fiala presented technology that he and fellow researchers developed that may help improve reliability. The technology addresses the problem of silent data corruption, in which systems make undetected errors when writing data to disk.
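One common defense against silent data corruption is to store a checksum alongside the data and re-verify it on read, so an undetected write error becomes a detectable one. The sketch below is a generic illustration of that idea, not the specific technique Fiala’s group presented.

```python
import hashlib
import pickle

def write_with_checksum(path, obj):
    """Serialize obj and prepend a SHA-256 digest so later corruption
    of the stored bytes can be detected on read."""
    payload = pickle.dumps(obj)
    digest = hashlib.sha256(payload).digest()
    with open(path, "wb") as f:
        f.write(digest + payload)

def read_with_checksum(path):
    """Recompute the digest over the payload; a mismatch reveals
    corruption that would otherwise go unnoticed."""
    with open(path, "rb") as f:
        blob = f.read()
    digest, payload = blob[:32], blob[32:]
    if hashlib.sha256(payload).digest() != digest:
        raise IOError("silent data corruption detected")
    return pickle.loads(payload)
```

Any flipped bit in the stored payload changes the recomputed digest, turning a silent error into an immediate, loud one.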
Lenovo Eyes The U.S.
Lenovo hopes that computers made in its first U.S. manufacturing plant will attract more customers, while also speeding the delivery of ThinkPad laptops and tablets to U.S. buyers.
The company, which is based in China, earlier this month announced it would open a factory to make computers in Whitsett, N.C. — its first such facility in the U.S. Lenovo said the factory would create about 115 manufacturing jobs. A spokesman later added that the company may expand the facility in the future, which could create more jobs.
Manufacturing in the U.S. will help Lenovo get its products to customers more quickly, said Peter Hortensius, senior vice president of the product group at Lenovo, in an interview at a company event in New York on Tuesday evening.
The company will manufacture ThinkPad laptops and tablets starting early next year, and with the new factory, Lenovo hopes computers could reach customers within a week, or in some cases, overnight. But initial supplies of products like the ThinkPad Tablet 2, which will become available in October, will not be made in the U.S. factory.
Many Lenovo computer shipments originate from China and are supposed to reach customers in 10 days, but in some cases take weeks. The company also has factories in Japan, Brazil, Germany and Mexico.
The “Made in USA” tag on computers manufactured in North Carolina will resonate with some buyers, Hortensius said. Lenovo’s main U.S. operations are in that state, and the company also has a distribution center there.