Dell’s security research team has revealed that a new form of ransomware, dubbed “CryptoLocker”, has managed to infect up to 250,000 devices and extort ransoms worth almost a million dollars in bitcoins.
“Based on the presented evidence, researchers estimate that 200,000 to 250,000 systems were infected globally in the first 100 days of the CryptoLocker threat,” Dell announced in a SecureWorks post.
The firm worked out that if the CryptoLocker threat actors had sold the 1,216 bitcoins (BTC) they collected from September this year immediately upon receiving them, they would have earned nearly $380,000.
“If they elected to hold these ransoms, they would be worth nearly $980,000 as of this publication based on the current weighted price of $804/BTC,” Dell said.
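Those numbers are easy to check. A quick sketch in Python using the figures from Dell’s post (1,216 BTC and the $804/BTC weighted price; the implied early average price is our own back-calculation, not a figure Dell published):

```python
# Figures from the Dell SecureWorks post.
total_btc = 1216
price_at_publication = 804       # weighted USD price per BTC

# Value had the attackers held every ransom until publication.
print(f"If held: ${total_btc * price_at_publication:,}")   # $977,664, i.e. "nearly $980,000"

# Selling immediately raised only ~$380,000, which implies an average
# price of roughly $312/BTC over the 100-day collection period.
print(f"Implied average sale price: ${380_000 / total_btc:,.0f}/BTC")
```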
CryptoLocker also stands apart from the average ransomware. Instead of using a custom cryptographic implementation like many other malware families, CryptoLocker uses the third-party certified cryptography offered by Microsoft’s CryptoAPI.
“By using a sound implementation and following best practices, the malware authors have created a robust program that is difficult to circumvent,” Dell said.
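Dell’s post doesn’t reproduce the malware’s code, but the approach it credits, a sound library-backed implementation rather than home-grown crypto, is in essence the textbook hybrid scheme: encrypt each file with a fresh symmetric key, then wrap that key with a public key whose private half never leaves the attackers’ server, leaving nothing on the victim’s machine that could decrypt the files. A minimal sketch of that general pattern (purely illustrative; CryptoLocker itself calls Microsoft’s CryptoAPI, and the RSA-plus-Fernet choice below is our own):

```python
from cryptography.hazmat.primitives.asymmetric import rsa, padding
from cryptography.hazmat.primitives import hashes
from cryptography.fernet import Fernet

# In the real scheme this pair is generated server-side; only the
# public half is ever sent to the infected machine.
private_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
public_key = private_key.public_key()

def hybrid_encrypt(plaintext: bytes):
    """Encrypt data with a fresh symmetric key, then wrap that key
    with the RSA public key. Without the private key, the wrapped
    key (and hence the data) cannot be recovered."""
    file_key = Fernet.generate_key()              # fresh key per file
    ciphertext = Fernet(file_key).encrypt(plaintext)
    wrapped_key = public_key.encrypt(
        file_key,
        padding.OAEP(mgf=padding.MGF1(algorithm=hashes.SHA256()),
                     algorithm=hashes.SHA256(), label=None),
    )
    return ciphertext, wrapped_key
```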
Conventionally, ransomware prevents victims from using their computers normally and uses social engineering to convince them that failing to follow the malware authors’ instructions will lead to real-world consequences. These consequences, such as owing a fine or facing arrest and prosecution, are presented as being the result of a fabricated indiscretion such as pirating music or downloading illegal pornography.
“Victims of traditional forms of ransomware could ignore the demands and use security software to unlock the system and remove the offending malware,” Dell explained. “CryptoLocker changes this dynamic by aggressively encrypting files on the victim’s system and returning control of the files to the victim only after the ransom is paid.”
Dell said that the earliest samples of CryptoLocker appear to have been released on 5 September this year. However, details about its initial distribution phase are unclear.
“It appears the samples were downloaded from a compromised website located in the United States, either by a version of CryptoLocker that has not been analysed as of this publication, or by a custom downloader created by the same authors,” Dell added.
Dell believes that early versions of CryptoLocker were distributed through spam emails targeting business professionals rather than home internet users, with the lure often being a ‘consumer complaint’ against the email recipient or their organisation.
Attached to these emails would be a ZIP archive with a random alphabetical filename of 13 to 17 characters, containing a single executable with the same name as the archive but with an EXE extension, so keep an eye out for emails that fit this description.
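That pattern is specific enough to screen for mechanically. A rough sketch of the heuristic (the regex, thresholds and example filename here are our own illustration, not Dell’s detection logic):

```python
import re
import zipfile

# Matches the reported lure: a ZIP named with 13 to 17 random letters,
# containing exactly one .exe whose name matches the archive's.
SUSPICIOUS_NAME = re.compile(r"^[A-Za-z]{13,17}\.zip$", re.IGNORECASE)

def looks_like_cryptolocker_lure(zip_path):
    name = zip_path.replace("\\", "/").rsplit("/", 1)[-1]
    if not SUSPICIOUS_NAME.match(name):
        return False
    base = name[:-len(".zip")]
    with zipfile.ZipFile(zip_path) as zf:
        members = zf.namelist()
    return len(members) == 1 and members[0].lower() == f"{base}.exe".lower()

# e.g. looks_like_cryptolocker_lure("attachments/Pkqjrnwhcfyuzma.zip")
```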
Despite growing resentment from companies and powerful industry groups, the Federal Trade Commission continues to insist that it wants to be the nation’s enforcer of data security standards.
Over the past few years, the FTC has gone after companies that have suffered data breaches, citing the authority granted to it under a section of the FTC Act that prohibits “unfair” and “deceptive” trade practices. The FTC has extracted stiff penalties from some companies by arguing that their failure to properly protect customer data represented an unfair and deceptive trade practice.
On Thursday, FTC Chairwoman Edith Ramirez called for legislation that would give the agency more formal authority to go after breached entities.
“I’d like to see FTC be the enforcer,” Law360 quoted Ramirez as saying at a privacy event organized by the National Consumers League in Washington. “If you have FTC enforcement along with state concurrent jurisdiction to enforce, I think that would be an absolute benefit, and I think it’s something we’ve continued to push for.”
According to Ramirez, the FTC supports a federal data-breach notification law that would also give it the authority to penalize companies for data breaches. In separate comments at the same event, FTC counsel Betsy Broder reportedly noted that the FTC’s enforcement actions stem from the continuing failure of some companies to adequately protect data in their custody.
“FTC keeps bringing data security cases because companies keep neglecting to employ the most reasonable off-the-shelf, commonly available security measures for their systems,” Law360 quoted Broder as saying.
An FTC spokeswoman was unable to immediately confirm the comments made by Ramirez and Broder but said the sentiments expressed in the Law360 story accurately describe the FTC’s position on enforcement authority.
The comments by the senior officials come amid growing protests against what some see as the FTC overstepping its authority by going after companies that have suffered data breaches.
Over the past several years, the agency has filed complaints against dozens of companies and extracted costly settlements from many of them for data breaches. In 2006, for instance, the FTC imposed a $10 million fine on data aggregator ChoicePoint, and more recently, online gaming company RockYou paid the agency $250,000 to settle data breach-related charges.
Red Hat has made available a beta of Red Hat Enterprise Linux 7 (RHEL 7) for testers, just weeks after the final release of RHEL 6.5 to customers.
RHEL 7 is aimed at meeting the requirements of future applications as well as delivering scalability and performance to power cloud infrastructure and enterprise data centers.
Available to download now, the RHEL 7 beta introduces a number of enhancements, including better support for Linux Containers, in-place upgrades, XFS as the default file system, improved networking support and improved compatibility with Windows networks.
Inviting customers, partners, and members of the public to download the RHEL 7 beta and provide feedback, Red Hat is promoting the upcoming version as its most ambitious release to date. The code is based on Red Hat’s community-developed Fedora 19 distribution of Linux and the upstream Linux 3.10 kernel, the firm said.
“Red Hat Enterprise Linux 7 is designed to provide the underpinning for future application architectures while delivering the flexibility, scalability, and performance needed to deploy across bare metal, virtual machines, and cloud infrastructure,” Senior Product Marketing Manager Kimberly Craven wrote on the Red Hat Enterprise Linux blog.
These improvements address a number of key areas, including virtualisation, management and interoperability.
Linux Containers, for example, were only partially supported in RHEL 6.5, but this release enables applications to be created and deployed using Linux Container technology such as the Docker tool. Containers offer operating system-level virtualisation, which provides isolation between applications without the overhead of virtualising the entire server.
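As a quick illustration of why the overhead is so much lower, launching an application in an isolated container is a single command, with no guest operating system to boot (a generic sketch assuming the Docker CLI and a centos:6 image are available; this is not RHEL 7-specific tooling):

```python
import subprocess

# Run a command inside a throwaway container: the process gets its own
# filesystem and process namespace but shares the host kernel, so there
# is no hypervisor or guest OS to start up.
subprocess.run(
    ["docker", "run", "--rm", "centos:6", "cat", "/etc/redhat-release"],
    check=True,
)
```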
Red Hat said it is now supporting an in-place upgrade feature for common server deployment types. This will allow customers to migrate existing RHEL 6.5 systems to RHEL 7 without downtime.
RHEL 7 also makes the switch to XFS as its default file system, supporting file systems of up to 500TB, while ext4 file systems are now supported up to 50TB in size and B-tree file system (btrfs) implementations are available for users to test.
Interoperability with Windows has also been improved, with Red Hat now including the ability to bridge Windows and Linux infrastructure by integrating RHEL 7 and Samba 4.1 with Microsoft Active Directory domains. Red Hat Enterprise Linux Identity Management can also be deployed in a parallel trust zone alongside Active Directory, the firm said.
On the networking side, RHEL 7 provides support for 40Gbps Ethernet, along with improved channel bonding, TCP performance improvements and low latency socket poll support.
Other enhancements include support for very large scale storage configurations, including enterprise storage arrays, and uniform management tools for networking, storage, file systems, identities and security using the OpenLMI framework.
Earlier this year Intel made a lot of noise about leasing its foundries to third parties, but at least one big player does not appear to be interested.
Speaking at a tech conference, Qualcomm CEO Paul Jacobs said his company is not interested in using Intel fabs and that it will continue to cooperate with established foundries like TSMC.
Jacobs argued that Intel is great at building huge volumes of equally huge cores, but TSMC is a tad more flexible. He pointed out that foundries like TSMC can build multiple different products simultaneously, controlling the process using software.
“Intel is famous, has been known for having a copy-exact model, so they need very large volumes of a particular chip to run through that,” Jacobs said, reports ITProPortal.
However, Jacobs did say that he was glad to hear Intel is joining the foundry space and that it will be interesting to see how it plays out.
The browser cookies that online businesses use to track Internet customers for targeted advertising are also used by the National Security Agency to track surveillance targets and break into their systems.
The agency’s use of browser cookies is restricted to tracking specific suspects rather than sifting through vast amounts of user data, the Washington Post reported Tuesday, citing internal documents obtained from former NSA contractor Edward Snowden.
Google’s PREF (for preference) cookies, which the company uses to personalize webpages for Internet users based on their previous browsing habits and preferences, appear to be a particular favorite of the NSA, the Post noted.
PREF cookies don’t store any user-identifying information such as user name or email address. But they contain information on a user’s general location, language preference, search engine settings, number of search results to display per page and other data that lets advertisers uniquely identify an individual’s browser.
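That combination is exactly what makes such cookies useful for tracking: even without a name, a stable pseudonymous ID lets anyone who can observe traffic on several sites join those visits into a single browsing history. A toy sketch of that correlation step (the cookie layout below is invented for illustration and is not Google’s actual PREF format):

```python
from collections import defaultdict

# Hypothetical request log: (site, cookie header) pairs as an observer
# with visibility into several sites might see them.
requests = [
    ("news.example.com",  "PREF=ID=6f2a9c31d4e8b7a0"),
    ("shop.example.net",  "PREF=ID=6f2a9c31d4e8b7a0"),
    ("forum.example.org", "PREF=ID=93b1f0aa27c45d18"),
]

def cookie_id(header):
    """Pull the pseudonymous ID out of a PREF-style cookie header."""
    for part in header.split(";"):
        if part.strip().startswith("PREF=ID="):
            return part.strip()[len("PREF=ID="):]
    return None

profiles = defaultdict(list)
for site, header in requests:
    profiles[cookie_id(header)].append(site)

# The same ID appearing on two sites links those visits to one browser.
print(dict(profiles))
```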
The Google cookie, and those used by other online companies, can be used by the NSA to track a target user’s browsing habits and to enable remote exploitation of their computers, the Post said.
Documents made available by Snowden do not describe the specific exploits used by the NSA to break into a surveillance target’s computers. Neither do they say how the NSA gains access to the tracking cookies, the Post reported.
It is theorized that one way the NSA could get access to the tracking cookies is to simply ask the companies for them under the authority granted to the agency by the Foreign Intelligence Surveillance Act (FISA).
Separately, the documents leaked by Snowden show that the NSA is also tapping into cell-phone location data gathered and transmitted by makers of mobile applications and operating systems. Google and other Internet companies use the geo-location data transmitted by mobile apps and operating systems to deliver location-aware advertisements and services to mobile users.
However, the NSA is using the same data to track surveillance targets with more precision than was possible with data gathered directly from wireless carriers, the Post noted. The mobile app data, gathered by the NSA under a program codenamed “Happyfoot,” allows the agency to tie Internet addresses to physical locations more precisely than was possible with cell-phone location data.
An NSA division called Tailored Access Operations uses the data gathered from tracking cookies and mobile applications to launch offensive hacking operations against specific target computers, the Post said.
An NSA spokeswoman Wednesday did not comment on the specific details in the Post story but reiterated the agency’s commitment to fulfill its mission of protecting the country against those seeking to do it harm.
“As we’ve said before, NSA, within its lawful mission to collect foreign intelligence to protect the United States, uses intelligence tools to understand the intent of foreign adversaries and prevent them from bringing harm to innocent Americans and allies,” the spokeswoman said.
The Post’s latest revelations are likely to shine a much-needed spotlight on the extensive tracking and monitoring activities carried out by major Internet companies in order to deliver targeted advertisements to users.
Privacy rights groups have protested such tracking for several years and have sought legislation that would give users more visibility and control over the data that is collected on them by online companies.
Intel apparently built an IPAD ten years before Steve Jobs thought of the tablet and the name. It was in the days when sticking an I in front of anything meant Intel rather than Apple, and the Intel Pad, or IPAD for short, could browse the Internet, play music and videos, and even act as a digital picture frame.
Intel scrapped the IPAD before consumers could get their hands on it, a retreat now seen as one of the outfit’s biggest blunders. According to CNET, in the late 1990s and early 2000s Intel wanted to diversify its operations beyond the PC, and the IPAD came from one of several small teams within its research arm tasked with exploring new business opportunities. The IPAD, which included a touch screen and stylus, would not run entirely on its own but connected to a computer to browse the Internet through an Intel wireless technology.
Intel thought that “mobility” meant moving around your home or business, so the IPAD was to be a portable device you could carry around the house. The reason it never thought of connecting the device to the phone network was that Intel wanted to tie everything back to its core PC chip business. After several years of development on the Intel Web Tablet, then-CEO Craig Barrett unveiled the device at the Consumer Electronics Show in January 2001, and the company planned to sell the tablet to consumers later that year.
Sadly, though, the tablet miffed Intel’s PC partners, which didn’t want a product that could potentially compete with their own, and Intel caved in and cancelled the project.
The Bluetooth Special Interest Group (SIG) has announced Bluetooth 4.1, the first version of Bluetooth to lay the foundations for IPv6 capability.
The first hints of what the Bluetooth SIG had planned for this new version were revealed to The INQUIRER in October during our exclusive interview with Steve Hegenderfer at Appsworld. There, he revealed his aspirations for the Bluetooth protocol to become integral to the Internet of Things.
At the front end of Bluetooth 4.1, the biggest change for users is that the retry duration for lost devices has been increased to a full three minutes, so if you wander off with your wireless headphones still on, there’s more of a chance of being able to seamlessly carry on listening upon your return.
Behind the scenes, devices fitted with Bluetooth 4.1 will be able to act as both hub and end point. The advantage of this is that multiple devices can share information between them without going via the host device, so your smartwatch can talk to your heart monitor and send the combined data in a single transmission to your smartphone.
This sort of “pooling” of devices represents an “extranet of things”, and the technology can therefore be applied to a wider area in forming the “Internet of Things” too.
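A toy model makes that data flow concrete: the watch plays hub towards the heart monitor and endpoint towards the phone, so the phone receives one combined payload instead of maintaining two links (the device names and payloads here are invented for illustration):

```python
# Toy sketch of Bluetooth 4.1's dual hub/endpoint role.
class HeartMonitor:
    def read(self):
        return {"bpm": 72}

class SmartWatch:
    def __init__(self, sensor):
        self.sensor = sensor      # hub role: the watch polls the sensor

    def report(self):
        # Endpoint role: merge local and sensor data into one payload
        # for the phone, instead of each device talking to it directly.
        return {"steps": 8412, **self.sensor.read()}

phone_inbox = SmartWatch(HeartMonitor()).report()
print(phone_inbox)                # {'steps': 8412, 'bpm': 72}
```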
The other major additions are better isolation techniques to ensure that Bluetooth, which broadcasts on the unlicensed 2.4GHz band, doesn’t interfere either with itself or with signals from other protocols broadcasting at similar frequencies, including WiFi.
The Bluetooth protocol has retained complete backwards compatibility, so a new Bluetooth 4.1 enabled device will work seamlessly with a Bluetooth 1.0 dongle bought in a pound shop.
In addition, Bluetooth 4.0 devices can be Bluetooth 4.1 enabled through patches, so we should see some Bluetooth 4.1 enabled hardware arrive early in 2014.
IBM is in the throes of developing software that will allow organizations to use multiple cloud storage services interchangeably, reducing dependence on any single cloud vendor and ensuring that data remains available even during service outages.
Although the software, called InterCloud Storage (ICStore), is still in development, IBM is inviting its customers to test it. Over time, the company will fold the software into its enterprise storage portfolio, enabling those products to back up data to the cloud. The current test iteration requires an IBM Storwize storage system to operate.
ICStore was developed in response to customer inquiries, said Thomas Weigold, who leads the IBM storage systems research team in IBM’s Zurich, Switzerland, research facility, where the software was created. Customers are interested in cloud storage services but are worried about entrusting data to third-party providers, both in terms of security and the reliability of the service, he said.
The software provides a single interface that administrators can use to spread data across multiple cloud vendors. Administrators can specify which cloud providers to use through a point-and-click interface. Both file and block storage are supported, though not object storage. The software contains mechanisms for encrypting data so that it remains secure as it crosses the network and resides on the external storage services.
A number of software vendors offer similar cloud storage broker capabilities, all in various stages of completion, notably Red Hat’s Deltacloud and Hewlett-Packard’s Public Cloud.
ICStore is more “flexible” than other approaches, said Alessandro Sorniotti, an IBM security and cloud system researcher who also worked on the project. “We give customers the ability to select what goes where, depending on the sensitivity and relevance of data,” he said. Customers can store one copy of their data with one provider and a backup copy with another.
ICStore supports a number of cloud storage providers, including IBM’s SoftLayer, Amazon S3 (Simple Storage Service), Rackspace, Microsoft Windows Azure and private instances of the OpenStack Swift storage service. More storage providers will be added as the software goes into production mode.
“Say, you are using SoftLayer and Amazon, and if Amazon suffers an outage, then the backup cloud provider kicks in and allows you to retrieve data,” from SoftLayer, Sorniotti said.
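IBM hasn’t published ICStore’s interfaces, but the behaviour Sorniotti describes can be sketched generically: write each object to a primary and a backup provider, and fall back to the backup on reads when the primary is unreachable. The classes and method names below are hypothetical, and the encryption layer ICStore also provides is omitted:

```python
class CloudProvider:
    """Toy stand-in for a real cloud storage client (hypothetical API)."""
    def __init__(self, name):
        self.name, self.available, self._blobs = name, True, {}

    def put(self, key, data):
        self._blobs[key] = data

    def get(self, key):
        if not self.available:
            raise ConnectionError(f"{self.name} is unreachable")
        return self._blobs[key]

class BrokeredStore:
    """Replicate every write to two providers; read from the backup
    when the primary is down."""
    def __init__(self, primary, backup):
        self.primary, self.backup = primary, backup

    def put(self, key, data):
        self.primary.put(key, data)       # no single vendor holds the
        self.backup.put(key, data)        # only copy of the data

    def get(self, key):
        try:
            return self.primary.get(key)
        except ConnectionError:
            return self.backup.get(key)   # the backup provider "kicks in"

store = BrokeredStore(CloudProvider("Amazon S3"), CloudProvider("SoftLayer"))
store.put("report.pdf", b"quarterly numbers")
store.primary.available = False           # simulate an Amazon outage
print(store.get("report.pdf"))            # still served, from SoftLayer
```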
ICStore will also allow multiple copies of the software to work together within an enterprise, using a set of IBM patent-pending algorithms developed for data sharing. This ensures that the organization will not run into any upper limits on how much data can be stored.
IBM has about 1,400 patents that relate to cloud computing, according to the company.
Hewlett-Packard reclaimed its server crown from IBM last quarter as the overall market contracted and Taiwanese vendors made big gains selling directly to Internet giants like Google and Facebook, according to an IDC report.
HP expanded its share of the market only modestly from a year earlier, but IBM’s portion declined 4.5 points despite solid mainframe sales, leaving HP in the top spot. HP finished the third quarter with 28.1% of worldwide server revenue to IBM’s 23.4%, IDC said.
But the strongest growth came in the “ODM direct” segment, which IDC broke out for the first time this quarter. The category covers original design manufacturers: Taiwanese firms like Quanta Computer, Wistron Group, Inventec and Compal that sell partial and fully built servers to the big cloud providers.
It’s a growing segment and one that threatens the incumbents. ODMs accounted for 6.5% of server revenue last quarter, up 45.2% from a year earlier, IDC said. If the ODM category were a single vendor, it would be the fourth largest, ahead of Cisco and Oracle.
Almost 80% of the ODMs’ server revenue came from the U.S., primarily from sales to Google, Amazon, Facebook and Rackspace.
Overall, the server market declined 3.7% from a year earlier to $12.1 billion. It was the third consecutive quarter of declining revenue but IDC predicts improvement with a refresh cycle early next year. In terms of units shipped, volumes were about flat year over year, meaning average selling prices dropped.
Volume systems — mostly x86 servers — picked up slightly from last year, with 3.5% revenue growth. But sales of midrange and high-end systems dropped 17.8% and 22.5%, respectively, IDC said.
IBM fared worst of the top 5 vendors, with revenue down 19.4% due to “soft demand for System x and Power Systems,” IDC said. Dell retained third place with 16.2% of revenue, about flat from last year, while Cisco Systems and Oracle tied for fourth.
Cisco saw the most growth of the top vendors, with a nearly 43% revenue jump, IDC said.
The next-generation desktop and mobile Atom is Cherry Trail, built on a 14nm process, and the first parts are expected in late 2014. Intel has been working hard to accelerate the introduction of Atom parts based on the new architecture, and in 2014 it will finally ship Broadwell notebook chips and Cherry Trail Atoms in the same year, both using the new 14nm node.
Cherry View is the notebook SoC version of the chip, based on the new Airmont core, while Cherry Trail is the part meant for tablets. The phone version is based on the Moorefield architecture, and all are expected to show up in late 2014, most likely very late in Q3 2014.
The TDP should go down compared with the Bay Trail platform, as the new 14nm process needs less voltage to hit the same speeds and should produce less heat at the same time. With the 14nm shrink, Intel’s new Atoms should be able to score more fanless design wins.
The significance of 14nm products for mobile phones and tablets lies in the fact that the ARM camp led by Qualcomm, Samsung, MediaTek and a few other players will be struggling to get 20nm designs out of the door in 2014, while Intel will already be at 14nm.
However, Intel still has to integrate LTE inside its mobile phone SoCs, which has traditionally proven to be a tough task. At this time only Qualcomm has on-die LTE, and its LTE-enabled SoCs are under the bonnet of almost every significant ARM-based high-end phone out there.
Only time will tell how successful Intel’s mobile push will be. Even when these 14nm parts show up roughly a year from now, it might be tough for Intel to land high-volume design wins in the phone space.