Juniper Networks Goes OpenStack
Juniper and Mirantis are getting close, with news that the two are to form an OpenStack cloud alliance.
The two companies have signed an engineering partnership that they believe will lead to a reliable, scalable software-defined networking solution.
Mirantis OpenStack will now inter-operate with Juniper Contrail Networking, as well as OpenContrail, an open source software-defined networking system.
The two companies have published a reference architecture for deploying and managing Juniper Contrail Networking with Mirantis OpenStack to simplify deployment and reduce the need for third-party involvement.
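Because Contrail plugs into OpenStack as a standard Neutron networking backend, tenants provision networks the same way whichever SDN controller sits underneath. As a rough sketch of what that looks like in practice, using the openstacksdk library, with the cloud name and all resource names invented purely for illustration:

import openstack

# Connect using an entry named "mirantis" in clouds.yaml (illustrative).
conn = openstack.connect(cloud="mirantis")

# Create a network and subnet through the standard Neutron API; the
# SDN backend (Contrail, NSX, ...) is invisible at this layer, which
# is the point of the integration.
network = conn.network.create_network(name="demo-net")
subnet = conn.network.create_subnet(
    network_id=network.id,
    name="demo-subnet",
    ip_version=4,
    cidr="10.0.0.0/24",
)
print(f"Created {network.name} with subnet {subnet.cidr}")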
Based on OpenStack Juno, Mirantis OpenStack 6.0 will be enhanced in the second quarter by a Fuel plugin that will make it even easier to deploy large-scale clouds in-house.
However, Mirantis has emphasized that the arrival of Juniper to the fold is not a snub to the recently constructed integration with VMware.
Nick Chase of Mirantis explained, “…with this Juniper integration, Mirantis will support BOTH VMware vCenter Server and VMware NSX AND Juniper Networks Contrail Networking. That means that even if they’ve got VMware in their environment, they can choose to use NSX or Contrail for their networking components.
“Of course, all of that begs the question, when should you use Juniper, and when should you use VMware? Like all great engineering questions, the answer is ‘it depends’. How you choose is going to be heavily influenced by your individual situation, and what you’re trying to achieve.”
Juniper outlined its goals for the tie-up as:
– Reduce cost by enabling service providers and IT administrators to easily embrace SDN and OpenStack technologies in their environments
– Remove the complexity of integrating networking technologies in OpenStack virtual data centres and clouds
– Increase the effectiveness of their operations with fully integrated management for the OpenStack and SDN environments through Fuel and Juniper Networks® Contrail SDN Controller
The company is keen to emphasize that this is not meant to be a middle finger at VMware, but rather a demonstration of the freedom of choice offered by open source software. However, it serves as another demonstration of how even the FOSS market is growing increasingly proprietary and competitive.
Medical Data Becoming Valuable To Hackers
The personal information stored in health care records fetches increasingly impressive sums on underground markets, making any company that stores such data a very attractive target for attackers.
“Hackers will go after anyone with health care information,” said John Pescatore, director of emerging security trends at the SANS Institute, adding that in recent years hackers have increasingly set their sights on EHRs (electronic health records).
With medical data, “there’s a bunch of ways you can turn that into cash,” he said. For example, Social Security numbers and mailing addresses can be used to apply for credit cards or get around corporate antifraud measures.
This could explain why attackers have recently targeted U.S. health insurance providers. Last Tuesday, Premera Blue Cross disclosed that the personal details of 11 million customers had been exposed in a hack that was discovered in January. Last month, Anthem, another health insurance provider, said that 78.8 million customer and employee records were accessed in an attack.
Both attacks exposed similar data, including names, Social Security numbers, birth dates, telephone numbers, member identification numbers, email addresses and mailing addresses. In the Premera breach, medical claims information was also accessed.
If the attackers try to monetize this information, the payout could prove lucrative.
Credentials that include Social Security numbers can sell for a couple of hundred dollars since the data’s lifetime is much longer compared to pilfered credit card numbers, said Matt Little, vice president of product development at PKWARE, an encryption software company with clients that include health care providers. Credit card numbers, which go for a few dollars, tend to work only for a handful of days after being reported stolen.
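To put hedged numbers on “lucrative”, here is a back-of-envelope sketch using the figures quoted above; the fraction of records actually monetized is a made-up assumption purely for illustration:

# Back-of-envelope only; fraction_sold is a made-up assumption.
records = 11_000_000      # customers exposed in the Premera breach
fraction_sold = 0.001     # hypothetical: 0.1% of records monetized
record_price = 200        # reported price for credentials with an SSN
card_price = 5            # typical price for a stolen card number

sold = records * fraction_sold
print(f"As health records: ${sold * record_price:,.0f}")  # $2,200,000
print(f"As card numbers:   ${sold * card_price:,.0f}")    # $55,000

Even at a tiny conversion rate, the longer shelf life of identity data multiplies the haul by a factor of 40 over card numbers alone.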
SUSE Brings Hadoop To IBM z Mainframes
SUSE and Apache Hadoop vendor Veristorm are teaming up to bring Hadoop to IBM z and IBM Power systems.
The result will mean that regardless of system architecture, users will be able to run Apache Hadoop within a Linux container on their existing hardware, meaning that more users than ever will be able to process big data into meaningful information to inform their business decisions.
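Part of Hadoop’s appeal here is that a job looks identical to the developer whatever hardware sits underneath. As an illustration, the classic word-count pair of Hadoop Streaming scripts below (file names arbitrary) would run unchanged on x86, Power or System z; they are a minimal sketch, not part of the zDoop product:

#!/usr/bin/env python3
# mapper.py -- emit a (word, 1) pair for every word on stdin.
import sys

for line in sys.stdin:
    for word in line.split():
        print(f"{word}\t1")

#!/usr/bin/env python3
# reducer.py -- sum the counts per word; Hadoop delivers mapper
# output sorted by key, so identical words arrive together.
import sys

current_word, count = None, 0
for line in sys.stdin:
    word, _, value = line.rstrip("\n").partition("\t")
    if word != current_word:
        if current_word is not None:
            print(f"{current_word}\t{count}")
        current_word, count = word, 0
    count += int(value)
if current_word is not None:
    print(f"{current_word}\t{count}")

Both scripts are submitted through the hadoop-streaming jar (its path varies by distribution), which wires their stdin and stdout to the cluster.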
Veristorm’s Data Hub and vStorm Enterprise Hadoop will now be available as zDoop, the first mainframe-compatible Hadoop iteration, running on SUSE Linux Enterprise Server for System z, or on IBM Power8 machines in little-endian mode, which makes it significantly easier for x86-based software to be ported to the IBM platform.
SUSE and Veristorm have also committed to work together on educating partners and channels on the benefits of the overall package.
Naji Almahmoud, head of global business development for SUSE, said: “The growing need for big data processing to make informed business decisions is becoming increasingly unavoidable.
“However, existing solutions often struggle to handle the processing load, which in turn leads to more servers and difficult-to-manage sprawl. This partnership with Veristorm allows enterprises to efficiently analyse their mainframe data using Hadoop.”
Veristorm launched Hadoop for Linux in April of last year, explaining that it “will help clients to avoid staging and offloading of mainframe data to maintain existing security and governance controls”.
Sanjay Mazumder, CEO of Veristorm, said that the partnership will help customers “maximize their processing ability and leverage their richest data sources” and deploy “successful, pragmatic projects”.
SUSE has been particularly active of late, announcing last month that its software-defined Enterprise Storage product, built around the open source Ceph framework, was to become available as a standalone product for the first time.
Target Settles Security Breach
Target is reportedly close to paying out $10m to settle a class-action case that was filed after it was hacked and stripped of tens of millions of people’s details.
Target was smacked by hackers in 2013 in a massive cyber-thwack on its stores and servers that put some 70 million people’s personal information in harm’s way.
The hack has had massive repercussions. People are losing faith in industry and its ability to store their personal data, and the Target incident is a very good example of why people are right to worry.
As well as tarnishing Target’s reputation, the attack also led to a $162m gap in its financial spreadsheets.
The firm apologized to its punters when it revealed the hack, and chairman, CEO and president Gregg Steinhafel said he was sorry that they had to “endure” such a thing.
Now, according to reports, Target is willing to fork out another $10m to put things right, offering the money as a proposed settlement in one of several class-action lawsuits the company is facing. If accepted, the settlement could see affected parties awarded up to $10,000 each for their troubles.
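For scale, a quick bit of arithmetic on the reported figures (illustrative only):

settlement_pot = 10_000_000   # reported settlement fund
max_award = 10_000            # reported cap per affected person
affected = 70_000_000         # people whose data was put at risk

print(settlement_pot // max_award)   # 1,000 maximum-size awards
print(settlement_pot / affected)     # roughly $0.14 per person exposed

In other words, the pot covers at most a thousand maximum-size claims, or about 14 cents for each of the 70 million people affected.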
We have asked Target to either confirm or comment on this, and are waiting for a response. For now we have an official statement at Reuters to turn to. There we see Target spokeswoman Molly Snyder confirming that something is happening but not mentioning the 10 and six zeroes.
“We are pleased to see the process moving forward and look forward to its resolution,” she said.
Not available to comment, not that we asked, will be the firm’s CIO at the time of the hack. Thirty-year Target veteran Beth Jacob left her role in the aftermath of the attack, and a replacement was immediately sought.
“To ensure that Target is well positioned following the data breach we suffered last year, we are undertaking an overhaul of our information security and compliance structure and practices at Target,” said Steinhafel then.
“As a first step in this effort, Target will be conducting an external search for an interim CIO who can help guide Target through this transformation.”
“Transformational change” pro Bob DeRodes took on the role in May last year and immediately began saying the right things.
“I look forward to helping shape information technology and data security at Target in the days and months ahead,” he said.
“It is clear to me that Target is an organization that is committed to doing whatever it takes to do right by their guests.”
We would ask Steinhafel for his verdict on DeRodes so far and the $10m settlement but, would you believe it, he’s not at Target anymore either, having left last summer with a reported $61m golden parachute.
IBM Debuts New Mainframe
IBM has started shipping its all-new z13 mainframe computer.
IBM has high hopes that the upgraded model will generate solid sales, based not only on usual customer upgrade patterns but also on a design aimed at helping customers cope with expanding mobile usage, data analysis, tighter security and more “cloud” remote computing.
Mainframes are still a major part of the Systems and Technology Group at IBM, which overall contributed 10.8 percent of IBM’s total 2014 revenues of $92.8 billion. But the z Systems and their predecessors also generate revenue from software, leasing and maintenance and thus have a greater financial impact on IBM’s overall picture.
The new mainframe’s claim to fame is its use of simultaneous multi-threading (SMT) to execute two instruction streams (or threads) on a processor core, which delivers more throughput for Linux on z Systems and IBM z Integrated Information Processor (zIIP) eligible workloads.
There is also Single Instruction Multiple Data (SIMD), a vector processing model providing instruction-level parallelism, to speed workloads such as analytics and mathematical modeling. All this means COBOL 5.2 and PL/I 4.5 can exploit SIMD and improved floating-point enhancements to deliver performance over and above that provided by the faster processor.
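To make the SIMD idea concrete: instead of looping over elements one at a time, a single instruction is applied across a whole vector of them. A rough Python/numpy analogy, purely illustrative and nothing to do with z13 machine code:

import numpy as np

# Scalar version: one multiply-add per loop iteration.
def saxpy_scalar(a, x, y):
    out = [0.0] * len(x)
    for i in range(len(x)):
        out[i] = a * x[i] + y[i]
    return out

# Vector version: the multiply-add is applied across the whole array
# at once -- the "one instruction, many data elements" idea that
# SIMD hardware implements.
def saxpy_vector(a, x, y):
    return a * x + y

x = np.arange(100_000, dtype=np.float64)
y = np.ones_like(x)
assert np.allclose(saxpy_vector(2.0, x, y), saxpy_scalar(2.0, x, y))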
Its on-chip cryptographic and compression coprocessors receive a performance boost, improving both general processor and Integrated Facility for Linux (IFL) cryptographic performance and allowing more data to be compressed, helping to save disk space and reduce data transfer time.
There is also a redesigned cache architecture, using eDRAM technology to provide twice as much second-level cache and substantially more third- and fourth-level cache than the zEC12. Bigger and faster caches help to avoid untimely swaps and memory waits while maximising the throughput of concurrent workloads.
Tom McPherson, vice president of z System development, said that the new model was not just about microprocessors, though it has many eight-core chips in it. Everything has to be cooled by a combination of water and air, and semiconductor scaling is slowing down, so “you have to get the value by optimizing”.
The first real numbers on how the z13 is selling won’t be public until comments are made in IBM’s first-quarter report, due out in mid-April, when a little more than three weeks’ worth of billings will flow into it.
The company’s fiscal fortunes have sagged, drawing mixed reviews from analysts and the blogosphere alike, much of them revolving around IBM’s lag in cloud services. IBM is positioning the mainframe as a prime cloud server, one of the systems that cloud computing actually runs on.
HGST To Debut 10TB HD
HGST has revealed the world’s first 10TB hard drive, but you probably won’t be installing one anytime soon.
The company has been working on the 10TB SMR HelioSeal hard drive for months and now it is almost ready to hit the market.
The drive uses Shingled Magnetic Recording (SMR) to boost density, enabling HGST to cram more data onto every platter. ZDNet got a quick peek at the drive at a Linux event in Boston, which also featured a burning effigy of Nick Farrell.
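Shingling overlaps adjacent tracks like roof tiles, which is what boosts density, and also why SMR capacity behaves as append-only: rewriting one track clobbers whatever is shingled on top of it. A toy model sketches the constraint; the SmrZone class is hypothetical, as real drives handle this in firmware or host software:

# Toy model of a shingled zone; names are hypothetical.
class SmrZone:
    def __init__(self, tracks):
        self.data = [None] * tracks
        self.write_pointer = 0  # next track that is safe to write

    def append(self, payload):
        # Sequential writes are cheap: just advance the pointer.
        if self.write_pointer >= len(self.data):
            raise IOError("zone full; reset the whole zone to reuse it")
        self.data[self.write_pointer] = payload
        self.write_pointer += 1

    def rewrite(self, track, payload):
        # An in-place rewrite invalidates every track shingled on top
        # of it, so all later data must be written again.
        self.data[track] = payload
        for t in range(track + 1, self.write_pointer):
            self.data[t] = None
        self.write_pointer = track + 1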
Although we’ve covered some SMR drives in the past, the technology is still not very mature and so far it’s been limited to niche drives and enterprise designs, not consumer hard drives. HGST’s new drive is no exception – it is designed for data centers rather than PCs. While you won’t use it to store your music and video, you might end up streaming them from one of these babies.
Although data centres are slowly turning to SSDs for many hosts, even on cheap shared hosting packages, there is still a need for affordable mechanical storage. Even when hard drives are completely phased out from the frontline, they still have a role to play as backup drives.
Can Linux Succeed On The Desktop?
Every three years I install Linux and see if it is ready for prime time yet, and every three years I am disappointed. What is so disappointing is not so much that the operating system is bad, it never has been; it is just that whoever designs it refuses to think of the user.
To be clear I will lay out the same rider I have for my other three reviews. I am a Windows user, but that is not out of choice. One of the reasons I keep checking out Linux is the hope that it will have fixed the basic problems in the intervening years. Fortunately for Microsoft it never has.
This time my main computer had a serious outage caused by a dodgy Corsair (which is now a c word) power supply, and I have been out of action for the last two weeks. In the meantime I had to run everything on a clapped-out Fujitsu notebook which took 20 minutes to download a webpage.
One Ubuntu Linux install later it was behaving like a normal computer. This is where Linux has always been far better than Windows: making rubbish computers behave. I could settle down to work, right? Well, not really.
This is where Linux has consistently disqualified itself from prime-time every time I have used it. Going back through my reviews, I have been saying the same sort of stuff for years.
Coming from Windows 7, where a user can install the OS and start work with no learning curve, Ubuntu is impossible. There is a ton of stuff you have to download before you can get anything that passes for an ordinary service, and this downloading is far too tricky for anyone who is used to Windows.
It is not helped by the Ubuntu Software Centre, which is supposed to make life easier for you. Say that you need to download a flash player. Adobe has a Flash player you can download for Ubuntu. Click on it and Ubuntu asks if you want to open the file with the Ubuntu Software Centre to install it. You would think you would want this, right? The thing is, pressing yes opens the Software Centre but does not install the Flash player; the Centre then says it can’t find the software on your machine.
Here is the problem which I wrote about nearly nine years ago – you can’t download Flash or anything proprietary because that would mean contaminating your machine with something that is not Open Sauce.
Sure, Ubuntu will download all those proprietary drivers, but you have to know to ask – an issue which has been around for so long now it is silly. The issue of proprietary drivers is only a problem for hardcore open saucers, and there are not enough of them to keep an operating system in the dark ages for a decade. However, they have managed it.
I downloaded LibreOffice and all those other things needed to get a basic “windows experience” and discovered that all those typefaces you know and love are unavailable. They should have been in the proprietary pack but Ubuntu has a problem installing them. This means that I can’t share documents in any meaningful way with Windows users, because all my formatting is screwed.
LibreOffice is not bad, but it really is not Microsoft Word and anyone who tries to tell you otherwise is lying.
I downloaded and configured Thunderbird for mail, and for a few good days it actually worked. However, yesterday it disappeared from the sidebar and I can’t find it anywhere. I am restricted to webmail and I am really hating Microsoft’s Outlook experience.
The only thing that is different between this review and the one I wrote three years ago is that there are now games which actually work, thanks to Steam. I have not tried this out yet because I am too stressed with the work backlog caused by having to work on Linux without regular software, but there is an element of feeling that Linux is at last moving to a point where it can be a little bit useful.
So what are the main problems that Linux refuses to address? Usability, interface and compatibility.
I know Ubuntu is famous for its shit interface, and Gnome is supposed to be better, but both look and feel dated. I also hate Windows 8’s interface, which requires you to use all your computing power to navigate a touchscreen tablet interface when you have neither a touchscreen nor a tablet. It should have been an opportunity for open saucers to trump Windows with a nice interface – it wasn’t.
You would think that all the brains in the Linux community could come up with a simple, easy-to-use interface which lets you access all the files you need without much trouble. The problem is that Linux fans like to tinker; they don’t want usability and they don’t have problems with command screens. Ordinary users, particularly those of more recent generations, will not go near a command screen.
Compatibility issues for games have been pretty much resolved, but other key software is missing and Linux operators do not seem keen to get it on board.
I do a lot of layout and graphics work. When you complain about not being able to use Photoshop, Linux fanboys proudly point to GIMP and say it does the same things. You want to grab them by the throat and stuff their heads down the loo and flush. GIMP does less than a tenth of what Photoshop can do, and it does it very badly. There is nothing available on Linux that can do what Creative Suite or any real desktop publisher can do.
Proprietary software designed for real people using a desktop tends to trump anything open saucy, even if it is producing a technology marvel.
So in all these years, Linux has not attempted to fix any of the problems which have effectively crippled it as a desktop product.
I will look forward to next week when the new PC arrives and I will not need another Ubuntu desktop experience. Who knows, maybe they will have sorted it in another three years’ time.
HGST Buys Amplidata
HGST announced the acquisition of Belgian software-defined storage provider Amplidata.
Amplidata has been instrumental in HGST’s Active Archive elastic storage solution unveiled at the company’s Big Bang event last September in San Francisco.
Use of Amplidata’s Himalaya distributed storage system, combined with HGST’s unique helium-filled drives, creates systems that can store 10 petabytes in a single rack, designed for cold storage literally and figuratively.
Dave Tang, senior vice president and general manager of HGST’s Elastic Storage Platforms Group, said “Software-defined storage solutions are essential to scale-out storage of the type we unveiled in September. The software is vital to ensuring the durability and scalability of systems.”
Steve Milligan, president and chief executive of Western Digital, added: “We have had an ongoing strategic relationship with Amplidata that included investment from Western Digital Capital and subsequent joint development activity.
“Amplidata has deep technical expertise, an innovative spirit, and valuable intellectual property in this fast-growing market space.
“The acquisition will support our strategic growth initiatives and broaden the scope of opportunity for HGST in cloud data centre storage infrastructure.”
The acquisition is expected to be completed in the first quarter of the year. No financial terms were disclosed.
Amplidata will ultimately be incorporated into the HGST Elastic Storage Platforms Group, a recognition of the fact that every piece of hardware is, in part, software.
Mike Cordano, president of HGST, said at last year’s Big Bang event: “We laugh when we hear that we’re a hardware company. People don’t realise there’s over a million lines of code in that drive. That’s what the firmware is.
“What we’re starting to do now is add software to that and, along with the speed of the PCI-e interface, that makes a much bigger value proposition.”
IBM Goes Bare Metal
IBM has announced the availability of OpenPower servers as part of the firm’s SoftLayer bare metal cloud offering.
OpenPower, a collaborative foundation run by IBM in conjunction with Google and Nvidia, offers a more open approach to IBM’s Power architecture, and a more liberal licence for the code, in return for shared wisdom from member organisations.
Developed in conjunction with Tyan and Mellanox Technologies, both partners in the foundation, the bare metal servers are designed to help organisations easily and quickly extend infrastructure in a customized manner.
“The new OpenPower-based bare metal servers make it easy for users to take advantage of one of the industry’s most powerful and open server architectures,” said Sonny Fulkerson, CIO at SoftLayer.
“The offering allows SoftLayer to deliver a higher level of performance, predictability and dependability not always possible in virtualised cloud environments.”
Initially, servers will run Linux applications and will be based on the IBM Power8 architecture in the same mold as IBM Power system servers.
This will later expand to the Power ecosystem and then to independent software vendors that support Linux on Power application development, and are migrating applications from x86 to the Power architecture.
OpenPower servers are based on open source technology that extends right down to the silicon level, and can allow highly customised servers ranging from physical to cloud, or even hybrid.
Power systems are already installed in SoftLayer’s Dallas data centre, and there are plans to expand to data centres throughout the world. The system was first rolled out in 2014 as part of the Watson portfolio.
Prices will be announced when general availability arrives in the second quarter.
Qualcomm Goes Ultrasonic
Qualcomm has unveiled what it claims is the world’s first ‘ultrasonic’ fingerprint scanner, in a bid to improve mobile security and further boost Android’s chances in the enterprise space.
The Qualcomm Snapdragon Sense ID 3D Fingerprint technology debuted during the chipmaker’s Mobile World Congress (MWC) press conference on Monday.
The firm claimed that the new feature will outperform the fingerprint scanners found on smartphones such as the iPhone 6 and Galaxy S6.
Qualcomm also claimed that, as well as “better protecting user data”, the 3D ultrasonic imaging technology is much more accurate than capacitive solutions currently available, and is not hindered by greasy or sweaty fingers.
Sense ID offers a more “innovative and elegant” design for manufacturers, the firm said, owing to its ability to scan fingerprints through any material, be it glass, metal or sapphire.
This means, in theory, that future fingerprint sensors could be included directly into a smartphone’s display.
Derek Aberle, Qualcomm president, said: “This is another industry first for Qualcomm and has the potential to revolutionise mobile security.
“It’s also another step towards the end of the password, and could mean that you’ll never have to type in a password on your smartphone again.”
No specific details or partners have yet been announced, but Qualcomm said that the Sense ID technology will arrive in devices in the second half of 2015, when the firm’s next-generation Snapdragon 820 processor is also tipped to debut.
The firm didn’t reveal many details about this chip, except that it will feature Kryo 64-bit CPU tech and a new machine learning feature dubbed Zeroth.
Qualcomm also revealed more details about LTE-U during Monday’s press conference, confirming plans to extend LTE to unused spectrum using technology integrated in its latest small-cell solutions and RF transceivers for mobile devices.
“We face many challenges as demand for data constantly grows, and we think the best way to fix this is by taking advantage of unused spectrum,” said Aberle.
Finally, the chipmaker released details of a new partnership with Cyanogen, the open-source outfit responsible for the CyanogenMod operating system.
Qualcomm said that it will provide support for the best features and UI enhancements of CyanogenMod on Snapdragon processors, available with the release of the Qualcomm Reference Design in April.
The MWC announcements follow the launch of the ARM Cortex-based Snapdragon 620 and 618 chips last month, which promise to improve connectivity and user experience on high-end smartphones and tablets.
Aberle said that these chips will begin to show up in devices in mid to late 2015.