
Russia Banking On Home Grown CPUs

May 28, 2015
Filed under Computing

A Russian firm announced its intention to build its own homegrown CPUs as part of a cunning plan to keep the Americans from spying on the glorious Empire of Tsar Putin and oil oligarchs.

Moscow Centre of SPARC Technologies (MCST) has announced it’s now taking orders for its Russian-made microprocessors from domestic computer and server manufacturers.

Dubbed the Elbrus-4C, the chip was fully designed and developed in MCST’s Moscow labs and is claimed to be the most advanced processor ever built in Russia. MCST says it is comparable with Intel’s Core i3 and Core i5 processors, although it does not say which generation, and one spec sheet we found put its clock speed at a blistering 1.3GHz, which is slightly less than the average mobile phone.

MCST unveiled a new PC, the Elbrus ARM-401, which is powered by the Elbrus-4C chip and runs the firm’s own Linux-based Elbrus operating system, although MCST claimed it can also run Windows and other Linux distributions. The company has also built a data centre server rack, the Elbrus-4.4, which is powered by four Elbrus-4C microprocessors and supports up to 384GB of RAM.

MCST said the Elbrus-4.4 is suitable for web servers, database servers, storage systems, remote desktops and high-performance clusters.

Sergei Viljanen, editor-in-chief of the Russian-language PCWorld website, said that the chip is about five years behind the west.

“Russian processor technology is still about five years behind the west. Intel’s chips come with a 14nm design, whereas the Elbrus is 65 nm, which means they have a much higher energy consumption.”

The Elbrus-4C is a four-core processor and comes with interfaces for hard drives and other peripherals. The company finalized development of the Elbrus-4C in April 2014, and began mass production last autumn.

Source

Intel Rewards RealSense Developers

May 21, 2015
Filed under Computing

Intel has awarded $1m to a number of developers as part of its RealSense 3D App Challenge, which was launched last year.

Announced by Intel president Renee James at Computex 2014, the RealSense App Challenge was part of Intel’s efforts to boost RealSense globally and generate software innovation around the ecosystem.

More than 7,000 software creators in 37 countries applied to compete, and 400 were selected to develop new applications for entertainment, learning and collaboration.

Several hundred developers with creative app ideas in these categories received the latest edition of the RealSense 3D camera and the RealSense software development kit, which included free tools, examples and application programming interfaces with which to develop their ideas.

Intel announced on Thursday that the grand prize winner, who picks up $100,000, is Brazilian developer Alexandre Ribeiro da Silva of Anima Games.

His Seed app requires gamers to use reflexes and rational thinking to solve puzzles. The goal of the game is to guide a little floating seed through its journey to reforest a devastated land.

The second prize of $50,000 was awarded to Canadian developer David Schnare of Kinetisense. His OrthoSense app uses RealSense to help medical professionals remotely rehabilitate a patient who has suffered a hand injury by tracking their range of movement over time.

“This practical application of human-computer interaction is an impressive example of how technology can make our lives better,” Intel said.

Another notable winner was Lee Bamber from the UK, who received recognition for his virtual 3D video maker. The app allows a user to record themselves as a 3D hologram and then be transported into a variety of scenes.

Once recorded, they can then change the camera position over the course of the playback to add an extra dimension to video blogs, storybooks or v-mails, for instance.

“The idea of the app is that you can choose the backdrop then set the lighting as you would in a studio then do the acting,” Bamber explained in his video.

Doug Fisher, SVP and general manager of Intel’s Software and Services Group, said in a blog post that now the app challenge is complete “the real work begins”, as Intel Software will continue to encourage all finalists to bring products to market.

“We also will continue mobilising our resources to inspire, educate and advance innovation through programmes such as the Intel Developer Zone, where developers can engage to find new software tools and build industry relationships,” he said.

“Human-computer interactions will no longer be defined by mice, keyboards and 2D displays. Our physical and digital worlds are coming together. When they do, the opportunities for us as consumers and businesses will explode.”

Source

USAA Exploring Bitcoins

May 20, 2015
Filed under Around The Net

USAA, a San Antonio, Texas-based financial institution serving current and former members of the military, is researching the underlying technology behind the digital currency bitcoin to help make its operations more efficient, a company executive said.

Alex Marquez, managing director of corporate development at USAA, said in an interview that the company and its banking, insurance, and investment management subsidiaries hoped the “blockchain” technology could help decentralize its operations such as the back office.

He said USAA had a large team researching the potential of the blockchain, an open ledger of a digital currency’s transactions, viewed as bitcoin’s main technological innovation. It lets users make payments anonymously, instantly, and without government regulation.

The blockchain ledger is accessible to all users of bitcoin, a virtual currency created through a computer “mining” process that uses millions of calculations. Bitcoin has no ties to a central bank and is viewed as an alternative to paying for goods and services with credit cards.
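To make the mechanics concrete, here is a toy Python sketch of a hash-linked ledger with a simple proof-of-work loop. It illustrates the general idea only – the transactions and the difficulty setting are invented for the example, and it is not USAA’s or bitcoin’s actual implementation.

    import hashlib
    import json
    import time

    def hash_block(block):
        # Hash the block's entire contents, bitcoin-style, with SHA-256.
        return hashlib.sha256(json.dumps(block, sort_keys=True).encode()).hexdigest()

    def mine_block(prev_hash, transactions, difficulty=4):
        # "Mining": brute-force a nonce until the block hash starts with
        # enough zeros - the bulk calculation the article refers to.
        block = {"time": time.time(), "tx": transactions,
                 "prev_hash": prev_hash, "nonce": 0}
        while not hash_block(block).startswith("0" * difficulty):
            block["nonce"] += 1
        return block

    # Each block stores the hash of the block before it, so the ledger
    # cannot be rewritten without redoing all the work that follows.
    chain = [mine_block("0" * 64, ["genesis"])]
    chain.append(mine_block(hash_block(chain[0]), ["alice pays bob 1 BTC"]))
    chain.append(mine_block(hash_block(chain[1]), ["bob pays carol 0.5 BTC"]))

Anyone holding a copy of the chain can verify it by re-hashing each block and checking the prev_hash links, which is what makes the ledger “open” in the sense described above.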

“We have serious interest in the blockchain and we think the technology would have an impact on the organization,” said Marquez. “The fact that we have such a large group of people working on this shows how serious we are about the potential of this technology.”

USAA, which provides banking, insurance and other products to 10.7 million current or former members of the military, owns and manages assets of about $213 billion.

Marquez said USAA had no plans to dabble in bitcoin as a currency. Its foray into the blockchain reflects a trend among banking institutions trying to integrate bitcoin technology into their systems; BNY Mellon and UBS have announced initiatives to explore the blockchain technology.

Most large banks are testing the blockchain internally, said David Johnston, managing director at Dapps Venture Fund in San Antonio, Texas. “All of the banks are going through that process of trying to understand how this technology is going to evolve.”

“I would say that by the end of the year, most will have solidified a blockchain technology strategy, how the bank is going to implement and how it will move the technology forward.”

USAA is still in early stages of its research and has yet to identify how it will implement the technology.

In January this year, USAA invested in Coinbase, the biggest bitcoin company, which runs a host of services including an exchange and a wallet service that lets users store bitcoins online.

Source

Oracle Launches OpenStack Platform With Intel

April 7, 2015
Filed under Computing

Oracle and Intel have teamed up for the first demonstration of carrier-grade network function virtualization (NFV), which will allow communication service providers to use a virtualized, software-defined model without degradation of service or reliability.

The Oracle-led project uses the Intel Open Network Platform (ONP) to create a robust service over NFV, using intelligent direction of software to create viable software-defined networking that replaces the clunky equipment still prevalent in even the most modern networks.

Barry Hill, Oracle’s global head of NFV, told The INQUIRER: “It gets us over one of those really big hurdles that the industry is desperately trying to overcome: ‘Why the heck have we been using this very tightly coupled hardware and software in the past if you can run the same thing on standard, generic, everyday hardware?’. The answer is, we’re not sure you can.

“What you’ve got to do is be smart about applying the right type and the right sort of capacity, which is different for each function in the chain that makes up a service.

“That’s about being intelligent with what you do, instead of making some broad statement about generic vanilla infrastructures plugged together. That’s just not going to work.”

Oracle’s answer is to use its Communications Network Service Orchestration Solution to control the OpenStack system and shrink and grow networks according to customer needs.

Use cases could be scaling out a carrier network for a rock festival, or transferring network priority to a disaster recovery site.

“Once you understand the extent of what we’ve actually done here, you start to realize just how big an announcement this is,” said Hill.

“On the fly, you’re suddenly able to make these custom network requirements instantly, just using off-the-shelf technology.”

The demonstration configuration optimizes the performance of an Intel Xeon E5-2600 v3 processor designed specifically for networking, and shows for the first time a software-defined solution which is comparable to the hardware-defined systems currently in use.

In other words, it can orchestrate services from the management and orchestration level right down to a single core of a single processor, and then hyperscale them using resource pools to mimic the specialized characteristics of a network appliance, such as large memory pages.
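As a rough illustration of what “down to a single core” means, Linux exposes the same placement primitive to any process. The Python sketch below is Linux-only and the core number is an arbitrary example; the real Oracle/Intel system works through OpenStack scheduling hints rather than direct system calls, but the effect at the bottom of the stack is the same.

    import os

    def pin_to_core(core_id):
        # Restrict the calling process (pid 0 means "self") to one CPU core,
        # which is what an orchestrator does when it dedicates a core to a
        # virtual network function.
        os.sched_setaffinity(0, {core_id})

    print("allowed cores before:", os.sched_getaffinity(0))  # e.g. {0, 1, 2, 3}
    pin_to_core(2)  # arbitrary core chosen for the example
    print("allowed cores after: ", os.sched_getaffinity(0))  # {2}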

“It’s kind of like the effect that mobile had on fixed line networks back in the mid-nineties where the whole industry was disrupted by who was providing the technology, and what they were providing,” said Hill.

“Suddenly you went from 15-year business plans to five-year business plans. The impact of virtualization will have the same level of seismic change on the industry.”

Today’s announcement is fundamentally a proof-of-concept, but the technology that powers this kind of next-generation network is already evolving its way into networks.

Hill explained that carrier demand had led to the innovation. “The telecoms industry had a massive infrastructure that works at a very slow pace, at least in the past,” he said.

“However, this whole virtualization push has really been about the carriers, not the vendors, getting together and saying: ‘We need a different model’. So it’s actually quite advanced already.”

NFV appears to be the next gold rush area for enterprises, and other consortiums are expected to make announcements about their own solutions within days.

The Oracle/Intel system is based around OpenStack, and the company is confident that it will be highly compatible with other systems.

The ‘Oracle Communications Network Service Orchestration Solution with Enhanced Platform Awareness using the Intel Open Network Platform’ – or OCNSOSWEPAUTIONP as we like to think of it – is currently on display at Oracle’s Industry Connect event in Washington DC.

The INQUIRER wonders whether there is any way the marketing department can come up with something a bit more catchy than OCNSOSWEPAUTIONP before it goes on open sale.

Source

Juniper Networks Goes OpenStack

April 3, 2015
Filed under Computing

Juniper and Mirantis are getting close, with news that they are to form a cloud OpenStack alliance.

The two companies have signed an engineering partnership that they believe will lead to a reliable, scalable software-defined networking solution.

Mirantis OpenStack will now inter-operate with Juniper Contrail Networking, as well as OpenContrail, an open source software-defined networking system.

The two companies have published a reference architecture for deploying and managing Juniper Contrail Networking with Mirantis OpenStack to simplify deployment and reduce the need for third-party involvement.

Based on OpenStack Juno, Mirantis OpenStack 6.0 will be enhanced in the second quarter by a Fuel plugin that will make it even easier to deploy large-scale clouds in-house.

However, Mirantis has emphasized that bringing Juniper into the fold is not a snub to its recently announced integration with VMware.

Nick Chase of Mirantis explained, “…with this Juniper integration, Mirantis will support BOTH VMware vCenter Server and VMware NSX AND Juniper Networks Contrail Networking. That means that even if they’ve got VMware in their environment, they can choose to use NSX or Contrail for their networking components.

“Of course, all of that begs the question, when should you use Juniper, and when should you use VMware? Like all great engineering questions, the answer is ‘it depends’. How you choose is going to be heavily influenced by your individual situation, and what you’re trying to achieve.”

Juniper outlined its goals for the tie-up as:

– Reduce cost by enabling service providers and IT administrators to easily embrace SDN and OpenStack technologies in their environments

– Remove the complexity of integrating networking technologies in OpenStack virtual data centres and clouds

– Increase the effectiveness of their operations with fully integrated management for the OpenStack and SDN environments through Fuel and Juniper Networks® Contrail SDN Controller

The company is keen to emphasize that this is not meant to be a middle finger at VMware, but rather a demonstration of the freedom of choice offered by open source software. However, it serves as another demonstration of how even the FOSS market is growing increasingly proprietary and competitive.

Source

Medical Data Becoming Valuable To Hackers

April 2, 2015
Filed under Computing

The personal information stored in health care records fetches increasingly impressive sums on underground markets, making any company that stores such data a very attractive target for attackers.

“Hackers will go after anyone with health care information,” said John Pescatore, director of emerging security trends at the SANS Institute, adding that in recent years hackers have increasingly set their sights on EHRs (electronic health records).

With medical data, “there’s a bunch of ways you can turn that into cash,” he said. For example, Social Security numbers and mailing addresses can be used to apply for credit cards or get around corporate antifraud measures.

This could explain why attackers have recently targeted U.S. health insurance providers. Last Tuesday, Premera Blue Cross disclosed that the personal details of 11 million customers had been exposed in a hack that was discovered in January. Last month, Anthem, another health insurance provider, said that 78.8 million customer and employee records were accessed in an attack.

Both attacks exposed similar data, including names, Social Security numbers, birth dates, telephone numbers, member identification numbers, email addresses and mailing addresses. In the Premera breach, medical claims information was also accessed.

If the attackers try to monetize this information, the payout could prove lucrative.

Credentials that include Social Security numbers can sell for a couple of hundred dollars since the data’s lifetime is much longer compared to pilfered credit card numbers, said Matt Little, vice president of product development at PKWARE, an encryption software company with clients that include health care providers. Credit card numbers, which go for a few dollars, tend to work only for a handful of days after being reported stolen.

Source

SUSE Brings Hadoop To IBM z Mainframes

April 1, 2015
Filed under Computing

SUSE and Apache Hadoop vendor Veristorm are teaming up to bring Hadoop to IBM z and IBM Power systems.

The result is that, regardless of system architecture, users will be able to run Apache Hadoop within a Linux container on their existing hardware, meaning that more users than ever will be able to process big data into meaningful information to inform their business decisions.

Veristorm’s Data Hub and vStorm Enterprise Hadoop will now be available as zDoop, the first mainframe-compatible Hadoop iteration, running on SUSE Linux Enterprise Server for System z, and on IBM Power8 machines in little-endian mode, which makes it significantly easier to port x86-based software to the IBM platform.
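For a flavour of the sort of job zDoop runs, here is the canonical word count written for Hadoop Streaming, which lets anything that reads stdin and writes stdout act as mapper and reducer. The scripts are a generic sketch, not Veristorm code.

    # mapper.py - emit a (word, 1) pair for every word on stdin
    import sys

    for line in sys.stdin:
        for word in line.split():
            print("%s\t%d" % (word.lower(), 1))

    # reducer.py - Hadoop delivers the pairs sorted by key, so counts for
    # the same word arrive together and can be summed in a single pass
    import sys

    current, total = None, 0
    for line in sys.stdin:
        word, count = line.rsplit("\t", 1)
        if word != current:
            if current is not None:
                print("%s\t%d" % (current, total))
            current, total = word, 0
        total += int(count)
    if current is not None:
        print("%s\t%d" % (current, total))

Submitted with the hadoop-streaming JAR against a directory of input text, the same pair of scripts runs unchanged whether the cluster underneath is x86, System z or Power.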

SUSE and Veristorm have also committed to work together on educating partners and channels on the benefits of the overall package.

Naji Almahmoud, head of global business development for SUSE, said: “The growing need for big data processing to make informed business decisions is becoming increasingly unavoidable.

“However, existing solutions often struggle to handle the processing load, which in turn leads to more servers and difficult-to-manage sprawl. This partnership with Veristorm allows enterprises to efficiently analyse their mainframe data using Hadoop.”

Veristorm launched Hadoop for Linux in April of last year, explaining that it “will help clients to avoid staging and offloading of mainframe data to maintain existing security and governance controls”.

Sanjay Mazumder, CEO of Veristorm, said that the partnership will help customers “maximize their processing ability and leverage their richest data sources” and deploy “successful, pragmatic projects”.

SUSE has been particularly active of late, announcing last month that its software-defined Enterprise Storage product, built around the open source Ceph framework, was to become available as a standalone product for the first time.

Source

Target Settles Security Breach

March 30, 2015
Filed under Computing

Target is reportedly close to paying out $10m to settle a class-action case that was filed after it was hacked and stripped of tens of millions of peoples’ details.

Target was smacked by hackers in 2013 in a massive cyber-thwack on its stores and servers that put some 70 million people’s personal information in harm’s way.

The hack has had massive repercussions. People are losing faith in industry and its ability to store their personal data, and the Target incident is a very good example of why people are right to worry.

As well as tarnishing Target’s reputation, the attack also led to a $162m gap in its financial spreadsheets.

The firm apologized to its punters when it revealed the hack, and chairman, CEO and president Gregg Steinhafel said he was sorry that they had to “endure” such a thing.

Now, according to reports, Target is willing to fork out another $10m to put things right, offering the money as a proposed settlement in one of several class-action lawsuits the company is facing. If accepted, the settlement could see affected parties awarded up to $10,000 each for their troubles.

We have asked Target to either confirm or comment on this, and are waiting for a response. For now we have an official statement at Reuters to turn to. There we see Target spokeswoman Molly Snyder confirming that something is happening but not mentioning the 10 and six zeroes.

“We are pleased to see the process moving forward and look forward to its resolution,” she said.

Not available to comment, not that we asked, will be the firm’s CIO at the time of the hack. Thirty-year Target veteran Beth Jacob left her role in the aftermath of the attack, and a replacement was immediately sought.

“To ensure that Target is well positioned following the data breach we suffered last year, we are undertaking an overhaul of our information security and compliance structure and practices at Target,” said Steinhafel then.

“As a first step in this effort, Target will be conducting an external search for an interim CIO who can help guide Target through this transformation.”

“Transformational change” pro Bob DeRodes took on the role in May last year and immediately began saying the right things.

“I look forward to helping shape information technology and data security at Target in the days and months ahead,” he said.

“It is clear to me that Target is an organization that is committed to doing whatever it takes to do right by their guests.”

We would ask Steinhafel for his verdict on DeRodes so far and the $10m settlement but, would you believe it, he’s not at Target anymore either, having left last summer with a reported $61m golden parachute.

Source

IBM Debuts New Mainframe

March 27, 2015
Filed under Computing

IBM has started shipping the first of its all-new z13 mainframe computers.

IBM has high hopes that the upgraded model will generate solid sales, based not only on usual customer upgrade patterns but also on a design aimed at helping customers cope with expanding mobile usage, data analysis, tighter security and more “cloud” remote computing.

Mainframes are still a major part of the Systems and Technology Group at IBM, which overall contributed 10.8 percent of IBM’s total 2014 revenues of $92.8 billion. But the z Systems and their predecessors also generate revenue from software, leasing and maintenance and thus have a greater financial impact on IBM’s overall picture.

The new mainframe’s claim to fame is its use of simultaneous multi-threading (SMT) to execute two instruction streams, or threads, on a single processor core, which delivers more throughput for Linux on z Systems and for IBM z Integrated Information Processor (zIIP)-eligible workloads.

There is also Single Instruction, Multiple Data (SIMD), a vector processing model providing instruction-level data parallelism, to speed up workloads such as analytics and mathematical modelling. This means that COBOL 5.2 and PL/I 4.5 can exploit SIMD and improved floating-point enhancements to deliver better performance over and above that provided by the faster processor.
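The SIMD idea is easiest to see with a vectorised library. The Python sketch below is only an analogy on commodity hardware, not z13 code: numpy hands the whole loop to compiled routines that the CPU can execute with vector instructions, several elements per cycle, while the plain loop multiplies one pair at a time.

    import time
    import numpy as np

    n = 1000000
    a, b = np.random.rand(n), np.random.rand(n)

    t0 = time.perf_counter()
    slow = [a[i] * b[i] for i in range(n)]  # scalar: one multiply per step
    t1 = time.perf_counter()
    fast = a * b                            # vectorised: SIMD under the hood
    t2 = time.perf_counter()

    print("scalar loop: %.3fs  vectorised: %.4fs" % (t1 - t0, t2 - t1))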

Its on-chip cryptographic and compression coprocessors receive a performance boost, improving cryptographic performance on both general processors and the Integrated Facility for Linux (IFL) and allowing more data to be compressed, helping to save disk space and reduce data transfer time.

There is also a redesigned cache architecture, using eDRAM technology to provide twice as much second-level cache and substantially more third- and fourth-level cache than the zEC12. Bigger and faster caches help to avoid untimely swaps and memory waits while maximising the throughput of concurrent workloads.

Tom McPherson, vice president of z System development, said that the new model is not just about microprocessors, though it has many eight-core chips in it. Everything has to be cooled by a combination of water and air, and semiconductor scaling is slowing down, so “you have to get the value by optimizing”.

The first real numbers on how the z13 is selling won’t be public until comments are made in IBM’s first-quarter report, due out in mid-April, when a little more than three weeks’ worth of billings will flow into it.

The company’s fiscal fortunes have sagged, drawing mixed reviews from both analysts and the blogosphere, and much of that revolves around IBM’s lag in cloud services. IBM is positioning the mainframe as a prime cloud server, one of the systems that cloud computing actually runs on.

Source

Can Linux Succeed On The Desktop?

March 25, 2015
Filed under Computing

Every three years I install Linux and see if it is ready for prime time yet, and every three years I am disappointed. What is so disappointing is not so much that the operating system is bad, because it never has been, it is that whoever designs it refuses to think of the user.

To be clear, I will lay out the same rider I have for my other three reviews. I am a Windows user, but that is not out of choice. One of the reasons I keep checking out Linux is the hope that it will have fixed the basic problems in the intervening years. Fortunately for Microsoft, it never has.

This time my main computer had a serious outage caused by a dodgy Corsair (which is now a c word) power supply, and I have been out of action for the last two weeks. In the meantime I had to run everything on a clapped-out Fujitsu notebook which took 20 minutes to download a webpage.

One Ubuntu Linux install later it was behaving like a normal computer. This is where Linux has always been far better than Windows – making rubbish computers behave. So I could settle down to work, right? Well, not really.

This is where Linux has consistently disqualified itself from prime-time every time I have used it. Going back through my reviews, I have been saying the same sort of stuff for years.

Coming from Windows 7, where a user can install the OS and start working with no learning curve, Ubuntu is impossible. There is a ton of stuff you have to download before you get anything that passes for an ordinary service, and that downloading is far too tricky for anyone who is used to Windows.

It is not helped by the Ubuntu Software Centre, which is supposed to make life easier for you. Say that you need a Flash player. Adobe has one you can download for Ubuntu. Click on it and Ubuntu asks if you want to open the file with the Ubuntu Software Centre to install it. You would think you would want this, right? The thing is that pressing yes opens the Software Centre but does not install the Flash player; the Centre then says it can’t find the software on your machine.

Here is the problem which I wrote about nearly nine years ago – you can’t download Flash or anything proprietary because that would mean contaminating your machine with something that is not Open Sauce.

Sure, Ubuntu will download all those proprietary drivers, but you have to know to ask – an issue which has been around for so long now that it is silly. Proprietary drivers are only a problem for the hard-core open saucers, and there are not enough of them to justify keeping an operating system in the dark ages for a decade. However, they have managed it.

I downloaded LibreOffice and all those other things needed to get a basic “windows experience” and discovered that all those typefaces you know and love are unavailable. They should have been in the proprietary pack but Ubuntu has a problem installing them. This means that I can’t share documents in any meaningful way with Windows users, because all my formatting is screwed.
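For what it is worth, the missing proprietary bits can be scripted once you know the magic package name. The sketch below uses Canonical’s python-apt bindings and assumes the ubuntu-restricted-extras meta-package, which pulls in Flash, the Microsoft core fonts and the usual codecs; it needs to run as root, and the package name is the assumption here – nothing on the desktop points you at it.

    import apt  # Canonical's python-apt bindings

    cache = apt.Cache()
    # ubuntu-restricted-extras is the meta-package that drags in Flash,
    # the Microsoft core fonts and common media codecs.
    pkg = cache["ubuntu-restricted-extras"]
    if not pkg.is_installed:
        pkg.mark_install()
        cache.commit()  # downloads and installs, as apt-get would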

LibreOffice is not bad, but it really is not Microsoft Word and anyone who tries to tell you otherwise is lying.

I downloaded and configured Thunderbird for mail, and for a few good days it actually worked. However, yesterday it disappeared from the sidebar and I can’t find it anywhere. I am restricted to webmail, and I am really hating Microsoft’s Outlook experience.

The only thing that is different between this review and the one I wrote three years ago is that there are now games which actually work, thanks to Steam. I have not tried them out yet because I am too stressed with the work backlog caused by having to work on Linux without my regular software, but there is a feeling that Linux is at last moving to a point where it can be a little bit useful.

So what are the main problems that Linux refuses to address? Usability, interface and compatibility.

I know Ubuntu is famous for its shit interface, and Gnome is supposed to be better, but both look and feel dated. I also hate Windows 8’s interface, which requires you to navigate a screen designed for a touchscreen tablet when you have neither. It should have been an opportunity for open saucers to trump Windows with a nice interface – it wasn’t.

You would think that all the brains in the Linux community could come up with a simple, easy-to-use interface which gives you access to all the files you need without much trouble. The problem here is that Linux fans like to tinker: they don’t want usability and they don’t have problems with command screens. Ordinary users, particularly those of more recent generations, will not go near a command screen.

Compatibility issues for games have been pretty much resolved, but other key software is missing, and Linux developers do not seem keen to get it on board.

I do a lot of layout and graphics work. When you complain about not being able to use Photoshop, Linux fanboys proudly point to GIMP and say that it does the same things. You want to grab them by the throat, stuff their heads down the loo and flush. GIMP does less than a tenth of what Photoshop can do, and it does that badly. There is nothing available on Linux that can do what CS or any real desktop publisher can do.

Proprietary software designed for real people using a desktop tends to trump anything open saucy, even if it is producing a technology marvel.

So in all these years, Linux has not attempted to fix any of the problems which have effectively crippled it as a desktop product.

I will look forward to next week, when the new PC arrives and I will not need another Ubuntu desktop experience. Who knows, maybe they will have sorted it out in another three years’ time.

Source
