
Amazon Web Services Goes Zocalo

December 4, 2014
Filed under Computing

Amazon Web Services (AWS) has announced two much-needed boosts to its fledgling Zocalo productivity platform, making the service mobile and allowing for file capacities of up to 5TB.

The service, which is designed to do for Amazon what Drive does for Google and what Office 365 does for Microsoft's software rental business, has gained mobile apps for the first time, with Zocalo appearing on the Google Play store and the Apple App Store.

Amazon also mentions availability on the Kindle store, but we’re not sure about that bit. We assume it means the Amazon App Store for Fire tablet users.

The AWS blog says that the apps allow the user to “work offline, make comments, and securely share documents while you are in the air or on the go.”

A second announcement brings Zocalo into line with the AWS S3 storage on which it is built. Users will receive an update to their Zocalo sync client which will enable file capacities up to 5TB, the same maximum allowed by the Amazon S3 cloud.

To facilitate this, multipart uploads will let users resume an upload from where it left off after a break, deliberate or accidental, as sketched below.
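
Zocalo rides on the S3 multipart upload API for this. As a rough illustration of the resume logic – a minimal Python sketch using the boto3 SDK with hypothetical bucket and file names, not Amazon's actual client code – the trick is to ask S3 which parts have already arrived and skip them:

    import boto3

    s3 = boto3.client("s3")
    bucket, key, path = "example-bucket", "big-file.bin", "/tmp/big-file.bin"

    # Start a multipart upload. A real client would persist this ID so the
    # same upload can be picked up again after a crash or network drop.
    upload_id = s3.create_multipart_upload(Bucket=bucket, Key=key)["UploadId"]

    # On resume, ask S3 which parts already made it.
    listed = s3.list_parts(Bucket=bucket, Key=key, UploadId=upload_id)
    done = {p["PartNumber"]: p["ETag"] for p in listed.get("Parts", [])}

    parts, part_size, number = [], 8 * 1024 * 1024, 1  # 8MB chunks
    with open(path, "rb") as f:
        while True:
            chunk = f.read(part_size)
            if not chunk:
                break
            if number in done:              # part survived the break: skip it
                etag = done[number]
            else:
                etag = s3.upload_part(Bucket=bucket, Key=key,
                                      UploadId=upload_id, PartNumber=number,
                                      Body=chunk)["ETag"]
            parts.append({"PartNumber": number, "ETag": etag})
            number += 1

    s3.complete_multipart_upload(Bucket=bucket, Key=key, UploadId=upload_id,
                                 MultipartUpload={"Parts": parts})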

Zocalo was launched in July as the fight for enterprise storage productivity hots up. The service can be trialled for 30 days free of charge, offering 200GB each for up to 50 users.

Rival services from companies including the aforementioned Microsoft and Google, as well as Dropbox and Box, coupled with aggressive price cuts across the sector, have led to burgeoning wars for the hearts and minds of IT managers as Microsoft’s Office monopoly begins to wane.

Source

Amazon Intel Xeon Inside

November 26, 2014
Filed under Computing

Amazon has become the latest vendor to commission a customized Xeon chip from Intel to meet its exact compute requirements, in this case powering new high-performance C4 virtual machine instances on the AWS cloud computing platform.

Amazon announced at the firm’s AWS re:Invent conference in Las Vegas that the latest generation of compute-optimized Amazon Elastic Compute Cloud (EC2) virtual machine instances offer up to 36 virtual CPUs and 60GB of memory.

“These instances are designed to deliver the highest level of processor performance on EC2. If you’ve got the workload, we’ve got the instance,” said AWS chief evangelist Jeff Barr, detailing the new instances on the AWS blog.

The instances are powered by a custom version of Intel’s latest Xeon E5 v3 processor family, identified by Amazon as the Xeon E5-2666 v3. This runs at a base speed of 2.9GHz, and can achieve clock speeds as high as 3.5GHz with Turbo Boost.

Amazon is not the first company to commission a customized processor from Intel. Earlier this year, Oracle unveiled new Sun Server X4-4 and Sun Server X4-8 systems with a custom Xeon E7 v2 processor.

The processor is capable of dynamically switching core count, clock frequency and power consumption without the need for a system level reboot, in order to deliver an elastic compute capability that adapts to the demands of the workload.

However, these are just the vendors that have gone public; Intel claims it is delivering over 35 customized versions of the Intel Xeon E5 v3 processor family to various customers.

This is an area the chipmaker seems to be keen on pursuing, especially with companies like cloud service providers that purchase a great many chips.

“We’re really excited to be working with Amazon. Amazon’s platform is the landing zone for a lot of new software development and it’s really exciting to partner with those guys on a SKU that really meets their needs,” said Dave Hill, senior systems engineer in Intel’s Datacenter Group.

Also at AWS re:Invent, Amazon announced the Amazon EC2 Container Service, adding support for Docker on its cloud platform.

Currently available as a preview, the EC2 Container Service is designed to make it easy to run and manage distributed applications on AWS using containers.

Customers will be able to start, stop and manage thousands of containers in seconds, scaling from one container to hundreds of thousands across a managed cluster of Amazon EC2 instances, the firm said.
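
Amazon hasn't detailed the preview's API here, but today's AWS SDKs give a flavour of the model: a task definition describes the container once, and copies of it are then started and stopped against a named cluster. A hedged Python sketch using the boto3 SDK, with made-up cluster, family and image names:

    import boto3

    ecs = boto3.client("ecs")

    # Describe the container once as a task definition...
    ecs.register_task_definition(
        family="web",
        containerDefinitions=[{
            "name": "web",
            "image": "example/web:latest",   # hypothetical Docker image
            "cpu": 256,
            "memory": 512,
            "portMappings": [{"containerPort": 80}],
        }],
    )

    # ...then launch as many copies as needed on the cluster.
    ecs.run_task(cluster="demo-cluster", taskDefinition="web", count=10)

    # Running tasks can be listed and stopped just as quickly.
    for arn in ecs.list_tasks(cluster="demo-cluster")["taskArns"]:
        ecs.stop_task(cluster="demo-cluster", task=arn)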

Source

Red Hat Ups Game With Fedora 21

October 10, 2014
Filed under Computing

Red Hat has announced the Fedora 21 Alpha release for Fedora developers and any brave users who want to help test it.

Fedora is the leading-edge – some might say bleeding-edge – Linux distribution sponsored by Red Hat. It is where Red Hat and other developers do the new development work that eventually appears in Red Hat Enterprise Linux (RHEL) and the distributions derived from it, such as CentOS and Scientific Linux, as well as more distant relatives such as Mageia. What Fedora does today, therefore, tends to appear elsewhere eventually.

The Fedora project said the release of Fedora 21 Alpha is meant for testing in order to help it identify and resolve bugs, adding, “Fedora prides itself on bringing cutting-edge technologies to users of open source software around the world, and this release continues that tradition.”

Specifically, Fedora 21 will comprise three software products, each built on the same Fedora 21 base and each a subset of the entire release.

Fedora 21 Cloud will include images for use in private cloud environments like OpenStack, AMIs for use on Amazon, and a new image streamlined for running Docker containers, called Fedora Atomic Host.
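
For Docker users the workflow is the familiar pull-and-run. Purely as an illustration – a minimal sketch using the Docker SDK for Python, where the fedora:21 tag is an assumption – it looks like this:

    import docker

    client = docker.from_env()             # connect to the local Docker daemon
    client.images.pull("fedora", tag="21")
    output = client.containers.run("fedora:21",
                                   ["cat", "/etc/fedora-release"],
                                   remove=True)   # clean up the container after
    print(output.decode())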

Fedora 21 Server will offer data centre users “a common base platform that is meant to run featured application stacks” for use as a web server, file server, database server, or as a base for offering infrastructure as a service, including advanced server management features.

Fedora 21 Workstation will be “a reliable, user-friendly, and powerful operating system for laptops and PC hardware” for use by developers and other desktop users, and will feature the latest Gnome 3.14 desktop environment.

Those interested in testing the Fedora 21 Alpha release can visit the Fedora project website.

Source

nVidia Finally Goes 20nm

October 3, 2014
Filed under Computing

For much of the year we were under the impression that second-generation Maxwell would end up as a 20nm chip.

First-generation Maxwell ended up branded as the Geforce GTX 750 and GTX 750 Ti, while second-generation Maxwell launched a few days ago as the Geforce GTX 980 and GTX 970, both based on the 28nm GM204 GPU.

This is actually quite good news, as it turns out that Nvidia managed to optimize the chip’s power and performance, making GM204 one of the most efficient chips manufactured at 28nm.

Nvidia 20nm chips coming in 2015

Still, people keep asking about the transition to 20nm, and it turns out that Nvidia’s first 20nm chip will be a mobile SoC.

The first Nvidia 20nm chip will be a mobile part, most likely Erista, the successor to Logan (the Tegra K1).

Our sources didn’t mention an exact codename, but Nvidia apparently wants to launch a mobile chip first and then expand into 20nm graphics.

Unfortunately we don’t have any specifics to report.

AMD 20nm SoC in 2015

AMD is doing the same thing: its first 20nm chip, codenamed Nolan, is an entry-level APU targeting the tablet and detachable markets.

There is a strong possibility that Apple and Qualcomm simply bought up a lot of the 20nm capacity for their mobile chips, and what was left was simply too expensive to make economic sense for big GPUs.

20nm will drive voltages down while allowing higher clocks and more transistors per square millimetre – to a first approximation, density scales with the square of the linear shrink, so (28/20)² ≈ 2× – and it will enable better chips overall.

Just remember that Nvidia’s Tegra 3, the world’s first quad-core mobile chip, ran rather hot at 40nm, while building quad-cores at 28nm enabled higher performance and significantly better battery life. The same was true of other mobile chips of the era.

We expect a similar leap from the move down to 20nm in 2015, and Erista might be the first chip to make it there. A Maxwell-derived architecture at 20nm should deliver even more efficiency. Needless to say, AMD plans to launch 20nm GPUs next year as well.

It looks like Nvidia’s 16nm FinFET Parker processor, based on the Denver CPU architecture and Maxwell graphics, won’t appear before 2016.

Source

Intel Sampling Xeon D 14nm

September 24, 2014
Filed under Computing

Intel has announced that it is sampling its Xeon D 14nm processor family, a system on chip (SoC) optimized to deliver Intel Xeon processor performance for hyperscale workloads.

Announcing the news on stage during a keynote at IDF in San Francisco, Intel SVP and GM of the Data Centre Group, Diane Bryant, said that the Intel Xeon processor D, which was initially announced in June, will be based on 14nm process technology and aimed at mid-range communications.

“We’re pleased to announce that we’re sampling the third generation of the high density [data center system on a chip] product line, but this one is actually based on the Xeon processor, called Xeon D,” Bryant announced. “It’s 14nm and the power levels go down to as low as 15 Watts, so very high density and high performance.”

Intel believes that its Xeon D will serve the needs of high-density, optimized servers as that market develops; in networking it will serve mid-range routers and other network appliances, and it will also cover entry-level and mid-range storage. So, Intel claimed, you get all of the benefits of Xeon-class reliability and performance, but in a very small footprint with the high integration of an SoC.

This first-generation Xeon D chip will also showcase a high level of I/O integration, including 10Gb Ethernet, and will scale Intel Xeon processor performance, features and reliability down to lower-power design points, according to Intel.

The Intel Xeon processor D product family will also include data centre processor features such as error correcting code (ECC).

“With high levels of I/O integration and energy efficiency, we expect the Intel Xeon processor D product family to deliver very competitive TCO to our customers,” Bryant said. “The Intel Xeon processor D product family will also be targeted toward hyperscale storage for cloud and mid-range communications market.”

Bryant said that the product is not yet available, but it is being sampled, and the firm will release more details later this year.

This announcement comes just days after Intel launched its Xeon E5 v3 processor family for servers and workstations.

Source

Will Intel’s Core M Go Commercial?

September 12, 2014
Filed under Computing

Intel is getting down from four processor lines to three, and it looks like Broadwell won’t come with an M-processor line of 57W, 47W and 37W parts – and we do not expect that to happen at this point. The H-processor line will take over the 47W TDP high-performance market for mobile computers and some AIOs.

The 47W H-processor line and the U-processor line, with its 15W and 28W TDP parts, will end up with 5th-gen Intel Core branding. We expect a range of Core i3, Core i5 and Core i7 parts, probably revealed at some point after the Intel Developer Forum in mid-September 2014.

The Y-processor line will carry the new Intel Core M processor brand, aimed at high-performance detachable and convertible systems that will show up in the latter part of Q4 2014.

Broadwell at a 4.5W TDP with Core M branding will end up only in these fancy detachable notebooks, and might be one of the most powerful and fastest tablet/detachable platforms around. It will launch on Windows 8.1, and we should see some Google Chrome OS products in early 2015.

Intel also plans to keep the Pentium and Celeron brands around, and they will be used for Bay Trail-M processors. These parts have been shipping for more than three quarters in entry-level detachables such as the Asus T100TA.

Source

Vendors Testing New Xeon Processors

September 11, 2014
Filed under Computing

Intel is cooking up a hot batch of Xeon processors for servers and workstations, and system vendors have already designed systems that are ready and raring to go as soon as the chips become available.

Boston is one of the companies doing just that, and we know this because it gave us an exclusive peek into its labs to show off what these upgraded systems will look like. While we can’t share any details about the new chips involved yet, we can preview the systems they will appear in, which are awaiting shipment as soon as Intel gives the nod.

Based on chassis designs from Supermicro, with which Boston has a close relationship, the systems comprise custom-built solutions for specific user requirements.

On the workstation side, Boston is readying a mid-range and a high-end system with the new Intel Xeon chips, both based on the two-socket Xeon E5-2600 v3 rather than the single-socket E5-1600 v3 versions.

First up is the mid-range Venom 2301-12T, which comes in a mid-tower chassis and ships with an Nvidia Quadro K4000 card for graphics acceleration. It comes with 64GB of memory and a 240GB SSD as a boot device, plus two 1TB Sata drives configured as a Raid array for data storage.

For extra performance, Boston has also prepared the Venom 2401-12T, which will ship with faster Xeon processors, 128GB of memory and an Nvidia Quadro K6000 graphics card. This also has a 240GB SSD as a boot drive, with two 2TB drives configured as a Raid array for data storage.

Interestingly, Intel’s new Xeon E5-2600 v3 processors are designed to work with 2133MHz DDR4 memory instead of the more usual DDR3 RAM; DDR4 DIMM modules are keyed differently, with slightly longer connectors towards the middle of the module.

For servers, Boston has prepared a 1U rack-mount “pizza box” system, the Boston Value 360p. This is a two-socket server with twin 10Gbps Ethernet ports, support for 64GB of memory and 12Gbps SAS Raid. It can also be configured with NVM Express (NVMe) SSDs connected to the PCI Express bus rather than a standard drive interface.

Boston also previewed a multi-node rack server, the Quattro 12128-6, which is made up of four separate two-socket servers inside a 2U chassis. Each node has up to 64GB of memory, with 12Gbps SAS Raid storage plus a pair of 400GB SSDs.

Source

Can A Linux Cert Pay Off?

September 5, 2014
Filed under Computing

The Linux Foundation has announced an online certification programme for entry-level system administration and advanced Linux software engineering professionals, to help expand the global pool of Linux sysadmin and developer talent.

The foundation indicated that it established the certification programme because there’s increasing demand for staff in the IT industry, saying, “Demand for experienced Linux professionals continues to grow, with this year’s Linux Jobs Report showing that managers are prioritizing Linux hires and paying more for this talent.

“Because Linux runs today’s global technology infrastructure, companies around the world are looking for more Linux professionals, yet most hiring managers say that finding Linux talent is difficult.”

Linux Foundation executive director Jim Zemlin said, “Our mission is to address the demand for Linux that the industry is currently experiencing. We are making our training [programme] and Linux certification more accessible to users worldwide, since talent isn’t confined to one geography or one distribution.

“Our new Certification [Programme] will enable employers to easily identify Linux talent when hiring and uncover the best of the best. We think Linux professionals worldwide will want to proudly showcase their skills through these certifications and that these certificates will become a hallmark of quality throughout our industry.”

In an innovative departure from other Linux certification testing offered by a number of Linux distribution vendors and training firms, the foundation said, “The new Certification [Programme] exams and designations for Linux Foundation Certified System Administrator (LFCS) and Linux Foundation Certified Engineer (LFCE) will demonstrate that users are technically competent through a groundbreaking, performance-based exam that is available online, from anywhere and at any time.”

The exams are customised somewhat to accommodate the technical differences between three major Linux distributions of the kind usually encountered by Linux professionals working in the IT industry: exam takers can choose between CentOS, openSUSE and Ubuntu, a derivative of Debian.

“The Linux Foundation’s certification [programme] will open new doors for Linux professionals who need a way to demonstrate their know-how and put them ahead of the rest,” said Ubuntu founder Mark Shuttleworth.

Those who want to look into acquiring the LFCS and LFCE certifications can visit The Linux Foundation website, which offers the exams as well as training to prepare for them. The exams are priced at $300, but they are apparently on a special introductory offer for $50.

The Linux Foundation is a nonprofit organization dedicated to accelerating the growth of Linux and collaborative development. It is supported by a diverse roster of almost all of the largest IT companies in the world except Microsoft.

Source

AMD’s Carrizo Goes Mobile Only

August 8, 2014
Filed under Computing

AMD’s upcoming Carrizo APU might not make it to the desktop market at all.

According to Italian tech site bitsandchips.it, citing industry sources, AMD plans to limit Carrizo to mobile parts. Furthermore the source claims Carrizo will not support DDR4 memory. We cannot confirm or deny the report at this time.

If the rumours turn out to be true, AMD will not have a new desktop platform next year. Bear in mind that Intel is doing the exact same thing by bringing 14nm silicon to mobile rather than desktop. AMD’s roadmap previously pointed to a desktop Carrizo launch in 2015.

AMD’s FM2+ socket and Kaveri derivatives would have to hold the line until 2016. The same goes for the AM3+ platform, which should also last until 2016.

Not much is known about Carrizo at the moment, so we are not in a position to say much about the latest rumours. AMD’s first 20nm APU will be Nolan, but Carrizo will be the first 20nm big core. AMD confirmed a number of delays in a roadmap leaked last August.

The company recently confirmed its first 20nm products are coming next year. In all likelihood AMD will be selling 32nm, 28nm and 20nm parts next year.

Source

nVidia Releases CUDA For ARM64

July 10, 2014
Filed under Computing

Nvidia has released CUDA 6.5 – the toolkit that lets developers run their code on GPUs – to server vendors in order to get 64-bit ARM cores into the high performance computing (HPC) market.

The firm said today that ARM64 server processors, which are designed for microservers and web servers because of their energy efficiency, can now process HPC workloads when paired with GPU accelerators using the Nvidia CUDA 6.5 parallel programming framework, which supports 64-bit ARM processors.

“Nvidia’s GPUs provide ARM64 server vendors with the muscle to tackle HPC workloads, enabling them to build high-performance systems that maximise the ARM architecture’s power efficiency and system configurability,” the firm said.

The first GPU-accelerated ARM64 software development servers will be available in July from Cirrascale and E4 Computer Engineering, with production systems expected to ship later this year. The Eurotech Group also plans to ship production systems later this year.

Cirrascale’s system will be the RM1905D, a high density two-in-one 1U server with two Tesla K20 GPU accelerators, which the firm claims provides high performance and low total cost of ownership for private cloud, public cloud, HPC and enterprise applications.

E4’s EK003 is a production-ready, low-power 3U dual-motherboard server appliance with two Tesla K20 GPU accelerators designed for seismic, signal and image processing, video analytics, track analysis, web applications and Mapreduce processing.

Eurotech’s system is an “ultra-high density”, energy efficient and modular Aurora HPC server configuration, based on proprietary Brick Technology and featuring direct hot liquid cooling.

Featuring Applied Micro X-Gene ARM64 CPUs and Nvidia Tesla K20 GPU accelerators, the new ARM64 servers will provide customers with an expanded range of efficient, high-performance computing options to drive compute-intensive HPC and enterprise data centre workloads, Nvidia said.

Nvidia added, “Users will immediately be able to take advantage of hundreds of existing CUDA-accelerated scientific and engineering HPC applications by simply recompiling them to ARM64 systems.”
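
The CUDA programming model itself is unchanged by the move to ARM64 hosts: the same kernels run on the GPU regardless of the CPU feeding it. For a flavour of that model – sketched here in Python with the Numba compiler rather than Nvidia's C/C++ toolchain, so treat it as an illustration only – a simple vector add looks like this:

    import numpy as np
    from numba import cuda

    @cuda.jit
    def vector_add(a, b, out):
        i = cuda.grid(1)         # this thread's global index
        if i < out.size:         # guard threads past the end of the array
            out[i] = a[i] + b[i]

    n = 1_000_000
    a = np.arange(n, dtype=np.float32)
    b = 2 * a
    out = np.zeros_like(a)

    threads = 256
    blocks = (n + threads - 1) // threads
    vector_add[blocks, threads](a, b, out)  # arrays are copied to and from the GPU
    print(out[:4])                          # [0. 3. 6. 9.]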

ARM said that it is working with Nvidia to “explore how we can unite GPU acceleration with novel technologies” and drive “new levels of scientific discovery and innovation”.

Source
