EMC launches new open source tech for the software defined datacentre

EMC is launching RackHD and revised versions of CoprHD and REX-Ray in its quest to be a top open source influence on tomorrow’s software defined datacentre industry.

RackHD is hardware management and orchestration software that promises to automate functions such as the discovery, description, provisioning and programming of servers. EMC says it will speed up the process of installing third platform apps by automatically updating firmware and installing operating systems.

Meanwhile, version 2.4 of storage automator CoprHD was improved with help from Intel and Oregon State University. It can now centralise and transform storage from multiple vendors into a simple management platform and interface, EMC claims.

Version 0.3 of the storage orchestration engine REX-Ray adds storage platform support for Google Compute Engine in addition to EMC Isilon and EMC VMAX.

These products are aimed at modern data centres with a multi-vendor mix of storage, networking and servers and an increasing use of commodity hardware as the building blocks of software defined hyperscale infrastructure. In such environments, installing low-level operating systems or updating firmware and BIOS across numerous devices is a cumbersome manual task for data centre engineers, says EMC. RackHD was created to automate and simplify these fundamental tasks across a broad range of datacentre hardware.

According to EMC, developers can use the RackHD API as a component in a larger orchestration system or create a user interface for managing hardware services regardless of the underlying hardware in place.
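
To make that concrete, the hedged sketch below polls a RackHD endpoint for discovered nodes and then kicks off an OS-install workflow against one of them. The host, port, API route and workflow name are assumptions for illustration based on RackHD’s published REST conventions, not details from EMC’s announcement.

```python
import requests

# Hypothetical RackHD endpoint; host and port are assumptions.
RACKHD = "http://rackhd.example.com:8080"

# Ask RackHD which nodes it has discovered on the rack.
nodes = requests.get(f"{RACKHD}/api/2.0/nodes").json()
for node in nodes:
    print(node["id"], node.get("type"), node.get("name"))

# Trigger an OS-install workflow on the first node (workflow name assumed).
resp = requests.post(f"{RACKHD}/api/2.0/nodes/{nodes[0]['id']}/workflows",
                     json={"name": "Graph.InstallUbuntu"})
resp.raise_for_status()
print("workflow started:", resp.json().get("instanceId"))
```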

Intel and Oregon State University have joined EMC’s CoprHD Community as the newest contributors to the storage vendor’s open source initiative. Intel is leading a project to integrate Keystone with CoprHD, allowing the use of the Cinder API and the CoprHD API to provide block storage services.
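
From a client’s perspective, that integration would look something like the minimal sketch below: obtain a token via Keystone’s standard v3 password flow, then present it to a Cinder-style block storage endpoint. The endpoints, credentials and project id are invented placeholders; only the Keystone request shape follows the documented OpenStack API.

```python
import requests

KEYSTONE = "http://keystone.example.com:5000/v3"  # placeholder endpoint

# Standard Keystone v3 password authentication payload.
auth = {"auth": {
    "identity": {"methods": ["password"],
                 "password": {"user": {"name": "demo",
                                       "domain": {"id": "default"},
                                       "password": "secret"}}},
    "scope": {"project": {"name": "demo", "domain": {"id": "default"}}}}}

resp = requests.post(f"{KEYSTONE}/auth/tokens", json=auth)
resp.raise_for_status()
token = resp.headers["X-Subject-Token"]  # Keystone returns the token here

# The same token can then be sent to a Cinder-compatible block storage API
# (URL and project id below are placeholders for illustration).
volumes = requests.get("http://storage.example.com:8776/v2/PROJECT_ID/volumes",
                       headers={"X-Auth-Token": token}).json()
print(volumes)
```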

“We discovered how difficult it was to implement any kind of automation tooling for a mix of storage systems,” said Shayne Huddleston, Director of IT Infrastructure at Oregon State University. “Collaborating with the CoprHD community will allow us to avoid vendor lock-in and support our entire infrastructure.”

Riverbed says it’ll make apps respond faster on BT’s cloud of clouds

BT is to use Riverbed’s SteelHead application accelerator in its global telecoms network to bolster its cloud of clouds strategy.

BT and Riverbed will embed the service at global business hubs in Europe, North America and Asia. Installations are to be made at any location where BT has direct links to major cloud providers and high-capacity internet breakout. The service will be globally available from early 2016 and accessible through BT’s IP Connect VPN from 198 countries and territories.

SteelHead is designed to boost application performance and optimise bandwidth use. As a result, customers should get faster responses from BT’s own cloud services and from other vendors’ Software-as-a-Service (SaaS) offerings. The partnership marks the first time Riverbed technology has been installed in the core of a global telecoms network.

App acceleration and bandwidth efficiencies aside, customers using the new service will have greater control over their applications, a more commanding view of performance across the network and significantly more reliability and security from applications delivered over the internet, says BT.

The new service uses network function virtualisation (NFV) to help customers get a broader range of virtualised functions, such as application performance management and fast access to private and public clouds.

The inclusion of Riverbed helps BT tackle the performance and reliability of applications in the cloud, which have become a big issue for clients, according to Keith Langridge, VP of network services at BT Global Services. “This joint offering with Riverbed is a milestone on the journey to software-defined networks and creates an additional differentiator against our competitors,” said Langridge.

CIOs want the benefits of a hybrid enterprise without the challenges of application delivery that this complex environment creates, according to Paul O’Farrell, General Manager for SteelHead at Riverbed. “Riverbed invented WAN optimization in 2004 with SteelHead and now it’s the leader in application performance infrastructure,” said O’Farrell. “We’re offering an easier on-ramp to cloud computing with BT’s Cloud Connect service.”

IBM acquires Clearleap’s cloud-based video

IBM says it has acquired cloud-based video service provider Clearleap in a bid to make video a strategic source of data on any device at any time.

Clearleap’s video services will be offered through IBM Cloud data centres around the world, which will give clients global 24×7 service and technical support for problem identification and resolution. Clients using the service can now share data and content across geographies and hybrid clouds. IBM will offer the Clearleap APIs on IBM Bluemix in 2016 so clients can build new video offerings quickly and easily.

IBM says Clearleap’s open API framework makes it easy to build video into applications and adapt it to specific business needs like custom workflows and advanced analytics. The framework also means that it works with many third-party applications that customers may already have.

In addition, the Clearleap platform includes subscription and monetization services and data centres from which to host digital video assets. This means IBM customers can pass the multi-screen video experience on to their own clients.

Clearleap will be integrated into the IBM Cloud platform to make it easy for clients to make money from user video experiences. IBM says this is part of its broader strategy to help clients realise the value of video as it becomes increasingly important in business.

With businesses increasingly using video for CEO webcasts, conference keynotes, customer care and how-to videos, a secure, scalable and open cloud-based system for managing these services has become a priority, says IBM.

Clearleap’s ability to instantly ramp up capacity has won it clients such as HBO, A+E Networks, the NFL, BBC America, Sony Movie Channel, Time Warner Cable and Verizon Communications. Clearleap is headquartered in Atlanta and has data centres in Atlanta, Las Vegas, Frankfurt, and Amsterdam.

“Clearleap joins IBM as visual communications are exploding across every industry,” said Robert LeBlanc, Senior VP of IBM Cloud. “Clients want content delivered quickly and economically to any device in the most natural way.”

Meanwhile, in a move that will support the delivery of video services over the cloud, IBM announced a new system that lets developers create apps that tap into vast amounts of unstructured data.

IBM Object Storage, now available on Bluemix, promises simple and secure storage and access functions. According to IBM, 80% of the 2.5 billion gigabytes of data created every day is unstructured content, with most of it video.
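
Bluemix’s object store exposed a Swift-style REST API, so storing and retrieving an unstructured object (a video file, say) looks roughly like the hedged sketch below; the account URL, container name and token are placeholders, not real service values.

```python
import requests

# Placeholder Swift-style account endpoint and pre-obtained auth token.
STORE = "https://objectstorage.example.net/v1/AUTH_myaccount"
HEADERS = {"X-Auth-Token": "placeholder-token"}

# Upload a video into a "videos" container, then fetch it back.
with open("keynote.mp4", "rb") as f:
    requests.put(f"{STORE}/videos/keynote.mp4", headers=HEADERS, data=f)

resp = requests.get(f"{STORE}/videos/keynote.mp4", headers=HEADERS)
resp.raise_for_status()
print(len(resp.content), "bytes retrieved")
```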

Deutsche Telekom launches pan-European public cloud on Cisco platform

Deutsche Telekom has announced the start of a new pan-European public cloud service aimed at businesses of all sizes. The debut offering is DSI Intercloud, run by T-Systems, which will offer Infrastructure as a Service (IaaS) to businesses across Europe. Software and Platform as a Service offerings (SaaS and PaaS) will follow in the first half of 2016.

The service, built on a Cisco platform by T-Systems, the business division of Deutsche Telekom, will run from German data centres and be subject to Germany’s data sovereignty regulations.

The pay-as-you-go cloud services can be ordered through Telekom’s new cloud portal, with no minimum purchase requirements or contract periods. Prices start at €0.05 per hour for computing resources and €0.02 per gigabyte for storage. Deutsche Telekom said it hopes to create the foundation for a secure European Internet of Things with high availability and scalability for real-time analytics.
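
At those rates a back-of-envelope monthly estimate is simple arithmetic; the instance count and storage size below are invented purely to show the calculation.

```python
HOURS_PER_MONTH = 730  # average hours in a month

instances = 4      # invented workload
storage_gb = 500   # invented storage footprint

compute = instances * HOURS_PER_MONTH * 0.05  # at €0.05 per instance-hour
storage = storage_gb * 0.02                   # at €0.02 per gigabyte
print(f"compute €{compute:.2f} + storage €{storage:.2f}"
      f" = €{compute + storage:.2f}/month")
# compute €146.00 + storage €10.00 = €156.00/month
```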

Data security company Covata piloted the platform and will be the first customer to use the DSI Intercloud infrastructure service. Another beta tester was communications company Unify, which used it to investigate the viability of open source cloud platforms running from German data centres.

The new DSI Intercloud marks the latest chapter in the Cisco Intercloud initiative. In June BCN reported how Cisco had bolstered the Intercloud, which it launched in 2014, with 35 partnerships as it aimed to simplify hybrid clouds. Cisco and Deutsche Telekom say they will focus on delivering high availability and scalability for real-time analytics at the edge of the network, in order to cater for IoT experiences. Edge-of-network big data analytics is set to become a key concept in the IoT, BCN reported in December. Last week Hewlett Packard Enterprise (HPE) revealed how it is helping IoT system users decentralise their processing jobs and devolve decision making to local areas. The rationale is to keep masses of data off the networks and deal with it locally.

Deutsche Telekom said the Cisco partnership is an important building block in expanding its cloud business, and that it aims to at least double its cloud revenue by the end of 2018. In fiscal year 2014, net sales of cloud solutions at T-Systems grew by double digits, mainly in secure private clouds.

Red Hat helps eMedLab share supercomputer in the cloud

A cloud of bioinformatics intelligence has been harmonised by Red Hat to create ‘virtual supercomputers’ that can be shared by the eMedLab collective of research institutes.

The upshot is that researchers at institutes such as the Wellcome Trust Sanger Institute, UCL and King’s College London can carry out much more powerful data analysis when researching cancers, cardiovascular conditions and rare diseases.

Since 2014, hundreds of researchers across eMedLab have been able to use a high performance computer (HPC) with 6,000 cores of processing power and 6 Petabytes of storage from their own locations. However, the cloud environment now collectively created by technology partners Red Hat, Lenovo, IBM and Mellanox, along with supercomputing integrator OCF, means none of the users has to shift their data to the computer. Each of the seven institutes can configure its share of the HPC according to its needs, self-selecting the memory, processors and storage required.

The new HPC cloud environment uses the Red Hat Enterprise Linux OpenStack Platform with Lenovo Flex hardware to create virtual HPC clusters bespoke to each individual researcher’s requirements. The system was designed and configured by OCF, working with partners Red Hat, Lenovo, Mellanox and eMedLab’s research technologists.
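
On an OpenStack cloud, that self-selection amounts to launching instances with a chosen flavour (vCPUs and memory) and image. A minimal sketch using the openstacksdk client follows; the cloud profile, flavour, image and network names are invented for illustration, not taken from eMedLab’s configuration.

```python
import openstack

# Connect using a local clouds.yaml profile (profile name is an assumption).
conn = openstack.connect(cloud="emedlab")

# Launch one node of a virtual cluster, sized by picking a flavour.
server = conn.create_server(
    name="genomics-node-01",
    flavor="hpc.64core.512gb",   # invented flavour name
    image="rhel-7-hpc",          # invented image name
    network="research-net",      # invented shared network
    wait=True,
)
print(server.name, server.status)
```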

With the HPC hosted at a shared data centre for education and research, the cloud configuration has made it possible to run a variety of research projects concurrently. The facility, aimed solely at the biomedical research sector, changes the way data sets are shared between leading scientific institutions internationally.

The eMedLab partnership was formed in 2014 with funding from the Medical Research Council. Original members University College London, Queen Mary University of London, London School of Hygiene & Tropical Medicine, the Francis Crick Institute, the Wellcome Trust Sanger Institute and the EMBL European Bioinformatics Institute have been joined recently by King’s College London.

“Bioinformatics is a very data-intensive discipline,” says Jacky Pallas, Director of Research Platforms at University College London. “We study a lot of de-identified, anonymous human data. It’s not practical for scientists to replicate the same datasets across their own, separate physical HPC resources, so we’re creating a single store for up to 6 Petabytes of data and a shared HPC environment within which researchers can build their own virtual clusters to support their work.”

In other news, Red Hat has announced an upgrade of CloudForms offering better hybrid cloud management through added support for Microsoft Azure, advanced container management and improvements to its self-service features.

MapR claims world’s first converged data platform with Streams

Apache Hadoop system specialist MapR Technologies claims it has invented a new system to make sense of all the disjointed streams of real-time information flooding into big data platforms. The new MapR Streams system will, it says, blend everything from system logs to sensors to social media feeds, whether transactional or tracking data, and manage it all under one converged platform.

Streams is described as a stream processing tool built for real-time event handling and high scalability. When combined with other MapR offerings, it can harmonise existing storage data and NoSQL tools to create a converged data platform, which MapR says is the first of its kind in the cloud industry.

From early 2016, when the technology becomes available, cloud operators can combine Streams with MapR-FS for storage and the MapR-DB in-Hadoop NoSQL database to build the MapR Converged Data Platform. This will free users from having to manage streams, file storage, databases and analytics as separate systems, the vendor says.

Since it can handle billions of messages per second and join clusters from separate data centres across the globe, the tool could be of particular interest to cloud operators, according to Michael Brown, CTO at comScore. “Our system analyses over 65 billion new events a day, and MapR Streams is built to ingest and process these events in real time, opening the doors to a new level of product offerings for our customers,” he said.

While traditional workloads are being optimised, new workloads from emerging IoT dataflows present far greater challenges that must be solved in a fraction of the time, claims MapR. MapR Streams will help companies deal with the volume, variety and speed at which data has to be analysed, while simplifying the multiple layers of hardware stacks, networking and data processing systems. Blending MapR Streams into a converged data system eliminates multiple silos of data for streaming, analytics and traditional systems of record, the company claims.

MapR Streams supports standard application programming interfaces (APIs) and integrates with other popular stream processors like Spark Streaming, Storm, Flink and Apex. When available, the MapR Converged Data Platform will be offered as a free-to-use Community Edition to encourage developers to experiment.
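
MapR addressed Streams through a Kafka-style API in which topics live inside filesystem-like stream paths. The hedged sketch below assumes the confluent-kafka-compatible Python client MapR later shipped (mapr_streams_python); the stream path, topic and configuration values are illustrative only.

```python
# Assumption: MapR's confluent-kafka-compatible Python client.
from mapr_streams_python import Consumer, Producer

TOPIC = "/apps/iot:sensor-readings"  # invented /stream-path:topic name

# Publish one event into the stream.
producer = Producer({"streams.producer.default.stream": "/apps/iot"})
producer.produce(TOPIC, value=b'{"sensor": 42, "temp_c": 21.5}')
producer.flush()

# Read it back from the earliest offset.
consumer = Consumer({"group.id": "analytics",
                     "default.topic.config": {"auto.offset.reset": "earliest"}})
consumer.subscribe([TOPIC])
msg = consumer.poll(timeout=5.0)
if msg is not None and msg.error() is None:
    print(msg.value())
consumer.close()
```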

Microsoft goes open source on Chakra JavaScript engine

Microsoft is to make the Chakra JavaScript engine open source and will publish the code on its GitHub page next month. The rationale is to extend the functions of the code, used in the Edge and Internet Explorer 9 browsers, to a much wider role.

The open source version of the Chakra engine will be known as ChakraCore. Announcing the change at the JavaScript conference JSConf US in Florida, Microsoft said it intends to run ChakraCore’s development as a community project, which both Intel and AMD have expressed interest in joining. Initially the code will be for Windows only, but the rationale behind the open source strategy is to take ChakraCore across platforms, in a repeat of the exercise it pioneered with .NET.

In a statement, Gaurav Seth, Microsoft’s Principal Programme Manager, explained that as JavaScript’s role widens, so must the community of developers that supports it, and opening up the code base will help support that growth.

“Since Chakra’s inception, JavaScript has expanded from a language that primarily powered the web browser experience to a technology that supports apps in stores, server side applications, cloud based services, NoSQL databases, game engines, front-end tools and now the Internet of Things,” said Seth. Over time Chakra evolved to fit many of these uses, which meant that, beyond throughput, it had to support native interoperability and scalability and manage resource consumption. Its interpreter played a key role in moving the technology across platform architectures, but it can only take it so far, said Seth.

“Now we’re taking the next step by giving developers a fully supported and fully open-source JavaScript engine available to embed in their projects, innovate on top of, and contribute back to ChakraCore,” said Seth. The modern JavaScript Engine must go beyond browser work and run everything from small-footprint devices for IoT applications to high-throughput, massively parallel server applications based on cloud technologies, he said.

ChakraCore already fits into any application stack that calls for speed and agility, but Microsoft intends to give it greater licence to become more versatile and extend beyond the Windows ecosystem, said Seth. “We are committed to bringing ChakraCore to other platforms in the future. We’d invite developers to help us in this pursuit by letting us know which other platforms they’d like to see ChakraCore supported on to help us prioritize future investments, or even by helping port it to the platform of their choice,” said Seth.
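
Embedding has historically gone through Chakra’s JSRT C API, which ChakraCore retains. As a rough illustration of what “embed in their projects” means in practice, the sketch below drives the engine from Python via ctypes, loosely following Microsoft’s JSRT hosting samples; the DLL name is an assumption and error codes are ignored for brevity.

```python
from ctypes import CDLL, byref, c_double, c_size_t, c_void_p, c_wchar_p

chakra = CDLL("ChakraCore.dll")  # assumed library name/path

runtime, context, result = c_void_p(), c_void_p(), c_void_p()

# One runtime per host thread, plus a context to execute scripts in.
chakra.JsCreateRuntime(0, None, byref(runtime))
chakra.JsCreateContext(runtime, byref(context))
chakra.JsSetCurrentContext(context)

# Evaluate a script and pull the numeric result back into Python.
chakra.JsRunScript(c_wchar_p("6 * 7"), c_size_t(0), c_wchar_p(""), byref(result))
value = c_double()
chakra.JsNumberToDouble(result, byref(value))
print(value.value)  # 42.0

chakra.JsDisposeRuntime(runtime)
```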

Exponential-e launches CloudPort automated cloud portal for the channel

Cloud and network provider Exponential-e has launched a cloud pricing portal, CloudPort, to help its channel partners run their cloud reselling businesses more efficiently.

The system aims to make it easier for dealers of cloud services to set up and manage their accounts, with quicker installation times and a fine-tuned sales process that could help Exponential-e’s partners win more deals. Dealers can use the portal to access a live quotation tracking system designed to provide price protection for Exponential-e’s business partners.

Exponential-e owns a 100 Gigabit Ethernet Layer 2 VPLS network and integrates with third party providers and bespoke applications for both enterprise and SME clients. By tapping into Exponential-e’s product portfolio and carrier network, CloudPort gives resellers access to a suite of enterprise cloud services, including cloud compute, storage, server replication, colocation, online backup, SIP, internet and networking circuits.

According to researcher IDC, global spending on cloud computing infrastructure is expected to grow by 21 per cent year over year to $32 billion in 2015. CloudPort will empower partners to meet this growing customer demand for cloud services, and channel partners will be able to seamlessly tap into Exponential-e’s cloud solution portfolio and build systems to order for their clients, according to Exponential-e CEO Lee Wade. “CloudPort is a game-changer in cloud service delivery for the channel,” he said.

Resellers will also be able to configure services in order to adapt them for customers, according to Exponential-e’s Head of Channel Strategy Michala Hart. “We will work with our partners to incorporate additional functionality into this new automated portal in the coming year. The portal will continue to evolve, providing new solutions and services to our partners, facilitating innovation,” said Hart. The system is due to go live on December 14.

Abraxas uses Huawei Cloud Fabric for SDN datacentre

Cloud service provider Abraxas has built a new virtualised multi-tenant cloud datacentre in Geneva, Switzerland, using Huawei’s Cloud Fabric systems.

Huawei’s Cloud Fabric will give the datacentre the foundations on which to build a software defined network later, according to outsourcing giant Abraxas, which runs cloud computing services for enterprises, government agencies and scientific research institutions across Europe.

The Cloud Fabric is built out of a network of Huawei’s CloudEngine datacentre switches to create what Huawei describes as a Transparent Interconnection of Lots of Links (TRILL) and Ethernet Virtual Network (EVN). The Huawei equipment helped Abraxas build an ultra-large cross-datacentre Layer 2 network, which it says will give datacentre managers and cloud operators complete flexibility when installing Virtual Machine (VM) resources.

Virtualisation of these core switches, using a technique that Huawei describes as “1:N”, helps to lower the running cost of the network and gives more service options with its variety of Virtual Switches (vSwitches), each of which can create completely independent autonomous sub-networks. The CloudEngine datacentre switches, when used with Huawei’s Agile Controller, can create the right conditions for a completely software defined network when the time comes.

Abraxas needed to make more efficient use of its IT resources and to create the foundation for a strategy to migrate services onto its datacentres, said Olaf Sonderegger, ICT Architect, Infrastructure Management at Abraxas. But it also had to prepare for the virtualised future, said Sonderegger. “In order to fulfil sustainable service development, our datacentre network architecture has to be flexible enough to evolve into SDN-enabled architecture,” said Sonderegger.

Verizon announces IBM integration partnership for SCI customers

Verizon has announced IBM as the latest partner in its Secure Cloud Interconnect (SCI) service, bringing the total to eight cloud service options for its clients.

Verizon Secure Cloud Interconnect customers can now connect to IBM Cloud data centre sites in Dallas and San Jose in the US and Tokyo and Sydney in the Asia Pacific region. Two additional sites are planned in Europe for the beginning of 2016.

The Verizon Interconnect supports IBM’s broader portfolio of Direct Link services, which allow customers to link their existing IT infrastructure to cloud compute resources on the IBM Cloud. The service has three offerings, Cloud Exchange, Network Service Provider (NSP) and Colocation, in a range it says will cover all public, private and hybrid eventualities.

The new IBM Cloud addition means Verizon’s Secure Cloud Interconnect now offers access to eight cloud providers. It already has links with AWS, Google Cloud Platform, HPE Rapid Connect, Microsoft ExpressRoute for Office 365, Microsoft Azure ExpressRoute, Microsoft Azure Government and Verizon’s own cloud service, along with services from data centre providers Coresite, Equinix and Verizon. The service is available at 50 global locations in the Americas, Latin America, Europe and the Asia-Pacific region.

Users of Verizon’s Secure Cloud Interconnect are promised a direct line to IBM Cloud services through a secure, flexible private link that promises to move workloads easily between clouds. Verizon says it gives enterprise clients more options for storing data: the new service brings a variety of settings, which means data can be stored in a traditional IT environment, a dedicated on- or off-premises cloud, or a shared off-premises cloud. This, says Verizon, makes the adoption of a hybrid cloud more achievable and provides a cloud computing estate that is easier to adjust as business requirements change.

“With SDN at the heart of our Secure Cloud Interconnect solution, IBM customers will find it delivers an unbeatable combination,” said Shawn Hakl, VP of enterprise networking and innovation for Verizon. Yesterday Telecoms.com reported on a similar deal between HPE and NTT.

Elsewhere, Verizon has also announced the expansion of its IoT portfolio, launching what it claims is the world’s first Cat 1 LTE network feature for IoT. In addition, it announced that it will give developers additional tools on its ThingSpace platform, with more application programme interfaces (APIs) and application enablement platforms (AEPs), including an integration of Bug Labs’ dweet API and freeboard visualisation engine.