Deutsche Telekom launches pan-European public cloud on Cisco platform

Deutsche Telekom has announced the launch of a new pan-European public cloud service aimed at businesses of all sizes. The debut offering is DSI Intercloud, run by T-Systems, which will provide Infrastructure as a Service (IaaS) to businesses across Europe. Software as a Service (SaaS) and Platform as a Service (PaaS) offerings will follow in the first half of 2016.

The service, built on a Cisco platform by T-Systems, the business division of Deutsche Telekom, will run from German data centres and be subject to Germany’s data sovereignty regulations.

The pay-as-you-go cloud services can be ordered through Telekom’s new cloud portal, with no minimum purchase requirements or contract periods. Prices start at €0.05 per hour for computing resources and €0.02 per gigabyte for storage. Deutsche Telekom said it hopes to create the foundation for a secure European Internet of Things, with high availability and scalability for real-time analytics.
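As a rough illustration of how those pay-as-you-go rates add up, the sketch below estimates a monthly bill from the quoted prices; the instance count, running hours and storage volume are illustrative assumptions, as is the monthly billing period for storage.

```python
# Rough monthly cost estimate using the prices quoted above:
# EUR 0.05 per compute hour and EUR 0.02 per gigabyte of storage.
# Workload figures below are illustrative assumptions only.

COMPUTE_PRICE_PER_HOUR = 0.05   # EUR per instance hour
STORAGE_PRICE_PER_GB = 0.02     # EUR per GB (monthly billing period assumed)

def monthly_cost(instances: int, hours: float, storage_gb: float) -> float:
    """Compute cost for a number of instances plus a storage volume."""
    return instances * hours * COMPUTE_PRICE_PER_HOUR + storage_gb * STORAGE_PRICE_PER_GB

# e.g. two instances running around the clock (roughly 730 hours) plus 500 GB of storage
print(f"EUR {monthly_cost(2, 730, 500):.2f}")  # EUR 83.00
```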

Data security company Covata piloted the platform and will be the first customer to use the DSI Intercloud infrastructure service. Another beta tester was communications company Unify, which used it to investigate the viability of open source cloud platforms running from German data centres.

The new DSI Intercloud marks the latest chapter in the Cisco Intercloud initiative. In June, BCN reported how Cisco had bolstered the Intercloud, which it launched in 2014, with 35 partnerships as it aimed to simplify hybrid clouds. Cisco and Deutsche Telekom say they will focus on delivering high availability and scalability for real-time analytics at the edge of the network, in order to cater for IoT applications. Big data analytics at the network edge is set to become a key concept in the IoT, BCN reported in December. Last week Hewlett Packard Enterprise (HPE) revealed how it is helping IoT system users to decentralise their processing jobs and devolve decision making to local areas. The rationale is to keep masses of data off the networks and deal with it locally.

Deutsche Telekom said the Cisco partnership is an important building block in expanding its cloud business, as it aims to at least double its cloud revenue by the end of 2018. In fiscal year 2014, net sales of cloud solutions at T-Systems grew by double digits, mainly in secure private clouds.

Red Hat helps Medlab share supercomputer in the cloud

A cloud of bioinformatics intelligence has been harmonised by Red Hat to create ‘virtual supercomputers’ that can be shared by the eMedLab collective of research institutes.

The upshot is that researchers at institutes such as the Wellcome Trust Sanger, UCL and King’s College London can carry out much more powerful data analysis when researching cancers, cardio-vascular conditions and rare diseases.

Since 2014, hundreds of researchers across eMedLab have been able to use a high performance computing (HPC) facility with 6,000 processing cores and 6 petabytes of storage from their own locations. However, the cloud environment now created by technology partners Red Hat, Lenovo, IBM and Mellanox, along with supercomputing integrator OCF, means none of the users has to move their data to the computer. Each of the seven institutes can configure its share of the HPC facility according to its needs, self-selecting the memory, processors and storage it requires.

The new HPC cloud environment uses the Red Hat Enterprise Linux OpenStack Platform with Lenovo Flex hardware to create virtual HPC clusters tailored to each individual researcher’s requirements. The system was designed and configured by OCF, working with partners Red Hat, Lenovo, Mellanox and eMedLab’s research technologists.
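As a rough sketch of what self-selecting a virtual cluster on an OpenStack-based platform can look like, the snippet below provisions a single node with the openstacksdk Python library. The cloud name, image, flavor and network are placeholders, not eMedLab’s actual configuration.

```python
# Minimal sketch: provisioning one node of a virtual cluster on an
# OpenStack cloud with openstacksdk. All names below are assumptions,
# not eMedLab's real images, flavors or networks.

import openstack

# Credentials and endpoint come from a clouds.yaml entry named "emedlab" (assumed).
conn = openstack.connect(cloud="emedlab")

server = conn.create_server(
    name="hpc-node-01",
    image="rhel-7-hpc",          # assumed image name
    flavor="m1.xlarge",          # memory/CPU share chosen by the researcher
    network="project-private",   # assumed tenant network
    wait=True,                   # block until the node is ACTIVE
)
print(server.status)
```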

With the HPC hosted at a shared data centre for education and research, the cloud configuration has made it possible to run a variety of research projects concurrently. The facility, aimed solely at the biomedical research sector, changes the way data sets are shared between leading scientific institutions internationally.

The eMedLab partnership was formed in 2014 with funding from the Medical Research Council. Original members University College London, Queen Mary University of London, London School of Hygiene & Tropical Medicine, the Francis Crick Institute, the Wellcome Trust Sanger Institute and the EMBL European Bioinformatics Institute have been joined recently by King’s College London.

“Bioinformatics is a very data-intensive discipline,” says Jacky Pallas, Director of Research Platforms at University College London. “We study a lot of de-identified, anonymous human data. It’s not practical for scientists to replicate the same datasets across their own separate physical HPC resources, so we’re creating a single store for up to 6 petabytes of data and a shared HPC environment within which researchers can build their own virtual clusters to support their work.”

In other news, Red Hat has announced an upgrade to CloudForms offering better hybrid cloud management through expanded support for Microsoft Azure, advanced container management and improvements to its self-service features.

[video] Optimizing Hybrid Cloud Environments | @CloudExpo @IBMcloud #Cloud

This video highlights IBM’s capabilities for managing and optimizing hybrid cloud environments. Learn how to balance the needs of business with the needs of corporate IT departments. Now you can maximize speed while maintaining the ability to deploy complex applications quickly in a controlled and secure way across a variety of cloud implementations.


Why You Can’t Talk About Microservices Without Mentioning Netflix | @CloudExpo #API #Microservices

About six years ago, Netflix began the move from a monolithic to a cloud-based microservices architecture, openly documenting the journey along the way. Netflix is one of the earliest adopters of microservices, a term that didn’t even exist when the company began moving away from its monolith. Today, the Netflix application is powered by an architecture featuring an API gateway that handles about two billion edge API requests every day, routing them to more than 500 microservices. Netflix has been so successful with this architecture that the company has open sourced a great deal of its platform, including the technologies powering its microservices. Netflix has become one of the best-known examples of a modern microservices architecture; if an article mentions microservices, odds are it also mentions Netflix.
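For illustration only, the sketch below shows the basic idea behind an API gateway: a routing table that maps edge request paths onto backend microservices. The service names, paths and ports are hypothetical and are not Netflix’s actual services or tooling.

```python
# Minimal sketch of the API-gateway pattern described above: the gateway
# resolves each incoming edge request to the microservice that owns that
# part of the API. All routes below are hypothetical examples.

ROUTES = {
    "/api/recommendations": "http://recommendations-service:8080",
    "/api/playback":        "http://playback-service:8080",
    "/api/profiles":        "http://profiles-service:8080",
}

def resolve_backend(request_path: str) -> str:
    """Pick the backend microservice that should handle an edge request."""
    for prefix, backend in ROUTES.items():
        if request_path.startswith(prefix):
            return backend
    raise LookupError(f"no microservice registered for {request_path}")

if __name__ == "__main__":
    print(resolve_backend("/api/playback/start"))  # -> http://playback-service:8080
```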


MapR claims world’s first converged data platform with Streams

Apache Hadoop system specialist MapR Technologies claims it has invented a new system to make sense of all the disjointed streams of real-time information flooding into big data platforms. The new MapR Streams system will, it says, blend everything from system logs to sensor feeds to social media streams, whether transactional or tracking data, and manage it all on one converged platform.

Streams is described as a stream processing tool offering real-time event handling and high scalability. When combined with other MapR offerings, it can harmonise existing storage data and NoSQL tools to create a converged data platform, which the company says is the first of its kind in the cloud industry.

Starting from early 2016, when the technology becomes available, cloud operators can combine Streams with MapR-FS for storage and the MapR-DB in-Hadoop NoSQL database to build the MapR Converged Data Platform. This will free users from having to monitor information from streams, file storage, databases and analytics separately, the vendor says.

Since it can handle billions of messages per second and join clusters from separate data centres across the globe, the tool could be of particular interest to cloud operators, according to Michael Brown, CTO at comScore. “Our system analyses over 65 billion new events a day, and MapR Streams is built to ingest and process these events in real time, opening the doors to a new level of product offerings for our customers,” he said.

While traditional workloads are being optimised, new workloads from emerging IoT dataflows present far greater challenges that need to be solved in a fraction of the time, claims MapR. MapR Streams will help companies deal with the volume, variety and speed at which data has to be analysed, while simplifying the multiple layers of hardware stacks, networking and data processing systems, according to the company. Blending MapR Streams into a converged data system eliminates the separate silos of data for streaming, analytics and traditional systems of record, MapR claimed.

MapR Streams supports standard application programming interfaces (APIs) and integrates with other popular stream processors such as Spark Streaming, Storm, Flink and Apex. When available, the MapR Converged Data Platform will be offered as a free-to-use Community Edition to encourage developers to experiment.
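Assuming the Kafka-compatible producer API that MapR Streams is designed around, publishing an event could look something like the sketch below, which uses the open source kafka-python client as a stand-in for MapR’s own client libraries. The broker address and topic name are illustrative assumptions; on MapR Streams, a topic is typically addressed as “&lt;stream-path&gt;:&lt;topic&gt;” rather than a bare topic name.

```python
# Minimal sketch of publishing events through a Kafka-style producer API,
# using the open source kafka-python client as a stand-in for MapR's
# Kafka-compatible client. Broker address and topic name are assumptions;
# on MapR Streams a topic is usually addressed as "<stream-path>:<topic>".

import json
from kafka import KafkaProducer

producer = KafkaProducer(
    bootstrap_servers="localhost:9092",
    value_serializer=lambda event: json.dumps(event).encode("utf-8"),
)

# e.g. a sensor reading flowing into the converged platform
producer.send("sensor-readings", {"device": "pump-42", "temp_c": 71.3})
producer.flush()
```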

The Utopia of API Documentation | @CloudExpo #API #Cloud

It has been proven time and again how much API documentation matters to your developer experience; in fact, it matters more than almost anything else to whether your API is adopted or not. And developer experience certainly matters to your overall bottom line. After all, in the world of the application programming interface (API), developers are your users, and therefore their user experience matters most.
There’s no doubt your API documentation has to be sexy, but, as sexiness is in the eye of the beholder, there’s a lot of debate about just what that means. Today, SmartBear sits down with Arnaud Lauret of AXA Banque (a.k.a. the API Handyman) to talk about this idea of documentation utopia and his vision of an ideal world where APIs and documentation live in perfect harmony.


How much money is your cloud slurping up? Businesses still in the dark, says survey


Many organisations continue to be left in the dark when it comes to providing details on cloud costs and consumption, according to research released by Cloud Cruiser.

The survey, which polled almost 350 IT professionals attending the AWS re:Invent shows in 2014 and 2015, found that 42% of respondents say it continues to be difficult to properly allocate public cloud usage and costs, even though 85% believe it is valuable to share cloud consumption metrics with the business.

AWS usage has become universal among the companies attending AWS re:Invent over the past year, the research reveals. Whereas 7% of respondents in the 2014 survey said they were not yet using AWS, that figure dropped to 0% this year.

Similarly, the number of those using AWS for more business-critical operations continues to grow. Almost half (45%) of those polled say they use Amazon’s cloud for enterprise applications such as ERP, CRM, HR and email, compared to 38% in 2014, while the number using AWS for software development and testing has also risen (62% in 2015 against 60% in 2014). The percentage of companies using AWS as a sandbox for testing public cloud offerings has fallen year on year.

AWS made changes to its ‘reserved instance’ (RI) model, whereby users can reserve EC2 compute capacity for one or three years at a discounted hourly rate, this time last year. RI usage naturally indicates a long-term commitment to Amazon’s cloud, and this is borne out in the survey results: 61% said they expect to increase their use of RIs, while 6% do not expect a change in use and 2% expect a decrease. 29% of respondents said they do not use RIs, while 2% did not know what they were.
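For readers who want to check their own RI position, the short sketch below lists active reserved instances with the boto3 AWS SDK. The region is an illustrative assumption, and credentials are assumed to be configured externally (environment variables or ~/.aws/credentials).

```python
# Minimal sketch: list an account's active EC2 reserved instances with boto3.
# Region is an assumption; credentials are expected to be configured externally.

import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

response = ec2.describe_reserved_instances(
    Filters=[{"Name": "state", "Values": ["active"]}]
)

for ri in response["ReservedInstances"]:
    # Instance type, how many are reserved, and when the reservation expires
    print(ri["InstanceType"], ri["InstanceCount"], ri["End"])
```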

Event respondents in 2014 were more likely to supplement AWS with other offerings, such as Microsoft Azure and Google Cloud Platform. This year, growing confidence in public cloud means the tables have turned: 43% say they use no additional resources beyond AWS, compared to 27% the year before.

This shows the benefits of public cloud are becoming more apparent to business – but at a price, argues Deirdre Mahon, Cloud Cruiser chief marketing officer. “Lack of usage visibility and cost transparency are key problem areas which have in turn created demand for new, flexible and easy to use solutions that drive efficiency, cost savings and a way to hold business users accountable for what they are consuming,” she explained.

“Investing in the right cloud efficiency solutions will continue to be critical for businesses of all sizes, even during the early stages of adoption.”

Are you surprised at the results of this cloud survey? Let us know your thoughts in the comments.

Microsoft goes open source on Chakra JavaScript engine

Microsoft is to make the Chakra JavaScript engine open source and will publish the code on its GitHub page next month. The rationale is to extend the functions of the code, used in the Edge and Internet Explorer 9 browsers, to a much wider role.

The open source version of the Chakra engine is to be known as ChakraCore. Announcing the change at the JavaScript conference JSConf US in Florida, Microsoft said it intends to run ChakraCore’s development as a community project, which both Intel and AMD have expressed interest in joining. Initially the code will be for Windows only, but the rationale behind the open source strategy is to take ChakraCore across platforms, repeating the exercise it pioneered with .NET.

In a statement, Gaurav Seth, Microsoft’s Principal Programme Manager, explained that as JavaScript’s role widens, so must the community of developers that supports it, and opening up the code base will help support that growth.

“Since Chakra’s inception, JavaScript has expanded from a language that primarily powered the web browser experience to a technology that supports apps in stores, server-side applications, cloud-based services, NoSQL databases, game engines, front-end tools and now the Internet of Things,” said Seth. Over time, Chakra evolved to fit many of these roles, which meant that, apart from throughput, Chakra had to support native interoperability and scalability and manage resource consumption. Its interpreter played a key role in moving the technology across platform architectures, but it can only take it so far, said Seth.

“Now we’re taking the next step by giving developers a fully supported and fully open-source JavaScript engine available to embed in their projects, innovate on top of, and contribute back to ChakraCore,” said Seth. The modern JavaScript Engine must go beyond browser work and run everything from small-footprint devices for IoT applications to high-throughput, massively parallel server applications based on cloud technologies, he said.

ChakraCore already fits into any application stack that calls for speed and agility but Microsoft intends to give it greater license to become more versatile and extend beyond the Windows ecosystem, said Seth. “We are committed to bringing ChakraCore to other platforms in the future. We’d invite developers to help us in this pursuit by letting us know which other platforms they’d like to see ChakraCore supported on to help us prioritize future investments, or even by helping port it to the platform of their choice,” said Seth.