Mobile and the Internet of Things (IoT) have thrust many businesses into the era of the truly digital enterprise. This means opening up new channels and business opportunities. APIs are the key, enabling the digital enterprise to connect itself to myriad external apps and partners while cracking open an equally expansive range of fresh internal connections. Everything a digital enterprise touches through a mobile or IoT device is backed by an API.
As an enterprise, you are consuming more data and services as APIs from third parties than ever before. The question, however, is whether you are getting the service level you are paying for from those third-party APIs, and whether you can put a stop to the he-said/she-said arguments with support when something goes wrong.
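One practical way to settle those arguments is to keep your own evidence. As a minimal sketch (the endpoint, thresholds and helper names below are illustrative assumptions, not any particular vendor's API), a lightweight probe can periodically time calls to a third-party API and compare the results against the service level you are paying for:

```python
import time
import urllib.request

# Hypothetical endpoint and SLA thresholds -- substitute your own contract values.
API_URL = "https://api.example.com/v1/status"
SLA_MAX_LATENCY_MS = 500      # assumed response-time target
SLA_MIN_AVAILABILITY = 0.999  # assumed availability target


def probe(url: str) -> tuple[bool, float]:
    """Call the API once and return (success, latency in milliseconds)."""
    start = time.monotonic()
    try:
        with urllib.request.urlopen(url, timeout=5) as resp:
            ok = 200 <= resp.status < 300
    except OSError:
        ok = False
    return ok, (time.monotonic() - start) * 1000


if __name__ == "__main__":
    results = [probe(API_URL) for _ in range(10)]
    availability = sum(ok for ok, _ in results) / len(results)
    worst_ms = max(ms for _, ms in results)
    print(f"availability={availability:.3f} worst_latency={worst_ms:.0f}ms")
    if availability < SLA_MIN_AVAILABILITY or worst_ms > SLA_MAX_LATENCY_MS:
        print("Possible SLA breach -- keep this log as evidence for support.")
```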
Modular data centres: Standardise your way to a customised cloud
It may seem counterintuitive, but enterprises and smaller firms can standardise their way to their own hybrid cloud model through smart use of modular data centres.
Large enterprises (and the rest of us) have tended to see pre-fabricated modular data centres (or containers) as an emergency option for extra capacity or one-off events. However, they have one crucial advantage over installed data centres: the majority of data centres are designed, installed and tested in situ, so their performance is rarely optimised, whereas containers are standardised and tested for optimum performance from the off.
Cloud for enterprises and smaller firms?
The pressure is certainly on business units to deliver cloud to boost their competitiveness: Gartner predicts that most enterprises will have some hybrid cloud deployment by 2018, and the Data Center Users’ Group predicts that two-thirds of data centre computing will be done in the cloud in 2025. Development has, however, focused on enterprise-level innovations, such as converged infrastructures based on the Vblock class of units – but what are the options for smaller enterprises, mid-range organisations or even fast-growing small firms?
This question is being addressed by a new breed of providers now delivering a ‘data centre in a box’ – with integral racks, cooling, UPS and generators, all built and tested in controlled environments before rapid deployment. This pre-fabrication delivers two crucial advantages over data centres added to existing buildings: first, pre-fabricated modules (PFMs) enable users to run software-defined virtual machines on the same hardware infrastructure; and second, they ensure that the key elements of a data centre – the servers, uninterruptible power supplies (UPS) and cooling – all work in harmony.
More than the sum of their parts
With containerised data centres, converged infrastructures that were mainly the preserve of larger enterprises are fast becoming a practical proposition for smaller firms, because they enable different N / N+1 / 2N configurations for converged infrastructure as well as for UPS and cooling. As a result, businesses and data centre solution providers alike can contain risks and identify CAPEX/OPEX costs before deploying their hybrid cloud or extra compute capacity options. Some suppliers’ standard units provide Vblock container options for buyers to customise their approach.
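To make the N / N+1 / 2N trade-off concrete, here is a minimal sketch of how the module count – and therefore cost – scales with each redundancy level. The load and module capacity figures are illustrative assumptions, not supplier data:

```python
import math

def modules_required(it_load_kw: float, module_capacity_kw: float, scheme: str) -> int:
    """Number of power/cooling modules needed for a given redundancy scheme."""
    n = math.ceil(it_load_kw / module_capacity_kw)  # N: just enough capacity
    if scheme == "N":
        return n
    if scheme == "N+1":
        return n + 1      # one spare module
    if scheme == "2N":
        return 2 * n      # a fully duplicated path
    raise ValueError(f"unknown scheme: {scheme}")

# Illustrative example: 50 kW of IT load served by 25 kW modules.
for scheme in ("N", "N+1", "2N"):
    print(scheme, modules_required(50, 25, scheme))  # N -> 2, N+1 -> 3, 2N -> 4
```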
Using PFMs also means that all the different pre-tested infrastructure elements work efficiently from the outset. Because containers are self-contained with modular elements, server racks, UPS and refrigeration units can all be swapped while the required hot/cold air flows are maintained, even if the unit is added to an existing building. The integrated design delivers airflows that facilitate pre-tested direct expansion-based refrigeration technologies or low-cost direct air evaporative cooling systems; there is no need for less efficient and more costly items such as CRAC units.
Small-footprint units can be quickly added to existing data centre rooms or office car parks to deal with expansion demands. They also open up possibilities that are too often ignored, such as locating the data centre’s refrigeration compressors externally for low-cost ‘free cooling’, where lower-temperature ambient air from outside (rather than warm air from elsewhere in the data centre) is drawn into the cooling equipment.
In terms of location and running costs, PFM units can slash OPEX in a way that only the largest technology firms are starting to achieve consistently with their huge-scale, self-designed and self-built data centres.
Predictable costs
One UK supplier reckons that containerised data centres deliver savings of 60% or more against the Uptime Institute’s 2013 cost model, which projected a cost of $10 per watt for adding data centre capacity to existing buildings (based on architecture and engineering fees, land, building core, shell, mechanical and electrical systems and fire protection, but not IT migration costs), because standard units deliver a PUE (power usage effectiveness) of near 1.0 from the outset.
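The effect of PUE on running costs is easy to sketch. The figures below are illustrative assumptions (load, electricity price and a typical legacy PUE), not numbers from the supplier or from the Uptime Institute model quoted above:

```python
# Rough annual energy-cost comparison driven by PUE (power usage effectiveness).
IT_LOAD_KW = 50        # assumed IT load of the container
LEGACY_PUE = 1.8       # assumed PUE of a typical retrofitted data centre room
CONTAINER_PUE = 1.1    # the "near 1.0" claimed for a pre-fabricated unit
PRICE_PER_KWH = 0.12   # assumed electricity price in GBP
HOURS_PER_YEAR = 24 * 365

def annual_energy_cost(it_load_kw: float, pue: float) -> float:
    """Total facility energy cost for a year: IT load scaled up by PUE."""
    return it_load_kw * pue * HOURS_PER_YEAR * PRICE_PER_KWH

legacy = annual_energy_cost(IT_LOAD_KW, LEGACY_PUE)
container = annual_energy_cost(IT_LOAD_KW, CONTAINER_PUE)
print(f"legacy:    £{legacy:,.0f} per year")
print(f"container: £{container:,.0f} per year")
print(f"saving:    £{legacy - container:,.0f} per year")
```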
Buyers could make ten-year OPEX savings of approximately £21,000 over Uptime Institute models for an entry-level 10 ft, 6 kW, 2-rack 42U/600/1000 container, or £175,200 for a 40 ft, 50 kW, 16-rack 42U/600/1000 unit.
Containerised units deliver predictable costs – something that is rare where existing data centre setups mean complex reconfiguration, extended testing and thus open-ended cost calculations.
Of course, containers ensure faster deployment, but what of the old charge that containers are emergency items, not robust enough for enterprise-grade or harsh environments? Today’s pre-tested units meet security and fire-resistance ratings of RF0/RF30/RF60 (EN 1047-2) and water and dust protection to IP64 (EN 60529). Suppliers can provide fully insulated units with fire detection and suppression to EN 12094-1, along with effective web-based security and environmental monitoring. Readers will not have forgotten the recent images of a container unit that survived the fire that levelled the rest of an office building.
Known performance and costs
In recent times there has been excitement around converged infrastructure (CI) vendors and their Vblock options, which free large enterprises to virtualise and develop their cloud options. However, the standardisation of data centre units achieved through pre-designed and tested PFMs delivers a combined CI/power/cooling performance uplift – the type of improvement that adapting an existing data centre in an office building simply cannot match.
Analysing the rise of the distributed cloud model
The current model for public cloud computing is an optimisation of the traditional construct of internet-facing data centres: maximise the scale of the facility to serve as broad a market as possible, using the Internet as the means of distribution.
This model is wedded to the separation of compute from the ‘dumb network’ – or, to use the original terminology of some 40 years ago, we are still in a world where we separate processing (computing) from inter-process communication (the network).
The convenience of this separated relationship has brought us the Internet economy: an autonomous network that can deliver anything anywhere. The rise of cloud computing ‘as a service’ exploits this Internet delivery model, removing the need to buy and build your own data centres.
If buying digital infrastructure as a service is the future for computing, then the next question is whether the current architecture of cloud computing is the final resting place, or whether evolution will take us elsewhere.
The clue to the future direction is the current level of investment and activity around how to improve and secure the communication between clouds, and between legacy assets and the cloud. Software-defined networking and the numerous cloud connect/exchange products all seek to improve the level of communication between clouds, simplifying the complex world of networks or trying to bypass them by ‘overlaying’ relationships across the still-dumb network. Conversely, those in search of surety look to the narrow and legacy-oriented approach of the ‘meeting room’ model of neutral colocation providers with ‘direct cloud connect’ products.
The challenge is that none of these approaches moves us on fundamentally. And the elephant in the room is that many are not yet ready to jettison the concept of massive-scale data centres with their separated access.
Instead of seeing separate processing pools, connected by ubiquitous-but-murky access or limited-but-assured access, I would invite you to think about a DISTRIBUTED CLOUD where you can distribute and run workloads anywhere. Processing and storage are wherever and whenever you need them to be, for reasons of latency, language, resilience or data sovereignty. The network is appropriately mobile, secure, assured or ubiquitous, according to location.
The key to creating such a distributed cloud is to build the compute into the network. And Interoute has already done this. We have created a distributed, global cloud which offers very low latency, private and public networking with a global pool of computing and storage that you can place anywhere. By deploying network technologies like MPLS we are able to provide logical separation and security for customers, allowing them to build a ‘single tenant’ infrastructure on our global network, as if it was their own.
The distributed cloud model supports the rise in the use of container technologies like Docker, where the developer abstracts from the data centre infrastructure to a distributed computational environment populated by containers. It is unnecessary for the developer to have to ‘go under the hood’ and create static routing relationships between virtual machines. The goal is to provide simple addressing to each application. Add to this the possibility to create resilient, scalable clusters which straddle multiple nodes, without the constraint of traditional routing.
This forward-thinking approach also answers the needs of those running legacy applications, where you want to consolidate or migrate workloads into the cloud without having to jump to a whole new ‘Internet only’ model, which delays implementation, and for the enterprise that mostly means delays in competitiveness and knowledge.
Here in Europe we are sensitive about the idea of ‘one cloud location to rule them all’, not only because of the data sovereignty issue but also because of latency and the languages of Europe. If you are building a website in Spain, then 90% of your market is going to be in Spain, and having your ‘processing’ in Ireland or the UK is an unnecessary complication and expense for data traffic that should predominantly stay local. That distance to the processing location hinders performance and throughput, so you get a slower cloud for your money, or you must upgrade to a more expensive one.
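To make that placement logic concrete, here is a minimal sketch of a scheduler that filters candidate locations by a data-sovereignty rule and then picks the lowest-latency zone. The zone names and latency figures are purely illustrative assumptions, not real network measurements or any provider's API:

```python
from dataclasses import dataclass

@dataclass
class Zone:
    name: str
    country: str
    latency_ms: float  # assumed latency from the workload's main user base

# Illustrative zones and latencies only.
ZONES = [
    Zone("madrid-1", "ES", 8.0),
    Zone("london-2", "GB", 32.0),
    Zone("dublin-1", "IE", 38.0),
]

def place_workload(zones: list[Zone], allowed_countries: set[str]) -> Zone:
    """Keep only zones that satisfy the sovereignty rule, then minimise latency."""
    candidates = [z for z in zones if z.country in allowed_countries]
    if not candidates:
        raise RuntimeError("no zone satisfies the data-sovereignty constraint")
    return min(candidates, key=lambda z: z.latency_ms)

# A Spanish workload whose data must stay in Spain lands in the Madrid zone.
print(place_workload(ZONES, {"ES"}).name)  # madrid-1
```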
The evolution of the distributed cloud which I envision builds on the technological core of ‘Cloud 1.0’ – fully elastic resources and scale – but moves forward by combining, in an intelligent way, the twin elements of the digital economy: the network and the computer. Once this model takes hold, the ability to evolve applications toward higher levels of availability and resilience accelerates, and all those efforts and products that are trying to simplify the management of networks simply disappear, as the ‘network’ just works as part of the platform.
I feel we are harking back to the 1980s, when John Gage touted the idea that the network is the computer. Thirty years on, we are finally there.
Microsoft Obtains ISO Cloud Privacy Certification
When it comes to cloud computing and services, privacy is at the front of every company’s mind. When the United States began to demand access to cloud-based data held in Microsoft’s Ireland data center, customers recognized that their information might not be safe from privacy violations even if it is not resident in the US. Many industry players, including Microsoft, have started to fight these demands. Whatever the outcome, either the EU or the US government is unlikely to be happy.
Microsoft maintains that customers own their data, not the cloud providers they store it with. Microsoft claims to be the first major cloud provider to adopt ISO/IEC 27018, the first global standard for cloud privacy, and many of Microsoft’s programs have been evaluated for compliance by the British Standards Institution.
ISO/IEC 27018 establishes commonly accepted control objectives and guidelines for implementing measures to protect personally identifiable information (PII), in line with the privacy principles of ISO/IEC 29100. Microsoft’s general counsel Brad Smith said the company is optimistic that the standard can serve as a template for regulators and customers alike, as both want strong privacy protection. Adherence to the standard protects customers’ privacy in several ways.
First, customers remain in charge of their data, and Microsoft will only process personally identifiable information according to the customer’s instructions. Second, customers will always know what is happening to their data: all returns, transfers and deletions of data will be transparent. Third, there are restrictions on how Microsoft handles personal data, including restrictions on its transmission over public networks and storage on transportable media, as well as defined processes for data recovery. Fourth, the data will not be used for advertising purposes. Lastly, Microsoft will inform its customers about government access to data: the standard requires that law enforcement requests for data be disclosed to customers.
Adherence to this standard is an important move to reassure Microsoft’s enterprise customers that their information is safe. However, executing on these promises will count for more than making them. There are still lingering concerns and fears about data privacy and security around shifting to the cloud, so Microsoft’s announcement is a step in the right direction.
Continuous Delivery, Real-Time Test Analysis By @XebiaLabs | @DevOpsSummit [#DevOps]
You have to plan your Continuous Delivery pipeline with quality in mind from the outset. The only way to effectively do that is to design tests before development really begins, to continually collect metrics, and to build a test automation architecture integrated into your Continuous Delivery pipeline. The test automation architecture defines the setup of test tools and tests throughout the pipeline and should support flexibility and adaptability to help you meet your business objectives.
This test automation architecture should facilitate easy selection of just the right tests to match the flow of your software through the pipeline as it gets closer to release: from unit tests and code-level security checks, through functional and component testing, to integration and performance testing. The further the software progresses along the pipeline, the greater the number of dependencies on other systems, data and infrastructure components – and the more difficult it is to harmonize these variables. Making sane selections of the tests to run at each moment in the pipeline becomes more and more important in order to focus on the area of interest.
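As a rough illustration of that stage-based selection – the stage names, suite names and mapping below are hypothetical, not any particular tool's configuration – a pipeline can map each phase to the test suites that make sense at that point and run only those:

```python
# Map each pipeline stage to the kinds of tests worth running there.
# Stage and suite names are illustrative only.
STAGE_TESTS = {
    "commit":      ["unit", "static-code-security"],
    "component":   ["unit", "component", "contract"],
    "integration": ["integration", "api"],
    "pre-release": ["end-to-end", "performance"],
}

def tests_for(stage: str, available_suites: set[str]) -> list[str]:
    """Select the suites defined for this stage that actually exist in the repo."""
    return [suite for suite in STAGE_TESTS.get(stage, []) if suite in available_suites]

if __name__ == "__main__":
    suites_in_repo = {"unit", "component", "integration", "api", "performance"}
    for stage in STAGE_TESTS:
        print(f"{stage:12s} -> {tests_for(stage, suites_in_repo)}")
```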
Why Database as a Service Is Like a Stack of Pancakes | @CloudExpo [#Cloud]
Life may be like a box of chocolates, but Database as a Service is like a stack of pancakes. Let us count the ways…
Variety is the spice of life – The last time I looked at the International House of Pancakes (IHOP) menu, there were 16 different kinds of pancakes to choose from. OpenStack Trove doesn’t yet support quite that many databases, but between NoSQL and traditional relational databases, it already supports nine different database engines with more coming all the time.
Avoid Cloud Migration Failure By @HawkinsJohn | @CloudExpo [#Cloud]
The reasons for migrating business applications to the cloud are generally positive and abundant. However, with the positives come a few negatives. Unfortunately, organizations can quickly run into a number of roadblocks during their migration projects. The critical question that needs to be asked is why? The answer is rather simple. As with many business-critical projects, the devil is found in the details.
Some common factors that contribute to issues with cloud migrations are an inability to align the business with IT, improperly set performance expectations, and a lack of retooling of the organization to support the target architecture. These three factors must be considered before a migration happens.
The New War on Cyberattacks By @Vormetric | @CloudExpo [#Cloud]
When it comes to cybersecurity initiatives, the U.S. government has not taken a back seat. Perhaps owing to the number of high-profile breaches and damaging insider attacks that have occurred in the past few years, this White House in particular has been very vocal about the federal government’s role in putting a stop to cyberattacks.
Recent examples of this include Obama’s visit to Stanford University to meet with CEOs and CSOs at major companies to discuss cybersecurity challenges, the establishment of a cybersecurity center to collect cybersecurity intelligence, and the signing of an executive order to promote information sharing. In January, Congress also kicked off hearings about a national breach notification law; the law’s tenets would, in most cases, mandate breach notification to consumers within 30 days. Currently, there are 51 differing state and territorial breach notification rules in effect. The proposed federal data breach legislation would replace this patchwork of varied laws with a single set of rules across the country.
Docker Acquires SDN Startup | @Docker @DevOpsSummit [#SDN #DevOps]
Docker has acquired software-defined networking (SDN) startup SocketPlane. SocketPlane, which was founded in Q4 2014 with a vision of delivering Docker-native networking, has been an active participant in shaping the initial efforts around Docker’s open API for networking. The explicit focus of the SocketPlane team within Docker will be on collaborating with the partner community to complete a rich set of networking APIs that addresses the needs of application developers and network and system administrators alike.
Future Growth For @CommVault | @CloudExpo [#Cloud]
CommVault has announced that top industry technology visionaries have joined its leadership team. The addition of leaders from companies such as Oracle, SAP, Microsoft, Cisco, PwC and EMC signals the continuation of CommVault Next, the company’s business transformation for sales, go-to-market strategies, pricing and packaging and technology innovation. The company also announced that it had realigned its structure to create business units to more directly match how customers evaluate, deploy, operate, and purchase technology.