Category Archives: hybrid cloud

Microsoft unveils new Azure Stack migration strategy

Microsoft has pledged to build Azure Stack incrementally on a foundation of consistency and continuity. The software-turned-cloud-services vendor has blogged about the next move in its hybrid cloud strategy: later this week it will release the first technical preview of Microsoft Azure Stack.

In deference to the growing number of Azure users who are nervous about committing to the public cloud, Microsoft announced it will provide incremental upgrades and changes on that foundation of continuity and consistency. Azure Stack will use application programming interfaces (APIs) identical to those of Microsoft Azure, and developers will be given guidance on creating .Net or open source apps that can straddle both public and private cloud. Meanwhile, according to Mike Neil, Microsoft's VP of Enterprise Cloud, IT professionals can transform on-premises data centre resources into Azure IaaS/PaaS services without giving up their existing management and automation tools.

Microsoft is seeing nearly 100,000 new Azure subscriptions every month, but many enterprises are wary of going fully public because of data sovereignty and regulatory issues, Neil said. Microsoft's strategy is to build around a client base with one foot in the public cloud and one on-premises, by providing a consistent cloud platform that spans hybrid environments. In a series of technical previews, starting on Friday 29 January, Microsoft will show how Azure Stack innovations built for the hyperscale data centre can be layered onto the hybrid cloud.

Since the APIs are identical, future apps can be written once, deployed to either Azure or Azure Stack, and draw on the Azure ecosystem to jumpstart Azure Stack development efforts. The same management, DevOps and automation tools will apply, said Neil. The application model is based on Azure Resource Manager, so developers can take the same declarative approach to applications whether they run on Azure or Azure Stack. On the tooling side, developers can use Visual Studio, PowerShell and other open source DevOps tools, creating the same end-user experiences as in Azure, Neil said.
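To make the write-once, deploy-anywhere idea concrete, here is a minimal sketch of driving a Resource Manager deployment over the ARM REST API. Token acquisition is omitted, and the Azure Stack URL is a placeholder, since each Azure Stack installation exposes its own management endpoint; this illustrates the consistent API surface rather than reproducing Microsoft's own samples.

```python
import json
import requests

# The management endpoint is the only thing that changes between clouds.
# The Azure Stack URL below is a placeholder: each installation has its own.
AZURE = "https://management.azure.com"
AZURE_STACK = "https://management.local.azurestack.external"  # example only

def deploy_template(base_url, token, subscription_id, resource_group,
                    deployment_name, template):
    """Deploy an Azure Resource Manager template via the ARM REST API.

    The same call works against public Azure and Azure Stack because the
    Resource Manager API surface is consistent across both.
    """
    url = (f"{base_url}/subscriptions/{subscription_id}"
           f"/resourcegroups/{resource_group}"
           f"/providers/Microsoft.Resources/deployments/{deployment_name}")
    body = {"properties": {"template": template, "mode": "Incremental"}}
    resp = requests.put(
        url,
        params={"api-version": "2015-11-01"},
        headers={"Authorization": f"Bearer {token}",
                 "Content-Type": "application/json"},
        data=json.dumps(body),
    )
    resp.raise_for_status()
    return resp.json()
```

In principle, pointing `base_url` at the other cloud is the only change needed to redeploy the same template.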

A series of technical previews will be the vehicle for adding services and content such as OS images and Azure Resource Manager templates. “Azure has hundreds of applications and components on GitHub and as the corresponding services come to Azure Stack, users can take advantage of those as well,” said Neil, who disclosed that open source partners like Canonical are contributing validated Ubuntu Linux images to make open source applications work in Azure Stack environments.

The first technical preview of Azure Stack on Friday 29 January will be followed on 3 February by a webcast featuring Azure CTO Mark Russinovich and Jeffrey Snover, Chief Architect of Enterprise Cloud.

Actifio claims Global Manager will slash costs of managing hybrid cloud data

Virtualisation company Actifio claims its new Global Manager can create the same savings for hybrid cloud managers that its earlier systems achieved in copy data management.

Actifio's virtualisation technology aims to cut costs by preventing the endless, expensive replication of massive data sets by each DevOps team across an enterprise. The new Actifio Global Manager (AGM) offers enterprises and service providers a way to manage data more efficiently across the full lifecycle of applications in hybrid cloud environments.

Actifio claims AGM can scale to thousands of application instances and petabytes of data deployed across private data centres, hybrid and public clouds. After an early access programme with 100 beta testers, Actifio has put AGM on general release, targeting web-scale environments.

Users are evolving towards multi-site, multi-appliance environments and using public cloud infrastructure such as Amazon AWS as part of their data centre. At the same time, data migration and load balancing are becoming increasingly fraught and expensive, according to David Chang, Actifio's Senior VP of Solutions Development.

According to Actifio, the new AGM system will allow companies to save on storage by obviating the need for petabytes of duplicated data, improve service levels, cut capital and operational expenses through software-defined storage and load balancing, simplify capacity management, deepen systems integration and give managers a better view of their virtualised estate.

By helping clients ‘scale up from one to multiple instances’, Actifio said, AGM will manage thousands of applications and petabytes of data independent of hardware infrastructure or physical location. This, it claims, makes for a painless application data lifecycle across private, public or hybrid cloud infrastructures.

After validation testing of Actifio Global Manager and its RESTful API this year, beta tester Net3 Technologies, a cloud service provider, is building it into its automation platform. “Now we can scale and manage the data infrastructure of clients more easily,” said Jeremy Wolfram, Director of Development at Net3 Technologies.

“Actifio Global Manager unshackles the infrastructure dependency and makes it faster and easier for our largest customers and service provider partners to access and manage their data at global web-scale,” said Actifio founder Ash Ashutosh.
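Net3's comment suggests how such automation might look in practice. The sketch below is hypothetical: the host, resource path and header are invented for illustration, since AGM's actual REST schema lives in Actifio's documentation, not this article.

```python
import requests

AGM = "https://agm.example.com/actifio"  # hypothetical AGM address

def list_managed_applications(session_id):
    """List the applications AGM manages across appliances and sites.

    The '/application' path and 'Authorization' header are placeholders;
    consult the AGM REST API documentation for the real schema.
    """
    resp = requests.get(
        f"{AGM}/application",
        headers={"Authorization": session_id},
        verify=False,  # on-premises appliances often use self-signed certs
    )
    resp.raise_for_status()
    return resp.json()
```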

New Service Director from HPE could simplify hybrid cloud management for telcos

HPE claims its new Service Director system could put communications service providers (CSPs) back in control of their increasingly complex hybrid computing estates. It aims to achieve this by simplifying the management of network functions virtualisation (NFV).

HPE says Service Director will automate many of the new management tasks created by the expanding cloud environment, and provide a simpler way to navigate the many functions that must be monitored and managed. The new offering builds on HPE NFV Director's management and orchestration (MANO) capabilities and bridges existing physical and new virtualised environments.

As virtualisation has expanded, it has extended beyond the remit of current generations of operations support systems (OSS), and the coexistence of physical and virtual infrastructure can introduce obstacles that slow CSPs down, HPE said. It claims Service Director will help CSPs roll out new offerings more quickly.

The main benefits of the system, as outlined by HPE, are automation of operations, shared information, flexible service modelling and openness. With a single view of the entire infrastructure and dynamic service descriptors, it aims to make it easier to spot problems and create new services, HPE claims. As an open system, the Service Director platform will offer interfaces to third-party software-defined networking controllers and policy engines.

Since there is no such thing as a greenfield NFV set-up, there has to be a system that rationalises legacy systems and the new virtualised estate, said David Sliter, HPE's communications solutions VP. “Service Director is a transformational change in the relationship between assurance and fulfilment, allowing the OSS resource pool to be treated, automated and managed as a service,” said Sliter.

The telecoms industry needs an all-encompassing service orchestration system that can span every existing NFV MANO and OSS silo, according to Caroline Chappell, principal analyst for NFV and cloud at Heavy Reading. A model-driven fulfilment and assurance system like Service Director could speed up the delivery of services across a hybrid physical and virtual network, Chappell said.

HPE Service Director 1.0 will be available worldwide in early 2016, with options for pre-configured systems to address specific use cases as extensions to the base product, starting with HPE Service Director for vCPE 1.0.

Citrix to sell CloudPlatform and CloudPortal to Accelerite, improve XenApp

Citrix has announced it will sell its CloudPlatform and CloudPortal Business Manager products to infrastructure software vendor Accelerite. The acquisition is expected to close in Q1 2016, subject to conditions.

Accelerite, a subsidiary of Persistent Systems, has recently acquired cloud and virtualisation product lines from HP, Intel and Openwave. Citrix will work with Accelerite to build on CloudPlatform integrations with XenServer, NetScaler and Citrix Workspace Cloud.

CloudPlatform, based on Apache CloudStack, is used to create and run public and private cloud infrastructure services. CloudPortal Business Manager automates provisioning, billing, metering and user management. Its strength is that it allows service providers to deliver a range of cloud services while integrating with existing business, operations and IT systems, according to Nara Rajagopalan, CEO of Accelerite.

The new additions give Accelerite a more complete portfolio, filling a gap in end-to-end lifecycle management for public and private clouds, the company said. Despite the increasing adoption of container technology in the cloud industry, many enterprises struggle to deploy and manage containers. CloudPlatform's simplicity and large customer base provide a means of addressing this emerging shortfall as the industry evolves towards hyper-convergence, Rajagopalan said in a statement.

“Citrix will work closely with Accelerite to build on CloudPlatform integrations with our key offerings that enable the secure delivery of apps and data,” said Steve Wilson, the VP of Core Infrastructure at Citrix.

Citrix will continue to work with both the OpenStack and CloudStack open source communities to optimise NetScaler, XenServer and Citrix Workspace Cloud for those platforms.

Meanwhile, at Citrix Summit 2016 in Las Vegas, Citrix announced that new releases of XenApp and XenDesktop are available for download. The new XenDesktop 7.7 release is a product of collaboration between Citrix and Microsoft, and promises new cloud provisioning and collaboration options. The new versions will improve the flexibility of the FlexCast Management Architecture (FMA) across multiple geographical locations, Citrix claims.

Among the promised improvements are a fully native Skype for Business user experience within a virtual app or desktop, along with high-quality voice and video. The new versions will make it easier to set up virtual desktops in Microsoft Azure using the Machine Creation Services (MCS) feature of XenApp and XenDesktop. Citrix Provisioning Services also now supports on-premises provisioning of Windows 10 virtual desktops, the company claims.

New Xangati platform gets automated storm remediation

California-based network performance manager Xangati has launched a new automated system for boosting hybrid cloud performance.

The Xangati Virtual Appliance (XVA) system provides automated storm remediation for virtualised and VDI infrastructures, and natively supports Microsoft Hyper-V environments. Through integration with ServiceNow, XVA can also share trouble tickets and storm alerts with the ServiceNow IT Service Management (ITSM) portal.
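A minimal sketch of what that ServiceNow hand-off could look like is below, using ServiceNow's standard Table API (`POST /api/now/table/incident`); the storm-alert dictionary and its fields are hypothetical stand-ins for whatever XVA actually emits.

```python
import requests

SNOW = "https://example.service-now.com"  # your ServiceNow instance

def raise_storm_incident(alert, user, password):
    """Create a ServiceNow incident from a storm-alert dict (fields assumed)."""
    body = {
        "short_description": f"{alert['storm_type']} storm on {alert['host']}",
        "description": alert.get("detail", ""),
        "urgency": "1" if alert.get("severity") == "critical" else "2",
    }
    resp = requests.post(
        f"{SNOW}/api/now/table/incident",   # standard ServiceNow Table API
        auth=(user, password),
        headers={"Content-Type": "application/json",
                 "Accept": "application/json"},
        json=body,
    )
    resp.raise_for_status()
    return resp.json()["result"]["number"]  # e.g. INC0012345
```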

As hybrid clouds grow in popularity, companies are discovering that performance can be dragged down by any of a variety of cloud components, and root causes are proving difficult to identify, the vendor said. XVA will help them pinpoint whether the trouble is caused by storage, CPU, memory or boot storms. The new XVA will also allow virtualisation administrators to address CPU and memory performance issues by automatically balancing workloads across vCenter hosts, the company said.

The system provides real-time key performance data, capacity planning and cost optimisation. Xangati's Efficiency Index measures the extent to which available CPU, memory, storage and network interface capacity is being used.
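Xangati has not published the formula behind the Efficiency Index, so the sketch below is only a plausible reading: a composite utilisation score, equally weighted across the four resource dimensions the company names.

```python
def efficiency_index(cpu, mem, storage, net):
    """Illustrative composite utilisation score, 0-100.

    Each argument is a (used, total) pair for one resource dimension.
    The equal weighting is an assumption, not Xangati's published formula.
    """
    dimensions = [cpu, mem, storage, net]
    ratios = [used / total for used, total in dimensions]
    return 100 * sum(ratios) / len(ratios)

# Example: a host using half its CPU, most of its memory, little storage/net.
print(efficiency_index((8, 16), (96, 128), (2, 10), (300, 1000)))  # 43.75
```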

The new functions will let service providers see how well their systems are running, resolve problems more quickly and improve their chances of meeting service level agreements, said Atchison Frazer, VP of Marketing at Xangati. Other features include support for NetApp storage systems, XenApp and Splunk.

“Xangati is moving towards an automated response to head off complex degrading conditions,” said Frazer. Xangati is a virtual appliance that runs on VMware vSphere. The name Xangati is derived from the Sanskrit word Sangathi, which translates as “coming together to know more about ourselves”.

Deciding between private and public cloud

Innovation and technological agility are now at the heart of an organization's ability to compete. Companies that rapidly onboard new products and delivery models gain competitive advantage, not by eliminating the risk of business unknowns, but by learning quickly and fine-tuning based on the experience gathered.

Yet traditional IT infrastructure models hamper an organization's ability to deliver the innovation and agility it needs to compete. Enter the cloud.

Cloud-based infrastructure is an appealing way to address the IT business agility gap, and is characterized by the following:

  1. Self-service provisioning. Aimed at reducing the time to solution delivery, cloud allows users to choose and deploy resources from a defined menu of options.
  2. Elasticity to match demand.  Pay for what you use, when you use it, and with flexible capacity.
  3. Service-driven business model.  Transparent support, billing, provisioning, etc., allows consumers to focus on the workloads rather than service delivery.

There are many benefits to this approach: cloud or "infrastructure as a service" providers often allow users to pay only for what they consume, when they consume it, and offer fast, flexible infrastructure deployment and low-risk trial and error for new solutions.

Public cloud or private cloud – which is the right option?

A cloud model can exist either on-premises, as a private cloud, or via public cloud providers.

In fact, the most common model is a mix of private and public clouds.  According to a study published in the RightScale 2015 State of the Cloud Report, enterprises are increasingly adopting a portfolio of clouds, with 82 percent reporting a multi-cloud strategy as compared to 74 percent in 2014.

With that in mind, each workload you deploy (e.g. tier-1 apps, test/dev, etc.) needs to be evaluated to see if it should stay on-premises or be moved offsite.

So what are the tradeoffs to consider when deciding between private and public cloud?  First, let’s take a look at the considerations for keeping data on-premises.

  1. Predictable performance.  When consistent performance is needed to support key business applications, on-premises IT can deliver performance and reliability within tight tolerances.
  2. Data privacy.  It’s certainly possible to lose data from a private environment, but for the most part, on-premises IT is seen as a better choice for controlling highly confidential data.
  3. Governance and control.  The private cloud can be built to guarantee compliance – country restrictions, chain of custody support, or security clearance issues.

Despite these advantages, there are instances in which a public cloud model is ideal, particularly cloud bursting, where an organization experiences temporary demand spikes (such as seasonal influxes).  The public cloud can also offer an affordable alternative for disaster recovery and backup/archiving.

Is your “private cloud” really a cloud at all?

There are many examples of the same old legacy IT dressed up with a thin veneer of cloud paint.  The fact is, traditional IT's complexity and inefficiency make it unsuitable for delivering a true private cloud.

Today, hyperconverged infrastructure is one of the fastest growing segments in the $107B IT infrastructure market, in part because of its ability to enable organizations to deliver a cloud-operating model with on-premises infrastructure.

Hyperconvergence surpasses the traditional IT model by incorporating IT infrastructure and services below the hypervisor onto commodity x86 "building blocks".  For example, SimpliVity hyperconverged infrastructure is designed to work with any hypervisor on any industry-standard x86 server platform. The combined solution provides a single, shared resource pool across the entire IT stack, including built-in data efficiency and data protection, eliminating point products and inefficient siloed IT architectures.

Some of the key characteristics of this approach are:

  • Single vendor for deploying and supporting infrastructure.  Traditional IT requires users to integrate more than a dozen disparate components just to support their virtualized workloads, causing slow deployments, finger-pointing and performance bottlenecks, and limiting how the infrastructure can be reused for changing workloads. Hyperconvergence, by contrast, is architected as a single atomic building block, ready to be deployed when the customer unpacks the solution.
  • The ability to start small and scale out without penalty.  Hyperconvergence eliminates the need for resource allocation guesswork.  Simply start with the resources needed now, then add more, repurpose, or shut down resources with demand—all with minimal effort and cost, and no performance degradation.
  • Designed for self-service provisioning. Hyperconvergence offers the ability to create policies, provision resources, and move workloads, all at the VM-level, without worrying about the underlying physical infrastructure.  Because they are software defined, hyperconverged solutions can also integrate with orchestration and automation tools like VMware vRealize Automation and Cisco UCS Director.
  • Economics of public cloud. By converging all IT infrastructure components below the hypervisor and reducing operating expenses through simplified, VM-centric management, hyperconverged offerings deliver a cost model that closely rivals the public cloud. SimpliVity, for example, is able to deliver a cost-per-VM that is comparable to AWS, including associated operating expenses and labour costs; the back-of-envelope sketch below illustrates the comparison.
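As a rough illustration of how such a cost-per-VM comparison is computed (every figure below is a hypothetical placeholder, not SimpliVity or AWS pricing):

```python
# All numbers are hypothetical placeholders -- substitute real quotes.
NODE_COST = 120_000           # hyperconverged node, hardware + 3 years' support
ADMIN_COST_PER_YEAR = 30_000  # share of an administrator's time per node
VMS_PER_NODE = 100            # consolidation ratio
MONTHS = 36                   # amortisation period

on_prem_per_vm_month = (NODE_COST + 3 * ADMIN_COST_PER_YEAR) / VMS_PER_NODE / MONTHS

CLOUD_INSTANCE_PER_MONTH = 70.0  # comparable on-demand public cloud instance

print(f"on-premises:  ${on_prem_per_vm_month:,.2f} per VM per month")
print(f"public cloud: ${CLOUD_INSTANCE_PER_MONTH:,.2f} per VM per month")
```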

It's clear that the cloud presents a compelling vision of improved IT infrastructure, offering the agility required to support innovation, experimentation and competitive advantage.  For many enterprises, public cloud models are non-starters due to regulatory, security, performance and control drawbacks; for others, the public cloud or infrastructure as a service is an ideal way to quickly increase resources.

Hyperconvergence is also helping enterprises increase their business agility by offering the benefits of cloud without the added risks or uncertainty. Today, technology underpins competitive advantage, and organizations must choose what works best for their business and their applications, which makes an approach combining public cloud with private cloud built on hyperconverged infrastructure all the more viable.

Written by Rich Kucharski, VP Solutions Architecture, SimpliVity.

Cloudyn gets $11 million to take cloud monitoring global

Cloud monitoring service Cloudyn has raised $11 million in a Series B round of financing to fund global expansion, brand building and service integration.

The latest cash injection comes 15 months after the company was awarded $4 million, as investors noted how it had grown to monitor 8% of all Amazon Web Services usage. With cloud computing now generating $321 billion a year, according to 451 Research, the monitoring of both infrastructure and platform services (IaaS and PaaS) is becoming increasingly critical.

The popularity of hybrid clouds, which straddle both public and private premises, has added complexity to the management task, creating a need for specialist monitoring firms such as Israel-based start-up Cloudyn. A study by 451 Research predicts that many companies plan to spend up to 50% of their cloud budget on such services.

Since 2014, when Cloudyn received $4 million in funding, the company says it has focused on winning clients among Fortune 1000 enterprises and managed service providers. Cloudyn has tripled its revenue for three consecutive years while doubling its head count. It currently monitors 200,000 virtual machines and 12,000 concurrent applications.

The new round of venture funding was led by Carmel Ventures and included contributions from previous investors Titanium Investments and RDSeed. Ronen Nir, General Partner at Carmel, will join Cloudyn’s board of directors.

There is a growing need for enterprises to perfect their resource allocation, boost performance and cut cloud spend, according to Nir. “Cloudyn's technology provides meaningful and actionable data which has both operational and financial metrics,” he said.

“The funding will allow us to build on this momentum and increase our market share in North America and global markets,” said Sharon Wagner, CEO of Cloudyn.

HPE launches Synergy to help balance hybrid clouds

Hewlett Packard Enterprise (HPE) has launched a new offering aimed at helping hybrid cloud users strike the right work-cloud balance.

As companies adopt hybrid clouds, they will become increasingly aware that these half-private, half-public clouds do not provide an instant one-size-fits-all solution; HPE Synergy, the company says, will give hybrids the fluidity to adjust.

HPE Synergy will work with existing systems from established brands such as Arista, Capgemini, Chef, Docker, Microsoft, Nvidia and VMware, HPE said in a statement. It will be available to customers and channel partners around April 2016.

The new HPE Synergy offering is an intelligent system with a simplified application programming interface (API). This combination of artificial intelligence and a portal will, HPE suggests, create liquidity in the computing resources of the public and private cloud, meaning conditions can be constantly monitored and adjustments constantly calculated. The upshot, according to HPE, is a system that can load balance between its public and private capacities and create the right blend for each set of circumstances.

Synergy creates resource pools comprising compute, storage and fabric networking capacity, which can be allocated case by case according to each workload's needs and the available resources. This capacity management is achieved through a system that can cater for physical, virtual and containerised workloads.

According to HPE, Synergy's software-defined intelligence self-discovers and self-assembles the best configuration the available resources allow, enabling repeatable, frictionless updates. Meanwhile, the single unified API lets administrators program and control the bare-metal infrastructure as a service interface, while the HPE OneView user interface acts as a window onto the full range of storage an enterprise might have.
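Since Synergy is managed through HPE OneView, the infrastructure-as-code claim can be pictured as OneView-style REST calls like the sketch below. The login endpoint follows OneView's documented pattern, but the abbreviated profile body and its fields are illustrative assumptions; real server profiles carry many more attributes.

```python
import requests

ONEVIEW = "https://composer.example.com"  # composer address is site-specific

def login(user, password):
    """Open a OneView REST session and return its token."""
    resp = requests.post(
        f"{ONEVIEW}/rest/login-sessions",
        json={"userName": user, "password": password},
        headers={"X-API-Version": "300"},
        verify=False,  # appliances often ship with self-signed certificates
    )
    resp.raise_for_status()
    return resp.json()["sessionID"]

def compose_server(token, name, template_uri):
    """Stamp a server profile from a template (body abbreviated/illustrative)."""
    resp = requests.post(
        f"{ONEVIEW}/rest/server-profiles",
        json={"name": name, "serverProfileTemplateUri": template_uri},
        headers={"X-API-Version": "300", "Auth": token},
    )
    resp.raise_for_status()
    return resp.headers.get("Location")  # URI of the async provisioning task
```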

The rationale is that everyone is moving to hybrid computing, so it makes sense to help them shift resources across the border between private and public cloud as easily as possible, according to HPE general manager Antonio Neri.

“Hybrids of traditional IT and private clouds will dominate the market over the next five years,” said Neri. Clients will want the speed and agility of the cloud and the reliability and security of their own data centres. “With HPE Synergy, IT can deliver infrastructure as code and give businesses a cloud experience in their data centre,” said Neri.

EMC announces new protection for data as cloud hybrids become the norm

Storage vendor EMC has created a new product range to protect data as it moves in and out of the various parts of a hybrid cloud.

On Tuesday it announced new products and services designed to integrate primary storage and data protection systems across private and public clouds. The aim is to combine the flexibility of public cloud services with the control and security of private cloud infrastructure.

The new offerings carry out one of three functions: tiering data across diverse storage infrastructures, protecting data in transit to and from the cloud, and protecting data once it is at rest in the cloud.

EMC says that, through new improvements to its FAST.X tiering system for VMAX arrays, it can make it cheaper for customers to place data on storage tiers according to the expense of the medium. The additions to the management system automate tiering to public clouds and cater for both EMC and non-EMC storage systems.
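EMC does not publish FAST.X's decision logic, but cost-based tiering reduces to policies of roughly this shape; the tier names and thresholds below are invented for illustration.

```python
from dataclasses import dataclass

@dataclass
class Extent:
    name: str
    last_access_days: int  # days since last read/write
    tier: str              # current tier: "flash", "disk" or "cloud"

# Hypothetical policy: the real engine weighs many more signals, but
# cost-based tiering boils down to rules like these.
def choose_tier(extent: Extent) -> str:
    if extent.last_access_days < 7:
        return "flash"     # hot data on the fastest, most expensive medium
    if extent.last_access_days < 90:
        return "disk"
    return "cloud"         # cold data on the cheapest medium

def rebalance(extents):
    """Yield (extent, target_tier) moves the policy calls for."""
    for e in extents:
        target = choose_tier(e)
        if target != e.tier:
            yield e, target
```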

The new protection for data as it travels in and out of the cloud is provided by CloudBoost 2.0. This, claims EMC, will work with EMC's Data Protection Suite and Data Domain so that private cloud users can move data safely to cheaper media in the public cloud for long-term retention.

Once resident in the public cloud, data can be better protected thanks to new Spanning product features that cater for different regional conditions across the European Union. Spanning Backup for Salesforce now offers better SaaS data restoration options, making it easier to restore lost or deleted data, and Spanning's new European data destination option will aid compliance with European data sovereignty laws and regulations. Meanwhile, the Data Protection as a Service (DPaaS) offering for private clouds gains better capacity management, secure multi-tenancy and a dense shelf configuration that EMC says will ‘dramatically’ cut the cost of ownership.

EMC also announced a new generation of its NetWorker data protection software. NetWorker 9 has a new universal policy engine to automate and simplify data protection regardless of where the data resides.

“Tiering is critical to business in our own data centres,” said Arrian Mehis, general manager of VMware Cloud practice at Rackspace, “and in the data centres of our customers.”


Avere-Microsoft joint effort enables Azure hybrids

Enterprise storage vendor Avere Systems is to work with Microsoft so that its Virtual FXT Edge filers can be used with Microsoft Azure.

The hardware maker, which specialises in storage devices that cater for hybrid cloud set-ups, says the two vendors are collaborating to make it easier and cheaper to get the qualities of the cloud from IT infrastructure that sits on premises.

The system aims to simplify the task of providing on-demand computing power, memory and storage for enterprise IT staff who are not specialists in running cloud services. The Avere technology is designed to make data held on network attached storage (NAS) more readily accessible to Azure, so that users do not suffer from latency.

The rationale is that many companies want the liquidity of cloud computing but are not allowed to move their data off the company premises, according to Avere. Its solution was to invent a ‘virtual NAS’ system that is easy for an enterprise IT department to install and manage, yet sophisticated enough to provide multi-protocol file access (including NFS and SMB) and clustering, delivering high availability, scalable performance and capacity.
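The core of an edge filer is a read-through cache that serves hot data near the compute and goes back to the core NAS only on a miss. The toy sketch below illustrates the idea; Avere's actual caching logic is proprietary and far more sophisticated.

```python
import time

class EdgeCache:
    """Toy read-through cache: serve hot file blocks locally and fall back
    to the core NAS only on a miss. Illustrative only, not Avere's design.
    """
    def __init__(self, fetch_from_core, capacity=1024):
        self.fetch_from_core = fetch_from_core  # callable: path -> bytes
        self.capacity = capacity
        self.cache = {}                         # path -> (last_used, data)

    def read(self, path):
        if path in self.cache:
            _, data = self.cache[path]
            self.cache[path] = (time.monotonic(), data)  # refresh recency
            return data                                  # low-latency hit
        data = self.fetch_from_core(path)                # slow core round trip
        if len(self.cache) >= self.capacity:             # evict LRU entry
            lru = min(self.cache, key=lambda p: self.cache[p][0])
            del self.cache[lru]
        self.cache[path] = (time.monotonic(), data)
        return data
```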

As hybrid cloud systems become the de facto standard for enterprises, it is important that they are easy enough for IT department employees to manage, according to Nicole Herskowitz, Senior Director of Product Marketing for Microsoft Azure.

By adapting the system to work smoothly with Azure, enterprise IT departments can deploy thousands of Azure HPC instances on demand to crunch data with low latency and no data migration. This means businesses can tap into Azure's hyper-converged infrastructure with ease, and without breaking the bank, Avere claims.

“At Avere, we've been dedicated to shattering the myth that organizations can't have enterprise NAS performance in the public cloud,” said Rebecca Thompson, VP of Marketing at Avere Systems. “With Microsoft, we're helping enterprises harness the computing power of Microsoft Azure, which is used by 57% of Fortune 500 companies for big data applications.”