Category archive: Virtualisation

OpenStack targets telcos with NFV push

A new report indicates that there could be a boom in network function virtualisation projects this year, with NFV the second most popular subject of research after containers, reports Telecoms.com.

According to a report from the OpenStack Foundation, only container technology is under closer scrutiny than NFV by technology buyers and decision makers in the world’s enterprises and service providers.

The paper, Accelerating NFV Delivery with OpenStack, reports on the findings of the foundation’s most recent user survey, in which 76% of those questioned identified an important telecoms function that had to be addressed through virtualisation. Of the OpenStack user base, 12% were traditional telcos and another 64% were companies that now include telecoms in their roster of services, such as cable TV and ISP companies, telco and networking firms, and data centre/co-location companies.

By comparison, an OpenStack user survey in 2014 suggested its user base of telcos was much smaller, the Foundation says, and only an elite of global telcos, such as NTT and Deutsche Telekom, were investigating NFV use. Since then there has been a surge in interest, it reports, with increasing numbers of telecom-specific NFV features, such as support for multiple IPv6 prefixes, being requested or submitted by OpenStack users.

Container technology information is even more sought after than NFV, according to OpenStack, but the two issues are not mutually exclusive. Sources have speculated that the technologies may be used in tandem, since OpenStack forms the foundation for rationalising the hybrid nature of most telcos’ infrastructure.

According to the paper’s executive summary, OpenStack could provide a cost-effective route to the creation of private clouds without the vendor lock-in associated with proprietary NFV hardware.

“While the interoperability between NFV infrastructure platforms that use OpenStack is still a work in progress, the majority of configurations surpass expectations,” concluded the paper co-authored by Kathy Cacciatore, the OpenStack Foundation’s Consulting Marketing Manager.

Big Switch Networks wins $48.5M to bring SDN to telcos, data centres and enterprises

Santa Clara-based software defined networking vendor Big Switch Networks (BSN) has won another $48.5 million to bring its bare metal networking fabrics to new markets, reports Telecoms.com.

The networking specialist, which has now received $93.5m since its launch in 2010, aims to use the new cash injection to fund more R&D and to create new sales and marketing channels in Europe, Asia Pacific, the UAE and the US.

Investors from Morgenthaler Ventures, Silver Lake Waterman, Index Ventures, Khosla Ventures, Redpoint Ventures, Accton, CID Group and MSD Capital put the cash up after hearing how the company achieved 300% growth last year. Its two technology inventions have found three popular use cases among telecoms carriers, data centre companies, service providers and enterprises.

BSN offers clients a Big Monitoring Fabric and a Big Cloud Fabric, both of which are based on bare metal software defined networking principles, with a centralised product-specific controller managing a network of bare-metal Ethernet switches. The controller, the managed switches, and the links connecting them form the network fabric. BSN supplies the software for the centralised controller, which runs on industry standard servers, and the operating system that runs on the bare-metal Ethernet switches. The Big Monitoring Fabric, which connects networks with monitoring tools, and Big Cloud Fabric, which provides software defined management of switching fabrics in data centres, have won telco and data centre clients in the US, APAC and EMEA. Its main vertical markets are telcos and IT, financial services, government, service providers and higher education.

In addition to the extra funding, BSN announced that it has recruited former NetApp CEO Dan Warmenhoven and venture capitalist Gary Morgenthaler, who have both steered companies through the transition that comes with rapid expansion.

According to IHS Research, the proportion of enterprises using software defined networking in their communications will grow from 6% to 23% in 2016. It also estimates that spending on data centre networking will reach $13 billion in 2019, up from $781 million in 2014.
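The IHS forecast above implies a striking compound annual growth rate. As a quick sanity check on the quoted figures ($781 million in 2014 to $13 billion in 2019), the standard CAGR formula can be sketched as follows; the function name is illustrative, not from any cited source.

```python
# Sketch: the implied growth rate behind the IHS data centre networking
# forecast quoted in the article -- $781m in 2014 rising to $13bn in 2019.
# CAGR = (end / start) ** (1 / years) - 1

def cagr(start: float, end: float, years: int) -> float:
    """Compound annual growth rate as a fraction (0.5 == 50%)."""
    return (end / start) ** (1 / years) - 1

growth = cagr(781, 13_000, 2019 - 2014)  # both values in $ millions
print(f"{growth:.1%}")  # roughly a 75% compound annual growth rate
```

A market growing at that rate would more than triple every two years, which is consistent with the surge-in-interest framing of the surrounding articles.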

“Nobody can ignore the advantages of software defined networking,” said Shawn O’Neill, MD of one of Big Switch’s venture partners, Silver Lake Waterman.

Big Switch is fundamentally changing the economics of data centre networking and SDN, claimed another investor, Mike Volpi, a partner at Index Ventures. “This financing will fuel significant go-to-market acceleration and geographic expansion,” said Volpi.

New Service Director from HPE could simplify hybrid cloud management for telcos

HPE claims its new Service Director system could put comms service providers back in control of their increasingly complex hybrid computing estates. It aims to achieve this by simplifying the management of network function virtualisation (NFV).

HPE claims that Service Director will automate many of the new management tasks that have been created by the expanding cloud environment and provide a simpler system of navigation for all the different functions that have to be monitored and managed. The new offering builds on HPE NFV Director’s management and orchestration (MANO) capabilities and bridges existing physical and new virtualised environments.

As virtualisation has expanded it has extended beyond the remit of current generations of operations support systems (OSS) and the coexistence of physical and virtual infrastructure can introduce obstacles that slow the CSPs down, HPE said. It claims the Service Director will help CSPs roll out new offerings quicker.

The main benefits of the system outlined by HPE are automation of operations, shared information, flexible modelling of services and openness. With a single view of the entire infrastructure and dynamic service descriptors, it aims to make it easier to spot problems and create new services, HPE claims. As an open system the Service Director platform will have interfaces to any new third party software defined networking controllers and policy engines.

Since there is no such thing as a greenfield NFV set-up, there has to be a system to rationalise the legacy systems and the new virtualised estate, said David Sliter, HPE’s comms VP. “Service Director is a transformational change in the relationship between assurance and fulfilment, allowing the OSS resource pool to be treated, automated and managed as a service,” said Sliter.

The telecoms industry needs an omnipotent service orchestration system that can span every existing NFV MANO and OSS silo, according to Caroline Chappell, principal analyst for NFV and cloud at Heavy Reading. A model-driven fulfilment and assurance system like Service Director could speed up the delivery of services across a hybrid physical and virtual network, Chappell said.

HPE Service Director 1.0 will be available worldwide in early 2016, with options for pre-configured systems to address specific use cases as extensions to the base product, starting with HPE Service Director for vCPE 1.0.

Cisco boosts SDN range with ACI update

Cisco claims that customers can take a further step towards network automation with the launch of a new release of its Application Centric Infrastructure (ACI) software in its software defined networking range.

Despite massive demand, only 5% of networks are automated, according to Cisco’s own customer feedback. In response, it has moved to simplify the task by making it easier to address all the various autonomous segments of a complicated network infrastructure.

The new software revision of ACI makes it capable of microsegmentation of both physical (i.e. bare metal) applications and virtualized applications, which are separated from the hardware by virtual operating systems such as VMware VDS and Microsoft Hyper-V. By extending ACI across multi-site environments it will enable cloud operators and network managers to devise policy-driven automation of multiple data centres.

In addition, Cisco claimed it has paved the way for integration with Docker containers through its contributions to open source. This, it said, means customers can get a consistent policy model and have more options to choose from when using the Cisco Application Policy Infrastructure Controller (APIC).

ACI now supports automated service insertion for any third party service running between layers four and seven on the network stack, it said. More support will be put behind cloud automation tools like VMware vRealize Automation and OpenStack, including open standards-based OpFlex support with Open vSwitch (OVS).

The ACI ecosystem now makes the automation of entire application suites possible, including Platform as a Service (PaaS) and Software as a Service (SaaS), and there are now over 5,000 Nexus 9000 ACI-ready customers using Cisco’s open platform, it said.

“Customers tell me that only five to ten percent of their networks are automated today,” said Soni Jiandani, SVP at Cisco. Though they are eager to adopt comprehensive automation for their networks and network services through a single pane of management, they haven’t managed it yet. However, since several ACI customers have achieved full automation, this could be the next step, said Jiandani.

Google launches virtual machine customisation facility

Google has announced a new, more tailored way of buying virtual machines (VMs) in the cloud. It claims the extra attention to detail will stamp out forced over-purchasing and save customers money.

With the newly launched beta of Custom Machine Types for Google’s Compute Engine, Google promised that it will bring an end to the days when “major cloud buyers force you to overbuy”. Google has promised that under its new system users can buy the exact amount of processing power and memory that they need for their VM.

The new system, explained in a Google blog, aims to improve the experience for customers when buying a new virtual machine in the cloud. Google says it wants to replace the old system, where users have to choose from a menu of pre-configured CPU and RAM options on machines that are never quite adjusted right to fit the user. Since VMs usually come in multiples of two, Google explained, customers frequently have to buy eight CPUs, even when they only need six.

The Custom Machine Types system will let users buy virtual CPU (vCPU) and RAM in finer-grained units, with memory measured in gibibytes (GiB) rather than gigabytes, and give customers more options to adjust the number of cores and memory as needed. If a customer’s bottom line expands, the cloud can be ‘let out’ accordingly. In another tailoring option, Google has introduced per-minute billing in a bid to meter the customer’s consumption of resources more accurately.

In the US, every vCPU hour will cost $0.03492 and every GiB of RAM will cost $0.00468 per hour. The rate for Europe and Asia is slightly higher, at $0.03841 per vCPU hour. Rates decrease with bulk purchasing, however.
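The per-component rates quoted above make the pricing of a custom shape a simple sum. A minimal sketch, using the US rates from the article (the helper function and example machine shape are illustrative, not part of Google's API):

```python
# Sketch: estimating the hourly cost of a Custom Machine Type,
# using the US rates quoted in the article ($0.03492 per vCPU-hour,
# $0.00468 per GiB-hour). Function and variable names are illustrative.

US_VCPU_HOURLY = 0.03492   # $ per vCPU per hour
US_RAM_HOURLY = 0.00468    # $ per GiB of RAM per hour

def custom_vm_hourly_cost(vcpus: int, ram_gib: float) -> float:
    """Return the hourly cost in USD of a custom VM shape."""
    return vcpus * US_VCPU_HOURLY + ram_gib * US_RAM_HOURLY

# A 6-vCPU, 15 GiB machine -- a shape the old fixed menu of
# even-numbered CPU counts would have forced up to 8 vCPUs.
print(custom_vm_hourly_cost(6, 15))   # about $0.28 per hour
print(custom_vm_hourly_cost(8, 15))   # what over-buying 8 vCPUs would cost
```

The gap between the two figures is the over-purchase the custom shapes are designed to eliminate.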

Support is available in Google’s command line tools and through its application programming interface (API) and Google says it will create a special graphical interface for its virtual machine shop in its Developer Console. Developers can specify their choice of operating system for their tailored VM, with the current options being CentOS, CoreOS, Debian, OpenSUSE and Ubuntu.

Meanwhile, elsewhere in the Google organisation, it is working with content deliverer Akamai Technologies to reduce hosting and egress costs and improve performance for Akamai customers taking advantage of Google Cloud Platform.

Red Hat launches Cloud Access on Microsoft Azure

Red Hat has followed its recent declaration of a partnership with Microsoft by announcing the availability of Red Hat Cloud Access on Microsoft Azure.

The Access service will make it easier for subscribers to move any eligible, unused Red Hat subscriptions from their data centre to the Azure cloud. Red Hat Cloud Access will give them the support relationship they enjoy with Red Hat with the cloud computing powers of Azure, the software vendor said on its official blog. Cloud Access extends to Red Hat Enterprise Linux, Red Hat JBoss Middleware, Red Hat Gluster Storage and OpenShift Enterprise. The blog hints that more collaborations with Microsoft are to come.

Meanwhile, in his company blog, Azure CTO Mark Russinovich gave a public preview of the forthcoming Azure Virtual Machine Scale Sets offering. VM Scale Sets are an Azure Compute resource that allows users to create and manage a collection of virtual machines as a set. These scale sets are designed for building large-scale services targeting big computing, big data and containerised workloads, all of which are increasing in significance as cloud computing evolves, said Russinovich.

By integrating with Azure Insights Autoscale, they provide the capacity to expand and contract to fit requirements with no need to pre-provision virtual machines. This allows users to match their consumption of computing resources to their application needs more accurately.

VM Scale Sets can be controlled within Azure Resource Manager templates and they will support Windows and Linux platform images, as well as custom images and extensions. “When you define a VM Scale Set, you only define the resources you need, so besides making it easier to define your Azure infrastructure, this also allows Azure to optimize calls to the underlying fabric, providing greater efficiency,” said Russinovich. “To deploy a scale set, all you need is an Azure subscription.”
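Russinovich's point that "you only define the resources you need" can be illustrated with the rough shape of an Azure Resource Manager template declaring a scale set. This is a hedged Python rendering of that structure, not an official schema: the resource name, SKU and capacity values are assumptions for illustration.

```python
# Sketch of the shape of an ARM template declaring a VM Scale Set.
# Only the resource type and the idea of a single "capacity" on the
# set (rather than N individually defined VMs) reflect the article;
# the name, SKU and capacity values are hypothetical.
import json

scale_set_template = {
    "$schema": "https://schema.management.azure.com/schemas/2015-01-01/deploymentTemplate.json#",
    "contentVersion": "1.0.0.0",
    "resources": [
        {
            "type": "Microsoft.Compute/virtualMachineScaleSets",
            "name": "demoScaleSet",     # hypothetical resource name
            "sku": {
                "name": "Standard_A1",  # VM size shared by every instance
                "capacity": 3,          # number of identical VMs in the set
            },
        }
    ],
}

print(json.dumps(scale_set_template, indent=2))
```

The single `capacity` field is what lets Azure Insights Autoscale grow or shrink the set without any per-VM pre-provisioning, as described above.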

Example Virtual Machine Scale Set templates are available on the GitHub repository.

AT&T, Ericsson and Apcera demonstrate NFV in a PaaS environment

Voice and video can work in the most complicated clouds, according to an integration breakthrough demonstrated at the OpenStack summit in Tokyo.

AT&T and Ericsson claim they’ve created an improvement to container technology that makes cloud telco platforms far more secure and yet easier to set up. They jointly presented their invention in a proof of concept exercise, along with cloud service provider Apcera.

Container technology, previously used for creating secure environments for text-based office and enterprise productivity applications, has been tweaked to overcome some of its security limitations when telecoms is handled in the cloud.

Telco AT&T, equipment maker Ericsson and cloud service provider Apcera described how they came together to bring their own perspectives on the multiple levels of the OpenStack hierarchy. The joint problem they faced is that the virtualisation of telecoms still has teething problems to resolve, such as the interaction of various web browsers with video and audio services.

The companies demonstrated how they have tweaked container technology to create a containerised, policy-driven PaaS that can host telecoms-related Virtualised Network Functions (VNFs). The resulting telecoms-charged ‘advanced container’ was able to house a virtualised Web Communication Gateway (vWCG) that fully integrated with OpenStack.

The proof of concept exercise showed audio and video communications actually worked between multiple Web browsers on the virtualized telephony system.

Never mind the complexity of what’s happening across the comms stack and the cloud, the main thing to take home is that this system works with a few clicks of a mouse, said Magnus Arildsson, Head of IaaS and PaaS at Ericsson. “This is an important step toward fast, secure and policy-integrated deployment of Telco VNFs on micro-services-based containers,” he said.

Ericsson and Apcera accelerated the development of the micro-services-based PaaS environment, said Derek Collison, CEO of Apcera. “This exercise paves the way for cost-effective, efficient deployments and further collaboration with telco operators to integrate carrier-grade requirements with our cloud platform.”

Radisys and Sanctum create SDN solution to lost revenue

Service accelerator Radisys is working with software defined networking (SDN) specialist Sanctum Networks to create a carrier-class cloud service that can support communication service providers (CSPs) worldwide.

The SDN cloud was built by combining Sanctum’s Jupiter SDN Controller with Radisys’ FlowEngine Intelligent Traffic Distribution System. Together they aim to create a software defined infrastructure powerful enough to identify, provide and support instant network service offerings with complete visibility.

Using this foundation, the partners say, CSPs will have a much better chance of redefining and improving the subscriber experience and guaranteeing a better level of service. Software defined networking will also create more options for new over the top services for mobile telcos looking for new revenue streams, the collaborators claim.

The cloud is built on Sanctum’s intelligent SDN orchestrator and FlowEngine’s programmable data plane processor. The system can optimise service delivery in real time as demands change and also unifies data across many silos, claims Joseph Sulistyo, director of product management and strategy at Radisys. Success hinges on how enthusiastically the network administrators take to the system and get the full use out of it, according to Sulistyo. “Sanctum Networks has developed a well-designed user interface and dynamic network programming model,” said Sulistyo.

Mobile telcos need somebody to remove the complexity of cloud service delivery in real-world deployments, explained Nazneen Shaikh, vice president of product management at Sanctum Networks. “CSPs can improve service visibility into their network, while ensuring carrier-grade reliability, scale and performance,” said Shaikh.

The billing, network and application data at many mobile operators is all over the place, according to Ravi Palepu, senior director of global telco solutions at revenue management specialist Virtusa. “As a consequence many telcos are either losing revenue by under billing, or losing customers by over billing,” said Palepu, “anything that unifies the data could help telcos identify where they are losing money.”

CloudBolt adds SDN, containerization and support

Virtual appliance maker CloudBolt Software now offers better support for microservices, software defined networking (SDN) and containerization on its platform. It is also offering compatibility with a variety of container services, it has announced.

According to CloudBolt its customers can now virtualize their networks with access to VMware NSX directly through its system. There is also new additional support for Docker and Kubernetes for customers wishing to use application containerization. The addition of new support capacity for IBM SoftLayer, HP Helion and CenturyLink Cloud means that there are now 13 cloud platforms compatible with CloudBolt.

The addition of Docker and Kubernetes marks the first time that any form of microservices management has been available on CloudBolt’s system. The new compatibility with VMware NSX will allow customers to spin up virtual environments within the CloudBolt platform, explained CEO Jon Mittelhauser. This means they can now use software defined networking and network function virtualisation to simplify network configuration and management. The newest incarnation of CloudBolt also extends support for OpenStack and Cloud Foundry.

Meanwhile, widening the choice of cloud environments available to customers gives CloudBolt clients more options for locations from which to configure and publish capacity through the CloudBolt service catalogue, it says. New sites are available in Canada, India, Italy and Mexico.

News of the product upgrade follows a strategic decision to move the company’s corporate headquarters from Washington, DC to Silicon Valley in California as it seeks extra funding. Yesterday the 2015 Global Venture Capital Confidence Survey compiled by Deloitte and the National Venture Capital Association (NVCA) suggested that investors are most likely to put funds behind Silicon Valley start ups.

CloudBolt, founded in 2011, recently received an additional $2 million in funding from investors in a bid to cater to rising demands from customers.

“In the past year, we have seen a marked increase in the number of enterprises that want the benefits of SDN and container technologies,” said Mittelhauser, “the latest version of CloudBolt should make it easier and cheaper for enterprises to reap the benefits of these technologies.”

VMware opens up at VMworld San Francisco

Virtualisation pioneer VMware has unveiled a raft of new services tailored for hybrid cloud services and open systems at its annual VMworld conference in San Francisco.

VMware announced the launch of VMware Integrated OpenStack 2.0, the company’s second release of its distribution of the OpenStack open-source cloud software. The new release, based on OpenStack Kilo, will be available on September 30.

“Customers can now upgrade from version one to version two in a more operationally efficient manner and even roll back if anything goes wrong,” said VMware product line manager Arvind Soni.

The move could be seen as a U-turn by VMware, whose revenue streams come from sales of its vSphere virtualization software. The most recent annual VMware report warned that “open source technologies for virtualization, containerization, and cloud platforms such as Xen, KVM, Docker, Rocket, and OpenStack provide significant pricing competition and let competing vendors [use] OpenStack to compete directly with our SDDC initiative.”

However, with OpenStack distributions available from Canonical, HP, Huawei and Oracle – and investment in OpenStack companies from Intel, IBM and other major players, VMware has announced continued support. In October 2014 parent company EMC bought three OpenStack start ups – Cloudscaling, Maginatics and Spanning – to provide a variety of cloud services which adhere to the increasingly popular open standard.

Meanwhile, testing and running disaster recovery plans will be quicker, promises VMware, now that its vCloud Air service has a new cloud-based Site Recovery Manager. The service is now offered on a pay-per-use basis, replacing the more expensive annual subscriptions.

In the event of a disaster recovery event or test, fees will be charged for each virtual machine protected and the storage they consume, said VMware.

Storage could get cheaper as VMware has introduced vCloud Air Object Storage on the Google Cloud Platform. The debut product from VMware’s new Google reseller relationship will be available from September 30th, which will also see an alternative offering launched: vCloud Air Object Storage service, powered by EMC.

The start of the fourth financial quarter should also see VMware release its new vCloud Air SQL database-as-a-service, as the virtualisation vendor looks to match the breadth of features offered by the cloud industry’s top service providers.

With a new Hybrid Cloud Manager, VMware aims to help clients to migrate workloads, extend the range of their data centres and fine tune the process of juggling resources between private and public clouds. The management takes place through the interface of VMware’s vSphere Web Client, and will support the migration of virtual machines.