Category archive: VMware

Cost, flexibility driving UK public sector to cloud

The UK public sector is warming to cloud

A recent survey of more than 600 UK decision makers suggests the vast majority (85 per cent) of UK public sector employees are using some form of public cloud service.

The VMware-sponsored research sheds some light on adoption drivers, with cost savings looking like the most frequently cited. More than a third of respondents (34 per cent) said affordability was the main reason for choosing to buy cloud services in their department, followed by ease of use (23 per cent).

“The findings from this research are very positive for the public sector. Lines of business are using public cloud services to drive efficiencies across the organisation – both for employees to access data inside the organisation, and to speed the delivery of citizen-focused services, for example passport applications, that fluctuate at times throughout the year,” said Andy Tait, head of public sector strategy, VMware.

While cloud services aren’t always cheaper than their legacy alternatives, it is perhaps unsurprising that affordability is one of the leading drivers of cloud uptake in the public sector, given the increased budgetary pressure and savings requirements being placed on departments.

Still, the research highlights a growing IT security issue: the survey results show just under two-thirds (60 per cent) of public sector respondents use some form of public cloud service, whether offered by their IT department or not.

“In order for the UK public sector to drive efficiencies in a secure, flexible, agile and compliant manner, business users need to look at embracing a hybrid cloud strategy that can provide portability of workloads and one set of management tools, and deliver services such as disaster recovery and built-in security – without the cost of having to invest in unnecessary resources and tools,” Tait said.

Report: EMC mulls selling itself to its subsidiary VMware

EMC might be selling itself to VMware in a move that could see the child become the parent

Storage giant EMC is considering a buyout by its virtualization-focused subsidiary, VMware, at the behest of activist investor Elliott Management, according to multiple reports.

The deal, according to Re/code, which first reported the news, would work like this: VMware would issue between $50bn and $55bn in new shares, with about $30bn going towards cancelling EMC’s stake in VMware and the remaining shares issued to current EMC stakeholders.

While no deal has been agreed or confirmed by spokespeople at EMC, VMware or Elliott Management, it is clear the EMC Federation is under increasing pressure to split up and drastically reorganize its operations, something that has been on the cards for a couple of years now amidst flat or declining revenues and a bloated portfolio of products and services.

Elliott has made no secret of its desire to see EMC balkanize the Federation – EMC, VMware and Pivotal – into autonomous entities with more streamlined product portfolios, much like its support of Citrix’s reorganization and divestiture(s).

The market’s reaction to the potential acquisition was mixed, with VMware’s share price dropping from $93.43 to $85.72 per share in the space of just over an hour after the news broke, levelling off at $86.65 per share by the close of trading yesterday. EMC shares, however, rose from about $25.93 per share to $27.05 during the same period, closing at $26.85.

A deal that would see EMC and VMware combine into one entity wouldn’t be too far-fetched given EMC’s recent acquisition streak – namely, software companies that bolster its software-defined storage and enterprise software capabilities. VMware, an embedded component of today’s datacenters, complements that strategy nicely, but with the news sending VMware’s share price downward it doesn’t seem the market favours the child becoming the parent.

In a call with analysts in July EMC chairman and chief executive Joe Tucci rejected the possibility of a split, but emphasized a transformation that puts cloud technology (like VMware’s) at its core.

“Undoubtedly everybody on this call believes deeply that one of the biggest transitions every company has to do is move to the cloud. We talked about digital transformation which I think is an even bigger market where the Internet of Things and all of that falls in. But just take where we live in datacentres. And datacentres are moving to cloud technologies, both private and managed.”

“Obviously, if you were doing that, would you rather do that as just VMware, just EMC, just Pivotal with their past or are you a lot stronger in front of a customer’s doing it together? So, do I think we’re much stronger? The answer is absolutely. So I think splitting this federation or spinning off VMware is not a good idea. I firmly believe that we are better together, a lot better together.”

Will Microsoft’s ‘walled-garden’ approach to virtualisation pay off?

Microsoft’s approach to virtualisation: Strategic intent or tunnel vision?

While the data centre of old played host to an array of physical technologies, the data centre of today and of the future is based on virtualisation, public or private clouds, containers, converged servers, and other forms of software-defined solutions. Eighty percent of workloads are now virtualised with most companies using heterogeneous environments.

As the virtual revolution continues, new industry players are emerging, ready to take on the market’s dominant forces. Now is the time for the innovators to strike and stake a claim in this lucrative and growing movement.

Since its inception, VMware has been the 800 lb gorilla of virtualisation. Yet even VMware’s market dominance is under pressure from open source offerings like KVM, RHEV-M, OpenStack, Linux Containers and Docker. There can be no doubting the challenge to VMware presented by purveyors of such open virtualisation options; among other things, they feature REST APIs that allow easy integration with other management tools and applications, regardless of platform.

I see it as a form of natural selection; new trends materialise every few years and throw down the gauntlet to prevailing organisations – adapt, innovate or die. Each time this happens, some new players will rise and other established players will sink.

VMware is determined to remain afloat and has responded to the challenge by creating an open REST API for vSphere and other components of the VMware stack. While I don’t personally believe that this attempt has resulted in the most elegant API, there can be no arguing that it is at least accessible and well-documented, allowing for integration with almost anything in a heterogeneous data centre. For that, I must applaud them.
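
To make the point concrete, here is a minimal sketch of what a platform-neutral REST API buys you. It uses VMware’s later vSphere Automation REST API (vSphere 6.5 and newer) rather than the API available when this piece was written, and the hostname and credentials are placeholders, so treat it as illustrative only: the same plain-Python calls run unchanged on Linux, macOS or Windows.

    # Illustrative sketch: authenticate to vCenter's REST API and list VMs.
    # Endpoints follow the vSphere Automation REST API (6.5+); the host and
    # credentials are placeholders, and TLS verification is disabled for brevity.
    import requests

    VCENTER = "https://vcenter.example.local"   # placeholder

    # Exchange HTTP Basic credentials for a session token.
    token = requests.post(
        f"{VCENTER}/rest/com/vmware/cis/session",
        auth=("administrator@vsphere.local", "changeme"),
        verify=False,
    ).json()["value"]

    # List virtual machines using the session token -- no Windows-only tooling involved.
    vms = requests.get(
        f"{VCENTER}/rest/vcenter/vm",
        headers={"vmware-api-session-id": token},
        verify=False,
    ).json()["value"]

    for vm in vms:
        print(vm["name"], vm["power_state"])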

So what of the other giants of yore? Will Microsoft, for example, retain its regal status in the years to come? Not if the Windows-specific API it has lumbered itself with is anything to go by! While I understand why Microsoft has aspired to take on VMware in the enterprise data centre, its API, utilising WMI (Windows Management Instrumentation), only runs on Windows! As far as I’m concerned this makes it as useless as a chocolate teapot. What on earth is the organisation’s end-goal here?

There are two possible answers that spring to my mind: first, that this is a strategic move; or second, that Microsoft’s eyesight is failing.

Could the Windows-only approach to integrating with Microsoft’s Hyper-V virtualisation platform be an intentional strategic move on its part? Is the long-game for Windows Server to take over the enterprise data centre?

In support of this, I have been taking note of Microsoft sales reps encouraging customers to switch from VMware products to Microsoft Hyper-V. In this exchange on Microsoft’s Technet forum, a forum user asked how to integrate Hyper-V with a product running on Linux.  A Microsoft representative then responded saying (albeit in a veiled way) that you can only interface with Hyper-V using WMI, which only runs on Windows…
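
Contrast that with the WMI route. The sketch below uses the third-party Python ‘wmi’ package, which depends on pywin32 and therefore only runs on a Windows host – which is precisely the problem. The namespace and class names follow Hyper-V’s WMI v2 provider as I understand it; the example is illustrative and not taken from the forum thread.

    # Illustrative only: enumerating Hyper-V virtual machines through WMI.
    # Requires the third-party 'wmi' package plus pywin32, so it runs on Windows only.
    import wmi

    # Hyper-V publishes its management objects in the root\virtualization\v2 namespace.
    conn = wmi.WMI(namespace=r"root\virtualization\v2")

    # Msvm_ComputerSystem includes the host itself; VMs carry the caption "Virtual Machine".
    for system in conn.Msvm_ComputerSystem():
        if system.Caption == "Virtual Machine":
            print(system.ElementName, system.EnabledState)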

But what if this isn’t one part of a much larger scheme? The only alternative I can fathom then is that this is a case of extreme tunnel vision, the outcome of a technology company that still doesn’t really get the tectonic IT disruptions and changes happening in the outside world. If it turns out that Microsoft really does want Windows Server to take over the enterprise data centre…well, all I can say is, good luck with that!

Don’t get me wrong. I am a great believer in competition; it is vital for the progression of both technology and markets. And it certainly is no bad thing when an alpha gorilla faces a troop challenger. It’s what stops them getting stale, invigorating them and forcing them to prove why they deserve their silver back.

In reality, Microsoft probably is one of the few players that can seriously threaten VMware’s near-monopolistic dominance of server virtualisation. But it won’t do it like this. So unless new CEO Satya Nadella’s company moves to provide platform-neutral APIs, I am sad to say that its offering will be relegated to the museum of IT applications.

To end with a bit of advice to all those building big data and web-scale applications, with auto-scaling orchestration between applications and virtualisation hypervisors: skip Hyper-V and don’t go near Microsoft until it “gets it” when it comes to open APIs.

Written by David Dennis, vice president, marketing & products, GroundWork

Box, Docker, eBay, Google among newly formed Cloud Native Computing Foundation

The Cloud Native Computing Foundation is putting Linux containers at the core of its definition of ‘cloud-native’ apps

The Linux Foundation and a number of enterprises, cloud service providers, telcos and vendors have banded together to form the Cloud Native Computing Foundation in a bid to standardise and advance Linux containerisation for the cloud.

The newly formed open source foundation, a Linux Foundation collaborative project, plans to create and drive adoption of common container technologies at the orchestration level, and integrate hosts and services by defining common APIs and standards.

The organisation also plans to assemble specifications to address a “comprehensive set of container application infrastructure needs.”

The members at launch include AT&T, Box, Cisco, Cloud Foundry Foundation, CoreOS, Cycle Computing, Docker, eBay, Goldman Sachs, Google, Huawei, IBM, Intel, Joyent, Kismatic, Mesosphere, Red Hat, Switch Supernap, Twitter, Univa, VMware and Weaveworks.

“The Cloud Native Computing Foundation will help facilitate collaboration among developers and operators on common technologies for deploying cloud native applications and services,” said Jim Zemlin, executive director at The Linux Foundation.

“By bringing together the open source community’s very best talent and code in a neutral and collaborative forum, the Cloud Native Computing Foundation aims to advance the state-of-the-art of application development at Internet scale,” Zemlin said.

The central goal of the foundation will be to harmonise container standards and techniques. A big challenge with containers today is there are many, many ways to implement them, with a range of ‘open ecosystems’ and vendor-specific approaches, all creating one heterogeneous, messy pool of technologies that don’t always play well together.

That said, the foundation expects to build on other existing open source container initiatives including Docker’s recently announced Open Container Initiative (OCI), with which it will work on building the OCI container image spec into the standards it develops. Google also announced that it is handing governance of Kubernetes, which reached v1.0 this week, over to the foundation.

“Google is committed to advancing the state of computing, and to helping businesses everywhere benefit from the patterns that have proven so effective to us in operating at Internet scale,” said Craig McLuckie, product manager at Google. “We believe that this foundation will help harmonize the broader ecosystem, and are pleased to contribute Kubernetes, the open source cluster scheduler, to the foundation as a seed technology.”
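
For readers unfamiliar with Kubernetes, the snippet below is a minimal illustration of the kind of orchestration-level API the foundation wants to standardise. It assumes the official Kubernetes Python client and an existing cluster configured in ~/.kube/config – none of which is part of the announcement itself.

    # Minimal sketch using the official Kubernetes Python client (pip install kubernetes).
    # Assumes an existing cluster and credentials in ~/.kube/config; purely illustrative.
    from kubernetes import client, config

    config.load_kube_config()        # read cluster credentials from the local kubeconfig
    v1 = client.CoreV1Api()

    # Ask the cluster what the scheduler is currently running, across all namespaces.
    for pod in v1.list_pod_for_all_namespaces().items:
        print(pod.metadata.namespace, pod.metadata.name, pod.status.phase)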

Ben Golub, chief executive of Docker, said that while the OCI offers a solid foundation for container-based computing, many standards and fine details have yet to be agreed.

“At the orchestration layer of the stack, there are many competing solutions and the standard has yet to be defined. Through our participation in the Cloud Native Computing Foundation, we are pleased to be part of a collaborative effort that will establish interoperable reference stacks for container orchestration, enabling greater innovation and flexibility among developers. This is in line with the Docker Swarm integration with Mesos,” Golub said.

CenturyLink open sources more cloud tech

CenturyLink has open sourced a batch of cloud tools

CenturyLink has open sourced a number of tools aimed at improving provisioning for Chef on VMware infrastructure as well as Docker deployment, orchestration and monitoring.

The projects open sourced by the company include a Chef provisioning driver for vSphere; Lorry.io, a tool for creating, composing and validating Docker images; and imagelayers.io, a tool that improves Docker image visualisation to give developers more visibility into their workloads.
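
As a rough illustration of the kind of layer information a visualisation tool like imagelayers.io works with, the sketch below inspects an image’s history locally using the Docker SDK for Python. The SDK and the image name are assumptions chosen for the example, not part of CenturyLink’s tooling.

    # Illustrative sketch: inspect an image's layers locally with the Docker SDK for
    # Python (pip install docker). The image name is an arbitrary example.
    import docker

    client = docker.from_env()                  # connect to the local Docker daemon
    image = client.images.pull("nginx:latest")  # any image will do

    # Each history entry corresponds to a layer: its size and the command that created it.
    for layer in image.history():
        created_by = (layer.get("CreatedBy") or "").strip()
        print(layer.get("Size", 0), created_by[:80])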

“The embrace of open-source technologies within the enterprise continues to rise, and we are proud to be huge open-source advocates and contributors at CenturyLink,” said Jared Wray, senior vice president of platforms at CenturyLink.

“We believe it’s critical to be active in the open-source community, building flexible and feature-rich tools that enable new possibilities for developers.”

While CenturyLink’s cloud platform is proprietary and developed in-house, Wray has repeatedly said open source technologies form an essential part of the cloud ecosystem – Wray himself was a big contributor to Cloud Foundry, the open source PaaS, when developing Iron Foundry.

The company has previously open sourced other tools, too. Last summer it punted a Docker management platform it calls Panamax into the open source world, a platform designed to ease the development and deployment of any application sitting within a Docker environment. It has also open sourced a number of tools designed to help developers assess the total cost of ownership of multiple cloud platforms.

Real World Example: Deploying VMware NSX in the Financial Sector

I recently finished up a project implementing VMware’s NSX and wanted to take a minute to recap my experience. The client I worked with provides call center services in the financial sector. They have to be able to securely access systems that have the ability to see credit card information along with other personal, sensitive information.

The customer is building out new facilities to host their primary, PCI-related applications. In this environment, they have to provide the highest levels of security while delivering high-performing networking services. To meet these requirements, they had to purchase new infrastructure: blade center systems, networking infrastructure (Nexus 5672s, Nexus 6000s, Nexus 7710s, Juniper SRXs, F5 load balancers, etc.) and software licensing, among other things.

They then needed to purchase additional pairs of F5 load balancers but were up against their budget. When this happened, the director/VP in charge of the project evaluated VMware's NSX technology. After some initial discussions, he realized that NSX could not only provide the type of security the environment needed while driving higher efficiencies, but could also provide some of the general networking services he was looking for.

Previous network designs called for complete isolation of some workloads; to achieve this, trusted traffic had to traverse a separate pair of distribution/access layer switches to reach external networks. This design also made it necessary to acquire separate F5 load balancers, as specific traffic was not allowed to commingle on the same physical infrastructure due to the way the security team wanted to steer trusted and untrusted traffic. This meant the team was required to purchase twice the hardware: separate Nexus 6000s and separate F5 load balancers.

Because of the NSX Distributed Firewall capabilities, security teams can place required rules and policies closer to applications than was previously achievable. Because of this, the networking design changed, and infrastructure requirements previously deemed necessary were alleviated. The ability to stop untrusted traffic before it ever reaches a logical or physical wire gave the team the opportunity to converge more of their networking equipment, eliminating the need for separate Nexus 6000s. In addition, with the NSX Edge Services Gateway able to provide network load balancing, they were no longer required to purchase additional physical equipment for this service. With the budget they put towards NSX licensing, they were able to get all the security and load balancing services they were looking for and also put money back into their budget.

The Engagement:

Over the span of approximately one month, the security team, networking team, server / virtualization team, and an auditing team worked together in designing what the NSX solution needed to achieve and how it would be implemented. I believe this to be an important aspect of NSX projects because of the misconception that the server / virtualization teams are trying to take over everything. Without each team, this project would have been a disaster.

As requirements were put forth, we built out NSX in building blocks. First, we identified that we would use VXLAN to achieve the desired efficiencies: eliminating VLAN sprawl, segregating trusted traffic in the logical, software layer, and making disaster recovery designs easier when using the same IP address space. Once networks and routing were implemented, we were able to test connectivity from various sites while meeting all of the security team's requirements.

The next item was implementing NSX security, which required new ways of thinking for most teams. With VMware NSX, customers can manage security based on vCenter objects, which provides far more flexibility. We walked through what each application contained, what types of communications were necessary and what types of policies were required, and in identifying these items we were able to build dynamic and static Security Groups. We then built Security Policies – some basic ones that could apply to a majority of similar applications, some application-specific – and were able to re-use these policies against various Security Groups, speeding the deployment of application security. We applied weights to these policies to ensure application-specific policies took precedence over the generic ones. In addition to NetFlow, we enabled Flow Monitoring so the networking and security teams could monitor traffic patterns within the NSX environment.
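
For readers curious what "implementing NSX security" looks like in practice, below is a heavily simplified sketch of creating a Security Group through the NSX-v REST API from Python. The endpoint path and XML payload follow the NSX-v 6.x API as I recall it, and the hostname, credentials and group name are placeholders – verify everything against your NSX Manager's API guide rather than treating this as the exact calls used on the project.

    # Heavily simplified, illustrative sketch: create an NSX-v Security Group via REST.
    # The endpoint and payload follow the NSX-v 6.x API as remembered; the hostname and
    # credentials are placeholders -- check your NSX Manager's API guide before use.
    import requests

    NSX_MANAGER = "https://nsx-manager.example.local"   # placeholder
    AUTH = ("admin", "changeme")                         # placeholder credentials

    payload = """
    <securitygroup>
      <name>SG-PCI-App-Web</name>
      <description>Web tier of the PCI application (example)</description>
    </securitygroup>
    """

    resp = requests.post(
        f"{NSX_MANAGER}/api/2.0/services/securitygroup/bulk/globalroot-0",
        data=payload,
        headers={"Content-Type": "application/xml"},
        auth=AUTH,
        verify=False,   # lab convenience only; validate certificates in production
    )
    resp.raise_for_status()
    print("Created security group with ID:", resp.text)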

All in all, this was a very successful project. Our client can now better secure their internal applications as well as better secure sensitive customer data.

Remember, NSX is sometimes mislabeled as a server-team product; however, the network team and security team also need to know how it works and need to be able to implement it.

Are you interested in learning more about how GreenPages can help with similar projects? Email us at socialmedia@greenpages.com

 

By Drew Kimmelman, Consultant

VMware open sources IAM, cloud OS tools

VMware is open sourcing cloud tools

VMware has open sourced two sets of tools the company said would accelerate cloud adoption in the enterprise and improve organisations’ security posture.

The company announced Project Lightwave, which it is pitching as the industry’s first container identity and access management tool for cloud-native applications, and Project Photon, a lightweight Linux operating system optimised for running these kinds of apps in vSphere and vCloud Air.

The move follows Pivotal’s recent launch of Lattice, a container cluster scheduler for Cloud Foundry that the software firm is pitching as a more modular way of building apps, exposing Cloud Foundry components as standalone microservices (and thus making apps built with Lattice easier to scale).

“Through these projects VMware will deliver on its promise of support for any application in the enterprise – including cloud-native applications – by extending our unified platform with Project Lightwave and Project Photon,” said Kit Colbert, vice president and chief technology officer for Cloud-Native Applications, VMware.

“Used together, these new open source projects will provide enterprises with the best of both worlds. Developers benefit from the portability and speed of containerized applications, while IT operations teams can maintain the security and performance required in today’s business environment,” Colbert said.

Earlier this year VMware went on the container offensive, announcing an updated vSphere platform that would enable users to run Linux containers side by side with traditional VMs as well as its own distribution of OpenStack.

The latest announcement – particularly Lattice – is part of a broader industry trend that sees big virtualisation incumbents embrace a more modular, cloud-friendly architecture (which many view as synonymous with containers) in their offerings. This week one of VMware’s chief rivals in this area, Microsoft, announced its own container-like architecture for Azure following a series of moves to improve support for Docker on its on-premise and cloud platforms.

VMware, Telstra bring virtualisation giant’s public cloud to Australia

Telstra and VMware are bringing the virtualisation incumbent’s public cloud service to Australia

VMware announced it is partnering with Telstra to bring its vCloud Air service to Australia.

VMware said the initial VMware vCloud Air deployment in Australia is hosted out of an unspecified Telstra datacentre.

“We continue to see growing client adoption and interest as we build out VMware vCloud Air with our newest service location in Australia,” said Bill Fathers, executive vice president and general manager, Cloud Services Business Unit, VMware.

“VMware’s new Australia service location enables local IT teams, developers and lines of business to create and build their hybrid cloud environments on an agile and resilient IT platform that supports rapid innovation and business transformation,” Fathers said.

Last July VMware made a massive push into the Asia Pacific region, inking deals with SoftBank in Japan and China Telecom in China to bring its public cloud service to the area. But the company said it was adding an Australian location in a bid to appeal to users that have strict data residency requirements.

Duncan Bennet, ‎vice president and managing director, VMware A/NZ added: “Australian businesses will have the ability to seamlessly extend applications into the cloud without any additional configuration, and will have peace of mind, knowing this IT infrastructure will provide a level of reliability and business continuity comparable to in-house IT. It means businesses can quickly respond to changing business conditions, and scale IT up and down as required without disruption to the overall business.”

Telstra has over the past couple of years inked a number of partnerships with large enterprise IT incumbents to strengthen its position in the cloud segment. It was one of the first companies to sign up to Cisco’s Intercloud programme last year, and earlier this month announced a partnership with IBM that will see the Australian telco offer direct network access to SoftLayer cloud infrastructure to local customers.

Citic Telecom taps VMware for desktop as a service

Citic Telecom is using VMware Horizon to stand up its desktop as a service offering

Hong Kong-based telco Citic Telecom CPC has launched a desktop as a service offering based on VMware Horizon, which the company claims is the first of its kind in the region.

The company said the virtual desktop solution will be aimed at enterprises in the Asia Pacific region that operate in multiple locations but don’t necessarily have the resources to stand up their own infrastructure.

“Many enterprises in Hong Kong and the Asia Pacific region are employing a bigger mobile workforce, and more and more are running on a multi-office model. However, the technical infrastructure of these enterprises is not able to support the dynamic requirements in everyday operations,” said Daniel Kwong, senior vice president of information technology and security services at Citic Telecom.

The move is part of a broader effort to strengthen the telco’s reach in regional IT services, particularly in Singapore, Taiwan and mainland China. To bolster its cloud offerings, the company launched its first cloud datacentre in Shanghai last year, and it plans to open two more cloud datacentres in Beijing and Guangzhou in late 2015.

It will also give the virtualisation incumbent a boost in the region. Last year VMware expanded its cloud services in Asia in partnership with China Telecom in China and SoftBank in Japan.

Danny Tam, general manager of VMware Hong Kong, said: “The mobile cloud is transforming how enterprises should operate, and this is the core of what we do. Our collaboration with Citic Telecom CPC will open the door to more enterprises in Hong Kong and the Asia Pacific region, given Citic Telecom CPC’s extensive network and the credibility that both VMware and Citic Telecom CPC have in the market.”

VMware’s Partnership with Google: vCloud Air & the Google Cloud Platform

 

Following on from Chris Ward’s excellent blog coming out of VMware PEX 2015, I wanted to add some details to VMware’s recent announcement (January 29, 2015) of a partnership with Google to “deliver greater enterprise access to public cloud services” via a combination of VMware vCloud Air and the Google Cloud Platform.

For those who are unfamiliar, vCloud Air (formerly VMware vCloud Hybrid Service, or vCHS) is a public Infrastructure-as-a-Service (IaaS) cloud platform built on the same traditional VMware vSphere we are all used to, but managed 24/7 by VMware and its public cloud partners.  vCloud Air offers services such as infrastructure, disaster recovery and backups, and allows you to extend both your network and workloads from a traditional on-premises environment to the cloud with relative ease.

For some time now, Google has offered broad cloud platform services in the categories below, but as part of the first wave of integration into the vCloud Air space, only a subset of four Google Cloud Platform services will be made available to existing VMware vCloud Air customers, on a pay-as-you-go (PAYG) consumption model (a brief BigQuery sketch follows the list):

 

  • Compute (no current/planned integration points)
  • Storage
    • Cloud Storage – Google’s distributed low-cost object storage service
    • Cloud Datastore – Google’s schema-less, document-based NoSQL database service with automatic scale and full transactional integrity.
  • Networking
    • Cloud DNS – A globally distributed low-latency DNS service
  • Big Data
    • BigQuery – A real-time big data analytics service suitable for running ad-hoc BI queries across billions of data points in seconds.
  • Services (no current/planned integration points)
  • Management (no current/planned integration points)
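
To give a flavour of the BigQuery service highlighted above, here is a small, hypothetical query using the current google-cloud-bigquery Python client. The client library post-dates this article and the project, dataset and table names are made up, so treat it as a sketch of the service itself rather than of the vCloud Air integration.

    # Hypothetical example using the google-cloud-bigquery client
    # (pip install google-cloud-bigquery). Project, dataset and table names are made up.
    from google.cloud import bigquery

    client = bigquery.Client(project="example-project")   # placeholder project ID

    query = """
        SELECT status, COUNT(*) AS requests
        FROM `example-project.weblogs.requests`
        GROUP BY status
        ORDER BY requests DESC
    """

    # BigQuery runs the ad-hoc query across the full table and streams back the rows.
    for row in client.query(query).result():
        print(row.status, row.requests)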

 

Additionally, while Google offers its own management framework, there are some rumors that the partnership could eventually mature to include integration with VMware’s own vRealize Operations management solution.  This will most likely be offered via VMware’s vRealize Air platform (in beta), which currently offers both Automation and Compliance programs.  To quote our CTO, Chris Ward, “VMware vRealize Air checks a lot of boxes for customers of all sizes seeking multi-vendor, multi-cloud provisioning and management of their infrastructure services.”

Industry experts, including GreenPages, Forrester and Gartner, are calling this partnership a big “win” for VMware customers, especially enterprise customers.  This relationship will help to truly legitimize not only the cloud, but also the place of the enterprise customer in the cloud.  Specifically, it will allow enterprise customers who are looking for broader database, analytics, and storage options and support, beyond the current vCloud Air portfolio, to find a suitable and scalable landing place for their applications and workloads.  This will build on vCloud Air’s current support for over 5,000 applications and over 90 operating systems.

This is also a strong move for both VMware and Google.  This relationship will give Google much-needed enterprise IT exposure, something that VMware has deep roots in, and will accelerate VMware’s ability to offer tools to manage a public cloud, an area in which Google has developed a globally dominant position.

As with the vSphere 6 announcement, there is no “official” release date, but rumors are suggesting everything from the “first half of 2015” to availability “later this year.”  Additionally, VMware had no details to share around pricing, but as soon as we know more and have had a chance to sample the integration ourselves, we will share more details.  However, if history is anything to go by we should likely expect something in place by VMworld 2015.

If you have any questions or would like any additional details around this new partnership, email us at socialmedia@greenpages.com

By Tim Cook, Practice Manager, Advanced Virtualization