There’s no denying we’re in the midst of a cloud craze – but, behind all the hype, is cloud adoption really imperative? How much of the cloud buzz is fabricated promise, and how much of it is reality? To separate what’s true from what’s false, I’ve put together a list of five reasons why cloud adoption matters, based on the facts.
Freedom from responsibility and costs. Yes, when you move services to a cloud provider, you immediately relinquish some control. However, you instantly free yourself from the cost of hardware and server management, and gain back the time (or manpower) that it would ordinarily take to run IT systems.
Monthly archive: February 2015
‘The Cloud Foundry Story’ By @Pivotal | @DevOpsSummit [#DevOps]
Cloud Foundry’s open Platform as a Service makes it easy to deploy, operate, and scale applications in your dedicated cloud environments. It enables developers and operators to be significantly more agile, writing great applications and delivering them in days instead of months. Cloud Foundry takes care of all the infrastructure and network plumbing you need to build, run, and operate your applications, and can do this while patching and updating systems and services without any downtime.
New DevOps Partners @Cigniti and @Enov8Inc | @DevOpsSummit [#DevOps]
The Enov8 SolutionEcosystem allows enterprises to inject a level of transparency and discipline into DevOps and/or environment management operations. Ecosystem provides a foundation on which you can systematically improve overall governance through out-of-the-box best practices and prioritize automation efforts based on the current work effort.
“Cigniti’s focus to offer end-to-end software testing services solutions gets further strengthened through this partnership. Test environment management and test data management solutions are very important for enterprises across the verticals. With increasing need for DevOps and Agile testing this partnership also positions us to serve ISVs in helping accelerate their market readiness with quality software,” said Srikanth Chakkilam, Executive Director at Cigniti Technologies.
vSphere 6, vSAN 6 & Other Key Announcements from VMware PEX
Well, there’s nothing like coming back to the beautiful 4 ft. of New England snow after having been in the temperate climate of the Bay Area for the past week. Might be time to consider becoming a snowbird! In any case, there was a lot of news coming out of the VMware Partner Exchange (PEX) event over the past week. The three major announcements were vSphere 6.0, vSAN 6.0, and the VMware/Google partnership. There was also some interesting news from EMC regarding its highly anticipated entry into the hyper-converged market and the announcement of VSPEX Blue. Today, I’ll cover the highlights of these announcements, starting with vSphere 6.0.
vSphere 6.0
vSphere 6.0 represents one of the, if not the, biggest launches in the history of VMware. The core themes of vSphere 6 are scale and elasticity. I won’t go through every new bell and whistle in this post but will focus on the highlights, which include increased scale, cloud readiness and elasticity, storage, and high availability improvements. First, on the scaling front, basically everything has doubled from vSphere 5.5: 64 hosts per cluster rather than 32, 12TB of RAM per host, 480 CPUs per host, and so on. The same holds true for individual VMs, with support for 128 vCPUs and 4TB of RAM per VM. I would love to see a system that runs VMs of that scale!
In the cloud readiness/elasticity arena, we now have truly federated vCenters, with shared catalogs, templates, and more between them. WAY better than the simple Linked Mode of the past. We also finally have the long-awaited long-distance vMotion capability, supporting up to 100ms of latency and breaking the old layer 2 network boundaries. However, beware of the large pipes required to really make it sing! Perhaps one of the most interesting new features is Instant Clone, which allows instantaneously cloning a running VM in memory. This is going to be a great leap forward for developers, virtual desktop environments, or anywhere else fast cloning can be utilized.
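To see why pipe size matters so much for long-distance vMotion, here’s a back-of-envelope sketch of how long it takes just to push a VM’s memory across a WAN link. The numbers and the 70% efficiency factor are illustrative assumptions, not VMware sizing guidance:

```python
# Rough estimate of a long-distance vMotion memory pre-copy.
# Figures are illustrative assumptions, not VMware requirements.

def vmotion_transfer_seconds(vm_memory_gb: float, link_mbps: float,
                             efficiency: float = 0.7) -> float:
    """Time to push vm_memory_gb across a link_mbps pipe at the
    given effective utilisation (protocol overhead, competing traffic)."""
    bits = vm_memory_gb * 8 * 1024 ** 3          # memory size in bits
    effective_bps = link_mbps * 1_000_000 * efficiency
    return bits / effective_bps

# A 64 GB VM over a 1 Gbps WAN link at 70% efficiency:
print(f"{vmotion_transfer_seconds(64, 1000):.0f} seconds")  # 785 seconds
```

And that ignores dirty-page re-copy passes for a busy VM, which is exactly why a fat, low-latency pipe is the difference between a migration that sings and one that never converges.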
On the storage front, we saw the official introduction of Virtual Volumes (vVOL) into vSphere. Essentially, vVOL enables storage management at the VM rather than the LUN level, which can greatly simplify management. This has been talked about for several years but is now finally a reality, and we should see the majority of storage vendors offering supporting solutions very soon. We also saw that vSphere Data Protection Advanced (vDPA) is now rolled into the vSphere product rather than requiring additional licensing. If you’re an EMC Avamar customer, this is great news, as you’ll be able to integrate and replicate your vDPA backups to your physical Avamar appliances. If you’ve been looking at vSphere Replication, there are some great enhancements there as well, including network compression and fast full sync. In the HA area, we’ve long awaited multi-vCPU (up to 4) support for Fault Tolerance. I believe we’ll see some actual use of this cool new feature now that it can protect higher-end VMs.
vSAN 6.0 was rolled out as part of the vSphere 6.0 announcement. As you probably know, vSAN is the idea of taking local server storage across multiple hosts and clustering it together to create a pool of primary storage capacity without the requirement of a traditional external shared storage architecture. vSAN 1.0 was released a little more than a year ago and is the underlying foundation of the EVO:Rail hyper-converged solutions on the market today. The problem was, while it did work, vSAN 1.0 was missing several capabilities required to really bring it into the production primary storage conversation. Many of those missing links are now filled in with vSAN 6.0.
vSAN 6.0 now supports an ‘all flash’ configuration, allowing persistent data to be stored on the flash drives, whereas in 1.0 flash was used only for caching. We also have a new file system format with vSAN 6.0, providing much more efficient snapshots and increased overall performance. Support for VMDKs up to 62TB and up to 64 vSAN nodes in a cluster brings it in line with the new vSphere 6 maximum configurations.
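When sizing a cluster like this, remember that pooled raw capacity is not usable capacity. With “failures to tolerate” (FTT) set to n, vSAN keeps n+1 copies of each object, so a rough rule of thumb is raw divided by FTT+1, minus slack space for rebuilds. The sketch below uses illustrative overhead figures, not official VMware sizing math:

```python
# Hedged sketch: approximate usable capacity of a vSAN cluster.
# FTT=n means n+1 mirrored copies of every object, so usable space
# is roughly raw / (FTT + 1). The 30% slack figure is an assumption.

def vsan_usable_tb(nodes: int, drives_per_node: int, drive_tb: float,
                   ftt: int = 1, slack: float = 0.3) -> float:
    raw = nodes * drives_per_node * drive_tb
    mirrored = raw / (ftt + 1)          # FTT=1 => two copies of each object
    return mirrored * (1 - slack)       # hold back slack space for rebuilds

# 8 nodes, 5 x 4 TB capacity drives each, FTT=1:
print(f"{vsan_usable_tb(8, 5, 4.0):.0f} TB usable")  # 56 TB usable
```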
On the HA front, with vSAN 6.0, you can now have fault domains. This basically gives you the ability to recover from a full rack failure, as well as a host failure (assuming you have a good number of hosts in your cluster). Finally, there is greater visibility from a health and troubleshooting perspective built into vSAN 6.0, which should allow it to find its way into more production environments.
The final big announcement at the event was the partnership with Google to provide some of the Google cloud services within the vCloud Air platform. My colleague Tim Cook will be posting a separate segment covering the details of that partnership.
So, when can you download the bits and get all of this goodness in your own environment? Well, I don’t have a hard date, but my guess is we’ll see the GA code released sometime before the end of March. As always, feel free to reach out if you would like more information.
If you’d like to speak with Chris in any more detail about these announcements, feel free to email us at socialmedia@greenpages.com
By Chris Ward, CTO
Kafka 0.8.2 Monitoring Support By @Sematext | @DevOpsSummit [#DevOps]
Kafka 0.8.2 has a pile of new metrics for all three main Kafka components: Producers, Brokers, and Consumers. Not only does it have a lot of new metrics, but the whole metrics part of Kafka has been redone — we worked closely with Kafka developers for several weeks to bring order and structure to all Kafka metrics and make them easy to collect, parse, and interpret.
We could list all the Kafka metrics you can get via SPM, but in short — SPM monitors all Kafka metrics and, as with all things SPM monitors, all these metrics are nicely graphed and are filterable by server name, topic, partition, and everything else that makes sense in Kafka deployments.
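To make the idea of dimension-based filtering concrete, here is a toy sketch (my own illustration, not SPM’s actual API or data model): each metric sample carries tags such as server, topic, and partition, and a query simply matches on any combination of them:

```python
# Illustration of filtering metric samples by Kafka dimensions.
# The sample data and filter_samples helper are hypothetical.
from typing import Iterable

samples = [
    {"name": "kafka.broker.messages_in", "server": "broker-1",
     "topic": "orders", "partition": 0, "value": 1200},
    {"name": "kafka.broker.messages_in", "server": "broker-1",
     "topic": "orders", "partition": 1, "value": 950},
    {"name": "kafka.broker.messages_in", "server": "broker-2",
     "topic": "payments", "partition": 0, "value": 400},
]

def filter_samples(samples: Iterable[dict], **tags) -> list:
    """Keep only samples whose tags match every key=value given."""
    return [s for s in samples
            if all(s.get(k) == v for k, v in tags.items())]

orders_on_b1 = filter_samples(samples, server="broker-1", topic="orders")
print(sum(s["value"] for s in orders_on_b1))  # 2150
```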
Gartner analyst muses on why so many are upset with their private cloud
(c)iStock.com/maxkabakov
According to survey figures released by Gartner, 95% of attendees at the analyst house’s Datacentre Conference in Las Vegas are unhappy with their private cloud deployments.
The 140 respondents were given six potential options to explain what was going wrong with their private cloud, alongside a ‘nothing is going wrong’ option. 31% cited a failure to change the operational model, 19% said it was simply doing too little, and 13% cited a failure to change the funding model.
Picture credit: Gartner
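Assuming those percentages apply across all 140 respondents, the head counts behind the chart work out roughly as follows:

```python
# Sanity-check arithmetic on the survey figures cited above.
respondents = 140
breakdown = {
    "failure to change the operational model": 0.31,
    "doing too little": 0.19,
    "failure to change the funding model": 0.13,
}

for reason, share in breakdown.items():
    print(f"{reason}: ~{round(respondents * share)} respondents")
```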
Gartner analyst Thomas Bittman admitted he was “a little surprised” at the results, although some commentators below the line argued the question was leading in focusing too much on the negative side of private cloud.
Regardless, the increasing prevalence of hybrid cloud models – as Matt Asay wrote for Tech Republic, “no wonder private cloud vendors have started calling themselves ‘hybrid’ clouds” – has meant the private cloud as we know it is facing a tipping point.
In a report last October, Verizon argued the long-held public v private cloud discussion was “inadequate to describe the massive variety of cloud services available today.” The report noted how more companies were taking a planned, lifecycle approach to adopting cloud, and that each application warranted a different approach on its own merits.
Some companies are trying to blur the lines. CenturyLink’s newest private cloud, released in August, plugs private cloud instances into public cloud nodes, running both off the same platform. It’s certainly hybrid IT, but the company was insistent it was still a private cloud. David Linthicum, writing in the same month, argued private clouds had a new role as points of control, or interfaces, into public clouds.
Yet a report from Piper Jaffray in January found CIOs were still concerned over public cloud solutions. 35% of respondents said the security of public cloud was the primary reason for keeping data on premise.
What do you make of the survey results?
Microsoft gives $500k of Azure credits, Office 365 subscriptions to Y Combinator startups
(c)iStock.com/Topp_Yimgrimm
Microsoft is giving $500,000 (£328,000) of free Azure hosting credit to Y Combinator (YC) startups, according to YC president Sam Altman.
The Redmond giant will also be offering YC firms in the Winter 2015 batch and beyond three years of Office 365 subscription, access to Microsoft developer staff, as well as one year of CloudFlare and DataStax enterprise services.
In a blog post published on February 9, Altman notes the company “[doesn’t] want to leave software companies out”, after biotech startups benefited to the tune of $20,000 in Transcriptic credits and hardware firms were able to leverage a partnership with Bolt.
“This is a big deal for many startups,” Altman wrote. “It’s common for hosting to be the second largest expense after salaries.”
Naturally, Microsoft isn’t the only cloud provider to be so altruistic. In November 2013, Rackspace launched the Rackspace Startups Programme, announcing support to the tune of £250,000 to give UK-based startups a foothold in the cloud; the programme expanded globally the year after. A year later, IBM launched the slightly less snappily titled IBM Global Entrepreneur Program for Cloud Startups, with £75,000 of potential investment on tap. At the time, Big Blue trumpeted in its press notes that it was offering more money than Google, Microsoft, and Amazon Web Services.
This sort of move is altruistic only to a point, however. Microsoft will doubtless be ploughing this money into YC in the hope that its more successful startups will become major Redmond customers. Two of the most famous YC graduates, Dropbox and Docker, are partnering with Microsoft.
Elsewhere, Microsoft has topped the January rankings of influential cloud organisations, according to Compare the Cloud. The table, which is calculated from an analysis of “all major global news, blogs, forums and social media interaction” over 90 days, saw Microsoft finish ahead of SAP, Amazon, Apple and Oracle to make up the top five.
Big Data & Datacenter Development By @MatMathews | @CloudExpo [#BigData]
You may have seen last week that we partnered with Cloudera, certifying the Plexxi Switch on Cloudera’s Enterprise 5 platform.
This partnership, while exciting for us and our partners, plays a larger role in the IT landscape as a whole. According to an article this week by Arthur Cole of Enterprise Networking Planet, this move embodies Cole’s belief that networking infrastructure development is increasingly being driven by Big Data applications. He cites the key challenge as not finding somewhere to store all the data (i.e., storage) but rather how to make it available to “diverse and disparate sets of resources quickly and at a relatively low cost.”
Announcing the @PagerDuty and @Dynatrace Integration | @DevOpsSummit [#DevOps]
Dynatrace monitors your entire application delivery chain. All of your transactions are tracked end-to-end, from user clicks to individual lines of code, using Dynatrace PurePath technology. Dynatrace constantly monitors your servers’ host and process health, and will automatically notify you if your business-critical transactions are running slower than normal with automatic baselines. Dynatrace comes with several incidents configured out of the box, but you can also create custom incidents to get as granular as you want. However, even with all of this information, alerts are only as good as how well you can respond to them. Too often, missed or ignored incidents can lead to severe consequences, so it’s important to make sure every incident gets noticed.
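Under the hood, an integration like this typically boils down to posting a “trigger” event to PagerDuty when a monitoring incident opens. The sketch below assumes PagerDuty’s v1 generic events endpoint; the service key, incident fields, and helper names are all placeholders, and the packaged integration wires this up for you:

```python
# Hedged sketch of sending a monitoring alert to PagerDuty's
# (v1) generic Events API. All keys and field values are placeholders.
import json
from urllib import request

PAGERDUTY_EVENTS_URL = (
    "https://events.pagerduty.com/generic/2010-04-15/create_event.json"
)

def build_trigger_event(service_key: str, description: str,
                        incident_key: str, details: dict) -> dict:
    """Assemble a PagerDuty 'trigger' event payload."""
    return {
        "service_key": service_key,    # identifies the PagerDuty service
        "event_type": "trigger",       # open (or re-trigger) an incident
        "incident_key": incident_key,  # de-duplicates repeat alerts
        "description": description,
        "details": details,
    }

def send_event(event: dict) -> None:
    """POST the event; call this from your alerting hook."""
    req = request.Request(PAGERDUTY_EVENTS_URL,
                          data=json.dumps(event).encode(),
                          headers={"Content-Type": "application/json"})
    request.urlopen(req)  # raises on HTTP errors

event = build_trigger_event(
    service_key="YOUR_SERVICE_KEY",
    description="Checkout transactions slower than baseline",
    incident_key="dynatrace-checkout-response-time",
    details={"application": "checkout", "baseline_ms": 250,
             "observed_ms": 900},
)
print(event["event_type"])  # trigger
```

The `incident_key` is what keeps a flapping metric from paging you forty times for the same problem: repeated triggers with the same key roll up into one incident.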
DevOps Use on the Rise, But Confusion Remains By @Skytap | @DevOpsSummit [#DevOps]
I recently pored over F5’s “The State of Application Delivery in 2015” and InformationWeek’s “2015 App Dev Priorities Survey,” which it presented with Dr. Dobb’s. The similarity of their titles, and even their release dates, made me wonder how unique each of their findings would be, and I challenged myself to flesh out any interesting findings or patterns that emerged from their collective research.
Both reports were really well done, and I would recommend them to anyone looking to see whether their organization is on the right path of change, or to anyone who knows they’re not on the right path but is willing to get on it, if they could just figure out where it is.