In the first installment of this blog series, we went over the different types of applications migrated to the cloud and the benefits IT organizations hope to achieve by moving them there.
Unfortunately, IT can’t just press a button or even whip up a few lines of code to move applications to the cloud. Like any strategic move by IT, a cloud migration requires advanced planning.
Eight Hottest Tech Trends in 1776 | @CloudExpo #ML #ITaaS #Cloud
Back, by popular demand, a reprise of my 8 Hottest Tech Trends of 1776. Enjoy!
A little more than 241 years ago, our forefathers used the best technology available to inspire colonial proto-Americans to revolt against King George. At that time, the “best” technology available was the printing press and the “best” social network required the use of “word of mouth” in Public Houses. Grog was the lubricant that facilitated this communication and the rest, as they say, is history.
Simple Secrets of Successful Knowledge Management | @CloudExpo #AI #ML #Cloud
Knowledge that doesn’t serve is knowledge wasted. And for knowledge gained from experience and research to be useful, IT enterprises need to organize, manage and offer it in the best way possible. Fortunately, that isn’t a Herculean task if you employ a few simple practices to build a solid knowledge base (KB). A sound knowledge base eliminates the need to rediscover or reformulate knowledge and improves the support process. With that in mind, consider these best practices to help build a successful knowledge base.
Amazon’s Q2 Results
Microsoft and Google released spectacular results, and Amazon Web Services, the king of the cloud market, was not to be left behind.
During the second quarter of 2017, AWS earned $4.1 billion in revenue, almost 11 percent of Amazon’s overall revenue for the period. That’s a significant jump from the previous two quarters, which brought in $3.54 billion and $3.66 billion respectively. Amazon’s overall sales for the quarter were a whopping $38 billion.
One area that did see a big decline was the operating margin, which fell to 22.3 percent, its lowest value in the last six quarters. During the conference call, Brian Olsavsky, the Chief Financial Officer of Amazon, explained that this fall in operating margin was due to a 71 percent increase in assets acquired in the form of capital leases for its cloud business.
These capital leases have directly funded a manifold expansion of AWS infrastructure, especially its geographic expansion into new countries. Over the last year, AWS has stepped up its operations in a big way to counter the threat from companies like Microsoft and Google. And these investments are paying off.
At this point, though, it’s difficult to say how much better AWS is than Google and Microsoft, because Amazon is the only company that discloses the revenue and performance of its cloud business separately; the other two club theirs into a bucket called “other revenue.” Since there’s no telling how much of that bucket came from cloud, a direct comparison is difficult.
Market share is one good way to gauge performance, even if it isn’t a precise one. By this measure, Amazon gained one percentage point of market share over the last four quarters, which keeps it the dominant player in the cloud market, though Microsoft and Google are fast catching up. Over the same period, Microsoft’s share grew by three percentage points, while Google and IBM each added one.
Nevertheless, this is another excellent performance by AWS, which dominated the cloud market with a 40 percent share. This company alone has generated $1.2 billion in revenue over the last four quarters, and the trend is expected to continue as more companies, especially in the developing world, adopt cloud over the rest of 2017 and in the years ahead.
Since AWS has established its infrastructure and presence in all growing economies, either by itself or through collaboration, it’s in the driver’s seat to make the most of cloud adoption across these countries.
All this is good news for investors, as share prices moved up after the results were announced. In fact, the rise briefly made Amazon’s CEO, Jeff Bezos, the richest man in the world, though the title went back to Bill Gates when Amazon’s shares settled over the next few days.
Cloud job roles soar – but salaries not accelerating with it
If you’re looking for your next break in the cloud computing industry, be warned: salary growth is slowing even as job postings soar.
That is the verdict of Experis, a professional IT resourcing provider, whose latest Tech Cities Job Watch looked at the challenges and opportunities in the UK technology job market.
According to the research, which utilised Innovantage’s recruitment software to analyse more than half a million employer websites, the number of cloud roles almost doubled (a 97.73% increase) between Q2 2016 and Q2 2017, but salaries for permanent roles rose only 2.7% on average. For contractors, day rates have remained stubbornly flat at £481 year on year.
So why the disparity? According to Experis, the maturation of the industry is partly responsible.
The data suggests that, as more and more companies complete the transition to the cloud, fewer roles for building platforms from scratch are becoming available. “Demand for cloud skills is increasingly being driven by organisations looking for more IT professionals to maintain, optimise and enhance their existing cloud platforms,” said Geoff Smith, managing director of Experis Europe.
“As these skills are often less specialist, businesses appear to be finding it comparatively easier to fill vacant cloud positions – causing pay growth in this discipline to slow,” added Smith.
As a result, it is up to professionals to skill up in more nascent areas, such as the Internet of Things (IoT), machine learning, and mobile applications, if they are to stand out. “The diverse cloud requirements that we see as a result of emerging technologies like IoT, big data and mobile are driving the increase in demand, but not all businesses are seeking dedicated or specialist cloud architects,” added Martin Ewings, director of specialist markets at Experis UK & Ireland.
“Where cloud has been embraced, businesses will be seeking IT professionals to maintain rather than build these platforms.”
According to the figures, there were 9,783 permanent cloud-based roles advertised across the UK in the second quarter of this year – almost a quarter (24.4%) of all tech jobs tracked, a category that also covers big data, IT security, mobile, and web development – alongside 6,942 contract roles.
Experis added that there are five primary areas that do require specialised cloud knowledge: application development, application deployment, application security, database specialists, and migration specialists. Specific cloud skills in demand this quarter include OpenStack and Rackspace.
You can find out more and read the full report here (registration required).
The winners and losers in the Walmart vs. AWS row
Opinion: In late June, the Wall Street Journal reported that Walmart had told technology companies and vendors that if they want to do business with the retail giant, they couldn’t run applications on Amazon Web Services.
This seemed at the time to be just another battleground between two huge companies, one representing the pure online retail world and one representing a mix of traditional brick-and-mortar and e-business. This is nothing new, as Walmart has flexed its muscle in many other areas recently in its war with Amazon.
Walmart has reportedly told trucking contractors and forklift contractors not to work for Amazon or risk losing Walmart’s business. So, it would seem Walmart’s AWS declaration is just another attempt to cut off an Amazon business line. That is until you consider the following points.
Although Walmart has in recent years moved well outside its comfort zone by introducing more and more e-commerce and e-business solutions, the retail giant doesn’t compete with Amazon in the cloud technology space, and its e-commerce business is an adjunct to its bricks-and-mortar operation. So I, for one, was curious – as I believe many others are – about the real motives behind the move. Even more, I found myself wondering how effective this move will prove to be, and who really wins and who really loses by Walmart’s action.
The losers
When it comes to smaller vendors and technology providers, they are generally the losers in this chess game. These companies will need to consider their technical debt position in order to react appropriately (and you thought I was just going to talk business). They will need to redevelop their services for Microsoft Azure or Google Cloud or risk losing Walmart’s business – a significant cost and time challenge that needs to be weighed against the value of doing business with Walmart. Even if these providers decide to containerise their workloads for portability, it would still require considerable development effort. The small providers that will survive and thrive in this situation are those with lower technical debt that can easily refocus their efforts on other public cloud platforms.
Larger vendors and suppliers face many of the same challenges as their smaller counterparts. The difference is that, thanks to their larger resource pool, they might be better able to cope or possibly push back against Walmart’s edict. Ultimately, they too will have to choose an appropriate technology and business path and deal with the change. Once again, the depth of their technical debt and the strength of their relationship with Walmart will rule their decision making.
The winners
Microsoft and Google might be the only clear winners in this situation. Both companies should be mounting a full-court press, reaching out to the affected suppliers and vendors in order to wrest market share from Amazon. I have to presume that these efforts have already begun; it is probably time to pony up consulting and engineering resources to take advantage of the situation. Both companies are in a position to gain if they move quickly.
Amazon, according to reports, currently holds approximately 44% of the public cloud provisioning market. While that makes them the clear leader, it also makes them the hunted. Microsoft has recently reported steady gains in market share, and most of that gain has come by way of taking share away from AWS. While AWS is a key part of Amazon’s empire, there is still much speculation in the financial press about the larger effect of this move on either Amazon or Walmart.
One significant and somewhat overlooked point is that, especially in North America, where Walmart goes, other retailers follow. As per the WSJ article from June, other retailers are now following Walmart’s lead and asking their technology suppliers to get off AWS in favour of another cloud platform.
The financial press is seriously divided on the question of who ultimately wins the Amazon vs. Walmart war. For every article declaring Walmart is coming back against Amazon, there is another declaring Amazon the victor. The question for Walmart, when it comes to its ‘no AWS’ pronouncement, is whether the pain inflicted on the vendors and on itself is equal to, or greater than, the loss to AWS – and more importantly to Amazon – as a whole.
For that answer, only time will tell.
[session] TCO and the Cloud Adoption Lifecycle | @CloudExpo @CloudHealthTech #AWS #API #Cloud
IT organizations are moving to the cloud in hopes of improving efficiency, increasing agility and saving money. Migrating workloads might seem like a simple task, but what many businesses don’t realize is that application migration criteria differ across organizations, making it difficult for architects to arrive at an accurate TCO number.
In his session at 21st Cloud Expo, Joe Kinsella, CTO of CloudHealth Technologies, will offer a systematic approach to understanding the TCO of a cloud application in order to ensure a successful migration. Additionally, he will detail CloudHealth’s experience partnering with AWS Migration and provide several proof points for the audience, helping to guide their migration planning process.
How to become a ‘dynamic’ cloud user to reap cost and agility benefits
As organisations move towards more sophisticated multi-cloud environments, leveraging DevOps, containers and more, they will see greater agility, lower cost, and faster time to market.
This is the verdict of digital performance monitoring and management provider New Relic, which gathered more than 500 responses from organisations across the US, UK, Germany, and France. As first reported by ZDNet, the company put its findings in an eBook, ‘Achieving Serverless Success with Dynamic Cloud and DevOps’.
New Relic put organisations into three categories based on their responses: traditional data centre users, static cloud users, and ‘dynamic’ cloud users. The latter are defined as ‘exploiting the cloud in a dynamic way, automatically allocating and de-allocating resources on the fly for maximum agility to deal with spikes in demand and accelerate time to market’, as the report puts it.
Not surprisingly, dynamic cloud users are more likely – 23% more likely, to be precise – to be utilising emerging technologies, such as function as a service (FaaS), containers, and container orchestration.
Even less surprisingly, the differences in where dynamic and static cloud users place their workloads are vast. Dynamic cloud users are at minimum splitting their resources equally between private data centres and the public cloud, with 80% going down this route. This contrasts with only a handful of static cloud users, and an even smaller number of traditional data centre users – although some of the latter were going all-in on public cloud, a situation which eluded the static cloud users polled.
More than anything else, dynamic cloud use is paying serious dividends. According to the research, these organisations are more likely to see improvements in application uptime – 26% compared to 19% for static cloud – operating costs (21% and 9% respectively) and client satisfaction (36% and 15%).
So what makes organisations ‘dynamic cloud’ users? Use only the resources you need, allocate and de-allocate resources on the fly, and use resource allocation as an integral part of your application architecture. The survey also noted the importance of making sure you’re using multiple cloud service providers; more than two thirds (68%) of dynamic cloud companies said they expected to use three or more vendors in three years, while only 30% of overall survey respondents say they only use a single public cloud vendor today.
“To take advantage of many of the most important benefits of cloud computing, you need to do more than simply move all or part of your application to cloud-based servers in a simple migration,” the report noted. “And while maintaining some of your applications in the cloud and some of them in your own data centres can be an effective part of a migration strategy, it is not a long-term solution to maximise the benefits of the cloud.”
You can read the full eBook here (no registration required).
[session] Serverless Computing | @CloudExpo @Unit4Global #ERP #CloudNative #Serverless
Cloud resources, although available in abundance, are inherently volatile. For transactional computing, like ERP and most enterprise software, this poses a problem: transactional integrity and data fidelity are paramount, making it a challenge to create cloud-native applications while relying on an RDBMS.
In his session at 21st Cloud Expo, Claus Jepsen, Chief Architect and Head of Innovation Labs at Unit4, will explore how to create distributed, scalable solutions that ensure high availability and fault tolerance using non-RDBMS technologies in enterprise software, and the new technologies and architectural patterns that need to be put into practice to deliver equal data fidelity and transactional integrity.
Seven key network metrics for a cloud-driven world
Today’s IT teams aren’t so different from the IT teams of days gone by. They’re still committed to running businesses smoothly by testing technology, measuring the results, and re-calibrating. But much of the technology they’re using and managing has changed dramatically. Enterprises are much more distributed than they used to be, with more remote offices and users accessing resources from mobile devices and laptops, sometimes many miles from headquarters or a branch office.
Add to this the ever-more-important challenge of providing those end users with good performance. Constant uptime is now just an expected baseline, and IT teams today will get helpdesk tickets more often from application slowdowns than from actual outages. But helpdesk tickets are harder to close now that those applications are hosted somewhere in the cloud or a provider’s data centre. The lack of visibility can be a shock to IT teams after they’ve deployed public cloud or adopted business-critical SaaS apps, like Office 365, then have to support users of those products without having insight into them.
It may seem like IT is starting from scratch in this cloud-driven, increasingly software-defined world. But don’t throw the old metrics out—they still have a lot of use. End-user experience can be quantified and should be measured over time, continuously if possible. It is possible these days to have a tool that can see past the firewall to get these metrics. These seven metrics stand the test of time and can give IT a pretty good picture of how applications are performing for users.
The seven network metrics you should track
Latency measures the time that network packets take to travel from source to destination. It’s best measured asymmetrically to match the way the internet actually works – routes out and back often differ – since so many applications today are delivered over it. For users accessing cloud and SaaS applications across a branch network, latency can multiply quickly.
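To make that concrete, here’s a minimal Python sketch that approximates latency by timing a TCP handshake (the hostname is a placeholder; a real monitoring tool measures this far more precisely, and in both directions):

```python
import socket
import time

def tcp_latency_ms(host, port=443, timeout=2.0):
    """Time a TCP handshake as a rough proxy for network latency."""
    start = time.perf_counter()
    with socket.create_connection((host, port), timeout=timeout):
        pass  # connection established; we only wanted the handshake time
    return (time.perf_counter() - start) * 1000.0

# A single reading is easily skewed, so take several samples.
samples = sorted(tcp_latency_ms("example.com") for _ in range(5))
print(f"median latency: {samples[len(samples) // 2]:.1f} ms")
```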
Capacity is the new bandwidth. It’s the maximum possible transit rate between the source and destination over a network. That can affect user experience, since the slowest point on the network path can cause slowness down the line. Measure capacity to see the entire application path to find issues that are affecting users and validate provider SLAs. Both utilized and available capacity are important numbers—high utilization can indicate performance degradation.
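The arithmetic behind that last point is simple enough to sketch; the 100 Mbps link and 87 Mbps reading below are invented for illustration:

```python
def utilisation_pct(measured_mbps, capacity_mbps):
    """Share of a link's capacity in use. Sustained readings above roughly
    80% often coincide with the queueing delays users perceive as slowness."""
    return 100.0 * measured_mbps / capacity_mbps

# Hypothetical branch link: 100 Mbps of capacity carrying 87 Mbps of traffic.
capacity, measured = 100.0, 87.0
print(f"{utilisation_pct(measured, capacity):.0f}% utilised, "
      f"{capacity - measured:.0f} Mbps available")
```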
Packet loss is the percentage of network packets lost between source and destination. It can cause congestion on the network and slowdowns for users, who will notice slow apps with just 1% packet loss. Since the source and destination can be hard to pinpoint when cloud is in the picture, packet loss can be harder to track than back in the days of apps served over one LAN.
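A low-tech way to sample loss is to lean on the system’s own ping command and read its summary line; this sketch assumes Linux- or macOS-style ping output, with example.com standing in for a real endpoint:

```python
import re
import subprocess

def packet_loss_pct(host, count=20):
    """Send `count` pings and extract the loss percentage from the summary."""
    out = subprocess.run(["ping", "-c", str(count), host],
                         capture_output=True, text=True).stdout
    match = re.search(r"([\d.]+)% packet loss", out)
    return float(match.group(1)) if match else float("nan")

# Remember: users start noticing slow apps at just 1% loss.
print(f"loss to example.com: {packet_loss_pct('example.com'):.1f}%")
```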
Jitter is the variation in the time it takes network packets to arrive at their destination. If users are having varying experiences across applications, jitter may be the cause. It is also what causes dropped VoIP calls or poor video streaming quality – particularly important for IT teams supporting remote offices and call centres.
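Given a series of latency samples, one simple jitter figure is the mean absolute difference between consecutive readings (RFC 3550 defines a smoothed variant of the same idea); the sample values below are invented:

```python
def jitter_ms(latencies_ms):
    """Mean absolute difference between consecutive latency samples."""
    deltas = [abs(b - a) for a, b in zip(latencies_ms, latencies_ms[1:])]
    return sum(deltas) / len(deltas)

print(f"{jitter_ms([20.1, 20.3, 19.9, 20.2]):.1f} ms")  # 0.3 ms: smooth VoIP
print(f"{jitter_ms([20.0, 45.0, 18.0, 60.0]):.1f} ms")  # 31.3 ms: choppy calls
```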
QoS is another metric that isn’t new, but is quite useful for modern needs. Quality of Service levels need to be enforced by IT for them to work, so that different types of application traffic are treated differently based on their assigned lane or category.
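As one concrete example of how traffic declares its lane, an application can mark its own packets with a DSCP value that the network is then configured to honour. A minimal sketch, assuming a platform where Python exposes IP_TOS (the target address and port are placeholders):

```python
import socket

# DSCP "Expedited Forwarding" (46) sits in the upper six bits of the TOS
# byte; the marking only helps if routers are configured to honour it.
EF_TOS = 46 << 2  # 0xB8

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sock.setsockopt(socket.IPPROTO_IP, socket.IP_TOS, EF_TOS)
sock.sendto(b"voice-like probe", ("192.0.2.10", 5060))  # placeholder target
```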
Throughput can show you where users are running into downtime or slowdowns in their applications, based on whether your service provider is handling the demands of your network and apps. Insufficient throughput can cause disruptions for users.
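A crude throughput check is to pull a known payload and divide bits by elapsed time; the URL below is a placeholder for a test file hosted near your users or provider:

```python
import time
import urllib.request

def throughput_mbps(url):
    """Download a payload and report the achieved transfer rate in Mbps."""
    start = time.perf_counter()
    data = urllib.request.urlopen(url, timeout=30).read()
    elapsed = time.perf_counter() - start
    return (len(data) * 8) / (elapsed * 1_000_000)

print(f"{throughput_mbps('https://example.com/testfile.bin'):.1f} Mbps")
```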
Response time is best used when IT benchmarks SaaS and cloud app response time up front, then measures over time. Seeing how applications and services are performing compared to provider SLAs can give you some leverage in getting better response times for users.
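Building that benchmark can be as simple as timing requests yourself and logging the numbers over time; this standard-library sketch uses example.com as a stand-in for your SaaS endpoint:

```python
import time
import urllib.request

def response_times_ms(url):
    """Time-to-headers (a close proxy for first byte) and total fetch time."""
    start = time.perf_counter()
    resp = urllib.request.urlopen(url, timeout=30)  # returns once headers arrive
    ttfb = (time.perf_counter() - start) * 1000.0
    resp.read()  # drain the body
    total = (time.perf_counter() - start) * 1000.0
    return ttfb, total

ttfb, total = response_times_ms("https://example.com/")
print(f"TTFB {ttfb:.0f} ms, total {total:.0f} ms")  # log and trend over time
```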
With cloud and SaaS running so much of the IT show, you may feel helpless when unknown network issues or application slowdowns happen. But you don’t have to. Track these metrics to get back some visibility and get users a better experience.