All posts by davidauslander

Technical debt and the cloud: The key steps to repaying your development deficit

Opinion Many years ago, I compared the concept of technical debt to a mismanaged baseball team. The baseball team in question spent untold millions on ageing veterans, rather than home-grown talent that could take them into the future. IT departments the world over tend to exhibit the same behaviours in retaining ageing technology rather than keeping up with upgrades, new methodologies, and new paradigms.

The obvious result of this lack of foresight is the gathering of overwhelming technical debt. After years of ignoring or putting off dealing with the growing problem, many CIOs suddenly find themselves before the CEO and board of directors asking for funding to stem the rising tide. Many times, the funding necessary is not forthcoming without presentation of a solid business case and return on investment documentation. Unfortunately, paying off technical debt isn’t a business case. It’s an explanation of how the IT department got into this situation. 

Enter cloud computing. Everyone wants to move to the cloud, so the CIO has an instant business case. Problem solved, right? Well, not so fast.

If you are reading this then you probably have a good sense of the benefits to be gained from migrating workloads to the public cloud. In particular, the cloud can reduce infrastructure deployment cost, decrease maintenance costs, create standardisation, increase utilisation, and lower overall service lifecycle costs. Migrating workloads to the cloud can be the best way to pay off that technical debt. But there is a good deal of work to do first and a good number of risks that need mitigation prior to executing a cloud migration program.

A clear assessment of the current environment will be necessary in order to determine the level of effort for migration. The information to gather will include physical platform characteristics, application characteristics, performance and capacity data, and application to infrastructure mapping. This assessment will serve as the first step in architecting an overall solution.
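As a sketch of what that first step might produce (the record fields and triage rules here are hypothetical illustrations, not from any particular assessment framework), the gathered data could be organised per workload like this:

```python
from dataclasses import dataclass

@dataclass
class WorkloadRecord:
    name: str             # application or service name
    os: str               # operating system and version
    cpu_peak_pct: float   # observed peak CPU utilisation
    cloud_ready_os: bool  # is the OS supported by the target cloud?
    dependencies: list    # mapped infrastructure dependencies

def migration_effort(w: WorkloadRecord) -> str:
    """Rough triage into re-host, re-platform, or re-architect (made-up rules)."""
    if not w.cloud_ready_os:
        return "re-platform"   # out-of-date OS must be upgraded first
    if len(w.dependencies) > 5:
        return "re-architect"  # heavily entangled workloads need redesign
    return "re-host"           # straightforward lift-and-shift candidate

legacy = WorkloadRecord("billing", "Windows 2003", 85.0, False, ["db01", "mq01"])
print(migration_effort(legacy))  # re-platform
```

The point is not the specific thresholds but that the assessment data, once structured, directly drives the level-of-effort decision for each workload.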

Many of the workloads in an IT department beset by technical debt will need to be re-platformed from their current out-of-date environments as part of the migration path to the cloud. Even if the workloads run on cloud-ready operating systems, they will probably need to undergo a physical-to-virtual conversion.

Security is a major area that needs to be assessed and planned for in architecting the cloud-based solution. Remember that most cloud providers have a statement of “separation of responsibilities” when it comes to security. Usually, the shared responsibility statement sounds something like: the cloud provider is responsible for the security of the cloud, and the customer is responsible for security in the cloud.

Another major area that needs to be considered – and will ultimately drive choices in the cloud – is service and data availability planning. Cloud providers differ in how they support availability services, especially in how they support multi-region failover. The criticality of the services in question and the level of availability they need should be built into the solution.

One of the major reasons that companies fall into a technical debt cycle is that their IT department is already overwhelmed with just the regular functions that they need to perform. Many organisations come to realise that they need help in executing a cloud migration. Whether the company chooses a consulting partner, the cloud provider, or both, it is often beneficial to get assistance from another party who has done this before.

These are just some of the areas that need attention when creating your cloud migration strategy. While this might seem like a lot of work, it is well worth the effort. Cloud computing, properly planned for and executed, is singularly capable of turning an enterprise IT department’s fortunes around. No, cloud migration is not a magic pill: it requires a coordinated effort between IT operations, development, infrastructure support, the cloud provider and executive management. If an organisation and its partners can muster this kind of cooperation, plan effectively, and execute efficiently, technical debt can be reduced, if not paid off, using cloud computing.

The winners and losers in the Walmart vs. AWS row

Opinion In late June, the Wall Street Journal reported that Walmart had announced to technology companies and vendors that if they want to do business with the retail giant, they couldn’t run applications on Amazon Web Services.

This seemed at the time to be just another battle ground between two huge companies, one representing the pure online retail world and one representing a mix of traditional brick-and-mortar and e-business. This is nothing new, as Walmart has flexed its muscle in many other areas recently in its war with Amazon.

Walmart has reportedly told trucking contractors and forklift contractors not to work for Amazon or risk losing Walmart’s business. So, it would seem Walmart’s AWS declaration is just another attempt to cut off an Amazon business line. That is until you consider the following points.

Although Walmart in recent years has gotten significantly outside of its comfort zone by introducing more and more e-commerce and e-business solutions, the retail giant doesn’t compete with Amazon in the cloud technology space, and Walmart’s e-commerce business is an adjunct operation to its bricks-and-mortar business. So, I for one was curious – as I believe many others are – as to the real motives behind the move. Even more so, I found myself wondering how effective this move will prove to be, and who really wins and who really loses by Walmart’s action.

The losers

When it comes to smaller vendors and technology providers, they are generally the losers in this chess game. These companies will need to consider their technical debt position in order to react appropriately (and you thought I was just going to talk business). The providers will need to redevelop their services for Microsoft Azure or Google Cloud or risk losing Walmart’s business. This represents a significant cost and time challenge, which needs to be weighed against the value of doing business with Walmart. Even if these providers decide to containerise their workloads for portability, it would still require considerable development effort. The small providers that will survive and thrive in this situation are those that have lower technical debt and can easily re-focus efforts on other public cloud platforms.

Larger vendors and suppliers face many of the same challenges as their smaller counterparts. The difference is that, due to their larger resource pool, they might be better able to cope or possibly push back against Walmart’s edict. Ultimately they also will have to choose an appropriate technology and business path and deal with the change. Once again, the depth of their technical debt and the strength of their relationship with Walmart will rule their decision making.

The winners

Microsoft and Google might be the only clear winners in this situation. Both companies should be mounting a full-court press, reaching out to the affected suppliers and vendors in order to wrest market share from Amazon. I have to presume that these efforts have begun already; it is probably time to pony up consulting and engineering resources to take advantage of the situation. Both companies are in a position to gain if they move quickly.

Amazon, according to reports, currently holds approximately 44% of the public cloud provisioning market. While that makes them the clear leader, it also makes them the hunted. Microsoft has recently reported steady gains in market share, and most of that gain has come by way of taking share away from AWS. While AWS is a key part of Amazon’s empire there is still much speculation in the financial press about the larger effect of this move on either Amazon or Walmart.

One significant item that is somewhat overlooked is that, especially in North America, where Walmart goes, other retailers follow. As per the WSJ article from June, other retailers are now following Walmart’s lead and requesting that their technology suppliers get off of AWS in favour of another cloud platform.

The financial press is seriously divided on the question of who ultimately wins the Amazon vs. Walmart war. For every article declaring Walmart is coming back against Amazon, there is another declaring Amazon the victor. The question for Walmart, when it comes to its ‘no AWS’ pronouncement, is whether the pain inflicted on the vendors and itself is equal to, or greater than, the loss to AWS – and more importantly to Amazon – as a whole.

For that answer, only time will tell.

Getting the balance right in microservices development

Choices, choices, choices. User requirements and non-functional requirements are just the beginning of the balancing act of services development. New development paradigms usually take a few years before their practitioners get a handle on the factors that they need to balance.

In the case of microservices, this balancing act comes down to three things: granularity, data consistency, and performance. The most usable and best-performing services built on the microservices architecture will find a balance of these three factors that works for the developers, users, and the business. Let’s start by defining these three factors in the context of microservices architecture.

Granularity

To be more specific: how many microservices and how granular the functionality of the services. The ultimate goal is to have the most granularity possible. Microservices architecture calls for the creation of a set of distinct functions that run on demand and then shut down.

The purpose of the granularity is twofold. Firstly, granularity enables rapid deployment of fixes and updates without lengthy and expensive testing cycles. Secondly, by executing on demand in a ‘serverless’ construct, the right level of granularity can help reduce cloud-based infrastructure costs.
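In code terms, a microservice at this level of granularity is just a small, stateless function that is invoked per event and holds nothing in memory between runs. A minimal sketch (the event shape and function name are hypothetical):

```python
def confirm_booking(event: dict) -> dict:
    """One narrowly scoped function: validate and confirm a single booking.

    It runs on demand, does one thing, and returns; the platform shuts the
    instance down afterwards, so no state is kept between invocations.
    """
    booking_id = event["booking_id"]
    if not event.get("payment_cleared", False):
        return {"booking_id": booking_id, "status": "rejected"}
    return {"booking_id": booking_id, "status": "confirmed"}

print(confirm_booking({"booking_id": "B-1001", "payment_cleared": True}))
```

Because the function is this small, a fix to confirmation logic can be deployed without retesting the rest of the service, which is the first of the two benefits described above.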

Data consistency

Since the microservices architecture calls for a discrete set of small functions defining an application or service, the question arises as to how data will be passed between, and acted upon by, multiple functions while remaining consistent. The question also needs to be asked as to how multiple services built on microservices will access common data.

Think about a logistics system where multiple services – booking, confirmation, insurance, tracking, billing, and so on – each consist of multiple microservices functions and also need to pass information between them. Although the intent of this article is not to explore persistence technologies, some examples are: DBMS, journals and enterprise beans. The function of all of these, and many others, is to have data outlive the process that created it.
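To make the logistics example concrete, here is a sketch of two functions that share state only through a persistence layer (an in-memory dict stands in for the real DBMS or journal; all names are invented for illustration):

```python
# A dict stands in for the external persistence layer (DBMS, journal, etc.)
# so that data outlives any single function invocation.
store = {}

def book_shipment(shipment_id: str, destination: str) -> None:
    """Booking microservice: writes the shipment record to the shared store."""
    store[shipment_id] = {"destination": destination, "status": "booked"}

def track_shipment(shipment_id: str) -> str:
    """Tracking microservice: reads the same record. Consistency depends on
    the store, not on anything held inside the functions themselves."""
    return store[shipment_id]["status"]

book_shipment("S-42", "Rotterdam")
print(track_shipment("S-42"))  # booked
```

The design point is that neither function can rely on the other still being running; everything they agree on must live in the persistence layer.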

Performance

The starting and stopping of a process, even a microservices process, takes time and processing power. That may seem like an obvious statement, but it requires some thought. If a process starts once and remains idle waiting for an input, there is no start-up delay except for the original startup or for starting additional processes to handle the load.

In a microservices scenario, each time an input is received a process is started to handle the request. Then, once execution ends, the process is shut down. This is an area where pre-provisioning of microservices servers (as in Microsoft Azure’s container-based microservices solution) could be of benefit.
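The trade-off can be sketched with a toy cold-start model (the latency numbers below are made up purely for illustration):

```python
COLD_START_MS = 250   # hypothetical per-invocation startup cost
EXEC_MS = 20          # hypothetical execution time of the function body

def total_latency(requests: int, warm_pool: int) -> int:
    """Total milliseconds spent: pre-provisioned (warm) instances skip startup."""
    cold = max(requests - warm_pool, 0)  # invocations that must cold-start
    return cold * (COLD_START_MS + EXEC_MS) + min(requests, warm_pool) * EXEC_MS

print(total_latency(100, 0))    # every request pays the startup cost
print(total_latency(100, 100))  # a warm pool removes it entirely
```

Even with invented numbers, the shape of the curve is the real point: startup cost scales with invocation count, which is exactly what pre-provisioned containers are meant to absorb.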

Balancing act

Granularity, as defined above, is the fulcrum of the balancing act in a microservices architecture. If a team makes its microservices too granular, problems arise in the data consistency/persistence and performance realms. If the services aren’t granular enough, you might get performance gains but you lose resiliency, flexibility, and scalability.

If a service is built in too granular a form, data consistency can become difficult simply because of the rise in the number of data connections and elements that need to be kept consistent. Performance suffers, in an overly granular scenario, due to the processing requirements of startup and shutdown, as described above.

The balance between the various functional requirements, non-functional requirements and the desire to utilise new-paradigm capabilities can lead to many different balancing scenarios. Realistically, creating a solid microservices-based environment comes down to basic application architecture principles.

This means understanding the needs and architecting the solution based on all the functional and non-functional requirements. When developing in a microservices architecture, we need to resist the temptation to go too granular while still delivering the scalability and flexibility that the microservices architecture is known for.

The changing face of security in the age of the cloud

The computing world just keeps on progressing but as we all know with progress comes additional challenges. This is especially true of challenges around security. Every advance in computing has given rise to the same question: “how do we secure this new toy?”

When client/server architecture was all the rage in the late 1990s there was great excitement about the advantages it brought about but also a concern for the security implications of distributed clients and centralised servers. When server consolidation came of age in the early 2000s the concern was how to keep applications secure when running on the same server.

In the age of cloud computing, we seem to have introduced more security impacts than ever before. Cloud computing has been the basis for many tremendous benefits in the computing industry and has positively impacted many businesses around the world. While we can celebrate all the advances, we need to be very aware of all the new threats that have come with the steps forward. The following are some of the areas that concern security professionals in 2017:

Cloud workloads

As I have stated in a past article, security concerns are still the number one impediment to cloud adoption in the computing world today. With that said, more and more organisations are moving production workloads to the cloud every day, and how to secure those workloads is a question with no single answer. Whether cloud workloads are treated as if they are in one’s own data centre or secured through as-a-service tools, placing workloads into the cloud comes with some measure of uncertainty that requires research, planning, and execution to mitigate.

Edge/fog networks

The concept behind fog computing isn’t really all that new. I remember moving web servers to the outer edge of the network, outside the firewall, so that they could be closer to the users. The difference now is that fog computing supports larger numbers of devices, either at the edge of the managed network or, in the case of IoT, placed physically very far from the control plane.

The somewhat obvious threat vector is the vulnerability of these fog/edge devices to attack, and the continuation of that attack to the control plane, aggregation layer, or even all the way to the virtual private network or data centre. This needs to be dealt with in much the same way as this type of problem was handled in the past. The fog/edge devices need to be hardened, and the communications paths between those devices, the aggregation layer, and the data centre (cloud or other) need to be secured.

Mobile users

It was so much easier to secure an environment when we knew who our user base was. Well, not anymore. The preponderance of mobile devices that service developers have no control over leaves the service network open to attack via those devices. A user who runs your tested and secured app could easily have installed another app that is just a front for malware of some kind. Beyond writing apps that are secure, the systems at the front end of the data centre or cloud environment that support these apps have to be strongly secured. Additionally, communications between the app and the service layer need to be secured and monitored.

Malware and ransomware

On June 26, Maersk Line, the largest container shipping company in the world, Russian oil producer Rosneft, and pharmaceutical giant Merck, along with hundreds of other institutions around the world, were all but shut down by a global malware/ransomware attack.

That the perpetrators used various public cloud-based resources to launch the attack is a very real possibility. Security professionals around the globe are concerned about the form the next big malware, virus, or ransomware attack will take. Practical and logical steps, including planning for recovery, training, and maintenance, need to be taken to prevent organisations from falling prey to these attacks.

Global data expansion

Many years ago, I wrote an article on how server consolidation can positively impact data centre security by reducing the number of operating system instances to maintain and by reducing the number of possible targets for hackers. In today’s ever-expanding global data environment we have to ask ourselves: have we provided too many targets for the bad guys?

The answer is maybe. Each individual and organisation has to be engaged in preventing data loss and data theft by utilising the many means of securing data that exist today. Encryption of data at rest and automated, versioned replication or backup are just some of the ways an enterprise can protect itself. These security concepts apply equally to preventing and/or recovering from malware attacks.

The only way to survive the many security threats that exist is to: recognise the threats and learn how to fight them; build a comprehensive plan for protecting your organisation and for reacting to and recovering from an attack; and, whether it is basic security maintenance or a major security effort, take action. Don’t just sit back and wait for an attack to happen.

A comparison of Azure and AWS microservices solutions

Amazon Web Services introduced its Lambda microservices engine in 2014, and since that time AWS Lambda has been the standard microservices engine for the public cloud. As Microsoft’s Azure cloud has gained in popularity, the question on my mind was: will Microsoft also create a microservices solution?

I recently attended a Town Hall hosted by Microsoft presenting their solution for microservices in the Azure cloud. The session provided a view into Microsoft’s approach and support for this rapidly growing cloud computing technology.

At the beginning of the meeting, Microsoft spent a good deal of time on their container environment, including the container types that they now support: Docker Swarm, Kubernetes, and Mesos DC/OS. As the meeting went on it became apparent that the MS container environment, along with Azure Service Fabric, was the basis for their microservices solution, but that there seemed to be no microservices engine per se.

Since my experience is more on the AWS side of the cloud computing house, I was hoping to form some comparison of Azure and AWS microservices solutions. So, without providing a value judgment, I will try to compare the two approaches.

AWS Lambda

AWS Lambda is a microservices engine that requires no pre-provisioning of resources other than the creation of a template for the microservices container, including a pointer to the code to execute on activation. Lambda creates the microservices container at event detection (events can be defined as input to a queue, creation of an object in a storage container, or an HTTP or mobile app request).
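The code that the template points at is just a handler with the signature AWS defines. A minimal sketch, runnable locally with a simulated event (the business logic and the example object key are made up; the `Records` shape shown is how S3 delivers object-created events):

```python
import json

def lambda_handler(event, context):
    """Entry point AWS Lambda invokes when the configured event fires.

    For an S3-triggered function, `event` carries a `Records` list naming
    the object whose creation caused the activation.
    """
    keys = [r["s3"]["object"]["key"] for r in event.get("Records", [])]
    return {"statusCode": 200, "body": json.dumps({"processed": keys})}

# Locally simulated event of the shape S3 delivers (context is unused here):
fake_event = {"Records": [{"s3": {"object": {"key": "orders/1001.csv"}}}]}
print(lambda_handler(fake_event, None))
```

Everything outside this handler – provisioning, activation, and teardown – is the engine’s job, which is what the zero-administration claim below refers to.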

An obvious benefit of Lambda is AWS’ claim of zero administration, other than the setup, configuration and maintenance of the code. In all microservices constructs, if a persistence model is required, the developer is responsible for its creation. The AWS Lambda engine spins down the instance once the code execution is completed.

As part of the zero administration feature when using Lambda no availability solution needs to be defined. Even though Lambda intervenes in the process of deploying the service instance, AWS claims that any possible delay is on the order of a few milliseconds.

MS Containers

Microsoft provides no microservices engine à la AWS Lambda. Rather, they base the solution, as stated above, on containers and orchestration. Microsoft recommends that for each microservice a pair of primary containers be deployed for availability. These permanent – as permanent as anything is in the cloud world – primary containers, through the use of Azure Service Fabric or a third-party service construct, form a model for the creation of execution containers.

Execution containers are activated, similarly to Lambda, at event detection. The service construct will create and destroy execution containers based on the primary entities and the rules configured for them. Again, your own persistence model needs to be applied. While Microsoft’s solution does place some administrative burden on the user, it is slight even without a distinct engine.
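Those configured rules amount to deciding how many execution containers should exist for the current load. A toy sketch of such a rule (the thresholds are hypothetical, and in practice Service Fabric or an orchestrator applies policies like this declaratively rather than via hand-written code):

```python
def desired_executors(queue_depth: int, per_container: int = 10,
                      min_replicas: int = 2, max_replicas: int = 50) -> int:
    """Scale execution containers to queue depth, within configured bounds.

    `min_replicas` mirrors the always-on pair of primary containers;
    the ceiling keeps a traffic burst from exhausting the cluster.
    """
    needed = -(-queue_depth // per_container)  # ceiling division
    return max(min_replicas, min(needed, max_replicas))

print(desired_executors(0))     # 2 - never below the primary pair
print(desired_executors(95))    # 10
print(desired_executors(9999))  # 50 - capped
```

The slight administrative burden mentioned above is exactly this: someone has to choose and maintain these scaling parameters, where Lambda decides them for you.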

Comparing the approaches

AWS and Microsoft present two somewhat different approaches to microservices, although at the root of the two solutions the goal is the same: to provide an automated, low-maintenance platform for rapid spin-up and spin-down of lightweight service instances. Service developers need to weigh aspects of these, or any other, microservices paradigms to determine the best fit for their organization and for the services under development. Some of the aspects that should be inspected are performance, administration, ease of use, service deployment and update, and provider support.

A deeper dive into cloud security as a service: Advantages and issues

In a recent article focused on cloud security, I presented a comparison between security as a service and traditional-style security tooling in the cloud. This installment is a deeper dive into the security as a service (SECaaS) paradigm.

It would seem to me that a natural outgrowth of the cloud computing and ‘everything as a service’ paradigm that the technology world is undergoing would be that the tools and services we use to manage and secure our cloud environments also move into an ‘as a service’ mode.

In much the way one would expect, SECaaS works under the principle of a small agent controlled from an external service provider. It is not so different conceptually from controlling a number of firewalls (virtual or physical) from an external management console.

Here’s how it works. A security administrator sets the policy for the service in the SECaaS provider cloud, using online management tools, and sets which policy or policies apply to a group of VMs classified by any number of criteria.

Then, the SECaaS service governs the security activity within and around the VM via a lightweight, generic agent installed within the VM. When a new VM is created from a template, the agent is included in the image.

Finally, the agent executes various security functions according to the direction/policy communicated from within the provider’s cloud environment.

For example, the security administrator creates a segmentation policy that all webserver VMs will only accept traffic on ports 80 and 443. The administrator creates a policy in the SECaaS cloud which is transmitted to the agents on all webserver VMs in the environment. The agent then acts to block and/or allow traffic as per this and other policies that apply to this type of VM.
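In code terms, the agent’s job reduces to evaluating locally whatever policy the backend pushed down. A minimal sketch of the webserver segmentation example (the policy format and function name are invented for illustration; real SECaaS agents enforce at the network layer, not in application code):

```python
# Policy as it might arrive from the SECaaS provider's cloud (made-up format):
policy = {"applies_to": "webserver", "allowed_ports": {80, 443}}

def agent_decision(vm_role: str, dest_port: int) -> str:
    """The in-VM agent allows or blocks traffic per the pushed-down policy."""
    if vm_role != policy["applies_to"]:
        return "allow"  # this policy does not apply to this class of VM
    return "allow" if dest_port in policy["allowed_ports"] else "block"

print(agent_decision("webserver", 443))  # allow
print(agent_decision("webserver", 22))   # block
print(agent_decision("database", 1433))  # allow (policy targets webservers)
```

The key property is that the decision logic lives in the provider’s cloud as policy; the agent is generic and only executes what it is told.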

The advantages

The advantages of using a SECaaS solution include:

  • Increased agility. As the number of VMs expands, contracts, or moves (between physical facilities, and possibly cloud providers), the security level is maintained. This is because SECaaS agents are generally configured to reach back to the ‘mothership’ on activation.
  • Reduced complexity. No need to deploy lots of different security tools into the environment and thereby add complexity.
  • Security staff. In 2016, according to ESG Research, 46% of organizations reported a shortage of cyber security skills in their staff. SECaaS solutions can help to increase the skill sets of junior security administrators by providing a single-pane-of-glass view of the security functions within the environment. SECaaS providers are working towards making policy-setting tools more intuitive, thus making it easier for a staff limited in size and/or skills to be more effective.
  • Consolidated control. Offloading of security policy creation and security management to a consolidated management point, that itself is managed and secured by a trusted external partner. This requires that trust and partnership be present in the relationship with the SECaaS provider. 

The issues

  • Most SECaaS providers offer services that control a limited set of security functions, such as identity and access management (IAM), segmentation, threat detection, anti-virus, vulnerability analysis, and compliance checking. Issues can arise when multiple providers are selected for parts of an overall solution. This leaves the VM stuffed with various distinct agents, reintroducing complexity and lowering agility as well as manageability. The solution to this issue is to seek out those few providers that are reaching for a comprehensive approach. For example, CloudPassage Halo and Trend Micro AWS Defender provide much more comprehensive solutions than many others.
  • No SECaaS service that I have found currently provides support for serverless or microservices environments. With the rapid rise of these types of cloud application hosting environments, this will become a critical distinguishing factor in an organisation’s decision to use SECaaS technologies. As more providers enter the SECaaS market, it is likely that the needs of these types of environments will be addressed.


As more organisations continue to adopt and move to the public cloud it becomes even more critical to secure those environments, applications and services. SECaaS providers continue to enhance their offerings and continue to add specific security services to their portfolios. As SECaaS matures it becomes an even more viable option for securing enterprise public and hybrid cloud deployments.

Read more: Cloud security best practice: Security as a service or cloud security tooling?

Cloud security best practice: Security as a service or cloud security tooling?

A recent survey on cloud security and cloud adoption found that the single biggest impediment to moving to the public cloud was continued concerns around security.

While there has been tremendous progress in the area of cloud security in recent years, another important finding of the LinkedIn survey was that legacy tools, reconfigured for use in the public cloud, just don’t work. This is mostly due to the nature of the cloud computing environment, especially the aspects of dynamic networking and workload agility.

The two major methodologies that have grown up to deal with these concerns are the development of specific security tools targeted to cloud environments and the development of security as a service (SECaaS). In the case of both methodologies a number of players have entered the fray, including a number of legacy security appliance manufacturers and cloud management platform developers.

On the tooling side, a number of legacy security tools have been reborn as cloud security virtual appliances, including firewalls, anti-virus and identity management tools. New cloud-purposed tools are also being rolled out, such as web application firewalls, network segmentation, and compliance checking. The SECaaS methodology calls for comprehensive, separated-grid security services, and again a number of vendors are seeking footholds in this space.

The biggest selling points around “tooling” for cloud security are the ability to control your own environment and to roll out tools that, while they work differently than their legacy counterparts, are conceptually familiar. When it comes to the reborn legacy tools, a virtual perimeter firewall looks and feels much like the physical firewall appliances that were rolled out in the data centre. With tooling, the security of the environment relies solely on the team configuring the appliances. Virtual security appliance vendors include Barracuda, Fortinet, Blue Coat and Cisco.

When speaking about “cloud born” tools such as network micro-segmentation, threat identification and compliance checking, the emphasis is no longer on securing the environment but on the individual workloads. This is not familiar territory for the legacy security professional, but in many cases it is much more effective at securing the environment. Vendors in this space include VMware, Threat Stack and AlertLogic. Many of the major infrastructure vendors have programs to assess and secure the environment based on tooling as part of migration to the cloud, including IBM Cognitive Security and HP Enterprise Secure Cloud.

The major difference of SECaaS is the ability to offload the backend processing to a separate provider and only run a lightweight agent on each VM. This provides agility in securing workloads whether they are moving to different physical hardware, different data centers or changing in numbers. The agent serves as a translator between the backend service and an executor of the appropriate policies. SECaaS can provide all of the functions that appliances can including segmentation, anti-virus, threat identification and compliance checking.

Another benefit found in SECaaS products is metered licensing. Much like the public cloud itself, payment for services is based on usage. The questions around SECaaS – at least in my mind – revolve around an individual product’s ability to secure serverless or microservices-based applications, since these paradigms support application execution environments that are constantly in flux.

Examples of SECaaS providers are Bitglass, Alien Vault, Okta, Trend Micro, CloudPassage and Palerra (a division of Oracle). Most SECaaS providers are focusing on slices of the security pie such as IAM, encryption, anti-virus or compliance; recently, however, a few multi-faceted SECaaS solutions have begun to emerge (for instance, CloudPassage Halo), which is where this paradigm really becomes interesting. Still, adoption of SECaaS may present similar challenges to cloud adoption itself, because, in general, security professionals operate based on what they trust.

Security still stands as the most critical piece of architecting and implementing any computing environment. There are an increasing number of ways to secure public and hybrid cloud environments, which should result in increased cloud adoption as enterprises become more comfortable. Whether tooling or SECaaS, the key is planning for the security solution, or set of solutions, that best fits the enterprise and the services that said enterprise will present.

How the public cloud can benefit global entities and transactions


Editor’s note: This is the second part of a two-part series following up on the July piece ‘How the public cloud can benefit the global economy’, which drills down into two of the four areas outlined – better coordination of efforts between international entities, and increased speed of international transactions. The image below, created by Chef Software, outlines the four towers. You can read part one, focusing on new business models and data sharing and collaboration, here.

Better coordination of efforts between international entities

In my July article on the cloud and the global economy, I used an example of international security entities scrambling to track threats across borders.

In the past, when these and other governmental entities agreed to coordinate data efforts, multi-billion dollar projects taking many years were commissioned. These projects would try to standardise application development efforts and data access methods.

The overriding attitude was that by standardising everything, security could be strengthened. Unfortunately, this gave rise to two major problems. The first was the extremely high cost – in both money and time – of development and changes. The second was that once any one of the systems accessing the data was breached, all the other systems could be accessed.

Enter the public cloud. Once access and security standards have been agreed upon, a big data, unstructured service can be hosted in the cloud. The participating governments – or for that matter, corporations – can then responsibly access the data through the use of their own methods and data science-based applications, without the need for huge coordination efforts.
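This looser model can be sketched in a few lines: each participating entity brings its own analysis method to the same shared dataset, rather than everyone agreeing on a single standardised application. The agency names, record fields, and analysis rules below are all hypothetical, purely for illustration:

```python
# Illustrative sketch: a shared cloud-hosted dataset that each participating
# entity queries with its own methods, instead of one jointly developed system.

# Hypothetical shared records, standing in for a cloud-hosted big data service
SHARED_RECORDS = [
    {"region": "EU", "category": "fraud", "severity": 7},
    {"region": "US", "category": "fraud", "severity": 3},
    {"region": "EU", "category": "smuggling", "severity": 5},
]

def agency_a_analysis(records):
    """Agency A only cares about high-severity events, wherever they occur."""
    return [r for r in records if r["severity"] >= 5]

def agency_b_analysis(records):
    """Agency B tracks everything in its own region, regardless of severity."""
    return [r for r in records if r["region"] == "EU"]

# Each entity applies its own method to the same shared data -- no multi-year
# joint development project required, only agreed access and security standards
high_severity = agency_a_analysis(SHARED_RECORDS)
eu_events = agency_b_analysis(SHARED_RECORDS)
```

The point of the sketch is that the only thing the entities must agree on is the shape of, and access to, the shared data; the analysis logic stays entirely their own.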

Another benefit of this looser style of coordination is the global accessibility of public cloud resources. Wherever in the world the entities (governmental, corporate, NGO, and so on) are located, they can gain access to the data. So for example if an NGO employee needs to get information on a given topic – assuming that he or she has proper credentials to view that data – that access is then available from wherever they find themselves.

Increased speed of international transactions

The volume of financial transactions is critical to economic growth. The more financial activity a particular country generates, the more capital is in play with which to grow the economy. If this is relevant to a single country, it is even more so for global economic growth. The faster international financial and corporate entities can move money and complete transactions, the greater the global economic volume.

So what cloud technologies, or more specifically public cloud technologies, can be used to improve financial performance? The answer: microservices application architectures, rapid scaling, and globalised service and data accessibility.

Microservices architectures and services, such as AWS Lambda, allow new data to be processed rapidly through serverless infrastructure models that scale up or down to meet performance needs. Microservices also allow for the secure transfer of information from the public cloud back to an on-premise data service, so that sensitive data can be accessible through cloud-based services while remaining under secure lock and key.
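A function in this serverless style might look like the following handler, shown invoked locally for illustration. The event fields and validation rules are hypothetical; in a real deployment the platform (for example AWS Lambda) would invoke the handler in response to a trigger and scale instances automatically:

```python
# Illustrative sketch of a Lambda-style handler doing one narrow microservice
# task: validating and enriching a single incoming financial transaction.
# The event fields and the 10,000 review threshold are invented for the example.

def handler(event, context=None):
    amount = event.get("amount")
    currency = event.get("currency", "USD")
    if amount is None or amount <= 0:
        return {"status": "rejected", "reason": "invalid amount"}
    return {
        "status": "accepted",
        "amount": amount,
        "currency": currency,
        # Flag large transfers for downstream review
        "review": amount >= 10_000,
    }

# Invoked locally here -- in a serverless deployment the platform calls handler()
result = handler({"amount": 12_500, "currency": "EUR"})
```

Because each invocation is stateless and independent, thousands of such transactions can be processed in parallel without any capacity planning by the service owner, which is the speed and volume argument in miniature.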

Beyond the scaling available through microservices, the very nature of public cloud infrastructure services is geared towards automated, rapid scaling to meet demand. Data and application services in the cloud are easily distributed and accessed globally. When public cloud content distribution services, such as AWS CloudFront, are utilised, the services and data associated with them can be made accessible globally with predictable performance characteristics.


Advances in technology have always had an impact on economic conditions. With the increase in flexibility, elasticity, scalability and resilience in cloud technologies, the public and hybrid cloud can continually enable a positive impact on the global economy.

Public cloud and the global economy: Digging further into the opportunities


Editor’s note: This article is a follow-up to the July piece ‘How the public cloud can benefit the global economy’, drilling down into two of the four areas outlined in that piece – how the public cloud enables new business models, and data sharing and collaboration between entities. The image below, created by Chef Software, outlines the four towers.

Picture credit: Chef

New business models

Large organisations worldwide are looking for new ways to conduct business more efficiently, more effectively, and at lower cost. Many of these companies are looking to new technologies as the catalyst of these efforts. Use of the public cloud, as part of a hybrid cloud solution, has grown and continues to grow as the basis for these new technologies.

Innovation around new business models will depend on the business type and technical maturity of an organisation. While many possibilities exist, the models with the most potential appear to be those that utilise the cloud’s data gathering, data distribution, and service distribution capabilities.

One example of a potential new business model comes from the insurance industry, where efficiency depends on the quality of the data gathered, the speed at which that data is assimilated and analysed, the speed at which it can be distributed, and the ability to present it effectively to users and customers.

Using cloud technologies, information can be gathered from many sources and assimilated either in the public cloud or transferred securely to an on-premise private cloud implementation via microservices. Once analysed, the information can be distributed globally using public cloud regionalisation services and presented through globalised applications via public cloud-based content distribution.
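The gather, analyse, distribute flow just described can be sketched as a toy pipeline. All feeds, field names, and the "regional endpoints" below are hypothetical; in practice each stage would map onto a managed cloud service (ingestion, analytics, regional replication, content distribution):

```python
# Toy sketch of the gather -> analyse -> distribute flow for insurance data.
# Every name here is invented for illustration only.

def gather(sources):
    """Pull raw claim records from many feeds into one working set."""
    records = []
    for feed in sources:
        records.extend(feed)
    return records

def analyse(records):
    """Aggregate claim totals per region -- a stand-in for real analytics."""
    totals = {}
    for r in records:
        totals[r["region"]] = totals.get(r["region"], 0) + r["claim"]
    return totals

def distribute(totals, regions):
    """Publish the analysed result to each hypothetical regional endpoint."""
    return {region: totals for region in regions}

feeds = [
    [{"region": "EU", "claim": 1200}, {"region": "US", "claim": 800}],
    [{"region": "EU", "claim": 300}],
]
published = distribute(analyse(gather(feeds)), ["eu-west-1", "us-east-1"])
```

The design point is the separation of stages: because each stage only consumes the previous stage's output, any of them can be swapped for a regional or serverless cloud service without touching the others.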

Data sharing and collaboration between entities

In recent years, companies and organisations within the same industry have begun to co-operate on specific initiatives, even while competing in other circumstances. IT industry-wide initiatives, such as open source, have transformed the way software is developed and the way platforms are created and managed. This ‘co-opetition’ has created the need for easy collaboration between organisations, and for common platforms through which to share data, without having to develop a portfolio of services that may only be used for a single initiative.

The answer, in my opinion, is data sharing through the public cloud. Once the data, files and documents have been loaded into a content distribution system – for example, AWS CloudFront – they can be accessed globally using whatever tools the user currently has. While access controls must be set up to ensure that data does not become corrupted, creating a shared data repository in this way can be accomplished in a matter of hours, as opposed to the weeks needed to build out new collaborative services.
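The access-control point deserves emphasis. The sketch below models the policy in plain Python: partners can read shared content, but only the owning organisation can write, so the data cannot be corrupted by a collaborator. The class and organisation names are invented; in a real public cloud deployment the same policy would be expressed through the provider’s IAM controls rather than application code:

```python
# Illustrative model of a shared data repository with simple access control:
# any credentialed participant can read, only the owner can write.
# In a real cloud setup this policy would live in IAM policies, not in code.

class SharedRepository:
    def __init__(self, owner):
        self.owner = owner
        self._objects = {}

    def put(self, user, key, data):
        """Only the owning organisation may add or overwrite content."""
        if user != self.owner:
            raise PermissionError(f"{user} has read-only access")
        self._objects[key] = data

    def get(self, user, key):
        """Any credentialed participant may read shared content."""
        return self._objects[key]

repo = SharedRepository(owner="acme-insurance")
repo.put("acme-insurance", "design.pdf", b"architecture notes")
shared_copy = repo.get("partner-co", "design.pdf")
```

Standing up this kind of repository is an hours-long configuration task in the cloud precisely because the storage, distribution, and permission machinery already exists; only the policy has to be decided.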

Here’s an example. Two software companies decide to co-develop a set of services for a common customer. Architecture and design documents, libraries, and code elements can be shared via a common repository (AWS CodeCommit, for example), created in a few hours and populated as the project progresses. The fact that one company is in New York and the other in India makes no difference.


The global nature of the cloud, the ease of distributing content and data, and the ability to gather information from multiple points in the globe and assimilate them as if they came from one source, can be of great benefit to global entities and the global economy as a whole.

Editor’s note: Part two will aim to discuss the final two pillars – better co-ordination of efforts between international entities, and increased speed of international transactions. Look out for this on CloudTech in the coming weeks.

How the public cloud can benefit the global economy


A great deal has been written over the past few years about the financial benefit of migrating to the public or hybrid cloud for enterprises around the world. For enterprises looking for the economic benefits of moving to the cloud, the main points have always been the ability to pay only for what you need, reduced operational cost, agility, availability, and elasticity.

The naysayers continue to point to the risks and the considerable labour involved in an enterprise-wide cloud migration. I will not delve into either of those topics here – there is enough to read about that as it is. But looking further ahead, a question comes to mind: is there a global economic impact due to the adoption of cloud computing?

While I am not an economist, I believe that there is a case to be made that, given the valid risks, the global economy does benefit from large scale cloud adoption. Here are some technical areas to contemplate.

New business models

Writing for this publication last year, Louis Columbus argued that enterprises are discovering new models for doing business based on their transition to the cloud. Individual enterprises are developing these new models and, as will be discussed below, finding new means of collaboration with partners and even former competitors. This can lead to major shifts in global industries’ business models as well as the individual enterprise.

Data sharing and collaboration between entities

As business models shift based on cloud technologies and collaboration, and data sharing becomes the norm, an atmosphere of cooperation/competition can emerge. This is as opposed to the strictly competitive atmosphere that exists today.

The best example of a leading indicator – note the economic reference – is the air of cooperation between public cloud providers at the cloud orchestration and container support layers. Imagine if, for instance, a group of global insurance companies utilised cloud technologies to share basic liability data. Instead of expending resources to develop their own methodologies, creating barriers for themselves and their competitors, they could use the cloud’s inherent data sharing capabilities to securely share information that all the entities involved would eventually have anyway. This type of collaboration could change the face of major industries.

Better coordination of efforts between international entities

In a world where international security entities are scrambling to ‘know’ where the next threat is coming from, information is paramount. Instead of taking years to create complex interfaces between nation-based systems, placing common information in a highly secured, globally distributed, continuously updated, cloud-based service would be of greater benefit.

Increased speed of international transactions

The basic concept of the time value of money (TVM) is just as valid in today’s digital world as it was years ago. The speed at which international financial transactions are carried out, and the volume of those transactions, are critical in today’s global economy. Cloud-based technologies, such as microservices architectures and the automated scaling of cloud-based resources, are playing a major role in the speed and volume of global transactions.


I have not attempted to present hard economic data to show that cloud technology is having a global impact – instead, I have hopefully started the conversation around the global uses of cloud technologies and the impact they can have – and may already be having – at a global rather than enterprise scale. Cloud technologies have the potential to change not only the way enterprises do business but possibly whole industries – thus shifting the global economy.