The key aspects to consider when executing a smooth move to the cloud

As the benefits of cloud computing become more pronounced, more businesses are migrating to the cloud. Greater scalability, flexibility and financial security often come as a result of making the shift to cloud computing – and those are just a few of the advantages. The allure of the cloud is well known. However, the fine details of cloud migration and implementation are often overlooked.

Migrating to the cloud is more complicated than many companies anticipate. Too many businesses are pulling the trigger on cloud migration with only the first few steps in mind — the cost of the service and the logistics of the physical transfer of the data itself. Moving data is a lot like moving to a new house or apartment. If you have never done it before, you may be thinking, “I’ll just move all my stuff and pay the rent or mortgage.” Anyone who has moved in the past few months can tell you that it’s often more complicated than that.

For starters, you have to select a place of residence. As you do this, you must consider the needs of your family. Think of the features and amenities in a home that will be of most value to you. Moving into a public space such as an apartment is often cost-effective. Still, apartments have their drawbacks. Houses offer the advantage of greater privacy and control over your circumstances. If you need something in between, a townhome could serve as a sort of hybrid that offers the best of both worlds. Other considerations: What level of upkeep will the property require? Is there a big yard? Will the house require renovation in order to suit the needs of your family? You could always just build your own home – although this could become very complicated if you have no experience with homebuilding.

As you can see, there are numerous unseen variables involved in moving to a new house or apartment. Believe it or not, all of these examples are directly comparable to considerations that should be made when migrating data to a cloud. If you didn’t already make this connection, take a minute to reread the previous paragraph with the following comparisons in mind: family = company; home/property = cloud platform; apartment = public cloud space; house = private cloud space; townhouse = hybrid cloud arrangement.

These are just a few of the factors that home movers or data migrants should take into account. With this analogy as a backdrop, consider a few tips for avoiding problems when migrating to the cloud.

Start simple

Cloud computing is a powerful tool. This technology has created many options and opportunities to improve the internal workings of a company. Still, let’s not be hasty. Start by doing some research and assessing your company’s cloud computing needs.

Understand the pros and cons of public, private and hybrid cloud computing. Once you have an idea of what you are looking for, consider cloud computing service options. If you don’t know much about the market, there are a few providers that are well suited to companies that are beginning their cloud computing journey. According to Logicworks CTO Jason McKay, “One cloud does not fit all, but if you pick a major IaaS cloud provider like AWS or Azure, one cloud certainly fits most.” You could also attempt to build your own cloud computing platform; however, this is not recommended if you or members of your IT staff have little or no cloud computing experience. The same can be said of hybrid cloud configurations.

The point is, keep it simple. Begin with a simple, single cloud computing configuration. Experts say that most successful complex cloud computing configurations are outgrowths of an initially simple setup.

Plan ahead

A survey conducted by IDC revealed that out of over 6,000 executives, only 3% would characterise their cloud strategy as “optimised,” while 47% describe their cloud strategy as “opportunistic or ad hoc.” For cloud computing to provide maximum benefit, companies must have a plan for cloud migration. Here are a couple of suggestions to keep in mind as you prepare for cloud migration.

  • Have a plan for maintenance and data management. Some platforms include tools that will help you manage your cloud data, at least on a general level. Beyond this, IT personnel should have a firm grasp of the company’s data needs before cloud computing is implemented. This way, they can anticipate cloud management needs and be prepared to solve problems proactively right from the start.
  • Have a plan for account controls. If you’ve already determined your security preferences, authorised access preferences, finance and resource management preferences and data preferences before cloud implementation, you will find cloud computing to be a more effective and hassle-free tool. What’s more, if you have a clearly defined cloud management rhythm established from the get-go, it will be easier to grow when the time comes. A brief sketch of what such controls can look like follows this list.
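
As a purely illustrative example of planning account controls as code, the following sketch (not taken from any provider’s official guidance) uses AWS’s IAM through the boto3 library to pre-define a read-only group before any data is migrated. The group name and policy are hypothetical placeholders; the same idea applies to any platform’s access-control service.

    # Hypothetical sketch: pre-defining "authorised access preferences" as code
    # before migration. The group name and policy are placeholders.
    import json
    import boto3

    iam = boto3.client("iam")

    read_only_s3 = {
        "Version": "2012-10-17",
        "Statement": [{
            "Effect": "Allow",
            "Action": ["s3:GetObject", "s3:ListBucket"],
            "Resource": "*",
        }],
    }

    iam.create_group(GroupName="analysts")
    iam.put_group_policy(
        GroupName="analysts",
        PolicyName="analysts-read-only-s3",
        PolicyDocument=json.dumps(read_only_s3),
    )

Because controls like these live in a script or repository rather than in someone’s head, they can be reviewed before migration and reapplied consistently as the environment grows.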

Positive Outlook for Alibaba

Alibaba is in the news again, and once again for the right reasons. Analysts the world over are painting an optimistic picture of its cloud business.

This technology giant has been rapidly expanding its cloud business, adding new data centers in Australia, Dubai, Japan and London over the last year. Within the next few years, it plans to tighten the competition for global cloud services – a market currently dominated by companies like AWS and Microsoft.

If you’re wondering why there is so much talk about Alibaba’s cloud business, it’s simply because of its potential to become a big player in the coming years.

Already, it is the leading cloud services provider in China. The Chinese market is estimated to have huge potential, and only a small piece of the cake has been claimed so far. Imagine the growth potential of Alibaba just within China once its cloud industry matures. To put that in perspective, the Chinese cloud market was worth $1.5 billion in 2013, accounting for only three percent of China’s enterprise market. According to Bain and Company, this is expected to grow to $20 billion by 2020 – a more than thirteenfold increase in seven years.

And that’s not all, because the Chinese government has made cloud computing a priority. In its 13th Five-Year Plan, which spans 2016 to 2020, the government wants to give a big impetus to the cloud, and to achieve this, it is willing to support both cloud service providers and companies that want to embrace the cloud for their operations. Big tax incentives are being offered to lure more companies to the cloud, all of which makes the Chinese market an enormous opportunity waiting to be explored.

Many companies, IBM among them, are partnering with local Chinese firms like 21ViaNet and Wanda Group to get a strong foothold in the Chinese market, simply because of the huge opportunity it offers. For Alibaba, this is not an issue, as it is already the most established and leading cloud services provider in the region.

Beyond China, Alibaba is also rapidly expanding to other parts of the world in a bid to increase its customer base and service coverage. It can afford to: the company is sitting on a sizeable pile of cash generated by its popular technology businesses within and outside China, and investors feel confident about the company, so funding is never an issue. Since financing is an important part of any expansion, we can expect Alibaba to make a smooth transition from a Chinese cloud provider to a global one.

Alibaba has already started taking steps towards this transition by opening data centers in different parts of the world and entering into partnerships with companies working in advanced technologies such as artificial intelligence, machine learning, deep learning, virtual reality and augmented reality. With these partnerships, Alibaba plans to offer world-class products and cloud services to its customers.

The future is certainly going to be interesting!


What Are Containers? | @DevOpsSummit #DevOps #Docker #Kubernetes

End-user experience is everything when it comes to facilitating workplace productivity. You could deploy or develop the most powerful applications anyone has ever seen – but they won’t do any good if they offer a poor experience. This is a major reason why applications are moving to SaaS, PaaS and IaaS cloud computing models. The cloud simplifies applications on the back end, which translates to smoother end-user experiences.
But as cloud apps integrate more services and user bases diversify, IT has to do more to streamline application platforms. Enter container technology.


The Best Programming Languages and Frameworks | @CloudExpo #API #Java #Python

Choosing a programming framework for a small business can be overwhelming – there are so many. Here are a few of the best choices, to help you get started.
Ask a room of ten developers which programming framework is the ‘best on the market,’ and you’re liable to receive ten different answers. Each developer will sing the praises of a different language, and each one will very probably feel that theirs is the only logical choice. The most confusing thing, though? Each and every one of those developers will be correct.


Announcing @Loom_Systems AI Analytics to Exhibit at @CloudExpo NY | #AI #Cloud #Analytics

SYS-CON Events announced today that Loom Systems will exhibit at SYS-CON’s 20th International Cloud Expo®, which will take place on June 6-8, 2017, at the Javits Center in New York City, NY.
Founded in 2015, Loom Systems delivers an advanced AI solution to predict and prevent problems in the digital business. Loom stands alone in the industry as an AI analysis platform requiring no prior math knowledge from operators, leveraging the existing staff to succeed in the digital era. With offices in San Francisco and Tel Aviv, Loom Systems works with customers across industries around the world.


Why AWS and public clouds are a great fit for digital health companies

Global equity funding to private digital health startups grew for the 7th straight year in 2016, with a 12% increase from $5.9B in 2015 to $6.6B in 2016, according to CBInsights.

Not incidentally, the rise of digital health has coincided with rising familiarity and market acceptance of public cloud providers like Amazon Web Services (AWS). Public cloud is what has allowed growing healthcare software companies to get to market faster, scale, and meet compliance obligations at a fraction of the cost of custom-built on-premise systems.

Digital health go-to-market journey

Ten years ago, when digital technology was disrupting established companies in nearly every industry, health IT was still dominated by a handful of established enterprises and traditional software companies. In the scramble to meet Meaningful Use requirements for stimulus funding, healthcare providers and insurance companies moved en masse to adopt EMR, EHR, and HIE systems. A few years later, another scramble began as the insurance industry rushed to build HIX (Health Insurance Exchanges) under Obamacare.

Today, most healthcare software products are delivered as Software-as-a-Service platforms. Except for core systems, customers do not anticipate needing to add infrastructure to host new software products. They expect to access these services on the cloud, and be able to add or remove capacity on demand. While some legacy software products will struggle to modernize their code to run in the cloud, next generation cloud-native products benefit from the inherent competitive advantages of infrastructure-as-a-service.

In a public cloud like Amazon Web Services, you can:

  • Spin up servers in minutes
  • Get best-in-class infrastructure security off the shelf
  • Use compute/storage/network resources that have already been assessed to HIPAA standards, limiting the scope of your HIPAA assessment and removing the burden of maintaining most physical security controls (learn more about HIPAA compliance on AWS in our new eBook)
  • Leverage native analytics and data warehousing capabilities without having to build your own tools
  • Start small and scale fast as your business grows

Arguably the most important benefit for new companies is the ability to launch your software product into production in a short span of time. In order to comply with HIPAA, you still have to undergo a risk assessment prior to launch, but a good portion of that assessment can rely on AWS’ own risk assessment.
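
To make the first point in the list above concrete – spinning up servers in minutes, with encryption at rest switched on from the start – here is a minimal, hypothetical boto3 sketch. The AMI, subnet and tag values are placeholders rather than recommendations, and this is an illustration, not a HIPAA-compliant reference architecture.

    # Illustrative only: launch one instance with an encrypted root volume.
    # The AMI ID, subnet ID and tags are placeholders.
    import boto3

    ec2 = boto3.client("ec2", region_name="us-east-1")

    response = ec2.run_instances(
        ImageId="ami-0123456789abcdef0",   # placeholder AMI
        InstanceType="t3.medium",
        MinCount=1,
        MaxCount=1,
        SubnetId="subnet-0abc123",         # placeholder private subnet
        BlockDeviceMappings=[{
            "DeviceName": "/dev/xvda",
            "Ebs": {"VolumeSize": 50, "Encrypted": True},  # encryption at rest
        }],
        TagSpecifications=[{
            "ResourceType": "instance",
            "Tags": [{"Key": "workload", "Value": "patient-portal"}],
        }],
    )
    print("Launched:", response["Instances"][0]["InstanceId"])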

SaaS – not just for startups

The benefits of the SaaS delivery model are not limited to new startups. More established companies — who saw the market shift and took action early — have also benefited from the public cloud.

A top health insurance company recently launched an online wellness and population health management application for diabetes patients. The program combines a number of cloud-based technologies including Big Data, Internet of Things, and Live Media Streaming — all while maintaining HIPAA compliance.

This is all possible because the company hosted its new product on the AWS cloud.

The company also chose AWS because it supports the hyperscale growth of data that must be delivered seamlessly in patient-facing applications that monitor real-time health goals. This kind of data-crunching would be considerably more expensive in an on-premises data centre. AWS also takes care of a significant portion of the risk and cost of protecting physical access to sensitive health data.

The company didn’t build the infrastructure for the application alone; it relied on cloud automation and a partner (Logicworks).

Cloud automation

One of the core benefits of AWS is that it has the potential to significantly reduce day-to-day IT operations tasks. IT can focus more on developing software, and less on building and maintaining infrastructure.

However, AWS is not maintenance-free out of the box. AWS is just rented servers, network, and storage; you still have to configure networks, set up encryption, build and maintain machine images — hundreds of tasks large and small that take up many man-hours per week. In order to make AWS “run itself”, you need automation.

Cloud automation is any software that orchestrates AWS. AWS officially recommends the following aspects of automation:

  • Each AWS environment is coded into a template that can be reused to produce new environments quickly (AWS CloudFormation); a minimal sketch of this follows the list
  • Developers can trivially launch new environments from a catalog of available AWS CloudFormation templates (AWS Service Catalog)
  • The OS is bootstrapped by a configuration management tool like Puppet or Chef, so that all configurations are consistently implemented and enforced. Or you can use AWS’ native service, AWS OpsWorks.
  • Deployment is automated. Ideally, an instance can be created, the OS and packages installed, the latest version of code deployed, and the instance launched in an Auto Scaling group or a container cluster without human intervention.
  • All CloudFormation templates, configuration management code, etc. are versioned and maintained in a repository.
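
As a rough, hypothetical illustration of the first and last points above – a version-controlled template that can be launched on demand – the sketch below creates an environment from a CloudFormation template using boto3. The template path, stack name, parameters and tags are made up for the example.

    # Illustrative only: create an environment from a version-controlled
    # CloudFormation template. Names, parameters and tags are placeholders.
    import boto3

    cfn = boto3.client("cloudformation", region_name="us-east-1")

    with open("templates/web-tier.yaml") as f:  # template lives in the repo
        template_body = f.read()

    stack = cfn.create_stack(
        StackName="dev-web-tier",
        TemplateBody=template_body,
        Parameters=[
            {"ParameterKey": "InstanceType", "ParameterValue": "t3.medium"},
            {"ParameterKey": "Environment", "ParameterValue": "dev"},
        ],
        Tags=[{"Key": "owner", "Value": "platform-team"}],
    )

    # Block until the environment is fully provisioned
    cfn.get_waiter("stack_create_complete").wait(StackName="dev-web-tier")
    print("Stack created:", stack["StackId"])

Because the template, its parameters and this launch script can all live in the same repository, the whole environment can be recreated or torn down on demand, which is what makes AWS feel as if it “runs itself”.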

And yes, it is entirely possible to use these automation tools in a HIPAA-restricted environment. However, creating this software from scratch is time-consuming and complex. It requires vastly different skills from those required to launch AWS or write an application — and most healthcare companies don’t really have the time or resources for it, so hiring a partner is the best approach.

The value of external expertise for health IT on AWS

The AWS cloud is a new landscape for most risk-averse companies. Established healthcare companies struggle to understand the new responsibility model for security and compliance on AWS, while new healthcare companies just want to get HIPAA compliance “out of the way” so they can move on to growing their business. This is where a partner can help. An experienced AWS consulting partner can reduce the risk of migration and accelerate the process of getting a HIPAA audit-ready environment up and running quickly.

The good news is that AWS has a very robust partner ecosystem for healthcare companies. Visit the AWS healthcare partner page for more information. Or contact Logicworks — we currently manage AWS for companies like Orion Health, MassMutual, and Spring Venture Group with ePHI for more than 50 million Americans.


IBM Named “Diamond Sponsor” of @CloudExpo NY and CA | #AI #DevOps

SYS-CON Events announced today that SoftLayer, an IBM Company, has been named “Gold Sponsor” of SYS-CON’s 18th Cloud Expo, which will take place on June 7-9, 2016, at the Javits Center in New York, New York. SoftLayer, an IBM Company, provides cloud infrastructure as a service from a growing number of data centers and network points of presence around the world. SoftLayer’s customers range from Web startups to global enterprises.


Microsoft and Publicis get into a partnership

The cloud industry is being driven by acquisitions and partnerships, as companies are looking to leverage the strengths and advances made by other companies in the same field. In a way, every company is also extending its own technology and expertise to other areas of operations by partnering with niche companies. The latest in this series is the partnership between Microsoft and Publicis.

Publicis Groupe is a marketing and advertising company that is expanding its existing alliance with Microsoft. The partnership aims to tap into artificial intelligence and behavioral data to offer more targeted email campaigns. Specifically, Publicis will tap into the features of Microsoft Azure and the Cortana Intelligence Suite to gain greater insight into the behavior and expectations of its customers. Based on this information, it wants to help its clients create targeted campaigns that will reach end-users in an appealing way. More importantly, the aim is to improve conversion rates and bring in more business and revenue for Publicis’ clients.

Publicis already has its own intelligence system called Cosmos AI, and this will be made available to businesses through the Azure cloud platform. From Publicis’ perspective, this is a lucrative deal, as its clients can now have access to both Cosmos and Azure, which means Publicis can charge them higher licensing fees. In fact, this combined artificial intelligence offering could add to Publicis’ revenue in a big way.

For Microsoft too, this is a good deal, as it’s another opportunity to extend its platform and collaborate with the ad agency. Microsoft is looking to optimize its platform in the best way possible, especially as it wants to take on AWS – the most dominant player in the cloud market. Also, the fact that the cloud market is growing by leaps and bounds means that more opportunities are opening up for every company in this market, and Microsoft wants to grab as many as it can.

This partnership between Microsoft and Publicis is not something new, as the two companies have a long history of working together. In 2009, Microsoft sold its digital agency Razorfish to Publicis for a sum of $530 million. Interestingly, Razorfish had come to Microsoft through its $6.3 billion acquisition of aQuantive, Razorfish’s parent company, in 2007.

So, what does this mean for the cloud industry and the many enterprises that use it?

Almost every partnership augurs well for clients, because they get more choices and, at the same time, access to more features. The same is true of this partnership. When the Cosmos data product is combined with the Azure cloud platform, it’s a boon for marketers: they can get deeper insights into individual customers and use them to customise their products, create appropriate marketing campaigns and do much more. The best part is that all this information is available in real time, so they can come up with new ways to boost their revenue.

For these reasons, the partnership between Microsoft and Publicis works well for everyone.


A guide: Using SmartNICs to implement zero-trust cloud security

In an age of zero-trust security, enterprises are looking to secure individual virtual machines (VMs) in their on-premise data centres, cloud or hybrid environments to prevent increasingly sophisticated attacks. The problem is that firewalling individual VMs using tools like software appliance firewalls or connection tracking (Conntrack) is operationally challenging. It also delivers poor performance, restricting VM mobility and consuming many CPU cycles on servers, which limits their ability to process applications and workloads.

As the need for VM security grows, IT managers end up spending on more and more servers, most of which are tied up with security processing rather than application processing. In this article, we will look at zero-trust security and how best to implement it in data centres.

About zero-trust security

Forrester Research first introduced the Zero-Trust Model for Cybersecurity in its 2013 report for NIST, “Developing a Framework to Improve Critical Infrastructure Cybersecurity.” In this model, all network traffic is untrusted, whether it comes from within the enterprise network or from outside it. Before this model, there was the concept of a trusted network (usually the data center network or enterprise LAN) and an untrusted network (essentially everything outside the data center or enterprise LAN). Typically, the trust was enforced by a perimeter security mechanism (Figure 1a).

Zero trust advocates that (a) all resources be accessed securely irrespective of location, (b) least-privilege and role-based access be adopted and enforced, and (c) all traffic be inspected and logged. In traditional enterprise networks, these principles were implemented primarily by two main mechanisms:

  • Segmentation – mostly network segmentation using VLANs. However, VLANs only provide segmentation, not security
  • Perimeter security at the edge of the segments

This is depicted in Figure 1b.

Zero-trust in data centres

Large-scale data centres deploy a wide variety of services. A single user request can spawn many services within a data center, leading to both east-west traffic within the data center and north-south traffic between the data center and the Internet. For example, consider the process of ordering something on Amazon, where a front-end web server shows the product, but then services are required to accept and validate credit card information, issue a confirmation and send a fulfillment request. This means we must apply the zero-trust model within the data center as well.

There are three reasons why a zero-trust model using security appliances cannot be deployed in data centers, as shown in Figure 1b.  

First, operationally it is extremely cumbersome. The traffic from each server has to be backhauled to a security appliance, and all appliances must be properly configured. This leads to manual errors and operational challenges related to keeping the appliances up to date with changes in service requirements and/or changes in service deployments.

Second, it does not scale well and delivers inferior performance. Most security appliances today can handle traffic on the order of 200Gb/s. As servers are upgraded to and start saturating 10Gb/s and higher network interfaces, a new security appliance must be deployed and provisioned for every 10-20 servers deployed. In practice, a pair of security appliances is needed for redundancy. With the security appliances becoming choke points, the performance of the services also suffers.

Third, this creates silos within the data centre, making it hard to fully utilise the data centre infrastructure.

Zero-trust in virtualised or cloud-scale data centres

The challenges of using appliance-based zero-trust security are amplified in a virtualized data center as the number of VMs per server increases. There is an additional operational challenge in securing VMs, since they can be shut down and brought back up on a different server or sometimes in a different data center or even live migrated. This means the policies associated with a VM should move with it as well, or else all policies have to be programmed on all security appliances.

As a result, we have to think of a different deployment mechanism for zero-trust in data centres and in particular, virtualized data centers. This can be done by distributing security to each server using virtual appliances running alongside the VMs, by implementing security at the host/hypervisor level using Linux iptables, or at the vSwitch level using Open vSwitch (OVS) Conntrack.

Distributed security using virtual appliances: This method (Figure 2a) presents the same problems of scalability and performance and the same operational challenges as the standard security appliance model. The virtual security appliance becomes the bottleneck. It is difficult to manage the policies one appliance at a time. When VMs move, it is extremely challenging to move the policies. In addition, the virtual security appliance is now consuming valuable server resources like CPU, memory and disk space that should be used to run VMs and deliver revenue-generating services.

Distributed security using Linux Bridge and iptables: This method (Figure 2b) solves some of the scale challenges because Linux iptables are available on all Linux hosts. However, by adding another layer of bridging between OVS and VMs, the performance suffers immensely. It is also a massive operational challenge to program taps and then policies for each Linux bridge. VM live migration and/or movement is still extremely challenging as the bridges, taps and policies have to be manually programmed.

Distributed security using OVS Conntrack: The basic solution to the operational challenges is to add OVS Conntrack to OVS networking (Figure 2c). OVS has well-defined APIs for integrating with data center management stacks including OpenStack – e.g., OpenStack Security Groups are mapped to OVS Conntrack. This significantly reduces the operational complexity of deploying distributed security. It also removes the additional layer of abstraction and provides slightly better performance than using Linux iptables. However, this approach still does not address performance and scale: deploying OVS with Conntrack in software results in very high CPU usage for that function alone.
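
As a purely illustrative sketch (not taken from the article or any specific vendor), the snippet below shows what a default-deny, conntrack-aware OVS ruleset might look like when programmed from Python by shelling out to ovs-ofctl. It assumes Open vSwitch is installed and a bridge named br0 exists; the allowed TCP port is a placeholder.

    # Illustrative only: a default-deny, conntrack-aware ruleset on bridge "br0".
    # Assumes Open vSwitch is installed; the allowed port (443) is a placeholder.
    import subprocess

    def add_flow(rule: str) -> None:
        subprocess.run(["ovs-ofctl", "add-flow", "br0", rule], check=True)

    # Let ARP through so endpoints can resolve each other
    add_flow("table=0,priority=90,arp,actions=NORMAL")

    # Send untracked IP traffic through connection tracking, then to table 1
    add_flow("table=0,priority=100,ip,ct_state=-trk,actions=ct(table=1)")

    # Allow and commit new inbound TCP connections to port 443 only
    add_flow("table=1,priority=110,tcp,ct_state=+trk+new,tp_dst=443,"
             "actions=ct(commit),NORMAL")

    # Allow packets belonging to already-established connections
    add_flow("table=1,priority=100,ip,ct_state=+trk+est,actions=NORMAL")

    # Everything else is dropped – zero trust by default
    add_flow("table=1,priority=0,actions=drop")

Even in this toy form, the per-server nature of the rules illustrates both the appeal of the approach (policies live right next to the VMs) and the cost described here: every packet traverses connection tracking in software, which is exactly the work a SmartNIC is meant to take off the server’s CPU cores.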

To address these performance and scale issues, data center operators must find a way to offload OVS and Conntrack from the CPU cores. This allows them to provide a very high-performance distributed firewall on each server – close to the VMs – that can enforce policies at per-VM and per-service granularity while a large number of connections are set up and tracked.

Offloading OVS Conntrack with a SmartNIC

The most efficient way to offload OVS and Conntrack is to use a SmartNIC and appropriate software. A SmartNIC is a network interface card that incorporates a programmable network processor which can run application software. By running Conntrack software in the SmartNIC’s processor, this chore is offloaded from the server CPU cores.

Offloading OVS Conntrack from the server CPU cores leads to far higher performance and scalability. Figure 3 (above) compares some representative performance metrics for the server CPU-based and SmartNIC-based implementations.

As can be seen in Figure 3, the SmartNIC-based implementation delivers 4X the performance of a software-only, CPU-based implementation while consuming less than 3 percent of the CPU for a large number of flows.

Current implementations of software-only, CPU-based Conntrack start consuming more than 40 percent of the CPU at 100-500 unique flows and can go as high as 51 percent CPU utilization on a modern server with 48 cores. Clearly, using more than half a server to provide security is not a feasible solution when the central function of the server is to host VMs with applications and services that generate revenue.

Essentially, offloading OVS and Conntrack to a SmartNIC makes it feasible to implement security on a per-VM or per-container basis by removing the server usage penalty and expense, solving the scalability and performance issues, and delivering better server utilization for application traffic as intended.

How to run Microsoft Outlook on Mac

When it comes to running Microsoft Outlook on a PC versus Mac, the choice between the two is often less a question of need and more a question of preference. It is essentially the specific functionality of these products that creates the user preference. Preference can, of course, be influenced by need, and every user […]
