Google creates Transfer Appliance to help EU companies shift cloud data


Clare Hopping

13 Nov, 2018

Google has launched its European Transfer Appliance beta testing programme, aimed at helping businesses move apps and data from legacy infrastructure to the search giant’s cloud platform.

The high-capacity server is suited to moving workloads and datasets in excess of 20TB. Transferring that much data over a network would normally take a week or more, but Google claims its Transfer Appliance can complete the task much faster, although it hasn’t revealed just how quickly.

In Europe, Google is offering its Transfer Appliance in a 100TB configuration with a total usable capacity of 200TB. It’ll shortly launch a 480TB version for businesses with even more data to migrate; that larger appliance will offer a total usable capacity of a petabyte.

The cloud giant explained its Transfer Appliance has been used by a range of businesses, including those needing to move large datasets such as satellite imagery and audio files. It’s also a great option for migrating Hadoop Distributed File System (HDFS) clusters to Google Cloud Platform.

“We see lots of users run their powerful Apache Spark and Apache Hadoop clusters on GCP with Cloud Dataproc, a managed Spark and Hadoop service that allows you to create clusters quickly, then hand off cluster management to the service,” said Ben Chong, product manager at Google Cloud Platform.

Mounted as an NFS volume, the Transfer Appliance can receive HDFS data pushed to it with Apache DistCp. Once the data has been copied, the appliance is shipped back to Google, which uploads the data to GCP.
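As a rough illustration of that workflow, the sketch below simply shells out to the standard `hadoop distcp` command to copy an HDFS directory onto an NFS mount where the appliance has been attached. The source path, mount point and hostname are hypothetical examples rather than values from Google’s documentation.

```python
# Minimal sketch: push an HDFS directory onto an NFS-mounted Transfer Appliance
# with Apache DistCp. Paths and hostnames below are hypothetical.
import subprocess

HDFS_SRC = "hdfs://namenode:8020/data/satellite-imagery"  # hypothetical source
APPLIANCE_DST = "file:///mnt/transfer-appliance/capture"  # hypothetical NFS mount

def copy_to_appliance(src: str, dst: str) -> None:
    """Run a DistCp job across the cluster to copy HDFS data onto the appliance."""
    subprocess.run(["hadoop", "distcp", src, dst], check=True)

if __name__ == "__main__":
    copy_to_appliance(HDFS_SRC, APPLIANCE_DST)
```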

Businesses wanting to use Transfer Appliance to migrate their data can request one from their GCP Console.

A guide to the key principles of chaos engineering

Chaos engineering can be defined as running experiments on a distributed system at scale in order to increase confidence that the system will behave as desired and expected under undesired and unexpected conditions.

The concept was popularised initially by Netflix and its Chaos Monkey approach. As the company put it as far back as 2010: "The Chaos Monkey’s job is to randomly kill instances and services within our architecture. If we aren’t constantly testing our ability to succeed despite failure, then it isn’t likely to work when it matters most – in the event of an unexpected outage."

The foundation of chaos engineering lies in controlled experiments; a simple approach follows.

Interim on controlled experiments with control and experimental groups

A controlled experiment is simply an experiment carried out under controlled conditions. Unless absolutely necessary, it is important to change only one variable at a time; otherwise it becomes increasingly difficult to determine what caused the changes in the results.

One type of controlled experiment is the ‘control and experimental group’ experiment. In this kind of experiment, a control group is observed with no variables being purposefully modified, while the experimental group has one variable at a time modified, with the output observed at each stage.

A simple approach

Defining a steady state: The main focus is to aim for output metrics and not for system behaviour; the goal is to find out whether the system can continue to provide the expected service, not how it is providing that service. It is useful to define thresholds that make for an easy comparison between the control group and the experimental group. This also allows for automated comparisons, which makes checking large quantities of metrics easier.
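To make the threshold idea concrete, here is a minimal Python sketch assuming two invented metrics (success rate and p99 latency) and invented tolerance values; the metric names, numbers and the way measurements are gathered are all placeholders for whatever monitoring is actually in place.

```python
# Minimal sketch: compare output metrics of control vs experimental groups
# against predefined thresholds. Metric names and values are illustrative only.

STEADY_STATE_THRESHOLDS = {
    "success_rate": 0.02,    # max allowed absolute drop (2 percentage points)
    "p99_latency_ms": 50.0,  # max allowed increase in p99 latency
}

def steady_state_holds(control: dict, experimental: dict) -> bool:
    """Return True if the experimental group stays within tolerance of control."""
    drop = control["success_rate"] - experimental["success_rate"]
    latency_increase = experimental["p99_latency_ms"] - control["p99_latency_ms"]
    return (drop <= STEADY_STATE_THRESHOLDS["success_rate"]
            and latency_increase <= STEADY_STATE_THRESHOLDS["p99_latency_ms"])

# Example comparison with made-up measurements
control_metrics = {"success_rate": 0.999, "p99_latency_ms": 180.0}
experimental_metrics = {"success_rate": 0.997, "p99_latency_ms": 210.0}
print(steady_state_holds(control_metrics, experimental_metrics))  # True
```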

Building the hypothesis around control and experimental groups: Because chaos engineering is a mixture of science and engineering, the foundation is built around having two groups: a control group, which will be unaffected by injected events, and an experimental group, which will be the target of the variable manipulation.

Introducing variables that correspond to undesired/unexpected events: Changing the state of the variables is what makes the experiment; however, those variables need to be significant and within reason, and it is of the utmost importance to change only one variable input at a time.

Try to disprove the hypothesis: The purpose of the experiment is not to validate the hypothesis but to disprove it; we must not fool ourselves, knowing that we are the easiest people to fool.

Production means production

The only way of increasing confidence in a system running in production is to experiment on the system running in production, under live production traffic. This may seem odd at first glance, but it is absolutely necessary.

One important aspect that sometimes goes unnoticed is that we must not attack the point where we know the system will fail. Speaking with upper management, I have had answers along the lines of ‘I know that if I unplug the DB the system will break’. Well, that is not chaos engineering – that is just plain foolishness. A chaos experiment injects failure into parts of the system we are confident will continue to provide the service. Be it by failing over, using HA, or recovering, we know that the service to the client will not be disrupted, and we try our best to prove ourselves wrong so we can learn from it.

It is also absolutely necessary to minimise the impact of the experiment on real traffic; although we are looking for disruption, we are not pursuing an interruption of service or a breach of SLOs, SLIs or SLAs. Minimising negative impact is an engineering task in its own right.

Interim on the blast radius

Chaos engineering, or failure injection testing, is not about causing outages; it is about learning from the system being managed. To do so, the changes injected into the system must go from small to big. Inject a small change and observe the output and what it has caused. If we have learned something, splendid; if not, we increase the change and consequently the blast radius. Rinse and repeat. Many people would argue that they know when and where the system will go down, but that is not the intention. The intention is to start small and improve the system incrementally – a granular approach, from small to large scale.
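A small-to-large experiment loop could be sketched along these lines; `inject_failure`, `measure` and `steady_state_holds` are placeholders for the failure-injection and monitoring tooling actually in use, and the blast radius steps are illustrative only.

```python
# Minimal sketch of a small-to-large blast radius loop. The injected failure,
# the measurements and the blast radius steps are all placeholders.
import time

def run_escalating_experiment(inject_failure, measure, steady_state_holds,
                              blast_radii=(0.01, 0.05, 0.10, 0.25)):
    """Escalate the fraction of instances affected, stopping as soon as the
    steady state is violated (that is the point where we learn something)."""
    for radius in blast_radii:
        fault = inject_failure(fraction_of_instances=radius)  # start the event
        time.sleep(300)                                       # let metrics accumulate
        control_metrics, experimental_metrics = measure()
        fault.stop()                                          # always clean up the fault
        if not steady_state_holds(control_metrics, experimental_metrics):
            print(f"Hypothesis disproved at blast radius {radius:.0%}; investigate.")
            return radius
        print(f"Steady state held at blast radius {radius:.0%}; increasing.")
    return None  # survived the largest planned blast radius
```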

Automation

The importance of automation is undisputed, even more so in these experiments, where it is necessary to:

  • Be able to roll back fast enough without human interaction, or with minimal human interaction (a minimal sketch follows this list)
  • Be able to examine a large set of output metrics at first glance
  • Be able to pinpoint infrastructure weak spots visually
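A rough sketch of the first point is an automated guardrail that polls output metrics during an experiment and rolls back without waiting for a human. The guardrail metrics, limits and the `fetch_guardrail_metrics`/`rollback` hooks below are assumptions standing in for real monitoring and orchestration tooling.

```python
# Minimal sketch of an automated abort/rollback guardrail for a running
# chaos experiment. fetch_guardrail_metrics() and rollback() are placeholders
# for whatever monitoring and orchestration tooling is actually in place.
import time

GUARDRAILS = {"error_rate": 0.05, "p99_latency_ms": 500.0}  # illustrative limits

def watch_and_rollback(fetch_guardrail_metrics, rollback,
                       poll_seconds=10, max_runtime_seconds=900):
    """Poll guardrail metrics during an experiment; roll back on any breach."""
    deadline = time.time() + max_runtime_seconds
    while time.time() < deadline:
        metrics = fetch_guardrail_metrics()
        breached = [name for name, limit in GUARDRAILS.items()
                    if metrics.get(name, 0.0) > limit]
        if breached:
            rollback()  # no human in the loop
            return f"rolled back: {', '.join(breached)} exceeded limits"
        time.sleep(poll_seconds)
    return "experiment completed within guardrails"
```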

Other sources and good reads

The basics: https://principlesofchaos.org/
An extended introduction: https://www.gremlin.com/community/tutorials/chaos-engineering-the-history-principles-and-practice/
A big list of resources: https://github.com/dastergon/awesome-chaos-engineering

Qualtrics snapped up by SAP for $8bn in continued cloud push


Bobby Hellard

12 Nov, 2018

Software giant SAP has agreed to acquire Qualtrics for $8 billion, just days before the US company was due to go public.

Qualtrics is a technology platform that businesses can use to collect, manage and act on data. SAP will gain the company’s XM Platform, a system for managing core business experiences – customer, product, employee and brand – in a single place.

The company will keep its leadership structure in place and operate within SAP as normal, but said it expects its 2018 revenue to exceed $400 million now that it is part of SAP.

“Our mission is to help organizations deliver the experiences that turn their customers into fanatics, employees into ambassadors, products into obsessions and brands into religions,” said Ryan Smith, CEO of Qualtrics.

“Supported by a global team of over 95,000, SAP will help us scale faster and achieve our mission on a broader stage. This will put the XM Platform everywhere overnight. We could not be more excited to join forces with Bill and the SAP team in this once-in-a-generation opportunity to power the experience economy.”

At the time of sale, Qualtrics had over 9,000 enterprise customers worldwide, including more than 75% of the Fortune 100. The deal will give it access to the German enterprise software maker’s more than 413,000 customers and its global sales force of around 15,000.

“Together, SAP and Qualtrics represent a new paradigm, similar to market-making shifts in personal operating systems, smart devices and social networks,” said SAP CEO Bill McDermott. “SAP already touches 77% of the world’s transactions.

“The combination of Qualtrics and SAP reaffirms experience management as the groundbreaking new frontier for the technology industry. SAP and Qualtrics are seizing this opportunity as like-minded innovators, united in mission, strategy and culture.”

SAP acquires Qualtrics in $8bn deal to deliver stronger brand and customer experiences

SAP has announced the acquisition of research management software provider Qualtrics for $8 billion (£6.2bn) with the aim to ‘deliver the transformative potential of experience and operational data.’

Qualtrics, based in Utah and Washington, offers a platform which enables organisations to conduct internal and external surveys, gauging customer experience, employee expectations and brand advocacy. With SAP claiming its software is involved to some degree in more than three quarters of the world’s transactions, the two companies say their combined product sets will offer unparalleled reach into the decisions behind them.

“The combination of Qualtrics and SAP reaffirms experience management as the groundbreaking new frontier for the technology industry,” said Bill McDermott, SAP CEO in a statement. “SAP and Qualtrics are seizing this opportunity as like-minded innovators, united in mission, strategy and culture.

“We share the belief that every human voice holds value, every experience matters and that the best-run businesses can make the world run better,” added McDermott. “We can’t wait to stand behind Ryan [Smith, Qualtrics CEO] and his amazing colleagues for the next chapters in the experience management story.”

Writing in a blog post, Robert Enslin, president of the cloud business group at SAP – where Qualtrics will be housed – noted the importance of emerging technologies in marrying experience and operational data. “The next evolution of enterprise applications has begun with a real-time connection between the X-data in the system of action and the O-data within the system of record,” wrote Enslin. “Our ability to apply intelligence and machine learning atop these co-joined data sets unleashes the unprecedented power of the new experience economy.”

Across SAP, the need to inject automation and artificial intelligence into systems of record and enhance long-standing customer relationships has long been apparent. Speaking to this publication last year, Melissa Di Donato, chief revenue officer at SAP S/4HANA Cloud, explained the rationale. “We went from hypothesising about the business benefit of what IoT, machine learning, and AI can do for the enterprise, and then all of a sudden it’s become embedded into our ERP,” she said.

Qualtrics may be a company whose product works somewhat under the surface of organisations, but it has long been a SaaS darling. For the past two years it has featured in the Forbes top 10 list of privately held cloud computing companies, ranking #6 in 2017 and #7 in 2018.

Either way, Qualtrics would certainly not have been in contention for 2019’s ranking. As reported by Deseret News last month, the company had filed for an IPO, with the proposed flotation set to be the biggest in Utah’s history. The report noted that, while no details of stock pricing or shares were available, the company’s most recent funding round had put it at a valuation of $2.5bn.

It has certainly been a high-profile few weeks in the M&A realm; IBM’s acquisition of Red Hat for $34bn still looms large, backed up by VMware buying Heptio last week. The SAP-Qualtrics deal is the second largest SaaS acquisition of all time, behind the $9.3bn Oracle shelled out for NetSuite two years ago.

The acquisition is expected to close in the first half of 2019.



Dropbox Business boosts security with Google Cloud Identity integration


Clare Hopping

12 Nov, 2018

Dropbox has unveiled a partnership with Google that will see the search giant’s Cloud Identity Platform integrated into Dropbox Business, allowing users to sign in to the storage and collaboration service using their existing Google login details.

This means Dropbox Business accounts can benefit from an extra layer of protection via multi-factor authentication, provided through the Google Authenticator app and Titan Security Keys.


The collaboration with Google is just one of the new partnerships Dropbox has formed with security firms to boost the protection of its customers and their data. It explained it’s looking for new ways it can facilitate the changing needs of customers, such as mobile working and storing sensitive files and folders in the cloud.

Dropbox also revealed it has tied up with other partners, with extra integrations for Dropbox Business users including BetterCloud, Coronet, Proofpoint and SailPoint.

BetterCloud enables administrators to build automated processes within an organisation, such as onboarding and offboarding staff and document management, while Coronet offers the ability to manage content sharing and keep on top of document security. Proofpoint’s tie-up with Dropbox adds data loss prevention to the collaboration platform, and SailPoint gives admins control over data access rights for employee files and folders.

“Businesses today are using multiple tools to protect their content, and we’re making it easier for them to securely deploy Dropbox alongside their existing security standards,” said Quentin Clark, SVP of engineering, product and design at Dropbox.

“As employees work remotely and teams change, businesses will have the peace of mind of knowing their content will always be secure with Dropbox.”

These new integrations will start rolling out to Dropbox Business customers by the end of the year.

Cloud and hybrid technology tops the priority list for businesses


Esther Kezia Thorpe

9 Nov, 2018

Cloud and hybrid IT are among the top five most important elements of their organisation’s technology strategy, according to 95% of respondents to a SolarWinds survey of more than 800 IT professionals.

Of these, 77% of respondents said that cloud technology topped their list of most important technologies, and was the most useful for digital transformation efforts.

The primary reason for this is that cloud and hybrid IT not only meet the current needs of businesses but also serve as the backbone for future trends like machine learning and artificial intelligence. Many of the principles of cloud implementation support scalability for future growth, as well as the flexibility to add and remove services as needed.

However, 63% of respondents also said that cloud and hybrid IT were the biggest challenges when it came to implementation and rollout, suggesting that although they understand its value, the day-to-day practicalities are still proving difficult for businesses looking to make the most of cloud technologies.

In second place, seen as one of the most important technologies in use today by 86% of IT professionals, is automation. This also scored highly as a technology with the greatest potential to provide productivity and efficiency benefits, as well as return on investment in the future.


Big data analytics was also rated highly as an important tool for organisations at present, with 79% of IT professionals including it in their top five choices. It was also seen as the number two priority for digital transformation over the next few years.

One challenge in implementing cloud and data analytics technologies is inadequate infrastructure and a lack of organisational strategy, which the survey highlights as a common reason why IT professionals’ current systems aren’t optimised.

But as cloud providers make it increasingly easy to move to the cloud and remove barriers to adoption, it is the business itself that will need to adapt in order to use these technologies to full effect.

UKCA Winner Showcase: Amido and Project FreQ


Maggie Holland

9 Nov, 2018

Video hasn’t really killed the radio star. Indeed, the radio industry continues to boom, with listeners and advertisers alike enjoying mutual benefits. According to Radio Joint Audience Research (RAJAR), the majority (89%) of the UK population tune into the radio at least once a week, listening to an average of 21.1 hours during that time.

Much like many industries, though, radio stations and presenters do not work for free and, as such, networks rely on advertising to fuel the financial pipeline.

But advertisers, understandably, need to demonstrate return on their investment and ensure they are getting value for their money. As such, being able to track and monitor the existence of their advertising, as well as the impact, is key.

It’s often down to the in-house radio commercial team to do the grunt work of ensuring spend is maximised and accounted for. That may seem like a simple task, but for one particular global media and entertainment company – with multiple clients – the process was anything but high-tech.

Indeed, instead of searching for and booking new ad opportunities, commercial team members were bogged down listening to individual shows to try and pinpoint client on-air mentions. With each radio show lasting up to four hours, this was a ridiculously time-consuming and inefficient process.

The entertainment firm turned to Amido, a vendor-agnostic IT consultancy specialising in cloud implementations, for help. The resulting project, FreQ (pronounced free-q), won the UK Cloud Award 2018 accolade for Most Innovative Emerging Technology in the Best Digital Transformation Project category.

The approach Amido took resulted in a very simple solution to a complex problem. It can now track, compile and report on on-air mentions with the same ease consumers enjoy when using a search engine online. It has transformed the way commercial teams work with clients – replacing hours of effort with mere minutes – and, no doubt in turn, added a USP for future client engagements, too.

“Technology for us at Amido has never been about products, but about the business objectives. This is how we are able to continue to expertly deliver innovative solutions as we know how to make disparate software work together across platforms, and plug the gaps where needed rather than reinventing the wheel,” said Alan Walsh, CEO of Amido.

“We know the value in our team and constantly invest in our cloud experts, so they are constantly evolving and learning through our DevOps Academy.”

It took just eight weeks to implement and, importantly, went live on time and within budget. But, behind the resultant simplicity lies a lot of listening, talking and project work.

“We had been working with the client for around 18 months on various projects. Our initial discussions were around how to improve internal processes. They struggled with the same issues a lot of media companies have, including legacy systems that are cumbersome and don’t really work well together,” Leo Barnes, senior business analyst and UX design consultant at Amido, told Cloud Pro.

“So we looked at digital transformation and how they could improve internal processes, keep employees happy and allow them to focus on their day job – stop doing manual processes and talk to other humans, allowing computers to get on and do what they do best. We wanted to consolidate and revolutionise.”

Speech-to-text system VoiceBase lies at the core of the solution, with AWS’ Elasticsearch service providing sophisticated analytics, search and monitoring capabilities, among other things. The former transcribes each radio show, allowing commercial staff to scan rather than listen for mentions and refer back to audio snippets where relevant.

With a user-friendly front end, users are able to search for relevant mentions, verify them and then download what they need to send on to clients. Perhaps an unintended but welcome benefit is that use of FreQ has expanded beyond the commercial team and is now being picked up by others, namely presenters and producers, who use the tool to review their programmes and discuss content and performance.

“We work with a range of clients. Some are small, some are complex. We have a general approach to understand what the problem is. Then, we will have a proof of concept and reduce the guesswork. The client can then sell that internally and provide confidence further up the chain,” Barnes added.

“Having that recognition shows the client is happy and winning this UK Cloud Awards accolade is an added bonus.”

ThousandEyes assesses the key performance differences between AWS, Azure and GCP

Plenty of factors have to be taken into account when choosing a cloud provider, from performance to price and everything in between. Indeed, many organisations have gone a step further and are now weighing up more than one cloud provider, assessing which workloads fit best where.

When it comes to the combination of price and performance, companies such as Cloud Spectator have shown the differences between the Amazons, Microsofts and Googles of this world and more specialised players. Yet for many organisations, the big three are a must.

ThousandEyes has put together what it claims to be the industry’s first report which measures the global performance of the public cloud behemoths. The report, which analysed more than 160 million data points, found subtle but crucial differences in how they worked.

For one thing, AWS takes a different approach to its brethren when it comes to connectivity. AWS’ traffic only enters its architectural backbone close to the target region; for instance, traffic from Singapore goes through multiple service providers before hitting Amazon’s systems in Dallas.

The reason for this is succinctly explained. “Why AWS chooses to route its traffic through the Internet while the other two big players use their internal backbone might have to do with how each of these service providers has evolved,” the report notes. “Google and Microsoft have the historical advantage. AWS, the current market leader in public cloud offerings, focused initially on rapid delivery of services to the market, rather than building out a massive backbone network.”

In terms of network performance, the research found generally consistent results, although there were exceptions in Asia and LATAM. The report took the cloud behemoths’ Eastern US data centre locations – Ashburn for AWS and Google, and Richmond for Microsoft – and compared bi-directional latency between different geographies. Naturally, North America and Europe had the quickest times, with Asia and Oceania lagging, but when it came to fluctuations in latency, NA and Asia suffered the most.

Looking at specific countries, India proved a fascinating case. From each provider’s Mumbai region, Google struggled somewhat – particularly when it came to Europe, where almost three times the latency was experienced. Yet in terms of variance, AWS had particular difficulty when it came to Asia, although all providers recorded poor scores. “Such large swings can impact user experience and most likely corresponds to the relatively poor quality of the Internet in Asia,” the report noted.

The report also explored multi-cloud performance – with results here being consistent. Packet loss was at 0.01% across all relationships – AWS/Azure, Azure/GCP and GCP/AWS. “Multi-cloud performance reflects a symbiotic relationship,” the report explained. “Traffic between cloud providers almost never exits the three provider backbone networks, manifesting as negligible loss and jitter in end-to-end communication.”

“Multi-national organisations that are embracing digital transformation and venturing into the cloud need to be aware of the geographical performance differences between the major public clouds when making global multi-cloud decisions,” said Archana Kesevan, report author and senior product marketing manager at ThousandEyes.

You can read the full report here (email required).


Google Cloud introduces AI Hub and Kubeflow Pipelines


Bobby Hellard

9 Nov, 2018

With the AI revolution in full swing, there’s a growing need for a simpler way to understand and deploy the smart technology so businesses can see its full potential.

And, it’s not just big organisations; it’s small and medium-sized businesses from all industries looking to get the most out of machine learning and data. To help manage these dauntingly complex technologies, Google Cloud is launching an AI Hub and Kubeflow Pipelines for businesses.

It’s as complex as it sounds, and proof of Google Cloud’s point: for businesses to fully understand AI and machine learning, they need a little help and guidance, which Google Cloud is packaging as a set of building blocks.

However, the cloud giant’s new chief struck more of a warning tone for businesses adopting these new technologies. Speaking to MIT Technology Review, Andrew Moore laid bare the reality of embedding AI and machine learning into a business.

“It’s like electrification,” he said. “And it took about two or three decades for electrification to pretty much change the way the world was. Sometimes I meet very senior people with big responsibilities who have been led to believe that artificial intelligence is some kind of ‘magic dust’ that you sprinkle on an organisation and it just gets smarter. In fact, implementing artificial intelligence successfully is a slog.

“When people come in and say ‘How do I actually implement this artificial intelligence project?’, we immediately start breaking the problem down in our brains into the traditional components of AI – perception, decision making and action – and map those onto different parts of the business. One of the things Google Cloud has in place is these building blocks that you can slot together.”

The AI Hub is described as a “one-stop destination for plug-and-play machine learning content” and includes TensorFlow modules. This, Google Cloud says, makes it easier for businesses to reuse pipelines and quickly deploy them to production on Google Cloud Platform in a few simple steps.

The pipelines themselves are also a new component of Kubeflow, an open source project that packages ML code. They provide a workbench to compose, deploy and manage reusable ML workflows, making for a “no lock-in” hybrid solution, according to Google Cloud.
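For a flavour of what a reusable pipeline looks like, here is a minimal sketch using the Kubeflow Pipelines Python SDK (`kfp`) roughly as it stood at the time; the container images, step names and arguments are hypothetical, and the SDK surface has changed across releases, so treat this as an illustration rather than a reference.

```python
# Minimal sketch of a two-step Kubeflow pipeline using the kfp SDK
# (API as of ~2018; newer releases differ). Image names are hypothetical.
import kfp.compiler
import kfp.dsl as dsl

@dsl.pipeline(name="train-and-deploy", description="Toy two-step ML pipeline")
def train_and_deploy(data_path="gs://my-bucket/training-data"):
    # Training step: runs a hypothetical trainer image and records a model path
    train = dsl.ContainerOp(
        name="train",
        image="gcr.io/my-project/trainer:latest",
        arguments=["--data", data_path, "--out", "/model"],
        file_outputs={"model": "/model/path.txt"},
    )
    # Deployment step: consumes the trained model produced above
    dsl.ContainerOp(
        name="deploy",
        image="gcr.io/my-project/deployer:latest",
        arguments=["--model", train.outputs["model"]],
    )

if __name__ == "__main__":
    # Compile to an archive that can be uploaded to the Kubeflow Pipelines UI
    kfp.compiler.Compiler().compile(train_and_deploy, "train_and_deploy.tar.gz")
```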

The introduction of Kubeflow Pipelines and the AI Hub reinforces Google’s large-scale efforts in 2018 to invest in artificial intelligence. As the bronze medallist in the cloud wars behind Amazon and Microsoft, Google has made AI its most important product for enticing customers to its cloud services.

“These are important, differentiating moves in artificial intelligence from Google,” states Nicholas McQuire, head of enterprise and artificial intelligence research at CCS Insight.

“Customer fear of being locked in by the cloud providers is reaching an all-time high and this has been a key barrier for AI adoption. Meanwhile, hybrid cloud and open source technologies like Kubernetes, which Google pioneered, have become very popular, so Kubeflow Pipelines addresses many AI requirements in a single stroke.”

Amazon and Cisco join forces to create hybrid app development platform


Clare Hopping

9 Nov, 2018

Amazon and Cisco have teamed up to make it easier for developers to build apps across cloud and traditional on-premise architecture, switching them from one to the other when required.

Cisco Hybrid Solution for Kubernetes on Amazon allows developers to build their apps in containers either on AWS’s cloud service or on their traditional on-premise servers.

Kip Compton, senior vice president of Cisco’s Cloud Platform and Solutions group, explained that developers are often expected to work in fragmented environments, with their applications hosted either in the cloud or in traditional on-premise environments. When switching to a hybrid model, it’s often a complicated process to get up and running, with technologies, teams and vendors needing to work closely together to make sure apps can move from cloud to on-premise seamlessly.

That’s where containers, and specifically the Google-developed Kubernetes, come in.

“Containers and Kubernetes have emerged as key technologies to give developers more agility, portability, and speed – both in how applications are developed and in how they are deployed,” Compton explained in a blog post. “But, enterprises have been struggling to realize the full potential of these technologies because of the complexity of managing containerized applications in a hybrid environment.”

Cisco Hybrid Solution for Kubernetes on Amazon combines Amazon Elastic Container Service for Kubernetes (EKS) with Cisco’s Container Platform and on-premise infrastructure, giving businesses a hybrid approach without the extra work.

When setting up apps, developers can choose to deploy them on either AWS or on their on-premise architecture running Cisco’s Container Platform. And if they want to switch the applications from one to the other, they can do so. This means businesses can take advantage of the cloud’s flexibility with the privacy and regulatory approval of traditional, on-premise architecture at the same time.
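The portability itself comes from Kubernetes: the same manifest can be applied to an EKS cluster or to an on-premise cluster simply by pointing kubectl at a different context. The sketch below illustrates that idea with hypothetical context names and manifest path; it is not Cisco’s tooling, just the underlying mechanism.

```python
# Minimal sketch: deploy the same Kubernetes manifest to either an EKS cluster
# or an on-premise cluster by switching kubectl contexts. Context names and
# the manifest path are hypothetical.
import subprocess

CONTEXTS = {
    "aws": "eks-production",     # hypothetical EKS context
    "onprem": "ccp-datacenter",  # hypothetical on-premise cluster context
}

def deploy(target: str, manifest: str = "app.yaml") -> None:
    """Apply the manifest against whichever cluster 'target' points at."""
    subprocess.run(
        ["kubectl", "--context", CONTEXTS[target], "apply", "-f", manifest],
        check=True,
    )

if __name__ == "__main__":
    deploy("aws")     # run the app in AWS
    deploy("onprem")  # or move it back on-premise
```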

Cisco Hybrid Solution for Kubernetes on Amazon will set organisations back from $65,000 and will include specialised hardware as well as Cisco’s software. The product will be available from the end of November.