Open-source rivals considered suing Amazon over “strip mining”


Nicole Kobie

16 Dec, 2019

Amazon Web Services has helped plenty of companies, from small startups to global giants, expand their computing power, but now it’s accused of “strip mining” software from other tech firms.

According to a report in The New York Times, Amazon is accused of taking advantage of open-source technologies, noting which are popular among AWS users and then rolling out its own versions of those services. The accusations aren’t new, but the report suggests that seven of the open-source companies targeted met to discuss taking legal action against Amazon; so far, none has brought a case.

The story points to a company called Elastic, which offers an open-source, free-to-use search tool for data analytics called Elasticsearch. In 2015, Amazon announced it would offer a managed version of the open-source search tool. Open source companies generally, though not always, make their revenue by selling support or management for their free-to-use software, meaning Amazon was cutting in on Elastic’s business.

Elastic retaliated shortly thereafter by adding new features that were only for premium users, the report says; Amazon simply added the same features. That battle has carried on in the intervening years.

In March of this year, Amazon unveiled a fork called Open Distro for Elasticsearch, saying the tool had become “increasingly central” to users worldwide thanks to its “permissive” Apache 2.0 license, according to a blog post by AWS vice-president of cloud architecture strategy, Adrian Cockcroft.

“Unfortunately, since June 2018, we have witnessed significant intermingling of proprietary code into the code base,” said Cockcroft. “While an Apache 2.0 licensed download is still available, there is an extreme lack of clarity as to what customers who care about open source are getting and what they can depend on. For example, neither release notes nor documentation make it clear what is open source and what is proprietary.”

That means any changes to the code – such as to patch a bug or add a feature – could constitute a breach of the license, and the loss of the right to use the software. To give AWS users “certainty”, Amazon teamed up with Expedia and Netflix to fork off their own open-source version, the Open Distro for Elasticsearch.

In response, Elastic founder Shay Banon warned of the dangers of such splintering in a blog post, denying Amazon’s accusation that anything had changed with the code’s license. “Our products were forked, redistributed and rebundled so many times I lost count. It is a sign of success and the reach our products have,” Banon said. “From various vendors, to large Chinese entities, to now, Amazon. There was always a reason, at times masked with fake altruism or benevolence. None of these have lasted.”

In September, Elastic sued AWS for trademark violations and false advertising for the original product as well as Open Distro, saying customers are “likely to be confused”. Amazon has denied the accusation, but did not reply to a request for comment at the time of publication.

While we wait for the outcome of that particular case – and it may well be settled out of court – this isn’t the first time Amazon has been accused of “strip mining” rival companies, in particular those offering open-source software. MongoDB, MariaDB and Redis Labs have made similar complaints, and it isn’t limited to software: reports have noted a similar practice with shoes, with Amazon selling a pair remarkably similar to those made by Allbirds.

But Amazon makes much more from AWS than it does from selling retail products such as shoes. Earlier this year, financial results revealed that AWS makes up half of Amazon’s total profits, growing 41% year on year – so expect Amazon to defend its corner.

Marketing Automation Systems: A new age of marketing technology


David Howell

17 Dec, 2019

As the quantity of customer data flowing into your business continues to grow, automating aspects of your enterprise’s processes has become a commercial imperative. One key area on which to focus is marketing.

Your cloud deployment has already brought several benefits to your company. Whether you have a private, public or hybrid cloud deployment, the hosted infrastructure you have in place is the ideal environment to radically alter how your business uses its marketing technology.

According to Flexera’s State of the Cloud Report, optimising existing cloud use for cost savings continues to be the top initiative in 2019 for the third year in a row, increasing from 58% in 2018 to 64% this year. Others include moving more workloads to cloud (58%), expanding the use of containers and adopting a cloud-first strategy (tied at 39%) and implementing automated policies for governance (35%).

Noah Elkin, senior analyst with Gartner, tells Cloud Pro: “When you look at the momentum within the marketing automation systems (MAS) marketplace, the direction of travel is towards more cloud deployments. Most of the mega-vendor solutions are either 100% cloud or some level of hybrid cloud deployment. Ultimately, MAS connects a business with data that enables them to deliver more personalised services to its customers. The 360-degree view of a customer that MAS can deliver is now central to the development of all businesses.”

Implementing MAS isn’t just about cost reductions either. It also offers the opportunity to bring in other related technologies, such as artificial intelligence (AI) and machine learning (ML). A report by tech advisory and investment firm GP Bullhound recently showed that $1 billion was invested into AI-related marketing companies in Q2 2019 alone.

Speaking to Cloud Pro, Oliver Schweitzer, executive director at GP Bullhound, explains: “Artificial intelligence heralds the beginning of a new marketing era, driven by the need to connect vast amounts of disparate data, uncover patterns and make predictions, which only AI can accomplish.

“AI will become increasingly integrated into digital services and marketing processes; however, human intelligence and intuition will remain critical to interpret its findings and implement strategic and creative plans accordingly.”

Three significant themes in marketing AI are considered within the report: hyper-personalisation, branding and B2B. Personalising customer journeys is the most common way for marketers to deploy AI, with a quarter (24%) already using AI to this effect and a further 59% planning to do so in the next two years. When coupled with a cloud-based MAS, this technology becomes an even more powerful marketing tool.

Automated marketing

MAS has seen a massive uptick in popularity, as it offers businesses a suite of tools they can use to potentially expand and improve their marketing channels.

Explaining the reasons behind this, Melody Siefken, research analyst for digital media at Frost & Sullivan, tells Cloud Pro: “All marketing is data-driven and measurable, and in the digital environment, there is no shortage of customer data for marketers to use, ranging from quantitative data such as page views, call-to-action clicks and conversion percentages. There is also measurable qualitative data, such as an attitude about a product or service, product reviews, and social media interactions. Businesses are adopting MAS to try and make sense of the unlimited amounts of customer data and turn this data into actionable, intelligent leads and lead scoring for sales enablement.

“MAS also allows them to execute an omnichannel approach to reach customers from all sides, including through channel partners and direct sales, to create a seamless and consistent customer experience/journey. Tools found in end-to-end MAS act as the central data repositories that monitor and collect all customer data for all the departments in a business to use. By adopting MAS, enterprises of all sizes have a dependable and scalable tool that allows them to make sense of the many types of customer data to bring up a bottom line and show a valuable return on investment.”

For marketing automation systems to realise their full potential, it’s vital to ensure cloud and MAS services are integrated across business functions.

“The cloud and MAS integrate by building a big-picture platform that manages all customer data and parameters of communication. Typically, MAS is made up of four components: campaign management, lead management, sales enablement, and marketing analytics and measurement,” Siefken says.

“Most MAS offerings on the cloud are accessible from anywhere, and all are based on a subscription pricing model, which typically gives the users unlimited access with the right authorisation. Cloud-based MAS often have large integration libraries so customers can connect their existing software and solutions to the platform for that seamless experience. Data silos are removed this way.”

It’s also important to take the time to think carefully about data location when implementing a MAS solution in the cloud. For many businesses, data security and data sovereignty, as well as how easy it is to locate and migrate the data, are major considerations. This will feed into questions about which cloud service providers and platforms are the most suitable for your needs.

For a minority of organisations, latency is also a serious hurdle that may put paid to any thoughts of deploying MAS entirely in the cloud.

“[Frost & Sullivan] research also shows that in a few use cases with MAS and customer interactions, response times need to be in milliseconds, and so there is an increasing desire to reduce latency by computing some of the algorithms at the edge, rather than on the cloud,” explains Siefken.

When it comes to how MAS will evolve in the coming years, Siefken says that it’s moving from point and standalone options to full suites, thanks to the use of cloud services.

“Businesses can pick and choose the apps and functionalities they require to build their MAS, and this will be a continued trend in the era of personalisation,” she says. “MAS is evolving from an out-of-the-box solution to a customised, tailored fit platform. Integration is a must-have feature, especially as businesses look to connect their CRMs, ERPs, and sales tools like Salesforce to their MAS.”

Leveraging the cloud

As your business embraces more cloud services, the benefits of hosted services such as MAS become apparent.

“We see the adoption of MAS in the mid 50% range, with another 25% of organisations planning to deploy this technology within the next two years. Projecting this forward, we will see an 80% uptake within the B2B and B2C sectors,” says Gartner’s Noah Elkin.

“When you are making a marketing technology purchase – especially a major purchase like MAS – you must have a clear sense of what your business goals are. Ask yourself what the technology is expected to deliver. Also, pay close attention to the other stakeholders in your business or organisation. This is critical, as they will help integrate and maintain the system you are installing. As MAS could affect a range of business processes, MAS implementations are business-wide, taking in IT and marketing.”

The cloud infrastructure your business has in place is an ideal environment for MAS. Moving forward, automating some critical areas of your enterprise’s marketing activities will become the norm. It’s vital, though, to understand that MAS touches multiple areas of your business. The most successful MAS implementations consider this. With all stakeholders working in unison, MAS could be massively transformative for your business.

How to manage a departing employee’s access to IT


Nik Rawlinson

19 Dec, 2019

Jobs for life are a thing of the past. Staff turnover has never been higher, in part because it suits employers to structure contracts that way – but more often because there’s a skills shortage. Staff are a valuable asset easily lured away by rivals.

And then what? Do you revoke their access, both physical and digital, to keep them away from your infrastructure and data, or should it be business as usual while they work out their notice? A decision like this can only be made if the organisation has a clear picture of what exactly the employee can access.

“You need a complete understanding of the company assets employees use from their first day,” said Fredrik Forslund, one of the co-founders of the Blancco Technology Group, whose eponymous product is used by businesses to safely wipe used kit for reuse or sale. “You need an asset management system that tracks the physical assets an employee’s using, which can be simple to organise and incredibly helpful when reconciling assets following an employee’s departure. Besides that, it’s great to know all digital services used, which is easiest to achieve with single sign-on. Simple tasks like changing passwords and logging out of online services are an important process that could protect your company from a potential data breach.”

“An IT admin requires quick visibility into the scope of who has access to what within the organisation, including internal systems, cloud services and files,” said Brandon Shopp, VP of product strategy for security, compliance, and tools at SolarWinds, whose access rights manager software helps IT managers understand what a departing staff member had access to, beyond simply their Active Directory account. “Doing this manually is a time-consuming exercise, so having a tool that audits and provides it to you is an important resource. Before the employee exits the organisation, IT admin should revoke access to any information they don’t need to complete their final assignments. Having a product in place to help with this not only provides visibility, but also an audit of changes to your infrastructure to help understand who is making changes and what they are.”
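
By way of illustration, the simplest technical step of offboarding – disabling a leaver’s Active Directory account – can be scripted. The sketch below uses the third-party ldap3 Python library; the domain controller, service account and distinguished name are hypothetical stand-ins, and nothing here is a feature of Blancco’s or SolarWinds’ products.

```python
# A minimal offboarding sketch: disable a leaver's Active Directory
# account over LDAP. Server, credentials and DN are hypothetical.
from ldap3 import Server, Connection, MODIFY_REPLACE

server = Server("ldaps://dc01.example.com", use_ssl=True)
conn = Connection(server, user="EXAMPLE\\svc-offboard",
                  password="********", auto_bind=True)

leaver_dn = "CN=Jane Doe,OU=Staff,DC=example,DC=com"

# userAccountControl 514 = NORMAL_ACCOUNT (512) + ACCOUNTDISABLE (2)
conn.modify(leaver_dn, {"userAccountControl": [(MODIFY_REPLACE, [514])]})
print(conn.result["description"])  # "success" if the change was applied
conn.unbind()
```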

Why, where and when?

It also depends on the circumstances under which the employee is leaving. Redundancy requires a period of consultation, during which restricting an employee’s right to work – and access to resources – may leave an organisation open to legal repercussions. Should an employee voluntarily hand in their notice, however, the situation is somewhat different.

“If the employee is leaving to go to a competitor, it’s still the situation in most cases that once they’ve handed in their notice they’ll probably be leaving that day, so won’t continue to have access to the [company’s] data – although that’s a bit of an outdated concept, to be honest,” Shaun Thomson, CEO of Sandler Training told us. “By the time someone puts their hand up and says they’re leaving, if they want to take that data, they already have it. They’d be silly to wait until the day after they’ve handed in their notice.”

Thomson says organisations should concern themselves with continuation of business at least as much as they think about the safety of their data and the hardware they have loaned an employee. Building multiple contact points for each client – effectively sharing internal data far and wide – may, counterintuitively, be the most effective safeguard.

Hardware and data jurisdiction

“Once the decision about letting someone go has been made, a collection date for assets should be set and when assets are collected, all data should be securely erased with an audit trail… before these assets are transferred to another user,” Forslund said. “There should be zero risk for data leaks in between users in a situation like this.”

Frequently, the distinction between corporate and personal hardware – and corporate and personal data – is blurred. BYOD can result in business-critical data residing on users’ own devices, while personal emails may linger in a corporate inbox. Should employees be allowed to export their mailbox and take their contacts with them?

“Generally, no,” said Forslund. “The personal emails must originate from some other service where access to emails should still exist and remain. If employees are allowed to export their inbox, all locally saved work emails will come along, which is not okay.”

Shopp agrees. “Company email systems and the underlying data stored within belongs to the company, which makes it the company’s discretion to allow the employee to extract any personal items such as contacts and emails before they leave.”

It’s therefore essential that guidelines for the acceptable use of email are written into staff members’ contracts of employment, so that confusion – and conflict – can be avoided at the point of departure.

As Thomson points out, “when you employ people you’re looking for certain things, which you’re disdainful about when they leave. You expect them to come with contacts but don’t want them to leave with any.”

But contacts alone are less important than an established relationship once an organisation reaches a certain size.

“When we’re working with our client companies, we apply an acid test: do your clients have a relationship with you or just one person in your company?” Thomson asked. “If it’s the latter, when that individual moves the client is going to go wherever they go. As you grow – both your own company and a company you’re dealing with externally – it’s more about dealing organisation to organisation. We use Microsoft Dynamics as a CRM, but if our contact at Microsoft left that wouldn’t change: we’d still be using Microsoft software. The bigger a company is, the less likelihood that the employee will be able to take business with them.”

From a leadership point of view, then, and with succession planning in mind, only considering the risk to your data at the point an employee announces they’re leaving is probably too late. Data can be used as insurance by staff who feel their position is under threat. Cultivating multiple touch points between your organisation and its clients makes that tactic less effective, and limits the damage it can do in-house if it is ever deployed.

You’re fired!

Special consideration needs to be given to staff leaving under a cloud, for whom you may wish to curtail access to mission-critical systems and sensitive data in short order.

In this case, SolarWinds’ Security Event Manager “alerts you if someone is still trying to use an account once they’ve been locked out,” said Shopp. “It gathers logs that can tell you why someone is trying to authenticate with the account that you’ve shut down. Is it an application that was installed while the person was still at the company, which you need to go in and shut down, or is somebody actually trying to do something that they shouldn’t? Having visibility into that is something that every organisation should have.”

As Thomson explained, though, each situation must be considered on its own merits. There’s a wide choice of safeguards that companies can choose from, depending on their philosophy, size, and the kind of assets – both physical and data-based – they’re dealing with. Key is understanding what staff have access to, and knowing what needs to be done as soon as it becomes clear their time with the business is drawing to a close. After all, the rate of staff turnover is unlikely to slow down any time soon, if ever.

Google Transfer Service launched for those handling enormous data migrations


Keumars Afifi-Sabet

13 Dec, 2019

Google Cloud Platform (GCP) has developed a software service to help organisations handle massive data transfers between on-premise locations and the cloud faster and more efficiently than existing tools.

The tool has been designed for organisations that need to undertake large-scale data transfers – in the region of billions of files, or petabytes of data – from physical sites to Google Cloud storage in one fell swoop.

GCP’s Transfer Service for on-premises data, released in beta, is also a product that allows businesses to move files without needing to write their own transfer software or invest in a paid-for transfer platform.

Google claims custom software options can be unreliable, slow and insecure as well as being difficult to maintain.
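
For a sense of what such custom software involves, here is a minimal sketch of a naive in-house transfer script, assuming the google-cloud-storage Python library and a hypothetical bucket and source directory. The per-file MD5 check is the easy part; the parallelism, retries and restartability needed at petabyte scale are exactly what it lacks.

```python
# A naive, single-threaded transfer script of the kind Google argues
# doesn't scale: upload files one by one and verify each MD5 hash.
import base64
import hashlib
import pathlib

from google.cloud import storage  # pip install google-cloud-storage

SOURCE = pathlib.Path("/data/exports")  # hypothetical on-premise directory

client = storage.Client()
bucket = client.bucket("my-migration-bucket")  # hypothetical bucket

for path in SOURCE.rglob("*"):
    if not path.is_file():
        continue
    blob = bucket.blob(str(path.relative_to(SOURCE)))
    blob.upload_from_filename(str(path))
    blob.reload()  # fetch server-side metadata, including the MD5 hash
    # Reads each file twice (once to upload, once to hash); yet another
    # reason this approach struggles at scale.
    local_md5 = base64.b64encode(hashlib.md5(path.read_bytes()).digest()).decode()
    if blob.md5_hash != local_md5:
        raise RuntimeError(f"Integrity check failed for {path}")
```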

Businesses can use the service by installing a Docker container, with an agent for Linux, on data centre computers, before the service co-ordinates the agents to transfer data safely to GCP storage.

The system makes the transfer process more efficient by validating the integrity of the data in real time as it gradually shifts to the cloud, with agents using as much available bandwidth as possible to reduce transfer times.

The data transfer service is a larger-scale version of tools such as gsutil, a command-line transfer tool also developed by Google, which is unable to cope with the scale of data that Transfer Service has been designed to handle.

The firm has recommended that only businesses with a network speed faster than 300Mbps use its Transfer Service, with gsutil sufficing for those with slower speeds.
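
A rough back-of-the-envelope calculation shows why that threshold matters; this sketch assumes an idealised link that is fully saturated, with no protocol overhead or retries.

```python
# Idealised transfer-time estimates: bytes * 8 bits, divided by link
# speed in bits per second, converted to days.
def transfer_days(bytes_to_move: float, link_mbps: float) -> float:
    seconds = (bytes_to_move * 8) / (link_mbps * 1_000_000)
    return seconds / 86_400  # seconds per day

print(f"1 TB at 300 Mbps: {transfer_days(1e12, 300):.1f} days")  # ~0.3 days
print(f"1 PB at 300 Mbps: {transfer_days(1e15, 300):.0f} days")  # ~309 days
```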

Customers also need a Docker-supported 64-bit Linux server or virtual machine that can access the data to be transferred, as well as a POSIX (Portable Operating System Interface)-compliant source.

The product is aimed squarely at enterprise users, and comes several weeks after the company announced a set of migration partnerships aimed at customers running workloads with the likes of SAP, VMware and Microsoft.

One third of data centre spend goes into hyperscalers’ pockets through Q3, finds Synergy

While good technology analysis revolves around exploring new markets, conducting research and publishing authoritative market share scores, sometimes insights can be gleaned simply by freshening up existing figures. Long-time cloud infrastructure analyst Synergy Research has done just that in its latest note, which focuses on continued hyperscaler dominance.

The latest data from Synergy shows that data centre hardware and software spending by hyperscale operators rose gradually over the first three quarters of the year, and now represents a third of total spending.

While data centre spending from enterprises and service providers is now at 67% of total outlay – compared with 85% in 2014 – overall spend from this sector has risen just 6% in five years, against overall market expansion of 34%.

As continues to be the case, the movement of enterprise workloads to the cloud keeps the squeeze on enterprise spending. Finding recent tales of large organisations moving their infrastructure to a major cloud vendor is like shooting fish in a barrel: to pick just a few, Best Western Hotels is in the process of going all-in on Amazon Web Services (AWS), as evinced at re:Invent earlier this month, while Salesforce and Sainsbury’s were recent client wins for Microsoft Azure and Google Cloud Platform respectively.

Synergy also noted ‘continued growth in social networking’ as a primary driver of increased hyperscaler spend. Total data centre infrastructure equipment revenues – including cloud and on-prem, hardware and software – stood at $38 billion for Q3 2019.

John Dinsdale, a chief analyst at Synergy, argued the trend around flat enterprise spend is not going away any time soon. “We are seeing very different scenarios play out in terms of data centre spending by hyperscale operators and enterprises,” said Dinsdale. “On the one hand revenues at the hyperscale operators continue to grow strongly, driving increased demand for data centres and data centre hardware. On the other hand, we see a continued decline in the volume of servers being bought by enterprises.

“The impact of those declines is balanced by steady increases in server average selling prices, as IT operations demand ever-more sophisticated server configurations, but overall spending by enterprises remains almost flat,” added Dinsdale. “These trends will continue into the future.”


Microsoft, not Amazon, is going to win the cloud wars


Adam Shepherd

12 Dec, 2019

Brace yourselves, because I’m about to share a theory that may be a little unpopular: I believe it’s only a matter of time before Microsoft Azure overtakes AWS as the dominant force in the world of public cloud. 

I know that may sound crazy, and many of you are probably already reaching for the ‘close tab’ button, but hear me out. 

It’s no secret that Bezos’ cloud computing division is currently sitting pretty as market leader, having capitalised incredibly effectively on its first-mover advantage while its rivals’ initial efforts stalled. By cementing its reputation as the biggest force in the cloud industry, it has attracted a number of high-profile customers, but it has struggled to make a major splash within large, established enterprises.

You know who hasn’t, though? Microsoft.

While AWS has always been a favourite of startups and developers, Microsoft has concentrated firmly on the enterprise and met with remarkable success. To sweeten the deal, Microsoft has also been busily releasing a number of business-friendly features, such as its Azure Arc platform, which is designed to make it easier to consume and deploy its services across a large enterprise estate. In fact, any time I’ve spoken to a CIO who hasn’t yet moved to the cloud but is planning to, Azure has been a key part of their roadmap.

The stated reason for this is usually “well, it works with all of our existing systems”, which is a simple yet compelling point; if your on-prem servers are primarily running workloads like Active Directory, SQL Server and Exchange Server instances, opting for Microsoft’s cloud platform is sort of a no-brainer. Add in the fact that most large businesses are likely to be using Microsoft’s Office and Windows software (and even potentially Windows Server) and the logic becomes apparent.

More importantly, however, Microsoft has learned how to play nicely with others. Azure has always been a more open platform than most have given it credit for, but the addition in recent years of full native support for the likes of Linux and VMware show just how far it’s come. It’s making a real effort to be as flexible as possible, allowing customers to run the workloads that they want in the way they want to run them. 

This includes multi-cloud environments, which is the new hotness for businesses that want to avoid vendor lock-in and increase redundancy protection. Microsoft is more than happy to support multi-cloud deployments, if that’s what the customer wants. 

Amazon? Not so much. As we discussed on a recent episode of the IT Pro Podcast, there have been recent reports that suggest that AWS partners are banned from even using the term multi-cloud, presumably on the basis that – as the current top of the pile – giving customers the option of using multiple providers only increases the risk that they’ll ditch AWS for a better option. Note that in that scenario, the emphasis is not so much on giving customers the best possible option but on trying to hide from them the fact that other providers exist.

Amazon is undoubtedly on the cutting edge as far as tech development goes; its pioneering work on machine learning, serverless computing and function-as-a-service tools is evidence enough of that. It’s enterprise support that will determine the true winner of the cloud wars, however, and in this area, AWS is leagues behind Microsoft.

Lloyd’s of London will invest £300m in digital transformation


Roland Moore-Colyer

12 Dec, 2019

Lloyd’s of London has secured £300 million to fund a digital transformation overhaul to cut its costs and streamline its processes.

A major part of this overhaul, dubbed Blueprint One, will involve the creation of new digital platforms.

A “digital end-to-end platform” will be used to create a portal and a suite of services for handling complex risks in the insurance and reinsurance market, with the goal of supplementing face-to-face negotiations.

APIs will also be used to help connect the platform to insurance brokers’ own systems, while centralised tools such as a tax calculator and compliance checker will help simplify processes. The platform will be supported with information taken from a common data platform.

A digital Lloyd’s risk exchange will also be created for handling less complex risk agreements at high volumes, allowing for brokers to easily create and purchase policies, while also accessing Lloyd’s products and services. To speed up the placement of risks and reduce their costs, algorithms will automatically rate the risks. Again, a centralised tax calculator, compliance checker, and data platform will help ensure risks are created effectively and above board.

“The risk exchange will build on the market’s current investment in e-trading platforms and other technologies to digitise the placement of less complex risks. It will not replace these systems, but will integrate them so they are compatible,” explained the Blueprint One document. “This will benefit market participants by giving them access to a wider customer base, enabling them to leverage the Lloyd’s brand, its global distribution network, economies of scale and lower costs.” 

A suite of other proposed changes, which will be funded by debt rather than charges made on the market’s members, come in response to poor performance and complaints around the high cost of doing business with Lloyd’s of London. 

As such, the digital transformation process, which will enter its first phase next year, is not just a way for Lloyd’s of London to improve internally but to also help bolster its insurance market. 

“This first Future at Lloyd’s blueprint marks an exciting new chapter for Lloyd’s. It sets out how we are going to combine data, technology and new ways of working with our existing strengths to transform the culture we work in and everything we do – from placing risks and paying claims to attracting capital and developing new products,” said Lloyd’s of London’s CEO John Neal. 

This is yet another example of a long-established organisation undergoing digital transformation. But such projects vary in scale: Lloyd’s of London’s Blueprint One is a major undertaking, while others are smaller, such as the Department for Transport’s goal of creating a digital transport data mapping tool.

Some projects can be rather different altogether, such as Massachusetts State Police’s use of Boston Dynamics robot dogs to sniff out bombs and explore hazardous areas.

Yet regardless of size and scope, there’s a healthy appetite for digital transformation in all manner of organisations and industries, with the goal of taking the latest technology and using it to streamline or redefine how an organisation operates.

Kubernetes as a service: What is it – and do you really need it?

With the acquisition of Heptio, we have seen how deeply Kubernetes has been integrated into VMware’s product stack, spawning new commercial and open source solutions.

VMware’s motive is to shift to container-based infrastructure powered by Kubernetes and compete in the crowded data centre market. Kubernetes has also been well received by public cloud providers and other leading tech vendors, who offer full-stack support for managing containers on bare metal or in the cloud.

We are now in an era where every backend, infrastructure or platform technology is sold under an ‘as a service’ model, and Kubernetes is no exception: more than 30 solution providers offer bundled, managed and customised Kubernetes as a service (KaaS).

But investing in, deploying and then managing Kubernetes can pose risks and challenges for organisations that want a rapid transition to modern infrastructure to support dynamic customer demand. KaaS providers answer with end-to-end solutions that spare customers dead investment and wasted time, integrated in the most secure way possible. Let’s look at what KaaS is and what benefits and features it offers.

What is Kubernetes as a service (KaaS)?

Kubernetes as a service is expertise offered by solution providers and product engineering companies to help customers shift to a cloud-native, Kubernetes-based platform and manage the lifecycle of Kubernetes (K8s) clusters.

This can include migrating workloads to Kubernetes clusters, as well as the deployment, management and ongoing maintenance of Kubernetes clusters in the customer’s data centre. KaaS mainly handles day-one and day-two operations in the move to Kubernetes-native infrastructure, along with features like self-service, zero-touch provisioning, scaling and multi-cloud portability.

Why do organisations need KaaS?

On the roadmap of digital transformation, companies are shifting their workloads to containers and integrating container orchestration platforms to manage those containerised workloads, in order to gain a competitive edge. The workloads might be applications decomposed into microservices (hosted in containers), backends, API servers, storage units and so on. Accomplishing this transition takes expert resources and time, and later on the operations team must deal with recurring issues like scaling, upgrades to the K8s stack and policy changes.

Organisations cannot afford to spend time or money on this transformation while the pace of innovation is so rapid. This is where Kubernetes as a service comes to the rescue, offering solutions customised to an organisation’s existing requirements and the scale of its data centre, with budget constraints in mind. Some of the benefits of KaaS are:

  • Security: Deploying a Kubernetes cluster can be easy once the service delivery ecosystem and data centre configuration are understood, but it can leave open tunnels for external malicious attacks. With KaaS, policy-based user management ensures users of the infrastructure get only the permissions their business needs require. KaaS providers also follow security policies that can block many attacks, much as a network firewall does.

    A typical Kubernetes installation exposes the API server to the internet, inviting attackers to break into servers. With KaaS, some vendors provide VPN options to hide the Kubernetes API server
     

  • Savings on resource investment: Customised KaaS allows organisations to defer investment in resources, whether that is a team to handle KaaS terminals or the physical storage and networking components of the infrastructure. Organisations also get a better overview of their estate while KaaS is in place
     
  • Scaling of infrastructure: With KaaS in place, IT infrastructure can scale rapidly, thanks to the high degree of automation KaaS provides. This saves the admin team a great deal of time and bandwidth (a minimal autoscaling sketch follows this list)
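
As an illustration of that automation, the following minimal sketch creates a Horizontal Pod Autoscaler using the official Kubernetes Python client. The deployment name and namespace are hypothetical, and a KaaS platform would typically manage this behind its own UI.

```python
# Create an autoscaling/v1 HorizontalPodAutoscaler that scales a
# Deployment between 2 and 10 replicas based on average CPU usage.
from kubernetes import client, config

config.load_kube_config()  # reads credentials from ~/.kube/config

hpa = client.V1HorizontalPodAutoscaler(
    metadata=client.V1ObjectMeta(name="my-api-hpa"),
    spec=client.V1HorizontalPodAutoscalerSpec(
        scale_target_ref=client.V1CrossVersionObjectReference(
            api_version="apps/v1", kind="Deployment", name="my-api"
        ),
        min_replicas=2,
        max_replicas=10,
        target_cpu_utilization_percentage=70,  # add pods above 70% average CPU
    ),
)

client.AutoscalingV1Api().create_namespaced_horizontal_pod_autoscaler(
    namespace="default", body=hpa
)
```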

What do you get exactly?

Effective day-two operations: This includes patching, upgrading, security hardening, scaling and public cloud IaaS integration, all of which become important once container-based workload management enters the picture. Even so, Kubernetes may not yet fit every organisation’s data centre use cases, as many of the best practices are still evolving to keep pace with innovation.

Additionally, positive results are expected when containers are introduced into an infrastructure, rather than a backtracking of strategy. KaaS comes with predefined policies and procedures that can be customised to meet an organisation’s ever-changing demands of Kubernetes.

Multi-cloud portability: Multi-cloud is a trend that emerged in 2019, in which containerised applications are portable across different public and private clouds and access to existing applications is shared across the multi-cloud environment. Here, KaaS is useful because developers can focus on building applications without worrying about the underlying infrastructure; management and portability rest with the KaaS provider.

Central management: KaaS lets admins create and manage Kubernetes clusters from a single UI. An admin has better visibility of all components across the clusters and can perform continuous health monitoring using tools like Prometheus and Grafana, as well as upgrade the Kubernetes stack along with the other frameworks used in the setup.

It is also possible to remotely monitor Kubernetes clusters, check for glitches in configuration and send alerts. Additionally, the KaaS admin can apply patches to clusters when a security vulnerability is found in the technology stack deployed within them, and can reach any pod or container across the different clusters through the single pane of glass that KaaS provides.
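
That kind of remote, cluster-wide visibility can be approximated in a few lines with the official Kubernetes Python client; a minimal sketch, assuming a kubeconfig with access to the cluster:

```python
# List every pod in every namespace and flag any that may need attention.
from kubernetes import client, config

config.load_kube_config()
v1 = client.CoreV1Api()

for pod in v1.list_pod_for_all_namespaces(watch=False).items:
    # Completed jobs report "Succeeded"; anything else that isn't
    # "Running" (Pending, Failed, Unknown) is worth a look.
    if pod.status.phase not in ("Running", "Succeeded"):
        print(f"{pod.metadata.namespace}/{pod.metadata.name}: {pod.status.phase}")
```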

Conclusion

Implementing Kubernetes is not a solution in itself; it can create problems of its own, from security exposure to resource consumption. Kubernetes as a service offerings are a breather for enterprises and organisations large and small that have already shifted workloads to a containerised model or are planning to do so.

KaaS can increase the speed of Kubernetes cluster deployment and raise the performance of containerised infrastructure. With KaaS, organisations get a single point of support for their infrastructure, allowing them to focus on the services layer.


Google reveals UK’s most searched for terms in 2019


Roland Moore-Colyer

11 Dec, 2019

Google has revealed the most searched for terms and questions of 2019 in its Year in Search, with the UK taking a bigger interest in the Rugby World Cup than in Brexit.

The most searched for term in the UK for 2019 was the aforementioned Rugby World Cup, followed by the Cricket World Cup, and then Game of Thrones.

The search results are largely explained by a strong British showing in both tournaments, while the incredibly popular HBO series Game of Thrones came to an end after eight seasons in May, with tensions and plotlines ramping up.

The terms Chernobyl and Thanos took the fourth and fifth positions respectively. More surprising is what didn’t chart: given the heavy media coverage of Brexit and other governmental machinations, one could be forgiven for thinking that ‘Brexit’ would be a highly searched term, but in the end it failed to even make the top ten list for Google searches in the UK.

In fact, no political terms made it into the top 10 list, with the likes of the iPhone 11 and Caitlyn Jenner proving more popular than Boris Johnson or Jeremy Corbyn. Nor did Extinction Rebellion get a look-in, despite its protests and demonstrations during the year gaining high-profile coverage, as well as support and condemnation in equal measure.

When it came to the most searched for news events, ‘revoke Article 50 petition’ came in third place, behind ‘iPhone 11’ in second and ‘Notre Dame’ in the top spot.

As for the questions being asked by the Google Search users in the UK, ‘how to watch Champions League Final’, ‘how to watch Game of Thrones’ and ‘how to floss dance’ took the first, second, and third positions respectively.

‘How to register to vote’, seemingly pertinent given the General Election on Thursday 12 December, came in eighth place, being beaten by ‘how to eat a pineapple’.

While IT Pro endeavours to bring you the latest IT news and its effect on the UK public sector, politics and society, it would appear that many of Britain’s Google Search users are more interested in finding out about subjects that matter to them in the here and now, rather than longer-term effects of politics and technological change.

1. Rugby World Cup
2. Cricket World Cup
3. Game of Thrones
4. Chernobyl
5. Thanos
6. Notre Dame
7. Avengers Endgame
8. iPhone 11
9. Caitlyn Jenner
10. Joker

Ericsson shells out $1bn to settle bribery charge


Nicole Kobie

10 Dec, 2019

Swedish telecoms giant Ericsson has settled with US authorities on charges including bribery, shelling out more than $1 billion (£759m) to avoid prosecution – one of the largest such settlements to date.

The US Department of Justice (DoJ) was investigating Ericsson under the Foreign Corrupt Practices Act (FCPA) that bans companies listed on US stock exchanges from bribing foreign officials, accusing it of making and improperly recording tens of millions of dollars in “improper payments” around the world.

Ericsson admitted that from 2000 to 2016 employees paid bribes to government officials to help win contracts in five countries – Djibouti, China, Vietnam, Indonesia and Kuwait – covering up the activity via false accounting records, sham contracts and fake invoices.

An Ericsson subsidiary pleaded guilty to bribery as part of the deal.

“Today, Swedish telecom giant Ericsson has admitted to a years-long campaign of corruption in five countries to solidify its grip on telecommunications business,” said U.S. Attorney Geoffrey S. Berman of the Southern District of New York. “Through slush funds, bribes, gifts, and graft, Ericsson conducted telecom business with the guiding principle that ‘money talks.’ Today’s guilty plea and surrender of over a billion dollars in combined penalties should communicate clearly to all corporate actors that doing business this way will not be tolerated.”

According to the DoJ, between 2010 and 2014, Ericsson paid $2.1 million in bribes in Djibouti to help the company win a contract worth €20.3 million to modernise the state-owned telecoms company. The money was sent via a consulting company – the owner of which was married to a government official – and hidden via fake invoices. A similar system was used to pay $450,000 to help it win a contract in Kuwait worth $182 million between 2011 and 2013.

In Vietnam, again according to the DoJ, Ericsson’s subsidiaries paid $4.8 million to a third-party consulting firm to set up a slush fund used to pay off firms that Ericsson couldn’t hire directly because of its own due diligence processes; the money was “mischaracterised” in the company’s books. A similar system was used in Indonesia to set up a $45 million slush fund, the investigators said.

And in China, between 2000 and 2016, Ericsson’s subsidiaries paid tens of millions of dollars for travel and entertainment for government officials, including some who worked at state-owned telcos, and also made payments under sham contracts with providers in the country for “services that were never performed”.

Don Fort, the chief of criminal investigation at the Internal Revenue Service tax agency, said a lack of compliance and internal controls at the company made it easier for executives and other employees at Ericsson to offer bribes and falsify accounting records.

“Ericsson’s corrupt conduct involved high-level executives and spanned 17 years and at least five countries, all in a misguided effort to increase profits,” said Assistant Attorney General Brian A. Benczkowski of the Justice Department’s Criminal Division, adding that the “strong response from law enforcement” should deter other companies from doing the same.

Under the agreement, the DoJ will defer prosecution of Ericsson and dismiss all charges after three years if the company complies with the rest of the conditions, which include reforming its compliance programme and submitting to an independent compliance monitor. As part of the deal, Ericsson Egypt pleaded guilty to the Djibouti bribery charges.

The company noted that the payment of $1.06 billion is fully covered by $1.2 billion set aside in the third quarter of 2019. Half of that bill is a criminal fine, the DoJ said, while the other half will be paid to the US Securities and Exchange Commission for related civil charges.

The DoJ noted that the criminal penalty half of the fine had a 15% reduction because Ericsson had partially cooperated with the investigation – though it was criticised for failing to disclose allegations of corruption, not producing materials in a timely manner, and failing to “take adequate disciplinary measures with respect to certain employees involved in the misconduct”.

According to reports, in a conference call CEO Borje Ekholm said the company wanted to move forward. “Certain employees in some markets, some of whom were executives in those markets, acted in bad faith and knowingly failed to implement sufficient controls,” Ekholm said. “I view what has happened as a completely unacceptable and hugely upsetting chapter of our history.”

The SEC has previously fined a wide range of companies under the FCPA, including a $6.3 million settlement with Barclays over hiring practices in Asia, $11.7 million from Juniper Networks to “resolve violations” of accounting and recordkeeping in China and Russia, and $1.78 billion from Petroleo Brasileiro over a bribery and bid-rigging incident.