Microsoft cites ‘layers’ of Azure and cloud depth in more positive financial results

Microsoft’s investor relations team is evidently not frightened of repeating itself come financial announcements season. “Microsoft Cloud drives record fourth quarter results,” the company proclaimed in July; “Microsoft Cloud strength powers record first quarter results,” it trumpeted in October; and now, “Microsoft Cloud strength fuels second quarter results.”

Given the figures, there are plenty of reasons for Microsoft to stress the same message. The company’s Q2 FY19 report saw total revenues of $32.5 billion (£24.7bn), an increase of 12% on this time last year. Of the key revenue buckets, productivity and business processes – which focuses more on software – broke $10bn at a 12% lift on last year, while intelligent cloud, focused more on infrastructure, hit $9.38bn at a 20% uptick.

Azure itself – for which Microsoft does not disclose specific financials – went up 76% compared with the previous year, exactly the same as the previous quarter’s figure.

In prepared remarks to analysts, CEO Satya Nadella made reference to Microsoft’s recent slew of retail customers, saying Azure was ‘front and centre’ at the recent National Retail Federation (NRF) event, where the partnership with Kroger was announced. On general strategy, it was a continuation of the theme the chief executive forged at Ignite back in September: making Microsoft’s customers tech companies in their own right.

“These results speak to us picking the right secular trends in large and growing markets, many of which are still in their infancy, as well as focused innovation and execution,” said Nadella. “Leading companies in every industry are partnering with us to build their own digital capability to compete and grow. This is creating a broad opportunity for everyone, including our ecosystem.”

Nadella also focused specifically on cybersecurity and discussed the importance of a Zero Trust environment – something of which regular readers of this publication will be more than aware. In terms of specific security offerings issued this quarter, the start of this month saw two new products for Microsoft 365, its enterprise-focused suite, launched around identity and threat protection and compliance.

Responding to an analyst question around how the big customer deals break down looking specifically at Azure, Nadella said he internally compared it to relationships with OEM partners in the PC era, noting the mix required between infrastructure for compute, then data on top sprinkled with AI.

“We definitely see that path… where they’re adopting the layers of Azure,” said Nadella. “But it doesn’t stop in Azure. If you take Walgreens Boots Alliance, it was Microsoft 365 as well as Azure. In many cases, it’s Dynamics 365 – any IoT project on Azure leads to a Dynamics field service project in most instances.

“So we’re seeing the breadth and depth of our cloud offering, which is what we are really architected to have real synergies in the context of what our customers want to achieve, and that’s what we are seeing,” Nadella added.

Despite all the figures going in the right direction, Microsoft’s performance fell just short of Wall Street expectations. Shares fell as much as 4% in the immediate aftermath of the announcement, according to CNBC.

You can read the full financial statement here.

Interested in hearing industry leaders discuss subjects like this and share their experiences and use cases? Attend the Cyber Security & Cloud Expo World Series with upcoming events in Silicon Valley, London and Amsterdam to learn more.

The guide to workplace security

31 Jan, 2019

Organisations both large and small are striving to understand their people better so they can give them more opportunity to be productive, putting into their hands the technology they need and are familiar with from their daily lives. That kind of tech is good for morale and productivity, and is great for a company’s bottom line.

There’s an ulterior motive, too. If your business fails to keep pace with innovation, you’ll ultimately be left behind.

It was in last year’s ‘O2 Future Trends’ report, rather than 2019’s, that the futurist Graeme Codrington said: “Within two years, if you are not mobile-first (which naturally implies cloud-first) you might be too far behind the curve to catch up… We can expect that tomorrow’s employees will expect to be able to use the same level of technology at their workplace as they do at home. That means mobile-first, AI, natural language processing and staying always connected.

“We joke about how Wi-Fi should now be the foundation layer for Maslow’s Hierarchy of Needs. But the time is coming when transport, work, medical treatment and civil liberties are all so reliant on the internet that connectivity will be as much a human right as running water and electricity. Now is the time to seriously question the legacy IT investments that are holding you and your people back, because waiting is a costly game. Implementation cycles are being shortened from five years to five months, because of the risk of obsolescence over time.”

A landscape of growing threat

Sadly, as companies empower their people more and unlock the good that the cloud and mobile advancements have to offer, there are those out there using the self-same technologies for their own ill gains.

“In 2017 hackers made a lot of easy money and caused huge disruption as a result of UK business missing the basics when it comes to securing data. Most of the breaches I read about in the press could have been easily prevented by taking a more proactive approach to cyber security and following the government’s guidelines,” wrote Dean Thomson, cyber security specialist at O2.

Whether it’s ransomware, malware, cyber crime, physical theft or something else – it feels like a new threat emerges every day. The media is littered with horror stories about data breaches, security blunders and tech tales of woe, so how can you avoid being next – and keep your people happy?

Follow the money

Security is big business. Analyst firm IDC predicts spending on security software, hardware and services will reach $120 billion by 2020. That’s a lot of money going towards fighting a real and growing problem.

“Three overarching trends are driving security spending: a dynamic threat landscape, increasing regulatory pressures, and architectural changes spurred by digital transformation initiatives,” said Sean Pike, IDC’s Security Products and Legal, Risk, and Compliance programme vice president.

Ultimately, the more we adopt mobile technology in our personal lives, the more we’ve come to expect these same technologies to make our lives easier when we’re at work. Implement technology in the right way and businesses should see a positive outcome across the board.

“Digital workplaces are good for employees, good for customers and good for profits,” said Emma Thompson, Head of Technology and Telecoms Business Partnership Team, UK Government Cabinet Office, in O2’s Futures summary 2019.

Of course, there is no one-size-fits-all solution for digital transformation. It means different things to different businesses, but one aspect that’s fundamentally important is security. For instance, an organisation that becomes mobile first will need an agile infrastructure to allow its people to work and collaborate from anywhere.

This also requires end-to-end security, with each part of a company secure – whether that’s a server at head office or a member of staff using a work smartphone on an overseas business trip.

A big problem for businesses of all sizes

Whether you’re a large or small business – or any size in between – security has to be front of mind. Like taking out an insurance policy you hope you never need, it’s a must.

It’s a dilemma. Ultimately, we need technology. Its benefits far outweigh any negatives and it makes a very real difference to businesses and individuals alike.

However, in order to reap the rewards rather than be exposed to the risks, we have to tread carefully. To help, here is our guide to what you really need to consider when it comes to workplace security.


Do the following…

Do: Utilise two-factor authentication

Many employees actually like two-factor authentication as it helps if they forget their passwords as well as protecting what the company holds dear. By ensuring two verification steps need to be followed before granting access, you are reducing the chances of unauthorised access.

“Two-factor authentication may not be quite the security silver bullet it was once thought to be, but it’s still an important area of security and access control to keep in mind when obtaining and setting up services for your business or personal life,” according to an IT Pro article on the subject.

“The more hurdles you can put in the hackers’ way, the less likely they are to target you.”
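To make those two verification steps concrete, here is a minimal sketch of how the second factor – a time-based one-time password (TOTP, RFC 6238), the kind generated by most authenticator apps – can be derived and checked. It uses only the Python standard library and illustrates the mechanism rather than production code; real deployments should rely on a vetted authentication library.

```python
# Minimal TOTP (RFC 6238) generation and verification sketch.
import hmac, hashlib, struct, time

def totp(secret: bytes, for_time=None, digits: int = 6, step: int = 30) -> str:
    """Derive the one-time code for the 30-second window containing for_time."""
    counter = int((time.time() if for_time is None else for_time) // step)
    digest = hmac.new(secret, struct.pack(">Q", counter), hashlib.sha1).digest()
    offset = digest[-1] & 0x0F                                   # dynamic truncation
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

def verify(secret: bytes, submitted: str, window: int = 1) -> bool:
    """Accept codes from adjacent time windows to tolerate clock drift."""
    now = time.time()
    return any(hmac.compare_digest(totp(secret, now + i * 30), submitted)
               for i in range(-window, window + 1))
```

Checking one window either side of the current 30-second step tolerates small clock drift between server and device without materially weakening the check.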

Do: Look at both WAN and LAN

Devices that have connectivity can become dangerous in the wrong hands. That’s why it’s so important to focus on all network security elements – from LAN to WAN and beyond.

O2 was the first mobile operator to achieve CAS(T) certification (the government standard for secure communications), to validate the financial and human resource effort placed into security to protect the businesses that rely on it.

Do: Embrace behavioural analytics

By taking advantage of Big Data platforms and sophisticated technologies such as Machine Learning, behavioural analytics looks at user activity to try to identify and stop insider threats. For example, employees who are able to interpret behavioural analytics can spot potential security breaches by looking at who is accessing various network assets, how often, and which devices they are using to do so.
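As a toy illustration of the idea – a hypothetical sketch, not a description of any particular product – the snippet below flags a user whose daily access count for a network asset deviates sharply from their own historical baseline:

```python
# Flag activity that deviates sharply from a user's own baseline,
# using a simple z-score over their historical daily access counts.
from statistics import mean, stdev

def is_anomalous(history: list, today: int, threshold: float = 3.0) -> bool:
    """True if today's count sits more than `threshold` standard
    deviations above the user's historical mean."""
    if len(history) < 2:
        return False              # not enough baseline data to judge
    mu, sigma = mean(history), stdev(history)
    if sigma == 0:
        return today != mu        # any change from a flat baseline
    return (today - mu) / sigma > threshold

# A user who normally touches ~5 assets a day suddenly accesses 200:
flagged = is_anomalous([4, 6, 5, 5, 7, 4, 6], 200)
```

Real behavioural analytics platforms model far richer signals (devices used, time of day, peer groups), but the principle – compare current behaviour against a learned baseline – is the same.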

Do: Focus on endpoint protection

The majority (around 70%) of businesses in the UK still rely on signature-based detection to fend off malware and ransomware attacks, according to O2’s Dean Thomson. This, simply, isn’t good enough, he says.

“It’s time to deploy next generation endpoint protection that uses behavioural analysis to detect and stop malicious activity. This technology will also go a long way in helping to protect against chip based exploits such as Meltdown and Spectre,” Thomson added.

Do: Ensure you focus on the knowns and unknowns

Hindsight is a wonderful thing, and it’s easy to learn from what’s happened in the past to try and shape what may happen in the future. But, while it’s good to learn from experience, organisations must remember that the unknown threats – the ones that expose weak spots we haven’t yet discovered – need the same amount of attention, if not more.

Do: Embrace evolving behaviours

With new threats emerging all the time, it’s important not to stand still when it comes to security. Be prepared for every possible eventuality and respond accordingly.

The most dangerous threats to your business are the ones lying dormant, waiting to activate. There’s no place for complacency when it comes to workplace security. And, while people are your biggest and best asset, where security is concerned they can also be your weakest link.

Don’t do the following…

Don’t: Shut people out

People need to feel valued and listened to. And there’s no greater way of showing you’ve been listening than answering their needs. So when you’re implementing any new technology, don’t do it without first talking to those who will end up using it. Get your people on board early on to help educate them on being security aware.

Don’t: Get the balance wrong

When it comes to accessibility boundaries and rules, there needs to be the right balance between tech security and tech freedom. Use it in the right way, tailored to how people work – with the right security in place – and technology can help unlock employees’ potential.

Don’t: Allow unauthorised devices on the company network

In the same way you wouldn’t allow uninvited guests into your house, the same goes for your business network. Many larger organisations may have this covered, but if a smaller business is keeping an eye on costs then unauthorised devices could slip through the net. In that case, maintain an audit of company-issued devices, and make sure employees understand the responsibility they hold every time they connect a personal device to your corporate network.
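A minimal sketch of such an audit, assuming – purely for illustration – that company-issued devices are inventoried by MAC address:

```python
# Compare devices seen on the network against an inventory of
# company-issued hardware; anything unknown is flagged for follow-up.
# The inventory entries below are hypothetical.
AUTHORISED = {
    "3c:22:fb:aa:01:02": "laptop-finance-014",
    "3c:22:fb:aa:01:03": "laptop-hr-007",
}

def audit(seen_on_network: list) -> list:
    """Return the MAC addresses of connected devices not in the inventory."""
    return [mac for mac in seen_on_network if mac.lower() not in AUTHORISED]

# The unknown device surfaces immediately:
unknown = audit(["3c:22:fb:aa:01:02", "de:ad:be:ef:00:01"])
```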

Don’t: Think you can do it alone

No one really understands your business as well as you do, but you’re not expected to have all the answers. That’s why it’s important to work with your trusted partners to reinforce your defences. When it comes to security management, a third-party engagement can make a great deal of sense.

“Don’t waste money on trying to build and tool your own Security Operations Centre, instead outsource the problem to the experts,” according to Thomson.

“The costs for managed security services have come down considerably in the last year and it is far more secure to use a SOC that can see threats that are not just targeting your own business. There’s strength in numbers. We’re here to help if you need us.”

In conclusion

Technology is only one part of the puzzle when it comes to effective security; but ultimately, your people are the best defence. That’s why it’s important not only to educate employees on all the dangers, but also get them onside with your security strategy.

“Don’t let’s talk about the technology as if it’s the technology’s fault that we’ve gotten better or worse at anything. It’s about how we choose to use the tools we’ve got,” Codrington added.

A business’s biggest asset is its people, and equipping them with the right technology to unlock their productivity is key. If an organisation and its people are always going to be connected, they need to be able to work on any device, collaborate with colleagues and access data and apps securely wherever they are.

When it comes to security, it’s often said that you’re only as strong as your weakest link. So make sure you strengthen those defences and continue to invest in them as the threats evolve. It’s far easier to stand up to the many threats out there, and stop them, together than to take them on alone.

Discover how O2’s technology is helping businesses empower their workforce.

Fulfilling the promise of NFV with reconfigurable computing

With so many new technologies vying for attention, it can be difficult for CISOs to know which ones merit investment. Will this solution save time? Will it make our organisation more productive, or enable us to do things we couldn’t otherwise do? These questions need to be considered before adopting software-defined networking (SDN) and network functions virtualisation (NFV).

What makes these technologies appealing is their ability to separate software from hardware, sidestepping the vendor lock-in that has been the norm. The main question, then, is not about budget but about an organisation’s ability to overcome the challenges of these methods and so realise their full value.

When enterprises, mobile operators and data centres began building their own network infrastructure, they used the customised hardware and software typically offered on the market. Example applications include network gateways, switches, routers and network load balancers; mobile core and radio access network functions such as vEPC (virtual evolved packet core), vCPE (virtual customer premises equipment) and vRAN (virtual radio access network); and security applications such as firewalls, NGFWs, IDS/IPS, SSL/IPsec offload appliances, DLP and antivirus, to name just a few.

Instead of needing to purchase proprietary appliances to run each networking application, it is much more cost-efficient to support these functions as software applications, called virtualised network functions (VNFs), running on virtual machines or in containers on standard servers. That’s the idea behind NFV. Moving away from discrete, cus­tomised architectures to a more consolidated “x86-only architecture” promises to reduce costs, simplify deployment and management of net­working infrastructure, widen supplier choice and, ultimately, enable horizontal scale-out in the networking and security market.

It’s not a sure bet that the throughput and latency demands of today’s applications can be handled in software on standard platforms without allotting significant CPU resources to the problem. Operators are realising that the cost savings NFV promises are offset by the need to throw entire racks of compute resources at a workload that a single appliance could previously support. The CPU and server costs, rack space and power required to match the performance footprint of a dedicated solution end up being as expensive as, or more expensive than, custom-designed alternatives. The vision of operational simplicity and dramatically lower total cost of ownership is still a dream on the horizon.

Along comes 5G

As if the performance and scaling problems that operators face with generic NFV infrastructure (NFVi) weren’t enough to worry about, the arrival of 5G networks will make these concerns worse. The move to 5G brings new requirements to mobile networks, creating its own version of hyperscale networking that is needed to meet the performance goals of the technology, but at the right economy of scale. Numerous factors are fundamentally unique to 5G networks compared with previous 3G/4G instantiations of mobile protocols: 5G uses higher frequencies over shorter distances, and the higher the frequency, the more bandwidth can be driven over the wireless network.

But wait – it gets worse. 5G will also mean a huge increase in the number of users/devices (both human and IoT), which fundamentally affects the number of unique flows in the network and necessitates very low latency requirements. 5G also promises lower energy and cost than previous mobile technologies. These 5G goals, when realised, will drive the application of wireless communications to completely new areas never seen before.

Rapid scaling

If they are going to meet performance goals, network operators now see that they will need data plane acceleration based on FPGA-based SmartNICs in order to scale virtualised network functions (VNFs). This technique offloads the x86 processors hosting the varied VNFs so they can support the breadth of services promised.

When SmartNIC acceleration supports virtual switching, this set-up has been shown to be the highest-performing and most secure method of deploying VNFs. Virtual machines (VMs) can use accelerated packet I/O and guaranteed traffic isolation via hardware while maintaining vSwitch functionality. FPGA-based SmartNICs specialise in the match/action processing required for vSwitches and can offload critical security processing, freeing up CPU resources for VNF applications.

Functions like filtering, intelligent load balancing, virtual switching, flow classification and encryption/decryption can all be performed in the SmartNIC and offloaded from the x86 processor housing the VNFs while, through technologies like VirtIO, remaining transparent to the VNF, providing a common management and orchestration layer across the network fabric.

A novel configuration

Network infrastructure has changed so dramatically and so much more is being asked of it that organisations cannot operate with networking and security solutions that are expensive, hardened and fixed-function.

Overcoming the challenges facing NFV deployments requires reconfigurable computing platforms based on standard servers that can offload and accelerate compute-intensive workloads, in either an inline or a look-aside model, distributing workloads appropriately between x86 general-purpose processors and software-reconfigurable, FPGA-based SmartNICs optimised for virtualised environments.

The environment that results from combining low-cost server platforms and FPGA-based SmartNICs enables huge throughput and support for many millions of simultaneous flows. CISOs who have struggled to implement NFV now have the option of using this novel framework, with the capabilities and the speed they need.

Bringing the Next 100 Million People to Blockchain | @CloudEXPO @CelsiusNetwork #FinTech #Blockchain #Bitcoin #Ethereum #SmartCities

The crypto community has run out of anarchists and libertarians, and has absorbed almost all the speculators it can handle; the next 100 million users to join crypto will need a world-class application to use. What will it be? Alex Mashinsky, a seven-time founder and CEO of Celsius Network, will discuss his view of the future of crypto.


Serverless Architecture on AWS | @CloudEXPO @RapidValue #CloudNative #Serverless #AWS #DataCenter #Docker #Kubernetes

Serverless architecture is the new paradigm shift in cloud application development. It has the potential to take the fundamental benefits of the cloud platform to another level.

“Focus on your application code, not the infrastructure”

All the leading cloud platforms provide services to implement serverless architectures: AWS Lambda, Azure Functions, Google Cloud Functions, IBM OpenWhisk and the Oracle Fn Project.
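For flavour, here is about the smallest possible AWS Lambda function in Python – the handler is the entire unit of deployment, and the platform provisions, scales and bills per invocation (the event shape here is a hypothetical example):

```python
# A minimal AWS Lambda handler: the platform calls this function for
# each incoming event; there is no server for the developer to manage.
import json

def handler(event, context):
    """Return an API Gateway-style response greeting the caller."""
    name = (event or {}).get("name", "world")
    return {
        "statusCode": 200,
        "body": json.dumps({"message": f"Hello, {name}!"}),
    }
```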


ServerlessSUMMIT at @CloudEXPO Silicon Valley | @IoT2040 #CloudNative #Serverless #DevOps #Docker #Kubernetes

As you know, enterprise IT conversations over the past year have often centred on the open-source Kubernetes container orchestration system. In fact, Kubernetes has emerged as the key technology – and even primary platform – of cloud migrations for a wide variety of organisations.

Kubernetes is critical to forward-looking enterprises that continue to push their IT infrastructures toward maximum functionality, scalability, and flexibility.

As they do so, IT professionals are also embracing the reality of Serverless architectures, which are critical to developing and operating real-time applications and services. Serverless is particularly important as enterprises of all sizes develop and deploy Internet of Things (IoT) initiatives.


SAP bets big after breaking €20bn in 2018 cloud and software revenues

SAP broke €20 billion in yearly cloud and software revenues in 2018, hitting or exceeding its raised outlook metrics in the process – and the company wants more, targeting €35bn in total revenue by 2023.

The Q4 2018 financial results saw total cloud and software revenues hit €6.3 billion (£5.5bn), representing 85% of total revenues that quarter. Naturally this statistic is somewhat obfuscatory – as regular readers of this publication will recognise, many of the largest cloud providers do it – but other stats are available. New cloud bookings for the whole of 2018 hit €1.8bn, a 25% increase on the previous year, while CEO Bill McDermott said cloud revenue grew 40% in Q4, and 38% across the full year.

Speaking to analysts on an earnings call, McDermott put the number of SAP ‘cloud users’ at 180 million, and was bullish about the company’s progress, particularly after the acquisition of Qualtrics for $8 billion, first announced in November.

“SAP has only winning businesses in the portfolio,” said McDermott. “Every strategic asset in the company is growing. And looking back, the belief was enterprise customers would only want to rent software. But SAP embraced the software as a service business model early on and we’re growing the cloud faster than the competition – and that includes Oracle and Workday, to name a few.”

Chief financial officer Luka Mucic noted that public cloud, or software and platform as a service gross margin, improved solidly during 2018. “Looking forward, we expect to realise the benefits from our platform convergence in the first half of 2019 with further acceleration in the second half,” he said. “This will set us up with full scalability going into 2020 and beyond.”

McDermott joked that he might sign off his emails in future with ‘XO’, another reference to the Qualtrics acquisition. SAP sees the experience management software provider as a key piece in its jigsaw, combining operational data (O) from SAP’s side with experiential data (X) from Qualtrics.

“This is the only strategy for SAP as we look at our bright future,” added McDermott. “And we know it’s where the world is going. Experience management is the future and SAP owns it.”

You can read the full Q4 statement here (pdf).

Rubrik security slip-up exposed masses of its corporate clients’ data

Connor Jones

30 Jan, 2019

Data management company Rubrik was found to have an unsecured server that exposed, in some cases, sensitive client information.

The server itself wasn’t password-protected, which meant that anyone who knew its location could access it, according to TechCrunch. It held tens of gigabytes of data, including client names, email addresses, email signatures and case work.

Rubrik, which is valued at $3.3 billion, has some incredibly high-profile clients whose information was on the exposed database, including Deloitte, Shell and the NHS.

It wasn’t just the high-profile clients on the database: all of Rubrik’s corporate clients resided there, and the database was indexed on Shodan, a search engine for exposed devices and databases.

In addition to the names and contact details, contents of emails relating to issues and complaints between clients and Rubrik were also stored on the dedicated client portion of the exposed server. Some emails also included sensitive information about Rubrik’s clients’ setup and configuration.

Rubrik said it took the database offline within an hour of being alerted to the issue; the data on it dated back to October 2018, according to email timestamps.

“While building a new solution for customer support, a sandbox environment containing a subset of our customer corporate contact information and support interaction data was potentially accessible for a brief period of time,” said a spokesperson for Rubrik. “We rectified this issue immediately.”

“We also confirmed that no customer-owned data was exposed,” the spokesperson added. “Other than the security researcher who discovered this issue, no one has accessed this environment”.

This comes as fairly ironic news, as Rubrik recently announced that it will expand into the security and compliance market.

On that note, some of Rubrik’s clients are based in Europe, which means GDPR could come into play. The company could face a fine of up to 4% of its annual global revenue for exposing data it is responsible for.

It would be a big blow to the up-and-coming star in data management, which raised $261 million from venture capital firms earlier this month and was also listed among the top five IPO prospects for 2019 by Mosaic Score.

Global Microsoft outage leaves users unable to log in

Keumars Afifi-Sabet

30 Jan, 2019

A host of Microsoft’s cloud services including Azure Government Cloud and LinkedIn sustained a global authentication outage just a few days after users were blocked from accessing Office 365 in Europe.

Users in parts of Europe, the US, as well as Australia and Japan were blocked from logging into their services between 9pm GMT yesterday and the early hours of this morning due to authentication issues.

A host of Microsoft Cloud services including Dynamics 365 and Office 365, as well as US Government cloud resources, were out of action for a few hours due to problems with its authentication infrastructure.

According to the outage detection service Downdetector, the issue may have affected a wide range of services including Skype, OneDrive and Office 365, all of which experienced spikes in problem reports at roughly the same time. Users also complained across social media about difficulties logging into these platforms.

The issue, which has now been resolved, affected users attempting to log into new sessions, with the Azure status page indicating it concerned an external DNS provider, identified as Level 3 after an investigation. Microsoft says that engineers mitigated the outage by failing over the CenturyLink DNS services to an alternative provider.

These issues were resolved shortly after midnight this morning but lasted at least a few hours, predominantly affecting users in the Eastern hemisphere who were getting into the crux of their working days.

The global outage arose just five days after Microsoft customers were unable to access their Office 365 accounts for a full working day in Europe.

The company confirmed on Thursday, after initially maintaining that services were running smoothly, that its cloud-powered productivity suite was experiencing difficulties, with the continental outage lasting around nine hours in total.

This rocky start to the new year follows a series of outages that Microsoft sustained with its cloud services in the last few months of 2018, as the Windows maker struggled to provide 100% reliability.

Understanding Kubernetes today: Misconceptions, challenges and opportunities

Any discussion of Kubernetes is best started with an understanding of why we need Kubernetes. Kubernetes helps us manage containers, which dominate application development now because they enable portability, faster application development, and greater independence for developers. Once we started using containers in great volume, we needed a way to automate the setup, tear down, and management of containers – that's what Kubernetes does.

The industry has developed other orchestrators, but Kubernetes has emerged as the de facto standard for container orchestration. Nearly a year ago, 69% of organisations surveyed by the Cloud Native Computing Foundation (CNCF) were already using Kubernetes to manage containers. Kubernetes started with the technical credibility of coming out of Google, and thousands of contributors have since increased its robustness, scalability and security features.

A series of data points highlights the growth in popularity of Kubernetes. All the major cloud providers offer a managed Kubernetes service. Amazon executives highlighted at the company’s recent AWS re:Invent conference that its managed Kubernetes service, AWS EKS, is the fastest-growing service AWS has ever released. KubeCon, the industry conference hosted by the CNCF, has doubled in attendance every year, with more than 8,000 people attending the recent North America conference. And scan any tech job aggregator and you’ll see thousands of companies seeking Kubernetes expertise for their IT architecture teams.

The mergers and acquisitions market provides another lens into the popularity of Kubernetes. Most industry analysts said OpenShift, Red Hat’s commercial distribution of Kubernetes, drove a significant portion of the valuation in IBM’s recent $34 billion acquisition of Red Hat. Also, VMware recently acquired Heptio, which provided another popular distribution of Kubernetes, for a rumoured $550 million – an astonishing amount for a company that hadn’t yet had the chance to generate much revenue.

Common misunderstandings about Kubernetes

Despite the massive popularity of Kubernetes, misunderstandings about the platform persist. One centres on how to work with Kubernetes. Most people running open source software have a “DIY” or “do it yourself” perspective – they’re used to digging into software, tuning all the dials and twisting all the knobs. So people often think they should be working directly in the Kubernetes platform. Often, however, that’s not the best approach.


Building support for high availability (HA) and resilience into Kubernetes, for example, is complicated – these areas provide a great reason to leverage abstraction layers on top of Kubernetes to simplify its operations and make it run in a more robust manner. People talk about Kubernetes needing a UI or UX layer – another interface on top to make it easier to work with. A lot of the managed Kubernetes services provide this abstraction layer for getting the fundamentals set up, like the master, the API server and resilient data stores.

The same goes for the security layer. Kubernetes has a lot of powerful controls built in – for networking policy enforcement, for example – but accessing them natively in Kubernetes means working in a YAML file. Having tooling on top that visualises the networking layer, as we do in the StackRox platform, makes the power of Kubernetes far more accessible to enterprises, in a way similar to how Google Kubernetes Engine makes the control plane of Kubernetes more accessible.

Securing Kubernetes

Kubernetes provides powerful security capabilities around secrets management and network policy enforcement. Digging into network policy enforcement, you can use Kubernetes to limit what resources each asset can reach. By default, Kubernetes allows all assets to talk to all other assets, because the premise of Kubernetes is that it’s meant to aid application development, and as developers craft the microservices that are the building blocks for applications, Kubernetes defaults to letting all those services communicate.

Because the developers are working in Kubernetes, the security team should also use Kubernetes to help tighten down the environment – to limit those communications paths to reduce the blast radius if an attacker got in. Moving to least privilege is a fundamental tenet of security – any person or asset should be allowed to do only the functions necessary to its role and no more. Look for a container security platform that simplifies the process of moving Kubernetes to a least privilege model. The platform should highlight the allowed communications between assets, simulate new network policies, and recommend updated configurations that support least privilege and harden the environment.
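To make that concrete, here is a minimal, illustrative NetworkPolicy – the YAML-defined controls the text mentions. Once any policy selects a pod, ingress to that pod flips from default-allow to deny-except-what-is-listed; all names below are hypothetical:

```yaml
# Restrict ingress to the `api` pods so only `frontend` pods can
# reach them, and only on TCP 8080. Selecting the pods at all makes
# every other ingress path to them deny-by-default.
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: api-allow-frontend-only
  namespace: prod
spec:
  podSelector:
    matchLabels:
      app: api            # the pods being protected
  policyTypes:
    - Ingress
  ingress:
    - from:
        - podSelector:
            matchLabels:
              app: frontend   # only the frontend may reach the API
      ports:
        - protocol: TCP
          port: 8080
```

Applied with `kubectl apply -f`, this leaves the `api` pods reachable only from `frontend` pods on TCP 8080 – a small step towards the least privilege model described above.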

Bringing it all together

As Kubernetes continues its market dominance, organisations should look for ways to apply a UI layer to the orchestrator to simplify functionality such as management and security. Despite its inherent security functions, Kubernetes also increases the attack surface, so organisations should look for security platforms that integrate deeply with Kubernetes to make accessing its security functions easier and provide mechanisms for reducing that attack surface.