Why standardisation is good for NetOps: Innovation instead of impediment

Standardisation is sometimes viewed as an assault on innovation. Being forced to abandon a polyglot buffet and adopt a more limited menu will always sound stifling. That may be because standardisation is often associated with regulatory compliance standards that have official-sounding names like ISO 8076.905E and come with checklists, auditors and oversight committees.

The reality is that there are very few standards – in fact none that I can think of – governing enterprises' choice of languages, protocols and frameworks.

Enterprise standardisation is more driven by practical considerations such as talent availability, sustainability, and total cost of ownership over the (often considerable) lifetime of software and systems.

Studies have shown the average software lifespan over the past twenty years is around six to eight years. Interestingly, longevity tends to increase for larger programs, as measured by lines of code (LOC). Systems and software with over a million LOC appear to have lifespans over a decade, lasting 12 to 14 years. While you may dismiss this as irrelevant, it is important to realise that at the end of the day, network automation systems are software and systems. They need the same care and maintenance as software coming out of your development organisation. If you're going to treat your production pipeline as code, then you've got to accept that a significant percentage of that automated pipeline is going to be code.

Over the course of that software or system lifespan, it’s a certain bet that multiple sets of operators and developers will be responsible for updating, maintaining, operating, and deploying changes to that software or system. And this is exactly what gets at the heart of the push for standardisation – especially for NetOps taking the plunge into developing and maintaining systems to automate and orchestrate network deployment and operation, as well as application service infrastructure. 

Silos are for farms

If you or your team chooses Python while another chooses PowerShell, you are effectively building an operational silo that prevents skills sharing. This is a problem. The number one challenge facing NetOps, as reported in F5 and Red Hat's State of Network Automation 2018 report, was a lack of skills (cited by 49% of surveyed NetOps). It would therefore seem foolish to create additional friction by introducing multiple languages and/or toolsets.

It is similarly a bad idea to choose languages and toolsets for which there is no local source of talent. If other organisations and nearby universities are teaching Python and you choose to go with PowerShell, you're going to have a hard time finding staff with the skills required for that system.

It is rare for an organisation to standardise on a single language; however, most do tend to standardise on just a few. NetOps should take their cues from development and DevOps standards, as this will expand the talent pool even further.

Time to value is valuable

Many NetOps organisations already find themselves behind the curve when it comes to satisfying DevOps and business demands to get continuous. The unfortunate reality of NetOps and network automation is that it's a heterogeneous ecosystem with very little pre-packaged integration available. In the State of Network Automation survey, this "lack of integration" was the second most cited challenge to automation, with 47% of NetOps agreeing.

Standardising on toolsets, and on infrastructure where possible (like application services), provides an opportunity to reduce the burden of integration across the entire organisation. What one team develops, others can leverage to reduce the time to value of other automation projects. Reuse is a significant factor in improving time to value.

We see reuse in developer proclivity toward open source and the fact that 80-90% of applications today are composed of third-party/open source components. This accelerates development and reduces time to value. The same principle can be applied to network automation by leveraging existing integrations. Establish a culture of sharing and reuse across operational domains to reap the benefits of standardisation.
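
To make the reuse point concrete, below is a minimal Python sketch of the kind of shared building block one NetOps team might publish for others to import. The device API path, payload shape and function name are hypothetical; the point is simply that a single, standardised helper gets written once and reused, rather than every team re-solving the same integration in its own language or toolset.

```python
"""shared_netops.py - a minimal, reusable building block for network automation.

A hypothetical sketch: one team writes this once, and other teams import it
instead of re-implementing device integration in their own language/toolset.
"""
import requests  # widely used HTTP library; any HTTP client would do


def push_config(device_host: str, token: str, config_lines: list[str],
                verify_tls: bool = True, timeout: int = 10) -> dict:
    """Push a configuration snippet to a device's (hypothetical) REST API.

    The '/api/v1/config' path and payload shape are illustrative only -
    substitute whatever your platform actually exposes.
    """
    url = f"https://{device_host}/api/v1/config"
    payload = {"lines": config_lines}
    headers = {"Authorization": f"Bearer {token}"}

    response = requests.post(url, json=payload, headers=headers,
                             timeout=timeout, verify=verify_tls)
    response.raise_for_status()  # surface errors consistently for every caller
    return response.json()


if __name__ == "__main__":
    # Example usage by a second team, reusing the helper rather than rewriting it.
    result = push_config(
        device_host="edge-router-01.example.net",
        token="REPLACE_ME",
        config_lines=["ntp server 192.0.2.10", "logging host 192.0.2.20"],
    )
    print(result)
```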

Spurring innovation

Rather than impeding innovation, as some initially believe, standardisation can be a catalyst for innovation. By standardising and sharing software and systems across operational domains, you have a more robust set of minds and experiences able to collaborate on new requirements and systems. You're building a pool of talent within your organisation that can provide input, ideation, and implementation of new features and functionality – all without the sometimes-lengthy onboarding cycle.

Standardisation also speeds implementation. This is largely thanks to familiarity. The more you work with the same language and libraries and toolsets, the more capable you become. That means increased productivity that leads to more time considering how to differentiate and add value with new capabilities.

Standardisation is an opportunity

Standardisation can initially feel stifling, particularly if your pet language or toolset is cut from the team. Nevertheless, embracing standardisation as an opportunity to build out a strong foundation for automation systems and software can benefit the business. It also affords NetOps new opportunities to add value across the entire continuous deployment toolchain.

Even so, it is important not to standardise for the sake of it. Take into consideration existing skill sets and the availability of local talent. Survey universities and other businesses to understand the current state of automation and operations’ skill sets and talent to make sure you aren’t the only organisation adopting a given language or toolset.

For the best long-term results, don’t treat standardisation like security and leave it until after you’ve already completed an implementation. Embrace standardisation early in your automation endeavours to avoid being hit with operational and architectural debt that will weigh you down and make it difficult to standardise later.


Aviation connectivity firm Gogo takes to the cloud with AWS infrastructure shift


Clare Hopping

14 Mar, 2019

Aviation firm Gogo has decided to migrate its infrastructure to Amazon Web Services (AWS), making use of the company’s full suite of cloud services to deliver a better in-flight entertainment experience.

Gogo has already moved the majority of its commercial and business aviation divisions to AWS, but will now start using additional services such as analytics, serverless, database, and storage to engage with more airlines without needing to scale up its physical infrastructure. The move is also expected to deliver significant cost savings and improve the efficiency of the company's operations compared with its legacy infrastructure.

“The change in velocity that we experienced moving from our on-premises environment to AWS has been phenomenal,” said Ravi Balwada, senior vice president of software development at Gogo.

“By operating and innovating on AWS, we’ve been able to nearly eliminate customer-impacting incidents related to ground-based deployments and increase our deployment cadence sevenfold. And, our database change has made operating at scale much easier and more cost effective.”

As part of the migration, Gogo moved its business-critical databases, including payments, orders, user management, and backend services off legacy databases to Amazon Aurora.

It has also started using AWS Elemental MediaLive for video processing and delivery to offer on-demand video services to customers, and uses Amazon EMR, Amazon Redshift, and Amazon Athena to analyse data stored in a data lake built on Amazon S3.

“Organisations are moving away from legacy infrastructure and database solutions to create cloud environments that give builders freedom and control over their own destinies,” said Mike Clayville, vice president of worldwide commercial sales at AWS.

“By going all-in, Gogo is leveraging the breadth and depth of AWS services, including comprehensive analytics and machine learning services to gain deeper insights and improve passengers’ in-flight experiences.”

Intel, Google, Microsoft and more team up for CXL consortium to supercharge data centre performance

Intel, Google and Microsoft are among nine tech giants who have teamed up to launch a new industry group to advance data centre performance.

The group, which also includes Alibaba, Cisco, Dell EMC, Facebook, HPE and Huawei, is looking at solidifying Compute Express Link (CXL), an emerging high-speed CPU-to-device and CPU-to-memory interconnect. The particular focus is on high performance computing (HPC) and artificial intelligence (AI) workloads among others.

The companies confirmed in a statement that they had ratified the CXL Specification 1.0, which is built on PCI Express infrastructure and aims to offer breakneck speeds while supporting an ecosystem that enables even faster performance going forward.

The press materials outlined how CXL worked. “CXL technology maintains memory coherency between the CPU memory space and memory on attached devices, which allows resource sharing for higher performance, reduced software stack complexity, and lower overall system cost,” the companies wrote. “This permits users to simply focus on target workloads as opposed to the redundant memory management hardware in their accelerators.

“CXL was designed to be an industry open standard interface for high-speed communications, as accelerators are increasingly used to complement CPUs in support of emerging applications such as artificial intelligence and machine learning.”

The consortium is not stopping there; it is working on CXL Specification 2.0 and is looking for other companies to join, particularly cloud service providers, communications OEMs and system OEMs. “CXL is an important milestone for data-centric computing, and will be a foundational standard for an open, dynamic accelerator ecosystem,” said Jim Pappas, Intel director of technology initiatives.

The initiative is being led primarily by Intel – the press materials being sent to this reporter through the Intel UK mailing list were a bit of a giveaway – and while anyone in the three primary categories can join, eagle-eyed readers will have spotted some notable absentees, particularly a couple of leading cloud vendors as well as the likes of NVIDIA and AMD.

You can find out more about CXL and the group here.


Addressing cloud sprawl: Combining security best practices with business foundations

The rate of cloud adoption has been nothing short of remarkable. According to IDG, 90% of organisations will have some portion of their applications or infrastructure running in the cloud this year, with the rest expected to follow suit by 2021. And while most organisations currently run more than half (53%) of their business on traditional networks, IDG also predicts that this will drop to less than a third (31%) within the next year or so.

The largest segment of the cloud market is IaaS. Forrester forecasts that the six largest public cloud providers (Alibaba, AWS, Azure, Google, IBM, and Oracle) will only grow larger in 2019, while Goldman Sachs predicts that they will consolidate IaaS, controlling 84% of the market within the next year.

However, while IaaS and PaaS are starting to consolidate, they are only part of the cloud phenomenon. Cloud-based storage and SaaS are also growing rapidly, and nearly every organisation on the planet participates in one or more of these markets whether they know it or not. Beyond big SaaS players like Salesforce, Gartner estimates that shadow IT now represents 30 to 40 percent of IT spending in large enterprises.

The security challenge of cloud sprawl

For many organisations, the lure of the freedom and flexibility of the cloud has led them to adopt and deploy solutions before they have put a comprehensive security strategy in place. In fact, the majority of cloud-based spending now bypasses the CIO, as lines of business increasingly make their own decisions to implement some form of cloud solution. According to IDG, 42% of organisations now have a multi-cloud deployment in place. And yet, most organisations do not have a unified system for monitoring, managing, or securing these resources.

Failing to address the security challenges of cloud sprawl puts your organisation at risk. For example, Gartner predicts that by 2020 a third of successful attacks experienced by enterprises will be on their Shadow IT resources. Getting out in front of this challenge requires security teams to develop a two-pronged campaign that focuses on human intervention and the adoption of new technologies.

The human approach

Security leaders need to run an internal PR campaign that educates leaders and users alike on the risks associated with freewheeling cloud adoption. The CIO and their leadership staff need to meet regularly with board members, C-suite leaders, and directors of lines of business to engage in business strategies that include the adoption of cloud services. The challenge is to establish yourselves as enablers rather than gatekeepers looking to restrict business opportunities.

Individuals and groups looking to adopt new cloud services usually have very good reasons for doing so, and your job is to help them get to yes without putting the organisation at risk. This involves understanding their requirements and objectives, informing them of the range of solutions already available or that can be easily integrated into your existing IT strategy, and educating them about risks that could negate any business advantages. This requires a lot of listening, trust building, and diplomacy—all soft skills that today’s security leadership team needs to possess.

The technical approach

In addition to working directly with business decision makers, there are a range of solutions that organisations need to put in place to control the security issues arising from cloud sprawl.

  • Integrate your security tools: The most essential baseline components are having a security policy in place that covers cloud, and having security tools in place that enable you to see, control, and respond to security threats even as the network they are defending evolves. Broad deployment, deep integration, centralised management and orchestration, and coordinated threat response need to span the entire network, including those cloud elements of which you may not even be aware (a minimal sketch of this kind of automated visibility check appears after this list)
     
  • Leverage native cloud controls: Bolting a security solution onto a cloud environment does not ensure that protections will be sufficient or consistent. Look for security solutions that are fully integrated into the cloud environments and that use native controls to manage and secure cloud data and transactions
     
  • Integrate cloud security using connectors: Security features and functions do not always operate consistently in different cloud environments. This can leave gaps in coverage and critical blind spots that cybercriminals can exploit. Cloud connectors designed specifically for each of the different IaaS vendors enable organisations to quickly and easily deploy cloud-based security solutions that can ensure consistent visibility and control across a multi-cloud deployment
     
  • Implement logical (intent-based) segmentation: Secure segmentation solutions allow you to isolate resources and transactions based on a wide range of parameters, using approaches such as VLAN-like segments, micro-segmentation, and emerging macro-segmentation. Ideally, segmentation should allow you to dynamically establish a secure environment for a variety of use cases, spanning from the originating devices (whether servers, mobile applications, or IoT) across the distributed network, including multi-cloud environments. In the cloud, traditional network constructs don't necessarily exist, so there is a need to leverage cloud resource information and metadata to associate policy with the application builder's intent
     
  • Establish strong access controls: Any device, application, transaction, or workflow looking to interact with cloud infrastructures and applications needs to be analysed, processed, secured, and monitored. Recent advances in network access control provide an extra layer of security, without unnecessary overhead, protecting the network and its resources from transactions that join or move laterally across the network
     
  • Deploy a CASB solution: Cloud access security brokers (CASB) provide visibility, compliance, data security, and threat protection for any cloud-based services being used by an organisation—including the discovery of Shadow IT. A CASB solution should be able to provide insights into resources, users, behaviors, and data stored in the cloud, as well as advanced controls to extend security policies from within the network perimeter to IaaS resources and SaaS applications
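
As promised above, here is a minimal sketch of an automated visibility check. It assumes AWS and the boto3 SDK, neither of which the article prescribes, and it only flags S3 buckets whose ACLs grant access to all users; an integrated security platform or CASB would cover far more than this, so treat it purely as an illustration of the kind of check worth automating.

```python
"""Minimal sketch: flag S3 buckets with ACL grants to 'AllUsers'.

Assumes AWS credentials are already configured for boto3; the article does
not prescribe a specific cloud or tool, so treat this as one possible check.
"""
import boto3
from botocore.exceptions import ClientError

# Grantee URI that marks a bucket as readable/writable by anyone
ALL_USERS_URI = "http://acs.amazonaws.com/groups/global/AllUsers"


def publicly_granted_buckets() -> list[str]:
    s3 = boto3.client("s3")
    flagged = []
    for bucket in s3.list_buckets()["Buckets"]:
        name = bucket["Name"]
        try:
            acl = s3.get_bucket_acl(Bucket=name)
        except ClientError:
            # No permission to read this bucket's ACL - skip here, log in real use
            continue
        for grant in acl.get("Grants", []):
            if grant.get("Grantee", {}).get("URI") == ALL_USERS_URI:
                flagged.append(name)
                break
    return flagged


if __name__ == "__main__":
    for bucket_name in publicly_granted_buckets():
        print(f"Publicly granted ACL found on bucket: {bucket_name}")
```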

Cloud computing based networking is utterly transforming how organisations operate and conduct business. But without comprehensive security policies and solutions in place, combined with a corporate climate committed to proactively protecting cloud-based assets and organisational resources, cloud adoption can introduce more risk and overhead than most IT teams can absorb.

To address this growing challenge, security leadership teams, beginning with the CIO, need to start now to foster a climate of business-focused enablement across the organisation, combined with an integrated security foundation that enables rapid and automated policy enforcement anywhere across the distributed network.

Read more: Gartner's latest Magic Quadrant shows the need for cloud access security brokers going forward


Dell EMC, Facebook, Google, Intel and others create consortium to advance CXL and boost data centre performance


Clare Hopping

13 Mar, 2019

Some of the world’s biggest tech companies have announced their commitment to developing open standards in the world of data centre and cloud computing, with the introduction of the Compute Express Link (CXL) standards group.

Alibaba, Cisco, Dell EMC, Facebook, Google, Hewlett Packard Enterprise, Huawei, Intel and Microsoft want to make sure high-speed CPU-to-device and CPU-to-memory processes and equipment are standardised to enhance data centre performance.

The group will focus on making the CXL Specification 1.0 mainstream among tech businesses, interconnecting CPUs with platforms and workloads. It will build upon the existing PCI Express (PCIe) infrastructure, using the PCIe 5.0 interface to improve the I/O protocol, memory protocol and coherency interface.

“CXL is an important milestone for data-centric computing, and will be a foundational standard for an open, dynamic accelerator ecosystem,” said Jim Pappas, director of technology initiatives at Intel.

“Like USB and PCI Express, which Intel also co-founded, we can look forward to a new wave of industry innovation and customer value delivered through the CXL standard.”

Intel was one of the founding fathers of the CXL protocol, developing the interconnect four years ago. However, the company now wants it to become a standardised technology and so recruited many of its partners to develop it into a set of standards that could be used by the industry to improve data centre infrastructure.

“Microsoft is joining the CXL consortium to drive the development of new industry bus standards to enable future generations of cloud servers,” said Dr Leendert van Doorn, distinguished engineer at Microsoft’s Azure division.

“Microsoft strongly believes in industry collaboration to drive breakthrough innovation. We look forward to combining efforts of the consortium with our own accelerated hardware achievements to advance emerging workloads from deep learning to high performance computing for the benefit of our customers.”

Muddu Sudhakar to Deliver AI Keynote @CloudEXPO | @SMuddu #AI #DevOps #ITOPS #Serverless #Kubernetes #ArtificialIntelligence

AI and machine learning disruption for enterprises has started in areas such as IT operations management (ITOps), cloud management and SaaS apps.

In 2019, CIOs will see disruptive solutions for cloud and DevOps, along with AI/ML-driven IT ops and cloud ops.

Customers want AI-driven multi-cloud operations for monitoring, detecting and preventing disruptions, which cause revenue loss, unhappy users and damage to brand reputation.


What is Your DevOps Team Actually Doing? | @DevOpsSUMMIT @TomChavez @AndiMann @Splunk #DevOps #MachineLearning

DevOps solutions are tying together increasingly complex tools that can be hard to manage and monitor. To check on the health of your processes you need to be dialled in to your source code, artifact management, continuous integration, delivery and deployment, static code analysis, security analysis, health monitoring, infrastructure and test automation, to name just a few. Come see how to aggregate your view of the DevOps world in practice.
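
As a rough illustration of what aggregating your view of the DevOps toolchain can look like in code, here is a hedged Python sketch that polls the health endpoints of a few toolchain components and collates the results. The tool names and URLs are invented placeholders; real CI servers, artifact stores and monitoring platforms each expose their own status APIs.

```python
"""Hypothetical sketch: aggregate the health of several DevOps tools in one view.

The endpoints below are invented placeholders - replace them with the status
URLs your actual CI, artifact and monitoring systems expose.
"""
import requests

TOOLS = {
    "continuous-integration": "https://ci.example.com/health",
    "artifact-management": "https://artifacts.example.com/health",
    "monitoring": "https://monitoring.example.com/health",
}


def collect_health() -> dict:
    """Return {tool_name: 'up' | 'down'} based on a simple HTTP health check."""
    results = {}
    for name, url in TOOLS.items():
        try:
            response = requests.get(url, timeout=5)
            results[name] = "up" if response.ok else "down"
        except requests.RequestException:
            results[name] = "down"
    return results


if __name__ == "__main__":
    for tool, status in collect_health().items():
        print(f"{tool}: {status}")
```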


CloudBees, Google and Linux Foundation launch Continuous Delivery Foundation

Meet the Continuous Delivery Foundation (CDF), a new offshoot of the Linux Foundation which will aim to develop, nurture and promote open source projects and best practices around continuous delivery.

The CDF is being led by CloudBees, the arbiters of open source automation server Jenkins, and includes the Jenkins Community, Google, and the Linux Foundation itself as collaborators.

Alongside Jenkins, CloudBees wanted to find a home for a newer flavour, Jenkins X, which aims to automate continuous integration and delivery (CI/CD) in the cloud. When Jenkins X launched this time last year, various drivers were cited, from the rise of higher-performing DevOps teams to the near-ubiquity of Kubernetes. “All of this adds up to an increased demand for teams to have a solution for cloud native CI/CD with lots of automation,” wrote James Strachan, distinguished engineer at CloudBees.

The parallels between the companies engaged in this initiative are evident. Around a week earlier, the Cloud Native Computing Foundation (CNCF) announced Kubernetes had ‘graduated’, making it a production-ready technology ‘mature and resilient enough to manage containers at scale across any industry in companies of all sizes’, as the CNCF said at the time. Alongside that, the foundation announced 24 new members – one of which was CloudBees.

“The time has come for a robust, vendor-neutral organisation dedicated to advancing continuous delivery,” said Kohsuke Kawaguchi, creator of Jenkins and CTO at CloudBees. “The CDF represents an opportunity to raise the awareness of CD beyond the technology people.

“For projects like Jenkins and Jenkins X, it represents a whole new level of maturity,” Kawaguchi added. “We look forward to helping the CDF grow the CD ecosystem and foster collaboration between top developers, end users and vendors.”

The two benchmark reports in the DevOps industry are from Puppet and DORA (the DevOps Research and Assessment team), both released in September. Puppet found the vast majority (79%) of organisations polled were bang in the middle when it came to adoption and deployment, while DORA noted how those at the top were pulling significantly ahead thanks to their advanced cloud infrastructure. Puppet noted 11% of companies were ‘highly evolved’.

Ultimately, the launch of the CDF will hope to bring further clarification and specification to an already fast-moving space. “As the market has shifted to containerised and cloud-native technologies, the ecosystem of CI/CD systems, DevOps and related tools has radically changed,” said Chris Aniszczyk, VP developer relations at the Linux Foundation. “The number of available tools has increased, and there’s no defining industry specifications around pipelines and other CI/CD tools.

“CloudBees, Google and the other CDF founding members recognise the need for a neutral home for collaboration and integration to solve this problem,” he added.


Why we should take the brakes off digital transformation with cloud-based connectivity

As companies continue to bring more internal and external data sources and services together, the demand for digital connectivity is growing exponentially. 

The crucial connectivity layer, which lies at the heart of most transformation projects, is hugely important in facilitating this. Yet, as it exists below the surface, its significance often goes unappreciated by senior managers and, let’s be honest, due to its complexity it is often unloved by CTOs and lead architects too.

Many see the integration stage of a digital transformation project as one enormous headache, fraught with unknown obstacles, that will inevitably delay schedules and drain budgets.

If we were just talking about the odd connection here and there this wouldn’t be such an issue, but that’s not the modern day reality. Connectivity requests are constantly coming in from every line of business.

Take an organisation that wants full visibility over the customer journey, for example. Data will need to be sent to and from any number of management systems, including warehouse inventory, transport and logistics, order management, payment, sales and marketing, contact centres, and more – with each connection involving its own set of endpoint security and data management protocols.

Integration is still taking too long

It's not possible to maintain a single point-to-point approach when we are inundated with multiple requests and requirements. The traditional alternative has been to build an internal integration platform. Yet this is still not a quick job, nor is it cheap. 

Consider the time and money a CTO will spend specifying, buying and integrating a software platform, as well as the recruitment and training of an integration team, and the requirement to set up and monitor numerous control standards to ensure the quality of that team’s actions. This is before any work takes place. 

If you then encounter any complications in the integration work itself, it’s like pulling a big handbrake that stops a digital transformation project in its tracks.  

When organisations are adopting a DevOps methodology in order to release new applications on a weekly or daily basis and orchestrating IT infrastructure through the cloud, this internal platform approach now seems archaic. 

We can release the handbrake

We can bypass all of those internal connectivity headaches, however, by accessing integration capabilities through the cloud. Using the ‘as a service’ model, infrastructure architects can have access to tried and tested integration flows and adapters almost instantly. 

The benefits that this integration capability as a service (ICaaS) model delivers are significant:

Speed: Instead of spending months building an in-house capability, you can deploy within hours in a cloud environment. You don’t need to procure hardware and software, recruit staff, train them and then carry out the work. 

When you have pre-built integration flows ready to go, all that’s left to do is configure the platform. Using a standard plug-and-play API model, you simply need to decide what talks to what.
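
To illustrate the ‘decide what talks to what’ step, here is a hypothetical Python sketch of a declarative flow definition built from pre-existing adapters. None of the adapter or system names reflect a specific ICaaS product; the structure simply shows configuration taking the place of construction.

```python
"""Hypothetical sketch: wiring pre-built integration adapters together.

Nothing here reflects a specific ICaaS product - the structure illustrates
'configuration instead of construction': the adapters already exist, and the
platform user only declares which system talks to which.
"""

# Pre-built adapters a vendor library might expose (names are invented).
def order_management_source(event):
    return {"order_id": event["id"], "status": event["status"]}


def warehouse_inventory_sink(record):
    print(f"Updating warehouse system with {record}")


def contact_centre_sink(record):
    print(f"Notifying contact centre about {record}")


# The 'plug-and-play' part: a declarative map of what talks to what.
FLOWS = [
    {"source": order_management_source, "sinks": [warehouse_inventory_sink]},
    {"source": order_management_source, "sinks": [contact_centre_sink]},
]


def run_flows(event):
    """Route one inbound event through every configured flow."""
    for flow in FLOWS:
        record = flow["source"](event)
        for sink in flow["sinks"]:
            sink(record)


if __name__ == "__main__":
    run_flows({"id": "ORD-1042", "status": "dispatched"})
```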

Less resource consumption: It’s estimated that large organisations can save up to 90% of their annual integration costs using an ICaaS model – as there is no need to hire consultants, pay out salaries to specialist developers or stump up hundreds of thousands for annual software license fees.

CTOs are not confronted with a big initial lump sum using the as-a-service model either. This means that IT budgets are protected, and resources can be redistributed to front and back-end projects.  

Risk removed: Building an integration platform in-house carries a huge amount of risk. You simply don’t know what hurdles you are going to come across and what delays and extra expense that will lead to. With ICaaS, you are tapping into a library of pre-built and proven integration flows and adapters.

Conclusion

The reality is you are simply hiring that IP, and then deploying it in a cloud environment. You’re not going to find yourself in a situation where after months you hit a major barrier. You already know it will work – and can see the evidence first hand within hours. 

Given the growing quantity of digital transformation projects being pushed on CTOs, and the pressure to deliver on these against ever shortening deadlines, it’s time to reassess traditional approaches to integration. 

When organisations are rolling out their infrastructure and orchestrating new builds in a cloud environment, using an in-house approach for the integration platform no longer seems sensible, nor sustainable. Surely, the future of connectivity is in the cloud. 


Misconfigured Box accounts lead to leaked sensitive data


Clare Hopping

12 Mar, 2019

Businesses are putting customer and other confidential company data at risk because they’re not properly securing their Box cloud storage accounts, a report by security firm Adversis has revealed.

The problem lies in the way links are shared between Box users. Employees are able to create public links to files in Box folders, and these links can easily be discovered by, or shared with, people outside of the organisation.

The “secret” links can be easily discovered using a script that can scan for Box accounts in use on a network, using lists of company names and wildcard searches. The security firm found more than 90 companies with publicly accessible folders, including tech firms that should know better. In fact, Box itself was one of the companies revealed to have publicly accessible data available.

“Initially, we intended to reach out to all the companies affected but we quickly realized that was impossible at this scale,” Adversis said. “A large percentage of the Box customer accounts we tested had thousands of sensitive documents exposed. We alerted a number of companies that had highly sensitive data exposed, reached out directly to Box, and published this write up.”

Even more worrying, some of these links were discoverable by Google and other search engines, meaning they could, theoretically, be indexed in search results.

The data Adversis found available to anyone looking for it included personal details such as passport information, bank account details and passwords, as well as financial data including invoices and customer details.

Some of the businesses highlighted as not properly securing their Box accounts were Apple, PR firm Edelman, Schneider Electric and TV network Discovery.

“We take our customers’ security seriously and we provide controls that allow our customers to choose the right level of security based on the sensitivity of the content they are sharing,” Box said in a statement. “In some cases, users may want to share files or folders broadly and will set the permissions for a custom or shared link to public or ‘open’.

“We are taking steps to make these settings more clear, better help users understand how their files or folders can be shared, and reduce the potential for content to be shared unintentionally, including both improving admin policies and introducing additional controls for shared links.”
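
For teams that want to check their own exposure while those clearer controls arrive, the sketch below shows one way an administrator might list items in a Box folder that carry an ‘open’ shared link. It assumes the Box v2 folder items endpoint, a fields parameter and a valid API token; verify the details against Box’s current API documentation before relying on it, as this is an illustrative audit sketch rather than an official tool.

```python
"""Minimal sketch: list items in a Box folder whose shared link is set to 'open'.

Assumes a Box developer/OAuth token and the v2 'folder items' endpoint with a
'fields' parameter - check Box's current API documentation before relying on it.
"""
import requests

BOX_API = "https://api.box.com/2.0"


def open_shared_items(folder_id: str, token: str) -> list[dict]:
    """Return items in the folder whose shared link access is 'open'."""
    url = f"{BOX_API}/folders/{folder_id}/items"
    params = {"fields": "name,shared_link"}
    headers = {"Authorization": f"Bearer {token}"}

    response = requests.get(url, params=params, headers=headers, timeout=10)
    response.raise_for_status()

    exposed = []
    for item in response.json().get("entries", []):
        link = item.get("shared_link") or {}
        if link.get("access") == "open":  # 'open' = anyone with the link
            exposed.append({"name": item.get("name"), "url": link.get("url")})
    return exposed


if __name__ == "__main__":
    # '0' is the conventional ID of the root folder; token is a placeholder.
    for item in open_shared_items(folder_id="0", token="REPLACE_ME"):
        print(f"Open shared link: {item['name']} -> {item['url']}")
```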