Oracle cloud courses are free during coronavirus lockdown


Bobby Hellard

31 Mar, 2020

Oracle has announced it’s offering free access to its online learning content and cloud certifications while swathes of workers are in coronavirus lockdown. 

The aim is to help IT professionals gain highly sought-after skills while the coronavirus pandemic enforces remote or reduced working, according to Oracle.

The courses and certifications cover Oracle Cloud Infrastructure and Oracle Autonomous Database and will be available until 15 May. There are seven learning paths that users can access with an Oracle Single Sign-On account, which is also free.

Oracle users, developers, technical professionals, architects, students and professors will have access to more than 50 hours of online training and six certification exams, according to Raghu Viswanathan, the VP of education products and delivery at Oracle University.

“As our customers adapt to a rapidly evolving digital landscape, Oracle is stepping up its efforts to help build critical technical cloud skills they need to ramp up innovation,” Viswanathan said in a statement.

“We believe that certifications help professionals develop in-demand skills, shorten turnaround times for customer projects, enhance their expertise and advance their careers while improving their overall job performance.”

The free access will include an extensive library of materials for Oracle’s Cloud Infrastructure and Autonomous Database, as well as content on topics like machine learning, data science and multi-cloud environments, which includes integrations with Microsoft Azure.

With these courses, the company is also going to offer access to high-quality video content, experts and recorded demos of hands-on labs, all of which will be available anywhere and anytime. This will include machine translations for Chinese, Japanese, Korean, Portuguese and Spanish-speaking countries.

Like Oracle, a number of tech companies have offered some services for free while the coronavirus outbreak drastically changes the way we live and work. These include Microsoft, which has offered Teams as a free service to the NHS, and RingVPN, which has made the first 90 days of its service free of charge.

How Covid-19 will impact IT and tech spending for 2020 and beyond

The human tragedy the COVID-19 pandemic has inflicted on the world is incalculable and continues to grow. Every human life is priceless and deserves the care needed to sustain it. COVID-19 is also impacting entire industries, causing them to move in unpredictable ways and directly impacting IT and tech spending.

COVID-19’s impact on industries

Computer Economics, in collaboration with its parent company Avasant, published its Coronavirus Impact Index by Industry, which looks at how COVID-19 is affecting 11 major industry sectors across four dimensions: personnel, operations, supply chain, and revenue. See the Coronavirus Impact Index by Industry by Tom Dunlap, Dave Wagner, and Frank Scavo of Computer Economics for additional information and analysis. The resulting index is an overall rating of the impact of the pandemic on each industry and is shown below:

Chart: Coronavirus Impact Index by Industry (Computer Economics/Avasant)

Computer Economics and Avasant predict major disruption to High Tech & Telecommunications based on the industry’s heavy reliance on Chinese supply chains, which were severely impacted by COVID-19.

Based on conversations with US-based high tech manufacturers, I've learned that a few are struggling to make deliveries to leading department stores and discount chains due to parts shortages and allocations from their Chinese suppliers. North American electronics suppliers aren't an option because their prices are higher than those of their Chinese competitors. Leading department stores and discount chains openly encourage high tech device manufacturers to compete with each other on supplier availability and delivery date performance.

In contrast to the parts shortage and unpredictability of supply chains dragging down the industry, software is a growth catalyst. The study notes that Zoom, Slack, GoToMyPC, Zoho Remotely, Microsoft Office365, Atlassian, and others are already seeing increased demand as companies increase their remote-working capabilities.

COVID-19’s impact on IT spending  

Further supporting the Coronavirus Impact Index by Industry analysis, Andrew Bartels, VP & Principal Analyst at Forrester, published his latest forecast of tech growth today in the post, The Odds of a Tech Market Decline In 2020 Have Just Gone Up To 50%.

Bartels is referencing the market forecasts published last month in New Forrester Forecast Shows That Global Tech Market Growth Will Slip To 3% In 2020 And 2021, shown below:

Chart: Forrester global tech market growth forecast for 2020 and 2021

Key insights from Forrester’s latest IT spending forecast and predictions are shown below:

  • Forrester is revising its tech forecast downward, predicting US and global tech market growth slowing to around 2% in 2020. Bartels notes that this assumes the US and other major economies decline in the first half of 2020 but manage to recover in the second half
  • If a full-fledged recession hits, there is a 50% probability that US and global tech markets will decline by 2% or more in 2020
  • In either a second-half 2020 recovery or recession, Forrester predicts computer and communications equipment spending will be weakest, with potential declines of 5% to 10%
  • Tech consulting and systems integration services spending will be flat in a temporary slowdown and could be down by up to 5% if firms cut back on new tech projects
  • Software spending growth will slow to the 2% to 4% range in the best case and will post no growth in the worst case of a recession
  • The only positive signs in Forrester's latest IT spending forecast are continued growth in demand for cloud infrastructure services and potential increases in spending on specialised software. Forrester also predicts increased spending on communications equipment and telecom services for remote work and education as organisations encourage workers to work from home and schools move to online courses

Conclusion

Every industry is already hurting economically from the COVID-19 pandemic. Now is the time for enterprise software providers to go the extra mile for their customers across all industries and help them recover and grow again. Strengthening customers in their time of need by freely providing remote collaboration tools, secure endpoint solutions, cloud-based storage, and CRM systems is an investment in the community that will help every software company make it through this pandemic too.

Photo by Micheile Henderson on Unsplash

Interested in hearing industry leaders discuss subjects like this and sharing their experiences and use-cases? Attend the Cyber Security & Cloud Expo World Series with upcoming events in Silicon Valley, London and Amsterdam to learn more.

Azure services up 775% as Microsoft scrambles to add more capacity


Bobby Hellard

30 Mar, 2020

Microsoft’s cloud services have seen a 775% spike in usage in areas where social distancing measures and lockdowns have been enforced.

Azure services such as Microsoft Teams, Windows Virtual Desktop and Power BI have all seen increases in users in March as more and more people have been forced to work from home or stay indoors.

The company recently announced it would prioritise capacity provisions for critical health and safety organisations to ensure the relevant remote workers can stay up and running during the coronavirus pandemic. However, with demand for cloud services surging in lockdown areas, the company has said it will “expedite” the creation of new capacity.

“We’re implementing a few temporary restrictions designed to balance the best possible experience for all of our customers,” the company wrote on its blog. “We have placed limits on free offers to prioritise capacity for existing customers.

“We are expediting the addition of significant new capacity that will be available in the weeks ahead. Concurrently, we monitor support requests and, if needed, encourage customers to consider alternative regions or alternative resource types, depending on their timeline and requirements. If the implementation of these efforts to alleviate demand is not sufficient, customers may experience intermittent deployment-related issues.”

So far, the only issue with Azure has been a two-hour outage for Microsoft Teams in Europe. The service went down on the first Monday of remote working as it saw a spike in usage.

Later it was revealed that Teams had seen 12 million more users in March, taking the number of daily active users to 44 million. Windows Virtual Desktop also trebled in usage and Microsoft’s business analytics service, Power BI, saw a 42% increase in just one week.

In addition, Microsoft said it's been in regular contact with ISPs around the world and is actively working with them to "augment" capacity as needed.

“We’ve been in discussions with several ISPs that are taking measures to reduce bandwidth from video sources in order to enable their networks to be performant during the workday,” the company said.

Microsoft to acquire Affirmed Networks to get onto AWS’ wavelength

Microsoft has announced it is to acquire Affirmed Networks, a provider of network functions virtualisation (NFV) software – as the telecoms space heats up for the biggest cloud players. 

As 5G is becoming more of a reality, cloud vendors see their role as enabling telecom operators to deploy and maintain next-generation networks more efficiently.  

“At Microsoft, we intend to empower the telecommunications industry as it continues its move to 5G and support both network equipment manufacturers and operators in their efforts to find solutions that are faster, easier and cost effective,” wrote Yousef Khalidi, corporate vice president for Azure networking in a blog post. “This acquisition will allow us to evolve our work with the telecommunications industry, building on our secure and trusted cloud platform for operators. 

“With Affirmed Networks, we will be able to offer new and innovative solutions tailored to the unique needs of operators, including managing their network workloads in the cloud,” Khalidi added. 

Anand Krishnamurthy, president and CEO of Affirmed Networks – who only became CEO earlier this month – said the company had delivered on its vision. “Working together, we have created a model for mobile networks of the future that is open, cloud-native and capable of being web-scale, all at 70% of the cost of traditional networks,” wrote Krishnamurthy. “We have been their partner of choice as they prepare for fifth generation (5G) networks and infrastructure.  

“Now, the combined technologies of Microsoft and Affirmed will further accelerate this momentous shift.” 

This move makes for an interesting comparison with what Amazon Web Services (AWS) is doing with its Wavelength project. The initiative is an edge play which embeds AWS’ compute and storage services on the edge of operators’ 5G networks, enabling the delivery of ultra-low latency applications. 

At re:Invent back in December, in what was seen as the biggest item of the main keynotes – or in other words, the last item – Verizon CEO Hans Vestberg joined AWS chief Andy Jassy on stage to discuss the collaboration between the two companies. Jassy noted that the most exciting applications to be ushered in, such as autonomous industrial equipment or applications for smart cities, can't tolerate that kind of latency.

“If you want to have the types of applications that have that last mile connectivity, but actually do something meaningful, those applications need a certain amount of compute and a certain amount of storage,” he said. “What [developers] really want is AWS to be embedded somehow in these 5G edge locations.” 

For Microsoft’s part, the company said it was looking at extending ‘deep, strong partnerships’ and ensuring interoperability to ensure cloud-based software-defined networking (SDN) fits into the 5G landscape. The company’s partnership with AT&T, beefed up in November, is seen by many in the industry to be a particularly interesting one in the space. 

Financial terms of the deal were not disclosed. You can read the full announcement of the acquisition here. 


Keep your foot on the gas: Maintaining momentum after your cloud migration

For a significant number of companies, beginning their cloud migration journey is hard. In spite of the greater scalability, flexibility, optimisation, and lower costs for big data in the cloud, organisations struggle to mobilise their teams to begin their cloud journey. Once they have successfully migrated their workloads to the cloud, however, many organisations assume the journey is finished.

Unfortunately, there are a number of operational and visibility challenges that exist on-premises which don’t disappear once workloads have been migrated. While the benefits of cloud migration are clear, it is frequently oversimplified and considerations such as application dependencies and system version mapping are not given due thought. As a result, costs begin to overrun through over-provisioning or production is delayed through provisioning gaps.

Even post-migration, issues can persist. Modern businesses are powered by data applications that rely on a myriad of platforms, which frequently creates problems in understanding, planning, optimising, and automating the performance of their data apps and infrastructure. These difficulties are compounded by the use of disparate technologies and siloed approaches to managing data applications and data infrastructure. Because most monitoring solutions lack end-to-end support for big data environments or full-stack compatibility, or require complex instrumentation, data teams need deep subject matter expertise to configure changes to applications or components. As a result, organisations can struggle to find teams skilled enough to deliver strong application performance, often resulting in poor user experience, inefficiencies and mounting costs as organisations buy more and more tools to resolve problems.

Quantifiably, organisations see a high Mean Time to Identify (MTTI) and Mean Time to Resolve (MTTR) for issues due to difficulties in understanding dependencies and retaining focus in root cause analysis. Data collection and correlation can be time-consuming when trying to gather granular cluster and application-specific runtime information, as well as metrics on infrastructure across platforms, application and system log data, configuration parameters, and other relevant data.

Moreover, resources using native Hadoop APIs will only send data while an application is executing, creating yet further complications. The lack of granularity and end-to-end visibility makes it impossible to remedy all of these problems, leaving businesses with little visibility of their data applications. Even once all this data has been collated, further difficulties arise in evaluating and interpreting it. Minor human errors, such as a missed configuration parameter, an incorrectly sized container, or a rogue stage of a Spark application, can completely cripple a data cluster and may be entirely missed.
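To make the "minor human errors" above concrete, here is a minimal, hypothetical sketch of the kind of automated sanity check a data team might run over a Spark configuration before submitting a job. It is not a real Spark or vendor API; the parameter names are standard Spark settings, but the thresholds and the audit function are illustrative assumptions only.

```python
# Illustrative sketch: heuristic checks that catch a missed
# configuration parameter or an incorrectly sized container before
# the job ever reaches the cluster. Memory values are assumed to be
# given in megabytes with an "m" suffix for simplicity.

def audit_spark_conf(conf: dict, node_memory_mb: int = 16384) -> list:
    """Return human-readable warnings for a Spark config dict."""
    warnings = []

    # A missed configuration parameter: serializer left at the default.
    if "spark.serializer" not in conf:
        warnings.append("spark.serializer not set; default Java "
                        "serialization is slow for shuffles")

    # An incorrectly sized container: executor memory plus overhead
    # exceeding what a single node can actually provide.
    exec_mem = int(conf.get("spark.executor.memory", "1024m").rstrip("mg"))
    overhead = int(conf.get("spark.executor.memoryOverhead", "384"))
    if exec_mem + overhead > node_memory_mb:
        warnings.append(f"executor memory {exec_mem}m + overhead "
                        f"{overhead}m exceeds node memory {node_memory_mb}m")

    return warnings

# An oversized executor with no serializer set triggers both warnings.
warnings = audit_spark_conf({"spark.executor.memory": "20480m"})
print(warnings)
```

A few dozen checks like these cannot replace end-to-end monitoring, but they illustrate why codifying configuration knowledge pays off: the errors are trivial to detect mechanically and expensive to discover in production.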

For enterprises that have only recently migrated, these myriad issues can give cause for doubt about their choice. However, it should be noted that cloud adoption is not a finite process with a clear start and end date; it is an ongoing lifecycle with four broad phases (planning, migration, operation, and optimisation). To ensure a painless and efficient cloud migration, each of these four phases needs to be given proportionate attention.

Broadly, in the planning phase, decisions need to be made about which applications are most suited to the cloud, what resources they require, which data sets need to be migrated, and whether permanent, transient, autoscaling, or spot instances should be used. During the migration and operation stages, there is a need for continuous monitoring of performance and costs, and assessment of critical dependencies and service mapping. Finally, once these workloads are in production in the cloud, it is time for data teams to consider how to optimise their applications and performance in order to guarantee SLAs.

A comprehensive approach to operational planning goes a long way in resolving the various challenges of managing big data technologies (both on-premises and in the cloud). With enough time and focus spent on each stage of the cloud adoption lifecycle, and by adhering to best practice, the benefits of cloud migration can be realised faster. The main thing for data teams to remember is not to take their foot off the gas: keep momentum up once they've moved to the cloud.

Photo by Shannon Lam on Unsplash


Slack to work with Microsoft on Teams integration


Bobby Hellard

27 Mar, 2020

Slack’s CEO Stewart Butterfield has revealed the comms platform is working on a call integration with its fierce rival Microsoft Teams.

The plan was announced during an investor webcast with RBC analyst Alex Zukin on Thursday.

Butterfield’s view is that the number of users for both services will dramatically grow over the next five years and it’s pointless fighting over what they have now. Teams has the edge currently, largely due to it coming bundled into Office 365, but Slack has become a popular choice for companies that use multiple cloud-based communication platforms.

The two companies have recently reported spikes in daily active users during the current COVID-19 pandemic, with Teams reporting 12 million more since the start of March.

“There are a lot of people who use both, and we’re working on a Teams integration for calling features,” Butterfield said. “I’m pretty sure that in 18 to 24 months time when we look back, that more people use Teams is not going to be relevant to us as a company.”

“95% of people who are going to be using this stuff five years from now have not started, so there is no point fighting over the relatively small percentage of customers we have.”

Slack has often been used in tandem with other services, particularly video calling software such as Google Hangouts and Zoom. Recently, Microsoft singled Zoom out as a potential business threat because of this.

However, Butterfield's comments suggest a less feisty rivalry between the two companies. Since Teams launched, the two have traded blows over user numbers, features and even marketing strategies.

In November, Slack accused Microsoft of copying its adverts, referring to the company as a "boomer". A few months prior, Microsoft banned its employees from using Slack, suggesting it was "not secure".

Hyperscale operators invest hard in data centres amid modest overall capex, says Synergy

In the land of the hyperscale cloud operators, all remains relatively rosy in the garden. New data from Synergy Research shows that hyperscale operator capex set a new record in the most recent quarter.

In total, more than $32 billion was laid down in the fourth quarter of 2019, beating the previous record set in Q4 2018. Synergy noted that a lot of spend – and strategy – was going into data centres. Capex specifically targeted at data centres grew 11% in 2019, a move which 'reflected ongoing strength in their core business operations', as the analyst firm put it.

The top five spenders, far ahead of the rest of the hyperscalers, are Amazon, Google, Microsoft, Facebook, and Apple, although the latter's spending dropped off sharply in 2019 to the detriment of the overall figures. True to the old adage that attack is the best form of defence, the top 20 companies analysed – including challengers Alibaba, IBM, Oracle and Tencent – generated $1.4 trillion between them in revenues for 2019, up 13% from 2018.

Yet the inevitable question around what will happen amid the ongoing Covid-19 pandemic is not too far away. John Dinsdale, a chief analyst at Synergy Research, said that while nothing was certain, hyperscale cloud players were on a surer footing than most.

“While there are many unknowns, what is clear is that the hyperscale operators generate well over 80% of their revenues from cloud, digital services and online activities,” said Dinsdale. “The radical shifts we are seeing in social and business behaviour will actually provide some substantive tailwinds for many of these businesses.

“These hyperscale firms are much better insulated against the current crisis than most others and we expect to see ongoing robust levels of capex,” added Dinsdale.

Many of the leading cloud players are in a position to funnel some of their resources into combating Covid-19, as well as provide free services to those researching and working in healthcare. Amazon Web Services is committing $20 million to customers working on diagnostics solutions, while Microsoft, Google, Alibaba and others have offered free products to healthcare professionals.


Microsoft puts Windows development on lockdown


Keumars Afifi-Sabet

25 Mar, 2020

Microsoft will no longer release non-essential updates to its line of Windows operating systems due to disruption caused by the coronavirus outbreak.

From May 2020, businesses will only receive the most important critical security updates for a swathe of Windows systems, from the recently published Windows 10 version 1909 through to Windows Server 2008 SP2.

Work on category C and D cumulative updates, which are optional preview releases issued in the third and fourth weeks of the month, has been put on hold due to “challenges” posed by the pandemic, the company said.

These updates are issued so Windows users can test tweaks and fixes before these are bundled into the next Patch Tuesday releases, where they’re designated category B.

“We have been evaluating the public health situation, and we understand this is impacting our customers,” an announcement reads.

“In response to these challenges we are prioritizing our focus on security updates. Starting in May 2020, we are pausing all optional non-security releases (C and D updates) for all supported versions of Windows client and server products (Windows 10, version 1909 down through Windows Server 2008 SP2).”

The monthly Patch Tuesday security updates will continue to be published as normal, Microsoft added.

This is to ensure that organisations can continue to carry out business operations as smoothly as possible, and that they’re protected from any serious bugs or security threats.

The timing and schedule of the suspension of work suggests the company is late into its development cycle for updates set to be released in April. The announcement also suggests Microsoft expects the disruptive effects of the COVID-19 outbreak on development work to continue for some time.

It comes just days after the company said it would be pausing development work on version 81 of its Edge browser, itself a response to Google pausing its own development work on Chromium.

Coronavirus has already had a sizeable impact on businesses of all stripes and in all sectors. While the tech sector hasn’t been as severely hit as companies in the services industry, entire workforces have shifted to remote working patterns, and a host of development projects have been put on hold.

Waste not, want not: How enterprises can avoid an idle cloud estate

The cloud lies at the heart of digital transformation. In the 2019 European Insight Intelligent Technology Index (ITI), 42% of IT decision makers deemed it one of the most critical technologies for their digital innovation initiatives. As a result, organisations are investing heavily in the cloud to drive their projects forward, spending an average of £29.48m per year.

However, it’s clear that businesses are investing without a solid strategy in place; 30% of that spend goes on services that are not utilised, resulting in £8.8m wasted each year. So the question is, why do enterprises end up with so much cloud waste, and how can they prevent it?

Failing to plan is planning to fail

The seeds of idle cloud estates are often planted in the planning stage. In fact, 39% of ITI respondents say at least some of their under-utilisation stems from issues around planning and allocating budgets for cloud consumption, while 44% place part of the blame on trying to determine whether public, private or hybrid cloud is the best place to host applications and workloads. Without a clear view of what they want to achieve, and what resources they need to do so, organisations will ultimately be setting themselves up to fail.

To avoid this, the business needs to understand exactly what it wants to achieve from moving to the cloud – whether that is agility, scalability or cost reduction. This means acknowledging the ways in which the cloud can help to transform the business and meet its goals, and knowing exactly what services are needed to support the business’s goals.

With this understanding in place, the organisation can choose cloud services appropriately. For instance, a service bundle might seemingly offer better value for money than buying services individually. Yet if a large proportion of that bundle isn’t used, buying a smaller number of individual services might still be the more cost-effective option.
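The bundle-versus-individual trade-off above comes down to simple arithmetic on utilisation. A hypothetical worked example, with made-up prices chosen only to illustrate the point:

```python
# Hypothetical figures: a 10-service bundle vs buying only the six
# services the business actually uses.
bundle_price = 1000        # monthly price for the 10-service bundle
individual_price = 150     # monthly price per service bought alone
services_used = 6          # services the business actually consumes

individual_total = services_used * individual_price  # 6 * 150 = 900

# The bundle's headline per-service price (100/month) looks cheaper
# than 150/month, but 40% of the bundle goes unused, so buying
# individually is the more cost-effective option here.
print(f"bundle: {bundle_price}/month, individual: {individual_total}/month")
```

The deciding variable is utilisation: in this sketch, the bundle only wins once the business uses seven or more of its ten services.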

Losing control

Even if it plans perfectly, the enterprise still needs full visibility and control over its cloud environments. According to the ITI, 36% of enterprises said a lack of visibility into used services was leading to cloud waste. Without the right tools, organisations risk losing the visibility and control they require.

In order to minimise waste, organisations need to be able to manage these dynamic cloud environments, where applications and infrastructure can be spun up, spun down or moved quickly and easily. This also has to stretch to the new approaches and technologies the cloud allows, such as microservices, containers, serverless computing and DevOps.

Understanding the exact demands of cloud architecture, and how well existing tools can meet them, is an essential element to remaining in control. If legacy tools don’t provide the level of support a new cloud environment demands, the business can easily find itself continually playing catch-up as it attempts to plug holes in its capabilities – inevitably adding extra costs and missing opportunities to economise. Understanding legacy limitations, and having the right tools in place in the beginning, can help prevent a great amount of waste.

It’s also about the people

Having the right skills in place is also a critical element of reducing wasted spending in the cloud. This doesn’t just mean being able to use the tools needed to manage cloud environments. It also means having employees who understand exactly how the cloud differs from legacy environments, and won’t make costly assumptions in procurement and management. At the same time, with legacy technology still fulfilling a crucial role in most organisations, the business can’t simply focus on acquiring cloud skills at the expense of its existing experience. Otherwise any reduced waste in the cloud could easily be cancelled out by losses elsewhere.

The first step for any organisation looking to close its skills gap should be auditing existing skills to understand precisely what its current teams are capable of. It can then identify where the gaps are and plan to train or hire employees to ensure it has all the capabilities the business needs.

A solid cloud strategy is key

For organisations to truly drive enterprise-wide agility, innovate faster, and modernise the way they work, they need to rethink their approach to cloud and treat it like any other IT project. This means planning and allocating budget, having end-to-end visibility, and putting the right skills in place. Most importantly, it means knowing exactly what the business wants to achieve from the cloud and how the cloud will meet those goals, as this will give the ultimate judgement of whether any investment is wasted.


How AI is bringing a new dimension to software testing

Software testing teams analyse and correct thousands of lines of code on a daily basis to ensure the final product is free of errors. However, on-demand customers expect software to be comprehensive in functionality and delivered with precision and speed. Current software testing procedures are not scalable enough to meet these needs, nor are they cost- or time-efficient in the digital economy.

As products become more complex to create, the code becomes more challenging to test accurately. Manual testing exposes development teams to many challenges—code changes causing errors elsewhere in the product, the considerable length of regression testing cycles, resourcing constraints of hiring skilled software testers to meet demand, and more.

While the current practices of agile and DevOps increase the pace of software development, meeting near-future market needs requires the power of predictive technologies to enhance traditional software testing solutions.

Artificial intelligence (AI) and machine learning (ML) provide a dynamic framework to predict and solve code writing errors before they appear. The more data patterns ML analyses, the more processes and self-adjustments it can make based on those learned patterns. This continuous delivery of insights increases in value with the "intelligence" of the technology. AI has enormous potential to reshape software development. When properly leveraged, AI solutions drive efficiency, optimise processes, and enhance experiences.

Let’s take a closer look at some of the key advantages of implementing AI/ML for testing software during the development process:

Automating and accelerating the testing process

Deploying AI/ML in the software testing process is not about replacing human testers; rather, the technology works in collaboration with humans to make the software development lifecycle more efficient and productive. Software companies utilise the skills of AI/ML experts to apply technology solutions that operate in conjunction with, and complement, the traditional software testing processes and solutions already in place.

AI can automate and reduce the number of routine tasks in development and testing, beyond the limitations of traditional test automation tools. Software companies can train AI algorithms to instantly recognise, capture, and analyse large amounts of data sets to expedite the testing process because speed, cost, and efficiency are vital when it comes to testing software codes.

For example, a traditional test tool analyses tests without discernment, running every possible test available. AI can add significant value and efficiency by reviewing the current test status, recent code changes, and other code markers, deciding which tests to run, and executing them. This allows for scalable and efficient decision-making, freeing software engineers to spend time on more complex and strategic tasks.
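The change-aware selection described above can be sketched in a few lines. This is a deliberately simplified, hypothetical heuristic, not any vendor's algorithm: tests are scored by overlap with recently changed files plus their recent failure history, and only the top candidates run. A production system would mine the coverage mapping and failure history from CI logs, often feeding them to a learned model rather than a fixed score.

```python
# Minimal sketch of change-aware test selection. All names and data
# (test list, coverage mapping, failure history) are illustrative.

def select_tests(tests, changed_files, history, budget=2):
    """Rank tests against a change set and return the `budget` best ones."""
    def score(test):
        # Tests that exercise changed files matter most...
        overlap = len(set(test["covers"]) & set(changed_files))
        # ...and historically failing tests get a smaller boost.
        recent_failures = history.get(test["name"], 0)
        return overlap * 10 + recent_failures

    ranked = sorted(tests, key=score, reverse=True)
    return [t["name"] for t in ranked[:budget]]

tests = [
    {"name": "test_auth",    "covers": ["auth.py"]},
    {"name": "test_billing", "covers": ["billing.py", "auth.py"]},
    {"name": "test_ui",      "covers": ["ui.py"]},
]

# A change to auth.py selects the two tests that cover it, preferring
# the one with recent failures, and skips the unrelated UI test.
selected = select_tests(tests, changed_files=["auth.py"],
                        history={"test_billing": 3})
print(selected)  # ['test_billing', 'test_auth']
```

Even this crude scoring shows where the speed-up comes from: the suite shrinks from every test to only those plausibly affected by the change, which is the same lever an ML-driven selector pulls with far richer signals.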

Removing bugs

Bugs naturally occur during software development, posing a major pain point for software testing teams. Software companies can use AI and ML algorithms across the company’s library to flag coding mistakes and discover bugs before developers include them in the code. This application of AI algorithms can help development teams save a significant amount of time and resources by not having to manually find and address bugs. Ultimately, AI can also help software companies to decide whether further coding changes are required to prevent program errors.
