Winning the IT availability war: How to combat costly downtime

Analysts predict global enterprises will spend nearly $2 trillion on digital transformation by 2022. With digital initiatives and technology now ubiquitous in business, one would think that companies would be more than ready for a world where virtually every touchpoint with customers is digital. Unfortunately, the fact that Target, British Airways, Facebook and Twitter all experienced major IT outages in 2019 suggests there is still work left to do to keep services and an optimal customer experience up and running smoothly.

To explore precisely what enterprises are doing to detect, mitigate and hopefully prevent outages, LogicMonitor commissioned an IT Outage Impact Study. The independent study surveyed 300 IT decision makers at organisations in the US, Canada, UK, Australia and New Zealand to discover whether or not IT leadership is concerned about “keeping the lights on” for their businesses. The research revealed a stark reality at odds with today’s omnipresent digitisation: IT teams are concerned about their ability to avoid costly outages, mitigate downtime, and reliably provide the 24/7 availability that customers and partners demand.

Are outages inevitable?

IT teams worldwide agree on two things: performance and availability are top priorities for their departments. These two mission-critical priorities, in fact, beat out security and cost, which is surprising considering how much attention security gets in today’s data-breach-heavy environment.

Yet IT’s intense focus on keeping the network up and running at peak performance has not prevented downtime. In fact, 96% of survey respondents report experiencing at least one IT outage in the past three years, which is bad news if performance and availability are considered make or break areas for modern organisations.

Common causes of downtime include network failure, surges in usage, human error, software malfunction and infrastructure failure. What is surprising, however, is that enterprises report that more than half of the downtime they experience could have been prevented.

Worryingly, IT decision makers are pessimistic when it comes to their ability to influence all-important availability. More than half (53%) of the 300 IT professionals surveyed say they expect to experience a brownout or outage so severe that the national media will cover the story, and the same percentage said someone in their organisation will lose his or her job as a result of a severe outage.

This raises the question: if even the most skilled technical experts in IT can’t prevent outages, who (or what) can?

The true costs of downtime

Negative media coverage and career impacts aside, downtime comes with additional costs for organisations. Survey respondents identify lost revenue, lost productivity and compliance-related costs as other factors associated with IT outages and brownouts (periods of dramatically reduced or slowed service). And these costs add up quickly. Organisations with frequent outages and brownouts experience:

  • 16 times higher costs associated with mitigating downtime than organisations with few or zero outages
  • Nearly two times the number of team members to troubleshoot problems related to downtime
  • Two times as long to troubleshoot problems related to downtime

How to win the availability war

If more than half of outages and brownouts are avoidable, according to 300 global IT experts, then every organisation should be taking proactive steps to prevent these disruptive events. The best-performing organisations are already working to prevent costly downtime. Consider taking the following actions to do the same:

  • Embrace comprehensive monitoring. In today’s digital world, many companies operate in a hybrid IT environment with infrastructure both on-premises and in the cloud. Trying to spot trends using siloed monitoring tools for each platform is inefficient and prone to error.

    Identify and implement software that comprehensively monitors infrastructures, allowing the team to view IT systems through a single pane of glass. Consider extensibility and scalability during the selection process as well to ensure the platform integrates with all technologies – present and future
     

  • Use a monitoring solution that provides early visibility into trends that could signify trouble ahead. Data forecasting can proactively identify future failures and ultimately prevent an outage before it impacts the business (a minimal forecasting sketch follows this list). Teams should build a high level of redundancy into their monitoring systems as an additional method to prevent downtime and focus on eliminating single points of failure that might cause a system to go down
     
  • Don’t wait to create an IT outage response plan. Hopefully it will never be needed, but it’s critical to have a defined process for handling outages, from escalation and remediation to communication and root cause analysis. Set a plan for who to involve (and when) to ensure IT can respond quickly if an outage does occur
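To make the forecasting idea concrete, here is a minimal sketch of the kind of trend analysis a monitoring platform might run behind the scenes: fit a straight line to recent capacity samples (hypothetical daily disk-usage percentages in this example) and estimate how many days remain before a threshold is crossed, so the team can act before the resource fills up and takes a system down.

```python
# Minimal sketch of trend-based capacity forecasting.
# The data and the 90% threshold are hypothetical; a real monitoring
# platform would pull metrics from its time-series store and use
# richer models (seasonality, anomaly detection, multiple signals).
import numpy as np

def days_until_threshold(usage_pct, threshold=90.0):
    """Fit a linear trend to daily usage samples (percent full) and
    estimate how many days remain before the threshold is crossed."""
    days = np.arange(len(usage_pct))
    slope, intercept = np.polyfit(days, usage_pct, 1)
    if slope <= 0:
        return None  # usage is flat or shrinking; nothing to forecast
    crossing_day = (threshold - intercept) / slope
    return max(crossing_day - days[-1], 0.0)

# Example: two weeks of daily disk-usage samples for one volume
samples = [61, 62, 62, 64, 65, 67, 68, 70, 71, 73, 74, 76, 77, 79]
eta = days_until_threshold(samples)
if eta is not None and eta < 14:
    print(f"Warning: volume forecast to hit 90% in ~{eta:.0f} days")
```

The model is crude by design, but the principle is the one described above: extrapolate the trend and raise the alert well before the failure, rather than after it.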

While LogicMonitor’s 2019 IT Outage Impact Study revealed that downtime is surprisingly common, it also showed that top-performing organisations are able to banish downtime from their day-to-day operations through advance planning and comprehensive monitoring software. In the end, it is possible to win the IT availability war with the right combination of skilled team members and powerful SaaS monitoring technology. But every minute of downtime is pricey – so there’s no time to waste.

Read more: Most outages can potentially be avoided, argues IT – yet the business side is pessimistic


View from the airport: AWS Re:Invent 2019


Bobby Hellard

6 Dec, 2019

Gather round kids, because AWS Re:Invent is over for another year and this is now a safe space to talk about all those subjects that were off-limits during the cloud conference.

Namely multi-cloud and Project JEDI.

Having safely left Las Vegas via McCarran International Airport, I feel I can finally discuss these hot topics. I can say what I want about Microsoft winning the Pentagon’s $10 billion cloud migration project (AWS is appealing that) and also anything about the term for using services from more than one cloud provider.

The multi-cloud discourse had no place at Re:Invent. It was definitely not mentioned by any AWS executives. But the topic is so hot right now that it almost burns Amazon, precisely because the company refuses to acknowledge it.

The very word is banned, if recent reports are true. Back in August, the cloud giant released a “co-branding” guide for partners that said it would not approve the term “multi-cloud” or “any other language that implies designing or supporting more than one cloud provider”.

Unfortunately for AWS, that word was brought up in various briefings I attended throughout the event, such as at the end of a Sophos security roundtable, where journalists were invited to ask questions and clearly took pleasure in posing difficult ones like “what’s your beef with multi-cloud?” and “how much did it suck to lose out to Microsoft?”

Ouch.

Many companies have gone ‘all-in’ on AWS, but many more have merely added a single service or integrated certain products to offer their customers – regardless of their cloud provider(s). This is the case for Sophos, which had to confess to offering cloud security for multi-cloud environments.

It was a strange end to an otherwise brilliant session. Sophos is a great company with a broad range of security expertise, but AWS’ attempts to quash talk of multi-cloud force its partners to tackle the difficult question.

“Our products do – sorry to my AWS brothers – support multiple clouds…” Sophos director Andy Miller said while the room collectively cringed. “We work in AWS and Azure and Google.”

The Seattle Seahawks, which has integrated AWS machine learning algorithms into various parts of its organisation, found itself in the same position. From player performance to business processes, the technology will run throughout the NFL franchise over the next five years. But as its tech lead, Chip Suttles, told me, it also uses Office 365 and Azure.

Similarly, when I asked the platform team lead of Monzo, Chris Evans, why his company uses AWS, he told me it was partly due to Amazon having the technology they needed at the time. If it started now, they could use Azure, Google Cloud and so on… he even used that dirty word ‘multi-cloud’. Evans and co are more than happy with what AWS gives them and it’s also a large part of the fintech company’s success, but nothing suggests that wouldn’t be the same with different providers.

AWS may be the biggest provider of cloud computing, but its massive lead over its rivals is due to it being the first to capitalise when cloud was an emerging technology. The big trend within that technology now is using multiple providers, and AWS is strangely taking a legacy-like approach. It wants you and your company to use it exclusively, but no matter how many great tools and functions it provides, it can’t please everyone. In the world of multi-cloud, AWS is fighting a losing battle.

On Thursday, Andy Jassy was pencilled in for a Q&A session. Journalists were asked to submit questions beforehand, presumably to vet them, and the word on the street (or strip) was that JEDI questions had been sent in. Unfortunately, we will never know what these were, as Jassy didn’t take any questions and instead conducted an interview with Roger Goodell, the commissioner of the NFL.

One assumes that after his three-hour keynote and various appearances throughout the week, Jassy saw a list of multi-cloud and JEDI questions and thought, “Nah, I’ll pass”.

How the AWS and Verizon re:Invent partnership shows the way forward for telcos and cloud providers

At the main AWS re:Invent keynote, the biggest headline is usually saved until the end. Last year, it was the announcement of AWS Outposts, with VMware CEO Pat Gelsinger taking to the stage to join AWS chief Andy Jassy.

This time around it was Verizon, whose CEO Hans Vestberg joined Jassy to announce a partnership to deliver cloud and edge computing souped up with 5G connectivity. The move is also a showcase for AWS Wavelength, which is a major edge play: embedding compute and storage services on the edge of operators’ 5G networks, enabling the delivery of ultra-low latency applications.

Vestberg pointed to the ‘eight currencies’ Verizon believes in for 5G; a message first put out at CES at the start of this year, and one which goes far beyond speed and throughput, the only primary capabilities of 4G. “The most important [aspect] is when you can slice this and give them to individuals and applications; you have a transformative technology that’s going to transform consumer behaviour, transform businesses, transform society,” he said.

For the ‘builders’ – developers who form such a key part of the re:Invent audience base – this promise of 5G, encapsulating lower latency, mobility and connectivity, is vital for the applications they are creating. Yet the journey for the data being transmitted is arduous; going from the device to the cell tower, to the aggregation site, to the Internet, and to the cloud provider, before going back.

As Jassy noted, the most exciting applications to be ushered in, such as autonomous industrial equipment, or applications for smart cities, can’t wait that long. “If you want to have the types of applications that have that last mile connectivity, but actually do something meaningful, those applications need a certain amount of compute and a certain amount of storage,” he said. “What [developers] really want is AWS to be embedded somehow in these 5G edge locations.”

Hence this AWS and Verizon collaboration – which Jassy noted had been in the works for around 18 months. “In placing AWS compute and storage services at the edge of Verizon’s 5G Ultra Wideband network with AWS Wavelength, AWS and Verizon bring processing power and storage physically closer to 5G mobile users and wireless devices, and enable developers to build applications that can deliver enhanced user experiences like near real-time analytics for instant decision making, immersive game streaming, and automated robotic systems in manufacturing facilities,” the companies noted in the press materials.
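For the builders, the promise is that this looks like ordinary AWS: a Wavelength Zone appears as extra capacity attached to a parent region, so placing the latency-sensitive tier at the edge is, in principle, a matter of creating a subnet in that zone and launching instances into it. The boto3 sketch below is illustrative only; the zone name, VPC ID, AMI and instance type are hypothetical placeholders, and a production deployment would also involve opting in to the zone and routing mobile traffic through a carrier gateway.

```python
# Illustrative sketch: place an application tier in a 5G edge
# (Wavelength) zone by creating a subnet there and launching into it.
# All identifiers (zone name, VPC, AMI, instance type) are hypothetical
# placeholders, not values taken from the announcement.
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

# Create a subnet pinned to the edge zone attached to the parent region
subnet = ec2.create_subnet(
    VpcId="vpc-0123456789abcdef0",               # existing VPC in the region
    CidrBlock="10.0.8.0/24",
    AvailabilityZone="us-east-1-wl1-bos-wlz-1",   # example Wavelength Zone name
)["Subnet"]

# Launch the latency-sensitive tier into that subnet; the APIs, console
# and tooling are the same ones used in the parent region.
ec2.run_instances(
    ImageId="ami-0123456789abcdef0",
    InstanceType="t3.medium",
    MinCount=1,
    MaxCount=1,
    SubnetId=subnet["SubnetId"],
)
```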

The move also evidently strengthens the relationship between Verizon and AWS, whose lines of business are now clearly demarcated.

As industry watchers will recall, in 2011, when cloud infrastructure was still nascent, Verizon acquired early pioneer Terremark. The company said at the time the move would ‘clear the way for Verizon to lead the rapidly evolving global managed IT infrastructure and cloud services market.’ The telco’s efforts to become a cloud provider in its own right fell flat, with the business eventually being sold off to IBM. As Synergy Research’s John Dinsdale put it to this reporter back in 2016, ‘the speed of cloud market development and the aggressiveness of the leading cloud providers largely left [telcos] behind.’

The thinking has since changed. Eighteen months ago – around the time the two companies started consulting on the edge and 5G partnership – Verizon moved to AWS as its preferred public cloud provider, migrating more than 1,000 of its business-critical applications and backend systems.

Today, the much-derided ‘telco cloud’ is about partnerships and making the most of both sides’ assets. AT&T announced deals with IBM and Microsoft on successive days in July, in a move which raised eyebrows in the industry – and according to Nick McQuire, VP enterprise at CCS Insight, the idea is finally beginning to bear fruit.

“The announcements, above all, are about developers,” said McQuire. “For 5G to meet the enormous hype and expectation surrounding it this year, operators are now desperate to woo developers to the platform to create 5G applications which at the moment are very thin on the ground.

“AWS has the cloud, edge computing and IoT assets – some of the best in the market – and it also has developers, so it’s no surprise it’s pushing into this area and partnering with leading telcos.”

Read more: AWS re:Invent 2019 keynote: ML and quantum moves amid modernisation and transformation message

Picture credit: Amazon Web Services/Screenshot


Google founders Larry Page and Sergey Brin step down from Alphabet management


Connor Jones

4 Dec, 2019

Google founders Larry Page and Sergey Brin are stepping aside from their leadership roles at Google’s parent company, Alphabet, bringing to an end an unprecedented reign at the helm of one of the most influential companies in history.

The iconic Silicon Valley duo will remain as board members, shareholders and overall “proud parents” of the companies they have founded and led since starting the search engine giant in a California garage in 1998.

Page and Brin said they wanted to simplify the management structure of the tech giant, adding there was no need to have two CEOs and a president of the same company.

Google’s Sundar Pichai, who joined the company in 2004 and was appointed CEO in 2015, will assume the CEO role of both Google and Alphabet.

“I will continue to be very focused on Google and the deep work we’re doing to push the boundaries of computing and build a more helpful Google for everyone,” said Pichai. “At the same time, I’m excited about Alphabet and its long term focus on tackling big challenges through technology.”

“While it has been a tremendous privilege to be deeply involved in the day-to-day management of the company for so long, we believe it’s time to assume the role of proud parents – offering advice and love, but not daily nagging,” said Page and Brin in a joint letter.

Alphabet, the multinational conglomerate holding company which houses Google and a number of other ventures, including DeepMind, was created in 2015 and replaced Google as the publicly traded company.

Shortly after, Pichai replaced Page as CEO at Google following months of rumours that he would be the next man for the job. Pichai previously held positions at Google such as product chief and head of Android prior to assuming the top job at the company.

Among other notable successes, Pichai’s reign at Google has seen the company invest heavily in green energy. Google Cloud said in 2018 it runs entirely on green energy and that the company has invested billions in building a variety of green datacentre facilities across the world, including locations in Finland and Denmark.

Google has also been embroiled in controversy throughout 2019. Most recently, the EU announced it plans to launch an investigation into the company’s data collection practices. The UK’s competition watchdog also announced it will be investigating Google’s £2 billion acquisition of data analysis firm Looker.

How smart cybersecurity solutions are increasingly powered by AI and ML

Now that data breaches are more common, 'digital trust' is a top priority for the C-level leaders who build and maintain the IT infrastructure for digital transformation. Moreover, for most organisations, losing digital trust can have a significant impact on brand reputation and the bottom line.

Artificial intelligence (AI) and machine learning (ML) have been adopted for their automation benefits, from predictive outcomes to advanced data analytics. AI-based cybersecurity can augment the capabilities of IT staff and help organisations deflect cyber threats, according to the latest market study by Frost & Sullivan.

AI and ML market development

In particular, AI and ML are now used widely across the cybersecurity industry, by both the hacking and security communities, making the security landscape even more sophisticated. Many organisations, regardless of size, are now facing greater challenges in day-to-day IT security operations.

Many of them indicate that the cost of threat management, particularly threat detection and response, is too high. Meanwhile, AI-driven attacks have increased in number and frequency, requiring security professionals to have more advanced, smart and automated technologies to combat these automated attacks.

With digital transformation a priority for a majority of enterprises today, there is a proliferation of connected devices, offering customers convenience, efficient services and better experiences. However, this connectivity also increases the potential risk of cyberattacks for enterprises and users.

Cybercriminals are also using more sophisticated methods to attack organisations. These include polymorphic malware, AI and other automated techniques. Enterprises are struggling with a lack of trained staff and cybersecurity expertise to counter the more sophisticated attacks.

These increasing challenges in security operations point to the need for a smarter, more adaptable, automated and predictive security strategy. Security companies are increasingly developing their own AI and ML algorithms to strengthen their competitiveness, using them to power security products and augment the capabilities of enterprises’ existing IT and cybersecurity staff.

AI and ML are being incorporated into all stages of cybersecurity, enabling enterprises to adopt a smarter, more proactive and automated approach to cyber defense – from threat prevention and protection, through threat detection and hunting, to threat response and predictive security strategies.
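As one deliberately simplified illustration of the detection piece, an unsupervised model can be trained on 'normal' telemetry and used to flag outliers for analysts to triage. The sketch below uses scikit-learn's IsolationForest on invented feature vectors; it is a generic example of the approach, not a description of any particular vendor's product.

```python
# Toy illustration of ML-assisted threat detection: train an
# unsupervised model on baseline telemetry and flag anomalous events.
# Feature values are invented; a real pipeline would engineer features
# from logs, network flows or endpoint data and tune the model.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(42)

# Baseline "normal" behaviour: [logins/hour, MB sent out, distinct hosts contacted]
normal = rng.normal(loc=[5, 20, 3], scale=[1, 5, 1], size=(500, 3))

model = IsolationForest(contamination=0.01, random_state=0)
model.fit(normal)

# New observations: one looks ordinary, one moves far more data
new_events = np.array([
    [6, 22, 3],      # ordinary activity
    [40, 900, 60],   # burst of logins and a large outbound transfer
])
labels = model.predict(new_events)   # 1 = inlier, -1 = anomaly
for event, label in zip(new_events, labels):
    if label == -1:
        print("Flag for analyst review:", event)
```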

While technology startups have been the most proactive in introducing multiple AI-enabled security offerings into the market, larger IT vendors have also incorporated AI and ML into their existing enterprise security solutions.

Outlook for AI and ML applications growth

"With cybersecurity solutions powered by AI capabilities, vendors can better support enterprises and their cybersecurity teams with less time and manpower investment and higher efficiency to identify the cybersecurity gaps," said Amy Lin, industry analyst at Frost & Sullivan.

Key AI and ML market trends for cybersecurity include:

  • Embracing and incorporating AI-enabled capabilities into existing solutions to intensify the competitive advantage
  • Supporting a more holistic cybersecurity framework from detection to response and further prediction
  • Assisting cybersecurity expert teams on operations with lower false-positive rates and enhancing their ability to react


AWS ramps up SageMaker tools at Re:Invent


Bobby Hellard

4 Dec, 2019

AWS CEO Andy Jassy announced a barrage of new machine learning capabilities for AWS SageMaker during his Re:Invent keynote on Tuesday.

SageMaker is Amazon’s machine learning hub, which aims to remove most of the heavy lifting for developers and let them use ML more expansively. Since its launch in 2017, numerous features and capabilities have been introduced, with more than 50 added in 2019 alone.

Of the SageMaker announcements made at the company’s annual conference in Las Vegas, the biggest was AWS SageMaker Studio, an IDE that allows developers and data scientists to build, code, train and tune machine learning workflows in a single interface. Within the studio, information can be viewed, stored, collected and used to collaborate with others.

In addition to SageMaker Studio, the company announced a further five new capabilities: Notebooks, Experiment Management, Autopilot, Debugger and Model Monitor.

AWS SageMaker Studio interface

The first of these is described as a ‘one-click’ notebook with elastic compute.

“In the past, Notebooks is frequently where data scientists would work and it was associated with a single EC2 instance,” explained Larry Pizette, the global head of the ML Solutions Lab. “If a developer or data scientist wanted to switch capabilities, so they wanted more compute capacity, for instance, they had to shut that down and instantiate a whole new notebook.

“This can now be done dynamically, in just seconds, so they can get more compute or GPU capability for doing training or inference, so it’s a huge improvement over what was done before.”

All of the updates to SageMaker share a specific purpose: simplifying machine learning workflows. Experiment Management, for example, enables developers to visualise and compare ML model iterations, training parameters and outcomes.

Autopilot lets developers submit simple data in CSV files and have ML models automatically generated. SageMaker Debugger provides real-time monitoring for ML models to improve predictive accuracy and reduce training times.

And finally, Amazon SageMaker Model Monitor detects concept drift to discover when the performance of a model running in production begins to deviate from the original trained model.

“We recognised that models get used over time and there can be changes to the underlying assumptions that the models were built with – such as housing prices which inflate,” said Pizette. “If interest rates change it will affect the prediction of whether a person will buy a home or not.”

“The model is initially built to keep statistics, so it will notice what we call ‘concept drift’. If that concept drift is happening and the model gets out of sync with the current conditions, it will identify where that’s happening and provide the developer or data scientist with the information to help them retrain and retool that model.”
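Pizette’s description boils down to a statistics check: compare the distribution of incoming production data against the baseline captured at training time and raise an alert when the two diverge. The sketch below illustrates that idea with a simple two-sample Kolmogorov-Smirnov test per feature; it is a generic example of drift detection, not the mechanism SageMaker Model Monitor actually uses internally, and the feature names and figures are invented.

```python
# Generic sketch of concept-drift detection: compare live feature
# distributions against the training-time baseline and alert on drift.
# Illustrative only -- not SageMaker Model Monitor's implementation.
import numpy as np
from scipy.stats import ks_2samp

def drifted_features(baseline, live, feature_names, p_threshold=0.01):
    """Return features whose live distribution differs significantly
    from the baseline, according to a two-sample KS test."""
    flagged = []
    for i, name in enumerate(feature_names):
        stat, p_value = ks_2samp(baseline[:, i], live[:, i])
        if p_value < p_threshold:
            flagged.append((name, stat))
    return flagged

rng = np.random.default_rng(0)
# Baseline captured at training time: [interest_rate %, house_price]
baseline = rng.normal([3.5, 300_000], [0.5, 50_000], size=(1000, 2))
# Conditions have shifted since training: rates up, prices inflated
live = rng.normal([5.0, 380_000], [0.5, 60_000], size=(200, 2))

for name, stat in drifted_features(baseline, live, ["interest_rate", "house_price"]):
    print(f"Drift detected in {name} (KS statistic {stat:.2f}) - consider retraining")
```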

Verizon unveils 5G edge compute service at Re:Invent


Bobby Hellard

4 Dec, 2019

AWS and Verizon have partnered to deliver cloud computing services at the edge using 5G connectivity.

The deal will see Amazon’s cloud processing brought closer to mobile devices at the edge thanks to Verizon’s 5G Ultra Wideband Network and AWS Wavelength.

Speaking during an AWS keynote on Tuesday, Verizon’s CEO Hans Vestberg said that his company was “the first in the world to offer 5G network edge computing”.

However, this announcement comes a week after Microsoft and AT&T revealed their own integrated 5G edge computing service on Azure.

AWS and Verizon are currently piloting AWS Wavelength on Verizon’s edge compute platform, 5G Edge, in Chicago for a select group of customers, including video game publisher Bethesda Softworks and the NFL. Additional deployments are planned in other locations across the US for 2020.

“We’ve worked closely with Verizon to deliver a way for AWS customers to easily take advantage of the ubiquitous connectivity and advanced features of 5G,” said AWS CEO Andy Jassy.

“AWS Wavelength provides the same AWS environment – APIs, management console, and tools – that they’re using today at the edge of the 5G network. Starting with Verizon’s 5G network locations in the US, customers will be able to deploy the latency-sensitive portions of an application at the edge to provide single-digit millisecond latency to mobile and connected devices.”

The aim is to enable developers to deliver a wide range of transformative, latency-sensitive use cases such as smart cars, IoT and augmented and virtual reality, according to AWS. The service will also be coming to Europe via Vodafone sometime in 2020.

“Vodafone is pleased to be the first telco to introduce AWS Wavelength in Europe,” said Vinod Kumar, CEO of Vodafone Business. “Faster speeds and lower latencies have the potential to revolutionise how our customers do business and they can rely on Vodafone’s existing capabilities and security layers within our own network.”

AWS re:Invent 2019 keynote: ML and quantum moves amid modernisation and transformation message

“If you wake up on a Casper mattress, work out with a Peloton before breakfast, Uber to your desk at a WeWork, order DoorDash for lunch, take a Lyft home, and get dinner through Postmates,” wrote The Atlantic’s Derek Thompson in October, “you’ve interacted with seven companies that will collectively lose nearly $14 billion this year.”

It is a well-worn line, and as WeWork’s collapse showed, there is plenty of pushback when it comes to the gig economy champions. Yet at the start of his re:Invent keynote today, Amazon Web Services (AWS) CEO Andy Jassy cited Uber, Lyft and Postmates, as well as Airbnb, as examples of the overall keynote theme around transformation. “These startups have disrupted longstanding industries that have been around for a long time from a standing start,” said Jassy.

An eyebrow-raising opening, perhaps. Yet, backed by the re:Invent band once more with half a dozen songs ranging from Van Halen to Queen – AWS has heard of the former even if Billie Eilish hasn’t – the rationale was straightforward. If you’re making a major transformation, then you need to get your ducks in a row; senior leadership needs to be on board, with top-down aggressive goals and sufficient training.

“Once you decide as a company that you’re going to make this transition to the cloud, your developers want to move as fast as possible,” said Jassy. This begat the now-standard discussion around the sheer breadth of services available to AWS customers – more than 175 at the most recent count – with Jassy noting that certain unnamed competitors were ‘good at being checkbox heroes’ but little else.

This was not the only jibe the AWS chief exec landed on the opposition. From transformation, another key element for discussion was around modernisation. This was illustrated by a ‘moving house’ slide (below) which was self-explanatory in its message. Jassy took extra time to point out the mainframe and audit notices. While IBM and particularly Oracle have been long-term targets, the Microsoft box is an interesting addition. Jassy again noted AWS’ supremacy with regard to Gartner’s IaaS Magic Quadrant – adding the gap between AWS and Microsoft was getting bigger.

Last year, the two big headlines were around blockchain and hybrid cloud. Amazon Managed Blockchain did what it said on the tin, but AWS Outposts aimed to deliver a ‘truly consistent experience’ by bringing AWS services, infrastructure and operating models to ‘virtually any’ on-prem facility. Google Cloud’s launch – or relaunch – of Anthos was seen as a move in the same vein, while Azure Arc was seen by industry watchers as Microsoft’s response.

This is pertinent, as plenty of the product updates could be seen as an evolution of 2018’s re:Invent announcements. Instead of storage, Jassy this time focused on compute: instances and containers.

One piece of news did leak out last week around AWS building a second-generation custom server chip – and this was the first announcement which Jassy confirmed. The M6g, R6g and C6g Instances for EC2 were launched, based on the AWS Graviton2 processors. “These are pretty exciting, and they provide a significant improvement over the first instance of the Graviton chips,” said Jassy. Another instance launch followed a similar path: while AWS Inferentia was introduced last year as a high-performance machine learning inference chip, this year saw Inf1 Instances for EC2, powered by Inferentia chips.

On the container side, AWS expanded its offering with Amazon Fargate for Amazon EKS. Again, the breadth of options for customers was emphasised: Elastic Container Service (ECS) and EKS, or Fargate, or a mix of both. “Your developers don’t want to be held back,” said Jassy. “If you look across the platform, this is the bar for what people want. If you look at compute, [users] want the most number of instances, the most powerful machine learning inference instances, GPU… biggest in-memory… access to all the different processor options. They want multiple containers at the managed level as well as the serverless level.

“That is the bar for what people want with compute – and the only ones who can give you that is AWS.”

Jassy then moved to storage and database, but did not stray too far from his original topic. Amazon Redshift RA3 Instances with Managed Storage enable customers to separate storage from compute, while AQUA (Advanced Query Accelerator) for Amazon Redshift flips the equation entirely. Instead of moving the storage to the compute, users can now move compute to the storage. “What we’ve built with AQUA is a big high-speed cache architecture on top of S3,” said Jassy, noting it ran on a souped-up Nitro chip and custom-designed FPGAs to speed up aggregations and filtering. “You can actually do the compute on the raw data without having to move it,” he added.

Summing up the database side, the message was not simply one of breadth, but one that noted how a Swiss Army knife approach would not work. “If you want the right tool for the right job, that gives you different productivity and experience, you want the right purpose-built database for that job,” explained Jassy. “We have a very strong belief inside AWS that there is not one tool to rule the world. You should have the right tool for the right job to help you spend less money, be more productive, and improve the customer experience.”

While various emerging technologies were announced and mentioned in the second half of last year’s keynote, the big surprise this time arrived the day before. Amazon Braket, in preview today, is a fully managed AWS service which enables developers to begin experimenting with computers from quantum hardware providers in one place, while a partnership has been put in place between Amazon and the California Institute of Technology (Caltech) to collaborate on the research and development of new quantum technologies.

On the machine learning front, AWS noted that 85% of TensorFlow running in the cloud runs on its platform. Again, the theme remained: not just every tool for the job, but the right tool. AWS research noted that 90% of data scientists use multiple frameworks, including PyTorch and MXNet. AWS subsequently has distinct teams working on each framework.

For the pre-keynote products, as sister publication AI News reported, health was a key area. Transcribe Medical is set to be used to move doctors’ notes from barely legible script to the cloud, and is aware of medical speech as well as standard conversation. Brent Shafer, the CEO of Cerner, took to the stage to elaborate on ML’s applications for healthcare.

With regard to SageMaker, SageMaker Operators for Kubernetes was previously launched to let data scientists using Kubernetes train, tune, and deploy AI models. In the keynote, Jassy also introduced SageMaker Notebooks and SageMaker Experiments as part of a wider Studio suite. The former offered one-click notebooks with elastic compute, while the latter allowed users to capture, organise and search every step of building, training, and tuning their models automatically. Jassy said the company’s view of ML ‘continued to evolve’, while CCS Insight VP enterprise Nick McQuire said from the event that these were ‘big improvements’ to AWS’ main machine learning product.

With the Formula 1 season coming to a close at the weekend, the timing was good to put forth the latest in the sporting brand’s relationship with AWS. Last year, Ross Brawn took to the stage to expand on the partnership announced a few months before. This time, the two companies confirmed they had worked on a computational fluid dynamics project; according to the duo, more than 12,000 hours of compute time were utilised to help car design for the 2021 season.

Indeed, AWS’ strategy has been to soften industry watchers up with a few nice customer wins in the preceding weeks before hitting them with a barrage at the event itself. This time round, November saw Western Union come on board with AWS as its ‘long-term strategic cloud provider’, while the Seattle Seahawks became the latest sporting brand to move to Amazon’s cloud with machine learning expertise, after NASCAR, Formula 1 and the LA Clippers among others.

At the event itself, the largest customer win was Best Western Hotels, which is going all-in on AWS’ infrastructure. This is not an idle statement, either: the hotel chain is going across the board, from analytics and machine learning to database, compute and storage, as well as consultancy.

This story may be updated as more news breaks.


AWS plugs leaky S3 buckets with CloudKnox integration


Bobby Hellard

3 Dec, 2019

AWS has launched a new tool to help customers avoid data leaks within its Simple Storage Service (S3).

The AWS IAM Access Analyzer is a new function that analyses resource policies to help administrators and security teams protect their resources from unintended access.

It comes from an integration with CloudKnox, a company that specialises in hybrid cloud access management.

It’s a strategic integration designed to protect organisations against unintended access to critical resources and mitigate the risks they face, such as overprivileged identities, according to Balaji Parimi, CEO of CloudKnox.

“Exposed or misconfigured infrastructure resources can lead to a breach or a data leak,” he said. “Combining AWS IAM Access Analyzer’s automated policy monitoring and analysis with CloudKnox’s identity privilege management capabilities will make it easier for CloudKnox customers to gain visibility into and control over the proliferation of resources across AWS environments.”

Amazon S3 is one of the most popular cloud storage services, but because of human error, it’s historically been a bit of a security liability, according to Sean Roberts, GM of Cloud Business Unit at hybrid managed services provider Ensono.

“Over the last few years, hundreds of well-known organisations have suffered data breaches as a direct result of an incorrect S3 configuration — where buckets have been set to public when they should have been private,” he said.

“When sensitive data is unintentionally exposed online, it can damage an organisation’s reputation and lead to serious financial implications. In real terms, this sensitive data is often usernames and passwords, compromising not only the business but its customers too.”

In July, more than 17,000 domains were said to have been compromised in an attack launched by the prolific hacking group Magecart that preyed on leaky S3 buckets. Looking back over the last two years, a number of companies and organisations, including NASA, Dow Jones and even Facebook, have seen breaches stemming from misconfigured S3 buckets.

With Access Analyzer, there’s a new option in the console for IAM (Identity and Access Management). The tool alerts customers when a bucket is configured to allow public access or access to other AWS accounts. There is also a single-click option that will block public access.
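Much of that lockdown can also be scripted. The boto3 sketch below audits every bucket in an account for a missing or incomplete public access block and applies one where needed; it illustrates the general approach rather than how Access Analyzer itself works under the hood.

```python
# Sketch: audit S3 buckets for missing public-access blocks and lock
# them down. Illustrates the general idea; Access Analyzer's policy
# analysis is a separate, managed capability.
import boto3
from botocore.exceptions import ClientError

s3 = boto3.client("s3")
BLOCK_ALL = {
    "BlockPublicAcls": True,
    "IgnorePublicAcls": True,
    "BlockPublicPolicy": True,
    "RestrictPublicBuckets": True,
}

for bucket in s3.list_buckets()["Buckets"]:
    name = bucket["Name"]
    try:
        config = s3.get_public_access_block(Bucket=name)[
            "PublicAccessBlockConfiguration"
        ]
        fully_blocked = all(config.get(k, False) for k in BLOCK_ALL)
    except ClientError:
        fully_blocked = False  # no public-access-block configuration set
    if not fully_blocked:
        print(f"{name}: public access not fully blocked - applying block")
        s3.put_public_access_block(
            Bucket=name, PublicAccessBlockConfiguration=BLOCK_ALL
        )
```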

HPE takes on public cloud with GreenLake Central


Jane McCallion

3 Dec, 2019

GreenLake, HPE’s as-a-service initiative, now has a new component: GreenLake Central.

The product is designed to offer IT departments a similar experience controlling and provisioning on-premises and hybrid IT as they would expect when using a public cloud service.

GreenLake Central, like many of the other offerings that fall under the GreenLake “as a service” umbrella, was created in response to the acknowledgement that public cloud doesn’t serve all requirements, particularly in large enterprises.

“Part of what we are seeing within hybrid is that our clients have moved all the easy stuff off to public cloud, and there’s been a bit of a stall, especially for a bunch of the legacy applications, whether that’s because of regulatory issues, or data gravity issues, or application dependency complexity type issues,” Erik Vogel, global vice president for customer experience for HPE GreenLake, told Cloud Pro.

“What we’re providing… is a consistent experience. We’ve taken the traditional GreenLake and really enhanced it to look and feel like the public cloud. So we have now shifted that into making GreenLake operate in a way that our customers are used to getting from AWS or Azure,” he added.

HPE has also incorporated some additional capabilities, such as delivering “EC2-like functionality” to provision and de-provision capacity within a customer’s own data centre on top of their GreenLake Flex Capacity environment.

It has also bundled in some managed service capabilities to help manage a hybrid IT environment. This includes, for example, controlling cost and compliance, capacity management, and public cloud management.

“Very soon we’ll be offering the ability to point and click and add more capacity,” Vogel added. “So if they want to increase the capacity within their environment, rather than having to pick up the phone and call a seller and go through that process, they will be able to drive those purchase acquisitions through a single click within the portal, again, being able to manage capacity, see their bills, see what they’re using effectively, what they’re not using effectively.”

GreenLake Central is in the process of being rolled out in beta to 150 customers, and will be generally available to all HPE GreenLake customers in the second half of 2020.