Microsoft to launch cloud data centre region in Israel

Microsoft has announced the launch of a new cloud data centre region in Israel, bringing the total number of countries served by Azure to 21.

The region is expected to go live in 2021, starting with Azure and with Office 365 to follow. The move represents another EMEA expansion, following recent launches in Germany and Switzerland.

Among the customers using Microsoft's cloud in Israel are Sheba Hospital and the Tel Aviv Municipality, as well as cryptocurrency firm eToro and DevOps provider JFrog.

"When I speak to customers across EMEA, it is clear that the power of the cloud is essential for their competitiveness," said Michel van der Bel, president of Microsoft Europe, Middle East and Africa in a statement. "Offering Microsoft Azure and Office 365 from a data centre region in Israel forms a key part of our investment and involvement in the startup nation, as infrastructure is an essential block for the tech intensity that public sector entities and businesses need to embrace."

The 'tech intensity' marketing message continues to resonate. Anyone who has watched a Microsoft event or read its promotional material will have happened upon the phrase, first used by CEO Satya Nadella at the 2018 Ignite conference. At the end of last year, the company issued a State of Tech Intensity study, polling 700 executives, which explored the emerging technologies organisations saw as critical to their future growth. Machine learning, the Internet of Things, and artificial intelligence were the most frequently cited.

You can read the full announcement here.

Interested in hearing industry leaders discuss subjects like this and sharing their experiences and use-cases? Attend the Cyber Security & Cloud Expo World Series with upcoming events in Silicon Valley, London and Amsterdam to learn more.

Twilio powers Be My Eyes app to aid the visually impaired


Nicole Kobie

23 Jan, 2020

Cloud communications platform Twilio has revealed it’s powering Be My Eyes, an app to help visually impaired and blind people make their way in the world.

Be My Eyes pairs cameras and video chat to help 178,000 visually impaired people get help from more than three million volunteers. For example, the camera can be pointed at a sign, document, or even food packaging, letting the remote volunteer read it out.

The chat function can also be used with specific companies that support the app, including Microsoft, Google and Lloyds Banking Group, making it easier for visually impaired people to get specialist help using their services, be it for banking, shopping or booking tickets.

“Relying on friends and family for everyday tasks can be taxing on relationships and prevent people with visual impairments from achieving true independence,” said Alexander Hauerslev Jensen, chief commercial officer of Be My Eyes.

The free app has been in use since 2015 on both iOS and Android. Upon launch, it gained 10,000 volunteers and 1,000 users overnight, suggesting high demand. And as the user base grew, so too did lags in connection time, making the app less useful for those who needed it most.

At the time, Be My Eyes was using multiple providers for video connectivity. It has now switched to Twilio Programmable Video for more stable, higher-quality connections, cutting connection times in half. Be My Eyes now aims to help every user within a minute of a request, with 90% of connections made within 30 seconds.

“Be My Eyes is a Twilio-powered community support platform that solves a visually impaired person’s problem in a fraction of the time that it would take via audio,” said Jensen. “When you’re asking for help, a little bit of time can feel like an eternity. Every second we can shave off wait times means more trust, more engagement, and a stronger bond in our community. A 50% reduction in connection time can mean a world of difference for the user and the Twilio platform enables us to achieve this.”

The partnership comes under the remit of Twilio.org, the cloud company’s social enterprise division. “Twilio.org was established to help social impact organisations use the power of communications to create positive change on a global scale, and it’s inspiring to see Be My Eyes doing just that,” said Erin Reilly, chief social impact officer of Twilio.

“Be My Eyes is enabling people with visual impairments to live independent lives, no matter where they reside in the world,” Reilly said. “Their innovative use of Twilio enables Be My Eyes to make sure that their users get help when they need it.”

Six Nations broadcasts to get AWS machine learning stats


Bobby Hellard

22 Jan, 2020

The Six Nations rugby tournament is expanding its partnership with AWS to add live in-game analytics during this season’s broadcast of the competition.

Fans will be placed “at the tactics board”, according to AWS, with five new statistical features that analyse key segments of the game.

This follows on from the 2019 Championship, where AWS technology provided fans with insights into scrummages, play patterns, try origins and team trends, with statistics generated from data gathered during the games. That data is streamed and analysed by AWS services such as SageMaker, then delivered as insights back to the live TV broadcasts.

“The introduction of the advanced statistics – powered by AWS – in the 2019 Guinness Six Nations Championship was just the start of how we are planning to change the game of rugby through advanced in-game analytics,” said Ben Morel, CEO of Six Nations Rugby.

“This year will see the introduction of even more engaging and informative stats that bring fans even closer to the action. With these innovations, together with AWS, we are seeking to significantly enhance the viewing experience for all rugby fans by providing them with unique data-led insights.”

These have been developed in partnership with data analytics firm Stats Perform and the Amazon Machine Learning Solutions Lab (as well as SageMaker). One of the first new features will be a heatmap of the pitch that illustrates where most of the action is taking place. This will also “visualise” where a team is turning over possession, according to AWS.
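A pitch heatmap of this kind is, at its core, just 2D binning of event coordinates into a grid. The following is a minimal sketch of the idea only; the grid sizes, pitch dimensions and coordinates are illustrative assumptions, not details of the AWS/Stats Perform pipeline:

```python
from collections import Counter

def pitch_heatmap(events, pitch_len=100.0, pitch_wid=70.0, nx=10, ny=7):
    """Bin (x, y) event coordinates into an nx-by-ny grid of counts.

    `events` is an iterable of (x, y) positions in metres; the grid cell
    with the highest count is where most of the action happened.
    """
    grid = Counter()
    for x, y in events:
        # Clamp to the last cell so touchline/in-goal events don't overflow
        i = min(int(x / pitch_len * nx), nx - 1)
        j = min(int(y / pitch_wid * ny), ny - 1)
        grid[(i, j)] += 1
    return grid

# Example: three rucks near one 22, one event near halfway
events = [(85.0, 30.0), (88.0, 32.0), (90.0, 35.0), (50.0, 35.0)]
hot = pitch_heatmap(events)
print(hot.most_common(1))  # → [((8, 3), 2)]
```

The same binned counts could equally drive the turnover-location visualisation, by feeding in turnover events instead of all events.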

Tackling will also be analysed, with machine learning models mapping out where tackles succeed and highlighting areas that could be exploited. Fans will also be able to “visit the 22” with insights into how successful a team is at entering the opposition's 22-metre area. This will be calculated for every phase of play, with the aim of giving deeper insight into how long a team stays in the attacking area, how many opportunities it creates and how many it converts.

When it comes to conversions, kicks from the side of the pitch are extremely difficult, and with the AWS Kick Predictor, machine learning will show fans just how difficult they are. According to AWS, the calculation happens in real time, during the break in play while the kicker sets up. It takes into account a number of real-time, in-game factors, such as the location of the kick, the period of the game in which it is being taken, the current score and whether the kicking team is playing at home or away.

Additionally, it analyses other historical data, for example, the average success rate of the kicking player in the given field zone, during the Championship and during the player’s entire career.
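AWS has not published the model behind the Kick Predictor, but combining real-time and historical factors like these into a success probability is classically done with a logistic model. A minimal sketch, with entirely hypothetical feature names and weights:

```python
import math

# Hypothetical feature weights -- AWS has not disclosed its actual model;
# this only illustrates how such factors could feed a logistic predictor.
WEIGHTS = {
    "distance_m": -0.08,      # longer kicks are harder
    "angle_deg": -0.03,       # wider angles from the posts are harder
    "minute": -0.005,         # late-game fatigue
    "score_margin": 0.002,    # pressure proxy from the current score
    "is_home": 0.15,          # slight home advantage
    "career_zone_rate": 2.0,  # kicker's historical success in this zone
}
BIAS = 1.0

def kick_success_probability(features: dict) -> float:
    """Combine in-game and historical factors into a success probability."""
    z = BIAS + sum(WEIGHTS[k] * v for k, v in features.items())
    return 1.0 / (1.0 + math.exp(-z))  # logistic squash into (0, 1)

p = kick_success_probability({
    "distance_m": 35.0, "angle_deg": 20.0, "minute": 70.0,
    "score_margin": 3.0, "is_home": 1.0, "career_zone_rate": 0.78,
})
print(f"{p:.0%}")
```

In a real system the weights would be learned from historical kick data (for example with SageMaker) rather than hand-set, but the shape of the computation is the same.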

This is very similar to the NFL's Next Gen Stats platform, which predicts the likely success of a quarterback's passes. The deal also further expands AWS' portfolio of sports partnerships, following its recent agreement with Germany's Bundesliga and its deep ties with Formula 1.

FireEye expands cloud security with Cloudvisory acquisition


Nicole Kobie

22 Jan, 2020

Security firm FireEye has acquired Cloudvisory, aiming to bring cloud visibility tools to its offering.

The buyout was for an undisclosed sum and is FireEye’s seventh acquisition. Last year, the security firm bought Verodin for $250 million (£190.4 million) and in 2014 bought Mandiant for $1 billion.

“Customers need consistent visibility across their public and hybrid cloud environments, as well as containerised workloads,” said Grady Summers, Executive Vice President of Products and Customer Success at FireEye. “Cloudvisory delivers this visibility and allows FireEye to apply controls and best practices based on our frontline knowledge of how attackers operate.”

Cloudvisory’s system was created to provide visibility into network traffic, spot and fix misconfigurations, and keep an eye out for compliance issues. On the security front, it claims to detect, block and quarantine attacks.

Summers notes that this is key, as most companies see security as a leading concern when using the cloud. “Security is top of mind for almost all organisations as they migrate critical workloads to the cloud,” Summers adds. “With the addition of the Cloudvisory technology, FireEye is able to offer a comprehensive, intelligence-led solution to secure today’s hybrid, multi-platform environments.”

The acquisition will see Cloudvisory’s capabilities added to FireEye’s Helix, letting customers see all of their cloud environments from a single dashboard while expanding monitoring and compliance coverage.

“Joining FireEye offers Cloudvisory a unique opportunity to combine our innovative approach to cloud visibility and FireEye’s unrivaled insights into the threat landscape,” said Lisun Kung, Cloudvisory co-founder and Chief Executive Officer prior to the acquisition. “We’re excited by the potential to quickly scale and help more organisations secure their cloud and container workloads.”

Alongside the acquisition, FireEye’s Mandiant division also unveiled a pair of new cloud-focused services.

The first is Cloud Security Assessments, available for Office 365, Azure, AWS and Google Cloud. These will look for common misconfigurations and other cloud weaknesses that attackers use to slip past security measures. “Through tactical coaching and comprehensive recommendations, organisations achieve increased risk visibility and enhanced functional capabilities,” the company said.

The second new service is Cyber Defense Operations, which offers hands-on support and training to help in-house detection and response take a step up. The process begins with an evaluation to highlight goals and capabilities, such as threat hunting.

Then Mandiant personnel will offer training, analysis and other support within the client’s environment. “Through this process, areas for maturation are identified and pursued, helping to identify and resolve visibility gaps and procedural issues,” the company says.

“Our Cloud Security Assessments and Cyber Defense Operations consulting services are two new offerings to help clients protect their key assets before, during and after an incident,” said Jurgen Kutscher, EVP of Service Delivery at FireEye.

The war rages on for AWS, Azure and Google Cloud: Exploring the battlefield and strategy for 2020

The hyperscale cloud providers – Amazon Web Services (AWS), Microsoft Azure and Google Cloud Platform, with other pretenders occasionally cited – naturally generate the vast majority of revenues and, with it, the headlines.

According to figures from Synergy Research in December, one third of data centre spend in Q3 ended up in hyperscalers’ pockets. The company’s most recent market share analysis, again for Q3, found that for public infrastructure (IaaS) and platform as a service (PaaS), AWS held almost two fifths (39%) of the market, well ahead of Microsoft (19%) and Google (9%).

For those who say the race has long since been won, however, the course has gradually been changing as organisations explored hybrid and multi-cloud workflows, as well as tying infrastructure and platform together with software portfolios.

European outlook

In Europe, the battleground is shifting rapidly, with each provider planting flags beyond the established hubs of London, Frankfurt et al. Google Cloud launched in Poland and Switzerland in 2019, taking it to seven European locations in total, while Microsoft unveiled plans to launch Azure in Germany and Switzerland, also taking its European tally to seven. AWS, meanwhile, has six, with two of these regions, Italy and Spain, due in early 2020 and 2023 respectively.

Companies are going deeper with Google and Microsoft when they embed the entirety of their SaaS capability around decision making for infrastructure as well

Nick McQuire, VP enterprise at CCS Insight, says that the competitive environment has ‘obviously turned up a notch’ over the past 12 months. “Even if you rewind 12 months, you’re starting to see the significant gap that AWS had, particularly in the core infrastructure as a service, compute, storage, just slightly become minimised,” he tells CloudTech. “Obviously AWS is still very much a front runner, depending on how you define it – but this is always part of the challenge in the industry.”

Talk to any number of people and you will get any number of definitions as to who is doing what and where. This obfuscation is somewhat encouraged by the hyperscalers themselves. AWS discloses its specific revenues – $8.99 billion for Q319 – while Microsoft and Google do not.

Microsoft directs its financial reporting into three buckets: productivity and business processes ($11bn in Q120), intelligent cloud ($10.8bn), and more personal computing ($11.1bn). Azure growth percentages are wheeled out, but a specific figure is not; the actual number lies somewhere in the first two categories. According to Jay Vleeschhouwer of Griffin Securities, per CNBC, Azure’s most recent quarter was estimated at $4.3bn. Google, meanwhile, puts its cloud operation under its ‘other revenues’ tag, which came to $6.42bn last quarter. Analysts have been asking the company whether it will break out the specific revenues, only to get a committed non-committal in response.

Yet therein lies the rub. Where do these revenues come from, and how do they compare across the rest of the stack? As Paul Miller, senior analyst at Forrester, told this publication in February, the real value for Google, among others, is to assemble and reassemble various parts of its offerings to customers, from software to infrastructure and platform. “That should be the story, not whether their revenue in a specific category is growing 2x, 3x, or 10x.”

For McQuire’s part, this is the differentiation between Google and Microsoft compared with AWS. “The alternative approach is where you see companies, typically from the CEO down, that are all-in on transformation, and seeing the workplace environment and internal side of the house as part of that,” he says. “That’s typically where you will see companies go a little bit deeper with a Google or Microsoft; they will embed the entirety of their SaaS applications capabilities in and around decision making for their infrastructure as a service as well.

“That approach very much favours Microsoft, and we’ve seen more and more companies in the context of Microsoft’s big announcements last year.”

The preferred cloud and avoiding lock-in

With this in mind, McQuire sees the rise of the ‘preferred cloud’, as the marketing spiel would put it. AT&T and Salesforce were two relatively recent Microsoft customers whose migrations were described with this word. It doesn’t mean all-in, but neither does it really mean multi-cloud. “Companies will start to entrench themselves around one strategic provider, as opposed to having one multiple cloud, and [being] not necessarily embedded business-wise into a strategic provider,” says McQuire.

This represents a fascinating move with regards to the industry’s progression. Part of the reason why many industries did little more than dip their toes into the cloud in the early days was the worry of vendor lock-in. Multi-cloud and hybrid changed that, so should organisations be fearful again now? McQuire notes Microsoft has been doing a lot to change its previous image, yet a caveat remains.

“There’s always going to be that pre-perceived notion among companies out there that they have to [be] careful with going all-in with Microsoft around this,” he admits. “You see companies navigate through those complexities… [but] I feel that there’s a growing set of customers, particularly globally, and if they’re going with Azure they’re going heavily and quite deep with Microsoft across the piece, as opposed to taking a workload by workload Azure model.”

While Google Cloud is seeing areas of success, particularly among high level services around machine learning, there’s a longer game at play

According to a recent study from Goldman Sachs, more organisations polled were using Azure for cloud infrastructure versus AWS. It’s worth noting that the twice-annual survey polls only 100 IT executives, but they are at Global 2000 companies. Per CNBC again, 56 execs polled used Azure, compared with 48 for AWS.

This again shows the wider strength of the ecosystem, according to McQuire. “For the companies that are making more investments in the infrastructure as a service for Microsoft, they’re doing it with a complete picture in mind around the strength of these higher level services, particularly as you shift into SaaS applications and, more important, a lot of security and management capabilities,” he says. McQuire adds that Microsoft has had success with Azure in the UK, for instance from the number of firms who have moved to Office 365 over the past few years.

What next for Google?

Google Cloud, meanwhile, has had a particularly interesting 12 months. In terms of making noise, under the leadership of Thomas Kurian, the company has been especially vociferous. Its acquisitions – from Looker to Alooma, from Elastifile to CloudSimple – stood out, and even this year a raft of news has come through, from retail customers to storage and enterprise updates.

Expect more acquisitions to come out of Google Cloud in the coming year in what is going to be a long game. Despite the various moves made in terms of recruitment and acquisitions in beefing up Google’s marketing and sales presence, plenty more is to come. “Whilst clearly I think the focus is on improving Google Cloud and targeting very key areas – and they’re seeing areas of success, particularly among high level services around machine learning – there’s a longer game at play,” says McQuire. “The question is: how much time do they have in this arena?

“They’re going to have to focus more and more on some of those higher-level services, as opposed to the commodity infrastructure as a service market,” McQuire adds. “I think it’s going to be an ongoing battle for Google for awareness in the industry, in the market, and more importantly, I think there is still a large number of customers who are just not that well educated on what Google is doing in this space.”


Cloud fuels IBM’s first quarter of growth since 2018


Keumars Afifi-Sabet

22 Jan, 2020

IBM has pinned its first quarter of growth for more than a year on the changing fortunes of its cloud division, following five consecutive quarters of declining revenue.

The technology giant grew by 0.1% in the final quarter of 2019 versus the same period the previous year, recording revenues of $21.8 billion (£16.7 billion). This can be attributed to a 21% rise in total cloud revenue, which hit $6.8 billion.

Full year revenue for 2019 declined by 3.1%, however, dropping from $79.6 billion (£60.9 billion) in 2018 to $77.1 billion (£59 billion) last year.

“We ended 2019 on a strong note, returning to overall revenue growth in the quarter, led by accelerated cloud performance,” said IBM chairman, president and CEO Ginni Rometty in a statement.

“Looking ahead, this positions us for sustained revenue growth in 2020 as we continue to help our clients shift their mission-critical workloads to the hybrid cloud and scale their efforts to become a cognitive enterprise.”

Its open source software subsidiary Red Hat, whose $34 billion (£26 billion) acquisition IBM closed last year, also grew by 24% year-on-year. The company now forms part of IBM’s cloud and cognitive software division, which in total grew by 8.7% to record revenues of $7.2 billion (£5.5 billion).

The firm initially submitted a bid to acquire Red Hat in late 2018 in order to enhance its hybrid cloud portfolio.

The move was largely seen as a shock and was branded a potential disaster by Puppet’s vice president of ecosystem engineering Nigel Kersten. However, these latest figures seem to prove the naysayers wrong.

“In 2019, we continued to invest in the higher-value growth areas of the industry and took bold actions – including several divestitures and a major acquisition – to position our business, which are reflected in our strong gross margin performance,” added IBM’s senior vice president and CFO, James Kavanaugh.

“After completing the acquisition of Red Hat, and with strong free cash flow and disciplined financial management, we significantly deleveraged in the second half.”

Elsewhere, in terms of IBM’s performance in the fourth quarter of 2019, its global business services division declined by 0.6%. Its global technology services, meanwhile, including infrastructure and cloud services as well as support services, dropped a sharper 4.8%.

IBM’s systems division, however, sustained an impressive growth of 16%, which translated to revenues of $3 billion (£2.29 billion). The main driver was IBM Z, which alone saw a staggering 62% growth year-on-year.

Edge computing and ITOps: Analysing the opportunities and challenges ahead

It’s true that edge computing is hard to define and is running high on the hype scale. But research and surveys continue to indicate that this trend of processing data where it’s collected for better latency, cost savings and real-time analysis is an innovation with legs. There will be 75 billion IoT devices by 2025, according to Statista.

According to Spiceworks’ “2019 State of IT” report, 32% of large enterprises with more than 5,000 employees are using edge computing, and an additional 33% plan to adopt it by 2020. Tied to the growth of edge computing is the advent of 5G wireless: 51 operators globally will start 5G services by 2020, according to Deloitte Global research from 2019.

The major cloud companies are also investing in the edge. The AWS Local Zones service offers single-digit millisecond latency connections to computing resources in a metro environment, while Microsoft offers the Azure Stack Edge appliance and Google Cloud IoT is a “complete set of tools to connect, process, store, and analyse data both at the edge and in the cloud.” It’s safe to say that edge computing is becoming mainstream, and CIOs and their IT operations leaders should plan appropriately for it in 2020 and beyond.

Benefits of the edge for ITOps

We’ve read plenty about the business benefits of edge computing: oil rig operators need to see critical sensor data immediately to prevent a disaster; marketers want to push instant coupons to shoppers while they are in the store; video security monitoring can catch a thief in the act; and medical device alerts can ensure patient safety. These are just a few solid use cases for edge-based processing. Edge computing may also save IT money on cloud and network bandwidth costs as data volumes keep exploding and the need to store every data point becomes harder to justify.

There are also implications for IT management and operations. Local processing of high-volume data could provide faster insights to manage local devices and maintain high-quality business services when seconds make a difference, such as when a critical server performance issue threatens an ecommerce site.

Today, IT operations teams are inundated with data from thousands of on-premise and cloud infrastructure components and an increasingly distributed device footprint. The truth is, only an estimated 1% of monitoring data is useful, meaning that it provides indications of a behavioural anomaly or predictions about forthcoming change events.

With edge monitoring, we can potentially program edge-based systems to process and send only that small sliver of actionable data to the central IT operations management system (ITOM), rather than transmitting terabytes of irrelevant data daily to the cloud or an on-premise server where it consumes storage and compute power.
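The "small sliver" approach above can be sketched as a rolling-statistics filter running on the edge device, forwarding only readings that look anomalous. The window size and z-score threshold here are illustrative assumptions, not any vendor's actual defaults:

```python
from collections import deque
from statistics import mean, stdev

class EdgeFilter:
    """Forward only anomalous readings to the central ITOM system.

    Keeps a rolling window of recent values and flags a reading whose
    z-score against that window exceeds `threshold`; everything else is
    dropped at the edge instead of being shipped upstream.
    """
    def __init__(self, window=50, threshold=3.0):
        self.history = deque(maxlen=window)  # bounded memory on the device
        self.threshold = threshold

    def should_forward(self, value: float) -> bool:
        anomalous = False
        if len(self.history) >= 2:
            mu, sigma = mean(self.history), stdev(self.history)
            if sigma > 0 and abs(value - mu) / sigma > self.threshold:
                anomalous = True
        self.history.append(value)
        return anomalous

f = EdgeFilter()
readings = [20.0, 20.1, 19.9, 20.2, 20.0, 95.0, 20.1]  # one sensor spike
sent = [r for r in readings if f.should_forward(r)]
print(sent)  # → [95.0]
```

Here only the spike reaches the central system; the steady-state readings never leave the device, which is exactly the bandwidth and storage saving the article describes.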

Filtering for the highly contextual, actionable data at the edge, where business occurs, can support real-time decisions for successfully running IT operations at speed and scale, regardless of what combination of on-premise, public cloud or private cloud infrastructure is in place. At the same time, ITOps will need to lead in minimising the risk of edge technology from a performance, security and privacy perspective. However, as detailed below, we are in the early stages of determining how to make this work in practice.

These are the ITOps realities for edge computing:

Edge-specific security needs are still unknown

Edge devices are often small and rarely designed with security in mind. More than 70 percent of edge devices don’t mandate authentication for third-party APIs, and more than 60 percent don’t encrypt data natively. So the attack surface in IoT and edge environments is now larger, and less secure. This is particularly worrisome when considering edge devices that collect personally identifiable information such as email addresses, phone numbers, health data or financial information like credit card data. IT operations will need to work closely with security and legal teams to map out the company-specific risk, governance and compliance requirements around managing edge data.

Edge monitoring tools are immature

Companies need platforms that can instantly monitor and analyse edge-generated data. In the connectivity of tomorrow, billions of connected devices will be communicating machine-to-machine, and the addition or subtraction of connected devices will be possible at an unprecedented scale. In this environment, the ability to manage large volumes of connected devices and the information being exchanged between them will be critical. 5G acts as the unifying technology, bringing both the flow of information and the density of scale. We will see an influx of innovation in edge monitoring in the coming years.

New environments call for new rules

As organisations move more data and application assets to edge computing environments, IT will need to devise new policies and thresholds for central processing and alerting of all this data. Applying AI-based automation is essential here, as manual efforts will have zero chance of keeping up with the volume of data filtering, analysis and response. We are also entering the age of nanosatellites, led by the likes of SpaceX and OneWeb. These edge devices will transform the future of agriculture, energy, mining, transportation and finance thanks to their ability to send insightful data in real time to customers, wherever they are at any moment. IT operations will have its work cut out to understand and properly manage this evolving edge infrastructure.

DevOps processes will become even more paramount

If you haven’t already realised that DevOps is taking over software development and IT management, just wait for when edge goes mainstream. There will be no other way to manage change and deployments of edge technology without the agile, continuous integration and continuous delivery methodology of DevOps. It will be imperative for ITOps to adopt DevOps practices and tools to manage, monitor and deploy edge resources.

Conclusion

ITOps is at a crossroads, determining how much of the past is still relevant and how much it will need to change to adapt to a distributed, hybrid cloud world that will soon include edge as a fundamental pillar of digital strategy. Security, machine intelligence and DevOps will be crucial areas of expertise for ITOps teams looking to help drive better business value and customer experiences from the edge.


Barts Health NHS Trust shifts to cloud with Capgemini


Nicole Kobie

21 Jan, 2020

Barts Health NHS Trust has turned to Capgemini to help modernise its ICT estate using the cloud.

The three-year agreement will see Capgemini work across all five hospitals in the Trust, rolling out end-to-end cloud services across sites in Central and East London. One of the largest in the country, the Trust manages hospitals in Mile End, Whipps Cross, and Newham, as well as the Royal London and St Bartholomew’s hospitals.

The aim is to modernise the Trust’s existing estate, shifting away from legacy systems in favour of cloud technologies. That includes the assessment and migration of mission-critical workloads to different cloud providers, as well as wider management tools and security systems, Capgemini said.

“As the largest NHS Trust, and one of the pioneers in taking a step towards modernising its IT infrastructure through migrating the services to the cloud, the Trust will benefit from a more secure, scalable and agile operating environment that is more cost effective than current legacy IT infrastructure,” said Matt Howell, head of Public Sector at Capgemini in the UK.

The move is part of wider plans to modernise Barts’ technology to benefit patients and staff, said Sarah Jensen, CIO of Barts Health Trust. “With their existing experience in providing cloud hosting services and digital transformation solutions to the NHS, we are excited about the journey we have started together and are confident our partnership will continue to add value in our ever-challenging environment which ultimately leads to better care for patients.”

The deal follows last year’s efforts to build a Cloud Solutions Framework to make it easier to find suppliers for NHS and other public-sector organisations; Capgemini UK is one of the suppliers listed in the framework. The framework has four different lots — covering everything from consultancy to end-to-end cloud — with a handful of suppliers in each.

“The result is a specialist pool of 24 leading suppliers, which provide the greatest expertise and value-for-money to the public sector,” said Phil Davies, procurement director at NHS Shared Business Services, at the time.

Oracle appoints former VMware channel lead


Daniel Todd

21 Jan, 2020

US tech firm Oracle has appointed former VMware channel chief Ross Brown as vice president of its Cloud GTM division.

Oracle has so far declined to reveal specific details surrounding the move, including whether he will be directly involved with Oracle Cloud’s channel.

However, Brown’s LinkedIn profile reveals his new position at Oracle, where he will be responsible for leading the company’s cloud go-to-market operations. According to the brief entry on his page, he will lead the segment from Seattle, Washington, where the firm has recently expanded its cloud infrastructure workforce.

Brown joins the business following a 20-month sabbatical. Prior to that, he was senior vice president of VMware’s Worldwide Partners and Alliances division, where he was responsible for helping partners integrate the company’s technology into their services and solutions.

After joining the firm in August 2015, Brown was also responsible for rebuilding VMware’s Development Fund Program, as well as winning executive and partner endorsements for it.

According to his LinkedIn page, his team oversaw a 30 percent increase in weekly partner deal creation rates, an increase in average deal size and a hike in partner certification investments.

Previously, he also spent four years in channel leadership at Microsoft, as well as a senior leadership stint at Citrix earlier in his career.

The appointment follows Oracle’s recent efforts to further drive adoption of its cloud infrastructure service around the world. Back in October, the firm revealed plans to expand its cloud division by up to 2,000 new employees, covering roles in cloud operations, software development, business operations and more.

“Our aggressive hiring and growth plans are mapped to meet the needs of our customers, providing them reliability, high performance, and robust security as they continue to move to the cloud,” Don Johnson, executive vice president at Oracle Cloud Infrastructure, said in a statement at the time.

Lufthansa to tackle flight delays with Google Cloud migration


Bobby Hellard

21 Jan, 2020

German airline Lufthansa will use Google Cloud services to minimise disruptions caused by flight delays and other irregularities.

The two companies will build an AI-based platform that will suggest scenarios to return to a stable flight plan should adverse weather or flight delays impact customers.

The company will be migrating data from various parts of its business that are relevant to flight schedules, such as aircraft replacements and its crews’ work patterns.

In future, it will be possible to offer faster rebooking options across all Lufthansa services, the airline said.

A joint team of operations experts, developers and engineers from Lufthansa and software engineers from Google Cloud will develop and test the platform, with a trial launch set to take place in Zurich with SWISS.

“Through this collaboration, we have a significant opportunity to revolutionise the future of airline operations,” said Thomas Kurian, CEO of Google Cloud. “We’re bringing the best of Lufthansa Group and Google Cloud together to solve airlines’ biggest challenges and positively impact the travel experience of the more than 145 million passengers that fly annually with them.”

A number of airlines have already begun embracing AI, such as British Airways with its AI-powered robots that aim to reduce congestion and help customers with queries at Heathrow Airport’s Terminal 5.

The two bots will take part in a trial, which is part of a wider, five-year plan to improve customer experience. This is backed by a £6.5 billion investment, which has also seen BA roll out 3D printing, driverless baggage vehicles, and other innovations such as automated check-in desks and more.

At Heathrow, traffic controllers are trialling AI technology that could see the proposed third runway built without the need for a new control tower.

The airport has said that a £2.5 million “digital tower laboratory”, with a suite of ultra-high-definition cameras and AI technology, has been built at the base of Heathrow’s existing tower.