Microsoft beats Amazon to win $10bn US JEDI contract


Bobby Hellard

28 Oct, 2019

The Pentagon has awarded its $10 billion cloud computing contract to Microsoft rather than Amazon, following a bidding process that drew criticism from President Donald Trump and rival vendors.

Amazon Web Services (AWS) was seen as the front runner for most of the bidding process, and the company said it was “surprised” by the decision.

The contract, known as the Joint Enterprise Defence Infrastructure (JEDI), pitted some of the world’s biggest tech companies against each other with the ultimate prize being to upgrade the US defence department’s IT systems.

The project has been mired in controversy and complaint, particularly over the decision to award it to a single vendor. This resulted in legal action and also caught the attention of President Donald Trump.

End of the JEDI saga

The JEDI project is about replacing the Department of Defence's (DoD) ageing computer networks with a single cloud system.

As the winner of the contract, Microsoft will provide AI-based analysis and store classified military information, as well as delivering a host of other computing services. A big reason for the project is to give the military better access to data and the cloud from the battlefield, which also proved too big a concern for some bidders.

That was the case for Google, which was the first to drop out of the JEDI race in October 2018. The decision followed its announcement that it would not renew another military contract, Project Maven, after protests from its employees.

“We are not bidding on the JEDI contract because first, we couldn’t be assured that it would align with our AI Principles,” a Google spokesman said in a statement. “And second, we determined that there were portions of the contract that were out of scope with our current government certifications.”

The parts of the contract that Google cited were also issues for both IBM and Oracle, which filed lawsuits against the DoD in December last year, arguing that there were conflicts of interest stemming from the movement of staff between the Pentagon and AWS.

Oracle was removed from the bidding process in April, before the ruling from that lawsuit, when it failed to meet the requirement of having three data centres with FedRAMP Moderate ‘Authorised’ support.

AWS Trumped

IBM was also ruled out, not long after, leaving Microsoft to battle it out with the favourite, AWS. However, in August, the bidding caught the attention of President Trump, who has had a long public spat with Amazon CEO Jeff Bezos.

A year before, it was reported that Trump called his defence secretary James Mattis and directed him to “screw Amazon” out of a chance to bid on the JEDI contract. The account appears in the forthcoming book “Holding The Line: Inside Trump's Pentagon with Secretary Mattis”, written by Guy Snodgrass, who served as a speechwriter for Mattis.

The official line from the Pentagon is that it weighed up the bidding fairly and that Microsoft was the rightful winner. But reports of Trump’s involvement cast some doubt over those statements; Amazon said it was “surprised about this conclusion”.

“AWS is the clear leader in cloud computing, and a detailed assessment purely on the comparative offerings clearly lead to a different conclusion,” said an AWS spokesperson. “We remain deeply committed to continuing to innovate for the new digital battlefield where security, efficiency, resiliency, and scalability of resources can be the difference between success and failure.”

AWS reports $8.99bn in revenues for Q319 – yet slowing growth concerns analysts

Amazon Web Services (AWS) announced revenues of almost $9 billion for the most recent quarter – but growth slowed compared with last year, prompting a more subdued outlook.

AWS posted $8.99bn (£7bn) for Q319, up 35% year on year – however, this compares with 37% growth for Q219 and a 46% growth rate at this time last year. Amazon's cloud arm now represents 12.8% of Amazon's overall revenues, compared with 11.8% for the previous year's quarter.

Naturally, many of the analyst questions focused on the performance of AWS. Stephen Ju of Credit Suisse asked about the long-term margin potential, observing that AWS 'pretty much sold itself' to begin with and noting the increases in sales and marketing spend alongside a potential slowdown in engineering hires.

Brian Olsavsky, Amazon chief financial officer, noted the increasing importance of long-term commitment in terms of pricing. "Our margins expectations are that we will price competitively and continue to pass along pricing reductions to customers, both in the form of absolute price reductions and also in the form of new products that will in effect cannibalise the old ones," said Olsavsky.

Various new products were launched among the highlights for AWS in the most recent quarter. AWS Lake Formation, a service which helps customers build data lakes, and fully managed machine learning product Amazon Forecast were the biggest releases. In terms of news, the announcement of Amazon migrating all of its consumer databases from Oracle to AWS – complete with celebrations – earlier this month was of greatest interest.

In the previous quarter, this publication noted that large expectations accompanied large numbers. Growth had again dipped for AWS, which naturally saw pessimism from the analysts. Yet as long-time industry watcher Synergy Research noted, more than 100% growth rates could not carry on forever.

This time round, a note from Synergy struck a similar tone. "I've seen some comments expressing worries over the gradual reduction in annual growth rates but this is not a real concern," wrote John Dinsdale, Synergy chief analyst and research director. "It is a truism that as great scale is achieved, then growth rates will decline. The sequential growth in cloud service spending was around $1.5 billion in Q3, in line with the growth seen in the first two quarters of the year.

"Did someone say the market is weakening? I don't think so," added Dinsdale. "The cloud market is in rude good health."

While Amazon's results were being reported, AWS was under a DDoS attack which took its S3 storage service, and others, offline for up to eight hours. According to an AWS status update at the time: "Between 10:30 AM and 6:30 PM PDT, we experienced intermittent errors with resolution of some AWS DNS names. Beginning at 5:16 PM, a very small number of specific DNS names experienced a higher error rate. These issues have been resolved."

You can read Amazon's full financial report here.

If your enterprise is still on the fence around cloud – here’s what you need to know today

Cloud infrastructure services are rapidly becoming the de facto choice for enterprise IT workloads. According to Gartner, worldwide revenue from public cloud IT services is expected to grow by 17.5% in 2019, becoming a $330 billion industry by 2022. Cloud-based technology is no longer an emerging trend; it's mainstream, with 69% of enterprises moving business-critical workloads to the cloud.

What is the appeal of the cloud?

Cloud technology enables an agile working environment that can drive successful business transformation initiatives. In most cloud solutions, all a user requires is an active internet connection and login credentials to consume enterprise workloads. An agile workplace helps to facilitate team collaboration, hot-desking, and home-working initiatives that can boost productivity and enhance working relationships.

In cloud computing, everything is bigger, and the sheer scale of the major cloud providers' technical solutions is staggering. Harnessing this scalability is another major appeal of the cloud. Cloud products are horizontally and vertically scalable, meaning users can scale out their applications across multiple compute nodes, and scale individual systems up (and down) by dynamically adding or removing compute resources.
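To make the scale-out versus scale-up distinction concrete, here is a minimal sketch using AWS's boto3 Python SDK; the Auto Scaling group name and instance ID are hypothetical, and the same ideas apply on any major cloud platform.

```python
import boto3

autoscaling = boto3.client("autoscaling")
ec2 = boto3.client("ec2")

# Horizontal scaling (scale out): add compute nodes by raising the desired
# capacity of an Auto Scaling group (hypothetical group name).
autoscaling.set_desired_capacity(
    AutoScalingGroupName="web-tier-asg",
    DesiredCapacity=6,  # e.g. grow from 3 to 6 nodes ahead of peak demand
)

# Vertical scaling (scale up): give a single node more resources by changing
# its instance type (hypothetical instance ID; the instance must be stopped
# before its type can be changed, then started again).
ec2.modify_instance_attribute(
    InstanceId="i-0123456789abcdef0",
    InstanceType={"Value": "m5.2xlarge"},
)
```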

As businesses grow, there may be a surge in capacity requirements, including a faster network and the extra demand for storage. Onsite enterprise data centers are expensive to maintain, and purchasing new hardware is heavy on the wallet. With the cloud, petabytes of storage are available at the click of a button, and you only pay for what you consume. Cybersecurity is always a top agenda item in any company boardroom, and cloud computing enables users to consume security as a service.

Cloud security is primarily about protecting against the user's data being compromised (destroyed or stolen) and against users experiencing a service outage (denial of service). Cloud platforms are designed from the ground up to be secure, and as threats increase in scale and severity, many enterprise organizations are choosing the cloud to mitigate the security risk.

Cloud infrastructure has backup and redundancy capabilities at its core. All cloud providers offer some type of backup-as-a-service, and the system architecture is designed to be redundant, so that all data is protected all of the time. Offsite copies of data are stored in specific regions, and most cloud providers offer disaster recovery services as standard, giving the user the capability to seamlessly fail over services to another region or country if major system issues are experienced.

One other major appeal of the cloud is the expectation of cost savings. Although the savings take time to materialise, capital expenditure decreases significantly over time as businesses switch away from the local data centre model of buying and leasing servers, with all the associated costs and complexities of licensing.

Making preparations

The jump to the cloud requires significant planning and preparation to reap the wealth of benefits available. Even if a business chooses to outsource this responsibility, we recommend all organizations have a grip on what cloud services they want and how they want to consume them.

Multitudes of technical activities are required for successful cloud migration. Creating Service Level Objectives (SLOs) is an essential task to help define how the service should perform. Setting Service Level Indicators (SLIs) will allow you to measure the attributes of the service, such as system availability or the overall performance of the service. Together, these will help determine whether a cloud solution is fit for purpose. Google Cloud suggests the next steps are the creation of a presentation layer (network) that handles the flow of information through the cloud service, a business logic layer (compute) that manipulates the data to make it useful for the user, and a data layer (storage) to store and retrieve the digital information.
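As a rough illustration of how an SLI is measured against an SLO (a minimal Python sketch with made-up request counts, not any particular provider's tooling), an availability indicator can be computed from request totals and compared with the objective:

```python
def availability_sli(successful_requests: int, total_requests: int) -> float:
    """SLI: the measured attribute - here, the fraction of requests served successfully."""
    return successful_requests / total_requests if total_requests else 1.0


def meets_slo(sli: float, slo_target: float = 0.999) -> bool:
    """SLO: the objective the service should meet - here, 99.9% availability."""
    return sli >= slo_target


# Hypothetical month of traffic: 1,000,000 requests, of which 650 failed.
sli = availability_sli(999_350, 1_000_000)
print(f"Availability SLI: {sli:.4%} - meets 99.9% SLO: {meets_slo(sli)}")
```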

Each cloud design must be resilient, horizontally and vertically scalable, and capable of disaster recovery. A distributed design adds resiliency for geographic scaling and failover. Many businesses experience a "peak season" where system usage ramps up for a period of time; being able to scale compute resources and increase the number of compute nodes provides an elastic computing capability.

Cloud services are secure, future-proofed and cost-optimized. In a traditional data center, physical or virtual computing assets are purchased in advance, often sitting idle, wasting money, resources, and power. On-Demand compute fixes this capacity planning problem.

Additional services such as automated deployment (DevOps), monitoring, alerting and incident response are an inherent part of cloud design. Stateless design supports SLI, SLO and SLA objectives, and your enterprise will be able to grow exponentially, both financially and geographically, with the benefits of uptime, scalability and future expansion being readily available.

The rise of obfuscated VPN servers and their use cases: A guide

VPNs continue to be used extensively as tools to protect data security and user privacy. Yet, as to be expected, there are many providers available, and many options within those providers – so buyer confusion can reign.

A virtual private network, by itself, is the secure, private connection between your device and your intended destination. When dealing with VPN servers, the options start to broaden. There are a number of server categories to choose from: standard servers; double VPN servers, where the traffic is encrypted twice; 'Onion over VPN', which routes traffic through the Onion (Tor) network; dedicated IP servers; P2P servers; and obfuscated servers.

Increasingly, obfuscated VPN servers are becoming a useful tool, particularly for users in countries with limited internet access. So what are obfuscated VPN servers? How do they work? And what are your options?

What is an obfuscated VPN server?

An obfuscated server can bypass internet restrictions such as network firewalls, and these servers are recommended in countries where access is restricted. Why is this necessary? Although many people feel the internet should be free to roam and use as they wish, that's not always the case. Consider VPN blocks – they aren't imposed only by government entities. ISPs, streaming services, universities and schools also prevent the use of VPNs.

Obfuscation, also known as OBFU, restricts reverse engineering in programs, making it hard for attackers to extract metadata. In other words, an obfuscated VPN takes your data and makes it look like a jumbled mess.

An example of VPN obfuscation

Some people may refer to this as “stealth” or “camouflage” mode. VPN providers can’t physically put their VPN servers in countries that have strict censorship rules, so they use virtual servers with obfuscation to bypass their firewalls. It disguises data passing through the VPN app to look like regular HTTPS traffic.

Here's a good example of an obfuscated VPN server and how it can be used. Consider Netflix and how it distributes shows among regions at different costs. In one region the service may cost $7.10, while in Australia that same service could cost $11.90. The server levels the playing field, allowing the user to get the $7.10 deal instead of having to pay $11.90. For online gamers, this is gold. If their ISP is charging more for gaming but a lesser price for general browsing, the VPN traffic can be altered to look like the user is just browsing the web. While the ethics of this can be questioned, there is no doubt that this trend helps drive VPN usage.

Banned VPN countries

Even with the ability to use an obfuscated VPN server, a handful of countries have banned the use of VPNs or have otherwise made them illegal. Here are those countries and why:

China: China has the Great Firewall (GFW), designed to filter and block restricted websites and services. It is one of the largest and most intricate systems built for censorship and mass surveillance. China passed the CL97 legislation, which not only criminalises cybercrime but means people found using VPNs in some parts of China can be fined or worse. Some of the websites blocked from mainland China include Google, Gmail, Instagram, Pinterest, YouTube, Dropbox, The New York Times, Facebook and Twitter.

Russia: Russia is another country that bans the use of VPNs to restrict the spread of extremist and unlawful conduct. The Russian government wants to restrict what content can be accessed in the country. Anyone found using a VPN can be fined up to $5,100, and VPN providers can be fined up to $12,000.

Iran: Iran has given harsh penalties to anyone using a VPN in their country since 2013. There are a few government-approved VPNs regulated by the government that are allowed. If caught using a VPN, the user can face up to one year in prison.

UAE: The United Arab Emirates also considers VPN usage a federal offence. Anyone found using a VPN can be fined between $136,000 and $544,000. This ban is only imposed on individuals using VPNs for personal use; banks and other institutions can freely use them. Law No 5 of 2012 states that local residents may only use state-owned VPNs, with offenders facing penalties up to life imprisonment.

Are there providers that offer an obfuscated VPN?

With countries continuing to block VPN servers, there are only a few providers which offer this type of functionality:

Surfshark: Surfshark currently has 1040+ servers in over 61 countries, including Russia and the UAE. Known for its privacy, speed and performance, it has outstanding customer support and features.

VyprVPN: VyprVPN has developed its own proprietary VPN protocol called Chameleon. It effectively obfuscates 256-bit OpenVPN encrypted traffic and transmits it over port 443. The Chameleon protocol has been said to bypass restrictions in China, Russia, India, Turkey, Iran and Syria. It is available for all major platforms including Windows, Mac, iOS and Android, along with features such as a VPN kill switch, NAT protection and Smart VPN.

NordVPN: NordVPN effectively bypasses regional firewalls like the GFW and passes all regional geo-restrictions. They have 5000+ servers and offer a dedicated list of obfuscated servers. They also have features such as Kill Switch, Smart play, double VPN and military encryption.

ExpressVPN: This provider does not log user data, and users can obfuscate their network traffic to bypass China's GFW. It operates at very fast speeds and has a network of 2,000+ servers around the world. Its MediaStreamer technology works as a smart DNS service to help unblock even the most heavily geo-restricted content.

IPVanish: IPVanish does not have a dedicated obfuscation mode but makes it very simple to obfuscate traffic with the flip of a toggle switch. Additionally, obfuscation can be enabled on both desktop and mobile applications. They have 1,300+ servers in 75+ locations around the world.

How VPN obfuscation works

Most of the time, when connecting to an obfuscated server, a mechanism steps in that makes the VPN tunnel very difficult to block. OpenVPN data packets consist of a header and a payload; XOR obfuscation scrambles the identifying metadata in the packet header, transforming it into meaningless information so that the VPN protocol cannot be identified. The VPN data is then dressed up as HTTPS-encrypted web traffic: the packets go through a second layer of encryption with the SSL or TLS protocols and are assigned to port 443.
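As a toy illustration of the XOR masking idea described above (a sketch only; real obfuscated servers use hardened protocols such as obfs4 or provider-specific implementations, and this is not any provider's actual code):

```python
import os


def xor_mask(packet: bytes, key: bytes) -> bytes:
    """Combine every byte with a repeating key so recognisable VPN byte
    patterns (such as an OpenVPN header) no longer match a signature.
    Applying the same operation again with the same key restores the data."""
    return bytes(b ^ key[i % len(key)] for i, b in enumerate(packet))


key = os.urandom(16)                     # shared between client and server
packet = b"\x38" + b"example-payload"    # hypothetical packet with a telltale first byte
masked = xor_mask(packet, key)           # looks like random noise on the wire
assert xor_mask(masked, key) == packet   # reversible: XOR twice gives the original
```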

There is another method of obfuscation, developed by the Tor Project, called Obfsproxy, in which data is wrapped in an obfuscation layer using pluggable transports. These scramble the VPN traffic, allowing users to bypass firewalls and geo-restrictions while protecting them from VPN detection and blocking.

When considering which type of VPN would be most useful, an obfuscated VPN server works well in instances where communications may be filtered or blocked. Businesses could benefit from using this type of VPN server when communicating with employees who may be travelling to areas with severe restrictions in place. Because an obfuscated server behaves quite differently from a standard VPN, it's important to outline the reasons for using it and how it will be used.

It is also important to determine whether there will be a record of activities, especially if the goal is to keep an identity anonymous. With cyber crime being so prevalent around the world, taking all steps to ensure the safety of data and sensitive information is key. If searching for complete online privacy, a secure connection, and safe content accessibility anywhere in the world, it's worth a deeper look to figure out which provider offers the most features and security.

Obfuscated or not, the value of a VPN goes beyond price: it offers a level of security most people need when surfing the web or conducting transactions. Taking into account the data privacy laws, restrictions and new regulations that continue to hinder online activities, putting this type of protection in place for personal or business reasons should mitigate some of the risks that could stop productivity and other essential functions.

Editor's note: This article is brought to you in association with Surfshark.

Microsoft again secures strong revenues with ‘material growth’ in $10m Azure contracts noted

Microsoft has reported revenues of $33.1 billion (£25.7bn) for its most recent quarter, with CEO Satya Nadella emphasising the importance of artificial intelligence (AI) in building out cloud applications.

Writing the headline for Microsoft’s quarterly earnings release requires conforming to some kind of template: ‘Microsoft Cloud [noun] [verb] [quarter] Results’. The noun, usually ‘growth’ or ‘strength’, is optional, as is the word ‘record’ when allowable, while the requisite verb will either be ‘powers’, ‘fuels’, or ‘drives.’

This time round, it is a no-frills "Microsoft Cloud Strength Drives First Quarter Results", with solid increases across the board. Microsoft's revenues are placed into three buckets: productivity and business processes, which hit $11 billion (£8.6bn), a 13.3% increase; intelligent cloud, at $10.8bn, up 26.6% year on year; and 'more personal computing', at $11.1bn, a 3.6% change. Specific Azure figures are, as ever, not disclosed; however, chief financial officer Amy Hood told analysts there had been 'material growth' in the number of $10 million-plus contracts in the quarter.

In prepared remarks, Nadella cited the continued growth of Azure powering the rest of Microsoft's stack – although he stopped short of calling it 'the world's computer', as he did with previous results – yet added a note of caution. "Organisations today need a distributed computing fabric to meet their real world operational sovereignty and regulatory needs," said Nadella.

“Every Fortune 500 customer today is on a cloud migration journey, and we are making it faster and easier. We are reimagining customers’ data estates for the cloud era with new limitless capabilities.

“We are accelerating our innovation across the entire technology stack to deliver new value for customers,” Nadella added. “We’re investing aggressively in large markets with significant growth potential, and it’s still early days.”

One area which appears to be gaining traction is Microsoft 365, the name for the overall suite of 'productivity cloud' products. As reported by ZDNet, the company's 'Windows commercial products and cloud services revenue growth' may be the indicator, with that category seeing a particular spike: up 26% in Q120 compared with 12% the year before.

“Microsoft 365 is the world’s productivity cloud and the only comprehensive solution that empowers everyone from the C-suite to first line workers with an integrated secure experience on any device,” added Nadella. “We’re infusing AI across Microsoft 365 to help make work more intuitive and natural.”

Microsoft's highlights from the most recent quarter were varied. The company expanded its cloud data centres to Germany and Switzerland, as well as India, where a major partnership with network operator Reliance Jio was announced in August. Microsoft announced the acquisition of cloud migration tool Movere in September, and most recently secured a partnership with SAP – putting it ahead of its hyperscaler rivals.

You can read Microsoft’s full results here.

Google and IBM debate “quantum supremacy” in academic spat


Bobby Hellard

24 Oct, 2019

Google and IBM have become embroiled in a "quantum supremacy" dispute, with the latter discrediting the claims of the former.

Google said its 53-qubit Sycamore quantum processor was able to perform a complex mathematical task in 200 seconds that, it claimed, the world's most powerful supercomputer would need 10,000 years to complete.

Quantum supremacy is a term coined by Caltech professor John Preskill, who said 'supremacy' is achieved when a quantum computer can do something a classical computer cannot.

Google made its supremacy claim in a paper entitled "Quantum supremacy using a programmable superconducting processor", published in the research journal Nature.

But the accuracy of the experiment was quickly called out by IBM, which refuted the claims.

“We argue that an ideal simulation of the same task can be performed on a classical system in 2.5 days and with far greater fidelity,” IBM said in a blog post. “This is, in fact, a conservative, worst-case estimate, and we expect that with additional refinements the classical cost of the simulation can be further reduced.

“Because the original meaning of the term ‘quantum supremacy,’ as proposed by John Preskill in 2012, was to describe the point where quantum computers can do things that classical computers can’t, this threshold has not been met.”

IBM's research staff said that Google's comparison with classical computing relied on an advanced simulation that uses parallelism, fast and error-free computation, and large aggregate RAM, but failed to fully account for plentiful disk storage.

Big Blue, which is deep into its own quantum computing research, said that its "Schrödinger-style" classical simulation approach uses both RAM and hard drive space to store and manipulate the state vector.

The tech giant also suggested the term “supremacy” is misleading and has a negative connotation. The word, it explained, “exacerbates overhyped reporting” on the status of quantum technology and that “through its association with white supremacy, evokes a repugnant political stance”. 

“A headline that includes some variation of ‘Quantum Supremacy Achieved’ is almost irresistible to print, but it will inevitably mislead the general public,” IBM said. “First because, as we argue above, by its strictest definition the goal has not been met. But more fundamentally, because quantum computers will never reign ‘supreme’ over classical computers, but will rather work in concert with them, since each have their unique strengths.”

How cloud technologies continue to enable innovation in the pharmaceutical industry

The pharmaceutical world is in the midst of a sea change. As new deadlines approach for compliance with the Drug Supply Chain Security Act (DSCSA), pharmaceutical companies are racing — and struggling — to comply. The provisions’ end goal — creating an electronic database to identify and trace the distribution of prescription drugs throughout the U.S. and ensuring proper licensing — is clearly an important one for public safety. But that doesn’t mean it’s an easy transition.

Meanwhile, companies still are going about their regular business of developing life-changing pharmacological solutions. To be successful, organisations must maintain a delicate balancing act, and aren’t able to fully direct their attention to any one area.

Fortunately, modern cloud technology, including enterprise resource planning, stands to aid pharmaceutical executives with these challenges. Not only do cloud technologies help companies update their reporting and accounting processes to meet modern standards, they open up new internal efficiencies that allow for unprecedented innovation. Here are a few ways cloud solutions will benefit modern pharmaceutical companies.

Eliminating mindless tasks

Many people become fearful when they hear news reports of increasingly automated jobs. And it’s true that 38% of American jobs are at high risk of automation by 2030. But what that statistic doesn’t show is that many of these jobs don’t require human empathy, creativity and problem-solving skills in the first place. Processes like product tracking can and should be shifted to automated, cloud-based tools, freeing up human employees to concentrate on high-level innovation.

Enabling new technologies

Pharma companies have access to a mind-boggling supply of genomic data; the datasets available for research double in size every eight months and in the past 10 years, the Broad Institute alone has generated 70 petabytes of genomic data from 100,000 genomes — the equivalent of 1.2 billion hours of streaming music files.

There is no conceivable way any pharma organisation can efficiently store or utilise such a mountain of data without the help of cloud storage. And by pairing the cloud with an ERP solution, pharma companies gain even further benefit from the ability to scale large datasets when they connect artificial intelligence and machine-learning models. Cloud and ERP pairings also enable the Internet of Things (IoT) devices that automate processes like supply chain tracking, inventory management and serialisation.

Ensuring access to information

Nearly all companies that rely on computing, pharmaceutical or not, suffer from some sort of data siloing. Information isn’t readily accessible across departments, and employees often don’t know where to look for crucial documents. In fact, nearly half (43%) of workers have avoided sharing a document with a colleague because they couldn’t find it or believed it would take too long to find.

That’s an unacceptable figure, especially in pharma, where documentation and knowledge-sharing are critical for safety, as well as compliance with DSCSA database requirements. Pharmacy chains, hospital networks, regulators and other key parties need instantaneous access to this information — access that’s readily available when data lives in the cloud. Pharmaceutical professionals are able to collaborate across departments and access the information they need to innovate, while resting assured that external facing users have appropriate resources.

When pharmaceutical companies move to cloud-based solutions, including enterprise resource planning, the sky truly is the limit for innovation and discovery.

Databricks raises $400m in series F funding and tops $6bn valuation

Big data and analytics platform provider Databricks has announced a $400 million (£310m) series F funding round – putting the company at a more than $6 billion valuation.

The San Francisco-based firm, which helped create big data processing framework Apache Spark, only closed its series E round to the tune of $250m back in February, revealing significant growth. The company’s remit is focused around ‘unified data analytics’, whereby artificial intelligence (AI) technologies are combined with data processing for more tangible, actionable results.

The series F round was led by Andreessen Horowitz's (a16z) late-stage venture fund, with a wide cast list of supporting players including Coatue Management, Microsoft, and New Enterprise Associates (NEA). a16z has long been a supporter of Databricks, having claimed upon the series E funding that the company was the 'clear winner in the big data platform race'. This time round, a16z general partner David George claimed Databricks' net revenue retention was 'astounding'.

So why the additional funding? In keeping with its open source heritage, Databricks has built three technologies based around data management and machine learning. Delta Lake is a storage layer which aims to bring reliability to data lakes, MLflow is a platform for the end-to-end machine learning lifecycle, while Koalas aims to make Pandas, a Python data science tool, more compatible with big data sets.

Databricks is also looking to put €100 million towards its European centre in Amsterdam, with the company saying its engineering hub had already grown by three times over the past two years. The company is looking at further expansion in the Middle East, Africa, Asia Pacific, and Latin America.

“Data teams at thousands of organisations globally are now leveraging our Unified Data Analytics Platform to solve their toughest problems,” said Ali Ghodsi, Databricks CEO and co-founder in a statement. “Our bets on massive data processing, machine learning, open source and the shift to the cloud are all playing out in the market and resulting in enormous and rapidly growing global customer demand.

“As a result, Databricks is among the fastest growing enterprise software cloud companies on record,” added Ghodsi.

Databricks was placed in the top 20 of the most recent Forbes Cloud 100, published in September.

AWS servers hit by sustained DDoS attack


Keumars Afifi-Sabet

23 Oct, 2019

Businesses were unable to service their customers for approximately eight hours yesterday after Amazon Web Services (AWS) servers were struck by a distributed denial-of-service (DDoS) attack.

After initially flagging DNS resolution errors, customers were informed that the Route 53 domain name system (DNS) was in the midst of an attack, according to statements from AWS Support circulating on social media.

From 6:30pm BST on Tuesday, a handful of customers suffered an outage to services while the attack persisted, lasting until approximately 2:30am on Wednesday morning, when services to the Route 53 DNS were restored. This was the equivalent of a full working day in some parts of the US.

“We are investigating reports of occasional DNS resolution errors. The AWS DNS servers are currently under a DDoS attack,” said a statement from AWS Support, circulated to customers and published across social media.

“Our DDoS mitigations are absorbing the vast majority of this traffic, but these mitigations are also flagging some legitimate customer queries at this time. We are actively working on additional mitigations, as well as tracking down the source of the attack to shut it down.”

The Route 53 system is a scalable DNS that AWS uses to give developers and businesses a way to route end users to internet applications by translating domain names into numeric IP addresses. This effectively connects users to infrastructure running in AWS, such as EC2 instances and S3 buckets.

During the attack, AWS advised customers to update the configuration of clients accessing S3 buckets so that requests specify the region the bucket is in, in order to mitigate the impact. SDK users were also asked to specify the region as part of the S3 configuration to ensure the endpoint name is region-specific.
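For illustration, this is roughly what that region-specific configuration looks like with the Python SDK (boto3); the bucket name, key and region below are hypothetical examples rather than AWS's exact guidance.

```python
import boto3

# Pinning the client to the bucket's own region keeps the endpoint name
# region-specific (e.g. s3.eu-west-1.amazonaws.com) instead of relying on
# the global s3.amazonaws.com endpoint, which was affected during the attack.
s3 = boto3.client(
    "s3",
    region_name="eu-west-1",
    endpoint_url="https://s3.eu-west-1.amazonaws.com",
)

s3.download_file("example-bucket", "reports/q3.csv", "/tmp/q3.csv")
```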

Rather than infiltrating targeted software or devices, or exploiting vulnerabilities, a typical DDoS attack hinges on attackers bombarding a website or server with an excessive volume of access requests. This causes it to undergo service difficulties or go offline altogether.

All AWS services had been fully restored at the time of writing. The attack struck during a separate outage affecting Google Cloud Platform (GCP), although there's no indication the two incidents were connected.

From 12:30am GMT, GCP’s cloud networking system began experiencing issues in its US West region. Engineers then learned the issue had also affected a swathe of Google Cloud services, including Google Compute Engine, Cloud Memorystore, the Kubernetes Engine, Cloud Bigtable and Google Cloud Storage. All services were gradually repaired until they were fully restored by 4:30am GMT.

While outages on public cloud platforms are fairly common, they are rarely caused by DDoS attacks. Microsoft’s Azure and Office 365 services, for example, suffered a set of routine outages towards the end of last year and the beginning of 2019.

One such instance was a global authentication outage towards the end of January this year, which affected US government services and LinkedIn.