AWS’ contribution to Elasticsearch may only further entrench the open source vendor and cloud war

Last week, Amazon Web Services (AWS) announced it was launching an open source value-added distribution for search and analytics engine Elasticsearch. As AWS evangelist Jeff Barr put it, the launch would “help continue to accelerate open source Elasticsearch innovation” with the company “strong believers in and supporters of open source software.”

Yet for industry-watchers and those sympathetic to the open source space, this has been seen as the latest move in a long-running spat between the developers and software vendors on one side, and the cloud behemoths – in particular AWS – on the other. So who is right?

Previous moves in the market have seen a lot of heat thrown in AWS’ direction for, as the open source vendors see it, taking open source code to which they have not contributed and selling software as a service around it. MongoDB, Confluent and Redis Labs were the highest profile companies who changed their licensing to counter this threat, with reactions ranging from understanding through gritted teeth to outright hostility.

In December, Confluent co-founder Jay Kreps outlined the rationale for changing licensing conditions. “The major cloud providers all differ in how they approach open source,” he wrote in a blog post. “Some of these companies partner with the open source companies that offer hosted versions of their system as a service. Others take the open source code, bake it into the cloud offering, and put all their own investments into differentiated proprietary offerings.

“The point is not to moralise about this behaviour – these companies are simply following their commercial interests and acting within the bounds of what the license of the software allows,” Kreps added. “We think the right way to build fundamental infrastructure layers is with open code. As workloads move to the cloud we need a mechanism for preserving that freedom while also enabling a cycle of investment. This is our motivation for the licensing change.”

Redis Labs, when changing its licensing stipulations for the second time last month after developers voiced their concerns, sounded a note of cautious optimism. The company had noted how it was ‘seeing some cloud providers think differently about how they can collaborate with open source vendors.’ Speaking to CloudTech at the time, CEO Ofer Bengal said, AWS aside, “the mood [was] trying to change.”

So whither Amazon’s announcement last week? Several pundits have noted that AWS potentially can’t win in these situations; if the company doesn’t contribute to a particular project it is stripping the technology away, but if it does it is impacting competitors.

AWS VP cloud architecture strategy Adrian Cockcroft – who said on Twitter the company had “proposed to give back jointly at a significant level and [was] turned down” – noted its official stance. Cockcroft had seemingly given short shrift to the moves Redis et al were making, saying there were examples where maintainers were ‘muddying the waters between the open source community and the proprietary code they create to monetise the open source.’

“At AWS, we believe that maintainers of an open source project have a responsibility to ensure that the primary open source distribution remains open and free of proprietary code so that the community can build on the project freely, and the distribution does not advantage any one company over another,” he wrote.

“When the core open source software is completely open for anyone to use and contribute to, the maintainer (and anyone else) can and should be able to build proprietary software to generate revenue,” Cockcroft added. “However, it should be kept separate from the open source distribution in order not to confuse downstream users, to maintain the ability for anyone to innovate on top of the open source project, and to not create ambiguity in the licensing of the software or restrict access to specific classes of users.”

Shay Banon, CEO of Elastic, does not see it the same way. A day after AWS announced Open Distro for Elasticsearch, Banon wrote a missive with three constant themes; keeping things open, not being distracted by overtures from elsewhere, and maintaining a stellar experience for users. “Our commercial code has been an ‘inspiration’ for others,” Banon wrote. “It has been bluntly copied by various companies, and even found its way back to certain distributions or forks, like the freshly minted Amazon one, sadly, painfully, with critical bugs.

“When companies came to us, seeing our success, and asked for [a] special working relationship in order to collaborate on code, demanding preferential treatment that would place them above our users, we told them no,” Banon added. “This happened numerous times over the years, and only recently again, this time with Amazon. Some have aligned and became wonderful partners to us and the community. Others, sadly, didn’t follow through.”

So what happens from here? Can all parties see eye to eye? Stephen O’Grady, principal analyst at RedMonk and a long-time follower of the space, noted both sides of the argument. “The Amazon and Elastic controversy is the product of a collision of models,” O’Grady wrote. “To Banon and Elastic’s credit, Elasticsearch proved to be an enormously popular piece of software. The permissive license the software was released under enabled that popularity… [but] permissive licenses also enable usage such as Amazon’s.

“The logical response for a cloud supplier to the addition of adverse licensing terms will be, in at least some cases, a fork, which is why a move like Amazon’s… was expected and inevitable,” O’Grady added. “It is also why the question of blame is difficult for non-partisans to assign. Both parties are essentially acting as might be expected given their respective outlooks, capabilities and legal rights.”

Ultimately, this is one which will run and run. As O’Grady explained, the status quo is likely to persist. “It is probable that Amazon’s move here will be the first but not the last,” he added. “As it and other cloud providers attempt to reconcile customer demand with the lack of legal restrictions against creating projects like the Open Distro for Elasticsearch, they are likely to come to the inescapable conclusion that their business interests lie in doing so.

“The incentives and motivations of both parties are clear, and understandable and logical within the context of their respective models,” he added. “Models which are and will continue to be intrinsically at odds even as they’re inextricably linked.”

Dropbox enforces three device limit on free accounts

Clare Hopping

18 Mar, 2019

Dropbox appears to be trying to convince users of its service to upgrade to a paid plan by limiting free accounts to a maximum of three linked devices.

The company has imposed the limit only for free account users, while those with Professional and Plus accounts can continue to install it on as many devices as they wish.

People who already have their Dropbox account up and running on more than three devices are apparently still able to access the service across all linked computers, phones and tablets. But the rule came into force at the beginning of March for anyone trying to exceed the three-device threshold.

So if you have a free account with three or more devices already connected, you’ll be invited to either unlink one device or upgrade (with a free 30-day trial) to a premium service.

It’s worth noting that this is potentially a move by Dropbox to push heavy users of its free service into adopting its professional and business grade tiers.

Although it might upset some personal users, it demonstrates the company’s efforts to try and become more of a premium, enterprise-focussed product.

For individuals, it will remain merely a way of storing files, while new collaboration-focused features, such as in-app document editing and note-taking, are better suited to businesses.

Although Dropbox has an estimated 500 million users worldwide, only around 2.5% of those are paying users. Following the company’s move to become a publicly listed company, it needs to start generating revenue for its shareholders.

Upgrading free accounts to paid accounts is a priority, offering those splashing the cash enhanced storage limits as well as the ability to use the app across as many devices as they like.

Continuous compliance, continuous iteration: How to get through IT audits successfully

For most students, exam days are one of the most stressful experiences of their educational careers. Exams are a semi-public declaration of your ability to learn, absorb and regurgitate the curriculum, and while the rewards for passing are rather mundane, the ramifications of failure are tremendous. 

My educational experience indicates that exam success is primarily due to preparation, with a fair bit of luck. If you were like me in school, exam preparation consisted mostly of cramming, with a heavy reliance on hope that the hours spent jamming material into my brain would cover at least 70% of the exam contents.

After I left my education career behind me and started down a path in business technology, I was rather dismayed to find that the anxiety of testing and exams continued, but in the form of IT audits. Oddly enough, the recipe for audit success was remarkably similar: a heavy dose of preparation combined with luck.

It seems that many businesses adhere to my cram-for-the-exam IT audit approach. Despite full knowledge and disclosure of the due dates and subject material, IT audit preparation in most companies I’ve encountered largely consists of ignoring it until the last minute, followed by a flurry of activity, stress, anxiety and panic.

Not surprisingly, there’s a better way to do this. Both simple and complex problems can often be attacked and solved through iteration, including achieving a defined compliance level in complex IT systems. Achieving audit compliance within your IT ecosystem can be an iterative process, and it doesn’t have to be compressed into the five days before the audit is due. Following is a four-step process I use to guide clients through the process of preparing for and successfully completing IT audits.


Define and refine

The first step is to clearly define what we are trying to achieve. Start big-picture and then drill down into something much smaller and achievable. This will accomplish two things: 1) build some confidence that we can do this, and 2) give us a pattern we can “drill up” with to tackle similar, larger problems.

Here is a basic example of starting big-picture and drilling down to an achievable goal: we need to monitor all logs in our organisation (too large); we need to monitor authentication logs in our organisation (still too large); we need to monitor network user authentication logs in our organisation (getting closer); we need to monitor failed network user authentication logs in our organisation (bingo!).

Identify and recognise

Given that we are going to monitor failed user logons, we need a way to do this. There are manual ways to achieve it but given that we will be doing this over and over, it’s obvious that this needs to be automated. Here is where tooling comes into play.  Spend some time identifying tools that can help with log aggregation and management, then find a way to automate the monitoring of failed network user authentication logs.
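As a minimal sketch of what such tooling automates, the extraction step itself is simple. The log format and regular expression below are hypothetical (sshd-style lines); a real deployment would rely on a log aggregation tool rather than an ad hoc script:

```python
import re
from collections import Counter

# Hypothetical sshd-style log lines; real formats vary by system and tool.
FAILED = re.compile(r"Failed password for (?:invalid user )?(\S+) from (\S+)")

def failed_logins(lines):
    """Count failed authentication attempts per (user, source_ip) pair."""
    hits = Counter()
    for line in lines:
        match = FAILED.search(line)
        if match:
            hits[match.group(1, 2)] += 1
    return hits

log = [
    "Mar 18 09:14:01 host sshd[311]: Failed password for root from 10.0.0.5 port 22 ssh2",
    "Mar 18 09:14:07 host sshd[311]: Failed password for invalid user admin from 10.0.0.5 port 22 ssh2",
    "Mar 18 09:15:30 host sshd[320]: Accepted password for alice from 10.0.0.9 port 22 ssh2",
]
counts = failed_logins(log)  # only the two failed attempts are counted
```

The point of the tooling exercise is to get this kind of extraction running continuously over every relevant log source, not just a sample.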

Notify and remediate

Now that we have an automated way to aggregate and manage failed network user authentication logs, we need to look at our (small and manageable) defined goal and perform the necessary notifications and remediations to meet the requirement.  Again, this will need to be repeated over and over, so spend some time identifying automated tools that can help with this process.
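The notification step then reduces to applying a policy over those counts. A minimal sketch, assuming a hypothetical threshold of five failures per account:

```python
def accounts_to_flag(failure_counts, threshold=5):
    """Return account names whose failed-logon count meets the notification threshold."""
    return sorted(user for user, count in failure_counts.items() if count >= threshold)

# Hypothetical per-account tallies produced by the monitoring step.
counts = {"root": 12, "alice": 1, "svc-backup": 7}
flagged = accounts_to_flag(counts)
```

In practice the remediation itself (account lockout, ticket creation, on-call paging) would be driven by the same automated tooling rather than a manual follow-up.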

Analyse and report

Now that we are meeting the notification and remediation requirements in a repeatable and automated fashion, we need to analyze and report on the effectiveness of our remedy and, based on the analysis, make necessary improvements to the process.

The iteration (repetitive process) is simple. The scope and execution of the iteration is where things tend to break down. The key to successful iterations starts with defining and setting realistic goals. When in doubt, keep the goals small. The idea here is being able to achieve the goal repeatedly and quickly, with the ability to refine the process to improve the results. No more cramming for this particular compliance requirement – we are now handling it continuously.

Intel to Sponsor @CloudEXPO Silicon Valley | @Intel @IntelSoftware @ZhannaGrinko #Cloud #CIO #DevOps #Monitoring

Intel is an American multinational corporation and technology company headquartered in Santa Clara, California, in the Silicon Valley. It is the world’s second largest and second highest valued semiconductor chip maker based on revenue after being overtaken by Samsung, and is the inventor of the x86 series of microprocessors, the processors found in most personal computers (PCs). Intel supplies processors for computer system manufacturers such as Apple, Lenovo, HP, and Dell. Intel also manufactures motherboard chipsets, network interface controllers and integrated circuits, flash memory, graphics chips, embedded processors and other devices related to communications and computing.


Xero review: Xero to (almost) hero

K.G. Orphanides

15 Mar, 2019

Comprehensive cloud accounting that's particularly well suited to sales-based businesses

£8.33/£18.33/£22.92 per month (exc VAT)

New Zealand-based cloud accounting specialist Xero is geared up to support HMRC’s new Making Tax Digital VAT payment scheme – which becomes mandatory for all UK businesses with turnover above the £85,000 VAT registration threshold from 1 April 2019.

Xero is among the more expensive SaaS accounting suites around, with three tiers targeting businesses of various sizes. Xero Starter costs £10 per month as standard but severely limits the number of transactions you can process every month: you can send just 5 invoices, enter 5 bills and reconcile 20 bank transactions. This means that it’s only suitable for the very smallest of businesses.

Priced at £22 per month, Xero Standard removes those limits, while for £27.50 per month, Xero Premium adds multicurrency support for SMEs that do business overseas. New subscriber discounts are frequently available for all tiers.

Unlike some of Xero’s rivals, all three tiers support online VAT submission to HMRC. Bolt-on features are also available at all levels, including Payroll at £1 per employee per month, Projects at £5 per user per month and Expenses at £2.50 a month for each user. Note that your accountant will have to be added to Xero, and any bolt-on service you want them to have access to, as an additional user.

Xero review: Getting started

When you sign up to Xero, you’re prompted to add a few details about your company and then taken to the main dashboard, where a guided setup wizard awaits you.

You’re asked a few simple questions to configure your financial year, whether you want to stick with Xero’s default chart of accounts categories or import your own from a previous accounting suite, and are invited to connect your bank to directly import transactions.

The Add Bank Accounts screen – also available via the Bank accounts screen in Xero’s Accounting menu – allows you to connect accounts from a large number of financial institutions that do business in the UK, including PayPal and both business and personal services from the usual high street banks.

However, although a number of online-only banks such as Revolut, TransferWise and HSBC’s well-established First Direct are represented, some of the current wave of digital challenger banks don’t appear. ING, Monzo, and Shine are among those currently missing, as are foreign banks.

If your account uses two-factor authentication, you’ll have to generate a login code every time you sync its transactions with Xero. Otherwise, they’ll be synced automatically every day. If your bank doesn’t come up when you search Xero’s list, you can just select “Add it anyway”, enter your sort code and account number, and manually upload your statements in CSV format.

If you have a foreign bank account, you’ll have to extract the account number and sort code from its IBAN to set it up, after which you can just upload statements as for any other non-connected account.
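IBAN layouts are country-specific, but for UK-format IBANs the fields sit at fixed positions, so the extraction Xero asks for is mechanical. A sketch, using a commonly cited example IBAN (the bank and numbers are illustrative, not a real account):

```python
def uk_iban_parts(iban):
    """Split a UK-format IBAN into (sort_code, account_number).

    UK IBANs are 22 characters: country code (2), check digits (2),
    bank code (4), sort code (6), account number (8).
    """
    iban = iban.replace(" ", "").upper()
    if not (iban.startswith("GB") and len(iban) == 22):
        raise ValueError("not a UK-format IBAN")
    return iban[8:14], iban[14:22]

sort_code, account_number = uk_iban_parts("GB29 NWBK 6016 1331 9268 19")
```

Other countries place (or omit) these fields differently, so for a genuinely foreign account you would need that country's IBAN layout rather than this UK split.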

With your accounts added, you’re next prompted to enter your balances as they were on the date that you want Xero to work from – the beginning of the current month by default. For foreign currency accounts, you’ll have to – slightly obtusely – double-click on the balance field and then add both the balance and confirm the exchange rate you wish to use.

Once you’ve imported some bank transactions, Xero’s tutorial wizard sends you to reconcile them: match them up to your expenses and invoice payments. Unlike some rivals, such as QuickBooks, Xero doesn’t try to automatically classify transactions based on their description or the company involved, which means that your first reconciliation could be quite time-consuming.

Xero review: Invoicing

The next stop on the guided set-up process is particularly useful: adding unpaid invoices and bills to pay. This is an important step when moving your business to any new accounting suite, but it’s nice to have it made explicit. To complete the invoices, you’ll also need to enter your organisation’s contact details as prompted.

You can either enter all your outstanding invoices manually or download a CSV template to help you upload them en masse. To help balance your books, you should also add any invoices that have been paid within the period covered by the bank transactions you imported earlier.

Note that, when using the template, you have to enter a unit number and amount for each invoice rather than simply entering the total in order for the import to complete. While you can conveniently opt to save imported client details when doing a bulk invoice import, if you’re going to create your initial invoices manually, you should set up your Contacts and invoice template – in your Organisation settings – beforehand.

Helpfully, Xero also provides a free portal that your customers can use to view and even pay their invoices immediately via PayPal.

Xero review: Staff, payroll and tax

If you have employees, you’ll be prompted to add them and set up your payroll accounts towards the end of the guided account configuration process – Xero Payroll costs £1 per employee with a minimum fee of £5 per month.

Once you’ve entered each staff member’s basic data – name, date of birth, address and binary gender – you’re prompted to fill out a standard range of information, from employee number and start date to NI number and national holiday group, before you can go about setting up their tax and pension data.

Xero can help you calculate and submit PAYE tax, national insurance contributions and pension filings in the process of managing and paying your staff. There’s a full range of human resources options here, including a request system for time off, complete with support for statutory shared parental leave.

When it comes to VAT, Xero will automatically put together your return based on your month’s transactions and, at the click of a button, file it with HMRC.

Xero review: Time, inventory and expenses

Once you’ve created a payroll calendar for your employees, they can submit timesheets that can be used to calculate pay and integrated into optional project management systems. Xero also supports inventory tracking; you can monitor stock levels of items you buy and sell automatically as you enter bills for your purchases and invoice customers for your sales.

There’s a full expenses system – a bolt-on feature charged at £2.50 per user per month – which allows your staff to easily submit claims. You can set it up to use Xero’s receipt analysis, where staff take photos of receipts for their expenses and these are sent back to Xero for automatic analysis. Alternatively, if you’d rather not have the processing done on Xero’s end, you and your staff can enter the information manually.

As with most online services, there’s an app marketplace, featuring a range of free and paid-for tools to help you connect to third party services, including payment providers, CRM, timesheet and project management tools. You can also connect other Xero services, notably Projects and WorkFlowMax for project management.

Xero review: Interface

While Xero’s main dashboard and menus are clear, as are some of its overview pages such as the Payroll interface, many of its configuration pages look a little outdated due to tiny fonts and text entry areas. They’re not particularly comfortable to work with on a standard 1080p monitor, let alone higher-resolution displays.

Elsewhere, we found raw HTML code visible in an inventory tracking sheet. Another problem is the service’s use of Flash – long deprecated and disabled by default in most browsers – to produce graphs and charts for its sales summary reports.

By comparison, the Xero mobile app looks great and is really easy to use. It doesn’t give you access to all the features of the web app, but provides you with key tools to monitor your business’s financial health, add contacts, record bills and receipts, and create quotes and invoices on the move.

Xero also runs free webinar tutorial sessions that you can sign up for, covering everything from linking your bank accounts to HMRC’s new Making Tax Digital scheme and how it works with the software.

Xero review: Verdict

Xero is a solid and reliable accounting tool – if not always a particularly attractive one, largely due to inconsistent formatting that sometimes looks a little poor on modern displays. We liked the attention to detail in the guided setup process, as well as the integrated inventory system, although Payroll is a bolt-on extra.

Xero is among the more expensive accounting tools you could subscribe to: it costs more than Sage and includes fewer quality-of-life flourishes than the comparably priced rival QuickBooks. The Starter plan is particularly limited by an improbably low allowance of monthly invoices.

Xero’s good at what it does, and is particularly well tailored to sales-oriented businesses that need basic stock management built in, but it’s not our favourite cloud-based accounting solution.

Cloudera looks to be a true multi-cloud home and calls out Amazon as primary competitor

Cloudera posted total revenues of $144.5 million (£108.9m) for its most recent quarter, and while it may have disappointed investors, the company said its ‘enterprise data cloud’ strategy with Hortonworks on board will help turn things around.

Of total revenues for the quarter ending January 31 – up 36% from this time last year – 85% of it came from subscriptions, while the remainder came from services. For the full financial year, revenue was $479.9m, up 28% from the previous year’s figures.

Naturally, the major talking point from Cloudera’s most recent quarter was its $5.2 billion merger with Hortonworks, announced back in October. At the beginning of this year, chief marketing officer Mick Hollison told this publication how the companies were seeing threats on two fronts: the proprietary big data vendors, as well as the public cloud behemoths.

Replying to an analyst question on who their main competitor was, Cloudera CEO Tom Reilly was unequivocal. “Who is our number one competitor? It’s Amazon,” he said. “It’s Amazon’s in-house offerings in the data management and analytic space – and we believe we are well positioned to compete against them.

“Our value proposition is to be an enterprise data cloud company… giving our customers multi-cloud, hybrid cloud is one enduring differentiator,” Reilly added. “And then our capabilities from the edge – our integrated capabilities from the edge to AI – we’re the only company that’s offering that today.”

This was a similar theme Hollison noted in January; the idea that the public cloud providers are never going to be truly multi-cloud. Of course, the big guys do occasionally spend time together instead of butting heads, such as with the machine learning library Project Gluon launched between AWS and Microsoft in 2017, and there are certain migration paths, but it’s not to the level that Cloudera can offer.

The concept of the enterprise data cloud is one that not only includes supporting every possible cloud implementation and analytic capability, but also focusing purely on an open philosophy, from storage, to compute, to integration. Reilly also noted this changing customer demand.

“Enterprises are demanding a modern analytic experience across public, private, hybrid and multi-cloud environments. They want the agility, elasticity and ease of use of cloud infrastructure. But they also want to run analytic workloads wherever they choose, regardless of where their data may reside,” said Reilly. “They want open architectures and the flexibility to move those workloads to different cloud environments, public or private to avoid vendor lock-in.

“In summary, what enterprise customers want is an enterprise data cloud,” he added. “This is a new term for our industry and a new market category we are uniquely positioned to lead.”

Investors and analysts may see this differently – at least in the short-term. Benzinga noted analysts either were at neutral or outperform with unchanged or lowered price targets, while MarketWatch said the first year of the merged entity “looks like it’s going to be an adjustment period all around.”

You can read the full Cloudera fourth quarter and fiscal year results here.

Read more: The Cloudera-Hortonworks $5.2bn merger analysed: Challenges, competition and opportunities

Microsoft open-sources Azure compression technology

Keumars Afifi-Sabet

15 Mar, 2019

Microsoft hopes that open sourcing the compression technology embedded in its Azure cloud servers will pave the way for the technology’s adoption into a range of other devices.

The company is making the algorithms, hardware design specifications and the source code behind its compression tech, dubbed Project Zipline, available for manufacturers and engineers to integrate into silicon components.

Microsoft announced this move to mark the start of the Open Compute Project’s (OCP) annual summit. Microsoft is a prominent member of the programme, which was started by Facebook in 2011 and includes the likes of IBM, Intel, and Google.

Project Zipline is being released to the OCP to combat the challenges posed by an exploding volume of data that exists in the ‘global datasphere’, in both private and public realms, the company said. Businesses are also increasingly finding themselves burdened with mountains of internal data that should be better managed and utilised.

“The enterprise is fast becoming the world’s data steward once again,” said Microsoft’s general manager for Azure hardware infrastructure Kushagra Vaid.

“In the recent past, consumers were responsible for much of their own data, but their reliance on and trust of today’s cloud services, especially from connectivity, performance, and convenience perspectives, continues to increase and the desire to store and manage data locally continues to decrease.

“We are open sourcing Project Zipline compression algorithms, hardware design specifications, and Verilog source code for register transfer language (RTL) with initial content available today and more coming soon.

“This contribution will provide collateral for integration into a variety of silicon components (e.g. edge devices, networking, offload accelerators etc.) across the industry for this new high-performance compression standard.”

According to the firm, the compression algorithm yields compression ratios up to twice as high as the widely used Zlib-L4 64KB model. Contributing RTL at this level of detail, Vaid added, sets a new precedent for frictionless collaboration and can open the door for hardware innovation at the silicon level.
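Project Zipline itself ships as hardware design collateral rather than a software library, but the metric being compared is easy to illustrate with Python's standard zlib binding: the compression ratio is simply original size divided by compressed size. The sample data here is deliberately repetitive, so the figure is far better than typical real-world inputs would achieve:

```python
import zlib

def compression_ratio(data, level=6):
    """Original size divided by compressed size; higher is better."""
    return len(data) / len(zlib.compress(data, level))

# Highly repetitive sample data compresses extremely well; real-world
# ratios depend heavily on the input.
sample = b"GET /index.html HTTP/1.1\r\nHost: example.com\r\n" * 200
ratio = compression_ratio(sample)
```

Vendor ratio claims are always relative to a stated baseline and corpus, which is why Microsoft's comparison names both the algorithm (Zlib-L4) and the window size (64KB).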

Members of the OCP will be able to run their own Project Zipline trials and contribute to the further development of the algorithm, and its hardware specifications.

Microsoft hopes that its technology will be integrated into a variety of silicon components and devices, in the future. These could range from smart SSDs to archival systems, to cloud appliances, as well as IoT and edge devices.

Making its compression technology available represents Microsoft’s latest contribution to OCP, more than five years after the company first began contributing to the open source project. Incremental contributions have been made ever since, with the company, for instance, delivering its Open CloudServer specs to the project in October 2014.

Mercedes-AMG migrates to TIBCO for enhanced data analysis

Clare Hopping

15 Mar, 2019

Mercedes-AMG Petronas Motorsport has opted to take on TIBCO’s data analytics services, including its Spotfire and Data Science technologies to help the company make better decisions when refining its race cars.

Mercedes-AMG can analyse terabytes of dynamic data from races via its Connected Intelligence platform, helping it uncover anomalies and make informed decisions when tweaking car performance.

“Mercedes-AMG Petronas Motorsport are committed to developing the car throughout the season to stay ahead of competition,” said Thomas Been, chief marketing officer at TIBCO. “With new regulation changes in place for the 2019 season, data will be key to success. We wish them success in 2019, as we work together to enhance performance, improve reliability, and reduce lead-times.”

Mercedes-AMG Petronas Motorsport has tested millions of simulations with TIBCO Spotfire, inputting variations to visualise how changes will impact performance. The company’s Data Science platform enables the company to create and validate data science models used to analyse the information.

The autosport business said it chose to use TIBCO’s data analysis platform because it can help uncover the most relevant insights into the racing car company’s performance, both throughout the season and beyond.

“TIBCO technology not only provides our team with unique insights about the W10, but also gives us the necessary, real-time data we need to take the right risks at the right time to make decisions that ultimately enhance performance and help win races,” said Toto Wolff, team principal and chief executive officer, Mercedes-AMG Petronas Motorsport.

“The TIBCO Connected Intelligence platform plays a key role in helping us achieve success, and we are excited for it to continue to help the team in 2019.”

Google employee uses cloud to break Pi world record

Clare Hopping

15 Mar, 2019

A Google employee has broken a Guinness World Record by calculating the value of Pi to an astonishing 31.4 trillion digits using Google’s Compute Engine. Emma Haruka Iwao smashed the previous world record, which stood at 22.4 trillion digits.

Working out a calculation on that scale took 25 virtual machines a mammoth 121 days to complete and, although the announcement was timed for World Pi Day, the calculation itself was finished back in January.

To come up with the number, the system had to process 170 terabytes of data, which is the same amount of information held in the complete Library of Congress print collection.

The calculation was achieved using y-cruncher, a benchmark tool created by Alexander J. Yee, deployed to a Google Compute Engine virtual machine cluster. The tool computed the digits using the Chudnovsky formula, with the result verified using Bellard’s formula and the BBP formula.
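The record-scale run depends on y-cruncher's heavily optimised arithmetic, but the Chudnovsky series itself can be sketched in a few lines of Python with the decimal module. Each term contributes roughly 14 further correct digits, so even this naive loop converges very quickly for modest precision:

```python
from decimal import Decimal, getcontext

def chudnovsky_pi(digits):
    """Approximate pi to roughly `digits` digits via the Chudnovsky series."""
    getcontext().prec = digits + 10          # guard digits for rounding
    C = 426880 * Decimal(10005).sqrt()
    M, L, X, K = 1, 13591409, 1, 6           # series state for term k = 0
    S = Decimal(L)
    for i in range(1, digits // 14 + 2):
        M = M * (K**3 - 16 * K) // i**3      # remains an exact integer
        L += 545140134
        X *= -262537412640768000
        S += Decimal(M * L) / X
        K += 12
    return C / S

pi = chudnovsky_pi(40)
```

This direct summation is only sensible for tens or hundreds of digits; record attempts use binary splitting and disk-backed arbitrary-precision arithmetic to reach trillions.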

Throughout the process, Google took disk snapshots at intervals so it could process the calculation, record it, then dispose of the data; something not possible using traditional infrastructure.

“I was very fortunate that there were Japanese world record holders that I could relate to,” Iwao said of the achievement. “I’m really happy to be one of the few women in computer science holding the record, and I hope I can show more people who want to work in the industry what’s possible.”

Anyone wanting to get their hands on the data can do so by obtaining the snapshots on Google Cloud Platform. It will cost $40 per day to keep a cloned disk, but you’ll need to be utilising the us-central1, us-west1, and us-east1 regions if you want to use it in your own calculations.

“The world of math and sciences is full of records just waiting to be broken,” Iwao said. “We had a great time calculating 31.4 trillion π digits, and look forward to sinking our teeth into other great challenges. Until then, let’s celebrate the day with fun experiments.”

Google Cloud officially opens Zurich data centre region

Google Cloud has opened the doors to its Zurich data centre region, making it the sixth region for the provider in Europe and nineteenth overall.

The Zurich site, which was first announced in May, will have three availability zones and is available with the standard set of Google products, including Compute Engine, Google Kubernetes Engine, Cloud Bigtable, Cloud Spanner, and BigQuery.

Google is also looking at providing a wider adoption experience through Transfer Appliance, which enables large amounts of data to be transferred to Google Cloud Platform (GCP), as well as private network Cloud Interconnect.

The company’s European footprint appears much more assured, with Zurich joining Belgium, Finland, Frankfurt, London and the Netherlands. Upcoming regions are set to open in Osaka and Jakarta.   

As ever with Google announcements, the company rolled out a series of customers, including Swiss AviationSoftware and University Hospital Balgrist. Perhaps the most interesting of these came from a partner, in this instance Google-specific digital transformation enabler Wabion. “We have customers that are very interested in Google’s innovation who haven’t migrated because of the lack of a Swiss hub,” said Michael Gomez, Wabion co-manager. “The new Zurich region closes this gap, unlocking huge opportunities for Wabion to help customers on their Google Cloud journey.”

Zurich may have been considered an overdue area for cloud data facilities to be housed, given it hosts Google’s largest engineering offices outside of the US. More than 2,000 employees, or ‘Zooglers’, are employed in the city, with a particular focus on natural language processing and machine intelligence.

You can read the full announcement here.