All posts by Connor Jones

Slack’s new integrations signal the end of its war on email


Connor Jones

25 Apr, 2019

Slack has added new features to its collaboration platform that aim to embrace the power of email, the very tool it set out to kill off more than five years ago.

Rather than having the two services operate in isolation, those in your organisation who aren’t on Slack, or who have just started and haven’t yet received credentials, can now still benefit from its collaboration features.

Directly addressing an individual in Slack via its ‘@-mention’ feature now extends to email, with notifications appearing in the employee’s inbox if they are not on the platform or logged in.

Replies sent from an employee’s inbox will beam straight back to the relevant channel, just as if the interaction were taking place on a single platform.

Admins will need to tweak their company’s account to allow outside users to communicate with those inside the organisation in this way, but it’s a step closer to being a more unified collaboration tool.

This builds on Slack’s existing Outlook and Gmail functionality, which allows users to forward emails into a channel where members can view and discuss the content and plan responses from inside Slack.

Another interesting announcement, made at the company’s Frontiers conference in San Francisco, relates to its ‘Workflow Builder’ tool, which will enable any user within Slack to build apps for routine functions without coding knowledge.

The tool, which is launching later this year, will be capable of automating functions such as completing and filing a benefits request form to HR, or sending messages to help new starters find the right channels to join, saving colleagues from sacrificing time to give a platform tutorial.

If this sounds familiar, that’s because it is: Slack announced two new toolkits back in February that would also allow non-coders to build apps within Slack. Workflow Builder, however, appears to be geared towards routine automation rather than the platform’s more technical backend functions.

Slack’s integration with Outlook and Google Calendar is also getting stronger: any status you set within either calendar will be automatically synced to Slack, such as being away from the office for an event or having a meeting booked in.

With many business meetings now virtual, integration with calendars will allow other users to see who your meeting is with and provide joining options directly within Slack, thanks to partnerships with Hangouts, Zoom and Webex.

There is also a change coming to Slack’s search function which, although fast and expansive, isn’t always the most intuitive or organised. Slack aims to address this with new features that make it quicker to view unread messages, speed up navigation between channels to find the relevant person, and improve sifting through channel archives. These features will be available in the coming weeks.

Slack’s five-year slog of a battle with email has proved fruitless; email still exists and seems to be here to stay. Google has invested in it more heavily of late, despite the wide adoption of Slack’s platform, which depends on the virality of its freemium model.

View from the airport: Google Cloud Next 2019


Connor Jones

12 Apr, 2019

Google made a raft of announcements at this year’s Next event in San Francisco, the most noteworthy of which was Anthos. Formerly Cloud Services Platform, it becomes the first multi-cloud platform of its kind and will appeal heavily to enterprise customers, marking a shift away from the developer focus of recent years towards business leaders.

It’s a step in the right direction for Google, as it’s the C-suite that it needs to be targeting in order to accelerate the platform’s adoption, and most will probably agree that the focus has been on the developers for too long. With Anthos, Google has set its sights on the future, heeding the advice of analysts who say that 88% of businesses will undergo a multi-cloud transformation in the next few years. Google’s new multi-cloud platform, “simply put, is the future of cloud”, according to Urs Hölzle, Google’s senior vice president of technical infrastructure.

But if you look past all the marketing spiel, you stop seeing the innovative new platform and notice that by releasing Anthos as its flagship product, Google has essentially taken a step down and conceded that it’s the second cloud provider in the market. If you can’t beat them, though, perhaps use them to help you scale?

“They’ve taken on a very interesting approach,” said Sid Nag, research director at Gartner. “They want to be the second cloud which is kind of interesting because they don’t want to compete with the 40-50 pound gorilla [AWS] so they’re basically saying it’s a multi-cloud world and they’re pushing the multi-cloud narrative so they can come in as the second cloud… and then land and expand so I think that’s a pretty smart strategy”.

One area where Google seems to be leading the charge is security. Some 30 new products and services were announced at this year’s event, bringing the total to more than 100 in the past year alone.

It’s clear the company is taking security seriously – as it should – and is keen to show its customers that everything the company offers has security baked in from the start. The Cloud Security Command Centre looks like a nice piece of kit for any cloud platform admin to use and has the added benefit of Google’s industry-leading machine learning (ML) capabilities. This, I’m sure, will prove an attractive selling point as it helps users detect malicious activity ahead of an attack. 

Building on the AI theme, the automated functions of all the new features – from AutoML advancements to intelligent event threat detection – continue to provide Google with a serious USP to draw in customers. Google has made it easier than ever to run an advanced cloud environment in a secure and intuitive way. It wants to leave the coding and app creation to the developers and let the admins do their job: driving the business forward.

For example, using ML-driven Connected Sheets, businesses could oversee their distribution channels and see where operations were being held up. It would be easy to detect that warehouse stock wasn’t leaving Brazil on time because major road works were taking place, so the business could simply re-route the drivers’ navigation systems to get things back on track.

The challenge Google will face in the next year is that of scale; the company has only just started to win over the hearts of business leaders. Anthos garnered the biggest roar I heard all week, but Google needs to build on that, proving to its customers that it’s really serious about enterprise.

Google also faces the challenge of partnering with the smaller software vendors. Over the week, Google announced partnerships with the biggest bulls in the pen: Cisco, Salesforce, HSBC – I could go on. But what it must now do – given it has committed to this containerised and stateless approach with Anthos – is show that it can work with the smaller ISVs.

“The interesting part will be seeing how it can start working with smaller ISVs and convert those apps into Google containers and its containerised [environment] as that will be the challenge – that’s how it will grow its business,” said Nag. Smaller apps are the ones that will be modernised in the future, so monetising these will be key to Google’s success years down the line.

It seems as though the new CEO Thomas Kurian is continuing in the hugely successful footsteps of his predecessor Diane Greene – the woman who drove the company to be enterprise-ready in just two years instead of the forecast 10. Conceding the second-place spot to Amazon might be a good move for the company, filling a gap in the market that enterprise customers so desperately needed filling.

Hybrid cloud is something that many organisations wrestle with, and they finally have an answer to the headache they’ve faced for years. We’re excited to see how the company scales in the next year – the first under Kurian’s reign – and whether it’s able to tackle the challenges it faces.

Google announces new AI platform for developers


Connor Jones

11 Apr, 2019

Google has launched a beta version of its AI platform, providing developers, data scientists and data engineers with an end-to-end development environment in which to collaborate on and manage machine learning (ML) projects.

While ML is already employed in many cloud instances to sift through logs looking for data that could indicate malicious activity, Google announced a range of additional capabilities for its AutoML product – the same one introduced last year which aimed to get companies with limited ML know-how building their own business-specific ML products.

“We believe AI will transform every business and every organisation over the course of the next few years,” said Rajen Sheth, director of product management at Google Cloud AI.

“We have focussed on building an AI platform that provides a very deep understanding of a number of fundamental types of data: voice, language, video, images, text and translation,” added Thomas Kurian, CEO of Google Cloud. “On top of this platform, we have built a number of solutions to make it easy for our customers and analysts around the world to build products”.

Infrastructure diagram of Google’s new AI platform

When AutoML launched last year, it let inexperienced workers build ML-driven tools for image classification, natural language processing and translation, specific to their businesses and the data they hold, with little-to-no training.

Now Google has announced three new AutoML variations: AutoML Tables, AutoML Video and AutoML Vision Edge. Tables allows customers to take massive amounts of data (hundreds of terabytes were cited), ingest it through BigQuery, and use it to create actionable insights into business operations, such as predicting downtime.

It’s all codeless, too. Data can be ingested and then fed through custom ML models created in days instead of weeks by developers, analysts or engineers using an intuitive GUI.
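
For readers who do want to work with the underlying data directly, the BigQuery ingestion step can be done with the standard client library. Below is a minimal sketch; the project, dataset and column names are invented for illustration and are not part of a real AutoML Tables workflow.

```python
# Hypothetical sketch: querying operational data out of BigQuery, the kind
# of dataset AutoML Tables would train on. All names are placeholders.
from google.cloud import bigquery

client = bigquery.Client(project="my-gcp-project")  # assumed project ID

query = """
    SELECT machine_id, sensor_reading, downtime_minutes
    FROM `my-gcp-project.operations.machine_logs`
    WHERE DATE(event_time) >= DATE_SUB(CURRENT_DATE(), INTERVAL 30 DAY)
"""

# Iterate over the result rows; in practice this data would feed a model.
for row in client.query(query).result():
    print(row.machine_id, row.sensor_reading, row.downtime_minutes)
```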

With Video, Google is targeting any organisation that hosts videos and needs to categorise them automatically, applying labels such as ‘cat videos’ or ‘furniture videos’. It can also help automatically filter explicit content and, much as it did with ITV recently, help broadcasters detect and manage traffic patterns on live broadcasts.

Vision was announced last year to help developers with image recognition. Vision Edge targets devices such as connected sensors or cameras, which struggle with latency issues; it harnesses edge TPUs for faster inference. LG CNS, an outsourcing arm of LG, uses the tool on the assembly line to detect issues with manufacturing products like LCD screens and optical films.

The new AutoML tools will also be able to take visual data and turn it into structured data, according to Sheth, speaking at a press conference.

“One example of this is FOX Sports in Australia – they’re using this to drive viewer engagement – they’re putting in data from a cricket game and using that to predict when a wicket will fall with an amazing amount of accuracy and then it sends a notification out via social media telling followers to come and see it,” he said.

Sid Nag, research director at Gartner, said that while Google has effectively admitted to being the second-placed cloud provider with the introduction of Anthos, what it is doing well is leading the AI charge.

“They’re (Google Cloud) very strong in AI and ML, no-one’s doubted that,” Nag said in an interview with Cloud Pro. When asked if customers would choose Google Cloud specifically based on AI as its USP, Nag said: “Yeah, I think so, that and big data and analytics, you know, they’ve always been very strong in that area”.

How are companies benefitting from cloud AI and AutoML?

Binu Mathew, senior vice president and global head of digital products at Baker Hughes, came on stage after Sheth to talk to us about how his team of developers use Google’s AI tools in the oil and gas industry specifically.

He said that when an offshore oil platform goes down, it costs the company about $1m per day. By using ML, however, the oil company can teach its tools the signs of normal operation, so that when readings start to go awry the issue can be fixed before any costly downtime occurs.
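
To make the idea concrete, here is a minimal sketch of that kind of anomaly detection using scikit-learn; this is an illustration of the general technique, not Baker Hughes’ actual (undisclosed) pipeline, and the sensor figures are toy data.

```python
# Train on readings that represent normal operation, then flag departures
# from that baseline before they become downtime. Toy data throughout.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(0)
normal_readings = rng.normal(loc=100.0, scale=5.0, size=(1000, 1))

model = IsolationForest(contamination=0.01, random_state=0)
model.fit(normal_readings)  # learn what "normal" looks like

new_readings = np.array([[101.2], [99.4], [162.8]])  # last value is way off
flags = model.predict(new_readings)  # 1 = normal, -1 = anomaly
print(flags)  # expected: [ 1  1 -1]
```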

Since using Google’s AI tools, Baker Hughes has experienced a 10x improvement in model performance, a 50% reduction in false positive predictions and a 300% reduction in false negatives.

Sheth said that AI will also be part of Kurian’s and Google Cloud’s hybrid cloud vision: you can deploy ML across GCP, on-prem, on other cloud platforms and at the edge. This is because it runs on Kubeflow, the open-source AI framework that runs anywhere Kubernetes does, and it can all be managed by Anthos, Google’s new multi-cloud platform which, “simply put, is the future of cloud”, said Urs Hölzle, Google’s senior vice president of technical infrastructure.

Speaking at a subsequent, more intimate session than the keynote, Marcus East, CTO at National Geographic, told the crowd about the company’s cloud transformation and the mission-critical migration of its 20-year-old legacy on-prem photo archive system to a GCP-based archive in just eight weeks.

He also briefly mentioned the company’s work with AutoML, so Cloud Pro caught up with East and a few of the engineers behind the company’s AI work after the event to hear more about its vision for future cloud AI implementation, specifically with AutoML Vision.

Speaking exclusively to Cloud Pro, Melissa Wiley, vice president of digital products at National Geographic, said that one of the ideas it is exploring is advanced automated metadata tagging: assigning labels not just for specific animals, but for specific species, across the circa two million images stored in its archive.

That starts with AutoML Vision’s automatic image recognition. Using machine learning, Nat Geo can train its industry-specific ML tool on one species of tiger and then identify the same species in every other photo in which it appears, according to Wiley.
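
As a rough illustration of what such a prediction call looks like, here is a hedged sketch using the google-cloud-automl Python client; the project, region and model ID are placeholders, not Nat Geo’s actual setup.

```python
# Hedged sketch of an AutoML Vision prediction request. Project, region
# and model ID are invented placeholders.
from google.cloud import automl

client = automl.PredictionServiceClient()
model_path = client.model_path("my-project", "us-central1", "tiger-model-id")

with open("field_photo.jpg", "rb") as f:
    payload = automl.ExamplePayload(image=automl.Image(image_bytes=f.read()))

response = client.predict(name=model_path, payload=payload)
for annotation in response.payload:
    # Each annotation carries a predicted label and a confidence score.
    print(annotation.display_name, annotation.classification.score)
```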

“When our photographers are out in the field, they might be up to their waist in mud, avoiding mosquitos and being chased by wild creatures – they don’t have time to take a great photo and then turn to their laptop and fill in all the metadata,” said East. “So this idea that we could somehow use AutoML and the BroadVision API to really [make those connections] and enrich the metadata in those images is the starting point. Once we’ve done that, we can give our end consumers a better experience.”

“That’s the next stage for us; we can see the potential to harness the power of these cloud-native capabilities, to build personalised experiences for consumers. For example, we could say we know Connor likes snakes and videos of animals eating animals, let’s give him that experience,” he added.

Wiley also mentioned the enterprise potential, perhaps offering the technology to schools, libraries or even other companies so Nat Geo can help them identify animals as well. “There are a million ideas we could talk about,” she said.

What is Anthos? Google’s brand new multi-cloud platform


Connor Jones

10 Apr, 2019

Google has revealed that its Cloud Services Platform has been rebranded as Anthos, a vendor-neutral app development platform that will work in tandem with rival cloud services from Microsoft and AWS.

It was developed as a result of customers wanting a single programming model that gave them the choice and flexibility to move workloads to both Google Cloud and other cloud platforms such as Azure and AWS without any change.

The news was announced during Thomas Kurian’s first keynote speech at a Google Cloud Next event as the company’s new CEO, following Diane Greene’s departure in November last year.

The announcement was met with the loudest cheer of the day from the thousands-strong crowd in attendance who seemed to share the same enthusiasm as the industry analysts who have been trying to convince Google that 88% of businesses will undergo a multi-cloud transformation in the coming years.

That could be some way off though, considering global market intelligence firm IDC said last year that less than 10% of organisations are ready for multi-cloud, with most sticking to just one vendor.

Anthos will allow customers to deploy Google Cloud in their own datacentres for a hybrid cloud setup, and to manage workloads within their datacentre, on Google Cloud or on other cloud providers, in what’s being described as the world’s first true cloud-agnostic setup.

“The only way to reduce risk is by going cloud-agnostic”, at least that’s according to Eyal Manor, VP of engineering at Anthos. He said that managing hybrid clouds is too complex and challenging, and that this is the reason why as much as 80% of workloads are still not in the cloud.

As it’s entirely software-based and requires no special APIs or time spent learning different environments, Manor said you can install Anthos and start running it in less than 3 hours.

It became generally available on Tuesday, both on GCP with Google Kubernetes Engine (GKE) and in customers’ datacentres with GKE On-Prem.

The announcement marks Google’s apparent move to make managing infrastructure much simpler for its customers so they can focus on improving their business.

By using Anthos, enterprises can depend on automation so they can “focus on what’s happening further up the stack and take the infrastructure almost for granted”, said Manor. “You should be able to deploy new and existing apps running on-premise and in the cloud without constantly having to retrain your developers – you can truly double down on delivering business value”.

Some of the world’s leading businesses have already been given early access to the platform, such as HSBC, which needs a managed cloud platform for its hybrid cloud strategy.

“At HSBC, we needed a consistent platform to deploy both on-premises and in the cloud,” said Darryl West, group CIO, HSBC. “Google Cloud’s software-based approach for managing hybrid environments provided us with an innovative, differentiated solution that was able to be deployed quickly for our customers.”

GCP customers have already invested heavily in their infrastructure and forged relationships with their vendors, which is why Google has launched Anthos with an ecosystem of leading vendors so users can start using the new platform from day one.

Cisco, VMware, Dell EMC, HPE, Intel and Atos are just a few that have committed to delivering Anthos on their own hyperconverged infrastructure for their customers.

Nvidia delivers cutting-edge graphics rendering to any Google Cloud device


Connor Jones

10 Apr, 2019

Nvidia’s Quadro Virtual Workstation (QvWS) will be available on Google Cloud Platform (GCP) by the end of the month, marking the first time any platform has supported RTX technology for virtual workstations.

Cloud workloads are becoming increasingly compute-hungry, and support for Nvidia’s QvWS is expected to accelerate the development and deployment of AI services and vastly improve batch rendering from any device in an organisation.

Enterprises that rely on powerful graphics rendering would otherwise have to invest huge amounts in the hardware needed to perform such compute-heavy tasks on-premise. Using GPU-enabled virtual workstations, businesses can also forget about the cost and complexity of managing datacentres for the task.

Instead of running up to 12 T4 GPUs in their own on-premise infrastructure, an endeavour that would cost thousands in investment, businesses can spend much less by consuming the same capability as infrastructure-as-a-service via GCP.

“You can spin an instance up [on GCP] for less than $3 per hour,” said Anne Hecht, senior director, product marketing at Nvidia. “The QvWS is about 20 cents per minute and then you need to buy the other infrastructure depending on how much storage, memory and CPU that you want”.

That cost can drop during ‘peak times’ through a process called pre-emption, whereby if a customer is willing to lose the service within an hour, for example when assigning resources to a workload that will complete quickly, the instance can be rented for half the price.
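
A quick back-of-the-envelope calculation shows what those numbers imply. The $3-per-hour figure comes from Hecht’s quote and the 50% discount is the ‘half the price’ claim above; the eight-hour workload is invented for illustration, and real GCP rates vary by region and configuration.

```python
# Rough cost sketch based on the figures quoted in the article only.
ON_DEMAND_PER_HOUR = 3.00   # "less than $3 per hour", per Hecht
PREEMPTIBLE_DISCOUNT = 0.5  # "rented for half the price"

hours = 8  # a hypothetical day of batch rendering
on_demand = ON_DEMAND_PER_HOUR * hours
preemptible = on_demand * PREEMPTIBLE_DISCOUNT

print(f"On-demand: ${on_demand:.2f}, pre-emptible: ${preemptible:.2f}")
# On-demand: $24.00, pre-emptible: $12.00
```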

Edward Richards, director & solution architect at Nvidia, told Cloud Pro that the service can be accessed from any device that can connect to GCP.

“You can plug your own tablet, plug in your mouse of choice, keyboard of choice – you just don’t think about it,” he said. “I use one on my desk at work, I’ve almost chained all my day-to-day to it and every once in a while I forget that I’m actually remoting to it from the other side of the country… it’s just that seamless”.

Nvidia is the biggest name in the graphics processing market, and its flagship Turing architecture is used in its T4 GPU, which can handle graphics-hungry processes such as real-time ray tracing, AI and simulation. It’s the first time ray tracing has been made available for graphics processing in a cloud instance.

Azure customers can already utilise Nvidia’s graphics processing through VMs, but only on the older Volta architecture with its V100 GPU.

The virtual workstations will benefit businesses in more sectors than just games development. Engineers and car manufacturers run computer-aided design (CAD) applications that are critical to their businesses’ success, while video editors and broadcasters also stand to benefit from high-performance graphics processing while away on set.

“When spikes in production demand occur, particularly around major broadcast events like the Olympic Games or the Soccer World Cup, time and budget to set up new temporary editors are a big problem,” said Alvaro Calandra, consultant at ElCanal.com. “With Quadro vWS on NVIDIA T4 GPUs in the GCP marketplace, I can use critical applications on demand like Adobe Premiere Pro, apply GPU-accelerated effects, stabilise, scale and colour correct clips with a native, workstation-like experience.”

Nvidia’s QvWS has spent the last few months in alpha and private beta phases, and it will be made generally available on GCP at the end of the month.

Android phones become Google’s most secure form of MFA


Connor Jones

10 Apr, 2019

Google Cloud has revealed that Android devices can now be used as a Titan authentication key in what’s seen as a major push to protect user accounts from online scams.

Working much like Google’s Titan key, which is built in accordance with FIDO standards, your phone can now act as the most secure version of multi-factor authentication (MFA) yet.

Other MFA methods, such as confirmation texts and mobile apps, have come under scrutiny as they can still be exploited by phishers who can trick users into helping them access their accounts.

The key keeps a list of phishing websites that Google is aware of and, if you visit one, the security key built into your Android phone will block you from handing over login credentials to phishers.

Google calls the new security standard ‘phone-as-a-security-key’ (PaaSK); the phone connects to your device via this Google-built protocol, which itself runs on top of Bluetooth to create a three-pronged layer of protection.

The security also works through proximity: so long as your phone is connected to your device, say a laptop, the device will recognise you as both the user attempting to log in and the owner of the account’s corresponding security key. Instead of waiting for a text, a security screen will automatically appear on your phone requiring you to either hold down a volume button on a Google phone or press an on-screen button on any other Android device.

The advantage is that although attackers can use your phone number to get past an SMS-based 2FA barrier, they would find it much harder to get their hands on your phone and stay in close proximity to your computer.

Google has said that only Android devices running version 7.0 or later will support the new PaaSK platform at launch, but it can be used with all major desktop operating systems, including Windows, macOS and Chrome OS.

“We’re focussed on Android first, but it’s not out of the realms of possibility that in the future there will be something for iOS, at least for Google accounts,” said Sam Srinivas, product management director at Google Cloud.

You’ll be able to associate as many Google accounts with the PaaSK as you wish, but you must be logged into the correct key on the phone before making the login attempt in a browser.

Although Google says it blocks 99.9% of all fraudulent log-in attempts on its users’ accounts, there is still a 0.1% issue regarding cases of phishing, keylogging and data breaches – cases where the attacker has the correct password, making it difficult to differentiate between a genuine and fraudulent attempt.

Google chose to implement FIDO in its most recent push against phishing attacks because, of all the MFA methods (namely SMS/voice, backup codes and authenticator apps), FIDO has proved the only phishing-resistant one.
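
The property that makes FIDO phishing-resistant is that each assertion is cryptographically bound to the origin the authenticator sees. Below is a toy sketch of that idea in Python using the cryptography library; it illustrates the concept only and is not Google’s actual PaaSK or FIDO protocol.

```python
# Toy illustration of origin binding: the authenticator signs the server's
# challenge together with the origin it sees, so a signature produced on a
# phishing domain never verifies against the real site's expected origin.
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey
from cryptography.exceptions import InvalidSignature

authenticator_key = Ed25519PrivateKey.generate()
public_key = authenticator_key.public_key()  # registered with the real site

def sign_assertion(challenge: bytes, origin: str) -> bytes:
    return authenticator_key.sign(challenge + origin.encode())

challenge = b"random-server-challenge"

# Legitimate login: signed over the real origin, so verification succeeds.
good_sig = sign_assertion(challenge, "https://accounts.google.com")
public_key.verify(good_sig, challenge + b"https://accounts.google.com")

# Phishing login: signed over a lookalike origin, so verification fails.
phish_sig = sign_assertion(challenge, "https://accounts.g00gle.com")
try:
    public_key.verify(phish_sig, challenge + b"https://accounts.google.com")
except InvalidSignature:
    print("Phishing-site assertion rejected")
```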

According to Google’s own assessments, user accounts become 10x more vulnerable if credentials are exposed in a data breach, 40x more vulnerable when threatened by keylogging, and 500x more vulnerable if compromised by a phishing scam.

Google Cloud doubles down on security at Next


Connor Jones

10 Apr, 2019

Google has announced 30 security features for its Google Cloud Platform (GCP) at Google Cloud Next 2019, building on a two-year-long commitment to making its platforms more robust.

Prior to today’s announcement, Google Cloud had already invested heavily in its security systems, launching more than 70 products and services in 2018, and it is now adding to that tally.

The company split its announcements over three different sectors:

  • Security of the cloud: referring to the infrastructure that keeps GCP secure such as datacentres, network cables and its Titan chip
  • Security in the cloud: features that allow customers to build secure applications for their businesses in their cloud environment e.g. encryption key management
  • Security services: direct security-as-a-service solutions that Google is starting to provide

Security of the cloud

“One of the things we deeply believe in at Google is that transparency breeds trust,” said Michael Aiello, product management director at Google Cloud, adding that Google wants to reduce the number of mechanisms that customers have to trust Google with.

Access Transparency has been available in GCP for some time, and is now released in beta for G Suite. It provides the customer with near real-time logs whenever a Google engineer accesses their environment to correct an issue they reported. Previously, a Google engineer in this case could self-authorise access to the environment, but now they must get authorisation from the customer.

Security in the cloud

According to Gartner, 95% of all cloud security breaches are caused by customer misconfigurations, such as improperly secured firewalls or storage buckets. Just last week a massive data trove was found to have been left exposed because of an improperly configured AWS S3 bucket. The WWE, Accenture and even the NSA have fallen victim to this type of security incident, and Google has recognised that.

Google’s Cloud Security Command Centre will now move to general availability (GA) after a successful beta phase. It’s a single app that provides a complete overview of your organisation’s cloud resources and the security threats they face.

Using machine learning, the app learns all the different access attempts over time and uses that intelligence to grant permissions and make smart recommendations on cloud configurations to increase overall security.

“It will give you a full rundown of all of your assets and from there you can apply security analytics and threat intelligence to best protect your GCP environment,” said Jess Leroy, product management director at Google Cloud.

Following customer requests during the beta phase, the command centre will now feature more export options, to Docs and Sheets, and even a custom export option for Splunk Web. New threat intelligence integrations with third parties such as Tenable and McAfee will also be supported in the GA release.

G Suite also gets a security makeover with advanced phishing and malware protection – something Google has dedicated a lot of resources to. Alongside new admin controls against phishing attacks such as domain spoofing, Gmail will be getting a sandbox mode.

The sandbox mode aims to tackle the threat of malware spread over email; because the only way to see what a malicious program does is to run it, virtual environments will now be embedded into Gmail so you can know with certainty what an executable does before downloading it.

Security services

Aside from security features added to GCP specifically for GCP customers, Google announced a set of services that can be used on other platforms such as AWS or Azure as well as its own cloud platform.

One of the most common ways that companies discover threats is by scanning through all of the logs in their environments. Event Threat Detection is a service that scans logs for suspicious activity and can consolidate logs from private clouds, traditional datacentres and even other cloud platforms into GCP.

After the logs have been consolidated, they are scanned and fed through the command centre to find vulnerabilities, which users can then remediate, and the data can even be manipulated in BigQuery.
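
As a rough illustration of the scanning idea (not Google’s implementation), a log scanner boils down to matching consolidated entries against known-bad patterns. The log format and the ‘suspicious’ rules below are invented for the example.

```python
# Minimal sketch of log scanning for suspicious activity. The JSON log
# shape, the action names and the IP range are all illustrative only.
import json

SUSPICIOUS_ACTIONS = {"iam.serviceAccountKeys.create", "firewall.rules.delete"}

def scan_log_line(line: str) -> bool:
    """Return True if a JSON log entry matches a suspicious pattern."""
    entry = json.loads(line)
    return (
        entry.get("action") in SUSPICIOUS_ACTIONS
        or entry.get("source_ip", "").startswith("198.51.100.")  # example range
    )

logs = [
    '{"action": "storage.objects.get", "source_ip": "203.0.113.7"}',
    '{"action": "firewall.rules.delete", "source_ip": "198.51.100.23"}',
]
for line in logs:
    if scan_log_line(line):
        print("ALERT:", line)
```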

Security has been quite the theme here at Next – Google also announced that Android phones can now become a user’s Titan key, the only phish-resistant method of multi-factor authentication.

Google Cloud Next: Cloud Run stateless cloud environment enters beta stage


Connor Jones

9 Apr, 2019

Google Cloud’s serverless compute platform Cloud Run has entered a beta phase and aims to prevent the vendor lock-in problem faced by enterprises looking to go serverless.

Revealed at Google Cloud Next 2019 in San Francisco, the Cloud Run environment is stateless, tackling the choice developers face between the ease of serverless and the flexibility of containers.

With a serverless environment, developers need not worry about configuring the underlying infrastructure or how many resources they will need to power their applications.

As such, with a stateless environment, enterprises can commit to a vendor for some of their serverless products (let’s say Dell) without having to worry about being restricted only to that vendor’s software partners.

Cloud Run is fully serverless and automatically scales up or down with your website’s traffic within seconds, meaning that you’ll only pay for the resources that you need.

“What’s beautiful about the system is that you’re paying by the hundred-millisecond for what you use only and it scales up horizontally to many, many thousands of cores in just a few seconds,” said Oren Teich, director of product management at Google Cloud.

It’s already being deployed by some of the world’s biggest firms. Veolia, the waste management giant, praises the ease and cost-effectiveness of the new environment.

“Cloud Run removes the barriers of managed platforms by giving us the freedom to run our custom workloads at lower cost on a fast, scalable, and fully managed infrastructure,” said Hervé Dumas, group CTO at Veolia. “Our development team benefits from a great developer experience without limits and without having to worry about anything.”

The Cloud Run environment can be used on its own or integrated with your company’s existing Kubernetes cluster; merging the two will also offer you some specific enhancements to your stack.

Using Cloud Run on Kubernetes grants access to Google’s other cloud products, such as Custom Machine Types on its Compute Engine networks, which give users the ability to create scalable virtual machines tailored to each process and configurable for optimal pricing.

Cloud Run on Kubernetes, the industry standard for container management, also allows workloads to run side-by-side with others deployed in the same cluster. Airbus Aerial, the aerospace company’s satellite imagery arm, is already using Cloud Run on Kubernetes to process and stream aerial images.

“With Cloud Run on GKE, we are able to run lots of compute operations for processing and streaming cloud-optimized aerial images into web maps without worrying about library dependencies, auto-scaling or latency issues,” said Madhav Desetty, chief software architect at Airbus Aerial.

Cloud Run is also based on Knative, Google’s open API, which lets users run workloads on Google Cloud Platform, on a Google Kubernetes Engine (GKE) cluster, or on a company’s own self-managed Kubernetes cluster. The underlying Knative API makes it easier for businesses to start on Cloud Run and then move to Cloud Run on GKE later on.

There are some operational constraints to Cloud Run, which Teich detailed in a press conference: each instance has a maximum memory size of 1GB and a single core, so it scales horizontally rather than vertically, and each process must respond to an HTTP 1.1 request within a maximum of 15 minutes.
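
For context, the kind of workload Cloud Run expects is a stateless container serving HTTP on the port the platform injects. A minimal sketch, assuming the standard PORT environment variable contract and using Flask as one possible server:

```python
# Minimal stateless HTTP service of the shape Cloud Run runs. Any HTTP
# server works; Flask is just one common choice.
import os
from flask import Flask

app = Flask(__name__)

@app.route("/")
def handler():
    # No local state is kept between requests: the autoscaler can create
    # and destroy instances at any time.
    return "Hello from a stateless container\n"

if __name__ == "__main__":
    # Cloud Run tells the container which port to listen on via $PORT.
    app.run(host="0.0.0.0", port=int(os.environ.get("PORT", 8080)))
```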

Third-party Facebook app leaked 540m user records on AWS server


Connor Jones

4 Apr, 2019

Facebook’s heavily criticised app integration system has led to more than 146GB worth of data being left publicly exposed on AWS servers owned and operated by third-party companies.

It’s believed 540 million records relating to Facebook accounts were stored on the servers, including comments, likes, reactions, names and user IDs, obtained when users engaged with applications on the platform – the same methods unearthed during the investigation into Cambridge Analytica.

Two apps have been associated with the data hoard so far: Cultura Colectiva, a Mexico-based media company that promotes content to users in Latin America, and ‘At the Pool’, a service that matched users with other content, which has been out of operation since 2016.

At the Pool is said to have held 22,000 passwords for its service in plaintext alongside columns relating to Facebook user IDs – the fear being that many users may have been using the same password for their Facebook accounts.

Both apps’ datasets were stored in Amazon S3 buckets that were found to be misconfigured to allow public download of the files. Although S3 buckets are commonly used among businesses, as they allow data to be distributed across servers in a wide geographical area, there have been multiple incidents of companies failing to adequately safeguard the data held in them.
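
For illustration, here is a hedged sketch of one way an administrator might audit a bucket for this kind of public exposure using boto3; the bucket name is a placeholder, and a real audit should also check bucket policies and Block Public Access settings.

```python
# Check an S3 bucket's ACL for grants to the public "AllUsers" or
# "AuthenticatedUsers" groups. Bucket name is a placeholder.
import boto3

PUBLIC_GROUPS = {
    "http://acs.amazonaws.com/groups/global/AllUsers",
    "http://acs.amazonaws.com/groups/global/AuthenticatedUsers",
}

s3 = boto3.client("s3")
acl = s3.get_bucket_acl(Bucket="example-app-data-bucket")

for grant in acl["Grants"]:
    grantee = grant.get("Grantee", {})
    if grantee.get("Type") == "Group" and grantee.get("URI") in PUBLIC_GROUPS:
        print(f"Bucket grants {grant['Permission']} to {grantee['URI']}")
```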

Facebook condemned the practices of both apps. “Facebook’s policies prohibit storing Facebook information in a public database,” said a Facebook spokesperson. “Once alerted to the issue, we worked with Amazon to take down the databases. We are committed to working with the developers on our platform to protect people’s data.”

AWS was made aware of the exposed data on 28 January 2019, following an alert issued by security research firm UpGuard. AWS confirmed it had received the report and was investigating it, but the data was only secured on Wednesday this week.

“AWS customers own and fully control their data,” an AWS spokesperson told IT Pro. “When we receive an abuse report concerning content that is not clearly illegal or otherwise prohibited, we notify the customer in question and ask that they take appropriate action, which is what happened here.”

This statement aligns with UpGuard’s account, in that the researchers alerted Cultura Colectiva on 10 January 2019, before contacting AWS, but have yet to receive a response from the company.

Accenture, Experian, WWE, and the NSA have all been found to have stored data on unsecured AWS servers in recent years, with the problem becoming so prevalent that hackers have started creating tools specifically designed to target these buckets.

“While Amazon S3 is secure by default, we offer the flexibility to change our default configurations to suit the many use cases in which broader access is required, such as building a website or hosting publicly downloadable content,” said AWS. “As is the case on-premises or anywhere else, application builders must ensure that changes they make to access configurations are protecting access as intended.”

The news coincides with an article published in The Washington Post in which Facebook’s Mark Zuckerberg called for a ‘worldwide GDPR’ and greater regulation on the data protection principles of big tech outside the EU, despite the company itself facing 10 major GDPR investigations.

The discovery of the data has once again raised the issue of Facebook’s data sharing policies, something that facilitated the improper sharing of user data for political purposes by Cambridge Analytica. This prompted Facebook to change its sharing policies to restrict access by third-parties, although the fear is that data troves such as this have already been widely shared.

“Cambridge Analytica was the most high profile case that led to some significant changes in how Facebook interacts with third-party developers, but I suspect there are many troves of Facebook data sitting around where they shouldn’t be, including this one,” said privacy advocate Paul Bischoff of Comparitech.com.

“Even though Facebook has limited what information third-party developers can access, there’s still nothing Facebook can do about abuse or mishandling until after the fact,” he said.

ITV and Google Cloud partner to fortify streaming capabilities


Connor Jones

2 Apr, 2019

ITV has created an analytics tool with Google Cloud Platform (GCP) called CROCUS, which provides real-time feedback on live audiences comprising millions of people – something that previously took anywhere from ten minutes to over a day.

ITV drew its biggest ever audiences in the summer of 2018, in some cases breaking the 27 million mark. Getting quick feedback on audience viewing habits was critical to keeping audience levels that high, but this was difficult with its previous analytics tool.

The CROCUS tool was created and implemented within three months, while “at other major broadcasters, switching analytics tools is taking more than a year”, explained Andy Burnett, director of direct to consumer technology and operations at ITV.

“We now do data analytics like no other broadcaster, because we’ve built it all ourselves, for ourselves, without worrying about infrastructure,” he added.

The tool uses managed services that scale automatically to match the unpredictable demand of an online video-on-demand platform. Data from viewers is queued in Cloud Pub/Sub and then fed into Cloud Dataflow for transformation into more digestible form, ready for presentation.
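
A pipeline of that shape can be sketched with the Apache Beam Python SDK, which Cloud Dataflow executes. The topic, table and field names below are placeholders, not ITV’s actual CROCUS schema.

```python
# Hedged sketch of a streaming pipeline: Pub/Sub in, a transformation
# step, BigQuery out. All resource names are invented for illustration.
import json
import apache_beam as beam
from apache_beam.options.pipeline_options import PipelineOptions

options = PipelineOptions(streaming=True)

with beam.Pipeline(options=options) as pipeline:
    (
        pipeline
        | "ReadViewerEvents" >> beam.io.ReadFromPubSub(
            topic="projects/my-project/topics/viewer-events")
        | "ParseJSON" >> beam.Map(json.loads)
        | "ToDigestibleRow" >> beam.Map(lambda e: {
            "programme": e["programme"],
            "viewers": int(e["viewers"]),
            "timestamp": e["timestamp"],
        })
        | "WriteToBigQuery" >> beam.io.WriteToBigQuery(
            "my-project:analytics.viewer_counts",  # assumed existing table
            create_disposition=beam.io.BigQueryDisposition.CREATE_NEVER,
            write_disposition=beam.io.BigQueryDisposition.WRITE_APPEND)
    )
```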

It’s important for the tool to be scalable because certain programmes can receive unpredictable levels of online traffic. Citing England’s World Cup semi-final against Croatia, Simon Forman, head of behavioural data engineering at ITV, said that one million viewers watched the match on ITV Hub, many times more than previously recorded for a live broadcast.

Forman said that ITV’s analysts were hesitant to make the switch. “Our analysts were very attached to their own set of analytics tools, which they wanted to use on top of BigQuery,” he said. “But after a while, they began querying data directly through BigQuery itself. It’s easy to use and fast on complex queries, and they use Google Data Studio to display results.”

ITV uses audience analytics to measure how well a broadcast is doing, and Forman said that there is usually an easily identifiable traffic profile for each show. For example, football games have big audiences that drop off at half-time as viewers make snacks, then return to high levels. Love Island also had a unique profile, and if the analytics didn’t match the profile, ITV knew something was wrong.

ITV’s vision for the future is to bring in the analytics from its TV audience, which eclipses its online audience by a huge margin, and use the same tool it built for ITV Hub to personalise and improve broadcasts for core viewers of its TV programmes.