NASCAR moves onto AWS to uncover and analyse its racing archive

As sporting teams and franchises continue to realise the value of their archives – and balk at just how much data they comprise – many are in the process of migrating their operations to the cloud. NASCAR is the latest, announcing it will utilise Amazon Web Services (AWS) for archiving purposes.

The motor racing governing body is set to launch new content from its archive, titled ‘This Moment in NASCAR History’, on its website, with the service powered by AWS. NASCAR is also using image and video analysis tool Amazon Rekognition – otherwise known for its facial recognition capabilities – to automatically tag specific video frames with metadata for easier search.
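Neither company has published the specifics of NASCAR’s tagging pipeline, but as a rough sketch of the kind of call involved, the boto3 snippet below runs Rekognition’s asynchronous label detection over an archived clip and prints the timestamped labels that would feed the search metadata (the bucket and file names are hypothetical).

```python
import time

import boto3

rekognition = boto3.client("rekognition", region_name="us-east-1")

# Hypothetical S3 location of an archived race clip.
job = rekognition.start_label_detection(
    Video={"S3Object": {"Bucket": "example-race-archive", "Name": "clips/1979-daytona-500.mp4"}},
    MinConfidence=80,
)

# Poll until the asynchronous job finishes (a production pipeline would use an SNS notification).
while True:
    result = rekognition.get_label_detection(JobId=job["JobId"], SortBy="TIMESTAMP")
    if result["JobStatus"] in ("SUCCEEDED", "FAILED"):
        break
    time.sleep(10)

# Each label carries the timestamp of the frame it was detected in - the metadata
# that makes specific moments searchable.
for detection in result.get("Labels", []):
    print(detection["Timestamp"], detection["Label"]["Name"], detection["Label"]["Confidence"])
```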

“We are pleased to welcome AWS to the NASCAR family,” said Jon Tuck, NASCAR chief revenue officer, in a statement. “This relationship underscores our commitment to accelerate innovation and the adoption of cutting-edge technology across our sport.

“NASCAR continues to be a powerful marketing vehicle and will position AWS’s cutting-edge cloud technology in front of industry stakeholders, corporate sponsors, broadcast partners, and ultimately our fans,” Tuck added.

The move marks another key sporting client in AWS’s roster. In July, Formula 1 was unveiled as an Amazon customer, with the company moving the majority of its infrastructure from on-premises data centres to AWS. Formula 1 is also using various AWS products, from Amazon SageMaker, which applies machine learning models to more than 65 years of race data, to AWS Lambda for serverless computing.

Ross Brawn, Formula 1 managing director of motor sports, took to the stage at AWS re:Invent in November to tell attendees more about the company’s initiatives. The resultant product, ‘F1 Insights Powered By AWS’, was soft-launched last season, giving fans race insights, and Brawn noted plans to further integrate telemetry data, as well as to use high performance computing (HPC) to simulate environments that lead to closer racing.

Two weeks after Formula 1 was unveiled, Major League Baseball (MLB) extended its partnership with AWS, citing machine learning (ML), artificial intelligence, and deep learning as key parts of its strategy. The baseball governing body already used Amazon for various workloads, including Statcast, its tracking and statistics platform, but added SageMaker for ML use cases. Among the most interesting was its plan to use SageMaker, alongside Amazon Comprehend, to “build a language model that would create analysis for live games in the tone and style of iconic announcers.”
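MLB’s announcer-style model itself isn’t public, but for a flavour of the building blocks, here is a minimal Comprehend sketch that pulls entities and key phrases out of a line of invented play-by-play text, the sort of structured output a downstream SageMaker model could then turn into commentary.

```python
import boto3

comprehend = boto3.client("comprehend", region_name="us-east-1")

# Invented play-by-play line used purely for illustration.
play = "Bottom of the ninth, two outs, and Ramirez lines a walk-off double into the gap."

# Extract entities (people, quantities and so on) and key phrases from the text.
entities = comprehend.detect_entities(Text=play, LanguageCode="en")
phrases = comprehend.detect_key_phrases(Text=play, LanguageCode="en")

print([e["Text"] for e in entities["Entities"]])
print([p["Text"] for p in phrases["KeyPhrases"]])
```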

NASCAR is also keen to utilise these aspects of Amazon’s cloud. The company said AWS was its preferred ‘cloud computing, cloud machine learning and cloud artificial intelligence’ provider.

It’s worth noting, however, that AWS is not the only game in town. The Football Association (FA) announced last week that it was partnering with Google as its official cloud and data analytics partner, while the Golden State Warriors are another confirmed customer of Google’s cloud.

You can read more about the NASCAR move here.


Tipping the scales in the cloud: From security risk to security’s friend

Cloud first, that’s the mantra for many organisations today. So, how curious that there was once a time when adoption was not so straightforward.

Many saw the cloud as an experimental technology suitable for nothing more important than storing holiday photos, while others cited security and compliance concerns as obstacles to adoption.

Perceptions have changed. No longer is the mention of cloud met with an instant intake of breath and a question about security. In fact, there is an understanding that cloud can make your IT environment even more secure and compliant than the use of on-premises infrastructure alone.

One of the main reasons for this changing perception is experience. Organisations have become less concerned about security as they gain more exposure to cloud services. Equally they have understood that there is nothing to fear from the cloud if they adopt solid security practices; while trying to block cloud adoption will only lead to users bypassing IT, creating bigger security risks.

However, perhaps the biggest part of the equation is that the cloud is inherently more secure. There is no reason to suggest that operating private infrastructure – where you would be responsible for monitoring and patching – would be any more secure than the public cloud, given the resources at providers’ disposal.

Looking at the evidence

According to Alert Logic’s 2017 Cloud Security Report, public cloud installations had the fewest cybersecurity incidents of any cloud type.

This is because public cloud vendors invest hundreds of millions of pounds in securing their infrastructure, the benefits of which are passed on to customers. The mega providers have built data centre and network architectures designed to meet the requirements of even the most security-sensitive organisations.

This allows customers to scale and innovate without having to fund that development themselves. In many ways, this enhanced security can be viewed as another type of cloud service: organisations avoid the up-front costs of building it and benefit from a lower total cost of ownership.

Considering compliance

The same could be said of compliance. The advent of GDPR has caused organisations of all sizes to re-assess their cybersecurity measures and how they handle sensitive data, while those in regulated industries are subject to stringent requirements.

Because the big cloud providers manage dozens of compliance programmes for their infrastructure, data stored in the cloud inherits those infrastructure-level certifications. In most cases the cloud is not a threat to compliance but something that makes the process easier.

Most of the providers can also help with data residency. Some jurisdictions, such as the European Union, forbid the transfer of data to territories with weaker data protection rules. While mechanisms such as the EU-US Privacy Shield can overcome this, the answer for many organisations is to store information in local data centres.

The cloud providers now offer regions and availability zones that answer the vast majority of data residency needs, allowing businesses to choose where their data is located while being safe in the knowledge that it is replicated across multiple data centres to protect against natural or technical disasters.
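As a rough illustration of what that choice looks like in practice, here is a minimal sketch using AWS’s boto3 SDK to pin a storage bucket to an EU region (the bucket name is hypothetical); other providers expose equivalent options in their own APIs.

```python
import boto3

s3 = boto3.client("s3", region_name="eu-west-1")

# Hypothetical bucket name. Pinning the bucket to an EU region keeps the data
# resident in that jurisdiction, while the provider replicates it across the
# region's availability zones for durability.
s3.create_bucket(
    Bucket="example-eu-resident-data",
    CreateBucketConfiguration={"LocationConstraint": "eu-west-1"},
)
```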

Of course, the public cloud doesn’t have all the answers, and for certain types of data a hybrid cloud model will be more appropriate. What’s clear, though, is that the scales have tipped; security isn’t the blocker any more, and in many cases organisations are turning to the cloud because it provides the security they need in an instant.


Local or Microsoft account: Which is best for you?


Will Stapley

4 Jun, 2019

It’s easy to assume your Windows account simply lets you sign in and out of Windows. However, the type of account you choose can have a significant effect on how Windows behaves. Here, we explain the differences between Microsoft and local accounts, so you can decide which is best for you.

Types of account

In Windows 7 and earlier, a local account (sometimes referred to as an offline account) was the only user account available. It is intended to be used on a single computer, which stores your account username, password and other details on its hard drive.

In contrast, a Microsoft account is stored online and can be used across multiple PCs. You’ll still be able to sign in if your computer’s offline, so you won’t be locked out of Windows if your home network goes down or you’re working on your laptop while out and about.

Microsoft still gives you the option of setting up a local account, but it’s hidden away

Microsoft is keen to move users away from local accounts, and only begrudgingly lets you set one up when installing Windows (look for the ‘Offline account’ option hidden in the bottom corner of the sign-in window). And if you do opt for one, Microsoft will remind you of all the benefits you’ve decided to forgo. There are, without doubt, advantages to using a Microsoft account, but there are also drawbacks, as we will explain.

If you’re not sure which type of account you’re currently using, click Start, then the cog icon to open Settings and select Accounts. In the ‘Your info’ section, you’ll see your user account name. Below this, you’ll either see ‘Local account’ or, if you’re using a Microsoft account, the email address linked to your account.

Signing in & syncing

A Microsoft account makes it much easier to use the company’s other services within Windows. For example, as soon as you sign into your Microsoft account, you’ll also be signed into other services such as OneDrive, Skype and the Microsoft Store. With a local account, you’ll need to sign into these services individually.

A Microsoft account also syncs your Windows settings (such as your desktop theme, ease of access settings and even your Wi-Fi passwords) across all the computers you sign into. This is handy if you tend to use more than one computer or if you’re setting up a new one.

Additionally, you’ll be able to share your Windows Timeline (accessed by clicking the film-strip icon to the right of the Start button) with your other computers. This shows a record of which programs you’ve used and websites you’ve visited over the past few days. By default, it will only show websites viewed using Microsoft’s Edge browser, but the new Web Activities extension for Chrome also lets you sync your Chrome browsing history with your timeline.

This is great if you regularly use more than one computer and want everything synced, but it also lets anyone who logs in using your account see your emails, browsing history, synced files and more.

Security

A Microsoft account stores your password (albeit an encrypted copy of it) online. And while Microsoft has a pretty decent security record, so did many companies that have since fallen victim to online security breaches. However, even if a hacker were to get hold of your Microsoft password, they couldn’t gain access to your home PC – unless they’d stolen that too. They would, however, have access to any files you had uploaded to OneDrive.

On the face of it, then, a local account may seem less risky, but it has security weaknesses of its own. A relatively simple Command Prompt hack can let you (or anyone else) reset your local account password. Microsoft may have quietly fixed this vulnerability with the Windows 10 May update; when we tried the hack on a preview release, it no longer worked. Whether the fix makes it into the full update remains to be seen.

Set up security questions for your local account in case you need to reset your password

While we’re pleased to see that the hack may have been addressed, it did represent a way of accessing your local account if you’d forgotten your password. Because Microsoft doesn’t store local account passwords, it can’t reset them for you should yours slip your mind. A Microsoft account, on the other hand, lets you reset your password using the email address registered to your account.

If you decide to use a local account, we recommend you set up security questions – answer these correctly and you’ll be able to reset your password. To set these up, go to Settings, Accounts, ‘Sign-in options’, then scroll down on the right to the Password section and click ‘Update your security questions’.

You can make a Microsoft account more secure by setting up two-factor authentication (2FA). This means that whenever someone tries to sign into your account from a new location, a code will be sent to your phone that needs to be entered to gain access. To set this up, go to the Microsoft account security website and sign in (if you’re not already). At the bottom, click the ‘more security options’ link. From here, click ‘Set up two-step verification’ and follow the instructions.

Using a Microsoft account has other security benefits, including the ability to track your laptop should it be lost or stolen. If you run Windows 10 Pro, a Microsoft account will let you use its BitLocker drive encryption tool and store a copy of the recovery key (required if you need to access the contents of the drive after removing it from your computer) on Microsoft’s servers as a backup.

Privacy

When Microsoft accounts were first introduced with Windows 8, many users had concerns about privacy – specifically over the amount of data Microsoft would collect. In recent years, Microsoft has added settings to let you control how much you share, but it’s still easy to share more than you intended to. To stop sharing information about which programs you’ve opened and the websites you’ve visited, for example, go to Settings, Privacy, ‘Activity history’ and make sure the ‘Send my activity history to Microsoft’ option is unticked.

Keep this option unticked unless you’re happy for your Windows usage data to be sent to Microsoft

Using a local account helps prevent this type of data being sent to Microsoft. However, if you download an app from the Microsoft Store, for example, you’ll need to sign in with a Microsoft account – in which case, we recommend changing the ‘Activity history’ setting as above.

Our verdict

There’s no doubt that a Microsoft account makes Windows easier to use. You don’t need to constantly sign into Microsoft services each time you want to use them and all your settings are synced across all your computers. And as long as you set up two-factor authentication, it’s secure and it provides a hassle-free way to reset your password should you forget it. Throw in those extra benefits, such as being able to track your laptop if you lose it, and it’s fair to say we go for a Microsoft account over an old-style local account every time.

That said, if you’ve no interest in using other Microsoft services (or prefer to sign into them individually) and would prefer not to store personal details online or share information with Microsoft, a local account will provide you with everything you need.

How to switch between accounts

Changing from a local account to a Microsoft one (or vice versa) is easy and you can do it as often as you like – and it won’t affect any of your personal files.

Switching to a local account

Go to Settings, Accounts, then make sure the ‘Your info’ section on the left is selected. Click the ‘Sign in with a local account instead’ link on the right. You’ll be asked to enter your current Microsoft account password, then choose a username and password. Click ‘Sign out and finish’ to continue (doing this will sign you out from all Microsoft services).

Switching to a Microsoft account

Go to Settings, Accounts, then the ‘Your info’ section, and click the ‘Sign in with a Microsoft account instead’ link. You now need to enter your Microsoft account username and password. If you don’t already have an account, click ‘Create one’, then follow the instructions. Otherwise, enter your current local account password, then click Next. You’ll then be prompted to set up a PIN. This PIN is only stored on your PC and saves you from having to type your full Microsoft account password each time you want to log in to Windows. At this point, we also recommend you set up two-factor authentication (as above).

Enterprises not seeing total fulfilment with cloud strategies – but hybrid remains the way to go

For enterprises looking to migrate to the cloud, with sprawling workloads and data, it can be a long, arduous journey. According to a new survey, more than two thirds of large enterprises are not getting the full benefits of their cloud migration journeys.

The study from Accenture, titled ‘Perspectives on Cloud Outcomes: Expectation vs. Reality’, polled 200 senior IT professionals from large global businesses and identified security and the complexity of business and operational change as key barriers to cloud success.

This doesn’t mean enterprises struggle to see any benefits from the cloud – overall satisfaction averaged above 90% – but when it came to cost, speed, business enablement and service levels, only one in three companies said they were fully satisfied on those metrics.

This breaks down further when looking at specific rollouts. Overall, enterprises are seeing greater benefits the more chips they put in; satisfaction levels climb to almost 50% among those with heavy investments, compared with less than 30% for those just starting their journeys.

When it came to public and hybrid cloud, the results showed an evident cost versus speed trade-off. More than half of those with public cloud workloads said they had fully achieved their cost objectives, while for speed it dropped below 30%. Hybrid cloud initiatives, the research noted, saw much more consistent results across the board, if not quite the same cost savings.

This makes for interesting reading when compared with similar research. According to a study from Turbonomic in March, the vast majority of companies ‘expect workloads to move freely across clouds’, with multi-cloud becoming the de facto deployment model for organisations of all sizes.

Yet the Accenture study argued this would not be plain sailing: 42% of those polled said a lack of skills within their organisation had hampered their initiatives. Securing cloud skills is, of course, a subject which continues to cause concern – but according to Accenture, managed service providers (MSPs) may provide the answer. 87% of those polled said they would be interested in working with one.

“Like most new technologies, capturing the intended benefits of cloud takes time; there is a learning curve influenced by many variables and barriers,” said Kishore Durg, senior managing director of Accenture Cloud for Technology Services. “Taking your cloud program to the next level isn’t something anyone can do overnight – clients need to approach it strategically with a trusted partner to access deep expertise, show measurable business value, and expedite digital transformation.

“If IT departments fail to showcase direct business outcomes from their cloud journeys, they risk becoming less relevant and losing out to emerging business functions, like the office of the chief data officer, that are better able to use cloud technologies to enable rapid innovation,” added Durg.

You can read the full report here (pdf, no opt-in required).


What is cloud-to-cloud backup?


Esther Kezia Thorpe

3 Jun, 2019

Most businesses understand the importance of having a robust backup strategy in place for their on-premises data, but as more companies migrate to the cloud, myths and misunderstandings are beginning to creep in about how to back up data in a cloud environment.

One type of backup that is growing in popularity is cloud-to-cloud backup (or C2C backup). At a basic level, this is where data stored on one cloud service is copied to another cloud.

But if company data is stored in a cloud, why does that data need to be backed up to another cloud?

Many people think that they’re protected from data loss if they use a software as a service (SaaS) platform such as Microsoft Office 365 or Google Drive. But although these platforms have robust solutions in place for protecting data in the cloud, they are only designed to protect against losses on their side.


Discover how to keep your cloud data safe from loss with this comprehensive guide to cloud-to-cloud backup for Office 365. Download it here.



In fact, 77% of IT decision makers reported some form of data loss through SaaS over a 12-month period, according to an extensive survey from Spanning. That figure is higher than previously reported figures of 58%, suggesting that data loss in the cloud is a growing issue as more organisations experiment with the technologies.

So how is cloud-to-cloud backup solving this issue? Here, we break down what C2C backup is, and how it can help your business.

What is cloud-to-cloud backup?

As the name suggests, cloud-to-cloud backup is where data in the cloud is backed up to another cloud, rather than to an on-site method such as tape or disk backup. It is also sometimes known as SaaS (software as a service) backup.

It adds an extra layer of protection for businesses using cloud services by quite simply keeping a duplicate copy in a separate cloud.
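As a rough sketch of the idea, rather than any particular vendor’s product, the snippet below copies objects from a bucket on one cloud to a bucket on another using AWS’s boto3 and Google’s Cloud Storage client libraries; the bucket names are hypothetical, and a real C2C service would add scheduling, versioning and restore tooling on top.

```python
import boto3
from google.cloud import storage

# Hypothetical bucket names for illustration only.
SOURCE_BUCKET = "example-source-bucket"   # lives on AWS S3
BACKUP_BUCKET = "example-backup-bucket"   # lives on Google Cloud Storage

s3 = boto3.client("s3")
gcs = storage.Client()
destination = gcs.bucket(BACKUP_BUCKET)

# Walk the source bucket and write a duplicate of each object into the second cloud.
paginator = s3.get_paginator("list_objects_v2")
for page in paginator.paginate(Bucket=SOURCE_BUCKET):
    for obj in page.get("Contents", []):
        body = s3.get_object(Bucket=SOURCE_BUCKET, Key=obj["Key"])["Body"].read()
        destination.blob(obj["Key"]).upload_from_string(body)
```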

An important distinction here is between backup and archive. When looking for a C2C backup solution, it should specifically be a solution for backing up data, not archiving it. A backup copy exists for the express purpose of making data available and recoverable in the event of the original being inaccessible.

But an archive exists to meet compliance needs or internal policies, and isn’t designed for data recovery. Most archiving systems aren’t able to rapidly restore lost data back into production, and don’t have the functionality needed to automate accurate restores.

How does C2C backup help businesses?

Like on-site backup methods, C2C backup benefits businesses by storing data in multiple locations, enabling easy restoration of data in case of a cyber attack, accidental deletion, data corruption or other incidents.

Many applications that run in the cloud are already protected by the provider from data loss on their side. But there is a common assumption that SaaS providers like Google’s G Suite and Microsoft’s Office 365 have backup completely covered.

Although many providers are geared up to protect data in the cloud from problems on their side, such as disk failures or natural disasters, they can’t do much if the issue occurs on your side.

If a ransomware attack erases your files, or if they’re deleted accidentally or maliciously by an employee, it can be difficult to restore the data. This is one way cloud-to-cloud backup can help businesses, adding an extra layer of protection should the worst happen.

Pros and cons of cloud-to-cloud backup

The main benefit of cloud-to-cloud backup is cost. Because there’s no investment required in on-site backup infrastructure, C2C backup can be set up quickly and inexpensively. This also applies in the long term; cloud storage can be added or taken away quickly as business needs evolve, and costs are kept predictable on a monthly basis.

However, as with many cloud services, this can also lead to wastage, with dark data or unnecessary data taking up large amounts of storage space and driving up monthly costs.

Due to the nature of C2C backups, availability is a major advantage. Backup copies of data in the cloud can be accessed from anywhere, so IT teams don’t have to come to a physical location to restore business data if anything goes wrong.

C2C is still a relatively new market, and most vendors currently offering cloud-to-cloud backup services also manage the backup and day-to-day management themselves. This makes it a good option for organisations that don’t have their own in-house specialists to do this for them. But larger enterprises may feel more comfortable managing C2C backup themselves, or even keeping backup on-site.


‘The definitive guide to backup for G Suite’ covers why you need a solution for G Suite, what you need to understand about cloud-to-cloud backup, and more. Download it here.



Another benefit is resilience to cyber attacks. Should an employee accidentally click on a malicious email attachment and open the business up to a ransomware attack, backup data on a cloud generally won’t be affected as it isn’t on the office network.

However, as with all cloud services, data security is an issue. Keeping backup data in a cloud exposes it to being hacked or otherwise compromised, and offline physical backups are still considered the more secure option.

Any businesses looking at cloud-to-cloud backup services should carefully consider their needs, from how frequently a backup should be run to how the day-to-day management should work, and how much storage is required.

How cloud computing is changing the laboratory ecosystem

The Human Genome Project, which was declared complete in 2003, took over a decade to run and cost billions of dollars. Using today’s software, the same data analysis can be accomplished in just 26 hours.

Research has thrived on the rapid growth in computational power. With this comes increased pressure on labs, as storage facilities need to house exponentially increasing quantities of large data sets and big data becomes an integral part of research. This poses a real problem: a lab may need a whole room dedicated to on-site storage, and with that comes maintenance, leading to hefty up-front IT infrastructure costs.

Cloud computing has helped to alleviate this burden by removing the need for companies to maintain their own information silos. Instead, research data can be stored in the cloud, external to their own facility. For laboratories, cloud computing centralises data, ensuring security and data sovereignty whilst facilitating collaboration.

Centralising data

Cloud computing allows labs to take on immense computing processes without the cost and complexity of running on-site server rooms. Switching from an on-site solution to the cloud alleviates the cost of IT infrastructure, reducing the cost of entry into the industry while also levelling the playing field for smaller laboratories.

Moreover, cloud computing allows data to be extracted from laboratory devices and put in the cloud. Device integration between lab equipment and cloud services allows real-life data from experiments to be collated in a cloud system. One of the most popular products on the market is Cubuslab, a plug-and-play solution that serves as a laboratory execution system, collecting instrument data in real time as well as managing devices remotely.
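Such integrations are vendor-specific in practice, but as a minimal sketch of the general pattern – an instrument reading pushed to a cloud endpoint over HTTPS – something like the following would do the job (the endpoint, API key and field names here are purely hypothetical):

```python
import datetime

import requests

# Hypothetical endpoint and credentials; a real laboratory execution system
# would supply its own SDK, authentication scheme and data model.
ENDPOINT = "https://lab-cloud.example.com/api/v1/readings"
API_KEY = "replace-me"

# A single reading taken from a bench instrument.
reading = {
    "instrument_id": "balance-03",
    "timestamp": datetime.datetime.utcnow().isoformat() + "Z",
    "measurement": {"mass_g": 12.408},
}

# Push the reading to the cloud so it can be centralised alongside protocols
# and experimental annotations.
response = requests.post(
    ENDPOINT,
    json=reading,
    headers={"Authorization": f"Bearer {API_KEY}"},
    timeout=10,
)
response.raise_for_status()
```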

This growing volume of data requires a centralised system that integrates scientists’ protocols and experimental annotations. The electronic lab notebook is starting to become a common tool in research, allowing users to organise all their different data inputs and retrieve them at any point. It also allows large R&D projects to keep control of their data as they scale.

Data sovereignty

As the pressure on laboratory practices swells, it is now more crucial than ever to adapt the way labs function. The need for cloud computing in laboratory environments has become increasingly pressing as regulatory requirements grow more stringent.

Compliant research, through audit trails and other measures, is required to verify that data is truthful and unaltered. Good experimental practice also makes research better placed to secure patents. Moreover, cloud computing supports custom access control, so that access to certain information can be allocated depending on role. The QA/QM department of the lab can therefore govern which members have access to which levels of data.

As for the fear around cloud computing mentioned earlier, the security concerns are largely unfounded. Cloud providers have security protocols and specialised staff that, in most cases, provide better security than on-site solutions.

However, in light of last year’s GDPR legislation, the compliance obligations of cloud providers have changed significantly. Some cloud providers use hybrid or multi-cloud approaches, which poses a difficulty for the lab, as checking GDPR compliance becomes much more complex: you would have to check the data protection policies of each cloud provider, and a breach of compliance by one provider would undermine the data-protection posture of the whole system.

Data safety

A lot of the current aversion to cloud computing comes from a fear of losing data. It is easy to assume that lab data is most likely to be lost to outages or cyber attacks; in practice, it is more likely to be lost to fires near on-site data storage facilities. Indeed, paralysis of the electrical grid in the U.S. is caused more often by squirrels than by cyber attacks, according to Jon Inglis, former Deputy Director of the U.S. National Security Agency. Deploying systems in the cloud also sidesteps the data loss caused by outages that would otherwise hobble smaller laboratories: if a cascading failure were to occur in the cloud, the data would not be lost, as most files are stored in multiple locations.

What seems to be holding many labs back is the fear of losing data by switching to cloud computing. In reality, it is much safer and more reliable than on-site data storage.

Conclusion

Cloud computing has levelled the playing field in research. It is a cost-effective way for smaller laboratories to get a leg-up in the research industry, as long as they stay within compliance regulations. This is largely down to the ability to scale up without needing excessive on-site data storage facilities – which is especially important as data sets become ever larger.


Google confirms network congestion as contributor to four-hour cloud outage

Google has confirmed that a ‘network congestion’ issue which affected various services for more than four hours on Sunday has since been resolved.

A status update at 1225 PT noted the company was investigating an issue with Google Compute Engine, later diagnosed as high levels of network congestion across eastern USA sites. A further update arrived at 1458 to confirm engineering teams were working on the issue before the all-clear was sounded at 1709.

“We will conduct an internal investigation of this issue and make appropriate improvements to our systems to help prevent or minimise future recurrence,” the company wrote in a statement. “We will provide a detailed report of this incident once we have completed our internal investigation.”

The outage predominantly affected users in the US, with some European users also seeing issues. While various Google services, including Google Cloud, YouTube, and G Suite were affected, many companies who run on Google’s cloud also experienced problems. Snapchat – a long-serving Google Cloud customer and considered a flagship client before the company’s major enterprise push – saw downtime, as did gaming messaging service Discord.

According to security provider ThousandEyes, network congestion is a ‘likely root cause’ of the outage. The company spotted services behaving out of sync as early as 1200 PT at sites including Ashburn, Atlanta and Chicago, only beginning to recover at approximately 1530. “For the majority of the duration of the 4+ hour outage, ThousandEyes detected 100% packet loss for certain Google services from 249 of our global vantage points in 170 cities around the world,” said Angelique Medina, product marketing director at ThousandEyes.

Previous Google cloud snafus have shown the company can learn lessons. In November 2015 Google Compute Engine went down for approximately 70 minutes, with the result being the removal of manual link activation for safety checks. The following April, services went down for 18 minutes following a bug in Google Cloud’s network configuration management software.  

According to research from Gartner and Krystallize Technologies published last month, Microsoft is the poor relation among the biggest three cloud providers when it comes to reliability. As reported by GeekWire, 2018 saw Amazon and Google achieve almost identical uptime statistics, at 99.9987% and 99.9982% respectively. Microsoft, meanwhile, trailed with 99.9792% – a ‘small but significant’ amount.


BT partners with Juniper on unified cloud network platform


Connor Jones

3 Jun, 2019

BT has partnered with Juniper Networks, which will provide the core infrastructure underpinning the rollout of BT’s upcoming unified cloud networking platform.

The platform will unify BT’s networks including 5G, Wi-Fi and fixed-line into one virtualised service which will enable more efficient infrastructure management and deployment.

The new unified platform will supposedly allow BT to “create new and exciting converged services bringing mobile, Wi-Fi, and fixed network services together”.

The platform’s infrastructure will be built to a common framework, allowing it to be shared across BT’s offices nationally and globally.

The platform will be used across a range of BT’s operations, including voice, mobile core, radio/access, ISP, TV and IT services, and deploying it company-wide will cut costs and streamline operations.

“This move to a single cloud-driven network infrastructure will enable BT to offer a wider range of services, faster and more efficiently to customers in the UK and around the world,” said Neil McRae, chief architect, BT. “We chose Juniper to be our trusted partner to underpin this Network Cloud infrastructure based on the ability to deliver a proven solution immediately, so we can hit the ground running.”

“Being able to integrate seamlessly with other partners and solutions and aligning with our roadmap to an automated and programmable network is also important,” he added.

We’re told that the project will facilitate new applications and workloads for the telecoms giant and evolve its existing ones, including converged fixed and mobile services and faster time-to-market for internet access delivery.

“By leveraging the ‘beach-front property’ it has in central offices around the globe, BT can optimise the business value that 5G’s bandwidth and connectivity brings,” said Bikash Koley, chief technology officer, Juniper Networks.

“The move to an integrated telco cloud platform brings always-on reliability, along with enhanced automation capabilities, to help improve business continuity and increase time-to-market while doing so in a cost-effective manner,” he added.

BT has undergone a change in leadership this year and faces challenges in almost all areas of its business, according to its annual financial overview.

EE’s business has been carrying the telco; it’s the only arm of the company posting profits in an “unfavourable telecoms market”. BT’s revenue slip for the year has been attributed to the decline in traditional landline calls amid a seemingly unrelenting shift to voice over IP.

In order to capitalise on new business areas such as IoT, cloud and SD-WAN, BT admits greater investment is needed. This will most likely hinder its short-term revenue targets, but it could pay off in the long term.

“Our aim is to deliver the best converged network and be the leader in fixed ultrafast and mobile 5G networks,” said BT chief executive Philip Jansen. “We are increasingly confident in the environment for investment in the UK.”

EE launched its 5G network last week, becoming the first telecoms company in the UK to do so. It’s available in six major cities and speeds of 1Gbps are promised “for some users”.

Four-hour Google Cloud outage blamed on ‘network congestion’


Jane McCallion

3 Jun, 2019

Google Cloud Platform (GCP) suffered a significant outage on Sunday night that lasted around four hours, knocking offline services including G Suite, YouTube and Google Cloud.

The issue was first noted on the company’s cloud status dashboard at 8.25pm BST on 2 June as a Google Compute Engine problem.

Shortly afterwards, however, reports of problems with Google Cloud, YouTube and more started hitting Twitter, and by 8.59pm the dashboard acknowledged it was a “wider network issue”.

By 12.09am on 3 June, the issue was resolved but little detail is available as to what happened beyond “high levels of network congestion in the eastern USA, affecting multiple services in Google Cloud, G Suite and YouTube”.

However, someone claiming to work on Google Cloud (but currently on holiday) posted a message on Hacker News saying: “It’s disrupting everything, including unfortunately the tooling we usually use to communicate across the company about outages.”

“There are backup plans, of course, but I wanted to at least come here to say: you’re not crazy, nothing is lost … but there is serious packet loss at the least,” they added. 

In a statement, Google told Cloud Pro: “We will conduct a post mortem and make appropriate improvements to our systems to prevent this from happening again. We sincerely apologise to those that were impacted by [these] issues. Customers can always find the most recent updates on our systems on our status dashboard.”

Some, however, have questioned what exactly Google meant by “high levels of network congestion in the eastern USA”.

Clive Longbottom, co-founder of analyst house Quocirca, told Cloud Pro: “If this was the case, a lot more than GCP would have been impacted: this does not seem to have been the case. As such, it would appear that what Google possibly means is that it was excessive network traffic in its own environment in the Eastern USA.”

He suggested that the excessive network traffic was potentially caused by something internal.

“This could be something like a memory leak on an app going crazy, or (like AWS some time back) human error through a script causing a looping command bringing chaos to the environment.”

This doesn’t mean that organisations should abandon cloud for business-critical workloads, however. Owen Rogers, research director at the digital economics unit of 451 Research, told Cloud Pro: “Four hours is quite a long time … but it’s a tricky issue, because outages are going to happen now and then, and all customers can do is to build resiliency such that if an outage does occur, they have a backup.

“Using multiple availability zones and regions is a must, but if applications are business critical, multi-cloud should be considered. Yes, it’s more complex to manage; yes, you’ll have to train more people. But if your company is going to go bust because of a few hours of outage, it is an investment worth making. It appears some hyperscalers are more resilient than others, but even the best are likely to slip up occasionally.”