Spotify shifts all music from data centres to Google Cloud

Music streaming service Spotify has announced that it is changing how it stores its customers' music, and is copying it all from its own data centres onto Google's Cloud Platform.

In a blog post, Spotify's VP of Engineering & Infrastructure Nicholas Harteau explained that although the company's data centres had served it well, the cloud is now sufficiently mature to match or exceed the quality, performance and cost Spotify got from owning its infrastructure. Spotify will now get its platform infrastructure from Google Cloud Platform ‘everywhere’, Harteau revealed.

“This is a big deal,” he said. Spotify has traditionally bought or leased data-centre space, server hardware and networking gear to guarantee being as close to its customers as possible, but it no longer feels it needs to, according to Harteau.

“Like good engineers, we asked ourselves: do we really need to do all this stuff? For a long time the answer was yes. Recently that balance has shifted,” he said.

Operating data centres had been a painful necessity for Spotify since its launch in 2008, because it was the only way to guarantee the quality, performance and cost of its service. These days, however, the storage, computing and network services available from cloud providers are as high quality, high performance and low cost as anything Spotify could build under the traditional ownership model, said Harteau.

Harteau explained why Spotify preferred Google’s cloud service to that of runaway market leader Amazon Web Services (AWS). The decision was shaped by Spotify’s experience with Google’s data platform and tools. “Good infrastructure isn’t just about keeping things up and running, it’s about making all of our teams more efficient and more effective, and Google’s data stack does that for us in spades,” he continued.

Harteau cited Dataproc's batch processing, event delivery with Pub/Sub and the ‘nearly magical’ capacity of BigQuery as the three most persuasive features of Google's cloud service offering.
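
As a rough illustration of what Harteau is pointing at, the sketch below publishes an event with Pub/Sub and runs an ad-hoc query against BigQuery using Google's published Python client libraries. The project, topic and table names are invented for the example; this is not Spotify's actual pipeline, just the shape of the two APIs.

    # Illustrative only: the project, topic and table names below are made up.
    from google.cloud import bigquery, pubsub_v1

    PROJECT = "example-project"

    # Event delivery with Pub/Sub: publish a playback event to a topic.
    publisher = pubsub_v1.PublisherClient()
    topic_path = publisher.topic_path(PROJECT, "playback-events")
    future = publisher.publish(topic_path, data=b'{"track_id": "abc123", "event": "play"}')
    print("published message", future.result())  # blocks until the server returns a message ID

    # Ad-hoc analysis with BigQuery: count rows in a hypothetical events table.
    client = bigquery.Client(project=PROJECT)
    query = f"SELECT COUNT(*) AS plays FROM `{PROJECT}.analytics.playback_events`"
    for row in client.query(query).result():
        print("total plays:", row.plays)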

Google launches Dataproc after successful beta trials

Google has announced that its big data analysis tool Dataproc is now on general release. The utility, which was one of the factors that persuaded Spotify to choose Google's Cloud Platform over Amazon Web Services, is a managed tool based on the Hadoop and Spark open source big data software.

The service first became available in beta in September and was tested by global music streaming service Spotify, which was evaluating whether it should move its music files away from its own data centres and into the public cloud – and which cloud service could support it. Dataproc in its beta form supported the MapReduce engine, the Pig platform for writing programs and the Hive data warehousing software. Google says it has added new features and sharpened the tool since then.

While in its beta testing phase, Cloud Dataproc added features such as property tuning, VM metadata and tagging and cluster versioning. “In general availability new versions of Cloud Dataproc will be frequently released with new features, functions and software components,” said Google product manager James Malone.

Cloud Dataproc aims to minimise cost and complexity, which are the two major distractions of data processing, according to Malone.

“Spark and Hadoop should not break the bank and you should pay for what you actually use,” he said. As a result, Cloud Dataproc is priced at 1 cent per virtual CPU per hour. Billing is by the minute with a 10-minute minimum.
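
To make that pricing concrete, here is a small back-of-the-envelope calculator in Python based only on the figures quoted above (1 cent per vCPU per hour, billed by the minute, 10-minute minimum). It estimates the stated Dataproc charge only; any underlying compute, storage or network costs are outside its scope.

    def dataproc_charge(vcpus, runtime_minutes, rate_per_vcpu_hour=0.01):
        """Estimate the Cloud Dataproc charge using the pricing quoted above."""
        billable_minutes = max(runtime_minutes, 10)  # billed by the minute, 10-minute minimum
        return vcpus * (billable_minutes / 60.0) * rate_per_vcpu_hour

    # Example: a 32-vCPU cluster running a 25-minute job costs roughly $0.13.
    print(round(dataproc_charge(32, 25), 3))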

Analysis should run faster, Malone said, because clusters in Cloud Dataproc can start and stop in less than 90 seconds, whereas they take minutes in other big data systems; this can make analyses run up to ten times faster. The general release of Cloud Dataproc also promises simpler management, since clusters do not need specialist administration staff or software.

Cloud Dataproc also tackles two other data processing bugbears, scale and productivity, promised Malone. This tool complements a separate service called Google Cloud Dataflow for batch and stream processing. The underlying technology for the service has been accepted as an Apache incubator project under the name Apache Beam.

IBM launches object-based storage for the cloud

New object-based cloud storage could tackle the growing challenge presented by unstructured data, according to IBM.

Announcing its new Cloud Object Storage service at its InterConnect 2016 event in Las Vegas, IBM said the object storage technology it acquired from Cleversafe creates a fast, flexible, hybrid cloud storage service that gives companies new options for managing and analysing data.

Researcher IDC says 80% of new cloud apps will be big-data intensive. The cloud, mobile, IoT, analytics, social media, cognitive and other technologies all conspire to increase the data management workload, said John Morris, general manager of IBM Cloud Object Storage. Bringing Cleversafe technology to the cloud will give clients a way to keep on top of the problem.

The service offers a choice of multiple application programming interfaces and the option to store massive amounts of data on-premise, on the IBM Cloud or in a hybrid of both.

When the Cloud Object Storage service launches in June, it will come in three configurations: Nearline, Standard and Dedicated.

Nearline is a lower-cost cloud infrastructure for infrequently accessed data, ideal for archive, back-up and other non-time-critical workloads. The Standard tier will provide a higher-performance public cloud offering based on the Cleversafe technology, with three new APIs into S3 object storage.
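
As a rough sketch of what an S3-compatible interface means in practice, the snippet below writes and lists objects through the widely used boto3 library pointed at an S3-style endpoint. The endpoint URL, credentials and bucket name are placeholders rather than IBM's real values, and IBM's own SDKs and APIs may differ in detail.

    import boto3

    # All identifiers below are placeholders for illustration.
    cos = boto3.client(
        "s3",
        endpoint_url="https://object-storage.example.com",  # hypothetical S3-compatible endpoint
        aws_access_key_id="ACCESS_KEY",
        aws_secret_access_key="SECRET_KEY",
    )

    # Archive a backup object, then list what the bucket holds.
    with open("backup.tar.gz", "rb") as data:
        cos.put_object(Bucket="example-archive", Key="backups/2016-03-01.tar.gz", Body=data)

    for obj in cos.list_objects_v2(Bucket="example-archive").get("Contents", []):
        print(obj["Key"], obj["Size"])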

The Dedicated option gives a single-tenant IBM Object Storage system running on dedicated servers in IBM Cloud data centres. This is available as an IBM managed service or as a self-managed cloud solution, and gives clients access to object storage without the need for extra hardware or data-centre space.

IBM Cloud Object Storage will be available in a variety of licensing models, including perpetual, subscription or consumption. This means customers can buy storage capacity with the flexibility to move data between the enterprise and the cloud as business needs change. It will also support both file and object workloads, so enterprises can have a single data storage hub that supports both traditional and web-based applications.

When Hybrid Cloud Becomes Hybrid Fog By @CWvanOrman | @CloudExpo #Cloud

Hybrid cloud is an appealing infrastructure model. With hybrid cloud, you can keep hosting applications and data on your own premises where necessary, while taking advantage of cloud economics and elasticity as appropriate.
Unfortunately, IT organizations are often unprepared to manage application infrastructure that spans the on-premise data center and multiple clouds. The result: they achieve vastly superior infrastructure economics — but still can’t consistently ensure a great customer experience.

Provisioning the Entire Stack | @DevOpsSummit #DevOps #Microservices

As we move into the world of complete datacenter automation, there is a whole new selection of issues that we are learning to resolve – from custom hardware to a variety of provisioning tools at each level of automation. These are not unexpected issues, but they certainly provide us with plenty to do while we’re trying to reduce the amount of busy work we have to do.

We’re currently in the process of stringing together applications at the various layers to do server provisioning, application provisioning, and (for internal apps at least) application deployment. The options out there are a pretty broad swath that runs from all-in-one solutions like Puppet with Razor to specialized solutions hooked together to run the entire automation process. The end result is the same; the differences lie in implementation, vendor lock-in, and whether an organization values a full solution that can be implemented all at once over best-suited parts that can be implemented in parallel but have to be tied together.

Continuous Delivery and Legacy Software | @DevOpsSummit #DevOps #Microservices

The future of software releases is clear. Continuous delivery is here to stay. But does that mean that legacy software systems and infrastructure need to be altogether abandoned? If you polled the people busy redefining best practices today, they’d agree that in a decade we’re going to be effortlessly collaborating on highly complex systems that are updated continuously. QA will…

The post Can Continuous Delivery Accommodate Legacy Software Systems? appeared first on Plutora Inc.

WebRTC’s Impact in Capital Markets | @ThingsExpo #IoT #RTC #WebRTC

WebRTC is bringing significant change to the communications landscape that will bridge the worlds of web and telephony, making the Internet the new standard for communications. Cloud9 took the road less traveled and used WebRTC to create a downloadable enterprise-grade communications platform that is changing the communication dynamic in the financial sector.
In his session at @ThingsExpo, Leo Papadopoulos, CTO of Cloud9, will discuss the importance of WebRTC and how it enables companies to focus on building intellectual property into their platforms that support customer needs, while also providing the performance, service, and support levels expected by Fortune 100 companies.

Challenges and opportunities securing data in a world without borders

Data today is moving and multiplying at pace across boundaries, platforms and applications. With the growth in cloud adoption, social media and mobility, information very rarely stays within the secure perimeter of the enterprise anymore.

More and more businesses are looking to reap the benefits of the cloud. A recent study from the Economist Intelligence Unit found that the most mature enterprises are now looking to cloud strategies in order to expand sales channels. IDC also forecasts that total spending on cloud IT infrastructure will reach $53.1 billion by 2019, accounting for 46 per cent of the total enterprise IT infrastructure spend. However, with more critical data than ever before now residing outside of the corporate perimeter, the task of locating, securing and controlling this data will continue to be a challenge for organisations. The traditional fortress-building strategies that businesses have used to de-risk company data are no longer the viable option they once were.

A data-centric approach

The ability to roll out and maintain an effective data regulation strategy begins with an organisation's ability to identify sensitive data at the source, wherever that may be. In today's digital era it is essential that IT security professionals can visualise where their sensitive or confidential data resides. This involves putting strong data governance practices in place, which ensure the delivery of trusted, secured data.

Organisations also need to make it their business to understand the risks posed to their data, staying up to date with the constantly evolving threat landscape. Only once this is done can the right security measures be applied to that information, ensuring it is safe. This is a big shift in security posture, and it is essential for future data strategies: in light of a threat landscape rife with skilled and vigilant cybercriminals, businesses need to be equipped to fundamentally re-architect their security approaches to be data-centric. Security has to travel with the data, no matter where it goes. For IT professionals this means adopting an approach that focuses on managing and securing all end users and tying them to the data they create. By identifying and analysing sensitive data, such an approach can then be applied to help thwart data theft, whether from internal or external sources.

The key to making this happen is helping business users easily integrate, consume and analyse all types of data. From there, the organisation can understand where applications create sensitive information in databases and how the information is proliferated to other data stores for use by line-of-business applications, cloud services and mobile applications.
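
As a toy illustration of what 'identifying sensitive data at the source' can mean, the sketch below scans records for a couple of obvious patterns. Production data-governance tools go far beyond simple regular expressions, using classification dictionaries, checksums and lineage metadata; the example only shows the shape of the idea.

    import re

    # Toy patterns for illustration only; real classifiers are far richer.
    PATTERNS = {
        "email": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
        "card_number": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
    }

    def classify(record):
        """Return the sensitive-data categories detected in each field of a record."""
        findings = {}
        for field, value in record.items():
            hits = [name for name, rx in PATTERNS.items() if rx.search(str(value))]
            if hits:
                findings[field] = hits
        return findings

    print(classify({"user": "alice", "contact": "alice@example.com", "note": "card 4111 1111 1111 1111"}))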

The age of compliance

The recent introduction of EU data protection regulations has once again hammered home the need for organisations to ensure compliance with data regulations. Failing to manage and protect sensitive information can result in hefty fines of up to 4 per cent of global revenues, a sum that could jeopardise business viability. Of course, financial penalties are just one element of non-compliance. Another powerful incentive for businesses to adequately protect their data assets comes from the risk that data breaches pose for customer trust and corporate reputation.

No company wants to be known for its failure to protect confidential information. There is a growing list of organisations with first-hand experience of the impact that a breach can have on brand equity, and consumers' loss of confidence in business services can take a long time to repair. In fact, the latest research from Informatica into the State of the Data Nation reveals that security fears stop half of UK consumers sharing personal data with brands and businesses alike. What's more, over half are reclaiming access and plan to share less data over the next three years, while a third claim nothing could incentivise them to share data at all.

There’s no doubt that effective data security comes at a cost, but by implementing the correct tools and adapting data postures, data security needn’t be the burden that companies seem to fear. When implemented effectively, a solid data strategy can provide the protection that all corporate assets require to master data protection and breach resiliency.

Milton Keynes: Smart City Ecosystem Architecture | @CloudExpo #IoT #Cloud

The primary technology framework for a Smart City can best be described through an ‘Ecosystem Architecture’.
This would be applied as an ‘overlay’ across the city’s existing IT systems landscape, essentially providing two main components: A Cloud Applications Store and an IoT Sensor Network, united through this ecosystem approach.
An example of this model is the Milton Keynes case study, delivered by the Infonova Bearing Point partnership.

Case Study: Parallels Desktop is Vital for Cross-Platform Development

“I would recommend other developers take a good look at the new Pro Edition. With its integrated productivity and network tools and support for cloud services, it is practically the only option for programmers and app developers in my opinion.” ~ Rafael Regh, Student Developer
Award-winning student-developer Rafael Regh needed to simultaneously develop professional apps […]

The post Case Study: Parallels Desktop is Vital for Cross-Platform Development appeared first on Parallels Blog.