Category Archives: Public Cloud

Apache Spark reportedly outgrowing Hadoop as users move to cloud

Apache Spark is breaking down the barriers between data scientists and engineers, making machine learning easier, and is outgrowing Hadoop as an open source framework for cloud computing development, a new report claims.

The 2015 Spark User Survey was conducted by Databricks, the company founded by the creators of Apache Spark.

Spark adoption is growing quickly because users find it easy to use, reliably fast, and aligned with future growth in analytics, the report claims, with 91 per cent of respondents citing performance as their reason for adoption. Other reasons given were ease of programming (77 per cent), easy deployment (71 per cent), advanced analytics (64 per cent) and the capacity for real-time streaming (52 per cent).

The report, based on a survey of 1,400 Spark stakeholders, claims that the number of Spark users with no Hadoop components doubled between 2014 and 2015. The study set out to identify how the data analytics and processing engine is being used by developers and organisations.

The Spark growth claim is based on the finding that 48 per cent of users are running Spark in standalone mode, while 40 per cent run it on Hadoop’s YARN resource manager. At present 11 per cent of users are running Spark on Apache Mesos. The survey also found that 51 per cent of respondents run Spark on a public cloud.

The number of contributors to Spark rose from 315 to 600 over the last 12 months, which the report’s authors claim makes it the most active open source project in big data. Additionally, more than 200 organisations contribute code to Spark, which they claim makes it ‘one of’ the largest communities of engaged developers to date.

According to the report, Spark is being used for increasingly diverse applications, with data scientists particularly focused on machine learning, streaming and graph analysis projects. Spark was used to create streaming applications 56 per cent more frequently in 2015 than in 2014. The use of advanced analytics libraries, like MLlib for machine learning and GraphX for graph processing, is becoming increasingly common, the report says.
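To give a flavour of the kind of workload the report describes, here is a minimal PySpark sketch that trains a k-means model with MLlib’s RDD-based API (the API current at the time of the survey). The input path and the number of clusters are illustrative assumptions, not details from the report.

```python
# Minimal MLlib sketch: cluster numeric points read from a CSV file.
# The input path and k value are illustrative assumptions.
from pyspark import SparkContext
from pyspark.mllib.clustering import KMeans

sc = SparkContext(appName="mllib-kmeans-sketch")

# Each line is expected to be a comma-separated list of numeric features.
points = (sc.textFile("hdfs:///data/points.csv")
            .map(lambda line: [float(x) for x in line.split(",")]))

# Train k-means over the distributed dataset.
model = KMeans.train(points, k=3, maxIterations=10)
print(model.clusterCenters)

sc.stop()
```

The same job could equally be written in Scala or submitted to a cluster running standalone, on YARN or on Mesos, which is the deployment split the survey measured.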

According to the study, 41 per cent of those surveyed identified themselves as data engineers, while 22 per cent of respondents said they are data scientists. The most common languages used for open source big data projects in cloud computing are Scala (used by 71 per cent of respondents), Python (58 per cent), SQL (36 per cent), Java (31 per cent) and R (18 per cent).

Backblaze launches cheap cloud storage service

Backup service provider Backblaze has made a cloud storage service available for beta testing. When launched it could provide businesses with a cheap alternative to Amazon S3 and the storage services bundled with Microsoft Azure and Google Cloud.

According to sources, Backblaze B2 will offer a free tier of up to 10GB of storage, with 1GB per day of outbound traffic and unlimited inbound bandwidth. Developers will be able to access it through an API and a command-line interface, but the service will also offer a web interface for less technical users.
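As a rough idea of what developer access might look like, the sketch below uses Python’s requests library against B2’s native REST API. The endpoint names and response fields are assumptions based on the API as Backblaze later documented it, and may well differ from the beta described here.

```python
# Hypothetical sketch: authorise against Backblaze B2 and list buckets.
# Endpoint names and fields are assumptions; the beta API may differ.
import requests

ACCOUNT_ID = "your-account-id"        # placeholder
APPLICATION_KEY = "your-app-key"      # placeholder

# Authorise: returns an API base URL and an authorisation token.
auth = requests.get(
    "https://api.backblazeb2.com/b2api/v1/b2_authorize_account",
    auth=(ACCOUNT_ID, APPLICATION_KEY),
).json()

# List the buckets in the account using the returned token and API URL.
buckets = requests.post(
    auth["apiUrl"] + "/b2api/v1/b2_list_buckets",
    headers={"Authorization": auth["authorizationToken"]},
    json={"accountId": ACCOUNT_ID},
).json()

for bucket in buckets.get("buckets", []):
    print(bucket["bucketName"])
```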

Launched in 2007, Backblaze stores 150 petabytes of backup data and over 10 billion files on its servers, having built its own storage pods and software as a matter of policy. Now it intends to use this infrastructure-building knowledge to offer a competitive cloud storage service, according to CEO Gleb Budman.

“We spent 90 per cent of our time and energy on building out the cloud storage and only 10 per cent on the front end,” Budman told TechCrunch. The stability of its backup service technology persuaded many users to push for extending the service into general data storage.

In response to customer demand, Backblaze’s engineers spent a year working on the software to make this possible. Now the company is preparing to launch a business-to-business service that, it says, can compete with the cloud storage market’s incumbents on price and availability.

Backblaze’s service, when launched, will be half the price of Amazon Glacier and ‘about a fourth’ of the price of Amazon’s S3 service, according to sources. “Storage is still expensive,” Budman said.

Though the primary use for Backblaze B2 will be to store images, videos and other documents, Budman said he expects some users to use it to store large research data sets.

Public cloud generating $22 billion a quarter for IT companies

Public cloud computing generated over $22 billion in revenue for IT companies in the second quarter of 2015, according to a study by Synergy Research Group.

The revenue breaks down into $10 billion earned by companies supplying public cloud operators with hardware, software and data centre facilities, and $12 billion generated from selling infrastructure, platforms and software as a service.

In addition, the public cloud supports ‘huge’ revenue streams from a variety of internet services such as search, social networking, email and e-commerce platforms, says the report. It identifies the supply-side companies with the biggest share of revenues as Cisco, HP, Dell, IBM and Equinix. On the cloud services side the market leaders are AWS, Microsoft, Salesforce, Google and IBM.

As the public cloud makes inroads into the total IT market, the hardware and software used to build public clouds now account for 24 per cent of all data centre infrastructure spending. Public cloud operators and associated digital content companies account for 47 per cent of the data centre colocation market.

While the total IT market grew at less than five per cent per year, the growth of cloud revenues outpaced it. Infrastructure and platform as a service (IaaS/PaaS) revenues grew by 49 per cent in the past year, and software as a service (SaaS) grew by 29 per cent.

“Public cloud is now a market that is characterized by big numbers, high growth rates and a relatively small number of global IT players,” said Synergy Research Group’s chief analyst Jeremy Duke.

However, the report noted that there is still a place for regional and small-to-medium-sized public cloud players.

The FT discusses app and cloud strategy

BCN caught up with Christy Ross, Head of Application and Publishing Services, Technology, at the Financial Times, to get some insight into the company’s approach to digital publishing, mobile apps and the cloud.

BCN: From a digital perspective, what is the FT currently focussed on?

Christy Ross: Print has been written off for years now, no pun intended, but we’re still doing very well. However, our main interest these days, rather than investing in the print product, is in looking at how we can identify and supply other means of content delivery and then actually make some money from that. Over the past few years we’ve done things to help us maintain a direct relationship with our subscribers, such as building our own web app rather than placing anything on the Apple App Store or Play Store.

We have also done a lot around building APIs, so that we can provide distinct feeds of information to businesses, enabling them to come to us and say, ‘we are particularly interested in these areas of news, or analysis, and will pay you for that’. Of course we’ve also seen mobile take off massively, so probably over 50% of our new subscription revenue comes from mobile, rather than from the browser or tablets.

Why is the FT able to be so confident when asking for revenue from its readers?

We’ve been quite lucky. We were one of the first, if not the first, UK newspapers to introduce a paywall. A lot has been made of the fact that paywalls ‘don’t work,’ and we’ve seen a number of other daily national papers put them up and pull them back down again, but we are very wedded to ours.

That’s because we are a niche product. If you like, we’re ‘the business world’s second newspaper.’ So in the UK someone will have, say, their Times or the Telegraph (or in the US they’ll have the Washington Post or the New York Times), but then their second newspaper will be the Financial Times. You can’t get our content anywhere else, particularly not the analysis we provide. While we are interested in breaking news and do follow it, our key differentiator is analysis and comment on what is going on in the world and what it means long term. People are able to use these insights in their business decisions – and people are prepared to pay for that.

Is there anything unique about your current mobile application in itself?

At the end of the day we are a content provider. It’s about getting the content out as quickly as we can, and providing the tools to our editorial users so they can concentrate on writing and not worry so much about layout – we’re doing a lot more around templating, metadata, and making our content much richer, so that, when a reader comes on, the actual related stories mean something to them, and it’s easier for them to navigate through our considerable archive on the same people and companies, and to form a much more rounded opinion.

What about internal technical innovation?

We’ve built our own private cloud, and we’re also heavily investigating and starting to use AWS, so we’re doing a lot out there to support the public cloud. One of our strategy points is that for any new application or new functionality that we look to bring online, we have to start by looking at the public cloud to see if we can host and provide it there, and there has to be a very good technical reason for not doing it. We’re pushing it much more that way.

We have also borrowed a concept from Netflix, their Chaos Monkey approach, where every now and then we deliberately break parts of our estate to see how resilient applications are, and to see how we can react to some of our applications not being available and what that means to our user base. Just a couple of weekends ago we completely turned off one of our UK data centres, where we’d put most of our publishing and membership applications in advance, to see what it did, and also to see whether we could bring up the applications in our other data centres – to see how long it took us and what it meant for things like our recovery time objectives.

 

Christy Ross will be appearing at Apps World Europe (18-19 November, ExCeL, London)

Part 2: Cisco Live 2015 Recap – AWS Direct Connect, VIRL Facelift & More!

It was another great Cisco Live event this year! My colleague Dan Allen wrote a post summarizing the key takeaways he got out of the event. I wanted to add some of my own to supplement his. As you probably know, it was John Chambers’ last Cisco Live event as CEO – which makes it especially cool that I got this picture taken with him!


Expanded DevNet Zone

Last year Cisco introduced the DevNet Zone, which focused on giving people hands-on access to Cisco’s most groundbreaking technology, the kind that might be mistaken for science fiction until Cisco opened its toy box and let people see and touch what it had been hiding. This year we got to play with Internet of Things development environments, API-driven SDN solutions, virtual network simulation toolkits and drone technologies hosted by the co-founder of iRobot. Last year, it was four little booths in between two restrooms, with giveaways to get people to come in. This year, it consumed a whole section of the convention center with over 20 booths, 6 interactive labs and a range of exhibits and guest speakers delivering presentations on the future of technology.

Programmability and automation were a part of every session no matter what the topic was

It didn’t matter if you were attending entry-level or advanced breakout sessions, IT management track courses or developer workshops; everything you attended at Cisco Live this year had something to do with automation, programmability, cloud connectivity or application awareness. This was very different from any of the 8 Cisco Live events I’ve attended throughout my career. If you’re a technologist and have any doubt in your mind that this is where the industry is headed, you’d better start learning new skills because, like it or not, our customers and the customers of our customers are, or will soon be, believers and consumers of these technologies and consumption models.

Cisco and Amazon TEAM up to BEEF up AWS Direct Connect

AWS Direct Connect is backed by Amazon’s APN Partner program, a roster of ISPs and carriers that provide WAN circuits connected directly to AWS datacenters. That means if you’re a Level 3 or AT&T MPLS customer and you have 10 offices and 2 datacenters on that MPLS network, Amazon AWS can now become another site on that private WAN. That’s HUGE! Just look at a small portion of their ISP partner list:

  • AT&T
  • Cinenet
  • Datapipe
  • Equinix, Inc.
  • FiberLight
  • Fiber Internet Center
  • First Communications
  • Global Capacity
  • Global Switch
  • Global Telecom & Technology, Inc. (GTT)
  • Interxion
  • InterCloud
  • Level 3 Communications, Inc.
  • Lightower
  • Masergy
  • Maxis
  • Megaport
  • MTN Business
  • NTT Communications Corporation
  • Sinnet
  • Sohonet
  • Switch SUPERNAP
  • Tata Communications
  • tw telecom
  • Verizon
  • Vocus
  • XO Communications

 

Combine that with a CSR1000v and an ASAv and you have a public cloud that can be managed and utilized exactly like a physical colo, completely transparent to both your network teams and users.
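As a hedged illustration of what treating AWS as “another site on the WAN” looks like from the API side, the sketch below uses boto3 to list Direct Connect connections and the virtual interfaces riding on them. The region and credentials are assumptions; the circuit itself is still ordered through one of the partners listed above.

```python
# Sketch: inventory AWS Direct Connect connections and virtual interfaces
# with boto3. Region and credentials are illustrative assumptions.
import boto3

dx = boto3.client("directconnect", region_name="us-east-1")

# Physical or hosted Direct Connect connections delivered by an APN partner.
for conn in dx.describe_connections()["connections"]:
    print(conn["connectionId"], conn["connectionName"], conn["connectionState"])

# Virtual interfaces (the BGP sessions that carry traffic into your VPCs).
for vif in dx.describe_virtual_interfaces()["virtualInterfaces"]:
    print(vif["virtualInterfaceId"], vif["virtualInterfaceType"], vif["vlan"])
```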

ASAv in AWS

This little announcement slipped under the radar when it was made a week before Cisco Live, but it was definitely front and center in the Cisco Solutions Theater. The ASA1000v has been Cisco’s only answer to a full-featured virtual security appliance for the past two years or so. The only problem is that it required the Nexus 1000v, which the industry as a whole has been reluctant to embrace (particularly in the public cloud space). Well, good news: the ASAv doesn’t require the Nexus 1000v and has therefore opened the doors for the likes of Amazon AWS and Microsoft Azure to let us build an all-Cisco Internet and WAN edge within an AWS Virtual Private Cloud (VPC). This means you can manage the edge of your AWS VPC the same way you manage the edge of your datacenters and offices. The ASAv supports everything an ASA supports, which will soon include the full FirePOWER feature set. Have you ever tried building a VPN tunnel to an ASA at a customer’s datacenter from the AWS VPC Customer Gateway? I have – not the best experience. Well, not any more – it’s pretty cool!
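For the AWS side of that tunnel, here is a minimal boto3 sketch that registers an ASA’s public address as a customer gateway and requests a site-to-site VPN connection to an existing virtual private gateway. The IP address, ASN and gateway ID are placeholders, not values from the post.

```python
# Sketch: define an ASA/ASAv as the customer gateway for an AWS-managed VPN.
# The public IP, BGP ASN and VPN gateway ID are placeholder assumptions.
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

# Register the ASA's Internet-facing address as the customer gateway.
cgw = ec2.create_customer_gateway(
    Type="ipsec.1",
    PublicIp="203.0.113.10",   # placeholder outside interface
    BgpAsn=65000,              # placeholder private ASN
)["CustomerGateway"]

# Request a site-to-site VPN to an existing virtual private gateway.
vpn = ec2.create_vpn_connection(
    Type="ipsec.1",
    CustomerGatewayId=cgw["CustomerGatewayId"],
    VpnGatewayId="vgw-0123456789abcdef0",  # placeholder
)["VpnConnection"]

print(vpn["VpnConnectionId"], vpn["State"])
```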

ACI was big this year, but not as big as last year

I was expecting more of the same from last year on this one. Just about everywhere you looked last year, you saw something about ACI. This year was a more targeted effort, both in the breakout sessions and in the Cisco Solutions Theater. I’m not saying it didn’t get a lot of attention, just not as much as last year and certainly not more. This shouldn’t come as too big of a surprise to anyone used to Cisco’s marketing and positioning tactics, however. Last year was geared toward awareness of the new technology, and this year was geared more toward applying the technology to very specific use cases and toward advances in its capabilities. The honeymoon is clearly over and everyone was focused on how to live everyday life with ACI being a part of it.

APIC can interact with ASA and other non-Cisco devices

The ACI APIC is steadily gaining more capabilities for programmatic interaction with other Cisco and non-Cisco appliances. For example, it can now instantiate policies and other configuration elements on ASA, FortiGate, F5 and Radware appliances as part of its policy-driven infrastructure.
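To make that programmatic interaction concrete, here is a minimal sketch of authenticating to the APIC REST API and reading back the configured tenants. The controller address and credentials are placeholders, and the specific service-insertion objects used for ASA or F5 integration are not shown.

```python
# Sketch: log in to the ACI APIC REST API and list tenants.
# Controller address and credentials are placeholder assumptions.
import requests

APIC = "https://apic.example.com"

session = requests.Session()

# Authenticate; the APIC returns a session cookie used on later calls.
session.post(
    APIC + "/api/aaaLogin.json",
    json={"aaaUser": {"attributes": {"name": "admin", "pwd": "password"}}},
    verify=False,  # lab-only: skip certificate validation
)

# Query all tenant objects (class fvTenant).
tenants = session.get(APIC + "/api/class/fvTenant.json", verify=False).json()
for obj in tenants["imdata"]:
    print(obj["fvTenant"]["attributes"]["name"])
```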

iWAN almost officially tested and supported on CSR1000v

As of next month, the iWAN suite of technologies will be officially tested and supported on the CSR1000v platform, which means all of that functionality will now be available in public cloud environments. More to come on iWAN in another post.

CSR1000v

The CSR1000v (Cloud Services Router) is Cisco’s answer to a virtual router. Until now, it’s been something of an “Oh yeah? We can do that too” project. Now it’s a full-fledged product with a dedicated product team. It’s supported across just about every public cloud provider and by every Cisco Powered Cloud partner (Cirrity, Peak 10, etc.).

Additionally, I managed to get the product team to pull back the covers on the roadmap a bit and reveal that Dynamic Multipoint VPN (DMVPN) will be supported on the CSR1000v soon, along with a number of other ISR/ASR features, which will make for a truly seamless WAN that includes your public cloud resources.

Non-Cisco Cloud News – Azure Virtual Network now supports custom gateways

A big challenge to real adoption of non-Microsoft application workloads in Azure has been the inability to use anything but Azure’s gateway services at the edge of your Azure Virtual Network. Well, Cisco let the cat out of the bag on this one: Cisco CSRs and ASRs will soon be supported as gateway devices in an Azure Virtual Network. For me, this really brings Azure into focus when selecting a public cloud partner.

APIC-EM has more uses than ever

Cisco Application Policy Infrastructure Controller Enterprise Module (rolls right off the tongue, right?), or APIC-EM, is Cisco’s answer to an SDN controller. It’s part of Cisco’s ONE software portfolio and has more uses than ever. Don’t confuse the APIC-EM with the ACI APIC, however. The ACI APIC is the controller and central point of interaction for Cisco’s ACI solution and runs on Cisco C-Series servers. The APIC-EM, however, is a free SDN controller that can run as a VM and interact with just about anything that has an API. That’s right.
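As a rough sketch of that API-driven interaction, the snippet below requests a service ticket from an APIC-EM controller and lists the network devices it has discovered. The controller address and credentials are placeholders, and the endpoints reflect the northbound REST API as commonly documented for the early 1.x releases, so treat the details as assumptions.

```python
# Sketch: query an APIC-EM controller's northbound REST API.
# Address, credentials and API paths are placeholder assumptions.
import requests

APIC_EM = "https://apic-em.example.com"

# Obtain a service ticket used as the auth token on subsequent calls.
ticket = requests.post(
    APIC_EM + "/api/v1/ticket",
    json={"username": "admin", "password": "password"},
    verify=False,  # lab-only
).json()["response"]["serviceTicket"]

# List the devices the controller has discovered.
devices = requests.get(
    APIC_EM + "/api/v1/network-device",
    headers={"X-Auth-Token": ticket},
    verify=False,
).json()["response"]

for device in devices:
    print(device["hostname"], device["managementIpAddress"])
```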

VIRL got a facelift

Cisco’s Virtual Internet Routing Lab (VIRL) is getting some real attention. It’s an application, unveiled to Cisco DevNet partners last year, that lets you virtually build Cisco networks with VMs running real IOS and NX-OS code to simulate a design and test its functionality. As a partner, this is huge, as we can virtually replicate customer environments as a proof of concept or a troubleshooting tool. It’s getting more development support within Cisco.

 

A lot of crucial information and updates came out of this event. If you would like to discuss any in more detail, feel free to reach out!

 

By Nick Phelps, Principal Architect

IDC: Cloud to make up nearly half of IT infrastructure spending by 2019

Enterprise adoption of public cloud services seems to be outstripping private cloud demand


Total cloud infrastructure spending will grow by 21 per cent year over year to $32bn this year, accounting for approximately 33 per cent of all IT infrastructure spending, up from about 28 per cent in 2014, according to IDC.

The research and analyst house echoed claims that cloud computing has been significantly disrupting the IT infrastructure market over the past couple of years. The firm estimates last year cloud infrastructure spending totalled $26.4bn, up 18.7 per cent from the year before.

Kuba Stolarski, research manager for server, virtualization and workload research at IDC, said much of the growth over the next few years will be driven largely by public cloud adoption.

Private cloud infrastructure spending will grow by 16 per cent year on year to $12bn, while public cloud IT infrastructure spending will grow by a whopping 25 per cent in 2015 to $21bn – nearly twice as much, the firm believes.

“The pace of adoption of cloud-based platforms will not abate for quite some time, resulting in cloud IT infrastructure expansion continuing to outpace the growth of the overall IT infrastructure market for the foreseeable future,” Stolarski explained.

“As the market evolves into deploying 3rd Platform solutions and developing next-gen software, organizations of all types and sizes will discover that traditional approaches to IT management will increasingly fall short of the simplicity, flexibility, and extensibility requirements that form the core of cloud solutions.”

By 2019, the firm believes, cloud infrastructure spending will top $52bn and represent 45 per cent of total IT infrastructure spend; public cloud will represent about $32bn of that amount, and private cloud the remaining $20bn.

According to IDC, 15 per cent of the overall infrastructure spend in EMEA was related to cloud environments in 2014, up from 8 per cent in 2011. $3.4bn was spent on hardware going to cloud environments in EMEA in 2013, up 21 per cent from 2012.

vCloud Air: Helping a customer move to a hybrid cloud environment

As you most likely know, vCloud Air is VMware’s offering in the hybrid/public cloud space. In my opinion, it’s a great offering. It allows you to take existing virtual machines and migrate them up to the cloud so that you can manage everything with your existing vCenter. It’s also a very good option for disaster recovery.

I worked on a project recently where the client wanted to know what they needed to do with their infrastructure. They were looking for solid options to build a foundation for their business, whether it was on-prem, a cloud-based offering, or a hybrid approach.

In this project, we ended up taking their VMs and physical servers and putting a brand new host on site running VMware, hosting a domain controller and a file server. We put the rest of the production servers and the test/dev environment in vCloud Air. Additionally, this helped them address their disaster recovery needs. It gave them a place to take their systems without a lot of upfront money, and a place to recover their VMs in the event of a disaster.

 

http://www.youtube.com/watch?v=OP3qO-SI6SY

 

Are you interested in learning more about vCloud Air? Reach out!

 

By Chris Chesley, Solutions Architect

Microsoft Azure – It’s More Than Just Portability

When people discuss Microsoft Azure, they often think about portability to the cloud. One of the misconceptions about the Azure cloud is that you’re just taking your on-prem virtual machines and moving them to the cloud when, in reality, Azure is much more than that. It is about VM portability, but it is also about running different platforms in the cloud. It’s about using instances, which allow users to move, say, a web server to an instance in the Azure cloud so they don’t have to worry about the patching and management of that server from month to month. Instead, users know that it’s already taken care of for them. Other benefits include uptime SLAs and backup solutions.

Watch the video below with DJ Ferrara to learn more about the benefits Microsoft Azure has to offer.

 

Microsoft Azure – What are the benefits?

 

http://www.youtube.com/watch?v=yfsobUCjff0

What are your thoughts on Microsoft Azure? Has your organization utilized the platform? Any plans to use Azure in the future? Why or why not?

To hear more from DJ, watch his video blog discussing the pros and cons of different public cloud platforms and when it makes sense to use each. If you’d like to speak with DJ more about the Azure cloud, email us at socialmedia@greenpages.com.

 

Video with DJ Ferrara, Vice President & Enterprise Architect

Managing Resources in the Cloud: How to Control Shadow IT & Enable Business Agility

 

In this video, GreenPages CTO Chris Ward discusses the importance of gaining visibility into Shadow IT and how IT departments need to offer their users the same agility that public cloud offerings like Amazon can provide.

 

http://www.youtube.com/watch?v=AELrS51sYFY

 

 

If you would like to hear more from Chris, download his on-demand webinar, “What’s Missing in Today’s Hybrid Cloud Management – Leveraging Cloud Brokerage”

You can also download this ebook to learn more about the evolution of the corporate IT department & changes you need to make to avoid being left behind.

 

 

 

Have You Met My Friend, Cloud Sprawl?

By John Dixon, Consulting Architect

 

With the acceptance of cloud computing gaining steam, more specific issues related to adoption are emerging. Beyond the big-show topics of self-service, security, and automation, cloud sprawl is one of the specific problems that organizations face when implementing cloud computing. In this post, I’ll take a deep dive into this topic, what it means, how it’s caused, and some options for dealing with it now and in the future.

Cloud Sprawl and VM Sprawl

First, what is cloud sprawl? Simply put, cloud sprawl is the proliferation of IT resources – that provide little or no value – in the cloud. For the purposes of this discussion, we’ll consider cloud to be IaaS, and the resources to be individual server VMs. VM sprawl is a similar concept that happens when a virtual environment goes unchecked. In that case, it was common for an administrator, or someone with access to vCenter, to spin up a VM for testing, perform some test or development activity, and then forget about it. The VM stayed running, consuming resources, until someone or something identified it, determined that it was no longer being used, and shut it down. It was a good thing that most midsize organizations limited vCenter or console access to perhaps 10 individuals.  So, we solved VM sprawl by limiting access to vCenter, and by maybe installing some tools to identify little-used VMs.

So, what are the top causes of cloud sprawl? In IT operations terms, we have the following:

  • Self-service is a central advantage of cloud computing, and essentially cloud means opening up a request system to many users
  • Traditional IT service management (a.k.a. ITIL) is somewhat limited in dealing with cloud, specifically configuration management and change management processes
  • There remains limited visibility into the costs of IT resources, though cloud improves this since resource consumption ends up as a dollar amount on a bill…somewhere

How is Cloud Sprawl Different?

One of the main ideas behind cloud computing – and a differentiator between plain old virtualization and centralization – is the notion of self-service. In the language of VMware, self-service IaaS might be interpreted as handing out vCenter admin access to everyone in the company. Well, in a sense, cloud computing is kind of like that – anyone who wants to provision IaaS can go out to AWS and do just that. What’s more? They can request all sorts of things, aside from individual VMs. Entire platform stacks can be provisioned with a few clicks of the mouse. In short, users can provision a lot more resources, spend a lot more money, and cause a lot of problems in the cloud.
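To underline how little friction there is, here is a hedged sketch of the handful of boto3 calls it takes for anyone with credentials to stand up a server in AWS. The AMI ID, key pair and tag are placeholders, not a recommended configuration.

```python
# Sketch: how little it takes to provision IaaS programmatically in AWS.
# AMI ID, key name and instance type are placeholder assumptions.
import boto3

ec2 = boto3.resource("ec2", region_name="us-east-1")

instances = ec2.create_instances(
    ImageId="ami-0123456789abcdef0",  # placeholder AMI
    InstanceType="t2.micro",
    KeyName="my-keypair",             # placeholder key pair
    MinCount=1,
    MaxCount=1,
    TagSpecifications=[{
        "ResourceType": "instance",
        "Tags": [{"Key": "owner", "Value": "self-service-user"}],
    }],
)
print(instances[0].id)
```

A few lines like these, run by enough people without governance around them, are exactly how sprawl accumulates.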

We have seen one of our clients estimate their cloud usage at a certain amount, only to discover that actual usage was over 10 times their original estimate!

In addition, cloud sprawl can go in different directions than plain old VM sprawl. Since there are different cloud providers out there, the proliferation of processes and automation becomes something to watch out for. A process to deal with your internal private cloud may need to be tweaked to deal with AWS. And it may need to be tweaked again to deal with another cloud provider. In the end, you may end up with a different process to deal with each provider (including your own datacenter). That means more processes to audit and bring under compliance. The same goes for tools – tools that were good for your internal private cloud may be completely worthless for AWS. I’ve already seen some of my clients filling their toolboxes with point solutions that are specific to one cloud provider. So, bottom line is that cloud sprawl has the potential to drag on resources in the following ways:

  1. Orphaned VMs – a lot like traditional VM sprawl, resulting in increased spend that is completely avoidable
  2. Proliferation of processes – increased overhead for IT operations to stay compliant with various regulations
  3. Proliferation of tools – financial and maintenance overhead for IT operations

 

Download John’s ebook “The Evolution of Your Corporate IT Department” to learn more

 

How Can You Deal with Cloud Sprawl?

One way to deal with cloud sprawl is to apply the same treatment that worked for VM sprawl: limit access to the console, and install some tools to identify little-used VMs. At GreenPages, we don’t think that’s a very realistic option in this day and age. So, we’ve conceptualized two new approaches:

  1. Adopt request management and funnel all IaaS requests through a central portal. This means using the accepted request-approve-fulfill paradigm that is a familiar concept from IT service management.
  2. Sync and discover. Give users the freedom to obtain resources from the supplier of their choosing, whenever and wherever they want. IT operations then discovers what has been done and runs its usual governance processes (e.g., chargeback, showback) on the transactions. A minimal sketch of this approach follows this list.
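Here is that sketch of the sync-and-discover idea, assuming AWS as the provider and boto3 as the client library: inventory every running instance, attribute it to an owner tag, and flag anything untagged or long-running for a showback or clean-up report. The tag name and the 30-day threshold are illustrative assumptions.

```python
# Sketch: "sync and discover" against AWS - inventory running instances
# and flag candidates for showback or clean-up. The owner tag and the
# age threshold are illustrative assumptions.
import boto3
from datetime import datetime, timedelta, timezone

ec2 = boto3.client("ec2", region_name="us-east-1")
age_limit = datetime.now(timezone.utc) - timedelta(days=30)

paginator = ec2.get_paginator("describe_instances")
for page in paginator.paginate(
    Filters=[{"Name": "instance-state-name", "Values": ["running"]}]
):
    for reservation in page["Reservations"]:
        for inst in reservation["Instances"]:
            tags = {t["Key"]: t["Value"] for t in inst.get("Tags", [])}
            owner = tags.get("owner", "UNKNOWN")
            too_old = inst["LaunchTime"] < age_limit
            if owner == "UNKNOWN" or too_old:
                print(inst["InstanceId"], owner, inst["LaunchTime"].isoformat())
```

Reports generated this way feed the chargeback and showback processes mentioned above without asking users to change how they request resources.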

Both options have been built into our Cloud Management as a Service (CMaaS) platform. I see the options less as an “either/or” decision and more as a progression of maturity within an organization. Begin with Option 2 – Sync and Discover, and move toward Option 1 – Request Management.

As I’ve written before, and I’ll highlight here again, IT service management practices become even more important in the cloud. Defining services and applying proper configuration management, change management, and financial management are crucial to operating cloud computing in a modern IT environment. The important thing to do now is to automate configuration and change management so that they don’t impede the speed and agility that come with cloud computing. Just how do you automate configuration and change management? I’ll explore that in an upcoming post.

See both options in action in our upcoming webinar on cloud brokerage and governance. Our CTO Chris Ward will cover:

  • Govern cloud without locking it down: see how AWS transactions can be automatically discovered by IT operations
  • Influence user behavior: see how showback reports can influence user behavior and conserve resources, regardless of cloud provider
  • Gain visibility into costs: see how IaaS costs can be estimated before provisioning an entire bill of materials

 

Register for our upcoming webinar being held on May 22nd @ 11:00 am EST, “The Rise of Unauthorized AWS Use: How to Address Risks Created by Shadow IT.”