Category archive: IaaS

HPE to give customers access to IaaS from NTT Communications

HPE customers can now get instant infrastructure as a service (IaaS) from the NTT Communications portfolio, following an agreement with the Japanese telco’s NTT America division.

The enterprise-level service offers public, private and hybrid cloud options, plus NTT America’s professional services including cloud migration, data centre consolidation, managed infrastructure services and disaster recovery-as-a-service (DRaaS).

Demand for IaaS is rising, according to analyst firm Transparency Market Research, which says the $15.6 billion online infrastructure market of 2014 will grow into a $73.9 billion IaaS trade by 2022.

NTT America will be one of a few global IaaS partners to HPE, said its executive VP of Global Enterprise Services Jeffrey Bannister. Only the integration of best-of-breed technologies within NTT’s own infrastructure can help customers stay ahead of their competition, said Bannister.

NTT Com’s secure network coverage (VPN) reaches 196 countries through a Tier 1 IP network and it has 140 data centres across the world with an enterprise-grade cloud footprint in 14 global markets and a planned expansion to 15.

In August BCN reported how NTT Com launched a multi-cloud connect service with direct private links to Amazon Web Services, Microsoft Azure and other top tier cloud service providers.

What was once a disruptive innovation is the new norm as businesses shift to off-premise systems, said Chuck Adams, HPE’s Partner Ready Service Provider Programme director. “IaaS is IT infrastructure without the overhead,” said Adams.

Carrenza claims it’s now top cloud host for UK government digital service

UK cloud service provider Carrenza has announced it is now providing the majority of hosting for the Government Digital Service (GDS), having made the production and staging environments for the Gov.UK site live on its cloud infrastructure.

Gov.UK has now rationalised hundreds of individual websites for government departments and public bodies, concentrating the traffic of 24 ministerial departments and 28 other organisations, according to Carrenza.

Infrastructure as a service (IaaS) provider Carrenza was initially asked to provide the infrastructure for Gov.UK’s preview operation in 2013 but, it claims, once it opened a second UK data centre its role was expanded. Carrenza rents capacity in Slough and London from data centre operators Equinix and Level 3.

Carrenza runs its IaaS and platform as a service (PaaS) offerings on a VMware-based cloud built on HP servers and HP 3PAR SAN storage which, it says, supports a range of operating system, application and database technologies that includes “pretty much anything that runs on x86 architecture”. After Carrenza achieved official security accreditation, the GDS moved the majority of Gov.UK’s staging and production systems to the Carrenza Cloud, which has now received 2 billion visits, it says.

GDS originally found Carrenza through the G-Cloud III framework and a competitive tendering process. A major consideration for any cloud service provider, when pitching for contracts with the GDS, is a commitment to open source technology, according to Carrenza CEO Dan Sutherland.

Gov.UK’s custom software was developed in-house at GDS, which then needed to source cloud hosting and support for its flagship website; Carrenza was chosen to provide both.

“The launch of Gov.uk was a significant milestone,” said Sutherland. Open source has underpinned open dialogue and is helping to change and improve the way government communicates with its citizens, according to Sutherland.

Any cloud service provider wanting to win government contracts needs to concentrate on communicating with its government customers, according to Andrew Mellish, Carrenza’s Head of Public Sector Services. “Our team understands what GDS is trying to achieve and how best to deliver the technologies they are using,” said Mellish. “When someone from GDS calls one of our engineers, they know they are speaking to someone who gets it and will work with them as efficiently as possible.”

Rackspace pairs up with Microsoft to offer managed support for Azure

Rackspace Hosting has paired up with Microsoft to manage and offer technical support for Microsoft’s public cloud computing platform, Azure. Azure support and managed services are currently available, and expansion to overseas customers will begin in 2016.

Rackspace has struggled to compete with larger companies and their cloud platforms, such as Amazon Web Services, and this agreement with Microsoft marks its first major deal to support public cloud services other than its own.

Rackspace Chief Technology Officer, John Engates, has said, “Stay tuned. As part of our managed cloud strategy, a tenet of that is we want to support industry-leading technologies. Our strategy really gives us the opportunity to apply fanatical support to any leading industry platform in the future. So stay tuned in terms of announcements.”

Rackspace hopes to improve profit margins and reduce capital spending by offering managed services and technical support for public clouds, and it is starting with Microsoft’s Azure. Rackspace’s main strength has been providing fanatical service, training and technical support to smaller businesses.

Rackspace technical support will be available to clients directly through Microsoft. Rackspace may also resell Microsoft’s IaaS services to its customers. In the fourth quarter of 2014, IaaS services accounted for thirty-one percent of Rackspace’s total revenue.

Engates also added that Rackspace will help customers build apps that run in hybrid, private-public cloud environments. Many companies are becoming interested in the public-private cloud model, with important business apps running on private servers while tapping public IaaS providers on an as-needed basis.

Dev-focused DigitalOcean raises $83m from Access Industries, Andreessen Horowitz

DigitalOcean raised $83m this week, which it will use to add features to its IaaS platform

DigitalOcean this week announced it has raised $83m in a series B funding round the cloud provider said would help it ramp up global expansion and portfolio development.

The round was led by Access Industries with participation from seasoned tech investment firm Andreessen Horowitz.

DigitalOcean offers infrastructure as a service in a variety of Linux flavours and aims its services primarily at developers, though the company said the latest round of funding, which brings the total amount it has secured since its founding in 2012 to $173m, will be used to aggressively expand its feature set.

“We are laser­-focused on empowering the developer community,” said Mitch Wainer, co-founder and chief marketing officer at DigitalOcean. “This capital infusion enables us to expand our world­-class engineering team so we can continue to offer the best infrastructure experience in the industry.”

The company is fairly young and has just ten datacentres globally, yet it claims to serve roughly 500,000 individual developers deploying cloud services on its IaaS platform, a respectable size by any measure. It added another European datacentre in Frankfurt in April, the company’s third on the continent.

But with bare bones IaaS competition getting more intense it will be interesting to see how DigitalOcean evolves; given its emphasis on developers it is possible the company’s platform could evolve into something more PaaS-like.

“We began with a vision to simplify infrastructure that will change how millions of developers build, deploy and scale web applications,” said Ben Uretsky, chief exec and co-­founder of DigitalOcean. “Our investors share our vision, and they’ll be essential partners in our continued growth.”

Mirantis, Pivotal team up on OpenStack, Cloud Foundry integration

Mirantis and Pivotal are working to integrate their commercial deployments of OpenStack and Cloud Foundry, respectively

Pivotal and Mirantis announced this week that the two companies are teaming up to accelerate integration of Cloud Foundry and OpenStack.

As part of the move Pivotal will support Pivotal CF, the company’s commercial distribution of the open source platform-as-a-service, on Mirantis’ distribution of OpenStack.

“Our joint customers are seeking open, bleeding-edge technologies to accelerate their software development and bring new products to market faster,” said James Watters, vice president and general manager of the Cloud Platform Group at Pivotal.

“Now, with Pivotal Cloud Foundry and Mirantis OpenStack, enterprises across various industries can rapidly deliver cloud-native, scalable applications to their customers with minimal risk and maximum ROI,” Watters said.

The move comes just one month after Mirantis announced it would join the Cloud Foundry Foundation in a bid to help drive integration between the two open source platforms. At the time, Alex Freedland, Mirantis co-founder and chairman, said an essential part of rolling out software to help organisations build their own clouds is making it as easy as possible to deploy and manage technologies “higher up the stack” like Cloud Foundry.

“Enterprises everywhere are adopting a new generation of tools, processes and platforms to help them compete more effectively,” said Boris Renski, Mirantis chief marketing officer and co-founder. “Mirantis and Pivotal have made Pivotal Cloud Foundry deployable on Mirantis OpenStack at the click of a button, powering continuous innovation.”

Joint customers can install Pivotal Cloud Foundry onto Mirantis OpenStack using the companies’ deployment guide, but the two are working towards adding a full Pivotal CF installation to Murano, OpenStack’s application catalogue, in the next OpenStack release.

Are We All Cloud Service Brokers Now? Part II

By John Dixon, Consulting Architect

In my last post, I discussed Cloud Service Brokers and some of their benefits after reading a couple of articles from Robin Meehan (Article 1 here and Article 2 here). In this post, I will break down some of Robin’s points and explain why I agree or disagree with each.

At the end of my last post, I was breaking down cloud arbitrage into three areas (run-time, deployment-time, plan-time). Credit to Robin for run-time and deployment-time arbitrage. I really like those terms, and I think they illuminate the conversation. So, run-time cloud arbitrage is really science fiction right now – this is where the CSB moves running workloads around on the fly to find the best benefit for the customer. I haven’t seen any technology (yet) that does this. However, VMware does deployment-time and run-time arbitrage with vMotion and the Distributed Resource Scheduler (DRS) – albeit in a single virtual datacenter, with individual VMs, and with a single policy objective: to balance a cluster’s load across vSphere nodes. See Duncan Epping’s excellent write-up on DRS here. Even 10 years ago, this was not possible. 15 years ago, this was certainly science fiction. Now, it’s pretty common to have DRS enabled for all of your vSphere clusters.

A few of Robin’s points…

Point 1:
“The ability to migrate IT workloads dynamically (i.e. at run-time, not at deployment time) is something I sometimes see as a capability under the ‘cloud broker’ banner, but in my view it really just doesn’t make sense – at least not at the moment.”

I agree. Run-time cloud arbitrage and workload migration à la vMotion is not possible in the cloud today. Will it be possible within the next few years? Absolutely. I think it will first manifest itself in a VMware High Availability-like scenario. Again, see Duncan Epping’s fantastic deep-dive into HA. If cloud provider X drops off of the internet suddenly, then restart the resources and application at cloud provider Y (where cloud provider Y might even be your own datacenter). This is sometimes known as DR as a service, or DRaaS. And even now, there are some DRaaS solutions coming onto the market.
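
To make the provider-X-to-provider-Y idea concrete, here is a minimal sketch of that failover logic, assuming a polled health endpoint. Everything in it is hypothetical: the URL, the thresholds, and restore_at_provider_y() are stand-ins for what a real DRaaS product would do, not any vendor’s API.

```python
# Illustrative DRaaS-style failover loop; all names and endpoints are
# hypothetical stand-ins, not any vendor's actual API.
import time
import urllib.request

PRIMARY_HEALTH_URL = "https://status.provider-x.example/health"  # hypothetical
FAILURES_BEFORE_FAILOVER = 3
CHECK_INTERVAL_SECONDS = 30

def primary_is_healthy() -> bool:
    """Treat any HTTP 200 from the health endpoint as 'provider X is up'."""
    try:
        with urllib.request.urlopen(PRIMARY_HEALTH_URL, timeout=5) as resp:
            return resp.status == 200
    except OSError:
        return False

def restore_at_provider_y() -> None:
    """Placeholder: restart the replicated workload at provider Y
    (which could just as easily be your own datacenter)."""
    print("Provider X unreachable; restarting workload at provider Y...")

def watch_and_failover() -> None:
    consecutive_failures = 0
    while True:
        if primary_is_healthy():
            consecutive_failures = 0
        else:
            consecutive_failures += 1
            if consecutive_failures >= FAILURES_BEFORE_FAILOVER:
                restore_at_provider_y()
                break
        time.sleep(CHECK_INTERVAL_SECONDS)
```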

Point 2:
“The rate of innovation in the IaaS/PaaS/DaaS market is such that most of the other vendors are playing catch-up with AWS, as AWS continue to differentiate themselves from the following pack. This shows no sign of slowing down over the next couple of years – so the only way a migrated workload is going to work across multiple cloud vendors is if it only relies on the lowest common denominator functionality across the vendors, which is typically basic storage, virtualised compute and connectivity.”

I also agree: the rate of innovation in the market for cloud computing is rapid as specialization sets in at an industrial level. This also means that downward price pressures are enormous for vendors in the cloud space, even today as vendors vie for market share. As switching costs decrease (e.g., as portability of applications increases), prices for IaaS will decrease even more. Now, wouldn’t you, as a customer, like to take advantage of this market behavior? Take into consideration that CSBs aggregate providers, but they also aggregate customer demand. If you believe this interpretation of the market for IaaS, then you’ll want to position yourself to take advantage of it by planning portability for your applications. A CSB can help you do this.
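
As a rough illustration of deployment-time arbitrage, the sketch below picks whichever provider is cheapest today from a price table a CSB might aggregate. The provider names and prices are invented for the example; the approach only works if the workload is portable enough to land on any of them.

```python
# Illustrative deployment-time arbitrage: place a portable workload with
# whichever provider is cheapest today. Prices and names are made up.
HOURLY_PRICE_PER_INSTANCE = {
    "provider-a": 0.095,
    "provider-b": 0.087,
    "provider-c": 0.102,
}

def cheapest_provider(prices: dict) -> str:
    """Pick the lowest-cost provider; this only works if the app sticks to
    lowest-common-denominator IaaS and can run on any of them."""
    return min(prices, key=prices.get)

if __name__ == "__main__":
    target = cheapest_provider(HOURLY_PRICE_PER_INSTANCE)
    print(f"Deploying to {target} at ${HOURLY_PRICE_PER_INSTANCE[target]:.3f}/hour")
```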

Point 3:
“The bottom line is that if you are going to architect your applications so they can run on any cloud service provider, then you can’t easily use any of the good bits and hence your value in migrating to a cloud solution is diminished. Not ruined, just reduced.”

Disagree. To take advantage of market behavior, customers should look to avoid using proprietary features of IaaS platforms because they compromise portability. As we noted earlier, increased portability of applications means more flexibility to take advantage of market behavior that leads to decreasing prices.

This is where perspective on cloud becomes really important. For example, GreenPages has a customer with a great use case for commodity IaaS. They may deploy ~800 machines in a cluster at AWS for only a matter of hours to run a simulation or solve a problem. After the result is read, these machines are completely destroyed—even the data. So, it makes no difference to this customer where they do this work. AWS happens to be the convenient choice right now. Next quarter, it may be Azure, who knows? I’m absolutely certain that this customer sees more benefit in avoiding the use of proprietary features (a.k.a. the “good bits” of cloud) in a cloud provider than in using them.
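
That ephemeral, provider-agnostic pattern can be expressed in a few lines. This is a toy sketch of the provision, compute, destroy lifecycle; the Provider class is a made-up stand-in for any commodity IaaS API, not a real SDK.

```python
# Toy sketch of the ephemeral-cluster use case: provision, compute,
# read the result, destroy everything. Provider is hypothetical.
class Provider:
    """Stand-in for any commodity IaaS API (AWS today, Azure tomorrow)."""
    def __init__(self, name: str):
        self.name = name

    def provision(self, count: int) -> list:
        print(f"Provisioning {count} instances on {self.name}")
        return [f"{self.name}-node-{i}" for i in range(count)]

    def destroy(self, nodes: list) -> None:
        print(f"Destroying {len(nodes)} instances (and their data) on {self.name}")

def run_simulation(provider: Provider, cluster_size: int = 800) -> str:
    nodes = provider.provision(cluster_size)
    try:
        return f"result computed across {len(nodes)} nodes"  # the actual job
    finally:
        provider.destroy(nodes)  # nothing persists after the result is read

print(run_simulation(Provider("aws")))  # next quarter: Provider("azure")
```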

What is your perspective on cloud?
• A means to improve time to market and agility
• A way to transform capex into opex
• Simply a management paradigm – you can have cloud anywhere, even internally as long as you have self-service and infinite resources
• An enabler for a new methodology like DevOps
• Simply a destination for applications

I think that a good perspective may include all of these things. Leave a comment and let me know your thoughts.

Interested in learning more? Download this free whitepaper ‘Cloud Management, Now!’

Don’t Be a Michael Scott – Embrace Change in IT

By Ben Stephenson, Journey to the Cloud

One of the biggest impediments to the adoption of new technologies is resistance to change. Many IT departments are entrenched and content in the way they currently run IT. But as the technology industry continues to embrace IT-as-a-Service, IT departments must be receptive to change if they want to stay competitive.

I’m a big fan of the TV show The Office. In my opinion, it’s the second funniest series behind Seinfeld (and it’s a very close second). Dunder Mifflin Scranton Regional Manager Michael Scott is a quintessential example of a decision maker who’s against the adoption of new technologies because of fear, a lack of understanding, and downright stubbornness.  

In the “Dunder Mifflin Infinity” episode in Season Four, the young, newly promoted hot-shot exec (and former intern) Ryan Howard returns to the Scranton branch to reveal his plan on how he’s going to use technology to revitalize the company. Part of his plan is the rollout of a new website that will allow Dunder Mifflin to be more agile and allow customers to make purchases online. Michael and his loyal sidekick (and part-time beet farmer) Dwight Schrute are staunchly opposed to this idea.

At this point in the episode Michael is against Ryan’s idea of leveraging technology to improve the business process out of pure stubbornness. Michael hasn’t heard Ryan’s strategy or thought out the pros and cons of leveraging technology to improve business processes. His mindset is simply “How can this new technology possibly be better than the way we have always done things?”

Maybe your company has always bought infrastructure and run it in house—so why change now? Well, running a hybrid cloud environment can provide better service to your end users and also contribute to cost savings. Regardless of whether you act or not, it’s something you need to keep an open mind about and look into closely. Dismissing the concept out of hand isn’t going to do you any good.

Creed Bratton is the oldest employee in the Scranton office. After hearing Ryan’s announcement about implementing new technologies, Creed gets extremely worried that he’s going to get squeezed out of his job. He goes to Michael and shares his concerns that both their jobs may be in jeopardy. At this point, Michael is now against the adoption of technology due to a lack of understanding. Ryan’s plan is to retrain his employees so that they have the knowledge and skillset to leverage new technologies to improve the business—not to use it as a means to downsize the workforce.

This is similar to the fear that cloud computing will cause widespread layoffs of IT workers. This is not necessarily the case. It’s not about reducing jobs; it’s about retraining current employees to take on new roles within the department.

Ryan claims that the new website is going to significantly increase sales. Michael and Dwight set out on a road trip to win back several key customers whose accounts they have recently lost to competitors to prove to Ryan that they don’t need a website. Their strategy? Personally deliver fruit baskets. Each customer ends up turning them down because the vendors they are currently using have websites and offer lower prices.

In this case, Dunder Mifflin’s lack of IT innovation is directly affecting its bottom line. They’re making it an easy decision for customers to leave because they simply aren’t keeping pace with the competition. As a modern day IT department, you need to be leveraging technologies that allow people to do their jobs more easily and in turn reduce costs for the organization. For example, by installing a SaaS-based marketing automation tool (e.g., HubSpot), your marketing team can automate workloads and spend more time generating leads for the sales team to drive revenue. By using Amazon, or another IaaS platform, you have the ability to buy only the capacity you actually need, saving on infrastructure hardware capital and maintenance costs. For workloads that make more sense running on-prem, creating a private cloud environment with a service catalog can streamline performance and give users the ability to choose and instantly receive the IT services they need.

At the end of the episode, an enraged Michael and Dwight head back to the office. On their way back, Michael’s GPS instructs him to take a right hand turn. Dwight looks at the screen and tells Michael that it’s saying to bear right around the bend, but Michael takes the sharp right, trusting the machine, and follows it…directly into a lake. Dwight shouts that he’s trained for this moment and jumps into the two feet of water to valiantly save Michael. When they get back to the office Michael announces “I drove my car into a [bleep] lake. Why you may ask did I do this? Well, because of a machine. A machine told me to drive into a lake. And I did it! I did it because I trusted Ryan’s precious technology, and look where it got me.” At this point, Michael is resisting technology because of fear.

In today’s changing IT landscape, embarking on new IT initiatives can be scary. There are risks involved, and there are going to be bumps along the way. (Full disclosure: Ryan ends up getting arrested later in the season for fraud after placing orders multiple times in the system—but you get the idea.) But at the end of the day, the change now taking place in IT is inevitable. To be successful, you need to carefully, and strategically, plan out projects and make sure you have the skillsets to get the job done properly (or use a partner like GreenPages to help). The risk of adopting new technologies is nothing compared to the risk of doing nothing and being left behind. Leave a comment and share how your organization is dealing with the changing IT landscape…or let me know what your favorite Office episode is…

If you’d like to talk more about how GreenPages can help with your IT transformation strategy, fill out this form!

The PaaS Market as We Know it Will Not Die Off

I’ve been hearing a lot about Platform as a Service (PaaS) lately as part of the broader discussion of cloud computing, both from customers and in articles across the web. In this post, I’ll describe PaaS, discuss a recent article that came out on the subject, and take a shot at sorting out IaaS, PaaS, and SaaS.

What is PaaS?

First a quick trip down memory lane for me. As an intern in college, one of my tours of duty was through the manufacturing systems department at an automaker. I came to work the first day to find a modest desktop computer loaded with all of the applications I needed to look busy, and a nicely printed sheet with logins to various development systems. My supervisor called the play: “I tell you what I want, you code it up, I’ll take a look at it, and move it to test if it smells ok.” I and ten other ambitious interns were more than happy to spend the summer with what the HR guy called “javaweb.” The next three months went something like this:

Part I: Set up the environment…

  1. SSH to abcweb01dev.company.com, head over to /opt/httpd/conf/httpd.conf, configure AJP to point to abcapp01dev and abcapp02dev.company.com
  2. SSH to abcapp01dev.company.com, reinstall the Java SDK to the right version, install the proper database JARs, open /opt/tomcat/conf/context.xml with the JDBC connection pool
  3. SSH to abcdb01dev.company.com, create a user and rights so the app server can talk to the database
  4. Write something simple to test everything out
  5. Debug the environment to make sure everything works

Part II: THEN start coding…

  1. SSH to abcweb01dev.company.com, head over to /var/www/html and work on my HTML login page for starters, other things down the road
  2. SSH to abcapp01dev.company.com, head over to /opt/tomcat/webapps/jpdwebapp/servlet, and code up my Java servlet to process my logins
  3. Open another window, login to abcweb01dev and tail -f /var/www/access_log to see new connections being made to the web server
  4. Open another window, login to abcapp01dev and tail -f /opt/tomcat/logs/catalina.out to see debug output from my servlet
  5. Open another window, login to abcapp01dev and just keep /opt/tomcat/conf/context.xml open
  6. Open another window, login to abcapp01dev and run /opt/tomcat/bin/shutdown.sh; sleep 5; /opt/tomcat/bin/startup.sh (every time I make a change to the servlet)

(Host names and directory names have been changed to protect the innocent)

Setting up the environment was a little frustrating. And I knew that there was more to the story; some basic work, call it Part 0, to get some equipment in the datacenter, the OS installed, and IP addresses assigned. Part I, setting up the environment, is the work you would do to set up a PaaS platform. As a developer, the work in Part I was to enable me and my department to do the job in Part II – and we had a job to do – to get information to the guys in the plants who were actually manufacturing product!

So, here’s a rundown:

Part 0: servers, operating systems, patches, IPs… IaaS

Part I: middleware, configuration, basic testing… PaaS

Part II: application development

So, to me, PaaS is all about using the bits and pieces provided by IaaS, configuring them in a usable platform, delivering that platform to a developer so that they can deliver software to the business. And, hopefully the business is better off because of our software. In this case, our software helped the assembly plant identify and reduce “in-system damage” to vehicles – damage to vehicles that happens as a result of the manufacturing process.
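
One way to read that rundown is as three function calls, where PaaS means the developer only ever has to write the last one. The sketch below is purely illustrative (it reuses the disguised host names from the story); each step stands in for real provisioning, middleware configuration, and deployment work.

```python
# Toy sketch mapping the Part 0 / Part I / Part II split onto functions.
# All names are illustrative only.
def part_0_iaas() -> dict:
    """Part 0: servers, OS, patches, IPs -- what IaaS hands you."""
    return {"web": "abcweb01dev", "app": "abcapp01dev", "db": "abcdb01dev"}

def part_1_paas(hosts: dict) -> dict:
    """Part I: wire web -> app -> db into a platform a developer can use."""
    return {
        "ajp_backend": f"{hosts['app']}.company.com",
        "jdbc_url": f"jdbc:db://{hosts['db']}.company.com/appdb",
    }

def part_2_app(platform: dict) -> None:
    """Part II: the only layer the intern should have had to touch."""
    print(f"Deploying servlet against {platform['jdbc_url']}")

part_2_app(part_1_paas(part_0_iaas()))
```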

Is the PaaS market as we know it dead?

I’ve read articles predicting the demise of PaaS altogether and others just asking the question about its future. There was a recent Networkworld article entitled “Is the PaaS market as we know it dying?” that discussed the subject. The article makes three main points, referring to 451 Research, Gartner, and other sources.

  1. PaaS features are being swallowed up by IaaS providers
  2. The PaaS market has settled down while the IaaS and SaaS markets have exploded
  3. Pure-play PaaS providers may be squeezed from the market by IaaS and SaaS


I agree with point #1. The evidence is in Amazon Web Services features like Auto Scaling, RDS, SQS, etc. These are fantastic features, but interfacing to them locks developers into using AWS as their single IaaS provider. The IaaS market is still very active, and I think there is a lot to come even though AWS is ahead of other providers at this point. IaaS is a commodity, and embedding specialized (read: PaaS) features in an otherwise IaaS system is a tool to get customers to stick around.

I disagree with point #2. The PaaS market has not settled down – it hasn’t even started yet! The spotlight has been on IaaS and SaaS because these things are relatively simple to understand, considering the recent boom in server virtualization. SaaS also used to be known as something that was provided by ASPs (Application Service Providers), so many people are already familiar with this. I think PaaS and the concepts are still finding their place.

I also disagree with point #3: the time and opportunity for pure-play PaaS providers is now. IaaS is becoming sorted out, and it is clearly a commodity item. As we highlighted earlier, solutions from PaaS providers can ride on top of IaaS. I think that PaaS will be the key to application portability amongst different IaaS providers – kind of like Java: write once, run on any JVM (kind of). As you might know, portability is one of NIST’s key characteristics of cloud computing.

Portability is key. I think PaaS will remain its own concept apart from IaaS and SaaS, and that we’ll see some emergence of PaaS in 2014. Why? PaaS is the key to portable applications — once an application is written to a PaaS platform, it can be deployed on different IaaS platforms. It’s also important to note that AWS is almost always associated with IaaS, but they have started to look a lot like a PaaS provider (I touched on this in a blog earlier this month). An application written to use AWS features like Auto Scaling is great, but not very portable. Lastly, the PaaS market is ripe for innovation. Barriers to entry are low, as is required startup capital (there is no need to build a datacenter to build a useful PaaS platform).
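
To show what “written to a platform, not a provider” looks like, here is a minimal sketch. The Platform interface and the two adapters are hypothetical; the point is that the application depends only on the abstraction, so moving providers means swapping an adapter rather than rewriting the app.

```python
# Minimal sketch of the portability idea: the app codes against a small
# platform interface, and per-provider adapters keep proprietary bits
# out of the app itself. All names here are hypothetical.
from abc import ABC, abstractmethod

class Platform(ABC):
    """What the app is allowed to depend on: nothing provider-specific."""
    @abstractmethod
    def put_object(self, key: str, data: bytes) -> None: ...
    @abstractmethod
    def enqueue(self, message: str) -> None: ...

class ProviderAAdapter(Platform):
    def put_object(self, key, data): print(f"provider A: stored {key}")
    def enqueue(self, message): print(f"provider A: queued {message}")

class ProviderBAdapter(Platform):
    def put_object(self, key, data): print(f"provider B: stored {key}")
    def enqueue(self, message): print(f"provider B: queued {message}")

def application(platform: Platform) -> None:
    # The app never imports a provider SDK, so redeploying elsewhere
    # means swapping the adapter, not rewriting the code.
    platform.put_object("report.csv", b"data")
    platform.enqueue("report-ready")

application(ProviderAAdapter())  # today
application(ProviderBAdapter())  # after a price-driven move
```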

This is just my opinion on PaaS — I think the next few years will see a growing interest in PaaS, possibly even over IaaS. I’m interested in hearing what you think about PaaS. Feel free to leave me a comment here, find me on Twitter at @dixonjp90, or reach out to us at socialmedia@greenpages.com.

To hear more from John, download his whitepaper on hybrid cloud computing or his ebook on the evolution of the corporate IT department!

5 Cloud Predictions for 2014

By John Dixon, LogicsOne

Here are my 5 Cloud Predictions for 2014. As always, leave a comment below and let me know what you think!

1. IaaS prices will drop by at least 20%

Amazon has continued to reduce its pricing since it first launched its cloud services back in 2006. In February of last year, Amazon dropped its prices for the 25th time. By April, prices had dropped for the 30th time, and by the summer it was up to 37 times. Furthermore, there was a 37% drop in hourly costs for dedicated on-demand instances. Microsoft has announced that it will follow AWS’s lead with regard to price cuts. I expect this trend to continue in 2014 and likely 2015. I highlight some of these price changes, and the impact they will have on the market as more organizations embrace the public cloud, in more detail in my eBook.

2. We’ll see signs of the shift to PaaS

Amazon is already starting to look more like a PaaS provider than an IaaS provider. Just consider pre-packaged, pre-engineered features like Auto Scaling, CloudWatch, SQS, and RDS, among other services. An application hosted with AWS that uses all of these features looks more like an AWS application and less like a cloud application. Using proprietary features is very convenient, but don’t forget how application portability is impacted. I expect continued innovation in the PaaS market with new providers and technology, while downward price pressures in the IaaS market remain high. Could AWS (focusing on PaaS innovation) one day source its underlying infrastructure from a pure IaaS provider? This is my prediction for the long term — large telecoms like AT&T, Verizon, BT, et al. will eventually own the IaaS market, while Amazon, Google, and Microsoft focus on PaaS innovation using infrastructure provided by those telecoms. This of course leaves room for startup, niche PaaS providers to build something innovative and leverage quality infrastructure delivered by the telecoms. This is already happening with smaller PaaS providers. Look for signs of this continuing in 2014.

3. “The cloud” will not be regulated

Recently, there have been rumblings about regulating “the cloud,” especially in Europe, and claims that European clouds are safer than American clouds. If we stick with the concept that cloud computing is just another way of running IT (I call it the supply chain for IT service delivery), then the same old data classification and security rules apply. Only now, if you use cloud computing concepts, the need to classify and secure your data appropriately becomes more important. An attempt to regulate cloud computing would certainly have far-reaching economic impacts. This is one to watch, but I don’t expect any legislative action here in 2014.

4. More organizations will look to cloud as enabling DevOps

It’s relatively easy for developers to head out to the cloud, procure needed infrastructure, and get to work quickly. When developers behave like this, they not only write code and test new products, but they become the administrators of the platforms they own (all the way from underlying code to patching the OS) — development and operations come together. This becomes a bit stickier as things move to production, but the same concept can work (see prediction #5).

5. More organizations will be increasingly interested in governance as they build a DevOps culture

As developers can quickly bypass traditional procurement processes and controls, new governance concepts will be needed. Notice how I wrote “concepts” and not “controls.” Part of the new role of the IT department is to stay a step ahead of these movements, and offer developers new ways to govern their own platforms. For example, a real time chart showing used vs. budgeted resources will influence a department’s behavior much more effectively than a cold process that ends with “You’re over budget, you need to get approval from an SVP (expected wait time: 2-8 weeks).”

DevOps CIO Dashboard

 Service Owner Dashboard

The numbers pictured are fictitious. With the concept of Service Owners, the owner of collaboration services can get a view of the applications and systems that provide the service. The owner can then see how VoIP spending is a little above the others, and drill down to see where resources are being spent (on people, processes, or technology). Different ITBM applications display these charts differently, but the premise is the same – real-time visibility into spend. With cloud usage in general gaining steam, it is now possible to adjust the resources allocated to these services. With this type of information available to developers, it is possible to take proactive steps to avoid compromising the budget allocated to a particular application or service. By the same token, this information will also expose opportunities to make informed investments in certain areas.
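
As a rough sketch of that used-versus-budgeted idea, the snippet below rolls spend up by service and flags owners who are approaching their allocation. The services and figures are as fictitious as the dashboard numbers above.

```python
# Illustrative "used vs. budgeted" rollup: flag services nearing their
# allocation so the owner can act before a cold approval process kicks in.
BUDGETS = {"VoIP": 10000, "Email": 8000, "Chat": 3000}  # monthly, USD (fictitious)
ACTUALS = {"VoIP": 9100, "Email": 4200, "Chat": 1100}

def service_owner_view(budgets: dict, actuals: dict, warn_at: float = 0.85) -> None:
    for service, budget in budgets.items():
        spent = actuals.get(service, 0)
        used = spent / budget
        flag = "WARN" if used >= warn_at else "ok"
        print(f"{service:>6}: ${spent:>6} of ${budget:>6} ({used:>4.0%}) [{flag}]")

service_owner_view(BUDGETS, ACTUALS)
```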

So there you have it, my 2014 cloud predictions. What other predictions do you have?

To hear more from John, download his eBook “The Evolution of Your Corporate IT Department” or his whitepaper “Cloud Management, Now!”

The 2013 Tech Industry – A Year in Review

By Chris Ward, CTO, LogicsOne

As 2013 comes to a close and we begin to look forward to what 2014 will bring, I wanted to take a few minutes to reflect back on the past year. We’ve been talking a lot about that evil word ‘cloud’ for the past 3 to 4 years, but this year put a couple of other terms up in lights, including Software Defined X (Datacenter, Networking, Storage, etc.) and Big Data. Like ‘cloud,’ these two newer terms can easily mean different things to different people, but put in simple terms, in my opinion, there are some generic definitions which apply in almost all cases. Software Defined X is essentially the concept of taking any ties to specific vendor hardware out of the equation and providing a central point for configuration, again vendor agnostic, except of course for the vendor providing the Software Defined solution :). I define Big Data simply as the ability to find a very specific and small needle of data in an incredibly large haystack within a reasonably short amount of time. I see both of these technologies becoming more widely adopted in short order, with Big Data technologies already well on the way.

As for our friend ‘the cloud,’ 2013 did see a good amount of growth in consumption of cloud services, specifically in the areas of Software as a Service (SaaS) and Infrastructure as a Service (IaaS). IT has adopted a ‘virtualization first’ strategy over the past 3 to 4 years when it comes to bringing any new workloads into the datacenter. I anticipate we’ll begin to see a ‘SaaS first’ approach being adopted in short order, if it is not out there already. However, I can’t necessarily say the same on the IaaS side so far as ‘IaaS first’ goes. While IaaS is a great solution for elastic computing, I still see most usage confined to application development or super-large scale-out application (e.g., Netflix) use cases. The mass adoption of IaaS for simply forklifting existing workloads out of the private datacenter and into the public cloud simply hasn’t happened. Why? My opinion is that, for traditional applications, neither the cost nor the operational model makes sense, yet.

In relation to ‘cloud,’ I did see a lot of adoption of advanced automation, orchestration, and management tools, and thus an uptick in ‘private clouds.’ There are some fantastic tools now available, both commercial and open source, and I absolutely expect this adoption trend to continue, especially in the Enterprise space. Datacenters, which have a vast amount of change occurring whether in production or test/dev, can greatly benefit from these solutions. However, this comes with a word of caution – just because you can doesn’t mean you should. I say this because I have seen several instances where customers have wanted to automate literally everything in their environments. While that may sound good on the surface, I don’t believe it’s always the right thing to do. There are still times when a human touch remains the best way to go.

As always, there were some big time announcements from major players in the industry. Here are some posts we did with news and updates summaries from VMworld, VMware Partner Exchange, EMC World, Cisco Live and Citrix Synergy. Here’s an additional video from September where Lou Rossi, our VP of Technical Services, explains some new Cisco product announcements. We also hosted a webinar (which you can download here) about VMware’s Horizon Suite, as well as a webinar on our own Cloud Management as a Service offering.

The past few years have seen various predictions about the unsustainability of Moore’s Law, which states that processors will double in computing power every 18-24 months, and 2013 was no exception. The latest prediction is that by 2020 we’ll reach the 7nm mark and Moore’s Law will no longer hold. The interesting part is that this prediction is not based on technical limitations but rather economic ones, in that getting below that 7nm mark will be extremely expensive from a manufacturing perspective and, hey, 640K of RAM is all anyone will ever need, right? :)

Probably the biggest news of 2013 was the revelation that the National Security Agency (NSA) had undertaken a massive program and seemed to be capturing every packet of data coming in or out of the US across the Internet. I won’t get into any political discussion here, but suffice it to say this is probably the largest example of ‘big data’ that exists currently. This also has large potential ramifications for public cloud adoption, as security and data integrity have been two of the major roadblocks to adoption, so it certainly doesn’t help that customers may now be concerned about the NSA eavesdropping on everything going on within public datacenters. It is estimated that public cloud providers may lose as much as $22-35B over the next 3 years as a result of customers slowing adoption due to this. The only good news, at least for now, is that it’s very doubtful the NSA or anyone else on the planet has the means to actually mine anywhere close to 100% of the data they are capturing. However, like anything else, it’s probably only a matter of time.

What do you think the biggest news/advancements of 2013 were?  I would be interested in your thoughts as well.

Register for our upcoming webinar on December 19th to learn how you can free up your IT team to be working on more strategic projects (while cutting costs!).