All posts by GreenPages Blog

8 Things You May Not Know About Docker

It’s possible that containers and container management tools like Docker will be the single most important thing to happen to the data center since the mainstream adoption of hardware virtualization in the 90s. In the past 12 months, the technology has matured beyond powering large-scale startups like Twitter and Airbnb and found its way into the data centers of major banks, retailers and even NASA. When I first heard about Docker a couple of years ago, I started off as a skeptic. I blew it off as skillful marketing hype around an old concept of Linux containers. But after incorporating it successfully into several projects at Spantree, I am now a convert. It has saved my team an enormous amount of time, money and headaches and has become the underpinning of our technical stack.

If you’re anything like me, you’re often time-crunched and may not have a chance to check out every shiny new toy that blows up on GitHub overnight. So this article is an attempt to quickly impart 8 nuggets of wisdom that will help you understand what Docker is and why it’s useful.

 

Docker is a container management tool.

Docker is an engine designed to help you build, ship and execute application stacks and services as lightweight, portable and isolated containers. The Docker engine sits directly on top of the host operating system. Its containers share the kernel and hardware of the host machine with roughly the same overhead as processes launched directly on the host.

But Docker itself isn’t a container system; it merely piggybacks off the container facilities already baked into the OS, such as LXC on Linux. Those facilities have existed for many years, but Docker provides a much friendlier image management and deployment system on top of them.

 

Docker is not a hardware virtualization engine.

When Docker was first released, many people compared it to virtualization hypervisors like VMware, KVM and VirtualBox. While Docker solves a lot of the same problems and shares many of the same advantages as hypervisors, it takes a very different approach. Virtual machines emulate hardware. In other words, when you launch a VM and run a program that hits disk, it’s generally talking to a “virtual” disk. When you run a CPU-intensive task, those CPU commands need to be translated into something the host CPU understands. All these abstractions come at a cost: two disk layers, two network layers, two processor schedulers, even two whole operating systems that need to be loaded into memory. These limitations typically mean you can only run a few virtual machines on a given piece of hardware before you start to see an unpleasant amount of overhead and churn. On the other hand, you can theoretically run hundreds of Docker containers on the same host machine without issue.

All that being said, containers aren’t a wholesale replacement for virtual machines. Virtual machines provide a tremendous amount of flexibility in areas where containers generally can’t. For example, if you want to run a Linux guest operating system on top of a Windows host, that’s where virtual machines shine.

 

Docker uses a layered file system.

As mentioned earlier, one of the key design goals for Docker is to provide image management on top of existing container technology. In Docker terms, an image is a static, immutable snapshot of a container’s file system. But Docker rather cleverly takes this snapshotting concept a step further by incorporating a copy-on-write filesystem into its design. I’ve found the best way to explain this is by example:

Let’s say you want to build a Docker image to run your Java web application. You may start with one of the official Docker base images that have Java 8 pre-installed. In your Dockerfile (a text file which tells Docker how to build your image) you’d specify that you’re extending the Java 8 image, which instructs Docker to pull down the pre-built snapshot associated with this image. Now, let’s say you execute a command that downloads, extracts and configures Apache Tomcat into /opt/tomcat. This command will not affect the state of the original Java 8 image. Instead, it will start writing to a brand new filesystem layer. When a container boots up, it will merge these file systems together. It may load /usr/bin/java from one layer and /opt/tomcat/bin from another. In fact, every step in a Dockerfile produces a new filesystem layer, even if only one file is changed. If you’re familiar with the Git version control system, this is similar to a commit tree. But with Docker, it provides users with tremendous flexibility to compose application stacks iteratively.
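To make that concrete, here is a minimal sketch of what such a Dockerfile might look like. The base image tag, Tomcat version, download URL and paths are illustrative only, and the sketch assumes curl is available in the base image.

```dockerfile
# Illustrative Dockerfile: each instruction below produces its own read-only layer.

# Start from a pre-built image with Java 8 installed.
FROM java:8

# Download, extract and configure Tomcat into /opt/tomcat.
# This writes to a brand-new layer; the Java 8 layers beneath it are untouched.
RUN curl -fsSL https://archive.apache.org/dist/tomcat/tomcat-8/v8.0.21/bin/apache-tomcat-8.0.21.tar.gz \
      | tar -xz -C /opt \
 && mv /opt/apache-tomcat-8.0.21 /opt/tomcat

EXPOSE 8080
CMD ["/opt/tomcat/bin/catalina.sh", "run"]
```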

At Spantree, we have a base image with Tomcat pre-installed, and on each application release we merely copy the latest deployable asset into a new image, tagging the Docker image to match the release version. Since the only variation between these images is the very last layer, a 90MB WAR file in our case, each image is able to share the same ancestors on disk. This means we can keep our old images around and roll back on demand with very little added cost. Furthermore, when we launch several instances of these applications side by side, they share the same read-only filesystems.
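A hypothetical release Dockerfile in that style might be as short as the following; the base image name, build path and WAR filename are invented for the example.

```dockerfile
# Between releases only the final COPY layer changes, so every tagged
# release image shares the Java and Tomcat layers already on disk.
FROM spantree/tomcat-base:latest

# The ~90MB deployable asset becomes the single new layer for this release.
COPY target/myapp.war /opt/tomcat/webapps/ROOT.war
```

Building and tagging it per release would then look something like `docker build -t myapp:1.4.2 .`, with the tag matching the release version.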

 

Docker can save you time.

Many years ago, I was working on a project for a major restaurant chain, and on the first day I was handed a 12-page Word document describing how to get my development environment set up to develop against all the various applications. I had to install a local Oracle database, a specific version of the Java runtime, and a number of other system and library dependencies and tooling. The whole setup process cost each member of my team approximately a day of productivity, which unfortunately translated to thousands of dollars in sunk costs for our client. Our client was used to this and considered it part of the cost of doing business when onboarding new team members, but as consultants we would have much rather spent that time building useful features that add value to our client’s business.

Had Docker existed at the time, we could have cut this process from a day to mere minutes. With Docker, you can express servers and services as code, similar to configuration management tools like Puppet, Chef, Salt and Ansible. But, unlike those tools, Docker goes a step further by actually pre-executing these steps for you during its build process, snapshotting the output as an indexed, shareable disk image. Need to compile Node.js from source? No problem. The Docker runtime will do that on build and simply snapshot the output for you at the end. Furthermore, because Docker containers sit directly on top of the Linux kernel, there’s no risk of environmental variations getting in the way.

Nowadays, when we bring a new team member onto a client project, they merely have to run `docker-compose up`, grab a cup of coffee, and by the time they’re back they should have everything they need to start working.
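As an illustration of what that one command stands up, here is a hypothetical docker-compose.yml in the Compose format of the time; the service names, ports and image tag are invented for the example.

```yaml
# Hypothetical docker-compose.yml: `docker-compose up` builds the app image
# and starts it alongside its backing database in one step.
web:
  build: .                # build from the project's Dockerfile
  ports:
    - "8080:8080"
  links:
    - db
db:
  image: postgres:9.4     # ready-made image pulled from Docker Hub
  environment:
    POSTGRES_PASSWORD: example
```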

 

Docker can save you money.

Of course, time is money, but Docker can also save you hard, physical dollars in infrastructure costs. Studies by Gartner and McKinsey put average data center utilization at between 6% and 12%. Quite a lot of that underutilized capacity is due to static partitioning. With physical machines or even hypervisors, you need to defensively provision CPU, disk and memory based on the high watermark of possible usage. Containers, on the other hand, allow you to share unused memory and disk between instances. This allows you to pack many more services onto the same hardware, spinning them down when they’re not needed without worrying about the cost of bringing them back up again. If it’s 3am and no one is hitting your Dockerized intranet application but you need a little extra horsepower for your Dockerized nightly batch job, you can simply swap some resources between the two applications running on common infrastructure.

 

Docker has a robust ecosystem of existing images.

At the time of writing, there are over 14,000 public Docker images available on the web. Most of these images are shared through Docker Hub. Similar to how GitHub has largely become the home of most major open-source projects, Docker Hub is the de facto resource for sharing and working with public Docker images. These images can serve as building blocks for your application or database services. Want to test-drive the latest version of that hot new graph database you’ve been hearing about? Someone’s probably already gone to the trouble of Dockerizing it. Need to build and host a simple Rails application with a special version of Ruby? It’s now at your fingertips in a single command.
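For instance, assuming the image you want has a public repository on Docker Hub (the image name and port below are illustrative, so check the image’s page for specifics), trying it out really is a one-liner:

```sh
# Find candidate images on Docker Hub...
docker search neo4j

# ...then pull and start one locally in the background, publishing its web port.
docker run -d -p 7474:7474 neo4j
```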

 

Docker helps you avoid production bugs.

At Spantree, we’re big fans of “immutable infrastructure.” That is to say, if at all possible, we avoid doing upgrades or making changes on live servers. Instead, we build out new servers from scratch, applying the new application code directly to a pristine image and rolling the new release servers into the load balancer when they’re ready, retiring the old server instances after all our health checks pass. This gives us the ability to cleanly roll back if something goes wrong. It also gives us the ability to promote the same master images from dev to QA to production with no risk of configuration drift. By extending this approach all the way to the developer machine with Docker, we can also avoid the “it works on my machine” problem, because each developer is able to test their build locally in an environment that closely parallels production.

 

Docker only works on Linux (for now).

The technologies powering Docker are not necessarily new, but many of them, like LXC and cgroups, are specific to the Linux kernel. This means that, at the time of writing, Docker is only capable of hosting applications and services that can run on Linux.  That is likely to change in the coming years as Microsoft has recently announced plans for first-class container support in the next version of Windows Server. Microsoft has been working closely with Docker to achieve this goal. In the meantime, tools like boot2docker and Docker Machine make it possible to run and proxy Docker commands to a lightweight Linux VM on Mac and Windows environments.
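As a rough sketch of that workflow with Docker Machine (the machine name and the VirtualBox driver are just example choices):

```sh
# Provision a lightweight Linux VM with VirtualBox and point the local
# Docker client at the daemon running inside it.
docker-machine create --driver virtualbox dev
eval "$(docker-machine env dev)"

# Subsequent docker commands are proxied to the daemon in the VM.
docker run hello-world
```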

Have you used Docker? What has your experience been like? If you’re interested in learning more about how Spantree and GreenPages can help with your application development initiatives, please reach out!

 

By Cedric Hurst, Principal, Spantree Technology Group, LLC

Cedric Hurst is Principal at Spantree Technology Group, a boutique software engineering firm based primarily out of Chicago, Illinois, that focuses on delivering scalable, high-quality solutions for the web. Spantree provides clients throughout North America with deep insights, strategies and development around cloud computing, devops and infrastructure automation. Spantree is partnered with GreenPages to provide high-value application development and devops enablement to their growing enterprise client base. In his spare time, Cedric speaks at technical meetups, makes music, mentors students and hangs out with his daughter. To stay up to date with Spantree, follow them on Twitter @spantreellc.

Tech News Recap for the Week of 4/20/2015

Were you busy last week? Here’s a quick tech news recap of articles you may have missed from the week of 4/20/2015.

A new browser hack can spy on 8 out of 10 PCs. Hybrid cloud, EUC and SDN solutions have boosted VMware’s Q1 earnings. Google may launch a US wireless service by teaming up with T-Mobile and Sprint. Worldwide cloud IT infrastructure market growth is expected to accelerate to 21% this year. CIOs believe that wearable technologies will become part of office life.

Tech News Recap

Windows Server 2003 end of life is almost here. Have you developed a strategy yet? Learn more in this whitepaper.

 

By Ben Stephenson, Emerging Media Specialist.

From Data Collection and Analysis to Business Action

Guest post from Azmi Jafarey. Azmi is an IT leader with over 25 years of experience in IT innovation. He was CIO at Ipswitch, Inc. for the last nine years, responsible for operations, infrastructure, business apps and BI. In 2013, he was named CIO of the Year by Boston Business Journal and Mass High Tech. You can hear more from Azmi on his blog: http://hitechcio.com/

 

Here is a progression that most businesses experience in the data arena.

  • You go from no data or bad data to “better” data.
  • You start having reports regularly show up in your mailbox.
  • The reports go from being just tables to showing trend lines.
  • You evolve to dashboards that bring together data from many sources.
  • You fork into sets of operational and strategic reports and dashboards, KPI-driven, with drill-down.

By this point, you have Operational Data Stores (ODSs), data warehouses, a keen sense of the need for Master Data and for keeping all systems in sync, and an appreciation of defined data dictionaries.  You expect data from all functions to “tie together” with absolute surety – and when it does not, it is usually traced to differing understandings of data sources or data definitions.  But you are there, feeling good about being “data driven”, even as you suspect that the last huge data clean-up effort may already be losing its purity to the expediency of daily operations.  How?  Well, someone just created a duplicate Opportunity in your CRM, rather than bother to look up whether one exists.  Another person changed a contact’s address locally, rather than in the Master.  And so it goes.

Sadly, for most businesses “data-driven” stops at “now you have the numbers” — an end in itself.  At its worst, reporting becomes brochure-ware, a travel guide for the business that is “interesting” and mainly used to confirm one’s suspicions and biases.  Likewise, many “followed” KPIs consume enormous amounts of time and effort to produce a number and paint it green, yellow or red against a target, and then act mainly as trigger points for meetings rather than for measured response.

I have nothing against meetings.  I am just anxious for the business mindset to go beyond “descriptive” and “predictive” analytics to “prescriptive” analytics.  For Sales, we seem to stop at “predictive” – forecasts are the holy grail, a look into the future, couched in probability percentages.  Forecasts are indeed very useful and get reacted to.  It is just that the reaction’s direction and magnitude are usually delinked from any explicit model.  In today’s world, instinct cannot continue to trump analysis.  And analysis is meaningful only in the context of suggesting specific action, tied to business results as expected outcomes.  The data must not merely kick the can down the road – it must tell you exactly how hard and in which direction to kick.  And the result must be measured for the next round to follow.

One of the really interesting things about data modeling, predictive analytics and prescriptive analytics is that all three start from precisely the same data.  After all, that is what you know and have.  The difference is the effort to model, and the feedback loop where measurable action and measured consequence can be used to refine action and hence outcomes.  Part of the problem is that, in today’s business world, the leaders who provide direction on actions tend to be farthest from those who know the data well.  Without personal exploration of relevant data, you revert to an iterative back-and-forth requesting new data formats from others.  The time to search for such “insight” can be dramatically shortened by committing to modeling and measuring results from the get-go.  Bad models can be improved.  But lacking one is to be adrift.

Before you begin to wonder “Is the next step Big Data?  Should we be thinking of getting a Data Scientist?” start with the basics: training on analytics, with a commitment to model.  Then use the model and refine.

If You Could Transform Your IT Strategy, Would You?

As you may know, GreenPages recently launched our Transformation Services Group, a new practice dedicated to providing customers with the agility, flexibility and innovation they need to compete in the modern era of cloud computing.

This move was designed to allow us to help companies think beyond the near term and to cast a vision for the business.  As we look at the market, we see a need to help organizations take a more revolutionary and accelerated approach to embracing what we call “New World” IT architectures.  While this is something we have been helping companies with for many years, we now believe this is a logical evolution of our business that builds on our legacy of high quality and competency in deploying advanced virtualization and cloud solutions.

When we think about some of the great work we have done over the years, many examples come to mind.  One of these is a complex project we completed for The Channel Company that helped them truly transform their business. Coming off a management buyout from its parent company, UBM, The Channel Company was tasked with migrating off the parent company’s centralized IT infrastructure under a very tight timeline.

Faced with this situation, the company was presented with a very compelling question: “If you had the opportunity to start from scratch and transform your IT department, resources and strategy, what would you do?”

Essentially, as a result of their situation, The Channel Company had the opportunity to leapfrog traditional approaches.  They had the opportunity to become more agile, and more responsive. And, more importantly, they took it!

As opposed to simply moving to a traditional baseline on-prem solution, The Channel Company saw this as an opportunity to fundamentally rethink its entire IT strategy and chose GreenPages to help lead them through the process.

Through a systematic approach and advanced methodology, we were able to help The Channel Company achieve its aggressive objectives.  Specifically, in less than six months, we led a transformation project that entailed the installation of new applications, and a migration of the company’s entire infrastructure to the cloud.  This included moving six independent office locations from a shared infrastructure to a brand-new cloud platform supporting their employees, as well as new cloud-based office and ERP applications.

In addition to achieving the independence and technical autonomy The Channel Company needed, the savings benefits and operational efficiencies achieved were truly transformational from a business standpoint.

It is these types of success stories that drove us to formalize our Transformation Services Group. We have seen first-hand the benefits that organizations can achieve by transforming inflexible siloed IT environments into agile organizations, and we’re proud to be able to offer the expertise and end-to-end capabilities required to help customers achieve true business transformation.

In our view, the need for business agility and innovation has never been greater. The question is no longer “is transformation necessary?” but rather “if you had the opportunity to start from scratch and achieve business transformation, would you take it?”

If you’re interested in hearing more about how GreenPages has helped companies like The Channel Company transform their IT operations, please reach out.

 

By Ron Dupler, CEO

Launching GreenPages’ Transformation Services Group

I am excited to announce that today, with the launch of GreenPages’ Transformation Services Group, GreenPages took a major step in the continuing evolution of our company. This evolution was done for one reason, and one reason only – to meet the needs of our customers as they strive to compete in today’s rapidly changing business and technology environment.

GreenPages’ new Transformation Services Group is a practice dedicated to providing customers with the agility, flexibility and innovation they need to compete in the modern era of cloud computing.  We see the establishment of this focused practice area as a way to help clients take a revolutionary, accelerated approach to standing up New World, Modern IT architectures and service delivery models that enable business agility and innovation.

Disrupt or be Disrupted

With each day’s latest business headlines we learn of new ‘upstart’ companies that are finding new ways to compete in what was once a mature market.  You know the names – it’s Uber, it’s Airbnb.  These companies have found a way to leverage advanced technologies as a strategic weapon and were able to completely turn existing industries on their heads without even owning cabs or hotels (respectively).

How’d they do it?  They were agile enough from a business standpoint to understand the disruptive force that technology can play, and they were fortunate enough not to be encumbered by existing infrastructure, policies and procedures.  While these companies clearly were smart and innovative, they were also fortunate — they had a blank slate and could start from scratch with an offensive game plan capable of delivering value to customers in new ways.

These market disrupters share the benefit of not being encumbered by legacy technologies, platforms and processes, and as a result are out-performing and out-executing their larger competitors.  These companies were born to be agile organizations capable of “turning on a dime” when their competitors could not.

To compete effectively in today’s environment, every company needs to find a way to become more agile.  Business leaders have the choice, play defense and respond to disruption, or play offense and become the disruptor.  The need for business agility has never been greater.  To support this needed agility and innovation, enterprises need nimble, agile IT platforms, as legacy platforms cannot meet this need.

If it were just about technology, modernizing IT would be a more straightforward situation, but it’s about more than that. This is more than a technology problem. This is a people and process problem. It’s about command, control and compliance… Needless to say, “high velocity change” is no walk in the park.

Fortunately, helping companies achieve transformational change is something we have been doing for many years and is an area where we have deep domain expertise.  Throughout our history as a company, we have become adept at guiding companies through IT and business transformation.  What we are doing today is formalizing this expertise—which has been forged working with our customers in the trenches and in the boardrooms—into a unique Transformation Services practice.  Transformation Services represents the next logical evolution of GreenPages and builds on our legacy of high quality and competency in deploying advanced virtualization and cloud solutions.

Our Approach

We have always believed that while many companies face similar challenges, no two scenarios are identical. Through our more than 20 years of experience we have established a methodology that we use in each engagement, regardless of the challenge, that allows us to identify the best solution for each customer, drive organizational and technical change, and create positive outcomes for the business.

We hope that you share our excitement about this unique moment in the IT industry and our continued evolution as a company.   We all know that technology can produce tangible benefits, but sometimes the road to deployment can be daunting.  Transformation Services was founded to ensure our customers are able to successfully navigate that road with agility and velocity.

If you’re interested in learning more about our new Transformation Services Group, please reach out!

 

By Ron Dupler, CEO

Tech News Recap for the Week of 4/6/2015

Were you busy last week? Here’s a quick tech news recap of articles you may have missed from the week of 4/6/2015.

Microsoft celebrated its 40th anniversary. A data security lapse at Auburn University left personal information belonging to roughly 370,000 current, former and prospective students accessible online for months. Virginia became the first state to enact a digital identity law. The Apple Watch sold out within two hours of debuting at retail stores and online, pushing delivery times back until mid-June for many customers. Los Angeles announced that it will implement cloud-controlled street lighting on all 7,500 miles of roads within city limits.

Tech News Recap

How has the corporate IT Department evolved? Has your department kept pace?

 

By Ben Stephenson, Emerging Media Specialist

Tech News Recap for the Week of 3/30/2015

Were you busy last week? Here’s a quick tech news recap of articles you may have missed from the week of 3/30/2015.

Google has banned China’s website certification authority after a security breach. A study revealed that almost half of smartphone users in the U.S. said they cannot live without their phones. Obama authorized sanctions against hackers. Cisco will purchase SDN startup Embrane.

Tech News Recap

If you’re looking to stay up-to-date on the top industry news throughout the week, follow @GreenPagesIT on Twitter!

By Ben Stephenson, Emerging Media Specialist

 

CIO Focus Interview: Isaac Sacolick, Greenwich Associates

For this CIO Focus Interview, I had the pleasure of interviewing Isaac Sacolick. Isaac is the Global CIO and a Managing Director at Greenwich Associates and is recognized as an industry-leading, innovative CIO. In 2013, he received TechTarget’s CIO award for Technology Advancement. For the past two years, he’s been on the Huffington Post’s Top 100 Most Social CIOs list. I would highly recommend reading his blog, Social, Agile and Transformation, and also following him on Twitter (@nyike).

Ben: Could you give me some background on your career?

Isaac: My career began in start-ups, and I have never lost that start-up DNA. My past few jobs have been taking the way start-ups work and applying that mentality and framework to traditional businesses that need to transform.

Ben: Could you give me some background on your company and your current role within the company?

Isaac: Greenwich is a provider of global market intelligence and advisory services to the financial services industry. I’m the CIO and am leading our Business Transformation Initiative. I’ve been focused on a couple of key areas in my role. These include creating agile practices and a core competency in software development, as well as building and standardizing our Business Intelligence platforms.

Ben: You recently started at Greenwich. As a CIO in this day and age, what are some of the challenges of starting a new role?

Isaac: When starting a new role, you’re constantly switching hats. You need your learning hat to be able to digest things that you know very little about. You need your listening hat to hear where a pain point or opportunity is so you can understand and apply your forces in the right places. It’s important to look for some quick wins while taking baby steps towards implementing changes and transformations you think are necessary. It’s like a clown picture with 7 or 8 different wheels spinning at the same time. I had to learn how our business operated and to work with the IT team to transition from one way of operating to another way of operating. An important piece is to learn the cultural dynamics of the company. That’s been my first three months here.

Ben: What projects have you been able to work on with all the chaos?

Isaac: I’ve instrumented some tangible results while getting situated. We now have an agile practice. It was one of those things that had been talked about in the past, but now we have four programs running with four different teams, each in different states of maturity. We’ve also changed our approach with our developers. They were operating in support mode and taking requests to address break fix things, etc. Now, we’ve put the brakes on some of the marginal work and have freed some of their time so some of them can be tech leads on agile projects. This has helped us make great progress on building new products. We’re a tech team focused on more strategic initiatives.

I’ve been doing similar work with DevOps, giving them an expanded view of support beyond the service desk and having them look at the areas of our organization that need support around applications. We’re trying to get into the mindset that we can respond to application requests as needed. We’ve gone from a help desk and infrastructure model to one that adds more focus on supporting applications.

Ben: Which areas of IT do you think are having the biggest impact on businesses?

Isaac: I would say self-service BI programs. If you roll the clock back 3-4 years, the tools for data analytics most organizations were using could be split into two camps. You either operated out of do-it-yourself tools like Microsoft Excel and Access, or you deployed an enterprise BI solution. The enterprise BI solution cost a lot of money and required extensive training. Over the last 3 years, there has been an emergence of tools that fit in that middle ground. Users can now do more analytics in a much more effective and productive fashion. The business becomes more self-serving, and this changes the role of the IT department in regards to how to store and interpret data. There is also a lot of governance and documentation involved that needs to be accounted for. These new self-service BI programs have taken a specialized skill set and made it much more democratic and scalable, so that individual departments can look at data to see how they can do their jobs better.

Ben: What’s the area of IT that interests you the most?

Isaac: I would have to say the Internet of Things. The large volumes of data and the integration of the physical world and virtual world are fascinating. The Internet of Things has the capability to really enrich our lives by simplifying things and giving us access to data that used to be difficult to capture in real time. Take wearables, for example. The Apple Watch just came out, and there will be many more things like it. I’m really interested to see the form and functionality wearables take moving forward, as well as who will adopt them.

Ben: What sorts of predictions did you have coming into 2015?

Isaac: I actually wrote a blog post back in January with my 5 predictions for 2015. One was that big data investments may be the big bubble for some CIOs. To avoid overspending and underachieving on big data promises, CIOs are going to have to close the skills gap and champion analytics programs. Another was that Boards are likely to start requesting their CIOs to formally present security risks, options and a roadmap as companies become more active to address information security issues.

 

By Ben Stephenson, Emerging Media Specialist

Tech News Recap for the Week of 3/23/2015

Were you busy last week? Here’s a quick tech news recap of articles you may have missed from the week of 3/23/2015.

Tech News Recap

A new breed of Point of Sale malware has been spotted in the wild by security researchers at Cisco’s Talos Security Intelligence & Research Group. Microsoft apps are coming to Android smartphones and tablets. The White House has named Twitter veteran Jason Goldman as its first Chief Digital Officer. Eric Schmidt says that Google Glass will return. The Human Rights Council at the United Nations has voted to appoint an independent watchdog to monitor privacy rights in the digital age.

In other news, our CEO Ron Dupler is now on Twitter! Follow him @Ron_Dupler

Are you looking for more information around Windows Server 2003 End-of-Life? Read our whitepaper from Microsoft expert & GreenPages blogger David Barter.

By Ben Stephenson, Emerging Media Specialist

Tech News Recap for the Week of 3/16/2015

Were you busy last week? Here’s a quick tech news recap of articles you may have missed from the week of 3/16/2015.

Tech News Recap

China has admitted to the existence of units dedicated to cyber warfare. Microsoft announced Windows 10 will arrive this summer, is pushing itself into the Internet of Things battle, and is rumored to be killing off the Internet Explorer brand. The White House has named its first Director of IT (a former Facebook engineer). The Hillary Clinton email scandal has shed light on shadow IT. US firms are getting caught up in Chinese censorship issues. There were also some good articles on cybercriminals stealing information via data laundering, why CIOs are adopting virtual desktops, and the future of big data in the cloud.

 

Corporate IT departments have progressed from keepers of technology to providers of complex solutions that businesses truly rely on. Learn more in this ebook – The Evolution of Your Corporate IT Department

 

By Ben Stephenson, Emerging Media Specialist