Harnessing the Power of Mobile Apps By @DanaGardner | @CloudExpo [#Cloud]

This series of penetrating discussions on the latest in enterprise mobility explores advancements in applications design and deployment technologies across the full spectrum of edge devices and operating environments.
Our next innovator interview focuses on how Source Refrigeration and HVAC has been extending the productivity of its workforce, much of it in the field, through the use of innovative mobile applications and services.

read more

.@Unitrends Launches Ad Campaign on ‘Cloud Computing Journal’ | @CloudExpo [#Cloud]

SYS-CON Media announced today that Unitrends, cloud-empowered all-in-one continuity solutions that increase your IT confidence, has launched ad campaigns on SYS-CON’s i-Technology sites, which include Cloud Computing Journal, DevOps Journal, Virtualization Journal, and IoT Journal.
SYS-CON Media’s interactive programs, with an average of 47 million page views per month, have proven to be one of the most effective lead-generating tools for its advertising partners.
With 1.2 million qualified IT professionals across SYS-CON’s network of i-Technology sites, your company will have access to a multitude of influential enterprise development managers and decision makers in the marketplace that you’re not currently reaching.
These packages will put you in touch with your best customers and deliver the reach, impact and visibility necessary to stay competitive in today’s market.

read more

Moving Towards IoT: 5G Challenges and Trends | @ThingsExpo [#IoT]

5G is not just faster data rates or higher capacity; it is much more than these obvious characteristics.

For one thing, if IoT is to succeed in the grand vision created for its debut on a universal scale, 5G is a must: more users and more devices interconnected at any instant in time. 5G must address the issues arising from interconnected devices, not just from users. IoT success also requires broader coverage of the serviced region (“Can you hear me now?”) and more efficient networks.

To make this happen with low latency, 5G requires data rates some 1,000 times those provided by 4G. One immediate challenge is the unavailability of spectrum in the 700 MHz to 3 GHz range.

read more

Stackify Launches APM+ | @CloudExpo @Stackify [#Cloud]

Cloud-based application performance monitoring and management provider Stackify announces the launch of their APM+ (Application Performance Management) solution for Microsoft® ASP.NET, a cost-effective solution offering real-time, code-level insights for business-critical applications. This lightweight solution requires minimal server resources while giving developers continuous code-level visibility into application behavior to improve their system’s overall performance.
Stackify’s new APM+ solution was designed to run on production servers, thus allowing developers to capture and fix application performance problems immediately instead of requiring them to spend time reproducing reported errors in order to solve them. The new APM+ moves beyond many traditional APM solutions on the market that are either too expensive or have high resource utilization, causing developers to activate them only after an issue has been reported.

read more

.@ProfitBricksUSA Launches .NET SDK for Cloud | @DevOpsSummit [#DevOps]

ProfitBricks has launched its SDK for .NET, adding to its growing collection of libraries for the DevOps community. This new library exposes all functionality found in the SOAP API and expands ProfitBricks’ support for developers who work with Microsoft technology.
“This .NET library addition provides a powerful programmatic interface to our SOAP API,” commented Achim Weiss, Co-founder and CEO of ProfitBricks. “This release is the latest in a series of new offerings that further extend ProfitBricks’ services into the developer community. We are dedicated to providing the industry with the best in painless cloud infrastructure, and you can expect more functionality for the Microsoft developer community in upcoming months.”

read more

Containers & Cloud & Security – Oh My! By @EFeatherston | @CloudExpo [#Cloud]

Dorothy the CIO was walking the yellow brick road of planning. She was on her way to the Emerald City to ask the great wizard of the agile data center for advice. Along the way she met two other CIOs who joined her on the journey, nicknamed Tin Man and Scarecrow. Their travels brought them to the edge of the dark forest of data center hype and fear.
“Do you think we’ll meet any wild technologies and fears in there?” she asked her companions.
“We might,” responded the Tin Man.
“Ones that devour IT projects?” whispered the Scarecrow.
“Possibly,” said the Tin Man, “but mostly containers, and clouds, and security.”
“Containers, and clouds, and security, oh my!” they all murmured in unison as they entered the dark forest.

read more

A Public Service Announcement for Users of the New MacBook

If you hadn’t already heard, Apple recently announced that Boot Camp on the new MacBook will be restricted to Windows 8. Parallels Desktop® for Mac has no such restriction on the new MacBook or on any other Mac. With Parallels Desktop, you can of course run Windows 8. You can also run Windows 7, Windows XP, the […]

The post A Public Service Announcement for Users of the New MacBook appeared first on Parallels Blog.

All About Android for the Uninitiated

While the madness of March came to an exciting close, the ongoing battle between Apple® and Android™ has yet to declare an official winner. For many, Apple is the way to go, with no questions asked, but we non-conformists stand strong on the dark side. So come join us—we have KitKat, Jelly Bean, Gingerbread, Ice Cream Sandwich, and many […]

The post All About Android for the Uninitiated appeared first on Parallels Blog.

Golgi adds native Arduino, Intel Edison support to IoT platform

Golgi is adding support for more devices to its Internet of Things cloud service


Data transport and Internet of Things cloud service provider Golgi has added support for native Arduino and Intel Edison endpoints in a bid to bolster the IoT platform.

Golgi, which is owned by data transport tech provider Openmind Networks, offers a cloud-based managed connectivity service that helps bridge the gap between different IoT devices and applications across multi-architecture networks.

The service effectively auto-generates native code for each endpoint once the platform is told what kind of data it will be ingesting from them, so that developers don’t have to muck about learning a raft of different technologies in order to link up a broad range of sensors to their applications and services.

“Our support for Arduino and Edison creates a place where IoT developers and makers of embedded devices can meet,” said Brian Kelly, chief technology officer and co-founder of Golgi.

One of the big challenges in the IoT sector at the moment sits where the needs of device manufacturers and IoT app developers conflict.

Device manufacturers seem incentivised to back (build to) as few technology ecosystems as possible given the cost implications, but as we’re still in the heyday of IoT it is clear there is no shortage of IoT tech ecosystems, each with their own take on transport and application language support, jockeying for the top spot. Similarly, developers don’t want to have to learn a raft of technologies just to develop and deploy an IoT service. That’s the challenge Golgi is trying to solve – by abstracting much of the underlying coding work away.

“We’ve been solving operators’ data transport problems for 13 years, and now we’ve extended our infrastructure to solve these problems for IoT developers. Because Golgi translates the various communications languages of device makers, developers don’t have to learn them; they can focus on what they know best. As a result, their product development cycle is shortened by 50 per cent and time to market is speeded up,” Kelly said.

8 Things You May Not Know About Docker

It’s possible that containers and container management tools like Docker will be the single most important thing to happen to the data center since the mainstream adoption of hardware virtualization in the 90s. In the past 12 months, the technology has matured beyond powering large-scale startups like Twitter and Airbnb and found its way into the data centers of major banks, retailers and even NASA. When I first heard about Docker a couple of years ago, I started off as a skeptic. I blew it off as skillful marketing hype around an old concept of Linux containers. But after incorporating it successfully into several projects at Spantree, I am now a convert. It has saved my team an enormous amount of time, money and headaches and has become the underpinning of our technical stack.

If you’re anything like me, you’re often time-crunched and may not have a chance to check out every shiny new toy that blows up on GitHub overnight. So this article is an attempt to quickly impart 8 nuggets of wisdom that will help you understand what Docker is and why it’s useful.


Docker is a container management tool.

Docker is an engine designed to help you build, ship and execute application stacks and services as lightweight, portable and isolated containers. The Docker engine sits directly on top of the host operating system. Its containers share the kernel and hardware of the host machine with roughly the same overhead as processes launched directly on the host.

But Docker itself isn’t a container system; it merely piggybacks on the existing container facilities baked into the OS, such as LXC on Linux. Those facilities have existed in operating systems for many years, but Docker provides a much friendlier image management and deployment system on top of them.
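
To make that concrete, here’s a minimal sketch of the workflow using standard docker CLI commands (the image and command are arbitrary examples):

```sh
# Pull a public base image from the registry
docker pull ubuntu:14.04

# Launch an isolated container; it shares the host's kernel,
# so startup costs roughly the same as spawning a process
docker run --rm ubuntu:14.04 echo "hello from a container"

# List running containers
docker ps
```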


Docker is not a hardware virtualization engine.

When Docker was first released, many people compared it to virtualization hypervisors like VMware, KVM and VirtualBox. While Docker solves a lot of the same problems and shares many of the same advantages as hypervisors, it takes a very different approach. Virtual machines emulate hardware. In other words, when you launch a VM and run a program that hits disk, it’s generally talking to a “virtual” disk. When you run a CPU-intensive task, those CPU commands need to be translated to something the host CPU understands. All these abstractions come at a cost: two disk layers, two network layers, two processor schedulers, even two whole operating systems that need to be loaded into memory. These limitations typically mean you can only run a few virtual machines on a given piece of hardware before you start to see an unpleasant amount of overhead and churn. On the other hand, you can theoretically run hundreds of Docker containers on the same host machine without issue.

All that being said, containers aren’t a wholesale replacement for virtual machines. Virtual machines provide a tremendous amount of flexibility in areas where containers generally can’t. For example, if you want to run a Linux guest operating system on top of a Windows host, that’s where virtual machines shine.


Docker uses a layered file system.

As mentioned earlier, one of the key design goals for Docker is to provide image management on top of existing container technology. In Docker terms, an image is a static, immutable snapshot of a container’s file system. But Docker rather cleverly takes this snapshotting concept a step further by incorporating a copy-on-write filesystem into its design. I’ve found the best way to explain this is by example:

Let’s say you want to build a Docker image to run your Java web application. You may start with one of the official Docker base images that have Java 8 pre-installed. In your Dockerfile (a text file which tells Docker how to build your image) you’d specify that you’re extending the Java 8 image, which instructs Docker to pull down the pre-built snapshot associated with that image. Now, let’s say you execute a command that downloads, extracts and configures Apache Tomcat into /opt/tomcat. This command will not affect the state of the original Java 8 image. Instead, it will start writing to a brand new filesystem layer. When a container boots up, it merges these layers together: it may load /usr/bin/java from one layer and /opt/tomcat/bin from another. In fact, every step in a Dockerfile produces a new filesystem layer, even if only one file is changed. If you’re familiar with the Git version control system, this is similar to a commit tree. This design gives users tremendous flexibility to compose application stacks iteratively.
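
A minimal sketch of the Dockerfile just described (the base image tag, Tomcat version and URL are illustrative, and it assumes curl is available in the base image):

```dockerfile
# Extend an official Java 8 base image; Docker pulls its pre-built snapshot
FROM java:8

# This single step writes a brand new filesystem layer under /opt/tomcat,
# leaving the Java 8 layers beneath it untouched
RUN curl -fsSL http://archive.apache.org/dist/tomcat/tomcat-8/v8.0.21/bin/apache-tomcat-8.0.21.tar.gz \
      | tar -xz -C /opt \
    && mv /opt/apache-tomcat-8.0.21 /opt/tomcat

# At runtime the layers are merged: /usr/bin/java comes from one layer,
# /opt/tomcat/bin from another
CMD ["/opt/tomcat/bin/catalina.sh", "run"]
```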

At Spantree, we have a base image with Tomcat pre-installed, and on each application release we merely copy the latest deployable asset into a new image, tagging the Docker image to match the release version as well. Since the only variation on these images is the very last layer, a 90MB WAR file in our case, each image is able to share the same ancestors on disk. This means we can keep our old images around and roll back on demand with very little added cost. Furthermore, when we launch several instances of these applications side-by-side, they share the same read-only filesystems.
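
With a base image like that in place, a release Dockerfile can be as small as two lines (the image and file names here are hypothetical):

```dockerfile
# Hypothetical internal base image with Tomcat pre-installed
FROM spantree/tomcat-base

# The only new layer: this release's WAR file
COPY myapp-1.4.2.war /opt/tomcat/webapps/ROOT.war
```

Building it with `docker build -t myapp:1.4.2 .` tags the image to match the release, and every older release stays on disk, sharing all of its ancestor layers, ready for an instant rollback.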


Docker can save you time.

Many years ago, I was working on a project for a major restaurant chain, and on the first day I was handed a 12-page Word document describing how to get my development environment set up to develop against all the various applications. I had to install a local Oracle database, a specific version of the Java runtime, along with a number of other system and library dependencies and tooling. The whole setup process cost each member of my team approximately a day of productivity, which unfortunately translated to thousands of dollars in sunk costs for our client. Our client was used to this and considered it part of the cost of doing business when onboarding new team members, but as consultants we would have much rather spent that time building useful features that add value to our client’s business.

Had Docker existed at the time, we could have cut this process from a day to mere minutes. With Docker, you can express servers and services through code, similarly to configuration tools like Puppet, Chef, Salt and Ansible. But, unlike these tools, Docker goes a step further by actually pre-executing these steps for you during its build process, snapshotting the output as an indexed, shareable disk image. Need to compile Node.js from source? No problem. The Docker runtime will do that on build and simply snapshot the output for you at the end. Furthermore, because Docker containers sit directly on top of the Linux kernel, there’s no risk of environmental variations getting in the way.

Nowadays, when we bring a new team member into a client project, they merely have to run `docker-compose up`, grab a cup of coffee, and by the time they’re back they should have everything they need to start working.
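
The whole onboarding flow might look something like this (the repository URL is hypothetical, and a docker-compose.yml is assumed to be checked into the project):

```sh
# Fetch the project, including its docker-compose.yml service definitions
git clone https://example.com/client/project.git
cd project

# Build every image and start every service (databases, runtimes, tooling)
docker-compose up
```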


Docker can save you money.

Of course, time is money, but Docker can also save you hard, physical dollars in infrastructure costs. Studies from Gartner and McKinsey cite average data center utilization of between 6% and 12%. Quite a lot of that underutilized capacity is due to static partitioning. With physical machines or even hypervisors, you need to defensively provision the CPU, disk and memory based on the high watermark of possible usage. Containers, on the other hand, allow you to share unused memory and disk between instances. This allows you to pack many more services onto the same hardware, spinning them down when they’re not needed without worrying about the cost of bringing them back up again. If it’s 3am and no one is hitting your Dockerized intranet application but you need a little extra horsepower for your Dockerized nightly batch job, you can simply swap some resources between the two applications running on common infrastructure.
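
As a sketch of that kind of soft partitioning (container names, images and limits are all illustrative), relative CPU weights and memory caps can be set per container on shared hardware:

```sh
# Low CPU weight for the quiet intranet app, higher for the nightly batch job
docker run -d --name intranet --cpu-shares=256  -m 512m intranet-app
docker run -d --name batch    --cpu-shares=1024 -m 2g   nightly-batch
```

Because `--cpu-shares` is a relative weight rather than a hard cap, the intranet app can still burst to the full CPU at 3am if the batch job is idle.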


Docker has a robust ecosystem of existing images.

At the time of writing, there are over 14,000 public Docker images available on the web. Most of these images are shared through Docker Hub. Similar to how Github has largely become the home of most major open-source projects, Docker Hub is the de facto resource for sharing and working with public Docker images. These images can serve as building blocks for your application or database services. Want to test drive the latest version of that hot new graph database you’ve been hearing about? Someone’s probably already gone to the trouble of Dockerizing it. Need to build and host a simple Rails application with a special version of Ruby? It’s now at your fingertips in a single command.
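
For example, taking that hot new graph database for a spin is typically a two-command affair (the image name and port here are illustrative):

```sh
# Search Docker Hub for candidate images
docker search neo4j

# Pull and run one, exposing its web console on the host
docker run -d -p 7474:7474 neo4j
```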


Docker helps you avoid production bugs.

At Spantree, we’re big fans of “immutable infrastructure.” That is to say, if at all possible, we avoid doing upgrades or changes on live servers. Instead, we build new servers from scratch, applying the new application code directly to a pristine image and rolling the new release servers into the load balancer when they’re ready, retiring the old server instances after all our health checks pass. This gives us the ability to cleanly roll back if something goes wrong. It also gives us the ability to promote the same master images from dev to QA to production with no risk of configuration drift. By extending this approach all the way to the developer machine with Docker, we can also avoid the “it works on my machine” problem, because each developer is able to test their build locally in a parallel, production-like environment.
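
A sketch of how that image promotion might look with Docker (the registry host and tags are hypothetical):

```sh
# Build and tag an immutable release image once
docker build -t myapp:1.4.2 .
docker tag myapp:1.4.2 registry.example.com/myapp:1.4.2
docker push registry.example.com/myapp:1.4.2

# Dev, QA and production all pull the identical image;
# nothing is upgraded in place, so there is no configuration drift
docker pull registry.example.com/myapp:1.4.2
```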


Docker only works on Linux (for now).

The technologies powering Docker are not necessarily new, but many of them, like LXC and cgroups, are specific to the Linux kernel. This means that, at the time of writing, Docker is only capable of hosting applications and services that can run on Linux.  That is likely to change in the coming years as Microsoft has recently announced plans for first-class container support in the next version of Windows Server. Microsoft has been working closely with Docker to achieve this goal. In the meantime, tools like boot2docker and Docker Machine make it possible to run and proxy Docker commands to a lightweight Linux VM on Mac and Windows environments.
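
On a Mac, for instance, the boot2docker workflow looks roughly like this (commands from the boot2docker CLI; versions and output vary):

```sh
# Create and start the lightweight Linux VM
boot2docker init
boot2docker up

# Point the docker CLI at the VM's Docker daemon
eval "$(boot2docker shellinit)"

# The container actually runs on the VM's Linux kernel
docker run --rm ubuntu:14.04 uname -a
```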

Have you used Docker? What has your experience been like? If you’re interested in learning more about how Spantree and GreenPages can help with your application development initiatives, please reach out!


By Cedric Hurst, Principal, Spantree Technology Group, LLC

Cedric Hurst is Principal at Spantree Technology Group, a boutique software engineering firm based primarily out of Chicago, Illinois, that focuses on delivering scalable, high-quality solutions for the web. Spantree provides clients throughout North America with deep insights, strategies and development around cloud computing, devops and infrastructure automation. Spantree is partnered with GreenPages to provide high-value application development and devops enablement to their growing enterprise client base. In his spare time, Cedric speaks at technical meetups, makes music, mentors students and hangs out with his daughter. To stay up to date with Spantree, follow them on Twitter @spantreellc.