As interest in VDI grows among enterprises, many are finding its implementation more challenging than they anticipated. Successful desktop virtualization requires a powerful, secure, reliable infrastructure that delivers a seamless user experience. Private clouds that deliver a public cloud experience are emerging as a solution.
CloudCamp, where adopters of Cloud Computing technologies exchange ideas, is being held on Tuesday, November 3, from 7:30 pm – 9:30 pm at the 17th Cloud Expo, November 3-5, 2015, at the Santa Clara Convention Center in Santa Clara, CA. CloudCamp will be led by Dave Nielsen of CloudCamp.
This CloudCamp is focused on Open Cloud platforms and the open source applications that run on top of them. The event is a way to learn about what is happening in the Open Cloud ecosystem. We invite you to propose talks related to any of these open source platforms, or other topics related to getting solutions running on top of open source.
While we were busy throwing parts of our organisations into the cloud and (for those who don’t count it as cloud) SaaS; while we were moving parts of our organisation over to Python, or Node, or Swift; while we were looking into Software Defined Everything, and containers started sounding like the hosting spot for a humongous jigsaw puzzle; something was growing that we should have been paying more attention to.
While container technology is sweeping the board and being installed practically everywhere, its progress is going largely unmonitored, according to a new study. The research figures suggest that the majority of Docker adopters could be sleepwalking into chaos.
The report, The State of Containers and the Docker Ecosystem 2015, found that 93% of organisations plan to use containers, with 78% of them opting for Docker.
The primary reason for using Docker was convenience and speed: a large majority of respondents (85%) nominated ‘fast and easy deployment’ as their most important reason for using Docker. However, this haste could lead to mistakes, because over half (54%) told researchers that performance monitoring was not a major focus of attention as they rushed to adopt container technology.
The findings of the study shocked Bernd Greifeneder, CTO at performance manager Dynatrace, which commissioned the research.
“It’s crucial to monitor not just the containers themselves, but to understand how microservices and applications within the containers perform,” said Greifeneder, who works in Dynatrace’s Ruxit division. “Monitoring application performance and scalability are key factors to success with container technology.”
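In practice, monitoring what runs inside the containers comes down to continuously sampling per-container metrics and flagging outliers. The sketch below is purely illustrative: the metric names, data shapes and thresholds are my assumptions for the example, not Dynatrace’s or Ruxit’s actual API.

```python
# Illustrative only: a minimal threshold check of the kind a container
# monitoring agent applies to per-container metric samples. The metric
# names and thresholds are assumptions, not any vendor's real API.

def check_container(metrics, cpu_limit=0.80, mem_limit=0.90):
    """Return a list of alert strings for one container's metric sample.

    metrics: dict with 'name', 'cpu' (fraction of CPU quota used) and
    'mem' (fraction of memory limit used).
    """
    alerts = []
    if metrics["cpu"] > cpu_limit:
        alerts.append(f"{metrics['name']}: CPU at {metrics['cpu']:.0%}")
    if metrics["mem"] > mem_limit:
        alerts.append(f"{metrics['name']}: memory at {metrics['mem']:.0%}")
    return alerts

# One polling cycle over two containers; only worker-3 breaches limits.
samples = [
    {"name": "web-1", "cpu": 0.55, "mem": 0.60},
    {"name": "worker-3", "cpu": 0.93, "mem": 0.95},
]
all_alerts = [a for s in samples for a in check_container(s)]
```

A real agent would pull these numbers from the container runtime (for example Docker’s stats endpoint) and correlate them with application-level traces, which is exactly the container-plus-microservice view the quote argues for.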
Half the companies planning a container deployment in the coming six months to a year will do so in production, according to Greifeneder. Without monitoring, it will be difficult to manage, he said.
While most companies (56%) seem to realise the benefits of having reliable and production-ready solutions, fewer (40%) seem to understand the flip side of the powers of automation and the dangers inherent in using ‘extraordinarily dynamic’ technology without monitoring its progress.
Since Docker was launched in 2013, more than 800 million containers have been pulled from the public Docker Hub. While container use is skyrocketing there are barriers to success that need to be addressed, Greifeneder argued.
The report was conducted by O’Reilly Media in collaboration with Ruxit. Survey participants represented 138 companies with fewer than 500 employees, from a variety of sectors including software, consulting, publishing and media, education, cloud services, hardware, retail and government.
The acquired assets include WSI, weather.com, Weather Underground and The Weather Company brand. The Weather Channel will not be part of the acquisition but it will license weather forecast data and analytics from IBM under a long-term contract.
IBM says the combination of technology and expertise from the two companies will be the foundation for the new Watson IoT Unit and Watson IoT Cloud platform, part of its $3 billion investment strategy in this sector.
The Weather Company’s cloud data system runs the fourth most-used mobile app daily in the United States and handles 26 billion inquiries a day.
On closing the deal, IBM will acquire The Weather Company’s product and technology assets, which include meteorological data science experts, precision forecasting and a cloud platform that ingests, processes, analyses and distributes petabyte-sized data sets instantly. The Weather Company’s models analyse data from three billion weather forecast reference points, more than 40 million smartphones and 50,000 airplane flights per day, allowing it to offer a broad range of data-driven products and services to 5,000 clients in the media, aviation, energy, insurance and government industries.
The Weather Company’s mobile and web properties serve 82 million unique monthly visitors. IBM said it plans to develop The Weather Company’s digital advertising platform and skills, commercialising weather information through data-driven advertising alongside additional ad-sponsored consumer and business solutions.
“The next wave of improved forecasting will come from the intersection of atmospheric science, computer science and analytics,” said Weather Company CEO David Kenny. “Upon closing of this deal, The Weather Company will continue to be able to help improve the precision of weather forecasts and deepen IBM’s Watson IoT capabilities.”
The fourth employee of Infectious Media, Dan de Sybel started his career as an Operations Analyst for Advertising.com, where, during a six-year tenure, he launched the European Technology division, producing bespoke international reporting and workflow platforms as well as numerous time-saving systems and board-level business intelligence.
Dan grew the EU Tech team to 12 people before moving agency side, to Media Contacts UK, part of the Havas Media Group. At Havas, Dan was responsible for key technology partnerships and spearheading the agency’s use of the Right Media exchange under its Adnetik trading division.
At Infectious Media, Dan’s Technology division yielded one of the first Big Data analysis systems to reveal and visualise the wealth of information that RTB provides to its clients. From there, the natural next step was to produce the Impression Desk Bidder to be able to action the insights gained from the data in real time and thus close the loop on the programmatic life cycle. Dan’s team continues to enhance its own systems, whilst integrating the technology of other best-in-class suppliers to provide a platform that caters to each and every one of our clients’ needs.
Ahead of his presentation at DevOps World on November 4th in London, Dan shares his insights on how he feels DevOps is affecting ICT teams, the DevOps challenges he is facing, and what he is doing to overcome them.
What does your role involve and how are you involved with DevOps?
Infectious Media runs its own real-time bidding software that takes part in hundreds of thousands of online auctions for online advertising space every second. As CTO, it’s my job to ensure we have the right team, processes and practices in place to ensure this high frequency, low latency system remains functional 24×7 and adapts to the ever changing marketplace and standards of the online advertising industry.
DevOps practices evolved naturally at Infectious Media due to our small teams, one-week sprint cycles and the growing complexity of our systems. Our heavy use of the cloud meant that we could experiment frequently with different infrastructure setups and adapt code to deliver the best possible value for the investment we were prepared to make. These conditions resulted in far closer collaboration between the developers and the operational engineers, and we have not looked back since.
How have you seen DevOps affecting IT teams’ work?
Before adopting the DevOps philosophy, we struggled to bring the real-time bidding system to fruition, never sure if problems originated in the code, in the operational configurations of infrastructure, or in the infrastructure itself. Whilst the cloud brought many benefits, never having complete control of the infrastructure stack led to many latency and performance issues that could not be easily explained. Furthermore, being unable to accurately simulate a real-world environment for testing without spending hundreds of thousands of pounds meant that we had to work out solutions for de-risking testing new code in live environments. All of these problems became much easier to deal with once we started following DevOps practices and as a result, we have a far happier and more productive technology team.
What is the biggest challenge you are facing with DevOps and how did/are you trying to overcome it?
The biggest challenge was overcoming the initial inertia to switch to a model that was so far unproven and regarded as a bit of a fad. Explaining agile methodologies and the compromises they involve to senior company execs is hard enough, but as soon as you mention multiple daily release cycles, necessitating fewer governance processes and testing on live, you are bound to raise more than a few eyebrows.
Thankfully, we are a progressive company and the results proved the methodology. Since we adopted DevOps, we’ve had fewer outages, safer and more streamlined deployments and, crucially, more features released in less time.
Can you share a book, article or film that you recently read or watched that inspired you, in regards to technology?
The Phoenix Project. Perhaps a bit obvious, but it’s enjoyable reading a novel that covers some of the very real problems IT professionals experience in their day-to-day roles with the very solutions that we were experimenting with at the time.
Really, my goal is to understand and help with some of the problems that rolling DevOps practices out across larger companies can yield. In many respects, rolling out DevOps in small startups is somewhat easier, as you have far less inertia from tried-and-trusted practices, comparatively less risk and far fewer people to convince that it’s a good idea. I’ll be interested to hear about other people’s experiences and hopefully be able to share some advice based on our own.
Deep Information Sciences, the company that reimagined MySQL for the New Economy in the cloud, today announced that Chad Jones, its Chief Strategy Officer, will be a featured speaker at Internet of @ThingsExpo, taking place November 3 – 5, 2015 in Santa Clara, CA.
In his session, “How Databases Can Stop Playing Catch-Up with the IoT” on November 4 at 8:30am, Jones will share tips on how to accelerate IoT initiatives and harness truly big IoT data by applying machine learning to the database core.
Jones will also be a featured panelist on the IoT Power Panel, “The World’s Many IoTs: Which are the Most Important?” on November 3 at 5:20pm, which explores technology advancements that will have the biggest impact on the IoT industry.
By Ryan Kroonenburg, Managing Director at Logicworks UK
After speaking with hundreds of UK technology leaders about cloud adoption, it is clear that cloud technology is transforming business models and improving cost-efficiency in the enterprise.
Despite the positive results, IT leaders have also shared serious concerns. They usually sound something like this:
“The cloud is great. The support is mediocre at best.”
“I am not sure my cloud provider understands my business.”
“I spend thousands of pounds a month for cloud support, but I do not know what they do.”
Enterprises expect a certain level of support for IT products, and they are simply not getting it. In fact, nearly three-quarters (75%) of UK CIOs feel they have sacrificed support by moving to the cloud, 84% feel that cloud providers could do more to reduce the burden on internal IT staff, and the vast majority of respondents felt ripped off by “basic” cloud support.
This is a huge threat to the success of the cloud in the UK. Poorly architected clouds that are supported by junior, outsourced technicians not only expose enterprises to downtime and security vulnerabilities, but also hurt overall business goals and reduce the likelihood of further cloud adoption.
I believe there are three major causes of cloud support failure — and in many ways, it is up to enterprises to become educated and demand a higher quality of service.
Fake cloud providers

While this phenomenon is well known in North America, the UK market is still full of small service providers that claim to provide “cloud computing” but in fact do not. Usually these ‘cloud providers’ are small companies with a couple of leased data centres that provide none of the scale, built-in services and tools, and global diversity that true cloud providers offer.
These companies have a tiny fraction of the power and capacity for innovation of cloud giants like Amazon and Google. Cloud technology has far outstripped the basic low-cost compute model, and has now transformed data warehousing, analytics, cold storage, scalable compute, and more. Small cloud providers cannot possibly compete.
UK IT leaders often assume that these small providers offer superior cloud support at lower cost. Unfortunately, the typical support model tends to be “fix what is broken”, which is insufficient for cloud systems and results in slow wait times and a higher risk of manual error. Further, these niche providers lack the scale and breadth of services to keep costs down.
Misaligned support models
When your applications depend on the health of physical data centre components, you want a support team that is very good at fixing mechanical issues, fast.
When your systems are in the cloud, you want a support team that builds systems that can survive any underlying mechanical issue.
These two infrastructure models require very different support teams. In the cloud support model, the system as a whole is more complex. For instance, a problem might originate in a script in a central repository, so the support engineer must have a deep understanding of all layers of the system in order to discover the source of the issue. They must be able to code as well as to understand traditional networking and database concepts. Service providers can no longer staff support teams with low-level engineers whose only responsibilities are to record issues and read monitoring dashboards.
Unfortunately, many cloud service providers, or traditional hosting providers that have rebranded, do not staff their support centres with experienced cloud architects. It is no surprise to me that nearly half of IT leaders report that call handlers lacked sufficient technical knowledge (41%) and were slow to respond (47%).
The cloud automation gap
There is only one way for service teams to deliver a fast, targeted fix to a service request: cloud automation. Not incidentally, automation is also the only way to deliver 100% available systems on the cloud.
Maintaining complex cloud environments is difficult. To take full advantage of the cloud’s flexibility and pay-as-you-go pricing, your cloud should scale dynamically without human intervention. This requires more than just a basic service install: you need to automate the provisioning of new instances, which means configuring those instances quickly with a configuration management tool like Puppet. If your team’s goal is to deploy more frequently, you need to combine this rapid infrastructure provisioning capacity with deployment automation tools, which automate the testing and deployment of new code.
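At its core, “scale dynamically without human intervention” is a policy loop that compares measured load against thresholds and adjusts the instance count within fixed bounds. The sketch below is a deliberately simplified illustration; the thresholds, bounds and function name are assumptions for the example, not any provider’s actual autoscaling engine, and a real setup would add cooldown periods and hand each new instance to a configuration management tool.

```python
# Illustrative sketch of the scale-out/scale-in decision an autoscaler
# automates; thresholds and names are assumptions for this example,
# not any cloud provider's real policy engine.

def desired_capacity(current, avg_cpu,
                     scale_out_at=0.70, scale_in_at=0.30,
                     min_size=2, max_size=10):
    """Return the new instance count for a group, given average CPU
    utilisation as a fraction (e.g. 0.85 for 85%)."""
    if avg_cpu > scale_out_at:
        return min(current + 1, max_size)   # add capacity, capped at max
    if avg_cpu < scale_in_at:
        return max(current - 1, min_size)   # shed capacity, floored at min
    return current                          # within the band: do nothing

print(desired_capacity(4, 0.85))  # heavy load: grows the group to 5
print(desired_capacity(2, 0.10))  # light load but already at floor: stays at 2
```

The point of the article holds even in this toy form: once the decision is codified, no human has to watch a dashboard and raise a ticket to add a server.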
When small cloud companies claim to offer scalability, be sure to dig deeper. What they may mean is that they can manually respond to a service request to increase your server capacity. If you want to actually automate scaling on a large cloud platform, these tasks require advanced, specialized skills both to create these systems in the first place and to maintain them.
Few service providers offer true automation, and few enterprises realise they need it. Automation is difficult, automation experts are hard to find, and this education gap will have even larger consequences in the years to come. As the demands on IT resources continue to increase, the delay required to perform or outsource manual work will become impossible to sustain. Downtime is impossible to prevent when the system cannot be adequately automated for failover.
Any cloud service provider must be a cloud automation expert. Cloud automation should be the heart of enterprise support: it will decrease downtime, increase flexibility, and dramatically improve your provider’s ability to respond rapidly to service requests.
The right providers
Choosing the right cloud platform and service provider feels very risky for many UK business leaders. The industry is changing rapidly. The pressures to develop faster and more cost-efficiently are enormous.
I place my bets behind companies that are also evolving rapidly and developing the right new services to meet customer demand, which is why I have helped European enterprises move to AWS for over five years. That’s also why I joined Logicworks, who I believe are one of the only cloud service providers to understand the importance of automation in improving data privacy and agility. Feel free to reach out to me directly (email@example.com) to learn more about AWS or Logicworks.
I am not a data scientist, nor an expert in building candlestick charts from historical stock prices. I am, however, a data enthusiast, and it fascinates me when I hear people talk about Big Data as if they invented it. Sorry, no offence meant, but really, how did we jump to Big Data without first creating an understanding of any kind of data?
Information in any shape or form is a brilliant resource. We work with information every day and, if you look at it, nothing runs without information. Every business of every size across the world runs on information, from the smallest corner store to the corporations working in large glass towers. This information is of many types: accounting information, sales data, marketing stats, customer information, purchase-order information, patient information, hosting information and so on. Everything we know has some kind of information associated with it. Do we agree so far? Yes we do.
WebRTC services have already permeated corporate communications in the form of videoconferencing solutions. However, WebRTC has the potential to go beyond that and catalyze a new class of services providing more than calls, with capabilities such as mass-scale real-time media broadcasting, enriched and augmented video, and person-to-machine and machine-to-machine communications.
In his session at @ThingsExpo, Luis Lopez, CEO of Kurento, will introduce the technologies required for implementing these ideas and some early experiments performed in the Kurento open source software community in areas such as entertainment, video surveillance, interactive media broadcasting, gaming or advertising. He will conclude with a discussion of their potential business applications beyond plain call models.