Over the past year I reckon I have spoken to more than a thousand developers, IT Ops, and DevOps folks through customer calls, Logentries demos, conferences such as Velocity, DevOpsDays, and AWS re:Invent, as well as a bunch of other more low-key meetups across the US and Europe.
Naturally, one of the first questions I tend to ask is: “hey what do you use for logging?”
Quickly followed by: “What other tools do you use?”
Data centers have matured immensely over the past decade, and today’s data centers are greener than ever. Pike Research pegs the green data center market at $17.1 billion in 2012 and expects the green data center economy to grow rapidly throughout this decade, reaching a total market value of $45 billion by 2016.
Hardware will never be more valuable than on the day it hits your loading dock. Each day new servers are not deployed to production, the business is losing money. While Moore’s Law is typically cited to explain the exponential density growth of chips, a critical consequence of it is the rapid depreciation of servers. The hardware for clustered systems (e.g., Hadoop, OpenStack) tends to be a significant capital expense.
In his session at Big Data Expo, Mason Katz, CTO and co-founder of StackIQ, discussed how infrastructure teams should be aware of the capitalization and depreciation model of these expenses to fully understand when and where automation is critical.
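As a rough illustration of that depreciation pressure (not a figure from the session, and assuming a simple straight-line schedule), the cost of leaving purchased cluster hardware undeployed can be sketched like this:

```python
# Hypothetical illustration: straight-line depreciation of cluster hardware.
# The purchase price, service life, and idle period below are assumptions,
# not numbers from the session.

def idle_cost(purchase_price: float, service_life_years: float, idle_days: int) -> float:
    """Book value written off while servers sit undeployed, assuming
    straight-line depreciation over the hardware's service life."""
    daily_depreciation = purchase_price / (service_life_years * 365)
    return daily_depreciation * idle_days

# Example: a $500,000 Hadoop cluster depreciated over 3 years loses roughly
# $457 of book value for every day it is not in production.
if __name__ == "__main__":
    print(f"${idle_cost(500_000, 3, 30):,.0f} lost over a 30-day deployment delay")
```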
Most companies hope for rapid growth, so it’s important to invest in scalable core technologies that won’t demand a complete overhaul when the business goes through a growth spurt. Cloud technology enables previously difficult-to-scale solutions such as phone systems, network infrastructure, or billing systems to scale automatically based on demand. For example, with a virtual PBX service, a single-user cloud phone service can easily transition into an advanced VoIP system that supports hundreds of phones and numerous office locations.
“There is a natural synchronization between the business models the IoT is there to support,” explained Brendan O’Brien, Co-founder and Chief Architect of Aria Systems, in this SYS-CON.tv interview at the 15th International Cloud Expo®, held Nov 4–6, 2014, at the Santa Clara Convention Center in Santa Clara, CA.
Advanced Persistent Threats (APTs) are increasing at an unprecedented rate. Today’s threat landscape is drastically different from just a few years ago: attacks are far more organized and sophisticated, harder to detect, and even harder to anticipate. In the foreseeable future it’s going to get a whole lot harder still. Everything you know today will change. Keeping up with this shifting landscape is already a daunting task, and your organization needs the latest tools, methods, and expertise to guard against those threats. But will that be enough? Tomorrow’s attacks will originate from entirely new attack platforms, and the tools and methods you use today will not protect you from the threats you will face then. Your security experts will be no match for this new threat. Where will these attacks come from? It isn’t where you think.
This past week the Appcore team got the opportunity to attend one of the industry’s leading cloud events, Cloud Expo in Santa Clara, CA. We spent a lot of time interacting with attendees at the exhibit portion of the event. As a software company with a sole commitment to CloudStack, we heard a lot of questions around the debate between CloudStack and OpenStack. “Do you support OpenStack?” “Why do you only support CloudStack?” “Do you plan to integrate OpenStack into your multi-cloud solution?”
We can all agree that OpenStack has done a great job branding its product. That isn’t hard to do with an annual operating budget of $4–5 million and enterprises such as Red Hat, Dell, HP, and Rackspace backing the project. As you can imagine, many enterprise CEOs believe OpenStack is the right direction.
What’s Big Data without Big Compute? Basically just a large collection of unstructured information with little purpose and value. It’s not enough for data simply to exist; we must derive value from it through computation, something commonly referred to as analytics.
With traditional data, we simply query it to derive results; all we need is currently stored within the data set itself. For instance, for a customer database with dates of birth, we may just fetch the list of customers who were born after a certain date. This is a simple query, not a computation, and therefore cannot be considered analytics.
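To make the distinction concrete, here is a minimal sketch (the customer records and field names are illustrative, not from any particular system): the first step is a plain query over what is already stored, while the second computes a value that exists nowhere in the data set and so edges into analytics.

```python
from datetime import date

# Illustrative customer records; the names and fields are assumptions.
customers = [
    {"name": "Alice", "dob": date(1985, 4, 12), "lifetime_spend": 1200.0},
    {"name": "Bob",   "dob": date(1992, 9, 30), "lifetime_spend": 310.0},
    {"name": "Carol", "dob": date(1978, 1, 5),  "lifetime_spend": 2750.0},
]

# A simple query: filter on values already stored in the data set.
born_after_1980 = [c for c in customers if c["dob"] > date(1980, 1, 1)]

# A computation over the data: derive average spend per birth decade,
# a value that is not stored anywhere in the data set itself.
cohorts = {}
for c in customers:
    decade = c["dob"].year // 10 * 10
    cohorts.setdefault(decade, []).append(c["lifetime_spend"])
avg_spend_by_decade = {d: sum(v) / len(v) for d, v in cohorts.items()}

print([c["name"] for c in born_after_1980])  # ['Alice', 'Bob']
print(avg_spend_by_decade)                   # {1980: 1200.0, 1990: 310.0, 1970: 2750.0}
```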
Cloud services are the newest tools in the arsenal of IT products on the market today. These cloud services integrate processes and tools, and in order to use them effectively, organizations must have a good understanding of themselves and their business requirements.
In his session at 15th Cloud Expo, Brian Lewis, Principal Architect at Verizon Cloud, outlined key areas of organizational focus, and how to formalize an actionable plan when migrating applications and internal services to the cloud.
An education portal application developed by a European non-profit organization links students to the higher education institutions where they wish to study, as well as to the government agency that helps them finance their education. When educational institutions want to develop and test transactions involving this portal, they need access to the behavior of the interconnected government agency’s system—however, this system is not readily available for testing.
Service Virtualization provides these institutions with continuous, secure access to the government system’s behavior, which is critical for completing thorough end-to-end tests against the portal application.
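To illustrate the idea, here is a minimal sketch of what a virtualized stand-in for the agency’s system might look like during portal testing. The route, response fields, and port are hypothetical; a real service virtualization tool would record and replay the agency system’s actual behavior rather than return a hand-written canned response.

```python
# Hypothetical stub of the government agency's financing system for test use.
import json
from http.server import BaseHTTPRequestHandler, HTTPServer

class FinanceAgencyStub(BaseHTTPRequestHandler):
    def do_GET(self):
        # Assumed endpoint: /funding-eligibility/<student_id>
        if self.path.startswith("/funding-eligibility/"):
            student_id = self.path.rsplit("/", 1)[-1]
            body = json.dumps({
                "studentId": student_id,
                "eligible": True,        # canned response for end-to-end tests
                "maxAnnualGrant": 9000,  # illustrative value
            }).encode()
            self.send_response(200)
            self.send_header("Content-Type", "application/json")
            self.end_headers()
            self.wfile.write(body)
        else:
            self.send_response(404)
            self.end_headers()

if __name__ == "__main__":
    # The portal under test is pointed at this stub instead of the real agency system.
    HTTPServer(("localhost", 8080), FinanceAgencyStub).serve_forever()
```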