Bring Your Own Encryption: The case for standards

BYOE is the new black

Being free to choose the most suitable encryption for your business seems like a good idea. But it will only work in a context of recognised standards across encryption systems and providers’ security platforms. Since the start of the 21st century, security has emerged from scare-story status to become one of IT users’ biggest issues – as survey after survey confirms. Along the way a number of uncomfortable lessons are still being learned.

The first lesson is that security technology must always be considered in a human context. No one still believes in a technological fix that will put an end to all security problems, because time and again we hear news of new types of cyber attack that bypass sophisticated and secure technology by targeting human nature – from alarming e-mails ostensibly from official sources, to friendly social invitations to share a funny download; from a harmless-looking USB stick ‘accidentally’ dropped by the office entrance, to the fake policeman demanding a few personal details to verify that you are not criminally liable.

And that explains the article’s heading: a balance must be struck between achieving the desired level of protection and keeping all protection procedures quick and simple. Every minute spent making things secure is a minute lost to productivity – so the heading could equally have said “balancing security with efficiency”.

The second lesson still being learned is never to trust instinct fully in security matters. It is instinctive to obey instructions that appear to come from an authoritative source, or to respond in an open, friendly manner to a friendly approach – and those are just the sort of instincts that are exploited by IT scams. Instincts can open us to attack, and they can also evoke inappropriate caution.

In the first years of major cloud uptake there was the oft-repeated advice to business that the sensible course would be to use public cloud services to simplify mundane operations, but that critical or high priority data should not be trusted to a public cloud service but kept under control in a private cloud. Instinctively this made sense: you should not allow your secrets to float about in a cloud where you have no idea where they are stored or who is in charge of them.

The irony is that the cloud – being so obviously vulnerable and inviting to attackers – is constantly being reinforced with the most sophisticated security measures: so data in the cloud is probably far better protected than any SME could afford to secure its own data internally. It is like air travel: because flying is instinctively scary, so much has been spent to make it safe that you are less likely to die on a flight than you are driving the same journey in the “safety” of your own car. The biggest risk in air travel is in the journey to the airport, just as the biggest risk in cloud computing lies in the data’s passage to the cloud – hence the importance of a secure line to a cloud service.

So let us look at encryption in the light of those two lessons. Instinctively it makes sense to keep full control of your own encryption and keys, rather than let them get into any stranger’s hands – so how far do we trust that instinct, bearing in mind the need also to balance security against efficiency?

BYOK

Hot on the heels of BYOD – or “Bring Your Own Device” to the workplace – comes the acronym BYOK, for Bring Your Own Key.

The idea of encryption is as old as the concept of written language: if a message might fall into enemy hands, then it is important to ensure that they will not be able to read it. We have recently been told that US forces used Native American communicators in WW2 because the chances of anyone in Japan understanding their language were near zero. More typically, encryption relies on some sort of “key” to unlock and make sense of the message it contains, and that transfers the problem of security to a new level: now that the message is secure, the focus shifts to protecting the key.

In the case of access to cloud services: if we are encrypting data because we are worried about its security in an unknown cloud, why then should we trust the same cloud to hold the encryption keys?

Microsoft, for instance, recently announced a new solution to this dilemma using HSMs (Hardware Security Modules) within its Windows Azure cloud: an enterprise customer uses its own internal HSM to produce a master key that is then transmitted to the HSM within the Windows Azure cloud. This provides secure encryption for data in the cloud, but it also means that not even Microsoft itself can read that data, because it does not have the master key hidden in the enterprise HSM.
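
To make the principle concrete, here is a minimal sketch of the key-wrapping idea behind BYOK, written in Python with the third-party cryptography library. It is purely illustrative and does not use Microsoft’s actual Azure or HSM APIs: the point is simply that whoever stores the wrapped data key cannot recover the data without the customer’s master key.

```python
# Conceptual sketch of BYOK-style key wrapping (not Microsoft's actual Azure/HSM API).
# Assumes the third-party 'cryptography' package: pip install cryptography
import os
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

# 1. The customer generates a master key inside its own HSM (simulated here in software).
master_key = AESGCM.generate_key(bit_length=256)

# 2. A data-encryption key (DEK) is generated per dataset and wrapped with the master key.
dek = AESGCM.generate_key(bit_length=256)
wrap_nonce = os.urandom(12)
wrapped_dek = AESGCM(master_key).encrypt(wrap_nonce, dek, b"dek-wrap")

# 3. Data in the cloud is encrypted with the DEK; the provider only ever stores the
#    ciphertext plus the *wrapped* DEK, which is useless without the master key.
data_nonce = os.urandom(12)
ciphertext = AESGCM(dek).encrypt(data_nonce, b"sensitive customer record", None)

# 4. Only a party holding the master key can unwrap the DEK and recover the plaintext.
recovered_dek = AESGCM(master_key).decrypt(wrap_nonce, wrapped_dek, b"dek-wrap")
assert AESGCM(recovered_dek).decrypt(data_nonce, ciphertext, None) == b"sensitive customer record"
```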

It is not so much that the enterprise cannot trust Microsoft to protect its data from attack; it is more to do with growing legal complexities. In the wake of the Snowden revelations, it is becoming known that even the most well-protected data might be at risk from a government or legal subpoena demanding that its content be revealed. Under this BYOK system, however, Microsoft cannot be forced to reveal the enterprise’s secrets because it cannot access them itself, and the responsibility lies only with the owner.

This is increasingly important because of other legal pressures that insist on restricting access to certain types of data. A government can, for example, forbid anyone from allowing data of national importance to leave the country – not a simple matter in a globally connected IP network. There are also increasing legal pressures on holders of personal data to guarantee levels of privacy.

Instinctively it feels a lot more secure to manage your own key and use BYOK instead of leaving it to the cloud provider. As long as that instinct is backed by a suitable and strict in-house, HSM-based security policy, it can be trusted.

BYOE

BYOK makes the best of the cloud provider’s encryption offering, by giving the customer ultimate control over its key. But is the customer happy with the encryption provided?

Bearing in mind that balance between security and efficiency, you might prefer a higher level of encryption than that used by the cloud provider’s security system, or you might find that the encryption mechanism adds latency or inconvenience and would rather accept lighter encryption in exchange for greater nimbleness. In that case you could go a step further and employ your own encryption algorithms or processes. Welcome to the domain of BYOE (Bring Your Own Encryption).

Again, we must balance security against efficiency. Take the example of an enterprise using the cloud for deep mining its sensitive customer data. This requires so much computing power that only a cloud provider can do the job, and that means trusting private data to be processed in a cloud service. This could infringe regulations, unless the data is protected by suitable encryption. But how can the data be processed if the provider cannot read it?

Taking the WW2 example above: if a Japanese wireless operator was asked to edit the Native American message so a shortened version could be sent to HQ for cryptanalysis, any attempt to edit an unknown language would create gobbledygook, because translation is not a “homomorphic mapping”.

Homomorphic encryption means that one can perform certain processes on the encrypted data, and the same processes will be performed on the source data without any need to decrypt the encrypted data. This usually means arithmetical processes: the data mining software can do its mining on the encrypted data file while it remains encrypted, and the output, when decrypted, will be the same as if the data had been processed without any intervening encryption.
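
For readers who want to see the trick in action, below is a toy, deliberately insecure illustration of an additively homomorphic scheme (in the style of Paillier) written in Python. The tiny hard-coded primes and the example values are purely illustrative; the point is that multiplying two ciphertexts produces a ciphertext of the sum of the plaintexts, so a third party can add numbers it cannot read.

```python
# Toy additively homomorphic (Paillier-style) scheme -- illustration only.
# Tiny hard-coded primes, no padding, not constant-time: never use for real data.
# Requires Python 3.8+ for pow(x, -1, n) modular inverses.
import math
import random

p, q = 293, 433                       # demo primes (far too small for security)
n, n2 = p * q, (p * q) ** 2
g = n + 1                             # standard simplification g = n + 1
lam = (p - 1) * (q - 1) // math.gcd(p - 1, q - 1)   # lcm(p-1, q-1)

def L(u):
    return (u - 1) // n

mu = pow(L(pow(g, lam, n2)), -1, n)   # precomputed decryption constant

def encrypt(m):
    r = random.randrange(1, n)
    while math.gcd(r, n) != 1:
        r = random.randrange(1, n)
    return (pow(g, m, n2) * pow(r, n, n2)) % n2

def decrypt(c):
    return (L(pow(c, lam, n2)) * mu) % n

# The homomorphic property: multiplying ciphertexts adds the hidden plaintexts.
a, b = 1200, 345
c_sum = (encrypt(a) * encrypt(b)) % n2
assert decrypt(c_sum) == a + b        # 1545, computed without decrypting a or b
```

Fully homomorphic schemes extend this idea to arbitrary computation, which is what the data mining scenario above requires, at a considerably higher computational cost.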

It is like operating one of those automatic coffee vendors that grinds the beans, heats the water and adds milk and sugar according to which button was pressed: you do not know what type of coffee bean is used, whether the water is tap, filtered or spring, or whether the milk is whole cream, skimmed or soya. All you know is that what comes out will be a cappuccino with no sugar. In the data mining example: what comes out might be a neat spreadsheet summary of customers’ average buying habits based on millions of past transactions, without a single personal transaction detail being visible to the cloud provider.

The problem with the cloud provider allowing users to choose their own encryption is that the provider’s security platform has to be able to support the chosen encryption system. As an interim measure, the provider might offer a choice from a range of encryption offerings that have been tested for compatibility with the cloud offering, but that still requires one to trust another’s choice of encryption algorithms. A full homomorphic offering might be vital for one operation, but a waste of money and effort for a whole lot of other processes.

The call for standards

So what is needed for BYOE to become a practical solution is a global standard cloud security platform with which any encryption offering can be registered for support. The customer chooses a cloud offering for its services and for its certified “XYZ standard” security platform, then goes shopping for an “XYZ certified” encryption system that matches its particular balance between security and practicality.

Just as in the BYOD revolution, this decision need not be made at an enterprise level, or even by the IT department. BYOE, if sufficiently standardised, could become the responsibility of the department, team or individual user: just as you can bring your own device to the office, you could ultimately take personal responsibility for your own data security.

What if you prefer to use your very own implementation of your own encryption algorithms? All the more reason to want a standard interface! This approach is not so new for those of us who remember the Java J2EE Crypto library – as long as we complied with the published interfaces, anyone could use their own crypto functions. This “the network is the computer” ideology becomes all the more relevant in the cloud age. As the computer industry has learned over the past 40 years, commonly accepted standards and architectures (for example the Von Neumann model or J2EE Crypto) play a key role in enabling progress.

BYOE could prove every bit as disruptive as BYOD – unless the industry can ensure that users choose their encryption from a set of globally sanctioned and standardised encryption systems or processes. If business is to reap the full benefits promised by cloud services, it must have the foundation of such an open cloud environment.

Written by Dr. Hongwen Zhang, chair security working group, CloudEthernet Forum.

Five ways to monitor and control AWS cloud costs


Many IT teams find that their AWS cloud costs grow less efficient as “clutter” builds up in their accounts. The good news is that both AWS and a small army of third party providers have developed tools to help engineers discover the cause(s) of these inefficiencies.

While there are several “easier” fixes, such as Reserved Instances and eliminating unused resources, the real issue is usually far more complex. Unplanned costs are frequently the result of nonstandard deployments that come from unclear or absent development processes, poor organisation, or the absence of automated deployment and configuration tools.

Controlling AWS costs is no simple task in enterprises with highly distributed teams, unpredictable legacy applications, and complex lines of dependency. Here are some strategies Logicworks engineers use to keep our clients’ costs down:

1. Cloudcheckr and Trusted Advisor

The first step in controlling AWS costs is to gather historical cost/usage data and set up an interface where this data can be viewed easily.

There are many third party and native AWS resources that provide consolidated monitoring as well as recommendations for potential cost saving, using tools like scheduled runtime and parking calendars to take advantage of the best prices for On-Demand instances.

Cloudcheckr is a sophisticated cloud management tool that is especially useful in enforcing standard policies and alerting developers if any resources are launched outside of that configuration. It also has features like cost heat maps and detailed billing analysis to give managers full visibility into their environments. When unusual costs appear in an AWS bill, Cloudcheckr is the first place to look.

Trusted Advisor is a native AWS resource available with Business-level support. TA’s primary function is to recommend cost-saving opportunities, and, like Cloudcheckr, it also provides availability, security, and fault tolerance recommendations. Even simple tuning of CPU usage and provisioned IOPS can add up to significant savings; Oscar Health recently reported that it saw 20% savings after using Trusted Advisor for just one hour.

Last year, Amazon also launched the Cost Explorer tool, a simple graphical interface displaying the most common cost queries: monthly cost by service, monthly cost by linked account, and daily spend. This level of detail is well suited to upper management and finance teams, as it does not go into particularly specific technical data.
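
For teams that want the same numbers programmatically, the sketch below pulls monthly cost grouped by service. It assumes the boto3 SDK, a version recent enough to expose the Cost Explorer (“ce”) client, ce:GetCostAndUsage permission on the account, and an illustrative date range.

```python
# Sketch: monthly cost by service via the Cost Explorer API (assumes boto3 and
# ce:GetCostAndUsage permission; the date range below is only an example).
import boto3

ce = boto3.client("ce", region_name="us-east-1")   # Cost Explorer is served globally

response = ce.get_cost_and_usage(
    TimePeriod={"Start": "2015-05-01", "End": "2015-06-01"},
    Granularity="MONTHLY",
    Metrics=["UnblendedCost"],
    GroupBy=[{"Type": "DIMENSION", "Key": "SERVICE"}],
)

for period in response["ResultsByTime"]:
    for group in period["Groups"]:
        service = group["Keys"][0]
        amount = float(group["Metrics"]["UnblendedCost"]["Amount"])
        print(f"{service:<45s} ${amount:,.2f}")
```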

2. Reserved instances

The most obvious way to control compute cost is to purchase reserved EC2 instances for a period of one or three years, with all, partial, or no upfront payment. Customers can see savings of over 50% on reserved instances vs. on-demand instances.

However, reserved instances have several complications. First, it is not a simple matter to predict one or three years of usage when an enterprise has been on AWS for the same amount of time or less; secondly, businesses that are attracted to the pay-as-you-go cloud model are wary of capital costs that harken back to long-term contracts and sunk costs. It can also be difficult to find extra capacity of certain instance types on the marketplace, and enterprises might find this a complicated and costly procedure in any case.

Companies can still get value out of reserved instances by following certain best practices:

  • Buy reserved capacity to cover the minimum or average sustained usage – the smallest number of instances necessary to keep the application running, or instances that are historically always running (a simple way to estimate this baseline is sketched after this list).
  • To figure out average sustained usage, use tools like Cloudcheckr and Trusted Advisor (explored above) to audit your usage history. Cloudcheckr will recommend reserved instance purchases based on those figures, which can be especially helpful if you do not want to comb through years of data across multiple applications.
  • Focus first on what will achieve the highest savings with rapid ROI; this lowers the potential impact of future unused resources. The best use-cases for reserved instances are applications with very stable usage patterns.
  • For larger enterprises, use a single individual, financial team, and AWS account to purchase reserved instances across the entire organisation. This allows for a centralized reserved instance hub so that resources that are not used on one application/team can be taken up by other projects internally.
  • Consolidated accounts can purchase reserved instances more effectively when instance families are also consolidated. Reservations cannot be moved between accounts, but they can be moved within RI families. Reservations can be changed at any time from one size to another within a family. The fewer families are maintained, the more ways an RI can be applied. However, as explored below, the cost efficiencies gained by choosing a more recently released, more specialised instance type could outweigh the benefits of consolidating families to make the RI process smoother.
  • Many EC2 instances are underutilised. Experiment with a small number of RIs on stable applications, but you may find better value by choosing smaller instance sizes and scheduling On-Demand instances more effectively, without upfront costs.
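
As flagged in the first bullet above, the baseline worth reserving can be estimated directly from usage history before committing to a purchase. The sketch below uses made-up hourly instance counts; in practice the figures would come from Cloudcheckr, Trusted Advisor or the detailed billing report.

```python
# Sketch: estimate how many instances of a given type are worth reserving,
# based on hourly counts of running instances (illustrative numbers only).
hourly_running_instances = [4, 4, 5, 7, 9, 12, 12, 10, 7, 5, 4, 4] * 30  # ~1 month

floor = min(hourly_running_instances)                    # always-on baseline
average = sum(hourly_running_instances) / len(hourly_running_instances)

# A conservative purchase covers the floor; a more aggressive one covers the
# average sustained usage, accepting that the RIs will sit idle in some hours.
print(f"Reserve at least {floor} instances (historically always running)")
print(f"Average sustained usage: {average:.1f} instances")
```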

3. Spot instances

Spot instances allow customers to set the maximum price for compute on EC2. This is great for running background jobs more cheaply, processing large data loads in off-peak times, etc. Those familiar with certain CPC bid rules in advertising may recognise the model.

The issue is that a spot instance might be terminated when you are 90% of the way through a job if the price for that instance rises above your bid threshold. An architecture that does not plan for this can see the cost of the spot instance wasted. Bid prices need to change dynamically, but without exceeding on-demand prices. Best practice is to set up an Auto Scaling group that contains only spot instances; CloudWatch can watch the current rate and, whenever the price falls within the bid, scale up the group as long as it stays within the parameters of the request. Then create a second Auto Scaling group with on-demand instances (the minimum to keep the lights on), and set an ELB between them so that requests get served either by the spot group or the on-demand group. If the spot price rises above the bid, create a new launch configuration that sets the min_size of the spot instance Auto Scaling group to 0. Sanket Dangi outlines this process here.
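
A hedged boto3 sketch of that two-group setup is shown below; the AMI ID, instance type, bid price, group names and ELB name are placeholders, and the dynamic re-bidding logic (watching CloudWatch and updating min_size) is left out for brevity.

```python
# Sketch: a spot-backed Auto Scaling group alongside an on-demand baseline group.
# Assumes boto3 and an existing classic ELB; all names and prices are placeholders.
import boto3

autoscaling = boto3.client("autoscaling", region_name="us-east-1")

# Launch configuration that bids for spot capacity (kept below the on-demand price).
autoscaling.create_launch_configuration(
    LaunchConfigurationName="workers-spot-lc",
    ImageId="ami-12345678",          # placeholder AMI
    InstanceType="m3.large",
    SpotPrice="0.09",                # maximum bid in USD/hour
)

# Spot-only group: scales out while the market price stays under the bid.
autoscaling.create_auto_scaling_group(
    AutoScalingGroupName="workers-spot",
    LaunchConfigurationName="workers-spot-lc",
    MinSize=0,
    MaxSize=20,
    AvailabilityZones=["us-east-1a", "us-east-1b"],
    LoadBalancerNames=["workers-elb"],
)

# On-demand group: the minimum needed to keep the lights on, behind the same ELB.
autoscaling.create_launch_configuration(
    LaunchConfigurationName="workers-ondemand-lc",
    ImageId="ami-12345678",
    InstanceType="m3.large",
)
autoscaling.create_auto_scaling_group(
    AutoScalingGroupName="workers-ondemand",
    LaunchConfigurationName="workers-ondemand-lc",
    MinSize=2,
    MaxSize=20,
    AvailabilityZones=["us-east-1a", "us-east-1b"],
    LoadBalancerNames=["workers-elb"],
)
```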

Engineers can also use this process to make background jobs run faster, using spot instances to supplement a scheduled runtime whenever the bid price is below a certain figure, thus minimizing the impact on end users and potentially saving cost relative to reserved and on-demand instances.

For those not interested in writing custom scripts, Amazon recently acquired ClusterK, which reallocates resources to on-demand resources when spot instances terminate and “opportunistically rebalance” to spot instances when the price fits. This dramatically expands the use-case for spot instances beyond background applications to mission-critical apps and services.

4. Organise and automate

As IT teams evolve to a more service-oriented structure, highly distributed teams will increasingly have more autonomy over provisioning resources without the red tape and extensive time delay of traditional IT environments. While this is a crucial characteristic of any DevOps team, if it is implemented without the accompanying automation and process best practices, decentralised teams have the potential to produce convoluted and non-standard security rules, configurations, storage volumes, etc. and therefore drive up costs.

The answer to many of these concerns is CloudFormation. The more time an IT team spends in AWS, the more crucial it becomes that the team use CloudFormation. Enterprises deploying on AWS without CloudFormation are not truly taking advantage of all the features of AWS, and are exposing themselves to both security and cost risks as multiple developers deploy nonstandard code that is forgotten about or never updated.

CloudFormation allows infrastructure staff to bake in security, network, and instance family/size configurations, so that the process of deploying instances is not only faster but also less risky. Used in combination with a configuration management tool like Puppet, it becomes possible to bring up instances that are ready to go in a matter of minutes. Puppet manifests also provide canonical reference points if anything does not go as planned, and Puppet maintains the correct configuration even if that means reverting to an earlier version. For example, a custom fact can report which security groups an instance is running in, while a manifest automatically associates the instance with specific groups as needed. This can significantly lower the risk of downtime associated with faulty deploys. CloudFormation can also dictate which families of instances should be used, if it is important to leverage previously purchased RIs or to retain the flexibility to do so at a later point.
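
As a rough illustration of baking standards into a template, the sketch below uses boto3 to launch a stack from a minimal inline template that restricts deployments to an approved list of instance types. The template body, AMI ID and stack name are placeholders rather than a recommended production pattern.

```python
# Sketch: launching a standardised stack from a CloudFormation template with boto3.
# The template, AMI ID and stack name are illustrative placeholders only.
import json
import boto3

template = {
    "AWSTemplateFormatVersion": "2010-09-09",
    "Parameters": {
        "InstanceType": {
            "Type": "String",
            "Default": "t2.small",
            "AllowedValues": ["t2.small", "t2.medium", "m3.large"],  # approved sizes
        }
    },
    "Resources": {
        "AppInstance": {
            "Type": "AWS::EC2::Instance",
            "Properties": {
                "ImageId": "ami-12345678",              # placeholder AMI
                "InstanceType": {"Ref": "InstanceType"},
            },
        }
    },
}

cloudformation = boto3.client("cloudformation", region_name="us-east-1")
cloudformation.create_stack(
    StackName="app-standard-stack",
    TemplateBody=json.dumps(template),
    Parameters=[{"ParameterKey": "InstanceType", "ParameterValue": "t2.small"}],
)
```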

Granted, maintenance of these templates requires a significant amount of staff time, and can initially feel like a step backwards in terms of cost efficiency. CloudFormation takes some time to learn. But investing the time and resources will have enormous impacts on a team’s ability to deploy quickly and encourage consistency within an AWS account. Clutter builds up in any environment, but this can be significantly reduced when a team automates configuration and deployment.

5. Instance types and resource optimisation

Amazon is constantly delivering new products and services. Most of these have been added as a direct result of customer comments about cost or resource efficiencies, and it is well worth keeping on top of these releases to discover if the cost savings outweigh the cost of implementing a new solution. If the team is using CloudFormation, this may be easier.

New instance types often have cost-savings potential. For instance, last year Amazon launched the T2 instances, which provide low-cost, stable processing power during quiet periods and the ability to build up “CPU credits” in those periods to spend automatically during busy times. This is particularly convenient for bursty applications with rare spikes, like small databases and development tools.

A number of Amazon’s new features over the last year have related to price transparency, including the pricing tiers of reserved instances, so it appears safe to expect more services that offer additional cost efficiencies in the next several years.


NZ Ministry of Health taps IBM gov cloud

NZ’s Ministry of Health is moving some of its core services onto IBM cloud

New Zealand’s Ministry of Health has enlisted IBM to help the department set up a cloud-based system to support the country’s national healthcare IT infrastructure.

The Ministry manages a set of technical services that support both internal IT systems and national health systems including the National Health Payment System, which processes transactions for pharmacies and healthcare providers, and a National Health Index, which supports planning and coordination in health service delivery.

The deal will see the Ministry deploy all of its internal systems on IBM’s managed cloud infrastructure (hosted in-country) for a minimum of five years.

“The agreement is a key element in improving the Ministry of Health’s ability to deliver shared services for the sector, which enables secure access to personal health records for patients and their health care providers,” said Graeme Osborne, Director of the National Health IT Board. “Our aim is to improve productivity and patient safety, and enable new models of care through strategic technology investments.”

The move follows a pledge made by Health Benefits Limited, the crown company set up in 2010 to support health service provision, to consolidate the infrastructure of all twenty District Health Boards onto IBM’s cloud platform.

“We continue to invest in advanced technology infrastructure vital for New Zealand’s long-term economic growth. IBM’s cloud services offer customers like the Ministry of Health the most comprehensive enterprise-grade cloud environment in New Zealand and will support new, enhanced services for the public, suppliers and staff,” says Andrew Buchanan, cloud business leader, IBM New Zealand.

“This agreement further demonstrates our leadership and commitment to health care innovation,” Buchanan added.

Enterprise @Splunk «Best» DevOps Collaboration | @DevOpsSummit [#DevOps]

“This win means a great deal to us because it is decided by the readers – the people who understand how use of our technology enables new insights that drive the business,” said Matt Davies, senior director, EMEA marketing, Splunk. “Splunk Enterprise enables organizations to improve service levels, reduce operations costs, mitigate security risks, enhance DevOps collaboration, create new product and service offerings and obtain deeper insight into customer behavior. Being named Best Business Application underlines the value Operational Intelligence delivers to our customers.”


Comcast, Lenovo join OpenDaylight SDN effort

Comcast and Lenovo have thrown their weight behind the OpenDaylight Project

Comcast and Lenovo have thrown their hats into the OpenDaylight Project, an open source collaboration between many of the industry’s major networking incumbents on the core architectures enabling software defined networking (SDN) and network function virtualisation (NFV).

The recent additions bring the OpenDaylight Project, a Linux Foundation Collaborative Project, to just over the fifty-member mark. The community is developing an open source SDN architecture and software (Helium) that supports a wide range of protocols including OpenFlow, the southbound protocol around which most vendors have consolidated.

“We’re seeing more end users starting to adopt OpenDaylight and participate in its development as the community sharpens its focus on stability, scalability, security and performance,” said Neela Jacques, executive director, OpenDaylight.

“Comcast has been testing ODL and working with our community since launch and the team at Lenovo were heavily involved in ODL’s foundation through their roots at IBM. Our members see the long-term value of creating a rich ecosystem around open systems and OpenDaylight,” Jacques said.

Igor Marty, chief technology officer, Lenovo Worldwide SDN and NFV said: “We believe that the open approach is the faster way to deploy solutions, and what we’ve seen OpenDaylight achieve in just two years has been impressive. The OpenDaylight community is truly leading the path toward interoperability by integrating legacy and emerging southbound protocols and defining northbound APIs for orchestration.”

The move will no doubt give the project more credibility in both carrier and enterprise segments.

Since Lenovo’s acquisition of IBM’s low-end x86 server unit it has been pushing heavily to establish itself as a serious player among global enterprises, where open standards continue to gain favour when it comes to pretty much every layer of the technology stack.

Comcast is also placing SDN at the core of its long-term network strategy and has already partnered with CableLabs, a non-profit R&D outfit investigating technology innovation and jointly owned by operators globally, on developing southbound plugins for OpenDaylight’s architecture.

“Like many service providers, Comcast is motivated to reduce the operational complexity of our networks. In the near-term this involves significant improvements to network automation under what we call our Programmable Network Platform. This framework outlines a stack of behaviors and abstraction layers that software uses to interact with the network,” explained Chris Luke, senior principal engineer, Comcast and OpenDaylight Advisory Group member.

“Some of our key objectives are to simplify the handoffs from the OSS/BSS systems, empower engineers to rapidly develop and deploy new services and to improve the operational support model. It is our hope that by harmonizing on a common framework and useful abstractions, more application groups within the company will be able to make use of better intelligence and more easily interact with the network.”

Luke said the company already has several proof-of-concepts in place, including an app that provides network intelligence abstraction in a way that allows it to treat its internal network like a highly elastic CDN, and mechanisms to integrate overlay edge services with legacy network architectures like MPLS.

“When ODL was launched we were excited to see that the industry was moving to a supportable open source model for SDN. There were a growing number of proprietary SDN controllers at the time and that had service providers like us questioning the direction of the market and whether it made sense to us. We were pleased to see an open source platform come forward aiming to provide a neutral playing field with support for more than just OpenFlow.”

Microsoft invests in undersea cables to connect global data centres


Microsoft has announced a series of partnerships to invest in subsea cables and terrestrial dark fibre capacity in order to better connect their global data centres.

The partnerships are with Ireland-based subsea network capacity provider AquaComms and telecoms service providers Hibernia Networks and Chunghwa Telecom. David Crowley, managing director for network enablement at Azure, notes: “As people and organisations expect data and information at their fingertips, Microsoft must have an infrastructure that can deliver the cloud services, including Azure, which our customers need to support their global businesses.”

Hibernia announced it had been selected by Microsoft to provide connectivity between Canada, Ireland and the UK; AquaComms revealed Microsoft was its first customer on the America Europe Connect subsea cable system; and Chunghwa announced the beginning of construction of the New Cross Pacific (NCP) Cable Network, with Redmond among the NCP consortium.

In a blog post, Crowley explained: “When we look to the future with these investments, we believe our customers will see that Microsoft is pulling together all the components necessary to make its cloud services the most reliable, accessible and secure.

“Competition in the cloud and infrastructure space continues to heat up. But it’s not a battle that will be won on just cloud or infrastructure alone, but instead on holistic innovation and providing value to customers from the ‘sea to the sky,’” he added.

As cloud adoption grows, network traffic will bear the brunt of it, and so the new cables will enable Microsoft to deliver data at higher speeds, with higher capacity and lower latency, for global customers. Naturally, Microsoft’s own initiatives will anticipate a spike. The scheduled end of support for Windows Server 2003 on July 14 will be part of this: recent figures from the Cloud Industry Forum revealed more than 80% adoption of cloud services in the UK, with 58% of companies polled still running WS2003.

Sage, Salesforce partner to offer cloud-based SME accounting solutions

Sage has developed a cloud-based offering for SMEs on the Salesforce platform

Accounting software incumbent Sage has partnered with Salesforce to develop business software based on Salesforce’s cloud platform.

The two companies jointly developed Sage Life, which is being pitched as a set of cloud-based, mobile-enabled payroll and accounting tools for small businesses based on the Salesforce1 platform.

“Together with Salesforce, Sage is shaping the future of small business. Small business software no longer has to represent different systems or layers of complexity – it’ll be simple, collaborative, and real time,” said Stephen Kelly, chief executive of Sage.

“With Sage Life, we are delivering social, mobile, cloud-based innovation, powered by real-time accounting. Now running a small business can be as easy as updating your Facebook status,” Kelly said.

The company said the software will help give small businesses a consolidated view of their customers, something often difficult to achieve given a fragmented technology landscape (SMEs don’t typically have the cash to spend on strong systems integration).

The move is a positive sign for Salesforce, which has attracted a wide range of new and legacy ISVs to its platform; Sage is quite popular in the UK (where it is based) and while it has its own cloud service in Sage One, developing a Salesforce-based alternative to its legacy solutions could broaden its reach.

The partnership comes as rumours surrounding Salesforce’s potential acquisition continue to swell. Salesforce has repeatedly declined to comment on rumours that it is working with financial advisors and fielding acquisition inquiries, with many betting that Microsoft may be one of the suitors in the running.

Sematext’s @DevOpsSummit Blog Exceeds 30,000 Reads | @Sematext [#DevOps]

Sematext is a globally distributed organization that builds innovative Cloud and On Premises solutions for performance monitoring, alerting and anomaly detection (SPM), log management and analytics (Logsene), and search analytics (SSA). We also provide Search and Big Data consulting services and offer 24/7 production support for Solr and Elasticsearch.


PricewaterhouseCoopers to Present at @DevOpsSummit | @PwC_LLP [#DevOps]

In today’s digital world, change is the one constant. Disruptive innovations like cloud, mobility, social media, and the Internet of Things have reshaped the market and set new standards in customer expectations. To remain competitive, businesses must tap the potential of emerging technologies and markets through the rapid release of new products and services. However, the rigid and siloed structures of traditional IT platforms and processes are slowing them down – resulting in lengthy delivery cycles and a poor customer experience.


Processing Metrics, Logs & Traces By @Sematext | @DevOpsSummit [#DevOps]

Application metrics, logs, and business KPIs are a goldmine. It’s easy to get started with the ELK stack (Elasticsearch, Logstash and Kibana) – you can see lots of people coming up with impressive dashboards, in less than a day, with no previous experience. Going from proof-of-concept to production tends to be a bit more difficult, unfortunately, and it tends to gobble up our attention, time, and money.
In his session at DevOps Summit, Otis Gospodnetić, co-author of Lucene in Action and founder of Sematext, will share the architecture and decisions behind Sematext’s services for handling large volumes of performance metrics, traces, logs, anomaly detection, alerts, etc. He’ll follow data from its sources, its collection, aggregation, storage, and visualization. He will also cover the overview of some of the relevant technologies and their strengths and weaknesses, such as HBase, Elasticsearch, and Kafka.
