Here to help unpack insights into the new era of using containers to ease multi-cloud deployments are our panelists: Matt Baldwin, Founder and CEO at StackPointCloud, based in Seattle; Nic Jackson, Developer Advocate at HashiCorp, based in San Francisco; and Reynold Harbin, Director of Product Marketing at DigitalOcean, based in New York. The discussion is moderated by Dana Gardner, principal analyst at Interarbor Solutions.
Monthly archive: February 2019
CloudBees Makes Kubernetes a Reality for Enterprise DevOps | @KubeSUMMIT @CloudBees #Serverless #Kubernetes
CloudBees, Inc., the hub of enterprise Jenkins and DevOps, today announced a major investment in Kubernetes technology across all areas of the business. CloudBees has made Kubernetes a key part of its long-term strategy by fully supporting it in CloudBees Jenkins Enterprise, acquiring key Kubernetes talent and joining the Cloud Native Computing Foundation. With today’s news, CloudBees offers the industry’s first Kubernetes-based, enterprise-ready continuous delivery solution, CloudBees Jenkins Enterprise, delivering multi-cloud portability.
Japan DX Pavilion at @CloudEXPO Silicon Valley | @JETROUSA @IDE_JETRO #Cloud #CIO #IoT #DevOps #Blockchain #SmartCities
The Japan External Trade Organization (JETRO) is a non-profit organization that provides business support services to companies expanding to Japan. With the support of JETRO’s dedicated staff, clients can incorporate their business; receive visa, immigration, and HR support; find dedicated office space; identify local government subsidies; get tailored market studies; and more.
51 Useful Docker Tools | @KubeSUMMIT #CloudNative #DevOps #Serverless #AWS #Docker #Kubernetes #Monitoring
Docker is sweeping across startups and enterprises alike, changing the way we build and ship applications. It’s the most prominent and widely known software container platform, and it’s particularly useful for eliminating common challenges when collaborating on code (like the “it works on my machine” phenomenon that most devs know all too well). With Docker, you can run and manage apps side-by-side – in isolated containers – resulting in better compute density. It’s something that many developers don’t think about, but you can even use Docker with ASP.NET.
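To make the side-by-side idea concrete, here is a minimal sketch using the Docker SDK for Python; the image names, host ports and clean-up steps are placeholder assumptions for illustration, not anything prescribed above, and it assumes a local Docker daemon is running.

```python
# A minimal sketch, assuming the Docker SDK for Python ("docker" package)
# and a local Docker daemon; images and host ports are placeholders.
import docker

client = docker.from_env()

# Each app runs side-by-side in its own isolated container.
web = client.containers.run("nginx:alpine", detach=True, ports={"80/tcp": 8080})
cache = client.containers.run("redis:alpine", detach=True, ports={"6379/tcp": 6379})

# List what is running, then clean up.
for container in client.containers.list():
    print(container.name, container.image.tags, container.status)

for container in (web, cache):
    container.stop()
    container.remove()
```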
How to Sponsor @CloudEXPO Silicon Valley | #Cloud #IoT #Blockchain #Serverless #DevOps #Monitoring #Docker #Kubernetes
At CloudEXPO Silicon Valley, June 24-26, 2019, Digital Transformation (DX) is a major focus with expanded DevOpsSUMMIT and FinTechEXPO programs within the DXWorldEXPO agenda. Successful transformation requires a laser focus on being data-driven and on using all the tools available that enable transformation; organisations have little choice if they plan to survive over the long term. A total of 88% of Fortune 500 companies from a generation ago are now out of business, and only 12% survive. Similar percentages are found throughout enterprises of all sizes.
Healthcare firms go for the hybrid cloud approach with compliance and connectivity key
It continues to be a hybrid cloud-dominated landscape – and, according to new research, one of the industries where cloud adoption has traditionally been toughest is now seeing it as a priority.
A report from enterprise cloud provider Nutanix found that more than a third (37%) of healthcare organisations polled expect to be running hybrid cloud in two years’ time – a major increase from less than a fifth (19%) today.
The study, which polled more than 2,300 IT decision makers, including 345 from global healthcare organisations, found more than a quarter (28%) of respondents saw security and compliance as the number one factor in choosing where to run workloads. It’s not entirely surprising: all data may be created equal, but healthcare is certainly an industry whose data is more equal than others’. Factor in compliance initiatives, particularly HIPAA, and it’s clear how vital the security message is.
Yet another key area is around IT spending. The survey found healthcare organisations were around 40% over budget when it came to public cloud spend, compared to a 35% average for other industries. Organisations polled who currently use public cloud spend around a quarter (26%) of their annual IT budget on it – a number which is expected to rise to 35% in two years.
Healthcare firms see ERP and CRM, analytics, containers and IoT – the latter an obvious fit for connected medical devices – as important use cases for public cloud, with average penetration in healthcare just above the global figure. 88% of those polled said they expect hybrid cloud to positively impact their businesses – yet skills remain a major issue, with hybrid cloud behind only AI and machine learning as an area where healthcare firms are struggling for talent.
It is certainly an area the largest vendors have been targeting in recent months. Amazon Web Services (AWS) announced in September a partnership with Accenture and Merck to build a cloud-based informatics research platform aiming to help life sciences organisations explore drug development. Google took the opportunity at healthcare conference HIMSS to launch a new cloud healthcare API, focusing on data types such as HL7, FHIR and DICOM.
Naturally, Nutanix is also in the business of helping healthcare organisations with their cloud migrations. Yet increased maturity across the industry will make for interesting reading. The healthcare IT stack of the future will require different workloads in different areas, with connectivity the key. More than half of those polled said ‘inter-cloud application mobility’ was essential going forward.
“Healthcare organisations especially need the flexibility, ease of management and security that the cloud delivers, and this need will only become more prominent as attacks on systems become more advanced, compliance regulations more stringent, and data storage needs more demanding,” said Chris Kozup, Nutanix SVP of global marketing. “As our findings predict, healthcare organisations are bullish on hybrid cloud growth for their core applications and will continue to see it as the ideal solution as we usher in the next era of healthcare.
“With the cloud giving way to new technologies and tools such as machine learning and automation, we expect to see positive changes leading to better healthcare solutions in the long run,” Kozup added.
How new cloud agents are increasing confidence in the public cloud
By now, articles espousing migration to the cloud are a dime a dozen. You can find everything from simple how-tos to complete lift-and-shift project plans touted as the best way to migrate. At face value, the advantages and cost savings of moving enterprise applications to the public cloud are easy to grasp. Why wouldn’t your enterprise leverage the scale and power of the cloud, which grows as your business grows, without the huge capital investment of adding to an existing data centre?
Regardless of where a company is in this transition, there are a few cloud myths that always seem to rear their ugly heads. One of my personal favourites is, “We don’t need to use security/network tools in the cloud because we’ll never have those issues in the cloud.” To some extent this is correct (I have yet to see a malfunctioning ethernet card cause issues with a subset of my cloud instances). However, this doesn’t mean the plethora of connectivity snafus and end-user application issues has subsided. They’ve simply changed, and diagnosing the root cause is now more difficult.
While the cloud simplifies infrastructure provisioning and management, the new challenges that present themselves must be solved before full-scale cloud deployment takes place. Some of the integrated cloud tools can assist, such as flow logs or any of the infrastructure monitoring elements. But all of these elements come at a cost as you expand them wider across your environment.
Part of the answer to this security challenge is to gain full visibility into the data you host in the cloud. Network engineers need to determine how they will gain access, visibility and control of their data before moving enterprise applications to the cloud because, once there, they lose access to the traditional tools used in the data centre to diagnose these problems.
Packet capture to the rescue
In the data centre, there are numerous proven solutions for network security that translate perfectly to the cloud. In the DC, physical choke points are used for various network and security services. This architecture migrates easily to the cloud, but now one must focus on “logical” choke points. Another key practice in DCs that is retained in the cloud is to “log everything.” Today’s log and log management capabilities in the cloud outpace on-prem solutions. However, the cloud lags the physical world when it comes to packet capture.
In the data centre, full packet capture and analysis are a key factor in troubleshooting performance issues or forensically identifying security threats. Full packet capture is like having an 80-inch 4K picture-in-picture screen running your favourite programming 24/7: packet captures provide much more detail than application logs or network flow logs. When a security team is trying to replay the exact data that was exfiltrated or identify the delay in an application, those logs don’t suffice. So what should a network security team do?
How can one stay ahead of security breaches or network issues when every single packet matters for the security and performance of the business? To achieve this level of insight in the cloud, you need three things:
- Accurate packet-level history of network activity so the security team can recreate events and look at related packets to identify exactly what happened and when
- 100% packet capture of traffic that will help detect a threat or identify a network performance issue in real time
- A network monitoring tool that copies packets of all sizes and types and provides complete visibility
To acquire, process and distribute cloud packet traffic to your monitoring tools, IT teams are turning to next-generation cloud agents. These highly specialised agents instrument the cloud and enable packet monitoring and analysis in detail.
Because they are cloud-native, modern agents can continuously stream virtual machine network traffic to a network packet collector or analytics tool. With cloud agents, users can acquire packet traffic from any public cloud provider and cloud compute resources.
Cloud agent technology is designed to filter and process the packets; then replicate and distribute the information to the tools and teams that need it. The agent can send traffic to any routable IP address including tool destinations like IDS and DPI security tools, as well as to load balancers that front scalable tool clusters. The agent can even send packet traffic to your on-premises systems via Express Route or Direct Connect.
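As a rough illustration of that filter-replicate-distribute flow (a sketch, not any vendor’s actual agent), the snippet below uses scapy to capture traffic that matches a filter and forward a copy of each frame to a collector; the interface name, BPF filter and collector address are assumed placeholders.

```python
# Minimal filter-and-forward sketch, assuming scapy is installed, the process
# has capture privileges, and a hypothetical collector listens for raw frames.
import socket
from scapy.all import sniff

COLLECTOR = ("10.0.0.50", 4789)   # placeholder tool destination (IDS, DPI, etc.)
BPF_FILTER = "tcp port 443"       # the "logical choke point": capture only what the tools need

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)

def forward(pkt):
    # Replicate the captured frame and ship it to the collector over UDP.
    sock.sendto(bytes(pkt), COLLECTOR)

# store=False keeps memory flat while streaming packets continuously.
sniff(iface="eth0", filter=BPF_FILTER, prn=forward, store=False)
```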
Lastly, cloud agents will cut down on data transport charges while increasing the life and utility of the tools and teams already in place. These easy-to-deploy, flexible agents help connect the apps and resources in the cloud and maximise your access, visibility and control of the data placed there.
The public cloud offers incredible opportunity for your enterprise and with the right formula, you can get full packet capture, analysis and distribution using born-in-the-cloud, for-the-cloud agents. If it’s confidence and security you’re thinking about as you consider how to fully leverage the cloud, consider how cloud agents can serve as the policy driven, cloud networking solution to activate and enable your security and monitoring tools.
Ten Attributes of Serverless | @KubeSUMMIT #Serverless #FaaS #AWS #Lambda #OpenWhisk #Docker #Kubernetes
Serverless Computing, or Functions as a Service (FaaS), is gaining momentum. Amazon is fueling the innovation by expanding Lambda to edge devices and its content distribution network. IBM, Microsoft, and Google have their own FaaS offerings in the public cloud. There are over half a dozen open source serverless projects that are getting the attention of developers.
Serverless, FaaS, AWS and Lambda | @KubeSUMMIT #CloudNative #Serverless #DevOps #FaaS #AWS #Lambda #Monitoring #Docker #Kubernetes
If you are part of the cloud development community, you certainly know about “serverless computing,” which is almost a misnomer: it implies there are no servers, which is untrue. The servers are simply hidden from developers. This model eliminates operational complexity and increases developer productivity.
We came from monolithic computing to client-server to services to microservices to the serverless model. In other words, our systems have slowly “dissolved” from monolithic to function-by-function. Software is developed and deployed as individual functions – each a first-class object that the cloud runs for you. These functions are triggered by events that follow certain rules. Functions are written in a fixed set of languages, with a fixed set of programming models and cloud-specific syntax and semantics. Cloud-specific services can be invoked to perform complex tasks. So for cloud-native applications, serverless offers a new option. But the key question is what you should use it for, and why.
Amazon’s AWS, as usual, spearheaded this in 2014 with an engine called AWS Lambda. It supports Node, Python, C# and Java, and uses AWS API triggers for many AWS services. IBM offers OpenWhisk as a serverless solution that supports Python, Java, Swift, Node, and Docker; IBM and third parties provide service triggers, and the code engine is Apache OpenWhisk. Microsoft provides similar functionality with Azure Functions. Google Cloud Functions supports Node only and has many other limitations.
This model of computing is also called “event-driven” or FaaS (Function as a Service). There is no need to manage provisioning and utilization of resources, nor to worry about availability and fault-tolerance. It relieves the developer (or DevOps) from managing scale and operations. Therefore, the key marketing slogans are event-driven, continuous scaling, and pay by usage. This is a new form of abstraction that boils down to function as the granular unit.
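As a minimal illustration of “function as the granular unit,” here is a sketch of a Python handler in the shape AWS Lambda expects; the event fields and the API Gateway-style response are assumptions made up for the example.

```python
# Sketch of an event-triggered function; the cloud provisions, scales and
# invokes it per event, so there is no server for the developer to manage.
import json

def handler(event, context):
    # 'event' carries the trigger payload (e.g. an API Gateway request);
    # 'context' exposes runtime metadata such as remaining execution time.
    name = event.get("name", "world")  # hypothetical input field
    return {
        "statusCode": 200,
        "body": json.dumps({"message": f"Hello, {name}"}),
    }
```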
At the micro level, serverless seems pretty simple – just develop a procedure and deploy it to the cloud. However, there are several implications. It imposes a lot of constraints on developers and brings a load of new complexity, plus cloud lock-in: you have to pick one of the cloud providers and stay there, as switching is not easy. Areas to ponder are cost, complexity, testing, emergent structure, vendor dependence, etc.
Serverless has been getting a lot of attention in the last couple of years. We will wait and see the lessons learned as more developers start deploying it in real-world web applications.
Microservice Forensics | @KubeSUMMIT @BuoyantIO @Linkerd #CloudNative #Serverless #DevOps #Docker #Kubernetes #Microservices
Because Linkerd is a transparent proxy that runs alongside your application, there are no code changes required. It even comes with Prometheus to store the metrics for you and pre-built Grafana dashboards to show exactly what is important for your services – success rate, latency, and throughput.
In this session, we’ll explain what Linkerd provides for you, demo the installation of Linkerd on Kubernetes, and debug a real-world problem. We will also dig into the functionality you can build on top of the tools provided by Linkerd, such as alerting and autoscaling.
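For a sense of what “success rate” looks like once the bundled Prometheus is scraping your services, here is a sketch that queries the Prometheus HTTP API from Python; the port-forwarded address, the deployment name and the response_total metric and label names are assumptions for illustration, not a documented Linkerd interface.

```python
# Sketch of querying the bundled Prometheus for a service's success rate,
# assuming it is reachable locally (e.g. via kubectl port-forward) and that
# the response_total counter and its labels exist as assumed here.
import requests

PROM_URL = "http://localhost:9090"  # hypothetical port-forwarded Prometheus
QUERY = (
    'sum(rate(response_total{classification="success", deployment="web"}[1m]))'
    ' / sum(rate(response_total{deployment="web"}[1m]))'
)

resp = requests.get(f"{PROM_URL}/api/v1/query", params={"query": QUERY})
resp.raise_for_status()
for result in resp.json()["data"]["result"]:
    print("success rate:", result["value"][1])
```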