Containers, or Container Management? | @DevOpsSummit @DMacVittie #DevOps #Docker #Kubernetes

We just came off a review of a product that handles both containers and virtual machines in the same interface. Under the covers, the container implementation defaults to LXC, though Docker support was recently added.
When reading online or searching for information, we increasingly see “container management” products listed as competitors to Docker, when in reality things like Rocket, LXC/LXD, and virtualization are Docker’s competitors.
After doing some looking around, we have concluded that this shift in how Docker is perceived in the market is due largely to the fact that it is easy to set up and begin using. Inevitably, if the Docker install is large enough, something like Docker Enterprise Edition, Kubernetes, or Apache Mesos/DC/OS will be required for container management, but at the entry level, “container management” has less stringent requirements.

read more

Consider the Human Side of IT | @CloudExpo #AI #ML #DX #Cloud

Artificial intelligence, virtual reality, software-defined networking, Hyperconverged Infrastructure, the cloud. These are all technological advancements intended to improve how IT systems and operations work, to give the business better agility and reduced costs, and ultimately to give end-users a new and better experience.  Quite often, however, IT’s creators and consumers alike lose sight of the […]

The post Consider the Human Side of IT appeared first on Plexxi.

read more

[video] Blockchain Technologies with @IBMcloud | @CloudExpo #ML #FinTech #Blockchain

“IBM is really all in on blockchain. We take a look at sort of the history of blockchain ledger technologies. It started out with bitcoin, Ethereum, and IBM evaluated these particular blockchain technologies and found they were anonymous and permissionless and that many companies were looking for permissioned blockchain,” stated René Bostic, Technical VP of the IBM Cloud Unit in North America, in this SYS-CON.tv interview at 21st Cloud Expo, held Oct 31 – Nov 2, 2017, at the Santa Clara Convention Center in Santa Clara, CA.

read more

Tech News Recap for the Week of 11/27/17

If you had a busy week and need to catch up, here’s a tech news recap of articles you may have missed the week of 11/27/2017!

How SDN can improve compliance and customer experience. How blockchain can transform the manufacturing industry. Barracuda Networks is sold to Thoma Bravo for $1.6 billion. Why 2018 will be the year of the WAN, and more top news this week you may have missed! Remember, to stay up-to-date on the latest tech news throughout the week, follow @GreenPagesIT on Twitter.

Tech News Recap

Featured

IT Operations

  • Networking verification: Predictions for the future of complex networks
  • How blockchain can transform the manufacturing industry
  • Barracuda Networks is sold to Thoma Bravo for $1.6 billion
  • How VDI became the workforce transition solution that’s been staring us in the face
  • Data center cooling market set to explode in the coming years
  • How artificial intelligence will self-manage the data center
  • Why 2018 will be the year of the WAN

[Interested in learning more about SD-WAN? Download What to Look For When Considering an SD-WAN Solution.]

Amazon

Microsoft

Dell

  • Dell EMC one year in: How are they performing and how did they do it?
  • Dell EMC updates hyperconverged VxRail XC Series systems with latest PowerEdge servers

Cisco

  • What to expect from Cisco in 2018

VMware

Cloud

Security

Thanks for checking out our tech news recap!

By Jake Cryan, Digital Marketing Specialist

While you’re here, check out this white paper on how to rethink your IT security, especially when it comes to financial services.

[slides] Build and Deploy Artificial Intelligence | @CloudExpo #IoT #AI #ML #DX #ArtificialIntelligence

The question before companies today is not whether to become intelligent, it’s a question of how and how fast. The key is to adopt and deploy an intelligent application strategy while simultaneously preparing to scale that intelligence. In her session at 21st Cloud Expo, Sangeeta Chakraborty, Chief Customer Officer at Ayasdi, provided a tactical framework to become a truly intelligent enterprise, including how to identify the right applications for AI, how to build a Center of Excellence to operationalize the intelligence and how to implement a strategy to scale efforts. She pulled from her experience helping HSBC tackle money-laundering threats and identifying genetic susceptibilities to diseases for Mt. Sinai with machine intelligence.

read more

Trend Micro and AWS collaborate to boost cloud security

Trend Micro, which has announced integration with Amazon GuardDuty, is also one of the first companies to participate in Enterprise Contracts for AWS Marketplace.

Enterprise Contracts for AWS Marketplace provides standardised contract terms across many software vendors to simplify and hasten procurement.

Trend Micro also expects to deliver application protection rules as part of the recently announced Amazon Web Services (AWS) Web Application Firewall (WAF) Managed Rules Partner Program.

“AWS Marketplace is simplifying the enterprise software procurement experience to accelerate customer innovation,” comments Dave McCann, VP of AWS Marketplace and Catalog Services, Amazon Web Services. “Trend Micro is a valued AWS Marketplace seller that embraces customer feedback to drive their innovation in product and business models. We are delighted to have them as one of the first companies for Enterprise Contract for AWS Marketplace.”

The Trend Micro and Amazon GuardDuty integration permits users to take advantage of the security findings from Amazon GuardDuty in order to make smarter decisions with their Amazon Elastic Compute Cloud (Amazon EC2) and Amazon EC2 Container Service (Amazon ECS) workloads.
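
As a generic illustration only (this is not Trend Micro’s product; the region, severity threshold, and filter values below are assumptions), pulling GuardDuty findings programmatically so another tool can act on them might look something like this with boto3:

```python
# Hypothetical sketch: read Amazon GuardDuty findings with boto3 so a
# separate tool or script can react to them. Region and severity cutoff
# are assumptions, not values from the article.
import boto3

guardduty = boto3.client("guardduty", region_name="us-east-1")

# A GuardDuty detector must already exist in this account and region.
for detector_id in guardduty.list_detectors()["DetectorIds"]:
    # Only medium-and-higher severity findings (threshold chosen arbitrarily).
    finding_ids = guardduty.list_findings(
        DetectorId=detector_id,
        FindingCriteria={"Criterion": {"severity": {"Gte": 4}}},
    )["FindingIds"]

    if not finding_ids:
        continue

    # GetFindings accepts at most 50 IDs per call; pagination is glossed over here.
    findings = guardduty.get_findings(
        DetectorId=detector_id, FindingIds=finding_ids[:50]
    )["Findings"]

    for finding in findings:
        # Each finding names the affected resource (for example an EC2 instance),
        # which is what a workload-protection tool would key off of.
        print(finding["Type"], finding["Severity"], finding["Resource"]["ResourceType"])
```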

Kevin Simzer, EVP at Trend Micro, said: “We are proud to provide an extra layer of protection to the innovative applications that AWS builders are deploying on the cloud. Our collaboration with AWS allows us to deliver scalable security that removes friction from procurement, devops lifecycle and day-to-day operations.”

What are your thoughts on the Trend Micro and AWS collaboration? Let us know in the comments.

[video] Custom Applications with @InteractorTeam | @CloudExpo #AI #Cloud

“The reason Tier 1 companies are coming to us is we’re able to narrow the gap where custom applications need to be built. They provide a lot of services, like IBM has Watson, and they provide a lot of hardware but how do you bring it all together? Bringing it all together they have to build custom applications and that’s the niche that we are able to help them with,” explained Peter Jung, Product Leader at Pulzze Systems Inc., in this SYS-CON.tv interview at 21st Cloud Expo, held Oct 31 – Nov 2, 2017, at the Santa Clara Convention Center in Santa Clara, CA.

read more

Meeting Network Traffic and Speed Growth Challenges | @CloudExpo #API #SDN #Cloud

Today’s hyper-connectivity of both people and things has led to an enormous jump in network traffic. Global IP traffic will increase nearly threefold over the next five years. It’s not going to lessen, nor is it going to get slower. These factors create a need for high-speed networks to ensure service level and capacity. 100G network links are deployed by telecom networks and data centers, serving hundreds of thousands of users, so that these providers can keep pace with customer demand. This then creates the need to test and troubleshoot on the networks at 100G link speed.

read more

Amazon EC2 “T2” instances – are you expecting too much?

Some time ago, iuvo decided to move all of its resources into the cloud. Most of our resources were pretty idle and didn’t need lots of CPU and memory. We had experience with Amazon Web Services, but we also looked at others such as Microsoft’s Azure. After creating spreadsheets and examining resource requirements and costs, we decided to go with Amazon Web Services Elastic Compute Cloud (EC2), with most of our infrastructure on the relatively new T2 instances.

The T2 instances offered an attractive cost-to-performance trade-off. They allow bursts of full-speed CPU usage for short periods, where otherwise the system would remain idle – excellent for our mainly internal resources that were more I/O- than CPU-bound, such as code repositories, web sites, etc. The way they work is that you earn "CPU Credits" over time, accumulating them whenever your CPU utilization percentage is below a given "baseline" for the particular instance size. If you go above the baseline, you consume the credits. If you exhaust the credits, your CPU performance is throttled until you can earn more. This is a key point – if you get into a state where your CPU performance is throttled, then whatever is consuming all that CPU time gets less execution time, which makes it take even longer to run to completion – if it ever does.
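
If it helps to picture the mechanics, here is a toy sketch of that earn/spend/throttle cycle. It is illustrative only – the numbers and the accounting are simplified, not AWS’s exact implementation:

```python
# Toy simulation of T2-style CPU credits (illustrative only; AWS's real
# accounting differs in detail). One CPU credit = one vCPU at 100% for one minute.

def simulate_credits(baseline_pct, demand_pct, minutes, starting_credits):
    """baseline_pct and demand_pct are aggregate percentages summed over all vCPUs."""
    credits = starting_credits
    earn_per_min = baseline_pct / 100.0      # e.g. a 40% baseline earns 0.4 credits/min
    for _ in range(minutes):
        if credits > 0:
            used_pct = demand_pct                     # credits available: run at demand
        else:
            used_pct = min(demand_pct, baseline_pct)  # throttled down to the baseline
        credits = max(0.0, credits + earn_per_min - used_pct / 100.0)
    return credits

# t2.medium-like numbers: 40% aggregate baseline, a workload wanting 50% aggregate,
# starting with 60 banked credits -- the balance drains, then the instance throttles.
print(simulate_credits(baseline_pct=40, demand_pct=50, minutes=24 * 60,
                       starting_credits=60))
```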

Another key point is that there is a limit to the number of CPU credits an instance can bank – earned credits expire after 24 hours. This will become important later in our tale.

Our most CPU-intensive system was our central monitoring system. Still, it was mostly I/O-based, as most of the time was spent waiting for other systems to respond to SNMP requests, HTTP checks, etc. We measured it, and it typically ran about 18-20% aggregate CPU consumption on a virtual machine with two virtual CPUs, and rarely peaked above that. It seemed a good match for a t2.medium instance, which AWS advertised as having a 40% baseline performance, 4GB of memory and 2 vCPUs. No problem. We migrated everything and all was well.

After running fine for a while, we needed to update some custom code on the monitoring server. We use the AMQP message queue protocol to farm out monitoring checks to remote devices, and we needed to replace one of the underlying libraries to take advantage of some features in an update of the protocol. This also necessitated moving from a typical procedural model to an event-driven model in our code, which apparently raised our average CPU load to close to 25%. Still, that shouldn't have been a problem.

About a day later, our monitoring system collapsed. Suddenly, most of the checks were timing out, and the number of checks attempting to run simultaneously rose. Checks were backing up in the scheduler, creating a snowball effect – once they started backing up, they got slower, which caused them to back up more, which made them slower…lather, rinse, repeat. Eventually the checks would time out, sending a mass of false alerts. Once the system decided a service was down, it would back off on the checks and reschedule, and the alerts would clear – only to fail again a few minutes later.

We tracked it down to the performance of the instance: we had exhausted our CPU credits and were being throttled to 20% CPU usage when we needed 25%. But we were expecting that 40% baseline, so what happened? We had misunderstood Amazon's chart of baseline performance for "T2" instances. The advertised figure of 2 vCPUs and a 40% baseline meant the 40% was the aggregate across the two vCPUs – that is, 20% of each vCPU. But all the tools reported CPU utilization as the average of all vCPUs – so while the graph showed us 25% utilization, it meant EACH vCPU was using 25%, and the aggregate was 50%. We were over baseline by 10% and burning our CPU credits. This also makes the baseline of "135%" for a t2.2xlarge more sensible, since you couldn't otherwise exceed a 100% average. Amazon has since clarified the chart, noting that the 40% for a t2.medium instance is "out of 200% max," with an annotation as to what they mean.
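
To make the arithmetic concrete, here is the back-of-the-envelope check using the numbers above (the figures are ours; the conversion is just average-per-vCPU times vCPU count):

```python
# Back-of-the-envelope check of the baseline mix-up described above.
vcpus = 2
advertised_baseline = 40     # % for a t2.medium: the aggregate across both vCPUs
reported_utilization = 25    # % as our tools reported it: the average per vCPU

aggregate_utilization = reported_utilization * vcpus         # 25% * 2 = 50%
over_baseline = aggregate_utilization - advertised_baseline  # 50% - 40% = 10%
print(aggregate_utilization, over_baseline)                  # 50 10 -> credits burning
```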

We have since migrated to a "c4" instance, which was more expensive but didn't have these CPU limitations. Changing an instance between types is pretty easy, essentially only requiring you to stop the instance, change its type, and start it again.
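
For what it's worth, that resize can also be scripted. A minimal boto3 sketch might look like the following – the instance ID is a placeholder, and c4.large is used purely as an example target type:

```python
# Minimal sketch of resizing an EBS-backed EC2 instance with boto3.
# The instance ID and target type below are placeholders.
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")
instance_id = "i-0123456789abcdef0"   # placeholder

# The instance must be stopped before its type can be changed.
ec2.stop_instances(InstanceIds=[instance_id])
ec2.get_waiter("instance_stopped").wait(InstanceIds=[instance_id])

# Switch to a fixed-performance instance type (c4.large here, as an example).
ec2.modify_instance_attribute(InstanceId=instance_id,
                              InstanceType={"Value": "c4.large"})

ec2.start_instances(InstanceIds=[instance_id])
ec2.get_waiter("instance_running").wait(InstanceIds=[instance_id])
```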

More recently I ran into another "T2" limitation. I had a client with some EC2 instances and lots of Elastic Block Storage (EBS) volumes for a project. The always-running instances were mostly idle, so "t2" instances made sense. The space needs kept growing, and we needed to consolidate several EBS volumes into one larger one. We did the migration on a t2.large instance (2 vCPUs, 60% baseline), and while it is mainly an I/O-bound operation, it consumed 40-45% CPU on average during the copy against a 30% per-vCPU baseline. We were using 4-4.5 CPU credits every 5 minutes while earning 3…and we had about 800 credits to spare. Not a problem, or so we thought. (See the sanity check below.)
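
Those rates fall out of the credit definition (one credit = one vCPU at 100% for one minute); a quick sanity check:

```python
# Sanity check on the credit burn rate during the EBS copy (t2.large-ish numbers).
vcpus = 2
avg_utilization = 0.40            # ~40% average per vCPU during the copy (up to ~45%)
baseline_aggregate = 0.60         # t2.large: 60% aggregate baseline (30% per vCPU)

# One CPU credit = one vCPU at 100% for one minute.
credits_spent_per_5min = avg_utilization * vcpus * 5      # 0.40 * 2 * 5 = 4.0
credits_earned_per_5min = baseline_aggregate * 5          # 0.60 * 5 = 3.0
print(credits_spent_per_5min, credits_earned_per_5min)    # 4.0 vs 3.0 -> net -1.0/5min
```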

Remember that I said CPU credits expire in 24 hours? We had 800 CPU credits and were consuming at most 1.5 credits more than we earned per 5-minute interval, or 18 per hour. We were moving a lot of data, and when we checked the next day, everything had slowed down. We were throttled back to 30% CPU usage because the credits were all gone! Credits are spent "newest first," in a sense – as our workload exceeded baseline for over 24 hours, we were consuming everything we earned at the time PLUS some of our backlog, so all the previously earned credits expired and we were left with none. Our estimated time for the migration was exceeded.
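
The back-of-the-envelope math shows why the expiry mattered (using the rough rates above):

```python
# Why the 24-hour credit expiry bit us, using the rough rates above.
spare_credits = 800
net_burn_per_hour = 1.5 * 12     # at most ~1.5 credits over earnings per 5 min = 18/hr

print(spare_credits / net_burn_per_hour)   # ~44 hours of expected runway

# But credits expire 24 hours after they are earned, so the 800-credit cushion
# could never actually be spent over a multi-day copy: after roughly a day of
# running above baseline, the banked credits lapsed and the instance was
# throttled back to its 30% (per-vCPU) baseline.
```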

To sum up, just beware of the limitations of the "T2" instances before jumping at the cheap price. Know what your CPU performance needs are, and how your workloads will be affected when you exceed those limits.