CFP Deadline For @CloudEXPO Silicon Valley | #HybridCloud #AI #DevOps #IoT #Blockchain #Serverless #Docker #Kubernetes

At CloudEXPO Silicon Valley, June 24-26, 2019, Digital Transformation (DX) is a major focus, with expanded DevOpsSUMMIT and FinTechEXPO programs within the DXWorldEXPO agenda. Successful transformation requires a laser focus on being data-driven and on using every available tool that enables transformation; enterprises that fail to do so rarely survive over the long term. A full 88% of the Fortune 500 companies from a generation ago are now out of business, and similar attrition is found among enterprises of all sizes.

Red Hat to Present at @KubeSUMMIT | @IBMcloud @RedHat @GHaff @DanielOh30 @VeerMuchandi @ChrisVanTuin #DevOps #Serverless #Docker #Kubernetes

Kubernetes as a container platform is becoming the de facto standard for every enterprise. In my interactions with enterprises adopting a container platform, I come across common questions: How does application security work on this platform? What do I need to secure? How do I implement security in pipelines? What about vulnerabilities discovered at a later point in time? What do newer technologies like Istio service mesh bring to the table? In this session, I will address these commonly asked questions that every enterprise adopting an Enterprise Kubernetes Platform needs answered so that it can make informed decisions.

Commvault Enables Backups For Cisco | @CloudEXPO @Commvault @RDeMeno #Cloud #CIO #DataCenter #Serverless #Backup

ScaleProtect™ with Cisco UCS® expands many of the capabilities of the Cisco HyperFlex platform with backup and recovery. With the broadest ecosystem of integrated public cloud providers, ScaleProtect with Cisco UCS plus Cisco HyperFlex is a true multi-cloud platform. Delivering proven support for enterprise applications like SAP HANA – for which Commvault is certified – ScaleProtect with Cisco UCS allows customers to run mission-critical applications on Cisco HyperFlex with the confidence their data and applications are backed up and available. Pairing ScaleProtect with Cisco UCS with Cisco HyperFlex provides a hyper-converged, scale-out solution that delivers enterprise-class backup and recovery for end-to-end protection of the software-defined data center.

Rene Bostic Joins @CloudEXPO Faculty | @ReneBosticatIBM @IBMCloud @IBMBlockchain #FinTech #Blockchain #SmartCities

Blockchain has shifted from hype to reality across many industries, including Financial Services, Supply Chain, Retail, Healthcare and Government. While traditional tech and crypto organizations are generally male dominated, women have embraced blockchain technology from its inception. Nowhere is this more evident than at companies where women occupy many of the blockchain roles and leadership positions. Join this panel to hear three women in blockchain share their experience and their POV on the future of blockchain.

Colovore to Exhibit at @CloudEXPO | @Colovore #HybridCloud #CIO #DataCenter #AIOps #Serverless #SDN #SDDC #Docker #Kubernetes

Colovore is the Bay Area’s leading provider of high-performance colocation services. Our 9MW state-of-the-art data center in Santa Clara features power densities of 35 kW per rack and a pay-by-the-kW pricing model. We offer colocation the way you want it: cost-efficient, scalable, and robust. Colovore is profitable and backed by industry leaders including Digital Realty Trust. For more information please visit www.colovore.com.

Getting past cloud cost confusion: How to avoid the vendors’ traps and win

Cloud service providers like AWS, Azure, and Google promise to save enterprises money by supplying compute resources more efficiently than on-premises infrastructure. But cloud services pricing is complicated and difficult to understand, which can drive up bills and undermine the promised cost savings. Here are five ways cloud providers obscure pricing on your monthly bill.

Terminology varies

For the purposes of this article, I’ll focus on the three biggest cloud service providers: AWS, Azure, and Google. These three providers alone use different terms for just about every component of the services they offer.

For example, the same virtual machine (VM) is an “instance” in AWS, a “virtual machine” in Azure, and a “virtual machine instance” in Google. A scaling group of these machines is an “auto-scaling group” in Amazon and Google, but a “scale set” in Azure.

There is also different terminology for the pricing models. AWS offers “on-demand” instances, Azure calls it “pay as you go,” and Google has “on-demand” resources that are frequently discounted through “sustained use.” You’ve also got “reserved instances” in AWS, “reserved VM instances” in Azure, and “committed use” in Google. And there are “spot instances” in AWS, which correspond to “low-priority VMs” in Azure and “preemptible instances” in Google.
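To keep this terminology straight, here is the comparison above condensed into a small Python dictionary. The labels on the left are my own shorthand, and the groupings follow this article’s comparison rather than any official vendor equivalence.

```python
# Cross-provider terminology cheat sheet, condensed from the comparison above.
# Groupings are illustrative shorthand, not an official vendor mapping.
CLOUD_TERMS = {
    "virtual machine": {
        "AWS": "instance",
        "Azure": "virtual machine",
        "Google": "virtual machine instance",
    },
    "scaling group": {
        "AWS": "auto-scaling group",
        "Azure": "scale set",
        "Google": "auto-scaling group",
    },
    "pay-per-use pricing": {
        "AWS": "on-demand",
        "Azure": "pay as you go",
        "Google": "on-demand (with sustained use discounts)",
    },
    "commitment discounts": {
        "AWS": "reserved instances",
        "Azure": "reserved VM instances",
        "Google": "committed use",
    },
    "interruptible capacity": {
        "AWS": "spot instances",
        "Azure": "low-priority VMs",
        "Google": "preemptible instances",
    },
}
```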

It’s hard to see what you’re spending

If you aren’t familiar with the AWS, Azure, or Google Cloud consoles and dashboards, it can be hard to find what you’re looking for. Locating specific features takes real digging, and even the basics, like figuring out how much you’re currently spending and predicting how much you will be spending, can be very hard to work out.

You can build your own dashboard by pulling data from the providers’ billing APIs, but that takes a lot of upfront effort; alternatively, you can purchase an external tool to manage overall cost and spending.
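As a hedged sketch of the do-it-yourself route on AWS, the snippet below pulls month-to-date spend by service from the Cost Explorer API using boto3; the region, date range, and grouping are assumptions made for the example.

```python
# Minimal sketch: month-to-date AWS spend by service via the Cost Explorer API.
# Assumes boto3 is installed and credentials allowing ce:GetCostAndUsage are configured.
from datetime import date

import boto3

ce = boto3.client("ce", region_name="us-east-1")  # Cost Explorer is served from us-east-1

today = date.today()
start = today.replace(day=1).isoformat()  # first day of the current month
end = today.isoformat()                   # end date is exclusive
# Note: on the 1st of the month this range is empty and the call will fail.

response = ce.get_cost_and_usage(
    TimePeriod={"Start": start, "End": end},
    Granularity="MONTHLY",
    Metrics=["UnblendedCost"],
    GroupBy=[{"Type": "DIMENSION", "Key": "SERVICE"}],
)

for group in response["ResultsByTime"][0]["Groups"]:
    service = group["Keys"][0]
    amount = float(group["Metrics"]["UnblendedCost"]["Amount"])
    print(f"{service}: ${amount:,.2f}")
```

Azure (Cost Management APIs) and Google (billing export to BigQuery) have their own equivalents, each with a different shape, which is exactly why many teams end up buying a tool instead.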

They change the pricing frequently

Cloud services pricing changes quite often. So far, prices have trended downward, so things have been getting cheaper over time due to factors like competition and increased utilisation of the providers’ data centres. However, don’t jump to the conclusion that prices will never go up.

Frequent price changes make it hard to map out usage and costs over time. Amazon alone has changed its prices more than 60 times since launch, making it hard for users to plan a long-term approach. For instances that have been deployed for a long time, price changes aren’t displayed in a way that is easy to track, so you may not even realise there has been a price change if you’ve been running the same instances on a consistent basis.

Multitude of variables

Operating systems, compute, network, memory, and disk space are all factors that go into the pricing and sizing of these instances. Each virtual machine instance also falls into a category: general purpose, compute optimised, memory optimised, disk optimised, and various other types.

Then, within each of these instance types, there are different families. In AWS, the cheapest and smallest instances are in the “t2” family; in Azure they’re in the “A” family. On top of that, there are different generations within each family (in AWS: t2, t3, m2, m3, m4), and within each of those, different sizes (small, medium, large, and extra-large). So there are lots of different options available, and lots of confusion, too.

It’s based on what you provision – not what you use

Cloud services can be charged on a per-hour, per-minute, or per-second basis. If you’re used to the on-prem model, where you deploy things and leave them running 24/7, this pricing model may be unfamiliar. With the cloud’s on-demand pricing, everything is billed for the time a resource is provisioned, whether or not you are actively using it.

When you’re charged per hour, 6 cents per hour might not seem like much. But after running instances for the roughly 730 hours in a month, it turns out to be a lot of money. This leads to another sub-point: the bill for a given month typically doesn’t arrive until about 5 days after the month ends, and only at that point do you see what you’ve used.
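To make that concrete, here is a quick back-of-the-envelope calculation; the hourly rate and fleet size are illustrative, not taken from any provider’s price list.

```python
# Back-of-the-envelope: what a "cheap" hourly rate adds up to over a full month.
HOURLY_RATE = 0.06      # dollars per hour (illustrative)
HOURS_PER_MONTH = 730   # roughly 24 * 365 / 12
FLEET_SIZE = 25         # illustrative number of always-on instances

per_instance = HOURLY_RATE * HOURS_PER_MONTH   # $43.80 per instance per month
fleet_total = per_instance * FLEET_SIZE        # $1,095.00 for the fleet
print(f"Per instance: ${per_instance:.2f}/month, fleet: ${fleet_total:,.2f}/month")
```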

Because you’re only using instances (or VMs) while you need them, it’s easy to forget to turn them off, or even to lose track of servers entirely. I’ve had customers with servers in different regions, or on accounts that don’t get checked regularly, who didn’t even realise those servers had been running all this time, racking up bill after bill.

What can you do about it?

Ultimately, cloud service offerings are there to help enterprises save money on their infrastructure. And they are great options if, and I emphasise if, you know how to use them. To optimise your cloud environment and keep costs down, here are a few suggestions:

  • Get a single view of your billing. You can write your own scripts (but that’s not the best answer) or use an external tool
  • Understand how each of the services you use is billed. Download the bill, look through it, and work with your team to understand how you’re being billed
  • Make sure you’re not running anything you shouldn’t be. Shut things down when you don’t need them, like dev and test instances on nights and weekends, as shown in the sketch after this list (my company, ParkMyCloud, focuses on this type of optimisation along with rightsising)
  • Review regularly to plan out usage and schedules as much as you can in advance
  • Put governance measures in place so that users can only access certain features, regions, and limits within the environment
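
As one hedged illustration of the shutdown suggestion above, here is a sketch that stops running AWS instances carrying a scheduling tag; the tag key, tag value, and region are assumptions made for the example, not a ParkMyCloud feature.

```python
# Minimal sketch: stop any running EC2 instances tagged Schedule=office-hours.
# Intended to be run from a scheduler (cron, Lambda, etc.) at the end of the day;
# the tag key/value and region are illustrative assumptions.
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

pages = ec2.get_paginator("describe_instances").paginate(
    Filters=[
        {"Name": "tag:Schedule", "Values": ["office-hours"]},
        {"Name": "instance-state-name", "Values": ["running"]},
    ]
)

instance_ids = [
    instance["InstanceId"]
    for page in pages
    for reservation in page["Reservations"]
    for instance in reservation["Instances"]
]

if instance_ids:
    ec2.stop_instances(InstanceIds=instance_ids)
    print("Stopping: " + ", ".join(instance_ids))
else:
    print("Nothing to stop.")
```

A matching start job in the morning completes the schedule; the same idea applies to Azure VMs and Google Compute Engine instances through their respective SDKs.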

Cloud services pricing is tricky, complicated, and hard to understand. Don’t let this confusion affect your monthly cloud bill.

Interested in hearing industry leaders discuss subjects like this and sharing their experiences and use-cases? Attend the Cyber Security & Cloud Expo World Series with upcoming events in Silicon Valley, London and Amsterdam to learn more.

Microsoft Azure doles out 500 patents to startups


Clare Hopping

29 Mar, 2019

Microsoft has announced it’s handing over 500 patents to startups in the LOT Network, a community of businesses committed to protecting against patent trolls through the sharing of patents.

Some of the other businesses involved in the LOT Network are brand superpowers such as Amazon, Facebook, Google, Netflix, SAP, Epic Games, Ford, GM, Lyft and Uber. By sharing access to their patents with fellow members, these businesses make it harder for those patents to be turned against them by patent trolls.

Startups are encouraged to join the LOT Network to gain access to the patents, but also to raise capital and protect themselves from entities that either sit on patents without using them or sue others for allegedly infringing those unused patents. Because such a variety of patents is available to members, startups gain a real advantage over their competitors.

“The LOT Network is really committed to helping address the proliferation of intellectual property losses, especially ones that are brought by non-practicing entities, or so-called trolls,” said Microsoft CVP and Deputy General Counsel Erich Andersen.

Startups joining the LOT Network will be able to own the patents outright, with each startup member able to claim three of them. However, as part of the agreement, a startup must be spending at least $1,000 a month on Azure, based on its previous three months’ expenditure.

“We want to help the LOT Network grow its network of startups,” Andersen said. “To provide an incentive, we are going to provide these patents to them.”

The announcement formed part of Microsoft’s wider news that it has expanded its Azure IP Advantage programme, which has been developed to protect its Azure users against patent trolls. It allows those developing IoT applications on Microsoft Azure to access 10,000 of its patents, meaning they’re less likely to find themselves in an intellectual property lawsuit.

Trend Micro to Exhibit at @CloudEXPO | #Cloud #CIO #AI #SaaS #IoT #IIoT #RTC #AIOps #Telecom #Security #Infosec #DigitalTransformation

Trend Micro Incorporated, a global leader in cybersecurity solutions, helps to make the world safe for exchanging digital information. Our innovative solutions for consumers, businesses, and governments provide layered security for data centers, cloud workloads, networks, and endpoints. All our products work together to seamlessly share threat intelligence and provide a connected threat defense with centralized visibility and investigation, enabling better, faster protection. With more than 6,000 employees in 50 countries and the world’s most advanced global threat research and intelligence, Trend Micro enables organizations to secure their connected world. For more information, visit www.trendmicro.com.

Announcing @PrinterLogic “Technology Sponsor” of @CloudEXPO | #Serverless #CloudNative #Cloud #AI #SaaS #RSAC #AIOps

PrinterLogic helps IT professionals eliminate all print servers and deliver a highly available serverless print infrastructure. With PrinterLogic’s centrally managed direct IP printing platform, customers empower their end users with mobile printing, secure release printing, and many advanced features that legacy print management applications can’t provide. The company has been included multiple times on the Inc. 500 and Deloitte Fast 500 lists of fastest growing companies in North America.

AWS makes S3 Glacier Deep Archive available for coldest cloud storage needs

It was promised at last year’s re:Invent, and now it is here: Amazon Web Services (AWS) has announced the general availability of S3 Glacier Deep Archive, aimed at being the lowest cost storage in the cloud.

When the company said it was the cheapest around, it wasn’t kidding: the service is priced at just $0.00099 per gigabyte per month, or about $1 per terabyte per month. This tier, like other cold storage, is aimed at organisations looking to move away from off-site archives or magnetic data tapes for data that needs to be retained but is accessed only very rarely. “You have to be out of your mind to manage your own tape moving forward,” AWS CEO Andy Jassy told re:Invent attendees back in November.

“We have customers who have exabytes of storage locked away on tape, who are stuck managing tape infrastructure for the rare event of data retrieval. It’s hard to do and that data is not close to the rest of their data if they want to do analytics and machine learning on it,” said Mai-Lan Tomsen Bukovec, Amazon Web Services VP of S3. “S3 Glacier Deep Archive costs just a dollar per terabyte per month and opens up rarely accessed storage for analysis whenever the business needs it, without having to deal with the infrastructure or logistics of tape access.”

Cold storage is not just the domain of AWS, of course. Google’s Coldline offering was subject to price cuts earlier this month, with the company making high availability and low latency its calling card. Google said at the time that Coldline data in multi-regional locations was now geo-redundant, meaning it is protected from regional failure by storing another copy at least 100 miles away in a different region. For comparison, AWS says S3 Glacier Deep Archive offers eleven nines of durability and restoration within 12 hours or less.
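For teams that want to try the new tier, here is a minimal sketch of transitioning existing S3 objects into Deep Archive with a lifecycle rule, using boto3; the bucket name, prefix, and 90-day threshold are illustrative assumptions.

```python
# Minimal sketch: lifecycle rule that moves objects under a prefix to
# S3 Glacier Deep Archive after 90 days. Bucket, prefix, and threshold
# are illustrative assumptions.
# Note: this call replaces the bucket's existing lifecycle configuration.
import boto3

s3 = boto3.client("s3")

s3.put_bucket_lifecycle_configuration(
    Bucket="example-archive-bucket",
    LifecycleConfiguration={
        "Rules": [
            {
                "ID": "deep-archive-old-objects",
                "Filter": {"Prefix": "archives/"},
                "Status": "Enabled",
                "Transitions": [
                    {"Days": 90, "StorageClass": "DEEP_ARCHIVE"}
                ],
            }
        ]
    },
)
```

Retrieval from this tier takes up to 12 hours, as noted above, so it only suits data that is genuinely rarely accessed.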

Customers using Glacier Deep Archive, AWS added, include video creation and distribution provider Deluxe, Vodacom, and the Academic Preservation Trust.

Interested in hearing industry leaders discuss subjects like this and sharing their experiences and use-cases? Attend the Cyber Security & Cloud Expo World Series with upcoming events in Silicon Valley, London and Amsterdam to learn more.