The KCSP program is a pre-qualified tier of vetted service providers that offer Kubernetes support, consulting, professional services and training for organizations embarking on their Kubernetes journey. The KCSP program ensures that enterprises get the support they’re looking for to roll out new applications more quickly and more efficiently than before, while feeling secure that there’s a trusted and vetted partner that’s available to support their production and operational needs.
Monthly archive: February 2019
Application Portability with Kubernetes
Containers and Kubernetes allow for code portability across on-premise VMs, bare metal, or multiple cloud provider environments. Yet, despite this portability promise, developers may include configuration and application definitions that constrain or even eliminate application portability. In this session we’ll describe best practices for “configuration as code” in a Kubernetes environment. We will demonstrate how a properly constructed containerized app can be deployed to both Amazon and Azure using the Kublr platform, and how Kubernetes objects, such as persistent volumes, ingress rules, and services, can be used to abstract away the underlying infrastructure.
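To make that abstraction concrete, here is a minimal sketch using the official Kubernetes Python client: it requests storage by size and access mode only, without naming any cloud-specific volume type, so the same call works against EKS, AKS or an on-premise cluster. The claim name, namespace and size are assumptions made up for the example.

```python
# Minimal "configuration as code" sketch: the claim describes what the app
# needs, and the cluster's default provisioner (EBS on AWS, Azure Disk on
# Azure, local storage on-premise) decides how to satisfy it.
from kubernetes import client, config

config.load_kube_config()  # targets whatever cluster the current kubeconfig points at

pvc = client.V1PersistentVolumeClaim(
    metadata=client.V1ObjectMeta(name="app-data"),  # hypothetical claim name
    spec=client.V1PersistentVolumeClaimSpec(
        access_modes=["ReadWriteOnce"],
        resources=client.V1ResourceRequirements(requests={"storage": "10Gi"}),
    ),
)

client.CoreV1Api().create_namespaced_persistent_volume_claim(
    namespace="default", body=pvc
)
```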
Neo4j Launches Commercial Kubernetes Application
GCP Marketplace is based on a multi-cloud and hybrid-first philosophy, focused on giving Google Cloud partners and enterprise customers flexibility without lock-in. It also helps customers innovate by easily adopting new technologies from ISV partners, such as commercial Kubernetes applications, and allows companies to oversee the full lifecycle of a solution, from discovery through management.
Don’t Skeu Up Containers
Skeuomorphism usually means retaining existing design cues in something new that doesn’t actually need them. However, the concept of skeuomorphism can be thought of as relating more broadly to applying existing patterns to new technologies that, in fact, cry out for new approaches.
In his session at DevOps Summit, Gordon Haff, Senior Cloud Strategy Marketing and Evangelism Manager at Red Hat, discussed why containers should be paired with new architectural practices such as microservices rather than mimicking legacy server virtualization workflows and architectures.
LA Clippers to use Amazon Web Services for CourtVision platform
The LA Clippers are moving their CourtVision game-watching platform onto Amazon Web Services (AWS) and using machine learning to drive greater insights.
Clippers CourtVision, which was created alongside the NBA’s video tracking technology provider Second Spectrum, will have its data stored and analysed on AWS in real-time. The system uses cameras in every NBA arena to collect 3D spatial data, including ball and player locations and movements.
The system will also utilise Amazon SageMaker to build, train and deploy machine learning-driven stats which will appear on live broadcasts and official NBA videos. Clippers fans will be able to access greater insights, from frame-by-frame shots and analysis of whether a shot will go in, to live layouts of basketball plays.
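For readers unfamiliar with that build-train-deploy loop, the sketch below shows its general shape using the SageMaker Python SDK. It is illustrative only: the training container, IAM role and S3 paths are hypothetical placeholders, not details of the CourtVision pipeline.

```python
# Generic SageMaker build/train/deploy flow (all names below are hypothetical
# placeholders and unrelated to the actual Second Spectrum/Clippers system).
import sagemaker
from sagemaker.estimator import Estimator

session = sagemaker.Session()

estimator = Estimator(
    image_uri="123456789012.dkr.ecr.us-east-1.amazonaws.com/tracking-model:latest",  # hypothetical training image
    role="arn:aws:iam::123456789012:role/SageMakerExecutionRole",                    # hypothetical IAM role
    instance_count=1,
    instance_type="ml.m5.xlarge",
    sagemaker_session=session,
)

# Train on spatial tracking data stored in S3 (hypothetical bucket and prefix)
estimator.fit({"train": "s3://example-bucket/tracking-data/"})

# Deploy the trained model behind a real-time inference endpoint
predictor = estimator.deploy(initial_instance_count=1, instance_type="ml.m5.xlarge")
```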
The move with the Clippers comes hot on the heels of the Golden State Warriors moving to Google Cloud, utilising the company’s analytics tools for scouting reports and planning to host a mobile app on Google Cloud Platform.
On the court the two clubs have had recent differing fortunes, with the Warriors having won three of the last four championships. Yet the sporting arena is one of mutual interest to the biggest cloud providers. One of the guest speakers during the AWS re:Invent keynote in November was Formula 1 managing director of motor sports Ross Brawn, who explained how the sport’s machine learning projects were being ramped up after a softer launch this season. Alongside Formula 1, Major League Baseball is another important AWS customer.
“The combination of cloud computing and machine learning has the potential to fundamentally redefine how fans experience the sports they love,” said Mike Clayville, vice president of worldwide commercial sales at AWS. “With AWS, Second Spectrum and the LA Clippers leverage Amazon’s 20 years of experience in machine learning and AWS’s comprehensive suite of cloud services to provide fans with a deeper understanding of the action on the court.
“We look forward to working closely with both organisations as they invent new ways for fans to enjoy the game of basketball,” Clayville added.
IBM focuses on second chapter of cloud story at Think – hybrid and open but secure
It’s seconds out for round two of the cloud story – one which has hybrid and multi-cloud at its core, and is open but secured and managed properly.
That was the key message from IBM chief executive and chairman Ginni Rometty at IBM’s Think Conference in San Francisco earlier this week.
“I’ve often said we’re entering chapter two – it’s cloud and it’s hybrid,” Rometty told the audience. “In chapter one, 20% of your work has moved to the cloud, and it has mostly been driven by customer-facing apps, new apps being put in, or maybe some inexpensive compute. But the next 80%… is the core of your business. That means you’ve got to modernise apps to get there. We’re going from an era of cloud that was app-driven to ‘now we’re transforming mission critical.’”
“It’s very clear to me that it’s hybrid,” Rometty added, “meaning you’ll have traditional IT, private clouds, [and] public clouds. On average, [if you] put your traditional aside, 40% will be private, 60% public. If you’re regulated it will be the other way around.
“The reason it’s so important to [have] open technologies is that skills are really scarce. But then you’ve got to have consistent security and management.”
Naturally, IBM has been reinforcing this strategic vision with action. The $34 billion acquisition of Red Hat announced in October, though not yet closed, is a clear marker of this. As this publication put it at the time, it plays nicely into containers and open technologies in general. Both sides needed each other; IBM gets the huge net of CIOs and developers Red Hat provides, while Red Hat gets a sugar daddy, as its open source revenues – albeit north of $3 billion a year – can’t compete with the big boys. It’s interesting to note that at the time Rometty said this move represented “the next chapter of the cloud…shifting business applications to hybrid cloud, extracting more data and optimising every part of the business.”
Rometty assured the audience at Think that IBM would continue to invest in this future journey. “This is going to be an era of co-creation,” she said. “It’s why we’ve put together the IBM Garage and the IBM Garage methodology. [It’s] design thinking, agile practices, prototype and DevOps… but with one switch – we do them all in a way that can immediately go from prototype and pilot to production scale.”
“I think we are all standing at the beginning of chapter two of this digital reinvention,” Rometty added. “Chapter two will, in my mind, be enterprise driven.”
It is interesting to consider these remarks, along with those of Google Cloud’s new boss Thomas Kurian this week, and look back on what has previously happened in the process. Kurian told an audience at the Goldman Sachs Technology and Internet Conference that Google was going to compete aggressively in the enterprise space through 2019 and beyond. This would presumably be music to the ears of Amir Hermelin, formerly product management lead at Google Cloud, who upon leaving in October opined the company spent too long dallying over its enterprise strategy.
If IBM and Google are advocating a new chapter in the cloud, it may be because the opening stanzas did not work out as well as hoped for either. As this publication has opined variously, the original ‘cloud wars’ have long since been won and lost. Amazon Web Services (AWS) won big time, with Microsoft Azure getting a distant second place and the rest playing for table stakes. Google’s abovementioned enterprise issues contributed, as well as IBM losing the key CIA cloud contract to AWS.
Today, with multi-cloud continuing to be a key theme, attention turns to the next wave of technologies which will run on the cloud, from blockchain, to quantum computing, to artificial intelligence (AI). Rometty noted some of the lessons learned with regards to AI initiatives, from putting in the correct information architecture, to whether you take an ‘inside out’ or ‘outside in’ approach to scale digital transformation.
There was one other key area Rometty discussed. “I think this chapter two of digital and AI is about scaling now, and embedding it everywhere in your business. I think this chapter two when it comes to the cloud is hybrid and is driven by mission critical apps now moving,” said Rometty. “But underpinning it for all of us is a chapter two in trust – and that’s going to be about responsible stewardship.”
The remark, which drew loud applause from the audience, is something we should expect to see a lot more of this year, if analyst firm CCS Insight is to be believed. At the company’s predictions event in October, the forecast was that the needle would move to trust as a key differentiator among cloud service providers in 2019. Vendors “recognise the importance of winning customers’ trust to set them apart from rivals, prompting a focus on greater transparency, compliance efforts and above all investment in the security,” CCS wrote.
AWS launches five new bare metal instances to give customers greater cloud control
AWS has unveiled five new EC2 bare metal instances to run high-intensity workloads, such as performance analysis, specialised applications and legacy workloads not supported in virtual environments.
The new instances – m5.metal, m5d.metal, r5.metal, r5d.metal, and z1d.metal – have all been designed to run virtualisation-secured containers such as Clear Linux Containers. Each offers its own set of resources: the m5 variants come with 384 GiB of memory, the r5 options with 768 GiB (both with up to 3.1GHz all-core turbo), and the z1d with 384 GiB but up to 4GHz across 48 logical processors.
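Bare metal instances are launched through the same EC2 API as virtualised ones; only the requested instance type differs. The sketch below shows the general shape of such a call with boto3 – the AMI ID, key pair and region are placeholders invented for the example.

```python
# Illustrative only: launching a bare metal EC2 instance with boto3.
# The AMI ID, key pair name and region are hypothetical placeholders.
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

response = ec2.run_instances(
    ImageId="ami-0123456789abcdef0",  # hypothetical AMI
    InstanceType="m5.metal",          # bare metal: direct access to the underlying host
    MinCount=1,
    MaxCount=1,
    KeyName="my-keypair",             # hypothetical key pair
)

print(response["Instances"][0]["InstanceId"])
```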
AWS has specified that the different bare metal instances have been created for different scenarios. For example, the m5 instances will be useful for web and application servers, back-end servers for enterprise applications, and gaming servers, while the r5 models are best suited to high-performance database applications and real-time analytics.
The company’s z1d instances are best suited to electronic design automation, gaming and relational database workloads because of their high compute and memory offerings.
Any workloads using AWS’s bare metal instances can still take advantage of the cloud firm’s suite of cloud services, such as Amazon Elastic Block Store (EBS), Elastic Load Balancer (ELB) and Amazon Virtual Private Cloud (VPC), just with more control over the hardware.
AWS is offering the bare metal instances on a number of different plans: on-demand, as one-year, three-year or convertible reserved instances, or as spot instances. They’re available now across the company’s US East, US West, Europe and Asia Pacific regions.
Slack adds new tools designed for developers of all abilities
Slack is launching two new tools for developers to build apps within the business communication service without the need for in-depth coding knowledge.
The first is called ‘Block Kit’, a user interface (UI) framework to help developers do more with the information inside Slack. Its aim is to provide more flexibility and control over app message interactivity. It’s made up of blocks – stackable message components – that make it easy to control and prioritise the order of information.
The second tool is a ‘Block Kit Builder’, which is a prototyping tool for testing interactions as they appear in Slack so developers of all levels can see, understand and use the code. The Block Kit Builder tool has customisable templates that provide a foundational example for how to use blocks.
“The combined power of these blocks gives you the ability to deliver information in a clear, actionable way, enabling users to get more work done faster,” the company said in a blog post.
Within Block Kit are five new block types – section, context, image, divider and actions – for structuring message content.
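To make those block types concrete, here is a minimal sketch that posts a message built from section, divider, context and actions blocks. It uses Slack’s current Python client (the slack_sdk package); the bot token and channel are placeholders, and the block JSON follows Slack’s documented Block Kit format.

```python
# Minimal Block Kit message using section, divider, context and actions blocks.
# The token and channel below are hypothetical placeholders.
from slack_sdk import WebClient

slack = WebClient(token="xoxb-your-bot-token")

slack.chat_postMessage(
    channel="#general",
    text="Deployment finished",  # plain-text fallback for notifications
    blocks=[
        {"type": "section",
         "text": {"type": "mrkdwn", "text": "*Deployment finished* :tada:"}},
        {"type": "divider"},
        {"type": "context",
         "elements": [{"type": "mrkdwn", "text": "Triggered by build #42"}]},
        {"type": "actions",
         "elements": [{"type": "button",
                       "text": {"type": "plain_text", "text": "View logs"},
                       "action_id": "view_logs"}]},
    ],
)
```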
In its blog, Slack highlighted some examples of developers using the tools. The first, from a knowledge management platform called Guru, helps people capture information as it’s shared with them in Slack. Using dividers, Guru’s “help” message now contains clearer calls to action, including an overflow menu that enables users to access advanced app functionality without cluttering their screen.
Guru example – courtesy of Slack
A second example comes from Optimizely, an experimentation platform whose Slack app enables product and marketing teams to brainstorm, test and track digital campaigns. The context block clarifies which marketing campaign a particular message is referencing – in this case “Attic and Button”.
Optimizely in action – courtesy of Slack
What makes this interesting, however, is that rather than simply giving experienced developers another way to build on the platform, the Block Kit Builder will enable those with less skill or experience to quickly prototype their apps with customisable templates. These have the majority of the code already populated, offering a head start to budding developers.
How to solve visibility issues from AIOps: A guide
Considering that the language that underpins artificial intelligence (AI), LISP, turns just 60 years old this year, there can be little doubt now that the technology itself has been adopted by the masses. AI has become a critical part of how businesses operate in an extremely short stretch of time – in 2017 alone, 61% of businesses had implemented AI in some way. Meanwhile, worldwide spending on such technology jumped by over 50% last year to a staggering $19 billion.
A natural byproduct of such unprecedented growth and adoption has been its impact on more traditional approaches to IT, such as operations. This has given rise to “AIOps”, whereby AI is applied to enhance, or partially replace, a number of IT operational processes. Although the approach is relatively new, Gartner already predicts that 25% of enterprises will be using AIOps by the end of this year.
Moving beyond the nitty gritty, AIOps really seeks to deliver insight into the digital experience – and into why such technology isn’t working or, even worse, is breaking down. While this is extremely powerful, there are a number of data visibility issues that AIOps cannot solve on its own. This is why, in order to address such data gaps, specific measures must be taken that complement AIOps technology.
In the digital era, the experience is now undoubtedly vital for every company and brand. With this in mind, how can you monitor and protect this experience? Firstly, getting the right data is paramount. In light of this requirement, IT ops should have visibility into all the elements that affect the digital experience, but also, crucially, be able to answer the “why” when the digital experience breaks.
How can it do this?
Visibility should include digital experience monitoring (DEM), covering HTTP server availability and response time, page load and web transaction data. Yet more needs to be known, especially in the modern, cloud- and Internet-dependent era. For example, a website or a service, such as a Salesforce API endpoint, may not be responding in a timely manner, but what can the ops team actually do with such information? It simply isn’t enough to go on, because they still need to know why it is happening in order to actually solve the underlying problem.
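As a trivial illustration of that first layer of DEM data, the sketch below probes an endpoint’s availability and response time; the URL and the two-second threshold are made-up example values, and real DEM tooling would also capture page load and transaction timings.

```python
# Bare-bones availability/response-time probe (illustrative only; the URL and
# the 2-second threshold are arbitrary example values).
import requests

url = "https://api.example.com/health"

try:
    resp = requests.get(url, timeout=10)
    latency = resp.elapsed.total_seconds()
    available = resp.status_code == 200
except requests.RequestException:
    latency, available = None, False

if not available:
    print(f"{url} is unavailable")
elif latency > 2.0:
    print(f"{url} responded, but slowly: {latency:.2f}s")
else:
    print(f"{url} OK in {latency:.2f}s")
```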
This example illustrates a significant gap in most IT ops visibility architectures: the vast majority of IT ops visibility is based on passive data collection from the pieces of the overall puzzle that IT still controls.
In today’s software-as-a-service (SaaS) environment, IT teams simply don’t have oversight of, let alone control over, the myriad external apps, services, infrastructure and Internet networks. This will only become a bigger issue, as there will be no slow-down in the use of SaaS – Gartner estimates the market will grow by 17%. What does this actually look like in practice? Your IT team can’t insert application performance management (APM) code into a SaaS provider’s software, and it can’t gather infrastructure data from a network that doesn’t belong to you. As Gartner pithily puts it, “Infrastructure and operations leaders must rely on outside partners.”
Yet without data on this vast part of the current IT landscape, you’re left with a pretty insurmountable data gap – one that no level of analytical intelligence can make up for.
Undoubtedly AIOps has a role in dealing with IT ops challenges, but it’s not a panacea for all the issues raised. For example, AIOps, at its core, doesn’t address the visibility data gap around understanding the impact of the Internet and other non-IT-controlled assets on the digital experience.
However, network monitoring technology can bridge this gap. At its heart, this technology measures communications across any Internet protocol (IP) network path, spanning internal, multiprotocol label switching (MPLS), Internet, cloud and SaaS network infrastructures. Backing this detailed monitoring up are the metrics it measures to identify exactly why a digital experience may be failing. Given that the Internet is ever-changing, it can also collect global data feeds and maintain a continuously updated, full view of Internet routing, to understand whether a specific path is suffering a problem, a route leak or even a hijack.
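A full path-monitoring platform is well beyond a short snippet, but the sketch below hints at the raw ingredient: tracing the hop-by-hop route to an endpoint. It shells out to the system traceroute utility, which is assumed to be installed, against a made-up hostname; production tooling would correlate many such traces with BGP feeds and performance metrics.

```python
# Rough sketch of hop-by-hop path collection. Assumes the system `traceroute`
# binary is installed; the target hostname is a hypothetical example.
import subprocess

target = "api.example-saas.com"

result = subprocess.run(
    ["traceroute", "-n", "-w", "2", target],  # numeric output, 2s per-hop wait
    capture_output=True, text=True, timeout=120,
)

for line in result.stdout.splitlines():
    print(line)  # each line is one hop: hop number, router IP, round-trip times
```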
While it is possible to feed such monitoring data into an AIOps platform and tune it to give you answers that are relevant to business issues, the challenge is to get the right streams of data and to define how you want the AIOps engine to answer your questions. Given the pressure to uphold the digital experience on a 24/7 basis, complementing AIOps with network monitoring is the most efficient and effective approach for businesses.
In the highly complex IT infrastructure environment that every company now faces, there’s simply no single technology that businesses can rely on for everything.
RingCentral Office review: Calling the shots
It’s easy to see why RingCentral Office is one of the most popular cloud-hosted VoIP services as it delivers an incredible range of call handling features and an extensive choice of call plans, allowing it to be easily customised to your needs.
Customers can pay monthly, while yearly contracts offer a 38% price reduction, and SMEs that want all the bells and whistles will find the Standard plan a good starting point. For a monthly fee of £14.99 per user (paid yearly), you get 750 minutes of free outbound calls per user each month and 250 minutes of calls to freephone numbers assigned to your account, plus call analytics, reporting and RingCentral’s slick multi-level IVR (interactive voice response) feature.
All plans, including the basic Entry version, provide plenty of standard features. These include sending voice messages to email, call recording and RingCentral’s free Glip Windows app for messaging, file sharing, video chat and integration with apps such as Dropbox.
You also get the standard Auto-Receptionist service, which was already set up for us and linked to our account’s main number. We could assign it to one extension so that callers press ‘0’ to be put through to it, enter another extension if they know it, or wait for a list to be presented.
Custom greeting messages are created using the web portal’s recording feature which allows you to use a phone, your PC’s microphone or uploaded WAV and MP3 files. The default message worked fine as RingCentral had already inserted our account company name into the audio message for us.
IVR takes this to the next level, providing a visual editor tool for creating multi-level call handling menus. This allows you to present a highly professional front desk and RingCentral offer plenty of help along with sample XML files to get you started.
Creating users doesn’t get any easier as you can do it manually or import them from Active Directory. If you’ve already created extensions, these can be assigned to new users who receive an email message with a secure link to an express setup web page.
This requires them to enter a strong password and voicemail PIN after which they can customise options such as regional settings and their registered location for emergency calls. From their personal web portal, users can create their own greetings and set up call handling rules for transferring calls to other extensions, a mobile or voicemail.
The wizard also presents a page where users can download free softphones for Windows, Macs, iOS and Android, and RingCentral’s Windows softphone is simply the best. Along with a standard dial-pad, it links up with your voicemail, allows you to send text or faxes and even has a HUD (heads-up display) that shows the status of other users and provides quick contact links.
During user and extension creation you can order Cisco, Polycom and Yealink desk phones from RingCentral or use your own. Our Yealink T23G phones required manual configuration but this didn’t take long as the online help showed their web interface and where SIP account details had to be entered.
Reporting is top-notch: the portal’s analytics page displays graphs of total calls, average call time, inbound and outbound calls plus a breakdown of calls by user. Historical reports are included and the portal even shows call quality and MOS (mean opinion score) graphs and pie charts.
SMEs that want the best cloud VoIP services won’t find a better alternative to RingCentral Office. It’s easy to deploy and manage, call handling features are outstanding and it all comes at a very competitive price.