Low Code Apps | @DevOpsSummit #DevOps #Serverless #LowCode #NoCode

Gone are the days when application development was the daunting task of highly skilled developers backed by strong IT skills. Low-code application development has democratized app development and empowered a new generation of citizen developers.
There was a time when app development was the domain of people with complex coding and technical skills. We called these people by various names – programmers, coders, techies – and they usually worked in a world oblivious to the everyday priorities of the business. With the passage of time, however, this scenario has become much more democratized. Newer business models have given rise to new technologies that are replacing old legacy systems and processes. Technology and business teams have come together to build just what is needed for business priorities. Today nothing happens backstage in the name of complex coding – everyone builds and develops apps using simple methods based on modern technology.

read more

The Internet of Things in Your Driveway | @ThingsExpo #AI #IoT #IIoT #M2M

By 2020, 90 percent of all new cars will have some sort of built-in connectivity platform, and by 2022, there will be 1.8 billion automotive M2M connections. As cars join the Internet of Things, they will stop being independent entities and become part of a larger, connected ecosystem. Cars as part of the IoT isn’t just a look into the future – it’s already happening.
“Everything that moves will become autonomic, it’s just a matter of time,” says Vishnu Andhare, Consulting Manager at ISG. Andhare notes that all of the big automotive players are already moving towards a future of shared mobility and mobility-as-a-service – although it will take time. “In the future, it will not be limited by technology, but by the psychological barrier, and by public policy,” he says. “There is a limit to which the human mind can go without a semblance of control. At airports, you already see terminal-to-terminal autonomic shuttles. We are used to them. But with cars on the road, there is still a psychological barrier.”

read more

Public cloud: The app compatibility problem and how to overcome it

By now, the benefits of public cloud are well known. Better performance, lower costs, agility and greater flexibility are all key motivators for digital transformation. With these benefits in mind, it’s understandable why an organisation might choose to consolidate its data centres and move all of its applications to the cloud – so what is still holding some of them back? Often we hear about cost and security considerations, but in businesses that run bespoke mission-critical apps, the IT team will more than likely want to run those apps in the public cloud yet be unable to do so. Here’s why.

Scratch beneath the surface – whether you’re checking into a hotel, collecting a hire car, or looking inside even the largest public and private organisations – and you’ll find apps running on legacy operating systems such as Windows XP and Server 2003. The issue this poses is that these apps simply won’t run on the operating systems supported by Azure, AWS and Citrix Cloud, leaving organisations stuck, or facing a hefty price tag to re-code their existing software.

The even bleaker reality is that many organisations have so many competing priorities that they do not have the time, money or, in some cases, the capability to rewrite their applications. These applications were either heavily customised off-the-shelf products or bespoke software developed to meet their needs exactly – which is why the people, expertise or installation media needed to rewrite them are often no longer available.

Why are old operating systems still alive?

Outdated operating systems cause IT teams headaches for a number of reasons. Organisations are often reliant on apps created on old systems for mission-critical business processes – apps frequently defined by very specific functionality and hard-coded dependencies. This is particularly relevant to companies in highly specialised and regulated industries, such as healthcare, manufacturing or utilities. The difficulty is that once the operating system an app was designed for reaches its end of life, it no longer receives security patches, leaving the app more susceptible to threats.

More problematic still is the fact that few public cloud services currently make it straightforward to transfer legacy apps. If the cloud provider doesn’t support the OS that your app was developed for, it simply won’t run. So, with this in mind, how can businesses move their applications to the cloud and enjoy its benefits?

A new lease of life

Compatibility container software can circumvent this problem by using application virtualisation technology to run the application on a modern platform. Once in a container, organisations can migrate from an on-prem data centre to public cloud, to hybrid cloud, and even between clouds to avoid vendor lock-in – all without making changes to the app’s architecture.

Compatibility containers control the packaging, and provide the redirection, isolation and compatibility required to take the apps into an external cloud service. Through this process, the application is abstracted from the underlying operating system, preparing it for Windows-as-a-service, where Windows is updated every six months.
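As a conceptual illustration only – not any particular vendor’s implementation – the redirection step can be pictured as a mapping layer that intercepts a legacy app’s hard-coded paths and points them into the container’s own store. All paths and rule names below are hypothetical:

```python
# Conceptual sketch: a compatibility layer redirects hard-coded
# legacy paths into a per-container virtual store, isolating the
# app from the underlying (modern) operating system.
CONTAINER_STORE = "/container/appv1"

# Hypothetical redirection rules for a legacy Windows app.
REDIRECT_RULES = {
    "C:\\Program Files\\LegacyApp": CONTAINER_STORE + "/files",
    "C:\\Windows\\System32\\legacy.dll": CONTAINER_STORE + "/sys/legacy.dll",
}

def redirect(path: str) -> str:
    """Map a legacy path to its containerised location, if a rule matches."""
    for legacy_prefix, container_prefix in REDIRECT_RULES.items():
        if path.startswith(legacy_prefix):
            return container_prefix + path[len(legacy_prefix):]
    return path  # paths outside the rules pass through untouched

print(redirect("C:\\Program Files\\LegacyApp\\config.ini"))
```

The app keeps asking for the paths it was compiled against; the layer quietly serves them from the container, which is what lets the package move between clouds without touching the app’s architecture.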

What this means for IT teams that have previously had their hands tied over a move to the public cloud is that they now have a cost-effective way to get there, as well as to keep their critical apps up and running – not just now but for the future. The pace of technology is continually increasing, and organisations need a way to get ahead of the adoption curve – something only possible if they invest in evergreen capabilities.

How cloud computing, AI and IoT will transform semiconductor companies’ revenues in 2018

The Internet of Things (IoT), artificial intelligence (AI) and cloud computing will be among the key application markets driving semiconductor providers’ revenue streams over the coming year, according to a new report from KPMG.

Of the 150 semiconductor industry leaders surveyed by the professional service provider, three quarters said wireless communications, including smartphones and other mobile devices, are either important or very important. This number is down from 84% in 2016. IoT was cited by 63% of respondents, with robotics (45%), cloud computing (43%) and artificial intelligence (43%) also seen as important. Cloud and AI saw a notable increase year over year, from 27% and 18% respectively.
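The year-over-year movement in these figures can be tabulated directly – a quick sketch, where the numbers simply restate the survey results quoted above:

```python
# KPMG survey: % of semiconductor leaders rating each application
# market important or very important, 2016 vs 2017 figures above.
survey = {
    "wireless": {2016: 84, 2017: 75},  # "three quarters" in 2017
    "cloud":    {2016: 27, 2017: 43},
    "ai":       {2016: 18, 2017: 43},
}

for market, years in survey.items():
    delta = years[2017] - years[2016]
    print(f"{market}: {delta:+d} percentage points year over year")
```

The tabulation makes the shift obvious: wireless slipped nine points while cloud and AI gained sixteen and twenty-five points respectively.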

Not only are the technologies important in their own right; the report asserts that the applications also have value to each other, with cloud infrastructure critical to enabling AI and to capturing IoT-produced data.

This doesn’t mean that everything is rosy in the garden, however. When it came to strategic priorities for semiconductor providers, the most popular response – cited by 37% of respondents – was diversifying into a new business area. A further 29% said they are looking at merger or acquisition options, while 25% opted for talent development and management. Other responses pointed to a maturing market: while 24% and 23% said they were prioritising greater speed to market and articulating their company’s vision respectively in 2016, last year saw these numbers drop to 12% and 6%.

According to a report from IHS Markit in September, the global semiconductor industry topped $100 billion in revenue for the second quarter of 2017, representing the sector’s best quarter in three years.

“The majority of semiconductor leaders said they expect their companies – and the industry as a whole – to increase revenue, largely driven by diversification into revolutionary new technology segments, such as artificial intelligence, the Internet of Things, and autonomous vehicles,” wrote Lincoln Clark, partner at KPMG’s US firm.

“We found that while most semiconductor executives recognise it will be nearly impossible to sustain such massive growth over the long-term, optimism exists about 2018.”

You can read the full report here (pdf).

How hybrid, multi-cloud and community clouds are coming together for the best of all worlds

What you look for in a cloud provider depends to a large extent on the drivers and challenges that you are experiencing.

People with large legacy estates, for instance, tend to be looking for a hybrid cloud solution that can support both their old legacy workloads and their new cloud ones. Some see this as a transitional arrangement to cover the period in which workloads are migrated to the cloud, but many realise that there are certain workloads for which migration will never be either technologically possible or economically practical.

Many people with heterogeneous environments, on the other hand, tend to be looking for a multi-cloud solution. They may be doing this by design, such as in moving their Oracle workloads onto an Oracle cloud environment and their Microsoft ones to an Azure cloud environment. There may also be an element of shadow IT, with some workloads strategically moved to SaaS environments like Salesforce while a host of other SaaS options may also have been adopted by individual departments.

Others, keen to collaborate with peers or partners in the cloud, tend to be looking for community clouds. In the USA, the main public cloud providers have set up dedicated regions as community clouds that allow US government agencies at the federal, state and local level, along with contractors and educational institutions, to collaborate on sensitive workloads and data sets while meeting specific regulatory and compliance needs. Meanwhile in the UK, UKCloud has created a community cloud for the public sector and healthcare that has succeeded in attracting over 220 projects, capturing over a third of G-Cloud IaaS workloads.

Other sectors where such collaboration is becoming increasingly common include manufacturing with data sharing across the logistical supply chain, in public services and transportation where logistical and geospatial data is shared, and in health and social care where access to patient records or genomic sequencing data is shared.

There is no reason, however, not to have the best of all worlds. New appliances, such as Cloud@Customer from Oracle and Azure Stack from Microsoft, have been designed to enable seamless hybrid environments. These hybrid environments don’t need to operate in isolation, though: heterogeneous environments can be created with hybrid appliances to support both Oracle and Microsoft workloads. Combining these options further with cloud-native options like OpenStack, and with container management as well, creates a cross-over between hybrid and multi-cloud. Indeed, some providers are now starting to offer this kind of heterogeneous cloud with an array of technology stacks, all within dedicated community clouds, giving you the best of all worlds: a combination of hybrid and multi-cloud within a sector-specific community cloud.

There are many compelling advantages to this ‘have-it-all’ approach:

  • Customer-centricity: As a technology matures, vertical-industry expertise and talent become the ultimate differentiator, as customers want to know that their technology suppliers are just as committed to their industry and its specific needs as the customers themselves are. In effect, technology wizardry becomes table stakes, while customer expertise trumps all – and we are now seeing this in the cloud arena.

    With global public cloud providers, you can be treated a bit like a number, but the sector-specific nature of community clouds enables them to be very customer-centric – centred around key workloads and data sets. Adding a multi-cloud dimension then allows you to use API calls to access advanced functionality in the public cloud in areas like artificial intelligence and machine learning. Multi-cloud also allows customers to create rich heterogeneous solutions that address a wider set of requirements than is possible using only cloud-native technologies or any single cloud platform, while maximising choice and flexibility and minimising lock-in.
     

  • The clustering effect – partners: Such sector-specific community clouds can spark a clustering effect, where, as more customers from a particular sector join, it attracts specialist application providers, both software as a service (SaaS) providers and independent software vendors (ISVs), which in turn then attract more customers in what becomes a virtuous circle.
     
  • Minimising latency: Appliances such as Oracle Cloud@Customer and Azure Stack are part of a movement away from big centralised clouds towards clouds that sit closer to their data origins and help cut down on latency. This is taking two forms: fog computing and intelligent edge computing. Latency can occur either between the users and the workload they are accessing, or between different workloads and datasets that need to work together but are often based on different technology platforms. In the first case, the appliance can be located as close to the main user groups as possible to minimise latency. In the second, it is better to locate the appliance within a community cloud alongside as many as possible of the key datasets, workloads and platforms that need to interoperate, and, where possible, to provide connectivity to this community cloud via secure, high-performance networks.

Whatever your current situation, bringing together the best aspects of hybrid cloud and multi-cloud and combining them within a community cloud can create the best of all worlds – especially if you work within a sector where collaboration between partners and peers is important.

For example, an NHS trust in the UK may have a collection of legacy workloads that are Microsoft or Oracle based, along with a few newer cloud-native applications. It might also have legacy systems that cannot be moved to the cloud but could be hosted in a secure facility, and it might want to access cloud-based applications offered by leading health providers (either SaaS or ISV) as well as core data sets like the 100,000 Genomes Project database. Ideally, the trust would want as much of this as possible available in a single community cloud, with close proximity between systems to minimise latency. The trust would also want to access this heterogeneous environment via HSCN, and to connect onwards to peripheral workloads hosted elsewhere – or even to public clouds via API calls for things like artificial intelligence. Fortunately for UK healthcare and the public sector, this is all available today.

So why just focus on looking for hybrid cloud or multi-cloud or community cloud – when it is possible to have it all?

Read more: UKCloud partnership with Microsoft and Cisco pushes forward multi-cloud for public sector

Multiple QA and Staging Environments | @DevOpsSummit Serverless #CloudNative #DevOps

The challenges of managing test environments are a huge obstacle to achieving DevOps efficiencies in enterprises today. Gaining automated, real-time visibility across the enterprise portfolio to establish a single source of truth is key to aligning teams and identifying and resolving resource conflicts. A tool that keeps track of environments at all times makes the job of test environment managers easier by displaying strategic allocation challenges in a single, consolidated place. Gone are the days of having to fire up Excel and send emails to gather this data.
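A minimal sketch of the kind of conflict detection such a tool automates – the data model and all names here are hypothetical, and real products track far more metadata:

```python
from dataclasses import dataclass
from datetime import date

@dataclass
class Booking:
    """A team's reservation of a shared test environment (hypothetical model)."""
    environment: str
    team: str
    start: date
    end: date

def find_conflicts(bookings):
    """Return pairs of bookings that overlap in time on the same environment."""
    conflicts = []
    for i, a in enumerate(bookings):
        for b in bookings[i + 1:]:
            same_env = a.environment == b.environment
            overlap = a.start <= b.end and b.start <= a.end
            if same_env and overlap:
                conflicts.append((a, b))
    return conflicts

bookings = [
    Booking("qa-1", "payments",   date(2018, 1, 10), date(2018, 1, 20)),
    Booking("qa-1", "onboarding", date(2018, 1, 18), date(2018, 1, 25)),
    Booking("qa-2", "payments",   date(2018, 1, 10), date(2018, 1, 20)),
]
for a, b in find_conflicts(bookings):
    print(f"{a.environment}: {a.team} overlaps {b.team}")
```

Surfacing these overlaps from one shared record is exactly the "single source of truth" the spreadsheet-and-email workflow fails to provide.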

read more

Don’t Fall Behind in the #DevOps Integration Race | @DevOpsSummit #CloudNative #Serverless

Development cycles are being squeezed into tighter timeframes than ever before – days, hours, even minutes. Organizations that fail to keep up will find themselves falling behind and struggling to catch up. However, DevOps isn’t the hurdle it may initially seem, and if an organization develops a proper strategy, the obstacles to adopting DevOps can be successfully overcome.

read more

NetEnt: Betting on DevOps | @DevOpsSummit #DevOps #Microservices #CloudNative #Serverless

These slow, brittle, manual, error-prone deployments meant that newly developed features were taking longer and longer to actually be released to the market, and that on-boarding of new customers and applications was greatly delayed as well. The process of getting Dev work delivered into the hands of end-users became risky and unpredictable. For example, looking at the JIRA tickets the Ops team was spending its time on, only 12% of the time went to “revenue generating” activities – such as releasing new games or onboarding new customers. The majority of the time – 88% – was spent deploying bug fixes and patches.

read more

[slides] Scheduling in #Kubernetes | @DevOpsSummit #CloudNative #DevOps

Is advanced scheduling in Kubernetes achievable? Yes – but how do you properly accommodate every real-life scenario that a Kubernetes user might encounter? How do you leverage advanced scheduling techniques to shape and describe each scenario in easy-to-use rules and configurations? In his session at @DevOpsSummit at 21st Cloud Expo, Oleg Chunikhin, CTO at Kublr, answered these questions and demonstrated techniques for implementing advanced scheduling – for example, using spot instances and cost-effective resources on AWS, coupled with the ability to deliver a minimum set of functionalities covering the majority of needs, without configuration complexity.
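As one illustrative sketch of such a rule (not taken from the session itself): a pod can state a soft preference for cheaper spot-instance capacity through Kubernetes node affinity, assuming the cluster labels its spot nodes with the common `node.kubernetes.io/lifecycle: spot` convention – the label key and pod names here are assumptions, not prescribed by the session:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: batch-worker
spec:
  affinity:
    nodeAffinity:
      # Prefer (but do not require) cost-effective spot nodes;
      # the scheduler falls back to other nodes if none are free.
      preferredDuringSchedulingIgnoredDuringExecution:
      - weight: 100
        preference:
          matchExpressions:
          - key: node.kubernetes.io/lifecycle
            operator: In
            values: ["spot"]
  containers:
  - name: worker
    image: example/worker:latest
```

Because the affinity is `preferred` rather than `required`, the pod still schedules onto on-demand nodes when no spot capacity is available – a simple rule that captures the cost-versus-availability trade-off without configuration complexity.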

read more

Dovetailing #DevOps | @DevOpsSummit @CAinc #Serverless #CloudNative

As DevOps methodologies expand their reach across the enterprise, organizations face the daunting challenge of adapting related cloud strategies to ensure optimal alignment, from managing complexity to ensuring proper governance. How can culture, automation, legacy apps and even budget be reexamined to enable this ongoing shift within the modern software factory?
In her Day 2 Keynote at @DevOpsSummit at 21st Cloud Expo, Aruna Ravichandran, VP, DevOps Solutions Marketing, CA Technologies, was joined by a panel of industry experts and real-world practitioners who shared their insight into an emerging set of best practices that lie at the heart of today’s digital transformation.

read more