[session] Serverless Machine Learning Operations | @CloudExpo @Hydrospheredata #AI #ML #Serverless

Any startup has to have a clear go-to-market strategy from the beginning. Similarly, any data science project needs a go-to-production strategy from its first days, so it can move beyond proof of concept.
Machine learning and artificial intelligence in production mean hundreds of training pipelines and machine learning models that are continuously revised by teams of data scientists and seamlessly connected with web applications for tenants and users.


Microsoft joins Cloud Native Computing Foundation, launches new container service

Microsoft has joined the Cloud Native Computing Foundation (CNCF), a San Francisco-based organisation that aims to sustain container and microservices architectures, as a platinum member.

The Redmond giant joins 14 other companies in the platinum membership category, including Docker, Google (which originally designed Kubernetes before donating it to the CNCF) and IBM.

The foundation’s mission statement is to “create and drive the adoption of a new computing paradigm that is optimised for modern distributed systems environments capable of scaling to tens of thousands of self-healing multi-tenant nodes.”

Microsoft said joining the CNCF was ‘another natural step’ on its open source journey. The company last month joined the Cloud Foundry Foundation, paying $100,000 per annum for three years.

“We are honoured to have Microsoft, widely recognised as one of the most important enterprise technology and cloud providers in the world, join CNCF as a platinum member,” said Dan Kohn, CNCF executive director, in a statement. “Their membership, along with other global cloud providers that also belong to CNCF, is a testament to the importance and growth of cloud native technologies.

“We believe Microsoft’s increasing commitment to open source infrastructure will be a significant asset to the CNCF,” Kohn added.

Alongside this, Microsoft has announced the launch of Azure Container Instances (ACI), which aims to deliver containers simply and efficiently without the effort of managing virtual machine infrastructure. The company also introduced the ACI Connector for Kubernetes in open source, which enables Kubernetes clusters to deploy to Azure Container Instances. “ACIs are the fastest and easiest way to run a container in the cloud,” wrote Corey Sanders, Azure director of compute, adding it was the first service of its kind.

The product is available in public preview for Linux customers, with Windows support following ‘in the coming weeks’, the company said.
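For readers who want to try the service, here is a minimal sketch of launching a container on ACI by shelling out to the Azure CLI. It assumes the `az` CLI is installed and authenticated; the resource group, container name and image below are hypothetical examples, not values from the announcement.

import subprocess

def run_container(resource_group, name, image):
    # Create a container group on Azure Container Instances.
    subprocess.run(
        ["az", "container", "create",
         "--resource-group", resource_group,
         "--name", name,
         "--image", image,
         "--cpu", "1", "--memory", "1.5"],
        check=True)
    # Query the resulting state; `az container show` returns JSON.
    subprocess.run(
        ["az", "container", "show",
         "--resource-group", resource_group,
         "--name", name,
         "--query", "provisioningState"],
        check=True)

run_container("demo-rg", "hello-aci", "microsoft/aci-helloworld")

Note there is no VM to size or cluster to join here, which is the point Sanders is making: the container group itself is the unit you create, bill and delete.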

Read more: A comparison of Azure and AWS microservices solutions

Don’t allow an ‘always on’ mentality to dictate your backup strategy

Consumers’ expectations of always-on technology are continually rising, and businesses are looking at where they need to make changes to support this. Users expect their applications to be available and performing optimally, treating this almost as a basic human right, and even the most minor disruptions cause uproar.

A recent survey by digital operations management provider PagerDuty found that resolving consumer-impacting incidents takes IT teams approximately double the amount of time consumers are willing to wait for a service that isn’t performing. This level of expectation has translated into the workplace, where users expect 100% access to more applications across multiple devices.

In addition, digital-native millennials make up an increasing proportion of the workforce, and they expect instant access to information, having grown up with broadband, smartphones, laptops and social media as the norm. A recent PwC study showed that 59% of millennials surveyed cite a prospective employer’s technology provision as crucially important when choosing a job.

This demonstrates just how important it is for users to have access to key applications at any time. This demand requires a complex virtualised infrastructure, while aggressive performance SLAs are on the rise and corporate management teams continue to squeeze software licensing costs, creating a challenging environment for even the most efficient IT department.

Why backing up everything the same way doesn’t always work

With such high expectations for application availability, organisations can be tempted to reflect this approach in their backup and recovery strategies to ensure that users can access all applications regardless. Even with the best IT operations in practice, system failures can be caused by any number of things, from natural disasters to human error and power outages. In the event of catastrophe, users may want their corporate coffee voucher app back up and running in time for Monday morning; however, protecting all data and application code in the same way is not always the most economical approach, regardless of storage technique.

In a world where IT directors and businesses are faced with an increasingly complex application set and a growing number of on-premises, cloud and hybrid storage options, not to mention pressures to save money and innovate, IT teams should resist the temptation to bet the house on one backup methodology. When assessing backup and recovery options, businesses should look to a hybrid storage model to meet their individual needs. The most common enterprise use for cloud storage today is in fact off-site backup and archiving, and with a hybrid cloud storage model companies can use a combination of on-premises storage and storage in the public cloud to deliver even better value. For example, they may choose to migrate non-mission-critical applications to the public cloud and keep critical ones on-premises for increased security.

A structured approach to restoration

Once a storage strategy has been implemented, it’s important that organisations don’t deploy just one backup methodology. Instead, they should use tools that can forecast acceptable risk profiles for the redeployment of applications, including analysis of licensing costs, budgeting of capital expenditure and assessment of application availability versus business risk.

Whilst some applications are mission-critical to the running of a business, for example trading tools in the financial industry, others, such as timesheets for recording employee activity, can endure downtime with fewer ramifications. An estimated 20 per cent of organisations’ applications are non-mission-critical, so maximising resource utilisation and improving virtual application performance across hybrid environments will be key to ensuring business continuity in the event of a catastrophic breach.
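As an illustrative sketch only (the tier names and numbers below are hypothetical examples, not drawn from the article or any vendor), a backup policy differentiated by application criticality might look like this in code:

from dataclasses import dataclass

@dataclass
class BackupPolicy:
    backup_interval_hours: int  # how often backups are taken (drives RPO)
    recovery_time_hours: int    # target time to restore service (RTO)
    offsite_copy: bool          # replicate to public cloud storage?

# Hypothetical tiers: mission-critical apps get tight objectives, while
# non-critical apps (roughly 20% of the estate) tolerate longer downtime.
POLICIES = {
    "mission_critical": BackupPolicy(1, 1, True),    # e.g. trading tools
    "business_standard": BackupPolicy(24, 8, True),
    "non_critical": BackupPolicy(168, 72, False),    # e.g. timesheets
}

def policy_for(tier):
    # Unclassified applications default to the strictest tier,
    # erring on the side of protection.
    return POLICIES.get(tier, POLICIES["mission_critical"])

print(policy_for("non_critical"))

The design point is simply that the policy is a per-tier decision, not a single global setting applied to every application.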

With the right tools in place, and a multi-option approach that clearly forecasts backup and recovery implications, organisations can deploy the most cost-effective mix of applications and data at every phase, and ensure backup and recovery judgement isn’t clouded by the demands of their users. The key is understanding that every business is on its own journey of digital transformation and will be at a different stage within a growing virtualised environment, needing a tailored backup and recovery plan rather than simply backing up every application.

Just launched: Parallels Mac Management v6 for Microsoft SCCM

Today, Parallels® unveiled Parallels Mac Management v6 for Microsoft SCCM, which extends Microsoft® System Center Configuration Manager (SCCM) functionality and enables IT managers to use one pane of glass to manage both PC and Mac® computers with ease, increased efficiency, and higher productivity. The feedback from our v6 beta testers has been fantastic and we […]


[slides] Million Dollar SaaS with Kubernetes | @CloudExpo @Supergiantio #SaaS #Cloud #DevOps

In his session at 20th Cloud Expo, Mike Johnston, an infrastructure engineer at Supergiant.io, discussed how to use Kubernetes to set up a SaaS infrastructure for your business. He has over 12 years of experience designing, deploying, and maintaining server and workstation infrastructure at all scales, spanning brick-and-mortar data centers as well as cloud providers like Digital Ocean, Amazon Web Services, and Rackspace. His expertise is in automating deployment, management, and problem resolution in these environments, allowing his teams to run large transactional applications with high availability and the speed the consumer demands.


Thwarting Ransomware Attacks | @CloudExpo #Cloud #Cybersecurity

As we have seen over and over again, a new wave of ransomware attacks has been plaguing large parts of Europe over the last couple of weeks. While the affected individuals and organizations are struggling with the very tangible business impact of the loss of revenue and operations, it’s critical to step back and review what else one could do to mitigate and minimize the damage from such attacks in the future.


How enterprises can catch up on user experience – by reengineering the network

In today’s digital marketplace, business agility is praised above all else. Agility, after all, provides the opportunity to change services on a dime and ensure that customer needs are met and expectations exceeded. But agility is a big concept, and it is often ill-defined and poorly executed. However, if your company wants to keep its competitive edge, it has to achieve this goal. As such, the question becomes: how do you do it?

The answer might be simpler than you think. It lies in better user experience. By dramatically improving how employees and customers interact with websites, applications and devices within the business, you can potentially change, and at the very least enhance, your company’s fortunes. This is because good user experience allows employees to be more productive in the work environment, and it affords them the ability to evolve quickly with changing customer requirements, thereby helping your company to deliver better customer engagement. New technologies play a major role in achieving great user experience, and therefore business agility. This is also the reason why so many organisations are pursuing a digital business strategy, shifting away from legacy IT services and adopting cloud-based services instead.

The difficulty most companies face in delivering a good IT user experience, despite building more agile systems and moving data centres, applications and storage to the cloud, lies in their networks. While the digital world is moving rapidly forward, not much has changed in networks in the last 20 years. Two decades is a lifetime in technology terms, and this is why so many organisations find themselves restricted in their ability to change fast enough to support their digital growth and agility ambitions.

To achieve business agility, and deliver on the promise of better user experience of applications hosted in the cloud, companies must take a new approach to network connectivity in the enterprise. The time for change has come — and the right technology is already available. SD-WAN is the key to meeting agile business needs efficiently and effectively.

Challenged by infrastructure

The reality is, your business is only as agile, flexible and user-focused as your network allows it to be. The majority of companies, and their IT departments, are still struggling to stay in control of complex networks. There are a number of reasons for this. Firstly, a large number of organisations are building applications and storing information both in the cloud and on on-premises systems. These hybrid models are popular, but they require complex changes to existing antiquated networks, which many simply can’t handle. The manual changes and hundreds of commands needed to configure the network can be an incredibly time- and labour-intensive effort that can take months to carry out, and they are prone to error, severely increasing the risk of network downtime.

Another reason can be found within your end-user pool. Employees within most organisations now use a range of devices to access more and more cloud-based applications and services. Across all these platforms and applications, end users expect consistent levels of network availability and performance, even when they are spread across the globe in remote locations and branch offices. Networks can strain under these circumstances, which in turn affects application performance and bandwidth availability for business-critical applications, and ultimately impacts user experience. In fact, Riverbed Technology’s Global Application Performance Survey revealed that 89% of executives said that poor application performance negatively impacts their work on a regular basis, which is no way to foster productivity or agility.

The key to optimisation

These problems require more than a patchwork approach to upgrading legacy network architectures. Simply reacting to network hotspots and user complaints is a recipe for failure; the only winning strategy is to get out in front of the challenges.

Providing effective long-term solutions requires a rethink of networking itself. Today’s organisations need a solution that fundamentally redefines the way networking is done, to better align with new, cloud-centric business practices. It needs to be more than an incremental replacement of manual processes with automated ones, and should take the form of a holistic approach that supports business-aligned orchestration and management of the network.

This is where SD-WAN can help. It enables organisations to make on-the-fly adjustments to network performance and application delivery from a centralised location, using embedded services such as application optimisation and QoS, and can therefore meet businesses’ ever-changing needs as they arise. Ultimately, this translates into reduced costs and operational complexity, as well as increased agility to deliver superior-performing applications and experiences to users.

Technology delivers user experience

SD-WAN allows businesses to streamline and simplify how hybrid networks are deployed and managed. This in turn enables organisations to provide fast and secure delivery of applications hosted on-premises or in the public cloud to employees, boosting end-user experience, agility and productivity.

SD-WAN also enables organisations to direct traffic and deploy advanced network services from a centralised location, with mere clicks of a mouse. Reducing the time to market through efficient configuration and provisioning is vital in a world that is constantly evolving — and the good news is much of this can be automated. What’s more, an app-centric SD-WAN will automatically identify the applications in the organisation’s network and group them into logical categories based on business criticality, and apply network-service policies to those categories based on built-in best practices.

The benefits of flexibly reengineering your company’s network are endless. An application-aware SD-WAN can automatically route user sessions to cloud-hosted services; send voice traffic along the highest-quality network paths; segregate employee traffic from that of partners and customers; and send recreational internet traffic through the most rigorous firewalls, all adding to risk mitigation.
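As a vendor-neutral illustration only (the category names, path labels and name-to-category table below are hypothetical, not a real SD-WAN API), the kind of category-to-policy mapping described above might be sketched like this:

# Hypothetical mapping of application categories to network paths and QoS.
CATEGORY_POLICY = {
    "voice": {"path": "lowest_latency", "qos": "expedited"},
    "saas": {"path": "direct_internet_breakout", "qos": "assured"},
    "partner": {"path": "segregated_vpn", "qos": "assured"},
    "recreational": {"path": "firewall_inspection", "qos": "best_effort"},
}

def classify(app_name):
    # A real SD-WAN identifies applications by inspecting traffic; this
    # stand-in keys off a hypothetical name-to-category table instead.
    known = {"sip": "voice", "office365": "saas", "youtube": "recreational"}
    return known.get(app_name, "saas")

def policy_for(app_name):
    return CATEGORY_POLICY[classify(app_name)]

print(policy_for("sip"))  # voice traffic takes the lowest-latency path

The point of the centralised model is that changing one entry in such a table changes behaviour across every branch at once, instead of requiring per-device reconfiguration.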

SD-WAN can offer organisations a much more holistic approach that makes orchestrating enterprise and cloud connectivity easier and more cost-effective, which is why the SD-WAN market is expected to grow significantly in the next several years. According to Gartner, by the end of 2019, 30% of enterprises will use SD-WAN products in remote branches, up from less than 1% at the end of 2015.

In today’s digitally driven world, where business velocity is increasing and end users have higher expectations of technology, it is imperative to rethink the way things have always been done. Until organisations rethink their network structure, they will struggle to deliver a good user experience and will find agility unachievable.

Public cloud services revenue will reach $266 billion, says IDC

The shift of IT workloads to hybrid cloud computing continues to grow, fuelled in part by the rise of digital transformation projects. Worldwide spending on public cloud services and infrastructure is forecast to reach $266 billion in 2021, according to the latest market study by International Data Corporation (IDC).

Although spending growth will likely slow during 2016-2021, the market is still expected to achieve a five-year compound annual growth rate (CAGR) of 21 percent. Moreover, public cloud services spending will reach $128 billion in 2017 — that’s an increase of 25.4 percent over 2016.
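As a quick back-of-the-envelope check (the calculation is mine, not IDC’s), those headline figures are mutually consistent under compound growth:

spend_2017 = 128.0              # $B in 2017, per the IDC study
base_2016 = spend_2017 / 1.254  # back out 2016 spending from 25.4% growth
cagr = 0.21                     # five-year CAGR, per the IDC study
forecast_2021 = base_2016 * (1 + cagr) ** 5
print(round(forecast_2021))     # ~265, close to the stated $266 billion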

Public cloud market development

The United States will be the largest market for public cloud services, accounting for more than 60 percent of worldwide revenues throughout the forecast period, with total spending of $163 billion in 2021.

Western Europe and Asia-Pacific excluding Japan (APeJ) will be the second and third largest regions with 2021 spending levels of $52 billion and $25 billion, respectively. APeJ and Latin America will experience the fastest spending growth over the forecast period with CAGRs of 26.7 percent and 26.2 percent, respectively.

However, according to the IDC assessment, six of the eight regions are forecast to experience CAGRs greater than 20 percent over the next five years.

“In Western Europe, the public cloud market is going to more than double in the 2016-2021 time frame led by strong spending growth in Germany, which is also the largest national market, Italy, and Sweden,” said Angela Vacca, senior research manager at IDC.

The U.S. industries that will see the fastest growth in public cloud services spending are professional services (21.5 percent CAGR), media (21 percent CAGR), retail, and telecom (each with a CAGR of 20.9 percent).

The U.S. industries that will spend the most on public cloud services are discrete manufacturing, professional services, and banking. Together, these three industries will account for nearly one third of all public cloud services spending in the United States in 2021.

In Asia-Pacific excluding Japan, banking, professional services, and telecom will deliver more than a third of the region’s public cloud services spending in 2021, while the industries with the fastest spending growth will be professional services, personal and consumer services, and process manufacturing.

Software as a service (SaaS) will remain the dominant cloud computing type, capturing two thirds of all public cloud spending in 2017 and nearly 60 percent in 2021. Spending on infrastructure as a service (IaaS) and platform as a service (PaaS) will grow at much faster rates than SaaS, with five-year CAGRs of 30 percent and 29.7 percent, respectively.

Outlook for enterprise adoption growth

In terms of company size, more than half of all public cloud spending will come from very large businesses (those with more than 1,000 employees), while medium-sized businesses (100-499 employees) will deliver about 20 percent of spending throughout the forecast.

Large businesses (500-999 employees) will see the fastest growth with a five-year CAGR of 22.8 percent. While purchase priorities vary somewhat depending on company size, the leading product categories include CRM and ERM applications in addition to server and storage hardware.

Announcing @SkyScaleLLC to Exhibit at @CloudExpo Silicon Valley | #API #Cloud #Security

SYS-CON Events announced today that SkyScale will exhibit at SYS-CON’s 21st International Cloud Expo®, which will take place on Oct 31 – Nov 2, 2017, at the Santa Clara Convention Center in Santa Clara, CA.
SkyScale is a world-class provider of cloud-based, ultra-fast multi-GPU hardware platforms for lease to customers desiring the fastest performance available as a service anywhere in the world. SkyScale builds, configures, and manages dedicated systems strategically located in maximum-security facilities, allowing customers to focus on results while minimizing capital equipment investment.
