IBM lands six major European cloud deals


Clare Hopping

4 Jul, 2018

IBM has announced partnerships with six European firms using its cloud services to grow their AI, blockchain and analytics businesses.

It will work closely with Dutch logistics firm Koopman Logistics to build a track-and-trace solution using IBM’s blockchain technology. Koopman transports consignments across Europe and needed a secure technology to replace its paper-based tracking process; it now tracks consignments with digital records on IBM’s blockchain.

The second partnership IBM announced is with Italian multimedia organisation Gruppo 24 Ore, which is using the company’s IBM Watson AI services hosted on the IBM Cloud to help tax professionals respond to questions about the Italian tax coding system. IBM Watson was implemented to process 1.5 million documents relating to the financial system and glean the data it needs to advise professionals.

French bank Crédit Mutuel is also using IBM Watson on IBM’s Cloud environment in France (with a backup in Germany) to power the virtual assistants that help the company’s 20,000 relationship managers advise their customers.

Digital health company Teckel Medical is running its digital health checker on IBM Cloud, while RS Components is building its peer-to-peer marketplace – which enables startups to promote, test and sell their inventions online – in IBM’s London Cloud Garage.

Finally, IBM has announced a partnership with Osram AG, a lighting solutions company that has switched its operations to a digital environment powered by IBM Cloud, resulting in greater operational savings and flexibility.

“Enterprises across Europe are gravitating to the IBM Cloud because it helps them modernize their existing infrastructures by gaining access to exciting technologies like AI, blockchain, IoT, analytics and more,” said Sebastian Krause, general manager IBM Cloud Europe. 

“At the same time, these companies value IBM’s deep industry and business process expertise, along with IBM’s commitment to the responsible management of their enterprise data.”


G-Cloud 10 arrives with 3,500 suppliers


Joe Curtis

3 Jul, 2018

G-Cloud 10 is now live, with more than 3,500 suppliers listed on the latest iteration of the framework.

More than 90% of the 3,505 companies who’ll be competing for public sector contracts are SMBs, according to the Crown Commercial Service (CCS), and there are 649 more vendors listed on the new version of G-Cloud than there were on its previous incarnation.

“Small businesses are the backbone of our economy, delivering innovative solutions in partnership with the public sector, fuelling economic growth and supporting the delivery of efficient, effective public services that meet the needs of citizens,” said Oliver Dowden, minister for implementation, who oversees CCS.

“The success of G-Cloud demonstrates how we are breaking down the barriers for SMBs who want to supply to government.”

Government figures show that G-Cloud has racked up £3.1 billion in sales since its launch in 2012, with 48% of that going to SMBs.

But SMBs have criticised the framework for not letting them change their prices within a given iteration – e.g. G-Cloud 9 – if their own costs increase.

Nevertheless, suppliers welcomed the launch of the latest version, with UKCloud founder Simon Hansford, whose firm has listed services since G-Cloud’s inception, saying: “With each iteration the framework has seen enhanced functionality and an increased volume of transactions as it has supported a thriving ecosystem of UK tech SMBs that have succeeded in winning business through it.”

G-Cloud allows public sector departments to put cloud contracts up to tender to a wider pool of bidders that are often smaller than the big tech firms that have historically benefitted from UK government spending.

The arrival of G-Cloud 10 was in doubt for some time: the government originally said G-Cloud 9 would remain in place until May 2019, before rethinking that decision earlier this year.

A new framework means suppliers can list new services they provide and adjust their prices.

Google Cloud investigates automated customer service practices after complaint

Google Cloud Platform has said it will conduct a detailed review of its abuse prevention processes after a customer complained about how it was treated.

The unnamed customer, who works in the renewable energy industry, wrote in a Medium post that the company was a few days away from ‘losing everything’ after Google’s automated system pinged it for questionable activity.

Those who get pinged receive a variety of emails – a ‘barrage’, as the customer put it – explaining that each service is down, that the payments account has been temporarily closed, and what needs to be done about it. Chat support is switched off, with a warning that unless a picture of the credit card and a government-issued photo ID of the card holder are uploaded within three days, the project will be deleted.

The customer warned about the consequences if the card holder – in this instance, the CFO – was not available, and about the automated nature of the system.

“I understand Google’s need to monitor and prevent suspicious activity. But how you handle things after some suspicious activity is detected matters a lot,” the post explains. “You need a human element here – one that cannot be replaced by any amount of code/AI. You just can’t turn things off and then ask for an explanation.”

In a statement posted by Brian Bender, Google Cloud Platform engineering support regional lead, Google said it will re-evaluate the data sources used to assess potentially fraudulent activity, implement additional mechanisms for suspect accounts, and improve how it communicates account warnings. “Protecting our customers and systems are a top priority,” the statement added. “We sincerely apologise for this issue and are working quickly to make things better, not just for this customer but for all GCP customers.”

Given Google’s rise in the cloud infrastructure arena over the past 12 months – the company was listed in the leaders’ section for public cloud IaaS by Gartner in May – it is interesting to note that this was the customer’s first project built entirely on Google’s cloud. The customer was previously an AWS house, and while no technical reason was cited for the change – both are ‘on-par’, as the customer put it – there was a note on the differing customer experiences.

“In our experience AWS handles billing issues in a much more humane way,” the customer explained. “They warn you about suspicious activity and give you time to explain and sort things out. They don’t kick you down the stairs.”

Among other responses, Mike Kahn, Google Cloud customer engineer, noted the importance of having an enterprise user account rather than a consumer one – though one commenter described this approach as showing ‘borderline contempt’ for customers.

UK gov using emotion-detecting AI for digital content


Bobby Hellard

3 Jul, 2018

The UK government is using a type of artificial intelligence that can detect emotion on social media to measure and understand how people feel on certain topics.

Web science firm FlyingBinary has released the “artificial emotional intelligence” service to the government’s G-Cloud marketplace, in partnership with emotion recognition AI company Emrays B.V.

“The web has become a noisy space as online content grows exponentially,” said Professor Jacqui Taylor, CEO of FlyingBinary. “Where once tools were in the hands of a social team it is increasingly difficult for humans using social media monitoring to understand the signals about a brand, initiatives, good news or issues.”

“This service uses AI technology to understand digital content from an emotional perspective and how resonant this is with an online audience before content is shared online.”

The two companies have deployed the artificial emotional intelligence engine as part of a newly awarded G-Cloud 10 service built for the UK government.

FlyingBinary has vast experience in web science, GDPR and security, and has thus far supported almost 40,000 government organisations, helping them to understand the dynamics of emotions on the social web and in the mass media space.

Emrays’ emotion AI, on the other hand, is said to detect more than 20 distinct emotions in any digital content, which the company says can help businesses and governmental organisations measure and understand how people feel about any topic, from companies and brands to concepts.

The engine learns collective patterns of emotional reactions to digital content publicly available on the web. The emotion AI analyses and “feels” content on par with humans, based on more than one billion data points it has already been trained on. It uses a diverse set of human emotions, such as love, anger, surprise and shock.

Taylor added that no personal data is used by the AI engine and that it focuses instead on the content itself and the human emotion expressed.
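To give a flavour of what emotion detection from text looks like in practice, here is a minimal Python sketch using the open-source Hugging Face Transformers library and a publicly available seven-emotion model. It is purely illustrative and bears no relation to Emrays’ proprietary engine, which claims a far richer palette of more than 20 emotions:

    # Illustrative only: a public seven-emotion classifier, not Emrays' engine.
    from transformers import pipeline

    # A freely available model that labels text with one of seven basic
    # emotions: anger, disgust, fear, joy, neutral, sadness or surprise.
    classifier = pipeline(
        "text-classification",
        model="j-hartmann/emotion-english-distilroberta-base",
    )

    for post in ["I love this new service!", "Why has my account been suspended?!"]:
        top = classifier(post)[0]   # e.g. {'label': 'joy', 'score': 0.98}
        print(f"{post!r} -> {top['label']} ({top['score']:.2f})")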

FlyingBinary was one of thousands of small businesses that won the chance to bid to supply cloud computing services to government bodies through the major government procurement framework.

G-Cloud 10, which the government predicts has a potential worth of £600 million, gives central government, local councils, NHS Trusts and other public sector bodies a way to purchase cloud-based services, such as web hosting, from a single, central website.

Oliver Dowden, the Minister for Implementation, said: “Small businesses are the backbone of our economy, delivering innovative solutions in partnership with the public sector, fuelling economic growth and supporting the delivery of efficient, effective public services that meet the needs of citizens.

“The success of G-Cloud demonstrates how we are breaking down the barriers for SMEs who want to supply to the government.”


Avoiding cloud vendor lock-in: It’s all in the planning

The cloud computing world is becoming increasingly dominated by a small number of IT giants aiming to be your one-stop services shop. From infrastructure to software, one vendor could hypothetically provide an organisation’s entire IT system. Naturally, this is appealing to users that want to simplify their estate and partner with a well-established and reliable cloud host – but ask yourself, do you really want all your eggs in one basket?

With risks like hacking and large-scale distributed denial-of-service (DDoS) attacks on the rise, many organisations are now looking to keep their options open and spread their IT requirements across different service providers to improve redundancy and balance risk. However, some organisations already find themselves backed into a corner, locked into their primary, dominant vendor.

So, what causes vendor lock-in and how are the IT giants really driving this problem? What’s more, how can organisations avoid becoming locked into one cloud computing provider?

The lock-in challenge

Many of the larger-scale and legacy IT companies in the market have allowed their customers to become increasingly dependent on the single-provider model by making their technologies incompatible with other systems – meaning these vendors can make it very hard for organisations to switch later down the line. A range of issues, such as inefficient processes and extremely high costs, means migration between cloud vendors can be a time-consuming and expensive pain point. The alternative is to stick with a cloud provider that doesn’t meet your business needs.

One of the biggest mistakes that can leave an organisation locked in with a single vendor is a lack of planning in the initial stages of a deployment or migration. Before an organisation even looks to contact a cloud vendor, the IT team should do their homework and find out whether the service providers they are considering can meet their business needs. If they can, there shouldn’t be any reason for the organisation to leave later down the line anyway.

Even once all the research is out of the way, though, it can never hurt to have an exit plan should you want to switch vendor in the future. To draw a non-tech comparison: when entering into a marriage, some people opt for a pre-nup to ensure the division of assets is clearly set out, and choosing a cloud provider deserves an equally objective approach. For this reason, organisations should have a detailed implementation plan in place when signing contracts with their chosen provider, including the option to easily and cost-effectively migrate data out to a new provider if the need arises.

Keep your options open

Choosing the right vendor for your business should be based on a clear understanding of each individual cloud technology in the mix. An increasingly popular strategy is for organisations to opt for a multi-cloud approach that combines different types of cloud – private, public and hybrid – allowing them to reap the benefits of each without compromise. For this reason, organisations should consider not just one cloud provider but several, choosing the best fit for each need, such as backup, computing and disaster recovery.

This sort of approach should also be paired with keeping applications as flexible as possible – i.e. not vendor specific. For instance, avoid tightly coupling application components to vendor-specific cloud components, which creates a lot of mess later on if you want to move providers. It’s also sensible to back up your data regularly in an easily usable format to guard against future mishaps. Similarly, having separate security and disaster recovery options could be important for business continuity – it could be the difference between disaster and recovery.
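To make that flexibility concrete, here is a minimal Python sketch of a vendor-neutral storage interface. All names are hypothetical, and a real system would add error handling plus one implementation per provider; the point is that swapping vendors becomes a configuration change rather than a rewrite:

    from abc import ABC, abstractmethod

    class ObjectStore(ABC):
        """Vendor-neutral interface the application codes against."""
        @abstractmethod
        def put(self, key: str, data: bytes) -> None: ...
        @abstractmethod
        def get(self, key: str) -> bytes: ...

    class LocalStore(ObjectStore):
        """In-memory backend, handy for tests and local development."""
        def __init__(self):
            self._blobs = {}
        def put(self, key, data):
            self._blobs[key] = data
        def get(self, key):
            return self._blobs[key]

    # An AWS- or Azure-specific backend would implement the same two
    # methods; switching providers then means changing one line of
    # configuration rather than every call site.
    def make_store(provider: str) -> ObjectStore:
        if provider == "local":
            return LocalStore()
        raise ValueError(f"no backend registered for {provider!r}")

    store = make_store("local")
    store.put("invoices/123", b"...")
    assert store.get("invoices/123") == b"..."

The same pattern applies equally to queues, secrets management and identity: the narrower the vendor-specific surface area, the cheaper the exit.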

It’s also important to focus on the future and emerging tech trends, such as DevOps. Reconfiguring applications to run on a new platform is time-consuming and expensive, but using open platforms, such as Docker containers, means organisations can isolate software and run it on top of any infrastructure. Containers are also easy to relocate and rebuild, which saves a lot of hassle if and when you decide to move to a new provider. Configuration management tools can also be used to automate the configuration of your infrastructure.

But if you do just one thing to avoid vendor lock-in, it should be to create that exit plan. The big market players are bound to come up with more products and services to entice businesses away from their competition, and vendor lock-in has become an unfortunate side effect – good forward planning will need to go hand-in-hand with the future direction of the cloud industry.

Alibaba Cloud seeks partners for EMEA drive


Clare Hopping

3 Jul, 2018

Alibaba Cloud has launched its EMEA Ecosystem Partner Programme, with the aim of helping to develop the company’s presence in the region.

It’s already onboarded some big industry players, including Accenture, Altran, Ecritel, Hashicorp, Intel, Linkbynet, and Micropole.

Alibaba wants to focus on helping businesses in “targeted industries” with their digital transformation efforts, as well as developing talent and boosting cloud-powered innovation across the board.

“Our goal in EMEA is to bring powerful and elastic cloud services to our customers and create a well-connected, comprehensive ecosystem with our partners to accelerate cloud technology development in the regional cloud industry,” said Yeming Wang, general manager of Alibaba Cloud EMEA.

The company is now looking for more partners to tie up with in order to grow its EMEA business and is seeking startups as well as established companies to help it grow.

The announcement was made at the Ecosystem Summit EMEA 2018 hosted by Alibaba Cloud at Station F in Paris, where 400 people congregated from across the tech industry.

Representatives from private companies, public sector organisations, developers, engineers, channel partners and startups came together to discuss the future of the cloud and joined forces to think about how the industry can “incubate” a wider technology ecosystem and support startups.

“We are committed to foster innovation and nurture local talent, which is why we are excited to have held our Summit today at Station F, a campus which gathers a whole start-up ecosystem under one roof here in Paris. We hope that the Summit has inspired companies of all sizes and demonstrated Alibaba Cloud’s belief in working together for the future of the industry,” Wang added.

The IT Challenge: How to Manage Diverse Systems Consistently

Whoever is in charge of IT is always something of a flea-circus keeper: they must keep everything under control – every issue that might spring up on any device, anywhere in their organization. They are supposed to be enablers, opening up new areas of operation, but without losing control. Examples include (unofficial) bring-your-own-device (BYOD) […]


When AI meets DevOps: Getting the best out of both worlds

DevOps has been widely embraced by businesses under pressure to get competitively advantageous digital deliverables to market at the fastest possible cadence—especially given the reality of limited coder headcount and the need to rigorously avoid brand-toxic snafus in the customer experience. Artificial intelligence (AI), in stark contrast, is a potentially transformative digital discipline that is still very new to most enterprise IT organizations.

But while it’s certainly important that CIOs nurture AI adoption with appropriately resourced pilots, it’s also essential to link nascent AI efforts to maturing DevOps concept-to-production pipelines. Here’s why.  

The data science silo

“AI” has become a catch-all term to describe a broad range of algorithm-based disciplines such as machine learning and natural language processing capable of discovering patterns, trends and anomalies in large volumes of diverse data. Given the wealth of data increasingly available to businesses, this AI-based discovery can potentially deliver significant benefits—from anticipating customer needs to identifying emerging market risk.

The algorithms that fuel AI, however, bear little resemblance to classic application code. Code is written by developers to execute actions in some logical sequence. If you want to change those actions, developers have to change the code.

Algorithms, on the other hand, are crafted by data scientists to tease hidden insights out of data. Data scientists may certainly tweak those algorithms over time to enhance the resulting insights—but, to a large extent, well-crafted algorithms inherently respond to change without explicit human intervention.

Due to these unique characteristics and skill-sets, organizations typically initiate their AI efforts in sandboxed pilots where the main challenge is determining which types of algorithm can uncover the insights that are most valuable—which typically also means most actionable.

This experimentation is good and fitting. It’s tough to onboard data science talent, and it’s tough to connect raw technical data science talent to the real-world needs of the business. So we all have a lot of learning to do when it comes to AI.

That learning can’t remain in a silo forever, though.

Escaping the AI island

In an increasingly digital marketplace, actions take place in code. For the insights revealed by our new AI environments to actionably impact businesses, they must be acted on programmatically.

In some cases, this programmatic action may mean sending an alert to a customer’s phone; in others, changing the price of a SKU, or re-prioritizing workflow for internal staff.

Regardless of the specific use-case, there is clearly a need to connect AI insights with application code.

This has several implications when it comes to DevOps. For one thing, developers must be able to code and test calls to AI systems in much the same way as they do to databases and other resources.

For another, ops teams must be able to ensure that the new generation of hybrid AI-application systems reliably perform at required levels even as workloads spike. Such performance SLAs can be particularly challenging given the intensity and volatility of AI processing.
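To make those two requirements concrete, here is a minimal Python sketch – all names are hypothetical – of an AI call wrapped the way a database call might be: mockable for developer tests, with a latency budget that ops teams can monitor and alert on:

    import time
    from typing import Protocol

    class ScoringService(Protocol):
        """The AI call expressed as an interface, so tests can swap in a
        stub exactly as they would mock a database client."""
        def score(self, features: dict) -> float: ...

    class StubModel:
        """Deterministic stand-in for unit tests and CI pipelines."""
        def score(self, features: dict) -> float:
            return 0.5

    def score_with_slo(model: ScoringService, features: dict,
                       budget_s: float = 0.2, fallback: float = 0.0) -> float:
        """Invoke the model, falling back to a safe default when the call
        fails or overruns its latency budget, so a slow model degrades one
        feature rather than stalling the whole application."""
        start = time.monotonic()
        try:
            result = model.score(features)
        except Exception:
            return fallback              # treat AI failure like a DB outage
        if time.monotonic() - start > budget_s:
            return fallback              # overran the SLO: use the default
        return result

    assert score_with_slo(StubModel(), {"basket_value": 42.0}) == 0.5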

Change management is another key consideration. Developers must preserve the integrity of AI calls, even when they add, delete, or modify other aspects of their applications. And, conversely, when data science teams modify their AI environments, we must somehow ensure that there aren’t unexpected adverse impacts on end-to-end system behaviours.

Security and compliance are considerations as well. AI ingests and egests a lot of potentially sensitive data. The safety and proper governance of that data in these increasingly complex environments don’t just happen. Nor should they be grafted onto systems as an afterthought. That’s another reason AI, security, and DevOps – or, as many of us have taken to calling it, DevSecOps – must come together.

Chaperoning the DevOps-AI courtship

Given the imperatives above, CIOs and other digital leaders in the enterprise need to take several steps now to ensure that any future relationship between AI and DevOps will be a cordial and productive one.

Suggested steps include:

  • Begin mapping processes and workflows in your DevSecOps toolchain that will provide the same automation, QA, and auditability for AI integrations as you presently implement for APIs, database calls, cloud connectivity, and the like (see the sketch after this list).
  • Ensure that your data governance methods and technologies can be uniformly applied across platforms, environments, and data sources.
  • Get your DevOps and data science people together. Their tools, skills, and cultures may be markedly dissimilar—but ultimately, for your business to win, they will have to collaborate in much the same way as we are driving developers, QA teams, ops staff, security professionals, and business analysts to collaborate.
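On the first of those points, here is a rough Python sketch of what such auditability might look like. The decorator and the AI call are hypothetical, and a production system would ship these records to a durable audit store rather than a plain log:

    import functools, json, logging, time

    log = logging.getLogger("ai_audit")
    logging.basicConfig(level=logging.INFO)

    def audited(call):
        """Wrap an AI integration so every invocation leaves an audit record,
        mirroring the logging many teams already apply to database calls."""
        @functools.wraps(call)
        def wrapper(*args, **kwargs):
            start = time.time()
            result = call(*args, **kwargs)
            log.info(json.dumps({
                "integration": call.__name__,
                "latency_ms": round((time.time() - start) * 1000, 1),
                "inputs": repr((args, kwargs))[:200],   # truncated: may be sensitive
                "output": repr(result)[:200],
            }))
            return result
        return wrapper

    @audited
    def classify_ticket(text: str) -> str:
        # Hypothetical AI call; a real one would hit a model endpoint.
        return "billing" if "invoice" in text else "general"

    classify_ticket("Where is my invoice?")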

AI will transform business in the coming years. But it won’t do so by itself. Only in concert with a holistic approach to digital transformation can businesses reap the full potential value of AI.

Formula 1 races to AWS as official cloud provider, cites importance of machine learning capabilities

It has been described by Citrix as a ‘never-ending technology arms race to optimise performance’ – and now Formula 1 has gotten a further boost by selecting Amazon Web Services (AWS) as its official cloud and machine learning provider.

The move will see Formula 1 shift the vast majority of its infrastructure from on-premises data centres to AWS, and use a variety of products to help improve broadcasts, data tracking, and race strategies.

Amazon SageMaker – AWS’ service to help developers build, train and deploy machine learning models – will be put to work by Formula 1’s team of data scientists on more than 65 years of race data. The data, collected in real time by Amazon Kinesis and stored in Amazon DynamoDB and cold storage product Glacier, will be crunched to extract performance statistics and make predictions for upcoming races.
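As a rough illustration of how those pieces fit together – the stream and table names below are hypothetical, and the real pipeline is far more involved – the ingestion side might look something like this using boto3, the AWS SDK for Python:

    import json
    import boto3  # the AWS SDK for Python

    kinesis = boto3.client("kinesis", region_name="eu-west-1")
    dynamodb = boto3.resource("dynamodb", region_name="eu-west-1")

    def publish_sample(sample: dict) -> None:
        """Push one real-time telemetry sample onto a Kinesis stream."""
        kinesis.put_record(
            StreamName="f1-telemetry",        # hypothetical stream name
            Data=json.dumps(sample),
            PartitionKey=sample["car_id"],    # keeps each car's data ordered
        )

    def store_sample(sample: dict) -> None:
        """Persist a processed sample to DynamoDB for later analysis."""
        dynamodb.Table("lap-times").put_item(Item=sample)   # hypothetical table

    publish_sample({"car_id": "44", "lap": "12", "sector_1_ms": "28450"})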

Other AWS products being utilised by Formula 1 are AWS Lambda for serverless capabilities, and AWS Elemental Media Services for greater video options.

“For our needs, AWS outperforms all other cloud providers, in speed, scalability, reliability, global reach, partner community, and breadth and depth of cloud services available,” said Pete Samara, Formula 1 director of innovation and digital technology, in a statement. “By leveraging Amazon SageMaker and AWS’s machine learning services, we are now able to deliver these powerful insights and predictions to fans in real time.”

Formula 1 is by no means the first customer to cite machine learning as a key element of its future strategy. In May, Ryanair announced it was going all-in on AWS, saying greater data insights and better customer experience through machine learning were vital to its decision. The airline is using Amazon Lex, the technology underpinning smart assistant Alexa, on a trial basis.

Why optimal hybrid cloud champions will lead the market

Vendor revenue from sales of infrastructure products — server, storage, and Ethernet switch — for cloud IT grew by 45.5 percent year-over-year in the first quarter of 2018 (1Q18), reaching $12.9 billion according to the latest worldwide market study by International Data Corporation (IDC).

IDC also raised its forecast for total spending on cloud IT infrastructure in 2018 to $57.2 billion, with year-over-year growth of 21.3 percent. Let's consider the key trends that are driving this phenomenon. What really matters most, going forward?

Cloud infrastructure market development

Public cloud infrastructure quarterly revenue has more than doubled in the past three years to $9 billion in 1Q18, growing 55.8 percent year-over-year. Private cloud revenue reached $3.9 billion for an annual increase of 26.5 percent.

The combined public and private cloud revenues now represent 46.1 percent of total worldwide IT infrastructure spending, up from 41.8 percent a year ago. Traditional (non-cloud) IT infrastructure revenue grew 22 percent from a year ago – although it has declined over the past several years, at $15.1 billion in 1Q18 it still represents 53.9 percent of total worldwide IT infrastructure spending.

"Hyperscaler datacenter expansion and refresh continued to drive overall cloud IT infrastructure growth in the first quarter," said Kuba Stolarski, research director at IDC. "While all infrastructure segments continued their strong growth, public cloud has been growing the most."

IDC expects this trend to continue through the end of 2018. Digital transformation initiatives such as edge computing and machine learning have been bringing new enterprise workloads into the cloud, driving up the demand for higher density configurations of cores, memory, and storage.

As systems technology continues to evolve towards pooled resources and composable infrastructure, the emergence of these next-generation workloads will drive net new growth beyond traditional enterprise workloads.

All regions grew their cloud IT infrastructure revenue by double digits in 1Q18. Asia-Pacific (excluding Japan) grew revenue the fastest, by 74.7 percent year-over-year.

Next were the U.S. market (43.6 percent), Middle East & Africa (42.3 percent), Central and Eastern Europe (39.2 percent), Latin America (37.7 percent), Canada (29.4 percent), Western Europe (26.1 percent), and Japan (15 percent).

IDC's cloud IT infrastructure forecast measures total spend (vendor recognized revenue plus channel revenue). Of the $57.2 billion in cloud IT infrastructure spend forecast for 2018, public cloud will account for 67 percent of the total, growing at an annual rate of 23.6 percent. Private cloud will grow at 16.7 percent year-over-year.

That said, worldwide spending on traditional 'non-cloud' IT infrastructure is expected to grow by just 4.2 percent in 2018 as enterprises continue to refresh their legacy platforms. Traditional IT infrastructure will account for 54 percent of total end user spending on IT infrastructure products — that's down from 57.8 percent in 2017.

Outlook for cloud computing growth

For traditional infrastructure, this represents a decelerating share loss compared to the previous four years. Moreover, the growing share of cloud environments in overall spending on IT infrastructure is common across all regions.

Long-term, IDC expects spending on cloud IT infrastructure to grow at a five-year compound annual growth rate (CAGR) of 10.5 percent — reaching $77.7 billion in 2022, and accounting for 55.4 percent of total IT infrastructure spend.
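Those figures hang together arithmetically, as a quick Python check shows (treating 2017 as the base year is our inference; IDC does not state it):

    # A 10.5% CAGR over five years ending at $77.7bn implies a base of:
    end_2022, cagr, years = 77.7, 0.105, 5
    base = end_2022 / (1 + cagr) ** years
    print(f"implied base-year (2017) spend: ${base:.1f}bn")   # ~$47.2bn

    # That base is consistent with the 2018 forecast quoted earlier:
    # growing $47.2bn by the stated 21.3% gives roughly $57.2bn.
    print(f"implied 2018 spend: ${base * 1.213:.1f}bn")       # ~$57.2bn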

Public cloud datacenters will account for 64.7 percent of this amount, growing at a 10.2 percent CAGR. Spending on private cloud infrastructure will grow at a CAGR of 11.1 percent.

Some analysts already believe that it doesn't matter who leads the cloud infrastructure market, since it's essentially a commodity business with rapidly shrinking profit margins. So, what does really matter? Which vendors are best positioned to champion and lead the 'Optimal Hybrid Cloud' environment?