Semantic technology: is it the next big thing or just another buzzword?

Most buzzwords circulating right now describe attention-grabbing products: virtual reality headsets, smart watches, internet-connected toasters. Big Data is the prime example: many firms market themselves around the term and its technologies while it's 'of the moment', but are they really innovating, or simply adding marketing hype to their existing technology? Just how 'big' is their Big Data?

On the surface, one would expect semantic technology to face similar problems. However, the underlying technology requires a much more subtle approach. The technology is at its best when it's transparent, built into a set of tools that analyse, categorise and retrieve content and data before it's even displayed to the end user. While this means it may not experience as much short-term media buzz, it is profoundly changing the way we use the internet and interact with content and data.

This is much bigger than Big Data. But what is semantic technology? Broadly speaking, semantic technologies encode meaning into content and data to enable a computer system to possess human-like understanding and reasoning. There are a number of different approaches to semantic technology, but for the purposes of this article we'll focus on 'Linked Data'. In general terms this means creating links between data points within documents and other forms of data containers, rather than between the documents themselves. It is in many ways similar to what Tim Berners-Lee did in creating the standards by which we link documents, just on a more granular scale.

Existing text analysis techniques can identify entities within documents. For example, in the sentence “Haruhiko Kuroda, governor of Bank of Japan, announced 0.1 percent growth,” ‘Haruhiko Kuroda’ and ‘Bank of Japan’ are both entities, and they are ‘tagged’ as such using specialised markup language. These tags are simply a way of highlighting that the text has some significance; it remains with the human user to understand what the tags mean.


[Figure 1: Tagging]

Once tagged, entities can be recognised and have information from various sources associated with them. Groundbreaking? Not really. It's easy to tag content so that the system knows that "Haruhiko Kuroda" is a type of 'person'; however, this still requires human input.

[Figure 2: Named entity recognition]
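To make the tagging step concrete, the sketch below uses the open-source spaCy library to pull named entities out of the example sentence. The library, the model name and the exact labels are illustrative assumptions; they are not the specific tooling described in this article.

```python
# Minimal named-entity recognition sketch (illustrative only; spaCy and the
# "en_core_web_sm" model are assumptions, not the tooling described above).
import spacy

nlp = spacy.load("en_core_web_sm")  # small pretrained English pipeline
doc = nlp("Haruhiko Kuroda, governor of Bank of Japan, announced 0.1 percent growth.")

# Each recognised entity carries a span of text plus a type label
for ent in doc.ents:
    print(ent.text, ent.label_)

# Typical output:
#   Haruhiko Kuroda  PERSON
#   Bank of Japan    ORG
#   0.1 percent      PERCENT
```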

Where semantics gets more interesting is in the representation and analysis of the relationships between these entities. Using the same example, the system is able to create a formal, machine-readable relationship between Haruhiko Kuroda, his role as the governor, and the Bank of Japan.

[Figure 3: Relation extraction]
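As a rough illustration of what such a machine-readable relationship looks like, the sketch below writes the fact down as a Linked Data triple using the open-source rdflib library. The example.org namespace and the property name are invented for this example.

```python
# Representing "Haruhiko Kuroda is governor of the Bank of Japan" as a triple.
# rdflib is used here purely for illustration; the identifiers are invented.
from rdflib import Graph, Namespace

EX = Namespace("http://example.org/")

g = Graph()
g.bind("ex", EX)
g.add((EX.Haruhiko_Kuroda, EX.governorOf, EX.Bank_of_Japan))

print(g.serialize(format="turtle"))
# Output includes:  ex:Haruhiko_Kuroda ex:governorOf ex:Bank_of_Japan .
```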

For this to happen, the pre-existing environment must be defined. For the system to understand that 'governor' is a 'job' that exists within the entity 'Bank of Japan', a rule must exist which states this as an abstraction. This set of rules is called an ontology.

Think of an ontology as the rule-book: it describes the world in which the source material exists. If semantic technology were used in the context of pharmaceuticals, the ontology would be full of information about classifications of diseases, disorders, body systems and their relationships to each other. If the same technology were used in the context of the football World Cup, the ontology would contain information about footballers, managers, teams and the relationships between those entities.
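As a sketch of what such a rule-book can look like in practice, the snippet below states a few RDFS axioms for the banking example: which classes exist and what the governorOf property connects. All the names are illustrative assumptions rather than a real published ontology.

```python
# A toy ontology ("rule-book") for the banking example, expressed as RDFS
# statements. Class and property names are invented for illustration.
from rdflib import Graph, Namespace, RDFS

EX = Namespace("http://example.org/")

ontology = Graph()
ontology.bind("ex", EX)

# Classes: the kinds of things that exist in this world
ontology.add((EX.Bank, RDFS.subClassOf, EX.Organisation))
ontology.add((EX.CentralBankGovernor, RDFS.subClassOf, EX.Person))

# The governorOf property links a central-bank governor to a bank
ontology.add((EX.governorOf, RDFS.domain, EX.CentralBankGovernor))
ontology.add((EX.governorOf, RDFS.range, EX.Bank))
```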

What happens when we put this all together? We can begin to infer relationships between entities in a system that have not been directly linked by human action.

[Figure 4: Inference]
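One way to sketch this inference step is to combine the extracted fact with the toy ontology above and run a standard RDFS reasoner over the graph. The snippet assumes the open-source rdflib and owlrl packages; neither is named in the article.

```python
# Inference sketch: combine one extracted fact with the toy ontology and let
# an RDFS reasoner add the facts that follow from the rules.
from rdflib import Graph, Namespace, RDF, RDFS
from owlrl import DeductiveClosure, RDFS_Semantics

EX = Namespace("http://example.org/")
g = Graph()

# The rule-book (ontology)
g.add((EX.governorOf, RDFS.domain, EX.CentralBankGovernor))
g.add((EX.governorOf, RDFS.range, EX.Bank))
g.add((EX.CentralBankGovernor, RDFS.subClassOf, EX.Person))

# A single fact extracted from the text
g.add((EX.Haruhiko_Kuroda, EX.governorOf, EX.Bank_of_Japan))

# Apply the RDFS inference rules
DeductiveClosure(RDFS_Semantics).expand(g)

# Facts that were never stated explicitly are now in the graph
print((EX.Haruhiko_Kuroda, RDF.type, EX.Person) in g)  # True
print((EX.Bank_of_Japan, RDF.type, EX.Bank) in g)      # True
```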

An example: a visitor arrives on the website of a newspaper and would like information about bank governors in Asia. Semantic technology allows the website to return a much more sophisticated set of results from the initial search query. Because the system has an understanding of the relationships defining bank governors generally (via the ontology), it is able to leverage the entire database of published text content in a more sophisticated way, capturing relationships that would have been overlooked by text analysis alone. The result is that the user is provided with content more closely aligned to what they are already reading.
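To give a flavour of how such a site might query its graph, the sketch below runs a SPARQL query over the illustrative triples built above, retrieving every person recorded as governor of some bank. The query and identifiers are assumptions carried over from the earlier sketches, not a description of any particular publisher's system.

```python
# Querying the illustrative graph with SPARQL: find every governor and the
# bank they govern. In a real system the graph would hold many such facts.
from rdflib import Graph, Namespace

EX = Namespace("http://example.org/")

g = Graph()
g.add((EX.Haruhiko_Kuroda, EX.governorOf, EX.Bank_of_Japan))

results = g.query("""
    PREFIX ex: <http://example.org/>
    SELECT ?person ?bank
    WHERE { ?person ex:governorOf ?bank . }
""")

for row in results:
    print(row.person, row.bank)
```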

Read the example sentence again and answer the question: "What is a 'Haruhiko Kuroda'?" As a human, the answer is obvious. He is several things: human, male, and the governor of the Bank of Japan. It is this type of analytical thought process, the ability to assign traits to entities and then use those traits to infer relationships between new entities, that has so far eluded computer systems. The technology allows the inference of relationships that are not specifically stated within the source material: because the system knows that Haruhiko Kuroda is governor of the Bank of Japan, it is able to infer that he works with other employees of the Bank of Japan, that he lives in Tokyo, which is in Japan, which is a set of islands in the Pacific.

Companies such as the BBC, which Ontotext has worked with, are sitting on more text data than ever before. This is hardly unique to the publishing industry, either. According to Eric Schmidt, former Google CEO and executive chairman of Alphabet, every two days we create as much information as was generated from the dawn of civilisation up until 2003 – and he said that in 2010. Five years later, businesses of all sizes are waking up to this fact: they must invest in the infrastructure to take full advantage of their own data. You may not be aware of it, but you are already using semantic technology every day. Take Google search as an example: when you input a search term, for example 'Bulgaria', two columns appear. On the left are the actual search results, and on the right are semantic search results: the country's flag, capital, currency and other facts pulled from various sources based on semantic inference.

Written by Jarred McGinnis, UK managing consultant at Ontotext

Public cloud generating $22 billion a quarter for IT companies

Public cloud computing generated over $22 billion in revenues for IT companies in the second financial quarter of 2015, according to a study by Synergy Research Group.

The revenue breaks down into $10 billion earned by companies supplying public cloud operators with hardware, software and data centre facilities and $12 billion being generated from selling infrastructure, platforms and software as a service.

In addition, the public cloud supports 'huge' revenue streams from a variety of internet services such as search, social networking, email and e-commerce platforms, says the report. It identifies the supply-side companies with the biggest share of revenues as Cisco, HP, Dell, IBM and Equinix. On the cloud services side the market leaders are AWS, Microsoft, Salesforce, Google and IBM.

As the public cloud makes inroads into the total IT market, the hardware and software used to build public clouds now account for 24 per cent of all data centre infrastructure spending. Public cloud operators and associated digital content companies account for 47 per cent of the data centre colocation market.

While the total IT market grew at less than five per cent per year, the growth of cloud revenues outpaced it. Infrastructure and platform as a service (IaaS/PaaS) revenues grew by 49 per cent in the past year, and software as a service (SaaS) grew by 29 per cent.

“Public cloud is now a market that is characterized by big numbers, high growth rates and a relatively small number of global IT players,” said Synergy Research Group’s chief analyst Jeremy Duke.

However, the report noted that there is still a place for regional and small-to-medium-sized public cloud players.

Enterprise Release Management By @DaliborSiroky | @DevOpsSummit #DevOps #BigData #API #Docker

Large organizations engaged in enterprise release management seldom have a single "enterprise release manager." Instead of a single, "enterprise-wide" responsibility, most large, decentralized organizations assign responsibility for the more strategic release management functions to several existing roles.

An enterprise release management practice supports and is supported by the following enterprise release management roles:

IT Portfolio Management – An efficient ERM practice gives portfolio managers greater visibility into changes affecting multiple systems, creating a consolidated status for change initiatives across an entire portfolio. By assembling data across multiple initiatives, ERM facilitates a process of continuous improvement at the portfolio level, giving organizations a central mechanism to track common challenges and lessons learned. With ERM, IT portfolio managers can make strategic adjustments to both staffing and spend across departments as change initiatives evolve.

read more

You may be right to be worried about DRaaS – but help is at hand


Latest figures from the Cloud Industry Forum (CIF) indicate that cloud adoption is at its highest level to date, with 78 per cent of organisations now having formally adopted at least one type of cloud-based service. TechNavio echoes this trend, and in particular the surge in growth of disaster recovery as a service (DRaaS), forecasting a compound annual growth rate of 54.64 per cent between 2014 and 2018. However, despite the striking numbers and growth expectations, there are still many IT professionals who have fears about adopting DRaaS.

Today, most companies are beginning to realise that they are not well prepared to face adversity. Business and IT executives want guarantees that disaster recovery processes actually work, and they owe it to themselves, their employees, customers and investors to make sure this is the case.

That said, we constantly hear that most IT executives cannot satisfactorily answer a simple question: "If your systems went down, would your company be able to get them up and running again within a timeframe that meets your business requirements, and would you be able to recover critical business data?"

A recent example of how easily this can happen is the lightning strike that hit one of Google's data centres four times and resulted in some customers losing their data forever. Losing data is never a good thing, but losing data forever is, for a business, unthinkable. A number of the disks in the Belgian data centre were reportedly completely wiped, meaning some customers have permanently lost files. This event illustrates that even providers like Google can find themselves subject to acts of nature that can disrupt or destroy critical business data.

This underlines the need for all businesses to adopt geographic and even multi-vendor redundancy to ensure proper measures for disaster recovery. The good news is that the industry has evolved to the point that there are disaster recovery solutions for any budget – as long as we can convince those that are worried about DRaaS.

IT folks who have fears around DRaaS tend to become the ‘worriers’ or ‘blockers’ in their organisations and resist attempts from IT management and the C-suite to implement a cloud-based disaster recovery solution. That’s not good for either them or the organisation as the potential for data centre outages is only increasing and DRaaS is a proven, reliable and cost-effective way to maintain business continuity when faced with a disaster.

So, what specifically is IT afraid of when it comes to DRaaS? We’ve found it falls into three main areas:

– Losing control and visibility – you don't want your applications and data sent into the abyss. You want to know exactly where everything is at all times and how it is performing, and to define exactly what needs to be failed over and when.

– Trusting cloud infrastructure – trusting your data and applications to a cloud service provider and being able to rely on that in a disaster is a challenge for many, particularly for those in highly regulated industries such as healthcare and finance.

– Uncontrollable costs – one of the reassuring things about a physical disaster recovery solution is that costs are predictable – you may want to avoid complex DRaaS pricing algorithms that make budgeting a nightmare. IT should be wary of hidden costs in any disaster recovery or backup solution.

These fears are all valid and yet all of these can be overcome with the right DRaaS solution.

In terms of maintaining control and visibility, the cloud portal that iland offers delivers granular management of cloud resources and costs. Customers can view performance, capacity and usage metrics, initiate failover and failback, re-allocate workloads and much more. With that kind of control, the IT ‘worriers’ should have no need to fear the unknown – they have full visibility into their resources and costs and can proactively manage them, along with backup should they need it.

Security of cloud infrastructure should absolutely be at the top of your DRaaS shopping list. Additionally, our US, UK and APAC data centres are designed to meet advanced security and compliance standards, with vulnerability scanning, intrusion detection and whole-disk encryption being just some of the security features. iland data centres hold SSAE 16 and ISO 9001/27001 certifications, and cloud-to-cloud replication is available if our customers need a secondary failover site.

Our DRaaS offering enables a near-zero Recovery Time Objective (RTO) and self-service testing, so you get the peace of mind that comes with knowing that business continuity is assured. Mind you, we get the need for straightforward pricing: disaster recovery is too important to spend hours trying to figure out what it's going to cost to protect your business.

So I hope you can see that if you’re a DRaaS worrier, there are now a lot fewer reasons to be afraid of DRaaS and resist implementing it in your organisation. In fact, you may want to jump on board with the DRaaS optimists.

Businesses, driven by customer expectations and auditors, will begin to care about more than just data restoration; they will care about business service restoration. As a result, there is and will continue to be a greater need to verify the ability of an organisation to bring virtual and cloud-based business applications back into service within strict and very fast service level agreements.

Lack of vendor visibility is number one pain point for cloud customers


New survey data revealed by the SANS Institute shows more issues cloud providers face in keeping their customers happy, with a lack of visibility into operations the biggest bugbear for clients.

The report, entitled 'Orchestrating Security in the Cloud', surveyed 485 IT professionals and found lack of visibility was cited by 48% of respondents as a problem. A lack of virtual machine and workload visibility was selected by 46% of those polled, while vulnerabilities introduced by the vendor which resulted in a breach were a pain point for 26% of respondents.

One in three respondents (33%) say they do not have enough visibility into their cloud providers’ operations, while 40% admit unauthorised access to sensitive data from other tenants is a major concern with public cloud deployments. For the public cloud, denial of service is the biggest threat (36%), compared to malware for private cloud (33%).

Not altogether surprisingly, the research also found hybrid cloud architectures were the way forward for most respondents: 40% of those polled are currently using them, while 43% plan to move towards a hybrid architecture in the coming 12 months. Only 12% of organisations say they use public cloud.

For CloudPassage, which sponsored the study, the findings reinforce what the company already suspected: trying to increase the speed of elastic infrastructure while maintaining security is a tough balancing act. SANS analyst Dave Shackleford, who authored the report, noted: "Although most organisations have not experienced a breach in the cloud, security teams are concerned about illicit account and data access, maintaining compliance and integrating with on-premise security controls.

“Visibility into cloud environments remains a challenge, as does implementing cloud-focused incident response and pen testing processes,” he added.

This is not the first piece of research to argue cloud providers are not doing enough to satisfy their customers. A report from iland and Forrester Research, published in June, argued the key to vendors building better relationships with their clients is to release metadata exposing performance, security and cost. One in three survey respondents agreed with the statement 'my provider charges me for every little question or incident', while 45% agreed that 'if I were a bigger customer, my cloud provider would care more about my success.'

Updated Racemi Software Makes Cloud Migrations Easier | @CloudExpo #Cloud

Racemi, a provider of automated server migration software, announces the availability of updated DynaCenter software that offers Amazon Web Services (AWS) CloudFormation templates to simplify installation and configuration, plus support for eight additional IBM SoftLayer data centers, giving customers a choice of deploying to 20 SoftLayer data centers around the globe.

read more

Why Cloud Workload Optimization Is Optimal By @ABridgwater | @CloudExpo #Cloud

Buy a cloud, any cloud you like, and stick whatever application and data storage, management and/or analysis workload you like on it – that's all it takes, right?
Actually, that's not quite how it works.
This thing called 'optimization' is one of those industry buzzwords that unfortunately gets so overused that its initial method and meaning become dangerously diluted.

read more

Is the Cloud Right for You?

I recently presented a session entitled "Is the Cloud Right for You?" with Randy Weis and wanted to provide a recap of the things I covered in the presentation. In this video, I discuss some of the advantages of cloud, including access to enterprise-class hardware that you might not normally be able to afford, load balancers, multiple data centers, redundancy, automation and more. I also cover some of the risks associated with the cloud. Enjoy, and as always, reach out with any questions!

Download eBook: The Evolution of the Corporate IT Department

By Chris Chesley, Solutions Architect
