Todas las entradas hechas por Guest Author

Containers at Christmas: wrapping, cloud and competition

As anyone who has ever been disappointed by a Christmas present will tell you, shiny packaging can be very misleading. As we hear all the time, it’s what’s inside that counts…

What, then, are we to make of the Docker hype, centred precisely on shiny, new packaging? (Docker is the vendor that two years ago found a way to containerise applications; other types of containers – operating system containers – have been around for a couple of decades.)

It is not all about the packaging, of course. Perhaps we should say that what matters most is what the package is placed on, and how it is managed (amongst other things)?

Regardless, containers are one part of a changing cloud, data centre and enterprise IT landscape, with the ‘cloud native’ movement widely seen as driving a significant shift in enterprise infrastructure and application development.

What the industry is trying to figure out, and what could prove the most disruptive angle to watch as more and more enterprises roll out containers into production, is the developing competition within this whole container/cloud/data centre market.

The question of competition is a very hot topic in the container, DevOps and cloud space. Nobody could have thought the OCI co-operation between Docker and CoreOS meant they were suddenly BFFs. Indeed, the drive to become the enterprise container of choice now seems to be at the forefront of both companies’ plans. Is this, however, the most dynamic relationship in the space? What about the Google-Docker-Mesos orchestration game? It would seem that Google’s trusted container experience is already allowing it to gain favour with enterprises, with Kubernetes taking a lead. And with CoreOS placing Google’s open source Kubernetes at the heart of Tectonic, does this mean that CoreOS has a stronger play in the enterprise market than Docker? We will wait and see…

We will also wait and see how the Big Cloud Three will come out of the expected container-driven market shift. Somebody described AWS to me as ‘a BT’ – that is, the incumbent that will be most affected by the disruptive changes brought by containers, since it makes a lot of money from an older model of infrastructure.

Microsoft’s container ambition is also being watched closely. There is a lot of interest from both the development and IT Ops communities in its play in the emerging ecosystem. At a recent meet-up, an Azure evangelist had to field a number of deeply technical questions about exactly how Microsoft’s containers fare next to Linux’s. The question is whether, when assessing who will win the largest piece of the enterprise pie, this will prove the crux of the matter.

Containers are not merely changing the enterprise cloud game (with third-place Google seemingly getting it very right) but also turning IT Ops’ DevOps dream into reality; in fact, many are predicting that they could eventually prove a bit of a threat to Chef and Puppet’s future.

So, maybe kids at Christmas have got it right: it is all about the wrapping and boxes! We’ll have to wait a little longer than Christmas Day to find out.

Written by Lucy Ashton, Head of Content & Production, Container World

The end of the artisan world of IT computing

We are all working toward an era of autonomics – a time when machines not only automate key processes and tasks, but truly begin to analyse and make decisions for themselves. We are on the cusp of a golden age for our ability to utilise the capacity of the machines that we create.

There is a lot of research about autonomic cloud computing, and therefore a lot of definitions of what it is. The definition from Webopedia probably does the best job of describing autonomic computing.

It is, it says: “A type of computing model in which the system is self-healing, self-configured, self-protected and self-managed. Designed to mimic the human body’s nervous system in that the autonomic nervous system acts and reacts to stimuli independent of the individual’s conscious input.

“An autonomic computing environment functions with a high level of artificial intelligence while remaining invisible to the users. Just as the human body acts and responds without the individual controlling functions (e.g., internal temperature rises and falls, breathing rate fluctuates, glands secrete hormones in response to stimulus), the autonomic computing environment operates organically in response to the input it collects.”

Some of the features of autonomic computing are available today for organisations that have completed – or at least partly completed – their journey to the cloud. The more information that machines can interpret, the more opportunity they have to understand the world around them.

It spells the death of the artisan IT worker – a person working exclusively with one company, maintaining the servers and systems that keep it running. Today, the cloud has turned computing on its head. Companies can access computing services and storage at the click of a button, providing scalability, agility and control to meet their exact needs. Companies pay for what they use and can scale up or down instantly. What’s more, they don’t need an army of IT artisans to keep the operation running.

This, of course, assumes that the applications that leverage the cloud have been developed to be cloud native, using a model like the twelve-factor methodology developed by Adam Wiggins, who co-founded Heroku. However, many current applications and the software stacks that support them can also use the cloud successfully.

More and more companies are beginning to realise the benefits that cloud can provide, whether private, public or hybrid. For start-ups, the decision is easy. They are ‘cloud first’ businesses with no overheads or legacy IT infrastructure to slow them down. For CIOs of larger organisations, it’s a different picture. They need to move from a complex, heterogeneous IT infrastructure into the highly orchestrated and automated – and ultimately, highly scalable and autonomic – homogeneous new world.

CIOs are looking for companies with deep domain expertise as well as infrastructure at scale. In the switch to cloud services, the provision of managed services remains essential. To ensure a smooth and successful journey to the cloud, enterprises need a company that can bridge the gap between the heterogeneous and homogeneous infrastructure.

Using a trusted service provider to bridge that gap is vital to maintain a consistent service level to the business users that use or consume the application being hosted. But a cloud user has many more choices to make in the provision of their services. Companies can take a ‘do it myself’ approach, where they are willing to outsource their web platform but keep control of testing and development. Alternatively, they can take a ‘do it with me’ approach, working closely with a provider in areas such as managed security and managed application services. This spreads the responsibility between the customer and provider, which can be decided at the outset of the contract.

In the final ‘do it for me’ scenario, trust in the service provider is absolute. It allows the enterprise customer to focus fully on the business outcomes. As more services are brought into the automation layer, delivery picks up speed which in turn means quick, predictable and high-quality service.

Hybrid cloud presents a scenario of the ‘best of both worlds’. Companies are secure in the knowledge that their most valuable data assets are still either on premise in the company’s own private servers or within a trusted hosting facility utilising isolated services. At the same time, they can rely on the flexibility of cloud to provide computing services that can be scaled up or down at will, at a much better price point than would otherwise be the case.

Companies who learn to trust their service provider will get the best user experience. In essence, the provider must become an extension of the customer’s business and not operate on the fringes as a vendor.

People, processes and technology all go together to create an IT solution. But they need to integrate between the company and the service provider as part of a cohesive solution to meet the company’s needs. The solution needs to be relevant for today but able to evolve in the future as business priorities change. Only then can we work toward a future where autonomics begins to play a much bigger part in our working lives.

Eventually, autonomic computing can evolve almost naturally, much like human intelligence has over the millennia. The only difference is that with cloud computing the advances will be made in years, not thousands of years. We are not there yet, but watch this space. In your lifetime, we are more than likely to make that breakthrough to lead us into a brave new world of cloud computing.

 

Written by Jamie Tyler, CenturyLink’s Director of Solutions Engineering, EMEA

Happy (belated) birthday, OpenStack: you have much to look forward to

Now past the five-year anniversary of OpenStack’s creation, the half-decade milestone provides an opportunity to look back on how far the project has come in that time – and to peer thoughtfully into OpenStack’s next few years. At present, OpenStack represents the collective efforts of hundreds of companies and an army of developers numbering in the thousands. Their active engagement in continually pushing the project’s technical boundaries and implementing new capabilities – demanded by OpenStack operators – has defined its success.

Companies involved with OpenStack include some of the most prestigious and interesting tech enterprises out there, so it’s no surprise that this past year has seen tremendous momentum surrounding OpenStack’s Win the Enterprise program. This initiative – central to the future of the OpenStack project – garnered displays of the same contagious enthusiasm demonstrated in the stratospheric year-over-year growth in attendance at OpenStack Summits (the most recent version of the event, held in Tokyo, being no exception). The widespread desire of respected and highly capable companies and individuals to be involved with the project is profoundly reassuring, and underscores the recognition of OpenStack as a frontrunner for the title of most innovative software and development community when it comes to serving enterprises’ needs for cloud services.

With enterprise adoption front of mind, these are the key trends now propelling OpenStack into its next five years:

Continuing to Redefine OpenStack

The collaborative open source nature of OpenStack has successfully provided the project with many more facets and functionalities than could be dreamt of initially five years ago, and this increase in scope (along with the rise of myriad new related components) has led to the serious question: “What is OpenStack?” This is not merely an esoteric query – enterprises and operators must know what available software is-and-is-not OpenStack in order to proceed confidently in their decision-making around the implementation of consistent solutions in their clouds. Developers require clarity here as well, as their applications may potentially need to be prepared to operate across different public and private OpenStack clouds in multiple regions.

If someone were to look up OpenStack in the dictionary (although not yet in Webster’s), what they’d see there would be the output of OpenStack’s DefCore project, which has implemented a process that now has a number of monthly definition cycles under its belt. This process bases the definition of a piece of software as belonging to OpenStack on core capabilities, implementation code and APIs, and utilizes RefStack verification tests. Now OpenStack distributions and operators have this DefCore process to rely on in striving for consistent OpenStack implementations, especially for enterprise.

Enterprise Implementation Made Easy

The OpenStack developer community is operating under a new “big tent” paradigm, tightening coordination on project roadmaps and releases through mid-cycle planning sessions and improved communication. The intended result? A more integrated and well-documented stack. Actively inviting new major corporate sponsors and contributors (for example Fujitsu, a new Gold member of OpenStack as of this July) has also helped make it easier for enterprises to get on board with OpenStack.

Of course, OpenStack will still require expertise to be implemented for any particular use case, as it’s a complicated, highly configurable piece of software that can run across distributed systems – not to mention the knowledge needed to select storage sub-systems and networking options, and to manage a production environment at scale. However, many capable distribution and implementation partners have arisen worldwide to provide for these needs (Mirantis, Canonical, Red Hat, Aptira, etc), and these certainly have advantages over proprietary choices when looking at the costs and effort it takes to get a production cloud up and running.

The OpenStack Accelerator

A positive phenomenon that enterprises experience when enabling their developers and IT teams to work within the OpenStack community is the dividend gained from new insights into technologies that can be valuable within their own IT infrastructure. The open collaborations at the heart of OpenStack expose contributors to a vast ecosystem of OpenStack innovations, which enterprises then benefit from internalizing. Examples of these innovations include network virtualization software (Astara, MidoNet), software-defined storage (Swift, Ceph, SolidFire), configuration management tools (Chef, Puppet, Ansible), and a new world of hardware components and systems offering enough benefit to make enterprises begin planning how to take advantage of them.

The pace of change driven by OpenStack’s fast-moving platform is now such that it can even create concern in many quarters of the IT industry. Enterprise-grade technology that evolves quickly and attracts a lot of investment interest will always have its detractors. Incumbent vendors fear erosion of market share. IT services providers fear retooling their expertise and workflows. Startups (healthily) fear the prospect of failure. But the difference is that startups and innovators choose to embrace what’s new anyway, despite the fear. That drives technology forward, and fast. And even when innovators don’t succeed, they leave behind a rich legacy of new software, talent, and tribal knowledge that we all stand on the shoulders of today. This has been so in the OpenStack community, and speaks well of its future.

 

Stefano Maffulli is the Director of Cloud and Community at DreamHost, a global web hosting and cloud services provider whose offerings include the cloud computing service DreamCompute powered by OpenStack, and the cloud storage service DreamObjects powered by Ceph.

How data classification and security issues are affecting international standards in public sector cloud

Cloud technology is rapidly becoming the new normal, replacing traditional IT solutions. The revenues of top cloud service providers are doubling each year, at the start of a predicted period of sustained growth in cloud services. The private sector is leading this growth in workloads migrating to the cloud. Governments, however, are bringing up the rear, with under 5 percent of a given country’s public sector IT budget dedicated to cloud spending. Once the public sector tackles the blockers that are preventing uptake, spending looks likely to increase rapidly.

The classic NIST definition of the Cloud specifies Software (SaaS), Platform (PaaS) and Infrastructure (IaaS) as the main Cloud services (see figure 1 below), where each is supplied via network access on a self-service, on-demand, one-to-many, scalable and metered basis, from a private (dedicated), community (group), public (multi-tenant) or hybrid (load balancing) Cloud data centre.

Figure 1: Customer Managed to Cloud Service Provider Managed: The Continuum of Cloud Services

 


 

The benefits of the Cloud are real and evidenced, especially in the comparison between private and public cloud, where public cloud economies of scale, demand diversification and multi-tenancy are estimated to drive costs down by up to ninety percent relative to an equivalent private cloud.

Equally real are the blockers to public sector cloud adoption. Studies consistently show that management of security risk is at the centre of practical, front-line worries about cloud take-up, and that removing those blockers will be indispensable to unlocking the potential for growth. Demonstrating effective management of cloud security to and for all stakeholders is therefore central to cloud adoption by the public sector and a key driver of government cloud policy.

A number of governments have been at the forefront of developing an effective approach to cloud security management, especially the UK, which has published a full suite of documentation covering the essentials. (A list of the UK government documentation – which serves as an accessible ‘how to’ for countries that do not want to reinvent this particular wheel – is set out in the Annex to our white paper, Seeding the Public Cloud: Part II – the UK’s approach as a pathfinder for other countries.) The key elements for effective cloud security management have emerged as:

  • a transparent and published cloud security framework based on the data classification;
  • a structured and transparent approach to data classification; and
  • the use of international standards as an effective way to demonstrate compliance with the cloud security framework.

Data classification enables a cloud security framework to be developed and mapped to the different kinds of data. Here, the UK government has published a full set of cloud security principles, guidance and implementation dealing with the range of relevant issues from data in transit protection through to security of supply chain, personnel, service operations and consumer management. These cloud security principles have been taken up by the supplier community, and tier one providers like Amazon and Microsoft have published documentation based on them in order to assist UK public sector customers in making cloud service buying decisions consistently with the mandated requirements.

Data classification is the real key to unlocking the cloud. This allows organisations to categorise the data they possess by sensitivity and business impact in order to assess risk. The UK has recently moved to a three tier classification model (OFFICIAL → SECRET → TOP SECRET) and has indicated that the OFFICIAL category ‘covers up to ninety percent of public sector business’ like most policy development, service delivery, legal advice, personal data, contracts, statistics, case files, and administrative data. OFFICIAL data in the UK ‘must be secured against a threat model that is broadly similar to that faced by a large UK private company’ with levels of security controls that ‘are based on good, commercially available products in the same way that the best-run businesses manage their sensitive information’.

Compliance with the published security framework, in turn based on the data classification, can then be evidenced through procedures designed to assess and certify achievement of the cloud security standards. The UK’s cloud security guidance on standards references ISO 27001 as a standard for assessing implementation of its cloud security principles. ISO 27001 sets out control objectives and controls for managing information security, against which an organisation can be certified, audited and benchmarked. Organisations can request third-party certification assurance, and this certification can then be provided to the organisation’s customers. ISO 27001 certification is generally expected for approved providers of UK G-Cloud services.

Allowing the public sector cloud to achieve its potential will take a combination of comprehensive data classification, effective cloud security frameworks, and the pragmatic assurance provided by evidenced adherence to generally accepted international standards. These will remove the blockers on the public sector cloud, unlocking the clear benefits.

Written by Richard Kemp, Founder of Kemp IT Law

Bringing the enterprise out of the shadows

Ian McEwan, VP and General Manager, EMEA at Egnyte, discusses why IT departments must provide employees with secure, adaptive cloud-based file sync and share services or run the risk of ‘shadow IT’, which invites major security vulnerabilities and compliance issues within organisations.

The advent of cloud technology has brought a wide range of benefits to businesses of all sizes, improving processes by offering on-demand, distributed access to the information and applications that employees rely on. This change has not only made IT easier for businesses, it is also fueling new business models and leading to increased revenues for those making best use of the emerging technology.

The cloud arguably offers a business the greatest benefit when used for file sync and share services, allowing users to collaborate on projects in real time, at any time, on any device and from any geographic location. File sync and share makes email attachments redundant, allowing businesses to reduce the daily time employees spend on email, as well as the chances of files being lost, leaked or overwritten. Used correctly, these services give IT departments a comprehensive overview of all the files and activity on the system, enabling considerably better file management and organisation.

Employees ahead of the corporate crowd

Unfortunately, business adoption of file sharing services often lags behind where employees would like it to be, and staff are turning to ‘shadow IT’ – unsanctioned, consumer-grade file sharing solutions. These services undermine the security and centralised control of IT departments. Businesses lose visibility over who has access to certain files and where they are being stored, which can lead to serious security and compliance problems.

CIOs need to protect their companies from the negative impact of unsanctioned cloud applications by implementing a secure solution that monitors all file activity across their business.

Secure cloud-based file sharing

To satisfy both the individual user and business as a whole, IT departments need to identify file sharing services that deliver the agility that comes with storing files in the cloud. It starts with ensuring that a five-pronged security strategy is in place that can apply consistent, effective control and protection over the corporate information throughout its lifecycle. This strategy should cover:

  • User Security – controlling who can access which files, what they can do with them and how long their access will last.
  • Device Security – protecting corporate information at the point of consumption on end user devices.
  • Network Security – protecting data in transit (over encrypted channels) to prevent eavesdropping and tampering.
  • Data Centre Security – providing a choice of deployment model that offers storage options both on premises and in the cloud and total control over where the data is stored.
  • Content Security – attaching policies to the content itself to ensure it can’t leave the company’s controlled environment even when downloaded to a device.

A solution that addresses these security areas will allow efficient collaboration without sacrificing security, compliance and control.

A user friendly, business ready solution

Furthermore, the selected solution and strategy will need to keep up with business demands and industry regulations. Flexibility can be achieved if businesses consider adaptive file sharing services that give them access to files regardless of where they are stored – in the cloud, on premises or a hybrid of the two. This enables a business to adapt the service to its own changing preferences, as well as to industry standards that can dictate where data is stored and how it is shared. Recent changes to the US-EU Safe Harbour regulations, which determine how businesses from the US and EU must share and keep track of data, highlight the necessity for businesses to have an adaptive file sharing solution in place to meet the demands of new regulations, or else risk heavy fines and reputational damage.

The final hurdle towards successful implementation of a cloud-based file sharing service is ensuring user adoption through simple functionality. If a service isn’t easy to use, staff may find themselves falling back on shadow IT services out of convenience. It is important, therefore, that IT seeks solutions that can be accessed across all devices and integrated with other popular applications already in use within an organisation.

The integrity and privacy of a business’ information requires a secure, adaptive cloud-based file sharing solution that gives organisations comprehensive visibility and control across the lifecycle of its data. Overlooking the security implications of shadow IT services can result in a company incurring significant costs – not just in financial terms, but for a company’s brand, reputation and growth potential. It’s time for IT departments to act now and adopt cloud services that enable efficient collaboration, mitigate any chances of risk and lift the shadow from corporate data.

3 approaches to a successful enterprise IT platform rollout strategy

Executing a successful enterprise IT platform rollout is as much about earning widespread support as it is about proper pacing. It’s necessary to sell the rollout within the organization, both to win budget approval and to gain general acceptance so that adoption of the new platform goes smoothly.

Each group being asked to change their ways and learn this new platform must have the value of the rollout identified and demonstrated for them. The goal of the rollout process is to see the platform solution become successfully adopted, self-sustaining, efficient in assisting users, and, ultimately, seamlessly embedded into the organization’s way of doing business.

Deploying a new solution for use across an organization boils down to three approaches, each with its advantages and drawbacks: rolling out slowly (to one department at a time), rolling out all at once (across the entire organization), or a cleverly targeted mix of the two.

Vertical Rollouts (taking departments one at a time, slow and steady)

This strategy involves selecting a single department or business function within the organization (e.g. customer support or HR) for an initial targeted rollout and deploying the new platform in phases to each vertical, one at a time. The benefit here is a greater focus on the specific needs and usage models within the department that is receiving full attention during its phase of the rollout, yielding advantages in the customization of training and tools to best fit those users.

For example, the tools and interfaces used daily by customer service personnel may be entirely irrelevant to HR staff or to engineers, who will appreciate that their own solutions are being streamlined and that their time is being respected, rather than needing to accept a crude one-size-fits-all treatment and have to work to discover what components apply to them. It’s then more obvious to each vertical audience what the value added is for them personally, better garnering support and fast platform adoption. Because this type of rollout is incremental, it’s ripe for iterative improvements and evolution based on user feedback.

Where vertical, phased rollouts are less effective is in gaining visibility within the organization, and in lacking the rallying cry of an all-in effort. This can make it difficult to win over those in departments that aren’t offered the same immediate advantages, and to achieve the critical mass of adoption necessary to launch a platform into a self-sustaining orbit (even for those tools that could benefit any user regardless of their department).

Horizontal Rollouts (deploying to everyone at the same time)

Delivering components of a new platform across all departments at once comes with the power of an official company decree: “get on board because this is what we’re doing now.” This kind of large-scale rollout makes everyone take notice, and often makes it easier not only to get budget approval (for one large scale project and platform rather than a slew of small ones), but also to fold the effort into an overall company roadmap and present it as part of a cohesive strategy. Similar organizational roles in the company can connect and benefit from each other with a horizontal rollout, pooling their knowledge and best practices for using certain relevant tools and templates.

This strategy of reaching widely with the rollout helps to ensure continuity within the organization. However, big rollouts come with big stakes: the organization only gets one try to get the messaging and the execution correct – there aren’t opportunities to learn from missteps on a smaller scale and work out the kinks. Users in each department won’t receive special attention to ensure that they receive and recognize value from the rollout. In the worst-case scenario, a user may log in to the new platform for the first time, not see anything that speaks to them and their needs in a compelling way, and not return, at least not until the organization wages a costly revitalization campaign to try and win them over properly.  Even in this revitalization effort, a company may find users jaded by the loss of their investment in the previous platform rollout.

The Hybrid Approach to Rollouts

For many, the best rollout strategy will borrow a little from both of the approaches above. An organization can control the horizontal and the vertical aspects of a rollout to produce a two-dimensional, targeted deployment, with all the strengths of the approaches detailed above and less of the weaknesses. With this approach, each phase of a rollout can engage more closely with specific vertical groups that the tools being deployed most affect, while simultaneously casting a wide horizontal net to increase visibility and convey the rollouts as company initiatives key to overall strategy and demanding of attention across departments. Smartly targeting hybrid rollouts to introduce tools valuable across verticals – while focusing on the most valuable use case within each vertical – is essential to success with them. In short, hybrid rollouts offer something for many, and a lot specifically for the target user being introduced to the new platform.

In executing a hybrid rollout of your enterprise IT platform, begin with a foundational phase that addresses horizontal use cases, while enticing users with the knowledge that more is coming. Solicit and utilize user feedback, and put this information to work in serving more advanced use cases as the platform iterates and improves. Next, start making the case for why the vertical group with the most horizontally applicable use cases should embrace the platform. With that initial group of supporters won over, you have a staging area to approach other verticals with specific hybrid rollouts, putting together the puzzle of how best to approach each while showcasing a wide scope and specific value added for each type of user. Importantly, don’t try to sell the platform as immediately being all things to all people. Instead, define and convey a solid vision for the platform, identify the purpose of the existing release, and let these hybrid rollouts take hold at a natural pace. This allows the separate phases to win their target constituents and act as segments to a cohesive overall strategy.

If properly planned and executed, your enterprise IT platform rollout will look not like a patchwork quilt with benefits for some and not others, but rather a rich tapestry of solutions inviting to everyone, and beneficial to the organization as a whole.

 

Written by Roguen Keller, Director of Global Services at Liferay, an enterprise open source portal and collaboration software company.

Why visibility and control are critical for container security

The steady flow of reported vulnerabilities in open source components – Heartbleed, Shellshock and POODLE among them – is making organisations focus increasingly on making the software they build more secure. As organisations increasingly turn to containers to improve application delivery and agility, the security ramifications of the containers and their contents are coming under increased scrutiny.

An overview of today’s container security initiatives 

Container providers such as Docker and Red Hat are moving aggressively to reassure the marketplace about container security. Ultimately, they are focusing on cryptographic signing to verify the code and software versions running in Docker users’ software infrastructure, protecting users from malicious backdoors included in shared application images and other potential security threats.

However, this method is coming under scrutiny because it covers only one aspect of container security: it says nothing about whether the software stacks and application portfolios inside those containers are free of known, exploitable versions of open source code.

Without open source hygiene, Docker Content Trust will only ever ensure that Docker images contain the exact same bits that developers originally put there, including any vulnerabilities present in the open source components. On its own, it therefore amounts to only a partial solution.

A more holistic approach to container security

Knowing that the container is free of vulnerabilities at the time of initial build and deployment is necessary, but far from sufficient. New vulnerabilities are constantly being discovered, and these often affect older versions of open source components. What’s needed, therefore, is an informed approach to open source that gives users the opportunity both to select components carefully and to stay vigilant as new disclosures appear.

Moreover, the security risk posed by a container also depends on the sensitivity of the data accessed via it, as well as the location of where the container is deployed. For example, whether the container is deployed on the internal network behind a firewall or if it’s internet-facing will affect the level of risk.

In this context, an internet-facing container is subject to a range of threats – including cross-site scripting, SQL injection and denial-of-service attacks – that containers deployed on an internal network behind a firewall wouldn’t be exposed to.

For this reason, having visibility into the code inside containers is a critical element of container security, even aside from the issue of security of the containers themselves.

It’s critical to develop robust processes for determining: what open source software resides in, or is deployed along with, an application; where this open source software is located in build trees and system architectures; whether the code exhibits known security vulnerabilities; and whether an accurate open source risk profile exists.
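As a rough illustration of what such a process might automate – the component names, versions and advisory data below are invented for this sketch, not drawn from any particular vendor’s scanner – the following checks an inventory of open source components found in a container image against a list of known-vulnerable versions:

```python
# Hypothetical sketch: flag known-vulnerable open source components found in a
# container image's inventory. Component names, versions and the advisory list
# are made up for illustration purposes only.

from dataclasses import dataclass

@dataclass(frozen=True)
class Component:
    name: str
    version: str

# Inventory a scanner might extract from a container image's layers.
image_inventory = [
    Component("openssl", "1.0.1f"),
    Component("bash", "4.3"),
    Component("nginx", "1.9.5"),
]

# Advisory feed mapping component name -> versions with known issues.
known_vulnerable = {
    "openssl": {"1.0.1f"},   # e.g. a Heartbleed-era release
    "bash": {"4.3"},         # e.g. a Shellshock-era release
}

def audit(inventory, advisories):
    """Return the components whose exact version appears in the advisories."""
    return [c for c in inventory
            if c.version in advisories.get(c.name, set())]

if __name__ == "__main__":
    for component in audit(image_inventory, known_vulnerable):
        print(f"ALERT: {component.name} {component.version} has known vulnerabilities")
```

In practice the inventory would typically come from the image’s package metadata or a software bill of materials, and the advisory data from a continuously updated vulnerability feed, so that images can be re-checked as new disclosures appear rather than only at build time.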

Will security concerns slow container adoption? – The industry analysts’ perspective

Enterprise organisations today are embracing containers because of their proven benefits: improved application scalability, fewer deployment errors, faster time to market and simplified application management. However, just as organisations have moved over the years from viewing open source as a curiosity to understanding its business necessity, containers seem to have reached a similar tipping point. The question now seems to be shifting towards whether security concerns about containers will inhibit further adoption. Industry analysts differ in their assessment of this.

Drawing a parallel with the rapid adoption of virtualisation technologies even before security requirements were established, Dave Bartoletti, Principal Analyst at Forrester Research, believes security concerns won’t significantly slow container adoption. “With virtualization, people deployed anyway, even when security and compliance hadn’t caught up yet, and I think we’ll see a lot of the same with Docker,” according to Bartoletti.

Meanwhile, Adrian Sanabria, Senior Security Analyst at 451 Research, believes enterprises will give containers a wide berth until security standards are identified and established. “The reality is that security is still a barrier today, and some companies won’t go near containers until there are certain standards in place,” he explains.

To overcome these concerns, organisations are best served to take advantage of the automated tools available to gain control over all the elements of their software infrastructure, including containers.

Ultimately, the presence of vulnerabilities in all types of software is inevitable, and open source is no exception. Detection and remediation of those vulnerabilities are increasingly seen as a security imperative and a key part of a strong application security strategy.

 

Written by Bill Ledingham, EVP of Engineering and Chief Technology Officer, Black Duck Software.

Preparing for ‘Bring Your Own Cloud’

In 2015, experts expect to see more sync and sharing platforms like Google Drive, SharePoint and Dropbox offer unlimited storage to users at no cost – and an increasing number of employees will no doubt take advantage of these simple to use consumer platforms to store corporate documents, whether they are sanctioned by IT or not, turning companies into ‘Bring Your Own Cloud’ free-for-alls.

How can IT leaders prepare for this trend in enterprise?

Firstly, it’s important to realise it is going to happen. This isn’t something IT managers can stop or block – so businesses need to accept reality and plan for it.

IT leaders should consider what’s really important to manage and select a solution that solves the problem they actually need to solve. Opting for huge solutions that do everything isn’t always the best option, so teams should identify whether they need to protect data or devices.

Planning how to communicate the new solution to users is something to consider early, and partnering with the business units to deliver the message in terms that matter to them is an invaluable part of the process. The days of IT deploying solutions and expecting usage are long gone.

Using a two-pronged approach is recommended – IT managers should utilise both internal marketing and education to spread awareness about the benefits of the solution, and implement policies to set standards on what is required. Often end users aren’t aware that their organisation even has data security policies, and education can go a long way to getting compliance without being punitive.

What are the benefits of allowing employees to use these services to store corporate information?

The key benefits are mobility, increased productivity, improved user experience, and greater employee satisfaction and control.

What are the biggest implications for security?

The biggest implications for security involve the loss of valuable intellectual property and internal information such as financials and HR data, as well as data leakage, leading to privacy violations and loss of sensitive customer data. In addition, there are potential violations of regulatory policies for healthcare, financial services, and similar industries.

How can companies manage and control the use of these cloud storage apps when employees are using them in a BYOD environment?

In BYO use cases, companies should look for solutions that are focused on securing and managing data rather than devices. In a BYOD environment, IT managers can’t rely on the ability to lock down devices through traditional methods.

Instead, companies must be able to provide workspaces that have secure IT oversight, but also integrate with what is in the current environment.

Often the current environment has data in many places: file servers, private clouds, public clouds, etc. Choosing a data management solution that integrates with where the company’s data lives today will be more suitable than forcing data to be moved to a single location. This will reduce deployment time and give more flexibility later on to choose where to store the data.

How can organisations educate users and create suitable policies around the use of these tools?

Organisations should consider classifying corporate data. Does every piece of data need to be treated the same way?

Creating realistic policies that protect the company from real harm is important, as is treating highly sensitive data differently from other data and training employees to know the difference. Teams will also find it useful to integrate data security essentials into regular organisational onboarding and training programs, and to update them as policies evolve.

How can companies find the most suitable alternatives to the free unlimited cloud storage users are turning to, and how do you convince employees to use them over consumer options?

The best solutions balance user experience for end users with robust security, management, and audit controls on the IT side. From a user experience perspective, companies should choose a solution with broad platform adoption, especially for BYOD environments. From a security perspective, choosing a solution that is flexible enough to provide secure IT oversight and that integrates with what you have today will stand the company in good stead. The last thing IT managers want to do is to manage a huge data migration project just to get a data security solution off the ground.

How can companies get around the costs and resources needed to manage their own cloud storage solutions?

Again, flexibility is key here. The best solutions will be flexible enough to integrate with what you have today, but also will allow you to use lower-cost cloud storage when you are ready.

What’s the future of the market for consumer cloud storage – can we expect their use to continue with employees?

Cloud storage in general isn’t going anywhere. The benefits and economics are just too compelling for both consumers and organisations. However, there is and has always been a need to manage corporate data — wherever it resides — in a responsible way. The best way to do this is by using solutions that deliver workspaces that are secure, manageable, and integrated with what businesses and consumers have today.

 

Written by Chanel Chambers, Director of Product Marketing, ShareFile Enterprise, Citrix.

Game development and the cloud

BCN has partnered with the Cloud South East Asia event to interview some of its speakers. In this interview we speak to Sherman Chin, Founder & CIO of Sherman3D.

Cloud South East Asia: Please tell us more about Sherman3D and your role in the gaming industry.

Sherman Chin: I started game development during my college days, working on hobby projects. I then graduated with a BSc (Hons) in Computing from the University of Portsmouth, UK, and was the recipient of the 2002 International Game Developers Association scholarship. I formed Sherman3D shortly after and oversaw the entire game development pipeline. Though my experience is in programming, I am able to serve as a bridge between the technical and creative team members.

I worked on over 20 internationally recognized games, including Scribblenauts published by Warner Bros. Interactive Entertainment, Nickelodeon Diego’s Build & Rescue published by 2K Play, and Moshi Monsters Moshling Zoo published by Activision. Sherman3D, incorporated in 2003, is the longest-running Malaysian indie game development company. With Sherman3D, I am the first Malaysian to release a game on Steam, the largest digital distribution platform for games online, after being voted in by international players via the Steam Greenlight process.

Within the gaming industry, I also worked as a producer in Japan, as a project manager in Canada, and as a COO in Malaysia. With over 15 years of experience in the gaming industry, I am currently the external examiner for the games design course at LimKokWing University and a game industry consultant for the Gerson Lehrman Group providing advisory services for international investors.

How has technology such as cloud supported your growth?

One important aspect of cloud technology is how ubiquitous it is. It allows my international development team to work online from anywhere in the world. This has helped us tremendously as we move our development operations online. We have our documents edited and stored online, we have our project management online, we have our video conference sharing sessions online, and we even have our game sessions online.

These online activities are made possible with cloud technology. More directly related to our product, Alpha Kimori was initially coded as a 3D tech demo for the Butterfly.net supercomputing grid, which was showcased at the Electronic Entertainment Expo in 2003.

I continued work on Alpha Kimori as a 2D JRPG that was then featured on the OnLive cloud gaming service for PC, Mac, TV, and mobile. OnLive streamed our game on multiple platforms with minimal effort on our part. Thanks to OnLive, we reached a bigger audience before finally making it on to Steam via the Greenlight voting process by players who wanted to see Alpha Kimori on Steam.

Do you think cloud has an important role in the gaming industry and do providers give you enough support?

Yes, cloud does play an important role in the gaming industry and providers do give enough support. OnLive was extremely helpful for example. It was perfect for an asynchronous game such as Alpha Kimori which had a turn based battle system. Unfortunately, synchronous realtime games have a more difficult time adapting to the slower response rate from the streaming cloud servers. In order to boost response time, servers have to be placed near the players. Depending on the location of the servers, a player’s mileage might vary.

As broadband penetration increases, this becomes less of an issue, so early implementations of Cloud gaming might simply have been ahead of their time. I do see a bright future though. We just have to match the optimum sort of games to Cloud gaming as the technology progresses.

What will you be discussing at Cloud South East Asia?

At Cloud South East Asia, I will be discussing how asynchronous Japanese Role Playing Game elements are suitable for Cloud gaming as they require less of a response time compared to synchronous real time battle games. I will also do a post mortem of Alpha Kimori on the Cloud gaming platforms it was on.

Cloud technology was not always a bed of roses for us and we had to adapt as there were not many precedents. In the end though, each cloud gaming platform that Alpha Kimori was on helped us to advance our game content further. I will also talk about the auxiliary resources on the Cloud for game design such as the amazing suite of free technology provided by Google. I will also talk a bit about the sales of Alpha Kimori on Steam and how Cloud technology affects it with features such as Steam Cards.

Why do you think it is an important industry event and who do you look forward to meeting and hearing more from?

Having its roots in Japanese Role Playing Games, Alpha Kimori was selected by the Tokyo Game Show (TGS) committee for its Indie Game Area in September, 2015. Sherman3D is once again honoured to be the only Malaysian indie team sponsored by TGS and as such, we view TGS as an important industry event for us. It will help us penetrate the Japanese market and we look forward to meeting and hearing from potential Japanese business partners willing to help us push the Alpha Kimori intellectual property in Japan.

What is next for Sherman3D?

Sherman3D will go on developing the Alpha Kimori series and licensing our Alpha Kimori intellectual property to other developers worldwide. We want to see our Alpha Kimori universe and brand grow. We are also working on the Alpha Kimori comic and anime series. Ultimately, Sherman3D will spread the Great Doubt philosophy in Alpha Kimori where it is not about the past or the future but our experience in the current moment that counts. Only from now do we see our past and future shaped by our own perspective because the truth is relative to our human senses. Attaching too much to anything causes us suffering and accepting the moment gives us true freedom as it allows us to love without inhibitions. Sherman3D will continue to spread the Great Doubt philosophy in its endeavours in the entertainment industry.

Learn more about how the cloud is developing in South East Asia by attending Cloud South East Asia on 7th & 8th October 2015 at Connexion @ Nexus, KL, Malaysia.


Semantic technology: is it the next big thing or just another buzzword?

Most buzzwords circulating right now describe very attention-grabbing products: virtual reality headsets, smart watches, internet-connected toasters. Big Data is the prime example of this: many firms are marketing themselves to be associated with this term and its technologies while it’s ‘of the moment’, but are they really innovating or simply adding some marketing hype to their existing technology? Just how ‘big’ is their Big Data?

On the surface of it, one would expect semantic technology to face similar problems; however, the underlying technology requires a much more subtle approach. The technology is at its best when it’s transparent, built into a set of tools to analyse, categorise and retrieve content and data before it’s even displayed to the end user. While this means it may not experience as much short-term media buzz, it is profoundly changing the way we use the internet and interact with content and data.

This is much bigger than Big Data. But what is semantic technology? Broadly speaking, semantic technologies encode meaning into content and data to enable a computer system to possess human-like understanding and reasoning. There are a number of different approaches to semantic technology, but for the purposes of this article we’ll focus on ‘Linked Data’. In general terms this means creating links between data points within documents and other forms of data containers, rather than between the documents themselves. It is in many ways similar to what Tim Berners-Lee did in creating the standards by which we link documents, just on a more granular scale.

Existing text analysis techniques can identify entities within documents. For example, in the sentence “Haruhiko Kuroda, governor of Bank of Japan, announced 0.1 percent growth,” ‘Haruhiko Kuroda’ and ‘Bank of Japan’ are both entities, and they are ‘tagged’ as such using specialised markup language. These tags are simply a way of highlighting that the text has some significance; it remains with the human user to understand what the tags mean.

 

Figure 1: Tagging

Once tagged, entities can then be recognised and have information from various sources associated with them. Groundbreaking? Not really. It’s easy to tag content such that the system knows that “Haruhiko Kuroda” is a type of ‘person’; however, this still requires human input.
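As a rough sketch of what tagged output might look like programmatically – the entity types and character offsets here are invented purely for illustration – tagging amounts to recording where each entity occurs in the text and what broad type it has been assigned:

```python
# Minimal sketch of entity tagging: record the character span and broad type
# of each entity found in the sentence. Types and spans are illustrative only.

sentence = "Haruhiko Kuroda, governor of Bank of Japan, announced 0.1 percent growth"

entities = [
    {"text": "Haruhiko Kuroda", "type": "Person",
     "start": sentence.find("Haruhiko Kuroda")},
    {"text": "Bank of Japan", "type": "Organisation",
     "start": sentence.find("Bank of Japan")},
]

for e in entities:
    e["end"] = e["start"] + len(e["text"])
    print(f"{e['text']!r} tagged as {e['type']} at [{e['start']}, {e['end']})")
```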

Figure 2: Named entity recognition

Where semantics gets more interesting is in the representation and analysis of the relationships between these entities. Using the same example, the system is able to create a formal, machine-readable relationship between Haruhiko Kuroda, his role as the governor, and the Bank of Japan.

Figure 3: Relation extraction

In order for this to happen, the pre-existing environment must be defined. In order for the system to understand that ‘governor’ is a ‘job’ which exists within the entity of ‘Bank of Japan’, a rule must exist which states this as an abstraction. This is called an ontology.

Think of an ontology as the rule-book: it describes the world in which the source material exists. If semantic technology was used in the context of pharmaceuticals, the ontology would be full of information about classifications of diseases, disorders, body systems and their relationships to each other. If the same technology was used in the context of the football World Cup, the ontology would contain information about footballers, managers, teams and the relationships between those entities.
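A toy version of such a rule-book might look like the sketch below; the class names, hierarchy and relation rules are invented solely to illustrate the idea of an ontology, not drawn from any real vocabulary:

```python
# Toy ontology sketch: classes, a class hierarchy, and rules about which kinds
# of relationships may hold between which classes. All names are illustrative.

ontology = {
    "classes": {
        "Person":       {"subclass_of": "Thing"},
        "Organisation": {"subclass_of": "Thing"},
        "CentralBank":  {"subclass_of": "Organisation"},
        "City":         {"subclass_of": "Thing"},
    },
    # Each relation states which class its subject (domain) and object (range)
    # must belong to.
    "relations": {
        "governorOf":      {"domain": "Person",       "range": "CentralBank"},
        "headquarteredIn": {"domain": "Organisation", "range": "City"},
    },
}

def is_a(ontology, cls, ancestor):
    """Walk the subclass chain to test whether cls is a kind of ancestor."""
    while cls is not None:
        if cls == ancestor:
            return True
        cls = ontology["classes"].get(cls, {}).get("subclass_of")
    return False

# The ontology lets the system check that a statement is even well-formed:
# a 'governorOf' relation must link a Person to a CentralBank, and a
# CentralBank is itself a kind of Organisation.
print(is_a(ontology, "CentralBank", "Organisation"))  # True
```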

What happens when we put this all together? We can begin to infer relationships between entities in a system that have not been directly linked by human action.

Figure 4: Inference

An example: a visitor arrives on the website of a newspaper and would like information about bank governors in Asia. Semantic technology allows the website to return a much more sophisticated set of results from the initial search query. Because the system has an understanding of the relationships defining bank governors generally (via the ontology), it is able to leverage the entire database of published text content in a more sophisticated way, capturing relationships that would have been overlooked by computer analysis alone. The result is that the user is provided with content more closely aligned to what they are already reading.

Read the sentence and answer the question: “What is a ‘Haruhiko Kuroda’?” As a human the answer is obvious. He is several things: human, male, and a governor of the Bank of Japan. This is the type of analytical thought process, this ability to assign traits to entities and then use these traits to infer relationships between new entities, that has so far eluded computer systems. The technology allows the inference of relationships that are not specifically stated within the source material: because the system knows that Haruhiko Kuroda is governor of Bank of Japan, it is able to infer that he works with other employees of the Bank of Japan, that he lives in Tokyo, which is in Japan, which is a set of islands in the Pacific.
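To make that concrete, the sketch below applies two invented inference rules to a handful of hand-written facts taken from the example above; it illustrates the principle of forward-chaining inference rather than describing any particular semantic engine:

```python
# Sketch of inference over extracted (subject, relation, object) triples.
# The facts and the rules are illustrative, not drawn from a real dataset.

facts = {
    ("Haruhiko Kuroda", "governorOf", "Bank of Japan"),
    ("Bank of Japan", "headquarteredIn", "Tokyo"),
    ("Tokyo", "locatedIn", "Japan"),
}

def infer(facts):
    """Apply two simple chain rules repeatedly until no new facts appear."""
    rules = [
        # If X governs Y and Y is headquartered in Z, then X is based in Z.
        ("governorOf", "headquarteredIn", "basedIn"),
        # If X is based in Y and Y is located in Z, then X is based in Z too.
        ("basedIn", "locatedIn", "basedIn"),
    ]
    inferred = set(facts)
    changed = True
    while changed:
        changed = False
        for (s1, r1, o1) in list(inferred):
            for (s2, r2, o2) in list(inferred):
                for (ra, rb, rc) in rules:
                    if r1 == ra and r2 == rb and o1 == s2:
                        new = (s1, rc, o2)
                        if new not in inferred:
                            inferred.add(new)
                            changed = True
    return inferred - facts

for fact in sorted(infer(facts)):
    print(fact)
# ('Haruhiko Kuroda', 'basedIn', 'Japan')
# ('Haruhiko Kuroda', 'basedIn', 'Tokyo')
```

The two inferred statements – that Haruhiko Kuroda is based in Tokyo, and therefore in Japan – never appear in the source facts; they follow from the rules, which is exactly the kind of leap that keyword matching alone cannot make.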

Companies such as the BBC, which Ontotext has worked with, are sitting on more text data than they have ever experienced before. This is hardly unique to the publishing industry, either. According to Eric Schmidt, former Google CEO and executive chairman of Alphabet, every two days we create as much information as was generated from the dawn of civilisation up until 2003 – and he said that in 2010. Five years later and businesses of all sizes are waking up to this fact – they must invest in the infrastructure to fully take advantage of their own data. You may not be aware of it, but you are already using semantic technology every day. Take Google search as an example: when you input a search term, for example ‘Bulgaria’, two columns appear. On the left are the actual search results, and on the right are semantic search results: information about the country’s flag, capital, currency and other information that is pulled from various sources based on semantic inference.

Written by Jarred McGinnis, UK managing consultant at Ontotext