Announcing @LiaisonTech Named “Silver Sponsor” of Cloud Expo NY [#Cloud]

SYS-CON Events announced today that Liaison Technologies, a global provider of secure, cloud-based data management and integration services and solutions, has been named “Silver Sponsor” of SYS-CON’s 16th International Cloud Expo®, which will take place on June 9–11, 2015, at the Javits Center in New York, NY.
Liaison Technologies is a global data management and integration company. It provides innovative solutions to integrate, transform, harmonize, manage and secure critical business data on-premise or in the cloud. With a comprehensive array of business-to-business and application-to-application integration and data transformation services, as well as on-premise and cloud-based data security solutions, Liaison’s practitioners implement data management infrastructures adapted to each client’s specific business requirements. Headquartered in Atlanta, Liaison has offices in the Netherlands, Finland, Sweden and the United Kingdom.

read more

DevOps automation: Financial implication and benefits with AWS

(c)iStock.com/urbancow

Automation applied to efficient operations magnifies the efficiency, and that gain translates directly to a business's bottom line. This article looks at how DevOps automation in an Amazon Web Services (AWS) environment has tangible financial implications and benefits.

It is no surprise that cloud computing brings with it monetary advantages, which are realised when an organisation trades fixed capital expenditure (CAPEX) for variable operational expenditure (OPEX) on a pay-as-you-go usage model (learn more about Capex vs. Opex). In AWS environments, savings are realised further because the auto scaling capabilities inherent in AWS minimise wasted resources. Through automation, the gap between predicted demand and actual usage is narrowed, something that is rarely possible in traditional, on-premises technology deployments.
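As a rough, hypothetical sketch of this idea (the group name, region and thresholds below are placeholders, not taken from the article), a simple scaling policy tied to a CloudWatch alarm adds capacity only when measured demand rises, so spend tracks actual usage rather than a fixed forecast:

```python
# Sketch: scale out an existing Auto Scaling group when average CPU is high.
# "web-asg", the region and the thresholds are illustrative assumptions.
import boto3

autoscaling = boto3.client("autoscaling", region_name="us-east-1")
cloudwatch = boto3.client("cloudwatch", region_name="us-east-1")

# Add two instances whenever the policy is triggered, with a 5-minute cooldown.
policy = autoscaling.put_scaling_policy(
    AutoScalingGroupName="web-asg",
    PolicyName="scale-out-on-high-cpu",
    AdjustmentType="ChangeInCapacity",
    ScalingAdjustment=2,
    Cooldown=300,
)

# Trigger the policy when average CPU across the group exceeds 70% for 10 minutes.
cloudwatch.put_metric_alarm(
    AlarmName="web-asg-high-cpu",
    Namespace="AWS/EC2",
    MetricName="CPUUtilization",
    Dimensions=[{"Name": "AutoScalingGroupName", "Value": "web-asg"}],
    Statistic="Average",
    Period=300,
    EvaluationPeriods=2,
    Threshold=70.0,
    ComparisonOperator="GreaterThanThreshold",
    AlarmActions=[policy["PolicyARN"]],
)
```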

Moreover, automation allows new infrastructure to be deployed continuously within a matter of minutes, making a quick time to market possible without the significant collateral damage of failed deployments; these benefits compound as more of the pipeline is automated. DevOps services within the AWS offerings, such as CloudFormation and OpsWorks, make this automation possible.

AWS CloudFormation automates the provisioning and management of AWS resources using pre-built or configurable templates. These templates are text files with configurable parameters that can be used to set the template format version, map the region where the resources need to be deployed, and define the resources and security groups that need to be provisioned automatically.
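As a minimal, hypothetical illustration (the stack name, resource names, AMI ID and parameter are placeholders, not from the article), a template defining an instance and a security group can be launched with boto3's CloudFormation client:

```python
# Sketch: create a CloudFormation stack from an inline template.
# All names and the AMI ID below are illustrative assumptions.
import json
import boto3

template = {
    "AWSTemplateFormatVersion": "2010-09-09",
    "Parameters": {
        "InstanceType": {"Type": "String", "Default": "t2.micro"}
    },
    "Resources": {
        "WebSecurityGroup": {
            "Type": "AWS::EC2::SecurityGroup",
            "Properties": {
                "GroupDescription": "Allow inbound HTTP",
                "SecurityGroupIngress": [
                    {"IpProtocol": "tcp", "FromPort": 80, "ToPort": 80,
                     "CidrIp": "0.0.0.0/0"}
                ],
            },
        },
        "WebServer": {
            "Type": "AWS::EC2::Instance",
            "Properties": {
                "ImageId": "ami-12345678",  # placeholder AMI ID
                "InstanceType": {"Ref": "InstanceType"},
                "SecurityGroups": [{"Ref": "WebSecurityGroup"}],
            },
        },
    },
}

cloudformation = boto3.client("cloudformation", region_name="us-east-1")
cloudformation.create_stack(
    StackName="demo-web-stack",
    TemplateBody=json.dumps(template),
    Parameters=[{"ParameterKey": "InstanceType", "ParameterValue": "t2.micro"}],
)
```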

AWS OpsWorks makes it possible to automatically spin up new instances of AWS resources as and when needed, and also to change configuration settings based on system event triggers. Manual operations for common activities – e.g. installing new applications, copying data, configuring ports, setting up firewalls, patching, DNS registration, device mounting, starting and stopping services and rebooting – can all be automated using OpsWorks, or with additional configuration management frameworks such as Chef or Puppet.
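As a hedged sketch of what this looks like in practice (the stack ID, layer ID and instance size below are placeholders for an already-configured environment), a new OpsWorks instance can be created and started with boto3; on boot, OpsWorks runs the layer's configured setup recipes:

```python
# Sketch: add and boot a new instance in an existing OpsWorks stack/layer.
# The IDs below are placeholders for an already-configured stack and layer.
import boto3

opsworks = boto3.client("opsworks", region_name="us-east-1")

response = opsworks.create_instance(
    StackId="11111111-2222-3333-4444-555555555555",
    LayerIds=["aaaaaaaa-bbbb-cccc-dddd-eeeeeeeeeeee"],
    InstanceType="t2.micro",
)

# Starting the instance triggers the layer's setup lifecycle event, where
# packages are installed, ports configured, services started, and so on.
opsworks.start_instance(InstanceId=response["InstanceId"])
```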

In addition to these DevOps solutions, AWS also provides application management tools that automate code deployment (AWS CodeDeploy), source control (AWS CodeCommit) and continuous delivery pipelines (AWS CodePipeline). Besides making it possible to release new features rapidly, AWS CodeDeploy automates deployment itself, reducing error-prone manual operations that can be costly to recover from.

AWS CodeCommit can be used to manage source code versions and source control repositories, ensuring that the correct version is deployed and thereby reducing the need to roll back, which can be expensive in terms of time lost, missed deadlines and missed revenue opportunities. AWS CodePipeline makes infrastructure and application releases seamless and automated, reducing the operational cost of checking out code, building it, deploying the application to test and staging environments, and releasing it to production.
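For illustration only (the application, deployment group, bucket and key names below are invented), a CodeDeploy deployment of a revision already pushed to S3 can be triggered from a build script with boto3:

```python
# Sketch: start a CodeDeploy deployment of an application revision stored in S3.
# Application, deployment group, bucket and key names are illustrative assumptions.
import boto3

codedeploy = boto3.client("codedeploy", region_name="us-east-1")

deployment = codedeploy.create_deployment(
    applicationName="demo-app",
    deploymentGroupName="demo-app-production",
    revision={
        "revisionType": "S3",
        "s3Location": {
            "bucket": "demo-app-releases",
            "key": "demo-app-1.2.3.zip",
            "bundleType": "zip",
        },
    },
    description="Automated release of build 1.2.3",
)

print("Started deployment:", deployment["deploymentId"])
```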

The automation and auto scaling of infrastructure and applications in AWS environments allows a business to benefit from both reduced time and reduced cost. Automation across Amazon's cloud computing service offerings not only makes business operations more efficient, but also magnifies the efficiencies gained, with a direct impact on the bottom line: the financial posture of the business.

The post DevOps Automation: Financial Implication and Benefits with AWS appeared first on Cloud Computing News.

How cloud providers can prevent data loss: A guide

(c)iStock.com/4774344sean

Feature Cloud service providers find themselves in a struggle, balancing responsibility for maintaining data integrity against delivering cost-effective solutions to their customers, all the while protecting their own data assets and bottom line.

Generally, the type of service a provider is delivering limits its level of responsibility. In the case of infrastructure as a service (IaaS), a provider might just be delivering infrastructure and a means of creating cloud environments, with no provision for backing up customer data. As part of a platform as a service (PaaS) offering, backup and other forms of data protection may well be key selling points.

Basic types of data loss include data destruction, data corruption and unauthorised data access. The reasons for these types of loss are varied and include infrastructure malfunctions, software errors and security breaches. Given the complexities around data centre and cloud security, this article will deal with destruction and corruption of data only.

Definition of data loss domains

There are many types of data within a cloud environment, in fact too many to enumerate. These data types can be classified into general categories, or data domains. The importance of these domains to the constituents of the cloud environment gives rise to the concept of data loss domains: who is affected most, and how much impact is there, if the data is lost? The diagram above represents the three major data domains, provider non-customer effective (PNCE), provider customer effective (PCE) and customer (CUST), along with examples of the types of data in the provider domains. This section defines the domains and the data types.

Provider data non-customer effective (PNCE)

This data loss domain contains information that belongs to the cloud service provider and has no effect on the customer. If this information is lost or damaged, it will have a significant impact on the provider and its ability to conduct business.

On the other hand, loss of this data has little to no effect on customers; if billing information were lost and irretrievable, for example, the customer would probably not mind. The responsibility for protecting this data obviously lies with the provider. The following is a short list of examples of PNCE data:

  • Business management data
    – Billing and metering information
    – Service quality data
    – IT benchmarking data
  • Environment management data
    – Development/DevOps data
    – Inventory and configuration management data
    – Performance and capacity management data
  • Security data
    – Security systems management
    – ID management, authentication and authorisation data
  • Logging data
    – System logs
    – Network activity logs
    – Security logs

Provider data customer-effective (PCE)

This domain represents data that is owned by the provider, is significant to the provider for business reasons (the provider needs to know, for example, how many VMs a customer has created), and is significant to the customer because it defines their cloud deployment.

Both provider and customer will be impacted in the case of loss, and responsibility for protecting the data is shared but primarily falls on the provider. For example, virtual machine configurations are the provider's responsibility to protect unless they are marked transient (the usual default state), in which case no protection is required. Some of the data types that fall into this domain are:

  • Self-service portal data
    – Blueprints
    – Environment default settings
  • Virtual infrastructure configuration
    – Virtual machine/compute configurations
    – Virtual networking (SDN, vRouting, vSwitching, VLAN, VXLAN)
  • Orchestration and Automation
    – Provisioning and provisioning management
    – Promotion

Customer data

Customer data can take an infinite number of forms but constitutes the universe of data needed to run customer-developed and/or customer-deployed services. The customer owns this data and is responsible for its protection unless otherwise arranged with the provider. A customer may choose to have a cloud service provider replicate, back up or otherwise protect customer-owned data based on an agreement with the provider. These services generally take the form of a financial and service-level agreement between the parties.

Preventative measures

Just because the IT world now lives in, or is moving to, the cloud doesn't mean that the rules of data protection have changed. We still need to measure Recovery Point Objective (RPO) and Recovery Time Objective (RTO) the same way we have in the past. We still need to implement data protection solutions based on the balance of RTO/RPO, the criticality of the data and the cost of implementation. We still tend to implement multiple solutions, or multiple tiers of solutions, to suit the environment.

The difference is, as we have shown above, who owns the data and who is responsible for the protection of the data.

There are some general categories of data protection methods that can be used, and that should be considered when preparing an environment to minimise data loss. They include:

  • Disk level data protection – This can be the old, and still best, practice of RAID-based protection of disk resources. Another option is scale-out storage (e.g. EMC Isilon, Nutanix), which spreads data across multiple nodes of a cluster.
  • Backup/replicated backup – Periodically backing up data to a lower-tier, lower-cost medium. Advances in disk-based backup and replication technologies have lowered the cost, increased the efficiency and raised the level of recoverability of backup and recovery solutions.
  • Data replication – Data replication technology has existed in one form or another for a number of years. Data written to one set of storage resources is automatically replicated, via software, to secondary storage. The problem for most of replication technology's history has been accessibility and granularity: technologies such as SRDF have always been highly reliable, but accessing the data from both primary and secondary resources, or retrieving anything other than the most recent information, was not possible.
  • Journaled/checkpoint-based replication – Enter journaled file systems and checkpoint-based replication. This type of technology (e.g. EMC RecoverPoint) allows not only read/write access to either side of a replicated storage set, but also the ability to recover data at a point in time.

Protecting data

Now that we understand the data and the means for protecting it, we can move on to the process for doing so. A cloud service provider needs to consider two major steps: classifying the data and building a flexible data protection (DP) environment.

The following template can be used to classify the data that needs protecting:

Data domain     Data type     Criticality     RTO/RPO     DP method
1               –             –               –           –
2               –             –               –           –
3               –             –               –           –

Once the RTO/RPO and criticality characteristics of the data are understood, an intelligent decision about the protection method and frequency of execution can be made. For example, if a data set has a low RPO (transactional data), then replication with frequent snapshots might be necessary. If a data set has a high RPO (a low rate of change), then low-frequency backup should suffice.
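As an illustrative sketch only (the thresholds, data sets and method names are invented here, not prescribed by the article), that decision rule might be encoded as follows:

```python
# Sketch: map a classified data set's criticality and RPO to a protection method.
# Domains follow the article (PNCE, PCE, CUST); thresholds are assumptions.
from dataclasses import dataclass


@dataclass
class DataSet:
    domain: str        # "PNCE", "PCE" or "CUST"
    data_type: str
    criticality: str   # "high", "medium" or "low"
    rpo_minutes: int   # acceptable data-loss window in minutes


def choose_dp_method(ds: DataSet) -> str:
    if ds.criticality == "high" and ds.rpo_minutes <= 15:
        return "journaled/checkpoint-based replication"
    if ds.rpo_minutes <= 60:
        return "replication with frequent snapshots"
    if ds.rpo_minutes <= 24 * 60:
        return "daily backup to a lower-cost tier"
    return "weekly backup"


billing = DataSet("PNCE", "billing and metering", "high", rpo_minutes=15)
blueprints = DataSet("PCE", "self-service blueprints", "medium", rpo_minutes=24 * 60)

for ds in (billing, blueprints):
    print(f"{ds.domain}/{ds.data_type}: {choose_dp_method(ds)}")
```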

The following diagram shows an example environment including data protection elements:

The combination of clear data classification and a comprehensive data protection solution, including tools, infrastructure, processes and procedures, should be a major goal of the cloud service provider. Planning and execution around these concepts will allow the service provider to remain fiscally responsible while meeting its data protection responsibilities to its customers.

SOASTA Adds RUM to CloudTest Solution | @DevOpsSummit @CloudTest [#DevOps]

SOASTA has combined its two testing and monitoring products into a single solution for continuous performance analysis for mobile and Web applications. SOASTA CloudTest now incorporates mPulse, the company’s Real User Monitoring (RUM) solution, which integrates complete performance data from real users to ensure continuous peak performance of digital businesses.

read more

The ‘IT-ification’ of Business… By @ExtraHop | @CloudExpo [#Cloud]

A few years ago, there was much ado about the “consumerization” of IT, and its impact on IT operations. As a CIO, “consumerization” was an issue, but in my experience the more profoundly impactful trend has really been the “IT-ification” of business. I’ve spent the last 20 years watching as information technologies creep into every aspect of business operations, from human resources to point-of-sale systems to healthcare delivery. While we’ve been moving in this direction for a while now, I believe that 2015 is the year when organizations will really begin to understand just how strategic a weapon IT can be. Over the next 12 months, expect to see the following trends crystallize around the recognition of the strategic business value of IT.

read more

Announcing @NimbleStorage to Exhibit at @CloudExpo New York [#Cloud]

SYS-CON Events announced today that Nimble Storage, the flash storage solutions company, will exhibit at SYS-CON’s 16th International Cloud Expo®, which will take place on June 9-11, 2015, at the Javits Center in New York City, NY.
Nimble Storage (NYSE: NMBL) is redefining the storage market with its Adaptive Flash platform. Nimble’s flash storage solutions enable the consolidation of all workloads and eliminate storage silos by providing enterprises with significant improvements in application performance and storage capacity. At the same time, Nimble delivers superior data protection, while simplifying business operations and lowering costs. At the core of the Adaptive Flash platform is the patented Cache Accelerated Sequential Layout (CASL) architecture and InfoSight, an automated cloud-based management and support system that maintains storage system peak health. More than 4,300 enterprises, governments, and service providers have deployed Nimble’s flash storage solutions across 38 countries.

read more

The CEO Is Dead, Long Live the CIO & CEO By @ABridgwater [#Cloud #BigData]

Perhaps not quite as clear-cut as a logical AND in the programmatic sense, but the age of the CEO as we once knew it could be over.
There's a simple reason for this: CIOs (and yes, of course we do mean Chief Information Officers) are starting to run their operations as business entities in their own right.
In no way detracting from the popular assertion that IT has to “progress from simply being a function in the business to now become a business enabling function in and of itself” (as the expression goes)… there’s an even deeper truth to realize.

read more

Tune into the Cloud: Dock of the Bay By @GregorPetri | @CloudExpo [#Cloud]

No trend is currently as hot as containers, and particularly the popular container management system Docker. Although arguably just maturing, it has already reached almost mythical proportions of cloud hype. Docker, we are told, will make virtualization superfluous, replace PaaS altogether and, thanks to unparalleled portability, put a final end to decades of platform and vendor lock-in. As a result, there is currently no start-up or cloud provider that has not incorporated Docker prominently into its 2015 strategy.

read more

2015 Predictions: End User Computing and Security

Earlier in the week, we posted some 2015 predictions from Chris Ward and John Dixon. These predictions covered cloud, the internet of things and software-defined technologies. Here are a few quick predictions around end user computing and security from Francis Czekalski and Dan Allen.

Francis Czekalski, Practice Manager, End User Computing

Short and sweet – here are four things to keep an eye on in 2015 around end user computing:

  • More integration with mobile devices
  • Wrappers for Legacy Applications to be delivered to IOS devices
  • Less and less dependency for traditional desktops and more focus of delivery on demand
  • Heightened focus on data security

Francis presenting at GreenPages’ annual Summit event

Dan Allen, Solutions Architect

Hacktimonium! Remember when only big companies got spam? Then small companies? Then individuals? The same is happening with hacking and digital intrusion, and the trend will continue into 2015. Having a firewall isn't going to be enough; you need some form of intrusion prevention in place, such as a Cisco ASA with Sourcefire, a Radware appliance, or the unified threat management offerings from some of the smaller brands.

A Year in review: Who got hacked last year?

The Big Ones

  • Apple’s iCloud – Individual accounts hacked.
  • JP Morgan Chase – Enterprise network hacked
  • Sony – Individual and then enterprise hack
  • UPS
  • Target

A list of others you might know.

  • AOL
  • eBay
  • LivingSocial
  • Nintendo
  • Evernote
  • USPS
  • Blizzard
  • Snapchat
  • Neiman Marcus
  • Home Depot
  • Washington State Justice Computer Network
  • Yahoo Japan
  • Domino's France

The final word here? You Won’t Know You’ve Been Hacked Until It’s Already Gone.

What do you think 2015 has in store around end user computing and security?

By Ben Stephenson, Emerging Media Specialist

.@CommVault Launches Endpoint Data Protection | @CloudExpo [#Cloud]

CommVault has launched Simpana for Endpoint Data Protection, a new solution set designed to help protect and enable the mobile workforce by efficiently backing up laptops, desktops and mobile devices and providing secure access and self-service capabilities.

With today's mobile workforce increasingly relying on information saved on local endpoints, outside IT's traditional domain, the need to protect sensitive data residing on desktops, laptops, and mobile devices has become more critical than ever before. With data breaches reaching an average global cost of US$3.5 million, according to a study by the Ponemon Institute, and with data on employee devices, including desktops, laptops, and mobile devices, at risk of being lost or unrecoverable, organizations are beginning to embrace centrally managed platforms that can simultaneously address data protection, collaboration, regulatory, and eDiscovery requirements in a secure manner.

read more