Everything You Need to Know About Network Modes in Parallels Desktop

Guest blog by RamaKrishna Sarma Chavali, Parallels Support Team

How does your virtual machine connect to the Internet in Parallels Desktop? This is a question I hear pretty often from users, so let me shed some light on this. Parallels Desktop has three different networking modes to “talk to the world”. These are Bridged, Shared and Host-Only. […]
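For readers who manage VMs from the command line, here is a small, illustrative Python sketch that switches a VM's first network adapter between these three modes by calling Parallels' prlctl tool. The VM name, the adapter name ("net0") and the exact flag spellings are assumptions; verify them against the prlctl documentation for your Parallels Desktop version before relying on this.

```python
# Illustrative sketch only: switch a Parallels VM's first network adapter
# between the three modes described above by shelling out to the prlctl CLI.
# The VM name, adapter name ("net0") and exact prlctl flags are assumptions;
# check them against your Parallels Desktop version's documentation.
import subprocess

VALID_MODES = {"shared", "bridged", "host-only"}

def set_network_mode(vm_name, mode, host_iface=None):
    """Set the network type of adapter net0 to shared, bridged or host-only."""
    if mode not in VALID_MODES:
        raise ValueError(f"unknown network mode: {mode!r}")
    cmd = ["prlctl", "set", vm_name, "--device-set", "net0", "--type", mode]
    if mode == "bridged" and host_iface:
        cmd += ["--iface", host_iface]  # e.g. the Mac's en0 adapter
    subprocess.run(cmd, check=True)

# Example (hypothetical VM name): put "Windows 11" into Bridged mode via en0.
# set_network_mode("Windows 11", "bridged", host_iface="en0")
```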


Parallels RAS Web Portal vs. Citrix XenApp Plug-in

Mobility, easy access through any browser and high availability are three important reasons the cloud portal has become popular in recent years. A cloud portal is a web-based interface that allows users to access corporate resources from a web browser. While businesses get the agility and flexibility to rapidly deploy business services, end users enjoy the […]


Data sovereignty and the cloud: How do we fully control data amidst the cloud sprawl?


The problem

One of the basic tenets of cloud computing is the ability to provide access to resources across a geographically dispersed cloud environment. This makes the cloud ideal for global distribution of applications and data. But what about those geographies that have highly restrictive data sovereignty laws or practices, such as Germany, Austria and South Korea? What about governmental bodies attempting to protect information while utilising the cloud?

An interesting example is the German government, which, in certain circumstances, requires that data on German companies and their employees never leave German soil and that only German citizens be allowed to administer that data. These data sovereignty (DS) scenarios, and many others like them, present a challenge for organisations: protecting the data entrusted to them while still cutting costs and gaining the efficiencies associated with the cloud.

From a business standpoint, these organisations are charged with protecting information about their business, customers, users or governments. Unauthorised access to private customer data, governmental assets or corporate assets could be devastating. We need look no further than the recent state-sponsored attack on US federal government employee databases to see the effect of these types of breaches.

From a technical view, IT departments are increasingly relied upon to implement data access controls, data filtering and separation management functions according to DS rules. Then, just as IT thinks it finally has a handle on the problem, along comes the cloud, offering ubiquitous data access and upsetting the nice, neat model they’ve created.

So how do we control data when the whole point of the cloud is to distribute data and applications? Large organisations, especially those that span multiple countries, face this question daily. I was recently involved with a client that not only does business globally and must be sensitive to governmental restrictions, but also has specific contractual obligations with a number of its customers as to where and how files, email and other data can be stored and transmitted.

The solution

The chosen solution will be specific to the circumstances and type of organisation, but it can generally be viewed as having several components:

  • Security standards. These solutions require a strong set of on-premises and cloud-based security standards. As I have previously written, it is important when developing a hybrid cloud solution to extend the corporate security standards, as far as possible, into the cloud.
  • Data loss prevention (DLP) monitoring and controls. DLP software defines controls over the flow of data and monitors that data flow to detect data breaches.
  • Data aware services. As services are developed, the integrated software components need to have proper authorisation and filtering capabilities. An example would be an identity management system where the directory services replicate data between geographically dispersed instances, based on filtering rules.
  • Data segmentation across a hybrid cloud infrastructure. As in the example above, countries or organisations may require different levels of DS control, necessitating that data have a defined location. In this case, a straightforward solution is a hybrid cloud with regional instances located at, or in proximity to, the point of high DS requirement (a minimal sketch of this kind of residency-based routing follows this list).
  • Consistent management tools. Apply common and consistent management tools and practices across all cloud regions, with controls over who is authorised to administer a given instance or data set.
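As a concrete illustration of the “data aware services” and “data segmentation” components above, the minimal Python sketch below routes each record to a regional storage instance based on a residency tag and fails closed when no approved location exists. The endpoints and record format are invented for illustration and are not taken from any particular product.

```python
# A minimal sketch, not the author's design: tag each record with a residency
# requirement and route it only to an approved regional instance.
# Endpoint URLs and the record format are invented for illustration.
from dataclasses import dataclass, field

# Hypothetical regional endpoints in a hybrid cloud with in-country instances.
REGION_ENDPOINTS = {
    "DE": "https://storage.de.internal.example/api",   # German data stays on German soil
    "KR": "https://storage.kr.internal.example/api",
    "GLOBAL": "https://storage.global.example/api",    # unrestricted data
}

@dataclass
class Record:
    record_id: str
    residency: str                          # e.g. "DE", "KR" or "GLOBAL"
    payload: dict = field(default_factory=dict)

def route(record: Record) -> str:
    """Return the only endpoint this record may be written to; fail closed on unknown tags."""
    try:
        return REGION_ENDPOINTS[record.residency]
    except KeyError:
        raise ValueError(f"no approved endpoint for residency tag {record.residency!r}")

# Example: a German employee record is only ever routed to the German instance.
print(route(Record("emp-001", "DE")))   # -> https://storage.de.internal.example/api
```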

[Diagram: an example solution utilising all of the above concepts.]

IT teams facing data sovereignty and data protection issues should not view the cloud as a challenge to be overcome, but as a partner to be embraced. The technology and best practices exist to gain all the benefits of cloud computing while ensuring the protection, privacy and authorised access to sensitive and regulated data.

[session] Infrastructure as a Toolbox By @SoftLayer | @CloudExpo #Cloud

Countless business models have sprung from the IaaS industry – reselling web hosting, blogs, public cloud, and on and on. With the overwhelming number of tools available to us, it’s sometimes easy to overlook that many of them are just new skins on resources we’ve had for a long time.
In his general session at 17th Cloud Expo, Phil Jackson, Lead Technology Evangelist at SoftLayer, an IBM Company, will break down what we’ve got to work with, discuss the benefits and pitfalls, and explain how we can best use them to design hosted applications.


Case Study: Software Development – A Nearshore Success Story | @CloudExpo #Cloud

Serving more than 600 hospitals in the U.S., Adreima provides clinically integrated revenue cycle services. Read this case study to learn how partnering with Tiempo Development has proved to be the most productive, cost-effective way to advance a software platform that serves marketing strategy, client service delivery, and information management.


Microsoft unveils cloud security plans for Adallom amid rising cloud unrest

Microsoft has announced its plans for Adallom, the Israeli-founded cloud security firm it bought for a reported $250 million.

Details of the plans for its new acquisition were unveiled in a Microsoft blog post by corporate VP for cloud and enterprise marketing Takeshi Numoto. Though reports of the acquisition emerged in July, details of Microsoft’s cloud security strategy have only now been made public.

The frequency of advanced cybersecurity attacks has made security ‘top of mind’ among cloud users, according to Numoto. The acquisition of Adallom will expand Microsoft’s existing identity assets by acting as a cloud access security broker, allowing customers to see and control application access, Numoto explained. It will also protect critical company data stored across cloud services. Adallom helps secure and manage popular cloud applications including Salesforce, Box, Dropbox, ServiceNow, Ariba and Microsoft’s own Office 365.

Adallom will complement existing Microsoft offerings as part of Office 365 (serving in a monitoring capacity) and the Enterprise Mobility Suite (EMS), which includes Microsoft’s Advanced Threat Analytics system. Microsoft had previously bought another cloud security vendor with Israel Defense Forces ties, Aorato, in 2014; Aorato was rebranded as Advanced Threat Analytics.

Adallom’s technology monitors the use of software-as-a-service applications. The company was founded in 2012 by Assaf Rappaport, Ami Luttwak and Roy Reznik, who met while serving in intelligence for the Israel Defense Forces.

The unveiling of Microsoft’s cloud defence plans coincides with an independent report by Osterman Research, which found that 76 per cent of UK firms are concerned about the lack of security in the cloud, with consumer-grade cloud storage of corporate documents named as the chief cause of unease.

The report found that employees preferred consumer-focused file sync and share (CFSS) solutions to enterprise-grade file sync and share (EFSS) solutions in the workplace, and often failed to consider the security risk posed by CFSS solutions.

Services such as Dropbox, which let consumers instantly sync files across all their devices but do not provide the same protection of information as EFSS, were identified in Osterman Research’s report as a particular cause for concern; these are among the services that Microsoft’s new cloud security acquisition will monitor.

“Use of CFSS over EFSS significantly increases corporate risk and liability,” the Osterman Research report warned.

“We are thrilled to welcome the Adallom team into the Microsoft family,” said Numoto in his Microsoft blog post. “Cybercrime will persist in this mobile-first, cloud-first era, but at Microsoft we remain committed to helping our customers protect their data.”

APMG launches end user cloud computing foundation certification scheme

Cloud industry expert Bernard Golden has created a vendor-independent course to help people and businesses make the transition to cloud-based services.

Golden developed the course for APMG International, which has launched a new end-user cloud computing certification scheme. The aim is to give the workforce the cloud skills needed to support the migration to cloud-based computing.

The course, Cloud Computing Foundation Certification, is designed to give an impartial, objective introduction to cloud computing. This grounding is necessary, according to Golden, before any organisation can move to the cloud successfully.

The certification was developed in response to the mounting need for businesses to understand and prepare for the move to the cloud. The course is aimed at all enterprise IT employees, from finance to operations, and sets out to cover the fundamentals of cloud computing before explaining the benefits, challenges and trade-offs of rival delivery models.

The most important aspect, according to Golden, who is APMG’s Chief Examiner, is the creation of a cloud computing action plan for each course participant. The ultimate proof of the course will be the successful adoption of cloud computing.

“With cloud computing fast becoming the de facto platform for enterprise computing, the failure to understand its fundamentals poses a real danger,” said Golden. Failure will affect both the productivity of businesses and the employment prospects of the staff within them, he warned.

Though the benefits of moving an organisation’s data to the cloud – from potential cost savings to increased flexibility – are well documented, the execution is not, according to Golden. It is this gap in understanding that he intends to address, he said.

“The fact is that the majority of deployments aren’t as simple as just flicking a switch – you need to fully comprehend the security, technical and regulatory implications to make cloud a success, which is why training and certification are critical,” said Golden.

Many cloud computing training courses tend to be heavily weighted in favour of one vendor, which ultimately provides a skewed view, Golden claimed.

“This course has been designed to provide a vendor neutral knowledge base to provide an objective education about the topic,” said Golden, who promised there would be no ‘abstract knowledge without practical application’. Students will learn concepts and tools that can be applied immediately in the working environment, said Golden.

“For cloud projects to succeed they need to gain acceptance within businesses,” said Richard Pharro, CEO of APMG.

The case for disaster recovery services beyond business continuity


Disaster recovery isn’t a new concept for IT. We have been backing up data to offsite locations for years and using in-house data duplication to reduce the risk of losing data stores. But now that cloud adoption has increased, there have been some shifts in how traditional disaster recovery is handled.

First, we’re seeing increased adoption of cloud-based backup and disaster recovery. Gartner stated that between 2012 and 2016, one third of organisations would look at new solutions to replace their current ones, particularly because of cost, complexity or capability. These new solutions address not just data but the applications themselves, and are paving the way for disaster recovery as a service (DRaaS).

Unfortunately, there is still some confusion as to when cloud services may suffice for disaster recovery and when fully fledged DRaaS makes more sense. Let’s explore four key considerations when it comes to DRaaS and cloud backup services.

DRaaS isn’t just for emergency situations

A lot of organisations still view disaster recovery as a reactive solution and forget that simply having cloud-based services in the first place, especially from a provider that applies business continuity best practices, may give them inherent DR/failover protection.

This means less downtime risk overall and a more proactive approach to keeping your organisation up and running at all times, which helps ensure you can respond to your customers 24/7/365.

Cloud services might help boost a small IT department’s overall security profile

While you should absolutely do your homework before signing up for cloud services, the fact is that these services are often more secure than many organisations’ own environments, and they come with enterprise-grade security controls specifically configured for the characteristics of each service.

This means that if you are a smaller organisation without a lot of security resources to do the legwork for an in-house build, a cloud solution might give you more bang for your buck by reducing onsite data protection costs, personnel costs and the day-to-day effort of keeping security controls in place.

Consider the skillsets required for disaster recovery

There are a lot of solutions you can leverage for in-house builds that deliver not just lower costs but also better control and the ability to work with multiple platforms and projects. The reality, though, is that disaster recovery needs to be at the forefront of these projects (alongside security and functionality), and if you don’t have the right skillsets to ensure it is not just built in but constantly reviewed and updated, it might be best to look at a service provider who does. The last thing your organisation can afford, should something happen, is to lack the resources needed to ensure business continuity during the outage and to be left scrambling to figure out how to fix it.

Cloud storage isn’t a way to get around disaster recovery

While it’s important to be able to access your files no matter what happens, if you can’t run the front ends needed to get to that data, recovery will be a nightmare. By choosing DRaaS over a plain cloud storage solution, with multiple failover sites for your applications as well, you will still be able to run the systems themselves should there be an outage. This is why we will continue to see large enterprises look at failing IT services over across multiple data centres as a disaster recovery strategy, making the cloud more of a data-centre-on-demand type of service.
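To make the failover idea concrete, here is a minimal, purely illustrative Python sketch of an application-level health check that decides which data centre should serve traffic. The endpoints are placeholders, and a real DR setup would also handle DNS or load balancer updates and replication lag between sites.

```python
# A minimal sketch, not a production DR tool: health-check the primary
# application endpoint and send traffic to a secondary site when it stops
# responding. The URLs are placeholders.
import urllib.error
import urllib.request

PRIMARY = "https://app.dc1.example.com/health"     # hypothetical primary data centre
SECONDARY = "https://app.dc2.example.com/health"   # hypothetical failover data centre

def healthy(url, timeout=3.0):
    """Return True if the endpoint answers its health check with HTTP 200."""
    try:
        with urllib.request.urlopen(url, timeout=timeout) as resp:
            return resp.status == 200
    except (urllib.error.URLError, OSError):
        return False

def active_site():
    """Choose which data centre should serve traffic right now."""
    return PRIMARY if healthy(PRIMARY) else SECONDARY
```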

Conclusion

No matter what service you ultimately decide to go with, the key is to do your research. You need to take a thorough inventory of the systems involved, from application and data servers (physical and virtual) to endpoints, along with the usual SQL, Exchange and CRM systems. You should also understand what the disaster recovery process would look like, so that if the vendor needs to be involved, you know ahead of time.
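One simple way to start that inventory, sketched below purely as an illustration, is to record each system with an owner, a recovery time objective (RTO) and a recovery point objective (RPO). The entries and numbers here are invented examples, not recommendations.

```python
# Purely illustrative: capture the DR inventory described above with an RTO and
# RPO per system so gaps show up before an outage rather than during one.
# System names and numbers are invented examples.
from dataclasses import dataclass

@dataclass
class SystemEntry:
    name: str
    kind: str           # "database", "application server", "endpoint fleet", ...
    virtual: bool       # virtual machine or physical host
    owner: str          # who acts for this system during a DR event
    rto_hours: float    # how long the business can tolerate it being down
    rpo_hours: float    # how much data loss (in time) is acceptable

# Hypothetical entries matching the examples in the text (SQL, Exchange, CRM).
INVENTORY = [
    SystemEntry("SQL cluster", "database", virtual=False, owner="DBA team", rto_hours=2, rpo_hours=0.25),
    SystemEntry("Exchange", "application server", virtual=True, owner="Messaging team", rto_hours=4, rpo_hours=1),
    SystemEntry("CRM", "application server", virtual=True, owner="Sales IT", rto_hours=8, rpo_hours=4),
]

def recovery_order(inventory):
    """Order systems by how quickly they must be brought back (smallest RTO first)."""
    return sorted(inventory, key=lambda s: s.rto_hours)

for entry in recovery_order(INVENTORY):
    print(f"{entry.name}: RTO {entry.rto_hours}h, RPO {entry.rpo_hours}h, owner {entry.owner}")
```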

Most importantly, be realistic with the skill sets available on your IT team, and if there is a gap, this could be a good indicator that it makes sense to look at hosted or managed solutions. The last thing you want to do in the case of an outage is to go back through SLAs to figure out whom you need to contact for help, or who is ultimately responsible for different functions. The more control you have over the DR environment, the easier it will be for you to get back up and running.


VMworld 2015: Day One Recap

It was a long but good week out west for VMworld 2015. This year’s event was kicked off by Carl Eschenbach (COO), who said there were roughly 23,000 attendees at the event this year, a new record. Carl highlighted that the core challenges seen today by VMware’s customers are speed, innovation, productivity, agility, security, and cost. Not a huge surprise based on what I have seen with our customer base. Carl then went into how VMware could help customers overcome these challenges and broke the solutions up into the categories of run, build, deliver, and secure. The overarching message here was that VMware is keenly focused on making the first three (run, build, and deliver) easier while focusing on security across all of the various product/solution sets in the portfolio. Carl also hit on freedom, flexibility, and choice as being core to VMware, meaning that they are committed to working with any and all vendors/solutions/products, both upstream in the software world and downstream in the hardware world. We’ve heard this message now for a couple of years and it’s obvious that VMware is making strides in that area (one example being more and more OpenStack integration points).

 

Carl then began discussing the concept of a single Unified Hybrid Cloud.  In a way, this is very similar to GreenPages’ CMaaS messaging in that we don’t necessarily care where systems and applications physically reside because we can provide a single pane of glass to manage and monitor regardless of location.  In the case of VMware, this means having a common vSphere based infrastructure in the datacenter or in the cloud and allowing seamless movement of applications across various private or public environments.

Carl then introduced Bill Fathers, the general manager for vCloud Air.  Apparently, the recent rumors regarding the death of vCloud Air were greatly exaggerated as it was front and center in both keynotes and during Sunday’s partner day. As far as vCloud Air adoption, Bill said that VMware is seeing the most traction in the areas of DR, application scaling, and mobile development.

Bill brought Raghu Raghuram, who runs the infrastructure and management (SDDC) business, up on stage with him. Raghu again kept the conversation at a high level and touched on the rise of the hybrid application and how VMware’s Unified Hybrid Cloud strategy could address it. A hybrid application is one in which some components (typically back-end databases) run in the traditional on-premises datacenter while other components (web servers, middleware servers, etc.) run in a public cloud environment. This really ties into the age-old concept of “cloud bursting,” where one might need to spin up a lot of web servers for a short period of time (Black Friday for retail, Valentine’s Day for flower shops, etc.) and then spin them back down. This has really been a bit of science fiction to date, as most applications were never developed with this in mind and, thus, don’t necessarily play nice in this world. However, VMware (and I can personally attest to this via conversations with customers) is seeing more and more customers develop “cloud native” applications which ARE designed to work in this way. I would agree that this will be a very powerful cloud use case over the next 12-24 months. I see GreenPages being very well positioned to add a ton of value for our customers in this area, as we have strong teams on both the infrastructure and cloud native application development sides of the equation.

Another tight collaboration between Bill and Raghu’s teams is Project Skyscraper: the concept of Cross-Cloud vMotion, which, as the name would imply, is the process of moving a live, running virtual machine between a private cloud and vCloud Air (or vice versa) with literally zero downtime. Several technologies come together to make this happen, including NSX to provide the layer 2 stretch between the environments and shared-nothing vMotion/vSphere Replication to handle the data replication and actual movement of the VM. While this is very cool and makes for a great demo, I do question why you would want to do a lot of it. As we know, there is much more to moving an existing application to a cloud environment than simply forklifting what you have today. Typically, you’ll want to re-architect the application to take full advantage of what the public cloud can offer. But if you simply want an active/active datacenter and/or stretch cluster setup and don’t have your own secondary datacenter or co-lo facility to build it, this could be a quick way to get there.

Following Raghu was Rodney Rogers, CEO of Virtustream, the hosting provider recently acquired by EMC and the rumored death knell for vCloud Air. Rodney did a great job explaining where Virtustream fits in the cloud arena: it is essentially a place to host business-critical tier 1 applications, like SAP, in a public cloud environment. I won’t go into deep technical detail, but Virtustream has found a way to make hosting these large critical applications cost effective in a robust, resilient way. I believe the core message here was that Virtustream and vCloud Air are a bit like apples and oranges and that neither is going away. I do believe that at some point soon we’ll be hearing about some form of consolidation between the two, so stay tuned!

Ray O’Farrell, the newly appointed CTO and longtime CDO (Chief Development Officer), was next up on the stage. He started off talking about containers (Docker, Kubernetes, etc.) in a general sense. He quickly went on to show some pretty cool extensions that VMware is working on so that virtualization admins can have visibility into the container level via traditional management tools such as the vCenter Web Client. This is currently a bit of a blind spot, as the VMware management tools can drill down to the virtual machine level but not into any additional partitioning (such as containers) that may exist within virtual machines. Additionally, Ray announced Project Photon. It’s basically a super-thin hypervisor based on the vSphere kernel which would act as a container platform within the VMware ecosystem. The platform consists of a controller, which VMware will release as open source, and a ‘machine’, which will be proprietary to VMware as part of the Photon Platform and offered as a paid subscription service. Additionally, the Pivotal Cloud Foundry platform will be bundled with Photon as another subscription option. It’s apparent that VMware is really driving hard into the developer space, but it remains to be seen whether workloads like big data and containers will embrace a virtual platform. I’ll post a recap of Tuesday’s general session tomorrow!

GreenPages is hosting a webinar on 9/16, “How to Increase Your IT Equity: Deploying a Build-Operate-Transform Model for IT Operations”. Learn how to create long-term value for your organization and meet the increasing demand for services. Register Now!

 

By Chris Ward, CTO

Interoute buys Easynet for £402 million

Network and cloud service operator Interoute has entered an agreement to buy European managed services provider Easynet in a deal valued at £402 million.

Easynet manages services for clients including Sports Direct, EDF, Bouygues, Anglian Water, Bridgestone, Levi Strauss and Campofrio Food Group. It has a twenty-year pedigree of delivering integrated networks, hosting and unified communications solutions to national and global clients. Its data centre and cloud computing services include colocation, security, voice and application performance management. It has been appointed by the UK government’s Procurement Service to assist the UK Government in creating a ‘network of networks’ with an emphasis on machine-to-machine (M2M) development.

Interoute’s technology estate includes 12 data centres, 14 virtual data centres and 31 colocation centres along with connection to 195 additional third-party data centres across Europe. It owns and operates 24 connected city networks within Europe’s major business centres.

According to Interoute, the acquisition means that enterprise, government and service provider customers of the two companies will get a fuller suite of products, services and skillsets.

“These are exciting times for Interoute customers,” said Interoute CEO Gareth Williams. “Interoute is creating a leading, independent European ICT provider. This is the next step in our acquisition strategy and moves us much closer to our goal of being the provider of choice to Europe’s digital economy.”

Easynet CEO Mark Thompson reassured customers that the combination of the two service providers will bring better service to clients of both. “The combined companies can offer broader and deeper connectivity options, as well as an expanded portfolio of products and services,” said Thompson. “The acquisition will expand an already market-leading cloud hosting capability in Europe.”

Williams had previously told analysts that Interoute needed to grow before going public. The takeover will double revenue in the division that sells telecoms services to large companies and government departments.

British telco Easynet became one of the champions of broadband competition in Britain after it was acquired in 2006 by Sky for £211 million. In 2010, Easynet announced its sale from BSkyB (Sky) to Lloyds Development Capital (LDC), the private equity arm of Lloyds Banking Group.

In December 2013 the company was acquired by MDNX Group, the UK’s largest independent carrier integrator.

Interoute was recently recognised by market analyst Gartner as a leader in its 2015 Magic Quadrant for Cloud-Enabled Managed Hosting, Europe report.