StubHub selects Google Cloud and Pivotal for digital transformation drive


Clare Hopping

10 May, 2018

StubHub has selected Google Cloud and Pivotal to power its ticketing platform, helping it grow its mobile presence and make it easier for customers to engage with the brand.

The company will make use of Google Cloud and Pivotal’s entire stack of services, covering machine learning, analytics, databases, serverless computing and developer tools to build new apps and services to inspire customers.

“StubHub is all about the customer. Everything we create, from our mobile products to our customer service, reflects a deep desire to put the needs of fans first,” said StubHub CTO Matt Swann. “We intend to set the bar for what highly curated fan experiences can be at scale before, during and after the event, everywhere fans expect us to be. We have bold plans for innovation in the years ahead.”

Many of Google and Pivotal’s cloud-based services will replace StubHub’s legacy infrastructure, which is no longer meeting the demands of its digital products. This should significantly reduce the time it takes to bring new products and services to market, making the company more competitive.

“Digital transformation is on the mind of every technology leader, especially in industries requiring the capability to rapidly respond to changing consumer expectations,” Bill Cook, president of Pivotal said. “To adapt, enterprises need to bring together the best of modern developer environments with software-driven customer experiences designed to drive richer engagement.”

Among the new capabilities, StubHub will use Google and Pivotal to offer its customers fresh ways to interact with live events around the world. It will also use 17 years’ worth of data to predict how, where and when to communicate with fans.

“Google Cloud Platform’s global presence, scale, and world-class AI and data solutions, coupled with Pivotal’s services and platform will help StubHub offer the best customer experience,” Brian Stevens, chief technology officer of Google Cloud added.

Origin on Mac with Parallels Desktop

Ensuring Mac® and Windows users’ ever-changing needs are met with excellent software is our priority here at Parallels. Our marketing team sincerely listens to the endless user suggestions and questions regarding support for popular Windows-specific software with Parallels Desktop® for Mac. Here’s a popular question we’ve received from (many) users: Q: Can OriginLab’s data […]


Q&A: Citrix’s privacy chief Peter Lefkowitz talks GDPR compliance at Synergy 2018


Keumars Afifi-Sabet

10 May, 2018

With GDPR set to come into force later this month, organisations of all sizes are racing to comply with a new set of tougher data protection laws.

Cloud Pro caught up with Citrix’s chief privacy and digital risk officer Peter Lefkowitz at Citrix Synergy 2018, hosted in Anaheim, California, to discuss what the new legislation means for organisations, how it changes the way businesses approach privacy, and how Citrix itself has changed in light of imminent GDPR enforcement.

“We’re just at that moment – we’re 16 days out – so I’m spending a lot of time on it, but it’s not just internal system compliance, it’s looking at our products – what data do they collect, what are our retention rules, how do we promote ourselves to customers?” he said. 

Citrix’s bid to comply with GDPR, according to Lefkowitz, has included updating all global contracts, putting out a new data privacy addendum and standard contractual clauses – pushing those to “77,000 of our active customers in April” – and introducing new terms for all of its channel partners and suppliers.

The cloud-centric company has also asked its suppliers to sign up to new privacy terms, and fill out a questionnaire, so Citrix knows “who their security contacts are, where we go in the event of an incident, who to contact, and that sort of thing”.

On how GDPR has changed the way Citrix operates internally, Lefkowitz said: “By virtue of the fact the GDPR is so focused on accountability, on all of these controls, and on transparency, it has raised privacy awareness and security design awareness to a higher level, so we now have some of the members of our executive leadership team who want regular updates on these topics.

“It has raised that discussion up against how we design our products, how we manage our services, what we do on the back-end.”

Lefkowitz’s comments chime with chief product officer PJ Hough’s assertion that Citrix is not only GDPR-ready itself, but has made efforts to ensure wider compliance among its associates in the industry.

“For all of our existing commercial products we have gone through GDPR review already, and we have actually not just complied ourselves, we’re actively engaged with many of our large European and global customers to help them become GDPR compliant in their entire deployment,” he said at a press Q&A following the opening keynote address.

“So I would say as we bring more of our products online we will be compliant with the regulations in all the markets in which we serve.”

CEO David Henshall added, in the same session, that regulatory compliance is “woven into how we think about the company – how we think about delivering cloud services – it’s just part of the fabric”.

Lefkowitz continued to outline specifically how Citrix is helping its partners and customers through an array of blogs, white papers, schematics, and a range of different materials featured online, outlining its approach to GDPR and data protection more generally.

“We’ve done training internally, for our support organisations, for our sales force, for our legal department, for a lot of people that touch customers and touch suppliers, so people are aware of what the key issues are. The goal is to really be as transparent as possible – and to make it as easy as possible for our customers to use these products,” he said. 

Turning to the legislation itself, Citrix’s in-house privacy expert explained that the benefits of GDPR include forcing organisations to adopt healthier data protection practices, while warning of some of its unintended consequences.

“Raising these topics, making those operational controls more of a requirement, has taken a lot of effort from every company,” Lefkowitz said.

“But if you know where your data is, where your systems are, how they’re managed, you regularly check them and update them, I think the companies that take GDPR seriously are overall going to have a better framework for security control for all of their data – particularly for sensitive personal data.”

Organisations, however, should be wary of the impact of the ePrivacy Regulation, according to Lefkowitz – a separate regulation governing electronic privacy and marketing that sits alongside GDPR and is in the process of being rewritten.

“Nobody knows where it’s going to land,” he explained, adding: “We’ve all been doing this big effort around marketing systems and marketing controls around GDPR, and then probably next year we’re going to be hit with an entirely new regulation.”

Lefkowitz also warned that a number of areas under the regulation have been left open for individual member states to pass their own laws, or enforce in their own way, running counter to the main purpose of GDPR: unifying data protection rules across the continent and the wider world.

“A worry is that once the regulation is in effect and countries start seeing new technologies, new instances, new breaches, we may see countries splintering a bit on some very important topics,” he explained.

He outlined a hypothetical scenario of a company heading to a lead regulator in one country, presenting its system and its controls and gaining clearance, only for another regulator in another country to pull the company up on the same issues, as a point of great concern.

On how penalties will be enforced under the regulation, Citrix’s privacy chief said GDPR models its punishments on existing competition and antitrust law, from which the notion of fines of 2% and 4% of annual revenue is drawn.

He explained there will be two prongs to the regulatory approach based on the severity of non-compliance.

Outlining the first, he said: “Some of the regulators have already spoken publicly about this – they’ve hired more staff – so on 28 May, they’re going to go out and really look for basic stuff that hasn’t been done.” This will include situations in which an organisation doesn’t have a privacy policy, or where there’s evidence it isn’t giving somebody access rights.

“Tranche two is going to be when the really, really, really, really bad stuff happens – the breach that has a horrendous impact, that easily could have been avoided; the company that is selling lists of sensitive information and not following up on controls – we’ve heard a little something about that recently – those I think the regulators will take very seriously,” explained Lefkowitz. 

“Time will tell whether the fines will be similar to what we’ve seen under competition law. I can’t make a guess at that; just the fact that the regulators will have that in their back pocket I think will make a significant difference in compliance.”

Q&A: Red Hat’s Werner Knoblich talks hybrid, talent, and the serverless future


Dale Walker

10 May, 2018

“You don’t get lucky 64 times in a row,” says Werner Knoblich, Red Hat’s SVP and GM of EMEA, pointing to the open source company’s unprecedented streak of quarterly revenue growth.

Red Hat posted $772 million in revenue for the fourth quarter of the 2018 fiscal year, 23% higher than over the same period in 2017. The company is riding high on a burgeoning market that’s seen a shift towards containerisation and open source in recent years.

Knoblich argues that Red Hat’s core philosophy of open source, and its bet on Kubernetes and hybrid cloud just over five years ago, is helping to place the company at the forefront of application development, particularly as others have historically relied on vendor lock-in.

The Switzerland of hybrid cloud

“We often talk about ourselves as being Switzerland,” says Knoblich. “We’re becoming the abstraction layer to ensure people aren’t locked in. It’s one of the reasons we are still an independent company.

“It’s one of our key value propositions – being a neutral company. If years ago HP or IBM had acquired us, we would have lost our neutrality. The ecosystem’s other players know this as well, as open source is an innovation engine.”

Red Hat’s 2018 Summit will be a particularly memorable one in years to come, as it marked the year that IBM, a rival platform provider, announced it would be shifting its WebSphere applications over to Red Hat’s OpenShift.

“There’s endorsement there,” says Knoblich. “IBM is now fully containerising all of its WebSphere applications, making them available on OpenShift. It’s their preferred platform. We see WebSphere as a direct competitor to us in terms of JBoss (Red Hat’s application server), and they’re fully switching over to our technology. It’s a gigantic endorsement that we’ve done the right things with OpenShift.”

Its record growth is being fuelled by an increasing willingness to embrace open source services among markets that have traditionally been locked to Windows-based systems.

“We struggled in the early days when we were mainly a pure Linux company,” says Knoblich. “Our sweet spot was countries and industries where there was heavy Unix usage and not just pure Microsoft – then we did Unix replacements, not necessarily Microsoft migrations.

“There’s not really a struggle anymore, because even a Microsoft customer is becoming a very good target for us – they also need to containerise their applications and automate their environment,” adds Knoblich.

“Our portfolio has grown so much that we have so many more possibilities. Even if a customer says ‘I’m staying completely on Windows, I don’t want to switch to Linux’, even for this customer we have offerings that can provide value where a couple of years back we had nothing.”

The lure of open source

Red Hat also attributes a great deal of its success to its ability to outmanoeuvre its rivals when it comes to talent acquisition.

“Every single company is fighting for the same talent,” says Knoblich. “Google tries to hire the best developers in the world, and so do Deutsche Bank, Barclays and Volkswagen. But if you’re the best developer, you can choose. Companies need to be attractive to these developers, otherwise they won’t get them.”

He argues that if employers don’t start offering developers a means to work within open source communities, they risk alienating budding talent looking to build experience.

“As a developer, where do you build your CV? It’s on GitHub. They want to make a name for themselves, not just put on a CV that they worked at Barclays or whoever.

“If companies don’t allow them to do this, the developer will say ‘this company is limiting my career path, so I won’t join them’. That’s what accelerated the whole [open source] movement.”

A shift of this kind can require a cultural change, something that Red Hat, as a veteran of the open source space, has been fostering among other companies in the role of a consultant.

“Often the legal department is the big issue – they say ‘we can’t have our employees making submissions to open source communities with the domain email address of the employer’.

“‘What’s the legal exposure if I as an employee make a commit and something goes wrong with the code? Is there a liability?’”

Finding the next industry standard

Having placed an early bet on hybrid, Red Hat is hoping to once again position itself ahead of the competition. Part of that shift will be to invest in what it sees as the next big thing: serverless.

“It was kind of introduced by Amazon, but with that, again, they’re trying to lock their customers to their platform,” says Knoblich, commenting on AWS’ shift towards offering a dynamic cloud service that handles the allocation of machine resources – normally handled by a server.

He explains that Red Hat is looking at making a serverless offering as a function of OpenShift in the near future, competing directly with Amazon.

However, possibly the biggest focus for the company will be in the untapped multi-cloud management space, explains Knoblich, as customers look for products that make it easier to handle large numbers of deployments.

“Because the world is hybrid, customers will have different environments – on-premise, a VMware virtualised environment, OpenStack, private cloud, Azure. But they somehow still need to manage it all with a single pane of glass – they don’t want to have to use all the individual dedicated tools.”

“That’s obviously a big play. The environment is not becoming simpler, it’s becoming, to a certain extent, more complicated,” says Knoblich. “Those management tools that bring that all together… that’s also something I think we will be focusing on.”

Image: Shutterstock

Condoleezza Rice warns against threat of cyber warfare at Citrix Synergy 2018


Keumars Afifi-Sabet

10 May, 2018

Former US secretary of state Condoleezza Rice warned against the growing threat of cyber warfare in an address at Citrix’s annual Synergy conference hosted in Anaheim, California.

In a half-hour speech, and follow-up Q&A session, Rice – who served under George HW Bush and George W Bush – spoke broadly on a range of subjects including the new geopolitical landscape, advancements in technology, and women in STEM.

“As somebody once said you either have been hacked and know it, or you have been hacked and don’t know it,” she told an audience of more than 5,000 at the Anaheim Convention Centre.

“People, particularly countries, are getting more aggressive at it, they’re getting more capable at it, and I don’t think anybody thought coming round the corner that the Russians would actually use bots and the like, to use infinity loops, to try to stir chaos in American politics.

“That one I didn’t see coming and I have been a Russianist for a long time. And so people are finding ever-more innovative and creative ways for malevolence using cyber.”

Rice also outlined three points for addressing the growing threat of cyber warfare: better cooperation between the private sector and government, more effort to get cyber expertise onto company boards, and recognition that people can be “really stupid” when it comes to cyber.

She lamented the Department of Homeland Security, which she said she helped create, as a “monstrosity of an organisation” that is hard for the private sector to interface with, adding that the Edward Snowden leaks “really eroded any trust between the private sector and the government”.

After outlining her point that companies should be better utilising specialists and experts on their boards, Rice then went on to claim that “human beings are the real problem”.

According to Rice, a US intelligence agency once conducted an experiment in which it dropped a USB drive in its car park. A high percentage of the people who worked for the agency picked it up and plugged it into their computers, she said, concluding that “human beings can’t resist doing things that are really stupid from a cyber point of view”.

Elsewhere in her address, Rice outlined her views on the biggest geopolitical issues facing democracy right now – including North Korea and Iran – as well as discussing her views on why Trump and Brexit appealed to many voters.

“The people who didn’t quite benefit from globalisation, and who are now being even further disrupted by automation are the people for whom it is worth it to take a chance, because it couldn’t get worse,” she said, adding that people with global power are failing to speak the language and connect with those who have genuine concerns.

Meanwhile, Rice outlined how technology may not always be married up with the appropriate level of wisdom, explaining “technology is not good, or bad, it’s neutral – the question is how is it applied”.

Alongside touching on areas such as R&D investment, competition with China, and the key threats facing the US, Rice also outlined how to boost the position of women in tech.

“With girls in particular, the first thing is to educate them in a way that they don’t cut off their options early; so this is the issue of women not being STEM-ready when they get to college because they were somehow not convinced to get ready in elementary school – or even high school,” she said. 

“You hear the ‘we can’t find any women for our board’… uh, really? Yeah you can. There are lots of them out there, and it just comes from looking out of your normal channels; maybe it’s not an ex-CEO but maybe it’s someone who comes from government, or academia, or from cybersecurity – not a bad thing to have on your board these days.”

Red Hat sets out roadmap for CoreOS integration


Dale Walker

9 May, 2018

Red Hat has released the first details of its roadmap for the integration of the newly acquired CoreOS tools into its existing suite of container-based services.

The open source giant snapped up CoreOS, a highly successful cloud-native startup, for $250 million back in January, a move considered to be Red Hat’s biggest acquisition since its shift in focus towards providing Kubernetes services.

Since then, Red Hat had been quiet about which tools it would formally embrace. However, the company has now confirmed that CoreOS Tectonic, Quay and Container Linux will all be integrated into Red Hat’s OpenShift container platform.

Tectonic was originally developed to solve problems associated with managing Kubernetes deployments at scale by introducing automation, supported by much-lauded ‘over-the-air’ updates. Integrated into OpenShift as ‘automated operations’, the feature should make it easier for IT admins to roll out automatic upgrades across their clusters and hosts.

Also making its way into OpenShift is Container Linux, a lightweight operating system providing immutable infrastructure for container deployments, which also benefits from over-the-air updates.

“Our number one principle is that no customer is left behind,” said Ashesh Badani, VP and general manager of OpenShift, speaking at Red Hat Summit. “We want to make sure that all the community interests, all the customers, around Container Linux are supported. We move that forward injecting Red Hat content into that.

“Tectonic was a pretty popular distribution of Kubernetes – customers really liked the fact Tectonic was focused on over the air upgrades, technologies around monitoring and metering. We’re taking all of that and converging that into the OpenShift platform, available over the next six months.”

Quay, a service that acts as a registry for managing container images, will be offered as a standalone product within the OpenShift portfolio, the company confirmed.

Red Hat Quay will be available as an on-premise deployment or through a hosted service as Red Hat Quay.io, and will feature the same tools that made the service popular, including geographic replication, security scanning, and image time machine features.

Badani added that the integration roadmap would be fully delivered to customers by the end of the year, and that incremental progress updates would be provided, the next being at some point over the summer.

Image: Shutterstock

Google Cloud to launch in Switzerland, furthering global expansion

Google Cloud is already expanding into five new regions this year – and now Switzerland can be added to the list.

The move to open facilities in Zurich will mean Google has half a dozen regions in Europe, taking the overall total of existing and announced regions to 20. The Swiss zones will open in the first half of 2019, Google said.

“Customers in Switzerland will benefit from lower latency for their cloud-based workloads and data, and the region is also designed for high availability, launching with three zones to protect against service disruptions,” wrote Urs Hölzle, Google’s SVP of technical infrastructure, in a blog post confirming the news.

Back in January, Google announced extensive infrastructure expansion plans, comprising five new regions and three subsea cables. The new regions were in the Netherlands – opened immediately with two zones, with a third arriving in March – and Montreal, also open for business, with Los Angeles, Finland and Hong Kong to follow.

Google Cloud’s performance continues to impress, with CEO Sundar Pichai telling analysts last month the company was ‘growing well’ and that deals being struck were ‘larger’ and ‘more strategic.’ The company does not disclose specific cloud revenues, but according to its most recent financial results its ‘other’ revenues – of which Google Cloud is a part – hit $4.35 billion, up 35% on this time last year.

Evidence of the company’s more impressive client roster has emerged in recent months through various disclosures. Spotify was confirmed as a Google Cloud customer when the detail surfaced in its IPO filing, while eagle-eyed observers spotted, buried deep in an iOS security guide, that Apple was also a customer. More recently Netflix, the poster child of Amazon Web Services (AWS), confirmed it also ran disaster recovery workloads on Google following a story from The Information, which the company described as ‘overly sensationalised.’


VDI deployment best practices: A guide

Virtual desktop infrastructure (VDI) can bring significant benefits to organisations looking to be more agile, as well as reduce the cost and complexity of managing a variety of client desktops, laptops, and mobile handheld devices.

Organisations using VDI are able to benefit from centralised desktop management, rapid and standardised deployment, lower support costs, increased security, and other management efficiencies. However, VDI can also bring several challenges to organisations that do not plan for, and implement, the technology correctly. Many VDI pilot projects fail because of improper design decisions that lead to performance issues, which in turn leads to dissatisfied end users.

Let’s take a look at several VDI best practices and why these are important to consider before, during, and after installation – as well as moving into ‘day 2’ operations.

VDI deployment best practices

When thinking about deploying a VDI infrastructure, there are several best practices that need to be considered to ensure successful implementation. Consider the following:

  • Understanding end user requirements
  • Designing and sizing VDI network and storage correctly
  • Deciding how to provision virtual desktops – persistent vs. non-persistent
  • Using a thin client management solution
  • Ensuring high availability

Let’s take a look at each of these considerations one by one and see how, and why, organisations need to give due attention to each area when deploying a VDI solution.

Understanding end user requirements

To deploy a successful and performant VDI solution for your organisation, the needs of the end user must be determined. To begin with, this means knowing which applications end users run, since the types of applications in use drive the sizing requirements of the VDI solution that will be deployed.

Clearly, the performance requirements for users performing complex 3D graphics rendering will be considerably higher than for end users simply running email and web applications. A successful VDI deployment often hinges on whether a thorough understanding of the end user environment has been established.

Understanding end user requirements also includes the simple practical needs of users, such as monitor support, profile persistence, USB redirection, audio profiles, printer and scanner needs, and two-factor authentication.

Considering all of these key areas helps to ensure a successful VDI deployment.
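As a rough illustration of how this user segmentation feeds into capacity planning, the short Python sketch below totals vCPU and RAM requirements for a hypothetical mix of worker profiles and converts them into a host count. All of the per-user figures, the overcommit ratio and the host specification are illustrative assumptions, not vendor recommendations; substitute numbers from your own assessment.

# Rough VDI capacity estimate from a user segmentation exercise.
# All per-user figures below are illustrative assumptions; substitute
# values from your own assessment and your vendor's sizing guidance.

user_profiles = {
    # profile: (number of users, vCPUs per desktop, RAM in GB per desktop)
    "task worker":      (300, 2, 4),    # email, web, light office apps
    "knowledge worker": (150, 2, 8),    # office suite, several concurrent apps
    "power user":       (50,  4, 16),   # 3D/CAD or heavy analytics workloads
}

total_vcpus = sum(count * vcpu for count, vcpu, _ in user_profiles.values())
total_ram_gb = sum(count * ram for count, _, ram in user_profiles.values())

# Assume a conservative 4:1 vCPU-to-physical-core overcommit ratio and
# hosts with 48 cores / 768 GB RAM; adjust both to match real hardware.
overcommit = 4
host_cores, host_ram_gb = 48, 768

hosts_for_cpu = total_vcpus / (host_cores * overcommit)
hosts_for_ram = total_ram_gb / host_ram_gb

print(f"Total vCPUs: {total_vcpus}, total RAM: {total_ram_gb} GB")
print(f"Hosts needed (CPU-bound): {hosts_for_cpu:.1f}")
print(f"Hosts needed (RAM-bound): {hosts_for_ram:.1f}")

Even a crude model like this makes it obvious how sensitive host counts are to the proportion of power users, which is why the assessment phase matters so much.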

Designing and sizing VDI network and storage correctly

One of the single most important VDI deployment best practices involves designing the network and storage correctly. An incorrectly designed network or storage platform can have disastrous consequences for VDI performance and overall end user satisfaction.

The network becomes even more important with VDI deployments, since it is used not only to exchange user and application data between the end user and servers, but also to deliver the entire desktop display experience. Because the architecture of VDI involves centralised virtual machines running in the data centre, the VDI desktop display for the end user depends on protocols such as PCoIP, ICA, RDP, or Blast Extreme (VMware) being able to successfully stream data between the end user and the data centre. This places a greater burden on the underlying network to transmit VDI display data across the wire.

Additionally, organisations need to understand how VDI traffic and user experience differ between LAN and WAN connections. It would be a mistake for a VDI proof of concept (POC) or test installation to cover only LAN connections. Organisations need to understand all aspects of VDI performance when connecting both over high-speed LAN connections and over slower links such as the WAN.
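To make the LAN versus WAN point concrete, the sketch below estimates the peak display-protocol bandwidth a remote office might push over a WAN link. The per-user bandwidth figures, concurrency factor and headroom multiplier are assumptions for illustration only; real values should come from measuring your chosen protocol (PCoIP, ICA, RDP or Blast Extreme) during the POC.

# Back-of-the-envelope check of whether a WAN link can carry the display
# traffic for a remote office. All figures are illustrative assumptions.

def required_bandwidth_mbps(users, per_user_kbps, concurrency=0.8, headroom=1.3):
    """Estimate peak link usage in Mbps for a group of VDI users."""
    return users * per_user_kbps * concurrency * headroom / 1000

remote_users = 60
office_worker_kbps = 250    # assumed average for office-style workloads
video_heavy_kbps = 1500     # assumed average when video/graphics are common

print(f"Office workload: {required_bandwidth_mbps(remote_users, office_worker_kbps):.1f} Mbps")
print(f"Video-heavy workload: {required_bandwidth_mbps(remote_users, video_heavy_kbps):.1f} Mbps")
# Compare the results against the WAN link size and the latency budget of
# the chosen display protocol, not just the raw bandwidth figure.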

What about storage? It cannot be stressed enough just how crucial properly designed and sized storage is to a successful VDI deployment. Traditional workstations operate in a distributed fashion: all of the compute, memory, and storage performance is contained within the individual machines. In a VDI environment, however, all of the compute, memory, and storage (disk IOPS) that would be distributed across those workstations is centralised onto the backend VDI environment.

The VDI storage subsystem must be able to handle the I/O performance requirements of all end user VDI virtual machines, including any ‘I/O storms’ that occur. An I/O storm arises during events that can overwhelm VDI storage, such as the boot, login, and logoff activity of a large number of VDI end users. When many users boot up, log in, or log off their VDI desktops within the same short time span, VDI storage can become saturated, resulting in performance issues.
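A simple worked example shows why the storm, rather than the steady state, drives storage sizing. The IOPS-per-desktop figures and the boot-window assumption below are illustrative; a desktop assessment tool should supply real numbers for an actual design.

# Simple illustration of why boot/login storms dominate VDI storage sizing.
# The IOPS-per-desktop figures are assumptions for illustration only.

desktops = 500
steady_state_iops = 10      # assumed average IOPS per desktop in normal use
boot_storm_iops = 50        # assumed IOPS per desktop while booting
boot_window_fraction = 0.4  # assume 40% of desktops boot in the same window

steady_total = desktops * steady_state_iops
storm_total = (desktops * boot_window_fraction * boot_storm_iops
               + desktops * (1 - boot_window_fraction) * steady_state_iops)

print(f"Steady-state requirement: {steady_total:,.0f} IOPS")
print(f"During a boot storm:      {storm_total:,.0f} IOPS")
# The storage array (or vSAN cluster) must be sized for the storm figure,
# not the steady-state average.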

Today’s hybrid and all-flash SAN arrays are generally powerful enough to alleviate many of the issues associated with I/O storms. However, the cost of all-flash SANs weighs on the decision-making process when deploying VDI. Other software-defined storage, such as vSAN, offers attractive capabilities for VDI thanks to the ease of scaling up and out, along with other architectural advantages for VDI deployments.

Keeping the importance of the network and storage in mind when deploying VDI environments is certainly a best practice needed for success in deployment.

Deciding how to provision virtual desktops – persistent vs. non-persistent

One of the decisions that need to be made is what type of virtual desktop will be deployed via VDI. There are generally two types of virtual desktops that can be utilised – persistent and non-persistent. What are the differences and use cases?

Persistent virtual desktops keep the same philosophy as physical workstations assigned to users: one desktop is assigned to a particular user, and that user always gets the same virtual desktop each time they log in. With non-persistent desktops, admins set up a ‘pool’ of desktops created from a ‘gold’ virtual desktop image; when the user logs in, they are assigned one of the generic virtual desktops in the pool.

Persistent virtual desktops bring much of the familiarity of managing physical infrastructure into the world of VDI. However, they can also add management overhead and consume more storage space, since each user has a specific virtual desktop image that is stored and maintained. The advantage is that each user’s data persists and can be managed in the traditional sense.

Non-persistent virtual desktops are more efficient from a storage and management perspective, since there is only one ‘gold’ image to maintain and provisioned virtual desktops can be recycled when a user logs off. Non-persistent environments do add the complexity of managing user profiles and user data in a less traditional way, but these challenges can generally be overcome with folder redirection or products such as VMware User Environment Manager.
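The storage trade-off between the two models can be sketched with some rough arithmetic. The image sizes, delta growth and profile-data figures below are assumptions chosen purely for illustration, not measurements from any particular product.

# Rough comparison of storage consumed by persistent versus non-persistent
# desktops. All sizes below are illustrative assumptions.

users = 500
persistent_image_gb = 60        # assumed full image per user
gold_image_gb = 60              # assumed single shared 'gold' image
nonpersistent_delta_gb = 5      # assumed per-desktop delta/linked-clone growth
profile_data_gb = 2             # assumed redirected profile/user data per user

persistent_total = users * persistent_image_gb
nonpersistent_total = gold_image_gb + users * (nonpersistent_delta_gb + profile_data_gb)

print(f"Persistent pool:     {persistent_total / 1024:.1f} TB")
print(f"Non-persistent pool: {nonpersistent_total / 1024:.1f} TB")
# The gap narrows once per-user app layers, backups and snapshots are added,
# so treat this as a starting point for the provisioning decision, not the answer.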

Organisations must assess the needs of end users and decide which type of provisioning will work best for their particular end users and use cases.

Using a thin client management solution

An important aspect to consider when implementing a VDI solution is managing the thin clients in the deployment. Even though physical workstations are replaced by virtual desktops running on a VDI platform, end users still need a way to access those desktops. Thin clients are very lightweight computers configured with far less internal hardware and a stripped-down OS, and are generally much cheaper than physical workstations. They enable end users to connect to the backend virtual desktop infrastructure.

Deploying all-new thin clients can represent a significant investment, as well as management overhead, for organisations moving to VDI. However, by utilising client management software such as Praim ThinMan, alongside software that can turn an existing PC into a thin client, such as Praim ThinOX4PC, you can offset both challenges.

Ensuring high availability

It is crucial for organisations to understand the importance of making a VDI solution highly available. Generally speaking, many features of today’s modern hypervisors, such as VMware HA, make them resilient and highly available. In traditional client/server infrastructure, if one end user desktop breaks, all other end users remain up and running; with a VDI solution, by contrast, every end user desktop relies on the availability of the backend VDI environment. With that in mind, building out enough hosts in the VDI cluster and providing redundant data paths to storage, redundant network connections, and redundant power will help to alleviate concerns around availability.

A well designed and architected hypervisor and storage solution will address the availability concerns that come with moving to a VDI solution for end user desktops.

Concluding thoughts

VDI solutions offer tremendous benefits to organisations in terms of manageability, performance, and security. However, there are key deployment best practices that need to be considered when looking to move to a VDI solution: understanding end user requirements, designing and sizing VDI network and storage correctly, deciding how to provision user desktops, making use of a thin client management solution, and ensuring high availability.

With proper planning and testing in a well designed POC, organisations can expect to achieve a successful, effective, and problem-free VDI deployment.

Run Act! CRM on Mac with Parallels Desktop

Here at Parallels, we love hearing from our incredible users about how Parallels® Desktop for Mac empowers their productivity at work. Business owners deserve to run the software they need on the devices they have. Whether our users are small independent businesses or large enterprise corporations, customer relationship management (CRM) software is a vital tool […]


Citrix rolls out Workspace App for mobile productivity


Keumars Afifi-Sabet

9 May, 2018

Citrix unveiled the Workspace App at its annual Synergy conference yesterday, underlining its vision of a simplified and unified digital workspace.

Making the keynote address at the Anaheim Convention Centre, Citrix CEO David Henshall introduced “the world’s first unified digital workspace for business” as a way for teams and users to access their applications, content, files and information in one space.

The Citrix Workspace App is “one way to organise, access and open all of your files, regardless of whether they’re on your hard drive, on your network drive, on cloud or anywhere in between,” Henshall said.

He added the app, available in-browser, on desktop, or on mobile, was “one that integrates with what you already have, what’s already existing on your on-premise infrastructure, and [is] ready to support you when you’re moving to the cloud.”

“The result is everything you need to be productive in one single unified experience,” he said.

The app sees Citrix unify a series of isolated digital workspace products, also tying into Henshall’s vision of “people-centric computing” – ridding organisations of unnecessary complexity and barriers in a bid to boost productivity.

Key features include prebuilt SaaS integrations, universal search, and the ability to bring all of a user’s apps and files together in one space. It is essentially an app version of the Citrix Workspace Service, which debuted at last year’s Citrix Synergy conference, hosted in Orlando.

The company’s former CEO, Mark Templeton, even showed off a prototype concept called Citrix Workspace Services at Citrix Synergy 2014. The prototype featured the ability to host apps wherever the business wanted, acting as a single destination where users could do everything, no matter where they were.

However, Citrix’s latest rollout puts a focus on a mobile-first and cloud-first work environment. The idea of putting people at the heart of things has featured heavily at this year’s Synergy conference, and is a concept that Citrix’s VP of product management and workspace services, Sridhar Mullapudi, claimed was key to developing the Workspace App.

“In a lot of our conversations with customers and partners, a lot of point products and solutions are cobbled together to either solve their experience need, or security need, but it’s a broken experience for users – it’s just fractured experience, and what it does is frustrate the users, and just lowers productivity,” he said.

“So the number one thing is having that great productivity experience for users so they get things done; doesn’t matter what device, or what kind of application they’re trying to launch.”

Picture: Keumars Afifi-Sabet/Cloud Pro