7 Things Every Web Developer Needs to Know

Think you know everything you need to know about web development? Considering just how much web development truly encompasses, it can be hard to know the field cover to cover. Still, there are certain things every web developer should know. We asked members of our own web team at Parallels what they thought those essential things were, and […]


Here’s How to Eliminate Your Citrix Printing Issue

Citrix products are popular for their feature-rich tools and performance. However, these advantages come with trade-offs: along with high cost and complexity, printing issues are a major challenge for businesses. Although Citrix is constantly trying to resolve printing issues, Citrix environments remain sensitive when it comes to printing. In addition, server […]


Why Verizon shutting down part of its public cloud is “no surprise”

(c)iStock.com/RiverNorthPhotography

Verizon is to shut down part of its public cloud service, according to an email sent to customers, with an analyst arguing the telco’s pullback comes as “no surprise”.

According to the email to Verizon Cloud customers, posted on Twitter by Kenn White, users of Verizon’s Public Cloud and Reserved Performance Cloud Spaces services will have to make other plans by April 12, when the virtual servers will be switched off. Users of Verizon Virtual Private Cloud and Verizon Cloud Storage will not be affected by the move, although the Cloud Marketplace store, launched to great fanfare in 2014, will also close.

“Verizon is discontinuing its cloud service that accepts credit card payments on April 12,” a company statement read. “Verizon remains committed to delivering a range of cloud services for enterprise and government customers and is making significant investments in its cloud platform in 2016.”

Yet the move makes for interesting, if not particularly surprising, news for telcos trying to move in on the cloud infrastructure market, according to Synergy Research chief analyst John Dinsdale.

“Telcos generally are having to take a back seat on cloud and especially on public cloud services,” he told CloudTech. “They do not have the focus and the data centre footprint to compete effectively with the hyperscale cloud providers, so they are tending to drop back into subsidiary roles as partners or on-ramps to the leading cloud companies.”

Synergy’s regular research into cloud infrastructure reveals a continued yawning gap between the leader, Amazon Web Services (AWS), which has more than 30% market share, and the competition, with Microsoft, IBM, Google, and Salesforce completing the top five. While Microsoft’s and Google’s revenues are growing faster than AWS’s year over year, that growth has barely made a dent in AWS’s share.

Dinsdale argues it is this speed of growth which has made it difficult for telcos to compete. “Early on in the growth of the cloud market it had seemed like telcos might have a leading part to play – but the speed of cloud market development and the aggressiveness of the leading cloud providers has largely left them behind,” he said.

“There is now quite a bit of head scratching going on within telcos as they figure out how best to position themselves in the new cloud ecosystem.”

In January last year, Verizon took the unconventional decision to undertake a planned outage so that future maintenance could be carried out “in the background with no impact to customers.” The company has now warned customers that data will be “irrecoverably deleted” if it is not retrieved in time.

How performance requirements can prove a stumbling block for cloud transformation

(c)iStock.com/sndr

As businesses look to cloud for faster, more flexible growth, they confront significant challenges from a legacy application base that has varying levels of cloud suitability.

Some applications have specific kinds of performance requirements that may limit or eliminate their eligibility for virtualisation, which is fundamental to optimising cloud efficiencies. These performance characteristics come in various flavours: a requirement for specialty hardware, a requirement for particularly extreme CPU or memory utilisation, or a requirement for real-time, deterministic performance.

Under the hood

The core value proposition of virtualisation and cloud is using a single standard hardware platform across a wide range of applications or services. In some cases, though, specialised hardware might be required to provide some function most effectively. Sophisticated maths can benefit from Graphics Processing Units (GPUs), high-throughput systems benefit from solid-state disk, and cryptographic applications can use hardware random number generators, which can be difficult to come by in virtual environments. Low-latency stock trading environments often use specialised low-latency switches and network taps for operations and instrumentation. While a specialty hardware requirement may not prevent migrating an application to a cloud environment, it may limit the vendor options or require special accommodation in the case of private clouds.
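
As a rough illustration, a minimal Python sketch like the one below could probe a Linux guest for two such dependencies, a GPU driver and a hardware random number generator. The nvidia-smi tool and the /dev/hwrng device path are common Linux conventions assumed here, not details drawn from the article.

```python
# Minimal sketch: probe a Linux guest for specialty hardware an application
# might depend on (GPU, hardware RNG). Paths and tool names are illustrative
# assumptions.
import os
import shutil

def has_gpu() -> bool:
    # nvidia-smi is normally only present when an NVIDIA GPU and its driver
    # are installed or passed through to the guest
    return shutil.which("nvidia-smi") is not None

def has_hardware_rng() -> bool:
    # /dev/hwrng appears when a hardware (or virtio) RNG is exposed to the guest
    return os.path.exists("/dev/hwrng")

if __name__ == "__main__":
    print(f"GPU available:          {has_gpu()}")
    print(f"Hardware RNG available: {has_hardware_rng()}")
```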

Another common obstacle to virtualisation is a requirement for raw processing horsepower. Virtualisation consumes some CPU cycles to manage the various virtual machines running on the server. Maximising the CPU available to an application therefore means running only one virtual machine on that hardware, at which point the question becomes whether it is cost-effective to use a hypervisor at all.
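
A back-of-envelope calculation makes that trade-off concrete; the figures below are hypothetical placeholders, not benchmarks.

```python
# Back-of-envelope sketch: does a hypervisor still pay off for a CPU-bound
# workload that needs the whole host? All numbers are hypothetical.
HOST_CORES = 32
HYPERVISOR_OVERHEAD = 0.05        # assume ~5% of cycles go to VM management
HOST_COST_PER_MONTH = 1000.0      # assumed fully loaded cost of the server

effective_cores_virtualised = HOST_CORES * (1 - HYPERVISOR_OVERHEAD)
cost_per_core_virtualised = HOST_COST_PER_MONTH / effective_cores_virtualised
cost_per_core_bare_metal = HOST_COST_PER_MONTH / HOST_CORES

print(f"Single VM on hypervisor: {effective_cores_virtualised:.1f} usable cores, "
      f"${cost_per_core_virtualised:.2f}/core/month")
print(f"Bare metal:              {HOST_CORES} usable cores, "
      f"${cost_per_core_bare_metal:.2f}/core/month")
# If only one VM fits on the host anyway, the hypervisor adds cost without
# adding any consolidation benefit.
```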

A more subtle performance requirement that is particularly troublesome in shared service environments is deterministic performance. Some transactions or activities need to happen within a fixed, often short, amount of time. Software defined networking (SDN) solutions, some types of live media broadcasting, big data real-time analytics applications, and algorithmic trading platforms all benefit from deterministic, consistent performance. Cloud-like, virtual, shared-resource provisioning is subject to the “noisy neighbour” problem: without some workload planning and engineering, it is difficult to know what other applications might be running on the same hardware. The result is unpredictable performance patterns which can impact the usability of the application.
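
One way to put a number on that unpredictability, assuming access to a representative transaction, is to time a fixed unit of work repeatedly and compare median with tail latency; the CPU loop below is only a stand-in for a real workload.

```python
# Minimal sketch: quantify how (non-)deterministic a fixed unit of work is on
# a given instance by comparing median and tail latency.
import statistics
import time

def fixed_work() -> None:
    # placeholder workload: a small, constant amount of CPU work
    total = 0
    for i in range(100_000):
        total += i * i

samples = []
for _ in range(500):
    start = time.perf_counter()
    fixed_work()
    samples.append((time.perf_counter() - start) * 1000.0)  # milliseconds

samples.sort()
p50 = statistics.median(samples)
p99 = samples[int(len(samples) * 0.99)]
print(f"p50 = {p50:.3f} ms, p99 = {p99:.3f} ms, p99/p50 = {p99 / p50:.1f}x")
# A large p99/p50 ratio across otherwise identical runs is the classic
# signature of noisy-neighbour interference.
```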

While this issue is most obvious in multi-tenant public clouds, private clouds can be problematic as well, even though there is more knowledge of the overall workload. Mixing different operating environments, commonly done as a cost or flexibility measure, can create issues. The classic example is sharing hardware between development and QA or production. Development virtual machines often come and go, with different performance characteristics between each machine and its predecessor. If that hardware is shared with production, this can influence production performance: while raw horsepower may be satisfactory, the “noise” created by churning virtual machines may introduce unacceptable inconsistency.

Various strategies can be used to manage performance capacity more actively. In a private cloud, it is possible to be selective about how VMs are packed onto hardware and how workloads are deployed. In public clouds, buying larger instances can limit the number of neighbours. Another strategy is to forgo virtualisation altogether and run directly on hardware while still leveraging some of the self-service and on-demand qualities of cloud computing. The market is responding in this area with bare metal options from providers like Rackspace and IBM, and the open source cloud platform OpenStack has a sub-project (Ironic) that brings bare metal provisioning into the same framework as virtual machine provisioning.
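
As an illustration of the “selective packing” idea, and not a model of any real scheduler’s API, a first-fit placement rule with a simple anti-affinity constraint for latency-sensitive VMs might look like this:

```python
# Illustrative sketch: place VMs onto hosts with a first-fit rule, but never
# co-locate two latency-sensitive VMs on the same host.
from dataclasses import dataclass, field

@dataclass
class Host:
    name: str
    free_cores: int
    vms: list = field(default_factory=list)

@dataclass
class VM:
    name: str
    cores: int
    latency_sensitive: bool = False

def place(vm: VM, hosts: list) -> str:
    for host in hosts:
        if host.free_cores < vm.cores:
            continue
        if vm.latency_sensitive and any(v.latency_sensitive for v in host.vms):
            continue  # anti-affinity rule for noisy-neighbour-sensitive workloads
        host.vms.append(vm)
        host.free_cores -= vm.cores
        return host.name
    raise RuntimeError(f"no host can take {vm.name}")

hosts = [Host("host-1", 16), Host("host-2", 16)]
for vm in [VM("trading-1", 8, True), VM("batch-1", 4), VM("trading-2", 8, True)]:
    print(f"{vm.name} -> {place(vm, hosts)}")
```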

Latency and congestion

Along with processing and memory, network and storage latency requirements for an application should be evaluated. Latency is the amount of time it takes a packet to traverse a network end to end. Since individual transactions can use several packets (or blocks, for storage), the round-trip time can add up quickly. While there are strategies to offset latency (TCP flow control, multi-threaded applications such as web servers), some applications remain latency sensitive.
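
A quick worked example with assumed figures shows how quickly sequential round trips accumulate:

```python
# Worked example (hypothetical numbers): how per-round-trip latency adds up
# when a transaction requires many sequential network exchanges.
ROUND_TRIP_MS = 2.0       # assumed end-to-end round-trip time
ROUND_TRIPS_CHATTY = 200  # e.g. one exchange per item
ROUND_TRIPS_BATCHED = 3   # handshake plus a couple of bulk transfers

print(f"Chatty access:  {ROUND_TRIPS_CHATTY * ROUND_TRIP_MS:.0f} ms spent on the wire")
print(f"Batched access: {ROUND_TRIPS_BATCHED * ROUND_TRIP_MS:.0f} ms spent on the wire")
# The same 2 ms link yields 400 ms vs 6 ms of pure latency, before any
# processing time is counted.
```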

There are three primary latency areas to examine. First, latency between the application and the end user or client can create performance issues. In today’s applications, that connection is commonly between a web server and a browser. Modern web servers and browsers generally use multi-threading to fetch multiple page components at once, so the latency issue is masked by running multiple connections. But there are cases where code downloaded to the browser (Flash, Java, HTML5) relies on single-threaded connectivity to the server. The communication itself may also be structured in a way that exacerbates latency issues (retrieving database tables row by row, for example). Finally, an application may have a custom client that is latency sensitive.
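
The masking effect of multiple connections is easy to demonstrate; in the sketch below, the 50 ms “round trip” is simulated with a sleep rather than a real network call.

```python
# Minimal sketch: why multi-threaded fetching masks latency while a
# single-threaded client feels every round trip.
import time
from concurrent.futures import ThreadPoolExecutor

LATENCY_S = 0.05   # simulated 50 ms round trip
COMPONENTS = 20    # e.g. page components to fetch

def fetch(_component: int) -> None:
    time.sleep(LATENCY_S)  # stand-in for one network round trip

start = time.perf_counter()
for c in range(COMPONENTS):
    fetch(c)
print(f"Sequential: {time.perf_counter() - start:.2f} s")

start = time.perf_counter()
with ThreadPoolExecutor(max_workers=COMPONENTS) as pool:
    list(pool.map(fetch, range(COMPONENTS)))
print(f"Concurrent: {time.perf_counter() - start:.2f} s")
```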

The second primary network latency area is storage latency, or the latency between the application and data.  This will show up when application code does not have direct local access to the back-end storage, be it a database or some other persistent store. This is a common case in cloud environments where the storage interface for the compute node tends not to be directly attached. 

As a result, there is network latency due to the storage network and latency due to the underlying storage hardware responsiveness. In public clouds, storage traffic also competes with traffic from other customers of the cloud provider. Latency can build up around any of these areas, particularly if an application uses smaller reads and writes. Writing individual transactions to database logs is a good example of this, and special attention to IO tuning for transaction logs is common. Storage latency and bandwidth pressures can be offset by running multiple VMs or multiple storage volumes, or by using a solid-state-disk-based service, but the choice will impact the functional and financial requirements of the cloud solution.
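
A minimal local sketch of the small-write pattern, assuming nothing more than a filesystem, compares forcing every record to disk against a single batched flush:

```python
# Minimal sketch: how many small, synchronous writes (the transaction-log
# pattern) amplify storage latency compared with one batched write.
import os
import tempfile
import time

RECORD = b"x" * 512
COUNT = 200

def timed(path: str, flush_every_record: bool) -> float:
    start = time.perf_counter()
    with open(path, "wb") as f:
        for _ in range(COUNT):
            f.write(RECORD)
            if flush_every_record:
                f.flush()
                os.fsync(f.fileno())   # force each record to stable storage
        if not flush_every_record:
            f.flush()
            os.fsync(f.fileno())       # one flush for the whole batch
    return time.perf_counter() - start

with tempfile.TemporaryDirectory() as d:
    print(f"fsync per record: {timed(os.path.join(d, 'a'), True):.3f} s")
    print(f"single fsync:     {timed(os.path.join(d, 'b'), False):.3f} s")
```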

The third primary latency area is between cooperating applications; that is, application code may depend on some other application or service to do its job. In this case, the network latency between the applications (along with latencies in the applications themselves) can often cause slowness in the overall end-user or service delivery experience.

Depending on the specifics of the cloud solution being considered, these various latencies may be LAN, SAN, WAN, or even Internet dependent. Careful analysis of the latencies in an application and the likely impact of the planned cloud implementation is warranted. Often, as with raw performance above, consistency rather than raw speed is important.

Conclusion

In short, while moving applications into a private or public cloud environment may present an opportunity to save costs or improve operations, applications vary in their suitability for cloud infrastructures.  The common technical concerns presented here can add complexity but are manageable with proper planning, design, and execution.  Evaluating applications for cloud readiness allows evidence-based planning to take best advantage of cloud economics and efficiencies in the enterprise.

Efficient Enterprise DevOps | @DevOpsSummit #DevOps #Microservices

If you work for a large company, you’ll often look around and wonder whether moving to a daily release cycle or automating deployments is even possible, given the number of meetings and the amount of process your releases are subject to. Between the CAB meetings, the QA schedules, and the coordinated conference calls, it’s tough to imagine a monthly release process being condensed into one that could run in a single day. For most organizations dealing with real risk, it’s true: you will not be able to condense release cycles down to a single day. But you can get closer to that ideal if you make the right decisions about how your department is organized and managed.


IoT and Predictive Analytics | @ThingsExpo #IoT #M2M #BigData #InternetOfThings

Companies can harness IoT and predictive analytics to sustain business continuity; predict and manage site performance during emergencies; minimize expensive reactive maintenance; and forecast equipment and maintenance budgets and expenditures.
Providing cost-effective, uninterrupted service is challenging, particularly for organizations with geographically dispersed operations.


Amazon Web Services buys HPC management specialist Nice

Amazon Web Services (AWS) has announced its intention to acquire Nice, a specialist in software for high performance and grid computing.

Details of the takeover of the Asti-based software and services company were not revealed. However, in his company blog AWS chief evangelist Jeff Barr outlined the logic of the acquisition. “These [Nice] products help customers to optimise and centralise their high performance computing and visualization workloads,” wrote Barr. “They also provide tools that are a great fit for distributed workforces making use of mobile devices.”

The NICE brand and team will remain intact in Italy, said Barr. Their brief is to continue developing and supporting the company’s EnginFrame and Desktop Cloud Visualization (DCV) products. The only difference, said Barr, is that they now have the backing of the AWS team. In future, NICE and AWS will also collaborate on projects to create better tools and services for high performance computing and visualisation.

NICE describes itself as a ‘Grid and Cloud Solutions’ developer, specialising in technical computing portals, grid and high performance computing (HPC) technologies. Its services include remote visualization, application grid-enablement, data exchange, collaboration, software as a service and grid intelligence.

The EnginFrame product is a grid computing portal designed to make it easier to submit analysis jobs to supercomputers and to manage and monitor the results. EnginFrame is an open framework based on Java, XML and Web Services. Its purpose is to make it easier to set up user-friendly, application- and data-oriented portals. It simplifies the submission and control of grid-enabled applications and also monitors workloads, data and licenses from within the same user dashboard. By hiding the diversity and complexity of the native interfaces, it aims to allow more users to get the full range of benefits from high performance computing platforms, whose operating systems can be off-puttingly complex.

Desktop Cloud Visualization is a remote 3D visualization technology that enables technical computing users to connect to OpenGL and DirectX applications running in a data centre. NICE has customers in industries ranging from aerospace and industrial to energy and utilities.

The deal is expected to close by the end of March 2016.

For Valentine’s Day: A Love Letter from Dev/Test Cloud to Service Virtualization | @CloudExpo #Cloud

For Valentine’s Day, here’s a lighthearted look at the “relationship” between two complementary technologies: service virtualization and cloud dev/test labs.
Hey, I know it’s been a while since we started being “a thing.” When we met, everyone said you were just mocking, and that I wasn’t real enough to make a living, with my head in the clouds. Yet, here we are, a few years later.


Efficiency gains most compelling reason for cloud, say enterprises

The majority of US enterprises will increase their spending on cloud computing by up to 50% this year, according to US-based researcher Clutch.

Conversely, the research also indicates that 6% of enterprises will cut their spending on cloud. The survey of 300 IT professionals at medium to large enterprises suggests differing uses for cloud computing, with some companies using it to manage costs while others use it as a strategic weapon.

The study found that nearly 30% of the sample will maintain their current level of cloud spending. A significant minority, 47%, identified efficiency improvements as the main benefit of cloud computing, although there were no figures on whether those efficiency gains might encourage companies to spend less on cloud services in future.

The statistics on the uses of cloud computing do not suggest this is a strategic investment, however. The most popular motive cited for enterprise cloud usage in the US appears to be file storage, nominated as the primary objective for buying cloud services by 70% of the sample. The next most popular application, backup and disaster recovery, nominated by 62% of the IT professionals, is another cost item. The cloud was chosen for application deployment by 51% of the sample, but there was no breakdown of whether this was viewed as a cost-saving measure or a strategic investment. Similarly, the figure for buyers who used the cloud for testing, 46%, was not broken down into strategic and cost-saving motives.

Storage costs are the easy win and prove the value of the cloud; strategic use may be a later development, said Duane Tharp, VP of technical sales and services at service provider Cloud Elements. “The returns on file storage are pretty straightforward. Every company needs file storage,” said Tharp. “The ease of adopting the cloud for file storage could prove the concept and pave the way for the adoption of other use cases later.”

[session] Keeping High Availability in the Cloud By @LeeAtchison | @CloudExpo #Cloud

When building large, cloud-based applications that operate at high scale, it’s important to maintain high availability and resilience to failure. To do that, your application must be tolerant of failures, even failures in other areas of the application.
“Fly two mistakes high” is an old adage in the radio-control airplane hobby: fly high enough that if you make a mistake, you can keep flying with room for another.
In his session at 18th Cloud Expo, Lee Atchison, Principal Cloud Architect and Advocate at New Relic, will discuss how this same philosophy can be applied to highly scaled applications, and can dramatically increase your resilience to failure.
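
As a purely illustrative sketch, and not material from the session, one common expression of that philosophy is to give calls to a dependency a timeout and a fallback, so that its failure degrades the response rather than breaking it; the recommendation service here is a hypothetical stand-in.

```python
# Illustrative sketch only: degrade gracefully when a dependency fails
# instead of letting its failure cascade through the application.
from concurrent.futures import ThreadPoolExecutor, TimeoutError

def fetch_recommendations(user_id: str) -> list:
    # stand-in for a call to another service that may be slow or down
    raise ConnectionError("recommendation service unavailable")

def recommendations_with_fallback(user_id: str, timeout_s: float = 0.2) -> list:
    with ThreadPoolExecutor(max_workers=1) as pool:
        future = pool.submit(fetch_recommendations, user_id)
        try:
            return future.result(timeout=timeout_s)
        except (TimeoutError, ConnectionError):
            return []  # serve the page without recommendations rather than failing it

print(recommendations_with_fallback("user-42"))
```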
