How to clear the final hurdle to public cloud adoption

(c)iStock.com/Michael Chamberlin

As demand for public cloud services continues to grow rapidly, the major providers are busy developing hyperscale clouds supported by regional data centres. Both Amazon and Microsoft have announced UK-based cloud services with the goal of meeting data sovereignty needs while also helping organisations achieve their digital transformation objectives, allowing them to unshackle themselves once and for all from the constraints and costs of legacy IT systems.

All well and good, but as Gartner’s recent cloud adoption survey underlined, many enterprises are still reluctant to move forward with public cloud services until they have more assurances about performance and security. While Gartner and others clearly don’t dispute the continuing meteoric rise of public cloud, there is still much to be done to remove the fear, uncertainty and doubt surrounding it, and to convince users and boardrooms that it is safe and robust, even though in the majority of cases it is already more secure than what they are using today.

So what more can CIOs and service providers do to deliver the missing ‘X Factor’ that many users and boardrooms still demand before fully embracing public cloud services, let alone mixing these with their private cloud and legacy systems?

Connectivity to these cloud services is increasingly a key part of the solution. It can no longer be an afterthought and must be seriously considered from the outset, particularly for applications which are sensitive to latency issues.

Companies cannot always rely on the vagaries of the public internet, which can be the weakest link of any public cloud offering. They must invest in secure circuits and MPLS networks as they make the move to cloud services.

The development of independent cloud gateways and exchanges for accessing cloud services is a relatively new development in this area, as it allows end users to separate their connectivity provider from their cloud provider. This gives greater flexibility and more control over costs than purchasing all aspects of the solution from a single provider.


Cloud gateways allow fast, highly secure virtual private network connections directly into the global public cloud infrastructures via services such as Microsoft’s Azure ExpressRoute. Without them it is rather like investing in a Ferrari, but one powered by a Morris Minor engine.

Seamlessly plugging into these global public cloud infrastructures – comprising subsea cables and terrestrial networks, and bypassing the public internet – increases security, reduces latency and optimises bandwidth in one fell swoop. Furthermore, with multiple interfaces, private connectivity to multiple cloud locations can be achieved, improving resilience.

Certainty and predictability

As many enterprise organisations consider a move to public cloud services, they are also looking at on-premises and colocation data centres and how to reduce costs and improve efficiencies. They must consider how to meet compute and storage capacity requirements in the brave new world of cloud services without compromising on security and network performance. Rightly so, since when it comes to cloud adoption it is the data centre’s resilience and connectivity that can make or break any cloud model.

Very often hybrid cloud deployments are the answer to these questions, and with them comes the choice of data centre in which to deploy the private clouds. Combining colocation, private and public cloud is increasingly coming into play by making public services more ‘palatable’ to the ‘non-believers’. It gives users the best of both worlds: the comfort blanket of retaining core legacy software systems and IT equipment, plus the added flexibility of easy access to the public cloud for non-core applications and services.

With proven security accreditations, uptime track records and SLA histories all readily available when evaluating today’s modern facilities, the new ‘holy grail’ for data centres must now be delivering consistency and certainty to cloud providers and their users: connectivity that is secure and always on, and latency and response times that are uniformly consistent, whether the cloud model supports a few hundred users nationwide or several thousand spread across the globe. In other words, it must scale without degradation.

Only when data centres bypass the public internet with private connections can public and hybrid cloud users expect to be on the same level as private cloud. At that point any pre-existing customer concerns over security and consistency will quickly disappear, as users will no longer be able to tell the difference, whichever variety or combination of cloud they are using. This is when that missing ‘X Factor’ will have truly arrived.

Achieve IT business continuity | Application Delivery

In today’s highly competitive IT world, business continuity is a key element in keeping businesses competitive. With the cloud revolution, businesses are now able to centrally host resources so they are accessible from any remote device 24/7. In the cloud environment, business continuity becomes a mandatory requirement. For instance, the Delta Air Lines outage […]

The post Achieve IT business continuity | Application Delivery appeared first on Parallels Blog.

Parallels Mac Management v5 with Apple DEP support is now available

How do you make the most advanced Mac management plug-in for Microsoft SCCM better? We’ve done it by making an Apple Mac management technology available to SCCM administrators! Parallels Mac Management v5 with Apple DEP support is now available! The Parallels team is happy to announce the availability of Parallels Mac Management v5 for Microsoft SCCM. […]

The post Parallels Mac Management v5 with Apple DEP support is now available appeared first on Parallels Blog.

Key Announcements from VMworld with Chris Ward

GreenPages’ CTO, Chris Ward, recently held a webinar detailing all of the key U.S. and European announcements made at VMworld 2016. In case you missed it, watch Chris’s short webinar recap below highlighting all the news, including VMware Cloud Foundation, Cross-Cloud Services, vSphere 6.5, and vSAN 6.5. If you are interested in hearing Chris dive deeper into these key announcements, download the entire webinar here.

Or watch the video on our YouTube page.

By Chris Ward, CTO, GreenPages Technology Solutions

How to reduce write amplification on NAND chips in a cloud storage system

(c)iStock.com/BsWei

Solid-state drive (SSD) technology using integrated circuit assemblies is considered a readily available, drop-in replacement for discrete hard disks and is used to increase throughput in large cloud storage networks. Unfortunately, these devices introduce the problem of write amplification (WA) into a cloud storage system.

Reducing write amplification is of paramount concern to system administrators who use NAND chips in these situations, because WA shortens the lifespan of the chips and harms the throughput of a cloud storage and retrieval system.

Calculating write amplification values

All forms of NAND memory used in SSD construction must be erased before they can be written to again. This means that moving, rewriting and storing data to pages on SSDs causes used portions of the chips to go through the erase-and-rewrite process more than once for each file system operation. Consider a drive that has several logical blocks filled with data and several other contiguous blocks that are empty. An operator using a web-based document manager adds data to a spreadsheet and then saves it from their client machine. This requires the logical block that holds it to be erased and then rewritten.

If the file now takes up more space than a single block provides, other used blocks will have to be erased and rewritten until the file system reaches a stable state. Added to this wear and tear is the wear levelling and garbage collection necessary on NAND devices, so write amplification can be seriously problematic. Fortunately, there is a simple way to calculate the write amplification incurred when storing data from cloud clients: write amplification factor = (data written to the NAND chips) / (data written by the host).
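To make the two effects concrete, the short Python sketch below first shows how a small in-place update still forces whole erase blocks to be reprogrammed, and then applies the write amplification formula above. The 256 KB block size and the byte counts are illustrative assumptions only, not figures from any particular drive.

```python
import math

BLOCK_SIZE = 256 * 1024  # hypothetical NAND erase-block size in bytes


def nand_bytes_for_update(update_bytes: int, block_size: int = BLOCK_SIZE) -> int:
    """Bytes the NAND must reprogram for an in-place update: every block the
    update touches is erased and rewritten in full, however small the change."""
    blocks_touched = math.ceil(update_bytes / block_size)
    return blocks_touched * block_size


def write_amplification(nand_bytes_written: float, host_bytes_written: float) -> float:
    """Write amplification factor = data written to NAND / data written by the host.
    A value of 1.0 is ideal; anything higher means extra erase/program cycles."""
    if host_bytes_written <= 0:
        raise ValueError("host writes must be positive")
    return nand_bytes_written / host_bytes_written


# Saving a 4 KB change to the spreadsheet rewrites a full 256 KB block:
host_write = 4 * 1024
nand_write = nand_bytes_for_update(host_write)
print(write_amplification(nand_write, host_write))  # 64.0 for this single update

# Aggregate example: the host issued 2 TB of writes, but garbage collection and
# wear levelling caused the controller to program 5 TB of NAND (assumed figures).
TB = 10**12
print(write_amplification(5 * TB, 2 * TB))  # 2.5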

Overprovisioning and logical capacity

The difference between the actual physical capacity of an SSD and the logical capacity that an operating system sees is referred to as “over-provisioning” (OP). The additional space provided by an OP design helps the controller handle wear-levelling and garbage collection tasks, and provides spare blocks to replace any that go bad.

The most obvious source of OP comes from system software using a binary gigabyte (1,024 MB = 1 GB) while hardware manufacturers use the metric gigabyte (1,000 MB = 1 GB). This figure usually isn’t relevant when considering write amplification, but a second OP source comes from hardware vendors that deliberately include unused NAND capacity somewhere in an SSD. If these values are known, OP can be calculated easily: OP = (true capacity – visible capacity) / user-visible capacity.
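A minimal sketch of that calculation, assuming a nominally 512 GB drive: the “free” over-provisioning that falls out of the binary-versus-metric mismatch, plus the effect of a vendor holding back extra NAND (the 28 GB of hidden flash is a hypothetical figure).

```python
GB_METRIC = 1000**3   # gigabyte as drive vendors count it
GB_BINARY = 1024**3   # "gigabyte" (really a GiB) as much system software counts it


def over_provisioning(true_capacity: float, visible_capacity: float) -> float:
    """OP = (true capacity - visible capacity) / user-visible capacity."""
    return (true_capacity - visible_capacity) / visible_capacity


# NAND is manufactured in powers of two, so a "512 GB" drive may carry
# 512 GiB of raw flash while exposing only 512 metric GB to the host.
raw_flash = 512 * GB_BINARY
exposed = 512 * GB_METRIC
print(f"OP from the unit mismatch alone: {over_provisioning(raw_flash, exposed):.2%}")  # ~7.37%

# Vendors may also hide spare NAND on purpose; the 28 GB here is an assumption.
print(f"OP with extra hidden flash: {over_provisioning(raw_flash + 28 * GB_METRIC, exposed):.2%}")
```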

Cooperative data management policies

Several new cooperative data management policies can be deployed across a cloud storage network to cut down on WA cycles. Eliminating redundant cloud storage system caches is the first step. File systems often keep additional caches in the hope of speeding the system up, but this only increases speed when working with mechanical drives; on SSD devices it mainly multiplies the amount of data that ends up being written.

These caching policies also create victim blocks, which are skipped rather than written to because they are marked as “in use” in the cache. Using file systems designed to work with NAND chips changes these policies and helps to reuse those victim blocks. Most importantly, the cache should be flushed out so that redundant data isn’t written to the drive.

Leveraging page metadata to reduce file system overhead

Traditional file systems that rely on an allocation table, journal or master record protect all file metadata to ensure durability (even in the event of system failures). This requires numerous writes to these tables, and while it provides an extra layer of safety, new file systems designed to work with solid-state memory are starting to replace these models.

Metadata can instead be leveraged to guarantee that indices can be restored if anything unusual happens, rather than ensuring that the metadata is always durable. By aiming for consistency rather than strict durability, far less metadata has to be written to the drive in the first place, which dramatically reduces the impact of WA cycles.

Write amplification must be reduced

System administrators need to reduce write amplification, as it adds overhead and shortens the life of NAND chips. As file systems designed for NAND technology mature, industry adoption is likely to rise as the advantages become clearer.

Synchronoss Impresses Investors

Synchronoss Technologies Inc, a global leader in managed mobility solutions, continued the upward trend evident in its third-quarter results. A company release showed that adjusted revenue climbed to $181 million, while net income rose to $32.5 million; both figures are 20 percent higher than in the same quarter last year. As a result, the company’s adjusted earnings came in at $0.68 per share. These numbers are sure to make investors happy, as revenue was almost $3 million higher than expected and adjusted earnings beat estimates by $0.01. On the back of these impressive numbers, Synchronoss shares rose 13.7 percent in Tuesday’s trading.

A closer look at these numbers shows that revenue from the cloud business grew by 40 percent year over year and now accounts for almost 60 percent of the company’s total sales. This growth was fuelled by rising demand from customers who wanted to make the most of cloud power. In particular, successful cloud migrations for companies such as Softbank and British Telecom helped it gain international recognition. The company’s enterprise security mobility platform also brought in new clients from the healthcare, legal and financial industries. More importantly, Synchronoss’ partnership with Verizon UID gave it access to almost one-third of the US consumer market, in addition to the enterprise market. All these developments and strategies have helped Synchronoss make such impressive strides over the past year.

Despite these numbers, there are some things investors should watch out for. One is stock-based compensation expense, which was almost $9 million this quarter, while acquisition expenses amounted to $7.3 million. There is a big difference between the two: the first is something investors will see every quarter, while the second is more of a rarity, so its impact will be short-term only. In other words, investors should keep an eye on stock-based compensation and ensure that it does not climb too high.

Currently, the company boasts more than 130 patents and more than three billion mobile subscribers around the world. Its customers include leading companies such as Verizon, AT&T, Charter Communications, Vodafone, Comcast and Time Warner Cable in the communications sector, Goldman Sachs and Softbank in the financial sector, and OEMs such as Microsoft, Apple and Samsung. Beyond these big names, Synchronoss caters to almost 300 of the Fortune 500 companies.

Going forward, this client base is only going to grow. In fact, Synchronoss is likely to be a good bet for investors, as it is focusing more on its cloud business and, through it, plans to multiply its sales and revenue. It also plans to launch more cloud-based solutions and expand the features of its existing products to meet the growing demands of its customers.

The company has three broad lines of business – universal ID, secure mobility, and personal cloud. It is headquartered in Bridgewater, New Jersey, and trades under the stock symbol SNCR on the New York Stock Exchange.

The post Synchronoss Impresses Investors appeared first on Cloud News Daily.

Mobility, IoT and SDN helping in network refreshes – but security may be an afterthought

(c)iStock.com/plusphoto

Good news and bad news, according to the latest research from Dimension Data: enterprise networks are being refreshed more frequently, but it is coming at a cost on the security side.

The classic image of the enterprise network sagging under a pile of legacy technologies is becoming a thing of the past, the company argues in its latest Network Barometer Report, thanks to the pressure from technologies such as mobility, software-defined networking (SDN) and the Internet of Things (IoT). Yet when it comes to security, companies’ failure to patch their networks remains a concern.

In terms of geographical splits, the Americas saw a particularly sharp change, with the proportion of ageing and obsolete devices dropping from 60% in 2015 to 29% this year, while Europe, Asia Pacific and Australia improved at a more regular pace. According to the company, this can be put down to American enterprises refreshing their networks for the new generation of programmable infrastructure, whereas other regions refresh as part of data centre network redesigns.

Despite some of the issues with older networks – such as being unable to handle the traffic from cloud-based collaboration, IoT or SDN – Dimension Data argues they need to be handled with care.

“Ageing networks are not necessarily a bad thing: companies just need to understand the implications,” said Andre van Schalkwyk, network consulting senior practice manager at Dimension Data in a statement. “They require a different support construct, with gradually increasing support costs. On the other hand, this also means that organisations can delay refresh costs.”

One aspect of the research was much bleaker than the rest: the proportion of devices inspected that had at least one known security vulnerability – in other words, one made public by the manufacturer – had risen from 60% last year to 76% in 2016. In total, 97,000 network devices were assessed.

Oracle completes $9.3bn NetSuite deal after bump in road

©iStock.com/maybefalse

Oracle has announced the completion of its acquisition of cloud ERP software provider NetSuite for $9.3 billion after shareholders approved the transaction.

In a short note issued earlier this week, the company confirmed that the acquisition would be completed by November 7, after a small majority – 53% – of NetSuite shares had been tendered in favour of the agreement. The original tender date expired on November 4, while the Department of Justice approved the deal in September.

The deal has not been without its hiccups, however. T. Rowe Price, a NetSuite shareholder, urged Oracle to up its offer from $109 per share in cash to $133 per share. A month ago, Oracle showed its hand and announced a final extension of its tender offer, noting: “In the event that a majority of NetSuite’s unaffiliated shareholders do not tender sufficient shares to reach the minimum tender condition, Oracle will respect the will of NetSuite’s unaffiliated shareholders and terminate its proposed acquisition.”

Speaking to this publication when the deal was originally announced in July, John Dinsdale, chief analyst and managing director of Synergy Research, explained how the acquisition would strengthen Oracle’s cloud story. “It will push Oracle a couple of places higher in the enterprise SaaS market share rankings and will strengthen its position as one of the two leading ERP SaaS vendors, alongside SAP,” he said.

In an FAQ published when the deal was announced, the two companies elaborated on the benefits of the deal and said it would remain business as usual for both parties. “Oracle and NetSuite cloud applications are complementary and will co-exist in the marketplace forever,” it read. “Oracle intends to invest heavily in both products – engineering and distribution.”

Get productivity gains with a mobile workforce solution

The mobile revolution, along with cloud technology, has changed the definition of an office, transforming it from a location-based entity into a virtual office with a mobile workforce. According to IDC, the US mobile workforce population amounted to 96.2 million in 2015. It is expected to touch 105.4 million by 2020. Research firm Strategy Analytics reports that […]

The post Get productivity gains with a mobile workforce solution appeared first on Parallels Blog.

Optimizing VMware Environments for Peak SQL Server Performance | @CloudExpo #Cloud #Analytics #MachineLearning

VMware configurations designed to provide high availability often make it difficult to achieve the performance required by mission-critical SQL Server applications. But what if it were possible to have both high availability and high performance without the high cost and complexity normally required?
This article explores two requirements for achieving both for SQL Server applications while reducing capital and operational expenditure. The first is to implement a storage architecture within VMware environments designed for both high availability (HA) and high performance (HP); the second is to tune that HA/HP architecture for peak performance.
