Putting the “Converged” in Hyperconverged Support

Today’s hyperconverged technologies, it seems, are here to stay.  I mean, who wouldn’t want to employ a novel technology approach that “consolidates all required functionality” into a single infrastructure appliance, providing an “efficient, elastic pool of x86” resources controlled by a “software-centric” architecture?  Then again, outside of the x86 component, it’s not as if we haven’t seen this type of platform before (hello, mainframe, anyone?).

But this post is not about the technology behind HCI, nor about whether this technology is the right choice for your IT demands – it’s more about what you need to consider on day two, after your new platform is happily spinning away in your datacenter.  Assuming you have determined that the hyperconverged path will deliver technology and business value for your organization, why wouldn’t you extend that belief system to how you plan on operating it?

Today’s hyperconverged vendors offer comprehensive packages that include advanced support offerings.  They have spent much time and energy (and VC dollars) creating monitoring and analytics platforms that are a definite advance over traditional technology support packages.  While technology vendors such as HP, Dell/EMC, Cisco and others have for years provided phone-home monitoring and utilization/performance reporting capabilities, hyperconverged vendors have pushed these capabilities further with real-time analytics and automation workflows (e.g., Nutanix Prism, SimpliVity OmniWatch and OmniView).  Additionally, these vendors have aligned support plans to business outcomes such as “mission critical”, “production”, “basic”, etc.

Now you are asking: Mr. Know-It-All, didn’t you just debunk your own argument? Au contraire, I say; I have just reinforced it…

Each hyperconverged vendor technology requires its own SEPARATE platform for monitoring and analytics.  And these tools are RESTRICTED to just what is happening INTERNALLY within the converged platform.  Sure, that covers quite a bit of your operational needs, but is it the COMPLETE story?

Let’s say you deploy SimpliVity for your main datacenter.  You adopt the “Mission Critical” support plan, which comes with OmniWatch and OmniView.  You now have great insight into how your OmniCube architecture is operating, and you can delve into the analytics to understand how your SimpliVity resources are being utilized.  In addition, you get software support with 1-, 2-, or 4-hour response (depending on the channel you use – phone, email, or web ticket).  You also get software updates and RCA reports.  It sounds like a comprehensive, “converged” set of required support services.

And it is, for your selected hyperconverged vendor.  What these services do not provide is a holistic view of how the hyperconverged platforms are operating WITHIN the totality of your environment.  How effective is the networking that connects it to the rest of the datacenter?  What about non-hyperconverged workloads, either on traditional server platforms or in the cloud?  And how do you measure end-user experience if your view is limited to hyperconverged data points?  Not to mention, what happens if your selected hyperconverged vendor is gobbled up by one of the major technology companies or, worse, closes when funding runs dry?

Adopting hyperconverged as your next-generation technology play is certainly something to consider carefully, and it has the potential to positively impact your overall operational maturity.  You can reduce the number of vendor technologies and management interfaces, get more proactive, and make decisions based on real data analytics.  But your operations teams will still need to determine whether the source of an impact is within the scope of the hyperconverged stack and covered by the vendor support plan, or whether it’s symptomatic of an external influence.

Beyond the awareness of health and optimized operations, there will be service interruptions.  If there weren’t, we would all be in the unemployment line.  Will a 1-hour response be sufficient in a major outage?  Is your operational team able to respond 24x7 with hyperconverged skills?  And how will you consolidate governance and compliance reporting between the hyperconverged platform and the rest of your infrastructure?

Hyperconverged platforms can certainly enhance and help mature your IT operations, but they provide only part of the story.  Consider carefully whether their operational and support offerings are sufficient for overall IT operational effectiveness.  Look for ways to consolidate the operational information and data provided by hyperconverged platforms with the rest of your management interfaces into a single control plane, where your operations team can work more efficiently.  If you’re looking for help, GreenPages can provide this support via its Cloud Management as a Service (CMaaS) offering.
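To make that idea concrete, here is a minimal sketch, in Python, of what consolidation can look like at the data level: pull open alerts from the hyperconverged platform’s management interface and from a general-purpose monitoring tool, then normalize them into one common record so a single team can triage from a single view.  The endpoint paths, credentials and field names below are hypothetical placeholders rather than any particular vendor’s documented API; a real control plane would add proper authentication handling, scheduling and persistence.

```python
# Minimal sketch: pull alerts from a hyperconverged platform's management
# interface and from a traditional monitoring tool, then normalize both into
# one combined view. Endpoints, credentials and field names are illustrative
# placeholders, not a specific vendor's documented API.
import requests


def fetch_hci_alerts(base_url, user, password):
    """Poll the (hypothetical) HCI management endpoint for open alerts."""
    resp = requests.get(f"{base_url}/api/alerts", auth=(user, password), timeout=10)
    resp.raise_for_status()
    return resp.json().get("alerts", [])


def fetch_infra_alerts(base_url, token):
    """Poll a (hypothetical) general-purpose monitoring tool for open alerts."""
    headers = {"Authorization": f"Bearer {token}"}
    resp = requests.get(f"{base_url}/v1/alerts", headers=headers, timeout=10)
    resp.raise_for_status()
    return resp.json()


def normalize(alert, source):
    """Map vendor-specific fields onto one common record."""
    return {
        "source": source,
        "severity": alert.get("severity", "unknown"),
        "resource": alert.get("resource") or alert.get("entity", "n/a"),
        "message": alert.get("message", ""),
    }


if __name__ == "__main__":
    combined = [normalize(a, "hci") for a in
                fetch_hci_alerts("https://hci.example.local", "admin", "secret")]
    combined += [normalize(a, "infra") for a in
                 fetch_infra_alerts("https://monitor.example.local", "API_TOKEN")]

    # One sorted worklist, regardless of which platform raised the alert.
    for event in sorted(combined, key=lambda e: e["severity"]):
        print(f"[{event['source']}] {event['severity']}: "
              f"{event['resource']} - {event['message']}")
```

The point is less the code than the design choice: the vendor tools remain the authoritative source for their own stack, while correlation and triage happen in one place.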

Convergence at this level is even more critical to ensure maximum support of your business objectives.

If you are interested in learning how GreenPages’ CMaaS platform can help you manage hyper-converged offerings, reach out!

 

By Geoff Smith, Senior Manager, Managed Services Business Development

How cloud and IoT services are driving deployment of public key infrastructures

(c)iStock.com/cherezoff

A new study from Thales and the Ponemon Institute has found that, for more than three in five businesses polled, cloud-based services were the biggest trend driving the deployment of applications using public key infrastructures (PKI).

PKI is the framework that enables users and organisations to send secure data over networks; as defined by TechTarget, it “supports the distribution and identification of public encryption keys, enabling users and computers to both securely exchange data over networks such as the Internet and verify the identity of the other party.”
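For readers less familiar with the mechanics, below is a minimal sketch, in Python, of the public-key primitives a PKI builds on, using the widely used cryptography package: a key pair, a signature produced with the private key, and verification with the public key. A full PKI layers certificates, certificate authorities and revocation on top of these primitives; the sketch illustrates only the sign-and-verify step described in the definition above.

```python
# Sketch of the primitives a PKI builds on: a key pair, a signature made with
# the private key, and verification with the public key. A real PKI wraps the
# public key in a certificate issued and signed by a trusted authority.
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import padding, rsa

# The sender generates (or already holds) a key pair.
private_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
public_key = private_key.public_key()

message = b"order #1234: ship 10 units"
pss = padding.PSS(mgf=padding.MGF1(hashes.SHA256()),
                  salt_length=padding.PSS.MAX_LENGTH)

# Sign with the private key; only the holder of that key can produce this.
signature = private_key.sign(message, pss, hashes.SHA256())

# Anyone holding the public key can confirm the message came from the key
# holder and was not altered in transit.
try:
    public_key.verify(signature, message, pss, hashes.SHA256())
    print("signature valid")
except InvalidSignature:
    print("signature invalid")
```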

According to the research, which surveyed more than 5,000 business and IT managers in 11 countries across five continents, PKIs are being used to support more and more applications. Usage is greatest in the US, and on average PKIs now support eight different applications within a business, up one from this time last year. The 62% of respondents who say they use PKI credentials for public cloud-based applications and services represents a 12% increase on 2015.

Yet the report’s findings were not all positive. More than half (58%) of those polled say their existing PKI is not equipped to support new applications, while more worryingly, 37% say they have no existing PKI in their organisation.

Dr. Larry Ponemon, chairman and founder of the Ponemon Institute, argued that in an increasingly cloud- and IoT-enabled business landscape, businesses that are not adhering to best practice guidelines around PKIs could face serious risks.

“As organisations digitally transform their business, they are increasingly relying on cloud-based services and applications, as well as experiencing an explosion in IoT connected devices,” he said in a statement. “This rapidly escalating burden of data sharing and device authentication is set to apply an unprecedented level of pressure onto existing PKIs, which now are considered part of the core IT backbone, resulting in a huge challenge for security professionals to create trusted environments.

“In short, as organisations continue to move to the cloud it is hugely important that PKIs are future proofed – sooner rather than later,” Dr. Ponemon added.

Recent studies around encryption and cloud security have shown the potential for progress. In August, a research paper from Microsoft proposed the concept of a secure data exchange in which the cloud performs data trades between multiple willing parties, giving users full control over the exchange of information.

Why preparation is key to securing your cloud migration

(c)iStock.com/fazon1

The benefits of big data are real. And with so many businesses looking to migrate their data to the cloud, they want to make sure everything arrives safely and intact. After all, much of this data contains sensitive and proprietary information, and the prospect of moving it from the safety of the corporate firewall to a cloud environment is cause for concern.

Still, as data volumes continue their exponential growth, moving vast sets of structured and unstructured data from the restrictive confines of an on-premise Hadoop deployment to a cloud-based solution will be an inescapable choice for companies looking to stay competitive.

Fortunately, proper preparation is the key to ensuring a smooth and secure transition to the cloud. With that goal in mind, here are some steps your business can take on the preparation side to secure your cloud migration.

Pick your cloud vendor carefully

Data migration to the cloud necessitates a cloud host, and there are a variety of modular cloud solutions to choose from. The key to choosing the right cloud vendor for your organization lies in understanding your big data needs. While price is certainly a consideration, other criteria, such as data security and how well the vendor is equipped to carry out the big data storage and analytics tasks you need, are critical.

If data security is your main concern, then vet your vendors accordingly. If you need a vendor that excels at app hosting, make sure the hosts you are considering excel in that area. If rapid data analytics and reduced time-to-insight are top-of-mind criteria, then a versatile cloud solution such as Spark as a Service would be worth your consideration. In all cases, make sure that the cloud vendor’s platform conforms to industry and internal compliance standards before entering into a cloud service agreement.
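If rapid analytics is part of the evaluation, one practical vetting step is to run the same small proof-of-concept job on each candidate platform and compare time-to-insight. The sketch below is a generic PySpark job, assuming a hypothetical sales.csv dataset with region, amount and customer_id columns; it is not tied to any particular vendor’s service.

```python
# A small PySpark proof-of-concept job to benchmark a candidate
# "Spark as a Service" offering. The sales.csv path and column names are
# hypothetical placeholders for your own sample data.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("vendor-poc").getOrCreate()

# Load a representative sample dataset.
df = spark.read.csv("sales.csv", header=True, inferSchema=True)

# A simple aggregate: total sales and distinct customers per region.
summary = (
    df.groupBy("region")
      .agg(F.sum("amount").alias("total_sales"),
           F.countDistinct("customer_id").alias("customers"))
      .orderBy(F.desc("total_sales"))
)

summary.show()
spark.stop()
```

Running an identical job against each shortlisted vendor gives you a like-for-like comparison of setup effort, runtime and cost before you sign anything.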

Take baby steps

When it comes to adopting a new and promising technology, the tendency for many companies is to want to jump in with both feet. But adoption typically comes with a learning curve, and cloud adoption is no exception. By nature, data migration to the cloud can often cause some downtime, which could potentially impact the business. The smart approach to mitigate the risk of business disruption is to take small steps, beginning with the migration of apps and data that aren’t classified as sensitive or mission critical. Once the security and reliability of the cloud host have been assessed, the next bigger step of loading more sensitive data into the cloud can be taken.

Get clear on security

When it comes to data security, you need to be clear on which security protocols your cloud vendor uses and the degree to which those protocols can ensure that sensitive information remains private. However, if your organization is like most, you won’t be transferring all of your data to the cloud. Some data will remain on your own servers. This means you now have data in two different environments, not to mention cloud-hosted apps that come with their own security systems.

Multiple data environments can lead to confusion for your IT team—the kind of confusion that wastes valuable time and reduces the productivity of your big data initiative. To solve this data security dilemma you’ll need to get clear on implementing a broad and coordinated security policy with technology and policy management protocols that cover apps in the data center and apps in the cloud.

Be strict with BYOD

Migrating data to the cloud enables employees to collaborate like never before. And with the proliferation of mobile devices such as smartphones and tablets, more and more businesses are bringing the practice of bring your own device (BYOD) into the workplace to make workforce collaboration even more convenient. 

However, granting employees access to potentially sensitive data and applications in the cloud poses a number of security risks, especially since mobile devices are fast becoming the favoured targets of skilled hackers. Organisations looking to leverage BYOD need to implement and enforce strict protocols for how data may be accessed and used, along with guidelines that clearly spell out which employees have permission to access sensitive data and cloud-based applications on mobile devices, and which employees do not. Like all security technology and protocols, BYOD safeguards should be owned solely by the IT department to ensure consistent security assessment across the organisation.

As technology advances and data volumes grow ever larger, the rush to the cloud will only intensify. That said, the migration of data to the cloud cannot be rushed. By following these and other guidelines, and by exercising careful planning and preparation to ensure a successful and secure migration, organisations stand to reap the many bottom-line benefits that a cloud solution offers.

Meet the Parallels team at ALSO Expo 2016

2016 has marked the beginning of an extremely successful collaboration between Parallels and ALSO, the largest IT B2B specialist in Finland. As we approach the tail end of the year, ALSO is preparing to celebrate its many business partnerships and solutions at the ALSO Expo Experience, which will be held on October 27 at Tampere […]


[session] @VMware Compliance | @CloudExpo @IBMcloud @CloudRaxak

Successful transition from traditional IT to cloud computing requires three key ingredients: an IT architecture that allows companies to extend their internal best practices to the cloud, a cost point that allows economies of scale, and automated processes that manage risk exposure and maintain compliance with industry regulations (FFIEC, PCI-DSS, HIPAA, FISMA). The unique combination of VMware, the IBM Cloud, and Cloud Raxak, a 2016 Gartner Cool Vendor in IT Automation, provides a cost-effective way to leverage the cloud, manage risk and maintain continuous security compliance.


ReadyTalk to Sponsor @CloudExpo | @ReadyTalk #IoT #RTC #UCaaS #WebRTC

SYS-CON Events announced today that ReadyTalk, a leading provider of online conferencing and webinar services, has been named Vendor Presentation Sponsor at the 19th International Cloud Expo, which will take place on November 1–3, 2016, at the Santa Clara Convention Center in Santa Clara, CA.
ReadyTalk delivers audio and web conferencing services that inspire collaboration and enable the Future of Work for today’s increasingly digital and mobile workforce. By combining intuitive, innovative technology with unmatched customer service, ReadyTalk provides a seamless collaboration experience for anyone, across any device, platform or location. Everything you need, anywhere you are, for before, during and after your online event. ReadyTalk is headquartered in Denver, CO and was founded in 2001.


SoftLayer «Platinum Sponsor» of @CloudExpo | #DataCenter #IoT #SDS #ML

SYS-CON Events announced today that SoftLayer, an IBM Company, has been named “Gold Sponsor” of SYS-CON’s 18th Cloud Expo, which will take place on June 7-9, 2016, at the Javits Center in New York, New York. SoftLayer, an IBM Company, provides cloud infrastructure as a service from a growing number of data centers and network points of presence around the world. SoftLayer’s customers range from Web startups to global enterprises.


Pros and Cons of Plex Cloud

Plex Cloud, the latest offering from Plex, is likely to take the user experience to new levels. With this service, you no longer have to run Plex Media Server on a computer or on a Network Attached Storage device in your house. Rather, you can access it all directly from the cloud.

The obvious advantage is greater flexibility, as you can use it on a broad range of devices. You’re also not confined to a particular geographical location: you can stream from any device, anywhere, as long as you have access to the Internet.

Another related advantage is that you don’t have to worry about downloading, installing, or configuring Plex software on a PC or Mac. This is sure to come as a huge relief to customers, as setup can be daunting for novices and for those who haven’t used media streaming services in the past. Also, you don’t have to keep your PC or Mac on all the time, so in this sense the service signals the end of the “always-on computer” idea.

Streaming from the cloud also takes a lot of work off your hands, especially the work related to transcoding, if your computer isn’t powerful enough to handle all your streaming demands. This way, you don’t have to spend money on a high-powered computer just to stream media.

In addition, you can create a private list of your favorite collections and watch them from any device, as long as the device supports secure connections to the Internet. Simply sign in with your Plex account, then browse and play the content you want. Your only limitation here is the speed of your Internet connection; if it’s fast enough for Netflix, it should be good enough for Plex Cloud too.

Despite the above advantages, there are some downsides too. Firstly, Plex Cloud doesn’t score as high on functionality as the installed version, mainly because the server is not running continuously in the background. Secondly, there is only limited support for third-party channels.

More importantly, Plex Cloud is not free: you need an active Plex Pass subscription to use it. As of now, it costs $4.99 per month, $39.99 per year, or $149.99 for a lifetime. These rates are cheaper than Netflix, but they still cost you something, unlike the Plex Media Server, which doesn’t require any subscription. You’ll also need an Amazon Drive account, because Plex Cloud plays your content from cloud storage rather than from your computer. That subscription sets you back $59.99 a year for the “unlimited everything” plan. Though Plex has a tie-up only with Amazon for now, it may well expand to Microsoft, Dropbox, SugarSync, and other cloud storage and backup providers in the future.
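To put rough numbers on that, here is a quick back-of-the-envelope calculation of the first-year cost under the pricing quoted above (prices may, of course, change):

```python
# Back-of-the-envelope first-year cost of Plex Cloud, using the prices quoted
# above: Plex Pass (monthly, annual or lifetime) plus Amazon Drive.
plex_pass_monthly_x12 = 4.99 * 12   # paying month to month for a year
plex_pass_annual = 39.99
plex_pass_lifetime = 149.99
amazon_drive_annual = 59.99

print(f"Annual Plex Pass + Amazon Drive:      ${plex_pass_annual + amazon_drive_annual:.2f}")
print(f"Monthly Plex Pass x12 + Amazon Drive: ${plex_pass_monthly_x12 + amazon_drive_annual:.2f}")
print(f"Lifetime Plex Pass + Amazon Drive:    ${plex_pass_lifetime + amazon_drive_annual:.2f}")
```

Whichever Plex Pass tier you choose, expect to budget roughly $100 to $210 for the first year once Amazon Drive is included.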

Thus, Plex Cloud is going to cost you some money, but it’s not so expensive that it will break the bank, and you get good value for what you pay.


More money in big data initiatives, Gartner argues – but is the ROI still unclear?

(c)iStock.com/Danil Melekhin

The big data landscape is approaching a state of maturation: according to the latest note from analyst house Gartner, more money is being invested in big data but fewer companies are deciding to commit.

The research, which polled almost 200 members of the Gartner Research Circle, found that 48% of companies have invested in big data this year – up 3% from this time last year – but also argued that only a quarter (25%) plan to invest in the coming years, down from 31% in 2015.

One area which remained relatively unchanged was getting big data projects from ideation to creation. Gartner argues the issue of organisations being stuck in the pilot phase remains an important one; 15% of respondents reported deploying their big data projects to production in the most recent survey, compared with 14% this time last year.

Gartner argues a couple of reasons may be behind the findings. Big data projects seem to be receiving less spending priority than other IT initiatives, and a lack of reported return on investment is also a factor. This may change as the big data term dissipates; in other words, as dealing with multiple strands of data and larger datasets becomes the de facto method.

“The big issue is not so much big data itself, but rather how it is used,” said Nick Heudecker, Gartner research director. “While organisations have understood that big data is not just about a specific technology, they need to avoid thinking about big data as a separate effort.” Research director Jim Hare added: “When it comes to big data, many organisations are still finding themselves at the crafting stage. Industrialisation – and the performance and stability guarantees that come with it – have yet to penetrate big data thinking.”

A recent report from Forrester Research found that, in terms of big data technologies, NoSQL and Hadoop were forecast to grow the quickest, with the pharmaceutical, transport and primary production industries the fastest growing.

The hidden dangers of legacy technology – and how to resolve them

(c)iStock.com/TiSanti

Every business has that one legacy system it can’t seem to let go of. You know you’ve got one – a relic hidden away in some dusty server room. But are you aware of the damage these outdated systems can do, and may already be doing, to your organisation?

The damage inflicted by legacy technology can range from minor systems issues through to major events that could put your organisation out of business – and it’s important to know the hidden dangers.

Increased downtime

Outdated software runs on outdated hardware, and both eventually lead to ever-increasing downtime and repeated system failures. Running systems past their operational lifespan is a recipe for disaster: these systems will increasingly overheat, crash and eventually cease to operate.

The damage caused by system failures can range from frustrating to devastating. It can be as minor as IT spending countless hours rebooting servers, or the inconvenience of data loss. It could however be customers left unable to make purchases on the busiest day of the year as your website is offline. The bottom line is that system failures cost money; it could be hundreds, it could be millions.

RBS is an example of a company that learned this lesson the hard way. Legacy systems at the bank failed for several days in 2012, leaving customers unable to access their accounts or make online payments. Worse still, staff were required to update balances manually during this time. This not only damaged the bank’s brand, but cost it millions in lost business. Paris’ Orly airport suffered a similar fate: the airport was forced to ground planes for hours after an instance of Windows 3.1, a 23-year-old system, crashed in bad weather. It’s deeply concerning that some of the most important networks and systems today are woefully outdated.

Compliance issues

Depending on your industry, holding onto legacy technology is the equivalent of holding a ticking compliance time bomb. Once a legacy technology becomes unsupported, the vast majority of such systems will fail to meet industry compliance standards like PCI DSS, SOX and HIPAA.

These standards place strict requirements on the entire IT infrastructure, often with a specific focus on server and network security. Unsupported systems that do not meet these requirements will need significant investment to maintain compliance.

Running a system that is no longer compliant can result in hefty fines from regulating bodies. Visa and MasterCard impose financial penalties on merchants and service providers for non-compliance; these charges can range from £3,500 to £75,000 per month until compliance is restored. Windows Server 2003 is an example of technology which no longer meets PCI compliance, so if you’re processing card payments through a website running on Windows Server 2003, you could be non-compliant already.

Increasing operational costs

Running outdated technology increases operating costs. Old hardware platforms lack modern power saving technology, while old operating systems are devoid of virtualisation features. These systems are inefficient and cost more to run and maintain.

As previously mentioned, these systems crash often and require constant attention from the IT department, eating away at employee resources. Failure rates on legacy technology mean you’ll need to track down increasingly rare replacement parts that manufacturers may have stopped supplying.

There’s also the risk presented by a dwindling talent pool. As technologies pass out of circulation, so too do the IT professionals with the requisite skills to support them. Lose an existing staff member and you risk paying over the odds to employ or train a replacement with the skills necessary to manage the tech. That is, of course, if training is still available for the technology in question. Reflecting on the earlier Paris Orly airport incident, the airport is now in a race against time to replace the outdated system before the only technician it has who is familiar with Windows 3.1 retires.

Data breaches

Legacy technologies are extremely vulnerable to attack from cyber criminals. With the average cost of a single data breach now reported at $4 million, such an event falls into the potentially business-ending category, depending of course on the size of the company and the severity of the breach.

The problem with these outdated systems is that they are (predominantly) no longer supported by the company that created them. You are on your own: if a new vulnerability is discovered by cyber criminals, there will be no security update released to patch the issue. It’s also unlikely you will be informed of the vulnerability, meaning you are blindly running a system exposed to constant attack.

Old technology also doesn’t benefit from advances in security. Take Windows Server 2003 as an example: this old server platform lacks the compartmentalisation available in modern server operating systems. Once intruders gain access to your system, they have free rein to move around; through a single unpatched vulnerability, attackers can reach all applications, middleware and databases running on the server platform.

Outpaced by competitors

We are all faced with digital disruption, accelerating at a pace we’ve not witnessed in any previous era of technology-induced change. The explosion of mobile devices and real-time transactions – supported by cloud services – cannot be handled by legacy systems that were never designed to accommodate these interactions at such high volume.

This is a simple case of Darwinism: adapt or die. You cannot hope to be a 21st century organisation running on 20th century technology. By clinging on to that legacy system, you may find your business lost to a digital start-up.

Don’t believe me? We need only look at recent history. More than 80% of the Fortune 500 companies from 20 years ago are no longer on the list. Having failed to make the transition to an Internet-based business in the 1990s, they have largely been replaced by organisations born in the last 20 years as Internet-based businesses.

The same fundamental transformation is happening now. Instead of a shift to online business, it’s a shift to digital business models and modern digital infrastructures. If you stick with your legacy technology, you face losing relevance and suffering the same fate as those from the 1990s.