Announcing @Streamlyzer to Exhibit at @CloudExpo | #Streaming #OTT #VOD

SYS-CON Events announced today that Streamlyzer will exhibit at the 19th International Cloud Expo, which will take place on November 1–3, 2016, at the Santa Clara Convention Center in Santa Clara, CA.
Streamlyzer is a powerful analytics platform for video streaming services that enables providers to monitor and analyze QoE (Quality of Experience) from end-user devices in real time.

OnProcess to Discuss #IoT at @ThingsExpo | #M2M #DigitalTransformation

OnProcess Technology has announced it will be a featured speaker at @ThingsExpo, taking place November 1–3, 2016, in Santa Clara, California. Dan Gettens, OnProcess' Chief Analytics Officer, will discuss how Internet of Things (IoT) data can be leveraged to predict product failures, improve uptime and reduce costly inventory stock.
@ThingsExpo is an annual gathering of IoT and cloud developers, practitioners and thought leaders who exchange ideas and insights on topics ranging from Big Data in IoT and smart grids to wearables and incorporating IoT into modern data centers.

Tech News Recap for the Week of 10/24/2016

Were you busy this week? Here’s a tech news recap of articles you may have missed for the week of 10/24/2016.

The Dyn DDoS attack that occurred last week was likely the work of script kiddies, according to Flashpoint. Microsoft announced the public beta of Azure Analysis Services and unveiled the Surface Studio PC, along with new Windows 10 features. Apple presented the new MacBook Pro, with its touch-screen Touch Bar and a price jump. The Red Cross suffered a personal data breach affecting 550,000 blood donors. Read on for more top news you may have missed this week!

Remember, to stay up-to-date on the latest tech news throughout the week, follow @GreenPagesIT on Twitter.

Did you miss VMworld? Click here to download our webinar 'Buzz from VMworld 2016: Key U.S. and Europe Announcements'.

By Jake Cryan, GreenPages Technology Solutions

Amazon Cloud Posts Yet Another Quarter of Stellar Results

Amazon Cloud posted yet another set of impressive numbers for the last quarter, signaling the continued strength of this line of Amazon's business. AWS reported sales of $3.2 billion, almost 55 percent higher than the $2.08 billion it posted during the same period a year ago. Operating income was $1.02 billion, up nearly 96 percent from the $521 million it posted a year earlier.

These numbers clearly show that AWS is growing at an incredible pace, despite intense competition from deep-pocketed companies such as Microsoft and Google. Much of this success can be attributed to a simple and clear strategy: helping businesses leverage the power of AWS to improve their performance. When AWS launched ten years ago, it allowed firms to rent computing capacity, meaning they paid only for what they used. Such a model made the cloud more accessible to all companies, including startups with limited budgets.

Another important element of the AWS strategy has been forming strategic partnerships at the right time. Recently, AWS announced a partnership with VMware under which VMware's cloud software will run on AWS. Other such partnerships have helped AWS gain a strong foothold in the cloud market, which in turn has helped it stay ahead of its competitors.

During the earnings call, AWS reiterated that it will focus on helping more businesses move to AWS from both on-premises and hybrid environments. To this end, it has launched a new tool called Server Migration Service that eases the process of moving legacy applications to the cloud. The tool helps IT teams create incremental replicas of virtual machines from their on-premises infrastructure on AWS, with the aim of helping them reap the many benefits of the public cloud. This is an important move: migrating legacy applications to the cloud is a painful process, to say the least, which is largely why many companies opt for a hybrid environment and, as a result, miss out on the flexibility and cost savings a public cloud offers. With this tool, companies now have the option to move their operations entirely to the cloud and make the most of the benefits it offers.
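
To make the migration workflow concrete, here is a minimal sketch of driving Server Migration Service from the AWS SDK for Python (boto3). It assumes the on-premises SMS connector is already deployed and registered; the fallback server ID is hypothetical.

```python
# Minimal sketch: kick off incremental VM replication with AWS Server
# Migration Service via boto3. Assumes the SMS connector is already
# running on-premises; the fallback server ID below is hypothetical.
from datetime import datetime, timedelta

import boto3

sms = boto3.client("sms", region_name="us-east-1")

# Refresh the catalog of on-premises servers discovered by the connector.
sms.import_server_catalog()

# List discovered servers and pick one to replicate.
servers = sms.get_servers()["serverList"]
server_id = servers[0]["serverId"] if servers else "s-0123456789abcdef0"

# Create a replication job: seed a full copy shortly, then replicate
# incrementally every 24 hours.
job = sms.create_replication_job(
    serverId=server_id,
    seedReplicationTime=datetime.utcnow() + timedelta(minutes=5),
    frequency=24,
    description="Incremental replication of legacy VM to AWS",
)
print("Replication job:", job["replicationJobId"])
```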

Besides this tool, Amazon Cloud has also announced that it will add data center facilities across many new geographic regions. This move is in tune with the trend of keeping data as close to customers as possible so that they experience low latency and faster access speeds. Some countries, such as Germany, even mandate that data be kept within their sovereign borders, so the new data center facilities are being set up to comply with these regulations as well. These strategies are likely to bring more benefits to AWS and its customers in the future.

Despite all this positive data, shares of Amazon fell almost six percent in after-hours trading. The drop came because the parent company's profits were lower than expected. In this sense, Amazon Cloud may be the silver lining for the company.

The post Amazon Cloud Posts Yet Another Quarter of Stellar Results appeared first on Cloud News Daily.

[session] Empowering Enterprise Security with the IoT By @SecureChannels | @ThingsExpo #IoT #IIoT #M2M #API

The Internet of Things (IoT) promises to simplify and streamline our lives by automating routine tasks that distract us from our goals. This promise is based on the ubiquitous deployment of smart, connected devices that link everything from industrial control systems to automobiles to refrigerators. Unfortunately, comparatively few of the devices deployed so far have been developed with an eye toward security, and as the DDoS attacks of late October 2016 demonstrated, this oversight can have devastating, if not catastrophic, results.

[session] Long Live Multi-Factor Authentication | @CloudExpo #API #Cloud #Security

President Obama recently announced the launch of a new national awareness campaign to “encourage more Americans to move beyond passwords – adding an extra layer of security like a fingerprint or codes sent to your cellphone.” The shift from single passwords to multi-factor authentication couldn’t be timelier or more strategic. This session will focus on why passwords alone are no longer effective, and why the time to act is now.
In his session at 19th Cloud Expo, Chris Webber, security strategist at Centrify, will discuss how we can move away from passwords towards better, more secure methods of identity verification.
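
As background for the session, here is a minimal sketch of the time-based one-time password (TOTP) algorithm from RFC 6238, which underpins the "codes sent to your cellphone" factor in most authenticator apps. The shared secret below is a well-known documentation example, not a real credential.

```python
# Minimal sketch of RFC 6238 time-based one-time passwords (TOTP),
# the scheme behind most authenticator apps. The shared secret is a
# common documentation example; real secrets are provisioned per user.
import base64
import hashlib
import hmac
import struct
import time

def totp(secret_b32: str, digits: int = 6, period: int = 30) -> str:
    key = base64.b32decode(secret_b32)
    counter = int(time.time()) // period          # 30-second time step
    msg = struct.pack(">Q", counter)              # 8-byte big-endian counter
    digest = hmac.new(key, msg, hashlib.sha1).digest()
    offset = digest[-1] & 0x0F                    # dynamic truncation (RFC 4226)
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

print(totp("JBSWY3DPEHPK3PXP"))  # e.g. '492039' -- changes every 30 seconds
```

A server verifies a submitted code by computing the same value for the current time step, usually allowing one step on either side for clock drift.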

IBM DevOps Workshop at @CloudExpo | @IBMDevOps @Skytap #Agile #DevOps

Join IBM on November 2 at 19th Cloud Expo at the Santa Clara Convention Center in Santa Clara, CA, and learn how to go beyond multi-speed IT to bring agility to traditional enterprise applications.
Technology innovation is the driving force behind modern business, and enterprises must respond by increasing the speed and efficiency of software delivery. The challenge is that existing enterprise applications are expensive to develop and difficult to modernize. This often results in what Gartner calls "Bimodal IT," where businesses struggle to apply modern tools and practices to traditional monolithic applications. But these existing assets can be modernized and made more efficient without being completely overhauled. By leveraging methodologies like DevOps and agile, alongside emerging technologies like cloud-native services and containerization, traditional applications and teams can be modernized without risking everything that depends on them. This session will describe how to apply lessons learned from modern app development, including the starting point for modernization that many enterprises are using to quickly improve speed, efficiency, and software quality.

What Do DDoS Attacks Mean for Cloud Users? | @CloudExpo #Cloud #Cybersecurity

Cloud services are supposed to be highly available, but various types of outages prevent users from accessing them, sometimes on a very large scale. What are the implications of DDoS attacks for cloud services, and what are the alternatives?
Last Friday, DDoS attacks disrupted major parts of the internet in both North America and Europe. The attacks appear to have largely targeted DNS provider Dyn, disrupting access to major services such as Level 3, Zendesk, Okta, GitHub, PayPal, and more, according to sources like Gizmodo. This kind of botnet-driven DDoS attack is a harbinger of future attacks that can be carried out through an increasingly connected world of poorly secured Internet of Things (IoT) devices.
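
One practical client-side mitigation is to avoid treating any single resolver as a single point of failure and to ride out brief outages by serving the last known answer. A minimal sketch, assuming the third-party dnspython 2.x package; the resolver IPs are public recursive resolvers and the stale-answer cache is deliberately simplistic.

```python
# Minimal sketch: resolve a name via several independent recursive
# resolvers and fall back to the last answer we saw if all of them
# fail (serving "stale" data during a DNS outage). Assumes the
# third-party dnspython 2.x package; resolver IPs are public resolvers.
import dns.resolver  # pip install dnspython

RESOLVERS = ["8.8.8.8", "208.67.222.222"]  # Google, OpenDNS
_last_good = {}                            # tiny stale-answer cache

def resolve(name):
    for ip in RESOLVERS:
        resolver = dns.resolver.Resolver(configure=False)
        resolver.nameservers = [ip]
        resolver.lifetime = 2.0            # per-resolver timeout (seconds)
        try:
            answer = resolver.resolve(name, "A")
            ips = [rr.address for rr in answer]
            _last_good[name] = ips
            return ips
        except Exception:
            continue                       # try the next resolver
    if name in _last_good:
        return _last_good[name]            # serve stale as a last resort
    raise RuntimeError(f"all resolvers failed for {name}")

print(resolve("example.com"))
```

Service operators face the analogous choice on the authoritative side: spreading a zone across more than one DNS provider keeps names resolvable when a single provider is attacked.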

Cloudian raises $41m for ‘rapid adoption’ of object storage platform

Hybrid cloud object storage provider Cloudian has announced the completion of a $41 million (£33.8m) financing round, with the funds helping the company expand sales and marketing and grow its international operations.

The funding round included existing investors Intel Capital, INCJ, Eight Roads, and Goldman Sachs, as well as new investors Lenovo, City National Bank, Epsilon Venture Partners, and DVP Investment.

“Connected devices are already generating millions of terabytes of information every day. The challenge is both to manage that information and to analyse and derive value from it,” said Lisa Spelman, vice president and general manager of Intel’s data centre marketing group in a statement. “We believe Cloudian’s unified storage approach, which combines data management and data analytics in a single platform running on Intel Architecture, positions data centre managers exceptionally well to extract value from the massive volumes of information created by accelerating device connectivity and advances in machine learning.”

Lenovo's integration with Cloudian dates back to an OEM agreement in June that enables Lenovo's salesforce to offer Cloudian-based object storage. The manufacturer added that Cloudian's storage solution was the 'clear winner' for driving 'innovation, efficiency and investment protection into the data centre'.

Similarly, Cloudian is one of several vendors integrating with Coldline, Google's latest cold storage service announced earlier this month. Cloudian's HyperStore product is seamlessly integrated with Coldline, offering up to hundreds of petabytes of on-premises storage, as this company blog post explains.
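
Because HyperStore exposes an S3-compatible API, existing S3 tooling can generally address it simply by overriding the endpoint. A minimal sketch using boto3; the endpoint URL, credentials, and bucket name are all hypothetical placeholders.

```python
# Minimal sketch: talk to an S3-compatible object store such as
# Cloudian HyperStore with boto3 by overriding the endpoint. The
# endpoint URL, credentials, and bucket name are hypothetical.
import boto3

s3 = boto3.client(
    "s3",
    endpoint_url="https://hyperstore.example.com",  # on-premises endpoint
    aws_access_key_id="HYPERSTORE_ACCESS_KEY",
    aws_secret_access_key="HYPERSTORE_SECRET_KEY",
)

s3.create_bucket(Bucket="backups")
s3.put_object(Bucket="backups", Key="reports/q3.csv", Body=b"col1,col2\n1,2\n")

# List what landed in the bucket.
objects = s3.list_objects_v2(Bucket="backups").get("Contents", [])
print([o["Key"] for o in objects])
```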

Total funding for Cloudian stands at $81.68 million, in five rounds from eight investors. 

The four essentials MSPs forget when disaster recovery testing

By Mary McCoy

By now, most MSPs recognise that offering backup is table stakes.

Your clients can receive this service from any number of your competitors. In order to stand out and increase monthly recurring revenue (MRR), focus on the disaster recovery (DR) aspect of backup and disaster recovery (BDR). Offer your clients DR testing.

To fully capitalise on the advantages of DR testing, keep the following four best practices in mind when adding this service to your IT portfolio. 

Test everything

Technology alone won’t save businesses paralysed by an IT emergency. DR testing should also engage on the business level, considering continuity of operations and processes along with the validation of actual data availability. How robust is your client’s DR plan? Being properly prepared can be as simple as knowing who to call and having an up-to-date contact list.

Your DR plan should also avoid ambiguity and set expectations when it comes to designating team and individual roles and responsibilities. Do both you and your clients know what to hold each other accountable for or who to reach out to when something goes wrong?

Pro tip: Your DR plans are not one-size-fits-all, which means your testing should vary across your client base. Each business you serve has different needs. Many organisations have specific compliance and regulatory statutes they're required to adhere to. You may back up and store some clients' data at a physical location offsite and others' in the cloud. No two clients are alike, so when testing DR, optimise processes and procedures for each individual client.

Test regularly 

How often should you be conducting disaster recovery tests? There is no hard and fast rule, and it really depends on the client in question. That being said, you should run annual DR tests, at the very least. Your clients’ disaster readiness depends on every employee’s understanding of the current DR plan, which they can ultimately only achieve after familiarisation with the DR testing process. And when factoring in employee turnover, testing every year helps acclimate any new hires to the proper procedures and protocol, thereby helping you fine-tune your clients’ disaster response. 

Considering that a company's DR strategy is only as strong as its least prepared employee, you'd think more MSPs would advocate frequent DR testing to mitigate risk. According to the 2016 Disaster Recovery as a Service Attitude and Adoption Report, however, 22% of respondents test their DR plans less than once a year or, in many cases, never test at all. Help them avoid this liability and package regular DR tests into your overall BDR offering.

Sure, testing backups every year should be the standard, but even that may be too infrequent in certain circumstances. Let's examine a scenario in which you may want to test more often. Perhaps you serve a bank or another financial services business bound by PCI DSS compliance. To comply with regulatory standards, you may need to test this client's DR plan every three months to ensure your BDR solution meets the necessary requirements. In contrast, a barber shop's DR plan may only need to be tested two to three times per year. Again, when formulating DR plans, always make sure you optimise procedures and processes at the client level.
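
One way to keep these differing cadences straight is to encode them as data rather than tribal knowledge. A minimal sketch in Python; the client names, intervals, and default are illustrative, not a recommendation.

```python
# Minimal sketch: derive each client's next DR-test date from a
# per-client cadence driven by its compliance needs. Client names
# and intervals are illustrative.
from datetime import date, timedelta

TEST_INTERVAL_DAYS = {
    "first-street-bank": 90,    # PCI DSS client: quarterly testing
    "main-street-barber": 180,  # lower risk: twice a year
}

def next_test(client, last_test):
    # Fall back to the annual minimum for clients without a set cadence.
    return last_test + timedelta(days=TEST_INTERVAL_DAYS.get(client, 365))

print(next_test("first-street-bank", date(2016, 10, 1)))  # 2016-12-30
```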

Document outcomes 

Strong DR documentation starts with the client's disaster recovery plan, which should outline everything anyone would need to know in the event of an emergency. This includes contact information, a detailed outline of the steps and procedures individuals need to follow to activate a disaster recovery response, expected time frames for recovering data, and more.
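
Capturing that outline as structured data makes it easy to version-control and to check for gaps before each test. A minimal sketch; every value below is a placeholder.

```python
# Minimal sketch of a DR plan captured as structured data so it can
# be version-controlled and checked for gaps. All values are placeholders.
DR_PLAN = {
    "client": "Example Co.",
    "contacts": [
        {"role": "incident lead", "name": "A. Smith", "phone": "+1-555-0100"},
        {"role": "backup lead", "name": "B. Jones", "phone": "+1-555-0101"},
    ],
    "activation_steps": [
        "Confirm outage scope and declare the disaster",
        "Notify the incident lead and client stakeholders",
        "Fail over to the replicated environment",
        "Validate data availability and application health",
    ],
    "recovery_time_objective_hours": 4,
    "recovery_point_objective_hours": 1,
}

# A trivial completeness check a testing run might start with.
missing = [k for k in ("contacts", "activation_steps") if not DR_PLAN.get(k)]
assert not missing, f"DR plan is missing: {missing}"
```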

Only when your response policy is put to the test can you adequately assess the effectiveness of a DR plan. Maybe certain directions are unclear and create friction across teams. Document any and all outcomes during and after testing. What worked? What didn't? Where were the failure points? Why did those failures occur? How do you address them in your client's plan? Were any employees or team leads unavailable? If you can't reach these people in the future, who are their backups? Little details like this can mean everything when the clock is ticking and your clients' business continuity is at stake. To help ensure a more seamless DR response, record all results that may be used to improve your clients' disaster readiness. Then conduct a post-mortem with all involved to review lessons learned and areas for improvement.

Update DR plans

Finally, update your clients' DR plans as necessary. All this testing is for naught if you don't do anything with the data you record. It's not enough to simply remember what to do next time around. Recall the conversation around client employee churn: if a client onboards a new hire after your DR test, that employee will only have the existing DR documentation to follow. Rather than repeat the same mistakes in your next round of DR testing, correct them now to save your clients later. And remember, disaster readiness is ongoing. Continue to revisit and strengthen your DR plans frequently so that testing runs more smoothly going forward.

The article '4 Essentials MSPs Forget When Disaster Recovery Testing' first appeared on Continuum Blog.