Equifax Is an Enron Moment | @CloudExpo #AI #DX #SDN #Cybersecurity

Enron changed how U.S. public companies audit and report their financial data. There is now a similar opportunity to use the Equifax data breach to create a framework for better protection of our data in the future.
The credit reporting agency reported one of the largest data breaches in history. Hackers were able to steal sensitive information from its internal servers. The stolen data includes names, Social Security numbers (SSNs), and dates of birth, as well as credit card numbers and driver's license numbers in some cases. A massive breach like this can haunt the victims for years to come.

read more

Tech News Recap for the Week of 10/02/17

If you had a busy week and need to catch up, here’s a tech news recap of articles you may have missed for the week of 10/02/2017!

A new update on the Yahoo data breach reveals that every single Yahoo account was affected. Three questions to ask about hybrid cloud. What visibility really means in IT. New Windows 10 security features and how to use them. How AWS saves its customers lots of money, and more top news this week you may have missed! Remember, to stay up to date on the latest tech news throughout the week, follow @GreenPagesIT on Twitter.

Also, Cisco Connect is in Tampa, FL in just a few days! We hope to see you there! Register here.

Featured

  • How to build a modern 24/7 help desk [infographic]

IT Operations

  • The changing role of Modern IT: How one solutions provider has evolved
  • What’s new in MySQL 8.0 Database
  • Keeping IT real: What visibility really means
  • Virtualization and IoT made for one another, but performance monitoring still essential
  • NetApp HCI launches at subdued user show

[Interested in learning more about SD-WAN? Download What to Look For When Considering an SD-WAN Solution.]

AWS

  • Amazon AWS saved its customers $500M by alerting them when they’re overpaying

By Jake Cryan, Digital Marketing Specialist

While you’re here, check out this white paper on how to rethink your IT security, especially when it comes to financial services.

Transform IT Security

Why software-defined storage revenue will reach $16.2 billion

As more CIOs and CTOs prepare for the data deluge that’s driving demand for enterprise storage solutions, savvy IT infrastructure vendors are already offering next-generation systems that meet the evolving requirements of their customers’ digital transformation projects.

Software-defined storage (SDS) is one of several new technologies that are rapidly penetrating the IT infrastructure of enterprises and cloud service providers. SDS is gaining traction because it meets the demands of the next-generation data center much better than legacy storage infrastructure.

As a result, International Data Corporation (IDC) forecasts the worldwide SDS market will see a compound annual growth rate (CAGR) of 13.5 percent over the 2017-2021 forecast period, with revenues of nearly $16.2 billion in 2021.
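As a quick sanity check on what a CAGR figure implies, projected revenue is just the base-year revenue compounded forward: revenue_end = revenue_start x (1 + CAGR)^years. Here is a minimal sketch in Python, where the base-year figure is a hypothetical value chosen only to show the arithmetic behind IDC's numbers:

```python
def project_revenue(base_revenue_bn: float, cagr: float, years: int) -> float:
    """Compound a base-year revenue figure (in $ billions) forward at a constant CAGR."""
    return base_revenue_bn * (1 + cagr) ** years

# Hypothetical base: a market worth roughly $8.6B growing at 13.5% a year
# for five years lands close to the $16.2B forecast for 2021.
print(round(project_revenue(8.6, 0.135, 5), 1))  # -> 16.2
```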

Enterprise storage market development

Enterprise storage spending has already begun to move away from hardware-defined, dual-controller array designs toward SDS and from traditional on-premises IT infrastructure toward cloud environments (both public and private) based on commodity Web-scale infrastructure.

SDS solutions run on commodity, off-the-shelf hardware, delivering all the key storage functionality in software. Relative to legacy storage architectures, SDS products deliver improved agility, including faster and easier storage provisioning.

“For IT organizations undergoing digital transformation, SDS provides a good match for the capabilities needed — flexible IT agility; easier, more intuitive administration driven by the characteristics of autonomous storage management; and lower capital costs due to the use of commodity and off-the-shelf hardware,” said Eric Burgener, research director at IDC.

According to the IDC assessment, as these features appear more often on CIOs’ and CTOs’ lists of purchase criteria, enterprise storage revenue will continue to transition to software-defined storage solutions.

Within the SDS market, the expansion of three key sub-segments – file, object, and hyperconverged infrastructure (HCI) – is being strongly driven forward by next-generation data center requirements.

Outlook for enterprise SDS solutions

Of these sub-segments, HCI is both the fastest growing with a five-year CAGR of 26.6 percent and the largest overall with revenues approaching $7.15 billion in 2021. Object-based storage will experience a CAGR of 10.3 percent over the forecast period while file-based storage and block-based storage will trail with CAGRs of 6.3 percent and 4.7 percent, respectively.

Because hyperconverged systems typically replace legacy SAN- and NAS-based storage systems, all the major enterprise storage systems providers have committed to the HCI market in a major way over the past 18 months.

This has made the HCI sub-segment one of the most active merger and acquisition markets, as these vendors position themselves to offset anticipated SAN and NAS revenue losses by capturing HCI revenue while enterprises shift toward more cost-effective SDS solutions.

Beyond disasters: It’s time to do more with DR

Disaster recovery plans exist to ensure business continuity when the worst happens. They allow organisations and individuals caught up in life-changing events such as natural disasters to focus 100% on protecting people from harm, without the distraction of worrying about data and IT systems. Recent events, both climate-related, such as the hurricanes in the US, and manmade, such as ransomware attacks, have brought disaster recovery to the top of mind, and organisations are looking to boost their preparedness. As they do this, they should also consider some of the useful ways that cloud-based disaster recovery solutions can contribute to everyday business operations, even before catastrophe strikes.

We all know that geographic redundancy and replicating critical workloads can keep businesses up and running during catastrophic events, but there are many more everyday use cases for DRaaS (disaster recovery as a service) that can drive additional value from the investment. A key use case is maximising the value of the test environment created as part of the DRaaS provision. Every DR plan needs to be regularly tested to provide assurance that it is fit for purpose, but beyond readiness there are a few other key reasons why testing can be important for your business.

When delivering Zerto for cloud-based DR solutions for our customers, iland creates an environment on our Secure Cloud with preconfigured networks for both live and test failovers. The test environment is completely isolated and can serve as a sandbox for all of your DR and out-of-band testing needs, without any impact on production or replication. That isolated environment can be used in a number of ways:

Code development/quality assurance

These days, almost all organisations do some level of internal development. Many have also adopted agile development principles along the way. Unfortunately, not everyone has adopted Test-Driven Development (TDD), whereby you write a failing test first, write just enough code to make it pass, and then refactor. Boom, automatic quality assurance – in an ideal world, that is.
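As a minimal illustration of that test-first loop (the function and values here are hypothetical, not tied to any particular product):

```python
import unittest

# Step 1: write the test first. It fails until apply_discount exists and behaves correctly.
class TestDiscount(unittest.TestCase):
    def test_ten_percent_discount(self):
        self.assertAlmostEqual(apply_discount(price=100.0, percent=10), 90.0)

# Step 2: write just enough code to make the test pass, then refactor with confidence.
def apply_discount(price: float, percent: float) -> float:
    return price * (1 - percent / 100.0)

if __name__ == "__main__":
    unittest.main()
```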

With this newfound agility, however, comes the temptation to release code prematurely. Why not test in an isolated environment first? By performing a test failover of your development systems, you can run those applications in a pseudo real-world environment. Often, it’s not until the services are live that you find errors. Debugging in this sandbox allows your development team to clean up issues that might otherwise go undetected until launch. This practice will save you from having to issue patches down the line and will ultimately result in higher-quality code releases that are less likely to have negative impacts on your employees or customers.

Application changes/patching

Speaking of patches, back to my “in an ideal world” comment: they’re going to happen. Knowing that’s the case, it makes perfect sense to test patches, or any other code changes for that matter, in a secure environment. A number of our customers have found this test environment conducive to finding issues and bugs much sooner.

Security/penetration testing: Overcoming the fear factor

Let’s face it, as fearful of security events as we all are at this point, many organisations are also nervous about impacting critical systems by performing security checks on their production systems. Having an isolated environment in which to conduct security tests is invaluable.

Within this isolated test environment, you’re able to perform test failovers of the workloads you’d like to audit without impacting production. Now that you have a test bed established, you can start by performing penetration tests. There might be unknown critical vulnerabilities present on those systems, but you’ll be able to run a vulnerability scan, identifying chinks in your systems’ armour.
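As a trivial example of the kind of check you can safely point at failed-over copies of your workloads (the target address below is hypothetical, and a real assessment would use a dedicated vulnerability scanner), here is a basic TCP port sweep in Python:

```python
import socket

def scan_ports(host, ports, timeout=0.5):
    """Return the TCP ports on `host` that accept a connection.

    Run this only against systems you own or are authorised to test,
    such as an isolated DR failover environment.
    """
    open_ports = []
    for port in ports:
        with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as sock:
            sock.settimeout(timeout)
            if sock.connect_ex((host, port)) == 0:  # 0 means the connection succeeded
                open_ports.append(port)
    return open_ports

# Hypothetical address of a failed-over test copy of a production workload.
print(scan_ports("10.0.100.25", range(1, 1025)))
```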

Following that, you could inject sample malware into the environment to put the system’s detection capabilities to the test. Other tools worth checking would be malicious file detection, intrusion detection and prevention, and URL filtering. There are many open-source tools and publicly available utilities for simulating malicious activity. Many iland customers have found security vulnerabilities and weaknesses in their IT systems by deploying DR security testing in this way. In other words: “Hack” away my friends!

As you can see, I’ve identified a few pretty cool use cases for DR testing that go beyond its primary purpose. While the occurrence of natural disasters may well be the prompt that starts organisations looking more closely at cloud-based disaster recovery, once they do, I think they’ll find some seriously compelling reasons to build regular testing using DR test environments into their core business processes.

Disaster recovery as a service offers so much more than peace of mind for when the worst happens, and savvy customers are really starting to make it work in their favour.

General Electric names AWS as preferred cloud provider

General Electric (GE) has selected Amazon Web Services (AWS) as its preferred cloud provider, according to an Amazon announcement.

In a short note, Amazon said GE ‘continues to migrate thousands of core applications to AWS’, and that many of its cloud apps in businesses such as aviation, healthcare and transportation run on Amazon’s cloud. GE began an enterprise-wide migration in 2014 and has migrated more than 2,000 applications since then.

GE’s focus on digital is crystal clear. As Pat Gelsinger, CEO of VMware, pointed out in a speech last year, it is the only company still standing from the original 12 listed on the Dow Jones Industrial Average in 1896. Writing on LinkedIn last month, GE CEO and chairman John Flannery said the decision to go ‘all-in’ on digital was an easy one.

“We have fully embraced the digital industrial transformation, and we believe in its potential to change the world,” he wrote. “For our customers, digital is bringing new levels of innovation and productivity – and they are seeing real, tangible outcomes. Now, we are taking these outcomes and transferring what we have learned directly to our customers.”

GE’s commitment to cloud has similarly shone through. Alongside AWS, the behemoth had previously inked deals with Microsoft in 2015, moving 300,000 employees over to Office 365, and Box in 2014 for content sharing and collaboration.

“Adopting a cloud-first strategy with AWS is helping our IT teams get out of the business of building and running data centres and refocus our resources on innovation as we undergo one of the largest and most important transformations in GE’s history,” said Chris Drumgoole, General Electric chief technology officer and corporate vice president in a statement.

“We chose AWS as the preferred cloud provider for GE because AWS’s industry leading cloud services have allowed us to push the boundaries, think big, and deliver better outcomes for GE,” Drumgoole added.

Can Oracle Beat AWS?

The cloud war is heating up with the entry of another tech giant, Oracle, into the cloud industry. Oracle has been planning its transition to the cloud for some time, and those years of strategy and effort have finally paid off. So, it’s time for the other three giants, namely Google, Amazon Web Services and Microsoft, to up their stakes.

While some may regard Oracle as a relatively new entrant to the cloud world, Larry Ellison, the CTO and co-founder of Oracle, doesn’t think so. In a keynote address at Oracle’s OpenWorld conference in San Francisco, he took a few digs at AWS and said that Oracle could beat AWS.

Why did he say that and is there any truth in his statement?

One area where Oracle has massive experience compared to AWS and other cloud companies is running mission-critical applications. In fact, if you look back at Oracle’s past work, it has been running many complex applications that are critical for the businesses that use them. This experience is where Oracle scores over other cloud companies, as it can be helpful in running critical applications in the cloud.

Besides this experience, Oracle is embarking on an aggressive pricing strategy to woo customers to its own products. Also, the guaranteed high performance of Oracle’s cloud platform, something that comes from the strong infrastructure it has developed over the years, is another aspect that should worry the top three cloud computing giants.

Another fact is that there is so much untapped potential in the enterprise infrastructure market that there is an opportunity for any company with the right products to thrive in it. One possible advantage for Oracle is that, although it entered late, it has learned from the mistakes of other companies, so the chances of it choosing the right strategy are high. Considering that more than eighty percent of companies haven’t moved their infrastructure to the cloud, there are plenty of opportunities for everyone, including Oracle.

Let’s see how all of this plays out for Oracle. Can it really beat the king of cloud computing, AWS? Yes, provided it plays its cards well and makes the most of the advantages it has.

The post Can Oracle Beat AWS? appeared first on Cloud News Daily.

Parallels Desktop 13.1 Update Release Notes

Our team of engineers has been hard at work after we released Parallels Desktop® 13 for Mac! Hundreds of development hours go into ensuring Parallels Desktop users are the center of everything we do. We’d like to wholeheartedly thank the users who provided such wonderful product feedback. Combining the valuable user feedback and engineering Q&A, […]

The post Parallels Desktop 13.1 Update Release Notes appeared first on Parallels Blog.

Taica to Exhibit at @CloudExpo | #AI #DX #SmartCities #MachineLearning

SYS-CON Events announced today that Taica will exhibit at the Japan External Trade Organization (JETRO) Pavilion at SYS-CON’s 21st International Cloud Expo®, which will take place on Oct 31 – Nov 2, 2017, at the Santa Clara Convention Center in Santa Clara, CA.
Taica manufactures Alpha-GEL brand silicone components and materials, which maintain outstanding performance over a wide temperature range of -40°C to +200°C. For more information, visit http://www.taica.co.jp/english/.

read more

What is Cloud Firestore?

Cloud Firestore is yet another cloud product from the stables of Google.

Firestore is a new database service for Firebase, Google’s app platform for developers. Firebase has been fairly popular with developers because it helps you build apps on the Google Cloud Platform without worrying about managing the underlying infrastructure. This way, you have the flexibility to focus on the functionality you want while getting a world-class platform for deployment. In addition, you get the necessary insights and analytics that can be shared across all of Google’s products.

Another salient aspect of Firebase is that it gives developers access to a real-time database that is effectively managed and scaled by Google. Due to such advantages, Firebase has caught on well with developers.

This new Firestore database is sure to make Firebase a more attractive choice for developers, as it complements the existing Firebase Realtime Database. In fact, if you look closely, there’s quite a bit of overlap between these two databases.

So, why do we need Firestore?

Over time, Google realized that developers hit issues during development that could not be solved with the existing realtime database. A new service was therefore introduced to ease these pain points and make the development process more enjoyable and productive for developers.

Realtime databases in general come with limitations. First off, the Firebase Realtime Database can’t handle complex queries, and secondly, the platform itself was architected with a limit of 100,000 connected devices. Some of Firebase’s largest customers hit this limit fairly quickly, which meant they were forced to spread their data across different shards. All this negates the point of using a realtime database and makes it a lot less effective.
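In practice, sharding here means the application itself has to route every user or dataset to one of several independent database instances, typically by hashing a key. A minimal sketch of that kind of client-side routing, with hypothetical shard names:

```python
import hashlib

# Hypothetical shard identifiers; each would be a separate Realtime Database instance.
SHARDS = ["users-shard-0", "users-shard-1", "users-shard-2", "users-shard-3"]

def shard_for(user_id: str) -> str:
    """Deterministically map a user to one shard so their reads and writes stay together."""
    digest = hashlib.md5(user_id.encode("utf-8")).hexdigest()
    return SHARDS[int(digest, 16) % len(SHARDS)]

print(shard_for("user-1234"))  # the same user always routes to the same shard
```

Every query that spans users then has to fan out across all shards, which is exactly the overhead that undermines the simplicity of a managed realtime database.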

Firestore was introduced to overcome these limitations.

Some critics argue that an easier route would have been to redesign the realtime database completely. But that’s easier said than done, as it would entail a lot of time and resources. Instead, the Google team chose to build another database that overlaps with the existing one while being free of its limitations.

In addition, Firestore gives developers the choice to build offline apps using a local database on web, iOS and Android platforms. Users can also sync data across apps in real time, which adds to the attractiveness of Firestore.
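To give a feel for what working with Firestore looks like, here is a minimal sketch using the google-cloud-firestore Python client; the collection and field names are hypothetical, and it assumes Google Cloud credentials are already configured:

```python
from google.cloud import firestore

db = firestore.Client()

# Write a document into a hypothetical "users" collection.
db.collection("users").document("alice").set({"plan": "free", "score": 42})

# Query the collection; Firestore supports richer queries than the Realtime Database.
for doc in db.collection("users").where("score", ">", 10).stream():
    print(doc.id, doc.to_dict())

# Listen for real-time changes, which is where Firestore overlaps with the Realtime Database.
def on_change(col_snapshot, changes, read_time):
    for change in changes:
        print(change.type.name, change.document.id)

watch = db.collection("users").on_snapshot(on_change)
```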

To top it off, small customers don’t really have to make the switch to Firestore right away.

Overall, this is a good move by Google and one that could boost the usage of the Google Cloud Platform.

The post What is Cloud Firestore? appeared first on Cloud News Daily.

[session] Shifting Left on Development with Experimentation | @CloudExpo @Optimizely #CD #Cloud #DevOps

High-velocity engineering teams are applying not only continuous delivery processes, but also lessons in experimentation from established leaders like Amazon, Netflix, and Facebook. These companies have made experimentation a foundation for their release processes, allowing them to try out major feature releases and redesigns within smaller groups before making them broadly available.
In his session at 21st Cloud Expo, Brian Lucas, Senior Staff Engineer at Optimizely, will discuss how, by using new techniques such as feature flagging, rollouts, and traffic splitting, experimentation is no longer just the future for marketing teams; it’s quickly becoming an essential practice for high-performing development teams as well.
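For illustration, percentage-based rollouts and traffic splitting are often implemented by deterministically hashing a user ID into a bucket. A minimal sketch of that idea (not Optimizely’s actual SDK; the function and feature names are hypothetical):

```python
import hashlib

def in_rollout(user_id: str, feature: str, rollout_percent: float) -> bool:
    """Deterministically bucket a user into a percentage rollout for a given feature."""
    digest = hashlib.sha256(f"{feature}:{user_id}".encode("utf-8")).hexdigest()
    bucket = int(digest[:8], 16) / 0xFFFFFFFF  # map the hash to [0, 1]
    return bucket < rollout_percent / 100.0

# Example: expose a redesigned checkout flow to 10% of users before a broad release.
variant = "experiment" if in_rollout("user-42", "new-checkout", 10) else "control"
print(variant)
```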

read more