Big Data, IoT, API – Newer Technologies Protected by Older Security

Nowadays every CIO, CTO, or business executive I speak with is captivated by three newer technologies: Big Data, API management, and the Internet of Things (IoT). Every one of them confirms that their organization either has projects actively using these technologies or is in the planning stages and about to embark on them soon.
Though the underlying need and purpose served are unique to each of these technologies, they all have one thing in common: they all necessitate newer security models and security tools to serve any organization well. I will explain that in a bit, but first let us look at the value each of these technologies adds to an organization.

read more

Cloud Expo New York: Cloud Architecture and Engineering

In his session at the 12th International Cloud Expo, Tony Shan, who was one of the key drivers of the cloud reference architecture inside IBM and also helped coin the term "Cloud Engineering," will discuss valuable best practices and lessons learned in designing cloud architecture and applying cloud engineering disciplines to real-world projects, presented as case studies.
It was Shan who created the first version of the Cloud Engineering entry on Wikipedia. Some of his work is shared on his blog – cloudonomic.blogspot.com.

read more

Cloud Boom Continues as Quarterly IaaS/PaaS Revenues Exceed $2B

New data from Synergy Research Group shows that Q1 service revenues from IaaS and PaaS exceeded $2 billion, having grown by 56% from the first quarter of 2012. Amazon (AWS) remains in a league of its own with a 27% market share, while a group of IT heavyweights have so far failed to close the gap on the market leader despite an increasing number of service launches and marketing initiatives — Amazon’s share of the market for the whole of 2012 was also 27%.

read more

Convergence and Interoperability Will Define Next-Gen Cloud Architectures

Our more interconnected planet is accelerating the adoption and convergence of next-generation architectures in the form of cloud, mobile, and instrumented physical assets. Organizations that can effectively balance optimization and innovation will be in a position to leverage new systems of engagement, outmaneuver their peers, and achieve desired outcomes. In the Opening Keynote at the 12th Cloud Expo | Cloud Expo New York, IBM GM & Next Generation Platform CTO Dr. Danny Sabbah will detail the critical architectural considerations and success factors organizations must internalize to successfully implement, optimize, and innovate using next-generation architectures.

read more

Cisco Buying JouleX for Cloud-Based Energy Software

Cisco is spending $107 million in cash and retention incentives to buy JouleX, whose cloud-based software is supposed to help companies reduce energy costs.
It monitors, analyzes, and manages the energy use of all network-connected devices and systems through a set of policies derived from analytics tailored to the enterprise.
The widgetry, which provides policy governance and compliance as well, works across global IT environments.
The company was started in Munich in 2009 and is now based in Atlanta, with R&D in Kassel, Germany. It has received at least $17 million in venture funding from investors such as Intel Capital.

read more

Questions Around Uptime Guarantees

Some manufacturers have recently made an impact with a “five nines” uptime guarantee, so I thought I’d provide some perspective. Most recently, I’ve come in contact with Hitachi’s guarantee. I quickly checked with a few other manufacturers (e.g., Dell EqualLogic) to see if they offer that guarantee for their storage arrays, and many do… but realistically, no one can guarantee uptime, because “uptime” really needs to be measured from the host or application perspective. Read below for additional factors that impact storage uptime.

Five Nines is 5.26 minutes of downtime per year, or 25.9 seconds a month.

Four Nines is 52.6 minutes of downtime per year – roughly one hour of maintenance.
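
To make the arithmetic above concrete, here is a minimal sketch (not from the original post) that turns an availability target into a downtime budget; the per-month figure assumes a 30-day month, which is how the 25.9-seconds number works out.

```python
# Minimal sketch: convert an availability target into a downtime budget.
# Assumes a 365-day year and a 30-day month, matching the figures quoted above.

def downtime_minutes(availability: float, period_hours: float) -> float:
    """Allowed downtime, in minutes, for a given availability level over a period."""
    return (1.0 - availability) * period_hours * 60.0

for label, availability in [("Five Nines", 0.99999), ("Four Nines", 0.9999)]:
    per_year = downtime_minutes(availability, 365 * 24)
    per_month = downtime_minutes(availability, 30 * 24)
    print(f"{label}: {per_year:.2f} min/year, {per_month * 60:.1f} sec/month")
```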

Array controller failover in EQL and other dual-controller, modular arrays (EMC, HDS, etc.) is automated to eliminate downtime, but that is really just the beginning of the story. The discussion with my clients often comes down to clarifying what uptime means – and besides uninterrupted connectivity to storage, data loss (due to corruption, user error, drive failure, etc.) is often closely linked in people’s minds, though it is really a completely separate issue.

What are the teeth in the uptime guarantee? If the array does go down, does the manufacturer pay the customer money to make up for downtime and lost data?

There are other array considerations that impact “uptime” besides upgrades or failover.

  • Multiple drive failures are a real possibility, since drives are usually purchased in batches. How does the guarantee cover this?
  • Very large drives must be in a suitable RAID configuration to improve the chances that a RAID rebuild completes before another URE (unrecoverable read error) occurs; a back-of-the-envelope sketch of this risk follows the list. How does the guarantee cover this?
  • Dual controller failures do happen to all of the array makers, although I don’t recall this happening with EQL – even a VMAX went down in Virginia within the last couple of years. How does the guarantee cover this?
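
On the rebuild bullet above, here is the back-of-the-envelope sketch it refers to. The drive size, RAID layout, and one-URE-per-10^14-bits rate are illustrative assumptions, not figures from any vendor’s guarantee.

```python
# Rough sketch: chance of hitting at least one unrecoverable read error (URE)
# while a RAID rebuild reads the surviving drives end to end.
# Assumptions: 1 URE per 1e14 bits read (a common consumer-class spec),
# RAID 5 across six 3 TB drives, so a rebuild reads the five surviving drives completely.

def p_ure_during_rebuild(bytes_read: float, ure_per_bit: float = 1e-14) -> float:
    """Probability of at least one URE while reading `bytes_read` bytes."""
    bits_read = bytes_read * 8
    return 1.0 - (1.0 - ure_per_bit) ** bits_read

surviving_bytes = 5 * 3e12  # five surviving 3 TB drives read end to end
print(f"~{p_ure_during_rebuild(surviving_bytes):.0%} chance of a URE during the rebuild")
```

Numbers like these are why very large drives push you toward double-parity layouts such as RAID 6 simply to survive the rebuild window.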


The uptime “promise” doesn’t include all the connected components. Nearly every environment has something with a single path, a single point of failure (SPOF), or another configuration issue that must be addressed to ensure uninterrupted storage connectivity.
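
A quick sketch of that point: end-to-end availability is roughly the product of the availabilities of everything in the data path, so one weak or single-pathed component erases an array-level “five nines.” The component figures below are illustrative assumptions, not measurements.

```python
# Sketch: availability of a serial chain (host initiator -> switch -> array).
# Every component is required, so availabilities multiply; the weakest link dominates.
# All figures below are illustrative assumptions.

def chain_availability(components: dict[str, float]) -> float:
    """End-to-end availability when every listed component must be up (in series)."""
    total = 1.0
    for availability in components.values():
        total *= availability
    return total

path = {
    "storage array": 0.99999,         # the "five nines" guarantee
    "single network switch": 0.9999,  # a non-redundant switch in the path
    "host NIC / initiator": 0.9995,   # single-pathed host connectivity
}

a = chain_availability(path)
print(f"end-to-end availability: {a:.4%}")
print(f"implied downtime: {(1 - a) * 365 * 24 * 60:.0f} min/year")
```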

  • Are applications, hosts, network, and storage all capable of automated failover at sub-10 ms speeds? For a heavily loaded Oracle database server to continue working through a dual array controller “failure” (which is what an upgrade resembles), it must be connected to the array via multiple paths and use all available paths.
  • Some operating systems (Windows, for example) don’t support automatic retry of paths, nor do all applications resume processing automatically without I/O errors, outright failures, or reboots.
  • You often need to make temporary changes to OS and iSCSI initiator configurations to support an upgrade – e.g., increasing timeout values.
  • The MPIO software also makes a difference; the sketch after this list shows what properly configured redundant paths buy you. Dell EQL MEM helps a great deal in a VMware cluster to ensure proper path failover, as do EMC PowerPath and Hitachi Dynamic Link Manager. Dell also offers an MS MPIO extension and DSM plugin to help Windows recover from a path loss in a more resilient fashion.
  • Network considerations are paramount, too.
    • Network switches often take 30 seconds to a few minutes to come back up after a power cycle or reboot.
    • Also in the network, if non-stacked switches are used, RSTP must be enabled; if it isn’t, or anything else is misconfigured, connectivity to storage will be lost.
    • Flow Control must be enabled, among other considerations (disable unicast storm control, for example), to ensure that the network is resilient enough.
    • Link aggregation, if not using stacked switches, must be dynamic, or the iSCSI network might not support failover redundancy.
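
Here is the sketch referenced in the MPIO bullet above: what redundant, independently failing paths buy you, assuming clean automated failover. The single-path availability figure is an illustrative assumption.

```python
# Sketch: availability of N redundant, independently failing paths (MPIO, stacked
# switches, dynamic link aggregation). Storage is unreachable only if every path is down.
# The single-path availability below is an illustrative assumption.

def redundant_availability(single_path: float, paths: int) -> float:
    """Availability when any one of `paths` independent paths keeps storage reachable."""
    return 1.0 - (1.0 - single_path) ** paths

single_path = 0.999  # one NIC-switch-port path, including reboot windows and human error
for n in (1, 2):
    a = redundant_availability(single_path, n)
    print(f"{n} path(s): {a:.4%} available, about {(1 - a) * 365 * 24 * 60:.0f} min/year down")
```

The point is not the exact numbers but the shape: redundancy only helps if failover is actually automatic and correctly configured, which is exactly what the checklist above is about.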


Nearly every array manufacturer will say that upgrades are non-disruptive, but that is only true at the most simplistic level. Upgrades to a unified storage array, for example, will almost always involve disruption to file system presentation. Clustered or multi-engine frame arrays (HP 3PAR, EMC VMAX, NetApp, Hitachi VSP) offer the best hope of achieving five nines or even better. We have customers with VMAX and Symmetrix arrays that have had 100% uptime for a few years, but those arrays are multi-million-dollar investments. Dual-controller modular arrays, like those from EMC and HDS, can’t really offer that level of redundancy, and that includes EQL.

If the environment is very carefully and correctly set up for automated failover, as noted above, then those five nines can be achieved – but not really guaranteed.


Nick Carr’s 2003 Rules for IT Management: An Open Nerve?

If an article, 10 years after its initial publication date, is featured in several look-backs, reviews, and Q&As, and still gathers reactions and emotional analysis, it can be concluded that it must have struck a chord – or, in this case, more of an open nerve.

In May 2003, the Harvard Business Review published “IT Doesn’t Matter,” an article by the then still largely unknown editor-at-large Nicholas Carr.

read more

Environmental Pressures Driving an Evolution in File Storage

Stagnant budgets, overwhelming data growth, and new user and application demands are just a few of the many challenges putting IT organizations under more pressure today than ever before. As a result, a new approach is required. The session being given by Hitachi Data Systems’ Jeff Lundberg at next month’s 12th Cloud Expo | Cloud Expo New York [June 10-13, 2013] will discuss why object-storage-based private cloud is necessary for evolving into a next generation of IT that supports a new world of applications and storage service delivery models.

read more

Hadoop and Big Data Easily Understood – How to Conduct a Census of a City

Big Data (and Hadoop) are buzzwords and growth areas of computing; this article will distill the concepts into easy-to-understand terms.
As the name implies, Big Data is literally “big data” or “lots of data” that needs to be processed. Let’s take a simple example: the city council of San Francisco is required to take a census of its population – literally, how many people live at each address. City employees are employed to count the residents. The city of Los Angeles has a similar requirement.
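
For readers who prefer code, here is an illustrative sketch (not from the article) of the census example in map/reduce terms; the addresses and counts are made up, and Hadoop’s value is running this same pattern in parallel across many machines.

```python
# Illustrative sketch of the census example as map/reduce: each "city employee" tallies
# one block of addresses (map), and the partial tallies are merged into a city-wide
# total (reduce). Hadoop distributes exactly this pattern across a cluster.

from collections import Counter
from functools import reduce

# Hypothetical input: residents counted at each address, split into per-employee blocks.
blocks = [
    {"1 Market St": 12, "2 Market St": 4},
    {"10 Mission St": 7, "12 Mission St": 9},
]

def map_block(block: dict) -> Counter:
    """Map step: one employee tallies residents in their block of addresses."""
    return Counter(block)

def reduce_tallies(a: Counter, b: Counter) -> Counter:
    """Reduce step: merge partial tallies into the city-wide count."""
    return a + b

city_count = reduce(reduce_tallies, (map_block(b) for b in blocks), Counter())
print(sum(city_count.values()), "residents counted across", len(city_count), "addresses")
```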

read more