Building High IOPS Flash Array By @Innodisk_Corp | @CloudExpo [#Cloud]

With the rapid advancement of processor technologies, disk access has been identified as the next performance bottleneck in many cloud computing applications. In recent years, storage appliances based on flash memory have been deemed practical solutions to this bottleneck. However, high-end flash appliances are mostly built on proprietary hardware designs aimed at particular scenarios in larger-scale data centers, and hence are barely affordable for the enterprise and industry customers that are also deploying private clouds. Innodisk FlexiRemap™ technology, on the other hand, addresses the challenges of performance, data endurance, and affordability through innovations in software and firmware, creating a new category of flash-collaborative (in contrast to flash-aware or flash-optimized) storage appliances that deliver sustained high IOPS, even for random write operations.
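Innodisk has not published FlexiRemap's internals, so the sketch below is only a generic illustration of the family of techniques the paragraph alludes to: remapping random logical writes into a sequential physical log, which flash media handle far faster than in-place random updates. All names here are hypothetical.

```python
# Illustrative sketch only: FlexiRemap itself is proprietary. This models the
# general remapping idea behind such designs. Every logical write, however
# random, is appended at the head of a sequential physical log, and a mapping
# table records where each logical block currently lives.

class RemapLayer:
    """Map logical block addresses onto an append-only physical log."""

    def __init__(self, num_physical_blocks):
        self.mapping = {}                        # logical block -> physical block
        self.log = [None] * num_physical_blocks  # the flash, written sequentially
        self.next_physical = 0                   # head of the log

    def write(self, logical_block, data):
        # Random writes become sequential ones; the stale copy is simply
        # left behind (garbage collection is omitted for brevity).
        phys = self.next_physical % len(self.log)
        self.log[phys] = data
        self.mapping[logical_block] = phys
        self.next_physical += 1

    def read(self, logical_block):
        phys = self.mapping.get(logical_block)
        return self.log[phys] if phys is not None else None
```

Three writes to scattered logical addresses (say 100, 7, then 100 again) land in physical blocks 0, 1 and 2, so the flash only ever sees sequential write traffic.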

read more

Data Science of Email By @SendGrid | @BigDataExpo [#BigData]

There are 182 billion emails sent every day, generating a lot of data about how recipients and ISPs respond. Many marketers take a more-is-better approach to stats, preferring the ability to slice and dice their email lists based on numerous arbitrary statistics. Fundamentally, however, what really matters is whether sending an email to a particular recipient will generate value. Data scientists can design high-level insights such as engagement prediction models and content clusters that allow marketers to cut through the noise and design their campaigns around strong, predictive signals rather than arbitrary statistics.
SendGrid sends up to half a billion emails a day for customers such as Pinterest and GitHub. All this email adds up to more text than is produced in the entire Twitterverse. We track events like clicks, opens and deliveries to help improve deliverability for our customers, adding up to over 50 billion useful events every month. While SendGrid data covers only about 2% of all non-spam email activity in the world, it gives SendGrid a unique snapshot of email activity spanning senders, recipients and inbox providers like Gmail and Yahoo. To cope with data at this scale, SendGrid has designed and implemented custom data structures that store tens of billions of items in memory on a single commodity machine.
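SendGrid has not published those custom data structures, but a common way to keep approximate per-recipient event counts for billions of keys within a fixed memory budget is a probabilistic structure such as a count-min sketch. The following is purely an illustration of that general approach, not SendGrid's implementation:

```python
# Generic sketch of one approach to billions of counters in fixed memory:
# a count-min sketch. Memory use is width * depth cells regardless of how
# many distinct keys (e.g. "opens:user@example.com") are added.
import hashlib

class CountMinSketch:
    def __init__(self, width=1 << 20, depth=4):
        self.width, self.depth = width, depth
        self.rows = [[0] * width for _ in range(depth)]

    def _buckets(self, key):
        # One independent bucket per row, derived from a salted hash.
        for i in range(self.depth):
            h = hashlib.blake2b(key.encode(), salt=bytes([i]) * 8).digest()
            yield i, int.from_bytes(h[:8], "big") % self.width

    def add(self, key, n=1):
        for i, b in self._buckets(key):
            self.rows[i][b] += n

    def count(self, key):
        # Collisions can only inflate a cell, so the minimum across rows
        # is the tightest estimate; undercounting is impossible.
        return min(self.rows[i][b] for i, b in self._buckets(key))
```

The trade-off is that counts may be slightly overestimated under heavy collision, which is acceptable for engagement signals where relative magnitude matters more than exact totals.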

read more

HyperStore: @CloudianStorage Announces ‘#Cloud Storage for Everyone’ | @CloudExpo

Cloudian announced immediate availability of Cloudian HyperStore appliances and Cloudian HyperStore 5.0 software. Flash-optimized, rack-ready HyperStore appliances make it easy to economically deploy full-featured, highly scalable S3-compliant storage with three enterprise-focused configurations. HyperStore appliances come fully integrated with Cloudian HyperStore software to assure unlimited scale, multi-data center storage, fully automated data tiering, and support for all S3 applications.

read more

Safeguarding Data in the Cloud By @Vormetric | @CloudExpo

As enterprises look to take advantage of the cloud, they need to understand the importance of safeguarding their confidential and sensitive data in cloud environments. Enterprises must protect their data from (i) system administrators who don’t need to see the data in the clear and (ii) adversaries who become system administrators using stolen credentials. In short, enterprises must take control of their data: the best way to do this is with advanced encryption, centralized key management and cutting-edge access controls and policies.
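The combination of encryption and centralized key management described above is commonly realized as envelope encryption: each object is encrypted with its own data key, and only a wrapped copy of that key, protected by a centrally managed master key, is stored alongside the ciphertext. Vormetric's product is proprietary; this sketch illustrates the general pattern using the third-party Python `cryptography` package (an assumed dependency, not anything from the article):

```python
# Sketch of the envelope-encryption pattern: the master key never touches
# the data directly, and the per-object data key is discarded after use.
from cryptography.fernet import Fernet

def encrypt_envelope(master_key: bytes, plaintext: bytes):
    data_key = Fernet.generate_key()              # fresh key per object
    ciphertext = Fernet(data_key).encrypt(plaintext)
    wrapped_key = Fernet(master_key).encrypt(data_key)
    return wrapped_key, ciphertext                # data_key itself is not stored

def decrypt_envelope(master_key: bytes, wrapped_key: bytes, ciphertext: bytes):
    data_key = Fernet(master_key).decrypt(wrapped_key)
    return Fernet(data_key).decrypt(ciphertext)
```

Because a cloud administrator sees only ciphertext and wrapped keys, compromising the stored data without access to the central key manager yields nothing in the clear.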

read more

SolidFire announces new funding, new storage nodes, but no plans for IPO

Picture credit: Bob Mical/Flickr

Flash storage provider SolidFire has beefed up its funding pool with a series D round of $82m, bringing its total up to $150m.

The funding was led by Greenspring Associates, a new investor, along with current investors NEA, Novak Biddle, Samsung Ventures and Valhalla Partners. SolidFire says the new funds will support a global push and advance its all-flash storage architecture.

“Series D financing for SolidFire is important,” Jay Prassl, SolidFire VP marketing, told CloudTech. “When you’re building an infrastructure company and a storage company like SolidFire, it’s a capital-intensive business.

“This D round funding is very important because it puts SolidFire very much on a path to profitability,” he added. “We are growing a very long term standalone storage company, and raising these funds allows us to really set us up on a path to profitability and leave the options open, if you will, for SolidFire to continue to make additional moves as it goes forward.”

Prassl added there was nothing set in stone regarding an IPO – indeed, reading the history books of what’s happened to storage companies in the past, he admits there’s no prize for going public too early.

Citing Violin Memory as an example, Prassl said: “Going public is often just one step in the process of continuing to grow a company, and it’s a choice you make at a certain point in time. Many companies…have been forced to go public…SolidFire certainly does not want to be in that position.”

SolidFire sees itself squarely within a key trend in big data architecture, offering storage based on flash memory, a more energy-efficient way of reading and writing data. It’s evidently a popular idea, as the investment money keeps rolling in.

The firm has announced the expansion of its SF Series product line with two new storage nodes, offering users a cheaper way to get on board with the product. The SF2405 and SF4805 nodes represent the third generation of SolidFire hardware, with the SF2405 a low-end product release and SF4805 doubling up on that.

SolidFire says the SF2405 is aimed at IT departments and managers looking to take their first steps towards deploying a private cloud infrastructure, and IT as a service – but it doesn’t mean the company is taking its eye off the ball for its traditional large enterprise customer base.

“It’s cut the entry price point for SolidFire storage systems in half,” Prassl said. “That’s significant because that opens up a broader array of customers to SolidFire’s capabilities that maybe weren’t accessible before.

“Many people often think that a smaller storage node indicates your movement towards smaller target customers, and that’s not the case here,” he added.

Prassl added that the keys to initiating a small cloud environment are consolidation, automation of infrastructure, and the ability to scale – something SolidFire feels it can do particularly well.

“So many flash companies out there today are focused on one thing: flash”, he said. “We’re a very different storage company. We use flash, for sure, but we go far beyond the media to deliver these three key areas.”

As Prassl argued, this is one of the main reasons why Greenspring Associates took an interest in investing.

Atos and IOC outline plans for cloud computing in 2016 Olympic Games

Picture credit: Oliver E Hopkins/Flickr

International IT services provider Atos has confirmed that Canopy, a platform as a service cloud offering, will be providing the platform for the Olympics to move to the cloud.

Canopy, which is backed by Atos, EMC and VMware, will provide a private cloud solution to transition core planning systems for the Olympics, including accreditation, sport entries and qualification, and workforce management.

Atos has been working with the International Olympic Committee (IOC) since the 1980s, and last year signed a new long term contract to deliver IT solutions for the Olympics.

Neither side is playing down the toughness of the challenge. Over 80 competition and non-competition venues will have their IT infrastructure linked together, totalling hundreds of servers and thousands of laptops and PCs.

“Here we see a paradigm shift,” Atos notes, “from a ‘build each time’ to a ‘build once’ model and delivering services over the cloud.

“Rio 2016 is a key milestone in this transformational shift.”

“Atos is our long-term worldwide IT partner who has played a critical role in helping us deliver seven successful Olympic Games,” said the IOC’s Jean-Benoit Gauthier. “We are now trusting it to transfer the delivery of the IT for the Games to the cloud, so we can continue to innovate and ensure an excellent Games experience for all.”

Preparation for getting Rio’s infrastructure in shape began at the time of the last Olympic Games in 2012 with the design of the systems. Currently the focus is on building the systems ready for testing, and by 2016 the IT equipment will be deployed.

Rio will be the first Games with extensive IT infrastructure built in the cloud, with the technology being too nascent for the London Games in 2012. Gerry Pennell, CIO of LOCOG (London Organising Committee of the Olympic and Paralympic Games) told Computing at the time: “The infrastructure in the cloud is not sufficiently mature enough to support the kinds of things we’re doing in the Olympics.”

Given worries from senior IOC officials concerning the state of preparation for the Olympics – one called it “the worst in living memory” – let’s hope the IT building doesn’t go over time.

Seagate Has Shipped Over 10 Million Storage HHDD’s

Seagate has shipped over 10 Million storage HHDD’s, is that a lot?

By Greg Schulz

Recently Seagate announced that it has shipped over 10 million Hybrid Hard Disk Drives (HHDDs), also known as Solid State Hybrid Drives (SSHDs), over the past few years. Disclosure: Seagate has been a StorageIO client.

I know where some of those desktop-class HHDDs, including Momentus XTs, ended up, as I bought some of the 500GB and 750GB models via Amazon and have them in various systems. Likewise, I have installed in VMware servers the newer generation of enterprise-class SSHDs, which Seagate now refers to as Turbo models, as companions to my older HHDDs.

What is a HHDD or SSHD?

HHDDs continue to evolve, from initially accelerating reads to now being capable of speeding up write operations across different families (desktop/mobile, workstation and enterprise). What makes an HHDD or SSHD is, as the name implies, that it is a hybrid combining a traditional spinning magnetic Hard Disk Drive (HDD) with flash SSD storage. The persistent flash memory is in addition to the non-persistent DRAM typically found on HDDs and used as a cache buffer. These HHDDs or SSHDs are self-contained in that the flash is built into the drive as part of its internal electronics circuit board (controller). This means the drives should be transparent to operating systems or hypervisors on servers, and to storage controllers, with no need for special adapters, controller cards or drivers. In addition, no extra software is needed to automate tiering or movement between the flash and the internal HDD; it is all self-contained, managed by the drive’s firmware (e.g. software).
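The firmware-managed caching just described can be modeled in a few lines. This is a purely illustrative sketch, not Seagate's actual (proprietary) algorithm: a small flash-backed LRU cache absorbs writes and serves hot reads, destaging the coldest blocks to the magnetic platters, all invisibly to the host.

```python
# Toy model of what SSHD firmware does internally: a small flash cache in
# front of the spinning platters. The host just sees one block device.
from collections import OrderedDict

class HybridDrive:
    def __init__(self, flash_blocks=4):
        self.flash = OrderedDict()      # block -> data, kept in LRU order
        self.flash_blocks = flash_blocks
        self.platter = {}               # the slow magnetic store

    def write(self, block, data):
        # Write acceleration: absorb the write in flash, destage later.
        self.flash[block] = data
        self.flash.move_to_end(block)
        while len(self.flash) > self.flash_blocks:
            cold_block, cold_data = self.flash.popitem(last=False)
            self.platter[cold_block] = cold_data   # destage coldest block

    def read(self, block):
        if block in self.flash:                    # flash hit: fast path
            self.flash.move_to_end(block)
            return self.flash[block]
        return self.platter.get(block)             # miss: seek the platters
```

Real firmware also decides *which* blocks deserve flash based on observed access patterns, but the essential point stands: no driver, filesystem or tiering software above the drive needs to know the flash exists.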

Some SSHD and HHDD industry perspectives

Jim Handy over at Objective Analysis has this interesting post discussing Hybrid Drives Not Catching On. The following is an excerpt from Jim’s post.

Why were our expectations higher? 

There were a few reasons: The hybrid drive can be viewed as an evolution of the DRAM cache already incorporated into nearly all HDDs today. 

  • Replacing or augmenting an expensive DRAM cache with a slower, cheaper NAND cache makes a lot of sense.
  • An SSHD performs much better than a standard HDD at a lower price than an SSD. In fact, an SSD of the same capacity as today’s average HDD would cost about an order of magnitude more than the HDD. The beauty of an SSHD is that it provides near-SSD performance at a near-HDD price. This could have been a very compelling sales proposition had it been promoted in a way that was understood and embraced by end users.
  • Some expected Seagate to include this technology in all HDDs and not to try to continue using it as a differentiator between different Seagate product lines. The company could have taken either of two approaches: To use hybrid technology to break apart two product lines – standard HDDs and higher-margin hybrid HDDs, or to merge hybrid technology into all Seagate HDDs to differentiate Seagate HDDs from competitors’ products, allowing Seagate to take slightly higher margins on all HDDs. Seagate chose the first path.

The net result is shipments of 10 million units since the 2010 introduction, for an average of 2.5 million per year, out of total annual HDD shipments of around 500 million units, or one half of one percent.

Continue reading more of Jim’s post here.

In his post, Jim raises some good points, including that HHDDs and SSHDs are still a fraction of the overall HDDs shipped on an annual basis. However, IMHO the annual growth rate has not been a flat average of 2.5 million; rather, it started at a lower rate and has increased year over year. For example, Seagate issued a press release back in the summer of 2011 saying it had shipped a million HHDDs in the year after their release. Also keep in mind that those HHDDs were focused on desktop workstations and in particular at gamers, among others.

The early HHDDs such as the Momentus XTs I began using in June 2010 only had read acceleration, which was better than plain HDDs but did not help on writes. Over the past couple of years there have been enhancements, including the newer generation also known as SSHDs, or Turbo drives as Seagate now calls them. These newer drives add write acceleration as well, with models for mobile/laptop, workstation and enterprise use, including higher-performance and higher-capacity versions. Thus my analysis has the growth on an accelerating curve rather than a linear one (e.g. an average of 2.5 million units per year).

Period       Units shipped per year   Running total units shipped
2010-2011    1.0 Million              1.0 Million
2011-2012    1.25 Million (est.)      2.25 Million (est.)
2012-2013    2.75 Million (est.)      5.0 Million (est.)
2013-2014    5.0 Million (est.)       10.0 Million

StorageIO estimates on HHDD/SSHD units shipped based on Seagate announcements

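The table's running totals are simple to verify; a few lines of Python reproduce them and show the accelerating (rather than flat) year-over-year pattern argued for above.

```python
# Quick arithmetic check of the StorageIO estimates in the table above: the
# yearly figures accelerate each year, yet still sum to the ~10 million
# units Seagate announced, unlike a flat 2.5 million-per-year average.
yearly = {
    "2010-2011": 1.00,   # millions of units (from Seagate's 2011 release)
    "2011-2012": 1.25,   # StorageIO estimate
    "2012-2013": 2.75,   # StorageIO estimate
    "2013-2014": 5.00,   # StorageIO estimate
}

running = 0.0
for period, units in yearly.items():
    running += units
    print(f"{period}: {units:.2f}M shipped, {running:.2f}M cumulative")
# Final cumulative figure is 10.0M, matching Seagate's announced total.
```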
However, IMHO there is more to the story beyond the number of HHDDs/SSHDs shipped, or whether they are accelerating in deployment versus growing at an average rate. Some of those perspectives are in my comments over on Jim Handy’s site, with an excerpt below.

In talking with IT professionals (e.g. what the vendors/industry call users/customers), they are generally not aware that these devices exist, or if they are aware of them, they only know what was available in the past (e.g. the consumer-class, read-optimized versions). I do talk with some who are aware of the newer-generation devices; however, their comments are usually tied to a lack of system integrator (SI) or vendor/OEM support, or to sole sourcing. There was also a focus on promoting the HHDDs to “gamers” and other power users as opposed to broader marketing efforts, and most of these IT people are not aware of the newer generation of SSHDs, or what Seagate is now calling “Turbo” drives.

When talking with VARs, there is a similar reaction: discussion about the lack of support for HHDDs or SSHDs from SI/vendor OEMs, or single-source supply concerns. Another common reaction is a lack of awareness of the current generation of SSHDs (e.g. those that do write optimization, as well as the enterprise-class versions).

When talking with vendors/OEMs, there is a general lack of awareness of the newer enterprise-class SSHDs/HHDDs that do write acceleration. Sometimes there is concern about how these would disrupt their “hybrid” SSD + HDD or tiering marketing stories and strategies, along with comments about single-source suppliers. I have also heard concerns about how long, or how committed, the drive manufacturers will remain focused on SSHDs/HHDDs, or whether this is just a gap filler for now.

Not surprisingly, when I talk with industry pundits, influencers and amplifiers (e.g. analysts, media, consultants, blogalysts), there is a reflection of all the above: a lack of awareness of what is available (not to mention a lack of experience), versus repeating what has been heard or read in the past.

IMHO, while there are some technology hurdles, the biggest issue and challenge is basic marketing and business development: generating awareness among the industry (e.g. pundits), vendors/OEMs, VARs and IT customers. That is of course assuming SSHDs/HHDDs are here to stay and not just a passing fad…

What about SSHD and HHDD performance on reads and writes?

What about the performance of today’s HHDDs and SSHDs, particularly those that can accelerate writes as well as reads?

Enterprise Turbo SSHD read and write performance (Exchange email)

Enterprise Turbo SSHD read and write performance (TPC-B database)

Enterprise Turbo SSHD read and write performance (TPC-E database)

Additional details and information about HHDDs/SSHDs, or Turbo drives as Seagate now refers to them, can be found in two StorageIO Industry Trends Perspective white papers (located here and here).

Where to learn more

Refer to the following links to learn more about HHDD and SSHD devices.
StorageIO Momentus Hybrid Hard Disk Drive (HHDD) Moments
Enterprise SSHD and Flash SSD Part of an Enterprise Tiered Storage Strategy

Part II: How many IOPS can a HDD, HHDD or SSD do with VMware?
2011 Summer momentus hybrid hard disk drive (HHDD) moment
More Storage IO momentus HHDD and SSD moments part I
More Storage IO momentus HHDD and SSD moments part II
New Seagate Momentus XT Hybrid drive (SSD and HDD)
Another StorageIO Hybrid Momentus Moment
SSD past, present and future with Jim Handy

Closing comments and perspectives

I continue to be bullish on hybrid storage solutions, from cloud to storage systems to hybrid storage devices. However, as with many technologies, just because something makes sense or is interesting does not mean it will be a near-term or long-term winner. My main concern with SSHDs and HHDDs is whether manufacturers such as Seagate and WD are serious about making them a standard feature in all drives, or see them simply as a near-term stop-gap solution.

What’s your take on, or experience with, using HHDDs and/or SSHDs?

Ok, nuff said (for now)

Cheers
Gs

Greg Schulz – Author Cloud and Virtual Data Storage Networking (CRC Press), The Green and Virtual Data Center (CRC Press) and Resilient Storage Networks (Elsevier)
twitter @storageio

All Comments, (C) and (TM) belong to their owners/posters, Other content (C) Copyright 2006-2014 StorageIO All Rights Reserved

read more

BlueBox Named “Silver Sponsor” of @DevOpsSummit | @BlueBox [#DevOps]

BlueBox bridges the chasm between development and infrastructure. Hosting providers are taking standardization and automation too far; for many app developers it does nothing but spawn mayhem and more work, as they have to figure out how their creations live on a pre-fab infrastructure solution full of constraints. Operations-as-a-Service is what BlueBox does: it utilizes tools such as OpenStack, EMC Razor and Opscode’s Chef, along with its own proprietary tools, to give developers the power to do the unorthodox things that most hosting providers shun.

read more

Larry Ellison Rocks the Oracle Boat

Larry Ellison turned 70 and has decided to turn over the CEO reins at Oracle. Safra Catz and Mark Hurd, both in their 50s, will function as a “Ms. Inside and Mr. Outside” as co-CEOs, at least for a while.

Serious reverberations will be felt within this highly competitive company and the highly competitive industry in which it makes its money.

Even while guiding his yacht to an America’s Cup title, Larry Ellison remained in firm control of the company he founded in 1977. He still has an ownership stake of about 20% of the company: 1 billion or so shares of Oracle stock worth about $40 billion. Who can imagine that he’ll be a docile, passive Chairman?

Yes, he is returning as Chairman, with Jeff Henley, currently in that role, moving aside to be Vice-Chairman. Ellison reports he will also serve as Chief Technology Officer. So it’s clear he’s not fading from the scene. But he will not be able to micromanage the company by any measure.

What Does It Mean?
Think of all of the very strong executives over the years who rose quickly and highly in Oracle, only to be banished from the kingdom and/or to start their own big companies. Ray Lane, Marc Benioff, and Tom Siebel spring immediately to mind (with Siebel Systems eventually coming back to the mother ship). There are dozens, if not hundreds, more.

How will today’s generation of willful executives develop and fare now that the lion has retreated?

I’ve already read reports about how Ellison’s decision—which was announced after the markets closed, of course—comes at a “difficult” or “momentous” time for the company. This is the era of cloud computing, Oracle was slow to the dance (with Ellison infamously mocking it a few years ago), and is now furiously developing strategies to maintain its grip as one of the key companies in enterprise IT.

The reality is that things have always been difficult for Oracle. In its early days, it had to fight IBM and a group of seeming equals—Informix, Sybase, and Ingres—for early dominance in the then-new era of relational databases. By cleverly marketing itself as the one company that worked on all platforms and by dint of a hyper-aggressive sales effort, Oracle emerged as the clear leader.

Then its stock got plastered in the early 90s when the company showed its first loss. The stock fell to $5 and Ellison himself lost billions upon billions in paper wealth.

I remember him saying a few years ago that he would encourage today’s generation of young and newly rich entrepreneurs to buy tangible things like land and buildings and traditional businesses. Paper wealth comes and goes, but tangible things tend to stay that way. He’s followed his own advice with several eye-popping real-estate purchases in recent years.

We all remember him saying a few years ago that he saw the future of Silicon Valley perhaps looking like Detroit, given that consolidation was going to extinguish most companies and the innovative spirit that drives them. Certainly Oracle has acquired numerous big companies over the past decade, PeopleSoft and Sun Microsystems among them.

Then there’s the current era of cloud. Oracle’s always been in a struggle, and today’s era is no different.

I’ve pinged a couple of Oracle execs for comment, but nothing has returned my way.

I have no idea what this move means for the future of Oracle, cloud, Silicon Valley, or the world, but I can imagine fistfights in the cafeteria as execs and divisions jockey for position.

Who knows, it might drive an era of incredible innovation at the company now that Larry’s not around to approve every comma in every speech and every line of code in every program.

Safra Catz is known as a hyper-competent operations person and Mark Hurd could sell sand in the Mojave. If they can continue to work splendidly together, as Ellison alleged today they have been doing, Oracle could not only be an enterprise gorilla but, dare we say, a cool one at that.

My question to Safra and Mark now that they’ve had a few minutes to adjust to the new job: what are you doing in the IoT? That’s your next struggle, and all the other big players are already staking some big claims in the biggest technology market ever.

read more

AT&T and Amazon Web Services buddy up with NetBond VPN service

Picture credit: Mike Mozart/Flickr

Recent stories that have arrived at CloudTech HQ have focused on the opportunity the cloud provides for telcos, and this one’s no different: AT&T has announced a partnership with Amazon Web Services (AWS) for AT&T NetBond℠, its network-enabled cloud (NEC) solution.

With NetBond, AT&T customers can utilise a VPN to connect to any cloud compute or IT service environment in AT&T’s partner ecosystem, bypassing the Internet completely. The partnership with AWS enables users to access business applications and information stored in Amazon’s cloud.

Melanie Posey, IDC research vice president, described the collaboration as a “likely game changer” and added: “The addition of AWS broadens AT&T’s already expansive NetBond ecosystem and will give customers highly secure, reliable, and on-demand connections to another key public cloud service provider.”

Current NetBond partners include Microsoft, Salesforce.com, VMware, IBM and Box, covering a large breadth of the cloud computing ecosystem.

AT&T’s particular cloud play is an interesting one. NetBond integrates with a customer’s existing VPN, meaning no additional equipment needs to be ordered; the technology sidesteps DDoS attacks by keeping traffic on AT&T’s private global network; and AT&T says users can save up to 60% on networking costs and increase performance by up to 50%.

The overall effect is an intriguing one as telcos begin to make their behemoth moves to providing cloud services. It’s always been there in the background, of course, but a litany of recent strategic shifts can’t just be a coincidence.

Last week comms provider CenturyLink announced expansion to China – always a good sign of growth – while Ericsson recently announced it had taken a majority stake in platform as a service (PaaS) provider Apcera, adding to its Ericsson Cloud System portfolio.

Telcos see the market opportunity; they’ve already got an expansive network and customer buy-in, and if they add cloud services on top it’s got all the ingredients of being a winning proposition.

For AT&T in particular, it’s been a long time coming. Back in 1993 the firm released a product called PersonaLink, which aimed to be an ‘electronic meeting place’ for people to share information and documents, and could differentiate between ‘dumb’ and ‘smart’ messages.

“You can think of our meeting place as the cloud,” the video stated. Did AT&T invent the cloud more than 20 years ago, as Salesforce CEO Marc Benioff tweeted back in May? Not quite – the technology came from now defunct tech firm General Magic, whose goal was to distribute the computing load evenly between bigger and smaller devices in the network.

But with a big telco network and a large partner ecosystem, AT&T might be coming good on that video’s promise.