@ThingsExpo | Zero to a Connected Internet of Things (#IoT) Application

Historically, app development required developers to manage device functionality, the application environment, and application logic. Today, new IoT-focused platforms are emerging that arm developers with cloud-based connectivity and communications, development, monitoring, management, and analytics tools. In her session at Internet of @ThingsExpo, Seema Jethani, Director of Product Management at Basho Technologies, will explore how to rapidly prototype using IoT cloud platforms and how to choose the right platform to match application requirements, security and privacy needs, data management capabilities, and development tools.

read more

@Gridstore Named «Exhibitor» of @CloudExpo Silicon Valley [#Cloud]

SYS-CON Events announced today that Gridstore™, the leader in software-defined storage (SDS) purpose-built for Windows Servers and Hyper-V, will exhibit at SYS-CON’s 15th International Cloud Expo®, which will take place on November 4–6, 2014, at the Santa Clara Convention Center in Santa Clara, CA.
Gridstore™ is the leader in software-defined storage purpose-built for virtualization and designed to accelerate applications in virtualized environments. Using its patented Server-Side Virtual Controller™ Technology (SVCT) to eliminate the I/O blender effect and accelerate applications, Gridstore delivers vmOptimized™ Storage that self-optimizes to each application or VM across both virtual and physical environments. Leveraging a grid architecture, Gridstore delivers the first end-to-end storage QoS to ensure that the performance of the most important apps or VMs is never compromised. The storage grid, which uses Gridstore's performance-optimized or capacity-optimized nodes, starts with as few as three nodes and can then grow one or more nodes at a time to deliver a cost-effective scale-as-you-grow solution. Headquartered in Mountain View, CA, Gridstore's products and services are available through a global network of value-added resellers.

read more

@Vormetric To Present #BigData and #Cloud At @CloudExpo Silicon Valley

Cloud and Big Data present a unique dilemma: embracing the benefits of these new technologies while maintaining the security of your organization's assets. When an outside party owns, controls, and manages your infrastructure and computational resources, how can you be assured that sensitive data remains private and secure? How do you best protect data in mixed-use cloud and Big Data infrastructures? Can you still satisfy the full range of reporting, compliance, and regulatory requirements?
In his session at 15th Cloud Expo, Derek Tumulak, Vice President of Product Management at Vormetric, will discuss how to address data security in cloud and Big Data environments so that your organization isn’t next week’s data breach headline.

read more

Understanding APM on the Network

In Part VI, we dove into the Nagle algorithm – perhaps (or hopefully) something you'll never see. In Part VII, we get back to "pure" network and TCP roots as we examine how the TCP receive window interacts with WAN links.
Each node participating in a TCP connection advertises its available buffer space using the TCP window size field. This value identifies the maximum amount of data a sender can transmit without receiving a window update via a TCP acknowledgement; in other words, this is the maximum number of “bytes in flight” – bytes that have been sent, are traversing the network, but remain unacknowledged. Once the sender has reached this limit and exhausted the receive window, the sender must stop and wait for a window update.
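
To make the arithmetic concrete, here is a minimal sketch (not from the article; the window size and RTT values are illustrative) of how a fixed receive window caps throughput on a high-latency WAN link:

```python
# Illustrative sketch: window-limited TCP throughput.
# With a fixed receive window, a sender can have at most window_bytes
# unacknowledged ("in flight"), so throughput is capped at window / RTT.

def window_limited_throughput(window_bytes: int, rtt_seconds: float) -> float:
    """Maximum achievable throughput (bytes/sec) for a given receive window and RTT."""
    return window_bytes / rtt_seconds

def required_window(bandwidth_bps: float, rtt_seconds: float) -> float:
    """Bandwidth-delay product: the window needed to keep a link of the given bandwidth full."""
    return (bandwidth_bps / 8) * rtt_seconds

# Example: a 64 KB window on a 50 ms WAN link
window = 64 * 1024
rtt = 0.050
print(f"Throughput cap: {window_limited_throughput(window, rtt) * 8 / 1e6:.1f} Mbps")
# -> roughly 10 Mbps, no matter how fast the underlying link is

# Window needed to fill a 100 Mbps link at the same RTT
print(f"Window needed: {required_window(100e6, rtt) / 1024:.0f} KB")
# -> ~610 KB, far larger than the classic 64 KB maximum without window scaling
```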

read more

@ThingsExpo | Solgenia To Discuss #BigData and Internet of Things [#IoT]

Working with Big Data is challenging, especially when decision makers depend on market insights and intelligence from your data but don't have quick access to it or find it unusable. In their session at 15th Cloud Expo, Ian Khan, Global Strategic Positioning & Brand Manager at Solgenia; Zel Bianco, President, CEO and Co-Founder of Interactive Edge of Solgenia; and Ermanno Bonifazi, CEO & Founder at Solgenia, will discuss how revolutionary cloud-based BI, along with mobile analytics, is already changing the way organizations rely on data for decisions that affect operations as well as strategy. Also learn how the combination of predictive analytics, data modeling, mobile, and cloud BI is rapidly changing the key decision-making mechanisms in the enterprise.

read more

Solgenia to Exhibit at Cloud Expo Silicon Valley

SYS-CON Events announced today that Solgenia, the global market leader in Cloud Collaboration and Cloud Infrastructure software solutions, will exhibit at SYS-CON’s 15th International Cloud Expo®, which will take place on November 4–6, 2014, at the Santa Clara Convention Center in Santa Clara, CA.
Solgenia is the global market leader in Cloud Collaboration and Cloud Infrastructure software solutions. Designed to "Bridge the Gap" between personal and professional social, mobile, and cloud user experiences, its solutions help large and medium-sized organizations dramatically improve productivity, reduce collaboration costs, and increase overall enterprise value by bringing collaboration and infrastructure solutions to the cloud.

read more

X as a Service (XaaS): What the Future of Cloud Computing Will Bring

By John Dixon, Consulting Architect

Last week, Chris Ward and I hosted a breakout session at Cloudscape 2014, GreenPages’ annual customer Summit. We spoke about cloud service models today (IaaS, PaaS, and SaaS), as well as tomorrow’s models — loosely defined as XaaS, or Anything-as-a-Service. In this post, I’ll discuss XaaS: what it is and why you might want to consider using it.

First, what is XaaS? Is this just more marketing fluff? Why do we need to define yet another model to fully describe cloud services? I contend that XaaS is a legitimate term, and that it is useful for describing a new type of cloud service: one that bundles IaaS, PaaS, and SaaS, neatly delivered in a single package. Such packages are intended to fully displace the delivery of a commodity IT service. My favorite example of XaaS is desktop as a service, or DaaS. A service provider might assemble a DaaS product from the following:

  • Servers to run Virtual Desktop Infrastructure from a provider such as Terremark (IaaS)
  • An office suite such as Microsoft Office365 (SaaS)
  • Patching and maintenance services
  • A physical endpoint such as a Chromebook or thin client device

The organization providing DaaS would design, assemble, and manage the product out of best-of-breed offerings. The customer would pay one fee for the use of the product and have the all-important "one throat to choke" for its delivery. At GreenPages, we see the emergence of XaaS (such as DaaS) as a natural evolution of the market for cloud services. This sort of behavior is nothing new in other competitive industries. Take the auto industry (another one of my favorite examples). When you purchase a car, you are buying a single product from one manufacturer. That product is assembled from pieces provided by many other companies: the paint, the brake system, the interior, the tires, the navigation system, to name a few. GM and Ford, for example, don't manufacture any of those items themselves (though they did in days past). They source those parts from specialist providers. The brakes come from Brembo. The interior is provided by Lear Corp. The tires are from Goodyear. The navigation system is produced by Harman. The auto manufacturer specializes in the design, marketing, assembly, and maintenance of the end product, just as a service provider does in the case of XaaS. When you buy an XaaS product from a provider, you are purchasing a single product with guaranteed performance and a single price. You have one bill to pay. And you often purchase XaaS on a subscription basis, sometimes with $0 of capital investment.

You can download John’s “The Evolution of Your Corporate IT Department” eBook here

So, second, why would you want to use XaaS? Let's go back to our DaaS example. At GreenPages, we think of XaaS as a product that can completely displace a commodity service delivered by corporate IT today. What are commodity services? I like to think of them as the set of services that every IT department delivers to its internal customers. In my mind, commodity IT services deliver little or no value to the top line (revenue) or bottom line (profit) of the business. Desktops and email are my favorite commodity services. Increased investment in email or the desktop environment does not translate into increases in top-line revenue or bottom-line profit, and that investment includes time as well as money. So why have an employee spend time maintaining an email system if it doesn't provide any value to the business? Two key questions:

  1. Does investment in the service return measurable value to the business?
  2. In the market for cloud services, can your IT department compete with a specialist in delivering the service?

When looking at a particular service, if your answer is "No" to both questions, you are likely dealing with a commodity service; email and desktops fit that description. Coming back to the original question: you may want to source commodity services to specialist providers in order to shift investment (time and money) toward services that do return value to the business.

We’ll expand this discussion into the role of corporate IT in a future post. For now though, what do you think of XaaS? Would you use it to replace one of your commodity services? Maybe you already do. I’m interested to hear from you about which services you have chosen to source to specialist providers.

@CloudExpo | Efficient Multi-Vendor #Cloud Helps Break Incipient Monopolies

Think of a cloud provider. I'd bet that for the majority of people reading this article, the first that comes to mind is AWS. Amazon Web Services was a trailblazer in the cloud space, and it still leads adoption rates at all levels of the market, from SMBs to multinationals. In some ways that's great: Amazon constantly innovates and refines its product. But at the same time, it's not entirely healthy for a market to be dominated by one vendor. Google Compute Engine is snapping at Amazon's heels, but ideally we'd like to see a flourishing market with many competitors: a market in which the word "cloud" doesn't immediately bring one vendor to mind.

read more

Harnessing the power of Google’s cloud: Google BigQuery Analytics book extract

This is an edited extract from Google BigQuery Analytics, by Jordan Tigani and Siddartha Naidu, published August 2014 by Wiley, £30.99.

When you run your queries via BigQuery, you put a giant cluster of machines to work for you. Although the BigQuery clusters represent only a small fraction of Google’s global fleet, each query cluster is measured in the thousands of cores. When BigQuery needs to grow, there are plenty of resources that can be harnessed to meet the demand.

If you want to, you could probably figure out the size of one of BigQuery’s compute clusters by carefully controlling the size of data being scanned in your queries. The number of processor cores involved is in the thousands, the number of disks in the hundreds of thousands. Most organizations don’t have the budget to build at that kind of scale just to run some queries over their data. The benefits of the Google cloud go beyond the amount of hardware that is used, however. A massive datacenter is useless unless you can keep it running.
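
As a rough, hypothetical illustration of that back-of-the-envelope idea (the per-disk read rate below is an assumption for the sake of the example, not a published BigQuery figure), one could divide the bytes a query scans by its elapsed time and an assumed per-disk throughput:

```python
# Hypothetical sketch: estimate the parallelism behind a full-scan query from
# its size and elapsed time. The per-disk read rate is an assumed figure.

def estimate_parallel_disks(bytes_scanned: float,
                            elapsed_seconds: float,
                            per_disk_bytes_per_sec: float = 100e6) -> float:
    """Disks that must have been read in parallel to sustain the observed scan rate."""
    aggregate_rate = bytes_scanned / elapsed_seconds
    return aggregate_rate / per_disk_bytes_per_sec

# Example: a 2 TB table scanned in 5 seconds
print(estimate_parallel_disks(2e12, 5.0))   # ~4000 disks reading concurrently
```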

If you have a cluster of 100,000 disks, some reasonable number of those disks is going to fail every day. If you have thousands of servers, some of the power supplies are going to die every day. Even if you have highly reliable software running on those servers, some of them are going to crash every day.

To keep a datacenter up and running requires a lot of expertise and know-how. How do you maximize the life of a disk? How do you know exactly which parts are failing? How do you know which crashes are due to hardware failures and which are due to software? Moreover, you need software that is written to handle failures at any time and in any combination. Running in Google's cloud means that Google worries about these things so that you don't have to.

There is another key factor to the performance of Google’s cloud that some of the early adopters of Google Compute Engine have started to notice: It has an extremely fast network. Parallel computation requires a lot of coordination and aggregation, and if you spend all your time moving the data around, it doesn’t matter how fast your algorithms are or how much hardware you have. The details of how Google achieves these network speeds are shrouded in secrecy, but the super-fast machine-to-machine transfer rates are key to making BigQuery fast.

Cloud data warehousing

Most companies are accustomed to storing their data on premises or in leased datacenters, on hardware that they own or rent. Fault tolerance is usually handled by adding redundancy within a machine, such as extra power supplies, RAID disk controllers, and ECC memory. All these things add to the cost of the machine but don't actually distance you from the consequences of a hardware failure. If a disk goes bad, someone has to go to the datacenter, find the rack with the bad disk, and swap it out for a new one.

Cloud data warehousing offers the promise of relieving you of the responsibility of caring about whether RAID-5 is good enough, whether your tape backups are running frequently enough, or whether a natural disaster might take you offline completely. Cloud data warehouses, whether Google’s or a competitor’s, offer fault-tolerance, geographic distribution, and automated backups.

Ever since Google made the decision to go exclusively with scale-out architectures, it has focused on making its software handle frequent hardware failures. There are stories about Google teams that run mission-critical components and don't even bother to free memory, because the number of bugs and performance problems associated with memory management is too high. Instead, they just let the process run out of memory and crash, at which point it is automatically restarted. Because the software has been designed not only to handle but also to expect that type of failure, a large class of errors is virtually eliminated.

For the user of Google’s cloud, this means that the underlying infrastructure pieces are extraordinarily failure-resistant and fault-tolerant. Your data is replicated to several disks within a datacenter and then replicated again to multiple datacenters. Failure of a disk, a switch, a load balancer, or a rack won’t be noticeable to anyone except a datacenter technician. The only kind of hardware failure that would escalate to the BigQuery operations engineers would be if someone hit the big red off button in a datacenter or if somebody took out a fiber backbone with a backhoe. This type of failure still wouldn’t take BigQuery down, however, since BigQuery runs in multiple geographically distributed datacenters and will fail over automatically.

Of course, this is where we have to remind you that all software is fallible. Just because your data is replicated nine ways doesn’t mean that it is completely immune to loss. A buggy software release could cause data to be inadvertently deleted from all nine of those disks. If you have critical data, make sure to back it up.
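
As a minimal sketch of that advice (the project, dataset, table, and bucket names below are placeholders, not from the book), a table can be exported to Cloud Storage with the google-cloud-bigquery client:

```python
# Minimal backup sketch: export a BigQuery table to Cloud Storage.
# The project, dataset, table, and bucket names are placeholders.
from google.cloud import bigquery

client = bigquery.Client(project="my-project")

# A wildcard URI lets BigQuery shard large tables across multiple files.
destination_uri = "gs://my-backup-bucket/sales/backup-*.avro"

job_config = bigquery.ExtractJobConfig(destination_format="AVRO")
extract_job = client.extract_table(
    "my-project.my_dataset.sales",  # fully qualified table ID
    destination_uri,
    job_config=job_config,
)
extract_job.result()  # wait for the export to finish
print("Backup written to", destination_uri)
```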

Many organizations are understandably reluctant to move their data into the cloud. It can be difficult to have your data in a place where you don’t control it. If there is data loss, or an outage, all you can do is take your business elsewhere—there is no one except support staff to yell at and little you can do to prevent the problem from happening in the future.

That said, the specialized knowledge and operational overhead required to run your own hardware are substantial and only growing. The advantages of scale that Google or Amazon enjoys only get bigger as they get better at managing their datacenters and improving their data warehousing techniques. It seems likely that the days when most companies run their own IT hardware are numbered.

This is an edited extract from Google BigQuery Analytics, by Jordan Tigani and Siddartha Naidu, published August 2014 by Wiley, £30.99.

@QuantumCorp To Present At @CloudExpo Silicon Valley

More and more file-based and machine-generated data is being created every day, causing exponential data and content growth and creating a management nightmare for IT managers. What data centers really need to cope with this growth is a purpose-built tiered archive appliance that enables users to establish a single storage target for all of their applications – an appliance that intelligently places and moves data to and between storage tiers based on user-defined policies.
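
As a purely hypothetical illustration of what such a user-defined placement policy might look like (this sketch is not based on any vendor's actual product; all names and thresholds are invented), consider a simple rule that maps files to tiers by age and access frequency:

```python
# Hypothetical sketch of a user-defined tiering policy (illustrative only).
from dataclasses import dataclass

@dataclass
class FileStats:
    path: str
    days_since_access: int
    accesses_last_30_days: int

def choose_tier(f: FileStats) -> str:
    """Map a file to a storage tier using simple, user-tunable thresholds."""
    if f.accesses_last_30_days > 10 or f.days_since_access <= 7:
        return "performance"   # e.g. flash or fast disk
    if f.days_since_access <= 90:
        return "capacity"      # e.g. dense disk
    return "archive"           # e.g. tape or object store

print(choose_tier(FileStats("scan_001.raw", days_since_access=200, accesses_last_30_days=0)))
# -> "archive"
```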

read more