In the old world of IT, if you didn’t have hardware capacity or the budget to buy more, your project was dead in the water. Budget constraints can leave some of the best, most creative and ingenious innovations on the cutting room floor. It’s a true dilemma for developers and innovators: why spend the time creating when a project could be abandoned in a blink? That was the old world. In the new world of IT, developers rule. They have access to resources they can spin up instantly.
A hybrid cloud ignites innovation and empowers developers to focus on what they need. It blends the best of all worlds: public cloud, private cloud and dedicated servers, combined to fit the needs of developers and offer the ideal environment for each app and workload without the constraints of a one-size-fits-all cloud.
BigData-Startups Named “Association Sponsor” of Cloud Expo & Big Data Expo
SYS-CON Events announced today that BigData-Startups, the online Big Data knowledge platform, has been named “Association Sponsor” of SYS-CON’s 12th International Cloud Expo and 3rd International Big Data Expo, which will take place on June 10–13, 2013, at the Javits Center in New York City, New York, and the 13th International Cloud Expo and 4th International Big Data Expo, which will take place on November 4–7, 2013, at the Santa Clara Convention Center in Santa Clara, CA.
BigData-Startups is the ultimate Big Data platform, giving advice to organizations that want to develop a Big Data strategy.
Forecasting the Cloud Market: How the Sausage Is Made
Cloud Computing is not really a market or set of markets at all. It’s part of a paradigm-shifting trend that is reinventing how organizations large and small purchase, provision, utilize, pay for, and think about IT resources. Eventually, everything will be the Cloud, which from the market sizing perspective means that nothing will be the Cloud.
Group vendors into separate, distinct markets (or do so for individual products, when vendors compete in multiple markets). Assign each of your analysts to interview each vendor once a year in the analyst’s assigned market, in the hope that the vendor will reveal its true revenues for the year in question. Put the resulting numbers in a spreadsheet, add them up, and you have a reasonable estimate of the size of the given market.
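As a toy illustration of that spreadsheet exercise (every vendor name and revenue figure below is invented), the arithmetic itself is nothing more than group-and-sum:

```python
# Toy market-sizing spreadsheet: group vendor revenue estimates by market,
# then total each group. All names and figures are made up for illustration.
from collections import defaultdict

# (vendor, market, estimated annual revenue in $M)
estimates = [
    ("Vendor A", "IaaS", 420.0),
    ("Vendor B", "IaaS", 185.0),
    ("Vendor C", "PaaS", 90.0),
    ("Vendor A", "PaaS", 35.0),   # a vendor competing in multiple markets
]

market_size = defaultdict(float)
for vendor, market, revenue in estimates:
    market_size[market] += revenue

for market, total in sorted(market_size.items()):
    print(f"{market}: ${total:,.0f}M")
```

The addition, of course, is the easy part; the quality of the vendor-reported numbers is what makes or breaks the estimate.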
Author Thomas Erl to Attend Launch of New Cloud Book at Cloud Expo NY
One of the top-selling IT authors of the past decade will be making a special appearance on June 11 at the 12th International Cloud Expo at the Javits Center in New York City. Thomas Erl, author and co-author of eight books and series editor of the Prentice Hall Service Technology Series from Thomas Erl, will be participating with fellow author Ricardo Puttini in the official launch ceremony for their new title Cloud Computing: Concepts, Technology & Architecture.
This hardcover textbook, which was also co-authored by Zaigham Mahmood and received significant contributions from Amin Naserpour, will be the first to document cloud computing from a purely academic perspective, focusing on well-defined concepts, models, technology mechanisms, and technology architectures, all from an industry-centric and vendor-neutral point of view. The book includes over 260 figures, covers 29 cloud architectures and 20 cloud technology mechanisms, and has already been endorsed by members of Cisco, Oracle, IBM, Microsoft, HP, Layer 7, Cognizant, Red Hat, Capgemini, Accenture, Wipro and several other organizations.
Huh? What’s the Network Have to Do with It?
By Nate Schnable, Sr. Solutions Architect
Having been in this field for 17 years, I am still amazed that people tend to forget about the network. Everything a user accesses on their device that isn’t installed or stored locally depends on the network more than on any other element of the environment. The network is responsible for the quick and reliable transport of data, which means the user experience with remote files and applications depends almost completely on it.
However, this isn’t always obvious to everyone, so people rarely ask for network-related services; they simply aren’t aware that the network is the cause of their problems. Whether it is a storage, compute, virtualization or IP telephony initiative, all of these types of projects rely heavily on the network to function properly. In fact, the network is the only element of a customer’s environment that touches every other component. Its stability can make or break a project’s success and the all-important user experience.
In a VoIP initiative we have to consider, among many other things, that proper QoS policies are set up (so let’s hope you are not running on dumb hubs). Power over Ethernet (PoE) for the phones should be available, unless you want to use power bricks or some type of mid-span device (yuck). I used to work for a Fortune 50 insurance company, and one day an employee decided to plug both of the ports on their phone into the network, thinking it would make the experience even better. Not so much: they brought down the whole environment. We made some changes after that to keep it from happening again!
In a disaster recovery project we have to look at the distances, and the resulting latencies, between locations. What is the bandwidth, and how much data do you need to back up? Do we have Layer 2 handoffs between sites, or is it more of a traditional Layer 3 site-to-site connection?
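To make the bandwidth question concrete, here is a quick back-of-the-envelope estimate; the 10 TB change set, 1 Gbps link and 70% effective utilization are illustrative assumptions, not recommendations:

```python
# Back-of-the-envelope replication window estimate.
# The data volume, link speed and utilization are illustrative assumptions.
data_tb = 10                      # data to move per backup cycle, in terabytes
link_gbps = 1                     # WAN link speed in gigabits per second
efficiency = 0.7                  # effective utilization after protocol overhead

data_bits = data_tb * 1e12 * 8    # terabytes -> bits
seconds = data_bits / (link_gbps * 1e9 * efficiency)
print(f"~{seconds / 3600:.1f} hours to move {data_tb} TB over {link_gbps} Gbps")
# ~31.7 hours -- which clearly will not fit inside a nightly backup window.
```

Numbers like these are what drive decisions about link upgrades, WAN optimization, or shipping only incremental changes between sites.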
If we are implementing a new iSCSI SAN, do we need ten gig or one gig? Do your switches support jumbo frames and flow control? Hope that your iSCSI switches are truly stackable, because spanning tree could leave some of those paths redundant but not active.
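The same kind of sanity check answers the one-gig-versus-ten-gig question; the IOPS and block size below are purely illustrative:

```python
# Rough iSCSI bandwidth check: does the workload fit on a single 1 GbE link?
# The IOPS and average I/O size are illustrative assumptions.
iops = 8000                 # expected peak IOPS from the hosts
block_kb = 64               # average I/O size in kilobytes

throughput_mbps = iops * block_kb * 8 / 1000   # megabits per second
print(f"Peak demand: ~{throughput_mbps:,.0f} Mbps")
# ~4,096 Mbps -- well past a single 1 GbE link, so plan for 10 GbE
# (or multiple 1 GbE paths with MPIO), plus jumbo frames and flow control.
```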
I was reading the other day that sales of smartphones and tablets will reach approximately 1.2 billion in 2013. Some of these will most certainly end up on your wireless networks. How to manage that is definitely a topic for another day.
In the end it just makes sense that you really need to consider the network implications before jumping into almost any type of IT initiative. Just because those green lights are flickering doesn’t mean it’s all good.
To learn more about how GreenPages Networking Practice can help your organization, fill out this form and someone will be in touch with you shortly.
Big Data Without Security = Big Risk
Guest Post by C.J. Radford, VP of Cloud for Vormetric
Big Data initiatives are heating up. From financial services and government to healthcare, retail and manufacturing, organizations across most verticals are investing in Big Data to improve the quality and speed of decision making as well as enable better planning, forecasting, marketing and customer service. It’s clear to virtually everyone that Big Data represents a tremendous opportunity for organizations to increase both their productivity and financial performance.
According to Wipro, the leading regions taking on Big Data implementations are North America, Europe and Asia. To date, organizations in North America have amassed over 3,500 petabytes (PBs) of Big Data, organizations in Europe over 2,000 PBs, and organizations in Asia over 800 PBs. And we are still in the early days of Big Data: last year was all about investigation, and this year is about execution. Given this, it’s widely expected that the global stockpile of data used for Big Data will continue to grow exponentially.
Despite all the goodness that can stem from Big Data, one has to consider the risks as well. Big Data confers enormous competitive advantage on organizations able to quickly analyze vast data sets and turn them into business value, yet it can also put sensitive data at risk of a breach or of violating privacy and compliance requirements. Big Data security is fast becoming a front-burner issue for organizations of all sizes. Why? Because Big Data without security = Big Risk.
The fact is, today’s cyber attacks are getting more sophisticated and attackers are changing their tactics in real time to get access to sensitive data in organizations around the globe. The barbarians have already breached your perimeter defenses and are inside the gates. For these advanced threat actors, Big Data represents an opportunity to steal an organization’s most sensitive business data, intellectual property and trade secrets for significant economic gain.
One approach used by these malicious actors to steal valuable data is by way of an Advanced Persistent Threat (APT). APTs are network attacks in which an unauthorized actor gains access to information by slipping in “under the radar” somehow. (Yes, legacy approaches like perimeter security are failing.) These attackers typically reside inside the firewall undetected for long periods of time (an average of 243 days, according to Mandiant’s most recent Threat Landscape Report), slowly gaining access to and stealing sensitive data.
Given that advanced attackers are already using APTs to target the most sensitive data within organizations, it’s only a matter of time before attackers will start targeting Big Data implementations. Since data is the new currency, it just makes sense for attackers to go after Big Data implementations because that’s where big value is.
So, what does all this mean for today’s business and security professionals? It means that when implementing Big Data, they need to take a holistic approach and ensure the organization can benefit from the results of Big Data in a manner that doesn’t negatively affect the risk posture of the organization.
The best way to mitigate risk of a Big Data breach is by reducing the attack surface, and taking a data-centric approach to securing Big Data implementations. These are the key steps:
Lock down sensitive data no matter the location.
The concept is simple: ensure your data is locked down regardless of whether it’s in your own data center or hosted in the cloud. This means you should use advanced file-level encryption for structured and unstructured data with integrated key management. If you’re relying upon a cloud service provider (CSP) and consuming Big Data as a service, it’s critical to ensure that your CSP is taking the necessary precautions to lock down sensitive data. If your cloud provider doesn’t have the capabilities in place or feels data security is your responsibility, ensure your encryption and key management solution is architecturally flexible enough to protect data both on-premise and in the cloud.
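As a minimal sketch of what file-level encryption looks like in application code (this uses Python’s cryptography package and a hypothetical file name purely for illustration; commercial offerings add centralized key management, policy enforcement and auditing on top):

```python
# Minimal file-level encryption sketch using the "cryptography" package.
# A real deployment would fetch keys from a central key manager rather than
# generating and keeping them alongside the data.
from cryptography.fernet import Fernet

key = Fernet.generate_key()        # in practice: retrieved from a key manager
fernet = Fernet(key)

# "customer_records.csv" is a hypothetical file used for this example.
with open("customer_records.csv", "rb") as f:
    ciphertext = fernet.encrypt(f.read())

with open("customer_records.csv.enc", "wb") as f:
    f.write(ciphertext)

# Only holders of the key can recover the plaintext.
plaintext = fernet.decrypt(ciphertext)
```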
Manage access through strong policies.
Access to Big Data should only be granted to those authorized end users and business processes that absolutely need to view it. If the data is particularly sensitive, it is a business imperative to have strong policies in place to tightly govern access. Fine-grained access control is essential, including the ability to block access by even IT system administrators (they may need to do things like back up the data, but they don’t need full access to that data as part of their jobs). Blocking access to data by IT system administrators becomes even more crucial when the data is located in the cloud and is not under an organization’s direct control.
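A toy example of what such role-aware, fine-grained checks might look like in application code (the roles and rules here are invented for illustration; enterprise products enforce this at the file-system or data-platform layer):

```python
# Toy policy check: even privileged IT roles are denied data access unless
# explicitly granted. Roles and permitted actions are invented for illustration.
POLICY = {
    "analytics_service": {"read"},    # business process that genuinely needs the data
    "fraud_analyst":     {"read"},
    "sysadmin":          {"backup"},  # can back up the data, but not read its contents
}

def is_allowed(role: str, action: str) -> bool:
    return action in POLICY.get(role, set())

print(is_allowed("fraud_analyst", "read"))   # True
print(is_allowed("sysadmin", "read"))        # False -- admins don't see the data
print(is_allowed("sysadmin", "backup"))      # True
```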
Ensure ongoing visibility into user access to the data and IT processes.
Security Intelligence is a “must have” when defending against APTs and other security threats. The intelligence gained can inform what actions to take to safeguard and protect what matters most: an organization’s sensitive data. End-user and IT processes that access Big Data should be logged and reported to the organization on a regular basis. And this level of visibility must apply whether your Big Data implementation is within your own infrastructure or in the cloud.
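In code, the corresponding habit is simply to record every access decision in an audit trail that your security-intelligence tooling can consume; here is a minimal sketch that builds on the hypothetical policy check above (the log file name is an assumption):

```python
# Minimal audit-logging sketch: record who touched which data set, when, and
# whether access was granted, so unusual patterns can be reported on regularly.
import logging

logging.basicConfig(filename="bigdata_access.log",   # hypothetical log sink
                    format="%(asctime)s %(message)s", level=logging.INFO)
audit = logging.getLogger("bigdata.audit")

def audited_access(user: str, dataset: str, action: str, allowed: bool) -> None:
    audit.info("user=%s dataset=%s action=%s allowed=%s",
               user, dataset, action, allowed)

audited_access("fraud_analyst", "transactions_2013", "read", True)
audited_access("sysadmin", "transactions_2013", "read", False)
```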
To effectively manage that risk, the bottom line is that you need to lock down your sensitive data, manage access to it through policy, and ensure ongoing visibility into both user and IT processes that access your sensitive data. Big Data is a tremendous opportunity for organizations like yours to reap big benefits, as long as you proactively manage the business risks.
You can follow C.J. Radford on Twitter @CJRad.
Part 1 | Understanding the Impact of IT on Business
Part 1 of a two-part series looking at the journey enterprise IT departments take as they increasingly seek to understand how IT infrastructure performance affects application performance and business services.
As a product manager at Netuitive, I’m often put in a position to explain how my product works. This question usually refers not just to the nuts and bolts of the technology, but also to the more specific question: “How do I make it work?” To get to the heart of the answer, you need to understand the underpinnings of today’s monitoring solutions and why most of them don’t represent a complete solution.
To help illustrate this, I’ll look at the problem from the perspective of Fred, an operations manager for “Acmecorp.” Fred is responsible for keeping Acmecorp’s key E-Commerce platform, BuyThis.com, up and performing under stringent 24×7 SLAs.
Aplos Updates Cloud-based Accounting for Non-Profits
Aplos Software has released Aplos Accounting 3.0, the latest version of its cloud-based fund accounting software for nonprofits and churches.
VMware sees software defined data centre as the future for IT as a service
Roy Illsley, Principal Analyst, Ovum IT
The recent VMware Forum 2013 in London provided a platform for VMware EMEA chief technologist, Joe Baguley, to reveal how VMware is addressing the challenges of transforming IT to an as-a-service paradigm. This transformation is based on three key pillars.
First, it is about making the entire data center more agile; the software-defined data center (SDDC) is the vehicle that VMware sees as the major disruptive technology enabling IT to be delivered in this way. Second, it is about open standards to support the hybrid cloud. Finally, it is about the changing way in which customers access these services.
For VMware this signals a shift from a virtual desktop policy to a multi-device strategy that supports the concept of enablement, not control. Ovum believes that this shift does not represent a radical rebirth for VMware, but instead is the result …
Cloud Expo New York | Breaking Out: An Introduction to OpenStack Cells
OpenStack Cells is one of the most anticipated features in Grizzly, the seventh release of the open source software, which offers more block storage options and scalability. Cells has been running in production at Rackspace for more than a year.
In his session at the 12th International Cloud Expo, Wayne Walls, OpenStack Developer Advocate at Rackspace Hosting, will discuss Nova cells and how they are changing the way you design your cloud applications and infrastructure. He will explain how OpenStack Cells adds flexibility to application and infrastructure design, how it is similar to “zones” at AWS, and how it helps enable the hybrid cloud and allows specialized workload distribution.