Cloud has arrived. Everyone in the business, from the CEO to the customer relations manager, wants in on a computing model that promises to lower costs while delivering better service and greater efficiency. The business finally sees the potential of IT to add value, yet such high expectations do not come without risk of failure.
How can you increase the odds of success? By building a firm foundation based on clear communication with the business about their requirements. These conversations should be specific, detailed and, most important, collaborative. The following five steps outline a requirements-gathering process that brings the business and IT together.
Monthly Archives: April 2013
Rackspace Offers Developers Mobile Stacks
Having decided that mobile is the place to be, Rackspace is offering developers free purpose-built mobile cloud stacks based on OpenStack to help them “design, build, test, deploy and scale mobile apps” on its cloud.
The stacks are supposed to reduce complexity and save deployment time while Rackspace manages background operations.
The first stack is for PHP and includes LAMP plus the HTTP accelerator Varnish, the Memcached caching daemon, the PHP Memcache extension and Alternative PHP Cache (APC), all open source code.
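The post doesn't show the stack in action, but the caching layers it bundles follow the familiar cache-aside pattern. Below is a minimal sketch of that pattern against a Memcached node, written in Python with the pymemcache client as a stand-in for the PHP Memcache extension; the `fetch_user` database helper is hypothetical.

```python
# Cache-aside sketch against a Memcached node (a Python stand-in for the
# PHP Memcache extension in Rackspace's stack). `fetch_user` is hypothetical.
from pymemcache.client.base import Client

cache = Client(("127.0.0.1", 11211))  # Memcached's default port

def get_user(user_id, fetch_user):
    """Return a user record, consulting Memcached before the database."""
    key = f"user:{user_id}"
    cached = cache.get(key)             # None on a cache miss
    if cached is not None:
        return cached.decode("utf-8")
    record = fetch_user(user_id)        # fall back to the database
    cache.set(key, record, expire=300)  # keep hot data for five minutes
    return record
```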
Rackspace is also working on stacks for Ruby on Rails and Node.js back-ends.
Cloud Expo New York: Building Clouds for Complex IT Environments
Enterprises that are looking to deploy cloud architectures quickly run into challenges in dealing with complex requirements around resilience, availability, multi-tenant security and data gravity. A naïve “one-size-fits-all” approach to cloud building based on legacy infrastructure is inadequate for such environments, resulting in high costs, inefficiency and needless frustration.
In his general session at the 12th International Cloud Expo, Paul Rogers, chief development officer for GE’s Global Software Headquarters, will look at how cloud architects can leverage state-of-the-art colocation facilities, scale-out software-defined storage solutions based on Ethernet storage architectures, and modern open source cloud platforms such as OpenStack to build private data clouds. Adopting this approach allows customers to combine enterprise requirements such as security in multi-tenant environments, high availability and business continuity with the benefits of a cloud architecture.
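For a taste of what consuming such an OpenStack-based private cloud looks like, here is a minimal provisioning sketch using the openstacksdk Python client; the cloud profile, image, flavor and network names are placeholders of mine, not anything from the session.

```python
# Minimal server-provisioning sketch with the openstacksdk client.
# The cloud profile and the image/flavor/network names are placeholders.
import openstack

conn = openstack.connect(cloud="my-private-cloud")  # reads clouds.yaml

image = conn.compute.find_image("ubuntu-server")
flavor = conn.compute.find_flavor("m1.small")
network = conn.network.find_network("tenant-net")

server = conn.compute.create_server(
    name="demo-instance",
    image_id=image.id,
    flavor_id=flavor.id,
    networks=[{"uuid": network.id}],
)
server = conn.compute.wait_for_server(server)
print(server.status)  # ACTIVE once the instance is up
```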
Cloud Security and the Omnibus HIPAA
The new and enhanced HIPAA omnibus standard raises an interesting question with regard to cloud security and the shared responsibility model in IaaS clouds. Since the release of the HIPAA omnibus rule, we’ve received many questions about Business Associate Agreements (BAAs), and about how the responsibility actually splits between (for example) the cloud provider and an ISV providing a healthcare application in an IaaS environment.
Without getting into the details of what a “Business Associate Agreement” means, I’ll simply say that the updated regulation makes business associates (healthcare ISVs, and potentially the cloud providers themselves) of covered entities (e.g., clinics or hospitals) directly liable for compliance with certain requirements of the HIPAA privacy and security rules (read more about it in this excellent HIPAA survival guide post). In other words, the entire “food chain” (the cloud provider, the ISV, and any other business associates in the logical flow to the covered entity) should ideally sign a business associate agreement. But what is the practical meaning of such a requirement in an IaaS cloud environment? As one would expect, full compliance can be achieved only if all parties (business associates) enforce compliance where they can actually do so. The IaaS cloud provider, for example, will prove compliance at the physical and hypervisor levels, while the healthcare ISV will prove compliance for the guest OS, the healthcare application, and the PHI data stored in the cloud.
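One concrete way the ISV’s side of that split can play out (my illustration, not a prescription from the rule) is encrypting PHI before it ever lands on provider-managed storage. A minimal sketch with Python’s cryptography library, with key management deliberately left out of scope:

```python
# Illustrative only: an ISV encrypting PHI client-side before it is written
# to cloud storage, using the `cryptography` library's Fernet recipe.
# Key management (generation, rotation, KMS/HSM storage) is out of scope here.
from cryptography.fernet import Fernet

key = Fernet.generate_key()   # in practice: fetch from a managed key store
fernet = Fernet(key)

record = b'{"patient_id": "12345", "diagnosis": "..."}'
ciphertext = fernet.encrypt(record)   # safe to hand to the IaaS provider

# Later, inside the application's trust boundary:
plaintext = fernet.decrypt(ciphertext)
assert plaintext == record
```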
A Roadmap to High-Value Cloud Infrastructure
With the increasing prevalence and acceptance of the cloud as a viable alternative to on-premises IT, today’s IT organizations are faced with a wide range of options. In fact, had you just woken from a five-year slumber, you might find the available array of cloud service options quite daunting.
At just a moment’s notice, you can spin up pretty much anything, with Software as a Service (SaaS), Platform as a Service (PaaS) and Infrastructure as a Service (IaaS) offerings all readily available. Services can reside on public clouds in multi-tenant environments, private clouds within the four walls of an organization, community clouds shared between a limited set of tenants, or even a hybrid arrangement.
For the many IT organizations mired in maintaining server and storage infrastructure, IaaS appears a very attractive alternative to managing hardware in-house. While it’s still a rare organization that seeks to move all of its IT infrastructure to the cloud, there is a long list of benefits to strategically and selectively partitioning infrastructure using a hybrid strategy. Consider just a few: reduced capital expenses, reduced maintenance expenses, and avoidance of the often-dreaded refresh cycles that come due every three to five years.
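Those benefits are easy to frame as back-of-the-envelope arithmetic. The sketch below compares an owned server’s annualized cost against on-demand IaaS rental; every figure is an illustrative placeholder, not vendor pricing.

```python
# Back-of-the-envelope comparison of owned hardware vs. IaaS rental.
# All figures are illustrative placeholders, not real vendor pricing.
server_capex = 6000.0        # purchase price per server
refresh_years = 4            # typical 3-5 year refresh cycle
annual_maintenance = 900.0   # support contract, parts, power/cooling share

owned_annual = server_capex / refresh_years + annual_maintenance

iaas_hourly = 0.12           # on-demand instance rate
iaas_annual = iaas_hourly * 24 * 365

print(f"owned:  ${owned_annual:,.0f}/yr")
print(f"rented: ${iaas_annual:,.0f}/yr")
# The point is not which number wins, but that the owned figure also hides
# refresh-cycle labor and capacity planning that IaaS shifts to the provider.
```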
Choosing a cloud provider: The importance of compliance transparency
Looking beyond HIPAA, SOX or PCI-DSS compliance claims
The scary part about shopping for a cloud solution is that even if the managed services provider claims compliance, this doesn’t mean that they actually are compliant.
In fact, the provider may not even realise they are being misleading. Because regulatory compliance is too often left open to interpretation, your definition of HIPAA, SOX or PCI-DSS compliance might differ from your cloud provider’s.
This gap becomes even more critical as today’s information technology environments are being asked to house an expanding library of personal, private and sensitive data.
Whether you are aware of it or not, new regulations and industry standards are seemingly being created every day, meaning that your cloud provider may play a critical role in your regulatory auditing process.
The trick then becomes finding a provider who does more than offer the mere promise of …
The Answer to Data Scientist Scarcity Lies in Automation
If zettabytes of data exist, why is less than 1% of the world’s data being analyzed today? Seasoned entrepreneur and startup CEO Radhika Subramanian believes that organizations fail to analyze and gain value from Big Data because they take a services-centered approach. As the title of the session implies, Subramanian believes the data needs to do the talking, not armies of analysts searching and querying databases. Her company has developed high-speed, advanced algorithms to automate pattern detection for rapid, real-time discovery of the “unknown unknowns” in structured and unstructured data. Subramanian teams up with internationally renowned High-Performance Computing luminary Dr. David Bader to tackle Big Data’s biggest challenges. Together they ask, ‘What if you didn’t have to analyze data at all?’
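The session abstract doesn’t reveal the algorithms, but a generic illustration of automated pattern discovery, letting the data talk without analyst-written queries, is unsupervised anomaly detection. A minimal scikit-learn sketch, explicitly not Subramanian’s method:

```python
# Generic illustration of automated pattern discovery: an unsupervised
# model flags unusual records with no analyst-written queries or labels.
# This is NOT Subramanian's algorithm, just a scikit-learn stand-in.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(0)
normal = rng.normal(loc=0.0, scale=1.0, size=(1000, 4))  # routine records
odd = rng.normal(loc=6.0, scale=1.0, size=(5, 4))        # "unknown unknowns"
data = np.vstack([normal, odd])

model = IsolationForest(contamination=0.01, random_state=0).fit(data)
flags = model.predict(data)        # -1 marks anomalous rows
print(np.where(flags == -1)[0])    # indices worth a human's attention
```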
Cloud security: From hacking the mainframe to protecting identity
By Andi Mann, Vice President, Strategic Solutions at CA
Cloud computing, mobility, and the Internet of Things are leading us towards a more technology-driven world. In my last blog, I wrote about how the Internet of Things will change our everyday lives, but with these new technologies come new risks to the organization.
To understand how recent trends are shifting security, let’s revisit the golden age of hacking movies from the ‘80s and ‘90s. A recent post by Alexis Madrigal of The Atlantic sums up this era of Hollywood hackers by saying that “the mainframe was unhackable unless [the hackers] were in the room, in which case, it was simple.”
That’s not far off from how IT security was structured in those years. Enterprises secured data by keeping everything inside a corporate firewall and only granting access to employees within the perimeter. Typically, the perimeter extended as far …
See How 365 Command Manages Office 365 With Graphical Interface
365 Command for managing Office 365 subscriptions replaces the command line and scripting with an HTML5 GUI. Watch the video for a walkthrough:
Top Eight Application Performance Landmines
We have been blogging about the same problems and problem patterns we see while working with our customers over the past few years. There have always been the classic application performance landmines in areas such as inefficient database access, misconfigured frameworks, excessive memory usage, bloated web pages, and failure to follow common web performance best practices.
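The first of those landmines, inefficient database access, most often shows up as the N+1 query pattern. Here is a minimal, self-contained sketch (an invented schema on in-memory SQLite, not taken from the original post) of the landmine and its single-round-trip fix:

```python
# The classic N+1 database landmine, shown with an in-memory SQLite example.
# Schema and data are made up for illustration.
import sqlite3

db = sqlite3.connect(":memory:")
db.executescript("""
    CREATE TABLE orders (id INTEGER PRIMARY KEY, customer TEXT);
    CREATE TABLE items  (order_id INTEGER, sku TEXT);
    INSERT INTO orders VALUES (1, 'acme'), (2, 'globex');
    INSERT INTO items  VALUES (1, 'A-1'), (1, 'A-2'), (2, 'B-1');
""")

# Landmine: one query per order -- N+1 round trips as N grows.
orders = db.execute("SELECT id FROM orders").fetchall()
for (order_id,) in orders:
    db.execute("SELECT sku FROM items WHERE order_id = ?", (order_id,)).fetchall()

# Fix: a single joined query fetches everything in one round trip.
rows = db.execute(
    "SELECT o.id, i.sku FROM orders o JOIN items i ON i.order_id = o.id"
).fetchall()
print(rows)
```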
More than two years ago we posted summary blogs of the Top Server-Side Performance Problems and the Top 10 Client-Side Performance Problems to give operations, architects, testers and developers easy-to-consume best practices. We feel that it is time to provide an update to these best practices as new problem patterns have since come into play. We also want to cover more than just problems that happen within your application by broadening the scope across the entire Application Delivery Chain. This includes all components between your end user and your back-end systems, databases and third-party services. The following illustrates which components are involved and what the typical errors are along the delivery chain.