In previous articles I predicted that wearable technology would be powered by lightweight operating systems, citing Samsung’s decision to go with Tizen instead of Android, a decision apparently based on battery life and user interface considerations. However, just after the article hit the Internet, Google executive Sundar Pichai announced the Android SDK for wearables. Android is used in many different ways, as demonstrated by the Kindle and Nokia X (Nokia X appears to layer a Windows 8 look and feel on top of Android). Indeed, for this very reason Android 4.4 has moved many key APIs into the cloud.
Monthly Archives: March 2014
New Relic Announces Real-Time Analytics Platform
New Relic on Wednesday announced the arrival of New Relic Insights, a real-time analytics platform that collects, stores, and presents valuable data directly from modern software and transforms that data into insights about customers, applications, and the business. Delivered as a cloud-based software-as-a-service (SaaS) offering built on New Relic’s custom, high-performance big data database platform, New Relic Insights empowers application developers and business users alike to make ad hoc and iterative queries across trillions of events and metrics and get answers in seconds.
OpenNebula 4.6 Beta Released
The OpenNebula project has just announced the availability of OpenNebula 4.6 Beta (Carina). This release brings many new features and stabilizes features that were introduced in previous versions.
OpenNebula 4.6 introduces important improvements in several areas. The provisioning model has been greatly simplified by supplementing user groups with resource providers. This extended model, the Virtual Data Center, offers an integrated and comprehensive framework for resource allocation and isolation.
Another important change has taken place in the OpenNebula core, which has undergone a minor redesign of its internal data model to allow federation of OpenNebula daemons. With OpenNebula Carina, your users can access resource providers from multiple data centers in a federated way.
Meeting the Challenges of Effectively Managing a Recurring Revenue Model
Today the ability to increase revenue by utilizing recurring revenue models is driving new business for companies in all market sectors. However, delivering the features, benefits, and capabilities necessary to enable companies to offer products or services on a recurring basis can be a challenge. At the same time, companies prefer to augment their existing systems and services rather than replace them.
In his session at 14th Cloud Expo, Brendan O’Brien, co-founder of Aria Systems, will examine the key issues for executives and their IT staff who are mandated to leverage their legacy systems and services in order to drive new recurring revenue.
AppZero Launches New Release of Self-Service App Migration Tool on AWS
AppZero, a fast, flexible way to move enterprise applications to the cloud, announced on Tuesday a new version of AppZero Service Provider Edition for use on the Amazon Web Services (AWS) platform. AppZero’s award-winning application migration tool enables customers to move server applications onto AWS simply, quickly, and with no code changes, at the push of a button.
New features in this release include a “single view” of a migration, the ability to conduct multiple concurrent migrations, and a simplified dashboard and navigation.
EVault Cited as a Leader in Disaster Recovery as a Service
EVault, Inc., a Seagate company (NASDAQ: STX), has announced that Forrester Research, Inc. has recognized EVault as a leader in Disaster Recovery as a Service (DRaaS) in the January 2014 report The Forrester Wave™: Disaster-Recovery-As-A-Service Providers, Q1 2014. Forrester evaluated vendors based on their current offering, strategy, and market presence. EVault tied for the second-highest score for its core DRaaS offering and for security, and received the highest score possible for recovery-objective capabilities. EVault also achieved the highest score possible for planned service enhancements and again tied for the second-highest score for value proposition and vision.
Stanford Researchers Create Tool to Triple Cloud Server Efficiency
Two Stanford engineers have created a cluster management tool that can triple server efficiency while delivering reliable service at all times, allowing data center operators to serve more customers for each dollar they invest.
“This is a proof of concept for an approach that could change the way we manage server clusters,” said Jason Mars, a computer science professor at the University of Michigan at Ann Arbor.
Kushagra Vaid, general manager for cloud server engineering at Microsoft Corp., said that the largest data center operators have devised ways to manage their operations but that a great many smaller organizations haven’t.
“If you can double the amount of work you do with the same server footprint, it would give you the agility to grow your business fast,” said Vaid, who oversees a global operation with more than a million servers catering to more than a billion users.
How Quasar works takes some explaining, but one key ingredient is a sophisticated algorithm that is modeled on the way companies such as Netflix and Amazon recommend movies, books and other products to their customers. Instead of asking developers to estimate how much capacity they are likely to need, the Stanford system would start by asking what sort of performance their applications require.
Read much more detail here.
SoftLayer – IBM’s New Quarterback for the Cloud
What typically happens to smaller firms when they are being eaten up by mega-companies like IBM seems pretty obvious – they go to the dogs. Key executives will leave the team, product roadmaps will lose their binding character and strategy will get fuzzy due to the sheer magnitude of the organization and its inherent complexity. But not so with SoftLayer / IBM.
It has become obvious over the past six months that IBM’s senior management had more in mind than a purely opportunistic acquisition when they bought hosting firm SoftLayer in the summer of 2013 for US$2 billion. Erich Clementi, who was among the early cloud advocates within IBM, has made a strategic bet that will enable IBM to regain its position as one of the key cloud leaders in the market over the next 2–3 years.
vCOPS? vCAC? Where and When It Makes Sense to Use VMware Management Solutions
By Chris Ward, CTO
I’ve been having a lot of conversations recently, both internally and with customers, around management strategies and tools for virtualized and cloud infrastructures. There are many solutions out there and, as always, there is no one-size-fits-all silver bullet. VMware in particular has several solutions in its Cloud Infrastructure Management (CIM) portfolio, but it can get confusing trying to figure out the use cases for each product and when it may be the right fit for your specific challenge. I just finished giving some training to our internal teams on this topic and thought it would be good to share it with the broader community. I hope you find it helpful, and know that we at GreenPages are happy to engage in more detailed conversations to help you make the best choices for your management challenges.
The core solutions that VMware has brought to market in the past few years include vCenter Operations Manager (vCOPS), vCloud Automation Center (vCAC), IT Business Management (ITBM), and Log Insight. I’ll briefly cover each of these including what they do and where/when it makes sense to use them.
vCOPS
What is it? vCOPS is actually a solution suite which is available in four editions: Foundation, Standard, Advanced, and Enterprise.
The core component of all four editions is vCenter Operations Manager, which came from the acquisition of Integrien back in 2010 and is essentially a monitoring solution on steroids. In addition to typical performance and health monitoring and alerting, the secret sauce of this tool is its ability to learn what ‘normal’ is for your specific environment and provide predictive analytics. The tool collects data from various physical and virtual systems (networking, storage, compute, etc.) and dynamically determines proper thresholds rather than relying on the typical ‘best practice’ model, thus reducing overall noise and false-positive alarms. It can also proactively alert you when a problem may arise in the future vs. simply alerting after a problem has occurred. Finally, it does a great job analyzing VM sizing and assisting in capacity planning. All of this is coupled with a very slick, highly customizable interface.
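The learned-threshold idea is worth making concrete. The sketch below is a deliberately simplified illustration (not vCOPS internals): a dynamic baseline derived from a rolling window of recent samples versus a fixed "best practice" cutoff, on a host that normally runs hot, where the fixed rule fires constantly but the learned one flags only the genuine spike. All figures are made up.

```python
# Illustrative dynamic-baseline alerting (assumed, not vCOPS internals):
# flag samples exceeding mean + k*stddev of the trailing window.
from statistics import mean, stdev

def dynamic_alerts(samples, window=10, k=3.0):
    """Return indexes of samples above the learned rolling threshold."""
    alerts = []
    for i in range(window, len(samples)):
        recent = samples[i - window:i]
        threshold = mean(recent) + k * stdev(recent)
        if samples[i] > threshold:
            alerts.append(i)
    return alerts

# A service that normally runs hot (~90% CPU) with one genuine spike at 99.
cpu = [88, 91, 89, 92, 90, 88, 91, 89, 92, 90, 89, 99, 90, 88]

fixed_alerts = [i for i, v in enumerate(cpu) if v > 90]   # classic 90% rule
print(fixed_alerts, dynamic_alerts(cpu))
```

The fixed rule fires on every normal fluctuation above 90%, while the learned baseline fires only on the outlier, which is the noise reduction described above.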
The Advanced and Enterprise editions of the suite also include vCenter Configuration Manager (vCM), vCenter Hyperic, vCenter Infrastructure Navigator (VIN), and vCenter Chargeback Manager (vCBM).
vCM automates configuration and compliance management across virtual, physical, and cloud environments. Essentially this means those pesky Windows registry key changes, Linux iptables settings, etc. can be automated and reported upon to ensure that your environment remains configured to the standards you have developed.
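At its core, configuration compliance is a desired-state comparison. The minimal sketch below is an assumed illustration (not vCM itself): compare a host's actual settings against a baseline and report any drift; the setting names are hypothetical examples.

```python
# Minimal configuration-drift check (illustrative, not vCM).
# Returns {setting: (desired, actual)} for every deviation.

def compliance_report(desired, actual):
    return {key: (value, actual.get(key))
            for key, value in desired.items()
            if actual.get(key) != value}

# Hypothetical baseline and host state for illustration.
baseline = {"ssh.PermitRootLogin": "no", "net.ipv4.ip_forward": "0"}
host     = {"ssh.PermitRootLogin": "yes", "net.ipv4.ip_forward": "0"}
print(compliance_report(baseline, host))
```

A tool like vCM layers remediation and reporting on top of exactly this kind of desired-vs-actual comparison, at scale and across platforms.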
Hyperic does at the application layer what vCOPS does for the underlying infrastructure. It can monitor operating system, middleware, and application layers and provide automated workflows to resolve potential issues.
VIN is a discovery tool used to create application dependency maps which are key when planning and designing security boundaries and disaster recovery solutions.
vCBM is utilized for showback or chargeback so that various lines of business can be accountable for IT resource utilization.
Where is it best utilized?
The vCOPS suites are best suited for environments that require robust monitoring and/or configuration management and that have fairly mature IT organizations capable of realizing the toolset’s full potential.
vCAC
What is it? Stemming from the acquisition of DynamicOps, this is primarily an automation/orchestration toolset designed to deploy and provision workloads and applications across multiple platforms, be they physical, virtual, or cloud based. Additionally, vCAC provides a front-end service catalog enabling end-user IT self-service. Like most VMware product sets, vCAC comes in multiple editions as well: Standard, Advanced, and Enterprise. Standard provides the base automation toolsets, Advanced adds the self-service catalog (the original DynamicOps feature set), and Enterprise adds dynamic application provisioning (formerly vFabric AppDirector).
Where is it best utilized?
If you have a very dynamic environment, such as development or DevOps, then vCAC may well be the tool for you. By utilizing automation and self-service, it can cut the time required to provision workloads, applications, and platforms from days or weeks down to minutes. If you have a ‘shadow IT’ problem, where end users bypass internal IT red tape by going directly to external services such as Amazon, vCAC can help solve it by providing the speed and flexibility of AWS while maintaining command and control internally.
ITBM
What is it? Think of ITBM as more of a CFO tool than a raw IT technology tool. Its purpose is to provide financial management of large (multimillion-dollar) IT budgets by providing visibility into true costs and quality so that IT may be better aligned to the business. It, too, comes in multiple editions: Standard, Advanced, and Enterprise. The Standard edition provides visibility into VMware virtualized environments and can determine the relative true cost per VM, workload, or application. Advanced adds the physical and non-VMware world into the equation, and Enterprise adds the quality component.
Where is it best utilized?
The Standard edition of ITBM makes sense for most mid-market and larger customers who want or need a sense of the true cost of IT. This is very important when considering any move to a public cloud environment, as you need to be able to compare costs honestly. I hear all the time that ‘cloud is cheaper,’ but I have to ask, ‘cheaper than what?’ If I ask how much it costs to run workload X on your internal infrastructure per hour, week, or month, can you honestly give me an accurate answer? In most cases the answer is no, and that’s exactly where ITBM comes into play. On a side note, the Standard edition of ITBM requires vCAC, so if you’re already looking at vCAC it makes a lot of sense to also consider ITBM.
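The "cheaper than what?" question is ultimately arithmetic. Here is a back-of-the-envelope sketch of the kind of amortized cost-per-VM-hour calculation a tool like ITBM automates; every figure below is an invented assumption for illustration, not real pricing.

```python
# Back-of-the-envelope internal cost per VM-hour vs. a cloud list price.
# All inputs are hypothetical illustration values.

def internal_cost_per_vm_hour(server_capex, server_life_years,
                              power_cooling_per_year, admin_per_year,
                              vms_per_server):
    """Amortize capex plus yearly opex across VM-hours on one host."""
    hours_per_year = 24 * 365
    yearly = (server_capex / server_life_years
              + power_cooling_per_year + admin_per_year)
    return yearly / (vms_per_server * hours_per_year)

cost = internal_cost_per_vm_hour(
    server_capex=12000,          # assumed host purchase price
    server_life_years=4,
    power_cooling_per_year=1800,
    admin_per_year=3000,
    vms_per_server=20,
)
cloud_rate = 0.08                # assumed on-demand $/hour
print(f"internal ${cost:.3f}/VM-hour vs cloud ${cloud_rate:.2f}/hour")
```

The real value of a financial-management tool is filling in those inputs accurately across an entire estate, which is precisely what most shops cannot do by hand.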
Log Insight
What is it? Simply stated, it’s a dumping ground for just about any type of log you can imagine, but with a Google-style flair. It has a very nice indexing/search capability that can make sense of insanely large amounts of log data from numerous sources, helping greatly with event correlation, troubleshooting, and auditing.
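The indexing/search idea can be shown with a toy inverted index. This is an assumed sketch of the general technique, not Log Insight's engine: tokenize each log line, map tokens to line numbers, and answer keyword queries by intersecting posting lists; the log lines are invented.

```python
# Toy inverted index over log lines (illustrative, not Log Insight).
from collections import defaultdict

def build_index(lines):
    """Map each lowercase token to the set of line numbers containing it."""
    index = defaultdict(set)
    for n, line in enumerate(lines):
        for token in line.lower().split():
            index[token].add(n)
    return index

def search(index, query):
    """Return line numbers containing every query term."""
    postings = [index.get(t.lower(), set()) for t in query.split()]
    return sorted(set.intersection(*postings)) if postings else []

logs = [
    "2014-03-20 web01 ERROR timeout connecting to db01",
    "2014-03-20 db01 WARN slow query on orders",
    "2014-03-20 web02 INFO request served in 20ms",
    "2014-03-20 web01 ERROR timeout connecting to db01",
]
idx = build_index(logs)
print(search(idx, "ERROR db01"))   # correlate errors that mention db01
```

Intersecting posting lists is what makes "show me every line mentioning both this host and this error" fast even across huge volumes of log data.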
Where is it best utilized?
Any environment where log management is required, and/or where enhanced troubleshooting and root-cause analysis are needed. The licensing for this is interesting: unlike similar products, it is licensed per device rather than per terabyte of data, which can potentially provide huge cost savings.
vSOM and vCloud Suites
vSOM (vSphere with Operations Management) is simply a bundle of traditional vSphere with vCOPS. The editions here are a little confusing, as the Standard edition of vCOPS comes with every edition of vSOM; the only difference between the vSOM editions is the underlying vSphere edition.
The vCloud Suite includes most of what I have described above, but again comes in our favorite three editions: Standard, Advanced, and Enterprise. Basically, if you’re already looking at two or three a la carte solutions that are part of a vCloud Suite edition, then you’re better off looking at the suite. You’ll get more value because the suites include multiple solutions, and the suites, along with vSOM, remain licensed per physical processor socket vs. by the number of VMs.
Leave a comment if you have any other questions or would like a more detailed answer. Again, GreenPages helps our customers make the right choices for their individual needs so reach out if you would like to set up some time to talk. Hope this was helpful!
Download this webinar recording to learn more about VMware’s Horizon Suite
Amazon Web Services turns eight: Highlights…and the future
Amidst the World Wide Web turning 25, a company in Seattle was having a little celebration of its own.
On March 14, 2006, Amazon Web Services (AWS) announced the launch of Amazon S3, described as “a simple storage service that offers software developers a highly scalable, reliable, and low-latency data storage infrastructure at very low costs.”
To say the eight years since then have been a success is an understatement.
With over 5,000 consulting and systems integrator partners, 3,000 technology and ISV partners, as well as more than 1,100 software listings for customers, it’s difficult to argue against Amazon as the market leader in infrastructure as a service.
In April last year Amazon announced the S3 cloud was storing more than two trillion objects – or 20 objects for every single person ever born on Earth. To put it into …