Category Archive: Cloud & IT Management

NetDNA, YC’s Leftronic Partner on Real-Time CDN Monitoring

Content delivery network provider NetDNA today announced a partnership with Y Combinator-funded Leftronic to develop high-performance, secure, real-time data visualization dashboards for its CDN services.

Now, NetDNA or MaxCDN customers can get a bird’s-eye view of their traffic, including daily, weekly or monthly statistics on popular files, popular file types, status codes, cache hit percentage, statistics by location and more, all presented in a Web browser using Leftronic’s real-time, large-screen metric dashboard platform.

The companies also worked together to provide dashboards measuring traffic on NetDNA’s Bootstrap CDN project, samples of which are available online.

“We’re excited to be partnering with Leftronic because monitoring traffic is a critical step to receiving the full benefits of our CDN service.  Now they can do that in a big and beautiful way using only a browser,” said Justin Dorfman, NetDNA’s Developer Advocate.

“We could see an immediate synergy between our two companies when NetDNA shared with us the demand for better CDN reporting it saw in its customer base,” said Rajiv Ghanta, CEO of Leftronic. “The company is truly developer friendly with its well-supported API and fast customer support.  We’re looking forward to a lasting relationship with the company.”

Leftronic has created a data visualization platform that helps companies monitor and track their business metrics. There is no software to download; instead, everything is shown in a Web browser for easy accessibility and organization. The technology is combined with a simple-to-use front-end interface for visualizing the data and offers an API for integrating custom company data.
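
As a rough illustration of how such an integration might work in practice (the endpoint, access key and payload fields below are hypothetical placeholders rather than Leftronic’s documented API), pushing a custom metric to a dashboard could be as simple as an HTTP POST:

    import json
    import urllib.request

    # Hypothetical sketch only: the URL, access key and payload fields below are
    # placeholders, not Leftronic's documented API.
    API_URL = "https://dashboard.example.com/customSend"

    payload = {
        "accessKey": "YOUR_API_KEY",         # hypothetical credential
        "streamName": "cdn_cache_hit_rate",  # hypothetical widget/stream name
        "point": 94.2,                       # current value of the metric
    }

    request = urllib.request.Request(
        API_URL,
        data=json.dumps(payload).encode("utf-8"),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(request) as response:
        print(response.status)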

With more than 10,000 customers trusting its content delivery services, NetDNA provides simple, efficient and affordable web performance optimization solutions that help customers improve their website speed. Most recently, NetDNA’s MaxCDN service has become its most popular solution among businesses because of its easy sign-up process and versatile web performance acceleration.

 

Measurement, Control and Efficiency in the Data Center

Guest Post by Roger Keenan, Managing Director of City Lifeline

To control something, you must first be able to measure it.  This is one of the most basic principles of engineering.  Once there is measurement, there can be feedback.  Feedback creates a virtuous loop in which the output changes to better track the changing input demand.  Improving data centre efficiency is no different.  If efficiency means better adherence to the demand from the organisation for lower energy consumption, better utilisation of assets, faster response to change requests, then the very first step is to measure those things, and use the measurements to provide feedback and thereby control.

So what do we want to control?  We can divide it into three areas: the data centre facility, the use of compute capacity and the communications between the data centre and the outside world.  The relative importance of each will differ from organisation to organisation.

There are all sorts of data centres, ranging from professional colocation data centres to the server-cupboard-under-the-stairs found in some smaller enterprises.  Professional data centre operators focus hard on the energy efficiency of the total facility.  The most common measure of energy efficiency is PUE, defined originally by The Green Grid organisation.  It is simple: the total energy going into the facility divided by the energy used to power the electronic equipment.  Although it is often abused (one data centre, for example, powered its facility lighting over PoE, power over Ethernet, thus making the lighting part of the ‘electronic equipment’), it is widely understood and used world-wide.  It provides visibility and focus for the process of continuous improvement.  It is easy to measure at facility level, as it only needs monitors on the mains feeds into the building and monitors on the UPS outputs.
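
As a minimal sketch of the arithmetic, assuming invented meter readings from the mains feeds and UPS outputs:

    # PUE sketch: total facility energy divided by energy delivered to the
    # electronic (IT) equipment.  The meter readings are invented for illustration.
    mains_feed_kwh = 1450.0   # energy entering the building over the period
    ups_output_kwh = 1000.0   # energy measured at the UPS outputs over the same period

    pue = mains_feed_kwh / ups_output_kwh
    print(f"PUE = {pue:.2f}")  # 1.45, i.e. 45% overhead for cooling, lighting and losses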

Power efficiency can be managed at multiple levels: at the facility level, at the cabinet level and at the level of ‘useful work’.  This last is difficult to define, let alone measure, and there are various working groups around the world trying to decide what ‘useful work’ means.  It may be compute cycles per kW, revenue generated within the organisation per kW or application run time per kW, and it may be different for different organisations.  Whatever it is, it has to be properly defined and measured before it can be controlled.
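
As a hedged sketch of what such a metric might look like once defined, assuming an organisation settled on compute cycles per kWh of energy consumed (the figures are invented):

    # One hypothetical definition of 'useful work': compute cycles delivered per kWh
    # of energy consumed.  The figures are invented, and agreeing the numerator and
    # denominator is exactly the definition problem described above.
    compute_cycles = 7.2e15       # cycles delivered over the measurement period
    energy_consumed_kwh = 900.0   # energy drawn over the same period

    useful_work_per_kwh = compute_cycles / energy_consumed_kwh
    print(f"{useful_work_per_kwh:.3e} cycles per kWh")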

DCIM (data centre infrastructure management) systems provide a way to measure the population and activity of servers, and particularly of virtualised machines.  In large organisations, with potentially many thousands of servers, DCIM provides a means of physical inventory tracking and control.  More important than the question “how many servers do I have?” is “how much useful work do they do?”  Typically a large data centre will have around 10% ghost servers – servers which are powered and running but which do no useful work.  DCIM can justify its cost and the effort needed to set it up on eliminating those alone.
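
A minimal sketch of how ghost-server candidates might be flagged from inventory and utilisation data, using assumed records and arbitrary thresholds rather than any particular DCIM product’s logic:

    # Illustrative ghost-server check: powered-on servers whose average CPU and
    # network activity stay below arbitrary thresholds.  The records and thresholds
    # are assumptions, not the output of any particular DCIM product.
    servers = [
        {"name": "srv-001", "powered_on": True,  "avg_cpu_pct": 42.0, "avg_net_kbps": 900.0},
        {"name": "srv-002", "powered_on": True,  "avg_cpu_pct": 0.4,  "avg_net_kbps": 2.0},
        {"name": "srv-003", "powered_on": False, "avg_cpu_pct": 0.0,  "avg_net_kbps": 0.0},
    ]

    CPU_THRESHOLD_PCT = 2.0
    NET_THRESHOLD_KBPS = 10.0

    ghost_servers = [
        s["name"] for s in servers
        if s["powered_on"]
        and s["avg_cpu_pct"] < CPU_THRESHOLD_PCT
        and s["avg_net_kbps"] < NET_THRESHOLD_KBPS
    ]
    print("Candidate ghost servers:", ghost_servers)  # ['srv-002']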

Virtualisation brings its own challenges.  It has taken us away from the days when a typical server operated at 10-15% utilisation, but we are still a long way from most data centres operating efficiently with virtualisation.  Often users will over-specify server capacity for an application, using more CPUs, memory and storage than really needed, just to be on the safe side and because they can.  Users see the data centre as a sunk cost – it’s already there and paid for, so we might as well use it.  This creates ‘VM Sprawl’.  The way out of this is to measure, quote and charge.  If a user is charged for the machine time used, that user will think more carefully about wasting it and about piling contingency allowance upon contingency allowance ‘just in case’, which leads to inefficient stranded capacity.  And if the user is given a real-time quote for the costs before committing to them, they will think harder about how much capacity is really needed.
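
A minimal sketch of the ‘measure, quote and charge’ idea, assuming a simple per-vCPU-hour and per-GB-hour tariff with invented rates:

    # Toy 'measure, quote and charge' model.  Tariffs and usage figures are
    # invented; the point is that the user sees a cost before committing.
    RATE_PER_VCPU_HOUR = 0.05     # currency units per vCPU-hour
    RATE_PER_GB_RAM_HOUR = 0.01   # currency units per GB-hour of memory

    def quote(vcpus: int, ram_gb: int, hours: float) -> float:
        """Return an up-front cost estimate for the requested virtual machine capacity."""
        return hours * (vcpus * RATE_PER_VCPU_HOUR + ram_gb * RATE_PER_GB_RAM_HOUR)

    # 'Safe side' request versus what the application actually needs:
    print(f"8 vCPUs / 32 GB for 720 hours: {quote(8, 32, 720):.2f}")
    print(f"2 vCPUs /  8 GB for 720 hours: {quote(2, 8, 720):.2f}")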

Data centres do not exist in isolation.  Every data centre is connected to other data centres and often to multiple external premises, such as retail shops or oil rigs.  Often those have little redundancy and may well not operate efficiently.  Again, to optimise efficiency and reliability of those networks, the first requirement is to be able to measure what they are doing.  That means having a separate mechanism at each remote point, connected via a different communications network back to a central point.  The mobile phone network often performs that role.

Measurement is the core of all control and efficiency improvement in the modern data centre.  If the organisation demands improved efficiency (and if it can define what that means) then the first step to achieving it is measurement of the present state of whatever it is we are trying to improve.  From measurement comes feedback.  From feedback comes improvement and from improvement comes control.  From control comes efficiency, which is what we are all trying to achieve.

Roger Keenan, Managing Director of City Lifeline

Roger Keenan joined City Lifeline, a leading carrier-neutral colocation data centre in Central London, as managing director in 2005.  He is responsible for overseeing the company’s business and marketing strategy and its profitability. Prior to City Lifeline, Roger was general manager at Trafficmaster plc, where he established Trafficmaster’s German operations and successfully managed the $30 million acquisition of Teletrac Inc in California, becoming its first post-acquisition Chief Executive.

BluePhoenix Moves Mainframe COBOL, Batch Processing to the Cloud

BluePhoenix has released its Cloud Transaction Engine and Batch In The Cloud service. The Cloud Transaction Engine (CTE) is a module of the company’s soon-to-be-released ATLAS Platform. CTE is a proprietary codebase that enables mainframe processes to be run on off-mainframe infrastructure. BluePhoenix’s Batch In The Cloud service is the first formal offering leveraging CTE capabilities.

“Batch In The Cloud uses off-mainframe, cloud-based processing power to reduce mainframe MIPS and total cost of ownership,” explains Rick Oppedisano, BluePhoenix’s Vice President of Marketing. “The huge array of virtual machines in the cloud brings greater performance and scalability than the mainframe. Jobs can be processed quicker at a lower cost. It’s a great way for customers to save money immediately and explore options for an eventual mainframe transition.”

The Batch In The Cloud service is supported on private or public clouds, including Microsoft’s Azure and Amazon’s EC2. This service is designed to enable COBOL, CA GEN and Natural/ADABAS mainframe environments.

“In a typical scenario, workloads continue to grow while the mainframe’s processing power and batch window stays the same,” says BluePhoenix’s VP of Engineering, Florin Sunel. “Our technology acts as a bridge between the mainframe and cloud. With Batch In The Cloud, all business logic is preserved. Customers can reduce usage cost by running jobs like reporting from the cloud platform rather than the mainframe. In that scenario, they can also add business value by using modern business intelligence tools that aren’t compatible with the mainframe to gain insight from their data.”
Adds Oppedisano, “Beyond the immediate cost savings, this technology creates a competitive advantage. Exposing data in an off-mainframe location empowers the customer to become more agile. Not only can they process reports faster, but they can slice and dice their data to get a broader perspective than competitors who keep data on the mainframe.”

“By moving batch workloads to Windows Azure or a Microsoft Private Cloud, companies are able to take advantage of cloud economics,” said Bob Ellsworth, Microsoft Worldwide Director of Platform Modernization. “Combined with the advanced analytics included in SQL Server, the customer not only realizes great savings, scale and flexibility but increased business value through self-service BI.”

BluePhoenix is offering a free Proof of Concept for the Batch In The Cloud service. “To manage the scale and demand, we’re going to start with a complimentary assessment of the customer environment to identify the most appropriate applications for this service,” says Oppedisano. “Once those applications are identified, we will build the roadmap and execute the Proof of Concept on the cloud platform of the customer’s choice.”

Additional details on the Batch In The Cloud service and Proof of Concept can be found here.

Project Management and the Cloud

A Guest Post by Joel Parkinson, a writer for projectmanager.com

In the world of information technology, the “cloud” has paved the way for a new method of managing things on the Internet. In a cloud environment, computing “takes place” on the World Wide Web, and it takes the place of the software you use on your desktop. Cloud computing is hosted on the Web, on servers installed in a “data center” that is usually staffed and managed by people who are experts at technology management. What does the cloud mean to project management? Here’s an overview of what cloud project management is.

What Cloud Computing Means to Project Managers

Project management is defined as the set of activities and processes carried out to execute and complete a task that’s outsourced by one party to another. Project management increases the probability of a project’s success through the efficient use and management of resources.  So what does cloud computing mean to project managers?  According to PM veterans, cloud computing offers a greener and more sustainable project management environment, lowers costs, eliminates unnecessary software and hardware, improves scalability, and eases information-sharing between team managers and staff, customers and executive management.

Benefits of Cloud Project Management

In a project management environment, the cloud speeds up the whole process. Because cloud services are available anytime, any day, the cloud can help a project management team hasten execution and deliver improved results and outputs.  With the cloud, project managers and staff can also monitor progress easily and act without delay, as information is delivered in real time. Let’s look at the other benefits of the cloud for project managers.

Improved Resource Management

The cloud’s centralized nature also allows for improved utilization, allocation and release of resources, with status updates and real-time information provided to help optimize utilization. The cloud also helps control the cost of resource use, whether machine, capital or human resources.

Enhanced Integration Management

With the cloud, different processes and methods are integrated and combined to create a collaborative approach to performing projects. The use of cloud-based software can also aid in mapping and monitoring different processes, improving overall project management efficiency.

Overall, the cloud platform reduces gridlock and smooths the project management process, makes the whole project team more productive and efficient in terms of the quality of service delivered to the customer, and enhances the revenues of the organization.

But does the cloud project management model mean a more carefree and less costly environment? We could say it makes the whole process less costly, but not entirely carefree. Despite the perks provided by the cloud, everything still needs to be tested and monitored, every member of the project team must still do the work upon deployment, and each of them should still be fully supported by project managers and clients. The cloud is perhaps the biggest innovation in the IT industry because it optimizes the utilization of resources within an enterprise.

Wired Profiles a New Breed of Internet Hero, the Data Center Guru

The whole idea of cloud computing is that mere mortals can stop worrying about hardware and focus on delivering applications. But cloud services like Amazon’s AWS, and the amazingly complex hardware and software that underpins all that power and flexibility, do not happen by chance. This Wired article about James Hamilton paints a picture of a new breed of folks the Internet has come to rely on:

…with this enormous success comes a whole new set of computing problems, and James Hamilton is one of the key thinkers charged with solving such problems, striving to rethink the data center for the age of cloud computing. Much like two other cloud computing giants — Google and Microsoft — Amazon says very little about the particulars of its data center work, viewing this as the most important of trade secrets, but Hamilton is held in such high regard, he’s one of the few Amazon employees permitted to blog about his big ideas, and the fifty-something Canadian has developed a reputation across the industry as a guru of distributed systems — the kind of massive online operations that Amazon builds to support thousands of companies across the globe.

Read the article.

 

Interop Technologies Adds RCS Version 5 Features to Cloud Technology

Interop Technologies, a provider of core wireless solutions for advanced messaging, over-the-air handset management, and connectivity gateways, today announced that its Rich Communication Services (RCS) solution now supports a network address book and social presence. The network address book synchronizes a user’s contacts among different devices including mobile phones, tablets, and PCs. With social presence, RCS users receive rich, real-time information, such as availability, location, favorite link, and portrait icon, for each of their contacts.

Interop Technologies is demonstrating its enhanced RCS solution at OMA Demo Day on February 27 during Mobile World Congress in Barcelona.

The network address book and social presence features are aligned with the GSMA-managed RCS Blackbird and Crane releases, which include subsets of priority RCS version 5 features. Using an XML document management server (XDMS) and presence server, the Interop solution stores social presence and service capability information and makes it available in real time to RCS users. Since May 2012, the fully compliant, cloud-based Interop RCS solution has also supported legacy messaging interworking, an RCS version 5 feature providing backward compatibility with Short Message Service (SMS) and Multimedia Message Service (MMS).

Interop has made the enhancements available in its interoperability testing (IOT) environment currently in use by multiple RCS client vendors and smartphone manufacturers. As client and handset vendors release new versions of RCS client software, they can continue to test against the Interop RCS solution to ensure standards compliance and interoperability.

RCS, branded as “joyn” by the GSMA, gives subscribers innovative communication options including video chat, one-to-one and group messaging, file transfer, and real-time exchange of image or video files during communication sessions. Because the Interop solution enables operators to offer RCS without a costly and complex IP Multimedia Subsystem (IMS) core, operators can compete with popular “over-the-top” (OTT) services without expensive changes to their current network. In addition, Interop’s cloud technology option minimizes up-front costs and speeds time to market.

“Our client-agnostic, cloud-based solution now includes multiple RCS version 5 capabilities in line with the Blackbird and Crane releases, resulting in the most advanced, feature-rich RCS solution available today,” said Steve Zitnik, Executive Vice President and Chief Technology Officer, Interop Technologies. “By deploying in the cloud, operators can provide their subscribers with this state-of-the-art communication option quickly and cost effectively.”

For more information or to schedule a meeting during Mobile World Congress, please contact info@interoptechnologies.com.

Cloudreach Says Businesses Don’t Maximize Cloud Investment, Offers to Help

To help businesses maximize their investment in cloud technologies, Cloudreach has today launched an Innovation Services program. Starting with Google Apps, this consultancy-based service helps business cloud users regularly review how they’re using Google Apps, make the best of the tools they have and adopt new services from Google as they are released.  Cloudreach will work with new and existing users to help them adopt and benefit from all of the services Google Apps offers, not just the familiar tools – going beyond Gmail, contacts and calendar to other enterprise functionality and even the less well-known Google Maps and geospatial data management tools.  By working with Cloudreach, businesses can ensure they extract maximum value from these cloud-based services and avoid needlessly making costly investments elsewhere, all while enabling a culture of collaboration and giving end users the flexible tools and features they demand.

Failing to exploit public cloud services thoroughly is proving costly for cloud users, says Cloudreach. As the adoption of public cloud services soars, businesses have the opportunity to realize vast benefits, such as reduced costs and improved efficiency, but many are only scratching the surface, failing to review their usage or look beyond the more familiar tools, says the cloud computing consultancy.

“It’s not just about driving innovation within business, but also about optimizing all the services you use. In the world of using public cloud, one drives the other, so it’s important that businesses remain focused on getting the most on both counts,” comments Pontus Noren, co-founder and CEO, Cloudreach. “The cloud is a great thing, and the reality is it just keeps getting better as more developments are made by Google Apps and Amazon Web Services. This ongoing evolution is only good news for businesses – as long as they continually review to ensure they harness these developments.”

Anomaly Detective Adds Predictive Analytics to Splunk

Prelert today announced Anomaly Detective, an advanced machine intelligence solution for Splunk Enterprise environments. The introduction of Anomaly Detective expands Prelert’s line of diagnostic predictive analytics products that integrate with a customer’s existing IT management tools and quickly provide value by finding problematic behavior changes hidden in huge volumes of operations data.

Anomaly Detective’s self-learning predictive analytics with machine intelligence assistance recognize both normal and abnormal machine behavior. Using highly advanced pattern recognition algorithms, Anomaly Detective identifies developing issues and provides detailed diagnostic data, enabling IT experts to avoid problems or diagnose them as much as 90 percent faster than previously possible. IT personnel who utilize Splunk Enterprise software in infrastructure, applications performance and security can now additionally benefit from machine learning to automatically spot anomalies and isolate their root causes in minutes, saving time and resolving problems before the business is impacted.
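
Prelert has not published its algorithms, but a generic illustration of the underlying idea, learning ‘normal’ behaviour from history and flagging departures from it, might look like the following simple statistical baseline check; the data and threshold are invented and this is not Prelert’s method:

    import statistics

    # Generic anomaly-detection sketch, not Prelert's algorithm: learn a baseline
    # mean and standard deviation from history, then flag values that deviate by
    # more than a chosen number of standard deviations.  Data are invented.
    history = [120, 118, 125, 122, 119, 121, 124, 123, 120, 122]  # e.g. response times in ms
    baseline_mean = statistics.mean(history)
    baseline_std = statistics.stdev(history)
    THRESHOLD_SIGMA = 3.0

    def is_anomaly(value: float) -> bool:
        """Flag values more than THRESHOLD_SIGMA standard deviations from the baseline."""
        return abs(value - baseline_mean) > THRESHOLD_SIGMA * baseline_std

    print(is_anomaly(123))  # False: within normal variation
    print(is_anomaly(480))  # True: a developing issue worth diagnosing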

Anomaly Detective is  downloadable software that installs as a tightly integrated application for Splunk Enterprise. Because it leverages recent advances in machine intelligence, Anomaly Detective is 100 percent self-learning and requires minimal configuration. Anomaly Detective augments existing IT expertise, empowering IT staff to spend less time mining data, reduce troubleshooting costs and improve compliance with service-level agreements — all of which contribute to a rapid return on investment.

“Prelert Anomaly Detective is like a machine intelligence assistant, using advanced machine learning analytics to analyze the massive amounts of IT operations management data produced by today’s online applications and services,” said Mark Jaffe, CEO of Prelert. “We’ve packaged the power of big data analytics, normally focused on solving business problems, in easy-to-use machine intelligence solutions that are greatly needed in the real world of IT operations.”

Prelert Anomaly Detective is now available and easily downloadable from the Prelert website and from Prelert resellers. Pricing is based on the amount of data analyzed per day, starting at $1,200 for environments indexing more than 500MB of data per day. For information on pricing for Splunk Enterprise, go to http://www.splunk.com/view/how-to-get-splunk/SP-CAAADFV.

ManageEngine Adds Android Mobile Device Management

ManageEngine today announced it now manages Android devices in the latest version of its desktop and mobile device management (MDM) software, Desktop Central. The move extends the mobile device management support in Desktop Central to include smartphones and tablets running Google’s popular mobile OS as well as devices running Apple iOS.

“The mobile usage trends will eventually drive sharp increases in demand for enterprise MDM solutions that embrace BYOD while ensuring enterprise data security,” said Mathivanan Venkatachalam, director of product management at ManageEngine. “The growing Android market and increasing demand for Android support among our customer base encouraged us to add Android support to Desktop Central as quickly as possible.”

Android MDM in Desktop Central provides data wipe, mobile application management, profile/policy configuration and a default option to run mandatory background applications.

Desktop Central 8 is available immediately. Prices start at $10 per computer annually for the Professional Edition. The MDM add-on module support is available on all the editions, and prices start at $15 per device annually. The Free Edition of Desktop Central manages up to 25 computers and two mobile devices. A free, fully-functional trial version is available at http://www.manageengine.com/products/desktop-central/download.html.

NetDNA EdgeRules Gives Websites Control over CDN Content

NetDNA today announced EdgeRules, an instantaneous HTTP caching rules service, giving site managers rapid and granular control over their web content for a better user experience, improved security, lower bandwidth costs and the ability to better monetize content by preventing hotlinking.

EdgeRules is an add-on service to NetDNA’s EdgeCaching and EdgeCaching for Platforms.  Both of these HTTP caching services place site content in NetDNA’s worldwide network of edge servers and peering partners for superior web performance optimization.

Using the EdgeRules control panel, site managers can make changes to their content rules and see them enacted in less than one minute – with no review needed from the NetDNA engineering team. This makes it possible for the first time to test, tweak and deploy very granular controls over how and when content is served.

“EdgeRules truly gives website managers the ability to manage their CDN services their way and to finely tune their pull zone content in a way that they never could before,” said David Henzel, NetDNA vice president of marketing.  “NetDNA is well known for giving site managers unprecedented control over their CDN service through our Control Panel.  With EdgeRules, we are at the forefront of CDN self-provisioning again.”

A site manager can use EdgeRules to keep certain files from being proxied, thus protecting them from exposure on the Internet. For example, EdgeRules can prevent the exposure of directory indices due to misconfiguration, a common problem on cloud services such as Amazon’s S3.

The service allows different rules to be set for different files or classes of data so that frequently updated files can be classed differently from more static data.  This reduces calls to the origin server, which lowers bandwidth charges.

Site managers can also use the service to blacklist certain IP addresses, for example blocking web robots that are scraping data from the site.

The EdgeRules service can also detect the operating system of a requesting device and serve up content optimized for that device.  For example, a smartphone-optimized image can be served instead of a large image when the service detects a request from an Android or iOS device.
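
NetDNA has not published its rule syntax in this announcement, but a hypothetical sketch of the kinds of rules described above (keeping paths from being proxied, per-class cache lifetimes, IP blacklisting and device-based content selection) could be represented like this; every field name and value is an illustrative assumption:

    # Hypothetical representation of the kinds of rules described above.  The field
    # names, values and structure are illustrative assumptions, not NetDNA's actual
    # EdgeRules configuration format.
    edge_rules = [
        # Keep private files and directory indices from being proxied at the edge.
        {"match": {"path": "/private/*"},         "action": {"deny": True}},
        # Cache static assets longer than frequently updated content.
        {"match": {"extension": ["jpg", "css"]},  "action": {"cache_ttl_seconds": 86400}},
        {"match": {"path": "/api/latest.json"},   "action": {"cache_ttl_seconds": 60}},
        # Block a web robot that is scraping the site.
        {"match": {"client_ip": "198.51.100.23"}, "action": {"deny": True}},
        # Serve a smaller image variant to mobile operating systems.
        {"match": {"client_os": ["Android", "iOS"], "path": "/img/hero.jpg"},
         "action": {"rewrite_path": "/img/hero-mobile.jpg"}},
    ]

    for rule in edge_rules:
        print(rule["match"], "->", rule["action"])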

EdgeRules is now available for all NetDNA EdgeCaching customers.  For more information email sales@netdna.com or go to: http://www.netdna.com/products/add-ons/edgerules/.