Monitoring DNS Load Balancers | @CloudExpo #Cloud

If your website receives heavy traffic and is hosted on multiple infrastructures around the world, or you are using multiple CDNs, then you ought to use a DNS load balancer to reroute traffic for better performance.
While some large companies build their own DNS load-balancing systems, which are complex and tedious to implement and manage, others make life easier by opting for ready-made solutions such as those from Cedexis, Dyn, NSONE and Rage4.
Most of the above-mentioned companies are backed by Anycast networks spread across global locations.
While there are some differences between the feature sets offered by these companies and the number of POPs they own, it is very important to identify the one that works best for your business.
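
To make that concrete, here is a minimal Python sketch of the core decision such a service makes: probe each candidate endpoint and steer traffic to the fastest healthy one. The hostnames are hypothetical placeholders, and real products layer geo-routing, weighting and short DNS TTLs on top of simple probing.

```python
# Minimal sketch of the decision logic behind DNS-based load balancing:
# probe each candidate endpoint and answer queries with the fastest
# healthy one. The hostnames below are hypothetical placeholders.
import socket
import time

ENDPOINTS = [
    "eu.pop.example.com",  # hypothetical European POP
    "us.pop.example.com",  # hypothetical US POP
    "ap.pop.example.com",  # hypothetical Asia-Pacific POP
]

def probe(host: str, port: int = 443, timeout: float = 2.0) -> float:
    """Return the TCP connect time in seconds, or infinity if unreachable."""
    start = time.monotonic()
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return time.monotonic() - start
    except OSError:
        return float("inf")

def best_endpoint() -> str:
    """Pick the reachable endpoint with the lowest connect latency."""
    timings = {host: probe(host) for host in ENDPOINTS}
    return min(timings, key=timings.get)

if __name__ == "__main__":
    # A real DNS load balancer would return this host in its DNS answer,
    # typically with a short TTL so traffic can be re-steered quickly.
    print("Steer traffic to:", best_endpoint())
```

Commercial platforms make this same choice per resolver location, which is where the Anycast networks mentioned above come in.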

Cisco to buy cloud orchestrator CliQr for $260 million

Cisco has announced its intention to buy CliQr Technologies, a Californian start-up that specialises in making apps run faster in the new bare metal, virtualised and container environments that are becoming increasingly pivotal in cloud computing.

Under the terms of the agreement, Cisco will pay $260 million in cash and assumed equity awards, plus retention-based incentives. The acquisition is expected to close in the third quarter of 2016, subject to closing conditions. The CliQr team will join Cisco’s Insieme Business Unit, reporting to Cisco general manager Prem Jain.

Announcing the acquisition at Cisco’s Partner Summit, Cisco’s VP of Corporate Development Robert Salvagno said that the new technology will help its systems integrators and service provider partners to simplify the marshalling of resources and help get private, public and hybrid cloud projects running more quickly. CliQr has out-of-the-box support for all major public cloud environments.

Cisco had already integrated CliQr with its Application Centric Infrastructure (ACI) and Unified Computing System (UCS) prior to the acquisition, in a bid to improve the movement of applications between on-premises and cloud environments. Having achieved that, it now aims to integrate CliQr across its data centre portfolio, extending the ‘orchestration of services’ to cover eventualities such as bare metal computing, containerised systems and all the various types of virtualisation.

CliQr simplifies management by giving customers a single system for managing the application lifecycle across hybrid IT environments. Cisco claims the system is intuitive and can simplify the most complex systems. With computing becoming a hybrid of traditional on-premises IT and services running in the cloud, many CIOs and network managers have been left behind by the new complexity, and inefficiencies and blockages have emerged that Cisco claims it can now smooth out.

Among the productivity improvements promised by CliQr is a feature that allows managers to create a single, secure application profile that can then be deployed across any data centre or public or private cloud. Other managerial time-savers include a consistent policy-making scheme, an application optimiser for hybrid systems, one-click rollout, and ‘complete visibility’ and control across applications, cloud environments and users.
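
To illustrate the concept only (this is not CliQr’s actual profile format, just a generic sketch), a cloud-agnostic application profile might look something like the following, with a hypothetical deploy() call standing in for the orchestrator.

```python
# Generic illustration of a cloud-agnostic application profile. This is
# NOT CliQr's actual format; it is a sketch of the concept: describe the
# app once, then hand it to an orchestrator for any target environment.
profile = {
    "name": "orders-service",
    "tiers": [
        {"role": "web", "image": "nginx:1.9", "min_instances": 2},
        {"role": "app", "image": "orders:2.3", "min_instances": 3},
        {"role": "db", "image": "postgres:9.5", "storage_gb": 100},
    ],
    "policies": {"autoscale_cpu_percent": 70, "encrypt_at_rest": True},
    "targets": ["on-prem-ucs", "public-cloud-a", "private-cloud-b"],
}

def deploy(profile: dict, target: str) -> None:
    """Hypothetical orchestrator call: one profile, any approved target."""
    if target not in profile["targets"]:
        raise ValueError(f"{target} is not approved for {profile['name']}")
    print(f"Deploying {profile['name']} ({len(profile['tiers'])} tiers) to {target}")

deploy(profile, "public-cloud-a")
```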

OPNFV announces second major release – Brahmaputra

The Linux Foundation-inspired OPNFV Project has taken a new step closer to its ideal of network liberalisation with a new release of its software.

Network Function Virtualisation (NFV), the telecoms industry’s answer to the Stock Market’s Big Bang, aims to open the market for creating software that runs the multitude of functions within any network. The OPNFV Project aims to create a carrier-grade, integrated, open source platform that uses NFV to create telecoms networks that are infinitely more flexible and adaptable than the traditional proprietary systems that locked the software within the rigid backbone of telecoms hardware.

The Project has announced the availability of a new, improved version of its original offering, code-named Arno, which Telecoms.com reported on in June 2015. The new release, Brahmaputra, offers a more comprehensive standard of tools for testing NFV functionality and use cases. Brahmaputra is OPNFV’s first full experience with a massively parallel simultaneous release process and helps developers collaborate with upstream communities. By encouraging group collaboration on feature development and addressing multiple technology components across the ecosystem, the Project aims to improve the stability, performance and automation of the system, and to consolidate its features.

The extent of collaboration is ambitious, since OPNFV aims to bring together at least 165 developers from network operators, solution providers and vendors. The focus of their joint efforts will be on the integration, deployment and testing of upstream components to meet NFV’s needs. During the integration process to create the Brahmaputra release, code was contributed by developers in the OpenStack, OpenDaylight, OpenContrail, ONOS and ETSI communities. Meanwhile, 30 different projects were accepted, adding new capabilities, specifications and community resources to the system.

Among the improvements are Layer 3 VPN instantiation and configuration, initial support for IPv6 deployment and testing in IPv6 environments, better fault detection and recovery, performance boosts through data plane acceleration and much fuller infrastructure testing.

“The strength of any open source project depends on the community developing it,” said OPNFV director Heather Kirksey. “With an entire industry involved in the development of NFV, we’re seeing more collaboration, and the strides we made in Brahmaputra create a framework for even more developers to come together.”

Enterprises pay nearly a fifth more to use European cloud – report

The cloud service market in the US is much more competitively priced than in Europe, but Latin America gets the worst deals in the world, according to a new study. Europeans pay up to 19% more for the same services when they are hosted in home territory.

According to the new Cloud Price Index report from 451 Research, Americans enjoy the most competitive prices globally. On average, Europeans pay between 7% and 19% more, depending on the complexity of the application. Asia Pacific comes second from bottom in the price-performance study. However, anomalies exist and deals are available to those who shop around, says the report.

The ‘protection premium’, the extra cost of hosting services in-country or in-region rather than taking the cheaper option of US-based services, is not just the cost of compliance. The extra investment needed by European cloud users is the result of three pressures: the need to meet local regulations, the need to boost performance by bringing apps closer to users, and the preference for local customer service.

In Europe, soaring local cloud demand, driven by data protection legislation, has created uncertainty about access and responsibility, confusing cloud buyers and service providers alike. The net effect of issues like Safe Harbor, the Patriot Act and the new US-EU Privacy Shield agreement is that European buyers will pay more.

Don’t expect that to change for the better just yet, said Penny Jones, Senior Analyst for European Services. “It won’t be clear what the European Court of Justice thinks about the legislation until they have reviewed a case or two,” said Jones.

Cloud services are even pricier in Asia Pacific and Latin America, according to the report: comparable hosting there can cost 38% more than in the US. Taking average prices as a benchmark, Latin America has the most extreme variations in prices, thanks to its limited selection of hosting providers.
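
As a rough worked example of those premiums, here is the arithmetic applied to a hypothetical $1,000-per-month US bill (the baseline figure is an assumption, not taken from the report).

```python
# Worked example of the regional premiums cited above, applied to a
# hypothetical $1,000-per-month US bill (the baseline is an assumption,
# not a figure from the report).
us_monthly = 1_000

premiums = {
    "Europe (simple app)": 0.07,
    "Europe (complex app)": 0.19,
    "Asia Pacific / Latin America": 0.38,
}

for region, premium in premiums.items():
    print(f"{region}: ${us_monthly * (1 + premium):,.0f}/month")
```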

There is also an extreme price polarity between small and large applications in Europe. Users pay double the premium for a large application, composed of computing, storage, platforms and support, compared to simpler virtual machines. These discrepancies are the result of skills shortages and an SME market willing to pay more for support on complex applications.

The lesson is that cloud buyers must be more diligent about researching huge price variations, according to 451 Research director Dr Owen Rogers. “One provider charged more than twice the average US price for hosting in Latin America. Another offered an 11% discount for hosting in Europe compared to the US,” said Rogers.

APIs: A Costly Blind Spot for Your Application By @Tiwari_Piya | @CloudExpo #Cloud

APIs have taken the world by storm in recent years.
The use of APIs has gone beyond traditional “software” companies: organizations across industries now use APIs to share information and power their applications.
For some organizations, APIs are the biggest revenue drivers. Salesforce, for example, generates nearly 50% of its annual revenue through APIs. In other cases, APIs extend a business’s footprint and initiate collaboration. Netflix, for example, reported over 5 billion calls per day to its API in 2014.
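
For readers newer to the topic, here is a minimal sketch of what “powering an application through an API” looks like in practice: a tiny JSON-over-HTTP endpoint built with Python’s standard library. The path and payload are illustrative, not any real company’s API.

```python
# Minimal sketch of serving data through an HTTP API: a tiny
# JSON endpoint using only the standard library. The path and
# payload are illustrative, not any real company's API.
import json
from http.server import BaseHTTPRequestHandler, HTTPServer

class CatalogAPI(BaseHTTPRequestHandler):
    def do_GET(self):
        if self.path == "/v1/titles":
            body = json.dumps({"titles": ["Example Show", "Example Film"]})
            self.send_response(200)
            self.send_header("Content-Type", "application/json")
            self.end_headers()
            self.wfile.write(body.encode("utf-8"))
        else:
            self.send_error(404)

if __name__ == "__main__":
    # Any client that speaks HTTP can now consume this data:
    #   curl http://localhost:8000/v1/titles
    HTTPServer(("localhost", 8000), CatalogAPI).serve_forever()
```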

G-Cloud – why being certified matters

It might surprise you to know that more than £900m worth of sales have now taken place via the G-Cloud platform since its launch. The Government initiated the G-Cloud programme in 2012 to deliver computing-based capability (from fundamental resources such as storage and processing to full-fledged applications) using the cloud, and it has been hugely successful, providing benefits to customers and suppliers alike.

The G-Cloud framework is offered via the Digital Marketplace and is provided by the Crown Commercial Service (CCS), an organisation that acts on behalf of the Crown to save money for the public sector and the taxpayer and to improve the quality of commercial and procurement activity. The CCS’ procurement services can be used by central government departments and organisations across the public sector, including local government, health, education, not-for-profit and devolved administrations.

G-Cloud approves framework agreements with a number of service providers and lists those services on a publicly accessible portal known as the Digital Marketplace. This way, public sector organisations can approach the services listed on the Digital Marketplace without needing to go through a full tender process.

G-Cloud has substantial benefits both for providers and for customers looking to buy services. For vendors the benefit is clear: being named an official G-Cloud supplier demonstrates that the company has met the standards laid out in the G-Cloud framework and remains compliant with them. It also opens up exciting new opportunities to supply the UK public sector while helping it reduce its costs. Likewise, it brings recognition to the brand and further emphasises the company’s position as a reputable provider of digital services.

Where public sector organisations are concerned, G-Cloud gives quick and easy access to a roster of approved and certified suppliers that have been rigorously assessed, cutting down the time needed to research and find such vendors in the marketplace. This gives organisations a head start in finding the cloud services that will best address their business and technical needs.

I am proud to say that iland was awarded a place on the G-Cloud framework agreement for supplying Infrastructure-as-a-Service (IaaS) and Disaster-Recovery-as-a-Service (DRaaS) at the end of last year. We deliver flexible, cost-effective and secure Infrastructure-as-a-Service solutions from data centres in London and Manchester, including Enterprise Cloud Services with Advanced Security and Compliance, Disaster-Recovery-as-a-Service and Cloud Backup.

So if you are looking to source a cloud provider, I would recommend that you start your search with those that have been awarded a place on the G-Cloud framework agreement. It is important then to work with prospective providers to ensure their platform, service level agreements, native management tools and support teams can deliver the solutions that best address your business goals as well as your security and compliance requirements. Ask questions up front. Ensure the provider gives you full transparency into your cloud environment. Get a demonstration. You will then be well on your way to capitalising on the promises of cloud.

Written by Monica Brink, EMEA Marketing Director, iland

New research reveals potential cloud pricing headaches for European customers

CIOs pay a ‘protection premium’ of up to 19% to host their data on a European cloud platform compared to the US, according to the latest Cloud Price Index report from 451 Research.

The report, which for the first time focuses predominantly on the European market, argues US providers are still ahead of their competitors across the Atlantic. The researchers targeted a variety of vendors with a large application specification, but only five providers could deliver it in Europe, and of that number, only one was headquartered in Europe. The natural conclusion, therefore, is that the US-based hypervendors are in prime position to capitalise as the European market builds.

End users are becoming savvier, according to the report, increasingly demanding clear value-add, such as expertise in fulfilling local regulatory requirements. The research also argues users can save substantially if they commit and negotiate, with the average discount for large apps in Europe being 38%. In terms of product, object storage is typically priced at approximately $0.05 per gigabyte in Europe, with SQL and NoSQL services at $11.42 per GB, or $0.10 per million transactions. Discounts range from 20% for storage to between 30% and 40% for database services.
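
As a quick worked example using those list prices (the 35% database figure below is an assumed midpoint of the 30–40% range cited, not a number from the report):

```python
# Worked example using the list prices above. The 35% database discount
# is an assumed midpoint of the 30-40% range cited in the report.
object_storage_per_gb = 0.05   # $/GB, European list price
database_per_gb = 11.42        # $/GB for SQL and NoSQL services
storage_discount = 0.20
database_discount = 0.35       # assumed midpoint

print(f"Object storage, discounted: ${object_storage_per_gb * (1 - storage_discount):.3f}/GB")
print(f"Database, discounted:       ${database_per_gb * (1 - database_discount):.2f}/GB")
```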

The report, which aims to give enterprises insight into the cloud pricing landscape, argues the market in Europe is maturing, but issues are not likely to appear immediately. “With demand likely to increase, we don’t think the European cloud market is facing serious price pressure in the short term, as long as it is suitably showing its regional credentials and the value that locality affords,” the report notes, adding: “European providers also need to look beyond virtual machines and support if they want to take on the global players.”

“When evaluating cloud providers, enterprises should consider how they will take advantage of variances in prices in the short and long term to cut costs,” said Owen Rogers, research director of 451’s digital economics unit and author of the report. “We found one provider charged more than twice the average US price for hosting in Latin America, whilst another offered an 11% discount for hosting in Europe compared to the US.

“The global market for cloud is complex and cloud buyers need to understand typical pricing to properly evaluate their options and negotiate with suppliers,” he added.

This is a message that is often somewhat ignored when the likes of Amazon Web Services (AWS), Microsoft, and Google announce their latest price cuts – and complexity is often a stick with which the big vendors beat each other. A report from Tariff Consultancy in January found cloud pricing was “starting to stabilise” after continued cuts.

You can find more about the 451 Research report here.

Scale up or scale out: Why this remains a key problem for data centres

Servers have always powered businesses, forming the backbone of how they run and operate. Reliance on servers has increased markedly, and they are now being used to do things they never did before.

Looking at an industry where servers play an integral role, data centre administrators arguably do some of the most advanced and complex work in the IT world. Workloads, and the servers that run them, are getting more complex, especially post-virtualisation. As a result, data centre administrators face the constant challenge of fitting the load to the infrastructure. Is it an intensive task that needs high-performance compute-style servers, or a financial transaction that needs speedy processing and a low-latency network connection?

Exponential demand means that increased performance is key, and these data centre professionals face the constant challenge of getting new levels of performance out of servers that have sat in their data centres for years. The key question for data centre professionals is: how do you scale your data centre’s performance?

Scale up – vertical

This process involves installing more high performance components in your existing servers, usually in the order of processors, memory, and storage (by using SSDs to replace hard drives). Sometimes the order flips and we might lead with memory rather than processors; it all depends on the configuration of existing servers and what we’re working with.

If what we’re trying to improve is an older DDR3 server, the first thing to tackle would likely be the CPUs if the server was using Nehalem or Westmere processors. There’s a lot that goes into this, though, and we might start with memory if the DRAM was sparsely populated. It all depends. Key questions to ask would be: what’s your workload, and which components are going to have the biggest impact on what you do? A sketch of that triage follows below.
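
Here is what that triage might look like in Python; all thresholds are illustrative assumptions, not vendor guidance.

```python
# Sketch of the scale-up triage described above: pick the first upgrade
# based on which resource the workload saturates. All thresholds are
# illustrative assumptions, not vendor guidance.
def first_upgrade(cpu_util: float, mem_util: float, disk_wait_ms: float) -> str:
    if disk_wait_ms > 20:   # storage-bound: swap hard drives for SSDs first
        return "replace hard drives with SSDs"
    if mem_util > 0.85:     # memory-bound: populate the DRAM slots first
        return "add DRAM"
    if cpu_util > 0.80:     # compute-bound: move off Nehalem/Westmere CPUs
        return "upgrade processors"
    return "no scale-up needed yet"

# Example: a workload pegging its CPUs but with headroom elsewhere.
print(first_upgrade(cpu_util=0.92, mem_util=0.60, disk_wait_ms=5.0))
```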

Scale out – horizontal

This can be seen as a stair-step process. Typically, when scaling, the first step is getting as much as possible out of existing servers. Once this is done, the only way to go is horizontal: adding more servers. Scaling out is definitely more expensive, as it involves more physical servers – and thus more software licenses. Since software licenses usually cost far more than server hardware and hit your budget every single year, they are one of the most important cost factors when considering whether to scale out.

Additional factors such as higher power and cooling costs all add up when you’re working with a greater number of servers, and these are just a few of the costs associated with scaling out, on top of the issue of physical space in your data centre. The sketch below shows how those recurring costs compare.
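
To see why licenses dominate the comparison, here is an illustrative three-year cost sketch; every figure is an assumption made for the sake of the example, not a quoted market price.

```python
# Illustrative three-year comparison of scale-up vs scale-out costs.
# Every figure is an assumption for the sake of the example, not a
# quoted market price.
YEARS = 3

def scale_up_cost(servers: int) -> int:
    """One-off CPU/DRAM/SSD refresh in the servers you already own."""
    upgrade_per_server = 4_000
    return servers * upgrade_per_server

def scale_out_cost(new_servers: int) -> int:
    """New servers: hardware once, licenses and power every year."""
    hardware = 8_000
    license_per_year = 6_000
    power_cooling_per_year = 1_200
    return new_servers * (hardware + YEARS * (license_per_year + power_cooling_per_year))

print(f"Scale up 10 existing servers:  ${scale_up_cost(10):,}")
print(f"Scale out with 10 new servers: ${scale_out_cost(10):,}")
# The recurring licenses dwarf the one-off hardware spend, which is why
# exhausting scale-up headroom first is usually the cheaper route.
```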

The decision making process

Determining which approach to use is all about knowing exactly where your organisation is in the expansion process and which scaling decision would be the best fit for where you’re at – and how you see your workload growing.

IT decision makers taking this first step – deciding between scaling up and upgrading servers, or scaling out and adding more servers – need to bear all these considerations in mind. Both routes have their advantages, but typically it is more efficient to first maximise the performance potential of your existing infrastructure and servers by scaling up. Once a performance ceiling is reached, it is time to scale out to meet further demand.

The challenge for IT decision makers is in identifying exactly where their organisation sits in that process, and therefore which route is the most effective way to go.

Read more: When to scale up – or scale out – your data centre

Microsoft Improves Cloud Security

Microsoft has announced major developments in cloud security which, it says, will improve the security of companies doing business online. Microsoft plans to unveil more at the RSA Conference, taking place from February 29th to March 4th in San Francisco. The new developments include Customer Lockbox, Microsoft Cloud App Security, Azure Active Directory Identity Protection, and the Microsoft Operations Management Suite.

Customer Lockbox: Lockbox brings more transparency to cases where Microsoft engineers require access to Office 365 accounts to aid in troubleshooting, and aims to make the customer approval process more efficient. Lockbox is already available to those using Exchange Online.

Microsoft Cloud App Security: This application provides both security and control over data stored in cloud apps, including Salesforce and Office 365. It follows Microsoft’s acquisition of cloud security broker Adallom. Office 365 was also upgraded in conjunction with Microsoft Cloud App Security: users will be made aware of suspicious activity and will have the choice of approving third-party services.

Azure Active Directory Identity Protection: Microsoft has yet to unveil much about this application, but it is expected to provide threat detection, utilizing Microsoft’s data to investigate threats such as authentications from unfamiliar locations.

Microsoft Operations Management Suite: Improvements have also been made to the Microsoft Operations Management Suite; users will receive information pertaining to malware detections, network activity, and system updates.

Comments:

Bret Arsenault, Microsoft’s chief information security officer: “Keeping our network safe, while protecting our data and our customers’ data, is paramount. As Chief Information Security Officer at Microsoft, I am constantly looking for ways to improve our security posture, through new technologies that accelerate our ability to protect, detect and respond to cyber incidents… After years of examining crash dumps that our customers opted to send to Microsoft from more than a billion PCs worldwide, Microsoft has developed the capability to analyze this data to effectively detect compromised systems because crashes are often the result of failed exploitation attempts and brittle malware.”

Sarah Fender, principal program manager of Microsoft Azure Cybersecurity: “After years of examining crash dumps that customers sent to Microsoft from more than 1 billion PCs worldwide, we are able to analyze these events to detect when a crash is the result of a failed exploitation attempt or brittle malware. Azure Security Center automatically collects crash events from Azure virtual machines, analyzes the data, and alerts you when a VM is likely compromised…Starting next week, in addition to configuring a Security Policy at the subscription level, you can also configure a Security Policy for a Resource Group—enabling you to tailor the policy based on the security needs of a specific workload. Azure Security Center continually monitors your resources according to the policy you set, and alerts you if a configuration drifts or appropriate controls are not in place.”

Taking Back IT – DevOps | @DevOpsSummit #DevOps #APM #Microservices

I am an ops guy. I have been in Ops since I started my career a long time ago. I still remember the glorious days of being the new guy on the Ops team and getting stuck with changing backup tapes. I remember moving servers in the wee small hours of the morning, upgrades that went horribly wrong with no prayer of reverting, and the joys and pains that went with being in operations, and really what we would call “Legacy IT” in general. It’s like the opposite of “Cheers”, because you want to be where NO ONE knows your name; if people know who you were in those days of IT, you were having a bad day.
