Applying IoT to Reduce Power Losses | @ThingsExpo #IoT #M2M #BigData

Electric power utilities face relentless pressure on their financial performance, and reducing distribution grid losses is one of the last untapped opportunities to meet their business goals. By combining IoT-enabled sensors with cloud-based data analytics, utilities are now able to find, quantify and reduce losses faster – and with a smaller IT footprint. Solutions exist that use Internet-enabled sensors, deployed temporarily at strategic locations within the distribution grid, to measure actual line loads.
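
As a rough illustration of the kind of quantification such sensor data enables (a minimal sketch with hypothetical readings, not any vendor’s actual analytics), losses on a feeder segment can be estimated by comparing the energy measured entering the segment with the energy measured leaving it:

```python
# Minimal sketch: estimate losses on a feeder segment from hypothetical
# interval readings taken by temporarily deployed line sensors.
# All values and names are illustrative only.

def segment_loss(energy_in_kwh: float, energy_out_kwh: float) -> tuple[float, float]:
    """Return (loss in kWh, loss as a percentage of energy entering the segment)."""
    loss_kwh = energy_in_kwh - energy_out_kwh
    loss_pct = 100.0 * loss_kwh / energy_in_kwh if energy_in_kwh else 0.0
    return loss_kwh, loss_pct

# Example interval: sensor at the feeder head vs. aggregated downstream sensors.
loss_kwh, loss_pct = segment_loss(energy_in_kwh=1250.0, energy_out_kwh=1187.5)
print(f"Interval loss: {loss_kwh:.1f} kWh ({loss_pct:.1f}%)")
```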

read more

Dell and Microsoft unveil joint hybrid cloud offering

Dell has expanded its cloud portfolio with a new hybrid cloud offering built on technology jointly developed with Microsoft. The new system is designed to break down the barriers to cloud adoption and offer a simpler, more secure payment model.

According to Dell’s own research, nine out of ten IT decision makers say a hybrid cloud strategy is important to achieving a Future-Ready Enterprise. The recently unveiled Dell Global Technology Adoption Index revealed that 55% of organisations around the world will use more than one type of cloud. The study also identified cost and security as the biggest barriers to adopting the cloud, with complexity the biggest obstacle associated with hybrid cloud.

The new Dell Hybrid Cloud System for Microsoft promises customers an on-premises private cloud with consistent Azure public cloud access in less than three hours. Customers are promised minimised downtime through non-disruptive, fully automated system updates that stay out of users’ way. It also offers workload templates to simplify service provision and governance models. The management of multiple clouds will be simplified by out-of-the-box integration with Dell Cloud Manager (DCM) and Windows Azure Pack (WAP), Dell says.

The Dell Hybrid Cloud System for Microsoft is built around the CPS Standard, which combines optimised Dell modular infrastructure with pre-configured Microsoft CPS software. This will include Microsoft’s software stack and Azure Services for back-up, site recovery and operational insights.

Meanwhile, the Dell Cloud Flex Pay programme gives customers a new, flexible option to buy Dell’s Hybrid Cloud System for Microsoft without making a long-term commitment. Cloud Flex Pay will eliminate the risk of being locked into paying for services that aren’t fully used, says Dell.

“Customers tell us their cloud journey is too complex, the cost-risk is too high and control isn’t transparent,” said Jim Ganthier, vice president and general manager of engineered solutions and cloud at Dell. “With our new Cloud Flex Pay program, cost-risk is all but eliminated.”

Why visibility and control are critical for container security

The steady flow of reported vulnerabilities in open source components – Heartbleed, Shellshock and POODLE among them – is pushing organisations to focus on making the software they build more secure. And as organisations increasingly turn to containers to improve application delivery and agility, the security ramifications of the containers and their contents are coming under increased scrutiny.

An overview of today’s container security initiatives 

Container providers such as Docker and Red Hat are moving aggressively to reassure the marketplace about container security. Primarily, they are focusing on cryptographic signing to secure the code and software versions running in Docker users’ software infrastructure, protecting users from malicious backdoors included in shared application images and other potential security threats.

However, this approach is coming under scrutiny because it covers only one aspect of container security: it says nothing about whether software stacks and application portfolios are free of known, exploitable versions of open source code.

Without open source hygiene, Docker Content Trust will only ever ensure that Docker images contain the exact same bits that developers originally put there, including any vulnerabilities present in the open source components. It therefore amounts to only a partial solution.
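
For context, Docker Content Trust is enabled through an environment variable rather than an API. A minimal sketch of driving a signature-verified pull from Python might look like the following; the image name is a placeholder, and only the documented DOCKER_CONTENT_TRUST switch and the standard docker CLI are used:

```python
# Minimal sketch: pull an image with Docker Content Trust enabled, so the
# Docker CLI verifies publisher signatures before accepting the image.
# The image name is a placeholder; requires a local Docker CLI.
import os
import subprocess

env = os.environ.copy()
env["DOCKER_CONTENT_TRUST"] = "1"   # documented switch for signature verification

# The pull succeeds only if trust data (signatures) for the tag can be verified.
subprocess.run(["docker", "pull", "example.com/myorg/myapp:1.0"], env=env, check=True)
```

As the article notes, a verified pull only proves the image contains exactly the bits the publisher signed; it says nothing about whether those bits include vulnerable open source components.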

A more holistic approach to container security

Knowing that the container is free of vulnerabilities at the time of initial build and deployment is necessary, but far from sufficient. New vulnerabilities are constantly being discovered, and these often affect older versions of open source components. What’s needed, therefore, is informed open source selection combined with ongoing vigilance: knowing which components are in use and monitoring them for newly disclosed vulnerabilities.

Moreover, the security risk posed by a container also depends on the sensitivity of the data accessed through it, as well as where the container is deployed. For example, whether the container sits on an internal network behind a firewall or is internet-facing will affect the level of risk.

In this context, an internet-facing container is subject to a range of threats – including cross-site scripting, SQL injection and denial-of-service attacks – that containers deployed on an internal network behind a firewall wouldn’t be exposed to.

For this reason, having visibility into the code inside containers is a critical element of container security, even aside from the issue of security of the containers themselves.

It’s critical to develop robust processes for determining: what open source software resides in, or is deployed alongside, an application; where this open source software is located in build trees and system architectures; whether the code exhibits known security vulnerabilities; and whether an accurate open source risk profile exists.
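
As a minimal sketch of the first two determinations: the image name and the “known vulnerable” entries below are hypothetical, the check assumes the container ships a Python application with pip available, and a real deployment would rely on a maintained vulnerability feed or a proper SBOM tool rather than a hard-coded set.

```python
# Minimal sketch: list the Python packages inside a (hypothetical) container image
# and flag any that appear in a known-vulnerable list. Requires a local Docker CLI;
# the image name and the vulnerability entries are illustrative only.
import json
import subprocess

IMAGE = "example.com/myorg/myapp:1.0"          # hypothetical image
KNOWN_VULNERABLE = {("requests", "2.5.3")}     # illustrative entries, not a real feed

# Ask the container itself what is installed (assumes pip exists in the image).
out = subprocess.run(
    ["docker", "run", "--rm", IMAGE, "pip", "list", "--format=json"],
    capture_output=True, text=True, check=True,
).stdout

for pkg in json.loads(out):
    if (pkg["name"].lower(), pkg["version"]) in KNOWN_VULNERABLE:
        print(f"VULNERABLE: {pkg['name']} {pkg['version']} found in {IMAGE}")
```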

Will security concerns slow container adoption? – The industry analysts’ perspective

Enterprise organisations today are embracing containers because of their proven benefits: improved application scalability, fewer deployment errors, faster time to market and simplified application management. However, just as organisations have moved over the years from viewing open source as a curiosity to understanding its business necessity, containers seem to have reached a similar tipping point. The question now is whether security concerns about containers will inhibit further adoption. Industry analysts differ in their assessment.

Drawing a parallel with the rapid adoption of virtualisation technologies even before security requirements had been established, Dave Bartoletti, Principal Analyst at Forrester Research, believes security concerns won’t significantly slow container adoption. “With virtualization, people deployed anyway, even when security and compliance hadn’t caught up yet, and I think we’ll see a lot of the same with Docker,” according to Bartoletti.

Meanwhile, Adrian Sanabria, Senior Security Analyst at 451 Research, believes enterprises will give containers a wide berth until security standards are identified and established. “The reality is that security is still a barrier today, and some companies won’t go near containers until there are certain standards in place,” he explains.

To overcome these concerns, organisations are best served by taking advantage of the automated tools available to gain control over all the elements of their software infrastructure, including containers.

Ultimately, the presence of vulnerabilities in all types of software is inevitable, and open source is no exception. Detection and remediation of vulnerabilities are increasingly seen as a security imperative and a key part of a strong application security strategy.

 

Written by Bill Ledingham, EVP of Engineering and Chief Technology Officer, Black Duck Software.

IBM to create HPC and big data centre of excellence in UK

IBM and the UK’s Science & Technology Facilities Council (STFC) have jointly announced they will create a centre that tests how to use high performance computing (HPC) for big data analytics.

The Hartree Power Acceleration and Design Centre (PADC) in Daresbury, Cheshire is the first UK facility to specialise in modelling and simulation and their use in Big Data Analytics. It was recently the subject of UK government investment in big data research and was tipped as the foundation for chancellor George Osborne’s northern technology powerhouse.

The new facility launch follows the government’s recently announced investment and expansion of the Hartree Centre. In June Universities and Science Minister Jo Johnson unveiled a £313 million partnership with IBM to boost Big Data research in the UK. IBM said it will further support the project with a package of technology and onsite expertise worth up to £200 million.

IBM’s contributions will include access to the latest data-centric and cognitive computing technologies, with at least 24 IBM researchers to be based at the Hartree Centre to work side-by-side with existing researchers. It will also offer joint commercialization of intellectual property assets produced in partnership with the STFC.

The IBM researchers’ brief is to help users extract the fullest possible performance from all components of the POWER-based system, drawing on specialised knowledge of architecture, memory, storage, interconnects and integration. The Centre will also be supported by the expertise of other OpenPOWER partners, including Mellanox, and will host a POWER-based system with the Tesla Accelerated Computing Platform. This will provide options for using energy-efficient, high-performance NVIDIA Tesla GPU accelerators and enabling software.

One of the target projects will be a search for ways to boost application performance while minimising energy consumption. In the race towards exascale computing significant gains can be made if existing applications can be optimised on POWER-based systems, said Dr Peter Allan, acting Director of the Hartree Centre.

“The Design Centre will help industry and academia use IBM and NVIDIA’s technological leadership and the Hartree Centre’s expertise in delivering solutions to real-world problems,” said Allan. “The PADC will provide world-leading facilities for Modelling and Simulation and Big Data Analytics. This will develop better products and services that will boost productivity, drive growth and create jobs.”

Dell + (EMC+VMW) = A $67B Gamble | @CloudExpo #BigData #Microservices

Yesterday, Dell announced the largest technology M&A in history with a proposed $67B buyout of EMC and VMware (via EMC’s 80% ownership of VMW). The combined company will have over $80B in revenue, employ tens of thousands of people around the world and sell everything from PCs, servers & storage to security software and virtualization software. Not to be overlooked is the fact that Dell and EMC will be private companies and free from the scrutiny of activist investors.

read more

EMC, VMware unveil plans for Virtustream hybrid for the enterprise cloud

EMC and VMware are to combine their cloud offerings under a jointly owned (50/50) Virtustream brand, led by Virtustream CEO Rodney Rogers.

The cloud service will be aimed at enterprises with an emphasis on hybrid cloud, which Virtustream’s owners identify as one of the largest markets for IT infrastructure spending. The company will provide managed services for on-premises infrastructure and its enterprise-class Infrastructure-as-a-Service platform. The rationale is to help clients make the transition from on-premise computing to the cloud, migrating their applications to cloud-based IT environments. Since many applications are mission critical, hybrid cloud environments will be instrumental in the conversion process and Virtustream said it will set out to provide a public cloud experience for its Federation Enterprise Hybrid Cloud service.

Nearly one-third of all IT infrastructure spending is going to cloud-related technologies, according to research by The 451 Group, with cloud service buyers now investing in the application stack. Enterprise adoption is increasing, says the researcher, and buyers increasingly favour private and hybrid cloud infrastructure. Enterprise resource planning (ERP) software is increasingly being run on cloud systems, and enterprises will spend a total of $41.2B annually on ERP software by 2020, says The 451 Group.

Virtustream will bring the cloud businesses of EMC Information Infrastructure, VCE and VMware together into one, offering services built on VMware vCloud Air, VCE Cloud Managed Services, Virtustream’s Infrastructure-as-a-Service and EMC’s Storage Managed Services and Object Storage Services. VMware will establish a Cloud Provider Software business unit led by VMware senior VP Ajay Patel. The unit will incorporate existing VMware cloud management offerings and Virtustream’s software assets.

The business will integrate existing on-premises EMC Federation private cloud deployments and extend them into the public cloud, according to Virtustream. The aim is to maintain a common experience for developers, managers, architects and end users. Virtustream’s cloud services will be delivered directly to customers and through partners.

Virtustream addresses the changes in buying patterns and IT cloud operation models that both vendors are encountering now, said EMC CEO Joe Tucci. “Customers consistently tell us they’re on IT journeys to the hybrid cloud. The EMC Federation is now positioned as a complete provider of hybrid cloud offerings.”

Virtustream’s financial results will be consolidated into VMware’s financial statements beginning in Q1 2016.

Riak TS and Internet of Things | @ThingsExpo @Basho #IoT #Microservices

Riak TS is focused on the needs of time series applications. Its strength is fast reads and writes of IoT device data. It also targets financial and economic data, as well as scientific research applications.

This is straight from Riak TS:

“Riak TS automatically co-locates, replicates, and distributes data across the cluster to achieve fast performance and high availability. The unique master-less architecture enables near-linear scale using commodity hardware so you can easily add capacity as your time series data grows.”
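
For a concrete flavour of how this plays out for IoT workloads, here is a minimal sketch of a Riak TS table for sensor readings and a single write. The schema’s composite key is what drives the co-location described above: rows for the same device within the same 15-minute window land on the same partition. The table name, fields and quantum are illustrative, and the Python client calls shown are an assumption about the official `riak` client, whose TS method names vary by version.

```python
# Minimal sketch (assumptions flagged): a Riak TS table for IoT sensor readings
# and one write. Table name, fields and the 15-minute quantum are illustrative;
# the client calls (RiakClient.ts_query, table.new) are assumed from the
# official `riak` Python package and may differ between client versions.
from riak import RiakClient

CREATE_TABLE = """
CREATE TABLE SensorReadings (
    device_id   VARCHAR   NOT NULL,
    region      VARCHAR   NOT NULL,
    time        TIMESTAMP NOT NULL,
    temperature DOUBLE,
    PRIMARY KEY (
        (device_id, region, QUANTUM(time, 15, 'm')),
        device_id, region, time
    )
)
"""

client = RiakClient(host="127.0.0.1", pb_port=8087)
client.ts_query("SensorReadings", CREATE_TABLE)  # create the table (assumed client call)

# Store one reading; rows are plain lists matching the column order above,
# with the timestamp in epoch milliseconds.
table = client.table("SensorReadings")
table.new([["dev-001", "eu-west", 1445251200000, 21.4]]).store()
```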

read more

Microsoft and Dell team up again for “truly integrated” hybrid cloud offering


Microsoft and Dell have continued their long-standing partnership by announcing a new “Azure-consistent integrated system for hybrid cloud” at the Dell World event in Texas.

The move is an updated ‘standard’ version of the companies’ ‘Azure in a box’ service announced last year, the Cloud Platform System (CPS). Combining Dell’s hardware with Microsoft’s software, the two companies argue the platform is the only integrated system offering a ‘true’ hybrid cloud experience, thanks to its consistency with Azure.

CPS Standard, which is shipping immediately with Windows Azure Pack, System Center 2012 R2 and Windows Server 2012 R2, can be up and running in three hours, according to the companies. Other features include a modular design, enabling customers to scale from four to 16 nodes based on business need, as well as simplified business continuity and failover processes.

“Digital transformation is an imperative for business today, and we are making our customers’ journey easier and faster through adoption of hybrid cloud,” said Dell CEO Michael Dell. “Dell shares a vision with Microsoft that open architectures and simplified cloud management will benefit customers of all sizes, freeing them to focus on their business and not their technology.”

Satya Nadella, Microsoft CEO, added: “By expanding our longstanding partnership with Dell to offer a truly integrated hybrid cloud, we will make the cloud more accessible to organisations of all sizes with the choice and flexibility to best meet their needs.”

Dell has made several other related announcements at the event. The computer giant has joined the Microsoft Cloud Solution Provider Program to provide better customer service capabilities and will sell Microsoft cloud products across Azure, Microsoft’s Enterprise Mobility Suite (EMS) and Office 365. It also announced Cloud Flex Pay, a flexible payment option for the Dell hybrid cloud system that gives customers cost-risk payment options.

Recent analyst research shows the comparative positions of Microsoft and Dell in their respective markets. Synergy Research figures in recent quarters show Microsoft carving out a niche in second place in the infrastructure as a service (IaaS) market – yet still miles behind Amazon Web Services – while IDC numbers from July saw Dell in second place for cloud infrastructure providers, behind HP but ahead of Cisco.

Data centre security: Do you understand your risk?


Let’s assume for a moment that you still manage all or some of your data in-house.

By implication that means that somewhere in the building you have a room full of servers that need to be maintained and protected. And as a manager you’ll be aware of the physical risks that threaten the integrity of your data. These include not only flood, fire and incursions by malicious third parties but also the havoc that can be created by unauthorised members of staff entering the secure area and, accidentally or deliberately, tampering with the equipment. Naturally enough you do your level best to protect your hardware and software from all these threats.

So now let’s say that you’ve made an important decision to outsource your storage and IT functionality to an external data centre. As with the in-house operation, you’ll want to be absolutely assured that the risks will be effectively addressed. Certainly you will ask questions and your choice of provider depends heavily on the answers.

But will you be asking the right questions? To put it another way, unless you fully understand where the main areas of risk lie, you may not be in a position to assess the security provisions put in place by a potential provider.

Risk misconceptions

As a species, we’re not always terribly good when it comes to assessing real levels of risk and threat. The classic example is the motorist who drives many thousands of miles a month without a second thought while getting stressed at the (statistically) much safer prospect of catching a flight from London to New York.

There are good reasons why the latter is perceived as more dangerous – not least that driving gives us a sense of control while flying puts us in the hands of others and that air accidents tend to be both well publicised and unpleasant. Air travel is, therefore, scarier but actually much less risky.

And very often, data centre customers will focus on the ‘scary’ headline threats, such as terrorism, theft by organised criminals or a major accident. This leads to common questions such as:

  • What provision have you made to protect against an explosion?
  • What has been done to prevent an attack on the data centre from, say a gang driving a truck through the wall?
  • What has been done to ensure the centre continues to operate if there is a major incident in the area?

All good questions and your data centre manager should be able to provide the answers.  But the truth of the matter is that incidents of this kind are extremely rare. If we take the threat of bomb-blast as an example, there is currently no record of a data centre being attacked by terrorists in this way. Equally the incidence of data centres being affected by attacks on other installations is rare to the point of being negligible.

Common threats

In reality, the main and most common threat to the integrity of data stems from a much more mundane source: the member of staff (or perhaps an external party) who gains access to the servers and, maliciously or unintentionally, causes an outage.

This was probably a threat that you were aware of when running an in-house operation, but the expectation is that in an external data centre all staff will be suitably qualified and skilled and those who aren’t will not be given access to key areas.

But the truth is that it’s vital to ensure that all those with physical access to your servers (within the data centre) should be thoroughly vetted and managed. At one level, a negligent or poorly skilled employee can cause an enormous amount of damage. At the malicious end of the spectrum, someone with a grudge or criminal intent could, in extreme circumstances, cripple your operations.

Going forward

So what is to be done? Well, first and foremost it’s important to thoroughly vet your own staff, and particularly those who may be visiting the data centre. Equally important, you should also be vetting anyone within your supply chain who might be given access.

It’s vital to establish how the outsource provider manages access to your IT hardware within the data centre. How are members of staff authenticated? What measures are in place to prevent an unauthorised person stealing the identities of others to obtain physical or virtual access?

Equally important, if security measures are ostensibly present, are they being actively enforced? For instance, let’s say an authorised person opens a secure door with a pass and is followed through by another party. Clearly the second party has no need to use a pass as the door is already opened but this is a breach of procedure. Will he or she be challenged, or are there electronic measures in place to prevent this kind of “tailgating”?

The value locked up in data is immeasurable. From client details and e-mail records through to transactional and operational information, data lies at the heart of corporate operations. Those protecting it should be security professionals, not simply data centre managers with an added security responsibility.

Outsourcing to a data centre can and should make information more, rather than less, secure. Good data centres have the resources and expertise to ensure its integrity. However, before deciding on a provider it is vital to fully understand the risks and ask the appropriate questions.

Load-Balancing Microservices | @DevOpsSummit #DevOps #API #Microservices

There’s no shortage of guides and blog posts providing best practices for architecting microservices. While all this information is helpful, what is far less available are hands-on guidelines on how microservices can be scaled. Following a little research and sifting through lots of theoretical discussion, here is how load-balancing microservices is done in practice by the big players.
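
For a flavour of the simplest pattern you’ll encounter in practice – client-side round-robin over a list of service instances – here is a minimal, generic sketch; the instance addresses are placeholders, and real deployments would pull them from a service registry rather than a hard-coded list:

```python
# Minimal sketch: client-side round-robin load balancing across instances of a
# microservice. Instance addresses are placeholders; in practice they would come
# from a service registry (Consul, Eureka, DNS, etc.).
import itertools
import urllib.request

INSTANCES = ["http://10.0.0.1:8080", "http://10.0.0.2:8080", "http://10.0.0.3:8080"]
_rotation = itertools.cycle(INSTANCES)

def call_service(path: str) -> bytes:
    """Send the request to the next instance in the rotation."""
    base = next(_rotation)
    with urllib.request.urlopen(base + path) as resp:
        return resp.read()

# Each call goes to a different instance in turn, e.g.:
# call_service("/api/orders/42")
```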

read more