The Evolution of Cloud Connectivity By @FrankGreco | @CloudExpo #Cloud

In case you missed it, the first phase of cloud computing has left the building. Thousands of companies are in the cloud. Practically all organizations, regardless of size, already have production applications in a public, off-premises cloud or a private cloud. Yep. Been there, done that.

And the vast majority of these applications use the classic “SaaS-style” public cloud model. Someone develops a useful service and hosts it on Amazon Web Services (AWS), Microsoft Azure, IBM Cloud Marketplace, Google Cloud Platform (GCP) or one of several other cloud vendors. Accessing this external service is typically done via a well-defined API, usually a simple REST call (or a convenient library wrapper around one). The request originates from a web browser, a native app on a mobile device or some server-side application and traverses the web. Using only port 443 or 80, it connects through a series of firewalls to the actual service running in the external cloud environment. The request is serviced by a process running in the service provider’s computing environment, which returns a result to the client application.
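As a concrete illustration, the whole interaction usually boils down to a single HTTPS call. The sketch below is a minimal Python example; the endpoint, authentication header and payload are hypothetical placeholders rather than any particular vendor’s API.

```python
import requests  # third-party HTTP client (pip install requests)

# Hypothetical SaaS endpoint; a real provider publishes its own URL and auth scheme.
SERVICE_URL = "https://api.example-cloud-service.com/v1/orders"
API_KEY = "replace-with-your-key"

def create_order(item_id: str, quantity: int) -> dict:
    """Send a JSON request over HTTPS (port 443) and return the parsed result."""
    response = requests.post(
        SERVICE_URL,
        json={"itemId": item_id, "quantity": quantity},
        headers={"Authorization": f"Bearer {API_KEY}"},
        timeout=10,
    )
    response.raise_for_status()  # surface any 4xx/5xx error from the provider
    return response.json()

if __name__ == "__main__":
    print(create_order("sku-123", 2))
```

Whether the caller is a browser, a mobile app or another server, the shape of the exchange is the same: a request to a well-defined endpoint over port 443 (or 80), and a structured response back.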

The state of DevOps in 2015: It’s strongly defined…but not the same definition

A curious finding appears in the latest DevOps market assessment from Gleanster Research and Delphix: stakeholders believe DevOps is strongly defined, yet their definitions vary wildly from one to the next.

75% of the more than 2000 survey respondents agreed DevOps was either ‘strongly’ or ‘somewhat’ defined, yet when asked to pick an option, the consensus was less than unanimous.

The most popular definition (84%) for DevOps was “developers and system administrators collaborating to ease the transition between development and production”, while 69% plumped for “using infrastructure automation to facilitate self-service provisioning of infrastructure by development teams” and 60% opted for “evolving operations to meet the demands of agile software development teams.”

It is worth noting each definition offers plenty of overlap, and evidently more than one option could be chosen. Yet the report argues DevOps is ‘generally ill-defined’ and adds: “Some organisations view DevOps as an integral part of the day-to-day activities of the entire IT department. However, organisations with a stronger definition of DevOps tend to empower dedicated teams who are exclusively responsible for rolling out DevOps initiatives – and these are the organisations that see the greatest DevOps success.”

The survey also uncovered key drivers for DevOps initiatives. Faster delivery of software (66%) was the most popular choice, followed by identifying bugs earlier (44%) and delivering software more frequently (43%). Four in five respondents said they were under strong pressure to deliver higher quality software more quickly and with fewer resources, while spotting defects both during and after production.

Similarly, there is a discrepancy with regards to who leads DevOps teams. According to the survey, DevOps leaders argue development teams lead the process, while practitioners are more likely to note operations teams take charge.

According to a recent analysis from Rackspace, the number of job roles requiring DevOps expertise continues to rise, with postings for permanent roles up 57% year on year, although at a slower pace than in 2014, when postings increased 351%.

The Emerging Technology Landscape: The New, the Hot, and the Unconventional

I recently did a video to discuss the emerging technology landscape around three primary areas:

  1. Revamping traditional customer-owned infrastructure
  2. Mobility
  3. Security

On the traditional side, hyper-converged infrastructure is huge. Players including SimpliVity, Nutanix and VMware with EVO:RAIL will be making a big impact over the next 12 months. We’re also seeing a lot of traction with our customer base around what they should move to a cloud environment. How do you rationalize your application portfolio? What about the people and process piece? How are you going to operationalize the technology you implement? How do you get your teams trained to be able to handle new challenges? This is where GreenPages’ Transformation Services really comes into play.

As far as mobility goes, security and access are huge here. Organizations need to look into segmenting mobile devices. For example, cutting a phone in half – having a personal side and a business side. Employees can keep personal apps and games on one side and reserve the other for business-critical applications. The business side can be locked down, and if an employee leaves, it can be wiped while leaving the personal side of the phone alone.

Enjoy the video & please reach out with any questions or comments!

Download eBook – The Evolution of Your Corporate IT Department

By Chris Ward, CTO, LogicsOne

SDN: How software has (re)defined networking

By Andrea Knoblauch

Over the last few years we’ve seen just about every part of the data centre move towards virtualisation and software. First we virtualised desktops, then storage, then even our security tools. So when the idea of software defined networking (SDN) started being floated around, it wasn’t a big surprise. But what exactly is it?

SDN’s early roots can be likened to MPLS, where we saw the decoupling of the network control and forwarding planes. It’s also one of the key features of Wi-Fi, one of the most prevalent technologies today. But SDN isn’t just the decoupling of the network control plane from the forwarding plane; it’s really about providing programmatic interfaces into network equipment, regardless of whether those planes are coupled or not.

By creating APIs into these devices, we can replace manual interfaces, use software to automate tasks such as configuration and policy management, and enable the network to respond dynamically to application requirements. A pool of network devices can now be treated as a single entity, which makes it easier to control network flows with protocols such as OpenFlow.
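For instance, instead of logging into each switch to configure it by hand, an operator can push a policy to a controller’s northbound API and let the controller program the devices underneath. The sketch below is a minimal Python illustration of that idea; the controller URL, path and rule schema are hypothetical stand-ins, since real controllers (OpenDaylight, ONOS and others) each define their own northbound APIs.

```python
import requests

# Hypothetical SDN controller; real controllers expose their own northbound REST schemas.
CONTROLLER = "https://sdn-controller.example.local:8443"

def push_flow_rule(switch_id: str, dst_subnet: str, out_port: int) -> None:
    """Install a simple forwarding rule: traffic to dst_subnet leaves via out_port."""
    rule = {
        "switch": switch_id,
        "match": {"ipv4_dst": dst_subnet},
        "actions": [{"type": "OUTPUT", "port": out_port}],
        "priority": 100,
    }
    resp = requests.post(f"{CONTROLLER}/flows", json=rule, timeout=5)
    resp.raise_for_status()

# Treat the fabric as a single programmable entity: apply one policy to every switch.
for switch in ["sw-edge-1", "sw-edge-2", "sw-core-1"]:
    push_flow_rule(switch, "10.0.5.0/24", out_port=3)
```

The point is less the specific payload than the workflow: one API call replaces a round of per-device CLI sessions.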

So what does this mean for network folks? Well, first and foremost, SDN brings the promise of centralising and simplifying how we control networks. It can make networks programmable, and thus more agile when it comes to automation or enforcing policies. It also means that, with software at the heart of networking, the network can keep up with virtualisation and cloud computing workflows.

Like a larger control console, SDN brings centralised intelligence that makes it easy to see the network end to end, make better overall decisions, and update the network as a whole rather than in segments.

Security folks will also benefit from advancements in SDN, which should give them more insight into network issues and let them respond to incidents more quickly.

The jury is still out on whether SDN is ready for mainstream adoption. Many startups are still driving this market, but the Open Networking Foundation (ONF), whose board includes members from Microsoft, Yahoo, Facebook, Google and several other telecoms companies and investors, is pushing for widespread adoption.

It’s yet to be seen what the true benefits of software defined networks will be, but the ability to adapt the network to different loads, to prioritise or reroute traffic, and of course to see a better overall picture is reason enough for many organisations to start investigating this new methodology.

The ability to drop SDN into parts of your network and expand it as you swap out legacy gear will also win strong supporters who are looking for ways to reduce the cost of gathering traffic and to grow their networks while showing a return on investment.

The post SDN: How Software has Re(Defined) Networking appeared first on Cloud Best Practices.

The Rise and Fall of SANTap | @CloudExpo #Cloud

I am not sure how many people remember Cisco SANTap. About ten years ago, Cisco introduced a data tapping mechanism in the MDS 9000 fibre channel switches. The idea was to allow the data path to be “tapped” at-will. Tapping in this case meant using a mechanism in the switch to split the data being written from client hosts to the storage, allowing the identical “split” data to be routed through a second, separate path.
SANTap therefore allowed a copy of the data to be seamlessly “mirrored” through the switch and subsequently used by other applications for multiple purposes (especially for backup). It facilitated real-time protection of critical data, and allowed advanced functions such as migration, snapshots, etc.
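In other words, the tap behaved like a transparent write splitter sitting in the I/O path. The toy sketch below (ordinary Python file I/O, not the actual MDS switch mechanism or any Cisco API) illustrates the core idea: every write headed for the primary target is duplicated to a second destination that a backup or migration tool can consume.

```python
class WriteSplitter:
    """Toy illustration of SANTap-style write splitting: each write sent to the
    primary target is also copied to a mirror target, without disturbing the
    original I/O path."""

    def __init__(self, primary, mirror):
        self.primary = primary  # stands in for the production storage target
        self.mirror = mirror    # stands in for the tap consumer (backup, migration, etc.)

    def write(self, data: bytes) -> None:
        self.primary.write(data)  # the client's write completes as usual
        self.mirror.write(data)   # an identical copy flows down the second path

with open("primary.bin", "wb") as primary, open("mirror.bin", "wb") as mirror:
    tap = WriteSplitter(primary, mirror)
    tap.write(b"block 0001")  # both files now contain identical data
```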

Intel partners with OHSU in using cloud, big data to cure cancer

Intel is working with OHSU to develop a secure, federated cloud service for healthcare practitioners treating cancer

Intel is testing a cloud-based platform as a service, developed in conjunction with Oregon Health & Science University (OHSU), that can help diagnose and treat individuals for cancer based on their genetic predispositions.

The organisations want to develop a cloud service that healthcare practitioners can use to draw on a range of data, including genetic information and details of a patient’s environment and lifestyle, to deliver tailored cancer treatment plans quickly to those in need.

“The Collaborative Cancer Cloud is a precision medicine analytics platform that allows institutions to securely share patient genomic, imaging and clinical data for potentially lifesaving discoveries. It will enable large amounts of data from sites all around the world to be analyzed in a distributed way, while preserving the privacy and security of that patient data at each site,” explained Eric Dishman, director of proactive health research at Intel.

“The end goal is to empower researchers and doctors to help patients receive a diagnosis based on their genome and potentially arm clinicians with the data needed for a targeted treatment plan. By 2020, we envision this happening in 24 hours — All in One Day. The focus is to help cancer centres worldwide—and eventually centers for other diseases—securely share their private clinical and research data with one another to generate larger datasets to benefit research and inform the specific treatment of their individual patients.”
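The pattern Dishman describes, analysing data where it lives and sharing only the results, can be sketched in a few lines: each site computes a local summary of its own records and only those aggregates leave the site. The example below is a conceptual illustration with made-up fields and values, not Intel’s or OHSU’s implementation.

```python
# Toy federated aggregation: raw patient records stay at each institution;
# only summary counts are sent to the coordinator.

site_a = [{"variant": "BRCA1", "responded": True}, {"variant": "BRCA1", "responded": False}]
site_b = [{"variant": "BRCA1", "responded": True}, {"variant": "TP53", "responded": True}]

def local_summary(records, variant):
    """Runs inside each institution; raw records never leave the site."""
    matching = [r for r in records if r["variant"] == variant]
    return {"n": len(matching), "responders": sum(r["responded"] for r in matching)}

def combine(summaries):
    """Runs at the coordinator; it only ever sees aggregate counts."""
    n = sum(s["n"] for s in summaries)
    responders = sum(s["responders"] for s in summaries)
    return {"patients": n, "response_rate": responders / n if n else None}

print(combine([local_summary(site_a, "BRCA1"), local_summary(site_b, "BRCA1")]))
# {'patients': 3, 'response_rate': 0.666...}
```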

Initially, Intel and the Knight Cancer Institute at Oregon Health & Science University (OHSU) will launch the Collaborative Cancer Cloud, but the organisations expect two more institutions will be on board by 2016.

From there, Intel said, the organisations hope to federate the cloud service with other healthcare service providers, and open it up for use to treat other diseases like Alzheimer’s.

“In the same timeframe, we also intend to deliver open source code contributions to ensure the broadest developer base possible is working on delivering interoperable solutions. Open sourcing this code will drive both interoperability across different clouds, and allow analytics across a broader set of data – resulting in better insights for personalized care,” Dishman said.

Thames Tideway Tunnel taps Accenture in NetSuite deal

Accenture claims this is the first implementation of a multi-tenant cloud-based ERP system at a regulated utility in the UK

Thames Tideway Tunnel, the project company set up to manage London’s “super-sewer” overflow reduction project, has deployed NetSuite’s cloud-based ERP platform in a bid to reduce costs and drive flexibility in its financial and project planning operations.

The company, which is due to start construction on a super sewer system to tackle sewage overflowing into the River Thames, said it required a flexible, low-cost IT systems implementation to support its core financial and project planning operations.

It enlisted Accenture to help deploy NetSuite OneWorld across the organisation.

“An agile and intuitive back-office IT system is critical for effective management and delivery of large-scale infrastructure projects,” said Robin Johns, head of Information Systems at Thames Tideway Tunnel.

“We selected Accenture to help us with this implementation based on its extensive experience with NetSuite cloud ERP technology and complex system integrations. We also chose Accenture for its ability to offer practical solutions to deliver an IT platform that will help facilitate financing and construction of the super sewer, while keeping costs down for customers,” Johns said.

Maureen Costello, managing director of Accenture’s utilities practice in the UK and Ireland, said this is the first implementation of a multi-tenant cloud-based ERP system at a regulated utility in the UK.

It “demonstrates the company’s innovative approach and commitment to efficiently manage the delivery of this capital project,” Costello said.

Basho, Cisco integrate Riak KV and Apache Mesos to strengthen IoT automation

Basho and Cisco have integrated Riak and Mesos

Cisco and Basho have successfully demoed the Riak key value store running on Apache Mesos, an open source technology that makes running diverse, complex distributed applications and workloads easier.

Basho helped create and commercialise the Riak NoSQL database and worked with Cisco to pair Mesos with Riak’s own automation and orchestration technology, which the companies said would help support next gen big data and internet of things (IoT) workloads.

“Enabling Riak KV with Mesos on Intercloud, we can seamlessly and efficiently manage the cloud resources required by a globally scalable NoSQL database, allowing us to provide the back-end for large-scale data processing, web, mobile and Internet-of-Things applications,” said Ken Owens, chief technology officer for Cisco Intercloud Services.

“We’re making it easier for customers to develop and deploy highly complex, distributed applications for big data and IoT. This integration will accelerate developers’ ability to create innovative new cloud services for the Intercloud.”

Apache Mesos provides resource scheduling for workloads spread across distributed – and critically, heterogeneous – environments, which is why it’s emerging as a fairly important tool for IoT developers.
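Conceptually, that scheduling model is offer-based: agents advertise their spare CPU and memory, and a framework decides which tasks to launch against which offers. The toy matcher below sketches that placement loop in plain Python; it is not the Mesos (or Riak) API, just an illustration of the idea.

```python
# Toy offer-based scheduling: place each task on the first agent whose offer fits.

offers = [
    {"agent": "agent-1", "cpus": 2.0, "mem_mb": 4096},
    {"agent": "agent-2", "cpus": 8.0, "mem_mb": 16384},
]
tasks = [
    {"name": "riak-node-1", "cpus": 4.0, "mem_mb": 8192},
    {"name": "riak-node-2", "cpus": 1.0, "mem_mb": 2048},
]

def place(tasks, offers):
    """Greedy placement: assign each task to the first offer with enough headroom."""
    placements = {}
    for task in tasks:
        for offer in offers:
            if offer["cpus"] >= task["cpus"] and offer["mem_mb"] >= task["mem_mb"]:
                placements[task["name"]] = offer["agent"]
                offer["cpus"] -= task["cpus"]      # consume the offered resources
                offer["mem_mb"] -= task["mem_mb"]
                break
    return placements

print(place(tasks, offers))  # {'riak-node-1': 'agent-2', 'riak-node-2': 'agent-1'}
```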

So far Cisco and Basho have only integrated Basho’s commercial Riak offering, Riak KV, with Mesos, but Basho is developing an open source integration with Mesos that will also be commercialized around a supported enterprise offering.

“By adding the distributed scheduler from Mesos, we’re effectively taking the infrastructure component away from the equation,” Adam Wray, Basho’s chief executive officer told BCN. “Now you don’t have to worry about the availability of servers – you literally have an on-demand model with Mesos, so people can scale up and down based on the workloads for any number of datacentres.”

“This is what true integration of a distributed data tier with a distributed infrastructure tier looks like, being applied at an enterprise scale.”

Wray added that while the current deal with Cisco isn’t a reselling agreement, we can expect Basho to be talking about large OEM deals in the future, especially as IoT picks up.

Everything You Need to Know About Parallels Desktop 11

By now, you’ve probably digested the news that Parallels Desktop 11 for Mac (including the Business Edition and the all-new Pro Edition) has arrived. Now, it’s time to get down to brass tacks—here’s everything you need to know about Parallels Desktop 11, including why you should upgrade from a previous version. What is Parallels Desktop […]

The post Everything You Need to Know About Parallels Desktop 11 appeared first on Parallels Blog.