Big Data and Cloud Computing Service Set to Improve Healthcare

Suvro Ghosh, founder and CEO of Texas-based ClinZen LLC, has developed a cloud application based on Big Data that will help facilitate healthcare access for those living in the densely populated Indian city of Kolkata. The new platform, named 24by7.Care, also aims to connect those living in rural areas with those in the metropolis.

Ghosh told the media: “Given Kolkata’s dense population and the plethora of problems regarding accessibility to healthcare at any given time, we need to build a framework based on the latest technologies such as cloud computing and Big Data. The 24by7.Care platform is a database-dependent one and we are currently building a database.”

Big data refers to extremely large data sets that may be analyzed computationally to reveal patterns, trends, and associations, especially relating to human behavior and interactions. A platform combining big data and cloud computing could aid Kolkata’s healthcare system by serving needs such as booking a doctor or admitting a patient to a hospital.


This new healthcare system will initially launch in Kolkata and will be available on every computing platform. Cloud computing allows computing facilities to be accessed from anywhere over the network on a multitude of devices, ranging from smartphones to laptops and desktop computers. The system increases the accessibility of healthcare information and should therefore improve the system currently in place in Kolkata.

This new service is set to be implemented in three months.


Box announces strong financial figures, raises forecast


Enterprise cloud storage provider Box has announced first quarter revenue figures of $65.6 million, a 45% increase year over year.

The company noted billings of $69.8m for the first quarter of fiscal 2016 and a non-GAAP operating loss of $32.6m – 50% of revenue. This contrasts with the first quarter of fiscal 2015, when the non-GAAP operating loss was 69% of revenue. GAAP operating loss for this quarter was 71% of revenue, compared to the previous year’s 83%.

Box also gave an update on its customer numbers: it added 2,000 customers in the quarter for a total of more than 47,000 globally, grew its paying customer base to include more than 51% of the Fortune 500, and surpassed 37 million registered users.

According to Reuters, the company raised its full-year forecast to $286m-$290m, up from $281m-$285m. Shares rose over 8% in extended trading on Wednesday.

Dylan Smith, Box co-founder and CFO, said in a statement: “We are proud to have achieved revenue growth of 45% year over year, driven by our continued success moving up market and closing more enterprise deals. While we continue to focus on investing in technology innovation and growth, we also remain committed to achieving positive free cash flow. Our Q1 results show the progress we have made toward this milestone as we demonstrated significant improvement in our operating cash flow.”

Box has made a series of interesting announcements in recent months, ranging from the customer win with the US Department of Justice to the appointment of Sonny Hashmi, former CIO of the General Services Administration, to help lead the company’s efforts in the federal IT space. It’s certainly early days, but there are encouraging signs for the California-based firm.

How vendors are building data centres to protect customers and how this affects compliance


European service providers are currently busy creating multiple data centres, yet in the effort to ensure customers can access their data in the locations they are looking for, many are at risk of having to return to manual provisioning. This is a pitfall providers need to avoid and, to do so, they will need to consider how they can offer self-service to their customers, with the choice of data location built in.

Offering a choice of VM template, but no choice as to where to deploy it, means that data sovereignty not only goes out the window for cloud solutions providers, but can become a serious issue. This is where best-of-breed cloud orchestration solutions come into play. By offering customers both the choice of VM template and where to deploy it, they allow cloud providers to create a range of more personal and flexible services for their end customers.

But these end customers need more than just the choice of location; they require a full set of reports across all the cloud infrastructures they use. These requirements are driving many customers to consider their own private, hybrid cloud platforms in an attempt to gain the control they demand. Not only does this allow them to choose whether to set up a virtualised infrastructure in their data centre of choice, but also to use public clouds when appropriate. Alongside this, a useful by-product is support for DevOps as a strategy – one hybrid cloud platform for all clouds means one API for all clouds too!
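To make the “one API for all clouds” point concrete, here is a minimal sketch using Apache Libcloud, one open-source option for a single multi-cloud interface; the article does not name a specific tool, and the providers, credentials and regions below are placeholders rather than real configuration.

```python
# Minimal sketch (assumed tooling: Apache Libcloud) of "one API for all clouds".
# Providers, keys and endpoints below are illustrative placeholders.
from libcloud.compute.types import Provider
from libcloud.compute.providers import get_driver

def list_all_nodes(accounts):
    """Enumerate VMs across several clouds through a single driver interface."""
    nodes = []
    for provider, args, kwargs in accounts:
        driver_cls = get_driver(provider)       # same factory for every cloud
        driver = driver_cls(*args, **kwargs)    # only the credentials differ
        nodes.extend(driver.list_nodes())       # identical call on every driver
    return nodes

if __name__ == "__main__":
    accounts = [
        # A public cloud and a private (OpenStack-based) cloud behind one API.
        (Provider.EC2, ("AWS_KEY_ID", "AWS_SECRET"), {"region": "eu-west-1"}),
        (Provider.OPENSTACK, ("demo", "secret"),
         {"ex_force_auth_url": "https://keystone.internal:5000",
          "ex_force_auth_version": "3.x_password"}),
    ]
    for node in list_all_nodes(accounts):
        print(node.name, node.state)
```

The design point is that the hybrid platform, not the application, absorbs the per-provider differences, so workloads can be reported on and moved without rewriting tooling.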

How can organisations successfully negotiate the challenges of cloud computing?

Cloud computing provides organisations with many benefits, for example the potential to reduce costs while increasing value through more efficient use of IT resources, and the ability to increase capacity while reducing the need to purchase capital equipment (servers, networking equipment, etc.). Both allow enterprises to push that cash back into the core business.

It’s important for businesses to have a clear understanding around the use of cloud computing technology in order to implement an effective cloud strategy. Organisations must review their existing enterprise IT assets and ensure there is potential for cloud computing to bring value, which is specific to an organisation’s business processes and their existing way of doing IT. Most enterprises will find that cloud computing can solve many existing efficiency issues and moving to cloud computing will provide clear and measurable value.

Arguably however, the most vital decision an organisation has to make on the journey to successfully navigating cloud computing is to decide whether hybrid cloud is a key part of an IT strategy, or whether IT should be put in the hands of a trusted provider.

Once this decision has been made, organisations then need to look at their IT systems and decide which are truly critical and, for the ones that are, what the compliance requirements are. It’s at this point that an organisation should be able to decide which would be more beneficial: running a private hybrid cloud platform, or outsourcing to a provider offering a true hybrid cloud solution.

An organisation must remember to ask itself, however: how can it get data in and out of its chosen cloud or clouds? This should be a vital part of the thought process when making the final decision on a desired cloud platform.

Real World Example: Deploying VMware NSX in the Financial Sector

I recently finished up a project implementing VMware’s NSX and wanted to take a minute to recap my experience. The client I worked with provides call center services in the financial sector. They have to be able to securely access systems that have the ability to see credit card information along with other personal, sensitive information.

The customer is building out new facilities to host their primary, PCI-related applications. In this environment, they have to provide the highest levels of security while delivering high-performing networking services. To meet those requirements, they purchased new infrastructure: blade center systems, networking infrastructure (Nexus 5672s, Nexus 6000s, Nexus 7710s, Juniper SRXs, F5 load balancers, etc.), software licensing, and more.

They came across the need to purchase additional pairs of F5 load balancers but were up against their budget. When this happened, the Director / VP in charge of the project evaluated VMware’s NSX technology. After some initial discussions, he realized that NSX could not only provide the type of security the environment needed to drive higher efficiencies but could also provide some of the general networking services he was looking for.

Previous network designs called for complete isolation of some workloads and, to achieve this, trusted traffic had to traverse a separate pair of distribution/access layer switches to reach external networks. This design also made it necessary to acquire separate F5 load balancers, as specific traffic was not allowed to commingle on the same physical infrastructure due to the way the security team wanted to steer trusted and untrusted traffic. The team was therefore required to purchase twice the hardware: separate Nexus 6000s and separate F5 load balancers.

Because of the NSX Distributed Firewall capabilities, security teams can place required rules and policies closer to applications than was previously achievable. As a result, the network design changed, and infrastructure requirements previously deemed necessary could be dropped. The ability to stop untrusted traffic before it ever reaches a logical or physical wire gave the team the opportunity to converge more of their networking equipment, eliminating the need for separate Nexus 6000s. In addition, because the NSX Edge Services Gateway can provide network load balancing, they were no longer required to purchase additional physical equipment for this service. With the budget they put towards NSX licensing, they got all the security and load-balancing services they were looking for and also put money back into their budget.

The Engagement:

Over the span of approximately one month, the security team, networking team, server / virtualization team, and an auditing team worked together in designing what the NSX solution needed to achieve and how it would be implemented. I believe this to be an important aspect of NSX projects because of the misconception that the server / virtualization teams are trying to take over everything. Without each team, this project would have been a disaster.

As requirements were put forth, we built out NSX in building blocks. First, we identified that we would use VXLAN to achieve the desired efficiencies: eliminating VLAN sprawl, segregating trusted traffic at the logical, software layer, and making disaster recovery designs easier by reusing the same IP address space. Once networks and routing were implemented, we were able to test connectivity from various sites while meeting all of the security team’s requirements.

The next item was implementing NSX security, which required new ways of thinking for most teams. With VMware NSX, customers can manage security based on vCenter objects, which provides more flexibility. We walked through what each application contained, what communications were necessary and what policies were required, and from that we built dynamic and static Security Groups. We then built Security Policies (some generic enough to apply to a majority of similar applications, some application-specific) and reused these policies against various Security Groups, speeding the deployment of application security. We applied weights to the policies to ensure application-specific policies took precedence over the generic ones. In addition to Netflow, we enabled Flow Monitoring so the networking and security teams could watch traffic patterns within the NSX environment.
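The write-up above doesn’t show the NSX API itself, so rather than guess at its endpoints, here is a deliberately simplified Python model (not NSX code) of the two ideas just described: dynamic Security Group membership driven by object attributes, and weighted Security Policies so that application-specific rules take precedence over generic ones.

```python
# Toy model (not the NSX API) of dynamic grouping and weighted policies.
from dataclasses import dataclass, field

@dataclass
class VM:
    name: str
    tags: set = field(default_factory=set)   # stand-in for vCenter object attributes

@dataclass
class SecurityGroup:
    name: str
    required_tag: str                          # dynamic membership criterion
    def members(self, vms):
        return [vm for vm in vms if self.required_tag in vm.tags]

@dataclass
class SecurityPolicy:
    name: str
    weight: int                                # higher weight = higher precedence
    rules: list

def effective_order(policies):
    """Order policies the way a weight-based system would apply them."""
    return sorted(policies, key=lambda p: p.weight, reverse=True)

vms = [VM("web-01", {"pci", "web"}), VM("db-01", {"pci", "db"})]
web_group = SecurityGroup("SG-Web", required_tag="web")
generic = SecurityPolicy("baseline-mgmt", weight=100, rules=["allow mgmt -> any:22"])
web_tier = SecurityPolicy("web-tier", weight=500,
                          rules=["allow any -> web:443", "deny any -> web:any"])

print([vm.name for vm in web_group.members(vms)])                 # ['web-01']
print([p.name for p in effective_order([generic, web_tier])])     # web-tier first
```

The point of the model is simply that group membership is evaluated from attributes rather than IP addresses, and that reusable generic policies coexist with higher-weighted application-specific ones.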

All in all, this was a very successful project. Our client can now better secure their internal applications as well as better secure sensitive customer data.

Remember: NSX can be mislabeled as a server team product, but the network team and the security team need to know how it works and need to be able to implement it.

Are you interested in learning more about how GreenPages can help with similar projects? Email us at socialmedia@greenpages.com

 

By Drew Kimmelman, Consultant

Food retail, robotics, cloud and the Internet of Things

Ocado is developing a white-label grocery delivery service

With a varied and fast-moving supply chain, loads of stock moving quickly through warehouses, delivery trucks and stores, and an increasingly digital mandate, the food retail sector is unlike any other retail segment. Paul Clarke, director of technology at Ocado, a leading online food retailer, explains how the cloud, robotics, and the Internet of Things are increasingly at the heart of everything the company does.

Ocado started 13 years ago as an online supermarket where consumers could quickly and easily order food goods. It does not own or operate any brick-and-mortar stores, though it effectively competes with all other food retailers, in some ways now more than ever because of how supermarkets have evolved in the UK. Most of them offer online ordering and food delivery services.

But in 2013 the company struck a £216m deal with Morrisons that would see Ocado effectively operate as the Morrisons online food store – a shift from its previous strategy of offering a standalone end-to-end grocery service with its own brand on the front end, and a move that would become central to its growth strategy going forward. The day the Morrisons platform went live in early 2014, the company set to work re-platforming the Ocado service and turning it into the Ocado Smart Platform (OSP), a white-label end-to-end grocery service that can be deployed by food retailers globally. Clarke was fairly tight-lipped about some of the details for commercial reasons, but suggested “there isn’t a continent where the company is not currently in discussions” with a major food retailer to deliver OSP.

The central idea behind this is that standing up a grocery delivery service – the technical infrastructure as well as support services – is hugely expensive for food retailers and involves lots of technical integration, so why not simply deploy a white label end-to-end service that will still retain the branding of said retailer but offer all the benefits?

Paul Clarke is speaking at the Cloud World Forum in London June 24-25. Click here to register!

“In new territories you don’t need the size of facilities that we have here in the Midlands. For instance, our site in the Midlands costs over £230m, and that is fine for the UK which has an established online grocery business and our customer base, but it wouldn’t fit well in a new territory where you’re starting from scratch, nor is there the willingness to spend such sums,” he explains.

The food delivery service operates in a hub-and-spoke model. The cloud service being developed by Ocado connects the ‘spokes’, smaller food depots (which could even be large food delivery trucks) to customer fulfilment centres, which are larger warehouses that house the majority of the stock (the ‘hub’).

The company is developing and hosting the service on a combination of AWS and Google’s cloud platforms – for the compute and data side, respectively.

“The breadth and depth of our estate is huge. You have robotics systems, vision systems, simulation systems, data science applications, and the number of different kinds of use cases we’re putting in the cloud is significant. It’s a microservices architecture that we’re building with hundreds of different microservices. A lot of emphasis is being put on security through design, and robust APIs so it can be integrated with third party products – it’s an end-to-end solution but many of those incumbents will have other supply chain or ERP solutions and will want to integrate it with those.”

AWS and Google complement each other well, he says. “We’re using most things that both of those companies have in their toolbox; there’s probably not much that we’re not using there.”

The warehousing element, including the data systems, will run on a private cloud in the actual product warehouses, so low-latency, real-time control systems will run in the private cloud, but pretty much everything else will run in the public cloud.

The company is also looking at technologies like OpenStack, Apache Mesos and CoreOS because it wants to run as many workloads as possible in Linux containers; they are more portable than VMs, and because of the variation between the regions (in legislation and performance) where it will operate, the company may have to change whether it deploys certain workloads in a public or private cloud quite quickly.
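As a rough illustration of that portability argument (an assumption-laden sketch, not Ocado’s tooling), the Docker SDK for Python can point the same, unchanged image at different Docker hosts – say a public-cloud VM and an on-premises machine. The host URLs and image name below are placeholders.

```python
# Sketch: the same container image started unchanged on different Docker hosts.
# Host URLs and the image name are placeholders, not real infrastructure.
import docker

def run_everywhere(image, hosts):
    for name, base_url in hosts.items():
        client = docker.DockerClient(base_url=base_url)    # one client per environment
        container = client.containers.run(image, detach=True)
        print(f"{name}: started {container.short_id} from {image}")

if __name__ == "__main__":
    hosts = {
        "public-cloud": "tcp://docker-host.example.com:2376",   # placeholder endpoint
        "private-cloud": "unix://var/run/docker.sock",          # local daemon
    }
    run_everywhere("nginx:stable", hosts)
```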

The Internet of Things and the Great Data Lakes

IoT is very important for the company in several areas. Its warehouses are like little IoT worlds all on their own, Clarke says, with lots of M2M, hundreds of kilometres of conveyor, and thousands of things on the move at any one time including automated cranes and robotics.

Then there’s all of the data the company collects from drivers for route optimisation and operational improvement – wheel speed, tyre pressure, road speed, engine revs, fuel consumption, cornering performance – all of which is fed back to the company in real time and used to track driver performance.
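To make that kind of telemetry concrete, here is a hypothetical sketch of publishing one such reading over MQTT with the paho-mqtt client; the broker address, topic and field names are assumptions for illustration, not Ocado’s actual pipeline.

```python
# Illustrative only: publish a single vehicle telemetry reading to an MQTT broker.
# Broker, topic and field names are assumptions, not a real fleet system.
import json
import time
import paho.mqtt.client as mqtt

reading = {
    "vehicle_id": "van-042",            # hypothetical identifier
    "timestamp": time.time(),
    "wheel_speed_kph": 62.5,
    "tyre_pressure_bar": 2.4,
    "engine_rpm": 2100,
    "fuel_l_per_100km": 9.8,
    "lateral_g": 0.21,                  # crude cornering-performance proxy
}

client = mqtt.Client()
client.connect("telemetry-broker.example.com", 1883)   # placeholder broker
client.publish("fleet/van-042/telemetry", json.dumps(reading), qos=1)
client.disconnect()
```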

There’s also a big role for wearables in those warehouses. Clarke says down the line wearables have the potential to help it improve safety and productivity (“we’re not there yet but there is so much potential.”)

But where IoT can have the biggest impact in food retail, and where it’s most underestimated, Clarke explains, is the customer element: “This is where many companies underestimate the scale of transformation IoT is going to bring, the intersection of IoT and smart machines. In our space we see that in terms of the smart home, smart appliances, smart packaging, it’s all very relevant. The customers living in this world are going to demand this kind of smartness from all the systems they use, so it’s going to raise the bar for all the mobile apps and services we build.”

“Predictive analytics are going to play a big part there, as will machine learning, to help them do their shop up in our case, or knowing what they want before they even have a clue themselves. IoT has a very important part to play in that in terms of delivering that kind of information to the customer to the extent that they wish to share it,” he says.

But challenges that straddle the legal, the technical and the cultural persist in this nascent space. One of them, largely technical and not insurmountable, is data management. The company has implemented a data lake built on Google BigQuery, publishing a log of pretty much every business event onto a backbone that it persists through that service, together with data exhaust from its warehouse logs, alerts, driver monitoring information, clickstream data and front-end supply chain information (captured at the point of order); it uses technologies like Dataflow and Hadoop for the number crunching.
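As a hedged sketch of what querying such an event log might look like – the project, dataset, table and schema here are hypothetical, not Ocado’s – the google-cloud-bigquery client lets analysts aggregate those business events with plain SQL.

```python
# Hypothetical example: summarise today's business events in a BigQuery data lake.
# Project, dataset, table and column names are illustrative assumptions.
from google.cloud import bigquery

client = bigquery.Client()  # uses default application credentials

sql = """
    SELECT event_type, COUNT(*) AS events
    FROM `my-project.event_lake.business_events`   -- hypothetical table
    WHERE DATE(event_time) = CURRENT_DATE()
    GROUP BY event_type
    ORDER BY events DESC
"""

for row in client.query(sql):                       # iterating waits for results
    print(row.event_type, row.events)
```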

Generally speaking, Clarke says, grocery is just fundamentally different to non-grocery retail in ways that have data-specific implications. “When you go buy stationery or a printer cartridge you usually buy one or two items. With grocery there can often be upwards of 50 items, there are multiple suppliers and multiple people involved, sometimes at different places, often on different devices and different checkouts. So stitching that order, that journey, together is a challenge from a data perspective in itself.”

Bigger challenges in the IoT arena, where more unanswered questions lie, include security and identity management, discoverability, data privacy and standards – or the lack thereof. These are the problems that aren’t so straightforward.

“A machine is going to have to have an identity. That whole identity management question for these devices is key and so far goes unanswered. It’s also linked to discoverability. How do you find out what the device functions are? Discovery is going to get far too complex for humans. You get into a train station these days and there are already 40 different Wi-Fi networks, and hundreds of Bluetooth devices visible. So the big question is: How do you curate this, on a much larger scale, for the IoT world?”

“The type of service that creates parameters around who you’re willing to talk to as a device, how much you’re willing to pay for communications, who you want to be masked from, and so forth – that’s going to be really key, as well as how you implement this so that you don’t make a mistake and share the wrong kinds of information with the wrong device. It’s core to the privacy issue.”

“The last piece is standardisation. How these devices talk to one another – or don’t – is going to be key. What is very exciting is the role that all the platforms like Intel Edison, Arduino, BeagleBone have played in lowering the barrier by providing amazing Lego with which to prototype, and in some cases build these systems; it has allowed so many people to get involved,” he concluded.

Food retail doesn’t have a large industry-specific app ecosystem, which in some ways has benefited a company like Ocado. And as it makes the transition away from being the sole vendor of its product towards being a platform business, Clarke said the company will inevitably have to develop some new capabilities, from sales to support and consultancy, which it didn’t previously depend so strongly upon. But its core development efforts will only accelerate as it ramps up to launch the platform. It has 610 developers and is looking to expand to 750 by January next year across its main development centre in Hatfield and two others in Poland, one of which is being set up at the moment.

“I see no reason why it has to stop there,” he concludes.

IBM releases tool to advance cloud app development on OpenPower, OpenStack

IBM has announced a service to help others develop and test OpenPower-based apps

IBM announced the launch of SuperVessel, an open access cloud service developed by the company’s China-based research outfit and designed for developing and testing cloud services based on the OpenPower architecture.

The service, developed by Beijing’s IBM Research and IBM Systems Labs, is open to business partners, application developers and university students for testing and piloting emerging applications that use deep analytics, machine learning and the Internet of Things.

The cloud service is based on the latest Power8 processors (with FPGAs and GPU-based acceleration) and uses OpenStack to orchestrate the underlying cloud resources. The SuperVessel service is sliced up into various “labs”, each focusing on a specific area, and is initially launching with four: Big Data, Internet of Things, Acceleration and Virtualization.

“With the SuperVessel open computing platform, students can experience cutting-edge technologies and turn their fancy ideas into reality. It also helps make our teaching content closer to real life,” said Tsinghua University faculty member Wei Xu. “We want to make better use of SuperVessel in many areas, such as on-line education.”

Terri Virnig, IBM Vice President of Power Ecosystem and Strategy said: “SuperVessel is a significant contribution by IBM Research and Development to OpenPower. Combining advanced technologies from IBM R&D labs and business partners, SuperVessel is becoming the industry’s leading OpenPower research and development environment. It is a way IBM commits to and supports OpenPower ecosystem development, talent growth and research innovation.”

The move is part of a broader effort to cultivate mindshare around IBM’s Power architecture, which it open-sourced two years ago; the company is positioning the architecture as an ideal platform for cloud and big data services. Since the launch of the OpenPower Foundation, the group tasked with coordinating development around Power, IBM has also been actively working with vendors and cloud service providers to mash up a range of open source technologies – for instance, getting OpenStack to work on OpenPower and Open Compute-based hardware.

Toronto real-estate developer, Honeywell partner on IoT for facilities

Honeywell is working with Menkes to deploy IoT systems and analytics in its facilities

Toronto-based real-estate firm Menkes Developments and industrial electronics giant Honeywell have announced a deal that will see the two combine Internet of Things sensors and cloud services to reduce energy and operational costs at one of the real-estate firm’s properties.

The companies will initially deploy a smart facilities system designed by Honeywell at the Telus Tower in Toronto, as well as a cloud-based analytics platform used to monitor and analyse facility performance data and offer recommendations to improve operations.

“We are committed to pushing the boundaries of smart buildings, identifying new methods to leverage connectivity and improve our facilities,” said Sonya Buikema, vice president, commercial property management, Menkes.

“Honeywell’s technology and services complement our philosophy, and expand the ways in which we’re able to drive performance and better serve our customers,” Buikema said.

The companies said they want to use the technologies to explore new opportunities for improving efficiency and environmental impact.

“Even the most advanced facilities will experience a gradual decrease in performance over time, and it can be difficult to identify and address those issues before they negatively impact the bottom line,” said John Rajchert, president of Honeywell Building Solutions. “Honeywell has the tools and expertise to make it easier for companies to not only know what is happening in their facilities, but to also take the appropriate actions to keep them operating at a high level.”

While facilities automation has been around for some time now, only recently have vendors like Honeywell begun selling insights-as-a-service through various cloud-based analytics platforms.

Facilities, particularly mixed-use spaces, are very complex to manage and often include a range of different systems (e.g. motion sensors, temperature control, air filtration, lighting and security systems), so squeezing out operational improvements and insights through the growing interconnection between these technologies and services is likely to play a big role in real estate moving forward.

Docker startup Rancher Labs secures $10m for container-based IaaS software

Rancher is developing container-based IaaS software

Rancher Labs, a startup developing Linux container-based infrastructure-as-a-service software, has secured $10m in a series A round of funding, which it said would be used to bolster its engineering and development efforts.

Rancher Labs, which was started by CloudStack founder Sheng Liang and Cloud.com founder Shannon Williams (Cloud.com was acquired by Citrix in 2011), offers infrastructure services purpose-built for containers. It has also developed a lightweight Linux OS called RancherOS. “We wanted to run Docker directly on top of the Linux Kernel, and have all user-space Linux services be distributed as Docker containers. By doing this, there would be no need to use a separate software package distribution mechanism for RancherOS itself,” the company explained.

The company said that as technologies like Docker become more popular in production, so do other requirements around networking (e.g. load balancing), monitoring, storage management, and the other infrastructure needed to stand up a reliable cloud workload.

“Containers are quickly becoming the de-facto large-scale production platform for application deployment,” Liang said.

“Our goal is to provide organizations with the tools needed to take full advantage of container technology. By developing storage and networking software purpose-built for containers, we are providing organizations with the best possible experience for running Docker in production.”

The company’s goal is to develop all of the infrastructure services necessary to give enterprises confidence in deploying containers in production at scale, and it plans to use the funding to accelerate its development and engineering efforts.

Jishnu Bhattacharjee, managing director at Nexus Venture Partners, one of the company’s investors, said: “Software containers have dramatically changed the way DevOps teams work, becoming an essential piece of today’s IT infrastructure. The team at Rancher Labs recognized the technology’s potential early on, along with the pain points associated with it.”

While the technologies and tools to support Linux containers are still young, there seems to be growing momentum around using them for production deployments; one of the things that makes them so attractive in the cloud world is their scalability, and the ability to drop them into almost any environment – whether bare metal or on a hypervisor.

Green America hits out at Amazon for its dirty cloud

Amazon has committed to bolstering its use of renewables, but Green America thinks it needs to go further

Not-for-profit environmental advocacy group Green America has launched a campaign to try to convince Amazon to reduce its carbon footprint and catch up with other large cloud incumbents’ green credentials.

Green America said Amazon is behind other datacentre operators – including some of its large competitors like Google, Apple and Facebook – in terms of its renewable energy use and reporting practices.

“Every day, tens of millions of consumers are watching movies, reading news articles, and posting to social media sites that all use Amazon Web Services. What they don’t realize is that by using Amazon Web Services they are contributing to climate change,” said Green America’s campaigns director Elizabeth O’Connell.

“Amazon needs to take action now to increase its use of renewables to 100 percent by 2020, so that consumers won’t have to choose between using the internet and protecting the planet,” O’Connell said.

Executive co-director Todd Larsen also commented on Amazon’s green credentials: “Amazon lags behind its competitors, such as Google and Microsoft, in using renewable energy for its cloud-based computer servers. Unlike most of its competitors, it also fails to publish a corporate responsibility or sustainability report, and it fails to disclose its emissions and impacts to the Carbon Disclosure Project.”

Amazon has recently taken strides towards making its datacentres greener. In November last year the company committed to using 100 per cent renewable energy for its global infrastructure, bowing to pressure from organisations like Greenpeace which have previously criticised the company’s reporting practices around its carbon footprint. But organisations like Green America still believe the company is way off the mark on its commitment.

Green America’s campaign is calling on Amazon to commit to full use of renewables for its datacentres by 2020; submit accurate and complete data to the Carbon Disclosure Project; and issue an annual sustainability report.

An Amazon spokesperson told BCN that the company and its customers are already showing environmental leadership by adopting cloud services in the first place.

“AWS customers have already shown environmental leadership by moving to cloud computing, which is inherently more environmentally friendly than traditional computing. Any analysis on the climate impact of a datacentre should take into consideration resource utilization and energy efficiency, in addition to power mix,” the spokesperson said.

“On average, AWS customers use 77 per cent fewer servers, 84 per cent less power, and utilize a 28 per cent cleaner power mix, for a total reduction in carbon emissions of 88 per cent from using the AWS Cloud instead of operating their own datacentres. We believe that our focus on resource utilization and energy efficiency, combined with our increasing use of renewable energy, will help our customers achieve their carbon reduction and sustainability goals. We will continue to provide updates of our progress on our AWS & Sustainable Energy page,” she added.
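Those percentages compose in a simple way. Assuming the quoted 88 per cent figure is derived from the power and power-mix numbers (an interpretation for illustration, not AWS’s published methodology), the arithmetic works out roughly as follows.

```python
# Illustrative composition of the quoted figures (an interpretation, not AWS data):
# 84% less power drawn, delivered from a 28% cleaner power mix.
power_used = 1 - 0.84          # fraction of on-premises power still consumed
carbon_intensity = 1 - 0.28    # relative carbon per unit of power
relative_emissions = power_used * carbon_intensity   # 0.16 * 0.72 = 0.1152
print(f"Estimated carbon reduction: {1 - relative_emissions:.0%}")   # ~88%
```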

Apple is Fixing UI Issue in iOS 9

It’s no secret that I’m an Apple zealot down to the very marrow of my bones. But that doesn’t mean that I never use a Windows app—after all, there are some times that you just have to use IE, and that is what Parallels Desktop is for. My zealous nature also doesn’t mean that I […]
