P2P RTC will impact the landscape of communications, shifting from traditional telephony-style communication models to OTT (Over-The-Top) cloud-assisted and PaaS (Platform as a Service) communication services. The P2P shift will touch many areas of our lives, from mobile communication and human-interactive web services to RTC and telephony infrastructure, user federation, security and privacy implications, business costs, and scalability.
In his session at @ThingsExpo, Robin Raymond, Chief Architect at Hookflash, will walk through the shifting landscape of traditional telephone and voice services to the modern P2P RTC era of OTT cloud assisted services.
Tech News Recap for the Week of 5/25/2015
Were you busy last week? Here’s a quick tech news recap of articles you may have missed from the week of 5/25/2015.
Microsoft’s discussions with Salesforce about a potential acquisition ended due to price issues, and EMC is buying Virtustream. The IRS reported that thieves stole tax info from 100,000 people. The Internet of Things market is projected to grow 19% in 2015, while the software-defined data center market is projected to hit $77 billion by 2020. Software-defined storage is reportedly gaining traction among businesses.
Tech News Recap
- Microsoft’s Salesforce Acquisition Talks Ended On Price Issues
- EMC buying Virtustream in $1.2B deal
- IRS Says Thieves Stole Tax Info from 100,000
- IoT Market Will Grow 19% in 2015, IDC Predicts
- Citrix Synergy 2015 Recap: Top News & Announcements
- Microsoft CEO bests the rest in tech leadership, says researcher
- Software-defined data center market to hit $77.18 billion by 2020
- Software-Defined Storage Gaining Traction Among Businesses
- Digital transformation can save the CIO
- CareFirst breach demonstrates how assumptions hurt healthcare
- What Data Breaches Now Cost and Why
- 6 Hottest IT jobs for new tech grads
- WordPress malware: Don’t let too-good-to-true deals infest your site
- Health providers lack IT infrastructure roadmap
- More than 3 billion people are now using the internet
- CEOs see IT as revenue creating more than cost saving
- 5 Steps to Secure Your Data After I.R.S Breach
- Convergence of the Internet of Things & Cloud
- Cloud and Mobility: Key Issues to Consider
- Stopping Data Breaches: Whose Job Is It Anyways?
The corporate IT department has evolved. Have you kept pace?
By Ben Stephenson, Emerging Media Specialist
Architecture for the ‘Internet of Things’ By @RedHatNews | @ThingsExpo [#IoT]
Explosive growth in connected devices. Enormous amounts of data for collection and analysis. Critical use of data for split-second decision making and actionable information. All three are factors in making the Internet of Things a reality. Yet, any one factor would have an IT organization pondering its infrastructure strategy.
How should your organization enhance its IT framework to enable an Internet of Things implementation? In his session at Internet of @ThingsExpo, James Kirkland, Chief Architect for the Internet of Things and Intelligent Systems at Red Hat, described how to revolutionize your architecture and create an integrated, interoperable, reliable system of thousands of devices. Using real-world examples, James discussed the transformative process taken by companies in moving from a two-tier to a three-tier topology for IoT implementations.
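By way of illustration only (this sketch is not from the session), the middle tier of such a three-tier topology is often an edge gateway that aggregates device traffic before it reaches the data centre. Here is a minimal Python sketch, assuming paho-mqtt 1.x and placeholder broker hostnames and topic names:

```python
import json
import paho.mqtt.client as mqtt  # assumes paho-mqtt 1.x

# Tier 2 of a three-tier IoT topology: an edge gateway sitting between
# devices (tier 1) and the cloud/data centre (tier 3). Devices publish to a
# local broker; the gateway filters and forwards readings to a cloud broker.
# "cloud-broker.example.com", "sensors/#" and "site-42/" are placeholders.

upstream = mqtt.Client(client_id="edge-gateway-upstream")
upstream.connect("cloud-broker.example.com", 1883)
upstream.loop_start()  # background network loop for the cloud link

def on_device_message(client, userdata, msg):
    reading = json.loads(msg.payload)
    # Filtering, batching or enrichment would happen here before forwarding.
    upstream.publish("site-42/" + msg.topic, json.dumps(reading), qos=1)

local = mqtt.Client(client_id="edge-gateway-local")
local.on_message = on_device_message
local.connect("localhost", 1883)   # local broker the devices publish to
local.subscribe("sensors/#", qos=1)
local.loop_forever()               # blocks; handles device traffic
```

The benefit of the extra tier in this sketch is that only the gateway needs a path to the data centre; devices keep publishing to the local broker even when the uplink is down.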
Tribeca Medical Center Issues Notice Regarding a Potential Privacy Issue Involving its Patients
NEW YORK, Dec. 9, 2014 /PRNewswire/ — We are providing this notice as part of Tribeca Medical Center’s commitment to patient privacy. We take patient privacy very seriously, and it is important to us that you are made fully aware of a potential privacy issue. We regret to inform you that on October 21, 2014, Tribeca Medical Center discovered a potential breach of protected health information. We are notifying individuals so you can take swift personal action, which, along with our efforts, may help to reduce or eliminate potential future risk.
Hybrid Cloud Infrastructure By @VicomComputer | @CloudExpo [#Cloud]
Move from reactive to proactive cloud management in a heterogeneous cloud infrastructure.
In his session at 16th Cloud Expo, Manoj Khabe, Innovative Solution-Focused Transformation Leader at Vicom Computer Services, Inc., will show how to replace a help desk-centric approach with an ITIL-based service model and service-centric CMDB that’s tightly integrated with an event and incident management platform.
Learn how to expand the scope of operations management to service management. He will also discuss how to help people work better and smarter and allow work to flow seamlessly across all domains within IT.
EMC to Acquire @Virtustream | @CloudExpo @EMCcorp [#DevOps #Containers #Microservices]
EMC Corporation on Tuesday announced it has entered into a definitive agreement to acquire privately held Virtustream. When the transaction closes, Virtustream will form EMC’s new managed cloud services business. The acquisition represents a transformational element of EMC’s strategy to help customers move all applications to cloud-based IT environments. With the addition of Virtustream, EMC completes the industry’s most comprehensive hybrid cloud portfolio to support all applications, all workloads and all cloud models.
Google’s IoT land grab: will Brillo help or hinder?
The long-rumoured Project Brillo, Google’s answer to the Internet of Things, was finally unveiled this week at the company’s annual I/O conference, and while the project shows promise, it comes at a time when device manufacturers and developers are increasingly being forced to choose between IoT ecosystems. Contrary to Google’s stated aims, Brillo could – for the same reason – hinder interoperability and choice in IoT rather than facilitate it.
It’s difficult to see Project Brillo as anything more than it really is – an attempt at grabbing highly sought-after ground in the IoT space. It has two key components. There’s Brillo, which is essentially a mini Android OS (made up of some of the services the fully fledged OS abstracts) which Google claims can run on tiny embeddable IP-connected devices (critically, the company hasn’t revealed what the minimum specs for those devices are); and Weave, a proprietary set of APIs that help developers manage the communications layer linking apps on mobile phones to sensors via the cloud.
Brillo will also come with metrics and crash reporting to help developers test and debug their IoT services.
The company claims the Weave programme, which will see manufacturers certify to run Brillo on their embeddable devices in much the same way Google works with handset makers to certify Android-based mobile devices, will help drive interoperability and quality – two things IoT desperately needs.
The challenge is that it’s not entirely clear how Google’s Brillo will deliver on either front. Full-whack Android is almost a case in point in itself. Despite having had more than a few years to mature, the Android ecosystem is still plagued by fragmentation, which produces its fair share of headaches for developers. As we recently alluded to in an article about Google trying to tackle this problem, developing for a multitude of platforms running Android can be a nightmare; an app running smoothly on an LG G3 can be prone to crashing on a Xiaomi or Sony device because of architectural or resource constraint differences.
This may be further complicated in the IoT space by the fact that embeddable software is, at least currently, much more difficult to upgrade than Android, likely leading to even more software heterogeneity than one currently finds in the Android ecosystem.
Another thing to consider is that most embeddable IoT devices currently in the market or planned for deployment are so computationally and power-constrained (particularly for industrial applications, which is where most IoT stuff is happening these days) that it’s unclear whether there will be a market for Brillo to tap into anytime soon. This isn’t really much use for developers – the cohort Google’s trying to go after.
For device manufacturers, the challenge will be whether building to Google’s specs will be worth the added cost of building alongside ARM, Arduino, Intel Edison or other IoT architectures. History would suggest that it’s always cheaper to build to one architecture rather than multiple (which is what’s driving standards development in this nascent space), and while Google tries to ease the pain of dealing with different manufacturers on the developer side by abstracting lower level functions through APIs, it could create a situation where manufacturers will have to choose which ecosystem they play in – leading to more fragmentation and as a result more frustration for developers. For developers, at least those unfamiliar with Android, it comes at the cost of being locked into a slew of proprietary (or at least Google-owned) technologies and APIs rather than open technologies that could – in a truly interoperable way – weave Brillo and non-Brillo devices with cloud services and mobile apps.
Don’t get me wrong – Google’s reasoning is sound. The Internet of Things is the cool new kid on the block with forecast revenues so vast they could make a grown man weep. There is a fleet of developers building apps and services for Android, and the company has great relationships with pretty much every silicon manufacturer on the planet. It seems reasonable to believe that the company which best captures the embeddable software space stands a pretty good chance at winning out at other levels of the IoT stack. But IoT craves interoperability and choice (and standards) more than anything, which even in the best of circumstances can create a tenuous relationship between developers and device manufacturers, where their respective needs stand in opposition. Unfortunately, it’s not quite clear whether Brillo or Weave will truly deliver on the needs of either camp.
Is your cloud provider HIPAA compliant? An 11 point checklist
Healthcare organisations frequently turn to managed service providers (MSPs) to deploy and manage private, hybrid or public cloud solutions. MSPs play a crucial role in ensuring that healthcare organisations maintain secure and HIPAA compliant infrastructure.
Although most MSPs offer the same basic services – cloud design, migration, and maintenance – the MSP’s security expertise and their ability to build compliant solutions on both private and public clouds can vary widely.
Hospitals, healthcare ISVs and SaaS providers need an MSP that meets and exceeds the administrative, technical, and physical safeguards established in the HIPAA Security Rule. The following criteria either must or should be met by an MSP:
1. Must offer business associate agreements
An MSP must offer a Business Associate Agreement (BAA) if it hopes to attract healthcare business. When a Business Associate is under a BAA, they are subject to audits by the Office for Civil Rights (OCR) and could be accountable for a data breach and fined for noncompliance.
According to HHS, covered entities are not required to monitor or oversee how their Business Associates carry out privacy safeguards, or in what ways MSPs abide by the privacy requirements of the contract. Furthermore, HHS has stated that a healthcare organisation is not liable for the actions of an MSP under BAA unless otherwise specified.
An MSP should be able to provide a detailed responsibility matrix that outlines which aspects of compliance are the responsibility of whom. Overall, while an MSP allows healthcare organisations to outsource a significant amount of both the technical effort and the risk of HIPAA compliance, organisations should still play an active role in monitoring MSPs. After all, an OCR fine is often the least of an organisation’s worries in the event of a security breach; negative publicity is potentially even more damaging.
2. Should maintain credentials
There is no “seal of approval” for HIPAA compliance that an MSP can earn. The OCR grants no such qualifications. However, any hosting provider offering HIPAA compliant hosting should have had their offering audited by a reputable auditor against the HIPAA requirements as defined by HHS.
In addition, the presence of other certifications can assist healthcare organisations in choosing an MSP that takes security and compliance concerns very seriously. A well-qualified MSP will maintain the following certifications:
- SSAE-16
- SAS70 Type II
- SOX Compliance
- PCI DSS Compliance
While these certifications are by no means required for HIPAA compliance, the ability to earn such qualifications indicates a high level of security and compliance expertise. They require extensive (and expensive) investigations by 3rd party auditors of physical infrastructure and team practices.
3. Should offer guaranteed response times
Providers should indicate guaranteed response times within their Service Level Agreement. While 24/7/365 NOC support is crucial, the mere existence of a NOC team is not sufficient for mission-critical applications; healthcare organisations need a guarantee that the MSP’s NOC and security teams will respond to routine changes and to security threats in a timely manner. Every enterprise should have guaranteed response times for non-critical additions and changes, as well.
How such changes and threats are prioritized and what response is appropriate for each should be the subject of intense scrutiny by healthcare organisations, who also have HIPAA-regulated obligations in notifying authorities of security breaches.
4. Must meet data encryption standards
The right MSP will create infrastructure that is highly secure by default, meaning that the highest security measures should be applied to any component where such measures do not interfere with the function of the application. In the case of data encryption, while HIPAA’s Security Rule only requires encryption for data in transit, data should reasonably be encrypted everywhere by default, both at rest and in transit.
When MSPs and healthcare organisations encrypt PHI, they are within the “encryption safe harbor.” Unauthorised disclosure will not be considered a breach and will not necessitate a breach notification if the disclosed PHI is encrypted.
Strong encryption policies are particularly important in public cloud deployments. The MSP should be familiar with best practices for encrypting data both within the AWS environment and in transit between AWS and on-site back-ups or co-location facilities. We discuss data encryption best practices for HIPAA compliant hosting on AWS here.
It is important to note that not all encryption is created equal; look for an MSP that guarantees at least AES-256 Encryption, the level enforced by federal agencies. It is useful to note that AWS’ check-box encryption of EBS volumes meets this standard.
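As a rough sketch only (not part of the original checklist), requesting an encrypted EBS volume with boto3 might look like the following; with Encrypted set and no KMS key specified, AWS falls back to the account’s default EBS key, and EBS encryption is AES-256. The region, availability zone and size are placeholder values:

```python
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")  # placeholder region

# Request an EBS volume that is encrypted at rest. With no KmsKeyId given,
# AWS uses the account's default EBS KMS key (AES-256).
volume = ec2.create_volume(
    AvailabilityZone="us-east-1a",  # placeholder AZ
    Size=100,                       # GiB, placeholder size
    VolumeType="gp2",
    Encrypted=True,
)
print(volume["VolumeId"], "encrypted:", volume["Encrypted"])
```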
5. Should have “traditional IT” and cloud expertise
Major healthcare organisations have begun to explore public cloud solutions. However, maintaining security in public clouds and in hybrid environments across on-premises and cloud infrastructure is a specialty few MSPs have learned. “Born in the Cloud” providers, whose businesses started recently and are made up exclusively of cloud experts, are quite simply lacking the necessary experience in complex, traditional database and networking that would enable them to migrate legacy healthcare applications and aging EHR systems onto the public cloud without either a) over-provisioning or b) exposing not-fully-understood components to security threats.
No matter the marketing hype around “Born in the Cloud” providers, it certainly is possible to have best-in-class DevOps and cloud security expertise and a strong background in traditional database and networking. In fact, this is what any enterprise with legacy applications should expect.
Hiring an MSP that provides private cloud, bare metal hosting, database migrations, legacy application hosting, and also has a dedicated senior cloud team is optimal. This ensures that the team is aware of the unique features of the custom hardware that currently supports the infrastructure, and will not expose the application to security risks by running the application using their “standard” instance configuration.
6. Must provide ongoing auditing and reporting
The HIPAA Security Rule requires that the covered entity “regularly” audit its own environment for security threats. It does not, however, define “regularly,” so healthcare organisations should request the following from their MSPs:
- Monthly or quarterly engineering reviews, both for security concerns and cost effectiveness
- Annual 3rd party audits
- Regular IAM reports. A credential report can be generated every four hours; it lists all of the organisation’s users and access keys (see the sketch after this list).
- Monthly re-certification of staff’s IAM roles
- Weekly or daily reports from 3rd party security providers, like Alert Logic or New Relic
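For the IAM credential report item above, a minimal boto3 sketch (an illustration, not a prescribed procedure) that regenerates the account report and prints a few columns per user:

```python
import csv
import io
import time

import boto3

iam = boto3.client("iam")

# Ask IAM to (re)generate the account credential report and wait for it.
while iam.generate_credential_report()["State"] != "COMPLETE":
    time.sleep(2)

# The report body is CSV: one row per user, covering password and
# access-key status and last-used timestamps.
report = iam.get_credential_report()
rows = csv.DictReader(io.StringIO(report["Content"].decode("utf-8")))
for row in rows:
    print(row["user"], row["access_key_1_active"], row["password_last_used"])
```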
7. Must maintain compliant staffers and staffing procedures
HIPAA requires organisations to provide training for new workforce members as well as periodic reminder training. As a business associate, the MSP has certain obligations for training their own technical and non-technical staff in HIPAA compliance. There are also certain staff controls and procedures that must be in place and others that are strongly advisable. A covered entity should ask the MSP the following questions:
- What formal sanctions exist against employees who fail to comply with security procedures?
- What supervision exists of employees who deal with PHI?
- What is the approval process for internal collaboration software or cloud technologies?
- How do employees gain access to your office? Is a FOB required?
- What is your email encryption policy?
- How will your staff inform our internal IT staff of newly deployed instances/servers? How will keys be communicated, if necessary?
- Is there a central authorisation hub such as Active Directory for the rapid decommissioning of employees?
- Can you provide us with your staff’s HIPAA training documents?
- Do you provide security threat updates to staff?
- What are internal policies for password rotation?
- (For Public Cloud) How are root account keys stored?
- (For Public Cloud) How many staff members have Administrative access to our account?
- (For Public Cloud) What logging is in place for employee access to the account? Is it distinct by employee, and if federated access is employed, where is this information logged? (A logging sketch follows this list.)
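For the logging question above, one hedged illustration: on AWS, console sign-in events can be pulled from CloudTrail with boto3 along these lines (the 24-hour window and the printed fields are arbitrary choices for the example):

```python
from datetime import datetime, timedelta, timezone

import boto3

cloudtrail = boto3.client("cloudtrail")

# List console sign-in events from the last 24 hours; each record names the
# IAM or federated identity that accessed the account.
start = datetime.now(timezone.utc) - timedelta(days=1)
paginator = cloudtrail.get_paginator("lookup_events")
for page in paginator.paginate(
    LookupAttributes=[{"AttributeKey": "EventName", "AttributeValue": "ConsoleLogin"}],
    StartTime=start,
):
    for event in page["Events"]:
        print(event["EventTime"], event.get("Username", "<federated/unknown>"))
```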
While the answers to certain of these questions do not confirm or deny an MSP’s degree of HIPAA compliance, they may help distinguish between a new company that just wants to attract lucrative healthcare business and a company already well versed in such procedures.
8. Must secure physical access to servers
In the case of a public cloud MSP, the MSP should be able to communicate why their cloud platform of choice maintains physical data centres that meet HIPAA standards. To review AWS’s physical data centre security measures, see their white paper on the subject. If a hybrid or private cloud is also maintained with the MSP, they should provide a list of global security standards for their data centres, including ISO 27001, SOC, FIPS 140-2, FISMA, and DoD CSM Levels 1-5, among others. The specific best practices for physical data centre security that healthcare organisations should look out for are well covered in ISO 27001 documentation.
9. Should conduct risk analysis in accordance with NIST guidelines
The National Institute of Standards and Technology, or NIST, is a non-regulatory federal agency under the Department of Commerce. NIST develops information security standards that set the minimum requirements for any information technology system used by the federal government.
NIST produces Special Publications (the SP 800 series) that outline recommended security practices, and its most recent Guide for Conducting Risk Assessments (SP 800-30) provides guidance on how to prepare for, conduct, communicate, and maintain a risk assessment as well as how to identify and monitor specific risk factors. The 800 series has become a foundational set of documents for service providers and organisations in the information systems industry.
An MSP should be able to provide a report that communicates the results of the most recent risk assessment, as well as the procedure by which the assessment was accomplished and the frequency of risk assessments.
Organisations can also have their security controls independently assessed against NIST 800-53 as a further qualification of security procedures. While again this is not required of HIPAA Business Associates, it indicates a sophisticated risk management procedure — and is a much more powerful piece of evidence than standard marketing material around disaster recovery and security auditing.
10. Must develop a disaster recovery plan and business continuity plan
The HIPAA Contingency Plan standard requires the implementation of a disaster recovery plan. This plan must anticipate how natural disasters, security attacks, and other events could impact systems that contain PHI, and it must establish policies and procedures for responding to such situations.
An MSP must be able to provide their disaster recovery plan to a healthcare organisation, which should include answers to questions like these:
- Where is backup data hosted? What procedure maintains retrievable copies of ePHI? (A backup-verification sketch follows this list.)
- What procedures identify suspected security incidents?
- Who must be notified in the event of a security incident? How are such incidents documented?
- What procedure documents and restores the loss of ePHI?
- What is the business continuity plan for maintaining operations during a security incident?
- How often is the disaster recovery plan tested?
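As a hypothetical illustration of the backup-verification question in the list above (the DataClass=ephi tag is an invention for this example, not a HIPAA requirement), a boto3 sketch that checks whether every tagged EBS volume has a snapshot from the last 24 hours:

```python
from datetime import datetime, timedelta, timezone

import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")  # placeholder region

# Flag any volume tagged as holding ePHI that lacks a snapshot from the
# last 24 hours. The tag key/value is a hypothetical convention.
cutoff = datetime.now(timezone.utc) - timedelta(hours=24)
volumes = ec2.describe_volumes(
    Filters=[{"Name": "tag:DataClass", "Values": ["ephi"]}]
)["Volumes"]

for vol in volumes:
    snaps = ec2.describe_snapshots(
        OwnerIds=["self"],
        Filters=[{"Name": "volume-id", "Values": [vol["VolumeId"]]}],
    )["Snapshots"]
    recent = [s for s in snaps if s["StartTime"] >= cutoff]
    print(vol["VolumeId"], "OK" if recent else "MISSING RECENT SNAPSHOT")
```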
11. Should already provide service to large, complex healthcare clients
Although the qualifications listed above are more valuable evidence of HIPAA compliance, a roster of clients with large, complex, HIPAA-compliant deployments should provide extra assurance. This pedigree will be particularly useful in vendor decision discussions with non-technical business executives. The MSP’s ability to maintain healthcare clients in the long term (2-3+ years) is important to consider.
Equinix, Telecity reach merger agreement as Interxion gets kicked to the curb
Equinix and TelecityGroup have agreed the terms of a merger that will see the American datacentre incumbent pay £2.35bn for all issued Telecity shares. The deal also means the proposed merger between Telecity and Interxion is dead in the water.
Under the terms of the merger each Telecity shareholder will be entitled to receive £5.72 for each share and 0.0327 new Equinix shares. Following the merger’s completion Telecity shareholders will hold just over 10 per cent of the shares in the combined group.
John Hughes, executive chairman of the board of TelecityGroup, will also be joining the Equinix board.
“On behalf of the Board of TelecityGroup, I am very pleased to recommend the combination of TelecityGroup and Equinix to our shareholders today. Having carefully considered all our options, the Board believes this is a compelling offer and an excellent outcome for shareholders, employees and customers,” Hughes said.
“Through this transaction, our customers will have new global opportunities for their connected datacentre requirements. The combination of Equinix and TelecityGroup services and people will ensure the expanded business leads the way in the provision of highly-connected data centre services for customers in Europe and all over the world.”
Stephen Smith, chief executive officer and president of Equinix, said TelecityGroup will “considerably strengthen” its current offerings in Europe and help reinforce its position in the interconnection business.
“The transaction will allow Equinix to benefit from increased scale and extend the global reach of our platform. We believe our offer is compelling to TelecityGroup shareholders who will realise significant value for their holdings while having the opportunity to participate in the future strengths of the combined business,” Smith said.
“We are especially pleased to be welcoming John Hughes onto the Board of the combined business and will greatly benefit from his experience in the technology space,” he added.
The move also means that the proposed merger between TelecityGroup and Interxion is dead. When news broke of the merger talks earlier this month Equinix’s board called Interxion out, claiming an Equinix merger would be more beneficial from the perspective of shareholders.
If the merger is approved, TelecityGroup will give Equinix a stronger presence in the UK and extend its footprint into new locations with identified cloud and interconnection needs, including Dublin, Helsinki, Istanbul, Milan, Stockholm and Warsaw – something Equinix is clearly willing to splurge on. Telecity’s market cap when news of the potential merger originally broke earlier this month stood at £1.4bn, so Equinix is paying a premium of around £950m.
Google unveils cloud-based testing lab to combat Android fragmentation
Google unveiled a cloud-based testing service for Android apps it hopes will help combat fragmentation in the growing Android ecosystem.
The service, unveiled at Google’s annual I/O conference this week and based on Appurify’s technology – an acquisition it announced at the conference last year – allows developers to run their applications on simulated versions of thousands of different Android devices.
The company said that, much like other app testing services, the Cloud Test Lab can record what happens just before an app crashes, and it provides a crash log to help users debug their apps after having tested them on tons of different devices with a wide range of specs and capabilities.
“From nearly every brand, model and version of physical devices your users might be using, to an unlimited supply of virtual devices in every language, orientation and network condition around the world. You can get rid of that device closet—ours is bigger,” the company said.
“Out of the box, without any user-written tests, robot app crawlers know just what to look out for and will find crashes in your app for you. Augment this with user-written instrumentation tests to make sure that your most important user flows work perfectly.”
There has always been fragmentation in the Android world, and while it’s considered by some users to be one of the benefits of playing in Google’s ecosystem, it’s also a major headache for app developers, because building crash-proof apps for a range of devices can be quite time-consuming; not getting that right can, as a result, cause users grief (just check out a few reviews on the Google Play store).
With a wide range of low-cost Android devices flowing in from China, coupled with other large incumbents like Samsung, LG and Sony contributing to the heterogeneity themselves, fragmentation only seems to be increasing (OpenSignal has put together an impressive report detailing the scale of Android fragmentation – and how it compares with the iOS ecosystem). These testing services will also be critical for Google developers as the company looks to target the Internet of Things with a new OS that is itself based on Android, and doubles down on Chromebooks.