Donna Yasay, President of HomeGrid Forum, today discussed with a panel of technology peers how certification programs are at the forefront of interoperability, and why they are the answer for vendors looking to keep pace with today’s fast-growing smart home industry.
“To ensure multi-vendor interoperability, accredited industry certification programs should be used for every product to provide credibility and quality assurance for retail and carrier-based customers looking to add ever-increasing numbers of devices to the home network,” commented Yasay. “The successful global adoption of the Internet of Things is dependent on a robust and secure home network.”
Six Big Topics at @CloudExpo | #BigData #IoT #ML #DevOps #DigitalTransformation
As we enter the final week before the 19th International Cloud Expo | @ThingsExpo in Santa Clara, CA, it’s time for me to reflect on six big topics that will be important during the show.
Hybrid Cloud
This general-purpose term seems to provide a comfort zone for many enterprise IT managers. It sounds reassuring to be able to work with one of the major public-cloud providers like AWS or Microsoft Azure while still maintaining an on-site presence.
Innovation and automation: Examining development in a multi-tenant world
Right now, students all over the country are leaving home, starting university, and moving into their halls of residence. In my experience, that’s usually a single building full of rooms, where they can get to know one another, collaborate on ideas, and share facilities such as kitchens. It’s a cheaper way of living, too.
A true multi-tenant architecture, in other words.
This got me thinking about software development within that same multi-tenanted architecture.
First things first: what is multi-tenant? It is defined as a single instance of a software application serving multiple customers. These ‘tenants’ can customise some parts of the application, but not the application’s core. It’s cheaper because development and maintenance costs are shared, in contrast with single-tenancy, where each customer has their own software instance. Moreover, you only need to make updates once—in a single-tenancy architecture, the updates are endless.
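To make that definition concrete, here is a minimal sketch in Python of the idea, assuming hypothetical names (Tenant, CORE_DEFAULTS, render_invoice): one application instance serves several tenants, each layering its own settings over a shared, unmodifiable core.

```python
# Minimal multi-tenancy sketch: one application instance, many tenants.
# All names here (Tenant, CORE_DEFAULTS, render_invoice) are hypothetical.

CORE_DEFAULTS = {"currency": "GBP", "invoice_footer": "Thank you"}

class Tenant:
    def __init__(self, name, overrides=None):
        self.name = name
        # Each tenant may customise some settings, but never the core code.
        self.settings = {**CORE_DEFAULTS, **(overrides or {})}

def render_invoice(tenant, amount):
    # The same code path serves every tenant; only configuration differs.
    return f"[{tenant.name}] {amount} {tenant.settings['currency']} - {tenant.settings['invoice_footer']}"

tenants = [Tenant("acme"), Tenant("globex", {"currency": "USD"})]
for t in tenants:
    print(render_invoice(t, 100))  # one deployment, one update path, many customers
```

The point of the sketch is the update story: change render_invoice once and every tenant gets the fix, whereas single-tenancy would mean patching each customer’s instance separately.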
Multi-tenancy can light the touch paper on your application development. As organisations embrace cloud, big data, the Internet of Things, and other digital disruptors that are redefining the role of IT in business, they need a proactive, comprehensive and dynamic automation strategy. That is especially true of release automation, which accelerates the launch of innovative new services while supporting the reliability and control a modern agile enterprise needs.
Multi-tenant release automation reduces TCO
As part of this digital transformation, organisations want to play in sandboxes: test-driving their neat new services and (mini) applications to make certain they don’t fall over in production. Automic is the only ARA vendor offering a multi-tenant service that can be deployed in the cloud. This enables enterprises to serve multiple departments and clients in isolation from each other on a single, shared multi-tenant platform, scaling for extremely large environments while driving down the cost of ownership and simplifying operations and maintenance.
Here at Automic, for example, more than 300 customers have signed up for their own private sandbox on the Automic multi-tenant cloud. The service runs in Automic’s own cloud, so organisations can get a feel for how the Automic platform can help drive digital transformation in a world of multiple rapid releases and deployments. This “try it before you buy it” service is the first of its kind in the release automation market.
This multi-tenant model isn’t just designed to help test drive Automic—it helps managed service providers (MSPs) and other large organisations kick sand in the faces of competitors. In cloud computing, the meaning of multi-tenancy architecture has broadened because of new service models that take advantage of virtualisation and remote access. MSPs, for instance, can run one instance of their application on one instance of a database and provide web access to multiple customers. In such a scenario, each tenant’s data is isolated and remains invisible to other tenants, which means MSPs can deliver great service to more customers at lower cost.
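A rough sketch of that data isolation, again in Python with hypothetical table and tenant names (this is an illustration of the pattern, not any vendor’s implementation): one shared database instance, with every query scoped to a tenant identifier so one tenant’s rows are invisible to the others.

```python
import sqlite3

# Illustrative sketch only: one database instance shared by all tenants,
# with every query scoped to a tenant_id so each tenant's data stays isolated.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE jobs (tenant_id TEXT, name TEXT, status TEXT)")
conn.executemany("INSERT INTO jobs VALUES (?, ?, ?)", [
    ("acme", "nightly-release", "ok"),
    ("globex", "hotfix-deploy", "failed"),
])

def jobs_for(tenant_id):
    # A tenant can only ever see rows carrying its own tenant_id.
    return conn.execute(
        "SELECT name, status FROM jobs WHERE tenant_id = ?", (tenant_id,)
    ).fetchall()

print(jobs_for("acme"))   # [('nightly-release', 'ok')] -- globex's rows are invisible
```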
It doesn’t stop there. This multi-tenant release automation model helps large enterprises—saddled with slow-moving, legacy infrastructures—to bridge the gap between development teams launching mobile and web-based services almost non-stop, and the operations team who are more concerned with maintaining reliability and continuity. A multi-tenant release automation product enables organisations like these to be agile in the back end, and be compliant and scalable in the front office.
One more thought. Most university halls of residence are very tall buildings. Some so high they reach into the clouds. Even more reason to consider multi-tenancy release automation in the cloud.
Dyn’s DDoS Attack – What it Means for the Cloud?
Prominent websites like Twitter, Netflix, Airbnb, and Spotify have been experiencing sporadic problems since Friday, thanks to a Distributed Denial of Service (DDoS) attack on Dyn’s servers. Dyn is one of the largest DNS providers in the world, so an attack on its servers meant a significant chunk of DNS (the Internet’s address directory) went down. DNS works much like a phone book: it translates the names users type into the IP addresses of websites and applications. Thus, when the DNS servers were attacked, users could no longer look up those addresses and could not reach the sites behind them.
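As a rough illustration of what that “phone book” does, here is a minimal Python sketch (example.com is just a stand-in domain): a name is resolved to IP addresses before any connection is made, and when resolution fails the site looks “down” to users even if its own servers are perfectly healthy.

```python
import socket

def resolve(hostname):
    # Ask DNS for the IP addresses behind a name, like looking up a phone book entry.
    try:
        infos = socket.getaddrinfo(hostname, 443, proto=socket.IPPROTO_TCP)
        return sorted({info[4][0] for info in infos})
    except socket.gaierror:
        # Resolution failed: the site's servers may be fine, but with no
        # address to connect to, users experience the site as unreachable.
        return []

print(resolve("example.com"))
```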
In most DDoS attacks, the underlying information is intact but temporarily unavailable. In this case, though, Dyn’s core DNS infrastructure was overwhelmed, so any organization directly dependent on Dyn, or on a service provider that uses Dyn’s servers, was affected.
Besides websites, a whole host of Internet of Things (IoT) devices hooked up to the Internet were also affected. Cameras, baby monitors, and home routers were among the devices hit by the outage. Corporate applications used to perform critical business operations were affected too, racking up huge losses for different companies.
Of these companies, the worst affected were those that rely on SaaS for critical business operations. This attack, in many ways, exposes the vulnerability of cloud computing and the consequences of depending on third-party servers for the most critical of operations. Had these companies used multiple DNS providers, or kept their critical business applications on local servers, the impact of such an attack would have been greatly reduced.
Going forward, what does it mean for businesses that depend on the cloud?
First off, this is a complex attack that is believed to have been carried out by a large group of hackers. The nature and source of the attack are still under investigation, so at this point it’s hard to tell who’s behind it. But such complex attacks can’t happen every day, as they require enormous amounts of planning and coordination. That said, Verisign recently published a report showing a 75 percent increase in such attacks from April to June. How much of that translated into losses for companies? Only a minuscule amount compared to the direct security attacks that companies face.
Secondly, we’ve come too far with cloud to imagine a world without it. SaaS, PaaS, and IaaS have become integral to businesses, and the benefits they bring are enormous. Compromising on those huge benefits because of a rare attack is not a sound decision.
From the above, we can say that this DDoS attack is not going to change the cloud market overnight. However, it will make users more aware of the vulnerabilities of the cloud, so they will be better prepared to handle such situations in the future. It is also a good learning experience for companies like Dyn, which will look at ways to beef up their security arrangements.
In short, though the DDoS attack was dangerous and widespread, its impact on the cloud is likely to be minimal: the benefits of cloud are huge, and such attacks remain rare compared to direct attacks on large companies.
OpenStack revenues to exceed $5bn by 2020 – with private cloud the lynchpin
Revenues from OpenStack business models will surpass $5 billion (£4.08bn) by 2020 and grow at a CAGR of 35%, according to the latest figures from 451 Research.
The numbers, which are being released to coincide with the day before the latest OpenStack Summit, put the exact figure at $5.7bn by 2020, up from $4.6bn the previous year and $3.5bn in 2018.
“We continue to believe the market is still in the early stages of enterprise use and revenue generation,” said Al Sadowski, research vice president at 451 Research. “We expect an uptick in revenues from all sectors and geographic regions, especially from those companies in the OpenStack Products and Distributions category that are targeting enterprises.”
451 Research argues the primary area where OpenStack will see success is in the private cloud space.
Alongside more traditional use cases, such as DevOps, platform as a service (PaaS) and big data, the research firm sees significant benefits with regard to software defined networking (SDN), network function virtualisation (NFV), mobile, and the Internet of Things.
The primary user base, according to the research firm, remains enterprises looking to deploy cloud-native applications in private cloud environments, with “limited” appeal for organisations who are already using hyperscale cloud providers, as well as on legacy applications.
These figures make for interesting reading alongside those put out by OpenStack itself earlier this month. According to data taken from the OpenStack user base, containers (cited by 78% of respondents), SDN and NFV (61%), and bare metal (56%) are the primary technologies of interest going forward.
451 warns that while container software is ‘mostly’ beneficial and complementary to OpenStack, “persistent attention to containers and their management threatens to eclipse OpenStack, similar to how OpenStack surpassed its rival CloudStack in mindshare and then market share.”
Putting the ‘converged’ in hyperconverged support: What to do on the second day
By Geoff Smith, Senior Manager, Managed Services Business Development
Today’s hyperconverged technologies are here to stay, it seems. I mean, who wouldn’t want to employ a novel technology approach that ‘consolidates all required functionality’ into a single infrastructure appliance, providing an ‘efficient, elastic pool of x86’ resources controlled by a ‘software-centric’ architecture? Then again, outside of the x86 component, it’s not like we haven’t seen this type of platform before (hello, mainframe anyone?).
But this is not about the technology behind HCI (hyperconverged infrastructure), nor about whether this technology is the right choice for your IT demands. It’s more about what you need to consider on day two, after your new platform is happily spinning away in your data centre. Assuming you have determined that the hyperconverged path will deliver technology and business value for your organisation, why wouldn’t you extend that belief system to how you plan on operating it?
Today’s hyperconverged vendors offer very comprehensive packages that include some advanced support offerings. They have spent much time and energy (and VC dollars) in creating monitoring and analytics platforms that are definitely an advancement over traditional technology support packages.
While technology vendors such as HP, Dell/EMC, Cisco and others have for years provided phone-home monitoring and utilisation/performance reporting capabilities, hyperconverged vendors have pushed these capabilities further with real-time analytics and automation workflows, such as Nutanix Prism and SimpliVity OmniWatch and OmniView. Additionally, these vendors have aligned support plans to business outcomes – ‘mission critical’, ‘production’, ‘basic’, and so on.
Now you are asking: “Okay Mr. Know-It-All, didn’t you just debunk your own argument?” Au contraire I say – I have just reinforced it.
Each hyperconverged vendor technology requires its own separate platform for monitoring and analytics. And these tools are restricted to just what is happening internally within the converged platform. Sure, that covers quite a bit of your operational needs, but is it the complete story?
Let’s say you deploy SimpliVity for your main data centre. You adopt the ‘mission critical’ support plan, which comes with OmniWatch and OmniView. You now have great insight into how your OmniCube architecture is operating, and you can delve into the analytics to understand how your SimpliVity resources are being utilised. In addition, you get software support with one, two, or four hour response times (depending on the channel you use – phone, email, or web ticket). You also get software updates and RCA reports. It sounds like a comprehensive, ‘converged’ set of required support services.
And it is, for your selected hyperconverged vendor. What these services do not provide is a holistic view of how the hyperconverged platforms are operating within the totality of your environment. How effective is the networking that connects it to the rest of the data centre? What about non-hyperconverged based workloads, either on traditional server platforms or in the cloud? And how do you measure end user experience if your view is limited to hyperconverged data-points? Not to mention, what happens if your selected hyperconverged vendor is gobbled up by one of the major technology companies or, worse, closes when funding runs dry?
Adopting hyperconverged as your next-generation technology play is certainly something to consider carefully, and it has the potential to positively impact your overall operational maturity. You can reduce the number of vendor technologies and management interfaces, get more proactive, and make decisions based on real data analytics. But your operations teams will still need to determine whether the source of an impact is within the scope of the hyperconverged stack and covered by the vendor support plan, or whether it’s symptomatic of an external influence.
Beyond the awareness of health and optimised operations, there will be service interruptions. If there weren’t, we would all be in the unemployment line. Will a one-hour response be sufficient in a major outage? Is your operations team able to respond 24x7 with hyperconverged skills? And how will you consolidate governance and compliance reporting between the hyperconverged platform and the rest of your infrastructure?
Hyperconverged platforms can certainly enhance and help mature your IT operations, but they do provide only part of the story. Consider carefully if their operational and support offerings are sufficient for overall IT operational effectiveness. Look for ways to consolidate the operational information and data provided by hyperconverged platforms with the rest of your management interfaces into a single control plane, where your operations team can work more efficiently. If you’re looking for help, GreenPages can provide this support via its Cloud Management as a Service (CMaaS) offering.
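As a loose sketch of what that single control plane might involve (Python, with entirely hypothetical endpoints and field names, and no relation to any particular vendor’s or GreenPages’ actual tooling), a small poller could pull health data from the hyperconverged stack alongside other sources and normalise it into one consolidated view:

```python
import json
from urllib.request import urlopen

# Entirely hypothetical endpoints and field names, for illustration only:
# each source exposes health data in its own shape, and we normalise it.
SOURCES = {
    "hyperconverged": "https://hci.example.local/api/health",
    "network":        "https://netmon.example.local/api/status",
    "public-cloud":   "https://cloudmon.example.local/api/summary",
}

def fetch_health(name, url):
    try:
        with urlopen(url, timeout=5) as resp:
            payload = json.load(resp)
        # Assume each source reports some notion of overall state; map it onto
        # a common vocabulary so one dashboard can show everything side by side.
        state = str(payload.get("state", "unknown")).lower()
        return {"source": name, "state": state}
    except OSError:
        return {"source": name, "state": "unreachable"}

if __name__ == "__main__":
    overview = [fetch_health(name, url) for name, url in SOURCES.items()]
    print(json.dumps(overview, indent=2))  # one consolidated, control-plane-style view
```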
Convergence at this level is even more critical to ensure maximum support of your business objectives.
Dev in DevOps? | @DevOpsSummit #APM #DevOps #CD #Microservices
DevOps is a term that comes full of controversy. A lot of people are on the bandwagon, while others are waiting for the term to jump the shark, and eventually go back to business as usual.
Regardless of where you sit along the spectrum of loving or hating the term DevOps, one thing is certain: more and more people are using it to describe a system administrator who uses scripts, or tools like Chef, Puppet, or Ansible, to provision infrastructure. There is also usually an expectation of being able to deliver this in 100% cloud or hybrid cloud environments.
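As a loose illustration of the kind of work those tools automate (a minimal Python sketch of the idea, not how Chef, Puppet, or Ansible are actually implemented), provisioning scripts tend to be idempotent: they describe a desired state and only act when the system drifts from it.

```python
import os
import subprocess

# Toy, idempotent "desired state" steps in the spirit of config-management
# tools; real tools like Chef, Puppet, or Ansible are far more sophisticated.

def ensure_file(path, content):
    """Create or update a config file only if it differs from the desired content."""
    current = None
    if os.path.exists(path):
        with open(path) as f:
            current = f.read()
    if current != content:
        with open(path, "w") as f:
            f.write(content)
        return "changed"
    return "ok"

def ensure_service_running(name):
    """Start a service only if it is not already active (systemd assumed)."""
    check = subprocess.run(["systemctl", "is-active", "--quiet", name])
    if check.returncode != 0:
        subprocess.run(["systemctl", "start", name], check=True)
        return "changed"
    return "ok"

if __name__ == "__main__":
    print(ensure_file("/tmp/app.conf", "listen_port = 8080\n"))
    # print(ensure_service_running("nginx"))  # requires root and systemd

```

Running the script twice reports "changed" then "ok", which is exactly the property that makes such tooling safe to run repeatedly across cloud and hybrid environments.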
A Harsh #BigData Message | @BigDataExpo #Analytics #DigitalTransformation
This is a short blog with a harsh message for Big Data vendors.
Camera fades in to Pastor Schmarzo heading to the pulpit…
What does the future hold for today’s big data vendors? Hundreds of startups are rushing into the big data market to stake their claim to a market that IDC predicts will reach $187 billion by 2019. Dang, that’s a big market, especially considering that the global business intelligence market is expected to reach only a trifling $20.8 billion by 2018, and the long-running ERP applications market a trivial $84.1 billion by 2020. Yes, the big data market opportunity is very exciting indeed!
[session] #IoT and Transportation | @ThingsExpo @JAdP #M2M #Sensors
In past @ThingsExpo presentations, Joseph di Paolantonio has explored how various Internet of Things (IoT) and data management and analytics (DMA) solution spaces will come together as sensor analytics ecosystems. This year, in his session at @ThingsExpo, Joseph di Paolantonio of DataArchon will add numerous transportation areas to the mix, from autonomous vehicles to “Uber for containers.” While IoT data in any one area of transportation will have a huge impact in that area, combining sensor analytics across these different areas will impact government, industry, retail and other processes, as well as lifestyle choices.
[session] #DevOps in the #API Economy | @DevOpsSummit @CAinc #IoT #ML #CD
Today every business relies on software to drive the innovation necessary for a competitive edge in the Application Economy. This is why collaboration between development and operations, or DevOps, has become IT’s number one priority. Whether you are in Dev or Ops, understanding how to implement a DevOps strategy can deliver faster development cycles, improved software quality, reduced deployment times and overall better experiences for your customers.