Too often with compelling new technologies, market participants become overly enamored with the attractiveness of the technology and neglect the underlying business drivers. This tendency, which some call “shiny new object syndrome,” is understandable given that virtually all of us are heavily engaged with technology. But it is also mistaken. Without concrete business cases driving its deployment, IoT, like many other technologies before it, will fade into obscurity.
Monthly Archives: November 2015
When CRM Is Not Enough By @Xeniar | @CloudExpo #Cloud
Bluenose empowers SaaS businesses to proactively manage and engage their customers through our insight-driven toolkit and built-in playbooks. We rely on big data analytics and trended analyses to forecast where a customer relationship is going.
A Practical Guide to Popular Node.js MVC Frameworks By @OmedHabib | @CloudExpo #Cloud
Using any programming framework to the fullest extent possible first requires an understanding of advanced software architecture concepts. While writing a little client-side JavaScript demands relatively little architectural planning, the evolution of tools like Node.js means you could be facing large code bases that must remain easy to maintain.
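To make the architecture point concrete, here is a minimal, framework-agnostic sketch of the model/controller separation that most Node.js MVC frameworks formalize. The class, method and data names are illustrative only and are not taken from any particular framework.

```javascript
// Model: owns the data and its validation rules.
class UserModel {
  constructor() {
    this.users = new Map();
  }
  create(name) {
    if (!name) throw new Error('name is required'); // validation lives in the model
    const id = this.users.size + 1;
    const user = { id, name };
    this.users.set(id, user);
    return user;
  }
  find(id) {
    return this.users.get(id) || null;
  }
}

// Controller: translates a request into model calls and a response shape.
// In an API server, the "view" is often just the serialized response body.
class UserController {
  constructor(model) {
    this.model = model;
  }
  show(id) {
    const user = this.model.find(id);
    return user ? { status: 200, body: user } : { status: 404, body: null };
  }
}

const controller = new UserController(new UserModel());
controller.model.create('Ada');
```

The payoff in a large code base is that validation changes stay in the model and response-shape changes stay in the controller, so the two can evolve and be tested independently.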
Big Data Enables Top User Experiences | @CloudExpo #IoT #M2M #BigData
Intuit uses deep-data analytics to gain a 360-degree view of its TurboTax application’s users’ behavior and preferences for rapid applications improvements.
The next BriefingsDirect big-data innovation case study highlights how Intuit uses deep-data analytics to gain a 360-degree view of its TurboTax application’s users’ behavior and preferences. Such visibility allows for rapid applications improvements and enables the TurboTax user experience to be tailored to a highly detailed degree.
Testing Streaming Media | @CloudExpo #APM #IoT #DevOps #BigData
Here’s the thing: as sure as we’ll have another record-setting year for NFL streaming, you can also be sure that apps will fail and streaming services will go down. Whether you are dabbling in streaming or diving in whole-hog, you need to know what to do to give your users the most reliable experience possible. Here are a few tips.
What Are the Advantages of Cloud Computing? By @CHBoorman | @CloudExpo #IoT #Cloud
Here’s a bold claim: Cloud computing has the potential to be as transformative as the advent of the automobile. Before the age of cars and buses, everything was undertaken at a fraction of the pace it is now: transport, distribution, socializing. The automobile revolutionized all of that, changing forever the way we moved, made friends and worked. Make no mistake, cloud computing is the 21st century equivalent of the automobile.
In case you’ve just returned from several years orbiting Mars, let me just explain what we mean by cloud computing. It refers to storing and accessing data, programs and services over the Internet instead of your computer hard drive. Working off your hard drive is how the computer industry functioned for decades: all your data lived somewhere inside that desktop PC. With cloud computing, you divest of the hard drive and enjoy the freedom of working anywhere, anytime, on any device. Your data, however much there is of it, is stored in huge data centers and storage farms, often in places people have never heard of.
Verizon and VMTurbo collaborate over smart cloud brokerage
US telco Verizon and control system maker VMTurbo have jointly created the Verizon Intelligent Cloud Control to help Verizon customers migrate workloads to the most suitable public cloud service.
The system works by calculating the enterprise customer’s performance and resource needs and matching them up to the most likely provider. The partners claim this is the first automated system of its kind on the market.
Existing cloud brokerages, they claim, have to manually recommend workload placement to public cloud service providers (CSPs). However Intelligent Cloud Control gives Verizon customers a system that automatically makes instant calculations on price, performance and placements, while taking in compliance considerations. It also makes all the sizing and configuration decisions needed in order to install and migrate workloads to public cloud providers.
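Verizon and VMTurbo have not published the matching algorithm, but the general shape of an automated broker like this can be sketched: treat compliance as a hard constraint, then rank the remaining providers on a weighted combination of price and performance. The providers, prices and scores below are invented for illustration.

```javascript
// Hypothetical placement data; the real system would pull live pricing
// and measured performance for each public cloud provider.
const providers = [
  { name: 'Amazon Web Services', pricePerHour: 0.12, perfScore: 0.90, compliant: true },
  { name: 'IBM SoftLayer',       pricePerHour: 0.10, perfScore: 0.80, compliant: true },
  { name: 'Microsoft Azure',     pricePerHour: 0.11, perfScore: 0.85, compliant: false },
];

function rankProviders(list, weights) {
  return list
    .filter(p => p.compliant) // compliance is a hard constraint, not a weighted factor
    .map(p => ({
      name: p.name,
      score: weights.perf * p.perfScore - weights.price * p.pricePerHour,
    }))
    .sort((a, b) => b.score - a.score); // best placement first
}

const ranked = rankProviders(providers, { perf: 1.0, price: 1.0 });
```

The point of automating this is exactly what the partners claim: the price/performance/compliance trade-off is recomputed instantly for every workload rather than recommended by hand.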
Verizon claims the system will be easy to use with a single interface and detailed cost controls that will eliminate billing surprises. The system will also help end users keep on top of performance and compliance issues through rigorous cloud monitoring.
The ‘Verizon Intelligent Cloud Control powered by VMTurbo’ service will launch during the first quarter of 2016. Initially the service will include connections to Amazon Web Services, IBM SoftLayer and Microsoft Azure.
Verizon’s customers said they needed a better way to manage their risk when moving to the public cloud, according to Victoria Lonker, director of enterprise networking for Verizon. “We are removing the complexities and myriad trade-offs between price, performance and compliance in various public cloud services,” said Lonker. “Now they can focus on the applications and services.”
VMTurbo’s Application Performance Control system is used by 1200 enterprises to guarantee Quality of Service for applications and to make full use of all resources in cloud and virtualized environments.
“Intelligent Cloud Control is different from today’s cloud brokers and managers as it factors in application performance and price,” said Endre Sara, VP of advanced solutions at VMTurbo.
SAP to become a Genband reseller as Kandy improves relationship in the cloud
Texas-based comms software specialist Genband has signed SAP as a global reseller of its comms platform-as-a-service (PaaS) Kandy. Under the terms of the arrangement, Kandy will be repackaged as the SAP Real-Time Communicator Web application by Genband.
The system is designed to help enterprises of any size improve their workflow by making their communications processes simpler to use and more effective. It does this by making it easier for sales, service and business professionals to adopt the chat, videoconference and collaboration systems that are often under-used in many companies. By improving real-time communications between customers and co-workers, SAP says, its cloud offering will make its enterprise clients far more effective sales organisations.
SAP claims its Real-Time Communicator creates personalized engagement and helps its clients stand out from competitors through a superior customer experience. In its capacity as a reseller, SAP has integrated Real-Time Communicator into the rest of its portfolio and embedded communications within its business applications, giving them presence, instant messaging, voice and video chat and conferencing. The Real-Time Communicator is integrated natively into SAP Cloud for Customer, and can be integrated with the SAP Hybris Commerce system.
Genband’s executive VP of Strategy and Cloud Services Paul Pluschkell said SAP, as the world’s top cloud player, is the ideal reseller partner to collaborate with. “Integrating with SAP creates a powerful customer experience that empowers customers to work smarter and more efficiently,” said Pluschkell.
The combination creates dramatic improvements in productivity for clients, said Nayaki Nayyar, senior VP of Cloud for Customer Engagement at SAP. Managing vital relationships makes the experience richer, more contextual and highly efficient, said Nayyar. SAP is reselling Genband because it has created an advanced market offering, and the only one that could help SAP launch new offerings across its applications. “Genband’s technology performance leadership, global presence and comprehensive product portfolio all factored into our decision to select this platform,” said Nayyar.
Happy (belated) birthday, OpenStack: you have much to look forward to
Now past the five-year anniversary of OpenStack’s creation, the half-decade milestone provides an opportunity to look back on how far the project has come in that time – and to peer thoughtfully into OpenStack’s next few years. At present, OpenStack represents the collective efforts of hundreds of companies and an army of developers numbering in the thousands. Their active engagement in continually pushing the project’s technical boundaries and implementing new capabilities – demanded by OpenStack operators – has defined its success.
Companies involved with OpenStack include some of the most prestigious and interesting tech enterprises out there, so it’s no surprise that this past year has seen tremendous momentum surrounding OpenStack’s Win the Enterprise program. This initiative – central to the future of the OpenStack project – garnered displays of the same contagious enthusiasm demonstrated in the stratospheric year-over-year growth in attendance at OpenStack Summits (the most recent edition of the event, held in Tokyo, being no exception). The widespread desire of respected and highly capable companies and individuals to be involved with the project is profoundly reassuring, and attests to OpenStack’s standing as a frontrunner for the title of most innovative software and development community when it comes to serving enterprises’ needs for cloud services.
With enterprise adoption front of mind, these are the key trends now propelling OpenStack into its next five years:
Continuing to Redefine OpenStack
The collaborative open source nature of OpenStack has given the project many more facets and functionalities than could have been dreamt of five years ago, and this increase in scope (along with the rise of myriad new related components) has raised a serious question: “What is OpenStack?” This is not merely an esoteric query – enterprises and operators must know which available software is, and is not, OpenStack in order to proceed confidently in their decision-making around implementing consistent solutions in their clouds. Developers require clarity here as well, as their applications may need to operate across different public and private OpenStack clouds in multiple regions.
If someone were to look up OpenStack in the dictionary (though it is not yet in Webster’s), what they’d find is the output of OpenStack’s DefCore project, which has implemented a process that now has a number of monthly definition cycles under its belt. This process bases the definition of a piece of software as belonging to OpenStack on core capabilities, implementation code and APIs, and utilizes RefStack verification tests. OpenStack distributions and operators can now rely on this DefCore process in striving for consistent OpenStack implementations, especially for the enterprise.
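The logic of that capability-based definition can be sketched in a few lines: a cloud counts as OpenStack only if it passes verification tests for every required core capability. This is an illustration of the idea, not the actual DefCore or RefStack tooling, and the capability names below are invented.

```javascript
// Required core capabilities under this hypothetical guideline cycle.
const requiredCapabilities = [
  'compute-servers-create',
  'identity-auth-token',
  'objectstore-object-put',
];

// A cloud is interoperable only if every required capability's
// verification test passed; a single gap disqualifies it.
function isInteroperable(passedCapabilities) {
  return requiredCapabilities.every(cap => passedCapabilities.includes(cap));
}
```

The all-or-nothing check is the point: a partial match is precisely the inconsistency that makes applications non-portable across clouds.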
Enterprise Implementation Made Easy
The OpenStack developer community is operating under a new “big tent” paradigm, tightening coordination on project roadmaps and releases through mid-cycle planning sessions and improved communication. The intended result? A more integrated and well-documented stack. Actively inviting new major corporate sponsors and contributors (for example Fujitsu, a new Gold member of OpenStack as of this July) has also helped ease the path for enterprises getting on board with OpenStack.
Of course, OpenStack will still require expertise to be implemented for any particular use case, as it’s a complicated, highly configurable piece of software that can run across distributed systems – not to mention the knowledge needed to select storage sub-systems and networking options, and to manage a production environment at scale. However, many capable distribution and implementation partners have arisen worldwide to provide for these needs (Mirantis, Canonical, Red Hat, Aptira, etc.), and these certainly have advantages over proprietary choices when looking at the cost and effort it takes to get a production cloud up and running.
The OpenStack Accelerator
A positive phenomenon that enterprises experience when enabling their developers and IT teams to work within the OpenStack community is the dividend paid by new insights into technologies that can be valuable within their own IT infrastructure. The open collaboration at the heart of OpenStack exposes contributors to a vast ecosystem of OpenStack innovations, which enterprises then benefit from internalizing. Examples of these innovations include network virtualization software (Astara, MidoNet), software-defined storage (Swift, Ceph, SolidFire), configuration management tools (Chef, Puppet, Ansible), and a new world of hardware components and systems offering enough benefit that enterprises are beginning to plan how to take advantage of them.
The pace of change driven by OpenStack’s fast-moving platform is now such that it can even create concern in many quarters of the IT industry. Enterprise-grade technology that evolves quickly and attracts a lot of investment interest will always have its detractors. Incumbent vendors fear erosion of market share. IT services providers fear retooling their expertise and workflows. Startups (healthily) fear the prospect of failure. But the difference is that startups and innovators choose to embrace what’s new anyway, despite the fear. That drives technology forward, and fast. And even when innovators don’t succeed, they leave behind a rich legacy of new software, talent, and tribal knowledge that we all stand on the shoulders of today. This has been so in the OpenStack community, and speaks well of its future.
Stefano Maffulli is the Director of Cloud and Community at DreamHost, a global web hosting and cloud services provider whose offerings include the cloud computing service DreamCompute powered by OpenStack, and the cloud storage service DreamObjects powered by Ceph.
DevOps Is the Future of SIAM | @CloudExpo #DevOps #IoT #Microservices
Enterprises with internally sourced IT operations typically struggle with the tensions associated with siloed application and infrastructure organizations. These are characterized by finger pointing and an inability to restore operational capabilities under complex conditions that span both application and infrastructure configurations. Such tensions are often cited as motivation for the DevOps movement, which focuses on the organizational, process and cultural changes needed to bring about more fluid IT delivery that embodies higher quality and greater overall agility.