Building video calling apps is no small task. Learning about video codecs, signaling, and presence is just the beginning when it comes to implementation. At PubNub, we have paired our technology with WebRTC to make building video chat software fast and easy. Out of the box, our WebRTC Framework will power audio, video, and data communication between two browsers.
Want to get an idea of what it’ll look like when you’re finished? Take a look at our live, working demo and code walkthrough, watch the video below, or keep reading.
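Before diving into the full walkthrough, here is a minimal sketch of what the browser side of a one-to-one call looks like using the standard WebRTC APIs. The `sendSignal` and `onSignal` helpers are hypothetical placeholders for whatever signaling transport you use (a publish/subscribe channel, for example); they are not part of any particular SDK, and the STUN server shown is just an assumption for illustration.

```typescript
// Minimal sketch of a one-to-one browser call using the standard WebRTC APIs.
// sendSignal/onSignal are hypothetical placeholders for your signaling layer
// (a publish/subscribe channel, for example) and are not part of any SDK.
declare function sendSignal(msg: object): void;
declare function onSignal(handler: (msg: any) => void): void;

const pc = new RTCPeerConnection({
  iceServers: [{ urls: "stun:stun.l.google.com:19302" }], // example STUN server
});

// Forward our ICE candidates to the remote peer as they are discovered.
pc.onicecandidate = (event) => {
  if (event.candidate) sendSignal({ candidate: event.candidate });
};

// Attach the remote peer's media to a <video id="remote"> element when it arrives.
pc.ontrack = (event) => {
  const remoteVideo = document.getElementById("remote") as HTMLVideoElement;
  remoteVideo.srcObject = event.streams[0];
};

async function startCall(): Promise<void> {
  // Capture local audio and video and add the tracks to the connection.
  const stream = await navigator.mediaDevices.getUserMedia({ audio: true, video: true });
  stream.getTracks().forEach((track) => pc.addTrack(track, stream));

  // Create an offer and hand it to the remote peer over the signaling channel.
  const offer = await pc.createOffer();
  await pc.setLocalDescription(offer);
  sendSignal({ sdp: pc.localDescription });
}

// Apply the answer and ICE candidates coming back from the remote peer.
onSignal(async (msg) => {
  if (msg.sdp) await pc.setRemoteDescription(new RTCSessionDescription(msg.sdp));
  if (msg.candidate) await pc.addIceCandidate(new RTCIceCandidate(msg.candidate));
});
```

This covers only the offering side; the answering browser would run the mirror-image flow with `createAnswer`, and a production app would also handle renegotiation and teardown.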
Monthly archive: November 2013
NRRC Video Series – Video 3: Federated Identity and Access Management
In September, the NCOIC delivered the Geospatial Community Cloud (GCC) demonstration. Sponsored by the National Geospatial-Intelligence Agency, the demonstration showed how an interoperable, hybrid-cloud operating environment can be quickly enabled and used as a rapid response capability. While the demonstration was designed around lessons learned from the 2010 Haitian Earthquake, the effort showed how a cloud services brokerage approach could be used to quickly provide critical information technology infrastructure support to an unplanned event.
The NCOIC is an international organization dedicated to accelerating the global implementation of network-centric principles and systems to improve information sharing among various communities of interest, enhancing their productivity, interactivity, safety, and security. The NCOIC Rapid Response Capability (NRRC) video series supports that mission by broadly disseminating information about the GCC demonstration for the benefit of the global community.
What are the big data and predictive analytics market trends for 2014?
Senior executives have been waiting for actionable insight from the mass of information that their IT departments already gather about their customers. But many have grown impatient because they want to see results now. Do you anticipate that data analytics and real-time visualization will advance next year?
Ovum expects a significant wave of business technology ramp-ups in 2014 in response to market demand. They predict a growing third-party vendor and IT services ecosystem that creates Big Data and Fast Data tools and solutions for the enterprise data warehousing and applications markets.
This growing trend is occurring as SQL and Hadoop platforms are diversifying, adopting multiple personalities, and providing overlapping functions. According to Ovum’s latest market study, SQL queries can now be run against Hadoop, and many SQL databases will be able to handle JSON document-centric queries.
And as silicon-based storage — DRAM (dynamic random-access memory) and SSD/Flash …
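To make the “JSON document-centric queries” point above concrete, here is a small, hedged sketch of querying JSON documents stored in a relational database (a PostgreSQL JSONB column) from TypeScript via the pg client. The `events` table, its `data` column, and the connection string are assumptions for illustration, not details from the study.

```typescript
// Hypothetical example: querying JSON documents stored in a relational database
// (a PostgreSQL JSONB column) from TypeScript via the "pg" client. The "events"
// table and its "data" column are assumptions made up for illustration.
import { Client } from "pg";

async function main(): Promise<void> {
  const client = new Client({ connectionString: process.env.DATABASE_URL });
  await client.connect();

  // SQL over JSON documents: filter on a field inside the document and
  // project other fields out of it, with no fixed relational schema required.
  const result = await client.query(
    `SELECT data->>'customer' AS customer, data->>'amount' AS amount
       FROM events
      WHERE data @> '{"type": "purchase"}'`
  );

  console.log(result.rows);
  await client.end();
}

main().catch(console.error);
```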
Amazon continues to “dwarf all competition” in IaaS and PaaS, analyst claims
The latest report from Synergy Research claims that Amazon Web Services’ quarterly revenues in IaaS and PaaS continue to be greater than those of its biggest rivals combined.
Amazon grew by 55% between Q2 2012 and Q2 2013 to over $700m of revenue in the quarter, with Salesforce, Microsoft, IBM and Google making just over $600m combined.
John Dinsdale, of the Synergy Research Group, noted that the battle for second place will be of more interest to analysts.
“We’ve been analysing the IaaS/PaaS markets for quite a few quarters now and creating these leadership metrics, and the relative positioning of the leaders really hasn’t changed much,” Dinsdale said.
“While Amazon dwarfs all competition, the race is on to see if any of the big four followers can distance themselves from their peers,” he added.
With the worldwide market expanding 46%, it appears the competition isn’t making much of a …
Ebook: Hybrid Cloud Management with System Center 2012 R2 App Controller
This month, Yung Chou, Mitch Tulloch and I have published a new book on Hybrid Cloud Management with System Center 2012 R2 App Controller, titled Microsoft System Center: Cloud Management with App Controller. As part of a series of specialized guides on System Center, this book focuses on using App Controller to manage virtual machines and services across private and public clouds.
Microsoft System Center 2012 R2 App Controller is uniquely positioned as both an enabler and a self-service vehicle for connecting clouds and implementing the hybrid computing model. In Microsoft’s cloud computing solutions, both System Center and Windows Azure play critical roles. System Center can be used to transform enterprise IT from a device-based infrastructure and deployment strategy to a service-based, user-centric consumption model built on private cloud computing. Windows Azure, on the other hand, is a subscription-based public cloud platform that enables the development, deployment, and management of cloud solutions. App Controller is the glue that unifies these two platforms, providing a single interface that lets administrators perform complex operations without overwhelming them with the underlying technical complexity.
In this article, I’ll provide an introduction to this book as well as the details for downloading your FREE copy …
Cloud Marketplaces: SaaS or Pseudo-SaaS?
There is a lot of confusion and hype about the cloud and SaaS (Software as a Service), and at Corent we experience it on a regular basis. One of the things I’ve been seeing and hearing about is the concept of a marketplace of cloud applications. I’ve observed that applications sold this way are often described as SaaS even though the key criteria of SaaS aren’t being provided to either the application vendor or the application customer.
While the cloud platform itself is delivered as a service, the business applications offered on cloud marketplaces often are not true SaaS.
Why Automate? What to Automate? How to Automate?
By John Dixon, Consulting Architect
Automation is extremely beneficial to organizations. However, questions often come up about why to automate, what to automate, and how to automate.
Why automate?
Automation offers several key benefits, including:
- Saving time
- Employees can be retrained to focus on other (hopefully more strategic) tasks
- Removing human intervention reduces errors
- Troubleshooting and support are improved when everything is deployed the same way
What to automate?
Organizations should always start with the voice of the customer (VoC). IT departments need to factor in what the end user wants and expects in order to improve their experience. If you can’t trace something you’re automating back to an improved customer experience, that’s usually a good warning sign that you should not be automating it. You also need to be able to trace how automation has benefited the organization; the benefit should always be measurable, and it should always be financial.
What are companies automating?
Request management is the hot one because it’s a major component of cloud computing. This includes service catalogues and self-service portals. Providing a self-service portal, routing the request for approval based on the dollar amount requested, and fulfilling the order through one or more systems is something that is commonly automated today. My advice here is to automate tasks through a general-purpose orchestrator tool (such as CA Process Automation or similar tools) so that automated jobs can be managed from a single console, instead of stitching together disparate systems that call each other in a “rat’s nest” of automation. The general-purpose orchestrator also allows for easier troubleshooting when an automated task does not complete successfully.
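As a rough illustration of that request flow (not any particular orchestrator’s API), the steps might be composed like this; the request shape, the $5,000 approval threshold, and every helper function here are hypothetical stand-ins for jobs an orchestrator would coordinate.

```typescript
// Hypothetical sketch of a self-service request flow: route for approval based
// on the dollar amount requested, then fulfill the order through downstream
// systems. None of these helpers belong to a real orchestrator product; they
// stand in for the jobs a tool like CA Process Automation would coordinate.
interface ServiceRequest {
  requester: string;
  item: string;
  estimatedCost: number; // in dollars
}

declare function requestManagerApproval(req: ServiceRequest): Promise<boolean>;
declare function provisionRequestedItem(req: ServiceRequest): Promise<void>;
declare function notifyRequester(req: ServiceRequest, message: string): Promise<void>;

const APPROVAL_THRESHOLD = 5000; // assumed policy: anything above this needs sign-off

async function handleRequest(req: ServiceRequest): Promise<void> {
  // Route for approval only when the cost crosses the threshold.
  if (req.estimatedCost > APPROVAL_THRESHOLD) {
    const approved = await requestManagerApproval(req);
    if (!approved) {
      await notifyRequester(req, "Your request was not approved.");
      return;
    }
  }

  // Fulfill the order through the downstream system(s) and close the loop.
  await provisionRequestedItem(req);
  await notifyRequester(req, "Your request has been fulfilled.");
}
```

The point is less the specific steps than the fact that they live in one place, so the whole job can be tracked and troubleshot from a single console.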
How to automate?
There are some things to consider when sitting down to automate a task, or even determining the best things to automate. Here are a few key points:
- Start with the VoC or Voice of the Customer, and work backwards to identify the systems that are needed to automate a particular task. For example, maybe the customer is the Human Resources department, and they want to automate the onboarding of a new employee. That may involve setting up user accounts, ordering a new cell phone, ordering a new laptop, and putting the new employee on their manager’s calendar for their first day of work. Map out the systems that are required to accomplish this, and integrate those and no more. You may find that some parts of the procedure are already automated; perhaps your phone provider already has an interface to programmatically request new equipment. Take full advantage of these components.
- Don’t automate things that you can’t trace back to a benefit for the organization. Just because you can automate something doesn’t mean that you should. Again, use the voice of the customer and user stories here. A common user story is structured as follows:
- “As a [role],
- I want to [get something done]
- So that I can [benefit in the following way]”
- Start small and work upwards to automate more and more complex tasks. Remember the HR onboarding procedure in point #1? I wouldn’t suggest beginning your automation journey there. Pick out one thing to automate from a larger story, and get it working properly. Maybe you begin by automating the scheduling of an appointment in Outlook or your calendaring system, or creating a user in Active Directory. Those pieces become components in the HR onboarding story, and perhaps in other stories as well (see the sketch after this list).
- Use a general purpose orchestrator instead of stitching together different systems. As in point #3, using an orchestrator will allow you to build reusable components that are useful to automate different tasks. A general purpose orchestrator also allows for easier troubleshooting when things go wrong, tracking of automation jobs in the environment, and more advanced conditional logic. Troubleshooting automation any other way can be very difficult.
- You’ll need someone with software development experience. Some automation packages claim that even non-developers can build robust automation with “no coding required.” In some cases, that may be true. However, the experience that a developer brings to the table is an absolute must-have when automating complex tasks like the HR onboarding example in point #1.
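To show how those small pieces compose into the larger HR onboarding story, here is a hedged sketch; every function name below is a hypothetical placeholder for a reusable automation component that an orchestrator would run and track, not a real product API.

```typescript
// Hypothetical sketch: small reusable components (create a directory account,
// order equipment, schedule a calendar appointment) composed into the larger
// HR onboarding story. Each function is a placeholder for an automation job
// the orchestrator would run and track, not a real product API.
interface NewHire {
  name: string;
  manager: string;
  startDate: string; // ISO date, e.g. "2014-01-06"
}

declare function createDirectoryAccount(hire: NewHire): Promise<string>; // returns username
declare function orderPhone(hire: NewHire): Promise<void>;
declare function orderLaptop(hire: NewHire): Promise<void>;
declare function scheduleFirstDayMeeting(hire: NewHire): Promise<void>;

async function onboardNewEmployee(hire: NewHire): Promise<void> {
  // Each step is a component you might have automated (and tested) on its own first.
  const username = await createDirectoryAccount(hire);

  // Equipment orders don't depend on each other, so they can run in parallel.
  await Promise.all([orderPhone(hire), orderLaptop(hire)]);

  // Put the new hire on their manager's calendar for day one.
  await scheduleFirstDayMeeting(hire);

  console.log(`Onboarding complete for ${hire.name} (${username})`);
}
```

Each of the small components is worth getting right on its own before it is wired into the larger story, which is exactly the "start small" advice above.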
What has your organization automated? How have the results been?
High-Performance Services Fabric
In the early days of cloud computing we talked a lot about how the economy of scale offered by cloud was achieved mainly through abstraction of resources. Compute, network and storage resources were abstracted and pooled together such that they could be provisioned as services on-demand.
That economy of scale ensured that the cost of using those services decreased, making them affordable for even the smallest of organizations.
In the data center, however, a similar economy of scale has been difficult to achieve because abstraction at the network layers has remained elusive. SDN has recently emerged as a front-runner in the data center as a provider of that abstraction, turning disparate network elements into a cohesive fabric of network resources, dynamically adjusted to deliver the best performance and availability to any application delivered over its formerly rigid pipes.