[video] IBM’s Hybrid Approach to Hybrid Cloud | @CloudExpo @IBM @SoftLayer #Cloud #DevOps

What does it look like when you have access to cloud infrastructure and platform under the same roof? Let’s talk about the different layers of Technology as a Service: who cares about each, what runs where, and how it all fits together.
In his session at 18th Cloud Expo, Phil Jackson, Lead Technology Evangelist at SoftLayer, an IBM company, spoke about the picture being painted by IBM Cloud and how the tools being crafted can help fill the gaps in your IT infrastructure.

AWS and Microsoft get FedRAMP approval for sensitive cloud data

Another day, another piece of good news for both Microsoft Azure and Amazon Web Services (AWS): the two vendors are among three companies that the US government has authorised federal agencies to use for sensitive cloud data.

Azure and AWS, alongside CSRA’s ARC-P IaaS, have been given the green light under the new FedRAMP High Baseline requirements. The full, mammoth spreadsheet documenting each guideline can be found on the FedRAMP website (XLS), but at a general level the requirements enable government bodies to put ‘high impact’ data in the cloud – data whose compromise could threaten lives or lead to financial ruin.

Chanelle Sirmons, communications lead for FedRAMP, explained in an official post: “While 80% of federal information is categorised at low and moderate impact levels, this only represents about 50% of federal IT spend. Now that FedRAMP has set the requirements for high impact levels, that breaks open the remaining 50% of the $80 billion a year the US government spends on IT that could potentially move to the cloud securely.”

“We are pleased to have achieved the FedRAMP high baseline, giving agencies a simplified path to moving their highly sensitive workloads to AWS so they can immediately begin taking advantage of the cloud’s agility and cost savings,” said Teresa Carlson, AWS vice president of worldwide public sector, in a statement. A statement from Microsoft read: “Microsoft remains committed to delivering the most complete, trusted cloud platform to customers. This accreditation helps demonstrate our differentiated ability to support the unique needs of government agencies as they transition to the cloud.”

Amazon and Microsoft have had their clouds FedRAMP accredited since June and October 2013 respectively – back when the latter was still known as Windows Azure – while ARC-P was the first cloud service to receive the federal stamp of approval, back in 2012. Three years on, this latest authorisation represents a major step forward for government use of cloud technologies.

The next phase of SDN: Software defined campus networks

Enterprise and university campus networks have enjoyed decades of architectural permanence. For years, these networks have been built with cookie-cutter designs, with the only critical decision points being the number of ports and users. But with the new challenges presented today – more devices (in both number and type), mobility, security, and diverse application traffic – the management of these networks is finally coming to the forefront. Software-defined networking (SDN) is an ideal methodology for pushing policies to campus networks in a systematic and automated way.

OpenFlow, one of the cornerstones of SDN, was built to facilitate the separation of control from forwarding within network devices. One consequence is that it also allows operators to centralise the control of these devices, thereby simplifying the task of managing the network. And the campus network needs management simplification: between bring your own device (BYOD) and the Internet of Things (IoT), networks are becoming more complex every day. Gartner projects that 6.4 billion connected things will be in use worldwide in 2016 – up 30 percent from 2015 – and that the figure will reach 20.8 billion by 2020.

In the IoT, everything from lighting to cameras to alarm, sprinkler and HVAC systems can be connected to a campus network, and these devices consume bandwidth and create traffic that must be managed. And employees, students and staff now carry multiple devices connected via Wi-Fi, so it’s no longer a matter of managing a fixed number of physical ports.

As OpenFlow has progressed through proofs of concept and into data centre deployments, we have recognised that it provides three fundamental advantages:

  • It maps business logic to the network, enabling policy-driven networking
  • It makes the network programmable, so policies can be applied automatically
  • It enables the use of a packet broker, so you can manage the network with visibility and analytics in mind.

Let’s see how these apply to campus networks.

In a campus network, policy-driven networking is important because many of the configuration requirements are policy-based: security, device access, and usage policies, among others. OpenFlow allows you to map a policy onto how you want the network to behave. For example, if a university wants to regulate the amount of bandwidth used for peer-to-peer applications or non-school activities, you could create a policy that examines traffic and regulates it accordingly. In an enterprise campus, you might want a policy that assigns more bandwidth to the data centre for overnight backups, then re-assigns that bandwidth to users during the day.
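
To make that concrete, here is a minimal sketch of such a peer-to-peer rate-limiting policy, written against the open-source Ryu OpenFlow 1.3 controller. The 10 Mbps rate and the BitTorrent port are illustrative assumptions, not values prescribed by OpenFlow or by any particular campus.

```python
# Minimal sketch: rate-limit peer-to-peer traffic with an OpenFlow 1.3 meter.
# Assumes the open-source Ryu controller; the rate and port are examples.
from ryu.base import app_manager
from ryu.controller import ofp_event
from ryu.controller.handler import CONFIG_DISPATCHER, set_ev_cls
from ryu.ofproto import ofproto_v1_3


class P2PRateLimiter(app_manager.RyuApp):
    OFP_VERSIONS = [ofproto_v1_3.OFP_VERSION]

    @set_ev_cls(ofp_event.EventOFPSwitchFeatures, CONFIG_DISPATCHER)
    def switch_features_handler(self, ev):
        dp = ev.msg.datapath
        ofp, parser = dp.ofproto, dp.ofproto_parser

        # Meter 1: drop anything above 10 Mbps (rates are in kbps).
        band = parser.OFPMeterBandDrop(rate=10000, burst_size=1000)
        dp.send_msg(parser.OFPMeterMod(datapath=dp, command=ofp.OFPMC_ADD,
                                       flags=ofp.OFPMF_KBPS,
                                       meter_id=1, bands=[band]))

        # Policy: TCP traffic to port 6881 (a common BitTorrent port) passes
        # through meter 1, then is forwarded by the switch's normal pipeline.
        match = parser.OFPMatch(eth_type=0x0800, ip_proto=6, tcp_dst=6881)
        inst = [parser.OFPInstructionMeter(meter_id=1),
                parser.OFPInstructionActions(
                    ofp.OFPIT_APPLY_ACTIONS,
                    [parser.OFPActionOutput(ofp.OFPP_NORMAL)])]
        dp.send_msg(parser.OFPFlowMod(datapath=dp, priority=100,
                                      match=match, instructions=inst))
```

Run with ryu-manager, this installs the meter and flow on every OpenFlow 1.3 switch that connects to the controller.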

The second point is programmability. You want to be able to dictate what those policies are, and you want to be able to automate them, typically through an application. Any SDN infrastructure has a northbound API that allows the application layer to communicate with the network layer, so it’s possible to install applications that apply policies. Companies like HPE have pioneered the concept of an “SDN app store”, where applications ship turnkey alongside the infrastructure components, making it easier to deploy campus management. An example is HPE Network Optimizer, which uses SDN and OpenFlow to automate QoS policies for unified communications applications, improving the user experience.
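
What that looks like in practice varies by controller, but the pattern is a simple REST call against the northbound API. The endpoint, auth header, and payload schema below are hypothetical stand-ins invented for illustration; HPE VAN, OpenDaylight, and other controllers each define their own.

```python
# Hypothetical sketch of pushing the overnight-backup policy through a
# controller's northbound REST API. The URL, token header, and JSON schema
# are invented for illustration, not any real controller's API.
import requests

CONTROLLER = "https://sdn-controller.example.com:8443"

policy = {
    "name": "overnight-backup-priority",
    "match": {"dst_subnet": "10.20.0.0/16"},        # data-centre backup targets
    "action": {"min_bandwidth_mbps": 500},          # guaranteed during the window
    "schedule": {"start": "22:00", "end": "06:00"}  # re-assigned to users by day
}

resp = requests.post(CONTROLLER + "/api/v1/policies", json=policy,
                     headers={"X-Auth-Token": "<token>"}, timeout=10)
resp.raise_for_status()
print("Policy accepted:", resp.json())
```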

The third advantage of OpenFlow is the ability to achieve greater visibility into the traffic flows within the campus, since OpenFlow can be used to support a network packet broker.

Traditionally, campus networks have been managed through a combination of element management systems (EMS) and network management systems (NMS). These consoles provide fault management, monitoring, configuration and provisioning functions, but are typically only used in a reactive way.

It’s one thing to manage the network in terms of fault detection or configuration changes, but another aspect of management is visibility and analytics. Network packet brokers are now using OpenFlow as a means to mirror traffic for out-of-band monitoring of traffic flows. Feeding this data into an application is a better way to keep track of which apps are running through the campus network, which users are consuming the most bandwidth, and what the traffic patterns look like.
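
At the flow-rule level, mirroring is just an extra output action. The fragment below is another hedged Ryu OpenFlow 1.3 sketch: it forwards traffic normally while copying every packet out an assumed monitor port that feeds the packet broker.

```python
# Sketch: mirror all traffic to a packet broker while forwarding it normally.
# MONITOR_PORT is an assumed switch port wired to the monitoring tool.
MONITOR_PORT = 48

def install_mirror(dp):
    """Install a catch-all flow on a Ryu datapath that tees traffic."""
    ofp, parser = dp.ofproto, dp.ofproto_parser
    actions = [parser.OFPActionOutput(ofp.OFPP_NORMAL),  # normal forwarding
               parser.OFPActionOutput(MONITOR_PORT)]     # copy to the broker
    inst = [parser.OFPInstructionActions(ofp.OFPIT_APPLY_ACTIONS, actions)]
    dp.send_msg(parser.OFPFlowMod(datapath=dp, priority=10,
                                  match=parser.OFPMatch(),  # match everything
                                  instructions=inst))
```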

We’re starting to see analytics applied to this data for more proactive management. For example, suppose you see heavy congestion from 8–10am, Monday through Friday, usually coming from certain hosts. With OpenFlow, you could use that network visibility to apply a more proactive policy that moves the traffic around, or bring in a load-balancer app that aggregates multiple links to address the problem.
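
As a rough sketch of that loop, the script below polls per-flow byte counts from Ryu’s bundled REST app (ryu.app.ofctl_rest) and flags heavy talkers. The datapath ID, byte threshold, and match-key names are assumptions that would need adjusting for a real deployment.

```python
# Sketch: poll flow statistics from Ryu's REST app and flag heavy talkers
# as candidates for a rerouting or load-balancing policy. DPID and the
# byte threshold are illustrative assumptions.
import requests

RYU_REST = "http://127.0.0.1:8080"   # started via: ryu-manager ryu.app.ofctl_rest
DPID = 1
THRESHOLD_BYTES = 5 * 10**9          # flag flows above ~5 GB

stats = requests.get("%s/stats/flow/%d" % (RYU_REST, DPID), timeout=5).json()
for flow in stats[str(DPID)]:
    # Match-field names vary with the OpenFlow version in use.
    src = flow["match"].get("nw_src") or flow["match"].get("ipv4_src", "?")
    if flow["byte_count"] > THRESHOLD_BYTES:
        print("Heavy talker %s: %d bytes" % (src, flow["byte_count"]))
```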

More vendors are building SDN and OpenFlow into their solutions. Traffic monitoring companies such as Ixia, Gigamon, and VSS – and even traditional networking vendors like Cisco, Brocade, and HPE – are using this approach to extract greater visibility from the network.

In the coming years, campus networks will become far more complex and difficult to manage through traditional EMS and NMS. We have already seen how OpenFlow is making data centres and edge networks more manageable (for example, AT&T stated that it expects to save 40% of its operations costs through the use of OpenFlow and SDN), and it can make campus networks more manageable and responsive as the number of devices, users, and applications grows.

What is Citrix Receiver and how does it work?

Citrix Receiver is client software required to access applications and full desktops hosted on Citrix servers from a remote client device. It provides access to XenApp/XenDesktop installations from many types of client devices, including iPhone, BlackBerry, Mac OS X, iPad, Windows, Linux, Windows Mobile, Android, Google Chromebook, thin clients, and embedded […]

The post What is Citrix Receiver and how does it work? appeared first on Parallels Blog.

How to Maximize #SaaS Revenue with @_Anexia | @CloudExpo #BigData

SaaS companies can greatly expand revenue potential by pushing beyond their own borders. The challenge is how to do this without degrading service quality.
In his session at 18th Cloud Expo, Adam Rogers, Managing Director at Anexia, discussed how IaaS providers with a global presence and both virtual and dedicated infrastructure can help companies expand their service footprint with low “go-to-market” costs.

[video] Zero to Cloud in 60 Minutes | @CloudExpo @Accelerite #API #Cloud

The cloud market growth today is largely in public clouds. While IT departments are spending heavily on virtualization, these investments aren’t yet translating into a true “cloud” experience within the enterprise. What is stopping the growth of the “private cloud” market?
In his general session at 18th Cloud Expo, Nara Rajagopalan, CEO of Accelerite, explored the challenges in deploying, managing, and getting adoption for a private cloud within an enterprise. What are the key differences between what is available in the public cloud and the early private clouds?

Announcing @Commvault Named “Bronze Sponsor” of @CloudExpo Silicon Valley | #Cloud

SYS-CON Events announced today that Commvault, a global leader in enterprise data protection and information management, has been named “Bronze Sponsor” of SYS-CON’s 19th International Cloud Expo, which will take place on November 1–3, 2016, at the Santa Clara Convention Center in Santa Clara, CA.
Commvault is a leading provider of data protection and information management solutions, helping companies worldwide activate their data to drive more value and business insight and to transform modern data environments. With solutions and services delivered directly and through a worldwide network of partners and service providers, Commvault solutions comprise one of the industry’s leading portfolios in data protection and recovery, cloud, virtualization, archive, file sync and share.

Interface Masters Technologies to Exhibit at @CloudExpo Silicon Valley | #Cloud

SYS-CON Events announced today that Interface Masters Technologies, a leader in Network Visibility and Uptime Solutions, will exhibit at the 19th International Cloud Expo, which will take place on November 1–3, 2016, at the Santa Clara Convention Center in Santa Clara, CA.
Interface Masters Technologies is a leading vendor in the network monitoring and high speed networking markets. Based in the heart of Silicon Valley, Interface Masters’ expertise lies in Gigabit, 10 Gigabit and 40 Gigabit Ethernet network access and network connectivity solutions that integrate with monitoring systems, inline networking appliances, IPS, UTM, Load Balancing, WAN acceleration, and other mission-critical IT and security appliances.

[slides] Redis Functions and Data Structures | @CloudExpo #DigitalTransformation

Redis is not only one of the fastest databases around; it has also become one of the most popular among the new wave of applications running in containers. Redis speeds up just about every data interaction between your users and your operational systems.
In his session at 18th Cloud Expo, Dave Nielsen, Developer Relations at Redis Labs, shared the functions and data structures used to solve everyday use cases that are driving Redis’ popularity.
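
The slides themselves aren’t reproduced here, but a hedged example of the kind of use case the talk covers – a sorted set serving as a ready-made leaderboard – looks like this with the redis-py client:

```python
# A small illustration of a classic Redis data-structure use case (not the
# session's actual examples): a sorted set as a leaderboard. Assumes a local
# Redis server and the redis-py client library.
import redis

r = redis.Redis(host="localhost", port=6379, decode_responses=True)

# ZADD keeps members ordered by score -- no application-side sorting needed.
r.zadd("leaderboard", {"alice": 3120, "bob": 2890, "carol": 3344})
r.zincrby("leaderboard", 50, "bob")  # atomic score bump

# Top three players, highest score first.
print(r.zrevrange("leaderboard", 0, 2, withscores=True))
```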

Announcing @Tintri to Exhibit at @CloudExpo Silicon Valley | #Cloud

SYS-CON Events announced today that Tintri Inc., a leading producer of VM-aware storage (VAS) for virtualization and cloud environments, will exhibit at the 19th International Cloud Expo, which will take place on November 1–3, 2016, at the Santa Clara Convention Center in Santa Clara, CA.
Tintri VM-aware storage is the simplest for virtualized applications and cloud. Organizations including GE, Toyota, United Healthcare, NASA and 6 of the Fortune 15 have said “No to LUNs.” With Tintri they manage only virtual machines, in a fraction of the footprint and at far lower cost than conventional storage.
