Why digital business transformation depends on public cloud networking

Companies increasingly seek digital business transformation. From a purely technology perspective, most pieces are in place for this transformation to occur. But too often, one thing is inhibiting the process: public cloud networking complexity.

The public cloud is becoming the new foundation for enterprise computing. Important things will continue to happen in on-premises data centres, intelligent edge devices, and branch offices. But more and more new enterprise applications have their centre of gravity in the public cloud.

Within public cloud networking, virtual private clouds (VPCs) and virtual private networks (VPNs) represent a significant operational challenge for most organizations. Already, they far outnumber data centres and branch networks. In fact, I’ve heard colleagues at Amazon Web Services (AWS)—arguably the leader in public cloud—predict a fourfold increase in VPCs over the next three years.

As more enterprise applications are shifted to the public cloud, network traffic patterns are changing.

Instead of data flow being largely asymmetrical from the cloud (or Internet) down to users, now intelligent connected devices, machine learning, data analytics, and artificial intelligence applications are sending traffic back in the other direction, from the edge to the cloud. More and more, the receiving end of network traffic is the public cloud.

Unfortunately, networking complexity is holding this shift back. Here’s why: the number of VPCs in public cloud infrastructures—whether AWS, Microsoft Azure, or Google Cloud Platform (GCP)—is exploding, but managing secure connections among those VPCs is still daunting for most cloud and DevOps teams, whatever their market.
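
To make the scale concrete, consider full-mesh connectivity built from pairwise VPC peering: N VPCs need N(N−1)/2 peering connections, each with route-table entries on both sides. The sketch below uses AWS’s boto3 SDK with placeholder VPC, CIDR and route-table IDs; it illustrates the bookkeeping pattern rather than being a production script.

```python
# A minimal sketch of why full-mesh VPC connectivity becomes daunting.
# All IDs and CIDR blocks below are placeholders, not real resources.
import itertools

import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

# Hypothetical inventory: vpc_id -> (cidr_block, main_route_table_id)
vpcs = {
    "vpc-aaaa": ("10.0.0.0/16", "rtb-aaaa"),
    "vpc-bbbb": ("10.1.0.0/16", "rtb-bbbb"),
    "vpc-cccc": ("10.2.0.0/16", "rtb-cccc"),
}

for (vpc_a, (cidr_a, rtb_a)), (vpc_b, (cidr_b, rtb_b)) in itertools.combinations(vpcs.items(), 2):
    # One peering connection per pair of VPCs...
    pcx = ec2.create_vpc_peering_connection(VpcId=vpc_a, PeerVpcId=vpc_b)
    pcx_id = pcx["VpcPeeringConnection"]["VpcPeeringConnectionId"]
    # ...which must be accepted (in reality, only once it reaches the
    # pending-acceptance state)...
    ec2.accept_vpc_peering_connection(VpcPeeringConnectionId=pcx_id)
    # ...plus a route in each direction.
    ec2.create_route(RouteTableId=rtb_a, DestinationCidrBlock=cidr_b,
                     VpcPeeringConnectionId=pcx_id)
    ec2.create_route(RouteTableId=rtb_b, DestinationCidrBlock=cidr_a,
                     VpcPeeringConnectionId=pcx_id)

# Three VPCs need 3 peerings; forty need 780, plus 1,560 routes --
# before security groups, overlapping CIDRs or cross-region links.
```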

The challenges multiply for enterprises whose footprints span AWS, Azure and GCP public cloud environments. It’s increasingly common for companies to find themselves with multiple public clouds, often because different teams within the enterprise choose different public cloud providers based on best-of-breed products and services.

Enterprises with such multi-cloud architectures need their enterprise applications and workloads to run seamlessly everywhere, including between public clouds, between clouds and on-premises data centres, and to users.

Addressing this challenge requires a virtual cloud network architecture built specifically for modern cloud environments, where applications, users, and data are highly distributed. In such an architecture, the complexities of networking (think manual configuration, building VPN tunnels, and troubleshooting) are eliminated.

Next-generation secure public cloud networking makes public clouds, and their VPCs, interoperable. Engineers (but not necessarily highly skilled networking gurus) can create the applications they need to achieve their business outcomes—without worrying about how to move workloads between cloud resources.

The rewards of having the right public cloud networking in place can be dramatic. As an example, building a secure tunnel using traditional networking technologies might take eight hours or more; with secure public cloud networking of the kind offered by my company, a non-networking engineer can have a secure tunnel up and running in 15 minutes or less.

In a connected, cloud-based world, applications are inseparable from the networks they run on. Business outcomes are measured less in total cost of ownership (TCO) and return on investment (ROI) than in acceleration of innovation. Digital business transformation depends directly on the network, which is now a mandatory, foundational part of any business strategy.

To make digital business transformation a reality, companies need consistent cloud networking to connect the various segments of the cloud and the diverse edges. A modern public cloud networking architecture can help companies navigate more smoothly to the digital business transformation future they envision.


How to boost your business Wi-Fi


Steve Cassidy

17 Jul, 2018

There’s a sense in many offices that Wi-Fi represents a great break for freedom – as if your old Ethernet infrastructure was some kind of authoritarian dystopia. There’s something romantic in that idea, but it’s apt to turn sour when the realisation dawns that an overloaded or poorly configured wireless network can be every bit as flaky as a wired one.

Indeed, the experience can be even more disagreeable if you don’t understand what’s going on. I’ve seen one business resort to adding more and more DSL lines and Wi-Fi-enabled routers, to try to resolve an issue where wireless users were intermittently losing internet access. Nothing helped: in the end, it turned out that the wireless network itself was working fine. The problem was the ISP rotating its live DNS servers in some baroque plan to knock out hackers or spammers.

So lesson one is: before you start planning to upgrade your wireless provision, first of all ask yourself what the problem is you’re trying to solve, and then investigate whether it could conceivably be caused by bugs or bottlenecks elsewhere on the network. If that’s the case then a large, expensive Wi-Fi upgrade project may be no help to you at all. You might get better results from simply spending a few quid to replace old trampled patch leads.
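
A few minutes of scripted testing can rule the obvious suspects in or out before any money is spent. As one example, the sketch below times name resolution against each configured DNS server; it assumes the third-party dnspython package (pip install dnspython), and the server addresses are placeholders for your own.

```python
# Sanity-check DNS before blaming the Wi-Fi: time lookups against each
# configured resolver. Server IPs below are examples; use your own.
import time

import dns.resolver

DNS_SERVERS = ["192.168.1.1", "8.8.8.8", "1.1.1.1"]
TEST_NAME = "example.com"

for server in DNS_SERVERS:
    resolver = dns.resolver.Resolver(configure=False)
    resolver.nameservers = [server]
    resolver.lifetime = 3  # seconds before we declare the server dead
    start = time.monotonic()
    try:
        answer = resolver.resolve(TEST_NAME, "A")
        elapsed_ms = (time.monotonic() - start) * 1000
        print(f"{server}: {answer[0]} in {elapsed_ms:.0f} ms")
    except Exception as exc:
        print(f"{server}: FAILED ({exc})")
```

An intermittently failing or slow entry here points at the ISP or resolver configuration, not the wireless network.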

1 – Multiple services make for resilient networks

When people talk about “boosting” their Wi-Fi, they’re almost always talking about speed. But there’s no single way to increase the throughput of a wireless network.

It may be that you need a ripout and redesign of your entire setup. Or it might be a case of tracking down a misconfiguration, in which all the machines simply sit showing their busy cursors because of a poor DSL link or a foolishly chosen cloud dependency.

The culprit might not even be connected to your network: it could be a machine like an arc welder that generates RF interference as a by-product of its regular duties, and flattens the wireless connection of any device within a 10m radius. Upgrading your Wi-Fi is rarely just about picking a quicker router.

Speed isn’t the only consideration, either. Do you want to control or log guest access – or will you in the future? Should you prioritise bandwidth for internal staff, or for the IT team? Might you even want a honeypot machine to divert and ensnare would-be intruders? These functions are likely to exceed the capabilities of your standard small plastic box with screw-on antenna ears.

If your Wi-Fi is important enough to warrant an upgrade, then don’t limit your thinking (or your spend) to a slightly better router. Finally, think about robustness. Investing in multiple DSL lines with multiple providers makes it harder for random outages and blips to knock your business offline, and being able to route internally over a programmable Ethernet router (look for “layer 3 routing and VLANs” in the description) at least gives you some ability to respond on a bad day.

2 – Remember, it’s radio, not X-rays

If you’re ready to upgrade your wireless network – or to set one up for the first time – then you should start by taking a look at your premises. You need to work out how you can achieve reasonably uniform coverage. You can do the basic research by just wandering about the building holding a smartphone loaded with a free signal-strength metering app.
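
If your laptop runs Linux with NetworkManager, even the command line will do for a rough survey. The sketch below simply lists visible networks by signal strength via nmcli; run it at several points around the building and compare the readings for your own SSID.

```python
# Rough Wi-Fi survey using NetworkManager's nmcli (Linux).
import subprocess

result = subprocess.run(
    ["nmcli", "-t", "-f", "SSID,SIGNAL,CHAN", "dev", "wifi", "list"],
    capture_output=True, text=True, check=True,
)

networks = []
for line in result.stdout.strip().splitlines():
    # Naive split: SSIDs containing escaped colons are not handled.
    ssid, signal, chan = line.split(":")[:3]
    networks.append((ssid or "<hidden>", int(signal), chan))

# Strongest first; weak readings for your own SSID mark the dead spots.
for ssid, signal, chan in sorted(networks, key=lambda n: -n[1]):
    print(f"{signal:3d}%  ch{chan:>3}  {ssid}")
```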

There are much more satisfyingly complex devices than that, of course. These may become useful when you have the problem of a wireless footprint that overlaps with that of your neighbours. The issue might be overcrowded channels, or it might be down to the general weirdness of RF signal propagation, which can mean that you get horrific interference from a next-door network that, by rights, ought to be weak and distant.

Almost never is the solution to boost the transmission power of your APs. Turning the power down on your base stations and installing more of them, in collections that make best use of wired back-links and collective operation, is much more likely to fix dead spots and interference than a single huge, throbbing, white-hot emitter in the corner of your office.

3 – Wi-Fi over a single cable

Once you start shopping for business-grade Wi-Fi gear, you’ll quickly encounter Power over Ethernet (PoE). This can be a convenient solution for devices that don’t draw much power and don’t necessarily want to be situated right next to a mains socket.

However, PoE can also be a dangerous temptation to the rookie network designer. “Look, it just runs off one wire – without the annual testing and safety considerations of a 240V mains connection!”

The catch is that the power still has to come from somewhere – most often a PoE-capable switch. This might be a convenient way to work if you want to run 24 access points from a single wiring cupboard with one (rather hot) Ethernet switch carrying the load. But very few businesses require that kind of density of access points. It’s more likely you’ll have only a few PoE devices.
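
A quick back-of-envelope budget shows why that single switch runs hot. The figures below are illustrative 802.3af numbers rather than the spec of any particular switch:

```python
# Illustrative PoE budget for the 24-AP wiring-cupboard scenario.
AP_COUNT = 24
WATTS_PER_PORT = 15.4  # 802.3af worst case; 802.3at ("PoE+") allows 30 W

total_watts = AP_COUNT * WATTS_PER_PORT
print(f"{AP_COUNT} APs x {WATTS_PER_PORT} W = {total_watts:.1f} W")
# -> 369.6 W. Check the switch's stated PoE power budget: on cheaper
# models it is often far below ports x per-port maximum.
```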

So for your medium-sized office, you’ll probably end up acquiring and setting up additional PoE switches alongside your main LAN hardware – which is hardly any simpler or cheaper than using mains power. It also raises the question of whether to put your wireless estate on one VLAN and everything else on another.

4 – Strength in numbers

More APs is almost always better than trying to increase signal strength. It does have implications for management, though.

Businesses taking their first steps beyond a traditional single-line DSL router often have a hard time converting to a setup where access control and data routing are entirely separate jobs from the business of managing radio signals, advertising services and exchanging certificates.

How you handle it depends – at least partly – on what sort of access points you’ve chosen. Some firms opt for sophisticated devices that can do all sorts of things for themselves, while others favour tiny dumb boxes with barely more than an LED and a cable port.

The larger your network grows, the more sense the latter type makes: you don’t want to be setting up a dozen APs individually, you want them all to be slaves to a central management interface. That’s especially so if you need to service a site with peculiar Wi-Fi propagation, handle a highly variable load or deal with a large number of guests wandering in and out of the office.

5 – The temptation of SSO

Single sign-on (SSO) is something of a holy grail in IT. The idea is that users should only have to identify themselves once during a normal working day, no matter how many systems they access.

It’s not too hard to achieve when it comes to Wi-Fi access, but the result isn’t very slick, on either the network side or the clients’. The part of the Wi-Fi login cache that handles SSO – deciding whether a password saved in a web page can be used to sign in to a particular WLAN – is also the part that hotel Wi-Fi systems sniff to tag a single location as “definitely my home” and override all other claimants to the tag: set this attribute on your guest Wi-Fi at your peril.

And while it sounds attractive to have to enter just a single password – after which a portfolio of machines, routers and cloud services will recognise your user as already validated – the reality isn’t as great. For one thing, people are used to typing in passwords these days: it isn’t a scary techie ritual any more. You don’t need to shield them from it.

Then there’s the continual and unresolvable fight between vendors as to who owns the authentication database itself. Nobody with a real job to do could possibly keep up with the in-depth technical mastery required to shift from one authentication mechanism to another – but that doesn’t stop various players from trying to tempt you to take up their system or proprietary architecture. The result is an unwelcome chunk of extra complexity for you to master.

6 – Beware compatibility gotchas

On the subject of proprietary approaches, it’s a fact that many base stations and Wi-Fi-enabled devices just don’t work together.

Sometimes the problem is about range, or about contention (how many devices in total you can get into one repeater) or concurrency (how many devices can communicate at the same time). Other times it’s an idiosyncratic firmware issue, or some quirky issue with certificates on one side of the conversation, which renders the other side effectively mute.

I’ve seen plenty of firms run into these problems, and the result tends to be cardboard boxes full of phones, still with months on their contracts but unable to connect to the company WLAN since the last upgrade. It’s not a good look for the IT man in the spotlight: «You’ve broken the Wi-Fi!» is an accusation that always seems to come from the best-connected, least calm member of your company.

The real solution is to acknowledge the reality of compatibility issues, and plan for them. You don’t have to delve into the technical minutiae of your shiny new service, but you do need to work out how, and for how long, you need to keep the old one running in parallel to sidestep any generational problems. Thus, your warehouse barcode readers can keep connecting to the old SSIDs, while new tablets and laptops can take advantage of the new Wi-Fi.

If users are educated about this “sunset management” then hopefully they’ll feel their needs are being respected, and legacy devices can be upgraded at a manageable pace and at a convenient time.

7 – Manage those guests

One pervasive idea about Wi-Fi is that it can and should be «free». It’s a lovely vision, and it has perhaps helped push the telephone companies to cheapen up roaming data access – but within a business it’s a needless indulgence that makes it difficult to fully secure your IT portfolio. After all, it’s your responsibility not to get hacked, nor to facilitate someone else’s hack; opening up your network to all and sundry, with no questions asked, is hardly a good start.

That doesn’t mean you can’t let visitors use your network at all – but it does mean you should give them managed guest access. Think about how much bandwidth you want guests to have, and what resources you want to let them access. Do you want to treat staff and their personal devices as if they were visitors, or do they get a different level of service?
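
On Linux-based access points, one common pattern is a separate guest SSID with client isolation, kept on its own interface or VLAN so guests can reach the internet but not the LAN. The hostapd configuration sketched below is a minimal example, with placeholder interface name, SSID and passphrase; bandwidth limits would be layered on separately, for instance with traffic shaping on the guest interface.

```python
# Generate a minimal hostapd config for an isolated guest SSID.
# Interface, SSID and passphrase are placeholders for your own values.
GUEST_CONF = """\
interface=wlan1
ssid=OfficeGuest
hw_mode=g
channel=6
wpa=2
wpa_key_mgmt=WPA-PSK
rsn_pairwise=CCMP
wpa_passphrase=change-this-passphrase
# ap_isolate stops guest clients from talking to each other
ap_isolate=1
"""

with open("/etc/hostapd/guest.conf", "w") as conf:
    conf.write(GUEST_CONF)
```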

8 – What about cloud management?

The bigger your network grows – the more users, APs and network resources it embraces – the more important management becomes. And it’s not just about convenience but, again, security.

Our own Jon Honeyball became a fan of Cisco’s cloud-based Meraki management service when it enabled him to see that over 3,000 new devices had tickled his wireless perimeter in a week. It’s a statistic that makes for instant decisions in boardrooms. It’s very unlikely that all of these contacts were malicious. Most were probably just cars driving past with Wi-Fi-enabled phones.

Spotting the difference is where threat-detection systems really sort the sheep from the goats, and that’s something you can operate in-house: you don’t absolutely have to run all your devices from a vendor’s cloud service layer, since your local resources, like separate DSL lines and routers, can still sit behind cloud-aggregated, collectively managed base stations.

If you’re in a business that doesn’t touch the Wi-Fi from one year to the next, cloud management may hardly matter at all. And while a cloud-based solution may seem to offer security advantages, it’s still necessary to protect your own network, so it’s not as if you can forget about security. Advanced password management for both users and administrators is an absolute must for any cloud-managed Wi-Fi estate.



How to get the right kind of control over your cloud: A guide

Trust in the cloud hasn’t always been universal. There was a time when security and risk management leaders feared entrusting critical data and infrastructure to a third-party cloud provider. This was understandable, arising from the history of network management, where IT teams were intimately familiar with managing the resources that made up their IT infrastructures, from the buildings they were housed in, to the electricity and cooling supply, through to the server, all the way down to the storage and networking infrastructure.

However, this familiarity isn’t possible when you delegate responsibility to your cloud provider, and clinging to it can prevent organisations from gaining optimal cloud efficiency and security. Clearly, a shift in mindset is needed.

In their report “CISO Playbook: How to Retain the Right Kinds of Control in the Cloud”, Gartner make the analogy that moving to the cloud is a bit like flying somewhere on a plane rather than driving your own car. You are relinquishing control of your journey to the flight crew, which can cause anxiety. But that anxiety is not rational: whereas you might check the oil, tyres and windscreen washer fluid on your car once in a blue moon, the plane is checked rigorously before every flight. In short, migrating to the cloud requires a new outlook on how you control your data, and a better understanding of what cloud service providers do to ensure security, so that you feel comfortable giving up ownership of the underlying platform.

In today’s context, customers still own their data but share stewardship with cloud providers. The concept of “control” has changed from physical location-based ownership to control of processes. Information security and risk management leaders therefore need to adopt a new approach of indirect control to achieve efficiency, security and above all peace of mind. With this in mind, we will try to define how you can get the right kind of control over your cloud.

Design the right identity and access management strategy

Security teams and developers can find cloud-based control concepts difficult to grasp. But really, it’s a similar situation to giving up ownership of the fibre and copper in their wide-area networks: telecommunications carriers own the physical infrastructure, but the data remains owned and controlled by their customers. It’s all about delineating security responsibility. Once you’ve defined the hand-off point, you’ll know that beyond it your CSP is responsible for security.

Your responsibility lies in designing an identity and access management (IAM) strategy that covers not only the cloud platform but also the applications and services it presents to the outside world. Access should be granted on a “least privilege” basis, rather than giving blanket authority to all users. This improves audit capabilities and reduces the risk of unauthorised changes to the platform.
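
As a concrete illustration of least privilege, the sketch below uses AWS’s boto3 SDK to create a policy granting read-only access to a single, hypothetical S3 bucket rather than blanket storage rights; every major cloud platform has an equivalent mechanism.

```python
# Least-privilege example: read-only access to one bucket, nothing more.
# Policy name and bucket ARN are placeholders.
import json

import boto3

policy_document = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": ["s3:GetObject", "s3:ListBucket"],
            "Resource": [
                "arn:aws:s3:::example-reports-bucket",
                "arn:aws:s3:::example-reports-bucket/*",
            ],
        }
    ],
}

iam = boto3.client("iam")
iam.create_policy(
    PolicyName="reports-read-only",
    PolicyDocument=json.dumps(policy_document),
)
```

Attach such a policy to a role or group rather than to individual users, so access rights stay auditable as people join and leave.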

On top of that, you should work with your cloud provider to ensure encryption for higher degrees of logical isolation. Encryption of data at rest and in transit is often seen as another way to secure, segregate and isolate data on a public cloud platform. While it is highly unlikely that anyone could break into a public cloud data centre and physically steal a disk drive containing your data, encrypting data at rest is still strongly recommended.
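
On AWS, for example, default server-side encryption can be switched on per bucket with a single API call. The sketch below uses boto3 with a placeholder bucket name; a customer-managed KMS key would give a stronger degree of control over the encryption keys themselves.

```python
# Turn on default encryption at rest for one (placeholder) S3 bucket.
import boto3

s3 = boto3.client("s3")
s3.put_bucket_encryption(
    Bucket="example-reports-bucket",
    ServerSideEncryptionConfiguration={
        "Rules": [
            # "AES256" uses S3-managed keys; "aws:kms" would use KMS.
            {"ApplyServerSideEncryptionByDefault": {"SSEAlgorithm": "AES256"}}
        ]
    },
)
```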

Increase monitoring and re-orient audit objectives

With the regulatory environment growing in complexity, organisations using the cloud are increasingly asked to demonstrate their strong governance. The fact that you’ve delegated some control to your CSP means that you’ll have to demonstrate that governance procedures are in place and are being followed.

To do so, you should seek out a cloud service provider that offers security and compliance monitoring and reporting, and that holds the compliance attestations to ensure your cloud workloads can meet the necessary requirements come audit time.

Compare your security requirements and measure CSP performance against SLAs

Another point to pay close attention to is the contractual terms that bind the CSP with respect to protection of customer data and privacy. Contracts with hyperscale cloud providers tend to overwhelmingly protect those CSPs, but it is possible to work with some CSPs to reach agreement on terms more favourable to customers.

The final recommendation concerns cloud service provider contracts and SLAs. Many CSPs, especially the hyperscale providers, are extremely rigid with their SLAs and reluctant to change them when asked. It’s important to find out where your CSP stands on different aspects of compliance. Are they able to share their certifications and attestations? How flexible are they with their SLAs on subjects such as availability? Will they pay out service credits if service falls short of the SLA? These are questions you need answered before going forward with your CSP. One extra piece of advice: compare your security requirements for externally hosted data against the capabilities of CSPs in the context of your risk appetite.

To summarise: with security risks and compliance regulations only increasing alongside the adoption of cloud services, it’s important to understand shared responsibility with regard to cloud security. Striking the right balance between relinquishing and maintaining control in the cloud will enable your business to securely leverage the many benefits of cloud services. Having control of your cloud doesn’t mean managing every aspect of it; it means knowing exactly what you are accountable for. That is the right kind of control.


What makes a good bespoke app?


Sandra Vogel

18 Jul, 2018

When we think about apps, we usually think about what we can see on our smartphones. In reality, that only scratches the surface of a highly lucrative industry where software is changing the face of business.

There are countless bespoke apps out there commissioned by organisations who need tools that can help people do their jobs that bit more easily.

Every part of an organisation can benefit from bespoke apps, from logistics to human resources, from customer or user services to resources management. But getting an app that’s fitted precisely to the organisation’s needs isn’t easy. The whole point of ‘bespoke’ is that it is tailored, not ‘off the shelf’ or ‘one size fits all’. The tailoring is the skill, and it’s where bespoke apps succeed – or fail.

Design thinking

Nick Ford, chief technology evangelist at Mendix, a software platform that allows apps to be created without coding and whose clients include Kwik Fit, ING and War Child, says there’s no “silver bullet” when it comes to developing your own app, but that “taking a design thinking approach can set you off on the right path”.

Design thinking is absolutely central to the whole process of creating effective bespoke apps. Ford tells Cloud Pro that it “ensures the finished app is solving the right problems through keeping the end-user at the heart of the entire design and development process”.

“That might sound basic, but you’d be amazed at how many businesses develop apps that fail to identify or address the right problems,” he adds.

In practice, that means making sure the people who are going to use the bespoke app are involved in creating its look, its feel, and the services it offers. Achieving this requires technical teams to take a step back, work alongside specialists in user involvement, and figure out how to implement user requirements in the app’s design.

Keep on talking

Importantly, users need to be involved throughout the whole development cycle – and beyond. It’s no good just asking them what they want at the start, going away and asking the tech teams to produce an app, presenting it to users for a short period of testing before it is unleashed, making a couple of changes, and then retiring to focus on the next project.

Eveline Oehrlich, director of market strategy at New Relic, a company that specialises in monitoring the efficiency of apps, tells us: “As a company’s custom app usage grows, it can be increasingly difficult to ensure these apps deliver consistent quality and reliability. If left unchecked, there’s a risk that there will be a negative impact on employee productivity, customer and partner satisfaction, and, ultimately, the company’s bottom line”.

Nick Ford adds that usually “the IT team is so separated from the wider business that it’s impossible for the right conversations to be had at the right time. When apps are developed in isolation, there’s no room for snags to be caught and dealt with early”.

Avoiding avoidance

In the end, the ultimate goal in producing a bespoke app is to create something people will use. Fail to take the design thinking approach, and fail to keep end users involved throughout the process, and an organisation is simply forking out more money on bespoke for the same experience offered by an off-the-shelf product.

As Michael Macauley, general manager at Liferay, tells us: “If workers feel they’re fighting against an application, they are more likely to either try and circumvent it, or give up on that process entirely.”

Moreover, for Liferay, whose clients include T-Mobile, Airbus and Domino’s, this is about more than just apps. The design thinking approach needs to extend into every aspect of digital life – web, apps and beyond. Consistency is all.

Achieving the required level of user-friendliness calls for what Nick Ford refers to as a ‘feedback loop’ – a continuous process of gathering user feedback, making its collection “part of the environment”.

“[This means] users can take an active role across the complete application lifecycle – so the finished app works for everyone,” he explains.

Just like tailored clothing that gets taken in here and let out there, an app needs to respond to user needs throughout its life. And that, after all, is the point of a bespoke app. As Macauley puts it, “the ultimate aim of good design should be ease of use”.

Image: Shutterstock

SolarWinds acquires Trusted Metrics to add real-time threat monitoring to cloud security mix

SolarWinds is on the acquisition trail again – this time confirming the acquisition of Trusted Metrics, a real-time threat monitoring and management software provider.

The acquisition will enable SolarWinds to release a new security product under the name SolarWinds Threat Monitor, an automated tool that aims to make threat detection easier for IT operations teams, managed service providers and managed security service providers.

As regular readers of this publication will testify, organisations’ cloud initiatives are becoming ever more complex – and with that, the security stakes rise significantly. Back in May, Alex Bennett of Firebrand Training named security as the number one skill businesses and employees need in 2018, while new security snafus are rarely out of the news.

Writing for this publication in May, Srivats Ramaswami, CTO at 42Q, cited manufacturing as an industry where cloud security needed to be taken more seriously. “Remember, the best application providers and data centres have large, dedicated security teams who have implemented automated threat monitoring systems that operate 24×7,” he wrote. “In the end, the best cloud software companies have dedicated more time, resources and budget to securing our systems than most organisations are able to provide themselves.”

This makes for interesting reading compared with what SolarWinds is attempting to do. The new product, using the technology of Trusted Metrics, will aim to aggregate a plethora of data sources, such as asset data, security events and network intrusion detection, and correlate them with continuously updated threat intelligence to ‘identify the danger signals amidst all the innocent noise of a normal network’.
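
As a generic illustration of that correlation idea (not SolarWinds’ actual implementation), the sketch below checks observed network events against a threat-intelligence blocklist so the handful of danger signals stand out from the noise; all addresses are from documentation ranges.

```python
# Toy event/threat-intel correlation: flag events touching known-bad IPs.
SUSPICIOUS_IPS = {"203.0.113.5", "198.51.100.23"}  # e.g. from a threat feed

events = [
    {"src": "10.0.0.12", "dst": "93.184.216.34", "port": 443},
    {"src": "203.0.113.5", "dst": "10.0.0.40", "port": 22},
    {"src": "10.0.0.7", "dst": "198.51.100.23", "port": 445},
]

for event in events:
    if {event["src"], event["dst"]} & SUSPICIOUS_IPS:
        print(f"ALERT: {event['src']} -> {event['dst']}:{event['port']}")
```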

“The acquisition of Trusted Metrics will allow us to offer a new product in the SolarWinds mould – powerful, easy to use, scalable – that is designed to give businesses the ability to more easily protect IT environments and business operations,” said Kevin Thompson, SolarWinds CEO, in a statement.

The move complements the company’s acquisition of software as a service provider Loggly, announced at the start of this year.