All posts by JamieTyler

From artisans to autonomics: How the automation polar shift will happen


Since the end of the 19th century, humans have been fascinated with the idea of artificial intelligence and self-governing machines – from the early novels of HG Wells to the Hollywood blockbusters 2001: A Space Odyssey, The Terminator and Ex Machina. And while this fascination will probably never subside, one thing is certain: in the not-so-distant future, machines will become increasingly useful to us.

One area ripe for this advancement is IT infrastructure management. Machine intelligence will allow infrastructure to regulate, protect and heal itself from external attacks or threats.

Evolution of IT services: From the artisans to autonomics

In the coming years, there will be a polar shift as autonomics begins to play an increasing role in the delivery of services. The polar shift (or, more accurately, the geomagnetic reversal) happens when magnetic north and magnetic south swap places (about once every 500,000 years). The IT equivalent means humans and machines swapping places. In today’s workplace, humans use systems and processes to fix problems when infrastructure goes wrong. With true autonomics, the machines self-regulate and only occasionally escalate decision making to humans.
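To make that hand-off concrete, here is a minimal sketch of such a self-regulating loop. Everything in it – the fault codes, the remediation playbook, the health check – is hypothetical, but it shows the basic shape: the machine fixes what it knows how to fix and pages a human only for the rest.

```python
import random
import time

# A minimal, hypothetical sketch of an autonomic control loop:
# the system monitors itself, applies known remediations, and
# escalates to a human only when those fail.

KNOWN_REMEDIATIONS = {
    "disk_full": "purge_old_logs",
    "service_down": "restart_service",
}

def check_health():
    # In a real system: poll metrics, health probes and intrusion sensors.
    return random.choice([None, None, "disk_full", "unknown_fault"])

def apply_remediation(action):
    # In a real system: run the playbook and verify recovery.
    print(f"Applying remediation: {action}")
    return True

def escalate_to_human(fault):
    # The occasional hand-off to a person described above.
    print(f"Paging on-call engineer: unresolved fault '{fault}'")

def autonomic_loop(cycles=10, poll_seconds=1):
    for _ in range(cycles):
        fault = check_health()
        if fault:
            action = KNOWN_REMEDIATIONS.get(fault)
            if action and apply_remediation(action):
                continue  # resolved without human involvement
            escalate_to_human(fault)
        time.sleep(poll_seconds)

if __name__ == "__main__":
    autonomic_loop()
```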

Over time, autonomics will display ever more sophisticated cognitive characteristics, with machines interacting more effectively with customers, clients and operators. The upside of this transformation is not only economic – it’s about delivering a higher quality of service. With autonomics, it’s about eliminating mistakes – which humans are prone to – and enabling machines to do things faster and more thoroughly. This is my vision of the future.

Cloud’s key role in autonomics

At its heart, autonomics is partly about the automation of key processes and tasks, but it is also about machines being able to make decisions as the environment in which they operate changes.

The on-ramp for these advanced autonomics is cloud computing. The beauty of a cloud platform that is compatible with both the traditional ITIL world and the highly automated cloud environment is that this multimodal way of working can be streamlined. The marriage of cloud computing and autonomics means that machine intelligence is no longer restricted to an on-premises architecture, but can be spread across various cloud-based services that run simultaneous instances of autonomics. In other words, businesses running machine learning in the cloud will be able to scale their services to meet demand across a variety of platforms, anywhere in the world.

Because machine learning improves with continued use, widespread adoption of machine intelligence through the cloud will lead to autonomics, a natural evolution of automation. At that point, autonomic computing can evolve almost naturally, much like human intelligence has over the millennia. There’s one stark difference, though: with cloud computing, the advances will happen at a much faster pace.


The white space between hand-crafted and automated approaches to the cloud


The journey to the cloud will be faster for some than others. For startups, the decision is easy: they are “cloud first”. They have low overheads and can test products and services in new markets with ease. This model is one of the main reasons that the likes of Uber and Airbnb have succeeded where the dot-com companies of the late 90s failed (that, and consumer Internet access and the rise of smartphones).

For CIOs of larger organisations, it’s a different picture. They need to move from a complex, heterogeneous IT infrastructure to a highly automated – and, ultimately, highly scalable – distributed platform.

On the journey to homogeneous cloud, where the entire software stack is provided by one vendor, the provision of managed services remains essential. And to ensure a smooth and successful journey, enterprises need a service provider that can bridge the gap between the heterogeneous and homogeneous. In specific terms that means a bridge between: the network and the data centre; the IT infrastructure library (ITIL) and DevOps; infrastructure and professional services; and finally, local and global services.

This transformation in IT services is being driven by a need to match IT capabilities to an increasingly demanding business. CEOs are beginning to see the connection between digital investments and business objectives. According to a recent PwC survey, 86% of CEOs say a clear vision of how digital technologies can help achieve competitive advantage is key to the success of digital investments.

Businesses demand a lot from technology. They expect 100 per cent connectivity and application uptime. Their customers likewise expect availability of services around the clock – and flock to social media to let the world know if they can’t get them. Businesses expect IT to support the mobile revolution, to help employees become more productive and to ultimately help the company become more profitable. It’s a big ask. 

At the same time, there’s increasing pressure on the technology function. Cost containment, shorter implementation cycles and (the dreaded) regulatory compliance are all issues for the CIO to contend with – not to mention finding motivated, skilled and loyal staff to help them with these challenges, or the myriad other problems that go with their day-to-day jobs.

It’s easy to see the attraction of moving infrastructure and applications to the cloud: flexibility and choice. Those two attributes deliver business agility that’s every bit as important as the potential cost savings. This contrasts with the traditional service value proposition, where complex processes are managed by a third party.

This 20-year-old traditional service model can be compared to a restaurant where the chef cooks the meal in front of the guests. There are times when you do want to go to that restaurant and watch the preparation step by step. It might be important to you from an architecture or compliance perspective, or simply for awareness, to know every single step of the way.

There’s nothing wrong with this model in principle, but it has been superseded, in many instances, by the modern era of cloud and software as a service (SaaS) – models that deliver data at the press of a button. Yet between the hand-crafted approach at one end and the highly automated cloud and SaaS model at the other, there’s a lot of white space.

This white space is where many companies are today. The situation is like an 80s robot serving drinks: if it feels like an antiquated robot and looks like an antiquated robot, the chances are that you have a dud robot on your hands. You’re not really getting the true automated user experience you expected. You may be able to glean that there is some virtualisation and automation in the core of the service, but you’re missing out on good user experience, or getting something that’s bolted together.

Modernising these old service approaches requires a lot of thinking, but at the end of the day, automation always wins over headcount. Autonomic management systems can also ensure full integration between the workflow and the platform as a service (PaaS) framework, with security and attack detection, an application scheduler and dynamic resource provisioning algorithms.
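To illustrate the dynamic resource provisioning piece, here is a hypothetical sketch of the kind of threshold-based scaling rule such a system might evaluate on every monitoring cycle. The function name and thresholds are invented for the example.

```python
# Hypothetical sketch of a threshold-based provisioning decision,
# one of the algorithms an autonomic management system might run.

def provisioning_decision(cpu_utilisation, instances,
                          scale_up_at=0.80, scale_down_at=0.30,
                          min_instances=2, max_instances=20):
    """Return the new instance count for the observed average CPU load."""
    if cpu_utilisation > scale_up_at and instances < max_instances:
        return instances + 1          # add capacity before users notice
    if cpu_utilisation < scale_down_at and instances > min_instances:
        return instances - 1          # release idle capacity, save cost
    return instances                  # steady state: no action needed

print(provisioning_decision(0.92, instances=4))   # -> 5
print(provisioning_decision(0.15, instances=4))   # -> 3
```

Real autonomic platforms layer far more signal on top of a rule like this – queue depth, response times, attack indicators – but the decision structure is similar.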

Eventually, that automated user experience will be able to take the edge cases we typically reserve for human instruction and deliver something much more sophisticated. Some of the features of autonomic cloud computing are available today for organisations that have completed – or at least partly completed – their journey to the cloud.

The cloud era has been with us for a while but the momentum is gathering pace. IT managers now trust and understand the benefits. With more and more demand for digitisation and the monetisation of IT services, companies need – and now expect – an agile IT service that can finally match the demands it faces.

Four best practice tips for creating an effective hybrid IT union


It’s truly amazing that weddings actually work. The bride and groom bring together a vast collection of people and service providers for a celebration and expect it all to go to plan. It’s hard to imagine another scenario where all these personalities – your university mates, your buddies from work, the uncle you’ve not seen in years – would end up in the same room together, let alone in a conga line together.

The hybrid IT environment today isn’t dissimilar to a wedding. Companies combine independent application stacks, vendors, and facilities into one portfolio and hope it will work both as expected and effectively – the difference is that in IT, it has to work for an extended period of time, not just one special day.

Hybrid infrastructure is becoming the norm. Research from RightScale’s 2015 State of the Cloud report indicates that more than half (55%) of companies are planning for hybrid clouds, distributing their workloads across public and private clouds. We see customers begin their hybrid journey each day. Below are four ways you can set your business up for hybrid IT success.

Look for reasonable continuity

When planning a wedding, it’s handy, if possible, to find vendors that are familiar with each other and can easily collaborate on things like food and entertainment. Hybrid environments can benefit from the same thing, except it’s almost equally important to know where familiarity is not possible.

The chance that all your infrastructure providers will use the same hardware vendors is extremely slim, but that’s okay. If you’re deeply into virtualisation, a common hypervisor is helpful. The application or workload is the main thing that matters, but having a common thread at the virtualisation layer can simplify migration and ongoing management. You should also be able to find common identity management protocols across your hybrid providers.

Networking can be particularly risky – never assume that all the vendors within your hybrid architecture can support the same network topology and appliances. One underrated area of continuity is managed services. Having a single provider for managed services across your hybrid infrastructure can go a long way towards simplifying operational costs.

Group like-minded items together

One of the most fun – yet frustrating – parts of planning any wedding is the seating arrangements. Who can sit together? Who needs to stay far, far away from each other? Consider the same things when plotting out your application portfolio, especially when it comes to performance and minimising latency.

Try not to overthink your migration. Instead of breaking systems up into fragments, move them as single units. Keep applications close to the data they use: systems that require synchronous access to data should be physically co-located. For distributed systems, invest in your messaging backbone so that you can efficiently and reliably transfer data over long distances when physical pairing is neither possible nor practical.

When you have shared resources, such as a data warehouse, there isn’t a single, ideal location where every system can access them with low latency. Put those particular assets in the most logical place, then work out the paths that applications and users have to take to reach them.

In your hybrid environment, deploy applications and data together, and geographically near their user base. If that’s not possible, explore caching and replication options that provide those applications close access to data, even if it’s not the master copy.
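As a simple illustration of the caching option, here is a hypothetical read-through cache: reads are served from a local copy near the application, and only a miss (or a stale entry) triggers the slow trip to the master copy. The class and the fetch function are invented for the example.

```python
import time

# Hypothetical read-through cache: serve data from a local copy near
# the application, fetching from the remote master only on a miss or
# when the entry has gone stale.

class ReadThroughCache:
    def __init__(self, fetch_from_master, ttl_seconds=60):
        self._fetch = fetch_from_master
        self._ttl = ttl_seconds
        self._store = {}  # key -> (value, fetched_at)

    def get(self, key):
        entry = self._store.get(key)
        if entry and time.time() - entry[1] < self._ttl:
            return entry[0]                      # local, low-latency hit
        value = self._fetch(key)                 # slow trip to the master
        self._store[key] = (value, time.time())
        return value

# Example: the expensive call crosses a WAN to the master copy.
cache = ReadThroughCache(lambda key: f"value-for-{key}", ttl_seconds=300)
print(cache.get("customer:42"))  # remote fetch
print(cache.get("customer:42"))  # served locally
```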

Don’t put all your faith in a single broker

When it comes to hybrid infrastructure, I have some bad news for you. The ‘single pane of glass’ is a myth. While incredibly seductive, the notion that you can manage your distributed IT assets from a single tool is not a reflection of reality.

Of course, there are some fantastic multi-cloud tools in today’s market. VMware’s vRealize does an excellent job brokering communication across clouds and provides operators with a handy day-to-day interface for managing infrastructure. But it’s difficult, perhaps even impossible, to find one single platform that aggregates all the functionality of both cloud and on-premises environments. These tools are useful for many routine activities, but you often have to go directly to the vendor when you need something specific such as account management, billing, or unique platform features.

We wholly recommend partnering with vendors that can help you manage your distributed assets more easily, but be careful not to fall into the trap of believing that this is the only way that you can manage your infrastructure. In the majority of cases, a native interface gives the best experience for dealing with that provider’s environment.

Accept that things don’t always go to plan

Your wedding day is often the biggest event of your life. The bride may have planned every detail in her head since the age of eight, but then real life happens. You’re not actually going to get Bon Jovi to sing for you all, and you couldn’t have anticipated that the charming ring bearer would bring a pet bunny to the ceremony.

The IT transformation that comes alongside a hybrid strategy is both exciting and scary. While you may think that tons of upfront planning will mitigate your risk, bringing a stack of “requirements” to your vendors may leave you very disappointed with the complexity needed to support them. Instead, be prepared to challenge your own status quo, and stay open-minded about the process and technology changes that may need to happen as things evolve. Keep the plan adaptable as you learn more. This will yield far better results than fixed specifications that your partners will struggle to comply with.

21st-century companies do best when they’re positioned for adaptability rather than just efficiency. You can unleash your organisation’s creativity with hybrid infrastructure, but only if your approach is practical and you keep your expectations realistic.

Reliability matters in the cloud: Check the performance markers


Cloud vendors claiming to be the “fastest” or “best performing” deserve a little scepticism. While speed and performance can appear impressive when conditions are aligned in a particular way, it doesn’t mean they will hold up over the long term.

For companies searching for the right cloud provider, data on performance expectations is essential: it informs picking the right host, managing scalability, and spending effectively on resources. Companies can use the following three tips to gauge cloud performance and reliability correctly.

Get the most out of every pound, euro or dollar

In the best-case scenario, you want reliable performance at a fair price, while avoiding any hidden costs that may ruin the true ROI. You want servers that offer superb disk read and write performance and that also perform well under varied test scenarios.

For instance, ask for test data that shows the servers’ I/O profile for both large and small block sizes. Review several comparable pieces of hardware to find outstanding performers. Why does this type of performance matter?

Here’s an example: if you are running a SQL Server database that typically works on 64k blocks, you want a server that offers consistent storage, reliable performance and no additional charges for provisioned input/output operations per second (IOPS). You want the perfect mix of fewer required resources and transparent costs.
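If you want to sanity-check a vendor’s numbers yourself, a rough profile of throughput at different block sizes is easy to gather. The sketch below is a simplified illustration; a purpose-built tool such as fio will give far more reliable figures.

```python
import os
import time

# Rough, hypothetical sketch of profiling write throughput at different
# block sizes. Purpose-built tools such as fio give far more reliable
# numbers, but the principle is the same.

def write_throughput(path, block_size, total_bytes=64 * 1024 * 1024):
    block = os.urandom(block_size)
    start = time.perf_counter()
    with open(path, "wb") as f:
        for _ in range(total_bytes // block_size):
            f.write(block)
        f.flush()
        os.fsync(f.fileno())  # force the data to disk before timing stops
    elapsed = time.perf_counter() - start
    os.remove(path)
    return total_bytes / elapsed / (1024 * 1024)  # MB/s

for size in (4 * 1024, 64 * 1024, 1024 * 1024):   # small vs large blocks
    mbps = write_throughput("bench.tmp", size)
    print(f"{size // 1024:>5} KB blocks: {mbps:,.0f} MB/s")
```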

Focus on efficient decision making

Remember Barry Schwartz’s book “The Paradox of Choice”, which explores choice overload and why offering too many options is detrimental? The same issue applies to picking a server.

Some cloud providers offer many server choices, which can lead customers into selecting one that has too much RAM in order to meet another criterion, such as having enough CPUs. Aim to find a vendor that doesn’t bury you under a mountain of canned server sizes, but rather lets you choose the server capacity that best suits your workload.

You also want the flexibility to choose the number of CPUs or the amount of memory that makes sense for your business – similar to how you would purchase traditional servers. Spend less time reviewing dozens of server configurations and more time focusing on app and services development.
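A quick back-of-the-envelope illustration shows why this flexibility matters. All the prices and sizes below are invented for the example; the point is that canned sizes can force you to buy RAM you never use just to get the CPUs you need.

```python
# Hypothetical illustration of why custom sizing matters: with canned
# sizes, meeting one criterion (CPUs) can force you to over-buy another
# (RAM). All prices and sizes here are invented for the example.

PRICE_PER_VCPU_HOUR = 0.03
PRICE_PER_GB_RAM_HOUR = 0.004

def monthly_cost(vcpus, ram_gb, hours=730):
    return (vcpus * PRICE_PER_VCPU_HOUR + ram_gb * PRICE_PER_GB_RAM_HOUR) * hours

need = monthly_cost(vcpus=8, ram_gb=16)    # what the workload needs
canned = monthly_cost(vcpus=8, ram_gb=64)  # smallest canned size with 8 vCPUs
print(f"Custom-sized server: ${need:,.2f}/month")
print(f"Canned size:         ${canned:,.2f}/month "
      f"(${canned - need:,.2f} wasted on unused RAM)")
```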

Scalability and predictable performance are crucial

While performance testing can help you understand the best way to scale an application, no conclusion can be reached without understanding how the platform reacts to spikes in demand.

Performance metrics from reputable third parties, such as CloudHarmony, can be used to compare a range of cloud servers against a bare-metal reference system. You want to be sure that performance improves linearly as CPU cores are added.
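A quick way to check is to compute a scaling efficiency from the published scores. The numbers below are invented for illustration; in practice they would come from a third party such as CloudHarmony or from your own benchmark runs.

```python
# Hypothetical sketch of checking how close a cloud server's benchmark
# scores come to linear scaling as vCPUs are added. The scores below
# are invented for illustration.

scores = {1: 100, 2: 196, 4: 380, 8: 700}  # cores -> benchmark score
baseline = scores[1]

for cores, score in scores.items():
    efficiency = score / (baseline * cores)  # 1.0 == perfectly linear
    print(f"{cores} cores: score {score:>4}, scaling efficiency {efficiency:.0%}")
```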

Understanding this server performance data can help you make the most of a cloud portfolio; it gives you the peace of mind of knowing ahead of time that you can add resources to a VM before requiring more hardware, and that you can reduce costs too. If you choose cloud hardware that can scale both up and out, you’ll be in the best position to plan your scaling events.

Performance metrics only show a moment in time, but a long-term performance profile allows companies to make educated choices while still lowering costs.