Siri-ous Apple Event Announcements!

Did you catch the Apple Special Event that announced the latest innovations surrounding iOS, iPhone, macOS High Sierra, and watchOS? Don’t worry if you missed it – our team has quickly summarized the event for you below! To provide some contextual insight on what to […]

The post Siri-ous Apple Event Announcements! appeared first on Parallels Blog.

Intent-Based Networking: How Close Are We (and Should You Prepare)? | @CloudExpo #ML #SDN #Cloud

Over the last several months, intent-based networking (IBNS) has gained momentum as a newly viable technology that aims to further automate traditional network management. Although IBNS has existed for a few years now as a general concept, it was more buzz than reality until Cisco® launched its first IBNS software package earlier this year.
What is intent-based networking?
Traditionally, network administrators manually translate business policies into network device configurations, a time-intensive and error-prone activity that contributes to rising OpEx. But as digital transformation initiatives continue to reshape the way organizations approach their business and IT strategy, it’s becoming more difficult to stay on top of policy and configuration changes by hand.


Silos Are Dead! Long Live the Silos! | @CloudExpo #DX #Cloud #Agile #DevOps

Everyone could agree that silos created unnecessary separation, protectionism, and bureaucracy. No one would dare argue that rigid silos were somehow good for the organization.
Silos were, therefore, the easy target. They became the mantle onto which leaders could lay all past transgressions, and, in so doing, they became a convenient artifice to allow the leader to proclaim the dawn of a new era of integration, collaboration, and communication.
Silos are dead!
Except they never quite died, did they? In spite of all the talk, silos have persisted. They now just have different names. But the danger remains just as real, and their negative impact grows more significant every day.


What is disruptive technology?

Disruptive technology, as the name suggests, is an emerging technology that creates a new market and value network and eventually disrupts the markets and uses of existing technology. It is a broad term for any new technology powerful enough to displace the technologies that came before it.

If you look back, this is not a new concept. When email became popular, it eventually disrupted letter writing. Likewise, the emergence of the cloud changed the way we store data and eventually caused a decline in the physical storage market. These are all disruptive technologies in their own way.

The term “disruptive” is simply becoming more popular now because of how readily we understand and embrace new technology, and because these technologies have the power to change the way we live and work.

Let’s now look at a few examples of disruptive technologies.

Internet of Things (IoT)

The Internet of Things connects everyday devices like watches and alarm clocks so we can get more out of them. For example, it can enable a refrigerator to monitor its milk level and, if the level drops below a threshold, send a message to our smartphone to order more right away!
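As a minimal sketch of how such a threshold-based trigger might look, the hypothetical Python snippet below polls a refrigerator's milk sensor and pushes a reorder notification when the level drops below a set point; the sensor class and notification call are invented for illustration rather than taken from any real device SDK.

```python
# Hypothetical IoT sketch: reorder milk when the fridge sensor reports a low level.
# FridgeSensor and notify_phone stand in for whatever device SDK and push service
# a real deployment would use.

MILK_THRESHOLD_LITRES = 0.5

class FridgeSensor:
    """Stub sensor that would normally read from the appliance's API."""
    def read_milk_level(self) -> float:
        return 0.3  # litres remaining (stubbed value)

def notify_phone(message: str) -> None:
    """Stand-in for a push-notification or reorder API call."""
    print(f"[push] {message}")

def check_and_reorder(sensor: FridgeSensor) -> None:
    level = sensor.read_milk_level()
    if level < MILK_THRESHOLD_LITRES:
        notify_phone(f"Milk is low ({level:.1f} L) - ordering more now.")

if __name__ == "__main__":
    check_and_reorder(FridgeSensor())
```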

Artificial intelligence

This technology mimics the cognitive powers of the human brain to solve real-world problems. Machines with artificial intelligence can be programmed to learn from their environment just as humans do and, based on that learning, make the right decisions and take the necessary actions.

This is a truly disruptive technology, as it can take over many of the routine jobs that are currently done by humans.

3D printing

Imagine how convenient it would be if you could print a Mercedes sports car right from a printer. Well, that is essentially what 3D printing promises. We can already print buildings, clothes, food, body parts and much more, which means almost every traditional industry has the potential to be affected by it.

High speed travel

How convenient would it be if we could travel from Japan to the U.S. within minutes? That is the promise of high-speed travel. The proposed Hyperloop link between Dubai and Abu Dhabi has already set an example. If the technology takes off, the airline industry could face serious disruption.

Robotics

Robotics is another interesting disruptive technology, as robots could potentially transform the manufacturing and hospitality industries. In fact, the possibilities for robotics are endless and extend to many other industries as well.

Self-driving cars

Self-driving cars may become a reality sooner than we think. When they do, the existing car and oil industries will be brought to their knees.

In short, disruptive technologies have the power to change the way we live and can have a profound impact on our economies. While they ultimately serve the greater good of mankind, the disruption they bring is something we have to anticipate and take in our stride.

The post What is disruptive technology? appeared first on Cloud News Daily.

Rackspace acquires Datapipe to further bolster managed services play

Rackspace has announced it is to acquire managed service provider Datapipe in what is being touted as the ‘biggest acquisition by far’ in the company’s history.

The company added that the deal would make Rackspace the world’s leading provider of multi-cloud managed services, the leader in managed public cloud services across all the hyperscale infrastructure vendors, and – by a larger margin – the leader in managed hosting and private cloud.

“As we’ve learned more about one another, leaders of Rackspace and Datapipe have been struck by how similar our two companies are,” wrote Joe Eazor, Rackspace CEO in a company blog post announcing the news. “Rackspace intends to build on the industry leadership the two companies have established in reliability and support to create a new level of end to end customer experience.”

The acquisition once completed – sometime in the fourth quarter, Rackspace added – will overtake the previous buyout of TriCore Solutions, announced back in May, in size. “When Rackspace went private late last year, we did so mainly because, at this point in our history, we need to make major, long-term investments in the capabilities our customers are demanding,” added Eazor. “And that’s just what we’re doing.”

Since Eazor took over, around the same time as the TriCore acquisition was announced, the company’s focus on managed cloud services has been clear. Writing his debut blog as CEO, Eazor noted: “Thanks to the strategy Rackspace adopted a few years ago, it’s got the early lead in the managed cloud space. My goal here is to build on that foundation and make us the world’s preeminent IT services company.”

The acquisition of Datapipe therefore plays very nicely into this trend. The company has been featured in this publication on various occasions, not least when the British Medical Journal (BMJ) used Datapipe to transform its infrastructure. As CloudTech noted when telling this story in September last year, the BMJ’s environment was not the easiest to work with; one release per month changed to up to four releases per day, with content and services built and delivered around APIs as opposed to weekly batch file transfers.

Financial terms of the deal were not disclosed.

Picture credit: Rackspace Afterparty TechStars Boulder 2011, by Andrew Hyde, used under CC BY / Modified from original

Don’t take the cloud plunge without a formal ROI assessment, Unisys warns

If you’re taking the plunge and migrating to cloud technologies, make sure you carry out a full return on investment (ROI) assessment – companies that do are almost 50% more likely to realise the cost savings they expect.

This is the key finding from a new study issued by global IT provider Unisys. The report, conducted by IDG, polled 400 IT and business executives across eight countries and found four out of five respondents expected cost savings from adopting the cloud. An even higher number (82%) of those polled who had conducted formal ROI analysis up front said cost savings matched their expectations, compared to only 57% who had not previously assessed ROI.
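As a quick sanity check on that headline figure, the arithmetic below uses only the percentages quoted above (82% versus 57%) to show where the "almost 50% more successful" claim comes from.

```python
# Back-of-the-envelope check of the survey figures quoted above.
with_roi = 0.82      # met cost-saving expectations after a formal ROI analysis
without_roi = 0.57   # met expectations without one

relative_uplift = (with_roi - without_roi) / without_roi
print(f"Relative uplift: {relative_uplift:.0%}")  # -> roughly 44%, i.e. "almost 50%"
```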

In a similar vein, more than two thirds (68%) of respondents said they had contracted with a third party for cloud migration, with almost three quarters (72%) using the partner for cloud strategy and planning, and an even higher number still (79%) saying this partnership had helped their organisation achieve expected cost savings.

Organisations also report that reliance on on-premises data centres continues to fall, from 43% of usage per organisation today to a projected 29% by 2019, with private cloud usage rising from 20% to 28% and public cloud from 18% to 21% over the same timeframe. As is often the case with such surveys, the questions also covered the benefits of cloud computing, with improved disaster recovery and business continuity, agility and flexibility, more efficient storage, reduced capital costs and standardisation of IT all cited.

“Migration offers a plethora of cloud options…however, those choices can create unforeseen complexities that can easily derail expectations,” said Paul Gleeson, vice president of cloud and infrastructure services at Unisys. “Those organisations that plan their cloud migration carefully, drawing on the expertise of established partners where it makes the most strategic sense, are the ones best positioned to realise operational, financial and competitive success from cloud transformation.”

Why have a strategy for cloud management?

With the general cloud hype and the widely declared corporate intent to base future business solutions on cloud technology stacks, businesses may take the view that deploying cloud solutions involves little more than accessing a vendor’s cloud portal and following a few clicks to have their virtual services deployed and ready for operations.

Whilst this approach will give an enterprise a cloud-based virtual environment, what’s delivered is far from being part of a corporate information ecosystem.

Businesses need to view cloud as providing a virtual data centre in which they host virtualised versions of traditional infrastructure, including computing platforms, storage, operating systems, networks, firewalls, gateways and Internet/private connectivity, all of which need careful design.

The major benefits which businesses will see are an OPEX-based cost model with vendors actively competing on price, fast times to deploy, and the flexibility to dynamically right-size environments.

One of the often overlooked challenges of deploying solutions into this new environment is how to monitor and manage what is essentially a fairly dynamic set of services. How do you maintain an accurate record of the components that have been deployed when, for example, policies are in place to auto-scale these environments on demand?

Responding to an anomalous event or incident from a cloud solution is particularly difficult if your IT service management solution has no record of the virtual devices which have generated it. Also, in an increasingly complex cyber-landscape, how can businesses detect the difference between normal workload scaling throughout the day or in-line with customer demand, and a DDoS attack, in order to respond to each appropriately?
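One simple way to frame that last question is to compare current traffic against a learned baseline for the same hour of day and only raise a security alert when the deviation goes far beyond what customer demand would explain. The Python sketch below is illustrative only; the baseline figures and threshold are assumptions, not a production detection method.

```python
# Illustrative sketch: separate expected demand-driven scaling from a suspicious spike.
# The hourly baseline and the deviation threshold are assumed values for the example.
from statistics import mean, stdev

def classify_traffic(current_rps: float, history_same_hour: list[float],
                     sigma_threshold: float = 4.0) -> str:
    """Flag traffic as 'normal scaling' or 'possible attack' against an hourly baseline."""
    baseline = mean(history_same_hour)
    spread = stdev(history_same_hour)
    deviation = (current_rps - baseline) / spread if spread else float("inf")
    return "possible attack" if deviation > sigma_threshold else "normal scaling"

# Example: requests/sec seen at this hour over the last two weeks vs. right now.
history = [1200, 1150, 1300, 1250, 1180, 1220, 1270, 1240, 1210, 1260, 1230, 1290, 1190, 1255]
print(classify_traffic(1350, history))    # within normal daily variation -> scale out
print(classify_traffic(15000, history))   # far outside the baseline -> investigate as DDoS
```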

At risk of complicating matters further, most large organisations will likely be deploying solutions to multiple clouds from multiple providers, integrating these with in-house legacy estate. I also expect compute loads to be continually migrated between cloud providers, as each offers increasingly compelling services and price points, so far from offering a simplified IT environment, it’s clear that cloud adoption will drive a new set of service management challenges.

Traditionally, many organisations have relatively static IT estates, and have incrementally procured and integrated the service tooling to monitor and manage these estates, often from multiple vendors, with the on-going cost and risk of this tooling integration lying with the end-user organisations.

Whilst a degree of syntactic integration is typically achieved between tools via this approach, deep semantic integration generally is not. Consequently, it’s becoming clear to a number of public and private sector organisations that their existing service management tooling won’t be able to effectively manage cloud-based solutions without a tightly integrated set of tools based around a single, consistent view of the estate under management.

Evidence of this challenge can be found in the growing number of cloud management offerings on the market, where this complexity can be managed for you at a price, with Gartner, for example, now publishing assessments of the capabilities on offer. These providers include established IT consultancies, traditional hosting providers transitioning their business to cloud management, and a number of new challengers.

Looking across the market however, most large organisations are currently opting to manage their own cloud or hybrid estates and will almost certainly encounter the various challenges which I have described above. So, in order to ensure the cohesiveness of the cloud ecosystem of an enterprise, the following learning points should be built into cloud management strategy:

  • Ensure that the cloud management solution is based around a single view of the inventory under management, both real and virtual, with all tools sharing that view. Using separate inventory views for different tools will create inconsistencies, degrade the overall functionality offered and present long-term integration challenges.
  • Deploy a robust capability to discover assets deployed in both cloud and legacy environments and their respective dependencies, using this to keep the system inventory current, tracking demand driven, automated cloud deployments for example.
  • Consider deploying tools which offer policy-based automated remediation capabilities in order to drive down the cost of service and provide timely resolution of incidents (a minimal sketch of this pattern follows the list below).
  • To provide consistent deployment patterns across different cloud platforms in-line with internal policy, consider deploying an enterprise class cloud management tool supporting policy-based deployment patterns, alongside an overall governance framework. This is especially relevant where third parties or contract staff are provisioning cloud solutions on your behalf, and their compliance with corporate policy needs to be managed.
  • Look at options to converge cyber and environment monitoring and management, in order that both views of the estate under management can be reconciled to a single solution view, allowing automated remediation capabilities within the service management toolset to deliver policy-based interventions in detected cyber incidents.
  • To avoid piecemeal tooling integrations and related on-going costs, look at solution stacks from a single vendor, or alternatively options where a single vendor will underwrite the on-going integration of third party tools into their stack. Also, use tooling configurations which are as close as possible to the vendors’ defaults, avoiding excessive configurations which will drive up both deployment times and lifetime ownership costs.
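To make the remediation and single-inventory points concrete, here is a minimal, hypothetical sketch of a policy sweep running against one shared inventory view; the asset model, policy rules and remediation actions are placeholders for whatever CMDB, cloud APIs and tooling an organisation actually runs.

```python
# Hypothetical sketch of policy-based remediation against a single shared inventory.
# The inventory source, policies and actions are placeholders for real tooling.
from dataclasses import dataclass, field

@dataclass
class Asset:
    asset_id: str
    environment: str            # e.g. "aws", "azure", "on-prem"
    tags: dict = field(default_factory=dict)
    encrypted: bool = True

# Single inventory view shared by all tools (stubbed here as a list).
INVENTORY: list[Asset] = [
    Asset("vm-001", "aws", tags={"owner": "web-team"}),
    Asset("vm-002", "azure", tags={}, encrypted=False),
]

def violates_policy(asset: Asset) -> list[str]:
    """Return the corporate policies this asset breaches."""
    findings = []
    if "owner" not in asset.tags:
        findings.append("missing owner tag")
    if not asset.encrypted:
        findings.append("unencrypted storage")
    return findings

def remediate(asset: Asset, findings: list[str]) -> None:
    """Stand-in for automated fixes or tickets raised by the management toolset."""
    for finding in findings:
        print(f"{asset.asset_id} ({asset.environment}): remediating '{finding}'")

def run_policy_sweep(inventory: list[Asset]) -> None:
    for asset in inventory:
        findings = violates_policy(asset)
        if findings:
            remediate(asset, findings)

if __name__ == "__main__":
    run_policy_sweep(INVENTORY)
```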

In conclusion, while cloud offers significant potential to increase business agility and drive down the investment required to achieve it, businesses must not neglect the complexity of managing cloud ecosystems. A key test for any business case for cloud adoption within an enterprise should be whether it includes adequate provision for end-to-end management of the solution; without this, the probability of the described benefits actually being realised falls significantly.

Who is building the world’s smartest cloud?

Cloud computing has come a long way since it first emerged almost a decade ago. During this time, it has transformed application development, hosting, deployment, administration and more. It has also helped businesses streamline their processes, increase productivity, reduce costs and widen their customer base.

But is that all? Can we expect cloud computing to stay in its current form over the next decade?

Definitely not. The major cloud service providers are constantly working to create cloud platforms that are faster, less expensive and able to store more data. In many ways, they are racing to build the world’s smartest cloud before their competitors do.

Let’s see how the three major cloud providers, namely, Alphabet, Amazon Web Services and Microsoft, have fared so far.

Alphabet

At its annual developer conference, Google showcased new services that will herald the next phase of cloud computing. Its powerful data processors, popularly known as tensor processing units, are built to run the machine learning and artificial intelligence workloads that can automate many of the tasks currently done by humans.

Each of these tensor processing units, or TPUs for short, will offer a minimum of 180 teraflops of processing power, and each pod will group together 64 TPUs. You can imagine the massive computing power that Google plans to offer soon.
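Taking those figures at face value, the per-pod arithmetic works out as follows (a rough calculation based only on the numbers quoted above).

```python
# Rough per-pod throughput implied by the figures above.
tflops_per_tpu = 180        # teraflops per TPU device
tpus_per_pod = 64           # devices grouped into one pod

pod_tflops = tflops_per_tpu * tpus_per_pod
print(f"{pod_tflops} teraflops per pod (~{pod_tflops / 1000:.1f} petaflops)")
# -> 11520 teraflops, roughly 11.5 petaflops per pod
```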

This capability is expected to become available to individuals and businesses through the Google Cloud Platform soon.

Amazon

Amazon has been beefing up efforts to build its own smart cloud. To this end, it is integrating artificial intelligence and machine learning capabilities into its platform so developers can get more out of AWS.

It is also integrating a host of other tools, such as the Amazon Lex chatbot service and Alexa skills, to give greater depth, versatility and power to its platform.

Microsoft

Microsoft is not to be left behind in this race for the smartest cloud. Project Brainwave, a next-generation effort involving some of the company’s most talented researchers, is working on field-programmable gate array (FPGA) chips that can power artificial intelligence.

Many analysts predict that this platform will be more versatile and powerful than the one being developed by Google. This is partly because artificial intelligence is expected to change the way applications are built and deployed, so a platform with built-in AI capabilities is sure to have an edge.

So, who is going to win this three-way battle between AWS, Microsoft and Alphabet to create the smartest cloud? At this point, Microsoft seems to have the lead, but it is hard to say who will ultimately dominate the market. Regardless of the winner, this is sure to be an interesting ride for customers.

The post Who is building the world’s smartest cloud? appeared first on Cloud News Daily.

Enterprise container adoption remains slow despite the hype, research argues

Enterprise interest in container technologies continues to grow, but adoption has not kept pace, according to a report from the Cloud Foundry Foundation.

The Global Perception Study report, which polled more than 540 enterprise developers across different industries, found that only 25% of organisations polled were using containers in 2017, up just three percentage points from 22% in 2016. There was, however, a greater uptick in companies evaluating their options, at 42%, up from 31% in 2016.

Perhaps this is somewhat to be expected; as the report notes, “nothing in enterprise moves as quickly as anyone predicts when it comes to the adoption of the latest and greatest technologies.” But the conversation, Cloud Foundry argues, is shifting from ‘why’ containers to ‘how’, which will drive larger scale adoption as organisations move forward.

When it comes to individual solutions, the report assessed several sources of information and found a conflicting data set. Sysdig found 43% of its users run Kubernetes, compared to 9% for Mesos and 7% for Docker Swarm, while Evans’ cloud development survey last year put Mesos at 44%, Kubernetes at 18% and Docker Swarm at 17%.

“Anecdote might lead us to believe the world already runs on containers, but the reality remains: enterprises continue to lag behind,” the report notes. “Increasingly, though, orchestration tools are not the problem. Companies have more mature options than ever before, with a particular interest in Kubernetes.”

In June, the Cloud Foundry Foundation saw Microsoft join its ranks as a gold member, with the Redmond giant announcing the launch of Azure Container Instances (ACI) a month later having also joined the Cloud Native Computing Foundation.

This is a reasonable way of noting the technology’s rise – and it is starting to pervade the enterprise level, as Marco Ceppi, Ubuntu product strategist at Canonical wrote for this publication in August. “We see a lot of small to medium organisations adopting container technology as they develop from scratch, but established enterprises of all sizes, and in all industries, can channel this spirit of disruption to keep up with the more agile and scalable new kids on the block,” Ceppi wrote.

“In the 2016 Container Report we saw the excitement around containers and their potential – yet this excitement was constrained by the complex challenges of deploying, managing and orchestrating containers at scale,” said Abby Kearns, Cloud Foundry executive director in a statement. “In our follow up one year later, we see the same steady growth in interest but actual adoption of containers has still failed to accelerate.

“We believe the gradual or even glacial adoption of containers in production reflects more on the central challenge we pointed to in the Container Report – the challenges of container management are real, and loom larger at scale,” added Kearns.

You can read the full Cloud Foundry report here (registration required).

MapR secures $56 million in funding as it celebrates ‘outstanding’ financial quarter

MapR Technologies, a data management platform provider, has announced an equity round of $56 million (£42.5m) from existing investors.

The company said it would use the money to “continue product innovation of the industry’s first modern enterprise platform for all data, accelerate country and regional growth to fulfil the demand for the MapR Converged Data Platform throughout APAC and Europe, and bolster MapR’s thriving global partner community.” According to CrunchBase, the latest funding round puts the company at $250m of venture capital raised after six rounds.

This comes amidst what MapR called an ‘outstanding’ financial quarter, with more than 100% year-over-year growth in new subscription billings and 70% year-over-year growth in annual billings.

Highlights for the company in the most recent quarter included a partnership with big data integration vendor Talend to help organisations meet GDPR requirements, as well as a collaboration with NTT DATA Business Solutions Asia Pacific around SAP deployments.

“Our customers and partners continue to be at the forefront of this 30 year re-platforming the industry is going through today,” said Matt Mills, MapR CEO in a statement. “We are working closely with them to ensure their success and helping them to execute on their digital transformation and data strategies.

“Our performance in recent quarters and the additional equity from our existing investors are proof points that we are successfully executing on our strategy as we continue to be heads-down and focused,” Mills added.

Speaking to The Register in June, Mills said the company’s trajectory was towards an IPO, but without a specific timeframe at that point. An interesting competitor in this space is Cloudera – which named MapR as a rival in the open source space in its IPO filing – and which went public in April this year. As this publication reported last week, Cloudera’s latest financials beat expectations, with total revenues hitting $89.8m for the most recent quarter.