GoogleCloud #DigitalMarketing Keynote | @ExpoDX @VidyaNagarajan #AI #IoT #IIoT #DigitalTransformation

Coca-Cola’s Google-powered digital signage system lays the groundwork for a more valuable connection between Coke and its customers. Digital signs pair software with high-resolution displays so that a message can be changed instantly based on what the operator wants to communicate or sell. In their Day 3 Keynote at 21st Cloud Expo, Greg Chambers, Global Group Director, Digital Innovation, Coca-Cola, and Vidya Nagarajan, a Senior Product Manager at Google, discussed how everything from store operations and optimisation to employee training and insights ultimately combines to create the best customer experience, both online and in-store.

read more

Visio for Mac and Other Frequent Requests from Mac Users

Mac users aren’t at all shy about making their requests for applications known: How about Visio for Mac? When will IE return to Mac? Will we ever see a Microsoft Project for Mac? Where can I get Access for Mac? When will (insert name of hot new game here) be available for Mac? Most of […]

The post Visio for Mac and Other Frequent Requests from Mac Users appeared first on Parallels Blog.

The Right Microservices | @CloudExpo @IBMcloud @IBMDevOps @JRMcGee #DevOps #Serverless #Microservices

We all know that end users experience the internet primarily with mobile devices. From an app development perspective, we know that successfully responding to the needs of mobile customers depends on rapid DevOps – failing fast, in short, until the right solution evolves in your customers’ relationship to your business. Whether you’re decomposing an SOA monolith or developing a new application cloud natively, it’s not a question of whether to use microservices – not doing so is a path to eventual business failure. The real and more difficult question in developing microservices-based applications is this: what’s the best combination of cloud services and tools to get the right results in the specific business situation in which you need to deliver what your end users want? Considering that new streams of IoT data are already raising the stakes on what end users expect from their mobile experiences, the versatility and power of cloud services will become the key to innovation that’s meaningful in the market.

read more

Evaluating container-based VNF deployment for cloud-native NFV

The requirements of cloud-native VNFs (virtual network functions) for telecom are different from those of IT applications – and VNF deployment using microservices and containers can help realise cloud-native NFV implementation success.

How NFV is integrated, architected and further matured will determine how well it strengthens 5G implementations for telecom service providers. Given the current pitfalls in VNF deployment and orchestration, making VNFs cloud-native is the only solution in front of service providers today.

Yet the requirements telecom places on VNFs are different from those of any cloud-native IT application. Telecom VNF applications are built for data plane/packet processing functions, along with control, signalling and media processing. An error in, or harm to, a VNF may break down the network and affect large numbers of subscribers. Because of such critical processing requirements, telecom VNFs must be resilient and offer ultra-high performance, low latency, scalability and capacity. They need to be real-time, latency-sensitive applications in order to fulfil network data, control and signalling processing requirements.

Decomposition of cloud-native VNFs into microservices

VNFs are network functions extracted from network appliances as embedded software and hosted on virtual machines as applications. Any update to a VNF requires time-consuming manual effort, which hampers overall NFV infrastructure operations. To be ready for cloud native, bundled VNF software needs to be microservices-based: monolithic VNFs are decomposed into smaller sets of collaborating services with diverse but related functionality, each maintaining its own state and having its own infrastructure resource requirements, all communicating, scaling and being orchestrated automatically through well-defined APIs.
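As a toy illustration of this decomposition, the sketch below models a VNF as a set of independently scaled services. The service names and the scaling call are hypothetical; a real NFV stack would drive scaling through the MANO layer and cloud orchestrator APIs rather than a local object.

```python
from dataclasses import dataclass, field

@dataclass
class Microservice:
    name: str
    replicas: int = 1
    state: dict = field(default_factory=dict)  # each service maintains its own state

    def scale(self, replicas: int) -> None:
        # Stand-in for an orchestrator call (e.g. via the NFV MANO layer)
        self.replicas = replicas

# A monolithic VNF decomposed into collaborating services (names invented)
vnf = {
    "packet-processing": Microservice("packet-processing"),
    "signalling": Microservice("signalling"),
    "dpi": Microservice("dpi"),
    "load-balancer": Microservice("load-balancer"),
}

# Only the data-plane service needs to scale under traffic load;
# the other services are left untouched.
vnf["packet-processing"].scale(4)
```

The point of the sketch is the independence: each service has its own state and its own resource footprint, so scaling one does not mean redeploying the rest.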

There are various benefits of microservice-based VNF decomposition:

  • Decomposed VNF sub-services are deployed on the hardware best suited to run and manage them efficiently, and can scale as needed
  • Any error or glitch in a microservice causes failure of only that specific function, which simplifies troubleshooting and enables high availability
  • Decomposition allows services to be reused within the VNF lifecycle in an NFV environment, and lets some services be rolled out quickly
  • The VNF as a whole becomes lightweight, as functions like load balancing and deep packet inspection (DPI) are stripped out of the core application

As VNFs are divided into microservices, service providers may face operational complexity as the number of microservices grows. To manage them all well in a production environment, a high level of automation needs to be implemented with the NFV MANO layer and the cloud orchestrator.

Evaluating VNF deployment methods using virtual machines and containers

Containers are a form of virtualisation at the operating system level. A container encapsulates an application’s dependencies, required libraries and configuration in a package that is isolated from other containers on the same operating system. Containers allow applications to run independently and are easily portable.

As a move towards cloud native, VNF microservices can be deployed in containers, which enables the continuous delivery/deployment of large, complex applications. But this approach is still at an early stage for cloud-native NFV.

Concerns with using containers for VNF

There are certain concerns about using container technology in NFV:

  • The ecosystem is still evolving and immature compared with virtual machines
  • Containers carry security risks – all containers on a host share a single OS kernel, so any breach of that kernel compromises every container that depends on it
  • Isolating a fault is not easy with containers, and a fault can propagate to dependent containers

Service providers wanting to use containers in an NFV environment may face challenges in multi-tenancy support, multi-network plane support, forwarding throughput, and limited orchestration capabilities. It is still possible to use containers in mobile edge computing (MEC) environments, which will co-exist with NFV in 5G. MEC will take the user plane function out to the edge of the network, closer to the user application, to provide very low latency and agility and to enable real-time use cases like IoT, augmented reality, and virtual reality.

Containers can also be used alongside virtual machines in an NFV environment. VNF deployment can be virtual machine only; container only; hybrid, where containers run inside virtual machines to gain their security and isolation features; or heterogeneous, where some VNFs run in VMs and others in containers.

Service providers can evaluate their deployment methods as per their requirements at NFV infrastructure level.

Benefits of containers for cloud-native NFV path

Having containers in place to host microservices allows active scheduling and management to optimise resource utilisation. Container orchestration engines provision host resources to containers, assign containers to hosts, and instantiate and reschedule containers. With containers, service providers can successfully implement DevOps methodologies, easing automation tasks like scaling, upgrading and healing, and becoming more resilient.

A major benefit of containerised microservices is the ability to orchestrate the containers so that a separate lifecycle management process can be applied to each service. Each service can then be versioned and upgraded individually, as opposed to upgrading the entire VNF in a virtual machine. When a whole application or VNF is upgraded, the container scheduler determines which individual services have changed and deploys only those specific services.
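That diff-and-redeploy behaviour can be sketched in a few lines. The service names and version numbers below are invented for illustration; a real scheduler compares image digests or manifests, but the logic is the same:

```python
def services_to_redeploy(running: dict, desired: dict) -> list:
    """Return only the services whose desired version differs from what is running."""
    return sorted(
        name for name, version in desired.items()
        if running.get(name) != version
    )

# Current state of a containerised VNF vs. the new release (hypothetical)
running = {"packet-processing": "1.2.0", "signalling": "1.2.0", "dpi": "1.2.0"}
desired = {"packet-processing": "1.3.0", "signalling": "1.2.0", "dpi": "1.2.1"}

# Only the changed services are touched; "signalling" keeps running as-is.
print(services_to_redeploy(running, desired))  # ['dpi', 'packet-processing']
```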

Containers bring cloud-native capability to NFV infrastructure, with added performance, portability and agility benefits for telecom-specific application deployment and orchestration. To build fully featured cloud-native 5G networks, it is imperative for service providers to deploy containers more than virtual machines. But service providers will also look to open source communities like ONAP and OPNFV for further research and development.

How containers impact NFV at application, infrastructure, and process levels

Applications (VNFs):
– Containers package a microservice along with its dependencies, libraries and configuration, and keep it isolated
– Containers can be built quickly from existing images for microservices
– Enables faster time to market due to highly automated deployment
– A programmable API enables a complete DevOps approach to VNF development, deployment and lifecycle management

Infrastructure (VNF orchestration):
– Containers are portable packages which can move from one environment to another
– Containers can scale in/scale out as per requirement at NFV infrastructure
– Enables higher density
– Enables multi-tenancy to serve multiple requests
– Ease in upgrades and rollbacks as containers allow versioning

Process (VNF deployment):
– Containers can be immutable and can be pushed to any platform
– Allows smooth transition from dev to test to ops
– Enables highly efficient automation
– With containers, service providers can drive continuous integration/deployment to VNF onboarding and lifecycle management

Containers play a vital role on the path to a complete 5G network built on highly automated, cloud-native NFV. Successful 5G deployment will depend on how service providers build a strategy around container usage in NFV infrastructure. Aside from the security risks of containers, there may be use-case challenges in telecom applications that demand much higher performance. Containerisation can be implemented in mobile edge computing to deliver these benefits, but service providers will expect full integration to enable cloud-native NFV.

The post Evaluating Container Based VNF Deployment For Cloud Native NFV appeared first on Sagar Nangare.

Is recruitment holding your business back?


Jane McCallion

3 Apr, 2018

What is business about? Executing a great idea maybe? Or perhaps delivering value to shareholders? Whatever your beliefs, the fact is that no business can run properly without an effective team. And, depending on what kind of business you’re running, your team may be the most important physical resource you have.

Building the best team you can, then, is vital to your organisation’s livelihood and success. But the job market, from a recruiter’s point of view, is becoming increasingly competitive. Figures published in March 2018 by the Office for National Statistics (ONS) showed that the number of overall job vacancies in the UK between December 2017 and February 2018 stood at 816,000 — 10,000 more than September to November 2017 and 56,000 more than the same period 12 months before.

Businesses are therefore fighting over a smaller talent pool to try and build their perfect teams. So how can you attract the best talent out there from this diminished stock? And then, once you’ve attracted the right employees to your business, keep them loyal, engaged and motivated too? 




Building a better workplace

Over the past 10 years, what people want and expect from their place of work has changed dramatically.

Thanks to greater mobility with the advent of smartphones and the cloud, increased knowledge of startup and Silicon Valley work culture and perhaps as a reaction to the instability wrought by the global financial crisis of 2007-2008, flexible and agile working have become more desirable to workers.

In a blog post, online recruiter Jobsite listed “the agile workforce” as one of its top trends for this year.

“The fact that organisations like Sky, Google, Facebook and the NHS are among agile working’s earliest large-scale adopters says a lot about its potential. Its rapid rise as a software development methodology in the IT sector is also telling, and the value is not lost on candidates,” Jobsite says. 

Citing research carried out towards the end of 2017, the blog continues: “While 77 percent of recruiters say agile working hasn’t significantly affected the hiring process, a massive 86 percent of candidates say they’d consider changing roles if it meant working in an agile environment. And, on average, candidates with an understanding of agile working said they’d give up 16 percent of their salary to make the same switch.”

That’s not to say wages count for nothing, though, nor is an agile working environment the only way to offer an “extra something” to entice workers to your business and encourage them to stay.

Responding to the ONS’ January statistics, which covered the September – November 2017 period, Kevin Green, chief executive of the Recruitment and Employment Confederation (REC), said: “Employers who want an edge over the competition have to design new ways to attract people, like flexible work patterns. Some may need to go to specialist recruiters to get help sourcing talent in areas where there are very few candidates.

“Our data shows employers are increasing starting salaries in a bid to get applicants. However, this isn’t translating into broader pay rise for current staff and workers are facing hard times as inflation continues to outstrip pay growth. Employers need to think about salaries and benefits for all of their staff – otherwise employees could be tempted by better offers from rival companies.”

This raises a key issue: building the best team isn’t just about the high-flyers, it’s about everyone who works within the business. After all, your organisation isn’t just made up of those at the top of the tree, it’s a cohesive whole. As the Chartered Institute for Professional Development (CIPD) has pointed out, excessive pay and rewards at the executive level can have a negative effect on the rest of the workforce.

Guest stars

When deciding to augment their workforce, businesses don’t necessarily have to look to recruit on a permanent basis.

For certain projects, taking on temporary contract workers or freelancers may be the best way to bring in the skills and expertise you need without creating what could be unnecessary permanent positions.

Indeed, according to a March 2018 report by the Association of Independent Professionals and the Self Employed (IPSE), the growth in the number of workers classed as self-employed over the past 10 years has been driven largely by more and more highly-skilled individuals opting to work this way. 

This means that, while there is still competition among businesses for this kind of talent, its fluid nature gives organisations a greater chance of finding the right person for the role at the time they need them.  

Additionally, contract work can act as a facilitator to finding a standout new team member. If one contractor fits particularly well within the business and did a great job in an area where you’re likely to continue investing, there’s always the possibility of offering them a permanent role once the project they were brought on to do comes to a close.  

Bringing it all together  

Underpinning all of this is the need for proper human resource management. No matter what size business you’re running, choosing the right software or service is vital to managing remuneration and benefits, keeping track of absences and holidays, or knowing when contracts start and end.

Some of the more advanced platforms out there can also help with recruitment and onboarding, or even collaboration between different sites and locations. 

So when you’re considering how to build the best team possible for your business, it’s worth also looking at the various cloud and on-premise solutions available to make sure your HRM system can keep up with your vision for the future of your company.


Microsoft announces new Australia and New Zealand Azure regions

Microsoft’s latest Azure update has managed to cover two of its more recent trends with the announcement of new regions for Australia and New Zealand.

The company has already this month focused on expanding its geographic footprint, as well as beefing up its government cloud options. With the new regions, Microsoft says it is the only global cloud provider to deliver services ‘specifically designed to address the requirements of the Australian and New Zealand governments and critical national infrastructure, including banks, utilities, transport and telecommunications.’

Microsoft offers Azure from three cities in Australia – Sydney, Melbourne and Canberra – with connectivity to Perth, Brisbane and Auckland. The latest development is in partnership with Canberra Data Centres, whereby customers can deploy their own applications and infrastructure hosted in the Canberra data centres, connected directly via Azure ExpressRoute to Microsoft’s network or, in the case of federal government, through the Intra Government Communications Network (ICON).

The move now gives Microsoft four regions in the continent, with the two new Australia Central regions alongside East and Southeast. For comparison, Amazon Web Services (AWS) has three availability zones in its Sydney region, with Microsoft pointing out it is the only major provider to offer availability from more than one city.

“Around the world, government and critical national infrastructure providers are transforming operations and the services they deliver to citizens and customers,” wrote Tom Keane, Azure head of global infrastructure in a blog post. “They are rapidly modernising their business and mission-critical applications through the flexibility, scale and reach of Azure, partnering with our unmatched partner ecosystem, and placing their trust in us to create a resilient and responsive platform for growth.”

With the latest additions, Microsoft now has 50 regions worldwide, with services available in 140 countries.

Why Kubernetes networking is hard – and what you can do about it

History tells us that networking is always the last piece of the puzzle. From mainframes, to virtual machines, and now with containers, providing compute and storage are the first two steps before the realisation sets in that all these little entities have to all communicate with each other.

For most of the last 30 years, that communication has been facilitated over Ethernet where different ends of a communication at the application layer bind to each other using IP addresses and port numbers. But when those compute pieces shrink to container size, does that necessarily make sense anymore? 

If two containers are sitting on the same physical machine, on the same hypervisor, on the same Docker instance, do you really need to jump all the way out to the NIC to facilitate communication between them?  Does the application layer addressing stay the same?  Is it better to facilitate that communication using an overlay?  Do you do it over L2 or L3?  What about multi-tenancy?

All these questions, and more, are why Kubernetes networking is hard.

Kubernetes networking basics

Before getting into Kubernetes networking basics, it is useful to understand the limitations of Docker networking that Kubernetes overcomes.  This is not to say Docker networking is inherently evil; it’s just that the scope of the container engine tends to be a single physical or virtual machine, so that perspective naturally runs into issues with a cluster of container engines that may be spread across multiple physical or virtual machines.

The “Docker model”, as it is known in Kubernetes circles, uses host-private networking by default: it creates a virtual bridge and a series of mappings that make it easy for containers on the same machine to talk to each other.  However, containers on different machines require port allocations and forwards or proxies in order to communicate with each other.

As applications grow in size and adopt a microservices-based architecture requiring many dozens, if not hundreds, of containers spread across multiple machines, this does not scale well.  To be fair, this networking scheme was intended to run on a single machine, and Docker does support a CNM model that enables multi-host networking, but given its original intent it should not be surprising that it struggles with clustering.
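To see why this strains at scale, consider the bookkeeping that host-private networking forces on you: some component must hand out conflict-free host ports and track which container each one maps to. A toy allocator, with invented host and container names, makes the overhead concrete:

```python
def allocate_port(host_ports: dict, host: str, container: str, start: int = 30000) -> int:
    """Assign the next free host port on a host to a container (toy allocator)."""
    used = host_ports.setdefault(host, {})
    port = start
    while port in used:       # linear scan for the first free port
        port += 1
    used[port] = container
    return port

# Every cross-host-reachable container needs its own (host, port) entry,
# and every client needs to know that mapping to reach it.
host_ports: dict = {}
a = allocate_port(host_ports, "node-1", "web-1")
b = allocate_port(host_ports, "node-1", "web-2")
print(a, b)  # 30000 30001
```

With hundreds of containers, this mapping table (and keeping every client in sync with it) becomes the scaling bottleneck the Kubernetes model is designed to remove.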

The “Kubernetes model” had to not only solve the core clustering issue, but do so in a way that allows multiple implementations for different situations and remains backward compatible with single-node perspectives.  The fundamentals of this model are that all containers and nodes can communicate with each other without NAT, and that the IP address a container sees itself as is the same IP address others see it as.

The basic definition of a pod in Kubernetes terminology is that it is “a group of one or more containers with shared storage/network, and a specification for how to run the container.”

So, when containers are within the same pod, they share the same IP and port space and are reachable to each other using localhost.  This satisfies the backward compatibility design goal for single container engine perspectives.
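A rough analogy for that shared IP and port space: two workloads in one network namespace reach each other over localhost. Below, two threads in one process stand in for two containers in one pod (no Kubernetes involved; this only illustrates the loopback behaviour):

```python
import socket
import threading

def echo_server(sock: socket.socket) -> None:
    conn, _ = sock.accept()
    with conn:
        conn.sendall(conn.recv(1024))  # echo the request back

# "First container": listen on an ephemeral loopback port
server = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
server.bind(("127.0.0.1", 0))
server.listen(1)
port = server.getsockname()[1]

t = threading.Thread(target=echo_server, args=(server,))
t.start()

# "Second container": reaches the first simply via localhost:port,
# no port mapping or service discovery needed within the pod
client = socket.create_connection(("127.0.0.1", port))
client.sendall(b"hello from the same pod")
reply = client.recv(1024)
client.close()
t.join()
server.close()

print(reply.decode())  # hello from the same pod
```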

More commonly, though, microservices within an application run in different pods, so they have to discover and reach each other in more complex ways than simply referring to localhost.  This mechanism is abstracted in Kubernetes so that a variety of implementations are possible, but the most popular ones use overlay, underlay, or native L3 approaches.

An overlay approach uses a virtual network that is decoupled from the underlying physical network using some sort of tunnel.  Pods on this virtual network can easily find each other and these L2 networks can be isolated from one another, requiring L3 routing between them when necessary.

An underlay approach attaches an L2 network to the node’s physical NIC, exposing the pod directly to the underlying physical network without port mapping.  Bridge mode can be used here to enable pods to internally interconnect so that the traffic does not leave the host when it doesn’t have to.

A native L3 approach contains no overlays on the data plane, meaning that pod-to-pod communications happen over IP addresses leveraging routing decisions made by node hosts and external network routers.  Pod-to-pod communication can utilise BGP peering to avoid leaving the host, and NAT can be used for outgoing traffic if necessary.
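The overlay approach above boils down to wrapping the inner pod-to-pod packet in an outer header that carries a virtual network identifier (VNI) before it crosses the physical network. The sketch below is loosely modelled on the VXLAN header layout (an 8-byte header with a 24-bit VNI) but heavily simplified; a real implementation carries full outer IP/UDP framing:

```python
def encapsulate(vni: int, inner: bytes) -> bytes:
    # Simplified VXLAN-like header: flags byte, 3 reserved bytes,
    # 24-bit VNI, 1 reserved byte -- then the inner packet as payload.
    header = bytes([0x08, 0, 0, 0]) + vni.to_bytes(3, "big") + b"\x00"
    return header + inner

def decapsulate(frame: bytes) -> tuple:
    vni = int.from_bytes(frame[4:7], "big")
    return vni, frame[8:]

# Pods on virtual network 42 are isolated from other VNIs on the same wire
inner = b"pod-a -> pod-b: GET /health"
frame = encapsulate(42, inner)
vni, payload = decapsulate(frame)
print(vni, payload == inner)  # 42 True
```

The VNI is what gives each virtual L2 network its isolation: frames are only delivered back to pods on the same VNI, with L3 routing required to cross between them.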

The needs and scale of your applications, including what other resources it might need to consume outside the cluster, will guide which networking approach is right for you and each approach has a variety of open source and commercial implementation alternatives.

But Kubernetes is not operating in a vacuum

Rarely does a Kubernetes cluster get deployed in a purely greenfield environment.  Instead, it gets deployed in support of the rapid iteration efforts a line-of-business development team is working on to inject innovation into a market, alongside existing enterprise services that live on VMs or physical machines. 

As an example, when choosing an overlay approach, should a container on a VM host need to talk to a service elsewhere on the physical network, it now has multiple layers to jump through, each of which may add latency that degrades performance.  Because these microservices-based applications do not operate in a vacuum, this needs to be carefully considered when choosing an approach and an implementation, and the choice made for one application may differ from that of another in the same portfolio of managed applications.

Why policy-based Kubernetes networking management makes sense

Developers love microservices because they enable them to architect solutions from smaller, more isolated components that talk to each other over APIs.  The APIs act as contracts between the components, so as long as those APIs do not change, the components can be deployed independently of one another, making it easier to release more quickly as the search for innovative change iterates over time.

But just like all other underlying infrastructure management, this creates management headaches due to the increased complexity of keeping those Kubernetes clusters humming along efficiently.  How many nodes should your cluster have?  What happens when you change your mind later?  How can you manage one cluster that uses overlay networking alongside another that uses native L3, because the applications running on them have slightly different needs?  What governance do you put in place to keep it all consistent and secure?

These questions, and more, will confront a team managing Kubernetes clusters and the pathway to the answers comes from the same form of aspirin that helps soothe other infrastructure management headaches: policy.

As administrators discovered while managing software-defined networks and the virtual machines that sit on top of them, the number of “things” to be managed manually becomes unsustainable at some point.  With Kubernetes cluster administration, the number of “things” grows substantially, and manual intervention becomes equally unsustainable in this new container cluster universe.  Automating administration and enforcing best practices through policy-based management becomes the clear choice, regardless of the specific approaches to Kubernetes networking chosen for individual applications.  Nothing else scales to the task.

So, for the growing list of microservices-based applications you are probably managing – and regardless of whether those applications need overlay, underlay, or native L3 networks – be sure whatever implementation you choose gives you the option of managing your Kubernetes cluster networking via policy, using the appropriate plug-in.  Otherwise, implementing changes and maintaining consistency among clusters will quickly become impossible.  But by managing intent with policy automation, you’ll be ready for whatever your applications need.
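What policy-based management boils down to can be shown in miniature: declare intent once, then check every cluster against it mechanically instead of inspecting each by hand. The policy fields, cluster names, and records below are hypothetical; real tooling would express the same intent through a policy plug-in or admission controls:

```python
# Declared intent: which networking modes are acceptable, and whether
# network-policy enforcement must be enabled everywhere (hypothetical schema)
POLICY = {
    "allowed_network_modes": {"overlay", "underlay", "native-l3"},
    "require_network_policy": True,
}

# Observed state of two clusters (invented for illustration)
clusters = [
    {"name": "payments", "network_mode": "overlay", "network_policy": True},
    {"name": "iot-edge", "network_mode": "native-l3", "network_policy": False},
]

def violations(cluster: dict, policy: dict) -> list:
    """Compare one cluster's observed state against the declared policy."""
    found = []
    if cluster["network_mode"] not in policy["allowed_network_modes"]:
        found.append("unsupported network mode")
    if policy["require_network_policy"] and not cluster["network_policy"]:
        found.append("network policy enforcement disabled")
    return found

report = {c["name"]: violations(c, POLICY) for c in clusters}
print(report)
```

The design point is that the two clusters can use different networking approaches, yet both are held to the same declared intent, which is exactly what keeps a growing portfolio consistent.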

Read more: Kubernetes takes step up as it ‘graduates’ from Cloud Native Computing Foundation

Q&A: Apay Obang-Oyway, Ingram Micro


Cloud Pro

3 Apr, 2018

What does cloud mean to you and what benefits do you think it brings to businesses?

The interesting thing about cloud doesn’t really have anything to do with the technology, but what it enables you to do. At its heart, it’s about business transformation. It gives organisations the flexibility to react to changing conditions and frees up resources to focus on innovation – rather than on just treading water and keeping the lights on.

Do you think the UK cloud industry has an advantage over other geographies? Are we excelling?

The UK has a deep heritage of technological innovation, and I think we’re leading the way towards the fourth industrial revolution, in relation to cloud, but also a host of other next-generation technologies.

Last year was a record for UK tech investment and London is the technology capital of Europe, but it’s clear the capital doesn’t have a monopoly on tech innovation. We work with innovative technology companies throughout the breadth of the UK, and I think that’s what makes the UK cloud industry so dynamic, but also so sustainable.

What else do you think needs to be done to champion innovation in the UK cloud industry?

There are broadly two elements that need to be looked at to further champion innovation in the UK. The first is digital transformation in the industry, ensuring organisations across all industries have a strategy that they are executing.

The second is centred on building a diverse and vibrant technology talent pool. We have a lot of homegrown talent, but there’s a looming skills shortage in the UK that could stunt innovation in all parts of the industry. This will need to be addressed if we are to stay at the top of our game.

As a country, we need to do more to focus on STEM skills from an earlier age and encourage participation from all parts of society to build a more diverse tech workforce. Additionally, businesses need to invest in upskilling the existing workforce by improving access to dedicated cloud training.

Fresh blood is also important to the on-going health of the industry, so it’s important that we can continue to attract the best possible talent internationally.

Please can you provide a bit more detail for those not familiar with your company?

Ingram Micro Cloud works with thousands of partners in the UK to help them responsibly transform their business through specialism, diversity, and innovation while helping end-users accelerate business outcomes from their technology investments. We’ve got partnerships with the leading innovative technology vendors in the industry, whose services we offer through the Ingram Micro Ecosystem of cloud, which provides partners with the ideal platform to deliver premium cloud solutions to their end customers.

The channel is an integral part of the Ingram Micro ecosystem of cloud and we recognise that we can only succeed as a business if we help our partners succeed. That’s why we invest heavily in our partners, giving them the tools, capabilities, and knowledge they need to deliver transformative cloud solutions to the market.

Why have you decided to get involved with the UK Cloud Awards 2018?

This is the third time we’ve sponsored the UK Cloud Awards and we’ve been big supporters of the event since it was established for the simple reason that, as an industry, we’ve got a lot to celebrate!

What key trends/challenges are you seeing with your customers around cloud?

It’s difficult to generalise. Some end users are already on their second and third wave cloud adoptions and are using cloud to springboard into next-generation technologies, like IoT, AI and Big Data. But many businesses are right in the infancy of their cloud migration journeys and need more assistance.

The same applies to the channel in some respects, and while some channel partners are leading the way in terms of innovation with a “cloud-first” motion, there’s still a certain amount of education needed to get all partners on the same page.

How is your company helping customers address these challenges?

As a business, it’s important that we can support our partners regardless of their stage of cloud maturity. We help our partners through envisioning, enablement, skills and a leading portfolio of cloud solutions offering within the Ingram Micro Ecosystem of cloud, all of which helps them to succeed. 

How do you think the cloud landscape has evolved in the past five years? 

Acceptance of cloud has grown exponentially, and for many end users, it’s just another way that they do IT. There’s been an enormous uplift in demand for cloud services from end users, which is great, but it’s been a bit of a scramble for the channel to satisfy this growing demand. 

Selling IT-as-a-service is a very different proposition to selling it as an asset, and partners have had to make some significant changes to the way that they operate to accommodate it, changing things like commission structures and upskilling staff to be able to support cloud services.

We’re seeing the partners that have embraced this change doing really well, but it’s going to be increasingly difficult for those in the channel that haven’t gone to the cloud – you’ve got to go where your customers are! Ultimately, partners have to understand they are in a services world.

What do you think has driven this shift?

The industry itself has definitely matured, which has helped to drive end-user acceptance, but I think there’s also greater awareness of the business benefits that cloud can deliver. Technology is a key differentiator today, much more so than it was just five years ago, and businesses are starting to recognise that they can’t thrive in the age of disruption by managing their IT in the way that they always have done.

The workforce is increasingly technologically savvy at all levels, and this will continue to contribute to the speed of cloud adoption. Flexibility and agility are the orders of the day in uncertain times, which is exactly what cloud provides. 

What other trends and patterns do you see around cloud computing and related technologies?

Cloud is the gateway to a whole host of other technologies, and you can’t begin to explore things like IoT and Big Data in a way that is economically viable without cloud. We have been speaking about the IoT and Big Data for quite some time but, up until very recently, they have been the reserve of only the largest enterprises with enough available capital to invest in the computing resources needed to power them.

Cloud infrastructure, which offers the opportunity to successfully rent these flexible and scalable resources, effectively democratises the IoT and Big Data and lowers the barriers to entry for all organisations looking to exploit these technologies.

What role do you see cloud playing in business life a year or five years from now?

All of the upcoming technologies – AI, machine learning, quantum computing, the IoT, chatbots, robotics – will grow and develop, with cloud driving them. The channel absolutely needs to understand its role in the next generation of key technologies and to identify and realise the opportunities available to it. 

There are some businesses that are quite comfortable with where they are, and others who know they want to accelerate with the cloud. But only those who recognise how important it is to play a role in the future intelligence of cloud will truly thrive.

Paint for Mac using Parallels Desktop

Question: Is there a Microsoft Paint for Mac®? Answer: There is no MS Paint program for Mac, BUT there are a couple options. You can explore the similar Mac Paint programs, OR get Paint on Mac with Parallels Desktop® for Mac. Option 1: Use Mac Paint alternatives. One option that already exists on your Mac […]

The post Paint for Mac using Parallels Desktop appeared first on Parallels Blog.
