Salesforce sued over dealings with sex trafficking site


Connor Jones

28 Mar, 2019

Salesforce is facing a lawsuit which alleges the company profited from and knowingly facilitated sex trafficking on now-defunct website Backpage.com.

The suit has been filed by 50 anonymous women who claim to be victims and survivors of sex trafficking, abuse and rape as a result of the activity that was taking place on Backpage.com, a website that used Salesforce software during its operation.

Ironically, the lawsuit points to the Silicon Valley CRM giant’s publicly promoted anti-human trafficking campaign at the time of its work with Backpage.

“Salesforce knew the scourge of sex trafficking because it sought publicity for trying to stop it,” according to documents filed in the Superior Court in San Francisco. “But at the same time, this publicly traded company was, in actuality, among the vilest of rogue companies, concerned only with their bottom line.”

Salesforce started working with Backpage in 2013, right around the time the website’s numbers began to fall and the human trafficking accusations piled up, prompting loud calls from social and legal bodies to have the site pulled offline.

A spokesperson from Salesforce said that while the company is unable to comment on ongoing litigation, it is “deeply committed to the ethical and humane use of our products and take these allegations seriously”.

The core argument from the legal filing contends that Salesforce publicly campaigned against human rights violations while privately supplying software to Backpage, the claim being that such services provided the “backbone of Backpage’s exponential growth”. It also claims that Salesforce “designed and implemented a heavily customised enterprise database tailored for Backpage’s operations”.

Backpage was seized by the FBI in April 2018 after investigations found it had harboured human traffickers who targeted both adults and children.

The site worked much like the popular Craigslist, with users posting ads and listings to sell items and advertise rental homes and jobs, but it also had an ‘adult services’ section where the crimes took place.

Backpage’s CEO Carl Ferrer faces five years in prison and is due to be sentenced in July.

Operating and maintaining systems at scale with automation: A guide

For the large or midsize MSP, managing numerous customers with unique characteristics and tens of thousands of systems at scale can be challenging. In this article I want to pull back the curtain, so to speak, on some of the automation and tools that I have used to solve these problems. The approach has three main components: collect, model and react.

Collect

The first problem facing us is an overwhelming flood of data. The sources I draw on include CloudWatch metrics, CloudTrail events, custom monitoring information, service requests, incidents, tags, users, accounts, subscriptions, and alerts. The data is all structured differently, tells us different stories, and is collected at an unrelenting pace. We need to identify all the sources, collect the data, and store it in a central place so we can begin to consume it and make correlations between events.

Most of the data described above can be gathered directly from AWS and Azure APIs, while the rest may need to be ingested by an agent or custom scripts. We also need to make sure we have a consistent core set of data being brought in for each of our customers, while expanding that to include some specialized data that perhaps only certain customers have. All data can be gathered and sent to Splunk indexers, for example, in order to build an index for every customer and to ensure that data stays segregated and secure.
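As a sketch of that collection step, the snippet below wraps a raw monitoring datapoint in the envelope Splunk's HTTP Event Collector (HEC) expects, routed to a per-customer index so data stays segregated. The field names of the sample datapoint and the `cust_` index prefix are illustrative assumptions; a real collector would fetch datapoints via boto3 and POST the JSON payload to the HEC endpoint.

```python
import json
import time

def to_hec_event(customer, source, datapoint, index_prefix="cust_"):
    """Wrap a raw monitoring datapoint in a Splunk HEC envelope,
    routed to a per-customer index so data stays segregated."""
    return {
        "time": datapoint.get("Timestamp", time.time()),
        "host": datapoint.get("InstanceId", "unknown"),
        "source": source,                      # e.g. "cloudwatch" or "cloudtrail"
        "sourcetype": "_json",
        "index": f"{index_prefix}{customer}",  # one index per customer
        "event": datapoint,                    # the raw payload, preserved as-is
    }

# A CloudWatch-style CPU datapoint, shaped like what boto3 returns
sample = {"InstanceId": "i-0abc123", "Timestamp": 1553700000,
          "Average": 42.5, "Unit": "Percent"}
payload = json.dumps(to_hec_event("acme", "cloudwatch", sample))
```

In production the only per-customer difference is the index name, which keeps the ingestion pipeline identical across all customers while preserving segregation.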

Model

Next we need to present the data in a useful way. The modeling of the data can vary depending on who is using it or how it is going to be consumed. A dashboard with a quick look at several important metrics can be very useful for an engineer to see the big picture. Seeing this data daily or throughout the day makes anomalies very apparent. This is especially helpful because gathering and organizing data manually at scale is time consuming, and could otherwise only be done during periodic audits.

Modeling data in a tool like Splunk allows for a low-overhead view with up-to-date data, freeing an engineer to do more important things. A great example is provisioned resources by region. An engineer who looks at the data regularly would quickly notice that the number of provisioned resources has drastically changed. A 20% increase in the number of EC2 resources could mean several things: perhaps a customer is doing a large deployment, or someone accidentally pushed an AWS access key and secret key to GitHub.
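That "drastic change" check is simple enough to express directly. The sketch below, with made-up region counts, flags any resource count that moved more than 20% in either direction since the last observation:

```python
def flag_anomaly(previous, current, threshold=0.20):
    """Return True when the resource count moved more than `threshold`
    (20% by default) in either direction since the last observation."""
    if previous == 0:
        return current > 0
    return abs(current - previous) / previous > threshold

# Per-region EC2 instance counts: (yesterday, today) -- illustrative numbers
counts = {"us-east-1": (200, 250), "eu-west-1": (80, 85)}
alerts = {region: flag_anomaly(prev, cur)
          for region, (prev, cur) in counts.items()}
# us-east-1 jumped 25%, so it gets flagged; eu-west-1 moved ~6% and stays quiet
```

Whether a flagged jump is a legitimate deployment or a leaked access key is exactly the judgment call the engineer makes when the dashboard surfaces it.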

I like to provide customers with regular reports and reviews of their cloud environments, drawing on the same data collected and modeled in Splunk. Historical data trended over a month, quarter, and year can prompt questions or tell a story. It can help with business forecasting or with estimating the number of engineers needed to support a given project.

I recently used historical trending data to show progress of a large project that included waste removal and a resource tagging overhaul for a customer. Not only was I able to show progress throughout the project, but I used the same view to ensure that waste did not creep up and that the new tagging standards were being applied going forward.

React

Finally, it’s time to act on the data we collected and modeled. Using Splunk alerts, I can apply conditional logic to the data patterns and act on them. From Splunk I can call our ticketing system’s API and create a new incident for an engineer to investigate concerning trends, or notify the customer of a potential security risk. I can also call our own APIs to trigger remediation workflows. A few common scenarios are encrypting S3 buckets, deleting old snapshots, restarting failed backups, and requesting cloud provider limit increases.
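The alert-to-action routing can be sketched as a simple dispatch table. The alert names and handlers below are hypothetical; in production each handler would call boto3 or the ticketing system's API, whereas here they just record the action taken so the flow is visible:

```python
# Record of actions, standing in for real boto3 / ticketing API calls
actions_taken = []

def encrypt_bucket(resource):
    actions_taken.append(f"enable default encryption on {resource}")

def delete_old_snapshot(resource):
    actions_taken.append(f"delete snapshot {resource}")

def open_incident(resource):
    actions_taken.append(f"open incident for {resource}")

# Map alert names (as they might arrive from a Splunk webhook) to remediations
HANDLERS = {
    "unencrypted_s3_bucket": encrypt_bucket,
    "stale_snapshot": delete_old_snapshot,
}

def react(alert_name, resource):
    """Route an alert to its remediation; unknown alerts become incidents."""
    HANDLERS.get(alert_name, open_incident)(resource)

react("unencrypted_s3_bucket", "client-logs-bucket")
react("failed_backup_job", "vm-0042")  # no handler, so an engineer gets a ticket
```

Defaulting unknown alerts to an incident keeps a human in the loop for anything the automation has not been taught to fix.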

Because we have several independent data sources providing information, we can also correlate events and have more advanced conditional logic. If we see that a server is failing status checks, we can also look to see if it recently changed instance families or if it has the appropriate drivers. This data can be included in the incident and available for the engineer to review without having to check it.
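That enrichment step can be sketched as joining the status-check feed against a change log before the incident is filed. The data shapes below are illustrative, not a real monitoring API:

```python
def enrich_incident(server, status_checks, change_log):
    """Attach recent-change context to a failed status check so the engineer
    sees likely causes (e.g. an instance-family change) without digging."""
    if status_checks.get(server) != "failed":
        return None  # nothing to report for healthy servers
    recent = [c for c in change_log if c["server"] == server]
    return {"server": server, "status": "failed", "recent_changes": recent}

incident = enrich_incident(
    "web-01",
    {"web-01": "failed", "web-02": "ok"},
    [{"server": "web-01", "change": "instance family m4.large -> m5.large"}],
)
```

The incident ticket then already carries the instance-family change that most likely explains the failed check, instead of the engineer reconstructing it by hand.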

This approach is about efficiency: using data and automation to make quicker, smarter decisions. Operating and maintaining systems at scale brings numerous challenges, and if you are unable to efficiently accommodate the vast amount of information coming at you, you will spend a lot of energy just trying to keep your head above water.

Interested in hearing industry leaders discuss subjects like this and sharing their experiences and use-cases? Attend the Cyber Security & Cloud Expo World Series with upcoming events in Silicon Valley, London and Amsterdam to learn more.

Oracle cuts staff count as it tries to keep pace with AWS


Clare Hopping

28 Mar, 2019

Oracle is preparing to cut more than 350 jobs so it can better focus on its cloud business, despite having already made cuts to its IaaS business.

The company reportedly wants to stay as close as possible to AWS’s cloud model, and this means it has to make cutbacks in other areas of its business – specifically the Oracle Cloud Infrastructure (OCI) unit and its IaaS business covering compute, storage, and network resources.

Some 352 jobs will disappear as of 21 May, including 255 at its headquarters in Redwood City, California and 97 jobs from its Santa Clara campus, according to a notice filed last week, first spotted by Bloomberg.

“As our cloud business grows, we will continually balance our resources and restructure our development group to help ensure we have the right people delivering the best cloud products to our customers around the world,” an Oracle spokesperson said, speaking to Bloomberg.

Apparently, the move will push forward Oracle CEO Larry Ellison’s “vision for the future,” Oracle executive vice president Don Johnson told staff in a company-wide email sent last week.

“It will streamline our products and services, focus investments on our most strategic priorities, and help us to more effectively and rapidly deliver the full promise and reach of Oracle’s Gen 2 Cloud,” wrote Johnson.

There’s no denying that Oracle’s cloud business seems to be faltering at present, with its fiscal Q3 2019 revenues down compared to the previous period.

Oracle will lose staff from its Redwood City, California, headquarters and its Santa Clara, California office from the end of May.

However, there are likely to be additional redundancies outside the company’s California offices. Oracle hasn’t revealed where or how many others will lose their jobs, although IEEE Spectrum reported that at least 200 job losses are expected across India, Mexico, and New Hampshire.

Organisations looking to get best of breed and guaranteed app availability with ‘true’ multi-cloud

Multi-cloud is rapidly becoming the de facto deployment model for organisations of all sizes – and a new report from hybrid cloud software provider Turbonomic has found that the vast majority of respondents ‘expect workloads to move freely across clouds.’

The study, the company’s 2019 State of Multicloud Survey, which polled almost 850 respondents across multiple IT functions, found the key drivers for ‘true’ multi-cloud elasticity were a desire to leverage best-of-breed cloud services and guaranteed application availability.

Of those services, Amazon Web Services, cited by 55% of respondents, and Microsoft Azure (52%) held the clear advantage. 45% of those polled said they still used private cloud of some sort, while Google (22%) and IBM (8%) trailed. “Choice is not only critical in terms of the freedom to choose the best services for their business, but it’s also about leverage,” the report explained. “Clouds must compete, which ultimately drives the industry as a whole forward with the innovation that will differentiate their offerings.”

If multi-cloud is the clear medium of choice, then containerisation is not far behind it. Almost two thirds (62%) of those polled said they had begun their cloud-native or container journey, with on average a quarter (26%) of environments currently using containerised applications. Almost a third of containerised apps were seen as mission-critical.

One of the clearest benefits of multi-cloud implementation, the report noted, was around saving time as the move to workload automation intensified. The exploration of artificial intelligence (AI) and machine learning (ML) is one which divides opinion. Will this time saved lead to more productive workforces, or reduced workforces? Naturally, those polled expressed an optimistic view. Nine in 10 said it would either elevate their careers, or have no impact. Almost half (45%) of organisations surveyed said they were adopting AI and ML for application management.

The report’s conclusion focused around a common theme for regular readers of this publication – the evolution of cloud and how emerging technologies and services are augmenting it. “Clouds today are not just infrastructure as a service, but providers of application and business services,” the report noted. “These services are their true differentiation and will increasingly become their competitive advantage. When clouds compete, customers win.

“Culture and complexity are frequently cited as the main obstacles to success,” it added. “How quickly can people – teams of people – adapt to the speed these technologies enable, embrace the new mindsets they necessitate, and manage the dynamic complexity they create? These questions are compelling organisations to value their people as creative problem solvers and innovators more than ever before.”

The report makes for interesting reading when compared to a study released by Turbonomic and Verizon in 2016. Back then, it was more about outlining a business case for multi-cloud and balking at the cost issues; 81% of those polled said choosing the right workloads for the right clouds was a problem yet to be solved.

“The move toward hybrid and multi-cloud is well underway,” said Tom Murphy, CMO at Turbonomic. “This move is driven by an acute need for IT modernisation, as IT continues to elevate its value by increasingly driving innovation and new revenue opportunities.

“Containers and cloud-native are central to IT modernisation initiatives, creating a tipping point in complexity,” Murphy added. “Across industries, IT staff are seeking to minimise human-assisted automation, which is why they are increasingly turning toward workload automation.”

You can read the full 2019 State of Multicloud survey here (email required).


Why should you attend The UK Cloud Summit 2019?


Cloud Pro

27 Mar, 2019

Emerging technologies such as artificial intelligence (AI), machine learning (ML), automation, blockchain, and cloud cyber security solutions have transformed businesses of all sizes and presented new opportunities for continued innovation in the future of anything-as-a-service (XaaS).

Ingram Micro’s leading cloud event is just over two months away, and if you haven’t registered already, here are some reasons to tempt you to do so…

Held on 21 and 22 May at the Landmark Hotel in London, the UK Cloud Summit features leaders from Acronis, Buffalo, Code 42, Dropbox Business, and Microsoft. Channel Pro, Cloud Pro and IT Pro are proud to be media partners this year, too.

“In 2019, UK Cloud Summit promises to bring you deeper insights and more ‘aha’ moments than ever before,” the event site teases.

The third UK Cloud Summit promises to be the must-attend event of the year for anyone who wants to build, buy or sell cloud and digital technology. By attending, you can learn how to bridge the gap between technological innovation and commercial success. You will also benefit from real-world advice and best-practice guidance on how your business can turn the infinite potential of cloud into infinite reality.

Attending the event offers a fantastic opportunity for business leaders and IT decision makers to:

  • Find out more about the latest cloud solutions in the world’s largest cloud ecosystem
  • Derive actionable insights thanks to practical learning sessions
  • Listen to world-class speakers talk about the value and potential of cloud
  • Network and build relationships with industry influencers, experts, peers, and top-level executives
  • Understand how to commercialise IaaS, XaaS, AI, cyber security and IoT solutions

Alex Hilton, CEO of CIF, will host the event and will be joined by a fantastic array of speakers, including TV presenter Alexis Conran, who is probably best known as the host of The Real Hustle.

Outside of keynote sessions, there are three dedicated tracks, designed to help educate and inform attendees so they’re equipped to make insight-driven decisions on the emerging trends and technologies that are disrupting the industry.

Track one is focused on ‘Infinite Possibilities’ and will examine the new and disruptive technologies that are being fuelled by the cloud.

Track two is dubbed ‘Infinite Ecosystem’ and will focus on how to derive value from industry-leading cloud solutions.

Track three takes ‘Infinite Growth’ as its moniker. This track is designed to help delegates understand and overcome the challenges related to digital transformation.

To honour the partners who are tackling digital transformation head on, there will be a gala dinner and awards ceremony taking place on the evening of 21 May and nominations for a variety of award categories are now open. The awards will recognise partners across a number of categories including championing the women leading in the channel, MSP of the year, charitable partner of the year and more…  

“Discover new, disruptive technologies fuelled by the cloud (Infinite Possibilities), explore ways that they can benefit from category leading cloud solutions (Infinite Ecosystems), and grasp the challenges of transforming their business in the digital economy (Infinite Growth),” said Scott Murphy, director of cloud and advanced solutions at Ingram Micro in the UK and Ireland.

“You’ll learn about the extraordinary shift that’s taking the channel in a bold, new direction. Where the unknown becomes known. Where your business can leap forward in cloud enablement, and where the infinite potential becomes infinite reality.”

To find out more about the event and sign up, visit the UK Cloud Summit website.  

 

AWS makes double swoop for Volkswagen and Standard Bank


Keumars Afifi-Sabet

27 Mar, 2019

Amazon’s cloud arm has struck separate agreements with Volkswagen (VW) and Standard Bank to boost the two companies’ cloud platform and customer-facing applications respectively.

VW’s deal with Amazon Web Services (AWS) aims to pave the way for a transformation of the car maker’s manufacturing and logistical processes across its 122 plants, including improving the effectiveness of assembly equipment and tracking parts and vehicles.

Together, the two companies will develop a new platform, dubbed the Volkswagen Industrial Cloud, which will deploy technologies such as the Internet of Things (IoT) and machine learning to realise this wider ambition.

AWS’ IoT services will be deployed in full across VW’s new platform in order to detect and collect data from the floor of each plant, then organise and conduct sophisticated analytics on the information to gain operational insights.

Moreover, VW will feed all the information into a data lake built on Amazon S3, on which data analytics will be conducted. This, the two companies hope, will lead to improvements in forecasting and insight into operational trends. It could also streamline the manufacturing process and identify gaps in production and waste management.

“We will continue to strengthen production as a key competitive factor for the Volkswagen Group. Our strategic collaboration with AWS will lay the foundation,” said the chairman of the Porsche AG executive board Oliver Blume.

“The Volkswagen Group, with its global expertise in automobile production, and AWS, with its technological know-how, complement each other extraordinarily well. With our global industry platform, we want to create a growing industrial ecosystem with transparency and efficiency bringing benefits to all concerned.”

AWS has also announced that South Africa’s Standard Bank has decided to use its services to migrate production workloads onto the public cloud provider’s systems. These include many customer-facing platforms and banking apps.

Subject to regulations, the migration will ideally take place across all banking departments including personal banking and corporate investment banking. The firm will also adopt AWS’ data analytics and machine learning tools, to automate financial operations and generally improve web and mobile apps used by its customers.

“Standard Bank Group has been a trusted financial institution for more than 150 years. We look forward to working closely with them as they become Africa’s first bank in the cloud, leveraging AWS to innovate new services at a faster clip, maintain operational excellence, and provide secure banking services to customers around the world,” said Andy Jassy, CEO of AWS.

An AWS cloud centre of excellence will also be created within the bank, featuring a team dedicated exclusively to the public cloud migration. This centre will also build training and certification programmes within the firm to boost employees’ digital skills. This will also be extended one step further with an educational and digital skills programme to be launched across South Africa.

AMP for Gmail launches in bid to boost productivity


Connor Jones

27 Mar, 2019

Google has officially announced its AMP for Gmail feature, which aims to revitalise the long-standing, barebones email user interface into a more interactive web page-like experience.

Accelerated Mobile Pages (AMP) was announced over a year ago as part of an open-source framework for developers to create faster loading content for the web.

Instead of basic plain-text messages, you will be able to send and receive more interactive messages over the coming weeks and months as more organisations adopt the new feature – something Google thinks will boost business productivity.

For example, someone may send over contract terms that require written agreement. Instead of typing back a traditional three-word response, AMP can embed an instant-messaging window that can be used for further discussion if required.

“Starting today, we’re making emails more useful and interactive in Gmail,” said Aakash Sahney, product manager at Gmail. “Your emails can stay up to date so you’re always seeing the freshest information, like the latest comment threads and recommended jobs. With dynamic email, you can easily take action directly from within the message itself, like RSVP to an event, fill out a questionnaire, browse a catalogue or respond to a comment.”

This will help prevent cluttered inboxes and save users from having to leave the email client to review something that can simply be presented within the original email, or to answer a questionnaire. AMP aims to streamline email-based productivity.

Major companies you’re likely to have some connection to that support AMP for Gmail already include Booking.com, Doodle and Pinterest – so you’re likely to start receiving AMP emails from these firms pretty soon.

If you’re familiar with working in G Suite, you’ll know how annoying it is to get a separate email for each individual comment made on a Google Doc. With AMP for Gmail, all comments can be viewed in just one email, which lets you reply to them directly from your inbox.

Like most new tech advancements that change a major process to which many are accustomed, AMP will probably take a bit of getting used to. Some will argue that Google is trying to re-invent the wheel, but as long as it keeps turning as well as it used to, it doesn’t bother us.

AMP will roll out to desktop Gmail clients first, with mobile support coming later down the line. It will be interesting to see the extent to which companies adopt the new format, considering it won’t be accessible in every email client (although Outlook and Yahoo Mail are also currently supported).

How ‘AI at the edge’ is creating new semiconductor demand

As more CIOs and CTOs focus attention on selecting the best-fit IT infrastructure for their particular cognitive computing needs, vendors of semiconductor technologies are exploring new ways to optimize their investment in solutions at the edge of enterprise networks.

Revenue from the sale of artificial intelligence (AI) chipsets for edge inference and inference training will grow at 65% and 137% respectively between 2018 and 2023, according to the latest worldwide market study by ABI Research.

During 2018, shipment revenues from edge AI processing reached $1.3 billion, and by 2023 this figure is forecast to reach $23 billion. While it's a massive increase, that doesn’t necessarily favor current market leaders Intel and NVIDIA.
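As a quick sanity check on those headline numbers, growing from $1.3 billion to $23 billion over the five years 2018–2023 implies a compound annual growth rate of roughly 78%, which sits plausibly between the 65% and 137% figures quoted for the two categories:

```python
def cagr(start, end, years):
    """Compound annual growth rate implied by a start and end value."""
    return (end / start) ** (1 / years) - 1

growth = cagr(1.3, 23.0, 5)   # 2018 -> 2023 revenue, in $ billions
print(f"{growth:.0%}")        # prints "78%"
```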

AI chipset market development

According to the ABI assessment, there will be intense competition to capture this revenue between established vendors and several prominent startups.

"Companies are looking to the edge because it allows them to perform AI inference without transferring their data. The act of transferring data is inherently costly and in business-critical use cases where latency and accuracy are key, and constant connectivity is lacking, applications can’t be fulfilled," said Jack Vernon, industry analyst at ABI Research.

Moreover, locating AI inference processing at the edge also means that companies don’t have to share private or sensitive data with public cloud service providers, a scenario that has proven to be problematic in the healthcare and consumer sectors.

That said, edge AI is going to have a significant impact on the semiconductor industry. The biggest winners from the growth in edge AI are going to be those vendors that either own or are currently building intellectual properties for AI-related Application-Specific Integrated Circuits (ASICs).

By 2023, it's predicted that ASICs could overtake GPUs as the architecture supporting AI inference at the edge, both in terms of annual vendor shipments and revenues.

In terms of market competition, on the AI inferencing side, Intel will be competing with several prominent AI startups – such as Cambricon Technology, Horizon Robotics, Hailo Technologies, and Habana Labs – for dominance of this market segment.

NVIDIA, with its GPU-based AGX platform, has also been gaining momentum in industrial automation and robotics. And while FPGA leader Xilinx can expect an uptick in revenues on the back of companies using FPGAs to perform inference at the edge, Intel, also an FPGA vendor, is pushing its Movidius and Mobileye chipsets as well.

Outlook for AI chipset applications growth

For AI training, NVIDIA will hold on to its current position as the market leader. However, other AI applications at the edge will likely favor alternative vendors.

"Cloud vendors are deploying GPUs for AI training in the cloud due to their high performance. However, NVIDIA will see its market share chipped away by AI training focused ASIC vendors like Graphcore, who are building high-performance and use-case specific chipsets," concluded Vernon.


Addressing the concerns of data management and sovereignty in multi-cloud and edge scenarios

MWC Barcelona last month heavily focused on two fast-emerging technology trends: 5G and edge computing. Together, they will significantly impact businesses by enabling massive volumes of digital data to transfer between cloud servers located in multiple regions around the world, as well as between IoT devices and edge nodes. This is due to the hyper-fast speed of 5G networks and edge computing architectures that place micro-clouds and data centres closer to data-generating IoT devices.

To seize new opportunities and stay ahead of competitors, businesses are in the process of transforming their operational models to take advantage of 5G and edge computing.

Currently, this data generated by multiple devices is stored in the cloud; this could either be on-premises, in a public cloud like Amazon Web Services (AWS), Azure or Google, hybrid, or multi-cloud. Additionally, the edge can also be seen as a ‘mini-cloud’ where some data will surely reside to support endpoint applications. With the edge, an increasing number of data storage servers are emerging to host data. In a few years, large amounts of data will be scattered across clouds and edges located in different countries and continents.

However, growing amounts of digital data are bound by the regulations of many countries and regions, designed to enforce data sovereignty and protect both general and sensitive information from external access and misuse. Last year, for example, the European Union implemented GDPR. Similarly, India, China and Brazil, among other nations, have established their own data protection bills. The varied and growing number of regulations creates concerns for businesses in the midst of transformation driven by 5G and the edge. Businesses, including technology infrastructure vendors and service providers, will want ownership of data generated by consumers, whether that occurs locally or across borders.

The key question therefore is: how can data in multi-cloud and multi-node environments be managed? Will data sovereignty be a roadblock to latency-sensitive 5G use cases?

I came across one company, Kmesh, and found it was working on compelling solutions for data mobility in edge and multi-cloud scenarios. I got in touch with Jeff Kim, CEO of Kmesh, to learn about the core of their technology.

Kmesh, founded only in 2018, today has several solution offerings to address challenges with data used in multi-cloud environments, different countries, and edges. The offerings are SaaS solutions for data sovereignty, edge data and multi-cloud, and each provides a centralised software portal where users can set up policies for the ways they wish to distribute data. These SaaS offerings allow organisations to transform centralised data into distributed data, operating over multiple clouds, countries and edges as a single global namespace.

Kmesh enables businesses to take full control of their data generated at various data centres and residing in different geographies. Businesses can also move or synchronise the data in real time. So how do their SaaS offerings work? “Using our SaaS, you install a Kmesh software agent on-premises and another Kmesh software agent on any cloud or clouds,” said Kim. “Then, using our SaaS, you control which data gets moved where. Push a button, and the data gets moved/synced in real time, with no effort by the customer.”
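The article describes a portal where users "set up policies for the ways they wish to distribute data". The sketch below is purely illustrative of that kind of placement rule (it is not Kmesh's actual API; the jurisdiction codes and location names are invented), showing how a policy could constrain where regulated data may be synced:

```python
# Hypothetical placement policy: data tagged with a jurisdiction may only
# land in the listed locations; unregulated data can go anywhere allowed.
POLICY = {
    "de": ["frankfurt-dc", "azure-germany"],      # German data stays in-country
    "vn": ["hanoi-edge"],                         # Vietnamese data stays local
    "*":  ["aws-us-east-1", "azure-westeurope"],  # default for unregulated data
}

def placement_targets(jurisdiction, policy=POLICY):
    """Return the storage locations a datum may be moved or synced to."""
    return policy.get(jurisdiction, policy["*"])

targets = placement_targets("de")  # only the two in-country locations
```

A sync engine consulting such a table before every move is one way the "push a button and the data gets moved" workflow could stay compliant with in-country data mandates.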

With this approach, Kmesh aims to deliver significant efficiency improvements in operations involving data by providing the ability to orchestrate where data generated by end devices will reside and be accessed across edge, multi-cloud and on-prem.

Kmesh also aims to offer agility and flexibility in application deployment when used with Kubernetes, the de facto technology for orchestrating where applications reside. Businesses gain the flexibility to deploy applications anywhere and can leverage data ponds placed at different locations. Like Kubernetes, Kmesh follows cloud-native design principles targeted at cloud, hybrid cloud, and multi-cloud use cases.

Leading public clouds are known to have excellent artificial intelligence (AI) and machine learning (ML) capabilities for data provided to them. Kim explained how Kmesh can focus on data mobility in the age of AI and ML. “Enterprise customers still have their data predominantly on-premises,” he said. “Cloud providers have great AI/ML applications, such as TensorFlow and Watson, but moving data to the cloud and back again remains a challenge. Kmesh makes that data movement easy and eliminates those challenges, allowing customers to focus on what they want – the AI/ML application logic.”

Kmesh offerings reduce the burden on network resources by eliminating the need to transfer huge amounts of data between cloud and digital devices. In addition, businesses can substantially lower their storage costs by eliminating the need for data replication on different clouds.

I also asked if Kmesh could benefit telecom service providers in any way. “We can help in two ways, with them as partners and as customers,” said Kim. “As customers, telcos have massive amounts of data, and we can help them move it faster and more intelligently. As partners, if they offer cloud compute solutions, then they can resell Kmesh-based services to their enterprise customers.

“One early sales entry point to enterprises is by supporting data sovereignty in countries where the big clouds – AWS, Azure, Google – have little or no presence,” added Kim. “Many countries, particularly those with high GDPs, now have regulations that mandate citizen data remains in-country. Telcos in countries like Vietnam, Indonesia, Switzerland, Germany [and] Brazil can use Kmesh to offer data localisation compliance.”
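The data-sovereignty use case Kim describes boils down to a placement constraint: if a country mandates that citizen data stays in-country, no replica may leave it. A minimal, hypothetical compliance check – the country list and rule shape are invented for illustration, not Kmesh’s actual policy engine – might look like this:

```python
# Countries assumed (for this sketch) to mandate in-country data residency.
RESIDENCY_REQUIRED = {"VN", "ID", "CH", "DE", "BR"}

def compliant(dataset_country, replica_countries):
    """A dataset is compliant if, when its home country mandates residency,
    every replica also resides in that country."""
    if dataset_country not in RESIDENCY_REQUIRED:
        return True
    return all(c == dataset_country for c in replica_countries)

# German citizen data replicated only within Germany: allowed.
assert compliant("DE", ["DE", "DE"]) is True
# A replica in the US would violate the mandate.
assert compliant("DE", ["DE", "US"]) is False
# No residency mandate assumed for the US in this sketch.
assert compliant("US", ["US", "IE"]) is True
```

A telco reselling Kmesh-based services would effectively be enforcing this kind of rule at the data-orchestration layer rather than in each application.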

The technology world is looking for flexible IT infrastructure that can evolve easily to meet changing data and performance requirements as new, lucrative use cases emerge. Kmesh is one company aiming to address data management and data sovereignty concerns while cutting the costs associated with storage and network resources.

The post Addressing the Concerns of Data Management and Sovereignty in Multi-Cloud and Edge appeared first on Calsoft Inc. Blog.


How CIOs can build effective cross-business collaboration


Mark Samuels

26 Mar, 2019

Shadow IT is now a business-as-normal activity, with non-IT professionals having more interest in technology than ever before, both in terms of using and procuring systems. What does this rise of decentralised IT mean for the role of the CIO and how can digital leaders ensure cross-business collaboration?

For Phil Armstrong, global CIO at finance firm Great-West Lifeco, the answer is simple: digital leaders must spend less time in the IT department and more time talking with executive peers across the rest of the organisation. “Communication skills are paramount for modern CIOs,” he says. “They need to talk in terms the business can understand.”

IT and non-IT collaboration

The good news is that most leading CIOs appear to be savvy enough to understand the importance of looking beyond the safe confines of the IT department. The best digital leaders understand the myriad levers of business, not just the tools of the technology trade. They explain how digital transformation can change the business for the better.

Yet the challenge of cross-collaboration communication between IT leaders and their non-IT peers remains significant. While most employees are keen to use a combination of smart devices and the cloud to download their own apps, that growing enthusiasm hasn’t necessarily been matched by a commensurate awareness of how to use systems and services securely.

All executives must work to ensure technology isn’t just bought and forgotten about until the IT department picks up problems at a later stage. Gideon Kay, European CIO at Dentsu Aegis Network, says the onus is on all executives to take an active interest in the strengths and weaknesses of technology. Specialist appointments to advise the board can help in this regard.

“I do believe we’re at a point where boards generally need to be more consciously aware of the opportunity that technology creates and the risk it brings, too,” he says. “It’s time for boards to have technologists as non-executives. That’s really important to help businesses understand and manage their entire portfolio.”

Experienced CIO and former CMO Sarah Flannigan is one executive who is fulfilling such a role. After leaving her position as CIO at EDF Energy at the end of last year, Flannigan is now pursuing a portfolio role, where she is providing consultancy and non-executive advice to a range of organisations, including Royal Botanic Gardens, Kew and the Heritage Lottery Fund.

“You can make all the difference in those moments,” she says. “You learn and grow as an individual, while helping others – and the more that there are others who are learning, the more everyone benefits collectively. Exponentially, you become increasingly helpful to the executives you work with and you also keep learning as an individual. The non-exec role helps you create greater value for everyone involved across the organisations you serve.”

The dual-role CIO

Sometimes, the answer for organisations and for digital executives is to take a novel approach to the challenge of effective cross-business collaboration. Rather than being the IT chief for a single organisation, David Walliker is CIO at both Liverpool Women’s NHS Foundation Trust and the Royal Liverpool and Broadgreen University Hospital NHS Trust.

Liverpool Women’s NHS Trust (pictured) is one of a number of trusts to successfully share a CIO

Walliker joined Liverpool Women’s in April 2013 and assumed the CIO role at the Royal in January 2015. He is now running an IT-led change programme concurrently across both organisations, and his working arrangements have varied since he took on the dual role. He currently works full-time for the Royal and is loaned back two days a week to the Women’s Hospital – an approach that works well for Walliker and the NHS trusts he serves.

“I enjoy the split because they’re two fundamentally different organisations,” he says. “The Royal’s exactly what you’d expect from a large, city centre hospital – it’s extremely busy and very fast-paced. The Women’s Hospital is a specialist Trust with a very different culture. The difference between the two creates an interesting split through the working week.”

Walliker says the aim is the same across both organisations: using the digitisation of paper records and electronic forms (e-forms) to deliver innovative healthcare to the people of Liverpool – and he is receiving cross-business support from senior executives in both NHS Trusts. “Digitising costs money in the shorter term but the long-term payback is huge,” he says. “It makes your organisation much more efficient and cost-effective.”

Like Walliker, Julie Dodd also fulfils a dual role at Parkinson’s UK. As director of digital transformation and communication at the charity, Dodd oversees the IT and marketing functions. This combined role makes it easier for her to ensure there is a strong working bond between professionals across the organisation. As part of this process, Dodd encourages agile working.

“That digital spirit of collaborative, cross-functional teams is really exciting when it allows people to break from their silos and enter debates across the wider organisation as people from separate teams who can bring something different to the conversation on an equal footing,” she says.

Friend-maker, problem-solver

Richard Corbridge, who is chief digital and information officer at Leeds Teaching Hospitals NHS Trust, is another IT leader who recognises the power of joined-up thinking. Corbridge believes the key role for the CIO is often as a friend-maker and problem-solver. While strong bonds with marketing chiefs will be crucial in this regard, other functions matter too.

“The decentralisation of IT means procurement is increasingly led by others on the board,” he says. “The CIO is then the person who provides detail on why an investment in IT makes sense. Technology chiefs also need to stay close to audit and risk functions. Cyber security remains a risk and the key is knowing how to react when something happens.”

Corbridge believes it is crucial to remember that the CIO role is an ever-evolving position. Without the ability to evolve, IT leaders get stuck in an operationally focused mindset. This inherent change in the role means CIOs must concentrate on building cross-organisational bonds – and relationships with communications professionals are likely to be crucial.

“You can and should lean on the shoulders of the digital marketing team,” says Corbridge. “That for me is when we, as CIOs, can make a difference. I think the convergence of digital and communications, and the growing importance of the CIO role, is where we’ll end up as the digital leadership role continues to evolve.”

Photo by Rodhullandemu / CC BY 2.0