Amazon Web Services and Nokia team up for greater service provider cloud transition

Amazon Web Services (AWS) and Nokia are coming together to speed up the migration of service provider applications to the cloud, the latter has announced.

Through the deal, Nokia will support service providers throughout their AWS implementation with consulting, design, integration, migration and operation, alongside Internet of Things (IoT) use cases through AWS Greengrass and Amazon Machine Learning, among others.

On the strategic side, both companies will work together for 5G and edge cloud, including reference architectures, as well as providing an improved user experience for customers of Nuage Networks SD-WAN who also use AWS.

It’s worth noting that this isn’t ‘telco cloud’ in the strictest sense, but rather ‘cloud-enabled’ services for service providers. This publication has examined Nokia’s strategy in this area as far back as June 2014, when Clare McCarthy, practice leader for telco IT at Ovum, said that the ‘key challenge’ in the coming few years for customer experience management providers was assuring and monetising the IoT ecosystem.

“Our collaboration with AWS will accelerate the migration of service provider applications to the cloud and enable us to forge new opportunities together by delivering on next-generation connectivity and cloud services,” said Kathrin Buvac, Nokia chief strategy officer. “This is a wide-ranging collaboration, spanning our services capabilities in application migration, SD-WAN from Nuage Networks, 5G, and IoT, allowing new growth opportunities for our top customers across both the service provider and large enterprise market segments.”

“Service providers are accelerating their migration to AWS in order to drive innovation for their customers and deliver lower total cost of IT to their organisations,” said Terry Wise, AWS global vice president of channels and alliances in a statement. “We are excited to partner with Nokia to accelerate cloud transformation for service providers, and enable the digital transformation journey for our mutual large enterprise customers.”

Analysing Gartner’s top 10 predictions for IT in 2018 – and beyond

  • In 2020, AI will become a positive net job motivator, creating 2.3m jobs while eliminating only 1.8m jobs.
  • By 2020, IoT technology will be in 95% of electronics for new product designs.
  • By 2021, 40% of IT staff will be versatilists, holding multiple roles, most of which will be business, rather than technology-related.

These and many other insights were presented earlier this month at the Gartner Symposium/ITxpo 2017, held in Orlando, Florida. Gartner’s predictions, and the series of assumptions supporting them, illustrate how CIOs must seek out and excel in the role of business strategist first and technologist second. In 2018 and beyond, CIOs will be more accountable than ever for revenue generation, value creation, and the development and launch of new business models using proven and emerging technologies. Gartner’s ten predictions point to a future in which CIOs are collaborators in new business creation, selectively using technologies to accomplish that goal.

The following are Gartner’s 10 predictions for IT organizations for 2018 and beyond:

By 2021, early adopter brands that redesign their websites to support visual- and voice-search will increase digital commerce revenue by 30%

Gartner has found that voice-based search queries are the fastest-growing mobile search type. Voice and visual search are accelerating mobile browser- and mobile app-based transactions and will continue to do so in 2018 and beyond. Mobile browser and app-based transactions account for as much as 50% of all transactions on many e-commerce sites today. Apple, Facebook, Google and Microsoft’s investments in AI and machine learning will be evident in how quickly their visual- and voice-search technologies accelerate in the next two years.

By 2020, five of the top seven digital giants will willfully “self-disrupt” to create their next leadership opportunity

The top digital giants include Alibaba, Amazon, Apple, Baidu, Facebook, Google, Microsoft, and Tencent. Examples of self-disruption include AWS Lambda versus traditional cloud virtual machines, Alexa versus screen-based e-commerce, and Apple Face ID versus Touch ID.

By the end of 2020, the banking industry will derive $1bn in business value from the use of blockchain-based cryptocurrencies

Gartner estimates that the current combined value of cryptocurrencies in circulation worldwide is $155bn (as of October 2017), and this value has been increasing as tokens continue to proliferate and market interest grows. Cryptocurrencies will represent more than half of worldwide blockchain global business value-add through year-end 2023 according to the Gartner predictions study.

By 2022, most people in mature economies will consume more false information than true information

Gartner warns that while AI is proving to be very effective at creating new information, it is just as effective at distorting data to create false information. Gartner predicts that before 2020, untrue information will fuel a major financial fraud made possible by high-quality falsehoods moving financial markets worldwide. By the same year, no significant internet company will fully succeed in its attempts to mitigate this problem. Within three years, a significant country will pass regulations or laws seeking to curb the spread of AI-generated false information.

By 2020, AI-driven creation of “counterfeit reality,” or fake content, will outpace AI’s ability to detect it, fomenting digital distrust

AI and machine learning systems today can categorize the content of images faster and more consistently than humans. Gartner cautions that by 2018, a counterfeit video produced in a satirical context will spark a public debate after being accepted as real by one or both sides of the political spectrum. In the next year, there will be a 10-fold increase in commercial projects to detect fake news, according to the predictions study.

By 2021, more than 50% of enterprises will be spending more per annum on bot and chatbot creation than on traditional mobile app development

Gartner is predicting that by 2020, 55% of all large enterprises will have deployed (used in production) at least one bot or chatbot. Rapid advances in natural-language processing (NLP) make today’s chatbots much better at recognizing user intent than previous generations. According to Gartner’s predictions study, NLP is used to determine the entry point into a chatbot’s decision tree, but a majority of chatbots still use scripted responses within that tree.
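The architecture Gartner describes can be sketched in a few lines: an NLP step picks the intent, the intent selects the entry point into a decision tree, and everything after that is a scripted response. This is a minimal illustration only; the intents, keywords and replies below are invented, and the keyword lookup is a stand-in for a real NLP model.

```python
# Sketch of the pattern described above: a (very naive) intent matcher picks
# the entry point into a decision tree whose responses are scripted.
# All intents, keywords and replies here are invented for illustration.

SCRIPTS = {
    "billing": {
        "prompt": "Do you want to see your balance or dispute a charge?",
        "balance": "Your balance is shown on the Billing page.",
        "dispute": "A dispute form has been opened for you.",
    },
    "support": {
        "prompt": "Is this a login problem or an outage?",
        "login": "Try resetting your password from the sign-in page.",
        "outage": "Check the status page for current incidents.",
    },
}

KEYWORDS = {"bill": "billing", "charge": "billing", "login": "support", "down": "support"}

def detect_intent(utterance: str) -> str:
    """Stand-in for a real NLP model: keyword lookup with a fallback intent."""
    for word, intent in KEYWORDS.items():
        if word in utterance.lower():
            return intent
    return "support"

def entry_prompt(utterance: str) -> str:
    """The intent selects the decision-tree entry point; the reply is scripted."""
    return SCRIPTS[detect_intent(utterance)]["prompt"]
```

In a production chatbot the `detect_intent` step would be a trained classifier, but the scripted decision tree downstream of it is exactly what the study describes.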

By 2021, 40% of IT staff will be versatilists, holding multiple roles, most of which will be business, rather than technology-related

By 2019, IT technical specialist hires will fall by more than 5%. Gartner predicts that 50% of enterprises will formalize versatilist profiles and job descriptions, that 20% of IT organizations will hire versatilists to scale digital business, and that the number of IT technical specialist employees will fall to 75% of 2017 levels.

In 2020, AI will become a positive net job motivator, creating 2.3m jobs while eliminating only 1.8m jobs

By 2020, AI-related job creation will cross into positive territory, reaching two million net-new jobs in 2025. Global IT services firms will see massive job churn in 2018, adding 100,000 jobs while dropping 80,000. By 2021, Gartner predicts, AI augmentation will generate $2.9tn in business value and recover 6.2 billion hours of worker productivity.

By 2020, IoT technology will be in 95% of electronics for new product designs

Gartner predicts that IoT-enabled products with smartphone activation will begin emerging at the start of 2019.

Through 2022, half of all security budgets for IoT will go to fault remediation, recalls and safety failures rather than protection

Gartner predicts IoT security spending will increase sharply after 2020, once better methods of applying cross-industry security patterns to IoT security architectures emerge, growing at more than a 50% compound annual growth rate (CAGR) over current rates. The total market for IoT security products will reach $840.5m by 2020, representing a 24% CAGR for IoT security from 2013 through 2020. Combining IoT security services, safety systems, and physical security will lead to a fast-growing global market; Gartner predicts exponential growth in this area, exceeding $5bn in global spending by year-end 2020.
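For readers who want to sanity-check CAGR figures like the ones above, the underlying relation is simply end_value = start_value × (1 + rate)^years. The sketch below shows the arithmetic; the 2013 base value is back-solved for illustration and is not a Gartner figure.

```python
# CAGR arithmetic used in forecasts like the one above:
# end_value = start_value * (1 + cagr) ** years.

def cagr(start: float, end: float, years: int) -> float:
    """Compound annual growth rate implied by two endpoint values."""
    return (end / start) ** (1 / years) - 1

def grow(start: float, rate: float, years: int) -> float:
    """Project a value forward at a constant compound rate."""
    return start * (1 + rate) ** years

# A market growing at 24% per year over the 7 years 2013-2020 multiplies ~4.5x,
# so a $840.5m 2020 figure implies roughly a $186m base in 2013 (illustrative).
multiple = grow(1.0, 0.24, 7)
implied_2013_base = 840.5 / multiple
```

Running the numbers this way is a quick check that a quoted endpoint and a quoted CAGR are at least mutually consistent.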

Gartner has also made available an infographic of the top 10 Strategic Technology Trends for 2018, in addition to an insightful article on Smarter with Gartner. You can find the article at Gartner Top 10 Strategic Technology Trends for 2018.

Sources:
Gartner Reveals Top Predictions for IT Organizations and Users in 2018 and Beyond
Smarter With Gartner, Gartner Top 10 Strategic Technology Trends for 2018
Top Strategic Predictions for 2018 and Beyond: Pace Yourself, for Sanity’s Sake (client access required)

Cuphead on Mac with Parallels Desktop

“Now Is the Time for a Great Adventure” Cuphead is a beautifully illustrated “run and gun” action game, heavily focused on boss battles, in which you repay a debt to the devil. The game is made by StudioMDHR and runs on the Unity engine. Inspired by cartoons of the 1930s (think “Steamboat Willie”), the visuals and audio […]

The post Cuphead on Mac with Parallels Desktop appeared first on Parallels Blog.

Alibaba gives update on cloud roadmap – with blockchain, IoT and AI of note going forward

Alibaba’s ambition in the cloud arena is well-known – with progress certainly being made if reports are to be believed – and to illustrate, the company outlined a variety of clients and technology opportunities at its Computing Conference in Hangzhou.

Simon Hu, president of Alibaba Cloud, took to the stage to offer a progress update about the company’s cloud push and vision.

More than one million paying customers are on Alibaba Cloud – a fact revealed during the company’s most recent financial report in August, with CEO Daniel Zhang saying at the time the milestone was ‘merely a starting point’. Hu added some meat to this bone; one third of China’s top 500 companies are on board, while 80% of customers were designated as ‘innovation companies’. “They are observing the changes brought by the Internet and brought by cloud computing,” said Hu.

The list of companies utilising Alibaba’s cloud – not just in China but pan-Asia – was particularly of interest. Philips China has moved entirely to the cloud, owning no servers or data centres of its own, and has seen IT costs drop by 15%, said Hu. Bank of Nanjing has moved off IBM and Oracle, while Moutai, a Chinese liquor distributor, is using Alibaba’s cloud technology and is particularly interested in blockchain going forward. Other names – AirAsia, the State Administration of Taxation and the International Olympic Committee (IOC) – kept on coming.

The plan, alongside the company's partners (below), was to move from one million customers to 10 million.

Products were unveiled; some new, some not entirely new. In the former category was the X-Dragon cloud server, described as a ‘mixed cloud service’ combining a physical server’s stability with the scalability of a virtual machine. “This can connect the demand with the lowest possible price and offer the best performance,” said Hu.

Another on this theme was DataV, a 3D city visualisation aimed squarely at the Internet of Things (IoT). Using Hangzhou as the example, Hu said more than 100,000 3D buildings can be managed through the visualisation. In the not-entirely-new category was MaxCompute – officially launched last month – a proprietary big data processing platform featuring greater machine learning capability alongside scalability and security protection.

Alongside the one million paying customer milestone, Alibaba has received recent praise from analysts. Gartner put the company in third place, behind Amazon Web Services (AWS) and Microsoft, in public cloud infrastructure as a service (IaaS) last month, while Fung Global Retail & Technology (FGRT), an analyst firm covering the Computing Conference, put it this way.

“We are convinced that Alibaba is at the forefront of the development of various cutting-edge technologies, such as AI and voice recognition,” the company wrote. “We are optimistic about its various initiatives in promoting innovation and empowering entrepreneurs in China.

“With its vision, we believe Alibaba will continue to play an important role in driving industries such as eCommerce, healthcare, urban planning and manufacturing in China, and the world.”

A letter to shareholders from Zhang, published yesterday, said the past year was one in which investors “gained appreciation for the clarity of Alibaba’s vision, mission and strategic blueprint.” Aligned to this was the cloud arm, which Zhang said was ‘still in hyper-growth mode’. “Together with a large and diverse community of developers and service providers, a new ecosystem has formed around our cloud business,” he wrote.

Photo source: www.alibabagroup.com

Why financial services companies love Docker containers

Tech-savvy banks were among the first and most enthusiastic supporters of Docker containers.

Goldman Sachs invested $95 million in Docker in 2015. Bank of America has its enormous 17,500-person development team running thousands of containers. Top fintech companies like Coinbase also run Docker containers on AWS cloud. Nearly a quarter of enterprises are already using Docker and an additional 35% plan to use it.

It may seem unusual that one of the most risk-averse and highly regulated industries should invest in such a new technology. But for now, it appears that the potential benefits far outweigh the risks.

Why containers?

Containers allow you to describe and deploy the template of a system in seconds, with all infrastructure-as-code, libraries, configs, and internal dependencies in a single package. This means the image built from your Dockerfile can be deployed on virtually any system; an application in a container running on an AWS-based testing environment will run exactly the same in a production environment on a private cloud.

In a market that is becoming increasingly skittish about cloud vendor lock-in, containers have removed one more hurdle to moving workloads across AWS, VMware, Cisco, and other platforms. A survey of 745 IT professionals found that the top reason IT organizations are adopting Docker containers is to build a hybrid cloud.

In practice, teams are usually not moving containers from cloud to cloud or OS to OS, but rather benefiting from the fact that developers have a common operating platform across multiple infrastructure platforms. Rather than moving the same container from VMware to AWS, they benefit from being able to simplify and unify processes and procedures across multiple teams and applications. You can imagine that financial services companies that maintain bare metal infrastructure, VMware, and multiple public clouds benefit from utilising the container as a common platform.

Containers also are easier to automate, potentially reducing maintenance overhead. Once OS and package updates are automated with a service like CoreOS, a container becomes a maintenance-free, disposable “compute box” for developers to easily provision and run code. Financial services companies can leverage their existing hardware, gaining the agility and flexibility of disposable infrastructure without a full-scale migration to public cloud.

On large teams, these efficiencies — magnified across hundreds or thousands of engineers — can have a huge impact on the overall speed of technological innovation.

The big challenges: Security and compliance

One of the first questions enterprises ask about containers is: what is the security model, and what is the impact of containerization on your existing infrastructure security tools and processes?

The truth is that many of your current tools and processes will have to change. Often your existing tools and processes are not “aware” of containers, so you must apply creative alternatives to meet your internal security standards. It’s why Bank of America only runs containers in testing environments. The good news is that these challenges are by no means insurmountable for companies that are eager to containerize; the International Securities Exchange, for example, runs two billion transactions a day in containers running on CoreOS.

Here are just a few examples of the types of changes you’d have to make:

Monitoring: The most important impact of Docker containers on infrastructure security is that most of your existing security tools — monitoring, intrusion detection, etc. — are not natively aware of sub-virtual-machine components, i.e. containers. Most monitoring tools on the market are just beginning to offer a view of transient instances in public clouds, and are far behind offering functionality to monitor sub-VM entities. In most cases, you can satisfy this requirement by installing your monitoring and IDS tools on the virtual instances that host your containers. This will mean that logs are organized by instance, not by container, task, or cluster. If IDS is required for compliance, this is currently the best way to satisfy that requirement.
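Because host-level agents key everything by instance, a per-container view typically has to be reconstructed after the fact by parsing container identifiers out of the instance-organized log stream. The sketch below illustrates that post-processing step; the log format, instance IDs and container names are invented for illustration.

```python
# Sketch of the workaround described above: host-level agents emit logs keyed
# by instance, so a per-container view is reconstructed afterwards by parsing
# a container ID out of each line. The log line format here is invented.

from collections import defaultdict
import re

LINE = re.compile(r"instance=(?P<instance>\S+) container=(?P<container>\S+) msg=(?P<msg>.*)")

def group_by_container(log_lines):
    """Re-index instance-organized log lines by the container that emitted them."""
    by_container = defaultdict(list)
    for line in log_lines:
        m = LINE.match(line)
        if m:
            by_container[m.group("container")].append(m.group("msg"))
    return dict(by_container)

logs = [
    "instance=i-01 container=web-1 msg=GET /health 200",
    "instance=i-01 container=api-1 msg=POST /login 401",
    "instance=i-02 container=web-1 msg=GET /health 200",
]
```

Note that the grouping only works if the container runtime or logging driver stamps a container identifier onto each line in the first place — that tagging is the piece most instance-oriented tooling lacks natively.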

Incident response: Traditionally, if your IDS picks up a scan with the fingerprint of a known attack, the first step is usually to look at how traffic is flowing through the environment. Docker containers, by design, abstract away the host: you cannot easily track inter-container traffic, nor leave a machine up to inspect what is in memory (a stopped container retains no running memory). This can make it more difficult to trace the source of an alert and determine what data was potentially accessed.

The use of containers is not yet well understood by the broader infosec and auditor community, which is a potential audit and financial risk. Chances are that you will have to explain Docker to your QSA — and you will find few external parties that can help you build a well-tested, auditable Docker-based system. Before you implement Docker on a broad scale, talk to your GRC team about the implications of containerization for incident response and work to develop new runbooks. Alternatively, you can try Docker on a non-compliance-driven or non-production workload first.

Patching: In a traditional virtualized or public cloud environment, security patches are installed independently of application code. The patching process can be partially automated with configuration management tools, so if you are running VMs in AWS or elsewhere, you can update the Puppet manifest or Chef recipe and “force” that configuration to all your instances from a central hub. Or you can utilize a service like CoreOS to automate this process.

A Docker image has two components: the base image and the application image. To patch a containerized system, you must update the base image and then rebuild the application image. So in the case of a vulnerability like Heartbleed, if you want to ensure that the patched version of OpenSSL is on every container, you would update the base image and recreate the containers in line with your typical deployment procedures. A sophisticated deployment automation process (which is likely already in place if you are containerized) makes this fairly simple.
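The base-image/application-image relationship can be made concrete with a toy model: because an image digest is derived from all of its layers, base layers first, patching the base necessarily yields a new application image digest, which is what forces the rebuild-and-redeploy step. The layer names below are illustrative, not real image contents.

```python
# Toy model of the layered-image idea above: an image digest is computed over
# base layers plus application layers, so patching the base (e.g. a new SSL
# library) produces a new application image digest, forcing a rebuild.

import hashlib

def image_digest(base_layers, app_layers):
    """Content-addressed digest over all layers, base first (as in real images)."""
    h = hashlib.sha256()
    for layer in list(base_layers) + list(app_layers):
        h.update(layer.encode())
    return h.hexdigest()[:12]

app = ["copy-src", "pip-install"]
before = image_digest(["debian:9", "openssl-1.0.1f"], app)  # vulnerable base
after = image_digest(["debian:9", "openssl-1.0.1g"], app)   # patched base
```

The same property is why containerized patching folds neatly into an existing deployment pipeline: the patched digest is just another release to roll out.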

One of the most promising features of Docker is the degree to which application dependencies are coupled with the application itself, offering the potential to patch the system when the application is updated, i.e., frequently and potentially less painfully.

In short, to implement a patch, update the base image and then rebuild the application image. This requires systems and development teams to work closely together, with clear responsibilities on each side.

Almost ready for prime time

If you are eager to implement Docker and are ready to take on a certain amount of risk, then the methods described here can help you monitor and patch containerized systems. At Logicworks, we manage containerized systems for financial services clients who feel confident that their environments meet regulatory requirements.

As public cloud platforms continue to evolve their container support and more independent software vendors enter the space, expect these “canonical” Docker security methods to change rapidly. Nine months from now, or even three months from now, a tool could emerge that automates much of what is manual or complex in Docker security. When enterprises are this excited about a new technology, chances are that a whole new industry will follow.

The post Why Financial Services Companies Love Docker Containers appeared first on Logicworks.

Exploding cloud myths, part 1: Why the cloud does not solve all business continuity and DR pains

With all the hype about how cloud delivery brings new levels of flexibility and availability, many organisations may be falling for misleading reports claiming that moving to a cloud model somehow diminishes the need to worry about business continuity (BC) or disaster recovery (DR).

Nothing could be further from the truth.

Like the parody disaster movie Airplane!, it would take just one thing to go wrong and you could find yourself in quite a pickle, but without any of the humour. And you will need more than an inflatable autopilot to get you back on course.

Some cloud providers have done a good job clarifying that responsibility for continuity and recovery of the supported IT service remains with the customer. Cloud providers do offer some help – for example, providing multiple availability zones – and organisations should consider this at the outset, along with the benefits of a multi-vendor cloud strategy, to reduce the risk of any critical services going down.

Three different areas are needed to address BC and DR: continuity planning to ensure the relevant events, risks and business impacts are represented in the solution requirements; recovery solutions that are well designed, professionally managed and inspire confidence through effective testing; and crisis management to escalate incidents through a clear process, up to DR invocation and beyond to repatriation to the primary/BAU state.

While continuity planning is clearly the responsibility of the business, rather than the IT function, it is still important for IT to ensure its service impact assessments cater for all relevant risks. Cloud service dependencies may need more analysis, as a number of low-criticality business services sharing the same infrastructure-as-a-service can incur a combined impact even greater than that from a single high-criticality service.
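The combined-impact point above is easy to quantify in a service impact assessment: sum the impact scores of every service mapped to a shared platform and compare that total against single high-criticality services. The sketch below does exactly that; the service names, impact scores and platform mapping are invented for illustration.

```python
# Illustrative sketch of the point above: several low-criticality services on
# one shared IaaS platform can, together, outweigh a single high-criticality
# service on another. Scores, names and mappings are invented.

IMPACT = {"newsletter": 2, "hr-portal": 3, "expense-app": 2, "trading": 6}
PLATFORM = {
    "newsletter": "iaas-A",
    "hr-portal": "iaas-A",
    "expense-app": "iaas-A",
    "trading": "iaas-B",
}

def platform_impact(platform: str) -> int:
    """Combined business impact if one shared infrastructure platform fails."""
    return sum(score for svc, score in IMPACT.items() if PLATFORM[svc] == platform)
```

Here the three low-criticality services on `iaas-A` together score higher than the single critical service on `iaas-B` — the shared dependency, not any individual service, drives the risk.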

The role of the IT function – and particularly the IT incident desk – in crisis management is obvious and requires a close collaboration with the business. IT has to quickly recognise any underlying or growing crisis-level events and invoke the necessary action plans. Never underestimate the importance of executing the action plans in a controlled manner. A certain degree of panic is unavoidable in a crisis, severely limiting the ability to analyse situations and make the right decisions. Simple, broadly understood and well rehearsed action plans mitigate this risk.

Recovery solutions are the only area that falls clearly under IT’s remit. In this regard, it is also true that cloud suppliers are likely to manage technology platforms better than most of their customers – it is their core business, after all – but these platforms are merely ingredients in the bigger solution.

While we may expect small platform resilience improvements after moving to the cloud, solution availability and recovery remain the responsibilities of the IT function. The benefits of using a cloud model can include the use of multiple availability zones, or even multiple suppliers, as well as the management and support of these platform components.

AWS, Google, Oracle and others have promoted availability zones for some time. Recognising the importance of this customer need, Microsoft recently announced it would follow suit. The growth of multi-cloud strategies also reflects the customer requirement for flexibility between suppliers, including protecting against any single vendor outage or issues.
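The failover logic such multi-zone and multi-cloud strategies imply can be sketched very simply: prefer zones in priority order and fall back to a second provider when every zone of the first is unhealthy, invoking the DR plan only when nothing remains. The zone names and health map below are illustrative only.

```python
# Minimal sketch of multi-AZ / multi-cloud failover: take the first healthy
# zone in preference order, spanning two providers. Names are illustrative.

PREFERENCE = ["aws:eu-west-1a", "aws:eu-west-1b", "gcp:europe-west1-b"]

def pick_endpoint(health: dict) -> str:
    """Return the first healthy zone in preference order, or raise for DR."""
    for zone in PREFERENCE:
        if health.get(zone, False):
            return zone
    raise RuntimeError("total outage: invoke DR plan")
```

Even a sketch this small makes the article's point visible: the selection logic, health checks and DR escalation are the customer's responsibility — the provider only supplies the zones.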

Even when all this is taken into account, organisations must consider consolidated service impacts. Cloud providers’ SLAs will likely reflect minimal periods of downtime, but it is the organisation’s responsibility to address any repeated points of failure within the business impact analysis, such as multiple services relying on the same infrastructure provision service.

This becomes more critical as companies undergo digital transformation, leading to a growing number of business-critical applications becoming reliant on the same cloud services. Suddenly this minimal outage period is a major business headache.

IT’s domain extends beyond the data centre, and IT is increasingly responsible for workplace recovery through digital workspace ‘work anywhere’ practices. Extending this further, a business function may misinterpret its adoption of software-as-a-service as being separate from the IT function. However, as with other shadow IT, management of the service wrapper and service inter-dependencies must remain the remit of the IT function.

While IT is accountable for recovery solutions, and partially responsible for crisis management, it is clear that it cannot own business recovery. It is down to the individual business functions to manage their working practices to support service continuity through to complete business recovery. Since IT cannot address this, cloud certainly cannot.

The SIAM effect

With DR, and particularly BC, we are reminded that impacts on people and process are at least as high risk as those from technology. This is where the service integration and management (SIAM) and operating models have to be updated with an eye on end-to-end service delivery and skills coverage.

When IT service layers are supported across multiple internal or external suppliers, it is vital that all incidents are clearly tracked and consistently managed end-to-end through a single SIAM function. This is critical for effective crisis management as well as being a cornerstone of successful cloud adoption.

In fact, unless it is managed well, cloud adoption can have a negative impact on the people and process, driven by the false view that cloud is simply another technology platform. For example, cloud adoption will involve a significant shift in the knowledge base and focus areas of the IT staff. The IT function must recognise how this impacts DR and BC solutions and ensure access to the expertise it needs, whether internally or externally.

Although cloud adoption should never be thought of as a solution to an organisation’s DR and BC challenges, it certainly can play a positive role. The combination of an effective SIAM model, an evolved cloud operating model, diligent IT service impact assessments, and the use of availability zones across multiple cloud suppliers can help organisations ensure their business has the built-in resilience to survive outages and disasters. If IT and the business work hand in hand on the planning and execution of DR and BC strategies, you can leave your inflatable autopilot at home.

Editor’s note: This is the first in a three-part series exploring common cloud myths and misconceptions. Stay tuned to CloudTech for the next instalment.

KRACK & Adobe Flash Vulnerabilities: How to Protect Now & Prevent Later

As you may know, there were multiple major security vulnerabilities announced yesterday: one related to the WPA2 WiFi security protocol, dubbed “KRACK”, and another related to Adobe Flash. What happened, and how can you protect your environment from the KRACK and Adobe Flash vulnerabilities? Below is what we shared with our current Managed Services customers; even if you work with another provider or handle all of your IT system monitoring and management yourself, this may help you understand your risks and how to protect your environment.

WPA2 “KRACK” Vulnerability


What is it?: A critical vulnerability in the WiFi Protected Access II (WPA2) protocol which could allow someone within range of your wireless network to gain unauthorized access to traffic over that connection. 

This vulnerability applies to any device that utilizes the WPA2 protocol to establish secure connections, including Wireless Access Points, Endpoints (laptops, desktops), and Mobile Devices.

Microsoft has already released a patch and it is included in the October Security Rollup. For customers currently enrolled in our desktop patching program, this roll-up has been approved for immediate install. For customers enrolled in our Server patching program, we will apply the October Security Rollup per the normal patching schedule as servers typically will not have WiFi enabled. 

Further – some recommendations for your end users:

  • Avoid public WiFi (such as coffee shops, hotels, etc.)
  • When connected to WiFi, try to limit browsing to HTTPS sites
  • Consider using a VPN which will encrypt traffic end-to-end

While patching your endpoints will substantially mitigate the vulnerability, GreenPages will be watching for upcoming available patches and updates for the network devices in your environment in the coming days and weeks and will work with you to apply those expeditiously.

More specific details on this WiFi vulnerability can be found here.

Adobe Flash Vulnerability:

Adobe released a security update for a vulnerability that was recently discovered that could lead to remote code execution. 

  • If you are currently enrolled in a 3rd party patching program that includes Adobe Flash, we have already approved this patch for deployment to your environment.
  • If you are not enrolled, due to the risk potential for this vulnerability, it is highly recommended that you apply this patch to all devices in your environment. 

The Adobe Flash Security Bulletin can be found here.  

We’ll be writing a follow-up post next week about the KRACK and Adobe Flash vulnerabilities, once the dust has settled, to look at how the industry has reacted and responded – so please check back then.

To learn more about GreenPages Server, Desktop, 3rd Party Patching, and Managed Services Programs, please call 800-989-2989 and we can set up a call to discuss.

By:

Jay Keating, VP Cloud & Managed Services
Aaron Boissonnault, Director, Hybrid Cloud Operations
Steve Stein, Director, Client Services

Guest Blog Article: In the Crosshairs – Ransomware is Targeting Macs

Guest blog article from Peter Hale, Content Manager (Consumer), Acronis In the (usually) friendly rivalry between Mac and PC users, there was always one fact that gave the edge to Mac – they were safer because they weren’t vulnerable to viruses or ransomware. Unfortunately, that’s no longer the case: cybercriminals are casting a wider net and turning […]

The post Guest Blog Article: In the Crosshairs – Ransomware is Targeting Macs appeared first on Parallels Blog.

A look into IBM’s new services

IBM has launched two new services with an aim to bring more businesses to the cloud.

Currently, many businesses are stuck with legacy systems that make it difficult to migrate to the cloud, and as a result they miss out on the benefits the cloud brings. To overcome this, IBM has introduced two services designed to make the migration easier.

The two are IBM Cloud Migration Services and IBM Cloud Deployment Services. Cloud Migration Services, as the name implies, helps businesses get ready to move to the cloud: IBM works with you to assess your existing IT infrastructure and your goals, and creates a migration plan based on that assessment.

Cloud deployment services, on the other hand, are an automated offering that eases the deployment process. Essentially, the service models infrastructure and application solutions once and repeats that pattern to automate the entire process. It is available for public and hybrid cloud providers, including non-IBM products and services.
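The "model once, repeat the pattern" idea can be sketched in a few lines. This is a hedged illustration only: the `make_stack` function, the pattern fields, and the environment names are invented for the example and are not IBM's actual API.

```python
# Illustrative sketch of pattern-based deployment: define a solution
# model once, then instantiate it per target environment.
# All names here are hypothetical, not part of any IBM service.

def make_stack(pattern: dict, environment: str) -> dict:
    """Instantiate one deployment from a reusable pattern."""
    return {
        "name": f"{pattern['app']}-{environment}",
        "instances": pattern["instances"],
        "provider": pattern["provider"],   # could be a non-IBM cloud
        "environment": environment,
    }

# One model, repeated across environments with no manual rework.
pattern = {"app": "billing", "instances": 3, "provider": "any-cloud"}
stacks = [make_stack(pattern, env) for env in ("dev", "staging", "prod")]
```

Because the pattern is data rather than a hand-built script, adding a fourth environment is one more entry in the loop, which is where the provisioning-time savings described below come from.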

So, how are these services different from the large repertoire of services that IBM currently offers?

The biggest advantage is that these services make it less expensive and easier for companies to orchestrate their workloads, regardless of the underlying cloud delivery model. The cloud deployment services, in particular, are a next-generation set of tools that can automate based on existing workflows. As a result, the service provisioning time, including the time it takes to design, build and deploy, is greatly reduced.

Another advantage is that these services are tied to Watson, so businesses can leverage the power of this cognitive platform as well. This means that, over time, the system can learn from patterns and predict behaviour and solutions. It can be most helpful for identifying problems, self-learning, self-healing and avoiding disruptions to existing services.

From IBM’s perspective, both these services are the perfect addition to its existing portfolio of products and services. With more such innovative products, IBM could expand its client base, especially in emerging markets in the Asia Pacific region where the potential for cloud services still remains huge.

The post A look into IBM’s new services appeared first on Cloud News Daily.

Why combining access governance with authorisation management is key to identity success

In virtually every organisation or university, data is stored on multiple file servers throughout the network, often in a somewhat haphazard or random structure. Access to the data is likely just as unstructured and may put the organisation at risk by allowing employees access rights where none are required. Managing access to this unstructured data is incredibly difficult, resulting in a significant challenge when the time for an IT audit rolls around.

There are methods to bring order to this madness and maintain an audit trail, resulting in all access permissions being visible, and obtaining recommendations about how to structure and restrict access for optimal security. Software technology exists to allow for monitoring of all file actions and can maintain an audit trail of all the actions a user performs on the file server. For example, when a user modifies a file, deletes it, copies it, or moves it, a detailed record of who carried out what action in the file system and when can be made readily available.
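The audit trail described above, recording who carried out what action on which file and when, can be sketched minimally. The event shape and the `record` helper are hypothetical, for illustration only; real products capture this at the file-server level.

```python
# Minimal sketch of a file-action audit trail: each action (modify,
# delete, copy, move) is logged with the user, path, and timestamp.
# The AuditEvent shape and record() helper are invented for this example.
from dataclasses import dataclass
from datetime import datetime, timezone

@dataclass
class AuditEvent:
    user: str
    action: str       # e.g. "modify", "delete", "copy", "move"
    path: str
    timestamp: datetime

trail: list[AuditEvent] = []

def record(user: str, action: str, path: str) -> None:
    """Append one detailed record of who did what, where, and when."""
    trail.append(AuditEvent(user, action, path, datetime.now(timezone.utc)))

record("alice", "modify", r"\\fileserver\finance\budget.xlsx")
record("bob", "delete", r"\\fileserver\hr\old_policy.docx")

# An auditor can then answer "who touched the finance share?" directly:
finance_events = [e for e in trail if e.path.startswith(r"\\fileserver\finance")]
```

With events stored this way, the audit questions in the next paragraph (which rights a user holds, and whether they are ever exercised) become simple queries over the trail.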

This technology can provide an overview of all access rights, including what rights a user has or conversely, details on the users who have access rights to a particular file and how often, if ever, they exercise those rights. Finally, with the technology, it is possible to regularly collect and categorise all unstructured data and access rights per user. It is then possible to make a recommendation about what access rights should be cleaned up to keep the network structured and compliant.

Gartner estimates that more than 80 percent of business information is stored in an unstructured manner. The associated risks can be devastating if the wrong person accesses sensitive information for nefarious purposes. Authorisation management technology drastically reduces the complexity of access management protocols; without it, it is impossible to guarantee that data is effectively secured.

Authorisation management software provides direct insight into access privileges relevant to the file system through group memberships in Active Directory, ACLs and direct access. Likewise, it provides an audit trail of the actions each employee has performed, on which file, in which directory and at what time. Further, the technology allows managers to determine how a user received access to a folder or file: was it through an Active Directory group, or via some other method that may not be appropriate?
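Resolving *how* a user received access, via an Active Directory group named in the ACL or via a direct grant, can be sketched as a small lookup. The group memberships and ACL entries below are invented for illustration; a real tool would read them from the directory and the file server.

```python
# Hedged sketch: trace the route by which a user has access to a folder.
# Group and ACL data are hypothetical, for illustration only.

# Active Directory groups and their members.
ad_groups = {"Finance-RW": {"alice", "carol"}, "HR-RO": {"bob"}}

# ACL entries per folder: each trustee is a group name or an individual user.
acl = {r"\\fileserver\finance": ["Finance-RW", "dave"]}

def access_paths(user: str, folder: str) -> list[str]:
    """Return every route (group or direct grant) giving user access."""
    paths = []
    for trustee in acl.get(folder, []):
        if trustee == user:
            paths.append("direct grant")          # possibly inappropriate
        elif user in ad_groups.get(trustee, set()):
            paths.append(f"group: {trustee}")
    return paths
```

A reviewer can then flag direct grants for scrutiny: `access_paths("dave", r"\\fileserver\finance")` reports a direct grant, while alice's access is traceable to the Finance-RW group.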

Authorisation management is really the latest component of the complete access governance, or identity and access management, umbrella. With regard to security, automating operations and managing compliance and audits through access governance is now vital to an organisation's survival. In a sense, identity management solutions alone do not provide visibility across all of an organisation's systems; the authorisation management component supplies that visibility.

You can easily spot accounts where excessive access or access creep has occurred, and have the information needed to resolve potential issues. IT leaders or departmental managers can perform periodic account reviews and make informed decisions about who should retain, lose or be granted access to applications or data sets. Access governance also gives you an overview of every system available, and that information can be drilled down to the granular level.

In so doing, you can review accounts on particular systems or applications, and you can examine individual employees and review their access to various resources. Access governance also takes on stale accounts, orphan accounts, and shared accounts for which no single individual can take ownership and responsibility.
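The periodic review just described, flagging stale accounts (no recent sign-in) and orphan accounts (no known owner), can be sketched as follows. The account records and the 180-day threshold are invented for the example; real reviews would pull this from the directory and apply the organisation's own policy.

```python
# Illustrative account review: flag stale and orphan accounts.
# Records and thresholds are hypothetical, for illustration only.
from datetime import date, timedelta

accounts = [
    {"name": "alice",   "owner": "alice", "last_login": date(2017, 10, 1)},
    {"name": "svc-fax", "owner": None,    "last_login": date(2016, 3, 12)},
    {"name": "intern1", "owner": "carol", "last_login": date(2015, 6, 30)},
]

def review(accounts, today, stale_after_days=180):
    """Return (stale, orphan) account names for follow-up."""
    cutoff = today - timedelta(days=stale_after_days)
    stale  = [a["name"] for a in accounts if a["last_login"] < cutoff]
    orphan = [a["name"] for a in accounts if a["owner"] is None]
    return stale, orphan

stale, orphan = review(accounts, today=date(2017, 11, 1))
```

Each flagged account then gets a human decision, retain, lose or regrant access, rather than lingering unowned on the network.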

Access governance, when enhanced with authorisation management, allows IT leaders to conduct on-demand security audits to ensure network resources are accessible only to people with a bona fide reason to use them. As access governance and authorisation management continue to become integrated, organisations gain the ability to easily peer into every aspect of their network operation, creating unprecedented visibility to protect company data and defend it from the threat of outside hackers or employees with less than honourable intentions.