Demystifying the public or private cloud choice: Compliance, cost, and technical requirements

Every business wants to operate like a tech company today. Companies can’t thrive without improving IT, and executives must decide where to house and process data – under these circumstances, cloud strategies are increasingly nuanced.

A Forrester study found that just 4% of organisations run their applications exclusively in the public cloud today, while 77% use multiple types of cloud, both on-premises and off-premises.

So do you take the public or private cloud route? This can be a complicated question for companies, so let’s look at some starting considerations.


Most of IT’s budget and attention is focused on what used to be called “off-the-shelf applications”: email and calendaring, collaboration apps and industry-specific software. These applications are often slow-moving fodder in a cloud strategy and should be moved to public cloud first. Gartner expects that more than 70 percent of businesses will be substantially provisioned with cloud office capabilities by 2021.

Moving these types of applications off-premises frees up resources to focus on building out larger software development and delivery capabilities, the core asset for any successful digital transformation.

Complying with regulation

When collecting user data – location, personal information, credit card information – there is a whole list of compliance issues that will drive the cloud choice.

Sifting through the various regulations and barriers to decide whether to use a public or private cloud for storing a user’s data will throw up many questions that need answers. For instance, how do government policies shape operations and strategies? Certain safety measures or auditing points can create huge costs, and public cloud solutions might have done the work already. What rules and regulations govern the data being collected? Do we own the data? What is the geographical definition of ownership – does anyone else share it?

While compliance issues may seem like a productivity blocker, understanding why they exist and working with auditors will help determine business imperatives.

Regulations are aimed at avoiding nefarious uses such as selling personal data to advertisers or stockpiling profiling data to meddle in politics. The data management needs of the GDPR are driving many organisations to reconsider where they store user data. Often, running their software on private cloud affords more control. On the other hand, there are cases where using a public cloud service is better. Complying with all payment handling and tax regulations globally might be easier to achieve with public cloud-based services. Handling sensitive documents might also be better outsourced.

Of course, pure public cloud is rarely an option. Retailers, for instance, often have competitive concerns that drive them away from using Amazon Web Services (AWS), just as other cloud software companies might not want to use Google’s tools.

Pinning down technical requirements

Nailing down a comprehensive list of technical requirements creates a good checklist. These should include support for different database frameworks, load balancing, licensing ramifications and bandwidth limitations. For example, Chick-fil-A uses a mixture of private and public cloud to support operations by deploying small Kubernetes clusters in each store to support transactions.
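As a rough illustration of what provisioning such a small in-store workload could look like, here is a minimal Python sketch using the official Kubernetes client. The image name, store identifier, namespace and resource sizes are hypothetical, not details of Chick-fil-A’s actual setup.

```python
# Minimal sketch: create a small Deployment on a per-store Kubernetes cluster.
# All names, images and sizes below are illustrative assumptions.
from kubernetes import client, config

def deploy_store_service(kubeconfig_path: str, store_id: str) -> None:
    # Each store runs its own small cluster, targeted via its kubeconfig file.
    config.load_kube_config(config_file=kubeconfig_path)
    apps = client.AppsV1Api()

    container = client.V1Container(
        name="pos-transactions",
        image="registry.example.com/pos-transactions:1.0",  # hypothetical image
        resources=client.V1ResourceRequirements(
            requests={"cpu": "250m", "memory": "256Mi"},  # sized for edge hardware
            limits={"cpu": "500m", "memory": "512Mi"},
        ),
    )
    deployment = client.V1Deployment(
        metadata=client.V1ObjectMeta(name=f"pos-{store_id}"),
        spec=client.V1DeploymentSpec(
            replicas=2,  # just enough local redundancy for in-store transactions
            selector=client.V1LabelSelector(match_labels={"app": f"pos-{store_id}"}),
            template=client.V1PodTemplateSpec(
                metadata=client.V1ObjectMeta(labels={"app": f"pos-{store_id}"}),
                spec=client.V1PodSpec(containers=[container]),
            ),
        ),
    )
    apps.create_namespaced_deployment(namespace="default", body=deployment)

# Example call, with a placeholder kubeconfig path and store number:
# deploy_store_service("/etc/kubeconfigs/store-0042.yaml", "0042")
```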

When moving to public clouds, engineering teams lose certain operational controls and often need to re-architect their code. New runtime environments in public cloud often require new skills as well. However, none of these concerns are impossible to solve.

Cloud costs

Different cloud solutions don’t lend themselves to easy comparisons the way new phones do: run down a checklist of features and specs, then weigh them against the price tag. Cloud architectures are complex and need to be projected years into the future. The process is similar to buying solar panels: the upfront cost hurts, but businesses are playing a longer game with the investment. However, to keep the analogy going, businesses need to be sure they are staying put with their strategy, features and hardware, such as servers and an ops team. Those can quickly become painful losses if, in a couple of years, overhead costs haven’t been assessed correctly.
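To make that longer game concrete, here is a toy Python sketch that projects cumulative costs for a private versus a public cloud option over several years. The figures are invented purely for illustration and are not vendor pricing.

```python
# Toy cost projection: compare cumulative private-cloud spend (upfront hardware
# plus an ops team) against pay-as-you-go public cloud. Figures are illustrative.
def cumulative_costs(years, private_upfront, private_annual, public_annual):
    """Yield (year, private_total, public_total) for each year of the horizon."""
    for year in range(1, years + 1):
        yield year, private_upfront + private_annual * year, public_annual * year

for year, private, public in cumulative_costs(
        years=5, private_upfront=300_000, private_annual=150_000, public_annual=240_000):
    cheaper = "private" if private < public else "public"
    print(f"Year {year}: private £{private:,} vs public £{public:,} -> {cheaper} is cheaper so far")
```

With these made-up numbers the public option wins early on but the private option overtakes it within the horizon, which is exactly the kind of crossover the analysis needs to surface before committing to servers and an ops team.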

There are some basic starting points:

  • What features of public cloud would be better than private cloud – and how can real financial value be assigned to them?
  • How useful are machine learning tools in the cloud being considered? A retailer could use such services easily to start targeting ads or upselling recommended items, and so they might choose Google’s cloud. Or maybe for regulatory reasons, or because the retailer can do it better themselves, they’ll do this processing on their own, private cloud.

The focus on business outcomes is what should drive the choice of public versus private cloud. It’s all too easy to look at either option based purely on cost. When IT is a core business enabler, the best approach is to consider how much money the chosen service can make the company. Focus then shifts to what type of infrastructure enables software teams.


A platform that focuses on delivery speed, enabling better, more productive and profit-driving software, is preferable. In some cases, this might mean modernising an existing, private cloud-based stack. Often, organisations operate on notions of how software should be developed and run that are five or ten years old, or even decades old. Shifting to a more contemporary, agile approach can drive dramatic results.

Seeing through the clouds

A business has to understand what it is building – it’s surprising how many engineering teams still build in the dark.

Companies should ask themselves: how much traffic will the application get? Will it only be used internally? Who affects the load? What data handling and process regulations need to be followed? Will the application branch out to other areas of the business? If it touches the public – will it be mobile?

The questions don’t end. At a point of transition like the one we’re seeing in IT, it’s good to err towards maximising flexibility to provide the most options in the future as needs change. Over the next five years (if not longer) businesses will experiment with new strategies and business models, and they’ll need an IT partner who is equally deft and ready for whatever exciting adventure comes next.

Intel spies $200bn in ‘data-centric’ opportunity combining cloud, edge and AI

Intel has upped its total addressable market (TAM) for what it calls the ‘data-centric’ era of computing from $160 billion to $200bn – with Navin Shenoy, president and general manager of the company’s data centre group, saying it is “the biggest opportunity in the history of the company.”

Shenoy was speaking at the company’s Data-Centric Innovation Summit in Santa Clara, and took to a company editorial to outline his plans.

“I find it astounding that 90% of the world’s data was generated in the past two years – and analysts forecast that by 2025 data will exponentially grow by 10 times and reach 163 zettabytes,” Shenoy wrote. “But we have a long way to go in harnessing the power of this data.

“A safe guess is that only about 1% of it is utilised, processed and acted upon – imagine what could happen if we were able to effectively leverage more of this data at scale.”

Shenoy noted how the confluence of edge computing, mapping, cloud, computer vision and artificial intelligence (AI) was making this opportunity more apparent. Naturally, the company has a variety of products which aim to make the process more seamless. Silicon photonics, combining a silicon integrated circuit and a semiconductor laser, aims to provide high performance computing in hyperscale data centres, while Intel’s Optane DC persistent memory focuses on quicker performance with greater affordability.

What’s more, Intel added that more than $1 billion in revenue came from its processors designed for artificial intelligence workloads.

“We’ve entered a new era of data-centric computing,” Shenoy explained. “The proliferation of the cloud beyond hyperscale and into the network and out to the edge, the impending transition to 5G, and the growth of AI and analytics have driven a profound shift in the market, creating massive amounts of largely untapped data.

“When you add the growth in processing power, breakthroughs in connectivity, storage, memory and algorithms, we end up with a completely new way of thinking about infrastructure,” he added.

“To help our customers move, store and process massive amounts of data, we have actionable plans to win in the highest growth areas, and we have an unparalleled portfolio to fuel our growth – including performance-leading products and a broad ecosystem that spans the entire data-centric market.”

Autonomous driving was cited as a key example of how these technologies will converge – Shenoy described it as having life-saving potential – and it makes sense given Intel’s other bets in this area. But perhaps a small note on the maths is required. Last June, Intel said the ‘passenger economy’ – the economic value created by autonomous cars, plus the potential gains from time saved driving – had the potential to hit $7 trillion across the market. Earlier that year, the company said it had ‘unwavering confidence’ in its chances of winning the autonomous driving market.

You can read Shenoy’s editorial in full here.

Making the cloud a safe space: Organisational security, identity, and more

The cloud has brought about many benefits for organisations and adoption is understandably increasing. Gartner earlier this year projected that the worldwide public cloud services market would grow 21.4 percent in 2018, whilst Forrester found that global cloud services revenues totalled £112.5 billion in 2017 and are predicted to reach £137.2 billion by the end of 2018. With this huge growth in cloud adoption, effective security is paramount. Recent cyber-attacks have highlighted that organisations across all industries and of all sizes are the target of ongoing attacks.

With all the advantages that cloud brings, including flexibility, efficiency and strategic organisational value, it is certainly a development many ambitious businesses are looking to utilise. It can provide the platform that enables a modern organisation to grow, expand into new markets and coordinate its strategy and plans. With many organisations now encouraging remote and home working, and operating internationally with diverse, multi-cultural teams, the cloud is increasingly important in helping organisations collaborate, organise, share information (securely) and scale up.

Some of the biggest companies in the world, for example Google, Microsoft and Amazon, are committing massively to the cloud, underlining the belief that the technology has huge commercial potential. These companies expect to see significant growth in the market, which will fuel their future financial performance. Indeed, in Microsoft’s most recent financial results in July, cloud was credited with driving a record fourth-quarter result for the company.

It is another indication that the cloud is growing and adoption is increasing. Even Luddites will – perhaps more slowly than most – come to realise the huge benefits cloud can bring to an organisation, provided that security is kept front of mind. Ineffective and security-compromising use of the cloud is worse than not using the cloud at all. As such, proper planning is crucial.

With any new technology or system, it is vital that proper procedures are put in place to keep data safe and secure, and to ensure employees use the system properly and maximise the impact it can have. Training needs to support these efforts. The cloud is no different. It is IT’s job to make sure that the cloud creates the ROI and efficiency gains that senior executives will be looking for. This means taking the time to plan the implementation and then investing in training and support for employees.

Security has to be one of the main considerations when it comes to using the cloud. As with any IT system, there is the risk of a breach and loss of data. The cloud does not eradicate this vulnerability; it changes the dynamic, meaning CISOs and their teams need to be on the front foot when it comes to keeping the cloud secure. A successful breach will be a major setback for adoption of the technology within an organisation, especially if management already sees the cloud as a cost rather than an opportunity and a gain.

To ensure cloud has the backing of management, therefore, there must be a laser focus on security. There won’t be much credit when the cloud remains secure – that is expected – but there will be a major downside if it goes wrong. With all this in mind, let’s focus further on some of the key issues and questions around cloud security:

What is the impact of the cloud in terms of organisational security?

Cloud introduces new security risks to organisations because publicly exposed APIs are the underlying infrastructure that makes the cloud and cloud applications run. Unlike the HTTP/S view of websites, which is largely choreographed for user experience and constrained in what is exposed or exploitable, APIs are built with fully exposed controls to support orchestration, management and automated access to the environment and applications. APIs provide a rich target for exploitation and add another dimension to the challenge of expanding boundaries, one not seen in traditional enterprise on-premises perimeters.

Is security in the modern digital world like an open city, as opposed to traditional corporate computing, which is more like a castle?

Attackers will take the path of least resistance, and employees – and IT in many instances – will unwittingly help them. There will always be employees who will fall prey to phishing, surf exploited sites, or use free Wi-Fi from a coffee shop to open the door for the attacker. Also, common infrastructure weaknesses are the ‘exploit of choice’ for landing a beachhead within an organisation, such as using an SQL query to find cached credentials or exploiting a publicly exposed, unpatched server. And then there is always the fallback of first-initial-plus-last-name with password1234.
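As a small, defensive illustration of that last point, the Python sketch below flags accounts whose passwords match trivially guessable patterns before an attacker gets the chance to. The accounts, passwords and wordlist are all made up.

```python
# Toy screening: flag accounts whose passwords are common or derived from the
# username. The data below is fictional and the wordlist deliberately tiny.
COMMON_PASSWORDS = {"password1234", "welcome1", "letmein", "summer2018"}

def guessable(username: str, password: str) -> bool:
    # Combine a common-password list with username-derived variants.
    candidates = COMMON_PASSWORDS | {username, username + "123", username + "2018"}
    return password.lower() in {c.lower() for c in candidates}

accounts = {
    "jsmith": "password1234",   # first-initial-plus-last-name with a weak password
    "akhan": "T7#vR!q9zL2m",    # long, random secret
}

for user, pwd in accounts.items():
    print(f"{user}: {'GUESSABLE - force a reset' if guessable(user, pwd) else 'ok'}")
```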

How do we stop hackers from taking over the identities of victims in order to gain access to systems? Any real-life examples that demonstrate this?

There is no way to completely prevent intrusion through exploited identities. The best that can be done is to slow attackers down through good identity hygiene: implementing multi-factor authentication, using longer passphrases instead of passwords, deprecating expired employee accounts and monitoring access logs. However, the industry is making improvements around identity and trust by using multi-context analysis strategies that include time of access, country of origin, the host computer in use, and other behavioural analyses to add weight to identity.
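A rough sketch of that multi-context idea might look like the following Python snippet. The signals, weights and threshold are invented purely to illustrate how several weak indicators can be combined before access is granted.

```python
# Toy risk scoring for a login attempt based on contextual signals, echoing the
# multi-context analysis described above. Weights and thresholds are illustrative.
from dataclasses import dataclass

@dataclass
class LoginAttempt:
    hour_of_day: int      # 0-23, local time
    country: str
    known_device: bool
    mfa_passed: bool

def risk_score(attempt: LoginAttempt, home_country: str = "GB") -> int:
    score = 0
    if attempt.hour_of_day < 6 or attempt.hour_of_day > 22:
        score += 2        # access outside normal working hours
    if attempt.country != home_country:
        score += 3        # unexpected country of origin
    if not attempt.known_device:
        score += 2        # unrecognised host computer
    if not attempt.mfa_passed:
        score += 4        # no second factor presented
    return score

attempt = LoginAttempt(hour_of_day=3, country="RU", known_device=False, mfa_passed=False)
action = "block and alert" if risk_score(attempt) >= 6 else "allow"
print(f"risk={risk_score(attempt)} -> {action}")
```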

At the end of the day, organisations need to put in place robust procedures and make employees accountable for keeping networks safe and secure. The cloud introduces new security risks for organisations that will need to be managed effectively by the CISO; failure to do so could be very costly to an organisation, both financially and reputationally. The cyber-attacks that have generated headlines around the world recently – think WannaCry and Petya – are notable examples of what is at stake.

Then there is the recently implemented GDPR, affecting any company that operates within the EU. Inadequate data protection procedures under this regulation lead to increased penalties and fines for companies. This should focus the minds of executives on the challenges of implementing robust cyber defences, but too often this is not the case.

I would not want to see the adoption of cloud held back by fears over security; instead, I believe cloud should be adopted by organisations that are ambitious to grow and collaborate effectively to solve problems and drive business performance. The penalties resulting from the GDPR and other regulations, for example, should not be a deterrent to implementing new technologies and systems. To me, the focus should instead be on planning effectively and then implementing a solution that works – by which I mean one that is safe, secure and enables improved operational performance.

Why NVMe protocols are important for new data centre workloads

Today, data is the new fuel for business. New-age technologies like artificial intelligence, the Internet of Things, blockchain, and machine learning all need data to be stored, processed and analysed. The amount of data generated has grown exponentially with the rise in internet users over the past several years. According to Domo’s ‘Data Never Sleeps’ report, 2.5 quintillion bytes of data are generated every day.

This data tsunami challenges IT infrastructure to provide low latency and higher storage performance, as many enterprises need real-time data processing and faster access to stored data. Accessing high-performance SSDs through legacy storage protocols like SATA or SAS is not enough, as these protocols still impose higher latency, lower performance, and quality issues.

NVMe-enabled storage infrastructure

NVMe is a high-performance, scalable host controller interface protocol designed for accessing high-performance storage media such as SSDs over the PCIe bus. NVMe is the next-generation technology replacing the SATA and SAS protocols, and offers features required by enterprises that focus on processing high volumes of real-time data.

The main differentiator between NVMe, SATA, and SAS is the number of commands supported in a single queue. SATA devices support 32 commands and SAS supports 256 commands per queue, while NVMe supports up to 64K commands per queue, and up to 64K queues. Queues are designed to take advantage of the parallel processing capabilities of multi-core processors.
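Using the figures above, a quick back-of-the-envelope Python comparison shows how much more parallelism NVMe’s queue model allows; single queues are assumed for SATA (AHCI) and SAS in this sketch.

```python
# Rough comparison of outstanding-command capacity, using the per-queue limits
# quoted above. Single queues are assumed for SATA (AHCI) and SAS.
protocols = {
    "SATA (AHCI)": (1, 32),
    "SAS": (1, 256),
    "NVMe": (64 * 1024, 64 * 1024),
}

for name, (queues, commands_per_queue) in protocols.items():
    total = queues * commands_per_queue
    print(f"{name:11s}: {queues:>5} queue(s) x {commands_per_queue:>5} commands = {total:,} outstanding commands")
```

The point of the arithmetic is simply that NVMe’s command ceiling is orders of magnitude higher, which is what lets each CPU core keep its own queues busy in parallel.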

Source: http://www.nvmexpress.org/wp-content/uploads/NVMe_Overview.pdf

With NVMe, existing applications are accelerated and real-time workload processing becomes possible within NVMe-enabled infrastructure, whether that infrastructure sits in a legacy data centre or at the edge. This performance is achieved because NVMe consumes significantly fewer CPU cycles than SATA or SAS, where CPU consumption is considerably higher. This allows businesses to get maximum returns from their existing IT infrastructure.

NVMe-based infrastructure for IoT workloads

NVMe-based systems will be the key element in processing IoT and machine learning workloads.

Multiple sensors streaming data at high rates into databases require high bandwidth. The ingested data then needs to be processed and analysed at a high compute rate, with the results returned to the devices. This entire operation needs a high-performance, low-latency network, plus a storage ecosystem that can respond at the same rate as the network. NVMe over Fabrics suits such IoT use cases, using message-based commands to transfer data between a host system and a target SSD or system over a network (Ethernet, Fibre Channel or InfiniBand).

Conclusion

Any enterprise using SSDs will benefit from adopting the NVMe protocol. NVMe-based infrastructure is ideal for use cases such as SQL/NoSQL databases, real-time analytics, and high performance computing (HPC). NVMe enables new applications for machine learning, IoT databases, and analytics, as well as real-time application performance monitoring and security audits. NVMe offers scalable performance and low-latency options that optimise the storage stack – and is architected to take full advantage of multi-core CPUs, which will drive rapid advances in the coming years.



IBM’s 2018 data breach study shows why we’re in a Zero Trust world now

  • Digital businesses that lost less than 1% of their customers due to a data breach incurred a cost of $2.8M, and if 4% or more were lost the cost soared to $6M.
  • U.S. based breaches are the most expensive globally, costing on average $7.91M with the highest global notification cost as well, $740,000.
  • A typical data breach costs a company $3.86M, up 6.4% from $3.62M last year.
  • Digital businesses that have security automation can minimize the costs of breaches by $1.55M versus those that do not ($2.88M versus $4.43M).
  • 48% of all breaches are initiated by malicious or criminal attacks.
  • Mean-time-to-identify (MTTI) a breach is 197 days, and the mean-time-to-contain (MTTC) is 69 days.

These and many other insights into the escalating costs of security breaches are from the 2018 Cost of a Data Breach Study sponsored by IBM Security with research independently conducted by Ponemon Institute LLC. The report is downloadable here (PDF, 47 pp. no opt-in).

The study is based on interviews with more than 2,200 compliance, data protection and IT professionals from 477 companies located in 15 countries and regions globally who have experienced a data breach in the last 12 months. This is the first year the use of Internet of Things (IoT) technologies and security automation are included in the study. The study also defines mega breaches as those involving over 1 million records and costing $40M or more. Please see pages 5, 6 and 7 of the study for specifics on the methodology.

The report is a quick read and the data provided is fascinating. One can’t help but reflect on how legacy security technologies, designed to protect digital businesses decades ago, aren’t keeping up with the scale, speed and sophistication of today’s breach attempts. The most common threat surface attacked is compromised privileged credential access. 81% of all breaches exploit identity, according to an excellent study from Centrify and Dow Jones Customer Intelligence, CEO Disconnect is Weakening Cybersecurity (31 pp, PDF, opt-in).

The bottom line from the IBM, Centrify and many other studies is that we’re in a Zero Trust Security (ZTS) world now and the sooner a digital business can excel at it, the more protected they will be from security threats. ZTS begins with Next-Gen Access (NGA) by recognizing that every employee’s identity is the new security perimeter for any digital business.

Key takeaways from the study include the following:

US-based breaches are the most expensive globally, costing on average $7.91m, more than double the global average of $3.86m

Nations in the Middle East have the second-most expensive breaches globally, averaging $5.31M, followed by Canada, where the average breach costs a digital business $4.74M. Globally a breach costs a digital business $3.86M this year, up from $3.62M last year. With the costs of breaches escalating so quickly and the cost of a breach in the U.S. leading all nations and outdistancing the global average 2X, it’s time for more digital businesses to consider a Zero Trust Security strategy. See Forrester Principal Analyst Chase Cunningham’s recent blog post What ZTX Means For Vendors And Users, from the Forrester Research blog for where to get started.

The number of breached records is soaring in the US, the third-leading nation for breached records, at 6,850 records above the global average

The Ponemon Institute found that the average size of a data breach increased 2.2% this year, with the U.S. leading all nations in breached records. It now takes an average of 266 days to identify and contain a breach (Mean-time-to-identify (MTTI) a breach is 197 days and the mean-time-to-contain (MTTC) is 69 days), so more digital businesses in the Middle East, India, and the U.S. should consider reorienting their security strategies to a Zero Trust Security Model.

French and US digital businesses pay a heavy price in customer churn when a breach happens, among the highest in the world 

The following graphic compares abnormally high customer churn rates, the size of the data breach, average total cost, and per capita costs by country.

US companies lead the world in lost business caused by a security breach with $4.2m lost per incident, over $2m more than digital businesses from the Middle East

Ponemon found that U.S.-based digital businesses pay an exceptionally high cost for customer churn caused by data breaches. Factors contributing to the high cost of lost business include abnormally high customer turnover, the high costs of acquiring new customers in the U.S., and loss of brand reputation and goodwill. U.S. customers also have a myriad of competitive options, and their loyalty is more difficult to preserve. The study finds that, thanks to current notification laws, customers have a greater awareness of data breaches and higher expectations regarding how the companies they are loyal to will protect customer records and data.

Conclusion

The IBM study foreshadows an increasing level of speed, scale, and sophistication when it comes to how breaches are orchestrated. With the average breach globally costing $3.86M, and breach costs and lost customer revenue soaring in the U.S., it’s clear we’re living in a world where Zero Trust should be the new mandate.

Zero Trust Security starts with Next-Gen Access to secure every endpoint and attack surface a digital business relies on for daily operations, and limit access and privilege to protect the “keys to the kingdom,” which gives hackers the most leverage. Security software providers including Centrify are applying advanced analytics and machine learning to thwart breaches and many other forms of attacks that seek to exploit weak credentials and too much privilege. Zero Trust is a proven way to stay at parity or ahead of escalating threats.

Samsung Heavy Industries chooses AWS to help take shipbuilding into the cloud

In another example of cloud computing infiltrating key enterprises, shipbuilding firm Samsung Heavy Industries is moving to Amazon Web Services (AWS) as its preferred cloud provider.

The company says it wants to be seen as a ‘cloud-first maritime business’, with Samsung using a variety of AWS’ services. These include EC2 and S3, naturally, alongside Amazon’s relational database service RDS, AWS Key Management Service, and governance and compliance tool CloudTrail.

By putting sensors in a variety of devices and crunching the data those systems generate, all backed by cloud technologies, organisations in the shipping and maritime sector can make significant gains in efficiency and productivity. Take the Port of Rotterdam as an example. In February the port, Europe’s largest by cargo tonnage, said it was signing up with IBM to provide greater insights on water and weather conditions, as well as to manage traffic and reduce waiting times at the port.

“We’re digitising our shipping fleet by using the most advanced technologies in the world to enhance our approaches to shipbuilding, operations, and delivery, and chose AWS as our preferred cloud provider to help us quickly transform Samsung Heavy Industries into a cloud-first maritime business,” said Dongyeon Lee, Samsung Heavy Industries director of ship and offshore performance research centre.

“By leveraging AWS, we’ve successfully released several smart shipping systems so that our customers can manage their ships and fleets more efficiently, and we continue to test new capabilities for ocean-bound vessel navigation and automation,” added Lee. “AWS delivers a highly flexible environment, with the broadest and deepest portfolio of cloud services, that is ideal for accelerating research and development across the company, and it has enabled our developers and data scientists to bring new ideas to market at an unprecedented pace.”

AWS, whose revenues went up 49% year over year to $6.1 billion according to the most recent quarter’s financial report, has announced a flurry of customer wins recently. Alongside Samsung, Formula 1, Ryanair, and Major League Baseball were all confirmed as AWS users over the past three months.

Sponsorship opportunities at FinTechEXPO New York now open

FinTech is now part of the CloudEXPO New York program. Financial enterprises in New York City, London, Singapore, and other world financial capitals are embracing a new generation of smart, automated FinTech that eliminates many cumbersome, slow, and expensive intermediate processes from their businesses. Accordingly, attendees at the upcoming 22nd CloudEXPO | DXWorldEXPO, November 12-13, 2018 in New York City, will find fresh new content in two new tracks under the FinTechEXPO New York Blockchain Event banner, covering FinTech and blockchain as well as machine learning, artificial intelligence and deep learning. FinTech brings efficiency as well as the ability to deliver new services and a much-improved customer experience throughout the global financial services industry. FinTech is a natural fit with cloud computing, as new services are quickly developed, deployed, and scaled on public, private, and hybrid clouds. More than US$20 billion in venture capital is being invested in FinTech this year. We’re pleased to bring you the latest FinTech developments as an integral part of our program.


Oracle marks ‘major milestone’ in autonomous strategy as Ellison takes more swipes at AWS

Oracle’s CTO and executive chairman Larry Ellison announced the launch of the company’s latest autonomous database service, this one geared to transaction processing (ATP), last night – but a recent report about Amazon’s plans also caught his eye.

At an event in California, Ellison responded to a story, originally broken by CNBC, which claimed that Amazon was planning to move completely away from Oracle’s databases by 2020.

Responding to an analyst question around customers moving off Oracle on the company’s Q218 earnings call back in December, Ellison said: “Let me tell you who’s not moving off of Oracle – a company you’ve heard of that gave us another $50 million this quarter. That company is Amazon. Our competitors, who have no reason to like us very much, continue to invest in and run their entire business on Oracle.”

Ellison reiterated the $50m figure and told attendees he doubted Amazon would reach its reported target. “They don’t like being our best reference,” he said. “They think of themselves as a competitor, so it’s kind of embarrassing when Amazon uses Oracle, but they want you to use Aurora and Redshift.”

Aurora and Redshift, of course, are Amazon’s primary database products for relational databases and data warehousing respectively. Ellison also took the opportunity to tout Oracle’s greater performance compared with its rival – 12 times faster than Aurora on pure transaction processing for its autonomous transaction processing database, and more than 100 times faster for a mixed workload.

Oracle’s press materials accompanying the ATP release described it as ‘a major milestone in the company’s autonomous strategy’, and Ellison did not hold back in his praise of a technology he described as ‘revolutionary’ at last year’s OpenWorld.

“This machine learning-based technology not only can optimise itself for queries, for data warehouses and data marts, but it also optimises itself for transactions,” said Ellison. “It can run batch programs, reporting, Internet of Things, simple transactions, complex transactions, and mixed workloads. Between these two systems [for data warehousing and transaction processing], the Oracle autonomous database now handles all of your workloads.”

Another barb at Amazon – and it’s worth noting that Andy Jassy is not averse to firing shots back during his keynote speeches – came when Ellison described Oracle’s autonomous database as ‘truly elastic’. It was, he said, truly pay as you go, with automatic provisioning and scaling, servers added and deleted while running, and serverless operation when not running.

“Amazon’s databases can’t do that,” he told the audience. “They can’t dynamically add a server when the system is running, they can’t dynamically add network capacity, they can’t dynamically take a server away when there is not demand and it’s not serverless when it’s idle. [Oracle] is a truly elastic system – you only pay for the infrastructure that you use.”

Ellison added that full autonomy – ‘nothing to learn, nothing to do’ became something of a mantra during the presentation – meant Oracle was “as simple to use as the simplest databases on the planet.”

CloudTech has reached out to AWS for comment and will update this piece accordingly.


Hammering home public cloud shared security obligations: The importance of education

Public cloud customers need to become clearer on what their responsibility is for securing the data and applications they host with public cloud providers. I believe there is a misunderstanding about how much responsibility the likes of AWS, Azure, and Google Cloud Platform have for securing their customers. Their platforms are certainly secure, and migrating workloads into the cloud can be much more secure than running them in on-premises data centres; however, organisations do have a responsibility for securing their workloads, applications, and operating systems.

Even though every customer’s journey to the cloud is unique, and there are different levels of understanding this model, I hear some very common questions repeatedly. “Why do I need to put my own security in the cloud? I thought it was already secure?” “Why can’t I just move my virtual security appliances in the cloud?” “What does this mean for my network firewall? How do I ensure connectivity and access for my employees?” “How do I secure a cloud application? Aren’t Office 365 and Salesforce already secure?”

If you find yourself asking questions like this, you may want to talk with an experienced partner to help with your migration. Until then, here are some considerations that can help clear things up.

Shared responsibility

The public cloud operates on a shared responsibility model. This means that the cloud providers give you the responsibility and flexibility to secure what you bring to the cloud. Therefore, without question, it is your responsibility as a customer to configure, patch and layer security on the applications, workloads and operating systems you spin up. Configuration includes identity management, access levels, and security groups. Customers are also responsible for data protection and the availability of workloads.
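As one concrete, hedged example of configuration that sits on the customer’s side of that line, the Python sketch below uses boto3, the AWS SDK for Python, to create a security group that only admits HTTPS from a known address range. The region, VPC ID, group name and CIDR are placeholders.

```python
# Sketch: customer-side configuration under the shared responsibility model.
# Creates a security group allowing inbound HTTPS only from a known CIDR range.
import boto3

ec2 = boto3.client("ec2", region_name="eu-west-1")  # region is an assumption

def create_locked_down_group(vpc_id: str) -> str:
    group = ec2.create_security_group(
        GroupName="web-tier-https-only",              # hypothetical name
        Description="Allow HTTPS from the corporate range only",
        VpcId=vpc_id,
    )
    ec2.authorize_security_group_ingress(
        GroupId=group["GroupId"],
        IpPermissions=[{
            "IpProtocol": "tcp",
            "FromPort": 443,
            "ToPort": 443,
            "IpRanges": [{"CidrIp": "203.0.113.0/24",  # documentation range as a stand-in
                          "Description": "Corporate egress range"}],
        }],
    )
    return group["GroupId"]

# create_locked_down_group("vpc-0123456789abcdef0")  # placeholder VPC ID
```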

Public cloud providers are only responsible for the physical security, global and regional connectivity, and power and cooling of the data centers that they own.

This model maintains the highest possible efficiencies for the cloud provider, and relieves the customer of the burden of providing the infrastructure such as a data centre or the server hardware that provides scalability on demand.

The model also enables customers to customise their cloud security to meet the needs of their unique workloads. Application and data security are in the hands of the people who know them best, rather than being left to a public cloud provider to provide a cookie cutter protocol.

Public cloud providers work with vendors to ensure that the solutions available will operate properly on their platforms. AWS, Azure, and GCP partnership programs ensure that vendors have access to tools and specifications needed to design their products for optimum performance on each platform. Once the vendor's products have met the standards set by the provider, a certification or competency is awarded. This shows customers that the solution is part of the fabric of the public cloud.

The public cloud fabric

When we talk about the public cloud fabric, we are talking about native integration into the platform. Consider this: the shared security model means that the cloud provider owns the infrastructure for security. Visibility, monitoring, remediation, and protection are all delivered in the public cloud through APIs and tools like CloudWatch and Insights. These are the things that constitute the fabric of the public cloud.
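To illustrate leaning on that fabric rather than on a lifted-and-shifted appliance, here is a small boto3 sketch that raises a CloudWatch alarm when an instance’s CPU runs hot. The instance ID, SNS topic and threshold are illustrative placeholders.

```python
# Sketch: use the platform's native monitoring fabric (CloudWatch) to watch a
# workload. Instance ID, topic ARN and threshold are illustrative placeholders.
import boto3

cloudwatch = boto3.client("cloudwatch", region_name="eu-west-1")

def alarm_on_high_cpu(instance_id: str, topic_arn: str) -> None:
    cloudwatch.put_metric_alarm(
        AlarmName=f"high-cpu-{instance_id}",
        AlarmDescription="CPU above 80% for 10 minutes",
        Namespace="AWS/EC2",
        MetricName="CPUUtilization",
        Dimensions=[{"Name": "InstanceId", "Value": instance_id}],
        Statistic="Average",
        Period=300,                  # five-minute datapoints
        EvaluationPeriods=2,         # two consecutive breaching periods
        Threshold=80.0,
        ComparisonOperator="GreaterThanThreshold",
        AlarmActions=[topic_arn],    # e.g. notify an SNS topic watched by the security team
    )

# alarm_on_high_cpu("i-0123456789abcdef0", "arn:aws:sns:eu-west-1:123456789012:security-alerts")
```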

Native integration into a cloud platform requires that a solution be built on a cloud-centric architecture and engineered specifically for that public cloud. While it may be tempting to use a virtualised version of your on-premises security in the cloud, these VMs simply aren't designed to take advantage of what you're buying.

They may seem to work, but they lack certain functionality. Some common questions I hear are: can the VM auto-scale for performance and capacity? Can it be provisioned and deployed within minutes, on either AWS or Azure? Does it offer pay as you go, metered billing, and other flexible consumption models? Is it built on a cloud-centric architecture?

These are the features that will distinguish an on-premises solution from a 'cloud ready' solution. To take full advantage of what the cloud has to offer, you will need to have a solution that is part of the cloud fabric.

The numbers don’t lie

My belief that many organisations misunderstand this shared responsibility model is supported by recent research into the public cloud customer market. In a recent study conducted by research firm Vanson Bourne, Public Cloud – Benefits, Strategies, Challenges, and Solutions, 77 percent of organisations reported the belief that public cloud providers are responsible for securing customer data in the cloud. 68 percent of decision makers are under the impression that cloud providers are responsible for securing customer applications as well. More concerning still, nearly a third (30 percent) of organisations have not added additional security layers to their public cloud deployments.

More secure than on-premises

Many organisations realise that their cloud deployments can be inherently more secure than on-premises deployments because cloud providers collectively invest more in security controls than any single organisation could on its own. However, the organisations benefiting the most from public cloud are those that understand that their public cloud provider is not responsible for securing data or applications, and are augmenting security with support from third-party vendors.

Google Cloud secures support for NVIDIA’s Tesla P4 GPUs with more machine learning goodness

Google has announced its support for NVIDIA’s Tesla P4 GPUs to help customers with graphics-intensive and machine learning applications.

The Tesla P4, according to NVIDIA’s data sheet, is ‘purpose-built to boost efficiency for scale-out servers running deep learning workloads, enabling smart responsive AI-based services.’ The P4, which is run on NVIDIA’s Pascal architecture, has a GPU memory of 8GB, and memory bandwidth of 192 GB per second.

While not at the same performance level as the V100, which runs on the Volta rather than the Pascal architecture, the P4 accelerators, now in beta, represent what Google calls a ‘good balance of price/performance for remote display applications and real-time machine learning inference.’

“Graphics-intensive applications that run in the cloud benefit greatly from workstation-class GPUs,” wrote Ari Liberman, Google Cloud product manager in a blog post. “We now support virtual workstations with NVIDIA GRID on the P4 and P100, allowing you to turn any instance with one or more GPUs into a high-end workstation optimised for graphics-accelerated use cases.

“Now, artists, architects and engineers can create breathtaking 3D scenes for their next blockbuster film, or design a computer-aided photorealistic composition,” Liberman added.

As is often the case with these announcements, a brand new, shiny customer was rolled out to explain how Google’s services had improved their operations. Except this one wasn’t quite as new; regular readers of this publication may remember oilfield services provider Schlumberger from Google’s GPU price reduction news back in November. The company said it was using Google’s workstations, powered by NVIDIA GPUs, to help visualise oil and gas scenarios for its customers.

The link with machine learning capabilities is again an irresistible one, with Google saying the P4 is ideal for use cases such as visual search, interactive speech, and video recommendations.

Whither NVIDIA, however? The big cloud providers are certainly a key opportunity for the graphics processor maker. Speaking at the end of last year, the company said its V100 GPU had been chosen by every major cloud firm, adding that the applications for GPU servers had ‘now grown to many markets.’