View from the airport: DataWorks Summit 2019

Keumars Afifi-Sabet

22 Mar, 2019

Are you from the Hortonworks side or the Cloudera side? It’s a question I found myself asking a lot at this year’s DataWorks Summit, the first major event since the two companies completed their $5.2 billion merger just months ago. Naturally, a marriage of this scale throws up a tidal wave of questions. Unfortunately, there were no answers to be found in Barcelona.

It’s difficult to put my finger on the mood in the air, but it was closest to uncertainty. Given that DataWorks Summit has conventionally been a Hortonworks event, having the ‘new’ Cloudera spearhead it was jarring, not just for the press but also for the comms team. The reason? The May Washington DataWorks Summit, as well as Cloudera’s two Strata Data Conferences, had already been planned and organised months before the merger was tied up, so the company effectively has to go through the motions with its 2019 events.

But it was especially confusing given the Hortonworks branding appears to have been discarded entirely. Instead, the two companies, now operating under the Cloudera umbrella, have undergone a complete image refresh, with a newly-designed logo and several buzzy slogans to boot.

A new image is always something to get excited about. But the fact Cloudera was handing out metal pins emblazoned with the company’s old logo summed up the feeling quite effectively. Its Twitter page, too, is still displaying the company’s old logo at the time of writing.

Meanwhile, the event was fronted by the firm’s chief marketing officer (CMO) Mick Hollison. This underpinned the company’s almost singular focus on ‘image’ this week, which on one level made sense. Earnings day the week before made for grim reading. Revenue grew, sure. But so did expenditure, by quite a lot. This doubled losses to more than $85 million. Yet Cloudera is setting itself a target of becoming a billion dollar company before the end of the year, and reinforced its ambitions to target only the largest companies.

But it didn’t seem appropriate that a significant portion of the top brass was left at home. Anybody who could give serious answers about Cloudera’s financial performance, or specific details about the merger, was not available to chat. It hit me during the main keynote, when it became clear CMO Hollison would be the only Cloudera voice addressing the press, analysts and delegates that morning. At Cloudera’s first major public event since the merger, it raised the question: where was the CEO?

It’s not fair to say that everybody with prominence was left at home. Hilary Mason, Cloudera’s resident data scientist and the lead on its research division, dazzled on the evolving nature of AI. Meanwhile, there were some interesting insights to gain on data warehousing, open source, and GDPR. The thematic substance of DataWorks Summit 2019 was actually quite positive despite the company’s considered efforts to push its new marketing slogans, namely ‘from the edge to AI’ and ‘the enterprise data cloud’.

But the merger, undoubtedly, was at the forefront of everyone’s minds, with many questions lingering. Now that it has mostly been completed, it was interesting to hear that discussions with Hortonworks were actually underway for three-and-a-half years before the two firms tied the knot.

Yet we still don’t fully know what its flagship service, named the Cloudera Data Platform (CDP), will look like. We do, however, know it’s a mash-up of Hortonworks and Cloudera’s legacy systems, Cloudera Distribution Including Apache Hadoop (CDH) and Hortonworks Data Platform (HDP).

Neither do we know when this will launch; Cloudera officially says it will come within the next two quarters, but one customer, Swiss insurance firm Zurich, told Cloud Pro it was coming in June. And while customers are allowed to keep the legacy platforms until around 2022, for Zurich, currently in the process of migrating from HDP 2.0 to 3.0, does this mean a second big transition in quick succession? The aim is, of course, to transition all customers to CDP eventually.

The future is uncertain. So much so that nobody really knows if the DataWorks Summits held in 2019 will be the last ever. Nevertheless, this presented a fantastic opportunity for Cloudera to address the world post-merger, and take on its major challenges head-on.

But this was an opportunity missed. The fact its most senior staff were left at home spoke volumes, even though the substance of the conference was for the most part engaging. It became clear over the course of the event that there hasn’t been, and probably won’t be, a honeymoon period for the ‘new’ Cloudera as it begins to find its feet in a turbulent market.

Portworx secures $27 million series C funding, unveils update to container management platform

Portworx, a provider of storage and data management software for containers, has announced a $27 million (£20.5m) series C funding round – and touted record revenue and customer growth in the process.

The company’s series C, putting its total funding at $55.5m, was co-led by Sapphire Ventures and the venture arm of Mubadala Investment Company, with new investors Cisco Investments, HPE and NetApp joining Mayfield Fund and GE Ventures in the round.

The company noted its customer base had expanded by more than 100%, with total bookings going up 50% just between the third and fourth quarters of 2018. Among the new customers is HPE, with the infrastructure giant eating its own dog food in the funding stakes by investing after purchasing the Portworx Enterprise Platform.

Portworx also took the opportunity to unveil the latest flavour of its platform, Portworx Enterprise 2.1. New features include enhanced disaster recovery as well as a role-based security option, with organisations able to access controls on a per-container volume basis integrated with corporate authorisation and authentication.

The opportunity for Portworx and other companies of its ilk is a big one. As this publication noted in September, a study from the company found that more than two thirds of companies were ‘making investment’ in containers, a sign that the enterprise had caught up. Ronald Sens, director at A10 Networks, noted at the time that while Kubernetes is “clearly becoming the de facto standard”, certain areas, such as application load balancing, are not part of its service.

“Kubernetes alone is not sufficient to handle critical data services that power enterprise applications,” said Murli Thirumale, CEO and co-founder of Portworx in a statement. “Portworx cloud-native storage and data management solutions enable enterprises to run all their applications in containers in production.

“With this investment round the cloud-native industry recognises Portworx and its incredible team as the container storage and data management leader,” Thirumale added. “Our customer-first strategy continues to pay off.”

Monitoring cloud app activity for better data security: Five key tips

Digitisation has dramatically changed how work gets done. Business-critical apps and data are a keystroke away, no matter where an employee is or what time it is. Perhaps it is this familiarity with data that makes employees feel so connected to it that, when they switch jobs, they often take some of it with them. Maybe it’s why most of them don’t think this is a criminal act.

Whatever the reasoning for this willful exfiltration of data, a lack of security can impact an organisation’s growth and ability to retain a competitive advantage. But with more visibility into insider threats, organisations can drive bad actors out and improve their overall security posture.

Below are the top five events that organisations monitor cloud applications for and how paying attention to them can help to promote good security hygiene within a company.

Look at login activity

Digging into who is logging in, from where and when, is likely to turn up some surprises related to application interaction. Terminated users who have not been properly deprovisioned may be able to gain access to sensitive data after employment, in the case of a departed employee, or at the end of a contract with a third party. Login activity can also tell you a user’s location, hours, devices and more – all of which can uncover potential security incidents, breaches or training opportunities.

Organisations can keep data safe from those who shouldn’t have access anymore, like a former employee or contractor, by monitoring for inactive user logins. Login activity can also tell you whether employees are logging in after hours or from a remote location. This may be an indicator of an employee working overtime – but it may also be a red flag for a departing employee logging in after hours to steal data, or for compromised credentials.
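As a minimal sketch of the two checks described above, the snippet below flags logins by deactivated users and logins outside business hours. The record format, field names and business-hours window are illustrative assumptions; a real feed would come from your cloud application’s audit log.

```python
from datetime import datetime

# Hypothetical login records; field names are assumptions for illustration.
logins = [
    {"user": "alice", "time": "2019-03-20T22:47:00", "active": True},
    {"user": "bob",   "time": "2019-03-20T10:15:00", "active": False},
]

BUSINESS_HOURS = range(8, 18)  # 08:00-17:59 local time (an assumption)

def flag_login(record):
    """Return a list of reasons this login deserves review."""
    reasons = []
    if not record["active"]:
        reasons.append("login by deactivated user")
    hour = datetime.fromisoformat(record["time"]).hour
    if hour not in BUSINESS_HOURS:
        reasons.append("login outside business hours")
    return reasons

for record in logins:
    for reason in flag_login(record):
        print(f"{record['user']}: {reason}")
```

In practice the same flags would feed an alerting pipeline rather than `print`, and "after hours" would be tuned per team and time zone.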

Examine what’s being exported

Exporting reports is an easy way for employees to extract large amounts of sensitive data from Salesforce and other cloud applications. Users can run reports on nearly anything within Salesforce, from contacts and leads to customers. And those reports can be exported for easy reference and analysis.

The other side of the coin is that this ability can also make a company vulnerable to data theft and breaches. Departing employees may choose to export a report of customers, using the list to join or start a competitive business.

But if a company is monitoring for exports, this activity helps to:

  • Secure sensitive customer, partner and prospect information, which will increase trust with your customers and help meet key regulations and security frameworks (e.g., PCI-DSS).
  • Find employees who may be taking data for personal or financial gain and stop the exfiltration of data before more damage occurs.
  • Lessen the severity and the cost of a data breach by more quickly spotting and remediating the export activity.
  • Find likely cases of compromised credentials and deactivate compromised users.
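One simple way to operationalise export monitoring is to compare each export against the user’s own historical baseline. The sketch below is illustrative; the record format, threshold multiplier and data source are all assumptions, not a description of any particular product.

```python
from statistics import mean

# Hypothetical export audit history: rows exported per past event, per user.
export_history = {
    "alice": [120, 95, 140],
    "bob":   [80, 100, 90],
}

def is_anomalous_export(user, rows_exported, multiplier=5):
    """Flag an export far larger than the user's historical average."""
    history = export_history.get(user)
    if not history:
        return True  # no baseline yet: review by default
    return rows_exported > multiplier * mean(history)

# bob suddenly exports 50,000 rows - far beyond his baseline of ~90
print(is_anomalous_export("bob", 50_000))
```

A departing employee pulling an entire customer list stands out sharply against a baseline like this, which is exactly the pattern the bullets above aim to catch.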

Research all reports being run

Companies focus their security efforts on which reports are being exported, but simply running a report could create a potential security issue. The principle of least privilege dictates that people be given only the minimum permissions necessary to complete their job – and that applies to data that can be viewed. But many companies grant broad access across the organisation, even to those whose job does not depend on viewing specific sensitive information.

Job scope is an important consideration in which reports are appropriate. If you look at which reports have been run, top report runners and report volume, you can track instances where users might be running reports to access information that’s beyond their job scope. Users may also be running – but not necessarily exporting – larger reports than they normally do or than their peers do.

A third benefit comes from monitoring for personal and unsaved reports, which can help close any security vulnerability created by users attempting to exfiltrate data without leaving a trail. Whether it’s a user who is attempting to steal the data, a user who has higher access levels than necessary, or a user who has accidentally run the report, monitoring for report access will help you spot any additional security gaps or training opportunities.
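Tracking "top report runners and report volume", as described above, can be sketched with a simple outlier check. The counts and threshold below are made-up assumptions; a median baseline is used because it stays robust even when the outlier itself is in the data.

```python
from statistics import median

# Hypothetical counts of reports run per user this week,
# as would be pulled from the application's audit log.
reports_run = {"alice": 12, "bob": 9, "carol": 11, "dave": 64}

baseline = median(reports_run.values())  # robust to the outlier itself

# Flag anyone running several times more reports than the typical user.
# The 4x multiplier is an illustrative assumption to tune per organisation.
outliers = [u for u, n in reports_run.items() if n > 4 * baseline]
print(outliers)  # ['dave']
```

Whether "dave" is exfiltrating data, over-privileged, or simply doing month-end reporting is then a question for a human reviewer, which matches the "security gaps or training opportunities" framing above.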

Keep track of creation and deactivation

Creating and deactivating users is a part of managing users. Organisations can monitor for deactivation – which, if not done properly after an employee leaves the organisation, may result in an inactive user gaining access to sensitive data or an external attacker gaining hold of their still-active credentials. For this and other cloud applications, a security issue may also arise when an individual with administrative permissions creates a “shell,” or fake user, under which they can steal data. After the fact, they can deactivate the user to cover their tracks.

Monitoring for user creation is an additional step security teams can take to keep an eye on any potential insider threats. And by keeping track of when users are deactivated, you can run a report of deactivated users within a specific time frame and correlate them with your former employees (or contractors) to ensure proper deprovisioning. Monitoring for creation and/or deactivation of users is also required by regulations like SOX and frameworks like ISO 27001.
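The deprovisioning check described above amounts to a set difference between two feeds. The sketch below assumes hypothetical feeds and names: deactivations from the app’s audit log and an offboarding list from HR.

```python
# Users deactivated in the cloud app (from the audit log) versus
# people HR has offboarded. All names are illustrative.
deactivated_in_app = {"carol", "dave"}
offboarded_by_hr = {"carol", "dave", "erin"}

# Anyone offboarded but never deactivated is a deprovisioning gap:
# their still-active credentials are exactly the exposure described above.
not_deprovisioned = offboarded_by_hr - deactivated_in_app
print(sorted(not_deprovisioned))  # ['erin']
```

Running this reconciliation on a schedule, and keeping its output, also produces the kind of audit trail SOX and ISO 27001 reviews ask for.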

Check changes in profiles and permissions

What a user can and can’t do in cloud applications is regulated by profiles and permissions. For example, in Salesforce, every user has one profile but can have multiple permissions sets. The two are usually combined by using profiles to grant the minimum permissions and access settings for a specific group of users, then permission sets to grant more permissions to individual users as needed. Profiles control object, field, app and user permissions; tab settings; Apex class and Visualforce page access; page layouts; record types; and login hours and IP ranges.

Permission level varies by organisation. Some give all users advanced permissions; others grant only the permissions that are necessary for that user’s specific job roles and responsibilities. But with over 170 permissions in Salesforce, for instance – and hundreds or thousands of users – it can be difficult to grasp the full scope of what your users can do in Salesforce.
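The profile-plus-permission-sets model described above can be pictured as a set union: one profile granting the baseline, with any number of permission sets layered on top. Everything below – the profile, set and permission names – is invented for illustration and is not Salesforce’s actual schema.

```python
# Minimal sketch of the model described above. All names are hypothetical.
profiles = {
    "standard_user": {"read_accounts", "read_contacts"},
}
permission_sets = {
    "export_reports": {"export_reports"},
    "manage_leads": {"edit_leads"},
}
users = {
    "alice": {"profile": "standard_user", "sets": ["export_reports"]},
}

def effective_permissions(user):
    """Union of the user's profile permissions and all permission sets."""
    info = users[user]
    perms = set(profiles[info["profile"]])
    for s in info["sets"]:
        perms |= permission_sets[s]
    return perms

print(sorted(effective_permissions("alice")))
# ['export_reports', 'read_accounts', 'read_contacts']
```

Computing effective permissions per user like this, then diffing against job roles, is one way to get a grip on the "full scope of what your users can do" at the scale of hundreds of permissions and thousands of users.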

Monitor that data

Digital transformation has brought about great freedom and productivity, enabling employees to work from anywhere at any time. Cloud-based business apps have become the norm, with data flowing to and fro along a countless number of endpoints connected to employees with different levels of responsibility.

To oversee all this activity, many companies today are monitoring user interactions with cloud apps and data. This creates greater visibility, which helps both your organisation and your customers have greater peace of mind that security measures are in place to protect data.

Alibaba Cloud looks to integrated and intelligent future at Beijing summit

Alibaba Cloud has taken the opportunity provided by its most recent Beijing Summit to elaborate on its near 10-year history – and explore a more integrated and intelligent future.

The company noted that its cloud arm was ‘becoming the main business focus of Alibaba Group’ and that cloud adoption was ‘expected to continue and become more immersive in the traditional sectors across China.’  

“Alibaba has championed cloud computing in China over the past 10 years and has been at the forefront of rapid technology development,” said Jeff Zhang, CTO at Alibaba Group and president of Alibaba Cloud. “In the future, our highly compatible and standards-based platform will allow SaaS partners to onboard easily and thrive.

“The offerings will also be enriched by our continued investment in research through the Damo Academy, [which] will align data science with the development of our products,” Zhang added. “To empower all participants in our ecosystem, we will boost the integrated development of technology, products and services on our open platform.”

The event saw three primary products announced: the SCC-GN6, claimed to be the most powerful super-computing bare metal server instance issued by the company to date; a cloud-native relational database service, PolarDB, and SaaS Accelerator, a platform for partners to build and launch SaaS applications as well as utilise Alibaba’s consultancy.

The past six months have seen a period of expansion for Alibaba Cloud. A new data centre complex in Indonesia was launched in January, while the London site opened its doors in October. The company says its IaaS dominance in China is such that it commands a larger market share than the second to eighth largest players put together.

Synergy Research noted in June that the top five cloud infrastructure players in China were all local companies, while across the whole of Asia Pacific (APAC) Alibaba ranked second, behind AWS. In October, data and analytics firm GlobalData noted how Alibaba was gaining across APAC as a whole, saying it was a ‘force to be reckoned with’ and ‘betting big on emerging markets such as India, Malaysia and Indonesia while competing with others in developed markets.’

Asia Pacific remains a region of vastly differing expectations when it comes to cloud computing, as the Asia Cloud Computing Association (ACCA) found in its most recent Cloud Readiness Index report. Those at the top end, such as Singapore and Hong Kong, have overall rankings – based on connectivity, cybersecurity and data centre infrastructures among others – ahead of the US and UK. India, China and Vietnam meanwhile, the bottom three nations, scored lower than 50%.

China itself, according to the ACCA report, had made progress despite retaining its modest ranking from 2016, but its lowest scoring areas – power sustainability and broadband quality – reflected the issues of nationwide adoption of cloud technologies across such a vast area. The report did note that the Chinese government “continues to devote considerable fiscal resources to the development and improvement of infrastructure… a move that will undoubtedly pay off in the next few years.”

Practical cloud considerations: Security and the decryption conundrum

Compute in the cloud may be cheap, but it isn't free. Most of today’s apps are delivered via secure HTTP. That means TLS, or the increasingly frowned-upon SSL. It means cryptography, which has traditionally translated to performance problems.

Thanks to advances in technology, CPUs are now incredibly fast, and much client (and server-side) hardware natively integrates what was once specialised cryptographic hardware. This means that, on a per-connection basis, cryptography is no longer the speed issue it once was.

But that doesn't mean cryptography is no longer a source of performance and operational expense.

Applications today are not composed of a single endpoint. There are multiple intermediaries and proxies through which a message must travel before that "single endpoint" is ever encountered: security and access control, load balancing and routing endpoints. Each needs to inspect the message – in the clear – in order to execute its designated role in the complex dance that is the modern data path.

Here is where the argument that cryptography isn't as expensive starts to fall apart. On its own, a single endpoint introduces very little delay. However, when decryption is repeated at every endpoint in the data path, those individual delays add up to something more noticeable and, particularly in the case of public cloud, operationally expensive.

Cryptography is naturally a computationally expensive process. That means it takes a lot more CPU cycles to encrypt or decrypt a message than it does to execute business logic. In the cloud, CPU cycles are analogous to money being spent. In general, it's an accepted cost because the point is to shift capital costs to operational expense.

But the costs start to add up if you are decrypting and encrypting a message several times. You are effectively paying for the same cryptographic process multiple times. What might be computed to cost only a penny when executed once suddenly costs five pennies when executed five times. Do the math for the hundreds of thousands of transactions over the course of a day (or an hour) and the resulting costs are staggering.

Also remember that each CPU cycle consumed by cryptographic processing is a CPU cycle not spent on business logic. This means scaling out sooner than you might want to, which incurs even more costs as each additional instance is launched to handle the load.
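The penny arithmetic above scales up quickly. The figures in this back-of-the-envelope sketch are illustrative assumptions, not measurements of any real platform:

```python
# Back-of-the-envelope cost of repeated decryption along the data path.
# Every number here is an assumption chosen for illustration.
cost_per_decrypt = 0.000001      # dollars of CPU time per message, per hop
hops_that_decrypt = 5            # e.g. WAF, load balancer, router, etc.
messages_per_day = 100_000_000

daily_cost = cost_per_decrypt * hops_that_decrypt * messages_per_day
print(f"${daily_cost:,.2f} per day")  # $500.00 per day
```

With a single decrypting hop the same traffic would cost a fifth as much, which is the whole case for the "decrypt once" principle that follows.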

Suffice to say that "SSL everywhere" should not result in "decrypt everywhere" architectures in the cloud.

Decrypt once

To reduce the costs and maximise the efficacy of the CPUs you're paying for, it is worth the time to design your cloud-based architecture on a "decrypt once" principle. "Decrypt Once" means you should minimise the number of endpoints in the data path that must decrypt and re-encrypt messages in transit.

Naturally, this requires forethought and careful consideration of the different application services you're using to secure and scale applications. If you aren't subject to regulations or requirements that demand end-to-end encryption, architect your data path such that messages are decrypted as early as possible to avoid additional cycles wasted on decryption later. If you are required to maintain end-to-end encryption, combining services whenever possible will net you the most efficient use of compute resources.

Combining services – e.g. load balancing with a web application firewall – on a single platform means reducing the number of times you need to decrypt messages in transit. It also has the added advantage of reducing the number of connections and time on the network, which translates into performance benefits for users and consumers. But the real savings are in CPU cycles that aren't spent on repeated decryption and re-encryption.

It may seem a waste of time to consider the impact of encryption and decryption for an app that's lightly used today. The pennies certainly aren't covering the cost of the effort. But as apps grow and scale and live over time, those pennies are going to add up to amounts that are impactful. Like pennies, microseconds add up. By considering the impact of cryptography across the entire data path, you can net benefits in the long run for both users and the business.

Oracle boosts Slack partnership with customer experience integrations

Clare Hopping

21 Mar, 2019

Oracle has unveiled a deeper relationship with Slack, combining its Customer Experience (CX) Cloud with the collaboration platform to make it easier for teams to communicate about sales opportunities.

The tie-up means sales staff will be able to work more closely with others in their organisation, such as account executives, product specialists and contract managers – those responsible for collaboratively closing deals – by sharing knowledge of scenarios and building upon each other’s experience.

Staff can have discussions and share deal details on one user interface, making it a much more streamlined process, rather than having to access separate messaging and CRM platforms to find the answers they need.

“As customer expectations continue to change, the way teams work with each other and the role individuals play on those teams, are changing as well,” said Stephen Fioretti, vice president of product management for Oracle CX Sales and Service.

“To support employees as their roles evolve and change, organizations need technology that can enable new ways of working.”

Oracle and Slack’s partnership will also have benefits for customer service staff, providing those in direct contact with customers the communication channel to collaborate on service requests in real time. For example, the contact centre staff can talk to the support team to find a solution to service requests faster.

“The latest integrations between Oracle and Slack will help sales and customer service collaborate more effectively and build on our commitment to providing CX professionals with the tools they need to meet the needs of the Experience Economy,” Fioretti added.

How to Sponsor @DevOpsSUMMIT | #CloudNative #Serverless #DevOps #APM #DataCenter #Monitoring #Kubernetes

The widespread success of cloud computing is driving the DevOps revolution in enterprise IT. Now as never before, development teams must communicate and collaborate in a dynamic, 24/7/365 environment. There is no time to wait for long development cycles that produce software that is obsolete at launch. DevOps may be disruptive, but it is essential.

DevOpsSUMMIT at CloudEXPO expands the DevOps community, enables a wide sharing of knowledge, and educates delegates and technology providers alike.


HPE aims to deliver on hybrid cloud consultancy prowess with Right Mix Advisor launch

Hewlett Packard Enterprise (HPE) has been focused on the long road of what it calls the ‘innovative enterprise’ and building up its cloud consultancy with the acquisitions of RedPixie and Cloud Technology Partners. Now, it is ready to put that knowledge to the test.

The company has announced the launch of HPE Right Mix Advisor, which is claimed to be an industry-first product recommending the ‘ideal hybrid cloud mix’ to organisations.

The product is based on more than one thousand hybrid cloud ‘engagements’, as well as automated discovery. One recent example saw nine million IP addresses across six data centres analysed, alongside data from configuration management databases and external cloud vendor pricing models, to provide a roadmap for which workloads would fit private and public cloud respectively.

Cloud Technology Partners, acquired in 2017 for its AWS expertise, and RedPixie a year later for Azure, both fall under HPE Pointnext, the company’s services and consultancy unit. Using this experience, HPE claims, an action plan for hybrid cloud can take only weeks as opposed to months. According to the company’s own work, migrating the right workloads can lead to up to a 40% reduction in cost of ownership.

“I like to tell customers there are a thousand things they could be doing – but they need to find the 10 most impactful things they should start on tomorrow morning,” said Erik Vogel, HPE Pointnext global vice president for hybrid cloud in a statement. “HPE Right Mix Advisor helps organisations get the insight and methodology that they need to drive innovation, deliver predictable optimised customer experiences and remain competitive.”

HPE’s interest in hybrid cloud has been well documented. The company’s Discover Madrid event in November was to unveil the next part of its ‘composable strategy’ – putting together on-premise hardware, software and cloud into a single server platform. In June, HPE announced that it was investing $4 billion into what it called the intelligent edge; technologies to deliver personalised user experiences and seamless interactions in real-time.

As Antonio Neri, HPE president and CEO explained at the time, it’s all about the data – and where you invest in it. “Companies that can distil intelligence from their data – whether in a smart hospital or an autonomous car – will be the ones to lead,” he said. “HPE has been at the forefront of developing technologies and services for the intelligent edge, and with this investment, we are accelerating our ability to drive this growing category for the future.”

Rockstar Kubernetes Faculty Announced | @KubeSUMMIT #CloudNative #Serverless #DataCenter #Monitoring #Containers #DevOps #Docker #Kubernetes

As you know, enterprise IT conversations over the past year have often centered upon the open-source Kubernetes container orchestration system. In fact, Kubernetes has emerged as the key technology — and even primary platform — of cloud migrations for a wide variety of organizations.

Kubernetes is critical to forward-looking enterprises that continue to push their IT infrastructures toward maximum functionality, scalability, and flexibility.

As they do so, IT professionals are also embracing the reality of Serverless architectures, which are critical to developing and operating real-time applications and services. Serverless is particularly important as enterprises of all sizes develop and deploy Internet of Things (IoT) initiatives.


Sponsorship Opportunities at @CloudEXPO | #Cloud #IoT #Blockchain #Serverless #DevOps #Monitoring #Docker #Kubernetes

CloudEXPO has been the M&A capital for Cloud companies for more than a decade, with memorable acquisition news stories that came out of the CloudEXPO expo floor. DevOpsSUMMIT New York faculty member Greg Bledsoe shared his views on IBM’s Red Hat acquisition live from the NASDAQ floor. Acquisition news was announced during CloudEXPO New York, which took place November 12-13, 2019 in New York City.
