DataWorks Summit 2019: Cloudera allays post-merger fears with ‘100% open-source’ commitment


Keumars Afifi-Sabet

20 Mar, 2019

The ‘new’ Cloudera has committed to becoming a fully open-source company, having followed an open-core model prior to its $5.2 billion merger with former rival Hortonworks.

All 32 of the current open source projects found between both Hortonworks and Cloudera’s legacy platforms will remain available as cloud-based services on its new jointly-developed Cloudera Data Platform (CDP).

There were fears Cloudera’s influence could undermine the “100% open source” principles that underpinned Hortonworks, given the former had previously been just an ‘open-core’ company. This amounted to a business model in which limited versions of Cloudera projects were offered in line with open source principles, with additional features available at a cost.

Cloudera first offered reassurances over its commitment to open source on a conference call with journalists last week, held to explain the firm’s dismal Q4 2018 financial results, which saw the company’s net losses double post-merger to $85.5m.

The commitment, which Cloudera elaborated on at the company’s DataWorks Summit 2019 in Barcelona this week, has coincided with a complete rebranding of the company logo and further detail on its vision for an ‘enterprise data cloud’.

This, according to the firm’s chief marketing officer Mick Hollison, includes multi-faceted data analytics and support for every conceivable cloud model from multiple public clouds to hybrid cloud to containers like Kubernetes.

It would also be underpinned with a common compliance and data governance regime, and would retain a commitment to “100% open source”, with Hollison insisting several times to journalists at a press briefing the term “isn’t just marketing fluff”.

Cloudera’s vice president for product management Fred Koopmans told journalists at the same press briefing that both companies’ existing customers valued the principles of ‘openness’ – which start with open APIs.

“They don’t view that there is one vendor that’s going to serve all of their needs today and in the future,” Koopmans said. “Therefore it’s critical for them to have open APIs so they can bring in other software development companies that can extend it and enhance the platform.

“What open source provides them is no dead-ends; if they’re trying to develop something, and there’s a particular feature they need, they always have the option of going and adding a feature with their own development team. So this is a huge driver for a lot of our larger customers in particular.”

Cloudera also used the DataWorks Summit to outline its intentions to exclusively chase the biggest enterprise customers, insisting the firm is only interested in tackling big data problems for large companies.

CDP, the embodiment of the new vision, is due to reach customers as a public cloud-only platform later this year, with a private cloud iteration to follow in late 2019 or early 2020. The platform merges Cloudera’s Cloudera Distribution including Apache Hadoop (CDH) with Hortonworks’ Hortonworks Data Platform (HDP).

Microsoft adds ‘Infinite Whiteboards’ to Teams


Bobby Hellard

20 Mar, 2019

Microsoft has added a host of features to its collaboration tool Teams to celebrate the app’s second birthday.

The updates include customisable backgrounds, new security features and support for users who are deaf or hard of hearing. But the most interesting addition is a camera-based feature that lets users share whiteboard content.

‘Infinite Whiteboards’ is a digital canvas for all participants to work on. Content can even be transferred from a physical whiteboard using a USB camera and Microsoft’s Intelligent Capture function, which lets one user capture content from their whiteboard, re-focus or resize it, and enhance the images and text so remote attendees can clearly see what is being brainstormed. If changes are made to the board during the call, remote attendees will still see them in real-time.

The feature will be available later in the year, but it’s one of a number of new functions for Teams that offer a different way to collaborate – making it more inclusive. Not everyone digests information in the same way and images may offer a better explanation to a complex project.

Similarly, Teams has also added support for users who are deaf or hard of hearing with live captions: essentially real-time subtitles that let users stay in sync with the conversation.

Also added to the service are secure private channels, where a user can restrict or enable participants’ access to certain channels without needing to create separate teams. There are also ‘Information Barriers’, which limit who can see and share certain content, adding a layer of privacy and security for sensitive or private content.

“This week marks the second anniversary of the worldwide launch of Microsoft Teams,” said Lori Wright, GM of Microsoft 365. “Over the past two years, Teams has grown significantly in both new capabilities and customer usage, as the hub for teamwork that brings people together and fosters a culture of engagement and inclusion. Today, more than 500,000 organizations, including 91 of the Fortune 100, are using Teams to collaborate across locations, time zones and languages.

“Microsoft Teams is improving workplace collaboration by helping organizations move from an array of disparate apps to a single, secure hub that brings together what teams need including chat, meetings and calling, all with native integration to the Office 365 apps. Users can customize and extend their experience with third-party apps, processes and devices, giving them the tools they need to get work done.”

Avaya focuses on AI natural conversation and resolution with extended Google Cloud integration


Clare Hopping

20 Mar, 2019

Avaya has boosted its partnership with Google by integrating the company’s cloud services with its contact centre offering.

Google’s Cloud Contact Center AI is now embedded into Avaya’s contact centre products, powering its virtual agents, agent assist and conversational topic modeling.

Its virtual agent tool uses Google Contact Center AI to hold human-like conversations with customers, gleaning as much information as possible before passing them on to a human contact centre operative.

Customers can choose when they go through to a real person, but because Avaya’s platform collects data about these exchanges, it can refine its approach and decide whether it would be better to pass a customer over to a human agent sooner.

Agent Assist uses Google’s Cloud Platform to provide relevant insights to staff to help them fix customer problems. It can present information to staff using either voice or text exchanges and determine how the contact centre agent should respond. This is especially useful if an exchange becomes heated, for example.

Avaya’s Conversational Topic Modeling works in the background, constantly collating data about the topics customers are talking about. It helps agents understand the most popular topics at any time, helping them prepare for calls and communicate the best actions for the scenario.

“We continue to expand our AI-enabled solutions as well as our cloud offerings for customers ranging from small-medium business to the largest global enterprises, and further collaboration with Google is providing additional capabilities to augment the innovation,” said Chris McGugan, Avaya senior vice president of solutions and technology.

“By bringing these innovations to market for Avaya customers and partners, we enable them to make every customer interaction more meaningful and insightful, and more productive for their businesses.”

Mozilla unveils Firefox 66 update with a suite of tweaks


Clare Hopping

20 Mar, 2019

Mozilla has introduced the Firefox 66 update, which the company hopes will make its user experience better than rivals like Chrome, Edge and Safari.

One of the update’s headline features is autoplay blocking: if a website starts playing videos without your permission (such as video adverts), they will be blocked from playing. You can of course just tap the play button if you do want to watch the video, but the feature will stop loud videos echoing round the office for those who have their speakers turned up.

For websites that don’t automatically switch the sound on when the video starts playing, such as on Facebook, the video will still play, just without the sound.

For other websites that have been built specifically to play content and where you want videos to autoplay, you can allow them to do this in your Firefox permissions.

Page jumping is another issue dealt with in the Firefox 66 update. If you find it frustrating when a page jumps as an advert loads above the content, the latest update will block this jump. The advert will still show, but the browser will remember where you were on the page, making sure your reading position isn’t lost.
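This behaviour, known as scroll anchoring, is something site authors can control themselves via the standard CSS overflow-anchor property. A minimal sketch (the .chat-log selector is purely illustrative):

```css
/* Scroll anchoring is enabled by default in browsers that support it;
   a site can opt a specific scrolling container out if anchoring
   conflicts with its own scroll handling. */
.chat-log {
  overflow-anchor: none;
}
```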

Search has also been revamped in the latest release of Firefox 66.

If you use Firefox on multiple devices with Firefox Sync, or even if you’re only using it on one device but have multiple tabs open, you can search across all of them using the tab overflow menu, which automatically appears when you have lots of tabs open. Click the down arrow and you can search across multiple tabs and devices from one place, rather than using the % parameter in the Awesome Bar.

Another search-focused enhancement is search while in Private Browsing mode. If you opt to hide your search history by using Private Mode, you can use your preferred search engine, including private search engines such as DuckDuckGo.

20 Most Popular Sessions at @KubeSUMMIT | #HybridCloud #CloudNative #Serverless #Containers #DevOps #AWS #Lambda #Docker #Kubernetes #HybridCloud

As you know, enterprise IT conversations over the past year have often centered on the open-source Kubernetes container orchestration system. In fact, Kubernetes has emerged as the key technology – and even primary platform – of cloud migrations for a wide variety of organizations.

Kubernetes is critical to forward-looking enterprises that continue to push their IT infrastructures toward maximum functionality, scalability, and flexibility.


Puppet to Present at @DevOpsSUMMIT Silicon Valley | @PKoneti @Puppetize #CloudNative #Serverless #DevOps #Docker #Kubernetes

Technology has changed tremendously in the last 20 years. From onion architectures to APIs to microservices to cloud and containers, the technology artifacts shipped by teams have changed. And that’s not all – roles have changed too. Functional silos have been replaced by cross-functional teams, the skill sets people need have been redefined, and the tools and approaches for how software is developed and delivered have transformed. When we move from highly defined rigid roles and systems to more fluid ones, we gain agility at the cost of control. But where do we want to keep control? How do we take advantage of all these new changes without losing the ability to efficiently develop and ship great software? And how should program and project managers adapt?


AWS will support NVIDIA’s T4 GPUs focusing on intensive machine learning workloads

Amazon Web Services (AWS) will release its latest GPU-equipped instance with support for NVIDIA’s T4 Tensor Core GPUs with a particular focus on machine learning workloads.

The full specifications have yet to be confirmed, but so far AWS promises custom Intel CPUs, up to 384 gibibytes (GiB) of memory, up to 1.8 TB of fast local NVMe storage, and up to 100 Gbps of networking capacity.

From NVIDIA’s side, T4 will be supported by Amazon Elastic Container Service for Kubernetes. “Because T4 GPUs are extremely efficient for AI inference, they are well-suited for companies that seek powerful, cost-efficient cloud solutions for deploying machine learning models into production,” Ian Buck, NVIDIA general manager and vice president of accelerated computing wrote in a blog post.

“NVIDIA and AWS have worked together for a long time to help customers run compute-intensive AI workloads in the cloud and create incredible new AI solutions,” added Matt Garman, AWS vice president of compute services in a statement. “With our new T4-based G4 instances, we’re making it even easier and more cost-effective for customers to accelerate their machine learning inference and graphics-intensive applications.”

The announcement came at NVIDIA’s GPU Technology Conference in San Jose alongside various other news. Also from the AWS stable, the NVIDIA Jetson AI platform now supports the robotics service AWS RoboMaker, while AI Playground, as reported by sister publication AI News, is an accessible online space for users to become familiar with deep learning.

It’s worth noting that AWS is not the first company to score in this area. In November Google touted itself as the ‘first and only’ major cloud vendor to offer T4 compatibility. In January, the company announced its T4 GPU instances were available in beta across six countries.

Interested in hearing industry leaders discuss subjects like this and sharing their experiences and use-cases? Attend the Cyber Security & Cloud Expo World Series with upcoming events in Silicon Valley, London and Amsterdam to learn more.

How are faster networks advancing the next generation of data centres?

We are witnessing a significant uplift in the data transmission speeds offered by network connectivity providers. Service providers now promise speeds of hundreds of megabits to gigabits per second – enough, for instance, to stream Blu-ray-quality video without any buffering.

Such network speeds are set to trigger many new technology possibilities, and businesses cannot afford to fall behind, as they have to take into account new technologies being widely adopted across a competitive market landscape. The focus of businesses has therefore become clear and narrow: constantly satisfy customer demands with compelling digital offerings and push ahead to gain competitive advantage.

To align with this trend, businesses have already started to optimise and redesign their data centres to handle the vast amount of data generated by a growing number of consumer devices. Such a transformation would involve the use of:

  • Virtual network functions (VNFs), which replace server hardware with software-based packages performing specific work – network function virtualisation (NFV)
  • Software-defined networking, to gain central control of the network using a core framework that allows admins to define network operations and security policies
  • Seamless orchestration among several network components using ONAP, ETSI OSM, Cloudify and others
  • Workload (VM and container) and data centre management by implementing OpenStack, Azure Stack, Amazon S3, CloudStack, and Kubernetes. Containers are being widely adopted thanks to features like faster instantiation, integration, scaling, security, and ease of management

The next thing to disrupt the data centre will be the adoption of edge architecture. Edge computing brings a mini data centre closer to where data is being generated by devices like smartphones, industrial instruments, and other IoT devices. This adds more endpoints before data reaches the central data centre, but the advantage is that most computing is done at the edge, reducing the load on network transmission resources. In addition, hyperconvergence can be used at edge nodes to simplify the required mini data centre.

Mobile edge computing (MEC), a core project maintained by ETSI, has emerged as an edge computing model for telecom operators to follow. ETSI maintains and works on innovations to improve the delivery of core network functionalities using MEC, as well as guiding vendors and service providers.

Aside from edge computing, network slicing is a new architecture introduced in 5G that will have an impact on how data centres are designed for particular premises, and dedicated for specific cases such as Industrial IoT, transportation, and sports stadia.

Data centre performance for high speed networks

In this transforming age, large amounts of data will be transferred between devices and the data centre, as well as between data centres. As new use cases require low latency and high bandwidth, it is important to obtain a higher level of performance from the data centre – performance that cannot be achieved with legacy techniques or simply by adding more capacity.

With the ‘data tsunami’ of recent years, data centre technology vendors have come up with new inventions, and communities have formed to address the performance issues raised by different types of workloads. One technique significantly utilised in new-age data centres is offloading some CPU tasks to the network, or to the switches and routers interconnecting servers. Take the network interface card (NIC): when used to connect servers to the network components of the data centre, it has evolved into the SmartNIC, offloading processing tasks that the system CPU would normally handle. SmartNICs can perform network-intensive functions such as encryption/decryption, firewall, TCP/IP, and HTTP processing.

Analyst firm Futorium conducted a Data Centre Network Efficiency survey of IT professionals, gathering their perceptions of and views on data centres and networks. Apart from virtualising network resources and workloads, SmartNIC usage and process-offload techniques emerged as the top interest among IT professionals for processing data efficiently on high-speed networks. This reveals how businesses are relying more on smart techniques that can save costs while delivering notable data centre performance improvements for faster networks.

Workload accelerators such as GPUs, FPGAs, and SmartNICs are widely used in current enterprise and hyperscale data centres to improve data processing performance. These accelerators interconnect with CPUs to speed up data processing and require very low latency when transmitting data back and forth to the CPU.

Most recently, to address the high-speed, low-latency requirements between workload accelerators and CPUs, Intel, along with companies including Alibaba, Dell EMC, Cisco, Facebook, Google, HPE and Huawei, has introduced an interconnect technology called Compute Express Link (CXL), which aims to improve performance and remove bottlenecks in computation-intensive workloads for CPUs and purpose-built accelerators. CXL focuses on creating a high-speed, low-latency interconnect between the CPU and workload accelerators, as well as maintaining memory coherency between the CPU memory space and memory on attached devices. This allows resource sharing for higher performance, reduced software stack complexity, and lower overall system cost.

NVMe is another interface, introduced by the NVM Express community. It is a storage interface protocol used to accelerate access to SSDs in a server. NVMe minimises the CPU cycles demanded by applications and handles enormous workloads with a smaller infrastructure footprint. It has emerged as a key storage technology and has had a great impact on businesses dealing with vast amounts of fast data, particularly that generated by real-time analytics and emerging applications.

Automation and AI

Agile 5G networks will result in the growth of edge compute nodes in network architecture to process data closer to endpoints. These edge nodes, or mini data centres, will sync up with a central data centre as well as be interconnected to each other.

For operators, manually setting up several edge nodes will be a challenge: the nodes will regularly need initial deployment, configuration, software maintenance and upgrades. In the case of network slicing, there could also be a need to install or update VNFs for particular tasks for devices in the slice. It is not feasible to do all this by hand, and this is where automation comes into the picture: operators need a central dashboard at the data centre from which to design and deploy configuration for edge nodes.

Technology businesses are demonstrating or implementing AI and machine learning at the application level to enable auto-responsiveness – for instance, using chatbots on a website. Much of this AI is applied to data lakes, generating insights from self-learning AI-based systems. These types of autonomous capabilities will also be required by the data centre.

AI systems will be used to monitor server operations: tracking activity for self-scaling in response to sudden demand for compute or storage capacity, self-healing from breakdowns, and end-to-end testing of operations. Tech businesses have already started offering solutions for each of these use cases – for example, the joint AI-based integrated infrastructure offering from Dell EMC Isilon and NVIDIA DGX-1 for self-scaling at the data centre level.

Conclusion

New architectures and technologies are being introduced with the revolution in the network. Most of this infrastructure has turned software-centric in response to the growing number of devices and higher bandwidth demands. Providing latency as low as 10 microseconds is a new challenge for operators enabling new technologies in the market. For this to happen, data centres need to complement the higher-bandwidth network, forming the base for further digital innovation.

Editor’s note: Download the eBook ‘5G Architecture: Convergence of NFV & SDN Networking Technologies’ to learn more about the technologies behind 5G and the status of adoption, along with key insights into the market

The post Analysis: How are Faster Networks Advancing the New-Age Datacenters appeared first on Calsoft Inc. Blog.


MySQL: Your password does not satisfy the current policy requirements

The validate_password plugin lets us define minimum security requirements for MySQL passwords:

mysql> GRANT SELECT, INSERT, UPDATE, DELETE, CREATE, DROP, REFERENCES, INDEX, ALTER ON `dbdemo`.* TO 'demouser'@'%' identified by 'demopassword';
ERROR 1819 (HY000): Your password does not satisfy the current policy requirements
mysql> GRANT SELECT, INSERT, UPDATE, DELETE, CREATE, DROP, REFERENCES, INDEX, ALTER ON `dbdemo`.* TO 'demouser'@'1.2.3.4' identified by 'demopassword';
ERROR 1819 (HY000): Your password does not satisfy the current policy requirements

Of course, in certain environments this plugin can become a problem, so we may want to disable it.
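Before uninstalling, it is worth knowing the policy can also be inspected and relaxed in place. A sketch, assuming the MySQL 5.7 plugin variable names (in MySQL 8.0 the component uses dotted names such as validate_password.policy instead):

```sql
-- Inspect the current password policy settings
SHOW VARIABLES LIKE 'validate_password%';

-- Relax the policy rather than removing the plugin entirely
SET GLOBAL validate_password_policy = LOW;
SET GLOBAL validate_password_length = 6;
```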

We can disable the plugin with the following command:

mysql> uninstall plugin validate_password;
Query OK, 0 rows affected (0.04 sec)

From then on, passwords will no longer be checked against the minimum security criteria:

mysql> GRANT SELECT, INSERT, UPDATE, DELETE, CREATE, DROP, REFERENCES, INDEX, ALTER ON `dbdemo`.* TO 'demouser'@'%' identified by 'demopassword';
Query OK, 0 rows affected, 1 warning (0.06 sec)

mysql> GRANT SELECT, INSERT, UPDATE, DELETE, CREATE, DROP, REFERENCES, INDEX, ALTER ON `dbdemo`.* TO 'demouser'@'1.2.3.4' identified by 'demopassword';
Query OK, 0 rows affected, 1 warning (0.01 sec)


Sennheiser taps into AWS for cloud-based call and management capabilities


Clare Hopping

19 Mar, 2019

Sennheiser has teamed up with Amazon to launch a range of USB-connected headsets that are compatible with Amazon Connect and Amazon Chime.

The Amazon Connect integration means users will be able to manage and control their calls, including viewing incoming calls and accepting them, muting, unmuting and ending the call via Google Chrome.

“With Amazon Connect becoming increasingly popular for customer service, we are proud to achieve Standard Technology Partner status in the AWS Partner Network,” said Theis Moerk, Sennheiser’s vice president of enterprise solutions product management.

“By combining the benefits of Amazon Connect with Sennheiser’s premium headsets and cloud-based IT-management solution Sennheiser HeadSetup Pro Manager, cloud contact center customers can now manage their calls and headsets more efficiently.”

Additionally, headsets and speakers can be managed via Sennheiser’s HeadSetup Pro Manager, giving IT staff control over all the headsets in operation within the organisation – exception handling, firmware updates and device configuration – with all devices manageable at once if need be.

Amazon Chime allows businesses to take advantage of a unified communications platform. The integration allows all devices – whether headsets or speakers – to stay in sync during conference calls, whether voice-only or voice and video. Because Amazon Chime runs on AWS, IT departments don’t have to manage the infrastructure and workers can experience less disruption.

“We are excited to connect our premium products with Amazon Connect in order to provide an even better customer experience,” Moerk added.