All posts by cloudtech

Mirantis to help AT&T build Network Cloud for 5G

US telecom giant AT&T will build the next generation of its Network Cloud for 5G with the help of cloud computing firm Mirantis, as part of a three-year deal reportedly worth more than $10 million (£7.7 million).

AT&T’s Network Cloud for 5G will be built using software from an open source project known as “Project Airship”, which the telecom company formed with SK Telecom, the OpenStack Foundation, and Intel in May 2018.

Amy Wheelus, AT&T’s vice president of cloud and Domain 2.0 Platform integration, said:

“Simply put, Airship lets you build a cloud easier than ever before. Whether you’re a telecom, manufacturer, healthcare provider, or an individual developer, Airship makes it easy to predictably build and manage cloud infrastructure.”

Boris Renski, co-founder and CMO of Mirantis, said that Project Airship is crucial for AT&T because it makes it possible to roll out many data centres and manage them under a single lifecycle.

“AT&T had the foresight to start building this in open source about one and a half years ago,” he commented.

AT&T’s Integrated Cloud (AIC), which was built using cloud infrastructure software from OpenStack, runs in more than a hundred data centres. When AT&T first announced Project Airship, it said it was renaming AIC to Network Cloud; since then, the company has referred to it as Network Cloud for 5G.

Mirantis has been working with the American telecom giant for years. The cloud computing firm was an early OpenStack collaborator and now provides a commercial OpenStack distribution. OpenStack’s original software was built to use virtual machines (VMs) in the data centre.

AT&T’s Network Cloud for 5G is being built on OpenStack using containers rather than VMs, with the containers managed by Kubernetes, the open source container orchestration system.

The Network Cloud for 5G will be based on containers and Kubernetes, although plenty of workloads will still run on VMs within AIC.
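
To give a flavour of what running workloads under Kubernetes involves, here is a minimal sketch using the official Python client for Kubernetes; the namespace is illustrative and nothing here is specific to AT&T’s deployment.

```python
# A minimal sketch of querying a Kubernetes cluster from Python, using the
# official client library (pip install kubernetes). Namespace is illustrative.
from kubernetes import client, config

def list_running_pods(namespace: str = "default") -> None:
    # Load credentials from the local kubeconfig (~/.kube/config).
    config.load_kube_config()
    v1 = client.CoreV1Api()
    for pod in v1.list_namespaced_pod(namespace).items:
        print(f"{pod.metadata.name}: {pod.status.phase}")

if __name__ == "__main__":
    list_running_pods()
```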

(Photo by Markus Spiske on Unsplash)


Golden State Warriors use Google Cloud to up their game

American professional basketball team the Golden State Warriors will use Google Cloud’s data-crunching technology to improve the performance of its players.

Apart from improving their game on court, the NBA champions will also use Google Cloud analytics to crunch scouting reports and recruit athletes more effectively. According to Google Cloud, this data can easily be shared with coaches, staff and players.

The team is also reported to be planning to host a mobile app on the Google Cloud platform. This app — which will be developed by the Warriors and Accenture — will leverage Google Cloud technologies such as App Engine and Firebase for personalisation, and Maps for navigation. The app will also allow Warriors fans to find their seats.

Moreover, Google Cloud will become a founding partner of the Chase Center in San Francisco, California. The venue, opening in September 2019, will become the team’s new home.

According to Google Cloud, the Chase Center will use the same analytics to promote sporting and entertainment events at the venue.

Kirk Lacob, assistant general manager and vice president of GSW Sports Ventures at the Warriors, said: “Today, 70% of the Golden State Warriors analytics team's time is spent collecting and shaping data and only 30% of time is spent analysing it. Partnering with Google Cloud to automate the collection of valuable data will allow us to free up resources and spend more time turning these insights into action.”

(Photo by TJ Dragotta on Unsplash)


Databricks secures $250m funding as a16z claims its victory in big data platforms

It’s another big day for big data; analytics platform provider Databricks has raised $250 million (£193.3m) in a series E funding round and now sits atop a $2.75 billion valuation.

The funding was led by Andreessen Horowitz (a16z), and featured participation from Coatue Management, Microsoft, and New Enterprise Associates (NEA). In a statement, Ben Horowitz said that a16z was “thrilled” to invest in this funding round and that Databricks was the “clear winner in the big data platform race.”

The company helped create big data processing framework Apache Spark and has offerings based around Azure and Amazon Web Services (AWS). The technology continues to have a wide-ranging influence; only last week Google launched a Kubernetes operator for Apache Spark in beta.
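
For readers unfamiliar with Spark, here is a minimal PySpark sketch of the DataFrame API the framework is known for; the data is made up for illustration.

```python
# A minimal PySpark example (pip install pyspark): build a small DataFrame
# and aggregate it, the kind of operation Spark distributes across a cluster.
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("demo").getOrCreate()

df = spark.createDataFrame(
    [("eu-west", 120), ("us-east", 340), ("us-east", 95)],
    ["region", "requests"],
)

# Runs in parallel on a cluster; locally here for demonstration.
df.groupBy("region").sum("requests").show()

spark.stop()
```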

Like many big data companies built on open source software, Databricks earns its bread and butter by putting a platform, Unified Analytics, on top of it. The platform aims to unify data management and its myriad languages and tools, with Databricks claiming it is up to 100 times faster than open source Spark.

The presence of Microsoft as one of the funders may raise the odd eyebrow, but its positioning is more than sound. The company offers a product called Azure Databricks, a Spark-based analytics service.

“Databricks has shown tremendous leadership in big data, data science and is uniquely positioned with Microsoft to meet customer needs across big data, data warehousing and machine learning,” said Rohan Kumar, corporate vice president of Azure data at Microsoft.

As is customary, the key to big data analysis is the incorporation of artificial intelligence (AI) and machine learning. Speaking to this publication last month following the $5.2 billion merger with Hortonworks, Cloudera chief marketing officer Mick Hollison – who can certainly be considered a competitor of Databricks – noted how a major shift was about to take place.

“Most of the ML and AI that has been done in enterprises to date has been pretty bespoke,” Hollison explained. “It hasn’t necessarily been done against well secured and governed data sets supported by IT. It’s often been scraped onto a laptop by a data scientist, putting that data at risk.”

If Databricks has, in the opinion of a16z, won the race for big data platforms, the next challenge is building out artificial intelligence capabilities to help organisations extract the most valuable insights.

(Photo by Pepi Stojanovski on Unsplash)


Cloud adoption in the UK is outpacing the EU average

Research published by Eurostat reveals the UK has the sixth-highest rate of cloud adoption among EU countries.

According to statistics published by the European Statistical Office, British enterprises have a relatively high rate of cloud adoption, with 41.9% of companies using some form of cloud service, compared with an EU average of 26.2%.

The UK sits behind the Nordic leaders – Finland, Sweden, and Denmark – where 65.3%, 57.2%, and 55.6% of enterprises respectively use cloud services.

Figures show organisations in the UK are outpacing the rest of Europe in cloud adoption, with a 17.9 percentage point increase since 2014, compared with a relatively modest EU-wide average increase of 7.2 points.

Eurostat experts Magdalena Kaminska and Maria Smihily said:

“Cloud computing is one of the strategic digital technologies considered important enablers for productivity and better services. Enterprises use cloud computing to optimise resource utilisation and build business models and market strategies that will enable them to grow, innovate and become more competitive.

Growth remains a condition for businesses' survival and innovation remains necessary for competitiveness. In fact, the European Commission in the wider context of modernisation of the EU industry develops policies that help speed up the broad commercialisation of innovation.”

The rate of cloud adoption in France and Germany was well below average, at just 19% and 22% respectively.

Furthermore, the data shows that only 23% of European businesses use cloud computing power for enterprise software and just 29% of firms use cloud-based customer relationship management (CRM) tools and apps.


Amazon will improve its cloud efficiency using an ARM-based processor

Amazon is deploying an ARM-based Graviton processor to improve its cloud computing services. According to the Seattle-based company, this will lead to cost savings of up to 45% for "scale-out" services.

Amazon became the world’s biggest player in cloud computing via Amazon Web Services (AWS), the company’s $27 billion cloud business. AWS provides on-demand cloud computing platforms to individuals, companies, and governments on a paid subscription basis.

The e-commerce giant is changing the technology behind its cloud services to deliver faster performance and to save costs. The new system is expected to provide the company with a performance-per-dollar advantage.

The Graviton processor contains 64-bit Neoverse cores and is based on ARM’s 16nm Cosmos platform, highlighted ARM senior vice president Drew Henry.

Designed by Annapurna Labs, the Israeli chip firm Amazon acquired, Graviton uses 64-bit Cortex-A72 cores running at clock frequencies of up to 2.3GHz. The rest of AWS’s server fleet runs on Intel and AMD processors.

The system will assist Amazon with scale-out workloads, where users share load across a group of smaller instances, such as containerised microservices, web servers, development environments, and caching fleets.
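
As a hedged sketch of what launching Graviton-powered instances looks like in practice, the snippet below uses boto3 against the A1 instance family AWS announced alongside the chip; the AMI ID is a placeholder, and the instance type and counts are illustrative.

```python
# Launch ARM-based A1 instances with boto3 (pip install boto3).
# The AMI ID is a placeholder; a real deployment would use an arm64
# AMI available in the chosen region.
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

response = ec2.run_instances(
    ImageId="ami-0123456789abcdef0",  # placeholder arm64 AMI
    InstanceType="a1.large",          # Graviton-powered instance type
    MinCount=1,
    MaxCount=3,                       # scale out across several small instances
)

for instance in response["Instances"]:
    print(instance["InstanceId"], instance["State"]["Name"])
```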

There are other advantages to Amazon from the new technology, centred on greater independence from technology providers.

Through Annapurna, Amazon can now license ARM blueprints, customise and tweak those designs, and take them to contract manufacturers such as TSMC and GlobalFoundries to get competitive chips made.

AWS is also building Inferentia, a custom ASIC for AI inference that could scale from hundreds to thousands of trillions of operations per second and further reduce the cost of cloud-based services.


Packet and Wasabi team up to offer cloud services for less than AWS

Cloud and edge computing infrastructure provider Packet and hot cloud storage firm Wasabi have partnered to integrate their respective platforms, offering customers cloud computing and storage services for less than Amazon Web Services (AWS).

David Friend, CEO of Wasabi, said: “Amazon has 100-some-odd cloud services. They do everything, but they don’t do anything particularly well. They’ve got one big integrated environment. But if you want the best content delivery network, Amazon doesn’t have it. If you want the best storage, Amazon doesn’t have it.”

At the moment, Packet and Wasabi’s offering is very limited in scope compared with AWS’ many services. Unlike AWS’ be-everything-to-everybody approach, the companies are focusing only on cloud storage and cloud compute.

According to Friend, Wasabi’s cloud storage is 80% cheaper and six times faster than Amazon S3.
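
Wasabi’s storage is S3-compatible, so standard S3 tooling can be pointed at it by swapping the endpoint. A minimal boto3 sketch, with placeholder credentials and bucket names, might look like this.

```python
# Use Wasabi through its S3-compatible API with boto3 (pip install boto3).
# Credentials and bucket/key names below are placeholders.
import boto3

s3 = boto3.client(
    "s3",
    endpoint_url="https://s3.wasabisys.com",   # Wasabi's S3-compatible endpoint
    aws_access_key_id="WASABI_ACCESS_KEY",
    aws_secret_access_key="WASABI_SECRET_KEY",
)

s3.put_object(Bucket="my-bucket", Key="backups/db.dump", Body=b"...")

for obj in s3.list_objects_v2(Bucket="my-bucket").get("Contents", []):
    print(obj["Key"], obj["Size"])
```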

Zac Smith, CEO of Packet, said: “How can we create an experience for enterprise buyers that gives the best of both worlds: the best, low-cost storage option and the best compute, but at the same time not with a lower experience for the developer? We’re not trying to solve this from a technology standpoint. We’re trying to solve this from an operations and business standpoint.”

Packet claims that its bare-metal cloud supports more than 60,000 installations every month and is available in more than 18 countries. Its cloud automation platform enables bare metal installations in less than 60 seconds.
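
To illustrate what provisioning a bare metal server through Packet’s REST API looks like, here is a hedged sketch using Python’s requests library; the plan, facility, and operating system slugs are assumptions based on Packet’s public documentation of the time, not details confirmed in this piece.

```python
# A sketch of creating a bare metal device via Packet's REST API
# (pip install requests). Token, project ID, and slugs are placeholders.
import requests

API = "https://api.packet.net"
headers = {"X-Auth-Token": "YOUR_PACKET_TOKEN"}  # placeholder API token

device = {
    "hostname": "node-01",
    "plan": "baremetal_0",             # assumed plan slug
    "facility": "ewr1",                # assumed facility slug
    "operating_system": "ubuntu_18_04",
}

resp = requests.post(f"{API}/projects/YOUR_PROJECT_ID/devices",
                     json=device, headers=headers)
resp.raise_for_status()
print(resp.json()["id"], resp.json()["state"])
```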

Both companies will offer joint services via their respective infrastructure-as-a-service (IaaS) management consoles and APIs, expected to be available in Q1 2019. The integrated console will let Packet compute customers use Wasabi storage, and Wasabi customers use Packet compute resources.

These joint cloud services will be connected over a high-capacity, low-latency fibre network with no transfer fees between compute and storage elements.


OpenStack ‘Rocky’ makes it easier to deploy on bare metal

Open source cloud platform OpenStack now powers 75+ public cloud data centres and thousands of private cloud services at a scale of more than 10 million compute cores.

Earlier versions of the platform were difficult to upgrade from one release to another and hard to deploy on bare metal. OpenStack 'Rocky' — the platform’s 18th release — addresses both problems.

OpenStack has always run on a variety of hardware architectures. However, bare metal has always been a bit complicated.

The platform's bare metal provisioning module, Ironic, simplifies deployment by bringing more advanced management and automation capabilities to bare metal infrastructure.

OpenStack Nova, which manages large networks of VMs, now also supports bare metal servers. This means the platform supports multi-tenancy, so users can manage physical infrastructure in the same way they manage VMs.
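
As a rough illustration of bare metal and VMs sitting behind the same API surface, here is a minimal sketch with openstacksdk; the cloud name refers to an assumed entry in the operator’s clouds.yaml.

```python
# List Ironic-managed bare metal nodes and Nova-managed servers through
# one connection, using openstacksdk (pip install openstacksdk).
import openstack

conn = openstack.connect(cloud="mycloud")  # assumed clouds.yaml entry

# Physical machines, via the bare metal (Ironic) service.
for node in conn.baremetal.nodes():
    print(node.name, node.provision_state)

# Virtual machines, via the compute (Nova) service.
for server in conn.compute.servers():
    print(server.name, server.status)
```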

Ironic gains other new features too, including user-managed BIOS settings, conductor groups, and a RAM disk deployment interface.

Julia Kreger, Red Hat principal software engineer and OpenStack Ironic project team lead, said:

“OpenStack Ironic provides bare metal cloud services, bringing the automation and speed of provisioning normally associated with virtual machines to physical servers.

This powerful foundation lets you run VMs and containers in one infrastructure platform, and that's what operators are looking for.”

Oath’s IaaS architect James Penick said that his company already uses OpenStack to manage thousands of bare metal servers in its data centres. OpenStack has driven significant changes to Oath’s supply chain process, with common bare metal quota requests now fulfilled within minutes, he said.

“We are looking forward to deploying the Rocky release to take advantage of its numerous enhancements such as BIOS management, which will further streamline how we maintain, manage and deploy our infrastructure,” added Penick.

Upgrading the OpenStack platform has traditionally been difficult, but Rocky’s Fast Forward Upgrade (FFU) feature helps users complete the process seamlessly and get onto newer releases of OpenStack faster.

What are your thoughts on OpenStack's latest release? Let us know in the comments.

Google Cloud Platform offers services to address a variety of workload requirements

Google has introduced new database features, along with partnerships, beta news and other improvements, to help users get the most out of their databases.

Anyone hosting applications in the cloud can choose from a portfolio of database options – SQL and NoSQL, relational and non-relational, scale-up/down and scale-in/out – and Google Cloud Platform (GCP) provides a comprehensive package of managed database services to address a variety of workload requirements.

This is what Google is now offering:

  • Oracle workloads can now be brought to GCP
  • SAP HANA workloads can run on GCP persistent-memory VMs
  • Cloud Firestore launching for all users developing cloud-native apps
  • Regional replication, visualisation tool available for Cloud Bigtable
  • Cloud Spanner updates

Google is partnering with managed service providers (MSPs) to provide a fully managed service for Oracle workloads for GCP customers. Such partner-managed services unlock the ability to run Oracle workloads while leveraging the rest of the GCP platform.

Users can run their Oracle workloads on dedicated hardware and connect them to applications running on GCP. By collaborating with a trusted managed service provider, Google can offer fully managed services for Oracle workloads with the same advantages as GCP services.

Users can choose the offering that suits their requirements and existing investment in Oracle software licences. Google is providing an opportunity to customers and partners whose technical requirements do not fit neatly into the public cloud: by working with partners, they can move their workloads to GCP and enjoy the benefits of not having to manage hardware and software.

Recently, Google collaborated with Intel and SAP to offer Compute Engine virtual machines supported by the upcoming Intel Optane DC Persistent Memory for SAP HANA workloads.

Google Compute Engine VMs with this Intel Optane DC persistent memory will offer higher overall memory capacity and lower cost compared to instances with only dynamic random-access memory (DRAM).

The company continues to scale its instance size roadmap for SAP HANA production workloads. It is working on new virtual machines that will support 12TB of memory by the summer of 2019, up from the current 4TB, and 18TB by the end of 2019.

Google is expanding the availability of the Cloud Firestore beta to more users by bringing the UI to the GCP console.

Cloud Firestore is a serverless, NoSQL document database that simplifies storing, syncing and querying data for cloud-native apps at global scale.

According to the company, it will also support Datastore Mode in the coming weeks. Currently available in beta, Cloud Firestore is the next generation of Cloud Datastore that offers compatibility with the Datastore API and existing client libraries.
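
A minimal sketch of Firestore’s document model with the Python client; the collection, document, and field names below are made up for illustration.

```python
# Write and query documents with the Cloud Firestore Python client
# (pip install google-cloud-firestore). Names are illustrative.
from google.cloud import firestore

db = firestore.Client()  # uses Application Default Credentials

# Documents live in collections; no servers or schemas to manage.
db.collection("players").document("curry").set({"points": 30, "team": "GSW"})

for doc in db.collection("players").where("points", ">=", 20).stream():
    print(doc.id, doc.to_dict())
```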

Google Cloud Bigtable – a high-throughput, low-latency, and massively scalable NoSQL database – is an ideal option for analytical and operational workloads. The company has announced its general availability for regional replication. It has also launched client libraries for Node.js (beta) and C# (beta).

Google is also planning to launch Python (beta), C++ (beta), native Java (beta), Ruby (alpha) and PHP (alpha) client libraries in the coming months.
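
For a sense of what a Bigtable write might look like through the planned Python client, here is a hedged sketch based on the google-cloud-bigtable library; the project, instance, table, and column family names are all illustrative.

```python
# A single-row write with the Cloud Bigtable Python client
# (pip install google-cloud-bigtable). All identifiers are placeholders.
from google.cloud import bigtable

client = bigtable.Client(project="my-project")
table = client.instance("my-instance").table("metrics")

row = table.direct_row(b"sensor#42#20190101")   # row key
row.set_cell("stats", "temperature", b"21.5")   # family, column, value
row.commit()                                    # atomic single-row mutation
```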

What are your thoughts on Google's latest announcements? Let us know in the comments.

Alibaba Cloud targets Indian SMEs with new data centre

Alibaba Cloud’s new data centre in Mumbai is slated to open in January 2018 and will cater to the increasing demand for cloud services from small and medium-sized enterprises.

In order to help enterprises of all sizes move to the firm’s platform, the cloud computing arm of Alibaba will create a local team of professional consultants to offer service planning, implementation, and after-sales support.

Taking into consideration the rapid growth of the Indian economy, Alibaba Cloud believes the market will offer tremendous business opportunities. India is also earmarked as a key market in the firm’s globalisation strategy.

Simon Hu, SVP of Alibaba Group and president of Alibaba Cloud, said:

"We are excited to be officially opening our new Mumbai, India data centre in early 2018, enabling us to work closely with more Indian enterprises. These local enterprises are innovative and operating in growth sectors, and we look forward to empowering them through our cloud computing and data technologies.

As we build out the Alibaba Cloud network globally, India is another important piece that is now firmly in place. This continues our commitment to India, helping it to develop trade opportunities with other markets in the region and beyond."

With the new data centre, Alibaba Cloud will provide Indian enterprises with a comprehensive suite of cloud computing products, including large-scale computing, storage resources, and big data processing capabilities. Other data centre services to be offered include elastic computing, database, storage and content delivery, networking, analytics and big data, as well as security.

At present, Alibaba Group’s cloud computing arm has 33 availability zones throughout 16 economic centres worldwide, with coverage extending across mainland China, Hong Kong, Singapore, Japan, Australia, the Middle East, Europe, India and the US (East and West Coast).

What are your thoughts on Alibaba's new data centre? Let us know in the comments.

Salesforce joins the CNCF, recognising the power of containerisation

Salesforce has joined the leading force in the containerisation space, the Cloud Native Computing Foundation (CNCF).

As the home of Kubernetes, the popular open source container orchestration tool, the CNCF has gained momentum recently, with a large number of renowned cloud technology companies joining it in 2017, including AWS, Oracle, Microsoft, VMware, and Pivotal.

Salesforce, although a software-as-a-service (SaaS) vendor, has realised that containerisation provides a way to control the development process more tightly, and it seeks a piece of the action from this association.

Mark Interrante, SVP of engineering at Salesforce, announced the membership in a blog post on Medium, saying it is imperative to adopt new technologies like Kubernetes quickly because they help teams build products faster and more easily.

He said, “We’ve seen how containerization simplifies the orchestration of software across a large fleet of servers. Kubernetes makes a great foundation for continuous innovation/continuous delivery which then improves our software delivery. This kind of collaboration, with Salesforce as an active participant in open technology ecosystems, is key to helping us move forward.”

Salesforce’s development teams have adopted many CNCF tools beyond Kubernetes.

Yesterday, we highlighted a range of new and updated projects from the CNCF.

What are your thoughts on Salesforce joining the CNCF? Let us know in the comments.