Category Archives: Database

Alibaba Cloud upgrades AnalyticDB with vector database engine

Alibaba Cloud, the digital technology and intelligence backbone of Alibaba Group, has enhanced its AnalyticDB vector engine, making it easier than ever for enterprises to access various large language models (LLMs) for building their customised generative AI capabilities. At its Data Management Summit hosted in Jakarta, Alibaba Cloud also upgraded its one-stop, cloud-native data management…

Contract dispute with HPE costs Oracle $3bn

Oracle has released a statement declaring it will appeal a jury decision to side with HPE in a long-running contract dispute worth $3 billion.

The dispute dates back to 2011, when Oracle decided to stop creating new versions of its database and other software for systems running Intel’s Itanium chip. HP Enterprise claimed the decision violated the contractual terms between the organizations, a claim the jury upheld. Oracle, for its part, claimed Intel had decided to stop supporting Itanium and shift focus to the x86 microprocessor, which the chip-maker has denied.

“Five years ago, Oracle made a software development announcement which accurately reflected the future of the Itanium microprocessor,” said Dorian Daley, General Counsel of Oracle. “Two trials have now demonstrated clearly that the Itanium chip was nearing end of life, HP knew it, and was actively hiding that fact from its customers.

“Oracle never believed it had a contract to continue to port our software to Itanium indefinitely and we do not believe so today; nevertheless, Oracle has been providing all its latest software for the Itanium systems since the original ruling while HP and Intel stopped developing systems years ago.”

Back in 2012, Santa Clara County Superior Court Judge James Kleinberg ruled that Oracle would have to maintain its end of the contract for as long as HPE remained in the Itanium game. Oracle appealed that decision, which delayed the damages trial.

HPE had been seeking damages of $3 billion – $1.7 billion in lost sales before the case started, plus $1.3 billion in post-trial sales – which the jury awarded in full. Daley has unsurprisingly stated Oracle will appeal the decision, which could mean the saga will continue for some time.

Oracle has been having a tough time in the courtroom of late: it had been seeking $8.8 billion in damages from Google over the unlicensed use of Java in a case dating back to 2010. The recent ruling was a victory for Google, as the jury found Android does not infringe Oracle-owned copyrights because its re-implementation of 37 Java APIs is protected by ‘fair use’. Oracle again stated it would appeal the decision, capping a tough couple of months for its legal team.

Salesforce to run some core services on AWS

Salesforce has announced it will run some of its core services on AWS in various international markets, as well as continuing to invest in its own data centres.

The announcement comes two weeks after the company experienced a database failure on its NA14 instance, which caused a service outage lasting 12 hours for a number of customers in North America.

“With today’s announcement, Salesforce will use AWS to help bring new infrastructure online more quickly and efficiently. The company will also continue to invest in its own data centres,” said Parker Harris, on the company’s blog. “Customers can expect that Salesforce will continue to deliver the same secure, trusted, reliable and available cloud computing services to customers, regardless of the underlying infrastructure.”

While Salesforce does not appear to have suffered any serious negative impact from the recent outage, the move could be seen as a means to rebuild trust in its robustness, leaning on AWS’ brand credibility to provide assurances. The move would also give the Salesforce team options should another outage occur within its own data centres. The geographies this announcement will apply to had not been announced at the time of writing.

Sales Cloud, Service Cloud, App Cloud, Community Cloud and Analytics Cloud (amongst others) will now be available on AWS, though the move does not mean Salesforce is moving away from its own data centres; investment there will continue, and the AWS option appears to be a failsafe for the business. In fact, Heroku, Marketing Cloud Social Studio, SalesforceIQ and IoT Cloud already run on AWS.

“We are excited to expand our strategic relationship with Amazon as our preferred public cloud infrastructure provider,” said Salesforce CEO Marc Benioff. “There is no public cloud infrastructure provider that is more sophisticated or has more robust enterprise capabilities for supporting the needs of our growing global customer base.”

AWS announces launch of X1 Instances for EC2

AWS has announced the availability of X1 Instances for Amazon EC2, which it claims offer more memory than any other SAP-certified cloud instance available today.

The X1 instances have 2 TB of memory and are powered by four 2.3 GHz Intel Xeon E7 8880 v3 processors delivering 128 vCPUs. They also offer up to 10 Gbps of dedicated bandwidth to Amazon Elastic Block Store, which the team believes makes them well suited to large-scale in-memory databases, big data processing, and high performance computing.
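
For readers who want to experiment, the sketch below shows how such an instance might be requested programmatically with boto3, the AWS SDK for Python. The AMI ID, key pair and the x1.32xlarge size name are placeholders and assumptions for illustration, not details taken from the announcement.

```python
# Minimal sketch: requesting an X1 instance with boto3 (AWS SDK for Python).
# The AMI ID, key pair and subnet are placeholders; x1.32xlarge is assumed
# to be the 2 TB / 128 vCPU size described above.
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

response = ec2.run_instances(
    ImageId="ami-xxxxxxxx",          # placeholder: an EBS-backed HVM AMI
    InstanceType="x1.32xlarge",      # assumed name of the 2 TB X1 size
    KeyName="my-key-pair",           # placeholder key pair
    MinCount=1,
    MaxCount=1,
    EbsOptimized=True,               # use the dedicated EBS bandwidth noted above
)

print(response["Instances"][0]["InstanceId"])
```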

“Amazon EC2 provides the most comprehensive selection of instances, offering customers, by far, the deepest compute functionality to support virtually any workload,” said Matt Garman, VP at Amazon EC2. “We’ve had a Memory Optimized instance family (our R3 family) for a while that is quite popular for high performance databases, in-memory analytics, and enterprise applications; however, customers have increasingly asked for even more memory to help run analytics on larger data sets with in-memory databases, generate analytics in real time, and create very large caches.

“With 2 TB of memory – 8 times the memory of any other available Amazon EC2 instance, and more memory than any SAP-certified cloud instance available today – X1 instances change the game for SAP workloads in the cloud. Now, for the first time, customers can run their most memory-intensive applications at scale with the elasticity, flexibility, and reliability of the AWS Cloud, rather than having to battle the complexity, cost, and lack of agility of colo or on-premises solutions.”

The X1 Instances are available on request in a number of AWS regions, including US East, US West, EU (Germany and Ireland) and Asia Pacific (Tokyo, Sydney and Singapore), and will reach the remaining regions over the next few months.

Alibaba and Softbank launch SB Cloud for Japanese market

Alibaba and Softbank have announced the establishment of SB Cloud Corporation, a new joint venture to offer cloud computing services in Japan.

The demand for public cloud in Japan and surrounding countries has been growing in recent years, with Japan leading the way as the most advanced nation. A report from Gartner last year estimated the total public cloud services spending in the mature APJ region will rise to $11.5 billion by 2018. Alibaba has targeted the region to grow its already healthy cloud business unit.

“I’ve really enjoyed working with the Alibaba Cloud team on the joint venture over the past few months,” said Eric Gan, the new CEO of SB Cloud and EVP of SoftBank. “During the business planning discussions, I quickly felt that we were all working very much as one team with one goal. I believe the JV team can develop the most advanced cloud platform for Japanese customers, as well as for multinational customers who want to use the resources we have available in Japan.”

SB Cloud will enable Alibaba to increase its presence in the market, where it already offers services to SoftBank’s business customer base in Japan, which primarily comprises global organizations. SB Cloud will open a new data centre in the country, from which it will serve customers beyond the established SoftBank customer base, offering data storage and processing services, enterprise-level middleware and cloud security services.

A recent report from the US Department of Commerce highlighted that the Japanese market is one of the most competitive worldwide, though five of the six major vendors are American: Amazon Web Services, Google, IBM, Microsoft and Salesforce. Domestic companies, such as Fujitsu, have announced aggressive expansion plans; Fujitsu claims to be investing $2 billion between 2014 and 2017 to capture an increased share of the cloud computing market, primarily focused on the growing IoT sub-sector.

While Alibaba’s traditional business has been in the Chinese market, the company has been making efforts over the last 12-18 months to diversify its reach. Last year it launched new data centres in Singapore and Silicon Valley, and last August it launched DT PAI, which it claims is China’s first cloud AI platform. DT PAI’s purpose-built algorithms and machine learning technologies are designed to help users generate predictive insights, with “drag and drop” capabilities that let users easily connect different services and set parameters, seemingly following IBM’s lead in designing a more accessible offering for the industry.

EMC & Dell execs outline integration plan to create Dell Technologies

Speaking at EMC World in Las Vegas, Dell’s Chief Integration Officer Rory Read and EMC’s COO of the Global Enterprise Services business unit Howard Elias offered some insight into the workings of the Value Creation and Integration Office, the team built to manage the integration of EMC and Dell during the course of the merger.

The Value Creation and Integration Office was created following the announcement of the merger last year, with the intention of managing the transition of two tech giants into one efficient organization. Both Read and Elias have experience of overseeing such activities; Read, for example, was President of Lenovo during the Intel acquisition. There are, however, few similarities between the pair’s previous experience and one of the largest mergers in business history.

“Both companies have some extensive experience of acquisitions and incorporating other businesses, but we couldn’t use any of the playbooks for this one,” said Elias of the current merger. But while there are few examples to draw upon in building a blueprint, that is not to say the task has been more complicated. In fact, the pair argued the integration of the two organizations has been relatively smooth thus far, with few major roadblocks envisioned in the run-up to Day 1, the team’s nickname for the deadline when Dell and EMC will cease to exist as two separate organizations.

“The collisions or overlaps are very minor, this is why the integration has been very smooth so far,” said Read, with regard to the overlap in business operations between Dell and EMC. The pair pointed to the current focus areas of both businesses to explain the smooth integration thus far: while Dell and EMC play in the same arena, to date there has been very little direct competition between the two. Read claims this lack of overlap not only makes their job easier but ultimately creates a host of new opportunities for the new company, Dell Technologies.

While combining the revenues of the two businesses would certainly produce a significant figure, the team believes the cross-selling and up-selling opportunities created by having a single business offering both portfolios would create more prospects still. “Our customer overlap isn’t large and opens up a lot of new opportunities,” said Read.

In theory, by cross-selling Dell’s and EMC’s portfolios in one product offering, the team believes there is an opportunity to take market share from Dell/EMC competitors, depending on which one is the incumbent supplier. This cross- and up-selling opportunity should enable the new company to exceed the combined revenues of Dell and EMC, the team claims.

The integration will not stop with EMC and Dell, as the company plans to merge the channel partner programmes as well. Details of this aspect of the integration have not yet been released, however Read and Elias highlighted that the combined channel partner programme would be phased in. Some announcements will be made on Day 1, though the majority will take place at the end of the year, as this is a natural time for channel partners to expect a change in operating practice.

The final hurdle the team faces is the Chinese regulators, the one remaining body yet to sign off on the merger. While Chinese regulators have proven a difficulty for other organizations in the past, Read and Elias claim it should be a relatively simple process for the team. Read highlighted that all other regulatory bodies had signed off on the deal in full with no conditions attached, which he took as a good sign heading into the Chinese regulatory process.

In terms of headcount, although no official figures were given, Read and Elias did indicate there will be job losses as a result of the merger. Because there are few areas where the two businesses overlap, the reduction in headcount will be low, according to Read, but as with any other merger it is unavoidable. The team will not be releasing any comments or numbers relating to job losses until Day 1.

There have been difficulties in bringing two vast organizations together, according to the team, though this is unavoidable in such a task. The $67 billion deal is one of the largest in business history, and it should surprise few that the task of integration is a vast one also, though the team is confident the methodology in place to create one organization will be successful.

“This deal is on time, on plan and on budget, from the schedule we set out in October,” said Elias. “The integration and merger is running smoothly and we’ll be ready to go. Day 1 is not the end of anything, it’s the beginning of our new company.”

Microsoft announces general availability of SQL Server 2016

Microsoft has announced that SQL Server 2016 will hit general availability for all customers worldwide as of June 1.

SQL Server 2016, which is recognized in Gartner Magic Quadrants for Operational Database, Business Intelligence, Data Warehouse, and Advanced Analytics, will be available in four editions: Enterprise, Standard, Express and Developer. The team also announced it would move customers’ Oracle databases to SQL Server free of charge with Software Assurance.

“SQL Server 2016 is the foundation of Microsoft’s data strategy, encompassing innovations that transform data into intelligent action,” said Tiffany Wissner, Senior Director of Data Platform Marketing at Microsoft. “With this new release, Microsoft is delivering an end-to-end data management and business analytics solution with mission critical intelligence for your most demanding applications as well as insights on your data on any device.”

Features of SQL Server 2016 include mission-critical intelligent applications delivering real-time operational intelligence, enterprise-scale data warehousing, new Always Encrypted technology, business intelligence on mobile devices, new big data capabilities that combine relational and non-relational data, and new Stretch Database technology for hybrid cloud environments.

“With this new innovation, SQL Server 2016 is the first born-in-the-cloud database, where features such as Always Encrypted and Row Level Security were first validated in Azure SQL Database by hundreds of thousands of customers and billions of queries,” said Wissner.
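
As a rough illustration of how a client application might opt into Always Encrypted, the hedged sketch below uses pyodbc with the Microsoft ODBC driver. The server name, database, credentials and the Customers table are hypothetical, and the exact driver version supporting the ColumnEncryption keyword may vary.

```python
# Minimal sketch: connecting to SQL Server 2016 with Always Encrypted enabled
# for the session via pyodbc and the Microsoft ODBC driver. Server, database,
# credentials and the dbo.Customers table are placeholders.
import pyodbc

conn = pyodbc.connect(
    "DRIVER={ODBC Driver 13 for SQL Server};"
    "SERVER=myserver.example.com;"
    "DATABASE=SalesDb;"
    "UID=app_user;PWD=app_password;"
    "ColumnEncryption=Enabled;"   # transparently decrypts Always Encrypted columns
)

cursor = conn.cursor()
cursor.execute("SELECT TOP 5 CustomerId, SSN FROM dbo.Customers")  # hypothetical table
for row in cursor.fetchall():
    print(row.CustomerId, row.SSN)
```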

Last month, the team also announced it was bringing SQL Server to Linux, enabling SQL Server to deliver a consistent data platform across Windows and Linux, as well as on-premises and cloud. The move seemingly surprised some corners of the industry, breaking with Microsoft’s tradition of creating business software that runs only on the Windows operating system, and continues Chief Executive Satya Nadella’s strategy of making Microsoft a more open and collaborative organization.

IBM expands flash storage portfolio in continued transition to cloud positioning

IBM has announced the expansion of its flash storage portfolio, to bolster its position in the cognitive computing and hybrid cloud market segments.

The new FlashSystem arrays combine the company’s FlashCore technology with a scale-out architecture, part of IBM’s continued effort to consolidate its position as a vendor powering cloud data centres that utilize cognitive computing technologies. Cognitive computing, and more specifically Watson, has formed the central pillar of IBM’s current marketing and PR campaigns as it continues its journey to transform Big Blue into a major cloud player.

“The drastic increase in volume, velocity and variety of information is requiring businesses to rethink their approach to addressing storage needs, and they need a solution that is as fast as it is easy, if they want to be ready for the Cognitive Era,” said Greg Lotko, GM of IBM’s Storage and Software Defined Infrastructure business. “IBM’s flash portfolio enables businesses on their cognitive journey to derive greater value from more data in more varieties, whether on premises or in a hybrid cloud deployment.”

The company claims the new offering will provide an onramp to flash storage for IT service providers, reducing the cost of implementing an all-flash environment, as well as scalable storage for cloud service providers. Another feature built into the proposition will enable customers to deal with ‘noisy neighbour’ challenges and other network performance issues that can arise in a multi-tenant cloud environment.

“The workloads our department manages include CAD files for land mapping, geographic information system (GIS) applications and satellite imagery for the over 9.2 million acres of State Trust lands we’re responsible to oversee. The data we manage is tied directly to our goal to make this information available and to increase its analytical capabilities,” said William Reed, CTO at the Arizona State Land Department, one of IBM’s customers. “After exhaustive, comparative proof of concept testing we chose IBM’s FlashSystem, which has helped to increase our client productivity by 7 times while reducing our virtual machine boot times by over 85 percent.”

Overcoming the data integration challenge in hybrid and cloud-based environments

Industry experts estimate that data volumes are doubling every two years. Managing all of this is a challenge for any enterprise, but it’s not so much the volume of data as the variety of data that presents a problem. With SaaS and on-premises applications, machine data, and mobile apps all proliferating, we are seeing the rise of an increasingly complicated value-chain ecosystem. IT leaders need to take a portfolio-based approach and combine cloud and on-premises deployment models to sustain competitive advantage. Improving the scale and flexibility of data integration across both environments to deliver a hybrid offering is necessary to provide the right data to the right people at the right time.

The evolution of hybrid integration approaches creates requirements and opportunities for converging application and data integration. The definition of hybrid integration will continue to evolve, but its current trajectory is clearly headed to the cloud.

According to IDC, cloud IT infrastructure spending will grow at a compound annual growth rate (CAGR) of 15.6 percent between now and 2019, at which point it will reach $54.6 billion. In line with this, customers need to advance their hybrid integration strategy to best leverage the cloud. At Talend, we have identified five phases of integration, from the oldest and most mature through to the most bleeding-edge and disruptive. Here we take a brief look at each and show how businesses can optimise their approach as they move from one step to the next.

Phase 1: Replicating SaaS Apps to On-Premises Databases

The first stage in developing a hybrid integration platform is to replicate SaaS applications to on-premises databases. Companies in this stage typically either need analytics on some of the business-critical information contained in their SaaS apps, or they are sending SaaS data to a staging database so that it can be picked up by other on-premise apps.

In order to increase the scalability of existing infrastructure, it’s best to move to a cloud-based data warehouse service within AWS, Azure, or Google Cloud. The scalability of these cloud-based services means organisations don’t need to spend cycles refining and tuning the databases. Additionally, they get all the benefits of utility-based pricing. However, with the myriad of SaaS apps today generating even more data, they may also need to adopt a cloud analytics solution as part of their hybrid integration strategy.
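
As a rough illustration of this first phase, the sketch below copies Salesforce Account records into an on-premises Postgres staging table so other apps or BI tools can pick them up. The credentials, staging schema and field list are illustrative assumptions, and a packaged replication tool would normally handle this at scale.

```python
# Minimal sketch of Phase 1: replicating Salesforce Account records into an
# on-premises Postgres staging table. Credentials, schema and fields are
# placeholders; assumes the simple_salesforce and psycopg2 packages.
from simple_salesforce import Salesforce
import psycopg2

sf = Salesforce(username="user@example.com", password="...", security_token="...")
records = sf.query_all("SELECT Id, Name, AnnualRevenue FROM Account")["records"]

conn = psycopg2.connect("dbname=staging user=etl password=... host=localhost")
with conn, conn.cursor() as cur:
    for r in records:
        cur.execute(
            """INSERT INTO staging.sf_account (id, name, annual_revenue)
               VALUES (%s, %s, %s)
               ON CONFLICT (id) DO UPDATE
               SET name = EXCLUDED.name, annual_revenue = EXCLUDED.annual_revenue""",
            (r["Id"], r["Name"], r["AnnualRevenue"]),
        )
```

The upsert (ON CONFLICT) keeps the job idempotent, so the replication can be re-run on a schedule without creating duplicates.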

Phase 2: Integrating SaaS Apps directly with on-premises apps

Each line of business has its preferred SaaS app of choice: sales departments have Salesforce, marketing has Marketo, HR has Workday, and finance has NetSuite. However, these SaaS apps still need to connect to a back-office, on-premises ERP system.

Due to the complexity of back-office systems, there isn’t yet a widespread SaaS solution that can serve as a replacement for ERP systems such as SAP R/3 and Oracle EBS. Businesses would be best advised not to try to integrate with every single object and table in these back-office systems – but rather to accomplish a few use cases really well so that their business can continue running, while also benefiting from the agility of cloud.
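
To make the “few use cases done well” idea concrete, here is a heavily hedged sketch that pushes closed-won Salesforce opportunities to an on-premises ERP as sales orders. The ERP endpoint and payload shape are entirely hypothetical; real SAP R/3 or Oracle EBS integrations typically go through dedicated adapters or middleware rather than a plain REST call.

```python
# Minimal sketch of one narrowly-scoped Phase 2 use case: syncing today's
# closed-won Salesforce opportunities into an on-premises ERP. The ERP REST
# endpoint and payload are hypothetical placeholders.
import requests
from simple_salesforce import Salesforce

sf = Salesforce(username="user@example.com", password="...", security_token="...")
won = sf.query(
    "SELECT Id, Name, Amount FROM Opportunity "
    "WHERE StageName = 'Closed Won' AND CreatedDate = TODAY"
)["records"]

for opp in won:
    payload = {"external_id": opp["Id"], "description": opp["Name"], "value": opp["Amount"]}
    resp = requests.post(
        "https://erp.internal.example.com/api/sales-orders",  # hypothetical ERP endpoint
        json=payload,
        timeout=30,
    )
    resp.raise_for_status()
```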

Phase 3: Hybrid Data Warehousing with the Cloud

Databases and data warehouses on a cloud platform are geared toward supporting data warehouse workloads: low-cost, rapid proof-of-value, and ongoing data warehouse solutions. As the volume and variety of data increases, enterprises need a strategy for moving their data from on-premises warehouses to newer, Big Data-friendly cloud resources.

While they take time to decide which Big Data protocols best serve their needs, they can start by creating a data lake in the cloud with a service such as Amazon Web Services (AWS) S3 or Microsoft Azure Blobs. These lakes can relieve the cost pressures imposed by on-premises relational databases and act as a “demo area”, enabling businesses to process information using their Big Data protocol of choice and then transfer it into a cloud-based data warehouse. Once enterprise data is held there, the business can enable self-service with data preparation tools capable of organising and cleansing the data prior to analysis in the cloud.
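
A minimal sketch of this pattern, assuming S3 as the lake and Amazon Redshift as the warehouse; the bucket, file, cluster and IAM role names are placeholders:

```python
# Minimal sketch of Phase 3: land a raw extract in an S3-based data lake,
# then bulk-load it into a cloud data warehouse (assumed here to be Redshift).
import boto3
import psycopg2

# 1. Land the raw file in the data lake.
s3 = boto3.client("s3")
s3.upload_file("daily_orders.csv", "example-data-lake", "raw/orders/2016-06-01.csv")

# 2. Bulk-load it into the warehouse with a COPY command.
conn = psycopg2.connect(
    "dbname=analytics user=etl password=... "
    "host=example-cluster.redshift.amazonaws.com port=5439"
)
with conn, conn.cursor() as cur:
    cur.execute("""
        COPY analytics.orders
        FROM 's3://example-data-lake/raw/orders/2016-06-01.csv'
        IAM_ROLE 'arn:aws:iam::123456789012:role/RedshiftCopyRole'
        CSV IGNOREHEADER 1;
    """)
```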

Phase 4: Real-time Analytics with Streaming Data

Businesses today need insight at their fingertips in real time. In order to prosper from the benefits of real-time analytics, they need an infrastructure to support it. These infrastructure needs may change depending on the use case, whether it be weblogs, clickstream data, sensor data or database logs.

As big data analytics and ‘Internet of Things’ (IoT) data processing moves to the cloud, companies require fast, scalable, elastic and secure platforms to transform that data into real-time insight. The combination of Talend Integration Cloud and AWS enables customers to easily integrate, cleanse, analyse, and manage batch and streaming data in the Cloud.
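
As a small example of the ingestion side, the sketch below publishes clickstream events to a managed stream so they can feed real-time analytics downstream. Amazon Kinesis is used here purely as one plausible choice, and the stream name is a placeholder.

```python
# Minimal sketch of Phase 4: pushing clickstream events into a managed stream
# (Amazon Kinesis) for downstream real-time analytics. Stream name is a placeholder.
import json
import time
import boto3

kinesis = boto3.client("kinesis", region_name="eu-west-1")

def publish_click(user_id: str, url: str) -> None:
    event = {"user_id": user_id, "url": url, "ts": time.time()}
    kinesis.put_record(
        StreamName="clickstream-events",        # placeholder stream
        Data=json.dumps(event).encode("utf-8"),
        PartitionKey=user_id,                   # keeps a user's events ordered per shard
    )

publish_click("user-123", "/pricing")
```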

Phase 5: Machine Learning for Optimized App Experiences

In the future, every experience will be delivered as an app through mobile devices. In providing the ability to discover patterns buried within data, machine learning has the potential to make applications more powerful and more responsive. Well-tuned algorithms allow value to be extracted from disparate data sources without the limits of human thinking and analysis. For developers, machine learning offers the promise of applying business critical analytics to any application in order to accomplish everything from improving customer experience to serving up hyper-personalised content.

To make this happen, developers need to:

  • Be “all-in” with the use of Big Data technologies and the latest streaming big data protocols
  • Have large enough data sets for the machine algorithm to recognize patterns
  • Create segment-specific datasets using machine-learning algorithms (a minimal sketch follows this list)
  • Ensure that their mobile apps have properly-built APIs to draw upon those datasets and provide the end user with whatever information they are looking for in the correct context
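
A minimal sketch of that segmentation step, assuming scikit-learn and an illustrative set of behavioural features; the column names and the choice of four segments are assumptions, not a prescription:

```python
# Minimal sketch: clustering customers into segments with k-means so an app
# can serve segment-specific content. Feature columns and the number of
# segments are illustrative assumptions.
import pandas as pd
from sklearn.cluster import KMeans
from sklearn.preprocessing import StandardScaler

customers = pd.read_csv("customer_features.csv")          # placeholder extract
features = customers[["sessions_per_week", "avg_order_value", "days_since_last_visit"]]

scaled = StandardScaler().fit_transform(features)
customers["segment"] = KMeans(n_clusters=4, random_state=42).fit_predict(scaled)

# Persist one dataset per segment for the app's API layer to draw upon.
for segment_id, group in customers.groupby("segment"):
    group.to_csv(f"segment_{segment_id}.csv", index=False)
```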

Making it Happen with iPaaS

In order for companies to reach this level of ‘application nirvana’, they will need to have first achieved or implemented each of the four previous phases of hybrid application integration.

That’s where we see a key role for integration platform-as-a-service (iPaaS), which is defined by analysts at Gartner as ‘a suite of cloud services enabling development, execution and governance of integration flows connecting any combination of on premises and cloud-based processes, services, applications and data within individual or across multiple organisations.’

The right iPaaS solution can help businesses achieve the necessary integration, and even bring in native Spark processing capabilities to drive real-time analytics, enabling them to move through the phases outlined above and ultimately successfully complete stage five.

Written by Ashwin Viswanath, Head of Product Marketing at Talend

AWS expands reach of Database Migration Service

Amazon Web Services has expanded the availability of its Database Migration Service to nearly all its territories worldwide.

Having already performed 1,000 migrations since the turn of the year, the service is now available throughout the US, Europe and several locations in Asia. The company has yet to expand the service to its other regions, including São Paulo and Seoul.

“Hundreds of customers moved more than a thousand of their on-premises databases to Amazon Aurora, other Amazon RDS engines, or databases running on Amazon EC2 during the preview of the AWS Database Migration Service,” said Hal Berenson, VP, Relational Database Services at AWS. “Customers repeatedly told us they wanted help moving their on-premises databases to AWS, and also moving to more open database engine options, but the response to the AWS Database Migration Service has been even stronger than we expected.”

Migrating a database to the cloud can be a complex and costly project, and enterprises have traditionally faced a tough decision: either take the database out of service while they copy the data, or purchase migration tools that can cost a small fortune. Amazon claims its service is a more cost-effective proposition, starting at $3 per terabyte, and that it can reduce downtime.
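
For a sense of what kicking off a migration looks like in code, here is a hedged sketch using the DMS API via boto3. The endpoint and replication-instance ARNs are placeholders that would be created beforehand in the DMS console or API, and the table mapping simply selects every table in a hypothetical “sales” schema.

```python
# Minimal sketch: starting a migration with the AWS Database Migration Service
# via boto3. ARNs are placeholders; the mapping includes all tables in "sales".
import json
import boto3

dms = boto3.client("dms", region_name="us-east-1")

table_mappings = {
    "rules": [{
        "rule-type": "selection",
        "rule-id": "1",
        "rule-name": "include-sales-schema",
        "object-locator": {"schema-name": "sales", "table-name": "%"},
        "rule-action": "include",
    }]
}

task = dms.create_replication_task(
    ReplicationTaskIdentifier="oracle-to-aurora-sales",
    SourceEndpointArn="arn:aws:dms:us-east-1:123456789012:endpoint:SOURCE",   # placeholder
    TargetEndpointArn="arn:aws:dms:us-east-1:123456789012:endpoint:TARGET",   # placeholder
    ReplicationInstanceArn="arn:aws:dms:us-east-1:123456789012:rep:INSTANCE", # placeholder
    MigrationType="full-load-and-cdc",   # copy existing data, then replicate ongoing changes
    TableMappings=json.dumps(table_mappings),
)
print(task["ReplicationTask"]["Status"])
```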

AWS customer Thomas Publishing is one such company to have utilized the service. The team is currently undergoing a transformation project to make its products more user-friendly for the digital world.

“Faced with the challenge of rapidly growing volumes of data and the need to increase efficiency and deliver results on shorter timelines, we were confronted with unattractive options requiring significant upfront investment in both infrastructure and Oracle license expense,” said Hans Wald, Chief Technology Officer at Thomas Publishing.

Amazon said it plans to roll the service out to additional regions in the coming months.