Category archive: NoSQL

Basho, Cisco integrate Riak KV and Apache Mesos to strengthen IoT automation

Basho and Cisco have integrated Riak and Mesos

Cisco and Basho have successfully demoed the Riak key-value store running on Apache Mesos, an open source technology that makes running diverse, complex distributed applications and workloads easier.

Basho helped create and commercialise the Riak NoSQL database and worked with Cisco to pair Mesos with Riak’s own automation and orchestration technology, which the companies said would help support next gen big data and internet of things (IoT) workloads.

“Enabling Riak KV with Mesos on Intercloud, we can seamlessly and efficiently manage the cloud resources required by a globally scalable NoSQL database, allowing us to provide the back-end for large-scale data processing, web, mobile and Internet-of-Things applications,” said Ken Owens, chief technology officer for Cisco Intercloud Services.

“We’re making it easier for customers to develop and deploy highly complex, distributed applications for big data and IoT. This integration will accelerate developers’ ability to create innovative new cloud services for the Intercloud.”

Apache Mesos provides resource scheduling for workloads spread across distributed – and critically, heterogeneous – environments, which is why it’s emerging as a fairly important tool for IoT developers.

So far Cisco and Basho have only integrated Basho’s commercial Riak offering, Riak KV, with Mesos, but Basho is developing an open source integration with Mesos that will also be commercialized around a supported enterprise offering.

“By adding the distributed scheduler from Mesos, we’re effectively taking the infrastructure component away from the equation,” Adam Wray, Basho’s chief executive officer, told BCN. “Now you don’t have to worry about the availability of servers – you literally have an on-demand model with Mesos, so people can scale up and down based on the workloads for any number of datacentres.”

“This is what true integration of a distributed data tier with a distributed infrastructure tier looks like, being applied at an enterprise scale.”
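
As a rough illustration of the kind of on-demand scaling Wray describes, the sketch below uses Python to ask Marathon, a commonly used scheduler that runs on top of Mesos, to change the instance count of a long-running service. The endpoint, application id and instance counts are hypothetical, and this is a generic Marathon example rather than Basho's actual Riak-on-Mesos framework.

    # Sketch: scaling a long-running service up or down via Marathon's REST API.
    # Marathon is one scheduler that runs on top of Apache Mesos; the host,
    # app id and instance counts below are hypothetical.
    import requests

    MARATHON = "http://marathon.example.com:8080"   # hypothetical endpoint
    APP_ID = "/riak-kv-demo"                        # hypothetical application id

    def scale(instances: int) -> None:
        """Ask Marathon to run `instances` copies of the app; Mesos finds the resources."""
        resp = requests.put(
            f"{MARATHON}/v2/apps{APP_ID}",
            json={"instances": instances},
            timeout=10,
        )
        resp.raise_for_status()
        print(f"requested {instances} instances: {resp.json()}")

    # Scale out for a heavy workload, then back down when it subsides.
    scale(8)
    scale(3)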

Wray added that while the current deal with Cisco isn’t a reselling agreement, we can expect Basho to be talking about large OEM deals in the future, especially as IoT adoption picks up.

IBM buys Compose to strengthen database as a service

IBM has acquired Compose, a DBaaS specialist

IBM has acquired Compose, a database as a service provider specialising in NoSQL and NewSQL technologies.

Compose helps set up and manage databases running at pretty much any scale, deployed on all-SSD storage. The company’s platform supports most of the newer database technologies including MongoDB, Redis, Elasticsearch, RethinkDB and PostgreSQL, and is deployed on AWS, DigitalOcean and SoftLayer.
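
From a developer’s point of view, a database hosted this way is consumed through the same standard open source drivers as a self-managed one; only the connection details change. The snippet below is a minimal sketch using pymongo and redis-py, with hypothetical hostnames, ports and credentials standing in for whatever the provider’s dashboard hands out.

    # Sketch: connecting to hosted MongoDB and Redis instances with standard drivers.
    # Hostnames, ports and credentials are hypothetical placeholders.
    from pymongo import MongoClient
    import redis

    # MongoDB: the provider hands you a URI; the driver is the stock one.
    mongo = MongoClient("mongodb://user:secret@mongo.example-dbaas.com:10042/appdb")
    mongo.appdb.events.insert_one({"type": "signup", "user": "alice"})

    # Redis: likewise just a host, port and password from the provider dashboard.
    r = redis.Redis(host="redis.example-dbaas.com", port=10099, password="secret")
    r.incr("signups")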

“Compose’s breadth of database offerings will expand IBM’s Bluemix platform for the many app developers seeking production-ready databases built on open source,” said Derek Schoettle, general manager, IBM Cloud Data Services.

“Compose furthers IBM’s commitment to ensuring developers have access to the right tools for the job by offering the broadest set of DBaaS services and the flexibility of hybrid cloud deployment,” Schoettle said.

Kurt Mackey, co-founder and chief executive of Compose, said: “By joining IBM, we will have an opportunity to accelerate the development of our database platform and offer even more services and support to developer teams. As developers, we know how hard it can be to manage databases at scale, which is exactly why we built Compose – to take that burden off of our customers and allow them to get back to the engineering they love.”

IBM said the move would give a big boost to its cloud data services division, where it is seeing solid traction; this week the company said cloud data services, one of its big ‘strategic imperatives’, saw revenues swell 30 per cent year on year. And according to a MarketsandMarkets report cited by the IT incumbent, the cloud-based data services market is expected to grow from $1.07bn in 2014 to $14bn by 2019.

This is the latest in a series of database-centric acquisitions for IBM in recent years. In February last year the company acquired database as a service specialist Cloudant, which built a distributed, fault tolerant data layer on top of Apache CouchDB and offered it as a service largely focused on mobile and web app-generated data. Before that it also bought Daeja Image Systems, a UK-based company that provides rapid search capability for large image files spread over multiple databases.

Google reveals Bigtable, a NoSQL service based on what it uses internally

Google has punted another big data service, a variant of what it uses internally, into the wild

Search giant Google has announced Bigtable, a fully managed NoSQL database service that the company said combines its own internal database technology with the open source Apache HBase API.

The company whose MapReduce paper helped spawn Hadoop is now making available the same non-relational database technology that drives a number of its own services, including Google Search, Gmail and Google Analytics.

Google said Bigtable is powered by BigQuery underneath, and is extensible through the HBase API (which provides real-time read / write access capabilities).
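
The HBase API itself is Java-based, but the same single-row read/write pattern can be illustrated with the Python client library for Cloud Bigtable. The sketch below assumes an already-created instance, table and column family; the project, instance, table and family names are hypothetical.

    # Sketch: a single-row write and read against Cloud Bigtable using the
    # Python client library (google-cloud-bigtable). Identifiers are hypothetical,
    # and the table plus "metrics" column family are assumed to exist already.
    from google.cloud import bigtable

    client = bigtable.Client(project="my-project", admin=False)
    instance = client.instance("my-bigtable-instance")
    table = instance.table("sensor-readings")

    # Write one cell: row key -> column family "metrics", qualifier "temp".
    row = table.direct_row(b"device#42#2015-05-06T12:00")
    row.set_cell("metrics", b"temp", b"21.5")
    row.commit()

    # Read it back.
    got = table.read_row(b"device#42#2015-05-06T12:00")
    print(got.cells["metrics"][b"temp"][0].value)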

“Google Cloud Bigtable excels at large ingestion, analytics, and data-heavy serving workloads. It’s ideal for enterprises and data-driven organizations that need to handle huge volumes of data, including businesses in the financial services, AdTech, energy, biomedical, and telecommunications industries,” explained Cory O’Connor, product manager at Google.

O’Connor said the service, which is now in beta, can deliver over two times the performance of its direct competition (which will likely depend on the use case), and has a TCO of less than half that of its direct competitors.

“As businesses become increasingly data-centric, and with the coming age of the Internet of Things, enterprises and data-driven organizations must become adept at efficiently deriving insights from their data. In this environment, any time spent building and managing infrastructure rather than working on applications is a lost opportunity.”

Bigtable is Google’s latest move to bolster its data services, a central pillar of its strategy to attract new customers to its growing platform. Last month the company announced the beta launch of Google Cloud Dataflow, a Java-based service that lets users build, deploy and run data processing pipelines for tasks such as ETL, analytics, real-time computation and process orchestration, while abstracting away infrastructure concerns like cluster management.

Percona buys Tokutek to mashup MySQL, NoSQL tech

Percona acquired Tokutek to strengthen its expertise in NoSQL

Relational database services firm Percona announced it has acquired Tokutek, which provides a high-performance MongoDB distribution and NoSQL services. Percona said the move will allow it to improve support for non-relational database technologies.

Tokutek offers a distribution of MongoDB, called TokuMX, which the company pitches as a drop-in replacement for MongoDB – but with up to 20 times performance improvements and 90 per cent reduction in database size.

One of the things that makes it so performant is its use of fractal tree indexing, a data structure that optimises I/O, supporting searches and sequential access much like a B-tree but with far faster insertions and deletions (the same indexing can also be applied in MariaDB).
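
The intuition behind fractal tree indexing is that writes are buffered inside internal nodes and pushed down the tree in batches, so each insert touches far fewer slow disk pages than in a conventional B-tree. The toy Python sketch below illustrates only that buffering idea; the class, buffer size and single-child "routing" are invented for illustration and are nowhere near a real fractal tree implementation.

    # Toy sketch of the message-buffering idea behind fractal tree indexes:
    # inserts land in an in-node buffer and are flushed to children in batches,
    # amortising the cost of touching deeper (on-disk) nodes.
    class Node:
        BUFFER_LIMIT = 4  # invented; real trees size buffers to block/page sizes

        def __init__(self, leaf=True):
            self.leaf = leaf
            self.keys = []        # sorted keys stored at a leaf
            self.buffer = []      # pending inserts at an internal node
            self.children = []    # child nodes (internal nodes only)

        def insert(self, key):
            if self.leaf:
                self.keys.append(key)
                self.keys.sort()
                return
            self.buffer.append(key)            # cheap: no deep traversal yet
            if len(self.buffer) >= self.BUFFER_LIMIT:
                self.flush()                   # one batched pass down the tree

        def flush(self):
            for key in self.buffer:
                # Simplified routing: a real tree picks a child by pivot keys.
                self.children[0].insert(key)
            self.buffer = []

    root = Node(leaf=False)
    root.children.append(Node(leaf=True))
    for k in [5, 1, 9, 3, 7]:
        root.insert(k)
    print(root.children[0].keys, root.buffer)  # most keys flushed in one batch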

Percona said the move will position the company to offer the full range of consulting and technology services needed to support MySQL and MongoDB deployments; Percona Server already supports TokuMX as an option, but the move will see the latter further integrated and shipped as standard with the former.

“This acquisition delivers game-changing advantages to our customers,” said Peter Zaitsev, co-founder and chief executive of Percona. “By adding a market-leading, ACID-compliant NoSQL data management option to our product line, customers finally have the opportunity to simplify their database decisions and on-going support relationships by relying on just one proven, expert provider for all their database design, service, management, and support needs.”

John Partridge, president and chief executive of Tokutek said: “Percona has a well-earned reputation for expert database consulting services and support. With the Tokutek acquisition, Percona is uniquely positioned to offer NoSQL and NewSQL software solutions backed by unparalleled services and support. We are excited to know Tokutek customers can look forward to leveraging Percona services and support in their TokuMX and TokuDB deployments.”

NoSQL adoption is growing at a fairly fast rate as applications, especially cloud apps, shift to handle more and more unstructured data, so it’s likely we’ll see more MySQL incumbents pick up non-relational startups in the coming months.

 

GigaSpaces Releases XAP 9.5: Enhanced for Cassandra Big Data Store, .NET Framework

GigaSpaces Technologies has released XAP 9.5, a new version of its in-memory computing platform that enables a quick launch of high-performance real-time analytics systems for Big Data.

At the core of the latest release of the GigaSpaces platform is XAP 9.5’s enhanced integration with NoSQL datastores, such as Cassandra. Combining the Cassandra datastore with the GigaSpaces in-memory computing platform adds real-time processing and immediate consistency to the application stack, while also guaranteeing dynamic scalability and transactionality – all necessary elements for enterprises that need real-time analytics or processing of streaming Big Data.

In this combined architecture, XAP in-memory computing provides the real-time data processing engine that is interoperable with any language or application framework, while the Cassandra DB provides long-term storage of data for use in real-time analytics.
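
Architecturally, that pairing amounts to a read-through (and typically write-behind) pattern: the in-memory tier answers reads and the NoSQL store keeps the long-term copy. The sketch below illustrates just the read-through half in a generic way, using a plain Python dict as a stand-in for the data grid and the open source cassandra-driver; keyspace, table and host names are hypothetical, and this is not the actual XAP API.

    # Generic read-through sketch: serve from memory, fall back to Cassandra on a
    # miss, then populate the in-memory tier. Names below are hypothetical.
    from cassandra.cluster import Cluster

    grid = {}  # stand-in for the in-memory data grid
    session = Cluster(["cassandra.example.com"]).connect("app_keyspace")

    def get_user(user_id):
        if user_id in grid:                      # fast path: in-memory hit
            return grid[user_id]
        row = session.execute(                   # slow path: long-term store
            "SELECT user_id, name FROM users WHERE user_id = %s", (user_id,)
        ).one()
        if row is not None:
            grid[user_id] = {"user_id": row.user_id, "name": row.name}
        return grid.get(user_id)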

A GigaSpaces benchmark of the XAP integration with Cassandra shows that it dramatically improves real-time performance for data retrieval operations: putting the GigaSpaces in-memory data grid in front of the Cassandra Big Data store resulted in read performance up to 2,000 times faster.

Until XAP 9.5, this integration was only available to XAP Java users. XAP 9.5 allows .NET users to leverage the same built-in Cassandra integration, which provides a seamless bi-directional translation between Cassandra’s columnar data model and the richer document- and object-oriented models available in XAP. It works for both Java and .NET XAP deployments, allowing .NET developers to speed up their Cassandra-based big data applications.

“The GigaSpaces XAP Cassandra integration enables companies to enjoy both in-memory data grid capabilities and Big Data processing, easily and for any framework – Java or .NET,” says Uri Cohen, GigaSpaces VP of Product. “This enables companies to be more agile in meeting both current and future data processing challenges.”

Garantia Data Offers First Redis Hosting on Azure

Garantia Data, a provider of in-memory NoSQL cloud services, today announced the availability of its Redis Cloud and Memcached Cloud database hosting services on the Windows Azure cloud platform. Garantia Data’s services will provide thousands of developers who run their applications on Windows Azure with virtually infinite scalability, high availability, high performance and zero management in just one click.

Garantia is currently offering its Redis Cloud and Memcached Cloud services free of charge to early adopters in the US-East and US-West Azure regions.

Used by both enterprise developers and cutting-edge start-ups, Redis and Memcached are open source, RAM-based, key-value memory stores that provide significant value in a wide range of important use cases. Garantia Data’s Redis Cloud and Memcached Cloud are reliable and fully-automated services for running Redis and Memcached on the cloud – essentially freeing developers from dealing with nodes, clusters, scaling, data-persistence configuration and failure recovery.
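
Both stores expose deliberately small APIs, which is much of their appeal. The snippet below is a minimal sketch of the classic cache-aside use with each, written against the standard open source redis-py and pymemcache clients; the hostnames, ports and password are hypothetical placeholders for whatever a hosted service provides.

    # Sketch: simple key-value caching with Redis and Memcached using standard
    # clients (redis-py, pymemcache). Endpoints and credentials are hypothetical.
    import redis
    from pymemcache.client.base import Client as MemcacheClient

    r = redis.Redis(host="redis.example-host.com", port=17000, password="secret")
    r.setex("session:alice", 3600, "token-abc")        # value expires in an hour
    print(r.get("session:alice"))

    mc = MemcacheClient(("memcached.example-host.com", 17001))
    mc.set("page:/home", b"<html>...</html>", expire=300)
    print(mc.get("page:/home"))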

“We are happy to be the first to offer the community a Redis architecture on Windows Azure,” said Ofer Bengal, CEO of Garantia Data. “We have seen great demand among .NET and Windows users for scalable, highly available and fully-automated services for Redis and Memcached. Our Redis Cloud and Memcached Cloud provide exactly the sort of functionality they need.”

“We’re very excited to welcome Garantia Data to the Windows Azure ecosystem,” said Rob Craft, Senior Director Cloud Strategy at Microsoft. “Services such as Redis Cloud and Memcached Cloud give customers the production, workload-ready services they can use today to solve real business problems on Windows Azure.”

Redis Cloud scales seamlessly and infinitely, so a Redis dataset can grow to any size while supporting all Redis commands. Memcached Cloud adds a storage engine and full replication capabilities to standard Memcached. Both provide true high availability, including instant failover with no human intervention. In addition, they run a dataset on multiple CPUs and use advanced techniques to maximize performance for any dataset size.

Garantia Brings Redis Cloud to Heroku, AppFog, AppHarbor

Garantia Data, a provider of in-memory NoSQL cloud services, today announced the availability of its Redis Cloud database hosting service on the Heroku, AppFog and AppHarbor platforms over AWS. Garantia Data’s new Redis Cloud add-ons will provide the hundreds of thousands of developers who run their applications on these platforms with an infinitely scalable, highly available, high-performance and zero-management Redis solution in just one click.

Used by both enterprise developers and cutting-edge start-ups, Redis is an open source, RAM-based, key-value memory store that provides significant value in a wide range of important use cases. Garantia Data’s Redis Cloud is a fully-automated service for running Redis on the cloud – completely freeing developers from dealing with nodes, clusters, scaling, data-persistence configuration and failure recovery.
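
On platforms like Heroku, an add-on typically surfaces the provisioned database as a connection URL in the application’s environment. The sketch below assumes a config variable named REDISCLOUD_URL, which is how the Redis Cloud add-on is commonly exposed; treat the variable name and usage as an assumption rather than a documented contract.

    # Sketch: a PaaS-hosted app picking up its provisioned Redis from the
    # environment. The REDISCLOUD_URL variable name is assumed for illustration.
    import os
    import redis

    redis_url = os.environ.get("REDISCLOUD_URL", "redis://localhost:6379")
    r = redis.Redis.from_url(redis_url)

    r.lpush("jobs", "send-welcome-email:alice")   # queue a background job
    print(r.llen("jobs"))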

“Redis Cloud has been running in a private beta on Amazon EC2 since January and in a free, public beta since June, and we survived several node failures and three AWS outages without losing any customer data,” said Ofer Bengal, CEO of Garantia Data. “After successfully navigating these events, we are now 100 percent confident that our service is fully reliable and ready for PaaS environments. Heroku, AppFog and AppHarbor developers will now be able to enjoy the powerful benefits that our solution can bring to their critical databases, while gaining more time to focus on building the best possible applications.”

Redis Cloud is the only solution that scales seamlessly and infinitely, so a Redis dataset can grow to any size while supporting all Redis commands. It provides true high-availability, including instant failover with no human intervention. In addition, it runs a dataset on multiple CPUs and uses advanced techniques to maximize performance for any dataset size. Redis Cloud add-ons let developers create multiple databases in a single plan, each running in a dedicated process and in a non-blocking manner.

“We’re very excited to welcome Garantia Data to the AppFog ecosystem,” said Lucas Carlson, CEO and founder of AppFog. “Redis Cloud is exactly the sort of production, workload-ready service that our customers have been demanding. As huge fans of Redis, we feel that Redis Cloud’s robust performance and complete feature set makes it one of the best NoSQL DB-as-a-Service options out there. We can’t wait to see what developers create with Redis Cloud and AppFog!”

“We’re excited to welcome Garantia Data’s Redis Cloud into the AppHarbor add-on catalog,” said Michael Friis, co-founder of AppHarbor. “Redis is becoming a critical component for many .NET developers and is used by prominent .NET-powered web properties like StackOverflow.”

“We’ve seen Redis become an integral part of modern web applications, in part because of its amazing performance and flexibility,” said Glenn Gillen, Engineering Manager for Heroku Add-ons. “We’re excited to include Redis Cloud in the Heroku Add-ons marketplace so our customers can take advantage of its highly available and scalable solution in the quickest and simplest way possible.”

Garantia Data is currently offering the Redis Cloud free of charge to early adopters of its Heroku, AppFog and AppHarbor add-ons. The company will demonstrate the Redis Cloud and its new PaaS add-ons at Booth #332 during AWS re:Invent, November 27-29 in Las Vegas.


FairCom’s Newest c-treeACE Bridges SQL, NoSQL Worlds

FairCom today announced the tenth major edition of its cross-platform database technology, c-treeACE® V10, which introduces the industry’s first Relational Multi-Record Type support for seamless integration between the relational and non-relational database worlds.

c-treeACE V10 also delivers features such as new Java interfaces, performance and scalability enhancements, additional platform support, and new replication models. With this latest version come significant performance gains including 30 percent faster transaction throughput, 60 percent faster SQL performance, 200 percent better replication throughput, and 26 percent faster read performance.

“The database market is growing substantially, yet there are many problems plaguing developers today: large data volumes; requirements to reduce data access time; data access requirements from a myriad of new locations, like mobile devices and the cloud; trickier integration; and decreasing budgets,” said Randal Hoff, FairCom’s VP of Engineering. “Engineers tell us they really need technology that enables them to work seamlessly within both the relational and non-relational worlds. In the past, they’ve felt forced to choose one or the other, when, in fact, they realize concrete benefits from both. Our newest c-treeACE gives them the flexibility to enjoy the best of both worlds: high performance data throughput levels that a NoSQL database can provide; and concurrent relational access for ease of data sharing with other parts of the enterprise, including cloud and mobile devices, all at a reasonable price point.”

For more than 30 years, FairCom has provided enterprise database developers and ISVs with a model not available from off-the-shelf databases. Its c-treeACE offers the highest levels of tailored configuration and control while simultaneously supporting a variety of non-relational APIs (e.g., ISAM, .NET, and JTDB) along with industry-standard relational APIs (e.g., SQL, JDBC, ODBC, PHP, Python, etc.) within the same application, over the same data. Enterprises such as Federal Express, Microsoft, NASA and Visa have used FairCom technology in mission-critical solutions.
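
The relational half of that dual access goes through industry-standard interfaces such as ODBC, so it can be reached from virtually any language. The snippet below is a minimal Python sketch using pyodbc with a hypothetical DSN, credentials and table; the non-relational (ISAM/record-oriented) path goes through FairCom’s own APIs and is not shown here.

    # Sketch: querying the same data relationally over ODBC. The DSN, credentials
    # and table are hypothetical; the ISAM access path is not shown.
    import pyodbc

    conn = pyodbc.connect("DSN=ctreeACE;UID=admin;PWD=secret")
    cursor = conn.cursor()
    cursor.execute("SELECT order_id, total FROM orders WHERE total > ?", 100.0)
    for order_id, total in cursor.fetchall():
        print(order_id, total)
    conn.close()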



Garantia Testing asks “Does Amazon EBS Affect Redis Performance?”

The Redis mavens at Garantia decided to find out whether EBS really slows down Redis when used over various AWS platforms.

Their testing and conclusions answer the question: Should AOF be the default Redis configuration?

We think so. This benchmark clearly shows that running Redis over various AWS platforms using AOF with a standard, non-RAIDed EBS configuration doesn’t significantly affect Redis’ performance. If we take into account that Redis professionals typically tune their redis.conf files carefully before using any data persistence method, and that newbies usually don’t generate loads as large as the ones we used in this benchmark, it is safe to assume that this performance difference is almost negligible in real-life scenarios.
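
For reference, the append-only file (AOF) persistence being discussed is controlled by the redis.conf directives appendonly and appendfsync. The sketch below sets the same options on a running instance through redis-py’s CONFIG SET, which is equivalent for an already-running server; the connection details are placeholders.

    # Sketch: enabling AOF persistence on a running Redis via CONFIG SET,
    # mirroring the redis.conf directives "appendonly yes" / "appendfsync everysec".
    import redis

    r = redis.Redis(host="localhost", port=6379)
    r.config_set("appendonly", "yes")        # turn on the append-only file
    r.config_set("appendfsync", "everysec")  # fsync once per second (common default)
    print(r.config_get("appendonly"), r.config_get("appendfsync"))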

Read the full post for all the details.


Benchmarking Redis on AWS: Is Amazon PIOPS Really Better than Standard EBS?

The Redis experts at Garantia Data did some benchmarking in the wake of Amazon’s announcement of its Provisioned IOPS (PIOPS) EBS volumes.

Their conclusion:

After 32 intensive tests with Redis on AWS (each run in 3 iterations for a total of 96 test iterations), we found that neither the non-optimized EBS instances nor the optimized-EBS instances worked better with Amazon’s PIOPS EBS for Redis. According to our results, using the right standard EBS configuration can provide equal if not better performance than PIOPS EBS, and should actually save you money.

Read the full post for details and graphs.