Category Archive: Apache Solr

NetDocuments 13.1 Release Includes Secure Document Delivery

NetDocuments today announced its 13.1 release. This latest release contains more than 20 new and updated features, including document delivery via a secure link and a new search engine.

Available for the first time with this release, Secure Document Delivery enables public sharing of documents without loss of content control. Users select the documents they wish to deliver, and NetDocuments generates a secure URL for each document to be sent to multiple email addresses. This feature offers users a number of security and control options, including the ability to:

  • Password-protect the viewing of documents
  • Set document permissions, including the ability to download documents
  • Set a predetermined expiration date, after which the URL will no
    longer be active
  • Lock the original version of the document in NetDocuments,
    preventing any changes to the delivered document
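NetDocuments has not disclosed how its secure URLs are implemented, but expiring, password-protectable links of the kind described above are commonly built from an HMAC-signed token that encodes the document id and an expiration timestamp. A minimal sketch of that general pattern (all names and the signing scheme are illustrative assumptions, not NetDocuments' actual design):

```python
import hashlib
import hmac
import time

SECRET_KEY = b"server-side-secret"  # hypothetical server-side signing key


def make_secure_url(doc_id: str, expires_at: int) -> str:
    """Build an expiring link: the signature covers the doc id and expiry,
    so neither can be altered without invalidating the link."""
    payload = f"{doc_id}:{expires_at}".encode()
    sig = hmac.new(SECRET_KEY, payload, hashlib.sha256).hexdigest()
    return f"https://example.com/docs/{doc_id}?expires={expires_at}&sig={sig}"


def verify_secure_url(doc_id: str, expires_at: int, sig: str) -> bool:
    """Reject links that are expired or whose parameters were tampered with."""
    if time.time() > expires_at:
        return False  # past the predetermined expiration date
    payload = f"{doc_id}:{expires_at}".encode()
    expected = hmac.new(SECRET_KEY, payload, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, sig)
```

Because the server validates the signature on every request, it can also enforce per-link policies such as view-only permissions or revocation, without the recipient ever holding portal credentials.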

Previously, third parties could only gain access to documents stored in NetDocuments by obtaining a username and password through a client portal. The Secure Document Delivery feature also improves collaboration and adds increased functionality to the suite of SEC-, FINRA- and HIPAA-compliant content management features.

As part of the 13.1 release, NDSearch, the enterprise-class search platform embedded in NetDocuments and available to all users, now relies on the Solr™ search engine to power search across 750 million documents for the company's global customer base in 140 countries.

Solr’s open source enterprise platform provides NetDocuments with the flexibility to adjust search functionality to meet customer needs. All previous search features remain available, including search analysis filters, find similar, dynamic filters and saved search. With Solr, users will enjoy quicker display of items in workspaces and other search result lists, an improvement in productivity made possible by Solr’s ability to index metadata for documents significantly faster.
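Features like the dynamic filters mentioned above map naturally onto Solr's standard faceting parameters. A sketch of the kind of request involved, building a query string for Solr's `/select` handler (the core name and field names here are hypothetical, not NetDocuments' actual schema):

```python
from urllib.parse import urlencode

# Hypothetical Solr core and fields; NetDocuments' real schema is not public.
SOLR_SELECT = "http://localhost:8983/solr/documents/select"


def build_search_url(text: str, author: str = "") -> str:
    """Full-text query with facet counts on metadata fields. Solr's
    faceting is the standard mechanism behind 'dynamic filter' UIs:
    each facet value comes back with a count the user can click to narrow."""
    params = [
        ("q", f"content:{text}"),
        ("facet", "true"),
        ("facet.field", "author"),
        ("facet.field", "doctype"),
        ("rows", "20"),
        ("wt", "json"),
    ]
    if author:
        # fq applies the chosen filter without changing relevance scoring
        params.append(("fq", f"author:{author}"))
    return SOLR_SELECT + "?" + urlencode(params)
```

Because filter queries (`fq`) are cached independently of the main query, repeated narrowing by the same facet value is cheap, which is part of why faceted result lists render quickly.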

“Collaboration has always been a main pillar of our company, and the new document delivery tool, which was highly requested by our customers, provides the easiest way for our customers to work with someone outside their organization while maintaining complete control of their documents,” said Leonard Johnson, vice president of marketing and product management at NetDocuments. “Our users have always valued our powerful search tool, and switching to Solr not only provides them with a superior search experience, but gives us greater customization and scalability as we near the billion document mark.”

Lucid Imagination Combines Search, Analytics and Big Data to Tackle the Problem of Dark Data


Organizations today have little to no idea how much lost opportunity is hidden in the vast amounts of data they’ve collected and stored.  They have entered the age of total data overload driven by the sheer amount of unstructured information, also called “dark” data, which is contained in their stored audio files, text messages, e-mail repositories, log files, transaction applications, and various other content stores.  And this dark data is continuing to grow, far outpacing the ability of the organization to track, manage and make sense of it.

Lucid Imagination, a developer of search, discovery and analytics software based on Apache Lucene and Apache Solr technology, today unveiled LucidWorks Big Data. LucidWorks Big Data is the industry’s first fully integrated development stack that combines the power of multiple open source projects including Hadoop, Mahout, R and Lucene/Solr to provide search, machine learning, recommendation engines and analytics for structured and unstructured content in one complete solution available in the cloud.

With LucidWorks Big Data, Lucid Imagination equips technologists and business users to pilot Big Data projects using technologies such as Apache Lucene/Solr, Mahout and Hadoop in a cloud sandbox. Once the pilot proves out, the project can remain in the cloud, move on premise or run in a hybrid configuration. This lets organizations avoid the staggering overhead costs and long lead times associated with infrastructure and application development lifecycles before placing a Big Data solution into production.

The product is now available in beta. To sign up for inclusion in the beta program, visit http://www.lucidimagination.com/products/lucidworks-search-platform/lucidworks-big-data.

How big is the problem of dark data? The total amount of digital data in the world will reach 2.7 zettabytes in 2012, a 48 percent increase from 2011.* Ninety percent of this data will be unstructured, or "dark," data. Worldwide, 7.5 quintillion bytes of data, enough to fill more than 100,000 Libraries of Congress, are generated every day. Yet that deep volume of data can help predict the weather, uncover consumer buying patterns or even ease traffic problems, if it is discovered and analyzed proactively.

“We see a strong opportunity for search to play a key role in the future of data management and analytics,” said Matthew Aslett, research manager, data management and analytics, 451 Research. “Lucid’s Big Data offering, and its combination of large-scale data storage in Hadoop with Lucene/Solr-based indexing and machine-learning capabilities, provides a platform for developing new applications to tackle emerging data management challenges.”

Data analytics has traditionally been the domain of business intelligence technologies. Most of these tools, however, have been designed to handle structured data queried via SQL, and cannot easily tap into the broad range of data types that can be used in a Big Data application. With the announcement of LucidWorks Big Data, organizations will be able to utilize a single platform for their Big Data search, discovery and analytics needs. LucidWorks Big Data is the only complete platform that:

  • Combines the real-time, ad hoc data accessibility of LucidWorks (Lucene/Solr) with the compute and storage capabilities of Hadoop
  • Delivers commonly used analytic capabilities along with Mahout’s proven, scalable machine learning algorithms for deeper insight into both content and users
  • Tackles data both big and small with ease, seamlessly scaling while minimizing the impact of provisioning Hadoop, LucidWorks and other components
  • Supplies a single, coherent, secure and well-documented REST API for both application integration and administration
  • Offers fault tolerance with data safety baked in
  • Provides choice and flexibility via on-premise, cloud-hosted or hybrid deployment
  • Is tested, integrated and fully supported by the world’s leading experts in open source search
  • Includes powerful tools for configuration, deployment, content acquisition, security, and search experience, packaged in a convenient, well-organized application
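The Mahout component in the stack supplies the recommendation-engine capability mentioned above. To illustrate the underlying idea, here is a plain-Python sketch of item-based collaborative filtering, the family of algorithm Mahout scales out (this is a conceptual illustration, not Mahout's actual API):

```python
from math import sqrt


def cosine(u: dict, v: dict) -> float:
    """Cosine similarity between two rating vectors keyed by user id."""
    common = set(u) & set(v)
    if not common:
        return 0.0
    dot = sum(u[k] * v[k] for k in common)
    norm_u = sqrt(sum(x * x for x in u.values()))
    norm_v = sqrt(sum(x * x for x in v.values()))
    return dot / (norm_u * norm_v)


def recommend(ratings: dict, user: str, top_n: int = 3) -> list:
    """Score items the user has not rated by their similarity to items
    the user has rated. `ratings` maps item -> {user: rating}."""
    seen = {item for item, r in ratings.items() if user in r}
    scores = {}
    for item, r in ratings.items():
        if item in seen:
            continue
        scores[item] = sum(cosine(r, ratings[s]) for s in seen)
    return sorted(scores, key=scores.get, reverse=True)[:top_n]
```

In production, Mahout distributes the similarity computation across a Hadoop cluster, which is exactly the pairing the LucidWorks Big Data stack packages together.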

Lucid Imagination’s Open Search Platform uncovers real-time insights from any enterprise data, whether structured in databases, unstructured in formats such as emails or social channels, or semi-structured from sources such as websites.  The company’s rich portfolio of enterprise-grade solutions is based on the same proven open source Apache Lucene/Solr technology that powers many of the world’s largest e-commerce sites. Lucid Imagination’s on-premise and cloud platforms are quicker to deploy, cost less than competing products and are more easily tailored to specific needs than business intelligence solutions because they leverage innovation from the open source community.

“We’re allowing a broad set of enterprises to test and implement data discovery and analysis projects that have historically been the province of large multinationals with large data centers. Cloud computing and LucidWorks Big Data finally level the field,” said Paul Doscher, CEO of Lucid Imagination. “Large companies, meanwhile, can use our Big Data stack to reduce the time and cost associated with evaluating and ultimately implementing big data search, discovery and analysis. It’s their data – now they can actually benefit from it.”