Year over year, the legal press generates a steady stream of articles in its periodicals, journals, and blogs about the rising expense and burden of litigation, and about the rapidly changing world of litigation technology. The use of technology has become commonplace in litigation. Yet even as technology-related law gains in both admissibility and understanding, parties face increasing scrutiny over the tools they have used to identify, preserve, and collect electronically stored information (ESI).
Enter the latest technology wave to crash into the litigation world: Technology Assisted Review (TAR). The Blair and Maron study empirically established that humans are not nearly as accurate as they self-report when working with a heterogeneous set of documents spanning many data types and formats, or when relying on ad hoc keyword searching as the lone approach to identifying relevant ESI. That finding should be at the root of, and interwoven into, any evaluation of automated search technology and its accompanying tools with respect to their impact on litigation practice.
The 2012 judicial opinion approving the use of TAR will only expedite the acceptance and utilization of this methodology. At its core, TAR is a process for ranking or coding a collected corpus of ESI using a computerized system that captures the judgments of subject matter experts on a smaller set of documents and then extrapolates those judgments to the remaining documents in the collection.
How does this happen? Typically through one or more of the following methods:
1. Algorithms can be used to measure how similar (or dissimilar) the remaining documents are to those already coded as relevant to the litigation matter by the subject matter experts.
2. The system can derive a set of systematic rules that emulate the decision-making patterns of the expert reviewers.
3. In either case, TAR systems generally incorporate statistical and/or sampling techniques that guide and measure the overall process.
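The similarity-based approach in item 1 can be illustrated with a minimal sketch: build term-frequency vectors, average the vectors of the expert-coded relevant documents into a centroid, and rank the unreviewed documents by cosine similarity to that centroid. This is a hypothetical toy model in plain Python, not the algorithm of any particular TAR product; the document names, tokenizer, and scoring choices are all assumptions for illustration.

```python
from collections import Counter
import math

def vectorize(text):
    """Toy tokenizer: term-frequency vector for one document."""
    return Counter(text.lower().split())

def cosine(a, b):
    """Cosine similarity between two sparse term-frequency vectors."""
    dot = sum(a[t] * b[t] for t in a if t in b)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def rank_by_similarity(seed_relevant, unreviewed):
    """Rank unreviewed documents by similarity to the expert-coded seed set."""
    centroid = Counter()
    for doc in seed_relevant:
        centroid.update(vectorize(doc))
    scored = [(cosine(vectorize(d), centroid), d) for d in unreviewed]
    return sorted(scored, reverse=True)

# Hypothetical seed set coded "relevant" by subject matter experts
seed = ["merger agreement draft terms", "agreement on merger price"]
corpus = ["final merger agreement terms", "lunch menu for friday"]
ranking = rank_by_similarity(seed, corpus)
```

Documents most like the expert-coded examples rise to the top of the ranking, which is the intuition behind extrapolating a small seed set across a large collection.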
There is, importantly, an accepted trade-off for the practitioner between recall and precision. Tuning a system to retrieve more documents comes at an expense, figurative and literal: precision decreases, so more irrelevant documents appear in the result set. For this reason, practitioners and litigating parties would be best served by evaluating TAR methods and techniques in a wide range of cases, but particularly in large and complex litigation matters. For additional background and a glossary of terms, see Maura R. Grossman and Gordon V. Cormack, The Grossman-Cormack Glossary of Technology-Assisted Review, with Foreword by John M. Facciola, U.S. Magistrate Judge, 2013 Fed. Cts. L. Rev. 7 (January 2013).
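The trade-off above can be made concrete by computing precision and recall at different review depths in a ranked list: reviewing deeper retrieves more of the relevant documents (higher recall) but admits more irrelevant ones (lower precision). The labels below are made up purely for illustration.

```python
def precision_recall_at_k(ranked_labels, k):
    """Precision and recall after reviewing the top-k ranked documents.

    ranked_labels: booleans in ranked order, True = relevant.
    """
    hits = sum(ranked_labels[:k])
    precision = hits / k
    recall = hits / sum(ranked_labels)
    return precision, recall

# Hypothetical ranking: relevant documents cluster near the top, then tail off
labels = [True, True, False, True, False, False, True, False, False, False]

p5, r5 = precision_recall_at_k(labels, 5)     # shallow review (top 5)
p10, r10 = precision_recall_at_k(labels, 10)  # review everything
```

Reviewing all ten documents captures every relevant one (recall rises from 0.75 to 1.0), but precision falls from 0.6 to 0.4, which is the expense of "retrieving more documents" in miniature.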
A publication from The Sedona Conference, Navigating The Vendor Proposal Process (2007 ed.), p. 29, summarizes this best:
Technology is developing that will allow for electronic relevancy assessments and subject matter, or issue coding. These technologies have the potential to dramatically change the way electronic discovery is handled in litigation, and could save litigants millions of dollars in document review costs. Hand-in-hand with electronic relevancy assessment and issue coding, it is anticipated that advanced searching and retrieval technologies may allow for targeted collections and productions, thus reducing the volume of information involved in the discovery process.
The trajectory and sheer volume of growing data stores, the fluidity and variability of human language, individual differences among reviewers, and cost factors should together make this a required approach to litigation going forward.