TREC Legal Track


About the Legal Track

The goal of the Legal Track at the Text Retrieval Conference (TREC) is to assess the ability of information retrieval techniques to meet the needs of the legal profession for tools and methods capable of helping with the retrieval of electronic business records, principally for use as evidence in civil litigation. In the USA, this problem is referred to as "e-discovery." Like all TREC tracks, the Legal Track seeks to foster the development of a research community by providing a venue for shared development of evaluation resources ("test collections") and baseline results to which future results can be compared.

The Legal Track operates on an annual cycle that moves through three phases: task planning, experimentation, and reporting. Task planning occurs in the winter and spring, experimentation in the spring and summer, and reporting in the fall and winter. Registration is open to all interested research teams. Participants must register each year with NIST using a link that is typically available on the TREC Web site from mid-January to mid-April. Registration with NIST provides access to password-protected parts of NIST's TREC Web site and membership for one email address on a low-volume mailing list that NIST uses for announcements of interest to all tracks (e.g., TREC conference submission deadlines and registration procedures).

The Legal Track maintains a separate mailing list (https://mailman.umiacs.umd.edu/mailman/listinfo/trec-legal) as the principal online venue for announcements regarding the track and for discussion of issues related to the design of the Legal Track. The volume of messages on this mailing list varies throughout the year, but is generally fairly light. Anyone may join at any time, regardless of whether they intend to participate in the track in a specific year, and membership in the mailing list typically does not need to be renewed each year. Members have access to an archive of prior posts to the mailing list.

TREC is a research community in which the organizers and participants are volunteers. Participants agree in writing to abide by the agreement concerning dissemination of TREC results. Neither TREC nor the Legal Track exercises any other editorial control over oral or written statements made by track organizers, track participants or future users of the test collection. Moreover, a stated affiliation by an individual with an organization should not normally be interpreted to imply that organization's endorsement of statements made by that individual in the course of their academic research.

Participating research teams in any track of each year's TREC are invited to participate in an invitation-only conference for participants at NIST in Gaithersburg, MD at which research results from that year will be discussed. Working notes papers written by each research team in October are made available only to participating research teams at that time. The research results from each year's TREC tracks are then disseminated more broadly in writing on the TREC Web site in February of the following year.


2012 Legal Track

The TREC Legal Track will not run in 2012. A new data set will be made available shortly; however, it will not be possible to develop the necessary topics and gold standard, and to conduct the proposed experiment within the time constraints of TREC 2012. The data set, consisting of approximately one million email messages and attachments from a liquidated business enterprise, will be made available by the University of Maryland, subject to a usage agreement. The availability of this data set, as well as future plans to conduct experiments similar to those proposed for TREC 2012, will be announced at this location at a future date.

2012 Track Coordinators


2011 Legal Track (Reporting Phase)

In 2011, the Legal Track has a single task, referred to as the learning task, in which participating teams can use either an interactive or a fully automated process to perform review for responsiveness.

2011 Learning Task Test Collection

The goal of the 2011 learning task is to determine which documents (email messages or attachments, treated separately) should be produced in response to a production request for which a set of "training" relevance judgments is available. In 2011, participating teams can request training judgments on specific documents.
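
For teams getting started with the training judgments, the sketch below (Python) shows one way to load a judgments file and split it into relevant and nonrelevant document identifiers per topic. The four-column, TREC qrels-style layout (topic, iteration, document ID, relevance grade) and the file name are assumptions made for illustration; consult the task guidelines listed below for the authoritative file format.

    from collections import defaultdict

    def load_judgments(path):
        # Assumed layout: "topic iteration docid grade", whitespace separated;
        # grades above zero are treated as relevant (responsive).
        relevant = defaultdict(set)
        nonrelevant = defaultdict(set)
        with open(path) as f:
            for line in f:
                parts = line.split()
                if len(parts) != 4:
                    continue  # skip blank or malformed lines
                topic, _iteration, docid, grade = parts
                if int(grade) > 0:
                    relevant[topic].add(docid)
                else:
                    nonrelevant[topic].add(docid)
        return relevant, nonrelevant

    # Hypothetical usage: training labels for topic 401.
    # rel, nonrel = load_judgments("training_judgments.txt")
    # print(len(rel["401"]), len(nonrel["401"]))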

Resource | Source | URL
Guidelines | 2011 Learning Task | http://plg.uwaterloo.ca/~gvcormac/legal11/treclegal11.html
    http://trec-legal.umiacs.umd.edu/guidelines/topic401.pdf (Topic 401 topic-specific guidelines)
    http://trec-legal.umiacs.umd.edu/guidelines/topic402.pdf (Topic 402 topic-specific guidelines)
    http://trec-legal.umiacs.umd.edu/guidelines/topic403.pdf (Topic 403 topic-specific guidelines)
Complaints | 2009-2010 Learning Task | http://trec-legal.umiacs.umd.edu/topics/LT09_Complaint_J_final.pdf (topics 401, 402)
    http://trec-legal.umiacs.umd.edu/topics/LT10_Complaint_K_final-corrected.pdf (topic 403)
Production Requests ("Topics") | 2011 Learning Task Guidelines | http://plg.uwaterloo.ca/~gvcormac/legal11/legal11topics.txt
Document Collection | EDRM Data Set Project | corpora/trec/legal10/
Training Judgments | Available August 29, 2011 (to participants); available March 15, 2012 (to others)
Mop-Up Task Judgments | Available now to participants (password protected) and to others on March 15, 2012 | corpora/trec/legal11
Final Evaluation Judgments | Available September 30, 2011 (to participants); available March 15, 2012 (to others)
Evaluation Tools | 2010 Learning Task | corpora/trec/legal10-results/

2011 Track Coordinators


2010 Legal Track (Completed)

In 2010, the Legal Track included two tasks, an interactive task focused on end-to-end evaluation of an interactive process of review for responsiveness or privilege and a learning task focused on technology evaluation.

2010 Interactive Task Test Collection

The goal of the 2010 interactive task was to determine which document sets (where a set was defined to be an email message with its attachments) should be produced in response to a production request for which a "topic authority" was available to answer specific questions posed by a participating team.

Resource | Source | URL
Guidelines | 2010 Interactive Task | http://trec-legal.umiacs.umd.edu/guidelines/itg10_final.pdf
Complaint and Production Requests ("Topics") | 2010 Interactive Task | http://trec-legal.umiacs.umd.edu/topics/LT10_Complaint_K_final-corrected.pdf
Document Collection | EDRM Data Set Project | corpora/trec/legal10/
Detailed Relevance Judgment Guidelines | NIST TREC 2010 Data | http://trec.nist.gov/data/legal/10/AssessmentGuidelines_leg_int_2010.pdf
Evaluation Relevance Judgments | NIST TREC 2010 Data | http://trec.nist.gov/data/legal/10/qrel_leg_int_2010_msg_post.txt

2010 Learning Task Test Collection

The goal of the 2010 learning task was to determine which documents (email messages or attachments, treated separately) should be produced in response to a production request for which a set of "training" relevance judgments was available. In 2010, all participating teams used the same training judgments.

Resource | Source | URL
Guidelines | 2010 Learning Task | http://plg.uwaterloo.ca/~gvcormac/legal10/legal10.pdf
Complaint and Production Requests ("Topics") | 2009 Interactive Task | http://plg.uwaterloo.ca/~gvcormac/legal10/complaint-09.pdf
Document Collection | EDRM Data Set Project | corpora/trec/legal10/
Training Relevance Judgments ("Seed Set") | 2009 Interactive Task | corpora/trec/legal10/
Evaluation Relevance Judgments | 2010 Learning Task | corpora/trec/legal10-results/
Evaluation Tools | 2010 Learning Task | corpora/trec/legal10-results/

2010 Legal Track Results

Resource | Source | URL
TREC Track Overview Paper | TREC 2010 Proceedings | http://trec.nist.gov/pubs/trec19/papers/LEGAL10.OVERVIEW.pdf
TREC Papers from Participating Teams (Track Overview paper lists participating teams) | TREC 2010 Proceedings | http://trec.nist.gov/pubs/trec19/t19.proceedings.html
Detailed Learning Task Results | TREC 2010 Proceedings | http://trec.nist.gov/pubs/trec19/appendices/legal-learning.html (results)
    http://trec.nist.gov/pubs/trec19/appendices/legal.learning.pdf (run name key)

2010 Track Coordinators


2009 Legal Track (Completed)

In 2009, the Legal Track included two tasks, an interactive task focused on end-to-end evaluation of an interactive process of review for responsiveness or privilege and a batch task focused on technology evaluation.

2009 Interactive Task Test Collection

The goal of the 2009 interactive task was to determine which document sets (where a set was defined to be an email message with its attachments) should be produced in response to a production request for which a "topic authority" was available to answer specific questions posed by a participating team.

Resource | Source | URL
Guidelines | 2009 Interactive Task | http://trec-legal.umiacs.umd.edu/LT09_ITG_final.pdf
Complaint and Production Requests ("Topics") | 2009 Interactive Task | http://trec-legal.umiacs.umd.edu/LT09_Complaint_J_final.pdf
Document Collection | Clearwell and University of Maryland | Available from Doug Oard
Detailed Relevance Judgment Guidelines | 2009 Interactive Task | Available from Bruce Hedin
Relevance Judgments and Evaluation Tools | NIST TREC 2009 Data | http://trec.nist.gov/data/legal/09/evalInt09.zip

2009 Batch Task Test Collection

The goal of the 2009 batch task was to determine which documents (scanned business records) should be produced in response to a production request. Teams could optionally use a predefined set of "training" relevance judgments that were available.

Resource | Source | URL
Guidelines | 2009 Batch Task | http://trec-legal.umiacs.umd.edu/guidelines/batch09a.html
Complaints, Production Requests ("Topics") and Training Relevance Judgments | 2006-2008 Legal Tracks | http://trec.nist.gov/data/legal/09/BatchTopics2009.zip
Document Collection | IIT | http://ir.nist.gov/cdip/
    http://trec-legal.umiacs.umd.edu/guidelines/lewis2.txt (metadata description)
Reference Boolean Run | NIST TREC 2009 Data | http://trec.nist.gov/data/legal/09/refL09B.gz
Evaluation Relevance Judgments and Evaluation Tools | NIST TREC 2009 Data | http://trec.nist.gov/data/legal/09/resultsL09.zip

2009 Legal Track Results

Resource | Source | URL
TREC Track Overview Paper | TREC 2009 Proceedings | http://trec.nist.gov/pubs/trec18/papers/LEGAL09.OVERVIEW.pdf
TREC Papers from Participating Teams (Track Overview paper lists participating teams) | TREC 2009 Proceedings | http://trec.nist.gov/pubs/trec18/t18_proceedings.html
Detailed Interactive Task Results | TREC 2009 Proceedings | http://trec.nist.gov/pubs/trec18/appendices/app09int2.pdf (per-topic results)
    http://trec.nist.gov/pubs/trec18/appendices/legal.interactive.pdf (run name key)
    http://trec.nist.gov/pubs/trec18/appendices/app09int1.pdf (run descriptions)
Detailed Batch Task Results | TREC 2009 Proceedings and NIST TREC 2009 Data | http://trec.nist.gov/pubs/trec18/appendices/legal.results.html (results)
    http://trec.nist.gov/pubs/trec18/appendices/legal.batch.pdf (run name key)
    http://trec.nist.gov/data/legal/09/resultsL09.zip (score range summary)

2009 Track Coordinators


2008 Legal Track (Completed)

In 2008, the Legal Track included three tasks, an interactive task focused on end-to-end evaluation of an interactive process of review for responsiveness, a relevance feedback task focused on technology evaluation when some "training" relevance judgments are available, and an ad hoc task focused on technology evaluation when no training relevance judgments are available.

2008 Interactive Task Test Collection

The goal of the 2008 interactive task was to determine which documents (scanned business records) should be produced in response to a production request for which a "topic authority" was available to answer specific questions posed by a participating team.

Resource | Source | URL
Guidelines | 2008 Interactive Task | http://trec-legal.umiacs.umd.edu/guidelines/2008InteractiveGuidelines.pdf
Complaint and Production Requests ("Topics") | 2008 Interactive Task | http://trec-legal.umiacs.umd.edu/2008InteractiveTopics.pdf
Document Collection | IIT | http://ir.nist.gov/cdip/
    http://trec-legal.umiacs.umd.edu/guidelines/lewis2.txt (metadata description)
Topic-Specific Relevance Judgment Guidelines | NIST TREC 2008 Data | http://trec.nist.gov/data/legal/08/LegalInteractive_TopicGuidelines_2008.pdf
Pre-Adjudication Relevance Judgments and Appeals | 2008 Interactive Task | Available from Bruce Hedin
Adjudicated Relevance Judgments | NIST TREC 2008 Data | http://trec.nist.gov/data/legal/08/LT_Int_FinalAssessments.txt
Evaluation Tools | 2008 Interactive Task | Available from Bruce Hedin

2008 Relevance Feedback Task Test Collection

The goal of the 2008 relevance feedback task was to determine which documents (scanned business records) should be produced in response to a production request. Teams could optionally use a predefined set of "training" relevance judgments that were available.

Resource | Source | URL
Guidelines | 2008 Legal Track | http://trec-legal.umiacs.umd.edu/guidelines/adhocRF08b.html
Complaints, Production Requests ("Topics") and Training Relevance Judgments | 2006-2007 Legal Tracks | http://trec.nist.gov/data/legal/08/topicsRF08.zip
Document Collection | IIT | http://ir.nist.gov/cdip/
    http://trec-legal.umiacs.umd.edu/guidelines/lewis2.txt (metadata description)
Reference Boolean Run | NIST TREC 2008 Data | http://trec.nist.gov/data/legal/08/refRF08B.gz
Relevance Assessment Guide | 2008 Legal Track | http://trec-legal.umiacs.umd.edu/guidelines/howto2008.doc
Evaluation Relevance Judgments and Evaluation Tools | NIST TREC 2008 Data | http://trec.nist.gov/data/legal/08/resultsRF08.zip

2008 Ad Hoc Task Test Collection

The goal of the 2008 ad hoc task was to determine which documents (scanned business records) should be produced in response to a production request based on the complaint, the production request, and a Boolean query negotiation history.
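
As a rough illustration of what a negotiated Boolean query expresses, the sketch below (Python) evaluates a small nested AND/OR/NOT query against the term set of a single document. The query and document are invented for illustration; the actual negotiated queries in the topics (and the reference Boolean run listed below) also use operators, such as proximity and wildcards, that this sketch does not support.

    def matches(query, doc_terms):
        # A query is either a term (str) or a tuple ("AND"|"OR"|"NOT", subquery, ...).
        if isinstance(query, str):
            return query in doc_terms
        op, *args = query
        if op == "AND":
            return all(matches(q, doc_terms) for q in args)
        if op == "OR":
            return any(matches(q, doc_terms) for q in args)
        if op == "NOT":
            return not matches(args[0], doc_terms)
        raise ValueError("unknown operator: " + op)

    # Invented example: memos mentioning smoking or tobacco, excluding drafts.
    query = ("AND", "memo", ("OR", "smoking", "tobacco"), ("NOT", "draft"))
    print(matches(query, {"memo", "tobacco", "sales"}))   # True
    print(matches(query, {"memo", "draft", "smoking"}))   # False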

Resource | Source | URL
Guidelines | 2008 Legal Track | http://trec-legal.umiacs.umd.edu/guidelines/adhocRF08b.html
Complaints and Production Requests ("Topics") | 2008 Ad Hoc Task | http://trec.nist.gov/data/legal/08/topicsL08_v3.zip
Document Collection | IIT | http://ir.nist.gov/cdip/
    http://trec-legal.umiacs.umd.edu/guidelines/lewis2.txt (metadata description)
Reference Boolean Run | NIST TREC 2008 Data | http://trec.nist.gov/data/legal/08/refL08B.gz
Relevance Assessment Guide | 2008 Legal Track | http://trec-legal.umiacs.umd.edu/guidelines/howto2008.doc
Evaluation Relevance Judgments and Evaluation Tools | NIST TREC 2008 Data | http://trec.nist.gov/data/legal/08/resultsL08.zip

2008 Legal Track Results

Resource | Source | URL
TREC Track Overview Paper | TREC 2008 Proceedings | http://trec.nist.gov/pubs/trec17/papers/LEGAL.OVERVIEW08.pdf
TREC Papers from Participating Teams (Track Overview paper lists participating teams) | TREC 2008 Proceedings | http://trec.nist.gov/pubs/trec17/t17_proceedings.html
Detailed Interactive Task Results | TREC 2008 Proceedings | http://trec.nist.gov/pubs/trec17/appendices/legal.interactive.results.pdf
Detailed Relevance Feedback Task Results | TREC 2008 Proceedings and NIST TREC 2008 Data | http://trec.nist.gov/pubs/trec17/appendices/legal.relevance-feedback.results.html
    http://trec.nist.gov/pubs/trec17/appendices/legal.feedback.results.pdf (run name key)
    http://trec.nist.gov/data/legal/08/resultsRF08.zip (score ranges, reference runs)
    http://trec.nist.gov/data/legal/08/mediansRF08.zip (more score ranges)
Detailed Ad Hoc Task Results | TREC 2008 Proceedings and NIST TREC 2008 Data | http://trec.nist.gov/pubs/trec17/appendices/legal.adhoc.results.html
    http://trec.nist.gov/pubs/trec17/appendices/legal.adhoc.results.pdf (run name key)
    http://trec.nist.gov/data/legal/08/resultsL08.zip (score ranges, reference runs)
    http://trec.nist.gov/data/legal/08/mediansL08.zip (more score ranges)

2008 Track Coordinators


2007 Legal Track (Completed)

In 2007, the Legal Track included three tasks, a "main" task focused on technology evaluation when no training relevance judgments are available, a relevance feedback task in which some "training" relevance judgments are also available, and an interactive challenge task in which the goal was to balance underproduction and overproduction.

2007 Main Task Test Collection

The goal of the 2007 "main" task was to determine which documents (scanned business records) should be produced in response to a production request based on the complaint, the production request, and a Boolean query negotiation history. The 2007 main task is referred to in the Track Overview paper as the "ad hoc" task.

Resource | Source | URL
Guidelines | 2007 Legal Track | http://trec-legal.umiacs.umd.edu/guidelines/main07b.html
Complaints and Production Requests ("Topics") | NIST TREC 2007 Data | http://trec.nist.gov/data/legal/07/topicsL07_v1.zip
Document Collection | IIT | http://ir.nist.gov/cdip/
    http://trec-legal.umiacs.umd.edu/guidelines/lewis2.txt (metadata description)
Reference Boolean Run | NIST TREC 2007 Data | http://trec.nist.gov/data/legal/07/refL07B.gz
Relevance Assessment Guide | 2007 Legal Track | http://trec-legal.umiacs.umd.edu/guidelines/TRECLega2007l_HowToGuide_Version1.1.doc
Evaluation Relevance Judgments | NIST TREC 2007 Data | http://trec.nist.gov/data/legal/07/qrelsL07.normal (without probabilities)
    http://trec.nist.gov/data/legal/07/qrelsL07.probs (with probabilities)
Evaluation Tools | NIST TREC 2007 Data | http://trec.nist.gov/data/legal/07/l07_eval_v10.zip
    http://trec.nist.gov/data/legal/07/glossaryL07.html (naming conventions)

2007 Relevance Feedback Task Test Collection

The goal of the 2007 relevance feedback task was to determine which documents (scanned business records) should be produced in response to a production request based on the complaint, the production request, a Boolean query negotiation history, and some "training" documents for which relevance judgments are available.

Resource | Source | URL
Guidelines | 2007 Legal Track | http://trec-legal.umiacs.umd.edu/guidelines/main07b.html
    http://trec.nist.gov/pubs/trec16/papers/LEGAL.OVERVIEW16.pdf (task-specific details)
Complaints and Production Requests ("Topics") | 2006 Legal Track | http://trec-legal.umiacs.umd.edu/topics/trec_legal_eval_topics.v1.3.zip
Training Relevance Judgments | 2006 Legal Track | http://trec.nist.gov/data/legal06.html
Document Collection | IIT | http://ir.nist.gov/cdip/
    http://trec-legal.umiacs.umd.edu/guidelines/lewis2.txt (metadata description)
Reference Boolean Run | NIST TREC 2007 Data | http://trec.nist.gov/data/legal/07/input.refL06B.gz
Relevance Assessment Guide | 2007 Legal Track | http://trec-legal.umiacs.umd.edu/guidelines/TRECLega2007l_HowToGuide_Version1.1.doc
Evaluation Relevance Judgments | 2007 Legal Track | http://trec-legal.umiacs.umd.edu/qrels/qrelsL07.rf.zip
Evaluation Tools | NIST TREC 2007 Data | http://trec.nist.gov/data/legal/07/l07_eval_v10.zip

2007 Interactive Task Challenge Test Collection

The goal of the 2007 interactive challenge task was to perform machine-assisted human review to balance the cost of type 1 (false positive) and type 2 (false negative) errors when determining which documents (scanned business records) should be produced in response to a production request for which only the complaint and the production request are available.
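
The tradeoff can be made concrete with a small worked example. The sketch below (Python) computes precision (which overproduction, i.e. type 1 errors, reduces), recall (which underproduction, i.e. type 2 errors, reduces), and their harmonic mean F1 from hypothetical counts; F1 is shown only as one common way of balancing the two error types, not necessarily the official measure for this task.

    def review_metrics(tp, fp, fn):
        # tp: responsive documents produced; fp: nonresponsive documents produced
        # (type 1 errors); fn: responsive documents missed (type 2 errors).
        precision = tp / (tp + fp) if tp + fp else 0.0
        recall = tp / (tp + fn) if tp + fn else 0.0
        f1 = 2 * precision * recall / (precision + recall) if precision + recall else 0.0
        return precision, recall, f1

    # Hypothetical review: 8,000 documents produced, 6,000 of them responsive,
    # out of 10,000 truly responsive documents in the collection.
    print(review_metrics(tp=6000, fp=2000, fn=4000))  # (0.75, 0.6, 0.666...)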

Resource | Source | URL
Guidelines | 2007 Interactive Challenge Task | http://trec-legal.umiacs.umd.edu/guidelines/interactivetask.html
Complaints and Production Requests ("Topics") | 2007 Interactive Challenge Task | http://trec-legal.umiacs.umd.edu/topics/interactivetask.zip
    http://trec-legal.umiacs.umd.edu/topics/2007interactivetasktopics.doc (topic priority list)
Document Collection | IIT | http://ir.nist.gov/cdip/
    http://trec-legal.umiacs.umd.edu/guidelines/lewis2.txt (metadata description)
Relevance Assessment Guide | 2007 Legal Track | http://trec-legal.umiacs.umd.edu/guidelines/TRECLega2007l_HowToGuide_Version1.1.doc
Evaluation Relevance Judgments | TREC 2007 Legal Track | http://trec-legal.umiacs.umd.edu/qrels/qrelsL07.interactive.txt

2007 Legal Track Results

Resource | Source | URL
TREC Track Overview Paper | TREC 2007 Proceedings | http://trec.nist.gov/pubs/trec16/papers/LEGAL.OVERVIEW16.pdf
TREC Papers from Participating Teams (Track Overview paper lists participating teams) | TREC 2007 Proceedings | http://trec.nist.gov/pubs/trec16/t16_proceedings.html
Detailed Main Task Results | TREC 2007 Proceedings | http://trec.nist.gov/pubs/trec16/appendices/legal.main.results.html
    http://trec.nist.gov/pubs/trec16/appendices/legal.main.pdf (run name key)
    http://trec.nist.gov/data/legal/07/mediansL07.zip (range of scores)
    http://trec.nist.gov/data/legal/07/refL07B.eval (Boolean reference)
Detailed Relevance Feedback Task Results | TREC 2007 Proceedings | http://trec.nist.gov/pubs/trec16/appendices/legal.relevance.feedback.results.html
    http://trec.nist.gov/pubs/trec16/appendices/legal.rel-feedback.pdf (run name key)
Detailed Interactive Task Results | TREC 2007 Proceedings | http://trec.nist.gov/pubs/trec16/appendices/legal/interactive.pdf
    http://trec.nist.gov/pubs/trec16/appendices/legal.interactive.pdf (run name key)

2007 Track Coordinators


2006 Legal Track (Completed)

In 2006, the Legal Track included one task, which was focused on technology evaluation when no training relevance judgments are available.

2006 Legal Track Test Collection

The goal of the 2006 Legal Track was to determine which documents (scanned business records) should be produced in response to a production request based on the complaint, the production request, and a Boolean query negotiation history.

Resource | Source | URL
Guidelines | 2006 Legal Track | http://trec-legal.umiacs.umd.edu/guidelines/guidelines6.txt
Complaints and Production Requests ("Topics") | 2006 Legal Track | http://trec-legal.umiacs.umd.edu/topics/trec_legal_eval_topics.v1.3.zip
Document Collection | IIT | http://ir.nist.gov/cdip/
    http://trec-legal.umiacs.umd.edu/guidelines/lewis2.txt (metadata description)
Practice Materials | 2006 Legal Track | http://trec-legal.umiacs.umd.edu/topics/TobaccoComplaintJan_23_2006.doc (practice complaint)
    http://trec-legal.umiacs.umd.edu/topics/rule34.trainingtopics.finalJan23.doc (practice topics)
    http://trec-legal.umiacs.umd.edu/topics/TrainingTopics.xml (practice topics in XML)
    http://trec-legal.umiacs.umd.edu/topics/booleanexample.rtf (practice Boolean queries)
Relevance Assessment Guide | 2006 Legal Track | http://trec-legal.umiacs.umd.edu/guidelines/TRECLegal_HowToGuide_Version4Final.doc
Evaluation Relevance Judgments and Evaluation Tools | NIST TREC 2006 Data | http://trec.nist.gov/data/legal/06/qrels.legal06

2006 Legal Track Results

Resource | Source | URL
TREC Track Overview Paper | TREC 2006 Proceedings | http://trec.nist.gov/pubs/trec15/papers/LEGAL06.OVERVIEW.pdf
TREC Papers from Participating Teams (Track Overview paper lists participating teams) | TREC 2006 Proceedings | http://trec.nist.gov/pubs/trec15/t15_proceedings.html
Detailed Legal Track Results | TREC 2006 Proceedings | http://trec.nist.gov/pubs/trec15/appendices/legal.results.html
    http://trec.nist.gov/pubs/trec15/appendices/legal.results.html (run name key)

2006 Track Coordinators


Published Research using Legal Track Test Collections

The test collections produced in the Legal Track are freely available for research and commercial use under the conditions indicated in the distribution package for each test collection. Research publications that we are aware of that are based at least in part on TREC Legal Track test collections (other than papers in the TREC Proceedings) are listed here.
  1. A. Arampatzis, J. Kamps and S. Robertson, Where to Stop Reading in a Ranked List?: Threshold Optimization Using Truncated Score Distributions, Proceedings of the 32nd International ACM SIGIR Conference on Research and Development in Information Retrieval (SIGIR), Boston, MA, July (2009)
  2. R. Bauer, D. Brassil, C. Hogan, G. Taranto and J.S. Brown, Impedance Matching of Humans and Machine in High-Q Information Retrieval Systems, IEEE International Conference on Systems, Man and Cybernetics (SMC), San Antonio, TX, USA, October (2009)
  3. D. Brassil, C. Hogan and S. Attfield, The Centrality of User Modeling to High Recall with High Precision Search, IEEE International Conference on Systems, Man and Cybernetics (SMC), San Antonio, TX, USA, October (2009)
  4. H. Chu, Factors affecting relevance judgment: A report from TREC Legal Track, Journal of Documentation, 67(2)264-278 (2011)
  5. M. Grossman and G. Cormack, Technology-Assisted Review in E-Discovery Can Be More Effective and More Efficient Than Exhaustive Manual Review, Richmond Journal of Law and Technology, 17(3), Spring (2011)
  6. M. Grossman and G. Cormack, Inconsistent Assessment of Responsiveness in E-Discovery: Difference of Opinion or Human Error?, ICAIL 2011 Workshop on Setting Standards for Searching Electronically Stored Information in Discovery Proceedings (DESI IV), Pittsburgh, PA, USA, June (2011)
  7. B. Hedin and D. Oard, Replication and Automation of Expert Judgments: Information Engineering in Legal E-Discovery, IEEE International Conference on Systems, Man and Cybernetics (SMC), San Antonio, TX, USA, October (2009)
  8. C. Hogan, R. Bauer, and D. Brassil, Automation of Legal Sensemaking in E-Discovery, Artificial Intelligence and Law 18(4)431-457 (2011).
  9. C. Hogan, D. Brassil and M. Marcus, Human Aided Computer Assessment for Exhaustive Search, IEEE International Conference on Systems, Man and Cybernetics (SMC), San Antonio, TX, USA, October (2009)
  10. A. Kontostathis and S. Kulp, The Effect of Normalization when Recall Really Matters, Proceedings of the International Conference on Information and Knowledge Engineering (IKE), Las Vegas, NV, USA, July (2008)
  11. D. Oard, J. Baron, B. Hedin, D. Lewis and S. Tomlinson, Evaluation of Information Retrieval for E-Discovery, Artificial Intelligence and Law 18(4)347-386 (2011).
  12. J. Parapar, A. Freire and A. Barreiro, Revisiting N-Gram Based Models for Retrieval in Degraded Large Collections, Proceedings of the 31st European Conference on IR Research (ECIR), Toulouse, France, April (2009)
  13. V. Rangan, Discovery of Related Terms in a Corpus using Reflective Random Indexing, ICAIL 2011 Workshop on Setting Standards for Searching Electronically Stored Information in Discovery Proceedings (DESI IV), Pittsburgh, PA, USA, June (2011)
  14. S. Tomlinson and B. Hedin, Measuring Effectiveness in the TREC Legal Track, in M. Lapu, K. Mayer, J. Tait and A. Trippe (eds.), Current Challenges in Patent Information Retrieval, Springer (2011)
  15. J. Wang, Accuracy, Agreement, Speed and Perceived Difficulty of Users' Relevance Judgments for E-Discovery, SIGIR 2011 Information Retrieval for E-Discovery (SIRE) Workshop, Beijing, China, July (2011)
  16. J. Wang and D. Soergel, A User Study of Relevance Judgments for E-Discovery, Proceedings of the Annual Meeting of the American Society for Information Science and Technology (ASIST), Pittsburgh, PA, USA, October (2010)
  17. W. Webber, D. Oard, F. Scholer and B. Hedin, Assessor error in stratified evaluation, Proceedings of the 19th ACM Conference on Information and Knowledge Management (CIKM), Toronto, ON, Canada, pp. 539-548, October (2010)
  18. W. Webber, Re-examining the Effectiveness of Manual Review, SIGIR 2011 Information Retrieval for E-Discovery (SIRE) Workshop, Beijing, China, July (2011)
  19. F. Zhao, D. Oard and J. Baron, Improving Search Effectiveness in the Legal E-Discovery Process Using Relevance Feedback, ICAIL 2009 Global E-Discovery/E-Disclosure Workshop (DESI III), Barcelona, Spain, June (2009)

Other Materials

Additional materials related to the Legal Track are listed here in most-recent-first order. Some of these materials are outdated and are included only for archival purposes.
The old Legal Track Web page is still available, but it is no longer being updated.
Last modified: Thu May 10 23:11:35 2012
Doug Oard oard@umd.edu