Document Review Validation Survey

Quick Note on Terminology

In this survey, we're defining "TAR" as "traditional TAR": categorization (TAR 1.0) or active learning (TAR 2.0). We're using "GenAI" to mean a Generative AI-based review process (e.g., aiR or eDiscovery AI). In many important respects, GenAI can also be considered a form of TAR, but here we're using "TAR" to mean traditional TAR only, to avoid confusion.

1. When reviewing for production, how many RFPs are you typically responding to, on average?
2. When a list of RFPs is converted into a review protocol (i.e., when writing instructions for the review team), the RFPs are generally summarized into a short list of responsive categories (or issues). In other words, 50 RFPs might be "collectivized" into, say, 10 "issues". How many issues are reviewers typically reviewing for?
3. Before running a TAR- or GenAI-based review, do you typically apply search terms to limit the population?
4. Which of the below have you validated using recall and precision? (These metrics are sketched after the question list.)
5. What is your primary method of validating results from Keyword Searches?
6. What is your primary method of validating results from TAR 1.0?
7. What is your primary method of validating results from TAR 2.0?
8. What is your primary method of validating results from a GenAI review?
9. Have you ever used an elusion rate, as a single data point, to validate a project? (Elusion is also covered in the sketch below.)
10. What is your primary review methodology?
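
For reference on the metrics in questions 4 and 9, here is a minimal Python sketch of how recall, precision, and elusion rate are commonly computed from a coded validation sample. The function names and example counts are illustrative assumptions, not a prescribed protocol; in practice the counts come from human coding of random samples.

# Minimal sketch (illustrative assumptions): validation metrics from a coded sample.
# tp = sample docs predicted responsive and coded responsive
# fp = sample docs predicted responsive but coded non-responsive
# fn = sample docs predicted non-responsive but coded responsive

def recall(tp: int, fn: int) -> float:
    # Share of the truly responsive documents that the process found.
    return tp / (tp + fn) if (tp + fn) else 0.0

def precision(tp: int, fp: int) -> float:
    # Share of the documents marked responsive that actually are.
    return tp / (tp + fp) if (tp + fp) else 0.0

def elusion_rate(responsive_found: int, discard_sample_size: int) -> float:
    # Share of a random sample drawn from the discard (non-produced) set
    # that is coded responsive, i.e., what "eluded" the review.
    return responsive_found / discard_sample_size if discard_sample_size else 0.0

# Hypothetical example: 380 true positives, 95 false positives, 20 false
# negatives, and 12 responsive documents found in a 500-document sample
# of the discard set.
print(recall(380, 20))        # 0.95
print(precision(380, 95))     # 0.8
print(elusion_rate(12, 500))  # 0.024, i.e., 2.4% elusion

One caveat relevant to question 9: a low elusion rate by itself does not establish high recall, because the number of missed documents also depends on the size of the discard set; a 2.4% elusion rate over a very large discard pile can still represent many unproduced responsive documents. That is why the question asks about relying on it as a single data point.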