Augmented Intelligence (AL) is “a subsection of AI machine learning developed to enhance human intelligence rather than operate independently of or outright replace it. It is designed to do so by improving human decision-making and, by extension, actions taken in response to improved decisions.” In this sense, users are supported, not replaced, in the decision-making process by the filtering capabilities of Augmented Intelligence solutions; the final decision is always made by the users, who remain accountable for their actions.
In this field, Technology-Assisted Review systems (TARS) adopt a human-in-the-loop approach in which classification and/or ranking algorithms are continuously trained on the relevance feedback of expert reviewers, until a substantial number of the relevant documents have been identified. This approach has been shown to be more effective and more efficient than traditional e-discovery and systematic review practices, which typically consist of a mix of keyword search and manual review of the search results.
Given these premises, ALTARS focuses on high-recall Information Retrieval (IR) systems, which tackle challenging tasks that require finding (nearly) all the relevant documents in a collection. Electronic discovery (eDiscovery) and systematic review systems are probably the most prominent examples of such systems, since they must search for relevant information under limited resources, such as time and money.
CALL FOR PAPERS ALTARS 2022
In this workshop, we aim to assess the effectiveness of these systems, which is a research challenge in itself. In fact, despite the number of evaluation measures at our disposal for assessing the effectiveness of a "traditional" retrieval approach, TAR systems introduce additional dimensions of evaluation.
For example, an effective high-recall system should be able to find the majority of relevant documents using the least number of assessments. However, this type of evaluation disregards the resources used to achieve this goal, such as the total time spent on those assessments or the amount of money spent on the experts judging the documents.
The topics include, but are not restricted to:
- Novel evaluation approaches and measures for e-Discovery;
- Novel evaluation approaches and measures for Systematic reviews;
- Reproducibility of experiments with test collections;
- Design and evaluation of interactive high-recall retrieval systems;
- Study of evaluation measures;
- User studies in high-recall retrieval systems;
- Novel evaluation protocols for continuous Active Learning;
- Evaluation of sampling bias.
Research papers describing original ideas on the listed topics and on other fundamental aspects of Technology-Assisted Review methodologies and technologies are solicited. Moreover, short papers on early research results, new results on previously published works, and extended abstracts on previously published works are also welcome. Research papers presenting original work should be 7-8 pages long, short papers 5-6 pages, and extended abstracts 3-4 pages. For all submission types, references are not counted in the page limit. Papers must be in the CEUR-ART single-column style.
The accepted papers will be published in the ALTARS 2022 Proceedings by CEUR-WS, which is gold open access and indexed by SCOPUS and DBLP.
Authors must submit their papers via EasyChair: https://easychair.org/conferences/?conf=altars2022.
- Giorgio Maria Di Nunzio, University of Padua (Italy)
- Evangelos Kanoulas, University of Amsterdam (The Netherlands)
- Prasenjit Majumder, DAIICT, Gandhinagar and TCG CREST, Kolkata (India)
- Amanda Jones, Lighthouse (USA)
- Parth Mehta, Parmonic (USA)
- Doug Oard, University of Maryland (USA)
- Jeremy Pickens, OpenText (USA)
- Fabrizio Sebastiani, CNR-ISTI (Italy)
- Mark Stevenson, University of Sheffield (UK)
- Jyothi Vinjumar, Walmart (USA)
The DoSSIER project: Domain Specific Systems for Information Extraction and Retrieval
Prof. Allan Hanbury
A Stopping Rule for Technology Assisted Reviews Based on Classification and Counting Processes
Reem Bin Hezam and Mark Stevenson
Relevance-specific clustering in predictive coding
John Tredennick and William Webber
Transferring knowledge between topics in Systematic Reviews
Alessio Molinari and Evangelos Kanoulas
Evaluation of Automated Citation Screening with Normalised Work Saved over Sampling: an Analysis
Wojciech Kusa, Petr Knoth and Allan Hanbury
TAR: Current Controversies and Open Research Questions
Dave Lewis and Jeremy Pickens
International Collaboration for the Automation of Systematic Reviews (ICASR)