Search engines are often used to complete complex information tasks. Such tasks cannot be expressed with a single query and may comprise multiple sub-tasks that need to be completed; the user's information need may change over the course of the task, learning effects take place, and users may not know exactly what they are looking for or how to formulate their information need.
Information Retrieval research has traditionally focused on serving the best results for a single query, so-called ad hoc retrieval. However, users typically search iteratively, refining and reformulating their queries during a session. A key challenge in the study of this interaction is the creation of suitable evaluation resources to assess the effectiveness of IR systems over whole sessions.
The workshop will attempt to bridge the TREC Session and Task evaluation exercises, with the goal of evaluating system performance over an entire session, keeping the “user” in the loop.
Given the history of user interactions with a search engine, develop algorithms that account for these interactions to improve the search results for a current query.
Given a query, develop algorithms that can identify the underlying user task and its possible sub-tasks, and retrieve documents that help complete the overall task.
Given a complex task, develop algorithms that interact with the user to work toward completing the task.
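The first of these challenges can be made concrete with a toy sketch: a session-aware query expansion step that up-weights terms from the user's earlier queries in the session, with more recent queries counting more. The function name, the exponential decay scheme, and the parameter values are illustrative assumptions for exposition, not a TREC Session baseline.

```python
from collections import Counter

def expand_query(session_queries, current_query, decay=0.8, top_k=3):
    """Illustrative session-aware expansion (not a TREC Session baseline):
    add the top-k terms from the session history, weighted by recency."""
    weights = Counter()
    # Older queries contribute less, via exponential decay in query age.
    for age, query in enumerate(reversed(session_queries), start=1):
        for term in query.lower().split():
            weights[term] += decay ** age
    current_terms = set(current_query.lower().split())
    # Keep only history terms not already present in the current query.
    extra = [t for t, _ in weights.most_common() if t not in current_terms]
    return current_query.lower().split() + extra[:top_k]

# Example session: two earlier queries about a trip to Paris.
history = ["cheap flights paris", "paris hotels"]
print(expand_query(history, "museum tickets"))
```

A real system would of course use ranked retrieval with weighted expansion terms (e.g. a relevance-model style interpolation) rather than bag-of-words concatenation; the sketch only shows how interaction history can inform the current query.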