
Text Information Management and Analysis Group



Project: User Modeling and Simulation

Computationally modeling a user of an intelligent system is required in order to optimize the system's service to the user. If we are to optimize the collaboration of an intelligent system with a user, the objective function to be optimized by the system must include a mathematical description of the user the system intends to interact with. Such a description covers many aspects of the user, e.g., the user's task, the user's need, the user's preferences, the user's context, the user's cognitive state, and how the user would respond to various interactions provided by the system. Clearly, in order for a system to provide effective personalized service to each individual user, it has to model not only the "average user" but also the variations among users, so as to obtain an accurate mathematical model of each individual user in each interaction context.

Due to limited research on user modeling and user simulation, in developing the current generation of intelligent systems we generally have to make many unrealistic simplifying assumptions about users in order to formalize the interaction problem and make optimization of interaction with users tractable. The next generation of intelligent systems must use more sophisticated user models for optimizing their interaction with users, ideally in a personalized and adaptive way; this requires new research on user modeling and user simulation. User modeling and simulation are closely related: a computational model of a user allows us to build a user simulator that simulates how a user would behave when interacting with a system, whereas a user simulator can be regarded as computationally defining a (complete) user model.

Our interest in user modeling and simulation is also due to the need for user simulators in evaluating interactive intelligent systems with reproducible experiments. The current evaluation methods used for many empirically defined tasks (e.g., many information retrieval, machine learning, and natural language processing tasks) are generally based on the Cranfield evaluation methodology developed by information retrieval researchers in the 1960s, which introduced the widely known measures of precision and recall (see Cleverdon's Cranfield paper) and suggested that a test collection can be reused to evaluate the effectiveness of any component in a system and produce reproducible experiment results.
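
As a minimal illustration of this style of evaluation, the sketch below (in Python, with made-up document identifiers and relevance judgments rather than actual Cranfield data) scores a ranked result list against a set of reusable relevance judgments using precision and recall.

# Minimal sketch of test collection-based evaluation: score a ranked result
# list against reusable relevance judgments (all data here is illustrative).

def precision_recall(ranked_results, relevant_docs, cutoff=10):
    """Compute precision and recall at the given rank cutoff."""
    retrieved = ranked_results[:cutoff]
    hits = sum(1 for doc_id in retrieved if doc_id in relevant_docs)
    precision = hits / len(retrieved) if retrieved else 0.0
    recall = hits / len(relevant_docs) if relevant_docs else 0.0
    return precision, recall

# Hypothetical judgments and system output for one query.
judgments = {"d3", "d7", "d9"}                 # documents judged relevant
system_ranking = ["d7", "d1", "d3", "d5", "d8"]

p, r = precision_recall(system_ranking, judgments, cutoff=5)
print(f"Precision@5 = {p:.2f}, Recall = {r:.2f}")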

However, a limitation of the traditional test collection-based evaluation method is that it cannot be used to evaluate interactive systems, which are instead often evaluated via small-scale user studies or online A/B testing. When a real user is used to evaluate an interactive system, the experiment results are not reproducible: after a user has interacted with one system to perform a task, the user's cognitive state has changed, so even if the same user could interact with another system to perform the same task, the user's behavior would not be exactly the same as when performing the task the first time. It is thus clear that in order to obtain reproducible experiment results, we must control the user in some way, and user simulation allows us to achieve this goal. With a user simulator, any system can be evaluated by having the system interact with a simulated user and measuring the overall performance of the system in terms of task completion and the effort made by the user.
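
To make the simulation-based evaluation idea more concrete, here is a minimal toy sketch (our own illustrative design, not the actual framework from the papers cited on this page): a simulated user with a fixed information need interacts with a system, and the system is scored by task completion and the effort expended.

# Illustrative only: a simulated user interacts with a toy retrieval system,
# and the system is scored by task completion and user effort.

class ToySystem:
    """A stand-in retrieval system: returns canned results per query."""
    def __init__(self, index):
        self.index = index                      # query -> ranked doc ids

    def search(self, query):
        return self.index.get(query, [])

class SimulatedUser:
    """A toy simulated user with a fixed information need."""
    def __init__(self, relevant_docs, queries):
        self.relevant_docs = set(relevant_docs)
        self.queries = list(queries)            # queries tried in order

    def next_query(self, turn):
        return self.queries[turn % len(self.queries)]

    def judge(self, results):
        return {d for d in results if d in self.relevant_docs}

def evaluate(system, user, max_turns=5):
    found, effort = set(), 0
    for turn in range(max_turns):
        results = system.search(user.next_query(turn))
        effort += 1 + len(results)              # one query plus docs examined
        found |= user.judge(results)
        if found == user.relevant_docs:         # task completed
            break
    return {"task_completion": len(found) / len(user.relevant_docs),
            "effort": effort}

system = ToySystem({"q1": ["d1", "d3"], "q2": ["d2", "d4"]})
user = SimulatedUser(relevant_docs={"d3", "d4"}, queries=["q1", "q2"])
print(evaluate(system, user))

Because the simulated user is fully determined by its code and parameters, the same experiment can be rerun on any number of systems with identical user behavior, which is exactly what reproducibility requires.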

We are especially interested in developing interpretable user simulators because such simulators have parameters that can be interpreted meaningfully as reflecting natural variations among real users. This enables us to vary the parameters of a simulator to simulate many different users. Another benefit of interpretable user simulators is that they can be used to analyze the behavior of real users: by fitting a user model (user simulator) to the observed behaviors of a user, we obtain estimated parameter values that can then be interpreted to understand the user's behavior.
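
As a sketch of what fitting an interpretable simulator to observed behavior could look like, the toy model below has a single "patience" parameter (the probability that the user examines one more result); both the model and its closed-form fit are our own illustrative choices, not a method from the work described here.

# Toy example of fitting an interpretable simulator parameter to observed
# user behavior (the model and the parameter name are illustrative only).
# Assumed model: a user examines result k+1 with probability `patience`,
# so observed scan depths follow a geometric distribution.

def fit_patience(scan_depths):
    """Maximum-likelihood estimate of the continuation probability."""
    n, total = len(scan_depths), sum(scan_depths)
    return 1.0 - n / total          # closed-form MLE for the geometric model

# Hypothetical observed scan depths (how many results each user examined).
observed_depths = [1, 3, 2, 5, 4, 2, 3, 1, 2, 4]
patience = fit_patience(observed_depths)
print(f"Estimated patience = {patience:.2f}")

The estimated value is directly interpretable (roughly, the chance the user reads one more result), and varying it yields a family of simulated users ranging from impatient to thorough.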

Finally, user simulation can also be leveraged to generate synthetic data for training machine learning algorithms, especially reinforcement learning algorithms for optimizing an interactive system.
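
The sketch below illustrates this synthetic-data idea under assumed, made-up definitions of state, action, and reward: roll out a simulated user and log (state, action, reward) tuples that a reinforcement learning algorithm could later train on.

import random

# Illustrative only: generate synthetic interaction episodes from a simulated
# user, producing (state, action, reward) tuples for reinforcement learning.

def generate_episodes(num_episodes=100, seed=0):
    rng = random.Random(seed)
    data = []
    for _ in range(num_episodes):
        state = {"turns_so_far": 0, "docs_found": 0}
        for step in range(5):
            action = rng.choice(["show_results", "ask_clarification"])
            # The simulated user "responds": a toy reward of 1 if a relevant
            # document is found, 0 otherwise.
            reward = 1 if action == "show_results" and rng.random() < 0.4 else 0
            data.append((dict(state), action, reward))
            state["turns_so_far"] += 1
            state["docs_found"] += reward
    return data

episodes = generate_episodes(10)
print(f"Generated {len(episodes)} (state, action, reward) tuples")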

Our work in this area was mainly done by Sahiti Labhishetty in her dissertation, which includes simulation of query formulation using a Precision-Recall-Effort optimization framework, simulation of cognitive user behavior of e-commerce search users, and evaluation of user simulators using a Tester-based approach (see Sahiti's publication page for more information). This ACM ICTIR 2017 paper laid out the theoretical foundation for using user simulation for evaluation of interactive systems.

For more information about our research in this area, check out ChengXiang Zhai's keynote talk at NTCIR 16 (2022) on "Information Retrieval Evaluation as Search Simulation".

Also, check out the book (draft) on User Simulation for Evaluating Information Access Systems by Krisztian Balog and ChengXiang Zhai.

Our current work on user modeling and simulation is mostly in the context of Research Strand 2 of the INVITE Institute (funded as one of the NSF AI Institutes).