Ritter is a professor of IST, of Psychology, and of CSE at Penn State. He researches the development, application, and methodology of cognitive models, particularly as applied to interface design, predicting the effect of behavioral moderators, and understanding and assisting learning.
He is working on projects that apply cognitive theories to tutors, and also on a project modifying the ACT-R cognitive architecture to include a physiological substrate. He has created extensions to several architectures, including eyes and hands for models, as well as compilers that make cognitive models easier to create. Recently, his lab created one of the largest ACT-R models, with over 5,000 rules. He has helped develop CoJACK as an exploration of how to create models quickly.
He has helped write and edit several books, including one on applying cognitive models in synthetic environments (HSIAC, 2003) and one on order effects in learning (Oxford, 2007). He contributed to a National Research Council report on how to use cognitive models to improve human-system design (Pew & Mavor, 2007), and recently published a book on what psychology systems designers (and modelers) need to know (Springer, 2014) with Gordon Baxter (an HF/UX consultant) and Elizabeth Churchill, Director of User Experience at Google. The book attempts to build a model of the user in the reader's head; it also provides foundational psychology knowledge for building simulations of human behavior. He is an associate editor of Human Factors and of IEEE Transactions on Human-Machine Systems, and he edits the Oxford Series on Cognitive Models and Architectures.
Talk title: Some Futures for Cognitive Modeling and Architectures
Abstract: Recent developments in computer science, software engineering, and simulation have provided new opportunities for modeling human behavior. I'll note several approaches that we and others are pursuing, and that you can pursue too. These include: (a) Interacting directly with the screen-as-world. It is now possible for models to interact with uninstrumented interfaces, both on the machine the model is running on and on remote machines. One implication is that this will force models to have more knowledge about interaction, an area that has been little modeled but is essential for all tasks. (b) Providing a physiological substrate to implement behavioral moderators' effects on cognition. Cognitive architectures can now be broader in the measurements they predict and correspond to. This approach provides a more complete and theoretically appropriate way to include stressor effects and emotions in models. (c) Fitting models to data using genetic algorithms. This can lead to model overfitting, but it can also be seen as a way to understand and predict how people differ in their underlying parameters, using a multi-variable, non-linear, stochastic, multiple-output regression (i.e., model fitting). It can also lead to a greater understanding of our models.
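The genetic-algorithm fitting mentioned in (c) can be sketched in a few lines. This is a minimal illustration only, not the speaker's actual tooling: it fits the two parameters of a hypothetical power-law practice model (response time = a * trial^-b) to synthetic data, and all names, ranges, and GA settings here are assumptions chosen for the example.

```python
# Minimal genetic-algorithm parameter fit (illustrative sketch).
# A real cognitive model would replace model() below; the GA only
# needs a fitness function mapping parameters to goodness of fit.
import random

random.seed(0)  # deterministic run for the example

def model(a, b, trial):
    """Hypothetical power law of practice: RT shrinks with trials."""
    return a * trial ** -b

# Synthetic "observed" data generated with true parameters a=2.0, b=0.5
data = [(t, model(2.0, 0.5, t)) for t in range(1, 21)]

def fitness(params):
    """Negative sum of squared errors: higher is a better fit."""
    a, b = params
    return -sum((model(a, b, t) - rt) ** 2 for t, rt in data)

def fit_ga(pop_size=40, generations=60):
    # Random initial population of (a, b) candidates
    pop = [(random.uniform(0.1, 5.0), random.uniform(0.01, 2.0))
           for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        survivors = pop[: pop_size // 2]      # keep the better half
        children = []
        while len(survivors) + len(children) < pop_size:
            pa, pb = random.sample(survivors, 2)
            # Crossover: take each gene from a random parent
            child = (random.choice((pa[0], pb[0])),
                     random.choice((pa[1], pb[1])))
            # Mutation: small Gaussian perturbation of each gene
            child = tuple(g + random.gauss(0, 0.05) for g in child)
            children.append(child)
        pop = survivors + children
    return max(pop, key=fitness)

best_a, best_b = fit_ga()
```

On noiseless synthetic data the recovered parameters land near the generating values; with real human data, the spread of recovered parameters across participants is what gives the individual-differences reading described in the abstract.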