A tutorial on DynaSearch: A Web-based system for collecting process-tracing data in dynamic decision tasks
This tutorial describes DynaSearch, a Web-based system that supports process-tracing experiments on coupled-system dynamic decision-making tasks. A major need in these tasks is to examine the process by which decision makers search over a succession of situation reports for the information they need in order to make response decisions. DynaSearch provides researchers with the ability to construct and administer Web-based experiments containing both between- and within-subjects factors. Information search pages record participants’ acquisition of verbal, numeric, and graphic information. Questionnaire pages query participants’ recall of information, inferences from that information, and decisions about appropriate response actions. Experimenters can access this information in an online viewer to verify satisfactory task completion and can download the data in comma-separated text files that can be imported into statistical analysis packages.
Keywords: Process tracing · Dynamic decision making · Web-based experiments
Dynamic decision making (DDM) is an important theoretical and practical problem that has received increasing attention from mathematical modelers and experimental researchers since it was initially addressed by Edwards (1962) and Toda (1962). More recently, there have been attempts to distinguish among different types of dynamic decision models (Busemeyer & Pleskac, 2009; Gonzalez, 2012; Osman, 2010; Peebles & Banks, 2010) and the psychological processes associated with those models. Most DDM studies have examined the processes involved in the control of a complex single system—the attempt to manage inputs in a way that achieves desired outputs (see Gonzalez, Fakhari, & Busemeyer, 2017, for a recent review). However, the single-system model that has been used to guide these system control studies does not adequately represent a coupled system situation that is common in environmental emergencies where information about the evolving threat from an environmental system is used to make decisions about managing the response by an affected social system. This distinction between a single system and a coupled system is important when the two systems have different dynamic properties and, especially, when one system can be controlled but the other cannot. For example, it is possible to monitor the onset of a hurricane but not to control its behavior. By contrast, as is discussed in Lindell, Murray-Tuite, Wolshon, and Baker (2019), it is possible to monitor and to (partially) control the evacuation of communities in the potential impact area.
In some cases, such as wildfires, both mitigation and protective actions are possible. For example, Brehmer (1992) and Brehmer and Allard (1991) described the process by which decision makers (DMs) choose mitigation actions to fight wildfires, whereas Drews, Siebeneck, and Cova (2015) described the process by which DMs choose protective actions to avoid casualties in populations threatened by wildfires. In other cases, such as hurricanes, only protective actions are possible. Both mitigation actions and protective actions take time to implement, so DMs must initiate those response actions on the basis of inferences about the future states of the environmental system (System X) rather than its current state. When the environmental system (X), the affected social system (Y), or both are complex systems, these inferences must be based on a careful search of the available information about them.
The present article will address a deficiency in current behavioral research methods by providing a tutorial that describes the rationale and use of DynaSearch, a computer program for conducting process-tracing studies of DDM tasks in which DMs are only able to take protective actions; that is, they are only able to intervene in System Y. As will be described more completely below, DynaSearch provides researchers with the ability to construct and administer Web-based process-tracing experiments that contain both between- and within-subjects factors. Specifically, information search pages record participants’ acquisition of verbal, numeric, and graphic information. Moreover, questionnaire pages query participants’ recall of the information, inferences from that information, and decisions about appropriate response actions.
Review of process-tracing tools
One limitation of research on DDM has been that it has ignored the information search processes that people undertake before they make decisions (Patrick & James, 2004). Although there has been research on sequential decision making, much of this work has focused on the development of statistical models for determining the optimal stopping rule for terminating sequential information search (Czajkowski, 2011; Diederich, 2001). However, there is an extensive empirical literature on information search in static decision tasks. One method of studying this information search process, process tracing, uses an information display board (IDB) that Payne (1976) introduced to decision researchers—who began to use it extensively within a single decade (Ford, Schmitt, Schechtman, Hults, & Doherty, 1989). The IDB originally consisted of a physical board containing a rectangular grid in which the rows represent alternative choices (e.g., cars) and columns represent the attributes of those alternatives (e.g., cost, performance, and fuel economy). The information in each cell was hidden, so participants needed to tell the experimenter which cell’s contents they wished to see. The experimenter recorded the identity of the chosen cell, revealed the information it contained, and then concealed it again after the participant had viewed the contents. Payne, Bettman, and Johnson (1988) later implemented an IDB for desktop computers as Mouselab, which was subsequently converted to an Internet application, MouselabWEB (Willemsen & Johnson, 2010a). These programs present the IDB on a computer page in which the cells in the grid are blank until the experimental participant uses the mouse to move the cursor. Holding the cursor over a cell and clicking the mouse button reveals the cell’s data, and moving the cursor away conceals the information again. 
In these versions of the IDB, the computer records data about the information search process, allowing researchers to assess the importance of specific cells by examining the order in which they are searched (including the transition probabilities between cells); the frequency and amount of acquisition time spent looking at each cell, row, or column; and the total time spent during information search (Aschemann-Witzel & Hamm, 2011; Ettlin, Bröder, & Henninger, 2015; Willemsen & Johnson, 2010b).
Other computer-based process-tracing programs have been developed over the years, some of which are special-purpose programs designed for very specific tasks, such as bank officers’ processing of business loans (Andersson, 2001). In addition, some computer-based process-tracing programs are based upon specific information processing models. For example, DSMAC (Saad, 1998)—which is a revised version of SMAC (Saad, 1996)—is a PC-based program for studying multi-attribute choice based upon the sequential sampling model rather than the more general process-tracing model. DSMAC begins by having participants provide attribute ranks and importance weights, after which they click the attribute on which they wish to compare the alternatives. This generates a page that provides the attribute information and asks them to rate their confidence that they prefer Alternative A to Alternative B. They are also allowed to view past information pages that contain all previously acquired information. The program output lists the number of attributes accessed (as well as their acquisition order and duration), confidence in the leading alternative, time spent in different decision stages, and the use of past information.
However, most other computer-based process-tracing programs have been designed for broad application as extensions of Mouselab. In many cases, these computer programs have been designed to extend process tracing beyond the information acquisition component of decision making to include problem representation, information evaluation, response generation, and postdecision evaluation and learning (Payne & Venkatraman, 2011, p. 227). MouseTrace (Jasper & Levin, 2001; Jasper & Shapiro, 2002) is a Windows-based extension of Mouselab that allows experimenters to specify instructions to participants (“text schema”) and define the choice task in terms of the characteristics of an IDB (“matrix schema”). Participants may search the cells of the IDB until they have acquired enough information to eliminate some of the alternatives (in preliminary decision stages) or to choose one of the alternatives (in the final decision stage). ComputerShop (Huneke, Cole, & Levin, 2004; Levin, Huneke, & Jasper, 2000) is another Windows-based system that is similar to Mouselab but is designed to be more similar to consumer web shopping by providing participants with pull-down menus rather than Mouselab’s matrix format. Search Monitor (Brucks, 1988) is a PC-based program that represents information search in the form of a sequential or tree structure rather than an IDB. In Search Monitor’s sequential structure, a participant first decides which of a number of sources (e.g., stores) to search, then which product brands to search and, finally, which attributes to obtain information about. One distinctive feature of Search Monitor is that it uses simultaneous interfaces between an experimenter and a participant that allow the participant to ask natural language questions about alternatives and attributes, which the experimenter answers through the computer.
Flashlight (Schulte-Mecklenbeck & Murphy, 2012; Schulte-Mecklenbeck, Murphy, & Hutzler, 2011) is another process-tracing program that extends the capabilities of Mouselab, by providing greater flexibility in the format (graphic as well as verbal and numeric) and arrangement (free-form rather than grid) of decision information. MouseTracker (Freeman & Ambady, 2010) is a process-tracing program that allows experimenters to display verbal, numeric, graphic, and auditory information as well as to record a variety of measures such as trajectories derived from mouse movements. Finally, interactive process tracing (IAPT) is a PC-based procedure developed by Reisen, Hoffrage, and Mast (2008) that combines features of Mouselab, active information search (Huber, Huber, & Schulte-Mecklenbeck, 2011; Huber, Wider, & Huber, 1997), and verbal protocol analysis (Ericsson & Simon, 1993). The active information search component allows participants to identify the attributes about which they would like to obtain information and report this information to an experimenter (attribute selection phase). An IDB allows participants to search for information about the attributes of each alternative and to choose an alternative (the information acquisition and choice phase). The verbal protocol analysis requires participants to retrospectively report how they made their decisions (the strategy identification phase).
One potential limitation of mouse tracking is that it has low temporal resolution and a potential risk of medium distortion due to its reactive effect (Schulte-Mecklenbeck et al., 2017). For example, Franco-Watkins and Johnson (2011) found that two methods of eye-tracking produced more—and more stable—cell acquisitions and reacquisitions than did mouse tracking, quite possibly because of the greater perceptual–motor effort required for mouse tracking (see Gray, Sims, Fu, & Schoelles, 2006). Moreover, mouse tracking has been criticized for impeding the use of automatic processes of decision making (Glöckner & Betsch, 2008; Glöckner & Herbold, 2011). This could be a major problem for simple static decision tasks involving the choice of common consumer products with familiar attributes (e.g., choice of a breakfast cereal), but it seems less relevant to complex dynamic decision tasks involving decisions—such as whether and when to evacuate from an approaching hurricane. Such decisions are rare even for long-time coastal residents and are once-in-a-lifetime decisions for most people. Accordingly, survey research on hurricane evacuation decision making suggests that such decisions are much more likely to be reasoned than automatic (Lindell et al., 2019).
An overview of DynaSearch
In a typical DynaSearch experiment, an information search page contains all of the textual, numeric, and graphic information that is available for the participant to search for that situation report. Each information search page is typically followed by a questionnaire page that uses fixed-format or open-ended questions to query participants about their situational comprehension and projection (see Gonzalez & Wimisberg, 2007). For example, participants could be asked to recall the values of specific event parameters from a previously viewed information search page, provide inferences from that information (e.g., forecasts of future parameter values), or report intended behavioral responses (e.g., decisions to take some response action—see Wu, Lindell, & Prater, 2015a, 2015b). Questionnaire pages can also contain questions about the participant’s perception of the relative importance of different search page elements (Van Ittersum & Pennings, 2012), the overall mental workload required by that specific scenario (Rubio, Díaz, Martín, & Puente, 2004), basic demographic characteristics (e.g., sex, age, education), or personal experience with that decision task prior to the experiment.
It is important to note that mouse tracking in DynaSearch has some similarities with, but also some differences from, that in other programs such as Mousetrap (Kieslich & Henninger, 2017). Although both types of programs record the identity of the target cell, Mousetrap records positional and temporal data from the start location to the target cell and ignores the duration of the cell click. By contrast, DynaSearch ignores the positional and temporal data from the start location to the target cell and records the duration of the cell click. This difference arises from the different functions of the target cells in the two types of experiments—decision outcomes in Mousetrap versus information sources in DynaSearch.
It is important to recognize that the sequential-decision paradigm (Czajkowski, 2011) on which DynaSearch is based has some similarities with, but also some important differences from, an outcome-sampling paradigm (Wulff, Hills, & Hertwig, 2015b). Both paradigms involve games against nature under initial ignorance that allow for active costless search and have a common payoff structure that allows DMs to choose between two options with multiple outcomes (Wulff, Mergenthaler-Canseco, & Hertwig, 2018). However, the two paradigms differ in one crucial respect: In the outcome-sampling paradigm the observations are independent, whereas in the sequential-decision paradigm the observations are correlated. Specifically, let p(oi+1 | oi) be the conditional probability of a relevant outcome on trial i+1, given its occurrence on trial i, and let Cp = p(oi+1 | oi) − p(oi+1) index the dependence between successive observations. In the outcome-sampling paradigm the observations are independent, so p(oi+1 | oi) = p(oi+1) and thus Cp = 0, whereas in the sequential-decision paradigm Cp > 0 (and usually much greater).
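This difference can be made concrete with a small simulation: estimating the gap between the conditional probability p(oi+1 | oi) and the marginal probability p(oi+1) yields a value near zero for independent draws (the outcome-sampling analogue) and a clearly positive value for a persistent, correlated process (the sequential-decision analogue). The sketch below is purely illustrative and is not taken from any of the cited studies.

```python
import random

def estimate_cp(seq):
    """Estimate Cp as p(outcome on trial i+1 | outcome on trial i)
    minus p(outcome on trial i+1), from a 0/1 outcome sequence.
    (Illustrative estimator, not drawn from the cited papers.)"""
    pairs = list(zip(seq, seq[1:]))
    after_hit = [b for a, b in pairs if a == 1]
    p_cond = sum(after_hit) / len(after_hit)   # p(o_{i+1} | o_i)
    p_marg = sum(seq[1:]) / len(seq[1:])       # p(o_{i+1})
    return p_cond - p_marg

random.seed(1)
# Outcome-sampling analogue: independent draws, so Cp should be near 0.
iid = [int(random.random() < 0.5) for _ in range(20000)]
# Sequential-decision analogue: a persistent process whose state tends
# to repeat from one trial to the next, so Cp should be well above 0.
markov = [iid[0]]
for _ in range(19999):
    markov.append(markov[-1] if random.random() < 0.9 else 1 - markov[-1])
```

With these parameters, the independent sequence produces an estimate close to zero, whereas the persistent process produces an estimate near 0.4 (the 0.9 stay-probability minus the 0.5 base rate).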
In addition, studies in the outcome-sampling paradigm often allow DMs to continue sampling indefinitely, whereas the sequential-decision paradigm has a definite deadline (e.g., the time at which the hurricane makes landfall or dissipates to a nonthreatening wind speed). Moreover, the sequential-decision paradigm involves a single-play decision, rather than a multiplay decision (see Wulff, Hills, & Hertwig, 2015a) in which information is sampled until a decision is made. Another difference is that the hurricane sequential-decision paradigm can also involve multiple decisions (e.g., deciding whether to perform evacuation preparation tasks such as gathering household members, packing bags, securing the home from storm damage, and shutting off utilities; Kang, Lindell, & Prater, 2007). Finally, the sequential-decision paradigm involves much higher costs of a decision error than does the typical consumer choice, although, of course, laboratory versions of the hurricane sequential-decision task do not actually involve life-and-death consequences.
Setting up a DynaSearch experiment
Designing the experiment
DynaSearch allows the experimenter to specify one or more scenarios, each of which can be defined as between-subjects or within-subjects factors. There is no limit to the number of factors or levels, and the number of levels in a within-subjects factor does not need to be the same for each between-subjects factor level. Finally, there is no limit to the number of participants in each cell of the experimental design.
Assigning the participants
There are two ways to provide access permission for participants to log into an experiment. First, the experimenter can assign login IDs to a prespecified set of participants. DynaSearch then emails the participants their login IDs, the URL for the DynaSearch login page, and any instructions needed to begin the experiment. Once the participants create passwords for themselves and log in, they are immediately transferred to the experiment’s home page to view the first instruction page. Alternatively, an experimenter can post the notice of a blind experiment for self-selected participants. In this case, DynaSearch will create a special blind URL that can be emailed, along with instructions, to any desired subject pool, such as a Facebook group. When participants follow this URL, they are logged into the DynaSearch system with a unique automatically generated anonymous ID. When recruiting participants in this way, the experimenter assigns an expiration date to the blind URL so the duration of the experiment can be limited.
Running a DynaSearch experiment
Each DynaSearch experiment consists of a sequence of experimenter-designed pages that are embedded between fixed-format login and termination pages. The login page requires participants who have been assigned a login ID by the experimenter to enter this ID and a password before being directed to the study they are assigned to. Note that, because the blind URL method routes participants directly to the study, participants recruited in this way skip the login step. The termination page thanks participants for participating in the study. Once logged in, participants can terminate the experiment early by clicking a logout menu item at the bottom of the page. During an experiment, the browser forward and back arrows are disabled, so participants must either continue through the information search pages and questionnaire pages in the order designed by the experimenter or terminate participation in the experiment. Disabling the forward and back arrows prevents participants from returning to a previous information search page in order to find the correct answers to items on a questionnaire page.
Any number of page sequences within an experiment can be embedded into between-subjects branches, creating parallel paths through the experiment that are followed according to a subject’s between-subjects condition. If there are any between-subjects conditions, DynaSearch randomly assigns each participant to one of them when logging in. The probability of being assigned to a specific between-subjects condition varies inversely with the number of participants that have already been assigned to that condition. This keeps the numbers of participants in the different conditions approximately equal.
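The weighting scheme just described can be sketched as follows. This is a hypothetical illustration of inverse-proportional assignment, not DynaSearch’s actual source code, and the function name is invented for the example.

```python
import random

def assign_condition(counts):
    """Pick a between-subjects condition for a new participant, with
    probability inversely related to how many participants each
    condition already has (hypothetical sketch, not DynaSearch code)."""
    # A condition with fewer participants gets a larger weight; the +1
    # keeps every condition's probability nonzero at the start.
    weights = [1.0 / (c + 1) for c in counts]
    return random.choices(range(len(counts)), weights=weights)[0]

# Example: condition 1 lags far behind condition 0, so most new
# participants should be routed to condition 1 until the counts even out.
counts = [100, 0]
```

Over many assignments, this weighting continually pulls the condition counts back toward equality without making assignment fully deterministic.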
DynaSearch has three basic page types that can be used in an experiment. As noted earlier, these are the instruction page, information search page, and questionnaire page. None of these page types is required, so an experiment could use just two or even just one of them. All three page types can be designed so participants in all conditions see all of them or they can be varied within a between-subjects design that presents different page sequences to participants in different experimental conditions.
A DynaSearch experiment typically begins with one or more instruction pages that provide participants with any background information they need about the task and how to respond to later pages. Instruction pages also can be placed elsewhere in the flow of the experiment. For example, an instruction page can also be used to introduce each new within-subjects condition, such as when participants are presented with multiple scenarios. Some of the information on these pages might be common to all conditions in an experiment whereas other information might be specific to each condition. Instruction pages are simply HTML formatted pages of the experimenter’s construction, so they can contain any valid HTML code. This allows the experimenter to display not only formatted text, but also hyperlinks to images and video stored anywhere accessible via the internet.
Information search pages
DynaSearch assigns a unique identifier to each object that can be revealed by a mouse click on an information search page. There are two types of these objects—table cells that display their contents and legend box cells that trigger the display of graphical images. DynaSearch responds to each mouse click on a table cell or legend box cell by storing two types of related data in a single record for that information search event. Immediately after the participant arrives at a specific information search page, DynaSearch will already have located the participant’s information record, which contains their unique ID number, any between-subjects condition to which they have been assigned, and the identity of the page being viewed. After each click, DynaSearch records the click’s sequence number, the identifier of the object that was clicked, and the amount of time that the object is viewed (the length of time the mouse button is depressed).
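The data stored for each click can be pictured as a single flat record. The field names below are illustrative assumptions for exposition, not DynaSearch’s internal schema.

```python
from dataclasses import dataclass

@dataclass
class ClickRecord:
    """One information-search event (illustrative field names only;
    DynaSearch's internal schema may differ)."""
    participant_id: str   # unique ID from the participant's record
    condition: str        # assigned between-subjects condition, if any
    page_id: str          # information search page being viewed
    object_id: str        # table cell or legend box cell that was clicked
    sequence: int         # order of this click on the page
    duration_s: float     # seconds the mouse button was held down

# Example: the second click on situation report 1 opened a hypothetical
# "wind_speed" cell for about half a second.
rec = ClickRecord("P017", "maps-only", "report_1", "wind_speed", 2, 0.52)
```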
As in other process-tracing programs, tables in DynaSearch contain systematically related types of numeric or textual information that is laid out in a grid. Although there might be instances in which experimenters set table cells to always display their contents, they will ordinarily make tables interactive. An interactive table displays the row and column names, but the cell contents appear blank until they are made visible by clicking within that cell. Making a table entry interactive directs DynaSearch to record click information for the experimenter to download later. Although text boxes are not interactive, a special case of a table is one with a single cell that contains an interactive text message. Such single-cell tables provide experimenters with the ability to record access to textual information that can change but cannot be forecast in advance—such as alerts, watches, or warnings that are declared only when conditions reach a specific threat level.
Image boxes display images in one of several ways. A background image, such as a map, can be displayed permanently to provide a constant frame of reference. Alternatively, an image can be made interactive, in which case the image is only displayed when clicked—similar to an interactive table cell. An image can also be associated with a table-formatted legend box, which allows the participant to view systematically related images by clicking on different legend box cells. As in the Wu et al. (2015a, 2015b) hurricane-tracking experiment, there could be a hurricane-tracking map as a background image, and the legend box could be a table that has rows corresponding to different superimposed images (current/past hurricane center positions, forecast track, uncertainty cone, wind swath) and columns corresponding to different time periods (Day 1 forecast through Day 5 forecast). Image boxes can also be used to display other types of graphical information, such as plots of parameters over time.
Each information search page includes a Done button that participants can click to advance to the next page after reviewing the information on that page. An information search page may optionally include a Timer that specifies the maximum amount of time that a participant can spend on the page. The timer is shown as a countdown clock that displays the time remaining before DynaSearch automatically proceeds to the next page.
Questionnaire pages
Items on a questionnaire page can be presented at different points in an experiment, in either multiple-choice or open-ended format, or in a mixture of the two. Participants answer multiple-choice questions by clicking radio buttons, and answer open-ended questions via keyboard text or numeric entry. These pages may be used in various ways during an experiment. An initial questionnaire page can gather basic background information about the participant before the experiment begins, whereas an interim questionnaire page is typically presented immediately after an information search page to collect participants’ situational awareness, judgments about the event’s future status, and decisions about how to respond to that situation report. Final questionnaire pages can collect information such as participants’ summary judgments about their information search processes and their subjective workload.
A worked DynaSearch example: An evacuation route decision experiment
This section illustrates the use of DynaSearch by showing how a DDM problem might be addressed. The problem is simple enough to illustrate most of the construction and running of an experiment in detail, but it is complex enough to demonstrate most of DynaSearch’s key capabilities. The experiment itself focuses on how coastal residents might respond to an approaching hurricane. The development and running of this experiment is outlined here, but is described in full in the DynaSearch User’s Manual.
Layout of the experiment
The experiment has one between-subjects factor, which varies the information sources available to participants across three levels:
1. both tabular delay information and delay-annotated route maps are available,
2. only delay-annotated route maps are available, and
3. tabular delay information and a nonannotated map are available.
In addition, the experiment has one within-subjects factor, which evaluates how participant information search behavior changes over two time periods (situation reports) during the course of the evacuation. The first level of the within-subjects factor describes conditions just after the evacuation order has been issued, and the second level describes conditions 12 h later.
Questionnaire pages are created directly within DynaSearch through its Questionnaire Editor. In the Charleston evacuation experiment, DynaSearch presents the same questionnaire page after each information search page, with each questionnaire page containing two multiple-choice questions, the first being an evacuation route choice, and the second evaluating confidence in that route choice. DynaSearch allows experimenters to vary the contents of questionnaire pages from one situation report to another within a scenario. This feature avoids the spuriously high levels of situational awareness that would result if participants were allowed to focus their information search on a fixed set of items that were repeated on every questionnaire page. Although the answer choices appear in the Questionnaire Editor in exactly the same format as the experimenter wants participants to view them, DynaSearch stores only the numeric index of each response in its results database, so all stored results are integer numbers corresponding to the response alternative’s order rather than its content. For example, if the evacuation route alternatives are “Central route through the city,” “Western route around the city,” and “Eastern route around the city” (in that order) and a participant chose “Western route around the city,” DynaSearch would store the response as “2.”
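Because only the response’s numeric index reaches the results file, the analyst must map indices back to labels during analysis. The sketch below shows this decoding step for the route-choice question above; the dictionary is built by hand from the questionnaire design, since DynaSearch stores only the integers.

```python
# Answer labels in the order they appeared in the Questionnaire Editor.
ROUTE_CHOICES = {
    1: "Central route through the city",
    2: "Western route around the city",
    3: "Eastern route around the city",
}

def decode_response(stored_value):
    """Translate a stored integer index back to its response label."""
    return ROUTE_CHOICES[int(stored_value)]

print(decode_response("2"))  # → Western route around the city
```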
The Search Page Editor control toolbar, with icons indicating box types that can be added. When a box is created, it can be dragged and resized to suit the experimenter’s desired page design.
A table—in this example, 9×3—providing delay times for each segment of the evacuation routes. The white rectangles are those table entries whose contents will be concealed until clicked.
A text box instructing participants how to access the information in the table cells.
An image box showing the Charleston base map, with the three evacuation routes indicated.
The Done button, which participants click to advance to the next page when they are finished.
A timer with a maximum time of 5 min, which counts down while the participant is on the page and automatically advances to the next page if it runs out before the Done button is clicked.
A legend box, associated with the Charleston evacuation route map image box. In this example, the legend box contains three buttons, each of which displays an overlay image with color-coded delay times for the west, central, and east evacuation routes, respectively, when clicked.
Building the experiment from the pages
Assigning participants to the experiment
For an experiment that does not use a blind URL, each participant must be added to the experiment explicitly. As we noted earlier, the experimenter creates login credentials and DynaSearch sends an email to the participants letting them know how to log into the experiment.
Participants’ view of the experiment
Once participants have logged into DynaSearch, via either a blind URL or in response to an email notification of their credentials, they are immediately directed to the first page of the experiment. They then proceed through each page in turn until they reach the end of the experiment. If any participants elect to leave the experiment before proceeding through all of the pages, a menu at the bottom of each page provides them with a logout option that terminates their participation. In the Charleston evacuation experiment, the first page is an instruction page that contains instructions for the experiment, as well as a roadmap of the Charleston area.
When participants finish reviewing the instruction page, they click the Continue button to proceed to the preexperiment questionnaire page, where they are asked to answer the questions by radio button clicks and then to click the Submit button to proceed to the main body of the experiment.
Participants’ evacuation decisions are recorded via a questionnaire page that requires them to choose one of three evacuation routes and also to specify how confident they are in this decision. Clicking the Submit button advances them to the situation report for the 12-h time period and a similar sequence of instruction, information search, and questionnaire pages. After completing the information search and questionnaire pages for the 12-h time period, they proceed to the postexperiment questionnaire page.
The postexperiment questionnaire page allows participants to provide feedback on how useful they felt the various information sources were when making their evacuation route decisions. This page also provides an open-ended question allowing them to provide additional feedback on their decision process. Clicking the Submit button takes them to a page thanking them for their participation and logging them out of the experiment.
Viewing and evaluating output from the experiment
At any time during the course of the experiment, the experimenter can use DynaSearch to preview or download the results from participants who have completed the experiment. This feature is useful to experimenters who wish to view participants’ responses before giving them credit or paying them for participation, in order to verify that the responses are not careless or frivolous. Once all of the data have been collected and are ready for analysis, the experimenter can download the results to CSV files. DynaSearch keeps questionnaire page results separate from information search page results.
With all experimental data available in the form of CSV files, analyses can easily be performed offline from DynaSearch, using whatever statistical analysis package the experimenter chooses.
As we noted earlier, DynaSearch produces a record of the responses for participants as they proceed from the beginning of the experiment to its end. A participant’s response sequence for a given scenario contains the click data for information search pages and the questionnaire data. The information search data can be downloaded into a CSV file listing, for every click the participant produced on an information search page, the identity of the cell clicked (an arbitrary ID generated in the experiment design phase), the click’s position in the search sequence, and its duration (in seconds). The questionnaire data can also be downloaded in CSV format. This file contains the sequence of responses to the items on each questionnaire page. Both the information search and questionnaire CSV files contain the results for the entire experiment, with unique identifiers for each participant and for their between-subjects levels.
After an experiment is complete, the experimenter can download each participant’s output CSV data file in order to extract click counts (based on the click orders) and click durations for each cell in the information search displays and response values for each dependent variable in the questionnaire pages. After conducting some preprocessing operations described below, the experimenter can use a statistical package such as SPSS or R to analyze the data.
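The preprocessing step can be illustrated with a short sketch. The function below aggregates per-cell click counts and total dwell times from a downloaded click-data CSV; the column names (`participant`, `cell_id`, `duration`) are assumptions for illustration, since the actual DynaSearch headers may differ.

```python
import csv
from collections import defaultdict

def summarize_clicks(path):
    """Aggregate click counts and total dwell time (in seconds) per
    (participant, cell) pair from a click-data CSV.

    Assumes columns named 'participant', 'cell_id', and 'duration';
    these names are hypothetical and should be checked against the
    actual DynaSearch export.
    """
    counts = defaultdict(int)        # number of clicks on each cell
    durations = defaultdict(float)   # total viewing time for each cell
    with open(path, newline="") as f:
        for row in csv.DictReader(f):
            key = (row["participant"], row["cell_id"])
            counts[key] += 1
            durations[key] += float(row["duration"])
    return counts, durations
```

The resulting dictionaries can be written back out as a tidy table or loaded directly into a statistical package for analysis.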
To calculate the average number of times each cell in an information search page was viewed, all counts/durations should be initially divided by the number of information search pages encountered across all scenarios. However, further adjustments would be needed if the number of cells available for viewing differs between information types (graphic, numeric, and verbal). For example, Wu et al. (2015a) left the counts/durations for an NHC watch/warning message box unchanged (i.e., divided by 1), because there was only one cell for each forecast advisory. However, the counts/durations for each of the five parameters in the numeric parameter table over the six situation reports were then divided by 30, because there were 30 cells in the numeric parameter table.
In addition to the mean click count and mean click duration for each cell on the information search page, experimenters can compute two search pattern indexes. The first of these is the Lohse and Johnson (1996) reacquisition rate (RAR), which is the number of cells viewed at least twice, divided by the total number of cells viewed for all information search pages encountered in a scenario. The second is the Wu et al. (2015a) search pattern stability (SPS) index, which is the correlation between the cells viewed in the first and last of their six situation reports when each viewed cell receives a score of 1 and each unviewed cell receives a score of 0. Comparing the first and last scenarios with respect to both RAR and SPS provides an indication of the extent to which participants change their search strategies over time.
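Both indexes are straightforward to compute from the per-cell click counts. The sketch below implements RAR directly from a cell-to-count mapping, and SPS as a Pearson correlation on the binary viewed/unviewed vectors (which, for 0/1 data, is the phi coefficient); the data structures are illustrative assumptions, not the DynaSearch export format.

```python
def reacquisition_rate(click_counts):
    """Lohse and Johnson (1996) RAR: cells viewed at least twice,
    divided by the number of distinct cells viewed.
    click_counts maps cell ID -> number of clicks on that cell."""
    viewed = [c for c in click_counts.values() if c >= 1]
    if not viewed:
        return 0.0
    return sum(c >= 2 for c in viewed) / len(viewed)

def search_pattern_stability(first_viewed, last_viewed, all_cells):
    """Wu et al. (2015a) SPS: correlation between binary viewed (1) /
    unviewed (0) vectors for the first and last situation reports,
    computed here as a Pearson correlation over all cells."""
    x = [1 if c in first_viewed else 0 for c in all_cells]
    y = [1 if c in last_viewed else 0 for c in all_cells]
    n = len(all_cells)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    var_x = sum((a - mx) ** 2 for a in x)
    var_y = sum((b - my) ** 2 for b in y)
    if var_x == 0 or var_y == 0:
        return float("nan")  # undefined if a report has no variation
    return cov / (var_x * var_y) ** 0.5
```

For instance, a participant who clicked cells a, b, and c, revisiting a and c, has RAR = 2/3; identical viewed sets on the first and last reports yield SPS = 1.0.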
This tutorial has described DynaSearch, a Web-based system that supports process-tracing experiments on dual-system DDM tasks. Most DDM research has adopted a single-system model that examines the processes involved in complex system control, but DynaSearch supports research on coupled system DDM tasks by examining the process by which DMs search for the information they need to make response decisions. These data can be used to identify the sources of DMs’ deficiencies in the choice and timing of response actions to environmental threats (Drews et al., 2015; Huang, Lindell, Wei, & Samuelson, 2017). DynaSearch provides researchers with the ability to construct and administer Web-based experiments containing information display boards that record participants’ searches of verbal, numeric, and graphic information as scenarios evolve over multiple situation reports. It supports both between- and within-subjects factors and provides an integrated questionnaire mechanism for querying participants about their situational awareness, projections, and decisions in response to each situation report, as well as for administering pre- and posttests. This Web-based system allows researchers to collect data from a more diverse sample of participants than is typically feasible in laboratory settings that use desktop “point and click” or eye-tracking information display boards. Researchers interested in using DynaSearch can find the user manual and request an account at https://www.cs.clemson.edu/dynasearch/login.php.
The authors are grateful for the efforts of other contributors to the development of DynaSearch. Jonathan Cox and Brandon Pelfrey developed the initial desktop version of the code; Christopher Malloy and Le Liu assisted with extending and improving its capabilities beyond the original design. This research was supported by the National Science Foundation under Grants SES-0838654, SES-0838639, IIS-1212501, IIS-1212790, and IIS-1540469. The conclusions expressed here are those of the authors alone and do not necessarily reflect the views of any other party.
The term “page” is interchangeable with “screen” in this article. Screen is most appropriate from the participant’s point of view, since it is what the participant sees; page is most appropriate from the experimenter’s point of view, since the experimenter thinks in terms of developing Web pages.
- Brehmer, B., & Allard, R. (1991). Real-time dynamic decision making: Effects of task complexity and feedback delays. In J. Rasmussen, B. Brehmer, & J. Leplat (Eds.), Distributed decision making: Cognitive models for cooperative work. Chichester: Wiley.
- Ericsson, K. A., & Simon, H. A. (1993). Protocol analysis: Verbal reports as data (2nd ed.). Cambridge: MIT Press.
- Gonzalez, C., Fakhari, P., & Busemeyer, J. (2017). Dynamic decision making: Learning processes and new research directions. Human Factors. Advance online publication. https://doi.org/10.1177/0018720817710347
- Hotaling, J. M., Fakhari, P., & Busemeyer, J. R. (2015). Dynamic decision making. In International encyclopedia of the social and behavioral sciences (2nd ed., Vol. 6, pp. 709–714). Amsterdam: Elsevier.
- Huber, O., Huber, O. W., & Schulte-Mecklenbeck, M. (2011). Determining the information that participants need: Methods of active information search. In M. Schulte-Mecklenbeck, A. Kühberger, & R. Ranyard (Eds.), A handbook of process tracing methods for decision research (pp. 65–85). New York: Psychology Press.
- Lindell, M. K., Murray-Tuite, P., Wolshon, B., & Baker, E. J. (2019). Large-scale evacuation. New York: Routledge.
- Lindell, M. K., & Perry, R. W. (1992). Behavioral foundations of community emergency planning. Washington, DC: Hemisphere.
- Lindell, M. K., Prater, C., & Perry, R. W. (2006). Introduction to emergency management. Hoboken: Wiley.
- Payne, J. W., & Venkatraman, V. (2011). Opening the black box: Conclusions to A handbook of process tracing methods for decision research. In M. Schulte-Mecklenbeck, A. Kühberger, & R. Ranyard (Eds.), A handbook of process tracing methods for decision research (pp. 223–249). New York: Psychology Press.
- Reisen, N., Hoffrage, U., & Mast, F. W. (2008). Identifying decision strategies in a consumer choice situation. Judgment and Decision Making, 3, 641–658.
- Schulte-Mecklenbeck, M., Johnson, J. G., Böckenholt, U., Goldstein, D. G., Russo, J. E., Sullivan, N. J., & Willemsen, M. C. (2017). Process-tracing methods in decision making: On growing up in the 70s. Current Directions in Psychological Science, 26, 442–450. https://doi.org/10.1177/0963721417708229
- Willemsen, M. C., & Johnson, E. J. (2010a). MouselabWEB: Monitoring information acquisition processes on the web. Retrieved from http://www.mouselabweb.org/
- Willemsen, M. C., & Johnson, E. J. (2010b). Visiting the decision factory: Observing cognition with MouselabWEB and other information acquisition methods. In M. Schulte-Mecklenbeck, A. Kühberger, & R. Ranyard (Eds.), A handbook of process tracing methods for decision making (pp. 21–42). New York: Taylor & Francis.