1 Introduction

In this paper, we are interested in how computational hardware and software have changed geospatial imagery analysis in the United States Intelligence Community, and consider the implications of that shift for characterizing, modeling, and supporting humans in the visual interpretation of imagery. In particular, we have become acutely aware of the technological limitations of eye tracking systems for studying visual attention in natural settings.

Computational technologies have profoundly changed how humans generate, store, share, and interact with data and information. As a result, office work today relies on quite a different set of technologies than it did even twenty years ago, when the electric typewriter was an unremarkable desktop artifact. These days, typewriters are a rarity: instead, we rely on desktop computers, network cables, routers, servers, mice, icons, and electronic displays to work more creatively and flexibly with ever greater amounts of data and information. As many commentators have noted, the same systems that are enabling us to work with all that information are also generating new types of data and information at an astounding rate. The resulting information glut presents capability and capacity challenges for even the most technologically sophisticated sectors of Western economies.

This data glut is problematic even among the institutions comprising the United States’ military and civilian intelligence community (or IC). Although these are among the most sophisticated developers and consumers of new technology, even they are struggling to adapt to the rapid evolution of their local information ecologies [1].

Of particular importance is the remote sensing revolution that has swept the intelligence community since 2001. Conflict in Afghanistan and Iraq spurred the United States government to invest heavily in developing, deploying, and integrating a dizzying array of sensor systems into the country’s suite of collection technologies. Sensor investments blossomed quickly, producing a data harvest that has strained the capacity of national information transmission, storage and processing systems. Over the past few years, leaders in both military and civilian intelligence functions have expressed concern that their analytic experts are “swimming in sensors and drowning in data,” as Air Force Lieutenant General David Deptula famously put it [2, 3].

To help military and intelligence agencies realize the information return on their country’s sensor investment, technology developers have increasingly turned their attention to enhancing the analytic performance of human operators. They are doing so by developing systems aimed at helping human analysts discover, assess, and make decisions about the signatures and patterns captured in national datasets. Such systems include methods, algorithms, software, data products, and visualizations that are rapidly making their way to the ground stations, cubicles and open workspaces of the intelligence community.

In the best of worlds, analysts would be able to adopt these new technologies into their workflows, leveraging their wealth of data to make better sense of complicated events, trends and shifts in the world around us. Yet the extent to which new analytics and interaction models are actually enabling people to identify important patterns in the flood of sensor data remains an open question – one that will challenge human-computer interaction researchers to expand their evaluative frameworks to increasingly complex forms of perceptual and cognitive work.

1.1 Previous Work

The authors of this paper include an anthropologist, a cognitive neuroscientist, a computer scientist and a software-engineer-turned-data-scientist. We have spent much of the past decade studying human-information interaction among imagery analysts in the intelligence community, most recently with Synthetic Aperture Radar (SAR) analysts affiliated with Sandia National Laboratories. Sandia is a multi-program, federally funded research and development center (FFRDC) owned by the United States Department of Energy and headquartered in Albuquerque, New Mexico and Livermore, California. Sandia’s national security mission includes a wide array of basic and applied research and development activities, including the design and engineering of remote sensing systems used in operational, tactical and strategic mission areas.

Sandia is internationally recognized for its expertise in developing SAR systems. These radars rely on the movement of a platform, such as an air vehicle, to synthesize a large effective antenna aperture. As the platform travels over an area to be imaged, the radar emits pulses of radio energy to “illuminate” terrain features. Reflected energy is captured and processed by image formation algorithms that create two- and three-dimensional renderings of terrain features [4].
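For readers less familiar with radar imaging, a textbook first-order approximation for strip-map SAR (not a description of any particular Sandia system) conveys why platform motion matters. A physical antenna of length \(d\) operating at wavelength \(\lambda\) illuminates a ground strip of width roughly \(\lambda R / d\) at range \(R\); coherently processing the pulses collected along that strip is equivalent to forming an aperture of that length, yielding an along-track (azimuth) resolution that is independent of range:

\[
L_{sa} \approx \frac{\lambda R}{d}, \qquad \delta_{az} \approx \frac{\lambda R}{2 L_{sa}} = \frac{d}{2}.
\]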

As researchers working at the intersection of humans, SAR technologies, and national security, our proximity to sensor engineers and their operational users has given us a unique opportunity for studying real-world intelligence workflows. In previous papers, we have leveraged this experience to describe some of the challenges that accompany designing, fielding, and then evaluating the usability and utility of remote sensing systems in the operational workplace. We have discussed the use of ethnographic field study methods with cognitive work analysis and cognitive task analysis frameworks, an approach that has informed the development of experimental visual search studies we have conducted with imagery analysts [5,6,7]; offered guidance to radar engineers for designing usable operational interfaces for deployed systems [8]; and examined how eyetracking data can be leveraged with measures of visual saliency and radar image quality metrics to assess the utility of an image product for a particular class of signature detection tasks [9].

1.2 Purpose of This Paper

This paper is different from our previous reports: it is a position paper asserting the need to re-think both the task models that describe visual inspection of image products, and the data collection systems and analysis frameworks that enable researchers to perform inductive studies of dynamic, user-driven visual workflows.

In the following pages, we explain how new technologies have changed imagery analysis in ways that challenge the task-based models frequently described in the visual inspection literature [10,11,12,13]. We suggest that many imagery analysis activities involve visual information foraging workflows, in which analysts pull selectively across a heterogeneous assortment of data and information types to inform decisions about whether or not an indicator meets accepted criteria for “something that matters” [14,15,16,17]. This is somewhat of a departure from traditional visual inspection models, which do not explicitly account for the presence of easily accessible electronic data and information in today’s workplaces.

Characterizing the complex, dynamic workflows of imagery analysis will require us to develop systems that accurately and precisely capture the behaviors indicative of resource foraging decisions in interactive workflows. Of particular importance are eyetracking data collection systems and analysis methods that enable researchers to identify what features seem to be cueing analysts to seek additional information in the electronic resources available to them. To this end, we briefly discuss a recent initiative to develop a prototype system that associates gaze events with geospatial point data, which we hope will enable us to capture and model the trajectory of a user’s movement through large, heterogeneous information spaces.

2 Imagery Analysis as Visual Inspection and Profession

In the academic literature, imagery analysis has traditionally been studied using models of visual inspection derived from well-established research in human factors and industrial engineering. However, as we discuss below, desktop computing and enterprise information systems have changed analytic practice in the intelligence community, in ways that challenge older models of visual inspection.

2.1 Imagery Analysis as Visual Inspection

From a human-information interaction perspective, imagery analysts are using their visual systems to seek, detect, and characterize features of interest in geospatial information products; i.e., they are performing visual inspection, which involves the deployment of our evolved capacity for visual search in purposeful work.

Visual inspection and visual search are related but not synonymous scientific research areas. As Wang et al. have pointed out, in the academic literature the term “visual search” has historically denoted the study of physical and neuropsychological processes associated with perception, cognition and attention in the context of scanning a stimulus in the visual field [18,19,20,21]. In contrast, inspection is the purposeful activity of examining an artifact to identify anomalies, signatures, or signals associated with an event or trend of significance [11,12,13].

Researchers in human factors, industrial and engineering psychology have been developing empirical and theoretical models of visual inspection processes for decades. This research is the empirical foundation for qualitative and quantitative models that have, in turn, provided structure for examination of the factors that influence human inspection performance [22, 23]. Models of inspection processes typically decompose these workflows into distinct goal states, which can be described in terms of specific behaviors, perceptual activity, cognitive activity, and decisions that mark a transition from one task state to another. A typical decomposition distinguishes six stages:

1. Selection of the item to be examined;
2. Manipulation of the item or one’s workspace to facilitate visual examination;
3. Visual scanning of the item for anomalous features;
4. Detection of an anomaly/feature of interest;
5. Evaluation of the anomaly to determine its meaning, cause and significance; and
6. A decision about whether the anomaly merits action (see discussions in [12, 13, 22, 23]).

Task models such as the one delineated above are very useful for researchers studying visual work in organizational settings. They enable close study of specific stages of a workflow, including the experimental identification of factors that promote successful outcomes, such as the identification of a fault indicative of an impending mechanical failure. They have enabled practitioners to develop and evaluate training protocols used to prepare inspectors for evaluating high-consequence sociotechnical systems, such as power grid components or commercial aircraft [10, 11, 18, 23]. Finally, they have supported the design and execution of controlled laboratory studies to document eye movement events and patterns associated with different stages in an inspection workflow, with studies of gaze behavior dating back to the 1970s (for example, [24]).

Interestingly, over the past decade, scientists traditionally associated with laboratory studies of visual search have shifted attention to studying visual inspection. Of particular importance are observational and experimental studies of feature detection behaviors and performance among airport baggage screeners [26,27,28]. Eye tracking studies that capture patterns of gaze behavior have enabled researchers to explore how vocational experience influences search strategies and performance; i.e., by comparing the strategies and detection performance of non-professional searchers with trained airport security officers; and by developing skill acquisition models to explain the performance of professional inspectors [26, 28].

These studies have generated some lively, entertaining debates about basic understandings of visual attention, perception and cognition, but they have also highlighted the difficulties that attend the application of laboratory methods and approaches in actual work environments [27, 29]. One particular challenge is the adaptation of eye tracking data collection and analysis to sample gaze behaviors as people perform tasks in real-world work environments, where it is difficult – sometimes impossible – to impose a full suite of experimental controls on stimuli, the environment, and the behaviors of the human operators. As we discuss below, this is a significant barrier to understanding how people interact with data and information to develop intelligence assessments with remote sensing data products.

2.2 Imagery Analysis as a Profession

Imagery interpretation and analysis are common tasks in many domains, from medical diagnostics to drought monitoring. However, in the intelligence community, “imagery analysis” is a recognized professional domain. Both civilian and military intelligence agencies hire geospatial imagery analysts, commonly referred to as “IAs,” to review and assess geospatial datasets derived from the nation’s array of remote sensing systems (e.g., electro-optical, infrared, radiometric). Geospatial imagery analysts use a range of software tools, data and information resources, and assessment methods to detect and evaluate evidence of important trends and events. They are responsible for communicating intelligence findings to military and civilian stakeholders to support tactical, operational and strategic decision- and policy-making.

As is true in most Western office environments, technology has dramatically changed how these professionals work with data and information. In 2010, one of the authors (McNamara) spent a year performing a multi-site field study of imagery analyst workflows. Interviews and observational research brought McNamara into contact with imagery analysts across a range of experience levels, from recently hired novices to senior intelligence personnel with decades of experience in the agency. It was from this latter category of seasoned domain experts that she learned how much the world of imagery analysis had changed over the previous decade – a change they described as the shift from “hardcopy” to “softcopy” imagery analysis.

2.3 The Softcopy Revolution in Imagery Analysis

As the term implies, “hardcopy” describes images printed on paper or rendered in transparent/semitransparent films. In contrast, “softcopy” images are stored electronically and accessed via computer on a display terminal. In the intelligence community, both have been around for decades [30], but only recently have softcopy image products and analysis methods overtaken hardcopy imagery as the primary work focus of geospatial intelligence analysis.

When asked to compare hardcopy and softcopy workflows, senior imagery analysts invariably pointed to differences in the artifacts used in their intelligence activities. In the days of hardcopy analysis, filing cabinets, films and light tables were the key tools of the imagery analyst. One analyst had fond memories of walking down rows of filing cabinets to locate a particular cabinet, then a particular drawer, and then thumbing through files to identify the relevant films. Once she had returned to her desk, she removed the films from their protective envelopes to be displayed against the glowing surface of her light table. She examined her selected images for indicators of strategically-important activities, using her hands to move optical tools (such as magnifying lenses) and measurement instruments to examine fine detail and evaluate features in the scenes. This focused, manual-visual workflow enabled her to assess trends in a region of interest, for inclusion in reporting products that were disseminated to military decision-makers across the Department of Defense.

Of all the resources that imagery analysts used, the light table may have been the hallmark artifact of the profession. One of McNamara’s interviewees spent his career examining electro-optical imagery for evidence of change in Soviet military posture. At one point in the interview, he observed that he had likely spent more of his adult life with his light table than with his wife or children.

However, in today’s intelligence workplace, light tables are about as common as electric typewriters. As enterprise computing systems took hold in government workplaces in the 2000s, filing cabinets, films and light tables were gradually supplanted by desktop computers, servers, routers, mice and keyboards, and so-called “electronic light tables” – i.e., hardware and software setups that analysts use to retrieve, manipulate, enhance, compare, measure, and mark imagery rendered on digital displays and/or CRT monitors (the latter still prized by analysts for the stability and clarity they afford).

Imagery analysts who developed their skills in the days of hardcopy inspection often express dissatisfaction with electronic light tables. They point out that even the highest resolution digital displays can perceptibly pixelate fine details in an object or scene. However, this is only one way in which softcopy analysis has changed human-information interaction in this domain.

2.4 The Complexity of Softcopy

The digitization of geospatial information has opened the door to more dynamic and interactive models of analyst-imagery interaction. First, not only are remote sensing systems generating more data; they are generating a greater array of image products, in seemingly ever-increasing quantities. The data flooding the intelligence community’s networks and servers has given image scientists exciting new research challenges. They have responded by developing new mathematical models and algorithms that exploit everything from the statistical properties of image pixels to the physics of radiometric datasets. Their creativity has provided the intelligence community with new image products and new ways to analyze geospatial data; for example, by integrating temporal and geospatial information to examine patterns of activity at local and regional levels.

Second, the metaphor of the “electronic light table” underestimates what softcopy analysis entails. Today’s commercial imagery analysis platforms do far more than simply reproduce the physical manipulation of hardcopy artifacts on a light table. Softcopy systems enable a remarkable array of interactions with geospatial datasets. For example, an analyst might apply different spectral filters to electro-optical imagery to assess changes in the moisture content of agricultural topsoil; or import records of other types of intelligence reporting, such as captured radio communications, to help her determine if recent vehicular activity indicates expanding military presence in a region of interest. Softcopy systems also facilitate the creation and rapid dissemination of new intelligence products; for example, an analyst could create a KML (Keyhole Markup Language) file representing all known military vehicle and tank positions in the past month, and provide the file as part of a geospatial analysis package for her customers.
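To make the KML example concrete, the following sketch shows how such a file might be generated programmatically; the placemark names and coordinates are entirely hypothetical, and an operational package would of course be richer than this minimal document.

```python
# Illustration only: placemark names and coordinates are hypothetical.
# Writes a minimal KML file with one placemark per vehicle position.

positions = [
    # (name, longitude, latitude)
    ("vehicle-001", -106.61, 35.05),
    ("vehicle-002", -106.58, 35.07),
]

placemarks = "\n".join(
    f"    <Placemark><name>{name}</name>"
    f"<Point><coordinates>{lon},{lat},0</coordinates></Point></Placemark>"
    for name, lon, lat in positions
)

kml = (
    '<?xml version="1.0" encoding="UTF-8"?>\n'
    '<kml xmlns="http://www.opengis.net/kml/2.2">\n'
    "  <Document>\n"
    f"{placemarks}\n"
    "  </Document>\n"
    "</kml>\n"
)

with open("vehicle_positions.kml", "w") as f:
    f.write(kml)
```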

2.5 Softcopy Imagery Analysis Is Visual Information Foraging

As technology has evolved in the intelligence workplace, so have the strategies people use to work with data and information. The hardcopy-and-light-table inspection model is inadequate to account for the range of behaviors comprising softcopy analytic workflows. Certainly, softcopy image analysis can be described in terms of source selection, manipulation, examination, detection, evaluation and decision-making. However, desktop computing enables people to perform a wider range of operations at each stage of an inspection task. In particular, when an analyst detects a feature of interest, he often sets aside the primary stimulus (e.g., an electro-optical satellite image) to seek complementary data to characterize and resolve ambiguous features.

For this reason, we have come to think of visual imagery analysis as visual-inspection-plus-information foraging. Information foraging theory was first articulated by Peter Pirolli and Stuart Card in the late 1990s [14]. Derived from ecological models of resource foraging “in the wild,” information foraging theory posits that human information seeking can be modeled as a tradeoff between the amount of time spent seeking sources of relevant information (working “between patches” of information); and the amount of time one spends ingesting relevant information (realizing the information value “within patch”). An efficient foraging strategy is one that minimizes the former while maximizing the latter.
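Pirolli and Card formalize this tradeoff, following Holling’s disc equation from optimal foraging theory, as a rate of information gain to be maximized:

\[
R = \frac{G}{T_B + T_W},
\]

where \(G\) is the total value of the information gained, \(T_B\) is the time spent foraging between patches, and \(T_W\) is the time spent exploiting information within patches. An efficient forager adopts strategies that increase \(R\), which is exactly the between-patch/within-patch tradeoff described above.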

Information foraging theory is one of the most powerful descriptive models of human-information behavior with electronic data, influencing the design of information retrieval systems for large, heterogeneous spaces, such as the commercial search engines used with the World Wide Web (Chi et al.). Although it has not been widely used in the design and evaluation of systems for geospatial intelligence analysis, information foraging theory’s principles can usefully expand visual inspection models to account for the expanded range of behaviors that we have observed with imagery analysts working with softcopy systems. Indeed, a recent paper by Paik and Pirolli extended the theory to develop a computational cognitive model of information selection in map-based geospatial intelligence retrieval and integration tasks [15].

Among imagery analysts, the availability of so many information sources means that foraging activities have become part and parcel of the image inspection workflow. For example, detecting a feature of interest in an electronic image frequently shifts an analyst’s attention away from inspecting that image into a foraging mode. She may open other image products for comparison with the scene at hand; or begin searching complementary datasets to determine if there are other indicators of the suspected activity. If a signature seems meaningful, the analyst will often engage in “patch enrichment” behavior: pulling complementary information from the selected databases and compiling a reference set she can use to contextualize the anomaly for her stakeholders.

3 SAR Imagery Analysis: A Dynamic Workflow

Since 2011, most of our work has focused on understanding how imagery analysts working with the US military use products from SAR systems to identify and evaluate indicators of operationally important events, such as the movement of illicit cargo through remote, sparsely populated areas in a military theater. As previously described, SAR systems are radiometric sensors that actively illuminate a scene, generating rich datasets whose physical properties can be exploited to generate a range of image types. Sophisticated geo-registration algorithms enable accurate spatial alignment of images generated at different points in time. This makes SAR systems particularly useful for detecting and revealing changes in a scene, such as the appearance of a car near the side of a building.

Because SAR datasets are so information dense, SAR researchers have been able to develop a variety of algorithms for highlighting different types of objects in a scene, as well as changes in the state or position of scene features. Many of these products are intended to help SAR analysts detect and make sense of signatures indicative of behaviors of interest. However, as image scientists expand the range, quantity and quality of SAR image products, it is worth assessing whether or not image products are helping analysts as intended. After all, introducing new information into an established workflow may benefit performance or lead to decrements, depending on whether people are able to integrate it into their decision-making framework. This requires us to understand how analysts use their perceptual and attentional resources to identify significant indicators, so that we can ensure our products support human requirements.

3.1 Characterizing Analyst-Imagery Interactions in SAR Operations

Between 2011 and 2014, we performed a series of observational and experimental data collection activities with SAR imagery analysts generating near-real-time intelligence for tactical decision-making in military theaters of operation [5,6,7, 9]. The SAR system in question was regularly used to image key terrain areas, so that the analysts could detect and characterize signatures associated with known illicit activities. Because it was used so regularly, this system was generating a tremendous amount of radiometric data, and Sandia’s image scientists were continuously working to improve and/or provide more useful image products to help analysts detect and assess key signatures. At the time we became involved with the program, the system was regularly generating more than a half-dozen distinct image products, across many linear miles of terrain.

To facilitate analytic inspection and decision-making, Sandia had provided the SAR analyst teams with custom-designed, SAR-specific softcopy display and analysis software, which the analytic teams used to select, display, manipulate, and mark image products containing indicators of potentially important events. We were asked to support a comparative evaluation of two image products generated by slightly different analysis algorithms, to determine whether analysts would find one more useful than the other.

As described in [5], we performed ethnographic interviews using frameworks from Cognitive Work Analysis and cognitive task analysis to develop descriptive models of the SAR operational system. We also observed training sessions with pilots, radar operators, and imagery analysts comprising the deployed teams. Although we identified a number of distinct work processes and activities that contributed to system success, we focused on the analysis of newly-generated SAR imagery as the keystone task that integrated team members and technologies in the execution of the intelligence mission. The importance of this workflow highlighted the need to understand how imagery analysts were using the Sandia softcopy system to interact with different SAR image products to detect and characterize the signatures of interest.

SAR analysts work quickly. In fact, it is not unusual for an experienced analyst to exhaustively scan several hundred square meters of terrain in a few seconds, and do so using a number of complementary image products. This made observational documentation impossible.

Instead, we instrumented a copy of the Sandia softcopy display system so that we could log analyst interactions as they executed an operationally realistic inspection task. For our stimulus set, we selected four sets of SAR images, each consisting of a few dozen frames of imaged terrain, from operationally deployed systems. For every frame in these sets, we provided the full suite of SAR image products, as well as thirty days’ worth of historical imagery for each frame, providing the participants with a total of several hundred image products available for review. Each of the four sets contained one frame displaying a verified signature of illicit activity, as well as multiple distractors; i.e., visually similar but operationally irrelevant markings scattered throughout the scenes. We marked each of the four target frames and several of the distractor frames with the word “STOP.”

Twelve SAR analysts volunteered to perform our simulated search task. Each participant was assigned two of the four image sets we had curated. They were instructed to search the entire set of image products, as though they were performing an operational assessment; but to stop at each of the marked frames and report whether or not the frame contained any operationally meaningful features. Once the analyst had completed evaluating both sets of imagery, we played back a screen video recording of their session and asked them to narratively describe their search strategy and actions as they were examining the images.

3.2 Dynamic Foraging in Analytic Workflows

As discussed in [5], we discovered that analysts were actually using the image products differently than the system engineers had expected. However, what really caught our attention were the interaction patterns we extracted from the log files. Our customized logging system captured detailed information about the operations each analyst performed on the datasets we had assigned them. The logs allowed us to see the order in which each frame was examined; which image products each analyst accessed when evaluating the content of each frame; the amount of time spent on the frame; selection and manipulation of the image products associated with each of the frames (such as panning, zooming, and flickering across the half-dozen image products the SAR system generated for each scene); mouse clicks on a feature to mark anomalies; and opening additional image windows to access and display any of the previous months’ image products.
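To make the structure of such logs concrete, the sketch below shows the kind of event record this sort of instrumentation might capture. The field names and event types are illustrative stand-ins, not the actual schema of the Sandia system:

```python
from dataclasses import dataclass

# Illustrative only: field names and event types are hypothetical,
# not the actual schema of the instrumented Sandia display system.
@dataclass
class InteractionEvent:
    timestamp_ms: int       # time of the event, in milliseconds
    frame_id: str           # which terrain frame was on screen
    product_type: str       # which SAR image product was displayed
    event_type: str         # "open_frame", "pan", "zoom", "flicker",
                            # "mark_anomaly", "open_history_window", ...
    x: float | None = None  # display coordinates, for click/mark events
    y: float | None = None

# A session is an ordered list of events, from which dwell times per
# frame and per-product access patterns can be derived.
session: list[InteractionEvent] = []
```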

By comparing the log files with the video recordings and voiceover narratives we collected from the analysts, we identified interaction behaviors that seemed to be associated with different stages in an inspection workflow: moving rapidly from one frame to another, with minimal zooming or flickering, indicated the analyst was quickly scanning frames for anomalies of interest without detecting anything meaningful. Stopping on an image product, then manipulating the frame by panning around its content or zooming in and out of features in the frame, indicated a deeper investment of attention. Rapid, repeated flickering among different image products within the frame indicated a level of expectation that an anomaly might be present, since analysts rely on the animation effects of rapid flickering to facilitate the detection of new objects. When such a feature was detected, the analyst would position the feature of interest in the center of her display, then zoom to a useful level of resolution before moving into an extended flicker pattern among multiple image products, examining the area for additional evidence of an important event. If the feature seemed like it might meet the criteria for a signature of intelligence significance, the analyst would engage in seeking-and-enrichment behavior: opening a new display window and populating it with previously collected SAR imagery for the same frame, comparing and contrasting changes over time to put the anomaly into an activity context.
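A rough sketch of how such patterns might be recovered from an event stream like the one sketched above; the thresholds and rules are hypothetical stand-ins for the qualitative criteria we actually applied:

```python
# Hypothetical heuristic: maps the logged events for one frame, plus the
# dwell time on that frame, to the inspection stage they most resemble.
# All thresholds are illustrative, not empirically fitted values.

def classify_stage(event_types: list[str], dwell_ms: int) -> str:
    flickers = event_types.count("flicker")
    if "open_history_window" in event_types:
        return "seeking-and-enrichment"  # pulling historical imagery
    if flickers >= 5 and dwell_ms > 10_000:
        return "evaluation"              # extended flicker on a feature
    if flickers >= 2 or "zoom" in event_types:
        return "focused-examination"     # panning/zooming within a frame
    if dwell_ms < 2_000:
        return "rapid-scan"              # moving quickly between frames
    return "undetermined"
```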

As we examined the log file captures and identified consistent interaction behaviors associated with different stages in the SAR analysis workflows, we realized that the analysts were engaged in a complex, dynamic, feature-contingent type of search and inspection, interwoven with behaviors characteristic of information foraging in electronic datasets. By “feature contingent,” we mean that detection of a potential feature in a frame seemed to occasion a shift in the analyst’s interaction strategy, as indicated by a change in the pattern of interaction with particular image products (from panning across frames to flickering within a frame, for example). This opened up the possibility of using information foraging theory to develop models of efficiency gains as imagery analysts developed proficiency in the work environment. However, we realized that we were also missing a key component of information: What features, in what areas of which of the images, were capturing the participants’ attention, precipitating the detected shift in their behavioral patterns?

Empirical studies of information foraging behaviors have capitalized on mouse-element interactions, or “clicks,” in an interface [17]. However, when an imagery analyst is using a softcopy system to select, display and manipulate image sources, she may not be leaving a path of click events to inform an empirical description of her workflow. Studies of visual attention require the use of eye tracking systems, which became an important data collection resource in our studies of SAR analysts. Although we were able to develop and implement experimental protocols that revealed important sources of search efficiency between novice and experienced analysts (e.g., [9]), we quickly realized that eye tracking systems are simply not built for studying the unpredictable dynamics of foraging behaviors.

4 Needed: Eye Tracking in Dynamic, User-Driven Workflows

This experience has led us to begin developing a prototype system for collecting gaze data in dynamic, user-driven workflows, so that we can develop what we think of as “gaze-informed information foraging models.”

As Clark et al. have pointed out [27], it can be extremely difficult to translate laboratory-based theory and methods into operational field settings, if only because of the degrees of freedom one must account for in naturalistic environments. This is particularly the case with eye tracking systems, which evolved in a research paradigm that privileges highly constrained tasks with static stimuli to test hypotheses about perception and attention. In principle, eyetracking data could reveal how imagery analysts perceive and respond to visual cues in geospatial datasets. In practice, it is extremely difficult to study gaze-contingent analytic decision-making in realistic, user-driven workflows. Underlying the design of most eyetracking systems is a deductive model of inquiry that assumes the researcher can formulate hypotheses about how participants will interact with a stimulus, or some feature within a stimulus. Another important assumption is the degree of control exercised over the presentation of the stimulus; and, to a lesser extent, the degree to which one can predict the range of actions the participant can take with the stimulus.

These assumptions are not valid in the context of dynamic, user-directed workflows, like the ones we studied with our instrumented SAR analysis package. Log files can reveal quite a bit about which image products an analyst selects, in what order, and how he manipulates those products to facilitate the detection and characterization of items that have attracted his attention. Unfortunately, they tell us next to nothing about which scene features our analyst is attending to, nor how his visual system examines different elements of information across different image products to build an integrated narrative about the meaning of an anomaly and whether it merits the investment of additional resources.

What is needed are eye tracking data collection systems and analysis protocols that support inductive studies of analytic interactions with softcopy imagery inspection. Such a system would enable researchers to sample gaze activity as people are solving an imagery analysis problem, then calculate key events (fixations and clusters of fixations) and associate those with the content rendered on the display screen.
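As a concrete example of the “calculate key events” step, the following is a minimal version of the dispersion-threshold (I-DT) fixation detection algorithm that is standard in the eye tracking literature; the dispersion and duration thresholds are illustrative and would need tuning to the tracker and task:

```python
# Minimal dispersion-threshold (I-DT) fixation detection. Samples are
# (t_ms, x, y) gaze points in display coordinates; thresholds are
# illustrative and must be tuned to the eye tracker and viewing geometry.

def detect_fixations(samples, max_dispersion=25.0, min_duration_ms=100):
    fixations = []  # each entry: (onset_ms, duration_ms, centroid_x, centroid_y)
    i = 0
    while i < len(samples):
        j = i
        # Grow the window while the points stay within the dispersion limit.
        while j + 1 < len(samples):
            window = samples[i : j + 2]
            xs = [p[1] for p in window]
            ys = [p[2] for p in window]
            if (max(xs) - min(xs)) + (max(ys) - min(ys)) > max_dispersion:
                break
            j += 1
        duration = samples[j][0] - samples[i][0]
        if j > i and duration >= min_duration_ms:
            n = j - i + 1
            cx = sum(p[1] for p in samples[i : j + 1]) / n
            cy = sum(p[2] for p in samples[i : j + 1]) / n
            fixations.append((samples[i][0], duration, cx, cy))
            i = j + 1
        else:
            i += 1
    return fixations
```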

This is the kind of system we are beginning to build for our own work with SAR imagery analysts. SAR imagery has a number of properties that make it ideal for the kind of eye tracking data collection and analysis platform we envision. First, every pixel is associated with a stable set of geo-coordinates. Because these systems are equipped with highly accurate geo-registration algorithms, pixel locations are stable across image products – useful for determining if people are shifting their attention from one area of a scene to another as they move among image products depicting features in that scene. Second, the SAR research community has developed and implemented efficient algorithms for accurate and reliable segmentation of SAR images into regions of like pixels. This clustering reduces dynamic range in the data and facilitates the association of calculated gaze events, such as a fixation, with a mathematically calculated feature in the content. Over the next eighteen months, we will be working to iteratively develop, test, and then expand a prototype system that integrates gaze data collected using a COTS eyetracker with features in SAR imagery.
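A minimal sketch of the display-to-geo association we are prototyping appears below. The viewport model is hypothetical: it assumes a simple pan/zoom renderer in which screen position maps linearly to image pixels, and a GDAL-style affine geotransform supplied with each geo-registered SAR product:

```python
from dataclasses import dataclass

# Hypothetical viewport model: assumes the display shows an axis-aligned,
# uniformly zoomed window into the image product.
@dataclass
class Viewport:
    pan_x: float  # image pixel column at the left edge of the display
    pan_y: float  # image pixel row at the top edge of the display
    zoom: float   # display pixels per image pixel

def screen_to_image(vp: Viewport, sx: float, sy: float):
    """Map a gaze point in display coordinates to image pixel coordinates."""
    return vp.pan_x + sx / vp.zoom, vp.pan_y + sy / vp.zoom

def image_to_geo(px: float, py: float, gt):
    """Apply a GDAL-style 6-term affine geotransform to pixel coordinates."""
    x = gt[0] + px * gt[1] + py * gt[2]
    y = gt[3] + px * gt[4] + py * gt[5]
    return x, y  # e.g., (longitude, latitude) for a geographic transform
```

Chaining these two mappings for each detected fixation would let us ask whether an analyst’s gaze stays on the same geo-located feature as she flickers among image products of the same scene.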

5 Conclusion

Two decades ago, Spencer described the work of aircraft inspectors as a multisensory process, involving “…such behaviors as looking, listening, feeling, smelling, shaking, and twisting” [22]. Similarly, the work of imagery analysts has become more complex as technology affords a wider range of data and information sources, as well as an expanded repertoire of operations with those information sources.

In the United States, leaders in the military and civilian intelligence community remain concerned about how to manage the flow of data and information that their analysts must evaluate. The community needs analysis software systems that genuinely support the perceptual and cognitive work of intelligence analysts, whose work increasingly requires that they become efficient foragers in the wild world of big data. Without the ability to robustly characterize and identify interaction patterns indicative of foraging decisions, it will be difficult to judge whether new analytic tools actually provide that support.

Developing inductively-oriented algorithms and software for gaze tracking studies could expand the applicability of eye tracking systems to the complex perceptual-cognitive workflows associated with information foraging. In particular, we suggest that gaze data and the associated user behaviors and stimuli can be treated as a high-dimensional point dataset amenable to a number of decomposition and pattern analysis techniques, including matrix decomposition and graph algorithms. Expanding the repertoire of mathematical models, algorithms and software used with eye tracking systems could provide an entirely novel source of data for documenting the search behaviors that characterize information foraging in large, complex data environments.
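As a minimal sketch of the matrix-decomposition idea, assume fixations have already been associated with segmented image features as described in Section 4; one can then build a fixation-count matrix over sessions and features and factor it to expose recurring attention patterns. The sessions, feature labels, and counts below are fabricated for illustration:

```python
import numpy as np

# Hypothetical data: rows are analyst sessions, columns are segmented
# image features; entries count fixations landing on each feature.
features = ["road", "berm", "vehicle", "shadow"]
counts = np.array([
    [12.0,  3.0,  9.0, 1.0],
    [10.0,  4.0, 11.0, 0.0],
    [ 2.0, 14.0,  1.0, 7.0],
])

# A truncated SVD exposes the dominant attention patterns shared across
# sessions (rows of Vt) and how strongly each session loads on them.
U, S, Vt = np.linalg.svd(counts, full_matrices=False)
k = 2
print("Top attention patterns:", np.round(Vt[:k], 2))
print("Session loadings:", np.round(U[:, :k] * S[:k], 2))
```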