
Development of a semi-autonomous system for eye tracking experiments and analysis of the results

Customer segmentation is an approach to dividing the customers of a product or service into groups by identifying attributes they share. Similarities within a group can be assessed in many ways: by demographic information (e.g., age, gender, and income), geographic information, psychographics (e.g., personality type and preferences), or behavior (e.g., shopping behavior and browsing patterns). The purpose of the work carried out here is to group the customers of an insurance company by their gaze, in order to find possible similarities among the clients of a given product.

  1. Phases of the Project
    The project was divided into two phases. The first phase aimed to gather data from customers, while the second examined the dataset and built customer profiles from it.
  2. System Requirements in the First Phase
    Data about customer preferences will be gathered through a “questionnaire” displayed on a screen, while eye data will be collected with an eye tracker. For eye data acquisition, the insurance company will invite some representative customers to attend eye tracking experiments at certain branches. These customers will be selected based on the type of insurance services they have subscribed to date. The eye tracking systems installed in the company’s branches will presumably be operated by sales personnel; therefore, the system should be easy to use and require minimal intervention from the staff. Moreover, all the data should be saved to shared storage on a remote server. The questionnaire and the stimuli of the eye tracking experiment will be designed by a team from the Neosperience company composed of psychologists and graphic designers. Hence, the system should provide a way for the team to develop the stimuli separately and then easily embed them into the application. Furthermore, the application should be sufficiently generic to accommodate flexible stimuli such as video, sound, static images, and static images with 'clickable' sensitive areas. As output from the first phase, the system is expected to produce a gaze dataset that will serve as input to the next stage. The considered eye features are derived from primary eye metrics, such as fixation duration and pupil size.
  3. System Design and Implementation
    Based on the above stated requirements, an eye tracking system with three main modules has been implemented: a module for conducting experiments, a module for creating the stimuli, and a module for data processing.
    • Module for Conducting Experiments
      This module covers the basic workflow of an experiment. The staff of the branches where the experiments take place only need to launch the application and arrange the position of the customer so that the eyes are detected correctly; this is done with the help of an eye position indicator on the screen. From this point onward, the experiment runs unattended, and the customer is guided by a tutorial provided in the form of text and audio. After the application correctly detects the presence of the eyes, it starts the calibration procedure. If the calibration quality is sufficiently good, the experiment proceeds to the main stimuli; otherwise, the calibration is repeated. The main stimuli can be videos, sound, static images, and static images with 'clickable' sensitive areas. The task can also vary: looking at a stimulus displayed for a fixed time, freely observing a stimulus without time limits and moving to the next one with a key press, or selecting one of several options in the stimulus by gazing at it. In this last case, a customer can perform the equivalent of a mouse click by fixating a sensitive area for a minimum time. At the end of the experiment, an FTP module sends the data from local storage to a server. The application also produces heat maps and scan paths, with an option to print or email them as feedback to the customer.
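      The dwell-time "gaze click" described above can be sketched as follows. This is a minimal illustration, not the project's implementation: the sample format (timestamp, x, y), the rectangle representation of sensitive areas, and the one-second threshold are all assumptions.

```python
DWELL_TIME = 1.0  # assumed seconds of continuous fixation needed to trigger a "click"

def point_in_area(x, y, area):
    """area = (left, top, width, height) in screen coordinates."""
    left, top, w, h = area
    return left <= x < left + w and top <= y < top + h

def detect_gaze_click(samples, areas, dwell_time=DWELL_TIME):
    """Return the index of the first sensitive area fixated continuously
    for at least dwell_time, or None. `samples` is a time-ordered list
    of (t, x, y) gaze samples."""
    current = None     # index of the area the gaze is currently inside
    entered_at = None  # timestamp when the gaze entered that area
    for t, x, y in samples:
        hit = next((i for i, a in enumerate(areas) if point_in_area(x, y, a)), None)
        if hit != current:
            # Gaze moved to a different area (or left all areas): restart the timer.
            current, entered_at = hit, t
        elif hit is not None and t - entered_at >= dwell_time:
            return hit  # dwell threshold reached: this counts as a "click"
    return None
```

      Restarting the timer whenever the gaze leaves the area ensures that only an uninterrupted fixation triggers the selection, which is what distinguishes an intentional "click" from a passing glance.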
    • Module for Developing Stimuli
      In this module, the designer can define the stimuli by constructing a configuration file which specifies various options for the experiment. There are three kinds of configuration files, depending on their purpose: main configuration, playlist settings, and sensitive area settings. The main configuration file specifies parameters used by the whole application, such as the minimum calibration quality, the 'click' dwell time, heat map and gaze plot color settings, the default printer, and FTP server and SMTP settings. In total, there are 52 customizable variables in the main configuration file. The playlist settings file contains the list of stimulus files and their parameters (such as the file name, type, duration, and whether they have associated sensitive areas). Lastly, the sensitive area settings file lists the coordinates and sizes of the clickable sensitive areas within a stimulus. The system also provides an interface to help the designer define sensitive areas.
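      A playlist settings file of the kind described above could be handled as sketched below. The simple "key=value" line format, the key names, and the file names in the example are hypothetical, chosen only to illustrate the listed parameters (file name, type, duration, associated sensitive areas).

```python
def parse_playlist(text):
    """Parse a hypothetical playlist file: one stimulus per block of
    key=value lines, blocks separated by blank lines. Returns a list
    of dicts, one per stimulus entry."""
    entries, current = [], {}
    for line in text.splitlines():
        line = line.strip()
        if not line:
            if current:               # blank line closes the current entry
                entries.append(current)
                current = {}
            continue
        key, _, value = line.partition("=")
        current[key.strip()] = value.strip()
    if current:                       # flush the last entry
        entries.append(current)
    return entries

# Illustrative playlist content (file names and keys are invented):
example = """\
file=intro.mp4
type=video
duration=10

file=policy_options.png
type=image
duration=0
sensitive_areas=policy_options.sa
"""
```

      Keeping the playlist in a plain text file like this lets the design team edit stimuli without touching the application, which matches the requirement that stimuli be developed separately and then embedded.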
    • Module for Data Processing
      In this module, the raw eye data are transformed into more meaningful features. In this first phase of the project, the module provides simple statistics about the eye data, such as total fixation duration, total fixation count, pupil size, and visit count. The statistics can be aggregated over the whole stimulus or separated by predefined areas of interest. Results are saved in MS Excel file format, making them interoperable with almost any statistical software for further analysis.
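      The per-area aggregation described above can be sketched as follows. The fixation tuple layout (x, y, duration in ms, pupil size) and the rectangle-based areas of interest are assumptions for illustration; only the aggregated metrics (total fixation duration, fixation count, pupil size) come from the description above.

```python
def aoi_statistics(fixations, aois):
    """Aggregate total fixation duration, fixation count, and mean pupil
    size per area of interest.
    fixations: iterable of (x, y, duration_ms, pupil_size) tuples.
    aois: {name: (left, top, width, height)} rectangles."""
    stats = {name: {"total_duration": 0, "fixation_count": 0, "pupil_sum": 0.0}
             for name in aois}
    for x, y, duration, pupil in fixations:
        for name, (l, t, w, h) in aois.items():
            if l <= x < l + w and t <= y < t + h:   # fixation falls inside this AOI
                s = stats[name]
                s["total_duration"] += duration
                s["fixation_count"] += 1
                s["pupil_sum"] += pupil
    for s in stats.values():
        n = s["fixation_count"]
        pupil_sum = s.pop("pupil_sum")              # replace the sum with a mean
        s["mean_pupil"] = pupil_sum / n if n else None
    return stats
```

      A table like this, with one row per AOI, maps directly onto the Excel output mentioned above, e.g. via a library such as openpyxl.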
PROJECT INFO
Duration: July - September 2016
Funded by: Neosperience Lab S.r.l.
Project type: Research Contract
Get In Touch

Laboratorio di Visione Artificiale e Multimedia
Dipartimento di Ingegneria Industriale e dell'Informazione
Università di Pavia
Via Ferrata 5, 27100 Pavia - ITALY

+39 0382 98 5372/5486

web-vision@unipv.it