
Our studies on Human-Computer Interaction (HCI) exploit different devices to design new interfaces that satisfy users' needs. In particular, we focus on:

  • gaze interaction,
  • gesture interaction,
  • interaction with smart wearable devices.

For gaze interaction we use an eye tracker, a device that measures eye position and eye movements and thus detects the user's gaze direction.
Interfaces operated through the eyes are of great help to people with severe disabilities, allowing them to use their gaze to identify, or even move, objects on the screen, as well as to write. We have developed:

  • Eye-S, a system that allows input to be provided to the computer through a purely eye-based approach;
  • Netytar, a gaze-based Virtual Digital Musical Instrument (Virtual DMI), usable by both motor-impaired and able-bodied people and controlled through an eye tracker and a "switch", to play music with the eyes;
  • a Gaze-Based Web Browser;
  • e5Learning, an e-learning environment where eye tracking is used to observe user behavior and adapt content presentation in real time.

For the temporary exhibition 1525-2015. Pavia, la Battaglia, il Futuro. Niente fu come prima, a satellite event of Expo 2015 held at the Visconti Castle in Pavia, we developed an application for the observation of seven famous tapestries representing the 1525 Battle of Pavia. Without mouse or keyboard, using only their eyes, visitors could explore the artworks (see the movie "A Gaze-controlled System for Handless Interaction with Artworks"): they could zoom and scroll, and view information on specific subjects of each tapestry simply by looking at them. At the end of the exploration, visitors could also watch their gaze replay, a movie showing the sequence of fixations on the areas of the tapestry where their eyes had focused.
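Gaze-only interfaces of this kind typically trigger a selection when the gaze dwells on a target long enough. As an illustrative sketch only (the function name, radius, and dwell threshold are hypothetical, not details of our systems), a minimal dwell-time detector over raw gaze samples might look like this:

```python
import math

def dwell_select(samples, radius=40.0, dwell_ms=800):
    """Return the index of the first gaze sample at which a
    dwell-based selection would trigger, or None.

    samples: list of (x, y, t_ms) gaze points with increasing t_ms.
    A selection triggers when the gaze stays within `radius` pixels
    of a candidate fixation point for at least `dwell_ms`.
    """
    anchor = None   # (x, y) of the current fixation candidate
    start_t = None  # time the candidate started
    for i, (x, y, t) in enumerate(samples):
        if anchor is None:
            anchor, start_t = (x, y), t
            continue
        if math.hypot(x - anchor[0], y - anchor[1]) <= radius:
            if t - start_t >= dwell_ms:
                return i  # dwell threshold reached: select
        else:
            anchor, start_t = (x, y), t  # gaze moved: restart dwell
    return None
```

For example, a gaze held steady on one point for 800 ms triggers a selection, while a sequence of large jumps never does.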
We are also studying the effectiveness of existing and new RSVP (Rapid Serial Visual Presentation) image visualization methods, which involve intense eye activity. Soft Biometrics, Automotive, and Assistive and Persuasive Technologies are further application fields.
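In RSVP, images are shown one at a time in the same screen location at a fixed rate. As a hedged sketch (the function and the 10 Hz default are illustrative assumptions, not parameters from our studies), the presentation schedule can be computed as:

```python
def rsvp_schedule(image_ids, rate_hz=10.0, start_ms=0.0):
    """Compute (image_id, onset_ms, offset_ms) triples for a Rapid
    Serial Visual Presentation stream at a fixed presentation rate.
    Each image is shown for exactly one frame period.
    """
    frame_ms = 1000.0 / rate_hz
    schedule = []
    for k, img in enumerate(image_ids):
        onset = start_ms + k * frame_ms
        schedule.append((img, onset, onset + frame_ms))
    return schedule
```

At 10 Hz, three images occupy 300 ms of the stream, back to back with no inter-stimulus gap.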

For gesture interaction we have used a Kinect, a motion-sensing input device.
For the temporary exhibition 1525-2015. Pavia, la Battaglia, il Futuro. Niente fu come prima we developed an application that allowed visitors, through a Kinect sensor, to select a specific tapestry and guide the visualization toward particular details using simple hand gestures (see the movie about the developed gesture interaction via Kinect).
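A simple hand gesture such as a horizontal swipe can be recognized from the hand positions a skeleton tracker like the Kinect provides. The following sketch (the thresholds and the normalized-coordinate convention are assumptions, not details of the exhibition software) classifies a swipe from a per-frame stream of hand x positions:

```python
def detect_swipe(hand_xs, min_disp=0.30, max_frames=15):
    """Classify a horizontal swipe from normalized hand x positions
    (0..1, one value per frame). Returns 'left', 'right', or None.

    A swipe is a net horizontal displacement of at least `min_disp`
    within a window of `max_frames` consecutive frames.
    """
    for start in range(len(hand_xs)):
        end = min(len(hand_xs), start + max_frames)
        for j in range(start + 1, end):
            disp = hand_xs[j] - hand_xs[start]
            if disp >= min_disp:
                return 'right'
            if disp <= -min_disp:
                return 'left'
    return None
```

A steadily increasing x position reads as a right swipe, a decreasing one as a left swipe, and a stationary hand as no gesture.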

Smart wearable devices have been used in 2018-2021 to study fall detection with recurrent neural networks.
Accidental falls have an enormous human cost, especially for elderly people, and automatic fall detection techniques are needed for timely warnings. We considered the use of “smart” wearable devices and the application of an innovative technique: deep learning on embedded devices. The implementation challenges are the limited computing and memory resources and the battery life required for continuous 24/7 use. We collected datasets with falls simulated by volunteers: seven carry positions, seventeen different activities, forty volunteers, and over five thousand tracks. The recordings were manually annotated on video, which is essential for training.
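On embedded hardware with tight computing and memory budgets, the recurrent network's forward pass must stay small. As an illustrative sketch only (the architecture, sizes, and read-out are hypothetical, written in NumPy to show the computation rather than the deployed implementation), scoring one accelerometer window with a minimal Elman-style recurrent network looks like this:

```python
import numpy as np

def rnn_fall_score(window, Wx, Wh, b, w_out, b_out):
    """Score one accelerometer window with a minimal Elman-style RNN.

    window: (T, 3) array of (ax, ay, az) samples.
    Wx: (H, 3) input weights; Wh: (H, H) recurrent weights;
    b: (H,) bias; w_out: (H,) read-out weights; b_out: scalar bias.
    Returns a fall probability in (0, 1) via a sigmoid read-out
    on the final hidden state.
    """
    h = np.zeros(Wh.shape[0])
    for x_t in window:
        h = np.tanh(Wx @ x_t + Wh @ h + b)  # recurrent state update
    logit = w_out @ h + b_out
    return 1.0 / (1.0 + np.exp(-logit))
```

A sliding window of raw accelerometer samples goes in, a single fall score comes out; thresholding that score yields the warning decision.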


Our latest journal publications on Human-Computer Interaction

See also a full list of our Publications.

Get In Touch

Laboratorio di Visione Artificiale e Multimedia
Dipartimento di Ingegneria Industriale e dell'Informazione
Università di Pavia
Via Ferrata 5, 27100 Pavia - ITALY

+39 0382 98 5372/5486

web-vision@unipv.it