Testing the Usability of Visual Languages: A Web-Based Methodology

Mauro Mosconi - Marco Porta

Dipartimento di Informatica e Sistemistica - Università di Pavia
Via Ferrata, 1 - 27100 - Pavia - Italy

mauro@vision.unipv.it - porta@vision.unipv.it
Published in the Proceedings of the 8th International Conference on Human-Computer Interaction (HCI'99), 22-27 August 1999, Munich, Germany, Vol. 1, pp. 1053-1057



1. The Objective: Testing the Expressiveness of Visual Control Structures

The purpose of this paper is to illustrate the methodology we developed to begin a comparative usability study of different implementations of visual control-flow constructs.

Since loops in data-flow visual languages (Hils 1992) may be difficult to understand (Green and Petre 1996), we decided to test the usability of the solutions we devised for VIPERS, a data-flow visual language developed at the University of Pavia (Ghittori, Mosconi and Porta 1998). To get useful indications, we opted for a comparative analysis, also referring to the well-known data-flow language LabView (Vose 1986), where iterative constructs are implemented according to a totally different philosophy.

Among the possible evaluation methods (Preece 1993), we selected observational evaluation, which involves observing or monitoring users' behavior while they use an interface, and survey evaluation, which means seeking users' subjective opinions. To collect data about what users do when interacting with the test interface, we avoided direct observation: if users are constantly aware that their performance is being monitored, their behavior may be strongly influenced (the Hawthorne effect). Instead, we used software logging to record the dialog between user and system; in particular, our methodology is based on the log files of a web server, as will be illustrated. Moreover, we used questionnaire forms to support the survey evaluation.

2. The Testing Methodology

2.1 Selecting the Test Context

Every usability evaluation is meaningful within a precise context, including the skill level of the testers, the types of task undertaken, and the environment in which the test is carried out.

For our first experiments, we decided to work with high school students (17 to 19 years old) with little skill in textual programming and no experience at all in visual programming. We chose suitable mathematical applications as test tasks. Even though we were aware that many other application domains would be better suited to a data-flow approach, we opted for problems close to the students' school experience.

As far as the programming environment was concerned, our aim was to make the interaction independent of the computer platform used (by carrying out tests in a heterogeneous environment, our lab, with PCs, Macs, and UNIX machines). This consideration also influenced our choice to focus on the program understanding process rather than on program construction.

We stress the fact that we did not want to compare the usability of the whole VIPERS and LabView environments, but only to observe how efficiently these two languages visually express control constructs (loops, in particular).

2.2 Planning the Tests

We planned two sessions, each one with twelve users. Altogether, twelve users tackled a set of three problems through VIPERS and twelve the same set through LabView.

Each user had to examine, in sequence, three visual programs displayed on the computer screen and translate them into corresponding textual programs (in pseudo-Pascal, since they all knew that language). We took the number of correct solutions as a first indicator of how comprehensibly each language implements loops. The time limit for the test was one and a half hours.

2.3 Implementing the Tests: the Technical Approach

The idea that makes the creation of our tests (and, more generally, of any tests exploiting this setup) particularly economical is the use of a web server and its log files. Each problem is presented to the user as a web page: during the test, the user interacts exclusively with a web browser, independently of the platform used. In the web page (see Figure 1), the problem is visualized as an image, or rather as an image map. Image maps are graphics in which certain regions are mapped to URLs: by clicking on different regions, different resources can be accessed from the same graphic.

Figure 1: a (LabView) visual program to be examined is shown within a web browser
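As a sketch of this mechanism, the test page for a problem could be generated from a list of clickable regions. All coordinates, file names, and help-page URLs below are hypothetical examples, not taken from the original experiment:

```python
# Sketch: generate a test page whose program screenshot is an image map.
# Region coordinates and help-page names are invented for illustration.

REGIONS = [
    # (x1, y1, x2, y2, help page) for each visual symbol in the screenshot
    (40, 60, 120, 110, "help/for_loop.html"),
    (150, 60, 230, 110, "help/adder.html"),
    (260, 60, 340, 110, "help/display.html"),
]

def make_test_page(image_url, regions):
    """Return the HTML fragment embedding the screenshot as an image map."""
    areas = "\n".join(
        f'  <area shape="rect" coords="{x1},{y1},{x2},{y2}" href="{href}">'
        for x1, y1, x2, y2, href in regions
    )
    return (f'<img src="{image_url}" usemap="#program">\n'
            f'<map name="program">\n{areas}\n</map>')

print(make_test_page("programs/problem1.gif", REGIONS))
```

Because every click on a region is an ordinary HTTP request for a help page, no client-side instrumentation is needed: the server log records everything.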

In our case, we associated each graphic symbol with a page illustrating its meaning. The web server's log file, by registering the users' actions, could reveal how many times (and in what order) each user clicked on a symbol of the program to ask for the help pages, and how long the user paused to read the explanations before returning to the main page with the BACK button.
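A minimal sketch of this kind of analysis, assuming the server writes the widespread Common Log Format (the log lines, host names, and page paths below are invented examples; real field layouts may differ by server), could look like:

```python
import re
from datetime import datetime

# Sketch: extract each tester's requests and dwell times from a web-server
# log in Common Log Format. The log content is a hypothetical example.
LOG = """\
pc01.lab - - [12/May/1999:10:02:11 +0200] "GET /test/problem1.html HTTP/1.0" 200 8120
pc01.lab - - [12/May/1999:10:03:40 +0200] "GET /test/help/for_loop.html HTTP/1.0" 200 1044
pc01.lab - - [12/May/1999:10:04:25 +0200] "GET /test/help/adder.html HTTP/1.0" 200 988
"""

PATTERN = re.compile(r'^(\S+) \S+ \S+ \[([^\]]+)\] "GET (\S+)')

def parse(log_text):
    """Return, per host, the ordered list of (timestamp, page) requests."""
    hits = {}
    for line in log_text.splitlines():
        m = PATTERN.match(line)
        if not m:
            continue
        host, ts, page = m.groups()
        when = datetime.strptime(ts.split()[0], "%d/%b/%Y:%H:%M:%S")
        hits.setdefault(host, []).append((when, page))
    return hits

def dwell_times(requests):
    """Seconds spent on each page, inferred from the next request's time."""
    return [(page, (nxt - when).total_seconds())
            for (when, page), (nxt, _) in zip(requests, requests[1:])]

for host, reqs in parse(LOG).items():
    print(host, dwell_times(reqs))
```

Since each tester worked at one machine, the host field is enough to separate the sessions; time spent reading a help page is approximated by the interval before the next request.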

Figure 2 highlights, using "finger marks", the parts of one of the test programs that the testers studied most, on average. For a complete report of the results obtained, see (Ghittori 1998); here, we concentrate on illustrating the methodology used.

Figure 2: a finger-marks representation of the most accessed elements in a (VIPERS) program understanding process (derived from the web-server log file)
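The finger-marks view amounts to aggregating help-page requests across all testers. A sketch, using hypothetical per-tester click lists such as those recoverable from the log:

```python
from collections import Counter

# Sketch: aggregate help-page requests across all testers to obtain the
# "finger marks" view -- how often each symbol's explanation was consulted.
# The click lists are hypothetical examples, one list per tester.
clicks_per_tester = [
    ["help/for_loop.html", "help/adder.html", "help/for_loop.html"],
    ["help/for_loop.html", "help/display.html"],
    ["help/adder.html", "help/for_loop.html"],
]

marks = Counter(page for clicks in clicks_per_tester for page in clicks)
for page, count in marks.most_common():
    print(f"{page}: {count} consultations")
```

The symbols with the highest counts are those rendered with the heaviest finger marks in the figure.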

One type of result we believe is worth showing is given in Figure 3, where the path a tester followed in examining a program was reconstructed from the log file.

Figure 3: reconstruction of the mental scan-path of the user inspecting a visual program
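Given the ordered help requests recovered from the log, the scan path is simply the sequence of symbols they correspond to. A sketch (the page-to-symbol mapping and the request list are hypothetical):

```python
# Sketch: reconstruct one tester's scan path from the ordered help-page
# requests found in the log. Page-to-symbol names are invented examples.
SYMBOL = {
    "help/for_loop.html": "FOR loop",
    "help/adder.html": "adder",
    "help/display.html": "display",
}

requests = ["help/for_loop.html", "help/adder.html",
            "help/for_loop.html", "help/display.html"]

scan_path = " -> ".join(SYMBOL[p] for p in requests)
print(scan_path)
```

Note that repeated visits to the same symbol are preserved, since returning to an element is itself informative about the comprehension process.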

Our impression is that this scan path (which can easily be obtained automatically) may prove a very interesting tool, open to analysis by cognitive psychologists, with which to re-examine long-standing hypotheses about the mental representation of visual programs and the cognitive processes involved in programming in these new environments.

3. Considerations About the Testing Methodology

We believe we have developed a usability analysis methodology that is effective, supplies a wealth of feedback, and is both fast and economical. In particular, we would like to point out that:

References

Ghittori, E. (1998). Usabilità dei linguaggi visuali dataflow: il problema dei costrutti di controllo (Usability of Dataflow Visual Languages: the Problem of Control Constructs). Master's Thesis, Dipartimento di Informatica e Sistemistica, Università di Pavia.

Ghittori, E., Mosconi, M., Porta, M. (1998). Designing new Programming Constructs in a Data Flow VL. Proceedings of the 14th IEEE International Conference on Visual Languages (VL'98), 1-4 September 1998, Nova Scotia, Canada.

Green, T. R. G., Petre, M. (1996). Usability Analysis of Visual Programming Environments: A 'Cognitive Dimension' Framework. Journal of Visual Languages and Computing, 7(2), pp. 131-174.

Hils, D. D. (1992). Visual Languages and Computing Survey: Data Flow Visual Programming Languages. Journal of Visual Languages and Computing, vol. 3, pp. 69-101.

Preece, J. (1993). A Guide to Usability. Human Factors in Computing. Addison Wesley.

Vose, G. M. (1986). LabView: Laboratory Virtual Instrument Engineering Workbench. BYTE, vol. 11, n. 9, pp. 82-84.
