
Saturday, March 19, 2016

Tracking students in Google Earth

Our paper 'Footprints in the sky: using student track logs from a 'bird's eye view' virtual field trip to enhance learning' has been published. It describes how students were tracked zooming and panning around Google Earth on a virtual field trip. Their movements were recorded and their visual attention inferred and rendered as a paint spray map: high attention = hot colors, track = blue line.

A paint spray map of 7 students (1-7) performing a search task in Google Earth.
Background imagery has been removed to aid clarity.

How it works
The idea is to track students performing a search task; in our experiment they searched a study area for evidence of an ancient lake that has since dried up. Their 3D track as they zoom and pan around in Google Earth is recorded, and their visual attention is mapped as if sprayed from a can of paint: if they zoom in to check an area in close up, Visual Attention (VA) builds up in a small area; if they zoom out, VA still builds up but is spread over a much larger area.
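To make the mechanism concrete, here is a rough TypeScript sketch of the paint spray idea. It is not the paper's actual implementation: the grid resolution, the per-sample paint budget and the assumed field of view are all invented for illustration.

```typescript
// Hypothetical sketch of the "paint spray" idea: each camera sample deposits a
// fixed budget of Visual Attention (VA) spread over the ground area in view,
// so close-up views concentrate VA and zoomed-out views dilute it.

interface CameraSample {
  lat: number;      // camera target latitude (degrees)
  lon: number;      // camera target longitude (degrees)
  altitude: number; // camera altitude above ground (metres)
}

const CELL_SIZE_M = 100;             // invented ground grid resolution
const PAINT_PER_SAMPLE = 1.0;        // invented fixed VA budget per sample
const VIEW_HALF_ANGLE = Math.PI / 6; // assumed half field-of-view (30 degrees)

// Accumulate VA into a sparse grid keyed by "row,col".
function sprayVA(grid: Map<string, number>, s: CameraSample): void {
  // Approximate the viewport footprint as a circle whose radius grows with altitude.
  const radiusM = s.altitude * Math.tan(VIEW_HALF_ANGLE);
  const radiusCells = Math.max(1, Math.round(radiusM / CELL_SIZE_M));

  // Crude local metres-per-degree conversion (fine for a small study area).
  const mPerDegLat = 111_320;
  const mPerDegLon = 111_320 * Math.cos((s.lat * Math.PI) / 180);
  const row0 = Math.round((s.lat * mPerDegLat) / CELL_SIZE_M);
  const col0 = Math.round((s.lon * mPerDegLon) / CELL_SIZE_M);

  // Spread the same total budget over every cell in the footprint:
  // more cells in view means less VA per cell.
  const cellsInFootprint = Math.PI * radiusCells * radiusCells;
  const paintPerCell = PAINT_PER_SAMPLE / cellsInFootprint;

  for (let dr = -radiusCells; dr <= radiusCells; dr++) {
    for (let dc = -radiusCells; dc <= radiusCells; dc++) {
      if (dr * dr + dc * dc <= radiusCells * radiusCells) {
        const key = `${row0 + dr},${col0 + dc}`;
        grid.set(key, (grid.get(key) ?? 0) + paintPerCell);
      }
    }
  }
}
```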

Mapping the accumulation of VA along with their track projected onto the ground (blue line) shows at a glance where the students have searched and in what detail. The small multiples above show data from 7 students who were given 3 set areas to investigate in further detail (target/guide polygons). This was done in Google Earth but, to aid visibility, the Google Earth base map has been removed. From the maps we can infer what the students were doing, e.g. student g5 didn't appear to visit the top right guide polygon at all and students g1, g3 and g6 only gave it a cursory look. By comparison, students g2 and g4 explored it much more thoroughly.

How it could be used
The idea would be to give the maps to students to help them assess how they did on the exercise. In addition, the VA from all students can be collated, which the tutor can use to see whether his/her activity worked well or not (bottom right of the multiples above). In this case the summed VA shows that students examined the areas they were supposed to, that is, within the target/guide polygons.
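The class-level summary for the tutor could then be as simple as an element-wise sum of the per-student grids; a trivial sketch reusing the hypothetical grid format above:

```typescript
// Hypothetical class-level summary: element-wise sum of the per-student VA
// grids produced by sprayVA(), for the tutor's "did the activity work?" view.
function sumVA(grids: Map<string, number>[]): Map<string, number> {
  const total = new Map<string, number>();
  for (const grid of grids) {
    for (const [cell, va] of grid) {
      total.set(cell, (total.get(cell) ?? 0) + va);
    }
  }
  return total;
}
```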

The system only works with a zoom-and-pan navigation system where the zoom function is genuinely needed to explore properly. If the exercise can be solved by panning alone, a paint spray map won't show much variation in VA and interpretation would be difficult or impossible.


Other Related Work
Learning Analytics is a growing area of investigation; there's lots of work tracking students' logs from VLEs (LMSs in the US) to understand their learning. There has also been use of tracking to see where avatars have moved in virtual environments, visualizing it as a 'residence time' map similar to the VA maps above. However, this is the first attempt we've come across where movement in a 3D virtual environment via zoom and pan has been tracked and visualized.




Wednesday, November 27, 2013

Footprints in the Sky: Tracking Students on Virtual Fieldtrips

Virtual Field Trips (VFTs) can be used to go to places that are impossible to visit (e.g. the Mid-Atlantic Ridge), or act as a replacement for students unable to physically attend a field trip. An example of one produced by colleagues at the Open University is previewed in the video below (source):



VFTs have been produced using 3D platforms such as Google Earth but it is only recently that developments in software and hardware have meant that the technology is robust enough to use in everyday teaching.  

Tracking Students:  One idea we had in our Google research project was to see whether tracking students flying around VFTs can be used to inform tutors and students about students' learning. This topic isn't well covered in the literature, so it is worth investigating. A paper Muki, Paolo Viterbo and I have just submitted to a journal describes our work in this area. We collected 4D data (3D plus time) using the Google Earth API from students navigating around to complete an educational search task. In some VFTs students are limited to walking, but in ours they had access to zoom and pan.
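For a flavour of the data collection, here is a minimal sketch of that 4D logging against the (now retired) Google Earth plugin API. The endpoint name and exact fields are assumptions for illustration, not our actual logging code:

```typescript
// Minimal sketch of 4D (position + time) logging via the Google Earth plugin
// API; 'ge' is the plugin instance created elsewhere with
// google.earth.createInstance().
declare const ge: any; // Google Earth plugin instance, no official TS typings

function startTrackLogging(intervalMs = 200): number {
  return window.setInterval(() => {
    const cam = ge.getView().copyAsCamera(ge.ALTITUDE_RELATIVE_TO_GROUND);
    const sample = {
      t: Date.now(),          // timestamp (ms) for later log alignment
      lat: cam.getLatitude(),
      lon: cam.getLongitude(),
      alt: cam.getAltitude(), // metres above ground
      heading: cam.getHeading(),
      tilt: cam.getTilt(),
    };
    // Fire-and-forget POST to a hypothetical logging endpoint.
    fetch('/log/track', {
      method: 'POST',
      headers: { 'Content-Type': 'application/json' },
      body: JSON.stringify(sample),
    });
  }, intervalMs);
}
```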

Two Visualisations:  In the paper we describe two visualisations which help users (either tutors or students) make sense of the complex 4D tracking data. One is a static graphic (not covered today); the other is an animation:

The animation links an altitude vs distance graph with a 3D view of the track in space using Google Earth’s cross section functionality. We think that these visualisations are quick and effective ways to evaluate students' search activities.

Experiment Summary:  In the experiment students:
1.     They viewed a Google Earth tour which explained how to identify paleo-geographical features (lake banks surrounding a lake long since dried up).
2.     They were then set the task of searching for their own example in a defined study area. An important feature of the task was that students could not complete their search without zooming in to check characteristics in more detail. Their route through 3D space was tracked and saved to a server.
3.     They marked their answer on the map.



Visualised data: The simple 3D path in space looks like spaghetti thrown into the air (top section of the diagram above); it’s difficult to interpret. However, by plotting altitude against distance along the path in a linked graph (bottom part), the actions of the student zooming in and out on targets can be clearly seen. In the main view (top of image) the red arrow shows the camera location and the hair line on the graph (bottom) shows the corresponding point on the path. You can move the hair line to explore the path; this page links to a sample KML file and the YouTube clip explains how to set it up and what it shows in more detail.
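For the curious, the altitude vs distance graph can be derived from the logged track with nothing more than a cumulative haversine distance. A hedged sketch (the sample fields are assumed to match the logger sketched earlier):

```typescript
// Sketch of deriving the altitude vs distance graph from the logged track:
// cumulative along-track ground distance (haversine) on the x-axis, camera
// altitude on the y-axis.
interface TrackSample { t: number; lat: number; lon: number; alt: number; }

const EARTH_RADIUS_M = 6_371_000;

function haversineM(a: TrackSample, b: TrackSample): number {
  const rad = (d: number) => (d * Math.PI) / 180;
  const dLat = rad(b.lat - a.lat);
  const dLon = rad(b.lon - a.lon);
  const h =
    Math.sin(dLat / 2) ** 2 +
    Math.cos(rad(a.lat)) * Math.cos(rad(b.lat)) * Math.sin(dLon / 2) ** 2;
  return 2 * EARTH_RADIUS_M * Math.asin(Math.sqrt(h));
}

// Returns [distanceAlongPathM, altitudeM] pairs ready for plotting.
function altitudeProfile(track: TrackSample[]): [number, number][] {
  let dist = 0;
  return track.map((s, i) => {
    if (i > 0) dist += haversineM(track[i - 1], s);
    return [dist, s.alt];
  });
}
```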


What Does it Show? From interacting with this visualization several aspects of the students’ performance can be easily gauged (the sketch after this list shows how they might be flagged automatically):
·      Did the student zoom in on sensible targets (i.e. the ‘answer’ area and other areas that needed checking out)?
·      Did the student get disorientated (stray outside the yellow study area box or spend an overly long time in one area)?
·      Were they thorough in their search or did they just do the bare minimum (did they zoom in on a number of sensible locations, just a few, or did they fail to zoom in at all)?
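These judgments are made by eye from the animation, but simple automatic flags are conceivable. A hypothetical sketch with invented thresholds, reusing the TrackSample type from the profile sketch above:

```typescript
// Hypothetical automatic flags for the questions above; thresholds invented.
interface StudyArea { minLat: number; maxLat: number; minLon: number; maxLon: number; }

// Did the student stray outside the (yellow) study area box?
function strayedOutside(track: TrackSample[], area: StudyArea): boolean {
  return track.some(
    (s) => s.lat < area.minLat || s.lat > area.maxLat ||
           s.lon < area.minLon || s.lon > area.maxLon,
  );
}

// Count distinct "zoom-in events": descents below an altitude threshold.
function zoomInCount(track: TrackSample[], belowAltM = 2000): number {
  let count = 0;
  let wasLow = false;
  for (const s of track) {
    const isLow = s.alt < belowAltM;
    if (isLow && !wasLow) count++;
    wasLow = isLow;
  }
  return count;
}
```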

Possible Uses:  This technique could be applied to a number of virtual field trip situations. The case study we’ve already looked at represents a physical geography/earth science application. It could also be used for:

Human geography: e.g. if students are taught that poorer neighborhoods are likely to be further from the centre of a city, you could ask students to identify poor neighborhoods in a sample city. Tracking a successful search would show students navigating to sample sites around the edge of the city and then zooming into Street View to check if they were right or not.

Student created maps:  Students are first tasked with identifying volcanoes in a country. They mark three answers on a class shared map in the first stage. In the second stage, they assess their peers' work and are tracked zooming in on each other's placemarks. You could see how well they performed in the second stage from the tracking animation, e.g. did they check out suggestions in enough detail? IMHO this last example has the advantage of representing deeper learning: it challenges students to think critically about each other’s work.

Ethics:  Learning Analytics is a powerful new tool for teaching; used carefully it has huge potential to assist students and tutors. However, it also raises real teaching issues, such as: will students react well to the extra kind of feedback they can now receive? Will institutions use it to measure tutors' performance in a confrontational manner? IMHO we need to approach this new tool with an open, student-focused frame of mind.
  

Thursday, November 1, 2012

Eye-Tracking Zoomable Maps


This post was joint authored by Paolo Battino and Rich Treves.

One of the evaluation techniques we said we would employ in our Google Research Project is eye-tracking. Eye-tracking software is usually designed to record the position of your gaze on the screen, assuming the content of the screen only changes in a predictable manner. This means that current eye-tracking software is good for understanding actions on a web page, e.g. did the user spend more time looking at the side menu, header or main content? This is because the screen is divided into static areas and the time spent looking at each area can be easily calculated.

Problem with Eye-Tracking: Unfortunately, this does not work when you have a map (or a virtual globe) and want to keep track of the geographic location observed by the user. In this situation, the X,Y coordinates on screen recorded by the eye-tracker do not map directly onto lat/long coordinates because the user can zoom, pan and tilt the map ‘camera’.

Solution:  We have developed a solution based entirely on software developed for this project. See the example below:
    
Heat maps showing density of eye fixations on a Google Earth map.  
Reading down, the screen shots represent zooming in.  
Red = High density, Blue = Low

Subjects were tested in a mock-up of an educational situation. They were shown (in a Google Earth tour) how to identify a special type of valley and then asked to find one in a given area. The heat-maps show where on the surface of Google Earth the user was looking during the experiment, independent of zoom level/tilt/pan position.

Heat Map Script: The heat-map script, developed by Patrick Wied, is particularly effective at showing “the big picture” (top) but also renders dynamically when the user zooms in. The screen shots themselves are from a Google Maps mashup with all the usual zoom and pan controls.
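For anyone wanting to replicate the rendering, usage of heatmap.js boils down to very little code. This sketch follows the current v2 API (create/setData), which may differ from the 2012 version of the script, and assumes the lat/longs have already been projected to container pixel coordinates:

```typescript
// Minimal sketch of rendering fixation densities with Patrick Wied's
// heatmap.js (current v2 API; the 2012 script's API may have differed).
import h337 from 'heatmap.js'; // assumes the npm package; a <script> tag also works

interface FixationPx { x: number; y: number; value: number; } // container pixels

function renderHeatmap(container: HTMLElement, fixations: FixationPx[]): void {
  const heatmap = h337.create({ container });
  const max = Math.max(1, ...fixations.map((f) => f.value));
  heatmap.setData({ min: 0, max, data: fixations });
}
```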

HowTo:  The solution we describe here only works with Google Earth as it requires the Google Earth API.  

Summary of the Process (a code sketch follows the list):
1)   During the experiment, the eye-tracker records each fixation as an X,Y tuple together with a very accurate timestamp (this is important).
2)   During the experiment, a custom script records the position of the Google Earth ‘camera’ which is producing the view on screen. It polls the Google Earth API every 200 milliseconds or so and every entry is timestamped.
3)   After the experiment, on a webpage using the Google Earth API, we reproduce exactly the same view displayed during the exercise by feeding Google Earth the logs from [2].
4)   Using the timestamp of each log entry, we look up the eye-tracking logs to find out if there was a fixation recorded at exactly that time.
5)   We then use the X,Y screen coordinates to poll Google Earth and transform those coordinates into lat/longs. In effect we ‘cast’ a ray from a specific location on screen onto the virtual globe.
6)   Using the API we record the lat/long from the end of the cast ray and put it into a database (see diagram below).
7)   This data is processed to render the heat-map.
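Here is a hedged TypeScript sketch of steps 3 to 6. It uses real Google Earth plugin API calls (createCamera, setAbstractView, GEView.hitTest) but simplifies the replay: a production version would wait for each view to finish rendering (e.g. the plugin's frameend event) before ray-casting, and the log formats and timestamp tolerance here are invented:

```typescript
declare const ge: any; // Google Earth plugin instance

interface CameraLog { t: number; lat: number; lon: number; alt: number; heading: number; tilt: number; }
interface FixationLog { t: number; x: number; y: number; } // screen pixels

function replayAndCast(
  cameras: CameraLog[],
  fixations: FixationLog[],
  toleranceMs = 100, // invented nearest-timestamp join tolerance
): { lat: number; lon: number }[] {
  ge.getOptions().setFlyToSpeed(ge.SPEED_TELEPORT); // jump, don't fly, between views
  const out: { lat: number; lon: number }[] = [];
  for (const c of cameras) {
    // Step 3: reproduce the view that was on screen at time c.t.
    const cam = ge.createCamera('');
    cam.setLatitude(c.lat);
    cam.setLongitude(c.lon);
    cam.setAltitude(c.alt);
    cam.setHeading(c.heading);
    cam.setTilt(c.tilt);
    ge.getView().setAbstractView(cam);
    // (Simplification: really we should wait here for the view to settle.)

    // Step 4: find a fixation recorded at (almost) exactly this time.
    const fix = fixations.find((f) => Math.abs(f.t - c.t) <= toleranceMs);
    if (!fix) continue;

    // Steps 5-6: cast a ray from the screen X,Y onto the terrain to get a lat/long.
    const hit = ge.getView().hitTest(
      fix.x, ge.UNITS_PIXELS, fix.y, ge.UNITS_PIXELS, ge.HIT_TEST_TERRAIN,
    );
    if (hit) out.push({ lat: hit.getLatitude(), lon: hit.getLongitude() });
  }
  return out; // ready to be written to the database and rendered as a heat-map
}
```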

There are obviously far more technical details than this, but for the moment we thought we'd just get the idea out.

Problems with Eye-Tracking Maps:  There are a couple of inherent issues to do with eye-tracking virtual globe maps that have already occurred to us:

  • High Altitude Zooms:  At both high and low altitude the fixation is captured as a point, but at high altitude the user may be taking in a much larger feature.  A circle polygon would better represent the fixation at altitude.
  • Tilt inaccuracy: In a situation where the user is highly tilted, the inherent inaccuracies of the eye-tracking kit get amplified - a small change in eye position can represent a large variation in distance on the ground. 
In the particular case study we've discussed today we don't think either of these is a particular issue, but they need to be borne in mind in other situations.
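Some back-of-envelope numbers make both caveats concrete. Assuming a typical eye-tracker angular accuracy of about half a degree (an assumption, not a measured figure for our kit):

```typescript
// Back-of-envelope sketch of both caveats; the angular error is an assumed
// eye-tracker accuracy (typical kit is often quoted around 0.5-1 degree).
const EYE_ERROR_RAD = (0.5 * Math.PI) / 180;

// High-altitude zooms: the ground footprint of a fixation grows with altitude,
// so a circle of this radius represents it better than a point.
function fixationRadiusM(altitudeM: number): number {
  return altitudeM * Math.tan(EYE_ERROR_RAD);
}

// Tilt inaccuracy: viewing the ground obliquely stretches the same angular
// error along the view direction by roughly 1/cos^2 of the angle from vertical.
function tiltedErrorM(altitudeM: number, tiltDeg: number): number {
  const tilt = (tiltDeg * Math.PI) / 180;
  return (altitudeM * EYE_ERROR_RAD) / Math.cos(tilt) ** 2;
}
// e.g. at 5,000 m: ~44 m looking straight down, but ~175 m at 60 degrees tilt.
```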



Wednesday, July 4, 2012

Google Research Update & Elevation Profiler

See Labels > Google Research Project in the right column to see earlier posts about this project.

Work on our Google Project is going well:

- On the content front, after feedback from team members and talk aloud 'hallway' testing, the final version of the Google Earth tours needed for the tests has been produced.
- On the testing software front, Paolo has got the system working where we record both eye-tracking of subjects using our tours and events in the Google Earth API (e.g. tracking camera movements by the subject).  We are exploring the idea of combining the results from both, but at the moment we cannot combine them directly for analysis.

Google Earth Tour Sound: We have identified an incompatibility between Macs and PCs in playing sound from tours.  When he has some time, Paolo has promised to write up a HowTo post here on a workaround.

Elevation Profile in Google Earth:  As part of the work I've been thinking about how to use cross sections (elevation profiles) to visualize topography.  I thought I'd write up some features of the elevation profiler I've explored that make it quick and easy to show elevation.

Problem:  I want to show the user the topography of a river valley.  In areas of dramatic topography such as the Grand Canyon you can just tilt the Google Earth camera and the user gets an idea of what the valley is like.  However, in our study area the valley landscape I want to show is much more subtle; the slopes need to be exaggerated to show up.  Also, when you are considering topography across a large distance (say the elevation profile of the Amazon), the topography will naturally be subtle compared to the length of the feature.

Solution:  Use the elevation profile feature of paths (lines) to exaggerate the topography.

HowTo:  

1] Find the area you want to draw a cross section across.  Use the path tool (it's the button on Google Earth's top toolbar whose icon is a line with blobs) to draw a simple line across part of the feature, clicking ONLY at the start and the end.



2] Once you have named and saved your line, find it in the Places column to the left.  Right click it and select 'Show Elevation Profile'.  An elevation cross section will appear at the bottom of the screen.

3] Move the mouse within the elevation profile and a vertical line and red arrow will appear to show the height at any point on your line.

4] A nice trick is to draw a deliberately short line section and then lengthen it.  To do this, right click the line in the Places column > Properties.  Now find the end of the line you marked (a blue or red square) and click and drag it.  As the line lengthens the profile dynamically grows.  This allows for all kinds of teaching questions; e.g. in my case I could start with the line going down just one slope and ask students to predict the rest of the profile on paper.  You then complete the profile by dragging the line out, the true cross section is revealed, and you can hold a competition for who drew the best profile.

Teaching Point:  It's important to remind the students that what they're looking at is an exaggerated section, otherwise they may get the idea that the topography is as dramatic as it looks.  I would do this by tilting down to view the line in Google Earth and asking the students why the topography doesn't look the same as the profile.  You could also ask them to calculate what the exaggeration is by reading off values from the vertical and horizontal axes (see the worked sketch below).
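For tutors who want a worked example of that calculation, here is a hypothetical sketch (the numbers are invented):

```typescript
// Worked example of the suggested exercise: vertical exaggeration is the
// ratio of the horizontal scale to the vertical scale as drawn.
function verticalExaggeration(
  elevRangeM: number, profileHeightCm: number, // read off the y-axis
  distRangeM: number, profileWidthCm: number,  // read off the x-axis
): number {
  const vScale = elevRangeM / profileHeightCm; // metres per cm vertically
  const hScale = distRangeM / profileWidthCm;  // metres per cm horizontally
  return hScale / vScale;
}
// e.g. 300 m of relief drawn 3 cm tall over 10 km drawn 10 cm wide:
// verticalExaggeration(300, 3, 10_000, 10) === 10 (a x10 exaggeration)
```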