
Monday, January 9, 2017

Three Geo-Animations for Atlas Tours (Google Earth Tours, Esri Story Maps)

Just under a year ago I published a post about three types of Atlas tour (1).  I've been thinking about the topic over the last year as I've been writing papers, so I thought I should develop that post in more detail.  I also discussed this in my recent Google Education talk.

Types:
Just as you can have different types of PowerPoint presentation (fieldwork briefing, photo slide show, talk, etc.), you can have different types of Atlas Tour.  Esri Story Maps (ESM) have identified a number of different types which emphasise text narrative; I believe most Atlas Tours should be narrated using audio, so I'm not going to discuss those.  My sorting works on two axes:

  • 3D or 2D:  ATs can be used to discuss both landscape (3D) and map (2D) views.  
  • Realistic vs symbolised base map:  realistic imagery works well when illustrating landscape, but symbolising is endlessly useful in paring a map down to just the elements needed (e.g. temperature and wind but nothing else, bottom right below)  
which produces four groups.  These are illustrated in an image grid below (2):




I give examples of the four groups in this video section of my recent Google Education talk.

Geo-Animations
Within an Atlas Tour you can use different types of animation that are highly suited to the format.  I've identified three which I think are particularly useful, and to illustrate them I've prepared a storyboard of an Atlas Tour discussing the famous Snow cholera map:

1] Map Sequence:  using annotations or revealing layers of a map one by one (a build animation) in order to explain a complex map.  The sequence above illustrates the build animation, with street names added and then the pump.  It becomes much more important on complex maps.

The audio narrative script is found under each image.


2] Time Animation:  showing a sequence of maps to illustrate change over time.  This is well discussed in the cartographic literature.  Note that I've invented the data; the spread was not actually recorded.


3] Avatar animation:  flying down from a symbolised map view into a 'human' view.  This is an original idea of mine and IMHO is very powerful: you can illustrate spatial relationships and then show what they look like in real life, in this case on the street.



These aren't the only animations you can use, and you can certainly usefully link out to static imagery and non-map video from within an Atlas Tour.  However, these three are all very spatial and so worth highlighting above other formats in an Atlas Tour.



1] At the time I called them Google Earth Tours, but to include people interested in using Esri Story Maps I now use 'Atlas tours' as an encompassing term.

Wednesday, November 16, 2016

Fieldscapes: A new idea for Virtual Fieldtrips

A while back I wrote a post about Google Expeditions.  Since then I've come across a couple of colleagues working on a format that has a lot of similar, interesting features.  I presented these to Geography school teachers on Wednesday night at a 'TeachMeet' run at the RGS (thanks to the RGS for hospitality and to Alan Parkinson for standing in to compere).  Google were there promoting their Expeditions in schools.  I couldn't post my slides for copyright reasons, so I thought I'd write some notes.

Basic Idea:

Much as in a third-person shooter game, you enter a fieldtrip 'world' and explore it.  You can find markers which, when clicked, bring up web materials (related images, videos, multiple choice questions).  The environment can be customised by the teacher, allowing them to put instructions, self-assessment questions and links 'in world'.  This means you can re-use the environment for different levels of students.


The video above gives you a nice taste.  I am not convinced by the 'learning fieldwork skills' functionality, but the other features it shows are very interesting.

As an aside, Declan De Poar came up with a similar idea for use in Google Earth that I remember him showing me.


Who is doing this?

Daden are a commercial company already working with the Open University on this; Fieldscapes is their project.  A colleague of mine at Hertfordshire (Phil Porter) came up with a similar idea.

Thursday, October 20, 2016

New Paper: How to make an Excellent Google Earth tour

We (myself and Artemis Skarlatidou) have just submitted a paper to a cartographic journal about a successful experiment we did on users' understanding of Google Earth Tours.  The work produced two rules of thumb to consider when making Google Earth tours, so I thought I'd blog about it.  Note that the title of this post isn't how to make a 'cool' Google Earth tour that grabs users' attention; this is about how to use them as an effective communication tool.

Why should I care about Google Earth tours?
Before we get to the two best practices, it's useful to think about the medium we're discussing.  Is it worth using?  My answer is that Google Earth tours are common on the web, and the wider generic group of Google Earth-like animations (Atlas tours) is everywhere, e.g.:
- TV (e.g. weather forecasts)
- The web (e.g. National Geographic)
- Mobile satnav apps

As an example of Atlas tours in satnav apps: both Google Maps and Apple Maps, in driving-directions mode, will zoom into tricky road junctions as you approach them, then zoom out on straight road sections to show you the wider view.

So you should consider creating a Google Earth tour (or Atlas tour if you prefer) as a way to tell your spatial story.

Best practice 1: use high paths
If you are producing a tour with two or more low points, you get to choose how the camera moves between the low views.  Users' mental map of the study area will be better when your tour follows a 'Rocket' path (1), where there is a midpoint from which you can see both the start and the end of your tour.  This video explains the point and shows how to achieve it technically in Google Earth:
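As a rough sketch of the geometry involved (my own illustration, not code from the paper), the altitude needed at the midpoint of a 'Rocket' path can be estimated from the distance between the two low points and the camera's field of view; the 60-degree FOV is an assumed, typical value:

```python
import math

def rocket_path_midpoint(d_metres, fov_degrees=60.0):
    """Estimate the altitude of the midpoint of a 'Rocket' path.

    d_metres: ground distance between the tour's start and end points.
    fov_degrees: horizontal field of view of the virtual camera
    (60 degrees is an assumed, typical value).

    Looking straight down from the midpoint, each endpoint is d/2 away,
    so both are visible when altitude >= (d/2) / tan(fov/2).
    """
    half_fov = math.radians(fov_degrees) / 2.0
    return (d_metres / 2.0) / math.tan(half_fov)

# Endpoints 10 km apart need a midpoint camera at roughly 8.7 km altitude.
print(round(rocket_path_midpoint(10_000)))
```

The point of the calculation is simply that the further apart your two low views are, the higher the midpoint must fly for users to see both at once.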




Best practice 2: use of speed
We haven't explicitly proved it, but an animation speed of 1 second for any camera motion is a good rule of thumb (2).  If the tour is more visually complex, you may want to slow it down.  Reasons to take more time:
- You are flying through a complex 3D cityscape
- There are lots of elements on screen (points, lines, areas) that you want users to understand
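In KML, camera motions in a tour are expressed as `gx:FlyTo` elements whose `gx:duration` controls the animation speed. A hedged sketch of applying the 1-second rule when generating tour KML (the element names are standard KML `gx:` touring extensions; the helper function itself is my own illustration, and the fragment would sit inside a `gx:Tour`/`gx:Playlist`):

```python
def fly_to(lat, lon, range_m, duration_s=1.0):
    """Build a <gx:FlyTo> KML fragment with an explicit duration.

    duration_s defaults to the 1-second rule of thumb; pass a larger
    value for visually complex scenes (3D cityscapes, busy overlays).
    """
    return (
        f"<gx:FlyTo>"
        f"<gx:duration>{duration_s}</gx:duration>"
        f"<gx:flyToMode>smooth</gx:flyToMode>"
        f"<LookAt>"
        f"<longitude>{lon}</longitude>"
        f"<latitude>{lat}</latitude>"
        f"<range>{range_m}</range>"
        f"</LookAt>"
        f"</gx:FlyTo>"
    )

# A slower, 2.5-second move for a visually busy scene over London.
print(fly_to(51.5, -0.1, 2000, duration_s=2.5))
```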

As an example, these are some of the experimental Google Earth tours; only the 'low, fast' condition really troubled the users in the experiment.



Conclusion:
Atlas tours are very common because they are an effective medium for communicating a spatial story or data.  Google Earth is one of a suite of software tools that can be used to produce Atlas tours; I think the principles described here will apply whatever software is used.

I read all the studies I could find in 2011 and produced an earlier paper which discussed these and 17 other best practices for producing Google Earth tours.  This post is the shorter blog version of the paper.


Notes
1] In the paper, this is called the high path.  Less memorable but more professional sounding.

2] Our experiment ran at speeds slower than this, and users had little problem building up a mental map of the study area.

Saturday, March 19, 2016

Tracking students in Google Earth

Our paper 'Footprints in the sky: using student track logs from a 'bird's eye view' virtual field trip to enhance learning' has been published.  It describes how students were tracked zooming and panning around Google Earth on a virtual field trip.  Their movements were recorded and their visual attention inferred as a paint spray map: high attention = hot colors, track = blue line.

A paint spray map of 7 students (1-7) performing a search task in Google Earth.
Background imagery has been removed to aid clarity.

How it works
The idea is to track students performing a search task; in our experiment they looked for evidence of an ancient lake, now dried up, in a study area.  Their 3D track as they zoom and pan around Google Earth is recorded, and their visual attention is mapped as if a can of paint were spraying: if they zoom in to check an area in close-up, Visual Attention (VA) builds up quickly; if they zoom out, VA still builds up but is spread over a much larger area.
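A minimal sketch of the paint-spray idea (my own reconstruction, not the paper's code, with all names and the 60-degree field of view assumed): each camera sample deposits one unit of VA spread evenly over the ground footprint visible at that altitude, so zoomed-in views concentrate VA over a few cells while zoomed-out views spread the same unit thinly over many:

```python
import math
from collections import defaultdict

def accumulate_va(samples, cell_size=100.0, fov_degrees=60.0):
    """Accumulate Visual Attention (VA) on a grid of ground cells.

    samples: iterable of (x, y, altitude) camera positions in metres,
    where (x, y) is the ground point under the camera.  Each sample
    sprays one unit of VA evenly over the square of cells covering the
    camera's circular footprint, so VA density falls as the footprint
    grows with altitude.
    """
    grid = defaultdict(float)
    half_fov = math.radians(fov_degrees) / 2.0
    for x, y, alt in samples:
        radius = alt * math.tan(half_fov)            # visible footprint radius
        cells_across = max(1, int(radius / cell_size))
        density = 1.0 / (2 * cells_across + 1) ** 2  # one unit spread evenly
        for i in range(-cells_across, cells_across + 1):
            for j in range(-cells_across, cells_across + 1):
                cell = (int(x // cell_size) + i, int(y // cell_size) + j)
                grid[cell] += density
    return grid

# A low sample concentrates VA; a high one spreads it over many cells.
low = accumulate_va([(0, 0, 100)])
high = accumulate_va([(0, 0, 2000)])
print(len(low), len(high))
```

Rendering `grid` with hot colours for high values and cool colours for low values gives the paint-spray maps shown in the small multiples.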

Mapping the accumulation of VA along with the track projected onto the ground (blue line) shows at a glance where the students have searched and in what detail.  The small multiples above show data from 7 students who were given 3 set areas to investigate in further detail (target/guide polygons).  This was done in Google Earth, but to aid visibility the Google Earth base map has been removed.  From the maps we can infer what the students were doing, e.g. student g5 didn't appear to visit the top-right guide polygon at all and students g1, g3 and g6 only gave it a cursory look.  By comparison, students g2 and g4 explored it much more thoroughly.

How it could be used
The idea would be to give the maps to students to help them assess how they did on the exercise.  In addition, the VA from all students can be collated, which the tutor can use to see whether the activity worked well or not (bottom right of the multiples above).  In this case the summed VA shows that students examined the areas they were supposed to, that is, within the target/guide polygons.

The system only works with a zoom-and-pan navigation system where the zoom function is needed to explore properly.  If the exercise can be solved just by panning, a paint-spray map won't show much variation in VA and interpretation would be difficult to impossible.


Other Related Work
Learning Analytics is a growing area of investigation; there's lots of work tracking students' logs in VLEs (LMSs in the US) to understand their learning.  There has also been use of tracking to see where avatars have moved in virtual environments, visualising it as a 'residence time' map similar to the VA maps above.  However, this is the first attempt we've come across where movement in a 3D virtual environment via zoom and pan has been tracked and visualised.




Thursday, June 19, 2014

Are men better than women at navigating in virtual 3D spaces?

I have a PhD student, Craig Allison, who is looking at spatial understanding in maps and related 3D spaces.  He entered and won the faculty round of the 'Three Minute Thesis' competition, a public-speaking contest to see who can present their work best in three minutes with one PowerPoint slide.  This is his talk at the final of the event, competing with other PhD students from around the University.

Navigation in 3D Spaces: He covers the importance of designing 3D spaces well to help users navigate them, and the gender differences he has found in his experiments.  It's especially relevant to anyone designing virtual field trips using tools such as Street View and/or SketchUp.





Sad that I couldn't make the talk to support him, great work Craig!

I've marked the location of the Psychology building he discusses if anyone wants a look.

Wednesday, November 27, 2013

Footprints in the Sky: Tracking Students on Virtual Fieldtrips

Virtual Field Trips (VFTs) can be used to go to places that are impossible to visit (e.g. the Mid-Atlantic Ridge), or act as a replacement for students unable to physically attend a field trip.  An example of one produced by colleagues at the Open University is previewed in the video below (source):



VFTs have been produced using 3D platforms such as Google Earth but it is only recently that developments in software and hardware have meant that the technology is robust enough to use in everyday teaching.  

Tracking Students:  One idea we had in our Google research project was to see if tracking students flying around VFTs could be used to inform tutors and students about students' learning.  This topic isn't well covered in the literature, so it is worth investigating.  A paper Muki, Paolo Viterbo and I have just submitted to a journal describes our work in this area.  We collected 4D data (3D plus time) using the Google Earth API as students navigated around to complete an educational search task.  In some VFTs students are limited to walking, but in ours they had access to zoom and pan.

Two Visualisations:  In the paper we describe two visualisations which help users (either tutors or students) make sense of the complex 4D tracking data.  One is a static graphic (not covered today); the other is an animation:

The animation links an altitude-vs-distance graph with a 3D view of the track in space using Google Earth's cross-section functionality.  We think these visualisations are quick and effective ways to evaluate students' search activities.
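The graph half of the animation can be reconstructed directly from the raw track log. A sketch of the idea (my own illustration, assuming a simple list of (x, y, altitude) samples in metres rather than the paper's actual log format):

```python
import math

def altitude_vs_distance(track):
    """Turn a 3D track log into (distance_along_path, altitude) pairs.

    track: list of (x, y, altitude) tuples in metres.  Distance is
    accumulated along the ground projection of the path, so episodes of
    zooming in and out show up as dips and peaks in altitude plotted
    against a monotonically increasing x-axis.
    """
    pairs = []
    dist = 0.0
    for i, (x, y, alt) in enumerate(track):
        if i > 0:
            px, py, _ = track[i - 1]
            dist += math.hypot(x - px, y - py)
        pairs.append((dist, alt))
    return pairs

# A student flies in high, dips down to inspect a target, climbs back
# out, then dips again at a second target further along the path.
track = [(0, 0, 5000), (1000, 0, 800), (1000, 0, 5000), (3000, 0, 600)]
print(altitude_vs_distance(track))
```

Plotting these pairs gives the bottom panel of the animation; the hairline then just indexes back into the same list to drive the 3D view.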

Experiment Summary:  In the experiment students:
1.     Viewed a Google Earth tour which explained how to identify paleo-geographical features (lake banks surrounding a lake long since dried up). 
2.     They were then set a task searching for their own example in a defined study area.  An important feature of the task was that students could not complete their search without zooming in to check characteristics in more detail.  Their route through 3D space was tracked and saved to a server.
3.     They marked their answer on the map.



Visualised data: The simple 3D path in space looks like spaghetti thrown into the air (top section of the diagram above); it's difficult to interpret.  However, by plotting altitude against distance along the path in a linked graph (bottom part), the actions of the student zooming in and out on targets can be clearly seen.  In the main view (top of image) the red arrow shows the camera location and the hairline on the graph (bottom) shows the corresponding point on the graph.  You can control the hairline to explore the path; this page links to a sample KML file and the YouTube clip explains how to set it up and what it shows in more detail.


What Does it Show? From interacting with this visualization several aspects of the students’ performance can be easily gauged:
·      Did the student zoom in on sensible targets (i.e. the ‘answer’ area and other areas that needed checking out)?
·      Did the student get disorientated (stray outside the yellow study area box or spend an overly long time in one area)?
·      Were they thorough in their search or did they just do the bare minimum (did they zoom in on a number of sensible locations, just a few, or did they fail to zoom in at all)?
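The three questions above could in principle be scored automatically from the track log. A rough sketch of the idea (my own illustration; the function name, rectangle representation and the 1000 m 'zoomed in' threshold are all assumptions, not the paper's method):

```python
def gauge_search(track, study_box, answer_box, zoom_alt=1000.0):
    """Score a student's 3D track against three simple criteria.

    track: list of (x, y, altitude) samples in metres.
    study_box / answer_box: (xmin, ymin, xmax, ymax) rectangles for the
    study area and the 'answer' area.
    zoom_alt: altitude below which the camera counts as 'zoomed in'
    (an assumed threshold).
    """
    def inside(x, y, box):
        xmin, ymin, xmax, ymax = box
        return xmin <= x <= xmax and ymin <= y <= ymax

    zoomed = [(x, y) for x, y, alt in track if alt < zoom_alt]
    return {
        "zoomed_on_answer": any(inside(x, y, answer_box) for x, y in zoomed),
        "strayed_outside": any(not inside(x, y, study_box) for x, y, _ in track),
        "zoom_count": len(zoomed),
    }

# One high overview sample, then two low inspection dips.
track = [(0, 0, 5000), (500, 500, 400), (2000, 2000, 300)]
print(gauge_search(track, (0, 0, 3000, 3000), (400, 400, 600, 600)))
```

In practice the visualisation lets a human answer these questions at a glance; the sketch just shows they are also mechanically checkable.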

Possible Uses:  This technique could be applied to a number of virtual field trip situations.  The case study we’ve already looked at represents a physical geography/earth science application.  It also could be used for:

Human geography: e.g. if students are taught that poorer neighbourhoods are likely to be further from the centre of a city, you could ask them to identify poor neighbourhoods in a sample city.  Tracking a successful search would show students navigating to sample sites around the edge of the city and then zooming into Street View to check if they were right or not.

Student-created maps:  Students are first tasked with identifying volcanoes in a country.  They mark three answers on a class shared map in the first stage.  In the second stage, they assess their peers' work and are tracked zooming in on each other's placemarks.  You could see how good their performance was in the second stage from the tracking animation, e.g. did they check out suggestions in enough detail.  IMHO this last example has the advantage of representing deeper learning: it challenges students to think critically about each other's work.

Ethics:  Learning Analytics is a powerful new tool for teaching; used carefully, it has huge potential to assist students and tutors.  However, it also raises real teaching issues, such as: will students react well to the extra kind of feedback they can now receive?  Will institutions use it to measure tutors' performance in a confrontational manner?  IMHO we need to approach this new tool with an open, student-focused frame of mind.
  

Thursday, November 1, 2012

Eye-Tracking Zoomable Maps


This post was joint authored by Paolo Battino and Rich Treves.

One of the evaluation techniques we said we would employ in our Google Research Project (links to search query) is eye-tracking.  Eye-tracking software is usually designed to record the position of your gaze on the screen, assuming the content of the screen only changes in a predictable manner.  This means that current eye-tracking software is good for understanding actions on a web page, e.g. did the user spend more time looking at the side menu, header or main content?  This is because the screen is divided into static areas and the time spent looking at each area can be easily calculated.

Problem with Eye-Tracking: Unfortunately, this does not work when the content is a map (or a virtual globe) and you want to keep track of the geographic location observed by the user.  In this situation, the XY screen coordinates recorded by the eye-tracker do not map directly onto lat/long coordinates, because the user can zoom, pan and tilt the map 'camera'.

Solution:  We have developed a solution entirely based on software developed for this project.  See example below:  
    
Heat maps showing density of eye fixations on a Google Earth map.  
Reading down, the screen shots represent zooming in.  
Red = High density, Blue = Low

Subjects were tested in a mock-up of an educational situation.  They were shown (in a Google Earth tour) how to identify a special type of valley and then asked to find one in a given area.  The heat maps show where on the surface of Google Earth the user was looking during the experiment, independent of zoom level/tilt/pan position.

Heat Map Script: The heat-map script, developed by Patrick Wied, is particularly efficient in showing “the big picture” (top) but also shows dynamic rendering when the user zooms in.  The screen shots themselves are from a Google Map mashup with all the usual zoom and pan controls.

HowTo:  The solution we describe here only works with Google Earth as it requires the Google Earth API.  

Summary of the Process:
1)   During the experiment, the eye-tracker records each fixation as an X,Y tuple together with a very accurate timestamp (this is important).
2)   During the experiment, a custom script records the position of the Google Earth 'camera' which is producing the view on screen.  It polls the Google Earth API every 200 milliseconds or so, and every entry is timestamped.
3)   After the experiment, on a web page using the Google Earth API, we reproduce exactly the same views displayed during the exercise by feeding Google Earth the logs from [2].
4)   Using the timestamp of each log entry, we look up the eye-tracking logs to find out if there was a fixation recorded at exactly that time.
5)   We then use the X,Y screen coordinates to poll Google Earth and transform those coordinates into lat/longs.  In effect we 'cast' a ray from a specific location on screen onto the virtual globe.
6)   Using the API, we record the lat/long from the end of the cast ray and put it into a database (see diagram below).
7)   This data is processed to render the heat map.
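The core of steps 2 and 4 is aligning two timestamped logs. A minimal sketch of that alignment (my own reconstruction; names, the tolerance, and the camera-state placeholders are assumptions, and the actual screen-to-ground ray cast of step 5 happens inside the Google Earth API, so it is not shown):

```python
import bisect

def match_fixations(camera_log, fixations, tolerance_ms=100):
    """Pair each eye-tracker fixation with the nearest camera-log entry.

    camera_log: time-sorted list of (timestamp_ms, camera_state) from
    polling the Google Earth API (~every 200 ms in our setup).
    fixations: time-sorted list of (timestamp_ms, x, y) screen fixations.
    Returns (camera_state, x, y) triples ready for the screen-to-ground
    ray cast; fixations with no camera sample close enough are dropped.
    """
    times = [t for t, _ in camera_log]
    matched = []
    for t, x, y in fixations:
        i = bisect.bisect_left(times, t)
        # consider the camera samples either side of the fixation time
        best = min(
            (abs(times[j] - t), j) for j in (i - 1, i) if 0 <= j < len(times)
        )
        if best[0] <= tolerance_ms:
            matched.append((camera_log[best[1]][1], x, y))
    return matched

cams = [(0, "viewA"), (200, "viewB"), (400, "viewC")]
fixes = [(190, 512, 300), (950, 100, 100)]
# The second fixation falls long after the last camera sample, so only
# the first is matched (to the camera state recorded at t=200 ms).
print(match_fixations(cams, fixes))
```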

There are obviously far more technical details than this, but for the moment we thought we'd just get the idea out.

Problems with Eye-Tracking Maps:  There are a couple of inherent issues to do with eye-tracking virtual globe maps that have already occurred to us:

  • High-altitude zooms:  At both high and low altitude a fixation is captured as a point, but at high altitude the user may be looking at a larger feature.  A circle polygon would better represent a fixation at altitude.
  • Tilt inaccuracy: When the view is highly tilted, the inherent inaccuracies of the eye-tracking kit are amplified: a small change in eye position can represent a large variation in distance on the ground. 
In the particular case study we've discussed today we don't think either of these is a particular issue, but they need to be borne in mind in other situations.
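The first issue can be quantified: an eye-tracker's angular error projects onto the globe as a circle whose radius grows with camera altitude. A rough back-of-envelope sketch (my own arithmetic, assuming a camera looking straight down and a typical ~1-degree tracker accuracy; tilt makes the footprint larger still):

```python
import math

def fixation_footprint_radius(altitude_m, angular_error_deg=1.0):
    """Ground radius of a fixation's uncertainty circle at nadir.

    An angular error of ~1 degree (an assumed, typical eye-tracker
    accuracy) maps a point fixation to a ground circle of radius
    altitude * tan(error) when the camera looks straight down.
    """
    return altitude_m * math.tan(math.radians(angular_error_deg))

# The same 1-degree error covers ~17 m at 1 km altitude but ~1.7 km
# at 100 km, which is why point fixations mislead at high zooms.
print(round(fixation_footprint_radius(1_000)), round(fixation_footprint_radius(100_000)))
```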