Isotype Interactive

For a moment, I thought it was a coincidence. A few months ago I became fascinated by Isotype, the pictorial language created in the 1920s and 1930s. Within the span of a week, I found out that several people I work with actually had made the same discovery.

First I attended an impassioned lecture on ‘graphics with a cause’ by my colleague Yuri Engelhardt, who compared Otto Neurath, the father of Isotype, with Hans Rosling, a modern protagonist of the power of visualization. A couple of days later I discovered that Eugene Tjoa, with whom I collaborate on several projects, was planning to revitalize the work of Gerd Arntz, the man who designed most of the Isotype icons.

Of course it was not coincidental at all. Sooner or later everyone interested in data visualization stumbles upon Isotype and is captivated by its clean icons and its clear principles of design. Neurath’s guidelines for visualizing data are as valuable today as they were when he wrote them down in 1936.

For obvious reasons, interactivity is not part of the Isotype cookbook. That’s what makes Eugene’s plans to modernize original Isotype productions so exciting. He will update the statistics of visualizations designed by Gerd Arntz and enhance them by adding interactive features. In my opinion, the challenge of this great experiment will be to keep the interactive design as clean and effective as the original visual design. And although Neurath never knew about interactive media like the web, he did write about combinations of Isotype images in exhibitions. With a little imagination, this remark could be a good starting point for creating Isotype interactives:

Every picture has to give a new impulse to attention, to conscious thought, to a desire for deeper knowledge. Interest has to be the guide between one picture and another. But it is possible to overdo things. “Less is more.” The teaching effect will be greater, the memory will be clearer, when only a small number of good pictures has been given, every one different from the other, and at the same time every one supporting the other.

(International Picture Language: The First Rules of Isotype, 1936, p. 66)

Explanation first, then overview, zoom, and the rest of it

In reply to my previous post about the functions of interactivity, Eugene Tjoa advocates more attention to explanation in interactive infographics. I totally agree with him. Information visualization has a very fruitful influence on the design of infographics, but most infovis techniques are developed for specialists, not for a general audience. Furthermore, there are no generally accepted conventions (yet) about the interaction design of visualizations. And although recent research suggests that visual difficulties can stimulate engagement and active processing of information, I think it’s a good idea to at least initially give users of interactives some clues about what they are looking at and how they can play with it.

At the New York Times they call these clues the ‘annotation layer’ and they take them very seriously. At last year’s edition of the Eyeo Festival, Amanda Cox of The Times graphics department stated that “the annotation layer is the most important thing we do.” Without it, she explained, it’s like saying to your audience: “Here’s an interface, now go ahead and browse for the rest of your life.” If you missed her presentation, be sure to watch the video.

The BBC also often adds a layer of annotation to its interactives. Literally. A good example is their interactive map of fatal accidents on British roads between 1999 and 2008. When you open the page, the actual map is partly hidden behind an overlay with a text that explains what data is visualized and how it can be manipulated. After this ‘window’ is closed, smaller overlays point out the three ‘manipulation modalities’: you can enter your postcode, zoom in on the map, or specify a year.

But when it comes to explanation, even the BBC can’t compete with The LRA Crisis Tracker. The smooth design of this interactive map and timeline is somewhat at odds with its grim content. The website offers a live overview of the atrocities committed by the Lord’s Resistance Army in central Africa. Visitors can track the number of abductions, mutilations, and killings in real time.

Maybe it’s because of the gravity of the data that so much attention is paid to clarifying it. On your first visit you are automatically presented with an overlay containing a video that explains what the site and the data are about. After you close the video player, or on subsequent visits, the map and the timeline introduce themselves with a small animation that highlights the different interactive options and gives you a feel for the amount of data. It’s as if the designers adapted the famous visualization mantra to: Explanation first, then overview, zoom, filter, and details-on-demand. Unfortunately, the animation no longer runs as smoothly as it probably did when the site was launched. Sad but true, the likely cause is the growing number of incidents it has to display.
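The underlying pattern is simple enough to sketch in code. The snippet below is a generic illustration of this ‘explanation first’ idea – not the Crisis Tracker’s or the BBC’s actual code – and the element names and storage key are made up for the example:

```typescript
// A minimal sketch of the "explanation first" pattern (illustrative only):
// first-time visitors get an explanatory overlay before the visualization
// becomes interactive. All names below are hypothetical.

const INTRO_SEEN_KEY = "intro-seen"; // hypothetical localStorage key

// Stand-ins for the real drawing and interaction code.
function renderOverview(): void {
  console.log("render the map and timeline (overview first)");
}
function enableInteraction(): void {
  console.log("enable zoom, filter and details-on-demand");
}

function showIntroOverlay(onClose: () => void): void {
  const overlay = document.createElement("div");
  overlay.className = "intro-overlay";
  overlay.innerHTML = `
    <p>This map shows reported incidents over time. Use the timeline to
       filter by date and click a marker for details.</p>
    <button class="close">Start exploring</button>`;
  overlay.querySelector<HTMLButtonElement>(".close")!
    .addEventListener("click", () => {
      overlay.remove();
      onClose();
    });
  document.body.appendChild(overlay);
}

function start(): void {
  renderOverview();
  if (localStorage.getItem(INTRO_SEEN_KEY)) {
    // returning visitor: skip the explanation
    enableInteraction();
  } else {
    showIntroOverlay(() => {
      localStorage.setItem(INTRO_SEEN_KEY, "1");
      enableInteraction();
    });
  }
}

start();
```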

What’s the purpose of interactive features?

My research is about the effectiveness of interactives. One thing I’m interested in is the influence of the type and number of interactive features on the ability of an infographic to transfer information. But before you can study this influence, you first have to define what these ‘interactive features’ actually are. Different typologies are possible. For instance, one could look at the form of interactive elements, like buttons and sliders, but in my model of interactivity these are just images or representational modalities. I’m more interested in the functions of these elements, or what I’ve called their ‘manipulation modalities’, like browsing, showing, hiding, dragging, zooming, filtering, or adding data.
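To make that distinction concrete, here is a small illustrative sketch – the type names are mine, not an established model – of how an interactive feature could be described by both its form (representational modality) and its functions (manipulation modalities):

```typescript
// Illustrative sketch: an interactive feature has a form (what it looks
// like) and one or more functions (what the user can do with it).
// The names and examples are hypothetical.

type Form = "button" | "slider" | "map" | "text-field" | "timeline";

type ManipulationModality =
  | "browse"
  | "show"
  | "hide"
  | "drag"
  | "zoom"
  | "filter"
  | "add-data";

interface InteractiveFeature {
  label: string;
  form: Form;                        // representational modality
  functions: ManipulationModality[]; // manipulation modalities
}

// Example coding of the BBC road-accident map described earlier.
const postcodeSearch: InteractiveFeature = {
  label: "Find accidents near your postcode",
  form: "text-field",
  functions: ["filter", "zoom"],
};

const yearSelector: InteractiveFeature = {
  label: "Show a single year",
  form: "slider",
  functions: ["filter"],
};

console.log(postcodeSearch, yearSelector);
```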

Of course, researchers have categorized the functions of interactive features before. Probably the best-known classification is the one Ben Shneiderman proposed in The Eyes Have It, his famous article about the Visual Information Seeking Mantra: overview, zoom, filter, details-on-demand, relate, history, and extract. Recently I found a taxonomy of interaction techniques by Yi, Kang, Stasko, and Jacko. Although their typology looks like the ones above, it actually operates at a higher level. The following categories of interaction aren’t based upon tasks or functions, but upon the intent of the user: with what purpose does he or she perform a certain interactive operation?

  • Select: mark something as interesting
  • Explore: show me something else
  • Reconfigure: show me a different arrangement
  • Encode: show me a different representation
  • Abstract/Elaborate: show me more or less detail
  • Filter: show me something conditionally
  • Connect: show me related items

The typology is based on a review of professional information visualization articles and tools and clearly focuses on the visualization of data. It would be interesting to test its usefulness with regard to interactive infographics in general, for example by analyzing a corpus of popular interactives on the web. According to the authors, the categories are not exhaustive: “Some techniques are difficult to classify and do not quite fit into any one of the categories.” A quick try on What employees say, an interactive about Fortune’s annual ranking of the best companies to work for, created for CNN by Infographics.com, suggests that the opposite is true as well: in some cases a single click performs several of the functions from the list above. Whether that says more about the typology or about the interactive, I’m not yet sure.
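As a sketch of how such a corpus analysis could be set up – the coded examples below are hypothetical, not results of an actual study – the seven intents can be treated as a closed set of labels and each interactive operation tagged with one or more of them, which makes multi-intent clicks easy to spot:

```typescript
// Sketch of coding interactive operations with the Yi, Kang, Stasko and
// Jacko intent categories. The corpus entries and tags are hypothetical
// examples for illustration only.

type Intent =
  | "select"
  | "explore"
  | "reconfigure"
  | "encode"
  | "abstract-elaborate"
  | "filter"
  | "connect";

interface CodedOperation {
  interactive: string; // which interactive was analysed
  operation: string;   // what the user actually did
  intents: Intent[];   // one or more intents served by that operation
}

const corpus: CodedOperation[] = [
  {
    interactive: "What employees say",
    operation: "click a company in the ranking",
    // a single click that both marks an item and shows more detail
    intents: ["select", "abstract-elaborate"],
  },
  {
    interactive: "BBC road accident map",
    operation: "enter a postcode",
    intents: ["filter"],
  },
];

// Operations that serve more than one intent at once:
const multiIntent = corpus.filter((op) => op.intents.length > 1);
console.log(multiIntent.map((op) => op.operation));
```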

Not-so-interactive slideshow

In their article Narrative Visualization: Telling Stories with Data (pdf) – which I found thanks to this must-see interactive movie about data journalism – Edward Segel and Jeffrey Heer define seven basic genres of narrative visualization (magazine style, annotated chart, partitioned poster, flow chart, comic strip, slideshow, and film/video/animation) and three schemas that are widely used to tell stories with (visual) data:

  • The Martini Glass structure: a linear narrative that only allows user interaction when the story is finished.
  • The Drill-Down Story: a presentation that lets the user dictate what stories are told and when, without a prescribed ordering.
  • The Interactive Slideshow: a regular slideshow that incorporates interaction on its slides.

That last model was obviously what The Guardian had in mind when it decided to create an interactive about the trapped miners in Chile. A good idea, but the implementation is a bit sloppy. For example, it’s easy to miss the tiny ‘next’ button at the bottom because the slideshow hardly fits on a laptop screen. The slides are heavily loaded with text but lack headings to guide you through the story. And isn’t a ‘back’ button a basic requirement for an interactive slideshow?
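To end on a constructive note, here is a minimal sketch of an interactive-slideshow skeleton – generic code, not The Guardian’s – with the basics the miners piece is missing: a heading per slide, a visible ‘next’ button, and a ‘back’ button. The slide contents are placeholders and the container id is an assumption:

```typescript
// Minimal interactive-slideshow skeleton: heading per slide plus visible
// back/next controls. Assumes a <div id="slideshow"> exists on the page;
// slide contents below are placeholders.

interface Slide {
  heading: string;
  html: string; // slide body; could embed its own interactive elements
}

const slides: Slide[] = [
  { heading: "The collapse", html: "<p>How the mine caved in…</p>" },
  { heading: "Life underground", html: "<p>69 days below the surface…</p>" },
  { heading: "The rescue", html: "<p>The capsule that brought them up…</p>" },
];

let current = 0;

function render(): void {
  const container = document.getElementById("slideshow")!;
  const slide = slides[current];
  container.innerHTML = `
    <h2>${slide.heading}</h2>
    ${slide.html}
    <nav>
      <button id="back" ${current === 0 ? "disabled" : ""}>Back</button>
      <span>${current + 1} / ${slides.length}</span>
      <button id="next" ${current === slides.length - 1 ? "disabled" : ""}>Next</button>
    </nav>`;
  document.getElementById("back")!.addEventListener("click", () => {
    current -= 1;
    render();
  });
  document.getElementById("next")!.addEventListener("click", () => {
    current += 1;
    render();
  });
}

render();
```

Re-rendering the whole slide on every click keeps the sketch short; a production version would obviously add transitions, deep linking, and keyboard navigation, but the point is that the navigation basics take very little effort to get right.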