By Robert Pratten, October 23rd, 2009

Watching Christy Dena’s excellent presentation yesterday (see the embedded video below) motivated me to publish the attempts I’ve been making to document transmedia storytelling.

The presentation identifies some key requirements for transmedia documentation:

  • indicate which part of the story is told by which media
  • indicate the timing of each element
  • indicate how the audience traverses the media (what’s the call to action?)
  • indicate what the audience actually sees and does
  • take account of the possibility for “non-linear traversal” through the story
  • provide continuity across developers (who may be working on different media assets)

Christy also references music notation and says that it would be nice to present a transmedia project in this way so that someone could see the beauty of it at a glance.

I’ve been looking at this approach myself and I’m not the first. I knew that Mike Figgis (who is a composer as well as a director) used a kind of music notation when working on Timecode to present and explain his ideas for how four stories would be told simultaneously in real time. And in fact I was delighted to see that he’s put his notes online!

So here’s my proposed solution. The breakthrough that came while watching Christy’s presentation was to separate the actual story narrative from the experience of it. Hence at the highest level we have two timelines: one for the story and one for the experience.

Transmedia notation

Taking this idea further, the media can be broken into separate timelines so that it’s possible to see which media is being used where.

Hence, at a very high level, it’s possible to see in the example above that the audience first encounters the story through an online game, which actually reveals the end of the narrative. During the game it looks like several mobile media and some internet video are used.

At a glance this does meet many of the documentation criteria, although of course it doesn’t reveal the detail or show how the media is traversed.

Experiencing the Media

I took the approach that progression of the experience (and hence the unlocking or revealing of media that tells another piece of the story) is via two controls:

  • Triggers
  • Dependencies

Hence, each stage or “state” of the experience is represented by a media asset that is unlocked by a trigger and made available to the audience participant if he/she meets the dependencies (age, location, time, network etc.).

Example triggers and dependencies might be:

  • Time – media released according to a calendar schedule, or locked/unlocked by time of day (e.g. only available between 3pm & 4pm)
  • Location – media released only to those in a certain geographical area, or changed/modified based on location
  • Device/Platform – media only available on mobile, or only on the project sponsor’s network, or only on TV
  • Knowledge – media released only if the participant has experienced some other content first
  • User action – media released when a person clicks a button or link
  • Audience numbers – media released when enough people are playing the game, or switched off if more than six people are in the room
  • Age – must be over 15?
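The trigger-and-dependency model described above can be sketched in code. This is a minimal, hypothetical illustration only (the names `Participant`, `MediaState` and `unlock` are mine, not from any real tool): an asset is revealed only when its trigger fires and the participant satisfies every dependency.

```python
from dataclasses import dataclass, field

# Hypothetical sketch of the trigger/dependency model in the post.
# All names and fields are illustrative assumptions.

@dataclass
class Participant:
    age: int
    location: str
    device: str
    seen: set = field(default_factory=set)  # content already experienced

@dataclass
class MediaState:
    asset_id: str
    min_age: int = 0
    allowed_devices: tuple = ()        # empty tuple = any device
    requires_knowledge: tuple = ()     # asset ids that must be seen first

    def dependencies_met(self, p: Participant) -> bool:
        """Check the age, device and knowledge dependencies."""
        if p.age < self.min_age:
            return False
        if self.allowed_devices and p.device not in self.allowed_devices:
            return False
        return all(k in p.seen for k in self.requires_knowledge)

def unlock(state: MediaState, participant: Participant, triggered: bool) -> bool:
    """An asset is revealed only when its trigger fires AND the
    participant meets all of its dependencies."""
    return triggered and state.dependencies_met(participant)
```

Each bullet above (time, location, audience numbers, etc.) would simply become another field checked in `dependencies_met`, or another condition feeding the `triggered` flag.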

Each media asset that’s unlocked must be described in terms of:

  • The type of media (e.g. audio, video, image, text, interactive)
  • Device implementations and dependencies (e.g. audio only available via mobile)
  • The story knowledge revealed (info, characters, plot points, props, locations)
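As a rough illustration, those three descriptive facets could be captured in one record per asset. The field names here are my own invention, not part of any established schema:

```python
from dataclasses import dataclass, field

# Illustrative sketch only: one record per unlocked media asset,
# capturing the three descriptive facets listed above.

@dataclass
class MediaAsset:
    asset_id: str
    media_type: str              # e.g. "audio", "video", "image", "text"
    devices: tuple = ()          # device implementations/dependencies
    story_knowledge: dict = field(default_factory=dict)
    # story_knowledge maps facet -> items revealed, e.g.
    # {"characters": ["the detective"], "plot_points": ["the reveal"]}
```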

So now, at a high level, and without lots of messy lines criss-crossing an A3 sheet of paper, it’s possible to present very clearly each media asset and its relationships:

  • to the story
  • to the experience
  • to the audience
  • to other media.

Of course additional documentation is needed for each asset but at least there’s now a simple overview.

This is still a work in progress and I’ll develop it further, but I’d be interested to hear thoughts from others or to learn of other approaches.


Posted in cross-media, gaming, marketing, storytelling, transmedia

Robert Pratten is CEO and Founder of Transmedia Storyteller Ltd, an audience engagement company and provider of Conducttr, a pervasive entertainment platform. He has more than 20 years’ experience as an international marketing consultant and has established himself as a thought-leader in the field of transmedia storytelling. He is author of the first practical book on transmedia storytelling: Getting Started in Transmedia Storytelling: A Practical Guide for Beginners.


  • Interesting Robert, nice to see people plugging away at coming up with a standard way to 'chart' multichannel experiences - but I think there are a multitude of ways of representing time, channels, media form etc. In the charts for every project I am involved in, they are all different, with subtle variations on a theme (most are not confusing criss-cross lines). As a composer from my Uni days & recording producer, I know there is a good deal that can be borrowed from the various forms of music captured in graphical design - an orchestral score is a simplistic multi-channel idea, but I think old and modern multi-track recording (such as Logic, Reason, Cubase, Performer etc. that combine audio, MIDI, digital instruments, text and video tracks), all representing different elements across multiple timelines, are better analogues. As an example I just threw up this image of Mike Oldfield's graphical layout for a 25-minute, 24-track piece from the early 80s, which as well as being very representative of the 'audio experience' has a colourful, child-like, accessible charm - enjoy :)

  • Examples of our approach are shown in the video of Lance's PTTP presentation. It's applied as annotations within the script, production notes, creative specs and other documentation, so it's actually pretty simple to implement. The parties who perform the annotation don't need to know anything about formal ontologies.

  • Hey David, thanks for continuing the discussion.
    I also thought that someone must have solved this problem before and researched different methods including UML and FSMs etc. The problem I feel with these formal languages is that while they might most accurately and succinctly describe the system, nobody without a formal education in these languages can understand them.
    What I feel would be nice is to find a way to communicate the solution to as many people as possible (e.g. client, creative team). Once the media asset has been identified, it can be described with the approach most appropriate to the media and to those who are going to create it (e.g. a script for a movie).

    Any chance you might post an example for your ontological approach? From your description it sounds a little complicated for someone arriving at transmedia from the entertainment industry.

  • There are graphical modeling formalisms, like UML, that can be useful for depicting the constituents, structure, sequence, states, interactions, and activities of a transmedia program. These are used when designing and analyzing systems, typically software architectures.

    Here's a simple introduction -

  • I like that density bar representation.

    The approach that we've taken to the description, modeling, and visualization of transmedia programs is to utilize an ontological representation of the storyworld and program elements, and to derive these aspects from this base representation. So any documentation, model perspectives, diagrams et al. are composed from the ontology.

    In this way, the world and program representations are not limited by the specification of their presentation type (e.g. the conventions of flow diagramming, or a specific documentation template scheme). We can render these formats by mapping to them from our ontology. This technique also supports abstraction, which allows us to simplify the representation and adapt it to its application.
