Having watched Christy Dena’s excellent presentation yesterday (see the embedded video below), I was motivated to publish the attempts I’ve been making to document transmedia storytelling.
The presentation identifies some key requirements for transmedia documentation:
- indicate which part of the story is told by which media
- indicate the timing of each element
- indicate how the audience traverses the media (what’s the call to action?)
- indicate what the audience actually sees and does
- take account of the possibility for “non-linear traversal” through the story
- provide continuity across developers (who may be working on different media assets)
Christy also references music notation and says that it would be nice to present a transmedia project in this way so that someone could see the beauty of it at a glance.
I’ve been looking at this approach myself and I’m not the first. I knew that Mike Figgis (who is a composer as well as a director), when working on Timecode, used a kind of music notation to present and explain his ideas for how four stories would be told simultaneously in real time. And in fact I was delighted to see that he’s put his notes online!
So here’s my proposed solution. The breakthrough that came while watching Christy’s presentation was to separate the actual story narrative from the experience of it. Hence at the highest level we have two timelines: one for the story and one for the experience.
Taking this idea further, it’s possible to break the media into separate timelines so that it’s possible to see which media is being used where.
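The two-timeline idea can be sketched in a few lines. This is a minimal illustration (all names are hypothetical, not from any actual project): the same media assets appear once in story order and once in the order the audience actually experiences them.

```python
# Two views of the same set of assets: narrative order vs. audience order.
story_timeline = ["opening", "middle", "ending"]       # order within the narrative
experience_timeline = ["ending", "opening", "middle"]  # e.g. a game reveals the end first

# Both timelines reference the same assets, just reordered.
assert sorted(story_timeline) == sorted(experience_timeline)
print(experience_timeline[0])  # ending
```

Keeping the two orderings as separate lists makes a non-linear traversal (the game revealing the ending first) explicit rather than tangled into one sequence.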
Hence, at a very high level, it’s possible to see in the example above that the audience first encounters the story through an online game, which actually reveals the end of the narrative. During the game it looks like several mobile media assets and some internet video are used.
At a glance this does meet many of the documentation criteria, although of course it doesn’t reveal the detail or say how the media is traversed.
Experiencing the Media
I took the approach that progression of the experience (and hence the unlocking or revealing of media that tells another piece of the story) is governed by two controls: triggers and dependencies.
Hence, each stage or “state” of the experience is represented by a media asset that is unlocked by a trigger and made available to the participant if he/she meets the dependencies (age, location, time, network etc.).
Example triggers and dependencies might be:
- Time – media released to a calendar schedule, or locked/unlocked by time of day (e.g. only available between 3pm & 4pm)
- Location – media released only to those in a certain geographical area or changed/modified based on location
- Device/Platform – media only available on mobile or only on project sponsor’s network or only on TV
- Knowledge – media released only if participant has experienced some other content first
- User action – media released when person clicks a button or link
- Audience numbers – media released when enough people are playing the game, or switched off if more than six people are in the room
- Age – must be over 15?
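The trigger-plus-dependencies model above can be sketched as a simple check: an asset unlocks only when its trigger fires and the participant satisfies every dependency. This is a hypothetical sketch (function and field names are mine, not part of any published scheme), using age and time-of-day as example dependencies.

```python
from datetime import time

def in_time_window(now, start, end):
    """Time dependency: asset only available between start and end (e.g. 3pm-4pm)."""
    return start <= now <= end

def unlock_asset(trigger_fired, dependencies, participant):
    """Unlock only if the trigger fired AND every dependency passes for this participant."""
    return trigger_fired and all(dep(participant) for dep in dependencies)

# Example participant and two of the dependencies listed above.
participant = {"age": 17, "now": time(15, 30)}
deps = [
    lambda p: p["age"] > 15,                                 # Age dependency
    lambda p: in_time_window(p["now"], time(15), time(16)),  # Time-of-day dependency
]
print(unlock_asset(True, deps, participant))  # True: trigger fired, all deps met
```

The same shape covers the other dependency types (location, device, prior knowledge, audience numbers): each is just another predicate on the participant’s state.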
Each media asset that’s unlocked must be described in terms of:
- The type of media ( e.g. audio, video, image, text, interactive)
- Device implementations and dependencies (e.g. audio only available via mobile)
- The story knowledge revealed (info, characters, plot points, props, locations)
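The three descriptive fields above map naturally onto a small record per asset. A minimal sketch, with field names and example values that are illustrative assumptions rather than part of the original scheme:

```python
from dataclasses import dataclass

@dataclass
class MediaAsset:
    media_type: str       # e.g. audio, video, image, text, interactive
    devices: list         # device implementations/dependencies, e.g. ["mobile"]
    story_knowledge: list # info, characters, plot points, props, locations revealed

# Example: an audio clue only available on mobile.
clue = MediaAsset(
    media_type="audio",
    devices=["mobile"],
    story_knowledge=["plot point", "location"],
)
print(clue.media_type)  # audio
```

A flat record like this is what lets each asset be presented on the timeline overview without the “messy criss-crossing lines” of a traditional diagram.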
So now, at a high level, and without lots of messy lines criss-crossing an A3 sheet of paper, it’s possible to present very clearly each media asset and its relationships:
- to the story
- to the experience
- to the audience
- to other media.
Of course additional documentation is needed for each asset but at least there’s now a simple overview.
This is still a work in progress and I’ll develop it further, but I’d be interested to hear thoughts from others or learn of other approaches.