3/3

The third and final day of Eyeo was no exception to the inspiration I’ve come to expect from the festival.  The day was a bit shorter than the previous two, lacking an afternoon workshop and evening sessions, but the morning and midday were jam-packed without any break, so it was still a full day.

Up first was Nicholas Felton, who goes by the online moniker “Feltron” (I’d always thought his last name was Feltron, but it isn’t).  He talked about the data collection process he goes through to create his annual reports, and I found it comforting to see how laborious the process is for him.  There are many people, of whom Nick is one, who just. work. hard. It’s nice to see that effort is required to create such beautiful pieces.

After Nicholas, Mark Hansen, who I first heard about at a conference back in 2008 and have even exchanged a few experimonth-related emails with, spoke about programming people instead of pixels through a project he’s collaborating on with the Elevator Repair Service.  Using Processing, he’s remixed three books into one play/reading and then serves up the lines to the actors via iPhones, live.  The actors, who know the books intimately, read them in the new order.  The whole thing takes about eight hours, and in addition to remixing sentences and their order, he also assigns the actors various locations, which have them move about the performance space (picture a large place like the NYPL), sometimes joining together, sometimes moving apart.  It’s a very interesting expression of data, and while I’m not sure how applicable it is to my work, it has definitely pushed the boundaries of what output I see as possible from a piece of code.

After Mark, Jer Thorp spoke about several projects he’s done in the year he’s lived in New York.  Jer’s so smart and approachable, I’m really becoming a big fan.  It was great to see Cascade through his eyes: I see now that it’s a reader/story analysis tool for NYT staff, not necessarily a toy for the end user.  What will stick with me most from his talk, however, is the process he went through to visualize the names of 9/11 victims for the 9/11 Memorial and Museum (I won’t go through his talk as it’s well documented here).  What I loved most about his process was that he used Processing to figure out the solution to a problem, without making Processing the product of that solution.  I also appreciated that the visualization wasn’t finished once Processing had everything figured out.  (He created an interface for the output so that the visualization was editable by the architects before becoming final.)

Aaron Koblin was up next, and he showcased several awesome projects that I never realized were done by the same person.  The sheep drawing, Johnny Cash, and Wilderness Downtown projects were all, at least in part, his creation.  He also showed a number of projects that I’d never seen but were similarly delightful.  In the science museum world, we have citizen science.  Aaron’s work is like citizen art.  Lots of people create small parts, which together make up a whole that they’re often unaware of until it’s complete.  There is something really moving about the gestalt of these varied but cohesive expressions.  Like Jake’s talk the day before, it had me on the verge of tears at several points.

The last talk of the day was a panel, Data Viz & Social Justice.  Laura Kurgan, who pointed out that no data is raw, immediately earned lots of head nodding from me.  So did an audience member who later asked what we are doing to communicate that in our visualizations, and why there aren’t error margins in them, a question that drew spontaneous applause.

In my work with scientists, I’m exposed to different approaches to data.  Some scientists don’t even look at data until they have their questions and hypotheses defined and documented.  Others consider the inability to shift gears while exploring the data a major weakness in research methodology.  I don’t know where I am in that continuum, but I don’t hear us (designers, developers, data viz geeks) talking about it at all.  Someone mentioned that these sorts of things should be covered in a future Eyeo.  I do and do not agree (more on that in my final Eyeo installment).

After the wrap-up, which involved lots of clapping, I spent the afternoon over at the Walker and then out to dinner with friends new and less new. My thoughts on the festival in general and what I’ll take away will be posted tomorrow.