Much of this dialogue, Matthew says, takes root in the philosophical arguments and observations of Vilém Flusser, a theorist who was "deeply interested in the role of media and the apparatuses operated by subjects in the world". For Flusser the apparatus is "a tool that changes the meaning of the world, in contrast to mechanical tools that work to change the world itself."
In this scenario a camera is seen both as a mechanical tool and as an apparatus that can change how we construct meaning in the world. With these ideas as "scaffolding", Matthew tells us "I've been playing with different ideas around how to construct meaning out of different pieces of the world."
Matthew: The first tool I built started as an exploration of ideas of time and representations of time. I wanted to know what a clock that created a picture of the past would look like. Specifically, I liked the idea of a camera that watched the world, and then each second, minute, and hour the average color of the camera's view became a sample of the world. This would mean that looking at a color clock would be like looking at the past in terms of color. A way of looking at the world that we can't physically accomplish without the use of a computer.
This approach uses three different radial gradients with 24, 60, and 60 divisions respectively for hours, minutes and seconds. A camera's view is then averaged and sampled at regular intervals in order to populate the ramp keys that make up the gradient of the circle.
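The sampling idea behind the color clock can be sketched in plain Python. This is a hypothetical illustration, not Matthew's actual TouchDesigner network: `average_color` and `ColorClock` are invented names, and frames are modeled as simple lists of RGB tuples rather than camera textures.

```python
def average_color(pixels):
    """Collapse a list of (r, g, b) tuples into one average color."""
    n = len(pixels)
    return tuple(sum(p[i] for p in pixels) / n for i in range(3))

class ColorClock:
    """A ring of ramp keys: 60 slots for seconds here, but the same
    structure works for 60 minutes or 24 hours."""

    def __init__(self, divisions=60):
        self.divisions = divisions
        # One ramp key per division around the circle, starting black.
        self.keys = [(0, 0, 0)] * divisions

    def tick(self, second, frame_pixels):
        # Each tick overwrites the slot for the current second with the
        # camera frame's average color, so the ring holds the recent past.
        self.keys[second % self.divisions] = average_color(frame_pixels)
```

In the real version these keys would drive a radial Ramp gradient; here they are just a list you could render however you like.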
This was interesting, and left me thinking about taking something like a movie and applying the same idea. What would 100 equidistant samples of a movie or YouTube short look like?
I started by creating a ramp with 100 divisions, and then used a similar technique of sampling average color to populate the values for the ramp keys driving the gradient. These keys were populated by scripts run during the playback of the movie. The images above are 100 equidistant average-frame samples of the short Stray on Vimeo.
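Picking 100 equidistant frames reduces to a small index calculation. The sketch below is an assumption about the approach, not the original script: `sample_indices` and `movie_ramp` are hypothetical helpers, and the movie is modeled as a precomputed list of per-frame average colors.

```python
def sample_indices(total_frames, samples=100):
    """Frame indices spaced evenly across a clip of total_frames frames."""
    step = total_frames / samples
    return [min(total_frames - 1, int(i * step)) for i in range(samples)]

def movie_ramp(frame_colors, samples=100):
    """Pick `samples` equidistant entries from a list of per-frame
    average colors, giving one value per ramp key."""
    return [frame_colors[i] for i in sample_indices(len(frame_colors), samples)]
```

For a 1000-frame clip this grabs every tenth frame's average color, one per ramp division.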
Finally, another idea was rooted in the website MOVIEBARCODE. Moviebarcode converts a full film into a barcode of color - each frame becoming a single pixel wide and a uniform number of pixels tall. I liked this idea, and wondered how I could apply it to single images rather than whole movies.
I started by considering how I would condense the information in a whole image to just a barcode of color. Each bar is the reduction of an entire picture compressed to a resolution of one pixel high by the original width (in pixels) of the image. This distorted version of a photo is then stretched to be 256 pixels tall (to make the color schema more visible). For a sense of scale, the first barcode image below is the product of roughly 270 operations.
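The reduction described above can be sketched as a column-wise average. This is a minimal pure-Python illustration, assuming the image is a row-major grid of RGB tuples; `image_barcode` is an invented name, and a real pipeline would do the resize on the GPU or with an imaging library.

```python
def image_barcode(image, height=256):
    """Compress an image (list of rows of (r, g, b) tuples) to one pixel
    tall by averaging each column, then stretch the strip vertically."""
    width = len(image[0])
    rows = len(image)
    strip = []
    for x in range(width):
        column = [row[x] for row in image]
        # Average this column down to a single pixel.
        strip.append(tuple(sum(c[i] for c in column) / rows for i in range(3)))
    # Repeat the 1-pixel-tall strip to make the colors visible.
    return [strip] * height
```

The result is a `height` x width grid where every row is identical: the barcode.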
Matthew Ragan has been a very active proponent of TouchDesigner - first as a student, and now through the very, very interesting "Compositional & Computational Principles for Media" course he currently teaches at Arizona State University. If you spend any time at all on the TouchDesigner Facebook Group, the TouchDesigner Help Group or the Derivative Forum, you will have the pleasure of interacting with him there first-hand - it comes recommended!