Revealing Space

2012, coding
The results of my “architectural machines reloaded” course in Weimar/Germany.
The task was to design and code an architectural machine in Processing. What I did was an analyzing machine in openFrameworks.


Public space is a far more complicated phenomenon than we realize in our everyday lives. Public space is not just about the physical setting or the social interaction; time is a very basic element of its constitution. The project "Revealing Space" aims to reveal people's physical movement in public spaces by making it visible in a certain space over a longer time span. The project thus aims to sensitize the viewer to the fact that space is not made up only at one certain moment in time (the moment we perceive it) but always relates to what has happened in its past (and, one could even say, to what will happen to it in the future). This is true not just for people's movement in public spaces but also for the built physical environment and the social interactions that have been or will be performed in those spaces.

In many ways the project "Revealing Space" relates to the installation project "Secret Trails" that I realized in different locations in Helsinki in summer 2011. Whereas the earlier project used the Kinect's depth data to track people's movement and projected it back into the same space in real time, "Revealing Space" uses traditional prerecorded video input.


Methods | Programming

The project uses two quite different methods:
1 – Simple Pixel Differentiation
The first method compares every pixel of the current video frame (its red, green, and blue values) with a base image and adds the pixel to the output image if the difference exceeds a certain value (threshold).
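The differencing step can be sketched roughly as follows. This is a simplified stand-in in plain C++ over raw RGB pixel buffers, not the project's actual openFrameworks code; the function name and threshold convention are illustrative assumptions.

```cpp
#include <cstdint>
#include <cstdlib>
#include <vector>

// Compare each pixel of the current frame against a base image and copy
// it into the accumulating output image when the summed RGB difference
// exceeds the threshold. Buffers are interleaved RGB of equal size.
void accumulateDifference(const std::vector<uint8_t>& base,
                          const std::vector<uint8_t>& frame,
                          std::vector<uint8_t>& output,
                          int threshold) {
    for (size_t i = 0; i + 2 < frame.size(); i += 3) {   // step over RGB triples
        int diff = std::abs(frame[i]     - base[i])
                 + std::abs(frame[i + 1] - base[i + 1])
                 + std::abs(frame[i + 2] - base[i + 2]);
        if (diff > threshold) {                          // pixel has changed enough
            output[i]     = frame[i];
            output[i + 1] = frame[i + 1];
            output[i + 2] = frame[i + 2];
        }
    }
}
```

Because changed pixels are written into a persistent output buffer rather than a fresh one each frame, movement accumulates over time instead of being overwritten.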
2 – Blob detection using OpenCV
The second method uses the OpenCV add-on for openFrameworks. The add-on makes it possible to detect a moving person as a blob (the person itself is detected by a differencing step similar to the method above). All blob center points of one frame are added to one vector (the blobs per frame), which is then added to another vector (for all frames). This nested vector recalls the principle of a two-dimensional ArrayList in Processing. It makes it possible to store as many blobs per frame as needed while preserving the time sequence of the blobs. The blobs are then redrawn to the screen as circles whose alpha values depend on when they were detected (newer blobs get a higher alpha value).
In principle, any number of frames could be stored, making it possible to encode videos of unlimited length. The time period is limited only by the patience of the software user: for the results shown in the project video, I ran the originally 9:34 min video material at 3x speed, reducing the processing time to 3:12 min. In principle, the software could also be used for real-time encoding.
I also included a limit on how long blobs are stored, a simple but quite effective feature. The time period can be chosen before running the program. Only while implementing it did I realize the power of this choice: how long is the "memory of a space"? Is it just a second, a minute, an hour, or a year? The choice has a great influence on the results and therefore shows the power of software developers and information designers: the power to influence our memory.
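The memory limit can be sketched as a rolling buffer of blob frames: once more frames are stored than the chosen memory span, the oldest are forgotten. This is a hypothetical sketch with illustrative names, not the project's actual implementation.

```cpp
#include <cstddef>
#include <deque>
#include <vector>

// Minimal stand-in for a blob centre point.
struct Pt { float x, y; };

// Stores blob frames but only "remembers" the most recent `span` frames,
// dropping the oldest as new ones arrive.
class BlobMemory {
public:
    explicit BlobMemory(size_t memorySpan) : span(memorySpan) {}

    void addFrame(const std::vector<Pt>& blobs) {
        frames.push_back(blobs);
        while (frames.size() > span) {
            frames.pop_front();            // the space forgets its oldest frame
        }
    }

    size_t storedFrames() const { return frames.size(); }

private:
    size_t span;                           // how many frames the space remembers
    std::deque<std::vector<Pt>> frames;    // oldest frame at the front
};
```

Choosing `memorySpan` is exactly the choice described above: a short span shows only recent movement, a long one lets the space accumulate its whole history.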

For a PDF of the project documentation, see here

Source-code download
1 – Simple pixel differentiation
2 – Blob detection using OpenCV