Company Post

Refactor: An Autonomous Generative Painting Factory Mapping the Artist

Reflecting on audio-visual expression and the mediation of digital language, we need to ask what cognitive potential is engaged. Are only people with synesthesia able to experience a blended perception, or is all our cognition in essence embodied and thus linked? Audio-visual art offers an embodied, unified sensory experience that alters our consciousness and redefines space.
In 'Refactor' the painterly experience of the visual artist is mapped into computer code, making possible a painting factory without any presence of the painter. The platform is an interwoven spatial-visual-aural sculpture that redefines space through the arrangement of generated, recurring images and sounds. Visual images are immersed in the soundscape, and sounds emerge from visual images. Painting becomes sound and fades from our sight as sound does. The compositional algorithms of the painter are brought to life by the compositional algorithms of the sound artist.

Background

After a series of exhibitions, artist Nikzad Arabshahi had the idea to reconstruct his paintings in real time through computer renderings in a generative algorithmic composition. All of these algorithms were implemented by the artist in TouchDesigner, which allowed him to "simply visualize [his] thoughts without limitation while providing all the necessary equipment for the challenges of production". Refactor is the culmination of Arabshahi's earlier painting projects (Eraser, 2013, and Chlordiazepoxide, 2011) transformed into a generative art project.

The project is an experiment in a new dialogue blending two mediums, sound and visual, so that the two play with each other. The visual part is the result of an artistic pursuit and painterly experience that has "transformed the painter's mindset". The project portrays this new reckoning and is thus named "Refactor".

At Refactor's core is a computer program based on evaluating, reviewing, reflecting on and reinterpreting the painter's practice of art-making.

The outcome of this self-reflection is turned into compositional algorithms that can be formulated mathematically, so that their sequential outcome forms a dynamic network of computer code. This code makes painterly experience possible without the painter: a painting factory with no need for the painter's presence or control. It recreates the process of image-making and controls the painterly quality of this realization on dedicated visual platforms.

In audio-visual art one cannot separate sound and visuals from each other without disrupting the essence of this new art. Here the relation of sonic and visual stimuli is no longer a simple one, but a new language created for decoding, decrypting, deciphering, revealing and then again concealing, reinterpreting, imagining, evoking a fleeting moment of what might present itself as the real or as the true.

This new language, articulated by intertwining space, sound and image, has now become both the medium and the mediator of new forms of dialogue.

Method

Various sensors, including Kinect, Leap Motion, EEG, Myo and a Wacom tablet, were installed in Arabshahi's studio; for two consecutive weeks they captured his hands' physical movements and the ways in which he used both physical painting brushes and virtual brushes in digital painting.

[Tech Tip: Capture Gestures using this Gesture Capture Component]
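For a sense of what such a capture pipeline can look like at the scripting level (the component linked above is the ready-made route), a CHOP Execute DAT in TouchDesigner can log every incoming sensor sample to a table for later analysis. The operator name 'log1' below is hypothetical, not taken from the project:

# CHOP Execute DAT callback: append each new sample of the incoming
# sensor channels (e.g. a Leap Motion palm position) to a Table DAT.
# 'log1' is a hypothetical Table DAT acting as the recorder.
def onValueChange(channel, sampleIndex, val, prev):
	# log timestamp, channel name (e.g. 'palm_x') and sample value
	op('log1').appendRow([absTime.seconds, channel.name, val])
	return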

Analysis of these bundles of data led to the design of TouchDesigner modules imitating Arabshahi's behaviours. Materials-science charts and the technical application of acrylic paint were next employed to model and simulate painting techniques such as impasto, glazing, wet-on-wet and brushwork.
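The article does not publish these modules, but as a rough illustration of the kind of model involved, the optical behaviour of a glaze (a thin translucent layer that tints rather than covers the paint beneath) can be approximated with a multiplicative blend. A minimal NumPy sketch with made-up layer colours:

import numpy as np

def glaze(base, glaze_color, opacity):
	# A glaze acts as a translucent filter: it tints and slightly
	# darkens the underlying paint instead of covering it.
	tint = 1.0 - opacity * (1.0 - glaze_color)
	return base * tint

canvas = np.ones((4, 4, 3))                     # white ground (RGB)
for color in ([0.9, 0.6, 0.4], [0.8, 0.7, 0.5], [0.7, 0.5, 0.6]):
	canvas = glaze(canvas, np.array(color), opacity=0.4)
# successive glazes deepen the colour instead of replacing it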

The outcomes of these simulations, visual elements and units in the form of lines, shapes and marks, are subsequently assembled into meaningful shapes and patterns via a further layer of local compositional algorithms: "a network of modules designed to build a meaningful texture".

At the last stage, a final compositional algorithm governing the general composition of the whole picture was added, and the network condensed into a drawing engine. Five of these simulation engines were used to reach the final picture.

Process

The operation sequencing of each painting simulation engine can be summarized as follows:

  • Analyse and resample the main data, then design a Channel Operator (CHOP) network to simulate the basic algorithms, using various CHOPs to generate and control data.
  • Use the CHOPs' data to design and control various Surface Operators (SOPs) and particle systems that render the basic visual elements.
  • Use the SOPs' renders to redesign images, building Texture Operator (TOP) networks for the final composites.
  • Use TOP-to-CHOP techniques for image processing.
  • Map and translate the final image into parameters meaningful for data sonification in two separate channels: the red, green, blue, alpha and contrast of the final image, plus the playfulness and intensity of movement of the SOPs and particle systems in x-y-z units. These data are filtered, smoothed to 7 fps and sent to Max/MSP over UDP for sound generation (see the sketch after this list).
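A minimal sketch of that last step, assuming a plain space-separated text packet; the post does not specify the packet format, and the host, port and parameter values below are placeholders:

import socket

MAX_ADDR = ('127.0.0.1', 7400)   # hypothetical Max/MSP host and port
sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)

def send_frame_params(r, g, b, a, contrast, playfulness, intensity):
	# Pack one frame's sonification parameters (already smoothed to
	# 7 fps upstream) and send them to Max/MSP over UDP.
	vals = (r, g, b, a, contrast, playfulness, intensity)
	msg = ' '.join(f'{v:.4f}' for v in vals)
	sock.sendto(msg.encode('utf-8'), MAX_ADDR)

send_frame_params(0.42, 0.31, 0.18, 1.0, 0.6, 0.2, 0.5)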

The final image is mapped into a time-based format and projected onto two raw canvases of 140" x 236" mounted on the gallery wall, and three multi-layered metal sculptures of 141" x 149" x 188" installed on three sides of the gallery space.

Operations and Sound Design

Meaningful data is not a matter of abundance. Raw visual data at 1920 x 1080 px, 24-bit colour and 30 fps amounts to roughly 1.5 Gb (about 187 MB) per second, yet this torrent of pixels offers no meaningful parameters for understanding a picture. The complex mathematical methods of pattern recognition are, in essence, the practice of designing algorithms that distil this abundance of raw data into a few meaningful parameters.
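For reference, the arithmetic behind that figure, assuming 24-bit RGB with no alpha channel:

width, height, fps = 1920, 1080, 30
bytes_per_px = 3                                # 24-bit RGB
bytes_per_sec = width * height * bytes_per_px * fps
print(f'{bytes_per_sec / 1e6:.0f} MB/s')        # ~187 MB per second
print(f'{bytes_per_sec * 8 / 1e9:.2f} Gb/s')    # ~1.49 Gb per second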

Vedad FamourZadeh: After the Second World War, composers began using graphical scores for their new conceptions of music. György Ligeti's Artikulation is a famous example, but some graphical scores, like Earle Brown's Folio, do not give a one-to-one relation between the score and the final sound, leaving more room for free interpretation.

In Refactor, a time-based right-to-left movement was adopted as the basis of the score, with the vertical axis divided into a grid of three octaves of microtonal just-intonation intervals, the outcome of extensive research into the forgotten musical intervals of Qutb al-Din al-Shirazi (1236 – 1311).
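The post does not list al-Shirazi's intervals, so the ratios below are ordinary 5-limit just-intonation placeholders, not his microtonal ones; the sketch only shows the mapping mechanics from a vertical grid position to a frequency:

# Map a vertical grid index to a frequency on a three-octave
# just-intonation scale. BASE_HZ and RATIOS are placeholders.
RATIOS = [1/1, 9/8, 5/4, 4/3, 3/2, 5/3, 15/8]
BASE_HZ = 110.0

def grid_to_freq(step):
	octave, degree = divmod(step, len(RATIOS))
	return BASE_HZ * (2 ** octave) * RATIOS[degree]

scale = [grid_to_freq(i) for i in range(3 * len(RATIOS))]  # bottom to top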

A set of three sine-wave/white-noise modular synthesizers is implemented. In the main one, an oblique line, say going from low to high over a certain time, sounds as a glissando on this musical scale; the second, an octave higher and sliding freely between notes, produces a chirping sound; and the third moves in triple time.

This set is then layered into itself with the help of a poly~ object in Max/MSP to build a polyphonic texture, creating a soundscape that brings the visuals to life. The visuals' colour palette and contrasts determine the intensities of the harmonics of these sets of modular synthesizers.
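The post gives no formula for this mapping; as one hedged possibility, the normalized red, green and blue energies could weight the first three partials while contrast lifts the upper ones:

def harmonic_amps(r, g, b, contrast, n_harmonics=8):
	# Hypothetical colour-to-timbre mapping: RGB drives the first
	# three harmonics, contrast scales the brighter partials.
	amps = [r, g, b] + [contrast / (k + 1) for k in range(3, n_harmonics)]
	total = sum(amps) or 1.0
	return [a / total for a in amps]  # normalize the overall level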

An extra parameter named 'Playfulness' is defined to map the activity of the visuals, that is, to what degree they are chaotic, playful and noisy, or coherent and ordered.
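The post does not define Playfulness mathematically. One plausible proxy, offered here only as an assumption, is frame-to-frame change, e.g. the mean absolute difference between consecutive frames:

import numpy as np

def playfulness(prev_frame, frame):
	# Hypothetical activity measure on frames scaled to [0, 1]:
	# 0 means a static image, values near 1 mean chaotic motion.
	return float(np.mean(np.abs(frame - prev_frame)))

rng = np.random.default_rng(0)                 # two made-up frames
f0, f1 = rng.random((8, 8)), rng.random((8, 8))
print(playfulness(f0, f1))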

The final soundscape, made of a complex series of sine-wave and white-noise modular synthesizers, tries to give a general feel, a general atmosphere of the visuals. Sound moves in space based on how active each projector is. In the end, the audience is immersed in a sonic environment, a soundscape that puts them inside the generated visuals, as if living in it, as if meditating in a temple of sounds, colours and movements.

About the Artists

Nikzad Arabshahi

Nikzad is a Tehran-based media artist working in the fields of painting, videography and new media. He started his professional career in 2000 and has since shown his work in art exhibitions, theatre shows and multimedia installations.

Vedad FamourZadeh

Vedad is a PhD candidate at UTS, in the Faculty of Arts and Social Sciences, Department of Communication, researching Persian music ontology and interactive sound installations. He qualified as a vibro-acoustic/signal-processing engineer at Sharif University of Technology and the Institut de Recherche et Coordination Acoustique/Musique (IRCAM), where he commenced his research on Persian music in 2004. As a sound artist and composer, he explores the integration of sound art and electronic music with different soundscapes and the diverse musical traditions of Iran.

 
