maurograziani.org
Music Art Technology & other stories
Posted on 2014-11-08 by MG
Music driven by cosmic rays. It's not uncommon. Every now and then, someone does experiments like this. Indeed, since notes and frequencies can be expressed numerically and sent to synthesis algorithms, things aren't that complex: you just have to choose how to rescale the starting data.
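To illustrate the rescaling idea, here is a minimal sketch (the function name, note range, and linear mapping are my own assumptions, not taken from any particular sonification tool): it linearly maps a data value from its original range onto MIDI note numbers.

```python
def rescale_to_midi(value: float, lo: float, hi: float,
                    note_min: int = 36, note_max: int = 96) -> int:
    """Linearly map a data value in [lo, hi] to a MIDI note number.

    note_min/note_max default to C2..C7, an arbitrary but playable range.
    """
    t = (value - lo) / (hi - lo)            # normalize to 0..1
    return round(note_min + t * (note_max - note_min))

# e.g. map particle energies in the 3-7 MeV range to notes
print(rescale_to_midi(5.0, 3.0, 7.0))       # midpoint of the range -> note 66
```

A logarithmic mapping is equally common, since both pitch perception and many physical quantities are logarithmic; the linear version above is just the simplest choice.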
For some years now, these procedures have also acquired a scientific value under the name sonification. Transforming any type of data into audio makes it easier to identify repeating patterns, differences, and other characteristics of the phenomenon being investigated.
Obviously, from a compositional standpoint, things take on a completely different aspect. The physical phenomenon that generates the data isn't the least bit concerned with producing a pattern that might be meaningful or significant to us. In fact, the result is usually repetitive, boring, and monotonous (often also monotonic in the mathematical sense of always moving in the same direction, e.g., a logarithmic curve that keeps rising or falling as it approaches a limit).
The Kosmophono we're talking about is quite old, having been in operation since at least 2005 (site). It's a gamma-ray spectrometer that operates in the 3-7 MeV range, whose output is sent to the MIDI IN of a synthesizer.
A brief explanation. Most gamma rays are produced by very powerful extrasolar phenomena (in distant space). The waves produced by these phenomena are not sound waves (otherwise they could not reach us across the vacuum of space), but part of the electromagnetic spectrum (like radio waves and the light we see). Furthermore, they occupy the shortest wavelength range (below 0.006 nanometers = 6 thousandths of a millionth of a millimeter), which equates to the highest frequencies, about 5×10^19 Hz, and they are also very harmful to us.
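The frequency figure above follows directly from the wavelength via f = c / λ. A quick sketch to verify it (the function name is mine):

```python
C = 299_792_458.0  # speed of light in vacuum, m/s

def frequency_hz(wavelength_m: float) -> float:
    """Frequency of an electromagnetic wave with the given wavelength."""
    return C / wavelength_m

# 0.006 nm = 6e-12 m, the wavelength bound quoted above
print(f"{frequency_hz(6e-12):.2e} Hz")  # -> 5.00e+19 Hz
```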
Fortunately, they do not reach the ground but crash into the upper layers of the atmosphere, and it is these collisions that the spectrometer detects. Each event produces an energy reading that is measured and converted by a 12-bit ADC: the top seven bits become the MIDI pitch, the next four the velocity, and the lowest bit is discarded.
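The bit split described above can be sketched as follows. This is a minimal illustration, not the Kosmophone's actual code (the hardware presumably does this in logic, and the exact bit ordering is my assumption); note that the 4-bit field gives a velocity of 0-15, which would likely be rescaled into MIDI's full 0-127 velocity range before playback.

```python
def adc_to_midi(sample: int) -> tuple[int, int]:
    """Split a 12-bit ADC sample into a MIDI pitch and raw velocity.

    Top 7 bits -> pitch (0-127), next 4 bits -> velocity (0-15),
    lowest bit discarded, as the article describes.
    """
    assert 0 <= sample < 4096          # 12-bit input range
    pitch = (sample >> 5) & 0x7F       # bits 11..5
    velocity = (sample >> 1) & 0x0F    # bits 4..1 (bit 0 is dropped)
    return pitch, velocity

print(adc_to_midi(4095))  # maximum energy -> (127, 15)
```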
The player is a MIDI synthesizer, usually a Roland JX-305 or an Alesis QSR.
Now you can listen to a single sequence sent to the Roland. Being a single sequence, it's monophonic and rather boring, but it's useful for understanding the range and density of the events. It's an MP3.
Perceptually, things change when two sequences are superimposed. The superposition isn't done in real time: two sequences saved at different times are simply sent to the MIDI port together (one of them is the previous sequence). As a result, the two sequences are completely unrelated, but it's interesting to note that our perceptual system (at least that of listeners accustomed to a certain type of contemporary music) tends to create correlations. Indeed, also because two different timbres are used here, this fragment is more interesting than the previous one.
Another superposition of various sequences assigned different timbres, but all attributable to "biological" sounds (birds, insects, meows, etc.). The result is decidedly pleasant because all the sounds fall into a specific typology, and our brain has an easy time building connections and interpreting the whole thing as a fairly coherent soundscape.
On the Kosmophone website you can listen to other examples.