Latest item from Florian Hecker is called Synopsis Seriation (EDITIONS MEGO 256), and it’s a large-scale, heavy-duty project expressed as two hour-long CDs, packaged in a super jewel case for extra weight and heft.
Plenty of inexplicable digital tones and an aura of scientific research; I would sum it up (if pushed) as another example of Hecker’s very advanced approach to synthesis. This particular one sees our man reunited with Alberto de Campo, the modernist composer and software engineer who collaborated with Hecker and Russell Haswell on Kanal GENDYN in 2012, itself a fairly extreme example of computer-process music that seemed to mix up ideas about modern electronic composition with techno dance music, by way of an investigation of sewer systems. Gotta admit I have found myself increasingly alienated from Hecker’s projects over time, finding 2011’s Speculative Solution hard to listen to, accompanied by texts that were impossible to understand; there was also the record Acid in the Style of David Tudor, which was equally abstruse but at least produced exciting noise when spun. And there was the recent vinyl item in the Portraits GRM series, which I evidently seem to have enjoyed at some level.
Besides de Campo on today’s release, there are other software geniuses who have been roped in to assist, namely Bertrand Delgutte and Jayaganesh Swaminathan (Auditory Chimera algorithm), Bob Sturm (DBM Sparse Decomposition algorithm), and Axel Röbel (Texture analysis and resynthesis algorithm). The Chimera thing may have a connection to those 2012 LPs Hecker made for the label – Chimerization and Chimärisation. We see images in the form of visualisations produced by Vincent Lostanlen, several pages of grids printed in utilitarian shades of green and pink, whose array of data and numbers looks fascinating – a dense field of printed marks occupying a zone somewhere between op-art and database reports. The back page of the booklet speaks with some authority, and in a very terse manner, of things like “time-frequency scattering extracts spectrotemporal modulations according to a multiresolution pyramid scheme”, illustrating these concepts with small diagrams. This is all beyond my mental span.
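For readers as baffled as I am by the “multiresolution pyramid” phrase, the gist is this: the same signal is analysed at several successively coarser time scales, so that slow and fast spectrotemporal modulations each show up at their own level. Below is a loose sketch of that idea only, using a plain short-time FFT and naive decimation; it is emphatically not the actual time-frequency scattering algorithm used on the record (the function names `stft_mag` and `multires_pyramid` are my own inventions for illustration).

```python
import numpy as np

def stft_mag(x, win=256, hop=128):
    """Magnitude spectrogram via a Hann-windowed short-time FFT."""
    w = np.hanning(win)
    frames = [x[i:i + win] * w for i in range(0, len(x) - win + 1, hop)]
    return np.abs(np.fft.rfft(np.array(frames), axis=1))

def multires_pyramid(x, levels=3):
    """Crude multiresolution pyramid: the same spectrogram computed at
    successively halved sampling rates (decimation by 2 per level).
    Each level therefore covers a coarser time scale than the last."""
    out = []
    for _ in range(levels):
        out.append(stft_mag(x))
        x = x[::2]  # naive decimation (no anti-alias filter -- sketch only)
    return out

# toy signal: a rising chirp, whose modulations look different at each scale
t = np.linspace(0, 1, 8192)
sig = np.sin(2 * np.pi * (200 + 400 * t) * t)
for lvl, S in enumerate(multires_pyramid(sig)):
    print("level", lvl, "spectrogram shape:", S.shape)
```

Each level halves the number of time frames while keeping the same frequency-bin count, which is the “pyramid” part of the scheme.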
According to the label, the release represents Hecker’s “current research in machine listening¹ and music information retrieval, where the ‘ghosts in the machine’ are unsupervised, engineered operators designed to extract auditory features from a signal”; and the achievement of Synopsis Seriation is summarised thus – “by embracing time, succession, and sound as an immaterial, its multitude of auditory perspectives and encoded logic challenges a traditional synoptical overview of analytical architecture and resynthesized sensation.” I think this may mean that Hecker is moving towards saying something about the nature of human perception, the way that we experience time and space through our eyes and ears, and attempting to offer a new and challenging perspective on the matter. He does it through synthesised music, but I think chiefly what interests him is the processing – the constant analysis and re-analysis of material, such that whatever sources he pushes through his multiple algorithms keep on changing and mutating. And that word “unsupervised” worries me; suggesting he turns on his machines in the lab and then leaves them to get on with it, running their way through the program until it’s completed. I suppose that because it’s all based on computer data, it gives him the opportunity to measure it and quantify it in certain ways.
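To make that label-speak slightly less ghostly: an “unsupervised, engineered operator” extracting an “auditory feature” can be as humble as a formula that reads a number off a signal with no human labelling involved. A classic textbook example (my choice, not anything documented about Hecker’s actual operators) is the spectral centroid, the magnitude-weighted average frequency of each analysis frame:

```python
import numpy as np

def spectral_centroid(x, sr=44100, win=1024, hop=512):
    """Per-frame spectral centroid: the magnitude-weighted mean frequency.
    A classic 'auditory feature' extracted with no supervision or labels."""
    w = np.hanning(win)
    freqs = np.fft.rfftfreq(win, d=1.0 / sr)
    cents = []
    for i in range(0, len(x) - win + 1, hop):
        mag = np.abs(np.fft.rfft(x[i:i + win] * w))
        cents.append(np.sum(freqs * mag) / (np.sum(mag) + 1e-12))
    return np.array(cents)

# one second of a pure 1 kHz tone: its centroid should sit near 1000 Hz
sr = 44100
t = np.arange(sr) / sr
tone = np.sin(2 * np.pi * 1000 * t)
print(round(float(spectral_centroid(tone, sr).mean())), "Hz")
```

Run over an hour of audio, an operator like this quietly produces a stream of measurements, with nobody supervising it; multiply that by a battery of far cleverer algorithms and you have something like the lab the reviewer imagines.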
This work still feels very arid and remote to me, and while admittedly I find this record less alienating or exhausting than some of his previous statements, I quickly find myself getting bored with it. This may be because the actual sound of it is so limited, so unvarying; no dynamics, no modulation of the tone, just lots of “information” transformed into flat sounds and following a logic that’s hard to keep up with. Very few concessions are made to traditional aesthetics of musical enjoyment. I feel like the concept is trapped in a rigid grid of its own making; those who do support it seem convinced that the value of these experiments is self-evident, yet all they are able to do is describe it, restating the parameters of the concept and repeating what is happening in the process. I am still struggling to see the point behind all this self-referential material. Arrived 5th May 2021.
- This is the technology that allows devices like Siri and Alexa to “understand” your requests. ↩