Paul White (Sound on Sound magazine) interviewed Klaus Schulze in 1999 about the developments he’d like to see in synthesizer design. Please note that all copyrights remain with the magazine and the author.
Though many people still associate Klaus Schulze with Tangerine Dream (he was part of the group at the very start of the ’70s), that relationship actually formed only a very small part of Klaus’s career. Since then, he’s become widely respected as a leading electronic music composer in his own right.
I was invited to meet Klaus when he agreed to help launch the Quasimidi Polymorph synthesizer (reviewed in SOS January 1999) at the Turnkey music store in London’s West End. However, I thought that instead of talking about his career and music, ground already well‑covered in numerous other interviews (see SOS February 1996), it would be interesting to get his views on the way electronic instrument design is evolving.
Even if Klaus’s love affair with analogue instruments hadn’t been as well documented as it is, his almost reverential interest in the wall of vintage analogue modular synths that reside in the Turnkey’s Loopstation basement would have given the game away to even the most casual observer. However, he’s by no means an analogue‑only man, as our interview was to prove.
When you last spoke to SOS, you weren’t entirely convinced by physical modelling as an alternative to a true analogue synthesizer. But as you’ve helped to launch the Quasimidi Polymorph, I imagine you’re not quite as unsympathetic towards the digital modelling of analogue as you were three years ago?
“When I first experienced digital synthesizers, I liked them because for the first time ever, I could switch a machine on, play an A and it would be in tune. Even so, I soon felt something was missing in the sound. The technology has now progressed from the point of storing sound samples to modelling analogue oscillator sounds, and in the case of Quasimidi’s Polymorph, you can start off with an oscillator that’s something like a Moog and then put it through a filter. In fact, there are a number of interesting digital instruments around at the moment, most of them from small companies, such as the Virus synth from Access and the Novation Supernova. There’s also the Nord Lead, though I don’t like that quite so much, as it sounds to me more like an ARP than a Moog.”
The Plus Points Of Analogue
What is it that you like so much about the original Moog sound? Is it mainly down to the filter or is there more to it?
“The filter has a real 24dB response, and also the human error built into the oscillators means the sound is always moving. At home I have a Studio Electronics SE1 Minimoog rack, but they use a digitally controlled tuning system to prevent drift, which I think makes the sound different — though not by much. I am very sensitive about Moog sounds and to me the SE1 still sounds closer to the real thing than digital modelling.”
Surely it would be possible to build in a random but limited amount of tuning drift into a physically modelled instrument.
“Yes, I think they have tried it but then it becomes regular — it sounds as though it’s been mathematically detuned. The Minimoog takes a long time to stabilise, and during that period, you may find one oscillator goes up in pitch, one goes down and the third stays more or less where it was. Because of this, the same instrument can sound slightly different every day. And it isn’t just the tuning. All the analogue components drift slightly with time or temperature, and when you put all that together, the sound is very special. The ARP Odyssey and 2600 were always more stable than a Minimoog, but there was still enough drift to make them feel alive. Even now, when I’m using my Minimoog on stage, it’ll often be out of tune when I start to play. I’ll sometimes start with a closed filter, then as I open the filter I tune the oscillator so that by the time the filter is fully open, the instrument is back in tune!
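Schulze's complaint — that built-in detune "sounds as though it's been mathematically detuned" — is essentially the difference between a fixed pitch offset and a slow, bounded random walk, where each oscillator wanders independently over time. A minimal Python sketch of that idea (all parameter values are illustrative, not taken from any real instrument):

```python
import random

def drift_walk(steps, step_cents=0.05, limit_cents=8.0, seed=None):
    """Bounded random walk in cents: the pitch wanders slowly and
    independently rather than sitting at one fixed detune offset."""
    rng = random.Random(seed)
    cents = 0.0
    path = []
    for _ in range(steps):
        cents += rng.uniform(-step_cents, step_cents)
        # Clamp so the drift stays musically small, like a warm analogue VCO.
        cents = max(-limit_cents, min(limit_cents, cents))
        path.append(cents)
    return path

def detuned_freq(base_hz, cents):
    """Apply a detune in cents: 1200 cents is exactly one octave."""
    return base_hz * 2 ** (cents / 1200.0)

# Three oscillators, each with its own walk: one may rise, one fall,
# one stay roughly put -- so the instrument sounds slightly different
# every time, which is the "alive" quality Schulze describes.
oscillators = [drift_walk(1000, seed=s) for s in (1, 2, 3)]
freqs = [detuned_freq(440.0, walk[-1]) for walk in oscillators]
```

A static detune would just be `detuned_freq(440.0, 5.0)` forever; the walk above is what makes the beating between oscillators evolve rather than repeat.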
“The visual presentation of an instrument also affects the way you feel about it — the wood and the large knobs. It’s like food — if it looks ugly, you probably think it tastes less good than nicely presented food. If you have a modular synth that has no controls other than what you see on a computer, it probably also affects how you feel about it. A system like that can be great for creating atmospheric sounds that need complicated patching, but for live performance where you want to work with the controls in real time, it’s not really the way. With my modular system, I know that if I lean over and turn a knob, something will happen. With a computer, you may be playing with your eyes closed, then to change a sound, you have to use a mouse to find something on the screen.”
Presumably it’s better if you have some form of hardware controller?
“Yes, and of course there are people like Access who build hardware controllers for the Microwave — more people are doing that and it’s like going back to the ’70s with all those knobs. I remember when the Roland JD800 came out in the early ’90s, a lot of players initially complained because it had too many knobs. They’d been brought up with Yamaha DX7s or Roland D50s and weren’t used to the idea. It wasn’t until the techno thing happened that people started to get back into real-time control — then they wanted to have knobs. With techno music or DJs, they’re like me on stage: they don’t know exactly what they’re going to do in 10 minutes’ time, so they want to be able to make quick changes directly from the controls. Using a digital system with buttons and menus, you’d have to take a half-hour break to achieve what would be a moment’s real-time improvisation with an analogue instrument.”
Accepting that even the current physically modelled instruments don’t sound exactly like their analogue counterparts, do you think models like the Polymorph, that do have lots of knobs, provide enough control or is too much still hidden away inside?
“It’s getting to the stage now where just about all the controls you’d want to adjust during a performance are available. For example, they have all the controls for the envelopes, oscillator waveforms and filters as well as a switch for 12dB/octave filters, like Oberheim, or 24dB/octave, which is like Moog. Maybe they’ll build even better things in the future, but what they have done is very good. Of course, as I said in the beginning, they’re still just emulating analogue synthesis and to me they don’t sound quite the same as true analogue synths, but for those who came into music in the late ’80s and who haven’t used the original Moogs so much, perhaps that doesn’t matter. The real question is: why do people suddenly want instruments that sound like the old stuff?”
This shot of Klaus Schulze’s studio illustrates both his affection for old Moog analogue synthesizers (his Moog C3 modular and two Minimoogs) and his willingness to embrace new digital gear such as the Quasimidi Raven (blue keyboard) and Cyber 6 (red keyboard).
Analogue and Digital
“I was very happy at first when I had a Fairlight, a Korg M1 and loads of other digital gear. I put all my old analogue instruments in the cellar, though it’s a good thing I didn’t give it all away — I kept my old Moogs. In the mid ’90s I was doing Dark Side of the Moog with Peter Namlook from Germany and he asked if we could try my old Moog modular system. It hadn’t been plugged in for 10 years by then, but when I played it again, I was surprised by how good it sounded, and I wondered how I’d ever forgotten.
“Now I use both analogue and digital. I have nothing against digital, because everything has its good features. I don’t agree with Jean-Michel Jarre’s point of view where he wants everything to be analogue — that’s as limiting as doing everything digitally. If you want a nice lead sound, you can get it from a modelling synth or from a Minimoog, whereas if you want a sampled sound, then clearly a sampler is the right tool for the job. In the process of making music, you get inspiration from a particular machine because you turn a knob and discover something beautiful. It’s like you’re triggering the machine, but sometimes it triggers you! And you should be open to that.”
What then do you feel are the good points of physical modelling synths emulating analogue?
“The first thing is that they are stable, but also they can be made to produce some types of sounds that would be impossible with all-analogue circuitry. For example, the Polymorph includes some sampled waveforms from Mellotrons and other things, as well as models of analogue oscillators. You can also do things that might require 20 oscillators and three filter banks on an analogue machine — you’d take up the capacity of the whole machine for one sound. I also like some of the digital sounds from the old Korgs, like the fretless bass and fluegelhorn.
“Samplers are also very useful, and I sometimes sample a Minimoog to leave it free for another job. Sampling it makes it sound different, but this can be good, and you can also play it at a lower octave to create atmospheric sounds. I’ve also no objection to using sample CD‑ROMs when appropriate and some of the Spectrasonics libraries are very good, especially Symphony Of Voices and Distorted Reality.”
Quick Work If You Can Get It
“Another good thing about digital synths is that they can help you get your ideas together very quickly. I have a Sound Canvas GM synth in my studio that gives me 16 MIDI channels of sounds including drums, so I can compose something using sounds that are roughly right, then replace them with something else later. Digital synths also remember your sound changes, whereas with the Minimoog, nobody ever remembers exactly how they were set up to produce a particular sound.”
Perhaps one reason some players don’t like using the old analogue machines is that they have no presets.
“I think that just using presets is a big sickness in today’s music, though on the other hand, a lot of people are not used to creating their own sounds and they’re probably inspired by sounds they’ve already heard on the radio. In fact, a lot of people would rather load a sound into a sampler and play rather than create their own sound.”
Isn’t part of that problem down to the usual menu structure of a typical digital synth?
“Yes, that’s what I hated about the DX7. It was the same when I bought my rackmount Roland JD990 after the JD800 keyboard — there were no knobs or sliders. Of course, I can edit my sounds in the keyboard, store them on a card, then put them back into the rack module. I’ll be interested to see what people like Korg and Roland will do in the future, because apart from a few instruments like the JD800, most editing is still via menus and buttons with perhaps just a few knobs. What I’d like to see is a 24-voice, true analogue synthesizer that could really handle everything and where the effects could be edited independently on each part. With a lot of digital synths, if you put them into multi mode, the voices sound different and the effects setup is always a compromise.
“Even expensive instruments don’t have proper multitimbral operation — the voices are somehow connected. You may get separate outputs where the effects are only on outputs one and two leaving you to add external effects to sounds routed via other outputs, but that’s not really good enough. If I route a sound to output four, I want its effects to come out on output four as well, and only on output four. Obviously it’s a question of money, because to keep everything separate, you need to have different effects processors for each part, but as the technology becomes cheaper, it may happen.”
Space And Time
“Most music technology is first developed in other areas, such as NASA or computing, and I think technology will make another big leap in the early years of 2000. The real problem is finding technicians who are creative enough to design the right instruments, because most of these guys are working at their desks every day, not working in studios or performing on stage. They’ll tell you that by pushing certain buttons you can do this or this — and yes you can, if you have the time — but on stage you have no time. You have to be able to make a change in seconds. You can’t read an owner’s manual to find out what to do. It’s OK if you do exactly the same thing every night, but I want to be able to improvise so I need separate access to all the parameters for all the parts.”
That’s all very well, but if you can model a multitimbral modular synth system in a rack box, how do you provide a hardware control system? It would cover an entire wall!
“Yes, like the modular Moog system was. There will always be a decision to make between cost and having all the controls you want. If you want 10 sequences running at the same time and you want independent control over all the filters, you have to be aware that you need more money. For the DJ who wants to run maybe two sequences plus a drum machine, it’s not so expensive.”
How complex are your recordings these days?
“Even though I have four Minimoogs, they are monophonic, so they can only produce four tones at once. So you set up a bass line, a solo line plus two more parts and you are done. When you’ve got something right, you can record it to hard disk to free up a Minimoog, and with low-cost systems such as the Audiowerk8 card around now, most people can afford to do this. With only two analogue instruments you can do maybe 12 tracks. When I first started out in recording, people would ask how many tracks I had and I’d say 24. They’d ask how come I didn’t need 48 tracks, but the truth is that if you have a good musical idea, you might only need three tracks. Wanting more equipment can be just an excuse, unless you’re working with real drums where you do need a lot of recording tracks.
“There is a tendency for people to blame a lack of creativity on not having the instruments they think they need. But I remember a concert where my modular system had broken down and I had just a Polymoog, a Minimoog and a small ARP sequencer to work with. I had to get through a one-hour concert using this, but it worked. Of course, it didn’t sound as rich as I wanted it to, but the audience didn’t notice because they weren’t thinking about how the sound was achieved, they just enjoyed the music.”
What do you think of the way people use sequencers today?
“A lot of people are not playing music any more, but rather using the computer as a calculating machine where they play one or two notes and then copy them or move them around. Instead of programming drum sounds, they’ll use sampled drum loops. Sometimes they do it perfectly so you don’t know you’re not hearing a real performance, but only a few people know how to do that properly — for example, Solar Moon System. They’ll use drum loops too but they play other things on top of that. I could never work that way — I play drums on the keys and I might loop something after 200 bars. This is probably because I used to be a drummer. A lot of techno music is made directly from the computer without much real playing and that’s OK too.
“I think there are two approaches to music — one is to work within the computer and move things around as your emotion suggests, while the other is just to play everything, and if you make one or two small mistakes, use the sequencer’s editing functions to correct them. If the mistake is more than four or five notes, it’s usually quicker just to play the part again.”
What sequencer do you use and how do you think it could be improved?
“I use Emagic Logic Audio Platinum because it is powerful and also very stable, but like all these complex sequencers, there are things that could be better. Lots of colour looks nice, but this takes memory and computing power. I don’t want to make a picture of the screen — I just want to work with it! There are also lots of things that you never use that can get in the way of the things you do. I remember with my old Atari running Notator that you could use Shift Q to disable the scoring side of the program to free up more memory. Okay, we have much more memory today, but there’s still a lot of waste. For example, Emagic’s Environment is hugely flexible, but there are many things I know I’ll never need. You often see a menu with 20 options when you know you’ll only ever want to see four of them.”
I also use Logic and I’ve noticed how large the song files can be even before you play any notes. That’s because the Environment takes up a lot of space, so why can’t there be a way to have several songs using just one Environment?
“When I do remixes and somebody sends me over a MIDI file, I have to recreate everything and it takes a long time to get it back the way they had it running in their own studio.”
And if they’ve used folders on the Arrange page it’s even worse!
“Oh yes — you open a folder and you have tracks everywhere! It’s the same with ‘Demix All Channels’ — your Arrange page is full of things all over the place. Every time you import somebody else’s MIDI files, you have to start from scratch, moving data onto the right tracks, sorting out the names and assigning them to suitable instruments. There must be a more intelligent way to deal with this.”
I’ve often thought it would be helpful if you could use colour in the menus to colour options that you don’t use often — that way, the ones left in black could be the ones you want. In fact, it would also be useful to be able to colour patch names in the patch lists.
“There are always annoying things in any software, but on the whole these are very powerful systems that are far more complicated than in the days of analogue recording. As to the future, perhaps it will eventually get like Star Trek where you use voice commands to say ‘Computer, I want a B‑flat minor chord playing a flute sound on the first bar — no, the flute should sound a bit darker and have more reverb’. The last step is that you plug a MIDI cable into your brain!”
Manufacturers, Hear This…
You are the kind of player manufacturers listen to, so is there anything you’d like to say about the way synths and sequencers are progressing?
“We were talking earlier about true multitimbral effects, but my dream is still to have some kind of modular workstation where Korg, Roland, Yamaha and all the different manufacturers agree on a format and then provide synth modules or other devices as plug-in cards that fit inside a common system with a common user interface. Perhaps the manufacturers won’t like this because they want you to buy everything from them, but most musicians want a mixture of different things. It would be like a computer system with a lot of plug-in slots, so rather than using the computer to provide software synthesis, each card would be a hardware synthesizer or sampler. It’s like a Digidesign system for audio recording where the DSP on the cards does all the work rather than expecting the computer to do everything.
“Outside you’d have just one keyboard and one control surface where the same knob controls the same parameter for whatever synth is selected in your sequencer track — one knob for reverb, one for envelope attack, one for LFO and so on. This would require the manufacturers to cooperate so the same MIDI controller information addressed the same parameter on each instrument, and of course on some instruments there would be unused knobs as not all synths have the same parameters, but I think something like this is possible. On the other hand, maybe a large touch screen would be a viable control option as this could provide different controls for different types of synthesis without having redundant knobs. If you only want to use a few controls in live performance, you could save the patch so that only the parameters you need to adjust are visible. Not only would such a system be easier to manage, it would also take up less space than a conventional system, there would be less wiring to worry about and it should be a lot cheaper. A tower system with 20 card slots that connects to the computer via SCSI or something shouldn’t cost the world.
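The routing Schulze describes — one physical knob sending one agreed MIDI controller number, with each synth card mapping it to its own parameter or simply ignoring it — can be sketched in a few lines of Python. Everything here is hypothetical (the class, the card names, the knob layout); the CC numbers follow common MIDI conventions (91 for reverb send, 73 for attack time), but no real manufacturer agreement like this exists:

```python
# Common knob assignments the hypothetical manufacturers have agreed on.
CC_REVERB, CC_ATTACK, CC_LFO_RATE = 91, 73, 76

class SynthCard:
    """A plug-in synth card that interprets shared controller numbers."""
    def __init__(self, name, cc_map):
        self.name = name
        self.cc_map = cc_map                    # CC number -> parameter name
        self.params = {p: 0 for p in cc_map.values()}

    def handle_cc(self, cc, value):
        param = self.cc_map.get(cc)
        if param is not None:
            self.params[param] = value
        return param                            # None: this knob is unused here

# Two cards from different "manufacturers" share the knob layout;
# the second card has no LFO, so that knob does nothing on it,
# just as Schulze anticipates for instruments with fewer parameters.
moog_like = SynthCard("lead", {CC_REVERB: "reverb", CC_ATTACK: "attack",
                               CC_LFO_RATE: "lfo_rate"})
sample_card = SynthCard("choir", {CC_REVERB: "reverb", CC_ATTACK: "attack"})

selected = moog_like                  # follows the active sequencer track
selected.handle_cc(CC_ATTACK, 64)     # same knob, same meaning everywhere
selected = sample_card                # switch tracks: knobs retarget instantly
selected.handle_cc(CC_LFO_RATE, 100)  # ignored: no LFO on this card
```

The point of the sketch is the `selected` variable: switching sequencer tracks retargets the whole control surface at once, which is exactly the instant-switch behaviour described below.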
“It may take some time to configure the control system to be exactly as you’d like it, but you’d only have to do it once. We already do this with sequencers where we may spend two or three days creating an Environment and keying in all the patch names. Such a system would be very fast to use, because as soon as you selected a new sequencer track, the controls would instantly switch so that they worked for the instrument assigned to that track.”
Perhaps now that companies such as Yamaha and Emu are building serious instruments on cards, there’ll be more pressure for other companies to co-operate, or at least for a third-party company to build an external PCI chassis with a hardware control surface?
“It’s possible, but I still think there will be resistance from the manufacturers. It will need one or two companies to start things off, then the others will follow.”
Is there anything new you’d like to see in sound creation, or do you think we already have all the sound possibilities we need?
“I think that when you look at all the synthesis types already available, it would take a lot of dedication for anyone to come up with something that sounded really different. What’s missing so far is a system that can perfectly resynthesize voices so you can speak into it, then combine elements of your voice with synthesis — not just a vocoder effect. For example, to use the components of your voice to modify the envelope or filter of a different sound, or vice versa. Early attempts at resynthesis were disappointing, but the technology is getting better and there are a lot of creative possibilities. You may even be able to use different elements of the natural sound to control maybe four instruments at once. I don’t know how it will sound, but I’m definitely not happy with existing resynthesis technology.”