
~ GETTING STARTED WITH CSOUND ~
by Andy Hunt

This text is taken from a series of lectures prepared by Andy Hunt[1] for Music Technology students at the University of York. Edited by Archer Endrich.

PART 1 – Origins of Csound: Historical & Musical Setting

  • Music as Digits
  • History
  • Composing & Performing
  • Some Preconceptions

Csound is one of the 'top-of-the-range' software synthesis programs. What image does this conjure up for you? Why do you think composers consider it to be so important, and what does it do? What musical results can you achieve with it, and how does one use it? These are the questions which this tutorial may begin to answer.

Csound can be described as a programming language. It enables you to specify instruction sets for both audio and musical generation and processing operations:

  • a language: it is a way to write out instructions
  • audio: directing the physical mechanics of making sounds
  • musical: shaping sounds and patterns of sound events
  • generation: making sounds with numbers, using design components provided by the language
  • processing: modifying sounds in various ways, referred to by the umbrella term, 'signal processing'.

Music as Digits

We know that at the lowest level computers work with information expressed as a series of 0s and 1s. A sound signal is represented as a stream of numbers, typically of 16 bits each ('16-bit'), with tens of thousands of these samples per second of sound (e.g., a sample rate of 44100 samples per second).
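
The arithmetic behind these figures can be made concrete. The following Python sketch (not part of the original tutorial; the names are my own) works out the data rate of mono 16-bit audio at 44100 samples per second, and quantises one cycle of a sine wave into 16-bit integer samples:

```python
import math

SR = 44100                     # sample rate: samples per second
BITS = 16                      # bits per sample
MAX_AMP = 2**(BITS - 1) - 1    # 32767, the largest positive 16-bit value

# One second of mono 16-bit audio occupies SR * (BITS / 8) bytes.
bytes_per_second = SR * BITS // 8
print(bytes_per_second)        # 88200

# Quantising one cycle of a 440 Hz sine wave into 16-bit integers:
samples = [round(MAX_AMP * math.sin(2 * math.pi * 440 * n / SR))
           for n in range(SR // 440)]
print(len(samples))            # 100 samples in (just under) one cycle
```

Every number in `samples` fits the 16-bit range; a whole piece of music is simply millions of such numbers in sequence.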

This raises two basic questions:

1. Given that we wouldn't normally want to work at the level of 0s and 1s, how can we specify sound designs at a higher level?

2. What can we achieve by doing this?

We'll come to the how shortly; here are two main why's:

(1) technical – because of the accuracy of digital representation, sound can be produced and reproduced without signal degradation. Because of this, there are now many forms of digital sound storage: Compact Disc (CD), Digital Audio Tape (DAT), MiniDisc (MD), and Digital Compact Cassette (DCC).

(2) artistic – so we can shape sounds in ways not possible before, and also have control over the final sound of a composition or a recording – composers now have more options than just specifying notes and leaving the rest to others.

As a first example, I suggest that you re-listen to an electro-acoustic composition on one of your CDs which illustrates a kind of music in which change of timbre (sound colour) is the primary feature: i.e., sounds shaped in a new way special to our own era. Consider how you would write this piece down using conventional notation, notes & rests on a stave. Would there be a problem?

History of Csound

Csound has its origins in the intellectually stimulating and fertile environment of the Bell Telephone Research Laboratories (New Jersey, USA), where leading-edge work was being done on handling sound: the scientific development of signal processing techniques. It was here that Max V. Mathews, in the late 1950s, began to experiment with representing music as digits, writing the world's first computer music programs; by the early 1960s he had reached the third of these, Music 3.

Shortly after this, an event of crucial importance took place: he distributed his next version, Music 4, to composers at Stanford and Princeton Universities, with source code (in FORTRAN). This meant that the conceptual basis of Music 4 could be studied and further developed by composers. Separate development paths soon emerged. By 1973, after considerable work, Barry Vercoe created the version known as Music 11 to run on a PDP-11 computer at MIT.

Bell Laboratories made another immense contribution when Dennis Ritchie created the 'C' programming language. C was better suited to the tasks facing music programmers, and also produced more portable code, so it soon became a standard. Barry Vercoe rewrote Music 11 in C, thereby creating Csound.

Csound has the following important features:

  • separate control rate and audio rate processing (audio rate is the same as the sample rate; control rate is a slower rate used to shape certain features of the sound)
  • it is widely portable under 'C' and UNIX (UNIX is a comprehensive operating system widely used in education and research).
  • it is structured in a way which allows for further development. In 1990 continuous control of timbre was extended to spectral data types (Phase Vocoder), and in 1992 real-time control under MIDI was introduced. The latest versions also make it easier for users to add their own functions.
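
The first of these features, the split between control rate and audio rate, can be sketched in a few lines. This Python illustration is not from the tutorial (the names and the decaying-amplitude example are my own); it shows the idea of an envelope value updated only once per control period, while the waveform itself is computed at the full sample rate:

```python
import math

SR = 44100          # audio rate: one value per output sample
KR = 4410           # control rate: a slower rate for shaping parameters
KSMPS = SR // KR    # audio samples per control period (here 10)

def render(dur=0.1, freq=440.0):
    """Compute a decaying sine: the amplitude envelope is updated
    at the control rate KR; the sine itself runs at the audio rate SR."""
    out = []
    n_ctrl = int(dur * KR)              # number of control periods
    for k in range(n_ctrl):
        amp = 1.0 - k / n_ctrl          # envelope: recomputed at KR only
        for s in range(KSMPS):          # inner audio loop at SR
            n = k * KSMPS + s
            out.append(amp * math.sin(2 * math.pi * freq * n / SR))
    return out

sig = render()
print(len(sig))     # 4410 samples = 0.1 s at 44100 Hz
```

Computing slowly-varying controls at a lower rate is a large saving: here the envelope is evaluated 4410 times rather than 44100 times per second, with no audible difference.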

Composing and Performing

Let's pause for a moment to compare traditional approaches to making music with the way computer music is created.

In traditional music-making, instrumentalists invent and perform directly on their instruments, from improvisation to finished performance; alternatively, composers create a score which is then 'processed' by these instrumentalists. Each musical instrument has its own physical characteristics and social implications. The instrumentalists may or may not make their own instruments. The composers may or may not improvise on instruments in the process of composition, and may or may not participate in the final performance; if they do not, their role ends with the score and attendance at rehearsals.

It is interesting to observe that the creation of music with the help of the computer involves all of these steps:

1st the composer defines the instrument itself to realise the sonic image he or she has in mind.

2nd the composer uses pre-set algorithms for sound generation and/or defines new ones, and completes the accompanying event list: the musical score.

3rd the performance program is invoked, with any special processing which takes place at this time. It is here that the nuances associated with live performance need to be realised.

4th the sound is actually produced.

Let's now turn to some example computer music scores. The following are based on Computer Music (Dodge & Jerse, Ex. 1.1). The scores define a brief tune in D major at crotchet = 60 (one beat per second): D-D'-F#-G-A-C#'-A. These notes are expressed as frequencies, starting with 284 cycles per second (the D just above middle C), and the number 20000 sets the amplitude: about mezzo forte.

Music 11

i1   0     1    284   20000
i1   1     1    568   20000
i1   2    .5    370   20000
i1   2.5  .5    392   20000
i1   3     1    440   20000
i1   4     1    554   20000
i1   5     2    440   20000

Music 5

NOT  0     1    284   20000;
NOT  1     1    568   20000;
NOT  2    .5    370   20000;
NOT  2.5  .5    392   20000;
NOT  3     1    440   20000;
NOT  4     1    554   20000;
NOT  5     2    440   20000;

Music 4BF

I  1   0     1    284   20000
I  1   1     1    568   20000
I  1   2    .5    370   20000
I  1   2.5  .5    392   20000
I  1   3     1    440   20000
I  1   4     1    554   20000
I  1   5     2    440   20000

Each of these describes the same thing. There are 7 note events: the 1st, 2nd, 5th & 6th last for 1 second, the 3rd & 4th for ½ second, and the 7th for 2 seconds. The pitches are defined as frequencies (cycles per second), each of which is set at an amplitude (loudness) of 20000 out of a 16-bit maximum of 32767.
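
Because the score is just a list of numbered fields, it is easy to read by machine as well as by eye. As a small illustration (this Python sketch is mine, not part of the original tutorial), here is the Music 11 score parsed into note events:

```python
# Each Music 11 statement has five fields:
# instrument, start time (beats), duration, frequency (Hz), amplitude.
SCORE = """\
i1 0   1  284 20000
i1 1   1  568 20000
i1 2   .5 370 20000
i1 2.5 .5 392 20000
i1 3   1  440 20000
i1 4   1  554 20000
i1 5   2  440 20000"""

notes = []
for line in SCORE.splitlines():
    instr, start, dur, freq, amp = line.split()
    notes.append({"instr": instr, "start": float(start), "dur": float(dur),
                  "freq": float(freq), "amp": float(amp)})

print(len(notes))                             # 7 note events
print(notes[1]["freq"] / notes[0]["freq"])    # 2.0 -- the octave leap D to D'
print(notes[-1]["start"] + notes[-1]["dur"])  # 7.0 -- the tune lasts 7 seconds
```

Note how the octave appears as an exact doubling of frequency (284 to 568), and how start time plus duration of the last event gives the length of the whole tune.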

Some Preconceptions Examined

Although the above is clear enough, it may immediately raise preconceptions such as:

  • computer music is "all numbers";
  • 'pop music', on one side of a huge uncrossable divide, uses graphical interfaces, synths and sequencers, while
  • 'serious music', on the other side, uses number-oriented, text-driven computer music systems.

This view of things is, however, a rather strange myth, for no such divide exists in the real world. It is true that, especially since the 1970s and for reasons beyond the scope of this tutorial, there has been a growing sense that such a divide is real, but the facts tell another story.

For example, the same people involved in the creation of computer music during the 1960s were also involved in the creation of synthesisers! And careful listening to a great deal of so-called "popular music" reveals plenty of musical experimentation and daring. Real people enjoy and are involved in all aspects and forms of music-making. 'Divides' come about when people try to put music into categories, whereas the living musician is most often concerned with a continuous spectrum of artistic expression.

The point of a program like Csound is that it is 'soft': it can produce different results simply by changing the instructions. Thus the one program, Csound, can in fact emulate the sounds produced by any synthesiser. Listen to the 'koto' sound on a DX7 synthesiser; the same results can be achieved in Csound, e.g., with the 'pluck' opcode. Wired into the synthesiser, as it were, is an algorithm, and this can be reproduced by Csound in its 'soft' environment.
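
Csound's 'pluck' is based on the Karplus-Strong plucked-string algorithm, and the core of that algorithm fits in a few lines. The Python sketch below is my own illustration (function name and parameters are invented, not from the tutorial or from Csound itself): a burst of noise circulates through a short delay line, and averaging adjacent samples damps it into a decaying, pitched tone.

```python
import random

def pluck(freq, dur, sr=44100, seed=1):
    """Minimal Karplus-Strong sketch: a noise burst circulates through a
    delay line of length sr/freq samples; averaging each sample with its
    successor acts as a low-pass filter in the feedback loop, so the
    bright attack decays into a mellow 'plucked string' tone."""
    rng = random.Random(seed)
    n = int(sr / freq)                 # delay-line length sets the pitch
    buf = [rng.uniform(-1.0, 1.0) for _ in range(n)]
    out = []
    for i in range(int(dur * sr)):
        out.append(buf[i % n])
        # Replace the sample just read with the average of itself and
        # the next one: this is the damping that makes the tone decay.
        buf[i % n] = 0.5 * (buf[i % n] + buf[(i + 1) % n])
    return out

tone = pluck(284.0, 1.0)   # one second of the opening D of the example score
```

This is exactly the sense in which the algorithm is 'soft': change `freq` and the delay line changes length, changing the pitch, with no rewiring of hardware.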

Both formats have their advantages. The synthesiser has the speed needed for real-time performance. Software synthesis is slower, but has immense flexibility. As the years go by and computing power increases, both of these will coalesce into a new range of musical instruments, with performance immediacy and software flexibility combined.

For the moment, I suggest that you read pages 10 through 19 of the Dodge & Jerse text[2] (Section 1.3) thoroughly. Part 2 of this tutorial will explain:

  1. how to access Csound on the computer
  2. the basic structure of Csound as a language
  3. initial model orchestras and scores
  4. how to begin to create your own orchestras and scores

In Part 2 we will explore a simple oscillator in some detail: the aim is to work with simple means but nevertheless begin to achieve aurally significant results.

Footnotes

  1. © 1993 Andy Hunt, York, N. Yorks. England
  2. Computer Music – Synthesis, Composition & Performance, Dodge & Jerse (Schirmer Books NY 1985). Also see The Csound Book (ed. Richard Boulanger. MIT Press. Cambridge, Massachusetts. 2000), and the Csound website.

Last Updated Jan. 2016 -- HTML5 version
Revisions: Robert Fraser