(last updated on April 29, 2014)
As I wrote a few days ago, I had decided that I would try to write a Gospel song despite my inability to credibly play an instrument and lack of experience with music composition. The reason: it seemed like an interesting challenge at the time, and I was quite ready to be a clueless novice in the process.
Despite my inability to perform, I do have a history of environmental exposure to music, and it seems to have wedged certain patterns into a part of my brain. Yesterday’s comments were heartening, as some of the decisions I made about chord progression (acquired by mashing keys on the digital piano until they sounded right) were on the mark. Woot!
I spent a few hours over the past couple of days figuring out how to get from my scribbled notes to something I could actually share. Geeky notes follow :-)
What I Have Noticed about Music Structure
I’ve noticed that the popular music I listen to has a definite pattern to the combination of notes that are played in each temporal group. What I’m calling “pattern to the combination of notes” corresponds, I think, to the chords selected from the major or minor scale in a particular “temporal group”; this group is called a bar or measure. Part of the joy of music seems to come from the combination of predictability balanced with surprise; a lot of popular songs follow a formula of some kind, where there are a certain number of bars with certain combinations of chords, but the way the notes are expressed is almost infinitely varied.
There are 12 notes in Western music, and all of our songs are made of combinations of them. From a graphic designer’s perspective, one could think of them as the available named colors. As with colors, certain notes just seem to go well together, while others don’t. I once tried to figure out why this was by looking at the relationships between pure sinusoidal tones, as I know from Electrical Engineering that sine waves are the fundamental building blocks of all more complex waveforms. As it turns out, the waves that sound good together tend to share harmonics in interesting whole-numbered multiples, but I don’t know if this is really what creates the impression of harmony.
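The shared-harmonics idea is easy to sketch in a few lines of Python. The frequencies here are my own illustrative picks, not anything from the post: two tones a perfect fifth apart (a 3:2 ratio), A at 220 Hz and E at 330 Hz. Every third harmonic of the lower tone lands exactly on every second harmonic of the upper one.

```python
# Two tones a perfect fifth apart (3:2 frequency ratio).
# Illustrative frequencies: A3 = 220 Hz, E4 = 330 Hz.
f1, f2 = 220.0, 330.0

# First twelve harmonics (whole-number multiples) of each tone.
harmonics1 = {f1 * n for n in range(1, 13)}
harmonics2 = {f2 * n for n in range(1, 13)}

# The frequencies both tones have in common.
shared = sorted(harmonics1 & harmonics2)
print(shared)  # [660.0, 1320.0, 1980.0, 2640.0]
```

Every shared value is a multiple of 660 Hz, which is 3 × 220 and 2 × 330; that coincidence pattern is what makes the 3:2 ratio so consonant.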
I came up with a progression of chords (each of which consists of 3 notes) that sounded “right” to me. I could have started with any good-sounding chord, but I found that for the next chord to sound right, only certain notes were available; otherwise it would sound weird. The pattern I became aware of reminds me a lot of a good episode of The A-Team, the formulaic-yet-enjoyable 80s television hit featuring Mr. T:
- 4 bars of introduction, which sets the style of the song. Like when the A-Team gets contacted by the help-seeker of the week.
- 4 more bars of “character building”, fleshing out where this song is going to go emotionally. Very similar to Hannibal working out how to handle the problem with Face, Murdock, and B.A.
- 4 more bars of “conflict setup”, which feels like an intermediate mystery to be solved. The Team is dealing with the operational challenges of the mission, but before the final conflict has arisen.
- 4 more bars of “conflict resolution”, a plot point that is closed (for now). The Team is finally at the place where they need to take action to prepare for the final act.
- 16 more bars of “whooping it up”, which is when the show features the “Let’s build something kickass and shoot up the place” montage.
- Repeat as necessary.
I found I couldn’t break certain expectations outside of certain intervals, or it sounded wrong. I couldn’t change a chord in the middle of a bar, because…well, I didn’t like it. I couldn’t change the key until at least 16 bars of that setup had occurred…key changes are like changing the lighting in a room in a movie. And even then, it seems like the keys need to share at least some notes immediately before the break so there is some commonality. The chords within a series of bars had to relate to each other in some way too, in some manner I can’t quite grasp. It might be pure familiarity at work here, hundreds of years of the same structures informing thousands and thousands of songs. Maybe some of them just were easier to play on certain instruments. Others just sound good; not all chords are created equal when it comes to harmonic bliss. My friend Lee pointed me toward equal temperament as a concept; it turns out that all these various “official notes” of Western music are a hack so a piano can have a fairly decent go at playing different scales “equally well”. Which really means “equally bad”…they’re close, but not exact.
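You can see the “equally bad” hack in numbers with the standard 12-tone equal temperament formula (a sketch of the general math, not anything specific to my song): each semitone multiplies the frequency by the twelfth root of 2, so an equal-tempered fifth is 2^(7/12), which is close to, but not exactly, the pure 3:2 ratio.

```python
import math

# A "just" perfect fifth is the pure 3:2 ratio.
just_fifth = 3 / 2

# The equal-tempered fifth: seven identical semitones of 2**(1/12) each.
et_fifth = 2 ** (7 / 12)

print(just_fifth)             # 1.5
print(round(et_fifth, 6))     # 1.498307 -- close, but not exact

# The gap in cents (hundredths of a semitone) -- about 2 cents flat.
cents_off = 1200 * math.log2(just_fifth / et_fifth)
print(round(cents_off, 2))
```

That roughly 2-cent error on every fifth is the price of letting one keyboard play all twelve keys passably instead of one key perfectly.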
I was dimly aware that chords and chord progressions describe a song pretty well; it’s like using a grid in graphic design to pre-solve certain spatial relationships and create proportional harmony in the division of space. If I could just pick a chord progression that sounded OK to me, that would solve a lot of problems automatically. This presumes that I could actually tell if the notes I was picking were in the progression or not. This is something I seem to be able to do, but I sometimes wonder if all musicians actually can tell. I have lost count of the number of college band guitar solos that seem to be connected to a different song than the one they were playing.
Picking a Progression of Chords
And then…I was stuck. I couldn’t pick the progression because I couldn’t hear the rhythm in my head to further constrain the problem space. I tried humming a few things to myself, trying to match the keys on the keyboard, but this felt like spinning my wheels. I eventually realized that I could impose some additional structure by creating a fake lyric that just had the right number of syllables in it. I basically picked cliché phrases from half-remembered gospel songs and piled ’em up to create a few verses, then hummed how I thought a singer might deliver them.
To find the right keys, the technique I used was to just play all the keys until I found the one I wanted. The process was very similar to sketching very loosely with lots of overlapping lines to outline rough shapes: our eyes pick the line we want, and then we ink it. In a similar way I thrashed keys that were in the direction I wanted the song to go, and ignored any note that didn’t sound right. Once I identified something that sounded good, I wrote down the name of the note based on the one thing I remember from piano lessons: how to find C. I scribbled these notes down on my paper, and used them to play the chords over and over again while humming my fake lyrics. This was a laborious process because I had trouble matching my chord notes to the actual keys on the keyboard, and my dexterity was poor. However, I did manage to come up with this chord progression in the key of C major for the first 8 bars:
GCE - BEG - ACE - GBD
ACE - CFA - GBF - GCE
Each group of notes is one measure, and the notes in each group are played simultaneously to “fill the space”. When I played it out, using an organ sound, it sounded “right” but it was also very filling. These are the big meaty-sounding chords, with lots of shared harmonic relationships. I believe Lee called these “major triads”, but I’m not sure if that’s what I ended up with. It did sound familiar and somewhat churchy, though as I mentioned in yesterday’s comments, that last group (GCE) seemed to “end” the song rather abruptly, leaving me “nowhere to go”. I was fascinated to learn from a commenter that this phenomenon actually has a name: cadence, which is the “punctuation” of music. Some progressions just sound more “final” than others. Neat.
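For fun, here’s a little Python sketch (entirely my own, nothing to do with Acid) that checks my scribbled note groups against the standard major and minor triad interval patterns. Interestingly, every group names itself cleanly except GBF, which matches neither shape:

```python
# Map note names to pitch classes (semitones above C).
PITCH = {'C': 0, 'D': 2, 'E': 4, 'F': 5, 'G': 7, 'A': 9, 'B': 11}

# Interval shapes, measured up from the root: major = root+4+7, minor = root+3+7.
SHAPES = {frozenset({0, 4, 7}): 'major', frozenset({0, 3, 7}): 'minor'}

def name_chord(notes):
    """Try each note as the root; return e.g. 'C major' if the intervals
    match a major or minor triad, else None."""
    classes = [PITCH[n] for n in notes]
    for root in classes:
        intervals = frozenset((c - root) % 12 for c in classes)
        if intervals in SHAPES:
            root_name = next(n for n, p in PITCH.items() if p == root)
            return f'{root_name} {SHAPES[intervals]}'
    return None

progression = ['GCE', 'BEG', 'ACE', 'GBD', 'ACE', 'CFA', 'GBF', 'GCE']
for chord in progression:
    print(chord, '->', name_chord(list(chord)))
# GCE -> C major, BEG -> E minor, ... GBF -> None (not a plain triad)
```

So the churchy-sounding groups really are plain triads written in various inversions, and the odd one out is the “disharmonious” 2nd-to-last chord.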
While I had some chords down, I wasn’t able to really experience them. I can’t play the keyboard well enough to get a real sense of the song. Fortunately, I have a tendency to buy music gear whenever I think I might actually have the drive to learn something musical “for real”. I also have some digital sound editing software that I use for audio storyboarding, editing, and digital media asset production. What I needed to do was enter the notes into a music sequencer software package, then render the sound to an MP3 file.
My MacBook Pro is running Windows XP natively, and into this is plugged an M-Audio Axiom 25 USB Keyboard Controller that I picked up 2 years ago with the idea that I might actually use it to learn how to use Reason, one of the first really cool virtual studio products that not only looked pretty, but actually didn’t crash every few minutes. I unpacked the Axiom for the first time about 3 hours ago, and I’m happy to see that it actually works. :-)
I happened to already have Sony Acid Pro 6.0, a multi-track music creation and sound synthesis package that I use for creating audio soundtracks from multiple sources, though it’s been years since I’ve had to do this kind of work. This is the companion software to Sound Forge, the sound editing package that I’ve been using for quite some time for tweaking audio at the sample level. You can think of Sound Forge as the audio equivalent to Photoshop because it creates assets, while Acid is more akin to something like InDesign because it combines and layers assets you’ve created elsewhere.
Acid Pro 6 has a lot of music synthesis stuff built into it, and after some fussing with it I got it to recognize the Axiom 25 and input notes directly. Then I discovered I could actually draw the notes directly in with a pencil tool.
After entering my chord progression, I heard it played back to me for the first time at a measured pace, and realized I didn’t like the second group. It sounded kind of awkward, so I tweaked it in the program until it sounded more interesting and less like a “full-stop”. Commenter Steve had written to suggest another approach in his earlier comment, but I didn’t refer to it because I wanted to see what I would come up with. You can download the zip archive of GospelTest01 and listen to 0410-GospelTest00.mp3 to hear the progression, or scroll to the end of this post and click the audio player button to hear GospelTest00 and GospelTest01.
Churchiness and Sonic Space
As I listened to my chord progression, I felt a bit of despair: it sounded very boring and, well, predictable and lame. It reminded me of a graphic design faux-pas I see a lot from non-designers: the gradient fill. What happens is that the non-designer sees a great big white space, and they are overcome by the urge to “put something pretty” there. Rather than compose something tasteful, the gradient fill comes to the rescue to add some “style”. It almost always looks terrible unless applied with some subtlety. I’d just done the same thing, musically. Oh dear.
Then I remembered: the chord progression just provides structure so other elements can play on top of it. Since I had the chords entered into Acid, I told it to loop continuously while I noodled around on the keyboard and tried to imagine someone singing on top of it. The chords then became less dominant, and suddenly the flatness went away. It opened up. The name of this file is GospelTest01.mp3 if you want to hear the difference.
A few notes:
- I did take the “gradient fill” metaphor farther in this file by using the 80s pop music equivalent: the synthesizer string fill to add moodiness and depth. It actually is starting to sound like something, despite my shameful use of synthesizer cheese. At least I am not adding orchestra stabs to “punch it up” or gating my drums.
- There’s an interesting thing that I noticed in the 2nd-to-last chord I chose, which is its disharmonious quality relative to the other chords. It also sounds like there is a hole in the middle of it, which creates a slight anxious feeling. The last chord somehow “seals the hole” and eliminates that anxiety. I think I’ve heard this before in other hymns.
- Because I’m entering the notes by hand, there is no live-performance feel to the vocal part. It all sounds very robotic, because the notes are “quantized” to strict note boundaries. I applied Acid’s Groove tool, which attempts to introduce some liveliness by slightly offsetting the timing. I think it did something, but my sensitivity to this kind of note spacing is fairly poor. It reminds me of kerning, which is the art of spacing letterforms so they look “even”. Some people have the knack for it, able to see subtleties that I can’t detect. And so it might be with musical timing. I left everything quantized here as a guide; I imagine a singer would know what to do to make this sound much cooler…it sounds pretty broken and clunky to me right now, especially listening to it “cold” without my imagination filling in the blanks.
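To get a feel for what a groove pass might be doing under the hood, here’s a toy version in Python. This is purely my own guess at the general mechanism, not Sony’s actual algorithm: take note onsets snapped to a strict grid and nudge each one by a few milliseconds.

```python
import random

random.seed(42)  # fixed seed so the example is repeatable

# Note onsets quantized to a strict half-second grid, in seconds.
quantized = [0.0, 0.5, 1.0, 1.5, 2.0, 2.5, 3.0, 3.5]

# "Humanize": push each onset off the grid by up to +/-20 ms.
humanized = [t + random.uniform(-0.02, 0.02) for t in quantized]

for q, h in zip(quantized, humanized):
    print(f"{q:.3f} -> {h:.3f}")
```

Real groove tools presumably use learned or hand-tuned timing templates (swing, push, drag) rather than random jitter, but the principle is the same: small, deliberate deviations from the grid are what read as “alive”.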
Next Steps: Lyrics and Emotional Progression
As I was laying down the vocal part in 0410-GospelTest01.mp3, I had certain aspects of the yet-to-be-written lyric in mind:
- The singer starts by lamenting the difficulty of living
- The singer starts to realize that he’s already half saved, he just needs to let something go
- The singer bursts into joyful celebration of salvation of some kind, hallelujah
The first 8 bars I have are that first part: lamentation. It’s heavy-sounding and doesn’t quite lift off, except for one single note that rises optimistically before sinking again. Listening to it again, I realized that there aren’t enough notes in the vocal part to really make it work, but I’ll hit that again when the entire structure is fleshed out.
The second 8 bars will lift up somehow, but repeat a few times. It takes a while for the singer to realize he’s “saved”, so I’m thinking that a repeated rising and dropping might impart that sense emotionally. Maybe an upward key change?
The last part, the joyful refrain, should soar. I think this will happen through longer notes and a very energetic playful sequence of notes, like you’re on some kind of awesome theme park ride that gives you a huge boost. Or something.
So that will be the next focus; any refinement will wait until I see what I have at the end of the next stage. I think it will actually be rather difficult, because I’ll need to keep the big picture of the progression in mind, and it takes me a long time to figure out how the notes fit together. The awesome part, though, is that it doesn’t take a lot of time to try things, and the feedback is immediate. When I’m doing graphics work or development, it takes a lot longer to get to the point where I can really immerse myself in what I’m making.
Click the Play Button! It should play 24 seconds of GospelTest00 (just church chords) and then GospelTest01 (a first pass at trying to create something over the chord structure):