Sonic Design : Weekly Learning



WEEK 1:

In this class, the teacher gave us a detailed overview of the semester's content and explained that the main tasks are divided into four parts: 

Task 1 - Sound Fundamentals
Task 2 - Auditory Imaging
Task 3 - Audio Storytelling (Audiobook)
Task 4 - Audio for Silent Movies & Eportfolio. 

Finally, the teacher recommended that we prepare high-quality headphones to better assist with the project.

WEEK 2:

This week, the teacher walked us through the basics of Adobe Audition (AU) and let us hear the changes that adjusting parameters can bring to a sound. We finished with some small exercises, which I found very interesting.

WEEK 3:

This week the professor continued leading us through parameter-tuning exercises. We also learned something new called reverberation, which we can use to simulate a bathroom or any space with an echo.

WEEK 4:
1. Audio Spatial Positioning
  • Use "Pan" to control the position of the left and right audio channels. For example, pan a jet sound effect from left to right to simulate a "flyby" effect. This is done using the blue pan bar in multitrack software.
2. Two Types of Audio Automation
  • Clip Automation: Bind an effect to a single audio clip. When the clip is moved, the effect moves with it. This is suitable for localized sound effects (e.g., a voice changing distance).
  • Track Automation: Bind an effect to the track itself, so it does not move with the clip. This is suitable for fixed video scenes (e.g., a plane flying by at a fixed time; the effect can be reused even if the audio is changed).
  • Key: Use keyframes to draw a curve—the yellow line adjusts the volume (near sounds louder, far sounds quieter), and the blue line adjusts the panning. Steep curves = fast changes, shallow curves = slow changes.
3. Sound Optimization Techniques
  • Understand the Four Stages of Sound (ADSR): Use the volume graph to identify attack, decay, sustain, and release (e.g., a drum has a fast attack).
  • Use EQ to adjust clarity: reduce high frequencies for distant sounds and boost high frequencies for nearby sounds. Use a high-pass filter to reduce low-frequency noise.
  • Pitch Shift: Lower the pitch to make small sound effects (like a microwave oven) sound deeper and heavier, simulating the sounds of large machinery.
4. Creating Ambient Sound Effects
  • Process: Deconstruct the image (focus sound like a large machine, with weak background sounds like a computer) → Find resources (e.g., BBC sound effects library, or combining sound effects with pitch shifting) → Add reverb (adjust room size and damping) to simulate a space.
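To make the keyframe idea concrete, here is a rough Python/NumPy sketch of pan automation (my own simplification, not how Audition implements it; the keyframe times and the constant-power pan law are illustrative choices):

```python
import numpy as np

def pan_automation(mono, keyframes, sr=44100):
    """Turn a mono clip into stereo, panning along a keyframed curve.
    keyframes: list of (time_s, pan) with pan in -1 (left) .. +1 (right)."""
    t = np.arange(len(mono)) / sr
    times, pans = zip(*keyframes)
    pan = np.interp(t, times, pans)        # straight lines between keyframes
    angle = (pan + 1) * np.pi / 4          # constant-power pan law
    return np.stack([mono * np.cos(angle), mono * np.sin(angle)], axis=1)

# "Flyby": hard left at 0 s, hard right at 2 s.
stereo = pan_automation(np.full(88200, 0.1), [(0.0, -1.0), (2.0, 1.0)])
```

Steeper segments between keyframes mean faster movement across the stereo field, matching the "steep curves = fast changes" note above.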
WEEK 5:

Today the teacher introduced five new tools to us.


1. LAYERING:


Superimposing and fusing multiple sounds into a unique new sound, much like the layer concept in Photoshop (PS). Professional sound effects are often built with this technique.
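The idea can be sketched in a few lines of Python/NumPy (my own simplification, not a professional tool):

```python
import numpy as np

def layer(sounds, gains):
    """Sum several same-length clips with per-layer gains, then scale the mix
    down only if the combined peak would clip (exceed 1.0)."""
    mix = sum(g * s for g, s in zip(gains, sounds))
    peak = np.max(np.abs(mix))
    return mix / peak if peak > 1.0 else mix
```

In practice each layer contributes a different quality (e.g., a low thump plus a bright crack), and the gains balance them.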

2. TIME STRETCHING:

The duration of the sound can be adjusted (for example, a 10-second voice message can be stretched to 20 seconds to slow it down, or compressed to 6 seconds to speed it up). This only changes the audio speed/rhythm, not the pitch.
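A crude way to see how speed can change without pitch is an overlap-add (OLA) stretch: read windowed grains from the original at a scaled position and lay them down at the normal hop. This is a naive sketch of my own (real tools use phase vocoders or similar), with made-up frame sizes:

```python
import numpy as np

def time_stretch_ola(x, rate, frame=1024, hop=256):
    """Naive overlap-add time stretch: rate 0.5 doubles the duration,
    rate 2.0 halves it, while (roughly) preserving pitch."""
    win = np.hanning(frame)
    n_out = int(len(x) / rate)
    y = np.zeros(n_out + frame)
    norm = np.zeros(n_out + frame)
    for out_pos in range(0, n_out, hop):
        in_pos = int(out_pos * rate)        # read position scaled by rate
        if in_pos + frame > len(x):
            break
        y[out_pos:out_pos + frame] += x[in_pos:in_pos + frame] * win
        norm[out_pos:out_pos + frame] += win
    norm[norm < 1e-8] = 1.0                 # avoid divide-by-zero at the edges
    return (y / norm)[:n_out]
```

Because grains keep their original sample rate, the pitch stays roughly the same even as the overall duration changes.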

3. PITCH SHIFTING:


Changes the pitch of a sound without changing its duration—turning it higher makes it thinner (like a chipmunk), while turning it lower makes it thicker (like a monster/zombie).
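One simple (if crude) route is resampling: reading the waveform faster or slower. Resampling alone also changes duration, which is why real pitch shifters pair it with time stretching to keep the length fixed. A minimal NumPy sketch of the resampling half (my own illustration):

```python
import numpy as np

def resample_pitch(x, semitones):
    """Read the signal at a scaled rate: +12 semitones doubles the playback
    speed (chipmunk, half as long); -12 halves it (monster, twice as long)."""
    factor = 2 ** (semitones / 12)               # frequency ratio
    idx = np.arange(0, len(x) - 1, factor)       # fractional read positions
    return np.interp(idx, np.arange(len(x)), x)
```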

4. REVERSING:


Reversing audio can produce strange sounds, so it's important to combine it with layering and optimization. For example, reversing an explosion sound and overlaying it with the original can create a buildup effect before the explosion.
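The explosion-buildup trick from class is easy to sketch (the gain value is my own, purely illustrative):

```python
import numpy as np

def pre_explosion(boom, gain=0.6):
    """Prepend a quieter, reversed copy of the hit as a 'suck-in' buildup."""
    buildup = boom[::-1] * gain
    return np.concatenate([buildup, boom])
```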

5. MOUTH IT!:


If you can't find a sound effect, you can create it yourself: record your own voice with a microphone, experiment until it works, and then refine it with techniques like pitch shifting. Sound design requires exploration and experimentation, and good sound effects often emerge by chance.

Class Exercise 1:


1. Deep Rich Explosion:

I tried to use the tools introduced by the teacher in class and combined them with what I learned in previous classes to create explosion sound effects.


Track 1:

I tweaked the first explosion using the parametric EQ and hard limiting tools.



Track 2:

I tweaked the second blast using parametric EQ, the hard limiter, and a pitch shifter.


Track 3:

For the third explosion I just used hard limiting.



Track 4:

For the explosive charge, I used a parametric equalizer, hard limiting, and audio reversal.


Final Audio:



2. Variation of punch sound:


Track 1:



Track 2:


Track 3:


Track 4:

I tried to add an effect of one punch landing, then an even harder one following it.


Track 5:


Final Audio:




3. Monster or Alien Voice:



In Adobe Audition, I used five effects to transform a female vocal into a monster/alien sound:

1. Pitch Shifter: -10 to -15 semitones for a deeper sound;

2. Stretch and Pitch: 30-35 stretch and 20-25 pitch for a deeper, slower sound;

3. Reverb: Choose "Ghost Whisper" (100% dry, 50% reverb) or "Vocal Reverb (Medium)" (2-3 seconds reverb, 5%-10% dry/wet ratio) for a more spacious feel;

4. Echo: Feedback and echo levels at 30%-50%, and left and right channel delays at 100ms/150ms for a more layered sound;

5. Distortion: 20%-40% guitar distortion and 50%-70% mix for a rougher sound.
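To check my understanding of the echo settings, a feedforward echo can be modeled in NumPy (a simplification of Audition's Echo effect; the delay and feedback numbers echo the ranges above but are otherwise arbitrary):

```python
import numpy as np

def echo(x, sr=8000, delay_ms=100, feedback=0.4, repeats=4):
    """Add decaying repeats: each copy arrives delay_ms later, at
    `feedback` times the previous copy's level."""
    d = int(sr * delay_ms / 1000)
    out = np.zeros(len(x) + d * repeats)
    out[:len(x)] += x
    g = feedback
    for k in range(1, repeats + 1):
        out[k * d : k * d + len(x)] += g * x
        g *= feedback
    return out
```

Using different delays per channel (e.g., 100 ms left, 150 ms right, as above) is what creates the layered, widened feel.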


Final Audio:


WEEK 6:

In this class, the teacher first covered audio editing standards: tracks must be clearly named, edits must be made at zero crossings (to avoid clicks; fades can be added if a cut can't land exactly on one), and duplicated sound effects must be varied to prevent a robotic effect.

Then the teacher covered mixing techniques: using bus tracks to manage similar tracks (for more efficient volume adjustment and effect application) and using automation to control volume dynamics and channel positioning. He emphasized that all tracks, including the master track, must be free of clipping, and the master track fader must be kept at 0 dB. A mix level of -6 dB to -12 dB is recommended.
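The -6 dB headroom target is easy to express in code; here is a peak-normalization sketch (the dBFS math is standard, the function itself is my own illustration):

```python
import numpy as np

def normalize_peak(x, target_db=-6.0):
    """Scale so the loudest sample sits at target_db dBFS,
    leaving headroom below 0 dB full scale."""
    peak = np.max(np.abs(x))
    if peak == 0:
        return x
    return x * (10 ** (target_db / 20) / peak)   # -6 dB ~ 0.501 linear
```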

For students in the ISD program, the teacher covered 5.1 surround sound setup: selecting 5.1 channels when creating a new session and adjusting the positioning and width of sound-effect channels to simulate real-world hearing (for example, muting the left channel so a sound comes only from the right). The teacher also mentioned that a tutorial video would be provided, reminded students with different audio systems to set up headphones for surround sound, and answered operational questions.

WEEK 7:

This week, Teacher Z provided feedback on Project 2, suggesting I optimize the audio in two ways:

1. Adjust the volume. Reduce the excessively loud opening of the audio ("first step"), and raise the overall volume, which is too low, to around 15, so it doesn't interfere with hearing the ambient sounds or maintaining focus;

2. Enrich the ambient sound effects. Currently, the ambient sounds are not obvious or clear. Adding elements such as the sound of a bicycle riding quickly in the rain or the sound of vehicles passing by can better fit the scenario of "walking in a quiet city."

WEEK 8:

We didn't have class this week, but the teacher assigned Homework 3: the "Audio Storytelling" project. This week we need to complete the story narration and dialogue writing for next week's discussion. We can view examples from senior students and blog references on the MyTimes module page. The teacher will upload a technical tutorial video later today to replace today's class. I have read this notice.

WEEK 9:

Today’s class walked us through the complete workflow of audio production—from pre-recording preparation to detailed post-production editing. The emphasis wasn’t on creating fancy effects, but on understanding that great audio comes from clarity, consistency, and suitability for the intended context. Below is the key summary:

1. Before Recording: Environment and Equipment Set the Upper Limit

Good audio doesn’t start when you hit the record button—it starts with the space, timing, and device you choose.
  • Environment Setup
Pick the quietest place possible and stay away from traffic noise, fans, refrigerators, or any electrical hums. Using cushions or blankets to build a small enclosed area can greatly reduce echo and room reflections.
  • Device Choice
A phone is enough. iPhones generally record more cleanly, while Android devices may require heavier post-processing.
  • Core Principle
Poorly recorded audio is like an overexposed photo—no amount of editing can fully fix it.

2. Post-Production: The Core Steps That Make Audio Sound “Professional”

(1) Basic Cleanup: Remove the Dirt First
  • Noise Reduction
Capture a noise print and apply noise reduction to the whole track; keep the strength moderate to avoid metallic artifacts that damage the voice.
  • Removing Unwanted Sounds
Breath noises, bumps, mouth clicks—let automated tools handle what they can, and manually clean what they can’t catch.

(2) Volume Calibration: Making the Audio “Steady”
  • Gate
Set a threshold so only real vocal sounds pass through, keeping the background noise out. Adjust attack and release times to avoid cutting off the natural tail of words.
  • Compressor
Control volume spikes to keep the narration even. After compression, use makeup gain to bring levels back up.
  • Final Gain & Limiter
Keep the final output around –6 dB for safety. The limiter should always be the very last step to prevent distortion or clipping.
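A static compressor (ignoring attack/release for simplicity) can be written in a few lines to see how threshold, ratio, and makeup gain interact; all numbers here are my own illustrative choices, not the class's settings:

```python
import numpy as np

def compress(x, threshold_db=-18.0, ratio=4.0, makeup_db=6.0):
    """Static compressor: levels above threshold_db are reduced by `ratio`,
    then makeup gain brings the overall level back up."""
    eps = 1e-10                                   # avoid log10(0)
    level_db = 20 * np.log10(np.abs(x) + eps)
    over = np.maximum(level_db - threshold_db, 0.0)
    gain_db = -over * (1 - 1 / ratio) + makeup_db
    return x * 10 ** (gain_db / 20)
```

Loud spikes get pulled down hard while quiet passages only receive the makeup gain, which is exactly the "keep the narration even" effect described above.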

(3) Sound Quality Enhancement: Clean and Comfortable to Listen To
  • EQ
Improve clarity, reduce muddiness or harshness, and fine-tune based on the type of content.
  • De-esser
Reduce sharp “S” and “sh” frequencies, but apply lightly to keep the voice natural.
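The "reduce low-frequency noise" side of EQ maps to a high-pass filter; a one-pole version is enough to see the idea (the cutoff value is my own example, and real EQs use steeper filters):

```python
import numpy as np

def highpass(x, sr=44100, cutoff=100.0):
    """One-pole high-pass: attenuates rumble below `cutoff` Hz,
    blocking DC and slow drift entirely."""
    rc = 1 / (2 * np.pi * cutoff)
    dt = 1 / sr
    a = rc / (rc + dt)
    y = np.zeros_like(x, dtype=float)
    for i in range(1, len(x)):
        y[i] = a * (y[i - 1] + x[i] - x[i - 1])
    return y
```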

3. Core Principles (Repeated by the Instructor)
1. Listen, don’t just look — waveforms don’t tell the whole story.
2. Moderation is everything — every tool can harm the voice if pushed too far.
3. Always follow the correct order:
Cleanup → Volume → Tone → Limiter
Changing the order will weaken the results.

Exercise:


WEEK 10:

This week we presented the current content of Project Two to the teacher and received feedback.

Teacher's feedback:

1. The teacher insisted that the vocals must be the loudest element, louder than the background music and sound effects;

2. The teacher pointed out that the current music volume was too high and needed to be lowered, raised only where it supports a scene or story beat;

3. The teacher affirmed the quality of the audio recording ("The audio recording is good").

WEEK 11:

This week's class focused primarily on audio design, particularly the differences in sound logic between games and videos. We also learned the complete audio design workflow and how to practically use effects in Adobe Audition (AU). Overall, I gained a more concrete and systematic understanding of "how sound shapes the experience."

I. Core Differences Between Game and Video Audio

1. Video Audio: Linear, Time-Axis Based

Video is a typical linear medium.

Whether it's film, advertising, or animation, it has a fixed timeline—from 0 seconds to the end.

My typical workflow is:

Watch the video first

Identify the time points where sound needs to be added

Precisely place sound effects or music according to the timecode

The process is very straightforward; all sounds revolve around a fixed timeline.

2. Game Audio: Non-linear, Event-Triggered

Games are completely different.

They don't have a fixed time frame; all sound is determined by the player's actions.

For example:

Player moving left or right → triggers footsteps

Player opening menu → triggers UI sound

Player running, jumping, getting injured → triggers different event sound effects

This "event mapping" logic makes game audio much more complex and dynamic than video.
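At its core, event mapping is a lookup from game events to sounds. A toy Python sketch (all event and file names are invented for illustration; real engines like Wwise or FMOD do far more):

```python
# Hypothetical event-to-sound table; names are made up for illustration.
SOUND_EVENTS = {
    "player_move": "footsteps.wav",
    "menu_open":   "ui_click.wav",
    "player_jump": "jump.wav",
    "player_hurt": "hurt.wav",
}

def on_event(event):
    """Return the sound a game engine would trigger for this event,
    or None if the event has no sound attached."""
    return SOUND_EVENTS.get(event)
```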

II. Audio Design Process and Tools

1. Initial Planning: Sound Must Start with the Storyboard

The instructor emphasized that audio planning shouldn't be done later, but should begin at the storyboard stage.

I need to add the following to the storyboard:

Music genre or atmosphere description

Sound effect design for specific actions

Ambient sounds in the scene

The relationship between camera angle and sound

The template can be freely adjusted; the key is to ensure everyone understands the sound intent.

2. Use of Reference Files

Clients usually provide visual references, such as concept art.

We can't just list "what sounds are there," but rather describe "what the sounds should be like" based on the visuals.

For example:

Rounded, cute characters → Use soft, light sound effects

Sharp-edged mechanical designs → Use sounds with a more pronounced metallic texture

This is a part I often overlooked before.

3. AU Effects Practice: Focus on the Chorus Effect

The teacher led us in practicing the Chorus effect, which simulates "multi-person layering" through short delays and sound duplication.

It can be used to:

Make vocals fuller

Create crowd applause

Make a single voice sound richer

Parameters such as the number of Voices and Delay Time can significantly change the layering of the sound.
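Conceptually, a chorus sums the dry signal with copies whose delay wobbles slightly, like performers drifting in and out of sync. A rough NumPy sketch with invented parameter values (not Audition's actual algorithm):

```python
import numpy as np

def chorus(x, sr=44100, voices=3, base_delay_ms=20.0, depth_ms=5.0, rate_hz=0.8):
    """Mix the dry signal with `voices` copies whose delay is modulated
    by slow, phase-offset sine LFOs."""
    n = np.arange(len(x))
    out = x.astype(float).copy()
    for v in range(voices):
        phase = 2 * np.pi * v / voices            # stagger each voice's LFO
        delay = (base_delay_ms + depth_ms *
                 np.sin(2 * np.pi * rate_hz * n / sr + phase)) * sr / 1000
        idx = np.clip(n - delay, 0, len(x) - 1)   # fractional read positions
        out += np.interp(idx, n, x) / voices
    return out / 2                                # keep the sum in range
```

Raising `voices` thickens the layering; raising `depth_ms` or `rate_hz` makes the detuning wobble more obvious, which matches the Voices/Delay Time behavior noted above.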

The teacher also mentioned other effects (flanging, phasing) but emphasized not relying too much on software-specific functions; master the general effects first.

III. Classroom Tasks and Practical Skills

This task is: Choose one of four silent videos, design and record sound for the character's movements and environment.

During the process, I particularly remembered a few of the teacher's suggestions:

1. Record in a quiet environment if possible.

The cleaner the environment, the easier the post-production process and the more natural the noise reduction.

2. Sound should have "continuity."

For example, a bus: The sound should enter before the visuals appear, allowing the listener to "feel it approaching."

3. Sounds for similar operations should be consistent.

All button click sounds should be similar, and all footstep rhythms should be consistent to enhance the overall cohesion of the work.

Reflection

This lesson made me realize that: Audio design is not just about "adding sound," but a way of building experiences.

Videos rely on timelines; games rely on event triggers.

And regardless of the type, early planning, understanding visuals, and accurately using tools are the true core competencies of an audio designer.

Class Exercise

I selected this video.



I recorded the sounds for the footsteps and the energy gathering myself.


Final Audio:

WEEK 12:
