😊☹️😊☹️😊☹️😊☹️

Doctors thought that I had bipolar disorder, and my friends say that my mood is unpredictable.

But my mood is perfectly predictable – there’s even a chart for it: NYSE: SE

Easy or hard?

It’s not “I can’t do it”, but “how can I do it?”.

It’s not “I don’t have time”, but “how can I have time?”.

It’s not “I can’t afford it”, but “how can I afford it?”.

One shuts your mind; the other opens it.

One gives you an excuse; the other a challenge.

One is how most think; the other, you.

There’s nothing to fear

Your entire life, everything you know, and everyone you love, is contained on the surface of an insignificant speck of dust you call “Earth”; an insignificant speck of dust floating in the vastness of the solar system, the galaxy, the supercluster, the universe, and beyond.

To fear, is to think that any of your actions matter, that your feelings matter, that you matter, in this vastness that no one can even begin to comprehend.

You are humble, and there’s nothing to fear.

There’s nothing to fear.

The Alpha Male

The most confident, competitive and bold individuals in a group of social animals are often the alphas of the dominance hierarchy. These alphas exhibit leader-like or dominant behavior and get things done, while the betas exhibit follower-like or submissive behaviors and make up the majority of the herd.

The alphas have more responsibilities, providing for the pack and protecting them from threats, while being rewarded through preferential access to resources, including food and mates.

When it comes to us Homo sapiens, it’s no surprise that about 70% of senior executives are alpha males. These individuals not only take on responsibilities others would find overwhelming, they willingly do so.

What makes alphas confident, assertive, challenge-seeking, and able to handle stress is often attributed to the magic fuel: testosterone.

The endocrine system [From: wikipedia.org]

Testosterone is often, though misleadingly, called the “male hormone”, and is one of the more than 50 hormones regulated by the endocrine system. It is mainly synthesized in the testes, but it is also produced by the ovaries – and although females produce it in smaller quantities, they are actually more sensitive to it.

Known effects and feedback mechanisms of testosterone [From: Ebo Nieschlag, researchgate.net]
(This may be copyrighted, please contact me to remove.)

Among countless benefits, testosterone is known to improve spatial cognitive abilities, improve physical and mental endurance, and reduce anxiety and depressive disorders. Testosterone also increases muscle mass and lowers body fat.

Though, there is a price to pay.

While being the alpha may seem great, alphas are often short-lived, both in terms of their hierarchical rank and their life expectancy.

Having a high level of testosterone also indirectly leads to a higher metabolism, requiring more food to sustain. More importantly, alphas are also typically exposed to higher levels of stress, prompting the release of cortisol.

In the short term, cortisol signals the body to prepare for danger by raising blood pressure, elevating the metabolism of fat, protein and carbohydrates, and keeping you alert. However, it also disrupts the testosterone biosynthesis pathway.

Long-term exposure to cortisol and lowered testosterone, especially with increasing age, can lead to anxiety disorders, depression, brain fog, a lowered metabolism, decreased muscle mass and increased weight. This creates a vicious cycle that drives testosterone even lower. In other words, you quickly become fat and beta. That’s when you know you’re past the tipping point.

The higher you go, the more stressful your work, the more likely I’m talking about you. Therefore, it is important to ensure that your endocrine system is always in the right condition to maintain a healthy level of testosterone. This includes proper management, and possibly biohacking, of these key factors: Diet, Exercise, Sleep, Sex, Lifestyle, Supplementation.

Consult a real doctor.

Audio Equalization and the Harman Curve

Equalization is generally considered taboo in the audiophile world: the consensus in the community is that it degrades sound quality and introduces noise, and that we should stay true to the actual sound reproduction of the HiFi setup.

In theory, equalization can introduce noise, especially if it’s a DSP equalizer based on FFT. However, this generally imperceptible “noise” (hardcore audiophiles may start insulting my aural acuity from here on) is usually outweighed by the far greater benefits that well-implemented, tasteful equalization brings.

While each headphone comes with an intrinsic factory-tuned sound signature, a quality pair of cans can usually be further tuned to achieve greater reproduction fidelity, sound neutrality, soundstage, instrument separation – things often believed to be immutable properties tied to physical driver design.

Disclaimer: I’m not claiming that any two equalized headphones can sound the same, or that you can make a cheap pair of headphones match a high-end one. There are still physical limits to what you can and cannot achieve. I’m just saying that certain properties are not as fixed or immutable as is often believed, and you can actually squeeze more performance out of your pair of cans.

Frequency Response

A headphone driver is nothing but an analog system that converts an input electrical current into output kinetic vibrations of a diaphragm, causing longitudinal compression waves to propagate through the air, which our eardrums pick up as sound.

A system (as in signals and systems) is usually described mathematically with a transfer function H(s), and it is commonly represented graphically as a frequency response diagram.

Frequency response of Sony WH-1000XM4. [Source: rtings.com]

A system typically responds to each frequency differently. The frequency response diagram above shows the amplification or attenuation of each input frequency, and can be obtained mechanically by sending in a frequency sweep of constant amplitude while measuring the resulting sound pressure level on a measurement rig.
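
If you want to see what such a diagram encodes without a measurement rig, here’s a small Python sketch (my own, assuming numpy and scipy are installed – it has nothing to do with the rtings.com setup) that prints the frequency response of a simple digital low-pass filter in dB:

```python
import numpy as np
from scipy.signal import butter, freqz

# Compute the frequency response H of a 2nd-order low-pass filter at 1 kHz.
fs = 48_000                                   # sample rate in Hz (assumed)
b, a = butter(2, 1000, btype="low", fs=fs)    # filter coefficients
w, h = freqz(b, a, worN=2048, fs=fs)          # frequencies (Hz) and complex response

# Print a coarse "frequency response diagram" as numbers: gain in dB per frequency.
for f_hz, mag_db in zip(w[::256], 20 * np.log10(np.abs(h[::256]) + 1e-12)):
    print(f"{f_hz:8.1f} Hz  {mag_db:7.2f} dB")
```

A headphone measurement works the same way in principle: the “filter” is the physical driver (plus your ear and the rig’s coupler), and the response is picked up by a measurement microphone.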

These are some common terms used to describe particular ranges of frequencies:

Lows / Bass (<250 Hz) – Kick drums, bass guitar, and other sounds that help add ambience.

Mids (in between) – Main vocals, guitars, and other center-stage instruments, usually representing the bulk of the audio you hear.

Highs / Treble (>2 kHz) – Cymbals, flutes, and other sounds that help add detail to audio.

A direct interpretation of the frequency response diagram is that an amplification or attenuation in any of the frequency ranges would directly impact the presence of sounds in the corresponding ranges.

However, it gets more complicated than that.

An overemphasis in bass coupled with an under-emphasis in mids and highs may cause audio to sound “veiled”, as if covered by an object. You can easily reason about this by thinking about what happens to your voice when you speak with a hand over your mouth – the bass passes through easily, but the mids and highs are attenuated. On the other hand, having too much treble may cause sibilance (an unpleasant emphasis of “s” sounds) as well as listener fatigue.

Harman Target

We typically use sound signatures to summarize the general shape of a frequency response curve. Common ones include:

Flat / neutral – No emphasis to any particular frequency.

Colored – The opposite of being flat / neutral, with emphasis in several ranges.

Analytical – Emphasis in treble, with “bright” being the more extreme case.

Warm – Emphasis in bass, with “dark” being the more extreme case.

V-shaped – Emphasis in bass and treble.

A-shaped – Emphasis in mids.

etc.

Is there a certain sound signature that is most consistently perceived to be of a better sound quality / preferred by most trained listeners?

That is exactly what the study at Harman, The Relationship between Perception and Measurement of Headphone Sound Quality, by Sean Olive, aimed to unravel.

The result of the study is the Harman Curve, representing the sound signature with the highest probability of being preferred by an audio professional.

Almost a decade later, we now have the Harman Curves (seven major variations as of today). You can find most of these variations in the AutoEQ project (we’ll elaborate more on this later).

Comparison of four Harman target curves for over-ear and in-ear headphones. [Source: innerfidelity.com]

Surprisingly, audio professionals prefer a headphone sound signature that’s far from flat. It is said that this closely approximates what an actual flat loudspeaker would sound like in a studio environment, so it closely resembles what audio professionals were actually aiming for when mixing their tracks.

The result of the study became so impactful that headphone companies started manufacturing and marketing headphones tuned to the Harman Curve. These include the well-known AKG K371 and K361, and the Samsung Galaxy Buds Plus.

Equalization

Sony WH-1000XM4 (blue) against Harman target (dotted) [Source: headphones.com]

It is evident that the above pair of cans deviates quite a bit from the Harman curve. If you actually own a pair of XM4s, you may even describe the sound as dark or “veiled”.

But fret not, equalization to the rescue.

Equalization is a technique originally used to shape signals sent through a telephone line to ensure that the output at the other end is flat across all frequencies, and is thus “equalized”.

The concept was later applied to audio engineering use-cases like compensating for the uneven frequency response of the recording equipment to ensure faithful reproduction. In this context, the term “equalization” may no longer be accurate, given that a flat response might not be the final intent of its use.

The two most common user interfaces for an equalizer are the graphic (or fixed-band) equalizer and the parametric equalizer.

Since each slider of an equalizer amplifies or attenuates its own range of frequencies, you can also think of the sliders as frequency-specific volume controls.

A graphic equalizer

A graphic equalizer consists of sliders (or “faders”) for a fixed set of bands, e.g. 31 Hz, 62 Hz, 125 Hz, 250 Hz … 16 kHz (evenly spaced in the logarithmic sense), allowing the user to set the gain for each.
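
Here’s a tiny sketch of what “evenly spaced in the logarithmic sense” means (my own illustration; the exact band centers vary between equalizers):

```python
import numpy as np

# Each band center is double the previous one - equal steps on a log axis.
centers_hz = 31.25 * 2 ** np.arange(10)
print(centers_hz)   # 31.25, 62.5, 125, 250, 500, 1000, 2000, 4000, 8000, 16000
```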

A parametric equalizer [Source: wikipedia.org]

On the other hand, a parametric equalizer not only allows the user to define the gain, but also the central frequency of each EQ band, and usually also the Q factor (Quality factor) which determines its bandwidth.

Evidently, the graphic equalizer may be way friendlier but it’s less precise than a parametric one.

Equalization can very easily cause signal clipping, so most equalizers also benefit greatly from a pre-amp, which lets you uniformly attenuate the entire signal before equalization to avoid the clipping caused by equalization peaks.
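
A simple rule of thumb, sketched below with made-up numbers (some equalizers compute this differently): pre-attenuate by at least the largest positive band gain, so that no boosted peak can exceed full scale.

```python
# Hypothetical per-band EQ gains in dB.
band_gains_db = [3.0, -2.0, 5.5, -1.0]

# Attenuate the whole signal by the largest boost (here: -5.5 dB) before the EQ runs.
preamp_db = -max(0.0, max(band_gains_db))
print(f"suggested pre-amp: {preamp_db:.1f} dB")
```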

Equalizers are typically implemented using linear filters; most commonly band-pass filters and peaking filters. Analog equalizers are commonly implemented as band-pass filters connected in parallel, while DSP equalizers are commonly implemented as peaking filters connected in series.

Frequency response graph of a band-pass filter. [Source: wikipedia.org]

A band-pass filter is typically made by cascading a high-pass and a low-pass filter (such as an RLC circuit), and allows only a single band (a range of frequencies) to pass through while attenuating virtually everything else.
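
Here’s a minimal digital sketch of the cascade idea (my own, with arbitrary cut-off frequencies and assuming scipy; an analog RLC band-pass works the same way conceptually):

```python
import numpy as np
from scipy.signal import butter, lfilter

fs = 48_000
b_hp, a_hp = butter(2, 200, btype="high", fs=fs)   # high-pass: keep everything above ~200 Hz
b_lp, a_lp = butter(2, 2000, btype="low", fs=fs)   # low-pass: keep everything below ~2 kHz

x = np.random.randn(fs)                            # one second of white noise as a test signal
y = lfilter(b_lp, a_lp, lfilter(b_hp, a_hp, x))    # cascade: only the ~200 Hz - 2 kHz band remains
```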

Frequency response graph of a peaking filter. [Source: dsprelated.com]

A peaking filter is typically implemented as a DSP filter, and it provides unity gain everywhere except for an amplification or attenuation of a bell-shaped band.
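
To make this concrete, here’s a sketch of one common way to build a peaking filter – the RBJ “Audio EQ Cookbook” biquad – and of how a DSP equalizer might cascade several of them in series. This is my own illustration with made-up band settings, not the exact filter any particular equalizer app uses.

```python
import math
import numpy as np
from scipy.signal import lfilter

def peaking_biquad(fs, f0, gain_db, q):
    """Return (b, a) coefficients of an RBJ-style peaking-EQ biquad."""
    big_a = 10 ** (gain_db / 40)              # square root of the linear gain
    w0 = 2 * math.pi * f0 / fs
    alpha = math.sin(w0) / (2 * q)
    b = np.array([1 + alpha * big_a, -2 * math.cos(w0), 1 - alpha * big_a])
    a = np.array([1 + alpha / big_a, -2 * math.cos(w0), 1 - alpha / big_a])
    return b / a[0], a / a[0]                 # normalize so a0 == 1

# Hypothetical parametric bands: (center frequency Hz, gain dB, Q).
bands = [(100, 3.0, 1.0), (3000, -4.0, 1.4)]

fs = 48_000
x = np.random.randn(fs)                       # one second of test noise
y = x.copy()
for f0, gain_db, q in bands:
    b, a = peaking_biquad(fs, f0, gain_db, q)
    y = lfilter(b, a, y)                      # peaking filters applied in series
```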

AutoEQ

But how exactly do we precisely equalize to the Harman target?

You typically can’t, at least not without a measurement rig for feedback tuning, as well as a driver that’s actually physically able to conform to the exact demands. But you can very well approximate it.

In theory, it is possible to calculate the equalization settings required to bring a source curve to a target curve. The legendary AutoEQ project by jaakkopasanen does exactly that, backed by raw frequency response data from various online databases (covering thousands of headphones).
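
The core idea is simple enough to sketch (AutoEQ itself does something far more careful, with band optimization and limits on boosts): the per-frequency correction is just the difference, in dB, between the target curve and the measured response. All numbers below are hypothetical, not real measurement or Harman data.

```python
import numpy as np

freqs_hz = np.array([20, 60, 200, 1000, 3000, 8000, 16000])
measured_db = np.array([4.0, 2.5, 0.0, 0.0, -3.0, -1.0, -4.0])   # made-up headphone measurement
target_db = np.array([6.0, 4.0, 0.0, 0.0, 1.0, -1.0, -3.0])      # made-up target curve

correction_db = target_db - measured_db    # gain to apply at each frequency
print(dict(zip(freqs_hz.tolist(), correction_db.round(1).tolist())))
```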

The repository comes not only with pre-built results (usually targeting a variation of Harman Curve), but also scripts, tools and normalized raw data to generate your own.

To shift the sound signature of your headphones towards the Harman Curve, simply install a system-wide equalizer in your chosen platform (or use your player’s built-in equalizer) and apply the appropriate results for your headphone model.

TLDR

“I have an Android phone, and I have a Sony WH-1000XM4. What do I do?”

  1. Download a good equalizer like Poweramp Equalizer.
  2. Use the following settings:
    • Equalizer Mode: Parametric mode
    • Bands Overlap: Cascade
    • Smooth Equalizer/Tone Gains: Off
  3. Find your headphone model from the results section and copy the values into the app.

Enjoy the music 😀

If the equalization doesn’t seem to be working, go into the settings page and select “Equalizer doesn’t work”. Most likely you’ll have to turn on the DUMP Permission and the Notification Listener Permission (follow the instructions on the Poweramp WebADB page – it will walk you through them).

Many sites recommend Wavelet, but I’m not getting the right results as of writing. Though it can automatically apply AutoEQ settings, it suffers somewhat from clipping issues no matter what I’ve tried (changing buffer size, etc.). You can try that first. I’m using a Galaxy Note 10+ (SM-N975F/DS), One UI version 4.1, Android version 12.

Learning Theory (Vol. 1)

(This applies to most skills in general, be it e-sports, physical sports, music, etc. and even architectural thinking, competitive programming, software engineering and whatnot)

Really, how does the body actually do that?

“I actually consciously look at the falling notes and think to myself that I must press down my right ring finger then my left index.”… Says nobody ever. NOBODY.

So, you want to be a Pokémon master? Well well, let’s talk about…

  1. Conscious vs unconscious mind
  2. Crystallized vs fluid intelligence
  3. Progressive overload
  4. Break down and drill
  5. Fresh vs tired
  6. Repetition
  7. Speed vs accuracy
  8. Bored vs surprised
  9. Promises
  10. Resting
  11. Rewards

Conscious vs unconscious mind

Conscious mind is general purpose (think CPU).

It can do everything, but it is slow and requires mental effort.

Skill output from conscious mind often has low parallelism (can’t multitask well), and is inconsistent and unreliable.

Unconscious mind is purpose-built (think GPU, or more accurately FPGAs or ASICs).

It can’t do most things, but whatever it can do, it does them really well and effortlessly.

Skill output from unconscious mind often has high parallelism (can multitask well), and is consistent and reliable.

(Unconscious mind is used interchangeably with subconscious mind)

Key Takeaways:

To master most skills – we first understand their intricacies and learn the right form with the conscious mind, then shift / drill them into the unconscious mind for execution.

Crystallized vs fluid intelligence

Crystallized intelligence determines how much you already know. In general it breaks down over time, but is relatively permanent with periodic refreshing.

Fluid intelligence determines the rate at which you can learn and adapt. In general, it decreases with age, but having more crystallized intelligence can also help you crystallize fluid intelligence more efficiently.

As an example…

On average, a younger person picks up new dance moves faster than an older person – fluid intelligence decreases with age.

However, an older dancer with more moves under his belt can pick up new dance moves faster than a younger person who knows fewer dance moves – fluid intelligence crystallizes more efficiently with more crystallized intelligence.

Key Takeaways:

“The more you know, the faster you learn. The faster you learn, the more you know.” – Kelvin Ang

Aim to learn broadly, even if it’s a bunch of easy skills. It will eventually aid you in learning the harder skills. Move on and learn something else if you’re stuck with something.

Progressive overload

While it is difficult to change the amount of fluid intelligence a person has, you can directly and artificially boost the amount of crystallized intelligence by focusing on the lowest-hanging fruit first. This in turn allows fluid intelligence to crystallize more efficiently.

In other words, having a strong foundation in easier skills allows you to learn harder skills more quickly and with greater quality than if you were to jump straight into the harder ones.

Key takeaways:

When learning any skill, focus on the low-hanging fruit / easier skills and master as many of them as possible, to allow you to learn the harder skills more easily.

Build your own learning list in terms of difficulty, and learn them in order (remember to rearrange if you find that some tricks are harder than you initially thought).

Break down and drill

When learning a skill, we are usually motivated by the end result (watching a very cool trick, a very skilful execution of something, etc.), and we’re usually tempted to jump straight into attempting the end result.

Most difficult skills are usually a compound of many constituent skills, and their original inventors usually discovered them after knowing the constituents.

It is very important to keep this in mind, and be able to break down a complex trick into discrete small steps that you can practice with drills, which you can later combine back into the whole trick.

An example of breaking a trick down:

  • A [full cab no-comply] is just a [backside 180 pivot] followed by a [frontside no comply 180] so you can drill those first.
  • A [backside 180 pivot] is simply a larger [backside kickturn] so you can drill that first.
  • A [frontside no comply 180] is just [stepping off to the right with the front leg], followed by a [frontside 180 pop with the rear leg], followed by [hopping back onto the board with both legs], so drill those separately.

Other than drilling broken down steps of a trick, it is also important to do fundamental drills. Think of a pianist or a singer doing scales.

Key takeaways:

Break down tricks into smaller steps and drill them separately before combining them back.

If a particular step is too difficult, continue breaking it down or do a nerfed version of it first (e.g. doing the steps on grass, doing them on a flipped board, etc.).

Remember to drill foundational skills for widespread general benefits, much like how a pianist or singer practices scales.

Fresh vs tired

Doing something while mentally fresh (like in the morning, or in the beginning of a session) naturally allows you to use more of your conscious mind.

Doing things while mentally tired (like at the end of a tiring work day) naturally forces you to rely more on unconscious mind for application of skills.

Alcohol artificially inhibits your conscious mind (causing you to use more of your unconscious mind), so you might find that alcohol allows you to shoot better or have better game sense or “gut feeling” in a game you already know, but causes you to suck badly in a completely foreign game or task.

Caffeine on the other hand artificially boosts both your unconscious and conscious mind, allowing you to squeeze in some extra progress.

Key takeaways:

When you’re mentally fresh, your conscious mind dominates, so aim to learn new skills and find more details.

When you’re mentally tired, your unconscious mind dominates, so aim to solidify your existing skills.

Repetition

There are two goals to repetition.

Initial repetition with the conscious mind helps you understand the intricacies of something and become “able to do it”.

Further repetition trains your unconscious mind to take over tasks from the conscious mind, so you become “able to do it without thinking”.

Be wary when using repetition. Repetition eventually trains things into your unconscious mind, even if they are not done correctly. This negatively impacts learning and causes progress to plateau – “I attempted this 500 times, every day, but I’m still making very minimal progress”.

Key takeaways:

Do repetitions early in the session to learn new tricks, and focus conscious effort on building the right form.

Do repetitions later in the session to master new tricks, but avoid long-term repetitions with the wrong form.

Speed vs accuracy

Once a particular trick is possible (but not perfect), it is easy to forget about accuracy and jump straight to doing it fast.

However, it is important to remember that repetitions with bad form cause the bad form to stick, and doing it faster just makes it stick harder.

Unless a skill is inherently speed oriented, it is most important to focus on form and accuracy first, and only increase speed if you can do it while maintaining form and accuracy.

Form and accuracy come from conscious effort. Speed is naturally achieved through unconscious effort.

If needed, use a spotter, video recording or mirror to ensure that you are keeping up with or progressing towards the right form.

Key takeaways:

Focus on form and accuracy over speed unless the particular skill requires speed, and never intentionally increase speed while forgoing form and accuracy.

Have a spotter, or use video recording, to ensure that you are always using the correct form.

Bored vs surprised

When you repeatedly do something, you start to get bored. Your conscious mind disengages. As a result, you stop noticing details, leading to fewer improvements.

Repetitions done in boredom cause bad habits to stick if you’re not already doing the right thing.

You can repeatedly trick yourself into a surprised state by doing circuits. This allows you to learn more quickly.

When doing each set of repetitions, your conscious mind is usually more engaged at the beginning of the set, and it gradually disengages towards the last few reps.

Therefore, within each circuit, shorter sets help you learn, and longer sets help you remember.

Key takeaways:

To improve quickly and avoid bad habits, always shake things up to keep your mind engaged. Use circuits, rotations, challenges, etc. to “surprise” your mind, and add new tricks into your mix. Avoid letting yourself feel “bored”.

Within each circuit, use shorter sets to learn, and longer sets to remember.

Promises

Pushing your limits is important, but it is often very easy to cut yourself some slack and give up too early, especially when it gets tough.

Making promises helps to push you past your limits, and keeps you focused on achieving more than you normally would.

You can make yourself a promise by setting a target repetition count (“I’m going to do 100 of this trick now“), or better yet, a training schedule or strategy.

You can reinforce a promise by telling it to someone else, and it works especially well if you also tell that person when you’ve completed the promise.

An intermediate strategy, almost as effective as making a promise to someone else, would simply be to say or announce your repetitions verbally when you’re doing them.

Key takeaways:

To push yourself past your limits and keep yourself focused, declare what you want to do before you do it, count out loud verbally, and tell someone else exactly what you want to do before and after you do it.

Resting

It is common to be surprised by unexpected improvements whenever you go for a short break, or when you try something again the next day.

Neurons make new connections over time, even when resting, to prepare for the next encounter. Coupled with being fresh and surprised after a rest, you’ll usually get a boost in performance and learning rate.

In general, practicing 1 hour per day over 5 days yields greater results than practicing 5 hours in one day. Spreading a difficult trick over a few sets or days lets you improve more quickly than stagnating on it for an entire session. Effectively, we can combine the advantages of resting, surprise and freshness.

If you want to learn a manual, throw it in occasionally during your sessions, and do it often and across many days, instead of focusing entire sessions on the same manual.

Key takeaways:

Spread difficult tricks you want to learn throughout your session, and across many days. Mixing in 5 difficult skills that you want to learn into your sessions often yields better results than focusing on just one in an entire session.

Rewards

No matter how good or how fast of a learner you are, there will eventually be a time when it becomes a grind towards minuscule improvements.

Without something internally or externally motivating you, you’ll eventually dry out and stop.

Let’s be real here. Most skills and talents are meant to be shared, witnessed, or used competitively.

The easiest way to keep yourself on fire is to incorporate social aspects into your game – post videos of yourself on social media, have friends or communities that share the same interest, get your friends to play, compete in ladders and leagues, etc.

Key takeaways:

Unless you already have some really strong and permanent internal motivation for training, it is important to integrate social aspects into your game to keep yourself motivated.

Post on social media, be part of a group or community or create one, rope your friends in, compete in ladders and leagues, etc.

In all chaos there is a cosmos, in all disorder a secret order

Hundreds of vectors joined head-to-tail, each rotating at a constant rate, unknowingly drawing the symbol of chaos in a surprisingly orderly fashion. Explanation: The Fourier Series.

\[ f(t) = \sum^\infty_{n=-\infty} c_ne^{i\frac{2n\pi}{T} t} \] \[ c_0 = \frac{1}{T} \int^T_0 f(t) dt \] \[ c_n = \frac{1}{T} \int^T_0 f(t) e^{-i\frac{2n\pi}{T} t} dt \]

The Fourier Series

In the beginning, Fourier was trying to solve the Heat equation, which is a PDE describing how heat, with an initial distribution, propagates through a medium over time.

\[ \frac{\partial}{\partial t} T(x, t) = \alpha \frac{\partial^2}{\partial x^2} T(x, t) \]

The above equation says that:

The [rate of change over time of the heat at a particular point x in the medium] is proportional to the [rate of change over space of the heat gradient at that point x] – in other words, to the curvature of the heat distribution at x.

If you can find any T(x, t) that satisfies the above equation, as well as the initial and boundary conditions, then this T(x, t) describes the way the heat distribution changes over time.

To start off, Fourier noticed that if we have an absurd initial heat distribution that looks exactly like a cosine wave with amplitude h, we can show that it satisfies the Heat equation.

Let:

\[ T(x, 0) = h cos(x) \]

Then:

\[ \frac{\partial}{\partial t} T(x, 0) = \alpha \frac{\partial^2}{\partial x^2} T(x, 0) \] \[ = \alpha (- h cos(x)) = -\alpha T(x, 0) \]

The above equation says that:

The [rate of change over time of the heat at a particular point x in the medium] is proportional to the [heat at that point x in the medium], with a negative constant of proportionality.

This means that whatever cosine of arbitrary amplitude we start with, we’ll just get the same cosine with a smaller amplitude in the next timestep. This is recursively true for all t >= 0.

Therefore:

\[ \frac{\partial}{\partial t} T(x, t) = -\alpha T(x, t) \]

Note that the above simply describes exponential decay over time. The solution for the above is simply:

\[ T(x, t) = C e^{-\alpha t} = h cos(x) e^{-\alpha t} \]

But to really solve a PDE, we also need another ingredient in the recipe: the boundary conditions at the two ends of the medium, x = 0 and x = L.

To find these boundary conditions for a medium of length L, observe that the two ends are insulated – no heat flows across them. Since heat flux is proportional to the temperature gradient, the gradient at both ends must be zero.

Therefore:

\[ \frac{\partial}{\partial x} T(0, t) = \frac{\partial}{\partial x} T(L, t) = 0 \]

We need to verify that our function T(x, t) also satisfies the above.

\[ \frac{\partial}{\partial x} T(x, t) = -h sin(x)e^{-\alpha t} \] \[ \frac{\partial}{\partial x} T(0, t) = \frac{\partial}{\partial x} T(L, t) = 0 \] \[ \iff L = n\pi \text{ for any integer } n \]

Evidently, our function T(x, t) only satisfies the boundary condition for very specific medium lengths L where L is a multiple of π.

To get around this, we can scale our cosine function to have a period that matches any medium of length L:

\[ T(x, t) = h cos(\frac{2\pi}{L} x) e^{-(\frac{2\pi}{L})^2 \alpha t} \]

Notice that if we scale the cosine in this way, the second partial derivative with respect to x picks up an extra constant factor of (2π/L)². To ensure that the partial derivative with respect to t still matches, we also have to fold that factor into the exponential decay rate.

\[ \frac{\partial}{\partial t} T(x, t) = \alpha \frac{\partial^2}{\partial x^2} T(x, t) \] \[ = -\alpha h (\frac{2\pi}{L})^2 cos(\frac{2\pi}{L} x) e^{-(\frac{2\pi}{L})^2 \alpha t} \]

In fact, we can have infinitely many solutions that can satisfy the boundary conditions for a medium of length L. More generally, we can have:

\[ T(x, t) = h cos(\frac{n\pi}{L} x) e^{-(\frac{n\pi}{L})^2 \alpha t} \] \[ \text{ for any integer } n \]

With this, we actually have the solution describing ALL cosine-shaped initial heat distributions of amplitude h that fit a whole number of half-periods into a medium of length L, for any thermal diffusivity α.
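
If you don’t trust the algebra, here’s a quick finite-difference sanity check in Python (my own sketch, assuming numpy; not part of the original derivation): simulate the Heat equation with insulated ends and compare against the closed-form solution above.

```python
import numpy as np

alpha, h, L, n = 0.1, 1.0, 1.0, 2           # diffusivity, amplitude, medium length, mode number
nx = 201
x = np.linspace(0.0, L, nx)
dx = x[1] - x[0]
dt = 0.4 * dx**2 / alpha                    # explicit Euler is stable for alpha*dt/dx^2 <= 0.5

T = h * np.cos(n * np.pi * x / L)           # initial distribution T(x, 0)
t = 0.0
while t < 0.05:
    Tpad = np.concatenate(([T[1]], T, [T[-2]]))   # mirrored ghost points => zero gradient at the ends
    T = T + alpha * dt * (Tpad[2:] - 2 * Tpad[1:-1] + Tpad[:-2]) / dx**2
    t += dt

analytic = h * np.cos(n * np.pi * x / L) * np.exp(-((n * np.pi / L) ** 2) * alpha * t)
print("max |numeric - analytic| =", np.max(np.abs(T - analytic)))   # should be small
```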

So far, we can only solve heat distributions shaped like cosines with these very specific periods. How can we solve the Heat equation for an arbitrary initial distribution?

Fourier noticed that if we have two initial heat distributions T_1 and T_2:

\[ T_1(x, 0) = h_1 cos(\frac{\pi}{L} x) \] \[ T_2(x, 0) = h_2 cos(\frac{2\pi}{L} x) \]

We can make a third initial heat distribution T_3…:

\[ T_3(x, 0) = T_1(x, 0) + T_2(x, 0) \] \[ = h_1 cos(\frac{\pi}{L} x) + h_2 cos(\frac{2\pi}{L} x) \]

… and we immediately have the solution for it:

\[ T_3(x, t) = T_1(x, t) + T_2(x, t) \] \[ = h_1 cos(\frac{\pi}{L} x) e^{-(\frac{\pi}{L})^2 \alpha t} \] \[ + h_2 cos(\frac{2\pi}{L} x) e^{-(\frac{2\pi}{L})^2 \alpha t} \]

Due to linearity, any initial heat distribution described by a sum of these cosine waves with specific periods is solved by the sum of their individual solutions.

BUT… that’s not very useful yet.

At this point, Fourier asked a really absurd question: “If we can describe any heat distribution solely by summing arbitrarily many of these cosine waves of specific periods and arbitrary amplitudes, how can we write that down mathematically?”

The general solution of the Heat equation:

\[ T(x, t) = \sum^\infty_{n=0} a_n cos(\frac{n\pi}{L} x) e^{-(\frac{n\pi}{L})^2 \alpha t} \] \[ = a_0 cos(\frac{0\pi}{L} x) e^{-(\frac{0\pi}{L})^2 \alpha t} \] \[ + a_1 cos(\frac{1\pi}{L} x) e^{-(\frac{1\pi}{L})^2 \alpha t} \] \[ + a_2 cos(\frac{2\pi}{L} x) e^{-(\frac{2\pi}{L})^2 \alpha t} \] \[ + … \]

Of course, whether this could work was still unknown to him at that time. Besides, even if we knew the individual periods of the cosines that make up the solution, we still would not know the individual amplitudes a_n.

What we really want to find out first is how we can represent an initial condition with cosines, so we can simply ignore the t terms for now.

\[ T(x, 0) = \sum^\infty_{n=0} a_n cos(\frac{n\pi}{L} x) \] \[ = a_0 cos(\frac{0\pi}{L} x) \] \[ + a_1 cos(\frac{1\pi}{L} x) \] \[ + a_2 cos(\frac{2\pi}{L} x) \] \[ + … \]

Now, let’s think about what these a_n terms actually mean. In particular, we look at the first term a_0.

Clearly, the entire first term is a constant, since a_0 is a constant and cos(0) evaluates to 1. But what does it really mean?

If we think about what happens when t is very large, we can see that every term with n >= 1 decays to 0, leaving only the a_0 term remaining (i.e. in the long run, the heat at every point of the rod settles to the same constant value).

Since the rod’s insulated ends mean no heat is gained or lost, that final constant must equal the initial average. Therefore, a_0 is actually the average heat of the medium. Another way of expressing an average is through an integration:

\[ a_0 = \frac{1}{L} \int^L_0 T(x, 0) dx \]

Fourier observed that every other term in the summation integrates to zero over the length of the medium, since each of the chosen cosines completes a whole number of half-periods over [0, L] and its positive and negative areas cancel:

\[ a_0 = \frac{1}{L} \int^L_0 T(x, 0) dx \] \[ = \frac{1}{L} \int^L_0 \sum^\infty_{n=0} a_n cos(\frac{n\pi}{L} x) dx \] \[ = \frac{a_0}{L} \int^L_0 cos(\frac{0\pi}{L} x) dx \] \[ + \frac{a_1}{L} \int^L_0 cos(\frac{1\pi}{L} x) dx \] \[ + \frac{a_2}{L} \int^L_0 cos(\frac{2\pi}{L} x) dx \] \[ + … \] \[ = \frac{a_0}{L} \cdot L \] \[ + \frac{a_1}{L} \cdot 0 \] \[ + \frac{a_2}{L} \cdot 0 \] \[ + … \] \[ = a_0 \]

Applying Euler’s formula to convert our cosine into an exponential pair, we would expect our previous averaging formula to still hold:

\[ a_0 = \frac{1}{L} \int^L_0 T(x, 0) dx \] \[ = \frac{1}{L} \int^L_0 \sum^\infty_{n=0} a_n \frac{1}{2} (e^{i\frac{n\pi}{L} x} + e^{-i\frac{n\pi}{L} x}) dx \] \[ = \frac{a_0}{L} \int^L_0 \frac{1}{2} (e^{i\frac{0\pi}{L} x} + e^{-i\frac{0\pi}{L} x}) dx \] \[ + \frac{a_1}{L} \int^L_0 \frac{1}{2} (e^{i\frac{1\pi}{L} x} + e^{-i\frac{1\pi}{L} x}) dx \] \[ + \frac{a_2}{L} \int^L_0 \frac{1}{2} (e^{i\frac{2\pi}{L} x} + e^{-i\frac{2\pi}{L} x}) dx \] \[ + … \] \[ = \frac{a_0}{L} \cdot L \] \[ + \frac{a_1}{L} \cdot 0 \] \[ + \frac{a_2}{L} \cdot 0 \] \[ + … \] \[ = a_0 \]

Note also that each pair of exponentials are complex conjugates of each other – their imaginary parts cancel under the integral – so we can further simplify:

\[ a_0 = \frac{1}{L} \int^L_0 T(x, 0) dx \] \[ = \frac{1}{L} \int^L_0 \sum^\infty_{n=0} a_n \frac{1}{2} (e^{i\frac{n\pi}{L} x} + e^{-i\frac{n\pi}{L} x}) dx \] \[ = \frac{1}{L} \int^L_0 \sum^\infty_{n=0} a_n e^{i\frac{n\pi}{L} x} dx \] \[ = \frac{a_0}{L} \int^L_0 e^{i\frac{0\pi}{L} x} dx \] \[ + \frac{a_1}{L} \int^L_0 e^{i\frac{1\pi}{L} x} dx \] \[ + \frac{a_2}{L} \int^L_0 e^{i\frac{2\pi}{L} x} dx \] \[ + … \] \[ = \frac{a_0}{L} \cdot L \] \[ + \frac{a_1}{L} \cdot 0 \] \[ + \frac{a_2}{L} \cdot 0 \] \[ + … \] \[ = a_0 \]

Here comes the genius observation that Fourier made to allow us to find any arbitrary a_n. Fourier noticed that we can use a simple trick to kill off all coefficients except a_1 – by multiplying in an additional exponential term.

\[ a_1 = \frac{1}{L} \int^L_0 T(x, 0) e^{-i\frac{1\pi}{L} x} dx \] \[ = \frac{1}{L} \int^L_0 \sum^\infty_{n=0} a_n e^{i\frac{n\pi}{L} x} e^{-i\frac{1\pi}{L} x} dx \] \[ = \frac{a_0}{L} \int^L_0 e^{i\frac{0\pi}{L} x} e^{-i\frac{1\pi}{L} x} dx \] \[ + \frac{a_1}{L} \int^L_0 e^{i\frac{1\pi}{L} x} e^{-i\frac{1\pi}{L} x} dx \] \[ + \frac{a_2}{L} \int^L_0 e^{i\frac{2\pi}{L} x} e^{-i\frac{1\pi}{L} x} dx \] \[ + … \] \[ = \frac{a_0}{L} \int^L_0 e^{-i\frac{1\pi}{L} x} dx \] \[ + \frac{a_1}{L} \int^L_0 e^{i\frac{0\pi}{L} x} dx \] \[ + \frac{a_2}{L} \int^L_0 e^{i\frac{1\pi}{L} x} dx \] \[ + … \] \[ = \frac{a_0}{L} \cdot 0 \] \[ + \frac{a_1}{L} \cdot L \] \[ + \frac{a_2}{L} \cdot 0 \] \[ + … \] \[ = a_1 \]

Thus, we now have the general formula to find the coefficients for the Heat equation:

\[ a_n = \frac{1}{L} \int^L_0 T(x, 0) e^{-i\frac{n\pi}{L} x} dx \]

Therefore, the complete solution for solving the Heat equation is:

\[ T(x, t) = \sum^\infty_{n=0} a_n cos(\frac{n\pi}{L} x) e^{-(\frac{n\pi}{L})^2 \alpha t} \] \[ a_0 = \frac{1}{L} \int^L_0 T(x, 0) dx \] \[ a_n = \frac{1}{L} \int^L_0 T(x, 0) e^{-i\frac{n\pi}{L} x} dx \]

To generalize beyond the Heat equation to any periodic function f(t) of period T, we first introduce sine components into the series, each with its own coefficient b_n:

\[ f(t) = \sum^\infty_{n=0} (a_n cos(\frac{2n\pi}{T} t) \] \[ + b_n sin(\frac{2n\pi}{T} t)) \]

Once again applying Euler’s formula, we can rewrite this as:

\[ f(t) = \sum^\infty_{n=0} (a_n \frac{1}{2}(e^{i\frac{2n\pi}{T} t} + e^{-i\frac{2n\pi}{T} t}) \] \[ + b_n \frac{1}{2i}(e^{i\frac{2n\pi}{T} t} - e^{-i\frac{2n\pi}{T} t})) \] \[ = \sum^\infty_{n=0} (\frac{1}{2}(a_n - ib_n)e^{i\frac{2n\pi}{T} t} \] \[ + \frac{1}{2}(a_n + ib_n)e^{-i\frac{2n\pi}{T} t}) \] \[ = \sum^\infty_{n=0} \frac{1}{2}(a_n - ib_n)e^{i\frac{2n\pi}{T} t} \] \[ + \sum^{-1}_{n=-\infty} \frac{1}{2}(a_{-n} + ib_{-n})e^{i\frac{2n\pi}{T} t} \] \[ = \sum^\infty_{n=-\infty} c_ne^{i\frac{2n\pi}{T} t} \]

(In the second sum we re-indexed n → -n, so c_n = (a_n - ib_n)/2 for n >= 0 and c_n = (a_{-n} + ib_{-n})/2 for n < 0.)

The above is known as the exponential form of Fourier Series. To find c_n, the complex coefficient of each term, the same method used in finding a_n in the general Heat equation applies.

\[ f(t) = \sum^\infty_{n=-\infty} c_ne^{i\frac{2n\pi}{T} t} \] \[ c_0 = \frac{1}{T} \int^T_0 f(t) dt \] \[ c_n = \frac{1}{T} \int^T_0 f(t) e^{-i\frac{2n\pi}{T} t} dt \]

A layman / practical interpretation of the above is…

You can break down any periodic function f(t) with period T, no matter how complex, into a sum of rotating complex exponentials.

To find the coefficient c_n of each exponential term, multiply your original function by the corresponding negative (conjugate) exponential, integrate over one whole period, and divide by the period T.
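
Here’s a rough numerical check of that recipe in Python (my own sketch, assuming numpy): approximate the coefficients c_n of a square wave with a Riemann sum, then rebuild the wave from a truncated series.

```python
import numpy as np

T = 2.0
t = np.linspace(0.0, T, 4096, endpoint=False)
f = np.sign(np.sin(2 * np.pi * t / T))            # a square wave with period T
dt = t[1] - t[0]

N = 25                                            # harmonics kept on each side of n = 0
n = np.arange(-N, N + 1)
# c_n ~= (1/T) * sum of f(t) * exp(-i*2*pi*n*t/T) * dt over one period
c = np.array([(f * np.exp(-1j * 2 * np.pi * k * t / T)).sum() * dt / T for k in n])

# Rebuild f(t) from the truncated exponential series.
f_rec = sum(c[i] * np.exp(1j * 2 * np.pi * n[i] * t / T) for i in range(len(n))).real

err = np.abs(f - f_rec)
print("mean error:", err.mean())   # small everywhere except near the jumps (Gibbs ringing)
```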

To think that Fourier achieved this in 1822 is simply mind-blowing.

Euler’s Formula

Right. Just installed MathML block for WordPress because I want to talk about Fourier Series.

I’m just going to write a quick sidetrack required for the next post (and also for testing the MathML block).

Here’s Euler’s formula, published in 1748.

\[ e^{ix} = cos(x) + isin(x) \]

This can be visualized as a unit complex number that rotates counterclockwise around the origin of the complex plane as x increases.

We can reverse the direction of the rotation simply by doing:

\[ e^{-ix} = cos(x) - isin(x) \]

With this, we can easily represent any cosine as:

\[ cos(x) = \frac{1}{2} (e^{ix} + e^{-ix}) \]

Likewise, sines can be represented as such too:

\[ sin(x) = \frac{1}{2i} (e^{ix} - e^{-ix}) \]
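
A quick numerical sanity check of these identities, using Python’s built-in cmath (my own sketch):

```python
import cmath, math

x = 0.73
cos_diff = abs(math.cos(x) - 0.5 * (cmath.exp(1j * x) + cmath.exp(-1j * x)))
sin_diff = abs(math.sin(x) - (cmath.exp(1j * x) - cmath.exp(-1j * x)) / (2j))
print(cos_diff, sin_diff)   # both should be ~1e-16 (floating-point round-off)
```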