πŸ‘€thunderbongπŸ•‘6yπŸ”Ό403πŸ—¨οΈ44

(Replying to PARENT post)

πŸ‘€alberto_olπŸ•‘6yπŸ”Ό0πŸ—¨οΈ0

(Replying to PARENT post)

There are three really important things about the Fourier transform in my mind. Two are math and one is engineering.

- the Fourier Transform preserves energy (Parseval's theorem: the norm of the transformed function is the same as the norm of the original)

- there exists an inverse transform to get the original function back

- once you grasp that magnitude/phase describe patterns in the function you can gain powerful intuition about the transform and how to use it as an analytical and design tool.

Those first two properties tell us that the transform preserves information: it's just another way of looking at the same thing, letting you gain more insight without loss. The third is something not harped on enough in engineering courses, and failure to teach it is, in my mind, one reason so many people think controls/signal processing is black magic.
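
A minimal NumPy sketch of those first two properties (the signal and its length are arbitrary; the 1/N factor follows NumPy's unnormalized FFT convention):

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.standard_normal(1024)              # arbitrary real signal
X = np.fft.fft(x)                          # forward transform

# Parseval: time-domain energy equals frequency-domain energy
# (with NumPy's unnormalized FFT, divide the spectral sum by N).
energy_time = np.sum(np.abs(x) ** 2)
energy_freq = np.sum(np.abs(X) ** 2) / len(x)
print(np.allclose(energy_time, energy_freq))   # True

# Invertibility: the inverse transform recovers the original signal.
x_back = np.fft.ifft(X)
print(np.allclose(x, x_back))                  # True
```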

A big follow-up question here is: are there other transforms that preserve energy and have an inverse? The answer is yes, there are infinitely many. It's the third property that determines whether a transform is actually useful, which raises the question: which of those other transforms are useful?

An example of this is a cousin of the Fourier Transform called the Discrete Cosine Transform (DCT), which is critical in compression, classification, and machine learning (especially contemporary speech recognition). It's not as straightforward as Fourier, since the result isn't as obvious as breaking the energy down into patterns; instead it preserves the energy while representing it in decorrelated bins. The strongest of those bins carry the most important parts of the signal, which is why compression works by taking the DCT and tossing out the low-magnitude components (keeping the most important energy). It also shows why the DCT works for machine learning: it decomposes the input into an equivalent but decorrelated representation, so that inputs aren't shared across different parts of something like a neural net.
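
As a rough sketch of the "keep the strongest bins" idea (the test signal and the number of retained coefficients are arbitrary choices, using SciPy's orthonormal DCT-II):

```python
import numpy as np
from scipy.fft import dct, idct

# A smooth test signal: most of its energy lands in a handful of DCT bins.
n = np.arange(256)
x = np.cos(2 * np.pi * 3 * n / 256) + 0.5 * np.cos(2 * np.pi * 7 * n / 256)

X = dct(x, norm='ortho')                    # orthonormal DCT-II, energy-preserving

# "Compression": keep only the 16 largest-magnitude coefficients.
keep = np.argsort(np.abs(X))[-16:]
X_kept = np.zeros_like(X)
X_kept[keep] = X[keep]
x_approx = idct(X_kept, norm='ortho')

rel_err = np.linalg.norm(x - x_approx) / np.linalg.norm(x)
print(rel_err)                              # small: a few bins carry nearly all the energy
```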

There are other equally cool orthogonal transforms. I like the Hilbert transform myself, because it can extract really useful info like signal envelopes and can be used to make wacky noises, like a frequency shifter.
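
For example, a minimal envelope-extraction sketch with SciPy's analytic-signal helper (the carrier, envelope, and sample rate are made up):

```python
import numpy as np
from scipy.signal import hilbert

# An amplitude-modulated tone: a 50 Hz carrier under a slowly varying envelope.
fs = 1000.0
t = np.arange(0, 1.0, 1.0 / fs)
envelope_true = 1.0 + 0.5 * np.sin(2 * np.pi * 2 * t)
x = envelope_true * np.cos(2 * np.pi * 50 * t)

# The magnitude of the analytic signal recovers the envelope.
analytic = hilbert(x)
envelope_est = np.abs(analytic)

err = np.abs(envelope_true - envelope_est)
print(np.max(err[50:-50]))   # small, away from the edges of the record
```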

πŸ‘€holy_cityπŸ•‘6yπŸ”Ό0πŸ—¨οΈ0

(Replying to PARENT post)

It's good that they link to 3Blue1Brown's video as well. I was already decently well versed in DSP and information theory before I saw that video, and it still managed to teach me a new perspective on the Fourier transform.

These visualizations are very useful for those without an intuition built up. This is the exact way to think about things if you need to work in the frequency domain.

I do wish they mentioned phase. It's always glossed over when teaching the Fourier transform, but it is essential to describing a coherent signal.
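
One way to see this numerically (a minimal sketch; the test signal is arbitrary): keep the magnitude spectrum but randomize the phases, and the energy is unchanged while the waveform becomes unrecognizable.

```python
import numpy as np

rng = np.random.default_rng(1)
t = np.linspace(0, 1, 512, endpoint=False)
x = np.sign(np.sin(2 * np.pi * 5 * t))        # a square-ish test signal

X = np.fft.rfft(x)
magnitude = np.abs(X)

# Keep the magnitude spectrum, replace every phase with a random one
# (leaving the DC and Nyquist bins alone so the spectrum stays that of a real signal).
phase = rng.uniform(-np.pi, np.pi, size=magnitude.shape)
phase[0] = np.angle(X[0])
phase[-1] = np.angle(X[-1])
x_scrambled = np.fft.irfft(magnitude * np.exp(1j * phase), n=len(x))

# Same energy (Parseval), completely different waveform.
print(np.sum(x ** 2), np.sum(x_scrambled ** 2))   # nearly equal
print(np.max(np.abs(x - x_scrambled)))            # large
```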

πŸ‘€willis936πŸ•‘6yπŸ”Ό0πŸ—¨οΈ0

(Replying to PARENT post)

love the interactive demos.

one fundamental thing i always feel is missing from all these videos and articles about spinny circles set end to end with different phases and amplitudes is: why on earth do such configurations have the capacity to approximate any function you prescribe? to me this is the entire mystery behind fourier transforms. the spinny circles are kind of unusual to look at, but they do nothing to illuminate why fourier series converge, and for exactly this reason i think this meme analogy is not useful for beginners beyond entertainment.

of course the details for convergence of fourier series are the entire topic of classical harmonic analysis. one hand-wavy way to make sense of it is to first sample, then notice that the columns of the DFT matrix form a basis for the vector space of sampled signals. kind of similarly, the integral of sin(x) sin(nx) over [0, 2Ï€] is 0 for every integer n β‰  1, and the span of {sin nx, cos nx} is somehow dense in some function space. although that isn't really very illuminating, since to the uninformed it amounts to a numerical coincidence. every single article of this sort that i have seen falls flat in this respect, and i feel like this most interesting part has been obscured.
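
a quick numerical check of both of those hand-wavy facts (the sizes and grid here are arbitrary):

```python
import numpy as np

# 1) The N x N DFT matrix is, up to a scale factor, unitary: its columns form
#    an orthogonal basis for the space of length-N sampled signals.
N = 8
n = np.arange(N)
F = np.exp(-2j * np.pi * np.outer(n, n) / N)          # unnormalized DFT matrix
print(np.allclose(F.conj().T @ F, N * np.eye(N)))      # True

# 2) Orthogonality of the sines: the integral of sin(x)*sin(n*x) over [0, 2*pi]
#    is 0 for every integer n != 1.
x = np.linspace(0, 2 * np.pi, 10000, endpoint=False)
dx = x[1] - x[0]
for k in range(2, 6):
    print(k, np.sum(np.sin(x) * np.sin(k * x)) * dx)   # all ~0
```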

πŸ‘€enthdegreeπŸ•‘6yπŸ”Ό0πŸ—¨οΈ0

(Replying to PARENT post)

1. The only thing that matters in engineering is the discrete Fourier transform (DFT); that's what anyone who wants to calculate anything must use.
2. The fast Fourier transform (FFT) is the algorithm used to calculate the DFT. Cue up Matlab or LabView.
3. Knowing the units of the abscissa in sample and frequency space is next (see the sketch below).
4. Studying the sampling theorem so hard you can describe it rigorously in a number of different contexts is critical.
5. Knowing what aliasing and anti-aliasing are is critical.
6. Knowing how to use transforms to interpolate is next.
7. Knowing how to mess around with the real and imaginary parts of the complex DFT to improve the S/N is a good achievement.
8. Convolutional filters.
9. Smoothing data sets with non-causal filters.
10. What windowing functions are, and why to use them.

Have I left anything out?
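
As a small illustration of items 3 and 10 (a sketch in NumPy rather than the Matlab/LabView mentioned above; the sample rate and test tone are arbitrary):

```python
import numpy as np

fs = 1000.0                               # sample rate in Hz (assumed)
t = np.arange(0, 1.0, 1.0 / fs)           # 1 second of data
x = np.sin(2 * np.pi * 123.4 * t)         # a tone that falls between FFT bins

# Item 3: the frequency axis, in Hz, comes from the sample rate.
freqs = np.fft.rfftfreq(len(x), d=1.0 / fs)

# Item 10: a window trades a wider main lobe for far less spectral leakage.
X_rect = np.abs(np.fft.rfft(x))
X_hann = np.abs(np.fft.rfft(x * np.hanning(len(x))))

peak = np.argmax(X_rect)
print(freqs[peak])                        # ~123 Hz, the nearest bin
print(X_rect[peak + 50] / X_rect[peak])   # leakage 50 bins away, rectangular window
print(X_hann[peak + 50] / X_hann[peak])   # much smaller with a Hann window
```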

πŸ‘€aj7πŸ•‘6yπŸ”Ό0πŸ—¨οΈ0

(Replying to PARENT post)

This is very cool.

Tangent idea: it seems like the way we describe Earth's movement in space (rotation + orbit + precession) is akin to the image of the hand that draws itself; instead of describing the whole trajectory, we describe it in terms of "circular components" (rotation, orbit, and precession; yes, I'm aware the orbit is not perfectly circular).

The closest visualization of Earth's "full trajectory" in space, that I've been able to find, is a video on YouTube (https://youtu.be/0jHsq36_NTU), which unfortunately is a bit exaggerated and not very accurate.

Has anyone seen something better than the above?

πŸ‘€ta1234567890πŸ•‘6yπŸ”Ό0πŸ—¨οΈ0

(Replying to PARENT post)

Great job on the article! Really succinct and a good introduction to the topic. It would've really helped me in undergrad when I was first learning about the Fast Fourier Transform.
πŸ‘€pequalsnpπŸ•‘6yπŸ”Ό0πŸ—¨οΈ0

(Replying to PARENT post)

Two easy questions:

1. Metaphysically, is everything actually constructed from sine waves? Or is this totally arbitrary (e.g., is it just as easy to construct things from square waves)? (See the sketch below.)

2. Neural firing approximates sine waves in the brain (e.g., alpha waves). Is a single neuron firing a square wave with a small duty cycle, or something else altogether?
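
On question 1, a hedged sketch: square-wave-like (+1/-1) Walsh/Hadamard sequences are also mutually orthogonal, so a signal can be decomposed into and perfectly rebuilt from them too (the sizes here are arbitrary):

```python
import numpy as np
from scipy.linalg import hadamard

# The rows of a Hadamard matrix are square-wave-like (+1/-1) sequences that
# are mutually orthogonal, so they also form a complete basis.
N = 64
H = hadamard(N).astype(float)

rng = np.random.default_rng(0)
x = rng.standard_normal(N)        # arbitrary signal

coeffs = H @ x / N                # coefficients in the "square wave" basis
x_back = H.T @ coeffs             # perfect reconstruction from those coefficients
print(np.allclose(x, x_back))     # True
```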
πŸ‘€dr_dshivπŸ•‘6yπŸ”Ό0πŸ—¨οΈ0

(Replying to PARENT post)

Bug: If you move the harmonic-count slider to the left and redraw the wave, the slider moves to the right... If you press Play right after redrawing the wave, the harmonics are there. If you keep spamming Play as the right side of the image animates into a filtered wave, the quieter (not necessarily upper) harmonics gradually fade into silence.
πŸ‘€_fbptπŸ•‘6yπŸ”Ό0πŸ—¨οΈ0

(Replying to PARENT post)

The write-up is amazing, and so are the animated diagrams. I wonder how many weeks it must have taken to create.
πŸ‘€hi41πŸ•‘6yπŸ”Ό0πŸ—¨οΈ0

(Replying to PARENT post)

Well, this doesn't actually explain the FFT at all, but it could be interesting to use the circle-like visualization on speech.
πŸ‘€PavlikPajaπŸ•‘6yπŸ”Ό0πŸ—¨οΈ0

(Replying to PARENT post)

This is such a nice intro, really well done.
πŸ‘€ykevinatorπŸ•‘6yπŸ”Ό0πŸ—¨οΈ0

(Replying to PARENT post)

Best illustration ever for the saying, "Give me 19 parameters and I'll fit an Elephant!"
πŸ‘€gHostsπŸ•‘6yπŸ”Ό0πŸ—¨οΈ0

(Replying to PARENT post)

This is the best introduction to the FFT I've ever seen. I wish I had this when I was an undergrad.
πŸ‘€atarianπŸ•‘6yπŸ”Ό0πŸ—¨οΈ0

(Replying to PARENT post)

Thanks for this article. It has Fourier-transformed my brain waves!
πŸ‘€hongoπŸ•‘6yπŸ”Ό0πŸ—¨οΈ0

(Replying to PARENT post)

Small nitpick: you cannot perfectly reconstruct a square wave even with infinitely many Fourier coefficients, because of the Gibbs phenomenon (https://en.m.wikipedia.org/wiki/Gibbs_phenomenon): the overshoot near the discontinuities never goes away.
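
A minimal sketch of that overshoot (the term counts and grid are arbitrary): the maximum of the partial sums stays roughly 9% of the jump above the target value no matter how many terms you add.

```python
import numpy as np

def square_partial_sum(t, n_terms):
    """Partial Fourier series of a +-1 square wave (odd harmonics only)."""
    s = np.zeros_like(t)
    for k in range(1, 2 * n_terms, 2):
        s += (4 / np.pi) * np.sin(k * t) / k
    return s

# Maximum overshoot above the flat value 1, for more and more terms.
t = np.linspace(0, np.pi, 200001)
for n_terms in (10, 100, 1000):
    overshoot = np.max(square_partial_sum(t, n_terms)) - 1.0
    print(n_terms, overshoot)   # stays near 0.18 (~9% of the jump of 2), not shrinking to 0
```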
πŸ‘€catchmrbharathπŸ•‘6yπŸ”Ό0πŸ—¨οΈ0

(Replying to PARENT post)

One bit of pedantry to complain about: this (really clever) essay is really only describing Fourier decomposition, the idea that any function can be described in a vector space of sinusoidal basis functions. The Fast Fourier Transform is an algorithm that exploits some (also really clever, and obviously of critical practical impact) factorizations to compute that sum in N*log(N) time in the number of data points instead of the naive N^2 summation of products.
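
To make the distinction concrete, a hedged sketch comparing a direct O(N^2) evaluation of the DFT sum with NumPy's FFT (the input is arbitrary); they produce the same numbers at very different cost:

```python
import numpy as np

def naive_dft(x):
    """Direct O(N^2) evaluation of the DFT sum, via the DFT matrix."""
    N = len(x)
    n = np.arange(N)
    W = np.exp(-2j * np.pi * np.outer(n, n) / N)
    return W @ x

rng = np.random.default_rng(0)
x = rng.standard_normal(512)

# Same numbers, very different cost: O(N^2) vs O(N log N).
print(np.allclose(naive_dft(x), np.fft.fft(x)))   # True
```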
πŸ‘€ajrossπŸ•‘6yπŸ”Ό0πŸ—¨οΈ0