Note: This post is an excerpt from the forthcoming book, Deep Learning and Scientific Computing with R torch. The chapter in question is on the Discrete Fourier Transform (DFT), and is located in part three. Part three is dedicated to scientific computation beyond deep learning.
There are two chapters on the Fourier Transform. The first strives to, in as "verbal" and lucid a way as was possible to me, cast a light on what is behind the magic; it also shows how, surprisingly, you can code the DFT in just half a dozen lines. The second focuses on fast implementation (the Fast Fourier Transform, or FFT), again with both conceptual/explanatory as well as practical, code-it-yourself parts.
Together, these cover far more material than could sensibly fit into a blog post; therefore, please consider what follows more as a "teaser" than a fully fledged article.
In the sciences, the Fourier Transform is just about everywhere. Stated very generally, it converts data from one representation to another, without any loss of information (if done correctly, that is). If you use torch, it is just a function call away: torch_fft_fft() goes one way, torch_fft_ifft() the other. For the user, that's convenient: you "just" need to know how to interpret the results. Here, I want to help with that. We start with an example function call, play around with its output, and then try to get a grip on what is going on behind the scenes.
Understanding the output of torch_fft_fft()
As we care about actual understanding, we start from the simplest possible example signal: a pure cosine that performs one revolution over the complete sampling period.
Starting point: A cosine of frequency 1
The way we set things up, there will be sixty-four samples; the sampling period thus equals N = 64. The content of frequency(), the helper function used below to construct the signal, reflects how we represent the cosine. Namely:
\[
f(x) = \cos(\frac{2 \pi}{N} k x)
\]
Here the \(x\) values progress over time (or space), and \(k\) is the frequency index. A cosine is periodic with period \(2 \pi\); so if we want it to first return to its starting state after sixty-four samples, and \(x\) runs between 0 and sixty-three, we'll want \(k\) to be equal to \(1\). Like that, we'll reach the initial state again at position \(x = \frac{2 \pi}{64} * 1 * 64\).
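The setup and signal-construction code is not included in this excerpt. Here is a minimal sketch, consistent with the calls that follow; frequency(), N, sample_positions, and x are the names referenced below, while the choice of supporting libraries (ggplot2, patchwork, magrittr) is my assumption:

library(torch)
library(ggplot2)
library(patchwork) # for combining plots with `+` later on
library(magrittr)  # for the %>% pipe used below

N <- 64
sample_positions <- 0:(N - 1)

# angular increment per sample for frequency index k, given N samples
frequency <- function(k, N) {
  2 * pi / N * k
}

# a pure cosine performing one revolution over the sampling period
x <- torch_cos(frequency(1, N) * sample_positions)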
Let's quickly confirm this did what it was supposed to:
df <- data.frame(x = sample_positions, y = as.numeric(x))

ggplot(df, aes(x = x, y = y)) +
  geom_line() +
  xlab("time") +
  ylab("amplitude") +
  theme_minimal()

Now that we have the input signal, torch_fft_fft() computes for us the Fourier coefficients, that is, the importance of the various frequencies present in the signal. The number of frequencies considered will equal the number of sampling points: So \(X\) will be of length sixty-four as well.
(In our example, you'll notice that the second half of the coefficients will equal the first in magnitude. This is the case for every real-valued signal. In such cases, you could call torch_fft_rfft() instead, which yields "nicer" (in the sense of shorter) vectors to work with. Here though, I want to explain the general case, since that's what you'll find done in most expositions on the topic.)
Even with the signal being real, the Fourier coefficients are complex numbers. There are four ways to inspect them. The first is to extract the real part:
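The call itself, and the extraction of the real part, are not shown in the excerpt; a sketch, assuming the transform is stored in a tensor named Ft (the name used in the code below):

Ft <- torch_fft_fft(x)

real_part <- Ft$real
round(as.numeric(real_part), 5)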
[1] 0 32 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
[29] 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
[57] 0 0 0 0 0 0 0 32
Only a single coefficient is non-zero, the one at position 1. (We start counting from zero, and may discard the second half, as explained above.)
Now looking at the imaginary part, we find it is zero throughout:
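Again a sketch of the (elided) extraction, following the same pattern:

imag_part <- Ft$imag
round(as.numeric(imag_part), 5)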
[1] 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
[29] 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
[57] 0 0 0 0 0 0 0 0
At this point we know that there is just a single frequency present in the signal, namely, the one at \(k = 1\). This matches (and it better had to) the way we constructed the signal: namely, as performing a single revolution over the complete sampling period.
Since, in principle, every coefficient could have non-zero real and imaginary parts, often what you'd report is the magnitude (the square root of the sum of squared real and imaginary parts):
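In torch, the magnitude of a complex tensor is obtained with torch_abs(); a sketch:

magnitude <- torch_abs(Ft)
round(as.numeric(magnitude), 5)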
[1] 0 32 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
[29] 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
[57] 0 0 0 0 0 0 0 32
Unsurprisingly, these values exactly mirror the respective real parts.
Finally, there's the phase, indicating a possible shift of the signal (a pure cosine is unshifted). In torch, we have torch_angle() complementing torch_abs(), but we need to take into account roundoff error here. We know that in every but a single case, the real and imaginary parts are both exactly zero; but due to finite precision in how numbers are represented in a computer, the actual values will often not be zero. Instead, they'll be very small. If we take one of these "fake non-zeroes" and divide it by another, as happens in the angle calculation, big values can result. To prevent this from happening, our custom implementation rounds both inputs before triggering the division.
phase <- function(Ft, threshold = 1e5) {
  torch_atan2(
    torch_abs(torch_round(Ft$imag * threshold)),
    torch_abs(torch_round(Ft$real * threshold))
  )
}

as.numeric(phase(Ft)) %>% round(5)
[1] 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
[29] 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
[57] 0 0 0 0 0 0 0 0
As expected, there is no phase shift in the signal.
Let's visualize what we found.
create_plot <- function(x, y, quantity) {
  df <- data.frame(
    x_ = x,
    y_ = as.numeric(y) %>% round(5)
  )
  ggplot(df, aes(x = x_, y = y_)) +
    geom_col() +
    xlab("frequency") +
    ylab(quantity) +
    theme_minimal()
}

p_real <- create_plot(
  sample_positions,
  real_part,
  "real part"
)
p_imag <- create_plot(
  sample_positions,
  imag_part,
  "imaginary part"
)
p_magnitude <- create_plot(
  sample_positions,
  magnitude,
  "magnitude"
)
p_phase <- create_plot(
  sample_positions,
  phase(Ft),
  "phase"
)

p_real + p_imag + p_magnitude + p_phase

It's fair to say that we have no reason to doubt what torch_fft_fft() has done. But with a pure sinusoid like this, we can understand exactly what is going on by computing the DFT ourselves, by hand. Doing this now will significantly help us later, when we're writing the code.
Reconstructing the magic
One caveat about this section. With a topic as rich as the Fourier Transform, and an audience who I assume to vary widely on a dimension of math and sciences education, my chances of meeting your expectations, dear reader, must be very close to zero. Still, I want to take the risk. If you're an expert on these things, you'll anyway be just scanning the text, looking out for pieces of torch code. If you're somewhat familiar with the DFT, you may still like being reminded of its inner workings. And, most importantly, if you're rather new, or even completely new, to this topic, you'll hopefully take away (at least) one thing: that what seems like one of the greatest wonders of the universe (assuming there is a reality somehow corresponding to what goes on in our minds) may be a wonder, but neither "magic" nor a thing reserved to the initiated.
In a nutshell, the Fourier Transform is a basis transformation. In the case of the DFT (the Discrete Fourier Transform, where time and frequency representations both are finite vectors, not functions) the new basis looks like this:
\[
\begin{aligned}
&\mathbf{w}^{0n}_N = e^{i\frac{2 \pi}{N} * 0 * n} = 1 \\
&\mathbf{w}^{1n}_N = e^{i\frac{2 \pi}{N} * 1 * n} = e^{i\frac{2 \pi}{N} n} \\
&\mathbf{w}^{2n}_N = e^{i\frac{2 \pi}{N} * 2 * n} = e^{i\frac{2 \pi}{N} 2n} \\
&\dots \\
&\mathbf{w}^{(N-1)n}_N = e^{i\frac{2 \pi}{N} * (N-1) * n} = e^{i\frac{2 \pi}{N} (N-1)n}
\end{aligned}
\]
Here \(N\), as before, is the number of samples (64, in our case); thus, there are \(N\) basis vectors. With \(k\) running over the basis vectors, they can be written:
\[
\mathbf{w}^{kn}_N = e^{i\frac{2 \pi}{N} k n}
\] {#eq-dft-1}
Like \(k\), \(n\) runs from \(0\) to \(N-1\). To understand what these basis vectors are doing, it is helpful to temporarily switch to a shorter sampling period, say \(N = 4\). If we do so, we have four basis vectors: \(\mathbf{w}^{0n}_N\), \(\mathbf{w}^{1n}_N\), \(\mathbf{w}^{2n}_N\), and \(\mathbf{w}^{3n}_N\). The first one looks like this:
\[
\mathbf{w}^{0n}_N
=
\begin{bmatrix}
e^{i\frac{2 \pi}{4} * 0 * 0} \\
e^{i\frac{2 \pi}{4} * 0 * 1} \\
e^{i\frac{2 \pi}{4} * 0 * 2} \\
e^{i\frac{2 \pi}{4} * 0 * 3}
\end{bmatrix}
=
\begin{bmatrix}
1 \\
1 \\
1 \\
1
\end{bmatrix}
\]
The second, like so:
\[
\mathbf{w}^{1n}_N
=
\begin{bmatrix}
e^{i\frac{2 \pi}{4} * 1 * 0} \\
e^{i\frac{2 \pi}{4} * 1 * 1} \\
e^{i\frac{2 \pi}{4} * 1 * 2} \\
e^{i\frac{2 \pi}{4} * 1 * 3}
\end{bmatrix}
=
\begin{bmatrix}
1 \\
e^{i\frac{\pi}{2}} \\
e^{i \pi} \\
e^{i\frac{3 \pi}{2}}
\end{bmatrix}
=
\begin{bmatrix}
1 \\
i \\
-1 \\
-i
\end{bmatrix}
\]
This is the third:
\[
\mathbf{w}^{2n}_N
=
\begin{bmatrix}
e^{i\frac{2 \pi}{4} * 2 * 0} \\
e^{i\frac{2 \pi}{4} * 2 * 1} \\
e^{i\frac{2 \pi}{4} * 2 * 2} \\
e^{i\frac{2 \pi}{4} * 2 * 3}
\end{bmatrix}
=
\begin{bmatrix}
1 \\
e^{i \pi} \\
e^{i 2 \pi} \\
e^{i 3 \pi}
\end{bmatrix}
=
\begin{bmatrix}
1 \\
-1 \\
1 \\
-1
\end{bmatrix}
\]
And finally, the fourth:
\[
\mathbf{w}^{3n}_N
=
\begin{bmatrix}
e^{i\frac{2 \pi}{4} * 3 * 0} \\
e^{i\frac{2 \pi}{4} * 3 * 1} \\
e^{i\frac{2 \pi}{4} * 3 * 2} \\
e^{i\frac{2 \pi}{4} * 3 * 3}
\end{bmatrix}
=
\begin{bmatrix}
1 \\
e^{i\frac{3 \pi}{2}} \\
e^{i 3 \pi} \\
e^{i\frac{9 \pi}{2}}
\end{bmatrix}
=
\begin{bmatrix}
1 \\
-i \\
-1 \\
i
\end{bmatrix}
\]
We can characterize these four basis vectors in terms of their "speed": how fast they move around the unit circle. To do that, we simply look at the rightmost column vectors, where the final calculation results appear. The values in that column correspond to the positions pointed to by the revolving basis vector at different points in time. This means that by looking at a single "update of position", we can see how fast the vector is moving in a single time step.
Looking first at \(\mathbf{w}^{0n}_N\), we see that it does not move at all. \(\mathbf{w}^{1n}_N\) goes from \(1\) to \(i\) to \(-1\) to \(-i\); one more step, and it would be back where it started. That's one revolution in four steps, or a step size of \(\frac{\pi}{2}\). Then \(\mathbf{w}^{2n}_N\) goes at double that pace, moving a distance of \(\pi\) along the circle. That way, it ends up completing two revolutions overall. Finally, \(\mathbf{w}^{3n}_N\) achieves three complete loops, for a step size of \(\frac{3 \pi}{2}\).
The thing that makes these basis vectors so useful is that they are mutually orthogonal. That is, their dot product is zero:
\[
\langle \mathbf{w}^{kn}_N, \mathbf{w}^{ln}_N \rangle = \sum_{n=0}^{N-1} ({e^{i\frac{2 \pi}{N} k n}})^* \ e^{i\frac{2 \pi}{N} l n} = \sum_{n=0}^{N-1} ({e^{-i\frac{2 \pi}{N} k n}}) \ e^{i\frac{2 \pi}{N} l n} = 0
\] {#eq-dft-2}
Let's take, for example, \(\mathbf{w}^{2n}_N\) and \(\mathbf{w}^{3n}_N\). Indeed, their dot product evaluates to zero.
\[
\begin{bmatrix}
1 & -1 & 1 & -1
\end{bmatrix}
\begin{bmatrix}
1 \\
-i \\
-1 \\
i
\end{bmatrix}
=
1 + i + (-1) + (-i) = 0
\]
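We can also confirm this numerically. Below is a quick sketch in torch, expressing conjugation through a negative exponent, the same convention the dft() code further down will use (the variable names here are mine, not from the original code):

# the two basis vectors for N = 4, k = 2 (conjugated) and k = 3
N_small <- 4
n_small <- torch_arange(0, N_small - 1)

w2_conj <- torch_exp(-1i * 2 * pi / N_small * 2 * n_small)
w3 <- torch_exp(1i * 2 * pi / N_small * 3 * n_small)

# inner product: sum over element-wise products; (numerically) zero
torch_sum(w2_conj * w3)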
Now, we're about to see how the orthogonality of the Fourier basis significantly simplifies the calculation of the DFT. Did you notice the similarity between these basis vectors and the way we wrote the example signal? Here it is again:
\[
f(x) = \cos(\frac{2 \pi}{N} k x)
\]
If we manage to represent this function in terms of the basis vectors \(\mathbf{w}^{kn}_N = e^{i\frac{2 \pi}{N} k n}\), the inner product between the function and each basis vector will be either zero (the "default") or a multiple of one (in case the function has a component matching the basis vector in question). Luckily, sines and cosines can easily be converted into complex exponentials. In our example, this is how that goes:
\[
\begin{aligned}
\mathbf{x}_n &= \cos(\frac{2 \pi}{64} n) \\
&= \frac{1}{2} (e^{i\frac{2 \pi}{64} n} + e^{-i\frac{2 \pi}{64} n}) \\
&= \frac{1}{2} (e^{i\frac{2 \pi}{64} n} + e^{i\frac{2 \pi}{64} 63n}) \\
&= \frac{1}{2} (\mathbf{w}^{1n}_N + \mathbf{w}^{63n}_N)
\end{aligned}
\]
Here the first step directly results from Euler's formula, and the second reflects the fact that the Fourier coefficients are periodic, with frequency -1 being the same as 63, -2 equaling 62, and so on.
Now, the \(k\)th Fourier coefficient is obtained by projecting the signal onto basis vector \(k\).
Due to the orthogonality of the basis vectors, only two coefficients will be non-zero: those for \(\mathbf{w}^{1n}_N\) and \(\mathbf{w}^{63n}_N\). They are obtained by computing the inner product between the function and the basis vector in question, that is, by summing over \(n\). For every \(n\) ranging between \(0\) and \(N-1\), we have a contribution of \(\frac{1}{2}\), leaving us with a final sum of \(32\) for both coefficients. For example, for \(\mathbf{w}^{1n}_N\):
\[
\begin{aligned}
X_1 &= \langle \mathbf{w}^{1n}_N, \mathbf{x}_n \rangle \\
&= \langle \mathbf{w}^{1n}_N, \frac{1}{2} (\mathbf{w}^{1n}_N + \mathbf{w}^{63n}_N) \rangle \\
&= \frac{1}{2} * 64 \\
&= 32
\end{aligned}
\]
And analogously for \(X_{63}\).
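If you want to see this projection happen numerically, here is a sketch (again with my own naming; the conjugated basis vector is built the same way as in the dft() code further below):

# signal: one full revolution of a cosine over 64 samples
n <- torch_arange(0, 63)
x_n <- torch_cos(2 * pi / 64 * n)$to(dtype = torch_cfloat())

# conjugate of the basis vector for k = 1
w1_conj <- torch_exp(-1i * 2 * pi / 64 * n)

# projection: approximately 32 + 0i
torch_sum(w1_conj * x_n)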
Now, looking back at what torch_fft_fft() gave us, we see that we were able to arrive at the same result. And we've learned something along the way.
As long as we stick with signals composed of one or more basis vectors, we can compute the DFT in this way. At the end of the chapter, we'll develop code that will work for all signals, but first, let's see if we can dive even deeper into the workings of the DFT. Three things we'll want to explore:
- What would happen if frequencies changed, say, a melody were sung at a higher pitch?
- What about amplitude changes, say, the music were played twice as loud?
- What about phase, e.g., there were an offset before the piece started?
In all cases, we'll call torch_fft_fft() only after we've determined the result ourselves.
And finally, we'll see how more complex sinusoids, made up of different components, can still be analyzed in this way, provided they can be expressed in terms of the frequencies that make up the basis.
Varying frequency
Assume we quadrupled the frequency, giving us a signal that looked like this:
\[
\mathbf{x}_n = \cos(\frac{2 \pi}{N} * 4 * n)
\]
Following the same logic as above, we can express it like so:
\[
\mathbf{x}_n = \frac{1}{2} (\mathbf{w}^{4n}_N + \mathbf{w}^{60n}_N)
\]
We already see that non-zero coefficients will be obtained only for frequency indices \(4\) and \(60\). Picking the former, we obtain
\[
\begin{aligned}
X_4 &= \langle \mathbf{w}^{4n}_N, \mathbf{x}_n \rangle \\
&= \langle \mathbf{w}^{4n}_N, \frac{1}{2} (\mathbf{w}^{4n}_N + \mathbf{w}^{60n}_N) \rangle \\
&= 32
\end{aligned}
\]
For the latter, we'd arrive at the same result.
Now, let's make sure our analysis is correct. The following code snippet contains nothing new; it generates the signal, calculates the DFT, and plots them both, via a small plot_ft() helper.
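The excerpt truncates the definition of plot_ft(); a sketch that would produce the four-panel figures shown here, assuming the create_plot() and phase() helpers from above (and patchwork for combining the plots), might look like this:

plot_ft <- function(x) {
  Ft <- torch_fft_fft(x)

  p_real <- create_plot(sample_positions, Ft$real, "real part")
  p_imag <- create_plot(sample_positions, Ft$imag, "imaginary part")
  p_magnitude <- create_plot(sample_positions, torch_abs(Ft), "magnitude")
  p_phase <- create_plot(sample_positions, phase(Ft), "phase")

  p_real + p_imag + p_magnitude + p_phase
}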
x <- torch_cos(frequency(4, N) * sample_positions)
plot_ft(x)

This does indeed confirm our calculations.
A special case arises when the signal frequency rises to the highest one "allowed", in the sense of being detectable without aliasing. That will be the case at one half of the number of sampling points. Then, the signal will look like so:
\[
\mathbf{x}_n = \frac{1}{2} (\mathbf{w}^{32n}_N + \mathbf{w}^{32n}_N)
\]
As a consequence, we end up with a single coefficient, corresponding to a frequency of 32 revolutions per sampling period, of double the magnitude (thus, 64). Here are the signal and its DFT:
x <- torch_cos(frequency(32, N) * sample_positions)
plot_ft(x)

Varying amplitude
Now, let's think about what happens when we vary amplitude. For example, say the signal gets twice as loud. Now, there will be a multiplier of two that can be taken outside the inner product. As a result, the only thing that changes is the magnitude of the coefficients.
Let's verify this. The modification is based on the example we had before the very last one, with four revolutions over the sampling period:
x <- 2 * torch_cos(frequency(4, N) * sample_positions)
plot_ft(x)

So far, we have not once seen a coefficient with a non-zero imaginary part. To change this, we add in phase.
Adding phase
Changing the phase of a signal means shifting it in time. Our example signal is a cosine, a function whose value is 1 at \(t=0\). (That also happened to be the arbitrarily chosen starting point of the signal.)
Now assume we shift the signal forward by \(\frac{\pi}{2}\). Then the peak we were seeing at zero moves over to \(\frac{\pi}{2}\); and if we still start "recording" at zero, we should find a value of zero there. An equation describing this is the following. For convenience, we assume a sampling period of \(2 \pi\) and \(k=1\), so that the example is a plain cosine:
\[
f(x) = \cos(x - \phi)
\]
The minus sign may look unintuitive at first. But it does make sense: We now want to obtain a value of one at \(x=\frac{\pi}{2}\), so \(x - \phi\) needs to evaluate to zero. (Or to any multiple of \(2 \pi\).) Summing up, a delay in time will show up as a negative phase shift.
Now, we're going to calculate the DFT for a shifted version of our example signal. But if you like, take a peek at the phase-shifted version of the time-domain picture already now. You'll see that a cosine, delayed by \(\frac{\pi}{2}\), is nothing else than a sine starting at zero.
To compute the DFT, we follow our familiar-by-now strategy. The signal now looks like this:
\[
\mathbf{x}_n = \cos(\frac{2 \pi}{N} * 4 * n - \frac{\pi}{2})
\]
First, we express it in terms of the basis vectors:
\[
\begin{aligned}
\mathbf{x}_n &= \cos(\frac{2 \pi}{64} 4n - \frac{\pi}{2}) \\
&= \frac{1}{2} (e^{i (\frac{2 \pi}{64} 4n - \frac{\pi}{2})} + e^{-i (\frac{2 \pi}{64} 4n - \frac{\pi}{2})}) \\
&= \frac{1}{2} (e^{i\frac{2 \pi}{64} 4n} e^{-i \frac{\pi}{2}} + e^{i\frac{2 \pi}{64} 60n} e^{i\frac{\pi}{2}}) \\
&= \frac{1}{2} (e^{-i \frac{\pi}{2}} \mathbf{w}^{4n}_N + e^{i \frac{\pi}{2}} \mathbf{w}^{60n}_N)
\end{aligned}
\]
Again, we have non-zero coefficients only for frequencies \(4\) and \(60\). But they are complex now, and the two coefficients are no longer identical. Instead, one is the complex conjugate of the other. First, \(X_4\):
\[
\begin{aligned}
X_4 &= \langle \mathbf{w}^{4n}_N, \mathbf{x}_n \rangle \\
&= \langle \mathbf{w}^{4n}_N, \frac{1}{2} (e^{-i \frac{\pi}{2}} \mathbf{w}^{4n}_N + e^{i \frac{\pi}{2}} \mathbf{w}^{60n}_N) \rangle \\
&= 32 * e^{-i \frac{\pi}{2}} \\
&= -32i
\end{aligned}
\]
And here, \(X_{60}\):
\[
\begin{aligned}
X_{60} &= \langle \mathbf{w}^{60n}_N, \mathbf{x}_n \rangle \\
&= 32 * e^{i \frac{\pi}{2}} \\
&= 32i
\end{aligned}
\]
As usual, we check our calculation using torch_fft_fft().
x <- torch_cos(frequency(4, N) * sample_positions - pi / 2)
plot_ft(x)

For a pure sine wave, the non-zero Fourier coefficients are imaginary. The phase shift in the coefficients, reported as \(\frac{\pi}{2}\), reflects the time delay we applied to the signal.
Finally, before we write some code, let's put it all together and look at a wave that has more than a single sinusoidal component.
Superposition of sinusoids
The signal we construct can still be expressed in terms of the basis vectors, but it is no longer a pure sinusoid. Instead, it is a linear combination of such:
\[
\begin{aligned}
\mathbf{x}_n &= 3 \sin(\frac{2 \pi}{64} 4n) + 6 \cos(\frac{2 \pi}{64} 2n) + 2 \cos(\frac{2 \pi}{64} 8n)
\end{aligned}
\]
I won't go through the calculation in detail, but it is no different from the previous ones. You compute the DFT for each of the three components, and assemble the results. Without any calculation, however, there is quite a bit we can say:
- Since the signal consists of two pure cosines and one pure sine, there will be four coefficients with non-zero real parts, and two with non-zero imaginary parts. The latter will be complex conjugates of each other.
- From the way the signal is written, it is easy to determine the respective frequencies as well: The all-real coefficients will correspond to frequency indices 2, 8, 56, and 62; the all-imaginary ones, to indices 4 and 60.
- Finally, the amplitudes will result from multiplying by \(\frac{64}{2}\) the scaling factors obtained for the individual sinusoids.
Let's check:
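The code for this check is not shown in the excerpt; under the same conventions as before, it might look like this:

x <- 3 * torch_sin(frequency(4, N) * sample_positions) +
  6 * torch_cos(frequency(2, N) * sample_positions) +
  2 * torch_cos(frequency(8, N) * sample_positions)

plot_ft(x)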

Now, how do we calculate the DFT for less convenient signals?
Coding the DFT
Fortunately, we already know what has to be done. We want to project the signal onto each of the basis vectors. In other words, we'll be computing a bunch of inner products. Logic-wise, nothing changes: The only difference is that, in general, it will not be possible to represent the signal in terms of just a few basis vectors, like we did before. Thus, all projections will actually have to be calculated. But isn't automation of tedious tasks one thing we have computers for?
Let's start by stating input, output, and central logic of the algorithm to be implemented. As throughout this chapter, we stay in a single dimension. The input, thus, is a one-dimensional tensor, encoding a signal. The output is a one-dimensional vector of Fourier coefficients, of the same length as the input, each holding information about a single frequency. The central idea is: To obtain a coefficient, project the signal onto the corresponding basis vector.
To implement that idea, we need to create the basis vectors, and for each one, compute its inner product with the signal. This can be done in a loop. Surprisingly little code is required to accomplish the goal:
dft <- function(x) {
  n_samples <- length(x)

  # positions 0 ... N-1, as a row vector
  n <- torch_arange(0, n_samples - 1)$unsqueeze(1)

  # empty complex tensor to hold the coefficients
  Ft <- torch_complex(
    torch_zeros(n_samples), torch_zeros(n_samples)
  )

  # for each frequency index k, project the signal onto
  # the (conjugated) basis vector
  for (k in 0:(n_samples - 1)) {
    w_k <- torch_exp(-1i * 2 * pi / n_samples * k * n)
    dot <- torch_matmul(w_k, x$to(dtype = torch_cfloat()))
    Ft[k + 1] <- dot
  }
  Ft
}
To test the implementation, we can take the last signal we analysed and compare with the output of torch_fft_fft().
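The comparison code is elided in the excerpt; a sketch, assuming x still holds the superposition signal from above, and looking at real and imaginary parts separately:

Ft <- dft(x)

round(as.numeric(Ft$real), 2)
round(as.numeric(Ft$imag), 2)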
[1] 0 0 192 0 0 0 0 0 64 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
[29] 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
[57] 64 0 0 0 0 0 192 0
[1] 0 0 0 0 -96 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
[29] 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
[57] 0 0 0 0 96 0 0 0
Reassuringly, if you look back, the results are the same.
Above, did I say "little code"? In fact, a loop is not even needed. Instead of working with the basis vectors one by one, we can stack them in a matrix. Then each row will hold the conjugate of a basis vector, and there will be \(N\) of them. The columns correspond to positions \(0\) to \(N-1\); there will be \(N\) of them as well. For example, this is how the matrix would look for \(N=4\):
\[
\mathbf{W}_4
=
\begin{bmatrix}
e^{-i\frac{2 \pi}{4} * 0 * 0} & e^{-i\frac{2 \pi}{4} * 0 * 1} & e^{-i\frac{2 \pi}{4} * 0 * 2} & e^{-i\frac{2 \pi}{4} * 0 * 3} \\
e^{-i\frac{2 \pi}{4} * 1 * 0} & e^{-i\frac{2 \pi}{4} * 1 * 1} & e^{-i\frac{2 \pi}{4} * 1 * 2} & e^{-i\frac{2 \pi}{4} * 1 * 3} \\
e^{-i\frac{2 \pi}{4} * 2 * 0} & e^{-i\frac{2 \pi}{4} * 2 * 1} & e^{-i\frac{2 \pi}{4} * 2 * 2} & e^{-i\frac{2 \pi}{4} * 2 * 3} \\
e^{-i\frac{2 \pi}{4} * 3 * 0} & e^{-i\frac{2 \pi}{4} * 3 * 1} & e^{-i\frac{2 \pi}{4} * 3 * 2} & e^{-i\frac{2 \pi}{4} * 3 * 3}
\end{bmatrix}
\] {#eq-dft-3}
Or, evaluating the expressions:
\[
\mathbf{W}_4
=
\begin{bmatrix}
1 & 1 & 1 & 1 \\
1 & -i & -1 & i \\
1 & -1 & 1 & -1 \\
1 & i & -1 & -i
\end{bmatrix}
\]
With that modification, the code looks even more elegant:
dft_vec <- function(x) {
  n_samples <- length(x)

  # n: row vector of positions; k: column vector of frequency indices
  n <- torch_arange(0, n_samples - 1)$unsqueeze(1)
  k <- torch_arange(0, n_samples - 1)$unsqueeze(2)

  # matrix of (conjugated) basis vectors, one per row
  mat_k_m <- torch_exp(-1i * 2 * pi / n_samples * k * n)

  # all projections at once
  torch_matmul(mat_k_m, x$to(dtype = torch_cfloat()))
}
As you can easily verify, the result is the same.
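One way to spot-check this (a sketch; the generous tolerance accounts for floating-point error accumulating in the naive implementation):

Ft_vec <- dft_vec(x)
torch_allclose(Ft_vec, torch_fft_fft(x), atol = 1e-3)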
Thanks for reading!
Photo by Trac Vu on Unsplash