Discrete Frequency

Discrete-Time Signals and Systems

Luis F. Chaparro, in Signals and Systems using MATLAB, 2011

8.3 Periodicity of Sampled Signals—MATLAB

Consider an analog periodic sinusoid x(t) = cos(3πt + π/4) sampled with a sampling period T_s to obtain the discrete-time signal x[n] = x(t)|_{t = nT_s} = cos(3πT_s n + π/4).

(a)

Determine the discrete frequency of x[n].

(b)

Choose a value of T s for which the discrete-time signal x[n] is periodic. Use MATLAB to plot a few periods of x[n], and verify its periodicity.

(c)

Choose a value of T s for which the discrete-time signal x[n] is not periodic. Use MATLAB to plot x[n] and choose an appropriate length to show the signal is not periodic.

(d)

Determine under what condition on T_s the signal x[n] is periodic.
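Parts (b)–(d) can be checked numerically. The book's own code is MATLAB; the sketch below uses Python/NumPy instead, and the value T_s = 1/6 is one convenient choice, not prescribed by the text:

```python
import numpy as np

Ts = 1/6                             # sampling period chosen so that x[n] is periodic
w0 = 3*np.pi*Ts                      # discrete frequency: w0 = pi/2 = 2*pi*(1/4)
n = np.arange(24)                    # a few periods' worth of samples
x = np.cos(w0*n + np.pi/4)

# x[n] is periodic with period N if x[n + N] = x[n] for all n; here
# w0 = 2*pi*m/N with m = 1, N = 4, so the fundamental period is 4.
N = 4
periodic = np.allclose(np.cos(w0*(n + N) + np.pi/4), x)

# An irrational Ts, e.g. Ts = 1/pi, gives w0 = 3 rad: no integers m, N
# satisfy 3*N = 2*pi*m, so the sampled signal is not periodic.
```

In general x[n] is periodic exactly when 3πT_s = 2πm/N for integers m and N, i.e., when T_s is a rational multiple of 2/3.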


URL:

https://www.sciencedirect.com/science/article/pii/B9780123747167000120

Introduction to the Design of Discrete Filters

Luis F. Chaparro, in Signals and Systems using MATLAB, 2011

11.7 Warping Effect of the Bilinear Transformation—MATLAB

The nonlinear relation between the discrete frequency ω (rad) and the continuous frequency Ω (rad/s) in the bilinear transformation causes warping at high frequencies. To see this, consider the following:

(a)

Use MATLAB to design a Butterworth analog band-pass filter of order N = 12 with half-power frequencies Ω1 = 10 and Ω2 = 20 rad/s. Use the MATLAB function bilinear with K = 1 to transform the resulting filter into a discrete filter. Plot the magnitude and the phase of the discrete filter.

(b)

Increase the order of the filter to N = 14, keeping the other specifications the same. Design the analog band-pass filter and again use bilinear with K = 1 to transform it into a discrete filter. Plot the magnitude and the phase of the discrete filter. Explain your results.
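The workflow of parts (a) and (b) can be sketched with SciPy instead of MATLAB. A lower order is used here for numerical robustness in transfer-function form, and the correspondence K = 1 ↔ fs = 0.5 in scipy.signal.bilinear is an assumption about the book's convention:

```python
import numpy as np
from scipy import signal

# Analog Butterworth band-pass design (order kept low here; the exercise
# itself uses N = 12 and N = 14).
N = 4
w1, w2 = 10.0, 20.0                          # half-power frequencies, rad/s
b, a = signal.butter(N, [w1, w2], btype='bandpass', analog=True)

# MATLAB's bilinear with K = 1 uses s = (z - 1)/(z + 1); SciPy's bilinear
# uses s = 2*fs*(z - 1)/(z + 1), so K = 1 corresponds to fs = 0.5.
bz, az = signal.bilinear(b, a, fs=0.5)

# Discrete frequency response; the analog band center Omega0 = sqrt(w1*w2)
# maps to w = 2*atan(Omega0) under the warping relation Omega = tan(w/2).
w, h = signal.freqz(bz, az, worN=4096)
w0 = 2*np.arctan(np.sqrt(w1*w2))
mag0 = np.interp(w0, w, np.abs(h))           # should be ~1 at the warped center
```

Because both band edges map close to ω = π, the discrete pass band is strongly compressed, which is the warping effect the exercise asks you to observe.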


URL:

https://www.sciencedirect.com/science/article/pii/B9780123747167000156

Fourier Analysis of Discrete-Time Signals and Systems

Luis F. Chaparro, in Signals and Systems using MATLAB, 2011

Computation of the Fourier Series using MATLAB

Given that periodic signals only have discrete frequencies, and that the Fourier series coefficients are obtained using summations, the computation of the Fourier series can be implemented with a frequency-discretized version of the DTFT, that is, the DFT, which can be efficiently computed using the FFT algorithm. To illustrate this using MATLAB, consider the three different signals indicated in the following script and displayed in Figure 10.9.

Figure 10.9. Computation of Fourier series coefficients of different periodic signals: the corresponding magnitude line spectrum for each signal is shown on the right.

Each of the signals is periodic, and we consider 10 periods to compute their FFTs, which provide the Fourier series coefficients (only 3 periods are displayed in Figure 10.9). A very important point to remember is that one must input exactly one or more periods: the FFT length must be one period or a multiple of a period. Notice that the MATLAB function sign is used to generate a periodic train of pulses from the cosine. The need to divide by the number of periods used is discussed in Section 10.4.3.

%%%%%%%%%%%%%%%%%%%%%%%%%
% Fourier series using FFT
%%%%%%%%%%%%%%%%%%%%%%%%%
N = 10; M = 10; N1 = M*N; n = 0:N1-1;
x = cos(2*pi*n/N);    % sinusoid
x1 = sign(x);         % train of pulses
x2 = x - sign(x);     % sinusoid minus train of pulses
X = fft(x)/M; X1 = fft(x1)/M; X2 = fft(x2)/M;   % FFTs of the signals
X = X/N; X1 = X1/N; X2 = X2/N;                  % Fourier series coefficients
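The same computation can be rendered in Python/NumPy (a sketch, not the book's code). For the cosine, the coefficient at the first harmonic (array index M = 10) comes out as 0.5, as expected for cos with amplitude 1:

```python
import numpy as np

N, M = 10, 10                       # fundamental period and number of periods
n = np.arange(M*N)
x = np.cos(2*np.pi*n/N)             # sinusoid
x1 = np.sign(x)                     # train of pulses
x2 = x - x1                         # sinusoid minus train of pulses

# FFT over M exact periods, normalized by M*N, gives the Fourier series
# coefficients; nonharmonic bins (k not a multiple of M) are zero.
X, X1, X2 = (np.fft.fft(s)/(M*N) for s in (x, x1, x2))
```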


URL:

https://www.sciencedirect.com/science/article/pii/B9780123747167000144

Quantum Efficiency in Complex Systems, Part I: Biomolecular systems

P. Nalbach, M. Thorwart, in Semiconductors and Semimetals, 2010

1.1 Transport – incoherent hopping

Pigments or chromophores have electronic excitations with discrete frequencies in the range of visible light. Photons accordingly excite electrons of chromophore i from a ground state |g_i⟩ to an excited state |e_i⟩. Multiple simultaneous excitations can safely be neglected because the typical photon flux in the visible range on a sunny day is about 10²¹ photons/(m² s). With a cross section of about 1 Å², an individual chromophore absorbs about 10 photons per second. With transfer times on the order of 50 ps, simultaneous multiple excitations are extremely unlikely in nature. Under experimental laser illumination, however, multiple excitations are likely. Restricting our considerations to the electronic ground and excited states, we obtain the Hamiltonian

(2.1) H_ch,i = E_g,i |g_i⟩⟨g_i| + E_e,i |e_i⟩⟨e_i|,

with the ground (excited) state energy E g,i (E e,i ).
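The order-of-magnitude estimate above can be spelled out numerically (a sketch; the flux, cross section, and transfer time are the rough figures quoted in the text):

```python
flux = 1e21            # photons per m^2 per second (sunny day, visible range)
sigma = 1e-20          # chromophore cross section: 1 Angstrom^2 = 1e-20 m^2
rate = flux*sigma      # ~10 absorbed photons per second per chromophore

t_transfer = 50e-12    # excitation transfer time, ~50 ps
# Probability that a second photon arrives while an excitation is still
# present: ~5e-10, so double excitations are negligible in sunlight.
p_double = rate*t_transfer
```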

The estimate regarding multiple excitations holds as well for complexes of chromophores through which the excitation energy is to be transported to reach the reaction center. Accordingly, we can focus on the single-excitation subspace spanned by states |i⟩, where |i⟩ denotes the state with chromophore i in its excited state and all other chromophores in the complex in their ground states. Naturally, different chromophores i have different excitation or site energies E_i, and the site-energy Hamiltonian is given as

(2.2) H_ch = Σ_i E_i |i⟩⟨i|.

Each chromophore is embedded within a protein, which protects it from its immediate environment, namely the polar solvent (mainly water). The chromophore, as a chemical complex, has a characteristic vibrational spectrum, which is enlarged by the coupling to the embedding protein. Naturally, the electronic excitations couple to the vibrational spectrum of the molecule. Furthermore, fluctuations of the surrounding solvent, or of other nearby proteins, disturb the electronic excitation of the chromophore. Assuming that the complex is stable in both its ground and excited states, we can describe all vibrational degrees of freedom, to lowest order, by harmonic fluctuations with the Hamiltonian

(2.3) H_ev,i = (1/2) Σ_κ ( p_{i,κ}² + ω_{i,κ}² q_{i,κ}² ) + |e_i⟩⟨e_i| Σ_κ ν_κ^{(i)} q_{i,κ}

with momenta p_{i,κ}, displacements q_{i,κ}, frequencies ω_{i,κ}, and couplings ν_κ^{(i)} for chromophore i, the index κ running over all fluctuation modes. We have thereby defined the fluctuations around the positions of the atoms in their electronic ground state.

We have furthermore neglected fluctuations that generate transitions between the electronic ground and excited states, because there are no vibrations in resonance with the optical electronic transition, and thus these fluctuations have little effect (Weiss, 2008). The linewidth of the optical absorption is determined by the fluctuations of the transition energy, which we consider in the following. A more detailed discussion of the environmental fluctuation spectrum is given in Section 2.3. Restricting our consideration, as before, to the single-excitation subspace, the Hamiltonian describing the coupling to the environment becomes

(2.4) H_ev = (1/2) Σ_{i,κ} ( p_{i,κ}² + ω_{i,κ}² q_{i,κ}² ) + Σ_i |i⟩⟨i| Σ_κ ν_κ^{(i)} q_{i,κ}.

Excitation of a chromophore also modifies its dipole moment, and this changing dipole allows the chromophores to interact with each other. Thus, the Coulomb dipole–dipole coupling V_ij allows for a transfer of excitation (without electron transfer) from chromophore i to j, resulting in the interchromophore coupling Hamiltonian

(2.5) H_ch–ch = Σ_{i,j} V_ij ( |i⟩⟨j| + |j⟩⟨i| ).

The theoretical formulation for energy transfer through the dipole–dipole coupling, which is nowadays widely applied to photosynthetic systems, was provided by Theodor Förster in the 1940s (Förster, 1948). It holds when the chromophores are spatially rather distant, and thus weakly coupled, while the environmental fluctuations are rather strong. Formally, Förster's treatment also holds when the chromophores are very close and thus strongly coupled, with coherent exciton motion (Knox and Gülen, 1993). Förster solved the dynamical problem of the total Hamiltonian H = H_ch + H_ch–ch + H_ev by treating the electronic coupling H_ch–ch perturbatively up to second order, which results in energy transfer by incoherent hopping between the chromophores.

The Förster approach results in incoherent hopping by construction but, at the same time, this behavior fits expectations. Temperature and the associated thermal fluctuations are by themselves not necessarily enough to overdamp the microscopic quantum dynamics, because electronic couplings up to 100 cm⁻¹ ≃ 150 K·k_B are found in the FMO complex. Room temperature thus exceeds the electronic coupling scale only by a factor of two, which is generally not enough to overdamp the dynamics (Leggett et al., 1987; Weiss, 2008). From analyzing absorption linewidths, however, it is known that the coupling of the environmental fluctuations to the chromophores is rather strong or, in other words, that the reorganization energy is of the order of the electronic coupling itself. Under the commonly used Markov assumption, i.e., when the reorganization of the environment is faster (in other words, when the correlation function of the environmental fluctuations decays faster) than the electronic transitions, such a strong coupling to the environment is sufficient to overdamp the dynamics (Leggett et al., 1987; Weiss, 2008). Nevertheless, it is a longstanding question (Reineker, 1982) whether the transfer of excitonic energy can also have quantum-coherent features.


URL:

https://www.sciencedirect.com/science/article/pii/B978012375042600002X

Discrete Fourier Analysis

Luis F. Chaparro, Aydin Akan, in Signals and Systems Using MATLAB (Third Edition), 2019

11.4.3 Computation of the DFT via the FFT

Although the direct and the inverse DFT use discrete frequencies and summations, making them computationally feasible, there are still several issues that should be understood when computing these transforms. Assuming the given signal is of finite length, or made finite length by windowing, we have:

1.

Efficient computation with the Fast Fourier Transform or FFT algorithm—A very efficient computation of the DFT is done by means of the FFT algorithm, which takes advantage of some special characteristics of the DFT as we will discuss later. It should be understood that the FFT is not another transformation but an algorithm to efficiently compute DFTs. For now, we will consider the FFT a black box that for an input x [ n ] (or X [ k ] ) gives as output the DFT X [ k ] (or IDFT x [ n ] ).

2.

Causal aperiodic signals—If the given signal x[n] is causal of length N, i.e., the samples

{x[n], n = 0, 1, …, N − 1}

are given, we obtain {X[k], k = 0, 1, …, N − 1}, the DFT of x[n], by means of an FFT of length L = N. To compute an L > N DFT we simply attach L − N zeros at the end of the above sequence and obtain L values corresponding to the length-L DFT of x[n] (why this can be seen as a better version of the DFT of x[n] is discussed below under frequency resolution).
3.

Noncausal aperiodic signals—When the given signal x[n] is noncausal of length N, i.e., the samples

{x[n], n = −n₀, …, −1, 0, 1, …, N − n₀ − 1}

are given, we need to recall that a periodic extension x̃[n] of x[n] was used to obtain its DFT. This means that we need to create a sequence of N values corresponding to the first period of x̃[n], i.e.,

x[0], x[1], …, x[N − n₀ − 1] (causal samples), x[−n₀], x[−n₀ + 1], …, x[−1] (noncausal samples)

where, as indicated, the samples x[−n₀], x[−n₀ + 1], …, x[−1] are the values that make x[n] noncausal. If we wish zeros after x[N − n₀ − 1] to be considered part of the signal, so as to obtain a better DFT as we discuss later under frequency resolution, we simply attach them between the causal and the noncausal components, that is,

x[0], x[1], …, x[N − n₀ − 1] (causal samples), 0, 0, …, 0, x[−n₀], x[−n₀ + 1], …, x[−1] (noncausal samples)

to compute an L > N DFT of the noncausal signal. Representing the periodic extension x̃[n] circularly, rather than linearly, would clearly show the above sequences.
4.

Periodic signals—If the signal x [ n ] is periodic of fundamental period N we will then choose L = N (or a multiple of N) and calculate the DFT X [ k ] by means of the FFT algorithm. If we use a multiple of the fundamental period, e.g., L = M N for some integer M > 0 , we need to divide the obtained DFT by the value M. For periodic signals we cannot choose L to be anything but a multiple of N as we are really computing the Fourier series of the signal. Likewise, no zeros can be attached to a period (or periods when M > 1 ) to improve the frequency resolution of its DFT—by attaching zeros to a period we distort the signal.

5.

Frequency resolution—When the signal x[n] is periodic of fundamental period N, the DFT values are normalized Fourier series coefficients of x[n] that exist only at the harmonic frequencies {2πk/N}, as no frequency components exist at other frequencies. On the other hand, when x[n] is aperiodic, the number of possible frequencies depends on the length L chosen to compute its DFT. In either case, the frequencies at which we compute the DFT can be seen as frequencies around the unit circle in the Z-plane, and one would like a significant number of frequencies on the unit circle so as to visualize the frequency content of the signal well. The number of frequencies considered is related to the frequency resolution of the DFT of the signal.

If the signal is aperiodic we can improve the frequency resolution of its DFT by increasing the number of samples in the signal without distorting the signal. This can be done by padding the signal with zeros, i.e., attaching zeros to the end of the signal. These zeros do not change the frequency content of the signal (they can be considered part of the aperiodic signal), but permit us to increase the available frequency components of the signal displayed by the DFT.

On the other hand, the harmonic frequencies of a periodic signal of fundamental period N are fixed at 2πk/N for 0 ≤ k < N. In the case of periodic signals we cannot pad the given period with an arbitrary number of zeros, because such zeros are not part of the periodic signal. As an alternative, to increase the frequency resolution of a periodic signal we consider several periods. The DFT values, or normalized Fourier series coefficients, appear at the harmonic frequencies independently of the number of periods considered; considering several periods simply makes zero values appear at the frequencies in between the harmonics. To obtain the same values as for one period, it is necessary to divide the DFT values by the number of periods used.

6.

Frequency scales—When computing the N-length DFT of a signal x[n] of length N, we obtain a sequence of complex values X[k], k = 0, 1, …, N − 1. Since each value of k corresponds to a discrete frequency 2πk/N, the scale k = 0, 1, …, N − 1 can be converted into a discrete-frequency scale [0, 2π(N − 1)/N] (rad) (the last value is always smaller than 2π, to keep the periodicity in frequency of X[k]) by multiplying the integer scale {0 ≤ k ≤ N − 1} by 2π/N. Subtracting π from this frequency scale we obtain discrete frequencies [−π, π − 2π/N] (rad), where again the last frequency does not coincide with π in order to keep the 2π-periodicity of X[k]. Finally, to obtain a normalized discrete-frequency scale we divide the above scale by π, which gives the unitless normalized scale [−1, 1 − 2/N]. If the signal is the result of sampling and we wish to display the continuous-time frequency, we then use the relations, where T_s is the sampling period and f_s the sampling frequency:

(11.55) Ω = ω/T_s = ω f_s (rad/s)  or  f = ω/(2πT_s) = ω f_s/(2π) (Hz)

giving scales [−πf_s, πf_s] (rad/s) and [−f_s/2, f_s/2] (Hz), where according to the Nyquist sampling condition f_s ≥ 2f_max, with f_max the maximum frequency in the signal.
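The scale conversions described in this item can be sketched as follows (Python rather than the book's MATLAB; the sampling rate fs is an assumed illustrative value):

```python
import numpy as np

N = 8
fs = 1000.0                  # assumed sampling rate, Hz
k = np.arange(N)             # DFT bin indices 0, 1, ..., N-1
w = 2*np.pi*k/N              # rad scale [0, 2*pi*(N-1)/N]
w_c = w - np.pi              # centered rad scale [-pi, pi - 2*pi/N]
w_n = w_c/np.pi              # normalized unitless scale [-1, 1 - 2/N]
f = w_c*fs/(2*np.pi)         # Hz scale [-fs/2, fs/2 - fs/N]
```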

Example 11.23

Determine the DFT, via the FFT, of the causal signal

x[n] = sin(πn/32)(u[n] − u[n − 33])

and of its advanced version x₁[n] = x[n + 16]. To improve their frequency resolution, compute FFTs of length L = 512. Explain the difference between computing the FFTs of the causal and the noncausal signals.

Solution: As indicated above, when computing the FFT of a causal signal, the signal is simply input to the function. However, to improve the frequency resolution of the FFT we attach zeros to the signal. These zeros provide additional values of the frequency components of the signal, with no effect on its frequency content.

For the noncausal signal x₁[n] = x[n + 16], we need to recall that the DFT of an aperiodic signal was computed by extending the signal into a periodic signal with an arbitrary fundamental period L exceeding the length of the signal. Thus, the periodic extension of x[n + 16] can be obtained by creating an input array consisting of x₁[n] for n = 0, …, 16, followed by L − 33 zeros (L being the length of the FFT and 33 the length of the signal) to improve the frequency resolution, and then x₁[n] for n = −16, …, −1.

In either case, the output of the FFT is an array of L = 512 values. This array X[k], k = 0, …, L − 1, can be understood as values of the signal spectrum at frequencies 2πk/L, i.e., from 0 to 2π(L − 1)/L (rad). We can change this scale into other frequency scales: for instance, if we wish a scale with positive as well as negative frequencies, we subtract π from the above scale, and if we wish a normalized scale [−1, 1), we simply divide the previous scale by π. When shifting to a [−π, π) or [−1, 1) frequency scale, the spectrum also needs to be shifted accordingly; this is done with the MATLAB function fftshift. To understand this change, recall that X[k] is also periodic of fundamental period L.

The following script computes the DFT of x[n] and x[n + 16]. The results are shown in Fig. 11.14. □
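The script itself is not reproduced in this excerpt; the computation can be sketched in Python/NumPy as follows (the array layout for the noncausal signal follows the discussion above):

```python
import numpy as np

L = 512
n = np.arange(33)
x = np.sin(np.pi*n/32)                       # causal signal, length 33

# Causal signal: samples followed by L - 33 zeros
xc = np.concatenate([x, np.zeros(L - 33)])

# Noncausal x1[n] = x[n + 16]: causal part x1[0..16], then zeros, then the
# noncausal samples x1[-16..-1] at the end of the first period
x1 = np.concatenate([x[16:], np.zeros(L - 33), x[:16]])

X, X1 = np.fft.fft(xc), np.fft.fft(x1)
w = np.linspace(-np.pi, np.pi, L, endpoint=False)    # [-pi, pi) scale
Xs, X1s = np.fft.fftshift(X), np.fft.fftshift(X1)    # shifted spectra
```

Since the noncausal array is a circular shift of the causal one, the magnitude spectra coincide and only the phases differ, as Fig. 11.14 notes.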

Figure 11.14

Figure 11.14. Computation of the FFT of causal and noncausal signals. Notice that as expected the magnitude responses are equal, only the phase responses change.

Example 11.24

Consider improving the frequency resolution of the FFT of the periodic sampled signal

y(nT_s) = 4 cos(2πf₀nT_s) cos(2πf₁nT_s), f₀ = 100 Hz, f₁ = 4f₀,

where the sampling period is T_s = 1/(3f₁) s/sample.

Solution: In the case of a periodic signal, the frequency resolution of its FFT cannot be improved by attaching zeros. The length of the FFT must be the fundamental period or a multiple of it. The following script illustrates how the FFT of the given periodic signal can be obtained using 4 or 12 periods. As the number of periods increases, the harmonic components appear in each case at exactly the same frequencies, and only zeros in between these fixed harmonic frequencies result from increasing the number of periods. The magnitude of the FFT, however, grows with the number of periods; thus we need to divide by the number of periods used in computing the FFT.

Since the signal is sampled, it is of interest to have the frequency scale of the FFTs in Hz so we convert the discrete frequency ω (rad) into f (Hz) according to

f = ω 2 π T s = ω f s 2 π

where f_s = 1/T_s is the sampling rate in samples/s. The results are shown in Fig. 11.15. □
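The script is likewise not reproduced here; a Python/NumPy sketch of the 4- and 12-period computation, including the division by the number of periods and the Hz frequency scale, is:

```python
import numpy as np

f0, f1 = 100.0, 400.0
Ts = 1/(3*f1)                        # fs = 1200 Hz; fundamental period N = 12 samples

def fft_periods(M, N=12):
    """FFT over M exact periods, divided by the number of periods M."""
    n = np.arange(M*N)
    y = 4*np.cos(2*np.pi*f0*n*Ts)*np.cos(2*np.pi*f1*n*Ts)
    return np.fft.fft(y)/M

X4, X12 = fft_periods(4), fft_periods(12)
f4 = np.fft.fftfreq(4*12, d=Ts)      # Hz scale for the 4-period FFT
```

The harmonics of y (at 300 and 500 Hz) land on the same frequencies in both cases, with the same normalized values; the extra bins of the 12-period FFT are zeros in between.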

Figure 11.15

Figure 11.15. Computation of the FFT of a periodic signal using 4 and 12 periods to improve the frequency resolution of the FFT. Notice that both magnitude and phase responses look alike, but with 12 periods the spectra look sharper due to the increased number of frequency components displayed.


URL:

https://www.sciencedirect.com/science/article/pii/B9780128142042000223

Introduction to the Design of Discrete Filters

Luis F. Chaparro, Aydin Akan, in Signals and Systems Using MATLAB (Third Edition), 2019

Frequency Warping

A minor drawback of the bilinear transformation is the nonlinear relation between the analog and the discrete frequencies. This relation creates a warping that needs to be taken care of when specifying the analog filter from the discrete filter specifications.

The analog frequency Ω and the discrete frequency ω according to the bilinear transformation are related by

(12.21) Ω = K tan ( ω / 2 ) ,

which when plotted displays a linear relation at low frequencies but warps at higher frequencies (see Fig. 12.9).

Figure 12.9

Figure 12.9. Relation between Ω and ω for K = 1.

The relation between the frequencies is obtained by letting σ = 0 in the second equation in (12.20). The linear relationship at low frequencies can be seen using the series expansion of the tan(·) function:

Ω = K[ω/2 + ω³/24 + ⋯] ≈ ω/T_s

for small values of ω, i.e., ω ≈ ΩT_s. As the frequency increases, the terms beyond the first make the relation nonlinear. See Fig. 12.9.

To compensate for the non-linear relation between the frequencies, or the warping effect, the following steps to design a discrete filter are followed:

1.

Using the frequency-warping relation (12.21), the specified discrete frequencies ω_p and ω_st are transformed into specified analog frequencies Ω_p and Ω_st. The magnitude specifications remain the same in the different bands; only the frequency is being transformed.

2.

Using the specified analog frequencies and the discrete magnitude specifications, an analog filter H_N(s) that satisfies these specifications is designed.

3.

Applying the bilinear transformation to the designed filter H_N(s), the discrete filter H_N(z) that satisfies the discrete specifications is obtained.
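Step 1, the prewarping of the discrete specifications, can be sketched as follows (Python sketch; the band edges and K = 1 are hypothetical values for illustration):

```python
import numpy as np

# Prewarp the discrete band edges using Omega = K*tan(w/2), Eq. (12.21).
K = 1.0
wp, wst = 0.4*np.pi, 0.6*np.pi       # discrete passband/stopband edges (rad)
Omega_p = K*np.tan(wp/2)             # analog design frequencies for step 2
Omega_st = K*np.tan(wst/2)

# At low frequencies the map is nearly linear (Omega ~ K*w/2), so the
# warping matters mostly as w approaches pi.
```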


URL:

https://www.sciencedirect.com/science/article/pii/B9780128142042000235

Discrete-time Signals and Systems

Luis Chaparro, in Signals and Systems Using MATLAB (Second Edition), 2015

9.2.4.1 Discrete-time Sinusoids

Discrete-time sinusoids are a special case of the complex exponential: letting α = e^{jω₀} and A = |A|e^{jθ}, we have, according to Equation (9.16),

(9.17) x[n] = Aα^n = |A|e^{j(ω₀n + θ)} = |A| cos(ω₀n + θ) + j|A| sin(ω₀n + θ)

so the real part of x[n] is a cosine, while the imaginary part is a sine. As indicated before, discrete sinusoids of amplitude A and phase shift θ are periodic if they can be expressed as

(9.18) A cos(ω₀n + θ) = A sin(ω₀n + θ + π/2), −∞ < n < ∞,

where ω₀ = 2πm/N (rad) is the discrete frequency, for nondivisible integers m and N > 0. Otherwise, discrete-time sinusoids are not periodic.

Because ω is given in radians, it repeats periodically with fundamental period 2π; i.e., adding a multiple 2πk (k a positive or negative integer) to a discrete frequency ω₀ gives back the frequency ω₀:

(9.19) ω₀ + 2πk ≡ ω₀, k a positive or negative integer

To avoid this ambiguity, we let −π < ω ≤ π be the possible range of discrete frequencies and convert any frequency ω₁ outside this range using the modular representation. For a positive integer k, the two frequencies ω₁ = ω + 2πk and ω, where −π < ω ≤ π, are equal modulo 2π, which is written

ω₁ ≡ ω (mod 2π)

Thus, as shown in Figure 9.5, frequencies not in the range [−π, π) can be converted into equivalent discrete frequencies in that range. For instance, ω₀ = 2π can be written as ω₀ = 0 + 2π, so an equivalent frequency is 0; ω₀ = 7π/2 = (8 − 1)π/2 = 2 × 2π − π/2 is equivalent to the frequency −π/2. According to this (mod 2π) representation of the frequencies, the signal sin(3πn) is identical to sin(πn), and sin(1.5πn) is identical to sin(−0.5πn) = −sin(0.5πn).
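These (mod 2π) equivalences can be confirmed numerically at integer n (a Python sketch; the book's code is MATLAB):

```python
import numpy as np

n = np.arange(20)                     # sample indices (integers)

# 3*pi = pi + 2*pi and 1.5*pi = -0.5*pi + 2*pi (mod 2*pi)
assert np.allclose(np.cos(3*np.pi*n), np.cos(np.pi*n))
assert np.allclose(np.sin(1.5*np.pi*n), -np.sin(0.5*np.pi*n))

# 7*pi/2 = 2*(2*pi) - pi/2 is equivalent to -pi/2
assert np.allclose(np.cos(3.5*np.pi*n), np.cos(-0.5*np.pi*n))
```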

Example 9.14

Consider the following four sinusoids

x₁[n] = sin(0.1πn), x₂[n] = sin(0.2πn), x₃[n] = sin(0.6πn), x₄[n] = sin(0.7πn), −∞ < n < ∞

Determine whether they are periodic and, if so, their fundamental periods. Are these signals harmonically related? Use MATLAB to plot these signals for n = 0, …, 40. Comment on which of these signals resemble sampled analog sinusoids and indicate why some do not.

Solution

To find if they are periodic, we rewrite the given signals as

x₁[n] = sin(0.1πn) = sin(2πn/20), x₂[n] = sin(0.2πn) = sin(2π(2)n/20), x₃[n] = sin(0.6πn) = sin(2π(6)n/20), x₄[n] = sin(0.7πn) = sin(2π(7)n/20)

indicating that the signals are periodic with period 20 (for x₂[n] and x₃[n] the fundamental period is actually 10) and that their frequencies are harmonically related. When plotting these signals using MATLAB, the first two resemble analog sinusoids but the other two do not. See Figure 9.6.

One might think that x₃[n] and x₄[n] look like that because of aliasing, but that is not the case. To obtain cos(ω₀n) we could sample an analog sinusoid cos(Ω₀t) using a sampling period T_s = 1, which according to the Nyquist condition requires

T_s = 1 ≤ π/Ω₀

where π/Ω₀ is the maximum value permitted for the sampling period if aliasing is to be avoided. Thus x₃[n] = sin(0.6πn) = sin(0.6πt)|_{t = nT_s = n} when T_s = 1, and in this case

T_s = 1 ≤ π/(0.6π) ≈ 1.66.

Comparing this with x₂[n] = sin(0.2πn) = sin(0.2πt)|_{t = nT_s = n}, we get

T_s = 1 ≤ π/(0.2π) = 5.

Thus when obtaining x₂[n] from sin(0.2πt) we are oversampling more than when obtaining x₃[n] from sin(0.6πt) with the same sampling period; as such, x₂[n] resembles an analog sinusoid more than x₃[n] does, but no aliasing is present. Similarly for x₄[n]. □
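The periodicity claim of the example can be verified numerically (Python sketch; the book uses MATLAB):

```python
import numpy as np

n = np.arange(41)                    # n = 0, ..., 40 as in the example
signals = {w: np.sin(w*np.pi*n) for w in (0.1, 0.2, 0.6, 0.7)}

# Each frequency is 2*pi*m/20 with m = 1, 2, 6, 7, so 20 samples is a
# period of every signal: x[n + 20] == x[n].
periodic = all(np.allclose(np.sin(w*np.pi*(n + 20)), x)
               for w, x in signals.items())
```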

Remarks

1.

The discrete-time sine and cosine signals, as in the continuous-time case, are out of phase by π/2 radians.

2.

The discrete frequency ω is given in radians since n, the sample index, has no units. This can also be seen when we sample a sinusoid using a sampling period T_s, so that

cos(Ω₀t)|_{t = nT_s} = cos(Ω₀T_s n) = cos(ω₀n)

where we defined ω₀ = Ω₀T_s; since Ω₀ has units of rad/s and T_s has units of seconds, ω₀ has units of radians.
3.

The frequency Ω of analog sinusoids can vary from 0 (the dc frequency) to ∞. Discrete frequencies ω, as radian frequencies, can only vary from 0 to π. Negative frequencies are needed in the analysis of real-valued signals, thus −∞ < Ω < ∞ and −π < ω ≤ π. A discrete-time cosine of frequency 0 is constant for all n, and a discrete-time cosine of frequency π alternates between −1 and 1 from sample to sample, the largest variation possible for a discrete-time signal.

Figure 9.5. Discrete frequencies ω and their equivalent frequencies in mod 2π.

Figure 9.6. Periodic signals x i [n], i  =   1,2,3,4, given in Example 9.14.


URL:

https://www.sciencedirect.com/science/article/pii/B9780123948120000097

The Z-transform

Luis F. Chaparro, Aydin Akan, in Signals and Systems Using MATLAB (Third Edition), 2019

Abstract

The Z-transform provides a way to represent and process discrete-time signals and systems. In the polar Z-plane the radius is a damping factor and the angle corresponds to the discrete frequency in radians. The one-sided Z-transform is used to solve difference equations, which may result from discretizing ordinary differential equations, and it turns the implementation of the convolution sum into a multiplication of polynomials. In the discrete domain there are recursive as well as nonrecursive filters that can be represented and analyzed using the Z-transform. Modern control uses the state-variable representation, and the Z-transform is used to obtain system realizations and the solution of the system. MATLAB is used to find the direct and inverse Z-transforms. The analysis of two-dimensional signals and systems is aided by the two-dimensional Z-transform, which converts the convolution into a product of polynomials and makes algebraic methods for stability testing possible.


URL:

https://www.sciencedirect.com/science/article/pii/B9780128142042000211

High-performance embedded computing

João M.P. Cardoso, ... Pedro C. Diniz, in Embedded Computing for High Performance, 2017

2.6.2 Dynamic Voltage and Frequency Scaling

Dynamic voltage and frequency scaling (DVFS) is a technique that aims at reducing dynamic power consumption by dynamically adjusting the voltage and frequency of a CPU [33]. This technique exploits the fact that CPUs have discrete frequency and voltage settings, as previously described. These frequency/voltage settings depend on the CPU, and it is common to have ten or fewer clock frequencies available as operating points. Changing the CPU to a frequency/voltage pair (also known as a CPU frequency/voltage state) is accomplished by sequentially stepping up or down through adjacent pairs; it is not common to allow a processor to transition between two nonadjacent frequency/voltage pairs.

Measuring Power and Energy

Power dissipation can be monitored by measuring the current drawn from the power supply by the system or by each device. There are specific boards providing this kind of measurement, but this scheme requires access to the power rails to insert a shunt resistor between the V_cc supply and the device/system under measurement (note that P = V_cc × I_cc). This is typically a problem and is only useful in certain conditions or environments. Another possibility is to use pass-through power meters, such as the ones provided for USB interfaces.

Some computing systems provide built-in current sensors and the possibility to acquire, from the software side, the power dissipated. Examples are the support provided by the ODROID-XU3, which includes four current/voltage sensors to measure the power dissipation of the ARM big.LITTLE Cortex-A15 and Cortex-A7 cores, the GPU, and the DRAM individually, and the NVIDIA Management Library (NVML), which allows reporting the current power draw on some of their GPU cards.

By measuring the average current and knowing the voltage supply we can derive the average power dissipated and the energy consumed during a specific execution period.

A software power model based on hardware sensing is used in the Running Average Power Limit (RAPL) driver provided for Intel microprocessors since the Sandy Bridge microarchitecture. The measurements are collected via a model-specific register of the microprocessor.

Recent versions of platform-independent libraries such as the Performance API (PAPI) also include support for RAPL- and NVML-based power and energy readings, in addition to the runtime performance measurements based on the hardware counters of the microprocessor. Monitoring power in mobile devices can be done with specific support, such as that provided by PowerTutor in the context of Android-based mobile platforms.

One important aspect of monitoring power dissipation is the power sampling rate (i.e., the maximum rate at which power can be measured), which can be too low in some contexts/systems.

Finally, other possibilities for measuring power and energy are the use of power/energy models for a given platform and application, and/or the use of simulators capable of reporting estimates of the power dissipated.

Dynamic frequency scaling (DFS) and dynamic voltage scaling (DVS) are techniques to reduce power dissipation when the voltage and frequency ranges are not fully interdependent, i.e., when changes in clock frequency do not imply (up to a certain point) changes in the supply voltage and vice versa. Decreasing the clock frequency without changing the supply voltage (possibly keeping it at the level needed to operate at the maximum clock frequency) decreases power dissipation but may lead to insignificant changes in energy consumption (theoretically we would expect the same energy consumption). Decreasing the supply voltage without changing the operating frequency reduces both power and energy.
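The power and energy claims in this paragraph follow from the first-order dynamic-power model P = C·V²·f, a standard CMOS approximation not spelled out in this excerpt; a sketch with illustrative numbers:

```python
C = 1e-9                 # effective switched capacitance, F (illustrative)
cycles = 1e9             # work per task, in clock cycles (illustrative)

def power(V, f):
    return C * V**2 * f               # dynamic power, W

def energy(V, f):
    return power(V, f) * (cycles/f)   # energy per task = P * (cycles/f), J

# DFS: halving f at fixed V halves power but leaves energy per task unchanged.
# DVS/DVFS: lowering V as well cuts power cubically and energy quadratically.
```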

The DVFS technique can be seen as a combination of DFS and DVS in which the interdependence between supply voltage and operating frequency is managed globally. In CPUs where the voltage–frequency interdependence exists, however, DFS, DVS, and DVFS are often used with the same meaning, i.e., the dynamic scaling of voltage/frequency.


URL:

https://www.sciencedirect.com/science/article/pii/B9780128041895000028

Analytical Chemistry

Ulrich J. Krull, Michael Thompson, in Encyclopedia of Physical Science and Technology (Third Edition), 2003

III.A.3 Fourier Transform Analysis

One application of high-speed computers to data analysis is often found in spectrophotometric applications, such as infrared and nuclear magnetic resonance techniques. Samples can be irradiated with broad ranges of frequencies from the appropriate regions of the electromagnetic spectrum and will absorb certain discrete frequencies, depending on the sample chemistry. Each independent frequency that can be observed (resolved) in the range of energies employed can be represented as a sinusoidal oscillation. The simultaneous superposition of all the available frequencies produces both constructive and destructive interference, resulting in a well-defined complex waveform. Interaction of the sample with discrete frequencies alters the waveform, which then contains the analytical interaction information in the time "domain." This can be converted to a conventional frequency-domain spectrum by the fast Fourier transform algorithm, so that the individual frequencies making up the superimposed waveform can be identified and plotted in conventional formats. Data must be sampled and digitized at a rate at least twice the ratio of the range of frequencies encountered to the frequency resolution desired. The major advantage of this technique is that all frequencies are measured simultaneously, so a complete conventional spectrum can be constructed in seconds for any one measurement. Since these spectra are digitized and contain frequency reference information, it is possible to sum sequential spectra to improve the signal-to-noise ratio: signals increase linearly with spectral addition, while noise increases only as the square root of the number of spectra combined.
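The square-root noise-averaging claim can be illustrated with a short simulation (a sketch with synthetic data, not an instrument measurement):

```python
import numpy as np

rng = np.random.default_rng(0)
N = 100                              # number of co-added spectra
bins = 4096
spectrum = np.ones(bins)             # idealized flat spectrum, amplitude 1

noisy_sum = np.zeros(bins)
for _ in range(N):
    noisy_sum += spectrum + rng.normal(0.0, 1.0, bins)

# Signal grows linearly (to ~N = 100) while noise grows as sqrt(N) (to ~10),
# so co-adding N spectra improves the SNR by about sqrt(N).
snr = noisy_sum.mean()/noisy_sum.std()
```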


URL:

https://www.sciencedirect.com/science/article/pii/B0122274105000259