Hacker News

A little ELI5 for those who haven't had Laplace transforms at school, from someone who only had a Laplace 101 course, so for what it's worth: Laplace transforms allow you to convert differential equations into easier equations, and back: derivatives and integrals become multiplications and divisions. So you can take a differential equation, transform it into the Laplace domain, manipulate it, and convert it back. And that's cool because differential equations tend to appear everywhere, for instance to model springs, electrical circuits with caps and coils, the surface of a soap film on a metal ring, etc. A sibling is the z-transform, which is like the digital version. That one is used, for instance, to design digital audio filters. I'm sure some math wizards here can elaborate and correct me.
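To make the "differential equations become algebra" point concrete, here's a tiny sketch (my own toy example, not from the article): solve y' + y = 1 with y(0) = 0 by hand in the Laplace domain, then check the inverted answer numerically.

```python
import math

# Solving y' + y = 1 with y(0) = 0 via Laplace (done by hand):
#   s*Y(s) + Y(s) = 1/s  =>  Y(s) = 1/(s*(s+1)) = 1/s - 1/(s+1)
# Inverting term by term gives y(t) = 1 - e^(-t).
def y(t):
    return 1.0 - math.exp(-t)

# Check that the inverted answer really solves the ODE,
# using a central difference for y'(t).
h = 1e-6
for t in [0.5, 1.0, 2.0, 5.0]:
    dy = (y(t + h) - y(t - h)) / (2 * h)
    assert abs(dy + y(t) - 1.0) < 1e-6
print("y(t) = 1 - e^(-t) satisfies y' + y = 1")
```

The differential equation itself never gets "solved" in the usual sense; partial fractions plus a table lookup do all the work.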


I don't have any real understanding of the Laplace transform, but I understand the Fourier transform well enough that it makes sense to me. Back then, I saw a claim that the Laplace transform is a generalization of the Fourier transform, in the sense that it transforms a function not only to a space of frequencies and phases of sine waves, but to a larger space of parameters of exponentials. Note that the parameter space of the sine waves is a subset of the (complex) parameter space of exponentials.

Is this claim correct?


If you understand the Fourier transform well, then perhaps this viewpoint will help. The Fourier transform is 'just' a change of basis, with the basis being the sinusoidal functions. Why these? Well, because if we take a look at the discrete Fourier transform, the matrix that changes the basis is both unitary [which it has to be, as a change of basis] and Vandermonde. So we can think of it as both a 'change of basis' and an 'evaluation of a polynomial'. This is where most of the power of the Fourier transform comes from.
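If it helps, the unitary-and-Vandermonde claim can be checked directly for a small DFT matrix (a quick sketch of my own; N = 4 is an arbitrary choice):

```python
import cmath

N = 4
w = cmath.exp(-2j * cmath.pi / N)

# DFT matrix: row j is (1, w^j, w^(2j), ...) -- a Vandermonde row,
# so F @ a evaluates the polynomial a0 + a1*x + ... at x = w^j.
F = [[w ** (j * k) for k in range(N)] for j in range(N)]

# Unitarity (up to the usual 1/sqrt(N) normalisation): F F^H = N * I
for i in range(N):
    for j in range(N):
        s = sum(F[i][k] * F[j][k].conjugate() for k in range(N))
        expected = N if i == j else 0
        assert abs(s - expected) < 1e-9

# Vandermonde view: the DFT of coefficients a is polynomial evaluation.
a = [1, 2, 3, 4]
poly = lambda x: sum(c * x ** k for k, c in enumerate(a))
dft = [sum(a[k] * F[j][k] for k in range(N)) for j in range(N)]
assert all(abs(dft[j] - poly(w ** j)) < 1e-9 for j in range(N))
print("DFT matrix is (scaled) unitary and Vandermonde")
```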

Similarly, the Laplace transform is also a change of basis. But the basis it chooses is a very special one --- it's the eigenvectors of the differential operator. Note that

    d/dx(e^(ax)) = a e^(ax)
So e^(ax) is literally an eigenvector of `(d/dx)`. And as we all know, going to the eigenbasis of a given operator/linear transform/matrix makes it easier to manipulate. The Laplace transform is a change of basis that diagonalizes the differential operator. This makes it easy to solve differential equations.
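A quick numeric sanity check of the consequence, L{f'}(s) = s·F(s) - f(0), i.e. differentiation becomes multiplication by s (my own sketch; the crude trapezoid quadrature and the example f(t) = e^(-2t) are arbitrary choices):

```python
import math

def laplace(f, s, T=30.0, n=100_000):
    # Crude trapezoid approximation of the integral of f(t) e^(-st), t from 0 to T
    h = T / n
    total = 0.5 * (f(0.0) + f(T) * math.exp(-s * T))
    for i in range(1, n):
        t = i * h
        total += f(t) * math.exp(-s * t)
    return total * h

f = lambda t: math.exp(-2 * t)       # known transform: F(s) = 1/(s+2)
df = lambda t: -2 * math.exp(-2 * t)  # its derivative

s = 1.5
F = laplace(f, s)
assert abs(F - 1 / (s + 2)) < 1e-6
# Differentiation becomes multiplication by s (minus the initial value):
assert abs(laplace(df, s) - (s * F - f(0.0))) < 1e-6
print("L{f'}(s) = s F(s) - f(0) checks out")
```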


One can also think of transforms as expansions over the eigenfunctions of the continuous part of the spectrum of a differential operator. In differential equations (DE) theory, a well-posed DE has a structure (the DE itself), a domain, and enough independent boundary conditions.

The Fourier transform will show up for a harmonic oscillator on the whole real line with incoming and outgoing wave boundary conditions, while Laplace will show up when working on a semi-infinite interval with initial conditions and proper convergence at infinity.

These are the most common, but not the only transforms one can build. There are also the Mellin and Hankel transforms, and by playing with the operator, the domain, and the boundary conditions, we can construct the adequate transform for each given problem.

Spectral theory of DE’s is such a beautiful topic.


Do you have any good text book where this is included? Seems like something very interesting to read about formally.


Math 213 at uwaterloo has publicly available notes in spades: https://www.gabrielwong.net/notes/math213_notes.pdf#page9


I'm not sure this is what you're looking for, but the Feynman lectures give the basic idea in an elementary way: https://www.feynmanlectures.caltech.edu/I_23.html

> This idea of using exponentials in linear differential equations is almost as great as the invention of logarithms, in which multiplication is replaced by addition. Here differentiation is replaced by multiplication. . . . See how simple it is! Differential equations are immediately converted, by sight, into mere algebraic equations


Haven't read it but this currently-free Springer textbook covers DEs, Laplace and Fourier transformations, "Differential Equations and Their Applications": https://link.springer.com/book/10.1007%2F978-1-4612-4360-1


search for "Introduction to Signals & Systems"


Is the Laplace eigenbasis considered a basis in a relaxed sense that permits non-orthogonality or does the notion of orthogonality itself change to counteract the apparent redundancy?


Bases only need to be linearly independent and spanning; non-orthogonality is already permitted. Not all bases are orthogonal.


Right, thanks, it's been too long.


This is an excellent explanation.


Exactly what I needed, thanks!


Yes. The Fourier transform characterizes a signal on a circle (usually the unit circle in the complex domain) at different frequencies. It is properly defined for periodic signals.

The Laplace transform takes any exponential spiral in the complex plane, and reduces to the Fourier transform if you only care about the unit circle.

I appreciate that this doesn't make things clearer unless you already have some understanding of integral transforms in the complex plane (in which case, you probably know this already). However, I have never come across a simple intuitive explanation of the Laplace transform, and in fact no meaningful explanation that doesn't involve integrals.


With Fourier you can analyze the oscillatory characteristics of a function (frequency and phase). With Laplace you can also analyze amplification/attenuation.


The Fourier transform, well, transforms a time-based phenomenon such as an alternating current sine wave into a frequency spectrum where you can observe the frequency spectrum components of the signal. A pure nice sine becomes a spike (delta function) located at a specific frequency. Music, as we observe it through our ears and can view it on an oscilloscope becomes moving spikes (lots of them :) in the frequency spectrum where the "amplitude" at a given frequency relates to the "amount" of that frequency in the music.

The behaviour of a filter is much easier to describe in the frequency spectral domain than it would be in the time domain.

Now to the direct current (DC) view. This cannot be handled by the Fourier transform -- at least the DC-part of the signal cannot be transformed to the frequency domain. As shown in the article, there were "steps", "ramps" and such. A typical scenario would be to describe what happens in your amplifier during startup, to describe how electrical circuits are behaving during startup before reaching the "running" state.

The Laplace transform will handle these types of scenarios, and can thus be used to study (or describe) systems during other types of transitions than the "steady state" when you are up and running.

Regarding filters, the Fourier transform describes things going on at the unit circle, while the Laplace transform can be used to study both the interior and exterior of the plane. In this sense, creating filters relates to locating "poles" and "zeros" in the plane (amplification and attenuation), whose effect can be observed on the unit circle as the behaviour on periodic signals.


Why can DC not be Fourier transformed? Usually f(p = 0) is the DC component.


I recall from some undergraduate classes I took (in mechanical engineering) that you can "convert" between a Laplace transform and a Fourier transform by saying that s = i * omega. This has some appeal from a purely algebraic standpoint but doesn't seem rigorous to me. For one, the limits of the integral aren't the same. How valid is this? I always assumed it was an approximation.

Is the Laplace transform in some sense similar to a one-sided/semi-infinite Fourier transform, provided that change of variables is made?

Years ago in a complex analysis class I worked out the contour integration for a few Fourier transforms as I recall, but I've had no similar training for the Laplace transform and have forgotten many details.


Yeah, they're only equal when your f(t) = 0 for all t < 0. Otherwise they can be quite different, because the integral limits differ. In an undergrad engineering context you're often evaluating the response of a system to some input and it's common to have "at t=0 the switch is closed/mass is released/etc." where the function is assumed to have been 0 previously.
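A numeric sketch of that equivalence, using the causal signal f(t) = e^(-t) for t > 0, zero before (my own arbitrary choice), where both transforms should come out to 1/(1 + iω):

```python
import cmath, math

def trapezoid(g, a, b, n=60_000):
    # trapezoid rule for a complex-valued integrand
    h = (b - a) / n
    total = 0.5 * (g(a) + g(b))
    for i in range(1, n):
        total += g(a + i * h)
    return total * h

def f(t):
    # causal: e^(-t) after the "switch closes" at t = 0, zero before
    if t < -1e-9:
        return 0.0
    if t < 1e-9:
        return 0.5  # midpoint value at the jump (keeps the quadrature honest)
    return math.exp(-t)

w = 2.0  # some angular frequency

# Two-sided Fourier integral; the t < 0 half contributes nothing.
fourier = trapezoid(lambda t: f(t) * cmath.exp(-1j * w * t), -30.0, 30.0)

# One-sided Laplace integral evaluated on the imaginary axis, s = i*w.
laplace = trapezoid(lambda t: cmath.exp(-(1 + 1j * w) * t), 0.0, 30.0)

assert abs(fourier - laplace) < 1e-4
assert abs(fourier - 1 / (1 + 1j * w)) < 1e-4  # analytic answer
print("Fourier transform = Laplace at s = iw for this causal f")
```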


Seems obvious in retrospect but wasn't obvious to me. Thanks for the insight!


For a little more intuition about why you can do that replacement: Laplace is a representation of a system as a sum of sinusoids * exponentials, which are your two axes in the Laplace plane: frequency on the iw axis and exponential growth/decay on the a axis. If you think of that replacement as s = iw + a with a = 0, you'll see the exponential terms go away and you're left with just the sinusoidal parts:

  f(t) * e^(-(iw + a)t)
  = f(t) * e^(-iwt) * e^(-at)
  = f(t) * e^(-iwt) * e^(-0t)
  = f(t) * e^(-iwt)
integrated over time, which is your Fourier transform, subject to the condition above. It's just the Laplace transform along the imaginary axis, or, the frequency response at steady state, when nothing is growing or decaying exponentially.


I believe you're confusing discrete and continuous time. In continuous time, Fourier is Laplace evaluated along s = jw (the vertical axis), not along e^jw, the unit circle.


You are correct. I mixed how I tend to visualize and think of them with what actually happens....


Please check this video presentation for a simple overview of the Laplace Transform [1]. Basically the Fourier Transform (FT) is a special case (subset) of the Laplace Transform (LT) where the signal waveform revolves around the unit circle (purely imaginary exponents). Similarly, the LT is basically a generalization (superset) of the FT in which the signal waveforms may revolve off the unit circle (fully complex exponents).

The discrete FT, or DFT, as the name clearly implies, is the discrete version of the FT, and similarly the discrete Laplace Transform (DLT) is the discrete version of the LT. The main difference is that the DFT is a finite sum while the DLT is an infinite sum.

The faster version of the DFT (without compromising the resolution accuracy) is called the FFT, and it is probably the most useful and important algorithm of the 20th century! The inverse FFT is called the IFFT and was discovered around the same time as the FFT. The faster version of the DLT is, interestingly, called the Chirp-Z Transform (CZT), and somehow its inverse (ICZT) was only discovered much later, as has been reported recently [2] and also featured on HN [3]. This much later discovery is mainly due to the complexity of complex power exponents (pardon the pun, but I cannot resist).
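For the curious, the FFT/DFT relationship fits in a few lines of Python (a toy radix-2 Cooley-Tukey sketch of my own, nothing production-grade):

```python
import cmath

def dft(x):
    # naive O(N^2) discrete Fourier transform
    N = len(x)
    return [sum(x[n] * cmath.exp(-2j * cmath.pi * k * n / N) for n in range(N))
            for k in range(N)]

def fft(x):
    # radix-2 Cooley-Tukey, O(N log N); N must be a power of two
    N = len(x)
    if N == 1:
        return list(x)
    even, odd = fft(x[0::2]), fft(x[1::2])
    twiddle = [cmath.exp(-2j * cmath.pi * k / N) for k in range(N // 2)]
    return ([even[k] + twiddle[k] * odd[k] for k in range(N // 2)] +
            [even[k] - twiddle[k] * odd[k] for k in range(N // 2)])

x = [1.0, 2.0, 0.0, -1.0, 1.5, 0.5, -2.0, 3.0]
assert all(abs(a - b) < 1e-9 for a, b in zip(fft(x), dft(x)))
print("FFT matches the naive DFT")
```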

Fun fact: the CZT was discovered by Lawrence Rabiner, who was working at AT&T's speech processing lab (SPL) [4]. The lab was so well funded that Kernighan and Ritchie, who belonged to another lab, had to scrape by with the SPL's older computer (the famous PDP-7), which is where Unix was originally developed when the Multics project got canceled.

[1]https://youtu.be/n2y7n6jw5d0

[2]https://www.electronicsweekly.com/news/research-news/dsp-inv...

[3]https://news.ycombinator.com/item?id=21230757

[4]https://ieeexplore.ieee.org/document/1276120


Sounds right to my (very very rusty) recollection. Laplace transforms are a magic trick that let you easily solve some kinds of differential equations.


>> Laplace transforms are a magic trick that let you easily solve some kinds of differential equations.

To mathematicians I don't think they're so much magic. When I took a differential equations class, it was frustrating that it went too fast for me to fully digest what was "really" going on. It didn't feel out of reach, but it was something I needed to look at a couple of different ways, and I didn't have the time (or the internet) to do so. Think I'm gonna check out 3blue1brown after this - he can probably close that gap for me.


> Think I'm gonna checkout 3blue1brown after this - he can probably close that gap for me.

You might like this lecture from MIT's OCW: [1]. It's my favorite source for motivating the Laplace transform. It's a bit difficult to make this concept "simple", and this resource assumes that you already have some familiarity with the following concepts: infinite series, power series, radius of convergence, and (improper) integration.

The tl;dw is that the Laplace transform is a generalization of a power series.

[1] https://www.youtube.com/watch?v=sZ2qulI6GEk

Edit: I also wrote up a form of this video elsewhere if anyone's interested. It's kinda long though, and I didn't want to spam this thread with it.
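The power-series view can be sketched numerically: take coefficients a_n = 1, whose power series sums to 1/(1-x); the continuous analogue with x = e^(-s) becomes the Laplace transform of f(t) = 1, namely 1/s (a sketch of my own; s = 0.7 is an arbitrary choice):

```python
import math

# Discrete: sum over n >= 0 of x^n = 1/(1-x) for |x| < 1.
# Continuous analogue: integral of x^t dt from 0 to infinity = 1/(-ln x).
# Writing x = e^(-s) turns that into the integral of e^(-st) dt = 1/s:
# the Laplace transform of f(t) = 1, i.e. a "continuous power series".
s = 0.7
x = math.exp(-s)

series = sum(x ** n for n in range(1000))
assert abs(series - 1 / (1 - x)) < 1e-9

# crude Riemann sum for the continuous version
dt = 1e-4
integral = sum(x ** (k * dt) for k in range(int(60 / dt))) * dt
assert abs(integral - 1 / s) < 1e-3
print("the continuous power series with x = e^(-s) is the Laplace transform")
```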


Something also worth mentioning is that it isn't just useful when dealing with differentiation / anti-differentiation but also when dealing with convolution.
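A quick sketch of the convolution property in the discrete setting (toy sequences of my own choosing; zero-padding makes the DFT's circular convolution match linear convolution):

```python
import cmath

def dft(x):
    # naive discrete Fourier transform
    N = len(x)
    return [sum(x[n] * cmath.exp(-2j * cmath.pi * k * n / N) for n in range(N))
            for k in range(N)]

def conv(a, b):
    # plain linear convolution
    out = [0.0] * (len(a) + len(b) - 1)
    for i, u in enumerate(a):
        for j, v in enumerate(b):
            out[i + j] += u * v
    return out

a, b = [1.0, 2.0, 3.0], [0.5, -1.0, 4.0]
# zero-pad so circular convolution (what the DFT gives) equals linear
L = len(a) + len(b) - 1
A = dft(a + [0.0] * (L - len(a)))
B = dft(b + [0.0] * (L - len(b)))
C = dft(conv(a, b))
# convolution in time = pointwise multiplication in frequency
assert all(abs(C[k] - A[k] * B[k]) < 1e-9 for k in range(L))
print("DFT(conv(a, b)) = DFT(a) * DFT(b)")
```

The continuous statement is the same shape: the Laplace (or Fourier) transform of a convolution is the product of the transforms.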


I've had some problems understanding the Laplace transform. Maybe somebody here can point me towards some material.

I have an interest in understanding how IIR filters are designed, and I always get stuck at this part in DSP books. The Laplace transform is used, but as well as finding the mathematics difficult, I don't really understand why it is being used at all. I think it is trying to replicate the effect of an analog circuit?


I like practical examples for learning about math-heavy stuff and I came to greater understanding while looking into imaging, specifically how DCT (discrete cosine transformation) works.

You learn how an image is dissected into two matrices (or one complex matrix) containing amplitudes and phases of the respective frequencies (strictly, the amplitude/phase picture belongs to the related DFT; the DCT yields a single matrix of real coefficients). A good start for me was playing around with OpenCV and reading about JPEG (which uses the DCT).

Why transform an image in the first place? Because you can just set the highest frequencies to zero without influencing the image in real space too much. This effect is leveraged by classical JPEG compression: you simply delete data that is not that important for the image. Being able to analyze, filter, and change frequencies in a signal has a lot of other applications.

There are better links but maybe this is a start: https://www.mathworks.com/help/images/discrete-cosine-transf...

There is a ton of literature about the DCT because of its widespread application. A few Google searches lead to good learning material. Fourier and, in general, Laplace transformations are a little different, but far easier to understand after seeing an example of their application, in my opinion.

This also touches the topic of the article. The problem is that transforming between real space and spectral space results in rounding errors. The article describes a new approach to minimize these.


You can describe a circuit by its time domain behavior. Or you can describe the circuit by its frequency domain behavior. Both are valid and congruent.

The thing is a lot of questions are easy to answer in the frequency domain.

For instance, you want to know if a circuit with feedback will oscillate. Hard to answer using time domain equations. But in the frequency domain there is a simple constraint: if, for all frequencies where the gain is greater than one, the phase shift is less than 180 degrees, the circuit won't oscillate. This is obviously rather useful.

Also, a problem with a lot of books is that the authors get so caught up in describing how something is done that they never explain why it is done. I've found the answer is often simple yet opaque, and frustratingly never talked about.


This. I remember having adequate cursory knowledge of the Fourier Transform, to the point of understanding the value of FFT algorithms, but the Laplace Transform was explained so badly that I failed my robotics classes.


If you have an electronic circuit, you can model each element with a differential equation. E.g. the voltage across a capacitor is modelled as (1/C) times the integral of current, and the voltage across an inductor is L·dI/dt.

This is a useful fact for a simple circuit in a classroom, but the differential equations for any circuit with more than a few components soon become insanely complex.

With the Laplace transform you (more or less) replace an integral with 1/s and a derivative with s, plus some constants derived from the component values.

Then you can simplify the expression in s, and use the inverse Laplace transform to convert the final expression in s into an expression in t.

You have now solved an insanely complex differential equation with some basic algebra, and your final expression in t - with component constants, and some exponentials that appear after the inverse transform - accurately models how the circuit responds over time.

There's also a related fairly simple trick for converting the s-domain representation into a frequency/phase plot which tells you how the circuit operates in the frequency domain.

And another related fairly simple trick for converting the continuous s-domain into the z-domain for DSP calculations over a sampled time series.

Because the same theory also applies in other domains - spring/mass systems, and so on - you can use the same technique there too.
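As a concrete sketch of the s-domain impedance trick (a toy RC low-pass with made-up component values, not from the comment above): treat the capacitor as impedance 1/(sC), apply the voltage-divider rule, and read the frequency response off the imaginary axis.

```python
import cmath, math

# Hypothetical RC low-pass: capacitor as impedance 1/(sC), resistor as R,
# output taken across the capacitor via the voltage-divider rule.
R, C = 1_000.0, 1e-6  # 1 kOhm, 1 uF (illustrative values)

def H(s):
    return (1 / (s * C)) / (R + 1 / (s * C))  # = 1 / (1 + s*R*C)

wc = 1 / (R * C)  # corner frequency, rad/s

# On the imaginary axis s = jw we read off the frequency response:
assert abs(abs(H(1j * wc)) - 1 / math.sqrt(2)) < 1e-9  # the -3 dB point
assert abs(H(1j * 0.001 * wc)) > 0.999                 # passes DC
assert abs(H(1j * 1000 * wc)) < 0.002                  # kills high freq
print("1/(1 + sRC): -3 dB at w = 1/RC")
```

No differential equation is ever written down; the algebra in s carries all the same information.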


Yes, this is very good. As is the point that restating the problem in a different domain is a very common way to make a problem tractable.

Examples

Converting numbers to logs allows you to multiply and divide by mere addition and subtraction. If you wonder why RF engineers represent power in dB, this is why.

Mapping an equation in terms of forces integrated over a path to one using vectors and energy.
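The log/dB example is easy to check (illustrative numbers of my own):

```python
import math

# multiplication becomes addition in the log domain
a, b = 37.0, 0.004
assert math.isclose(math.log(a * b), math.log(a) + math.log(b))

# RF-style: power gains in dB just add along a chain
def db(ratio):
    return 10 * math.log10(ratio)

amp, cable, filt = 100.0, 0.5, 0.25  # linear power ratios
chain = amp * cable * filt
assert math.isclose(db(chain), db(amp) + db(cable) + db(filt))
print(f"total chain gain: {db(chain):.1f} dB")
```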


Thanks for this!!!!


Yes, one way of designing an IIR filter is to design the continuous-time version and convert it to discrete. There are other (usually better) ways, but if you've already got a good understanding of continuous-time filter behaviour, it's a usable on-ramp.


In DSP you would use the z transform instead.


And don’t forget those like me for whom school is a distant memory!

What are the domains where this new method can be applied? Is it mostly physics simulations and the likes?


Control systems, signal filters (noise attenuation), modeling epidemics, modeling queues, modeling reliability of repairable systems, modeling recurrent events (such as failures), renewal processes, modeling inventory plans, probability in general (because of the connection with the moment generating function)...


I got taught them in a course on linear systems which was a pre-requisite course to control theory.

Lots of electrical circuits, mechanical systems, and electro-mechanical systems can be modelled using Laplace transforms if they are linear systems.

I did an electrical and electronic engineering degree and we got to skip the tedious differential equation solving lectures that the mechanical, civil and chemical engineers had to attend because of Monsieur Laplace.


>from someone who only had a Laplace 101 course

Laplace transforms are an entire course?


Appears as an exercise on page 505 of this (excellent and freely downloadable, at least for now) book: https://link.springer.com/book/10.1007%2F978-3-319-01195-0.


From my time at uni, I wish we'd had a proper course that covered Laplace (and the important special cases, e.g. Fourier and z) transforms properly. Instead, the coverage was interspersed among general math courses and the courses that needed to apply them.



