For a more current example of a mathematical technique that preceded formalization by a considerable amount, consider renormalization. In particular, renormalization of calculations carried out over Feynman diagrams.
For decades physicists were happily using this to predict experiment, while mathematicians were tearing their hair out trying to make some formal sense of this, even if only in a limited context. I'd have to do some poking around to find out whether mathematicians are happy about it yet, even though the idea is older than I am.
I wouldn't say physicists were 'happily' using renormalization back in the day. Rather, some of them were happy and many others were very concerned. Then the ideas behind renormalization were formalized in terms of (poorly named) renormalization group flows, and by the early 90s it was on quite solid mathematical footing (though there remains to this day quite a bit of confusing folklore and fear around the topic, complete with plenty of bad pedagogy that has given many an undergrad, as well as many an interested mathematician, a lot of anxiety).
However, as we've come to understand renormalization better, we've also come to realize it's really not such a big deal and was supremely overused back in the day. Nowadays, most modern field theorists think in terms of 'effective field theories' and are not nearly so interested in trying to sum infinite perturbative series, so we have a lot less use for renormalization (though it does still have its place).
That makes sense. I ran across the idea, informally, back in the 1990s in grad school. And what I heard then was concerning. I heard physicists saying that it was fine. But I also saw that their notion of fine and mine did not agree...
I had not really updated my understanding much since then.
For Feynman diagrams in particular, isn't the status quo that they're known to not converge (i.e., the motivation generating the idea is definitely wrong), but the first few terms are still predictive for some reason?
It's not that the motivation / generating idea is definitely wrong; the problem you're alluding to here is just that they form an asymptotic series[1]. An asymptotic series is one that diverges if you sum the whole series, but will accurately approximate some function if you take a finite partial sum.
The reason that so many perturbative series are only asymptotic in quantum mechanics actually makes a lot of sense if you think about what these perturbative series are saying. Typically, each order of the perturbative series is meant to account for the physics at smaller and smaller length scales, but we know that our physical theories aren't applicable at infinitely small length scales because we know that at the very least, we need a theory of quantum gravity.
So in some sense, the problem is just that we shouldn't even want to sum these series to infinite order because there is no reason to think that the ultraviolet behaviour in that series has any physical meaning.
[1] https://en.wikipedia.org/wiki/Asymptotic_expansion
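A concrete illustration of this "diverges in full, accurate when truncated" behaviour, using a stock textbook example rather than anything from the thread: the standard asymptotic expansion of the complementary error function erfc. The first few partial sums close in on the true value, and then the series blows up.

```python
import math

# Asymptotic expansion of erfc(x):
#   erfc(x) ~ exp(-x^2)/(x*sqrt(pi)) * sum_{n>=0} (-1)^n (2n-1)!! / (2x^2)^n
# The full series diverges for every x, yet a well-chosen partial sum
# approximates erfc(x) very well.

def erfc_partial_sum(x, n_terms):
    prefactor = math.exp(-x * x) / (x * math.sqrt(math.pi))
    total, term = 0.0, 1.0
    for n in range(n_terms):
        total += term
        term *= -(2 * n + 1) / (2 * x * x)  # next term of the expansion
    return prefactor * total

x = 2.0
true_value = math.erfc(x)
errors = [abs(erfc_partial_sum(x, n) - true_value) for n in range(1, 26)]

# The error shrinks for the first few terms, then grows without bound.
best = min(errors)
print(f"best error {best:.2e} at {errors.index(best) + 1} terms; "
      f"error with 25 terms: {errors[-1]:.2e}")
```

For x = 2 the error bottoms out after only a handful of terms; summing further only makes the approximation worse, which is exactly the "truncate, don't take the limit" point made above.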
That's a bit of a strange perspective:
1. We have some ideas.
2. Those ideas can't possibly be correct, because quantum gravity can't quite fit in.
3. Suppose anyway that the ideas are good enough; look at this cool equation.
4. Yeah, it doesn't... technically... converge, but the first few terms are pretty good.
Your comment about not knowing things in the limit yet makes perfect sense, but the motivating argument for that math in the first place is limiting behavior. It makes a ton of sense that none of these ideas converge, but it's peculiar that some of them are basically right for small terms when they're created from a foundation of convergence.
> but the motivating argument for that math in the first place is limiting behavior.
No it isn't. The entire reason we truncate the series and don't sum it off to infinity is exactly that the limiting behaviour isn't well defined.
Related:
Heaviside’s Operator Calculus - https://news.ycombinator.com/item?id=569934 - April 2009 (6 comments)
This is also interesting: https://www.johndcook.com/blog/2022/10/12/operational-calcul... (via https://news.ycombinator.com/item?id=33179121, but no comments there)
Thanks! It is to the peculiar irreverence & technical idiosyncrasies of Oliver Heaviside (aka his genius) that we owe the early leaps in the applications of Maxwell's nascent electromagnetic theory.
This author published my favorite book on mental math, called Dead Reckoning, you might like it!
Great. Possibly missed the opportunity to point out that Heaviside’s method is more or less the same as Laplace transforms.
You must have missed this: "In the end Laplace transforms, easier to use with a more rigorous structure and incorporating the powerful tool of convolution, overtook the operational calculus of Heaviside, and his methods largely fell victim to history."
Thanks. Agreed, that could have been a good point to mention that Laplace transforms are more or less the same as Heaviside’s method. As I read it, the article leaves the opposite impression instead.
They're closely related attempts to solve the same problem. The difference is - ironically - Laplace transforms are more rigorous, and don't leave as many loose ends.
Basically Laplace is a complete solution, while Heaviside's calculus wasn't.
It took about 100 years to work this out. Laplace's original work was early 19th century, but the transform didn't become widely used in engineering until after WWII.
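As a sketch of the correspondence (a standard textbook example, not anything specific to the article): for the ODE y' + y = 1 with y(0) = 0, Heaviside would write (p + 1)y = 1, treat the differentiation operator p algebraically, and invert it; the Laplace transform makes the same move rigorous, with s in place of p. Partial fractions 1/(s(s+1)) = 1/s - 1/(s+1) then invert to the step response 1 - e^(-t), which we can check numerically:

```python
import math

# Operator algebra: (p + 1) y = 1  =>  y = 1/(p + 1) * 1.
# Laplace version:  Y(s) = 1/(s(s+1)) = 1/s - 1/(s+1),
# which inverts to the step response below.
def y(t):
    return 1.0 - math.exp(-t)

# Verify numerically that y' + y = 1 (central-difference derivative).
def residual(t, h=1e-6):
    dy = (y(t + h) - y(t - h)) / (2 * h)
    return dy + y(t) - 1.0

print(max(abs(residual(t)) for t in (0.1, 0.5, 1.0, 3.0)))
```

The residual is at the level of floating-point noise, confirming that the algebraic manipulation of the operator lands on an actual solution, which is the part Heaviside never bothered to justify.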
There are several similar variants of different kinds of math that make just as much sense as mainstream methods to me. It all feels very arbitrary.
I think that's what got me into software. If we're just making shit up either way, then useful artifacts are a nice bonus.
But it's the same thing with math. All of science and engineering can be seen as useful artifacts that you obtain as a bonus from math.
Yeah, or air.
Besides, there's plenty more to science and engineering than just math.
Maybe start using Roman numerals then?
Roman numerals are obviously inferior, not a fair comparison at all.
Roman numerals were designed for a world where calculations were done on an abacus, and numerical systems merely recorded inputs and outputs.
In that world they are better than Arabic numerals, for the simple reason that your brain doesn't have to translate so hard between what you see, and what you record.
Also, this is how mental arithmetic is taught in modern elementary schools!
Lots of focus on "ten friends", and performing addition and subtraction by splitting into parts to make "ten" and then adding or subtracting the leftover:
8+5 = 8 + (10-8) + 5 - (10-8) = 10 + (5-2) = 10+3 = 13
13-5 = 10 + (3-5) = 10 + -(5-3) = 10-2 = 8
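The "make ten" split above can be written as a tiny function (make_ten_add is a made-up name for illustration):

```python
# "Make ten" mental addition: borrow just enough from b to complete a to 10,
# then add whatever is left of b.  E.g. 8 + 5 -> (8 + 2) + 3 = 13.
def make_ten_add(a, b):
    to_ten = 10 - a            # how much a needs to reach 10
    return 10 + (b - to_ten)   # ten, plus the leftover of b

print(make_ten_add(8, 5))  # 13
```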
The weird parts of Roman numerals are the lack of clear place-value separation (no spacing, and variable-length places like I, II, VIII), the use of both V (5) and X (10), so two different scales per place, the asymmetry of I, II, III, IV, V instead of the more consistent I, II, IIV, IV, V, and the use of relative position for sign instead of an explicit negative-sign symbol.
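For concreteness, a minimal Roman-numeral reader (from_roman is a hypothetical helper, not from the thread) showing how the two symbol scales and the positional "subtract if a smaller symbol precedes a larger" rule play out:

```python
# Each "place" uses two symbol scales (I/V, X/L, C/D, M), and sign is
# encoded by relative position: a smaller symbol before a larger one
# subtracts (IV = 4, IX = 9).
VALUES = {'I': 1, 'V': 5, 'X': 10, 'L': 50, 'C': 100, 'D': 500, 'M': 1000}

def from_roman(s):
    total = 0
    for cur, nxt in zip(s, s[1:] + ' '):  # pair each symbol with its right neighbour
        v = VALUES[cur]
        total += -v if v < VALUES.get(nxt, 0) else v
    return total

print(from_roman('MCMXCIV'))  # 1994
```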
It wasn't obvious for the people who used them. And that's the point.
Yes but I'm talking about several alternative systems existing in parallel, not something that can be blamed on time.
I always loved that the derivative of the Heaviside step function is the Dirac delta. The idea of an impulse, and how to apply it to a system, is such a unique and useful unlock in E&M, and has such a nice analogy in connecting the circuit.
One of those things that made it click for me that math truly is defined rules of operation over definitions, and can be constructed so as to be useful to us, not just handed down as a pure concept. We need to model this very specific thing; here's an operator for it.
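A quick numerical sanity check of that step/delta relationship (the smoothing width eps and the function names here are my own choices): smooth the step into a logistic sigmoid, differentiate, and watch the derivative become a taller, narrower spike whose area stays 1.

```python
import math

# A smoothed Heaviside step of width eps, and its exact derivative.
# As eps -> 0 the derivative concentrates at 0 with unit area: the delta.

def step(x, eps):
    z = x / eps
    if z < -60:
        return 0.0   # avoid overflow in exp for very negative arguments
    if z > 60:
        return 1.0
    return 1.0 / (1.0 + math.exp(-z))

def d_step(x, eps):
    s = step(x, eps)
    return s * (1.0 - s) / eps   # derivative of the logistic sigmoid

peaks, areas = [], []
for eps in (0.5, 0.1, 0.02):
    n = 20000                    # Riemann sum over [-5, 5]
    area = sum(d_step(-5 + 10 * i / n, eps) for i in range(n)) * (10 / n)
    peaks.append(d_step(0.0, eps))
    areas.append(area)
    print(f"eps={eps}: peak={peaks[-1]:.2f}, area={area:.4f}")
```

The peak height grows like 1/(4 eps) while the area under the spike stays pinned near 1, which is the defining behaviour of a delta-like impulse.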
In the latter part of high school, and in my early years in college as a computer engineer, I found physics and the philosophy of it really interesting. I remember the first time, in 9th grade, that our teacher showed us how to predict the bounce-height of a ball using some basic algebra. I was immediately in love with physics, and it was the first time that I realized math could be fun.
In my senior year, AP Physics C: E&M would become one of my favorite courses of my entire scholastic career (largely thanks to my teacher). While Calc 3 wasn't required for the AP test, he introduced the concepts so that he could properly walk us through the history of the field from the perspective of its founders, up to and concluding with Maxwell's equations. We read a lot of the original papers that introduced certain operators and equations, including works from Newton, Leibniz, Heaviside, Maxwell, Einstein, and Dirac. Ironically, I failed the AP exam (2/5) but had a very easy time with Calc III, linear algebra, and diff eq in college thanks to that course.
I really miss the feeling of wonder and astonishment I had when I was first exposed to these concepts -- it's been long enough that my memory of them is fuzzy now, but I don't get the same satisfaction from re-reading them.
Gosh, this takes me back to my EEE degree. Very difficult to understand at first, and way too abstract if you haven't seen electromagnetic phenomena play out in real life and aren't well versed in engineering mathematics.
Kennelly–Heaviside layer
https://en.wikipedia.org/wiki/Kennelly%E2%80%93Heaviside_lay...
Beautiful, I love the irreverence. Reminds me a lot of the "umbral calculus" for computing combinatorial identities. It proceeds in much the same way - deliberately make an (unjustified) abuse of notation, work with it at face value regardless, and reap the rewards...
...or spend hours debugging the mess you've made if it doesn't work =P
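A classic umbral example (my own illustration, not from the comment): compute Bernoulli numbers from the deliberately abusive identity (B + 1)^n = B^n for n >= 2, where after binomially expanding you "lower the exponents" so that B^k means B_k. Nonsense as written, but it yields the right numbers:

```python
from fractions import Fraction
from math import comb

# Umbral trick: expand (B+1)^(n+1) = B^(n+1), lower exponents, and solve
# for B_n.  This gives: sum_{k=0}^{n-1} C(n+1, k) B_k + (n+1) B_n = 0.
def bernoulli(m):
    B = [Fraction(1)]                                   # B_0 = 1
    for n in range(1, m + 1):
        s = sum(comb(n + 1, k) * B[k] for k in range(n))
        B.append(-s / (n + 1))
    return B

print([str(b) for b in bernoulli(6)])
# ['1', '-1/2', '1/6', '0', '-1/30', '0', '1/42']
```

The abuse of notation (treating the subscript as an exponent) is exactly the sort of "work with it at face value and reap the rewards" move described above; it was later justified rigorously via linear functionals on polynomial spaces.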