I know the article is about sec(x) but I want to share this tidbit about its cousin, the hyperbolic secant: sech(x) is its own Fourier transform (modulo rescalings). That’s right, exp(-x^2) is not the only one.
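For the curious, this is easy to spot-check numerically. A pure-stdlib Python sketch (assuming the convention F(xi) = integral of f(x) exp(-2*pi*i*x*xi) dx, under which sech(pi*x) transforms to sech(pi*xi); the grid size and tolerance are my choices):

```python
import math

def sech(t):
    return 1.0 / math.cosh(t)

# Riemann sum for F(xi) = integral of sech(pi*x) * exp(-2*pi*i*x*xi) dx.
# sech is even, so only the cosine part of the transform survives.
dx = 0.01
xs = [i * dx for i in range(-1500, 1501)]  # grid on [-15, 15]; the tails are negligible

for xi in (0.0, 0.5, 1.0, 2.0):
    F = dx * sum(sech(math.pi * x) * math.cos(2 * math.pi * x * xi) for x in xs)
    assert abs(F - sech(math.pi * xi)) < 1e-9
```

Because sech(pi*x) decays so fast and is analytic, the plain Riemann sum here is accurate to near machine precision.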
Learned something new today, thank you!
If I understand correctly, the Hermite functions are the eigenfunctions of the Fourier Transform and thus all have this property -- with the Gaussian being a special case. But sech(x) is doubly interesting because it is not a Hermite function, though it can be represented as an infinite series thereof. Are there other well-behaved examples of this, or is sech(x) unique in that regard?
Yes, the Dirac comb, for example. Actually there are infinitely many.
https://en.wikipedia.org/wiki/Dirac_comb
and for others:
http://www.systems.caltech.edu/dsp/ppv/papers/journal08post/...
A question I hadn't even thought to ask, thanks.
So, basically, the eigenfunctions of the Fourier transform are Hermite polynomials times a Gaussian [0] [1].
[0] https://math.stackexchange.com/questions/728670/functions-th...
[1] https://en.wikipedia.org/wiki/Hermite_polynomials#Hermite_fu...
As well as the linear combinations (including infinite sums!) of Hermite functions with the same eigenvalue under the Fourier transform. (Those eigenvalues are infinitely degenerate). You could express sech(x) as such a sum.
There has to be a link to the harmonic oscillator here. That's the Hamiltonian that's symmetric under exchange of position and momentum, and the Hermite functions are its eigenfunctions.
Indeed, the (quantum) harmonic oscillator Hamiltonian (with suitable scalings) commutes with the Fourier transform. Since the former has the Hermite functions as eigenbasis, the Hermite functions also form an eigenbasis for the latter.
It still involves e, though: sech(x) = 2 * e^x / (e^(2x) + 1)
Makes sense, given that the definition of e goes hand in hand with its property of e^x being its own integral and derivative.
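That identity is a one-liner to spot-check in stdlib Python:

```python
import math

# sech(x) = 2*e^x / (e^(2x) + 1), since sech(x) = 2 / (e^x + e^-x)
for x in (-3.0, -0.5, 0.0, 1.0, 2.5):
    lhs = 1.0 / math.cosh(x)
    rhs = 2 * math.exp(x) / (math.exp(2 * x) + 1)
    assert abs(lhs - rhs) < 1e-12
```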
The impulse train is another well-known one, though I suppose someone will chime in here to rebut that it's arguably not a function.
The most elegant proof IMHO is the one that avoids the original problem entirely.
Int[csc(x) dx] = 2 Int[csc(2u) du]
= 2 Int[du / (2 cos(u) sin(u))]
= Int[sec^2(u) du / tan(u)]
= log(tan(u)) + C
= log(tan(x/2)) + C
Then, with u = pi/2 - x (so du = -dx), Int[sec(x) dx] = -Int[csc(u) du] = -log(tan(u/2)) + C = log(tan(pi/4 + x/2)) + C.
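As a sanity check on the end result, here's a quick stdlib-Python finite-difference test that d/dx log(tan(x/2)) really is csc(x) on (0, pi); the step size and sample points are arbitrary:

```python
import math

def F(x):
    # the antiderivative derived above, valid on (0, pi)
    return math.log(math.tan(x / 2))

# central difference of F should reproduce csc(x) = 1/sin(x)
h = 1e-6
for x in (0.3, 0.7, 1.2, 2.5):
    deriv = (F(x + h) - F(x - h)) / (2 * h)
    assert abs(deriv - 1 / math.sin(x)) < 1e-6
```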
Of course, this was no use to Mercator, because the logarithm hadn't been invented yet. But you aren't just pulling a magic factor out of nowhere. There is definitely a bit of cleverness in rearranging the fraction — you have to be used to trying to find instances of the power rule when dealing with integrals of fractions.
This was the one I was taught in my high school. It has some cleverness (e.g., some trig transformations) but looks less like it comes out of nowhere than the original does.
Neither in (German) high school nor in the many math courses of a physics B.Sc. have I ever used the secant function. I am surprised the article does not explain it in the beginning. I assume for other people it must be a common function?
Trig is full of functions that fall into disuse and are forgotten.
For example "versine"
versin theta = 1-cos theta.
There is also "haversine" which is (1-cos theta)/2. Which is used in navigation apparently https://en.wikipedia.org/wiki/Versine
See R.W. Sinnott, "Virtues of the Haversine", Sky and Telescope, vol. 68, no. 2, 1984, p. 159
iirc, haversine is useful for transforming 2-d "as the crow flies" coords to their 3-d equivalents. at longer distances a body's curvature is really noticeable and often overlooked
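For concreteness, here's the standard haversine great-circle formula as a small Python sketch (the function name, argument order, and the 6371 km mean Earth radius are my choices):

```python
import math

def haversine_km(lat1, lon1, lat2, lon2, r=6371.0):
    """Great-circle distance on a sphere, using hav(t) = (1 - cos t)/2."""
    phi1, phi2 = math.radians(lat1), math.radians(lat2)
    dphi = math.radians(lat2 - lat1)
    dlmb = math.radians(lon2 - lon1)
    h = (1 - math.cos(dphi)) / 2 + math.cos(phi1) * math.cos(phi2) * (1 - math.cos(dlmb)) / 2
    return 2 * r * math.asin(math.sqrt(h))
```

For example, haversine_km(48.86, 2.35, 40.71, -74.01) (Paris to New York) comes out at roughly 5,800 km, far more than the straight "through the Earth" chord.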
Interesting. versine has a lovely and intuitive geometric definition. If you construct a right triangle from the origin to some point on the circle, most people who have done trig will know that the x-coordinate of that point is r cos theta, where theta is the angle and r is the radius. Geometrically the distance from the origin to where the triangle rests on the x axis is r cos theta. But what about the rest of that radius? ie the line segment on the x-axis from there to where the circle intersects the x-axis?
That is r versin theta (ie r - r cos theta). Pretty cool no? I mean I've literally never had to find the length of that line, but that's how you would if you wanted to..
It's a US thing. Europeans just write 1/cos(x) instead of treating it as a special thing with its own name. The Americans have sec, csc, and a bunch of others I never bothered to learn. It doesn't seem to add all that much to me? (Of course, it's a bit hypocritical since I gladly use tan(x).)
I imagine it was more useful when using tables to lookup/approximate the values before calculators with trig support were a thing.
speak for your own european country, in my neck of the woods (EE) we were taught and we worked with both secant and cosecant.
They were taught to us in Spain, I suppose they don't make an appearance often, but they are perfectly familiar.
I used sec, cosec and others during my math degree in the UK too.
the only other one is cot, actually.
Personally I thought they were nice to have because coming up with the integral of 1/cos on the fly is pretty brutal in a long integral
there are these old-fashioned looking drawings...
(quick search, didn't find the old ones, but similar to these)
https://mathematicaldaily.weebly.com/secant-cosecant-cotange...
https://www.pinterest.com/pin/enter-image-description-here--...
... which were not used in my education, but whenever I saw them I wished they had been; they lay out a geometric interpretation of all of them. By "old" I mean "they look like Leonardo drew them".
In the UK we certainly use sec(x)
I'm sure you used inverse of a cosine multiple times. Didactic math today is just not bothering to give it a name. Probably because people think that sin, cos and tan is enough. Even ctg which is just inverse of tan is often skipped.
I know what you mean, but as a sibling pointed out for everyone else's benefit, parent is using the word inverse where they mean reciprocal.
The inverse of cosine is arccosine (sometimes written acos or cos^{-1}). Secant is the reciprocal of cos, i.e. sec x = 1/cos(x).
Likewise cotan is the reciprocal of tan (1/tan). The inverse of tan is atan/arctan/tan^{-1}.
This is confusing for a lot of people because if you write x^{-1} that means 1/x. If you write f^{-1} and f is a function, then _generally_ it means the inverse of f. In the case of trig functions this is doubly confusing because people write sin^2 theta meaning (sin theta)^2 but sin^-1 theta means arcsin theta.
That's why in my maths studies they started by teaching you to write the inverse with a -1, so that when you see it you don't get confused, but later changed to preferring arcsin etc., as this is unambiguous; if you learn to write this way you won't confuse others.
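The distinction is easy to see in code; a tiny Python illustration (variable names are mine):

```python
import math

x = 0.5
reciprocal = 1 / math.cos(x)   # sec(x): the multiplicative reciprocal of cos(x)
inverse = math.acos(x)         # arccos(x): the inverse *function* of cos

# The reciprocal undoes cos under multiplication...
assert abs(reciprocal * math.cos(x) - 1) < 1e-12
# ...while the inverse undoes cos under composition.
assert abs(math.cos(inverse) - x) < 1e-12
```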
It does not help that both reciprocal and inverse come from French, and that their common meanings are reversed in English. I'm not sure whether the meaning of both words has remained constant over time in these two languages, as they both roughly mean "the opposite" and if you want to avoid ambiguity, you simply add context. For example, if you say "inverse function" or "multiplicative inverse" it's not ambiguous.
Inverse function: https://en.wikipedia.org/wiki/Inverse_function / https://fr.wikipedia.org/wiki/Bijection_r%C3%A9ciproque
Reciprocal: https://en.wikipedia.org/wiki/Multiplicative_inverse / https://fr.wikipedia.org/wiki/Inverse
Wikipedia seems to have chosen "multiplicative inverse" over "reciprocal" for title, even though they are clearly indicated as synonymous.
That’s a really good point. I will try to remember to do that in future.
The secant is the reciprocal of a cosine – the hypotenuse over the adjacent
That’s right, it’s a distribution. And that fact has personally caused me, a non-mathematician, some huge headaches, because I thought I could treat it just like a function… Yeah, it turns out really weird things happen if you try to do so without knowing what you’re doing. For example, taking its square does not make sense.
It is a function. What do you mean?
Oops, replied to the wrong comment. This is the one I meant to reply to, which is talking about the impulse train, which is not a function: https://news.ycombinator.com/item?id=43741539
The weird thing about 1/cos is it’s discontinuous wherever cos is 0 but, yes, it’s a function.
Yeah, that was replied to the wrong comment.
>Neither in (German) high school nor in the many math courses of a physics B.Sc. have I ever used the secant function
I think we used it in geometry in US high school, but only to complete an assignment or two to show we could use trig functions correctly. I had to relearn how all of them worked to help my kid with homework; it's mostly a matter of looking at the angles and sides you have available and picking which trig function gets you the one you're solving for. I'm sure there are real-life uses for trig functions, and I hate to be one of those "when are we ever going to use this" types, but I've never used any of them outside of math classes.
Law of sines is useful when constructing things with a known angle outside of CAD.
If we're playing the map-projection-advocacy game, I'd say the Mollweide projection is underrated among equal-area maps [0]. (For local maps, use whatever you want, appropriately centered.) Sure, it distorts shapes away from the central meridian, but locally it only adds a simple horizontal skew. I'm not a big fan of how many equal-area 'compromise' projections lie about how long the lines of latitude are.
[0] https://en.wikipedia.org/wiki/Mollweide_projection
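For anyone curious what that projection actually computes: the forward Mollweide map solves 2t + sin(2t) = pi*sin(lat) for an auxiliary angle t, then scales. A hedged stdlib-Python sketch (the function shape and Newton iteration count are my choices):

```python
import math

def mollweide(lat, lon, R=1.0):
    """Forward Mollweide: solve 2t + sin(2t) = pi*sin(lat) by Newton, then scale."""
    t = lat  # decent starting guess for the auxiliary angle
    for _ in range(50):
        f = 2 * t + math.sin(2 * t) - math.pi * math.sin(lat)
        df = 2 + 2 * math.cos(2 * t)
        if abs(df) < 1e-12:  # at the poles, t = lat = +/- pi/2 already solves it
            break
        t -= f / df
    x = (2 * math.sqrt(2) / math.pi) * R * lon * math.cos(t)
    y = math.sqrt(2) * R * math.sin(t)
    return x, y
```

The equal-area property comes out of that transcendental equation; the whole projected globe is a 2*sqrt(2) by sqrt(2) ellipse (times R).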
You probably don't live in New Zealand? (Yes I know it's there. Barely.)
https://xkcd.com/977/
>[the Mercator projection] unnecessarily distorts shapes and in particular makes the Americas and Europe look much larger than they actually are. This has been linked, not without rationale, to colonialism and racism.
The fact that on many maps Europe is much smaller than it appears should just make you all the more impressed by its achievements.
A refreshing Hacker News article after a week of repetitive political garbage. Thank you!
Pure math never ceaceses to amaze, even if the title was a misnomer.
spell checkers are taking a full week off.
About how long it'd take me to solve the integral in my calculus finals.
It amuses me that after doing software and hardware engineering for decades and never once thinking about trigonometric functions other than perhaps sine and cosine, I got interested in software-defined radio and found myself running into all of the functions! That's especially true with the discrete mathematics that SDR uses.
Oh! This was already discussed five years ago with 77 pts and 40 comments (https://news.ycombinator.com/item?id=24304311)
This is also the inverse Gudermannian function [1]. That Wikipedia page has some nice geometrical insights.
[1] https://en.m.wikipedia.org/wiki/Gudermannian_function
I remember teaching the integral of sec x to high schoolers with the multiplication by sec x + tan x. I mean, it is not obvious, but it is not like something that would take 100 years.
And the author talks like the logarithm was invented long after integration.
Dude, I am not joking, but today was the day we were introduced to indefinite integration as a formal chapter in maths at my coaching, and we did sec x integration.
Basically our sir told us to multiply and divide by sec + tan and observe that it becomes something like the integral of f(x)^(-1) f'(x) dx, and if we let f(x) = t then f'(x) dx becomes dt. Actually we can also prove the latter (I had to look at my notes because I haven't revised them yet), but it's basically: f(x) = t,
so f'(x) = dt/dx, so f'(x) dx = dt, and then we get
integral f(x)^n f'(x) dx = integral t^n dt (where t = f(x)). Here that is integral t^-1 dt, so we get ln(t), and this t, i.e. f(x), was actually sec x + tan x, so it's ln(sec x + tan x). In fact, by doing some cool trigonometry, we can write this as ln(tan(pi/4 + x/2)) + c.
also cosec x integration is ln(tan(x/2)) + c
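The sec + tan trick above can be sanity-checked numerically; a stdlib-Python finite-difference sketch (step size and sample points are arbitrary):

```python
import math

def F(x):
    # the antiderivative from the sec + tan trick: ln(sec x + tan x)
    return math.log(1 / math.cos(x) + math.tan(x))

# central difference of F should reproduce sec(x) = 1/cos(x)
h = 1e-6
for x in (-1.0, 0.0, 0.5, 1.3):
    deriv = (F(x + h) - F(x - h)) / (2 * h)
    assert abs(deriv - 1 / math.cos(x)) < 1e-6
```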
I haven't read the article but damn, HN, this feels way too specific for me LOL.
So I just started reading the article, and it mentions teachers telling their students to verify the result by differentiating the value of the integral of sec x, i.e. ln(|tan x + sec x|), and checking that it equals sec x.
And in fact our sir himself told us that he would have let us do this if we were in the normal batches (we are in a slightly higher batch, but most students are still normal, and it was easy to digest, to be honest). Except that when I was writing the previous comment, I actually found that our sir had complicated the step f'(x) = df(x)/dx by letting us assume f(x) as t and so on. Maybe it makes it easier to understand to treat f(x) as its own variable like t, but that actually confused me a little bit when I was writing the previous comment. Still, nothing too hard.
I actually want to ask here, because I was too afraid to ask sir: is there a way, a surefire way, to solve any integral? Like, can computers solve any integral?
Remarkably, there isn’t a way to solve most integrals symbolically. We say that the set of “elementary functions”, i.e. ordinary-looking symbolic functions, is not closed under integration. Even if you try to add special functions in, you cannot feasibly make it closed under integration. I’ll try to write something more detailed later, but in the meantime you should look up Liouville’s theorem and non-elementary antiderivatives.
symbolically no (in fact I believe it can be proven that it's impossible)
numerically sure (ie definite integrals can be evaluated for given values)
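To illustrate the "numerically sure" part: even when no elementary antiderivative exists, definite integrals are routine to evaluate. A minimal stdlib-Python sketch using Simpson's rule on exp(-x^2), whose antiderivative is the non-elementary erf:

```python
import math

def simpson(f, a, b, n=1000):
    """Composite Simpson's rule on [a, b] with n (even) subintervals."""
    h = (b - a) / n
    s = f(a) + f(b)
    s += 4 * sum(f(a + i * h) for i in range(1, n, 2))
    s += 2 * sum(f(a + i * h) for i in range(2, n, 2))
    return s * h / 3

# integral_0^3 exp(-t^2) dt = sqrt(pi)/2 * erf(3), but Simpson never needs erf
val = simpson(lambda t: math.exp(-t * t), 0.0, 3.0)
assert abs(val - math.sqrt(math.pi) / 2 * math.erf(3)) < 1e-10
```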
Which of course leads to the misnomer in the title: the integral was long solved by numeric means, more easily so with those inventions, but the proof of the closed-form solution took a while... and as some other brilliant hacker-newsestition pointed out, it became even easier with an ingenious u-substitution, related to the solution of the integral of 1/x discovered in the late 1930s...
(The solution is both possible and proved, and there is a goddamned YouTube video about the trick, and it's not a minor trick either, like the proofs of int(sec(x)) or int(1/x).)
In my textbook, and in current textbooks, it is said it cannot be resolved by elementary means, and it cannot, but it can be solved, and proven, by one whopper of an idea.
The research is left as an exercise.
I learned it in Austrian high school, but from university on nobody needed it anymore.
It feels like LLMs could be good contenders for solving integrals symbolically. After spending some time on it, it really feels like translating between two languages.
Wolfram engine was taking integrals just fine way before LLMs were even a thing.
And Lisps too, fitting in a single disk sector:
https://justine.lol/sectorlisp2/
And probably a small Forth too, with a dictionary defining every math word, something not so different from Lisp.
LLMs? 4 GB of RAM? Your grandpa's 486 with 16 MB of RAM can do calculus too.
Derive 2 for DOS. Green-screen 286 or 386 computers, I think, in a small side room. The later Windows version was better. Then there was the DOS version of Minitab 5, I think, which came as floppy disks in the back of a spiral-bound book; I used it to generate data sets for students to process for homework, so everyone got a slightly different sample.
You can do a lot of numerical maths just with a noddy spreadsheet of course.
Macsyma, PDP10 + ITS under Maclisp.
https://en.m.wikipedia.org/wiki/PDP-10
https://en.m.wikipedia.org/wiki/Incompatible_Timesharing_Sys...
https://en.m.wikipedia.org/wiki/Macsyma
Fun fact: old Macsyma's math code still runs as is on modern Linuxes/BSDs with Maxima. Even plots work the same, albeit in a different output format.
A 386 is far more powerful than this.
At the 1940s Manhattan Project, back when "computer" meant a job ("person who computes mathematical statements"), major advancements were made in the integration of hyperbolic PDEs by substituting electro-mechanical and then vacuum-tube machines to do the job. You know, those hard-wired vacuum-tube monsters like ENIAC.
You could argue that the first useful thing electronic computers did was integration...
https://www.tandfonline.com/doi/full/10.1080/00295450.2021.1...
Electronics themselves work by understanding integration.
It's full circle. But with Lisp and lambda calculus even an elementary school kid could understand integration, as you are literally describing the process as if it were Lego blocks.
Albeit in Forth it would be far easier. It's almost like telling the computer that multiplying is iterated addition, and dividing, iterated subtraction.
Floating-point numbers are done with special memory 'blocks', and you can 'teach' the computer to multiply numbers bigger than 65536 in exactly the same way humans do with pen and paper.
Heck, you can implement floats yourself by telling Forth how to do them, following the standard and setting up the f, f+, f/... and output rules by hand. Slower than a Forth done in assembly? Sure; but natively, on old 80's computers, Forth was 10x faster than Basic.
From that to calculus, it's just telling the computer new rules*. And you don't need an LLM for that.
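The "iterated addition / iterated subtraction" idea above, as a toy Python sketch (non-negative integers only; function names are mine):

```python
def mul(a, b):
    """Multiplication as b iterated additions of a."""
    total = 0
    for _ in range(b):
        total += a
    return total

def div(a, b):
    """Integer division (quotient) as iterated subtraction."""
    q = 0
    while a >= b:
        a -= b
        q += 1
    return q

assert mul(7, 6) == 42
assert div(42, 6) == 7
```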