As an old physicist who is a computer programmer these days, I am so jealous of the things you can now build by "vibe coding". That someone with moderate knowledge of programming can build these things is fascinating.
Now, on the physics part: I would like to "see" the phase transition that the 2D model has. I don't know if that is missing from this simulation or if I am just not looking at it with the correct eyes.
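(For anyone who wants to check that the transition is there, here is a minimal sketch, assuming a plain single-spin-flip Metropolis scheme with J = 1 and k_B = 1; the function names are my own, not taken from the demo under discussion. It compares the magnetisation well below and well above the critical temperature T_c = 2/ln(1+√2) ≈ 2.269.)

```python
import numpy as np

def metropolis_sweep(spins, beta, rng):
    """One Monte Carlo sweep of single-spin-flip Metropolis updates."""
    n = spins.shape[0]
    for _ in range(n * n):
        i, j = rng.integers(0, n, size=2)
        # Sum of the four nearest neighbours, with periodic boundaries.
        nb = (spins[(i + 1) % n, j] + spins[(i - 1) % n, j]
              + spins[i, (j + 1) % n] + spins[i, (j - 1) % n])
        dE = 2 * spins[i, j] * nb  # energy cost of flipping spin (i, j), J = 1
        if dE <= 0 or rng.random() < np.exp(-beta * dE):
            spins[i, j] *= -1

def magnetisation(T, n=16, sweeps=300, seed=0):
    """Mean |magnetisation| after equilibrating at temperature T."""
    rng = np.random.default_rng(seed)
    spins = np.ones((n, n), dtype=int)  # start fully ordered
    for _ in range(sweeps):
        metropolis_sweep(spins, 1.0 / T, rng)
    return abs(spins.mean())

# Below T_c ~ 2.269 the lattice stays magnetised; above it the order melts.
print(magnetisation(1.5), magnetisation(3.5))
```

Below T_c the ordered start stays magnetised; above it the order melts away, which is the transition the parent comment wants to "see".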
For me this feels irrelevant. These tools are marketed to developers for day-to-day jobs that involve building products. Devs don't look up information on people or build complex mathematical things daily. They build things that consist of different parts, which in turn involve different contexts and can be combinations of other things as well. It can be a straightforward approach, or it can be a legacy codebase that also needs to incorporate new features with new stacks. The real test is in real-world scenarios.

But every time, the tests and the marketing are about some narrowly scoped thing. And they try to build an image that the combination of these scoped tasks can somehow bring you the ability to build at large scale. They don't say it, but they implicitly mean it in the way they present all this.

Computers can compute, they can detect patterns and do the analytics part, they can build assumptions based on the data they have. But they need the data, they need parameters, they need not only an operator but also a source for the material they base their computations and output on. And somehow all the marketing completely ignores this fact. And this is damaging.
I really enjoy using the poem Love and Tensor Algebra as a visualisation benchmark for models. There's something about it that requires a sense of abstract processing.
In my eyes, GPT models always perform horribly at this, with Claude and Gemini coming in a close second and third.
As each new tool drops it makes me wonder if I should convert. Currently I mostly code in VS Code or chat with Claude Code, but I don’t really mix the two even though I know I can with the Claude extension. My gf uses Colab with Gemini and it seems rather spiffy for data science. And now Antigravity. I just wonder when it will end and devtools will slow down their development cycle a bit.
Personally, I wonder if I should switch careers
LLMs do take a lot of the joy out of it.
I know what you mean, but LLMs are just a tool. Probably the joy is actually taken out by some form of pressure to use them even when it doesn't make sense, like commercial/leadership pressure.
Weird take. For me they let me focus on the things I want to be working on, instead of having to write repetitive boilerplate stuff, so it's the opposite. But also nobody is forcing you to use them.
To what, something regulated?
Idk, something that needs arms and legs, and still a bit technical.
Ironically, the further you get from a keyboard, the less money you get paid.
For now
Look into subsistence farming.
I'm wondering if I should move to a remote village.
I’ve been learning how to be a bike mechanic
I tried vscode with copilot again after a year of cursor.
It’s faster, but the LLM aspect is unusable: the diffing is still slow as molasses and the chat is very slow too. LLM-wise it’s a joke vs Cursor. But it is less laggy (because Cursor is basically vibe-coded crapware: they ship multiple bug fixes a day for the errors they introduce into production with every single release).
> The diffing is still slow as molasses
I've seen a lot less of that the last couple of weeks. My understanding is that when the main model spits out a diff that somehow doesn't apply cleanly, a cheap model is invoked to 'intelligently' apply it. So it shouldn't normally happen.
I saw antigravity and physics in the title and I was very confused when it was about a cursor-like IDE
Such a dumb name for an IDE, damn...
People now come up with "cool" names first, then the actual products/libraries. This is why I ignore most of them.
I think it's a reference to https://xkcd.com/353/
JetBrains, Hadoop, Kafka (kaka!?), MongoDB... git... it could've been worse.
"Google Antigravity" makes it worse though. When I first read that I thought it was going to be some kind of quantum computing hype job.
On the other hand, I was thinking the other day what a shitty band name "The Beatles" is in isolation from the music. If the product is good enough it kind of frames the name and makes a bad name completely work.
This... is not very good for an hour? I would expect an undergrad to be able to cook this up in an hour.
I would not expect someone without a good knowledge of both JavaScript and the Ising model of ferromagnetism to make that in one hour. Especially now that Google search is more and more crap, just looking for the info would take longer.
Is this not just a cellular automaton? That's well within the usual range of college sophomore lab assignments.
To be honest the student may not necessarily care what the Ising model is, but they don't have to, and neither does an LLM. It takes a very modest amount of code to apply some rules and update a grid of pixels. At least when I was in school it was totally normal to expect students to make something like this in an hour.
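"Apply some rules and update a grid of pixels" really is a modest amount of code. As a sketch, here is a textbook cellular automaton (Conway's Game of Life rather than the Ising demo itself, so the rule and the glider test are standard, not taken from the linked simulation):

```python
import numpy as np

def life_step(grid):
    """One update of Conway's Game of Life with periodic boundaries."""
    # Count the eight neighbours by summing shifted copies of the grid.
    neighbours = sum(np.roll(np.roll(grid, dy, axis=0), dx, axis=1)
                     for dy in (-1, 0, 1) for dx in (-1, 0, 1)
                     if (dy, dx) != (0, 0))
    # A live cell survives with 2 or 3 neighbours; a dead cell is born with 3.
    return ((neighbours == 3) | ((grid == 1) & (neighbours == 2))).astype(np.uint8)

# A glider on an 8x8 grid: after 4 steps it reappears shifted one cell
# down and one cell to the right.
grid = np.zeros((8, 8), dtype=np.uint8)
for r, c in [(0, 1), (1, 2), (2, 0), (2, 1), (2, 2)]:
    grid[r, c] = 1
for _ in range(4):
    grid = life_step(grid)
print(int(grid.sum()))  # glider still has 5 live cells
```

An Ising sweep is the same shape of program: a local neighbourhood computation plus an update rule, which is why an hour is a reasonable budget for a student.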
It's actually kind of ironic that in this case such a simple project now means the opposite of what it did back then. Students got these assignments as a form of encouragement to show that their skills were immediately useful and that more "serious" science need not be so scary.
They'd cook it up in an hour and then spend the next two days stuck on some stupid issue. Not so with AI tools.