

Probably boxed up by a clanker; as long as the weight matches, it’s good to go.


This is mostly correct. It’s also the case that “dreams” are formed after you wake up. You aren’t dreaming while you are asleep; your brain is firing random shit that makes no sense. As soon as you start to wake, it tries to piece together what the fuck was going on into something resembling a narrative. This piecing together is part of waking up, not part of sleeping. This is why you can have a dream about an alarm going off for seemingly tens of minutes or even hours while your actual alarm is waking you up. The alarm probably hasn’t been going for more than a few seconds, but your brain incorporates it into the narrative. Now this isn’t to say you can’t have a bad dream or a nightmare and be woken up by it. The random firing can definitely cause enough stress to wake you up. Especially if you are ill (fever dreams) or under a lot of stress in general, your brain can misbehave during sleep and wake you up. It’s just that the “story” part of the dream only happens when you wake up; while you are sleeping it is random.


Elon Musk says…
He says a lot of shit. The world would be a better place if everyone had ignored this idiot since he was born. Don’t repost his bullshit, don’t promote him; ignore him and hope he goes away soon.


Think of it this way:
If I ask you, “Can a car fly?”, you might say, “Well, if you put wings on it or a rocket engine or something, maybe?” OK, I say, and I point at a car on the street and ask: “Do you think that specific car can fly?” You will probably say no.
Why? Even though you might not fully understand how a car works and all the parts that go into it, you can easily tell it does not have any of the things it needs to fly.
It’s the same with an LLM. We know what kinds of things are needed for true intelligence, and we can easily tell the LLM does not have the parts required. So an LLM alone can never lead to AGI; more parts are needed. This holds even though we might not fully understand how the internals of an LLM function in specific cases, and might not know exactly which parts are needed for intelligence or how those work.
A full understanding of all parts isn’t required to discern large scale capabilities.


Interesting little detail: even though light doesn’t interact directly with dark matter, so in a sense it just passes through, it can still be affected by the dark matter indirectly. Because dark matter does have mass, or at least interacts gravitationally as if it has mass, it deforms space-time. This deformation bends the light’s path, so it travels a longer route than one might expect.
This effect has been used to create dark matter “maps” showing where there is more and where there is less of it, and it’s the same effect behind gravitational lensing.
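To put a rough number on how small the bending is (a standard weak-field result from general relativity, not something from the comment above): the deflection angle of a light ray passing a mass M at closest approach b is

```latex
% Weak-field deflection of a light ray passing a mass M at impact parameter b
\[
  \hat{\alpha} = \frac{4 G M}{c^{2} b}
\]
% Example: a ray grazing the Sun (M ~ 2e30 kg, b ~ 7e8 m) is bent by
% roughly 1.75 arcseconds. Tiny, but measurable, and on galaxy-cluster
% scales the accumulated bending is what the dark matter maps exploit.
```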


I’ve seen this format floating around for a bit. Altman has been begging for more money for years now. And he’s received it as well, but they are burning through it so fast they are always short.


“What are you doing, Cooper?” “DOCKING” - Interstellar (2014)
They are trolling; it doesn’t matter at all.


A table saw is for lengthwise cuts; for cutting long things like these to length you need a cut-off saw.
Fun fact: you don’t really need to tap soft aluminium like this. You can just drive the bolt straight in with an impact driver. I thought it was sketchy at first, having always tapped the holes beforehand, but my buddy said it’s a waste of time and to just drive the bolts in right away. So I tried it and he was right, it works perfectly every time. The bolts form perfect threads, so you can easily remove and reinsert them just as if the hole had been tapped beforehand.


we end up with lost history
Oof, I felt this in my soul


There are a couple of things I do agree with regarding comments in code. They aren’t meant as a replacement for documentation. Documentation is still required to explain the more abstract, overview kind of stuff, known limitations, etc. If your class has 3 pages of text in comments at the top, that text would probably be better off in the documentation. When working in large teams there are often people who need to understand what the code can and can’t do, how edge cases are handled, etc., but who can’t read actual code. By writing proper documentation, a lot of questions can be avoided, and it often helps coders too by giving them a better understanding of the system. Writing doc blocks in a manner that can be extracted into documentation helps a lot as well, but I feel it also provides an easy way out of writing actual documentation. Of course, depending on the situation this might not matter or one might not care; it’s something that comes up more when working in large teams.
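To make the doc block point concrete (a made-up sketch; the names and behaviour are hypothetical, not from any real project): a doc block like this can be pulled into generated documentation by standard tooling, but it still only covers one function, not the system-level overview.

```python
def allocate_batches(items, max_weight):
    """Split *items* into shipping batches that each stay under *max_weight*.

    Known limitation: an item heavier than *max_weight* still ends up in a
    batch of its own rather than raising an error, because downstream code
    expects every item to be shipped. The overview documentation explains
    why that trade-off was made.
    """
    batches, current, total = [], [], 0
    for item in items:
        weight = item["weight"]
        # Start a new batch once adding this item would exceed the limit.
        if current and total + weight > max_weight:
            batches.append(current)
            current, total = [], 0
        current.append(item)
        total += weight
    if current:
        batches.append(current)
    return batches
```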
Just like writing code, writing proper comments is a bit of an art. I’ve very often seen developers be way too verbose, commenting almost every line with the literal thing the next line does. Anyone who can read the code can see what it does. What we can’t see is why it does this, or why it doesn’t do it in some other obvious way. You see this a lot with AI-generated code, probably because a lot of the training was done on tutorials where every line is explained so people learning can follow along.
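A quick made-up example of the difference (hypothetical values, just to show the contrast):

```python
# Too verbose: restates what anyone reading the code can already see.
retries = 3      # set retries to 3
delay = 0.5      # set delay to 0.5 seconds

# Useful: explains the why, which the code cannot tell you.
# The upstream API rate-limits bursts; 3 retries with a short backoff was
# enough in practice, and a longer delay makes the request feel unresponsive.
retries = 3
delay = 0.5
```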
This also ties in with keeping comments updated and accurate when changing code. If the comment and the code don’t match, which one is true? I’ve worked on legacy codebases in the past where the comments were almost always traps. The code didn’t match the comments at all, sometimes obviously, most of the time only very subtly. We were always guessing: was the comment the intended behaviour and the difference just a mistake? The codebase was riddled with bugs, so it’s likely. Or was the code changed later on purpose and the comments neglected?
Luckily, these days we have good tooling around source control: feature branches, pull requests with built-in discussion and annotation, and so on. That way the origin of a change is usually traceable, and code review can happen before the change is merged, so mistakes like neglected comments can be caught.
Now, I don’t agree with the principle of no comments at all. Just because a tool has some issues and limitations doesn’t mean it gets banned from our toolbox. But writing actually useful comments is very hard, and can be just as hard as writing good code. Comments also aren’t a cheat card for writing bad code; the code needs to stand on its own and be enhanced by the comments.
It’s one of those things we’ve been arguing about over my entire 40-year career. I don’t think there is a right way. Whatever is best depends on the person, the team, the system, etc. And like with many things, there are people who are good and people who suck. That’s just the way the cookie crumbles.
Would have been funny if it was original. Just randomly wasting people’s time with copypasta is not cool.
Omg, the comments are so out of hand. I regularly do code reviews for colleagues who use AI to write code (some under protest, but still). The comments are usually the worst part.
The thing writes entire novels in the summary that do nothing but confuse and add cognitive load. It adds comments to super obvious things, describing what the code does instead of why. Yes, AI, I can read code; I know assigning a value to a variable is how shit works. And I still have PTSD from those kinds of comments, from a legacy system I worked on for years that did the exact same thing, except the comments and the code didn’t match up, so it was a continuous guess which one was intended. It also likes to put responses to the prompt in the comments. For example, when it assigned A to a variable and it was supposed to be B, and you point this out, it adds a comment saying something like: “This is supposed to be B, not A.” But when you read that comment after the fact, it makes zero sense. Of course it should be B, why would it ever be A?
And it often generates a bunch of markdown docs which are plain drivel; luckily most devs just delete those before I see them.
My personal experience is that in 30% of cases the AI is just plain wrong and the result is nonsense: delete that shit and try again. In the 70% that does produce some kind of answer, there is ALWAYS at least one big issue and usually several. It’s a 50/50 whether the code is workable with some kinks to work out, or seriously flawed and in need of a lot of work. For experienced devs it can be helpful when they have writer’s block, giving them something to be angry about and showing them how they can do better. But for inexperienced devs it’s just plain terrible: the code is shit and the dev doesn’t even know it. And worse still, the dev doesn’t learn. I try to sit down with them, explain the shortcomings and how to do better. But they don’t learn; they just figure out what to put in the prompt so I won’t get on their case. Or they’ll say stuff like: “But it works, right?” Facepalm.
A company I do work for also tried getting their sysadmins and devops people to use AI. Till one day there was a permissions issue, which admittedly was pretty complicated, and they ended up solving it with AI. The team was happy, upper management was happy, high fives all around. Till the grumpy old sysadmin with 40 years of experience took a look and hit the big ol’ red alarm button of doom. One full investigation later, it turned out the AI had fucked up and created a huge hole in the security. There was zero evidence it had been exploited, but that doesn’t matter; all the work still needed to be done, all the paperwork filed, the proper agencies informed, because the security issue was there. Management eased up on AI usage for those people real fast.
It’s so weird how the people in charge want to use AI but aren’t even really sure what it is and what it isn’t. And they don’t listen to what the people with actual knowledge have to say. In their minds we are probably all just covering our asses so we don’t end up out of a job.
But for real, if anyone in management is listening, take it from an old asshole who has done this job since the 80s: AI fucking sucks!