It’s not even actually that bad, at least not since January of 2020: https://stackoverflow.com/a/59687740/1858225
Huh. I had forgotten that git does actually create a file with the branch name. But it doesn’t actually screw up the .git folder or lose your data when you try to do a rename like this; it just rejects the rename unless you also use the “force” option. This has been the case since at least January of 2020. But apparently it doesn’t always use a local file for branch names, so sometimes there’s a problem and sometimes there isn’t, which I guess is arguably worse than just having consistently-surprising behavior.
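Since the “sometimes a file, sometimes not” point is the crux, here’s a quick sketch (repo and branch names are made up for illustration) showing both storage forms:

```shell
# Sketch: a branch usually lives as a loose file under .git/refs/heads/,
# but "git pack-refs" moves it into the plain-text .git/packed-refs file.
# Only the loose-file form can collide on a case-insensitive filesystem,
# which is why case-only branch renames misbehave only sometimes.
tmp=$(mktemp -d) && cd "$tmp"
git init -q demo && cd demo
git -c user.name=demo -c user.email=demo@example.com \
    commit -q --allow-empty -m init
git branch Feature
ls .git/refs/heads/           # "Feature" shows up as a literal file
git pack-refs --all           # now it's a line in .git/packed-refs instead
grep Feature .git/packed-refs
```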
I honestly don’t even understand the joke. Case-insensitive file names cause problems, but what does that have to do with version control branch names?
Yeah, consistency is good, which is why it’s good to follow the spec. I’m saying that the decision to make errors be flat strings in the spec was a bad one. A better design would be what you have, where code is nested one level below error, plus permitting extra implementation-defined fields in that object.
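For concreteness, a sketch of the error shape I mean (the field names here are illustrative, not from any spec):

```json
{
  "error": {
    "code": "RATE_LIMITED",
    "message": "Too many requests; retry after 30 seconds",
    "retry_after_seconds": 30
  }
}
```

Here code sits one level below error, and retry_after_seconds stands in for an implementation-defined extra field that the spec would permit but not require.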
The spec requires errors to be a single string, and also mandates using the space character as a separator? I’m not a fan of deviating from spec, but those are…bad choices in the spec.
Understandable; no time to check details when your fuse is that short
The second button is actually a pretty major change!
It means both.
It had a reasonably clear warning, though; a screenshot is included in this response from the devs. But note that the response also links to another issue where some bikeshedding on the warning occurred and the warning was ultimately improved.
In reality, that was added four and a half years after this issue was opened.
Yes, the dialog was changed, as part of this linked issue (and maybe again after that; this whole incident is very old). After reading some of the comments on that issue, I agree with the reasoning of some of the commenters that it would be less surprising for that menu option to behave like git reset --hard and not delete untracked files.
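To make the distinction concrete, a small sketch (repo and file names are made up) of what git reset --hard does and doesn’t touch:

```shell
# Sketch: "git reset --hard" reverts tracked files but leaves untracked
# files alone; deleting untracked files takes a separate "git clean".
tmp=$(mktemp -d) && cd "$tmp"
git init -q demo && cd demo
git -c user.name=demo -c user.email=demo@example.com \
    commit -q --allow-empty -m init
echo tracked > a.txt
git add a.txt
git -c user.name=demo -c user.email=demo@example.com \
    commit -q -m 'add a.txt'
echo modified > a.txt        # dirty a tracked file
echo scratch > notes.txt     # create an untracked file
git reset --hard -q          # a.txt is restored to "tracked"...
ls notes.txt                 # ...but notes.txt survives
git clean -fdq               # only this deletes untracked files
```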
The user clicked an option to “discard” all changes. They then got a very clear pop-up saying that this is destructive and cannot be undone (there’s a screenshot in the thread).
Doesn’t Prolog already “not work half the time”? (Disclaimer: I haven’t used it.)
The article is more about the behavior of members of the C++ committee than about the language. (It also has quite a few tangents.)
I understand what you’re saying, but I want to do whatever I can to promote the shift in attitudes that’s already happening across the industry.
And being late or never delivering out of fear of shipping buggy code is even worse.
From a business perspective, yes, usually true. But shipping buggy software can also harm your company’s reputation. I doubt that this has been researched enough yet to be quantifiable, but it’s easy to think of companies who were well known for shipping bugs (Microsoft, CD Projekt Red) and eventually suffered in one way or another for it. In both of those cases, you’re probably right; Windows was good enough in the 90s to dominate the desktop market, and Cyberpunk 2077 was enough of a technical marvel (for those who had the hardware to experience it) that it probably bolstered the studio’s reputation more than harmed it. But could Microsoft have weathered the transition to mobile OSes better if it hadn’t left so many consumers yearning for more reliable software? And is Microsoft not partly to blame for the general public just expecting computers to be generally flaky and unreliable?
Imagine if OSes in the 90s crashed as rarely as desktop OSes today. Imagine if desktop OSes today crashed as rarely as mobile OSes today. Imagine if mobile OSes crashed rarely enough that the average consumer never experienced it. Wouldn’t that be a better state of things overall?
I care about types not just because I like having stronger confidence in my own software, but because, as a user, bugs are really annoying, and yes, I’m confident that stronger type systems could have caught bugs I’ve seen in the wild as a user.
You are saying “yes” to a comment explaining why the Google AI response cannot possibly be correct, so what do you mean “and [it’s] correct”?
This article somehow links to both the Reference and the Ferrocene spec, but still concludes that an official non-Ferrocene spec is necessary.
Why doesn’t the Ferrocene spec accomplish what the author wants? He states:
“In other words, without a clear and authoritative specification, Rust cannot be used to achieve EAL5.”
What? Why can’t the Ferrocene spec (and compiler) be used? Do Ferrocene and TÜV SÜD not count as “some group of experts”?
(Regarding the author’s opening paragraphs, the Reference does make the same distinction about drop scopes for variables versus temporaries, though I can see why he finds the Ferrocene spec clearer. But that doesn’t demonstrate that the Reference is useless as a stand-in for a specification.)
That’s actually not how any language has ever been written, though it’s easy to get that impression from how much the C and C++ communities emphasize their formal specifications.
But in fact, both languages were in production use for over a decade before they had a formal spec. And languages with formal specifications are actually a tiny minority of programming languages.
My point is that the claim in the comic and in other comments that this corrupts your repo or loses work simply isn’t true.