Hiker, software engineer (primarily C++, Java, and Python), Minecraft modder, hunter (of the Hunt Showdown variety), biker, adoptive Akronite, and general doer of assorted things.

  • 8 Posts
  • 86 Comments
Joined 2 years ago
Cake day: August 10th, 2023


  • There is more value in understanding how to extend and customize your editor than in searching for a new one. Use whatever your workplace provides the best support for, and then customize it from there.

    I think there’s also something to be said for periodically shaking up your environment and trying new things. Sure, there’s a week where you edit at a snail’s pace, followed by a month where you edit a bit slower than normal, but different tools really do have different pros and cons.

    For the code bases I’ve worked in, this evolved out of necessity: the code files were so large that many editors struggled, the style rules were so custom that editors couldn’t be configured to match them, or the editor’s performance in general was questionable.

    I went through a journey of sorts from IDEs to Electron-based editors to Emacs, and I’m currently working with Kakoune (I’ve passed over a bunch of other editors like Sublime, Helix, and Zed that couldn’t meet my requirements or didn’t match my sensibilities, even though a thing or two here or there really was excellent). Pretty much every change has been the result of editor pain points that couldn’t be addressed without actually working on the editor itself.


  • I’ve recently taken to Kakoune, which was one of the inspirations for Helix.

    It’s not as fancy (in terms of built-in features) out of the box, but it’s very performant, integrates well with tmux, and for the C++ and Python I’m writing I haven’t felt the need for much beyond token-based word completion and grep.

    The client-server model it uses has really let me improve my tmux skills, because I’m working inside of tmux more and using it for editor splits.

    I don’t know if Helix does this, but I’ve also come to love the pipe operator (you pipe a selection into some external program and the selection gets replaced with the output, so you can, for example, use the sort command to sort text). You can also pretty easily add custom extensions via command-line programs.
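
    For instance (program and file names hypothetical): since any filter that reads stdin and writes stdout will do, a one-off C++ tool slots into that pipe workflow the same way sort does. A minimal sketch:

        // sort_lines.cpp -- a hypothetical stdin-to-stdout filter; pipe a
        // selection through it and the selection is replaced by its output,
        // exactly like piping a selection through the stock sort command.
        #include <algorithm>
        #include <iostream>
        #include <string>
        #include <vector>

        int main() {
            std::vector<std::string> lines;
            std::string line;
            while (std::getline(std::cin, line)) lines.push_back(line);
            std::sort(lines.begin(), lines.end());
            for (const auto& l : lines) std::cout << l << '\n';
        }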


  • But 99.9% of code I write is safe rust - which most people just call rust.

    That may be true for your application, but they go on to say that one should consider whether their problem is going to fit well within the rules of the Rust borrow checker, and that needs to be talked about more (vs. just assuming Rust is the safest option).

    The second time you write any project it will be easier and faster as you learn a lot from the first time you write something. If zig is always the rewrite it will come off better. Almost all rewrites are better or faster, even if you are moving to a slower language - the language makes a difference to performance and ease of writing. But far more does how you write things and the data structures/algorithms you use.

    I’m going to agree to disagree with you there. I’d throw this in the category of persistent myth. Yes, ideally, you learn from your first experience and make everything better, but the reality is you often just end up with different mistakes.

    Rewrites aren’t often done outside of hobbyish projects because they’re very expensive, stop new feature development, and you really can end up with something that’s worse than what you started with (this is especially true if you’ve switched languages or frameworks).

    Overall they seem to want to write as much unsafe as they can and are writing rust like it is C. This is not a good idea and why zig will be better suited.

    They do explain, with citations, why it makes more sense (i.e., you end up with something more performant) to write their VM outside the restrictions of the borrow checker.

    But you can write a VM without large amounts of unsafe if you want to and it can be performant.

    I think that claim is a bit of a stretch off the cuff. Ideally, to rebut it, some Rustacean would implement a mark-and-sweep VM in Rust with maximal use of the borrow checker.

    Edit: Looking at their code, it’s not all just one big unsafe block either. But it is something they frequently had to drop down to in order to implement this particular garbage collection strategy.


  • You’re ignoring that simple principles make great guidelines for not overthinking things.

    Name some great “simple principles;” everything has nuance, and trying to distill things into “well, it’s just this simple principle…” is a great way to end up with catastrophic mistakes.

    And you’re doing so in the context of an article about the dangers of overthinking things.

    You did not understand the point of that article if you think it’s about the dangers of overthinking. The issue with DRY is that it leads to making refactors without thinking about whether or not the refactor makes sense. That’s the exact issue the author is warning about: think about whether or not DRY makes sense.

    That has ABSOLUTELY NOTHING to do with how many times the code has been repeated. It has everything to do with why it’s repeated.

    You’re coming across like one of the rookies who need this warning.

    I’ll toss that right back at you bud. You don’t seem to understand the actual problem.

    Consider counting to three, before applying DRY. It works.

    It does not. I literally fixed a bug today because the same algorithm, doing the same job, was duplicated in two different places (formatted differently), exactly two, and they got out of sync, resulting in memory corruption.

    That’s what DRY is intended to fix. Not “I have three [or whatever number] things doing the same thing, so now I should DRY this code up.” I’ve seen HORRIBLE refactors from DRY applied to 3 things; absolute spaghetti inheritance hierarchies that were “DRY.”

    I hate talking about DRY because it’s a principle that leaves so many people thinking “oh, I’m doing it correctly; I’m doing good things!” while they actually make the code SO MUCH worse.

    EDIT: Here are exact quotes from the article (emphasis theirs):

    Applying DRY principles too rigidly leads to premature abstractions that make future changes more complex than necessary. Consider carefully if code is truly redundant or just superficially similar. While functions or classes may look the same, they may also serve different contexts and business requirements that evolve differently over time. Think about how the functions’ purpose holds with time, not just about making the code shorter.
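
    To make “superficially similar” concrete, here’s a contrived C++ sketch (names and rules hypothetical): both functions are textually identical today, but they encode unrelated business rules, so “deduplicating” them would couple requirements that will diverge.

        #include <cstdio>

        // Identical by coincidence, not by meaning: a discount rule and a
        // rebate rule that merely happen to share a shape today.
        double order_discount(double total) {
            return total > 100.0 ? total * 0.10 : 0.0;
        }

        // If marketing changes the rebate rule tomorrow, only this function
        // should move; a shared "helper" would force both to change together.
        double shipping_rebate(double total) {
            return total > 100.0 ? total * 0.10 : 0.0;
        }

        int main() {
            std::printf("%.2f %.2f\n", order_discount(150.0), shipping_rebate(150.0));
        }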


  • You’re both saying the same thing though.

    We’re not quite saying the same thing though because …

    It’s not a 2 vs. 3 issue. You can have an infinite number of instances of the same logic and it still not be a case for generalization, because it’s not actually general … it’s just an infinitely large program. You can also have two copies of the same code that should be reduced, because they are general (e.g., you have the exact same algorithm for generating a UUID copied into two different spots). If you’re thinking about it in terms of quantity, you’re already doing it wrong.

    It’s not fixable by “just” copying something.

    Those two points are really important.
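
    The UUID case above is the mirror image; a rough C++ sketch (helper name hypothetical, and deliberately not a full RFC 4122 implementation): one algorithm with one meaning, so even two copies are one too many.

        #include <cstdint>
        #include <cstdio>
        #include <random>
        #include <string>

        // One algorithm with one meaning: every call site should share this,
        // whether there are two of them or two hundred.
        std::string make_uuid_v4() {
            static std::mt19937_64 rng{std::random_device{}()};
            const std::uint64_t hi = rng();
            const std::uint64_t lo = rng();
            char buf[37];
            std::snprintf(buf, sizeof buf, "%08x-%04x-4%03x-%04x-%012llx",
                          static_cast<unsigned>(hi >> 32),
                          static_cast<unsigned>((hi >> 16) & 0xffff),
                          static_cast<unsigned>(hi & 0x0fff),
                          static_cast<unsigned>(((lo >> 48) & 0x3fff) | 0x8000),
                          static_cast<unsigned long long>(lo & 0xffffffffffffULL));
            return buf;
        }

        int main() {
            std::printf("%s\n", make_uuid_v4().c_str());
        }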


  • The code in the article isn’t complicated enough that I’d bother. It even ends up with about the same number of lines of code, hinting that you probably haven’t simplified things much.

    I think it’s a good example of the problem, though. People take that same idea and apply it too liberally. The point isn’t that specific code; it’s about not applying DRY to code that’s coincidentally identical.

    But otherwise, I disagree with the article. If it’s complicated enough to bother abstracting the logic, the worst that can happen in the above situation is that you just duplicate that whole class once you discover that it’s not the same. And if that never happens, you only have 1 copy to maintain.

    That’s… not at all true in practice. What often happens with these “DRY” abstractions, when they’ve been improperly applied, is that you end up with an inheritance hierarchy or a crazy template or some other thing. You’re really lucky if you can just copy some code and find your way out of the weeds.

    There are plenty of bad abstractions in the wild and novices applying DRY is a common source of them.




  • Sure, there’s a cost to breaking things up; all multiprocessing and multithreading comes at a cost. That said, in my evaluation, single-file “unity builds” are garbage; sometimes a few files are used to get some multiprocessing back (… as the GitHub you mentioned references).

    They’re mostly a way to minimize the number of translation units so that you don’t have the “I changed a central header that all my files include and now I need to rebuild the world” problem, where the world consists of many, many small translation units (this is arguably worse on Windows, because process spawning is more expensive).

    Unity builds as a whole are very, very niche, and you’re almost always better off doing a more targeted analysis of where your build (or, often more importantly, your incremental build) is expensive and making appropriate changes. Note that large C++ projects like LLVM, Chromium, etc. do NOT use unity builds (almost certainly because they are not more efficient in any sense).

    I’m not even sure how they got started, presumably they were mostly a way to get LTO without LTO. They’re absolutely awful for incremental builds.
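
    For anyone unfamiliar, the entire mechanism is just textual inclusion. A sketch (file names hypothetical; it obviously won’t build without the referenced sources):

        // unity.cpp -- the whole trick: one translation unit that textually
        // includes every source file, so shared headers are parsed once.
        // The flip side: touch any one of these files and this entire
        // translation unit rebuilds.
        #include "lexer.cpp"
        #include "parser.cpp"
        #include "codegen.cpp"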


  • Slow compared to what exactly…?

    The worst part about headers is needing to reprocess the whole header from scratch in every translation unit that includes it … but precompiled headers largely solve that (as does just using smaller, more targeted header files).

    Even in those cases, there’s something to be said for the extreme parallelism of a C++ build. You give some of that up with modules in exchange for better code organization; in some cases modules do help build times, but I’ve heard that in others they hurt (a fair bit of that might just be inexperience with the feature and best practices, plus immature implementations, but alas).
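
    For reference, a minimal sketch of that trade-off (file and module names hypothetical; needs a C++20 compiler with modules support): the interface below is compiled once instead of being reparsed in every translation unit, but consumers can’t start compiling until it’s built, which is where the lost parallelism comes from.

        // math.cppm -- a C++20 module interface unit; built once, then
        // consumers load the compiled form instead of reparsing text.
        export module math;

        export int add(int a, int b) { return a + b; }

        // main.cpp -- cannot start compiling until the math interface above
        // has been built, unlike independent header-based translation units.
        import math;

        int main() { return add(2, 3) == 5 ? 0 : 1; }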