Me: What do you mean my server is old? I built it last year!
Server Uptime: 988 days(!)
Oh…
Check out my open source game engine! https://strayphotons.net/ https://github.com/frustra/strayphotons
I have been developing this engine on and off for over 10 years, and still have big plans.
I personally have spent those 100s (actually more like 1000s) of hours studying Software Engineering, and I was doing my best to give an example of how current AI tools are not a replacement for experience. Neither is having access to a sewing machine or a blowtorch and hammer (you still need to know about knots and thread / metallurgy / the endless number of techniques for using those tools).
Software in particular is an extremely theoretical field, similar to medicine (thus my example with a doctor).
ChatGPT is maybe marginally better than a simple web search when it comes to learning. There is simply no possible way to compress the decade of experience I have into a few hours of using an LLM. The usefulness of AI for me starts and ends at fancy auto-complete, and that literally only slightly speeds up my already fast typing speed.
Getting a good result out of AI for coding requires so much prerequisite knowledge to ask the right questions that a complete novice is not even going to know what they should be asking for without going through those same 100s of hours of study.
Not everyone has 100s of hours of free time to sink into this or that skill
That’s life, buddy. Nobody can learn everything, so communities rely on specialists who can master their craft. Would you rather your doctor have 100s of hours of study and practice, or a random person off the street with ChatGPT? If something is worth studying for 100s of hours, then there’s more nuance to the skill than any layman or current AI system can capture in a few sentence prompt.
Legitimately it is a winning strategy: https://www.history.com/articles/us-invasion-of-panama-noriega
Check again. Going from 600:1 to 60:1
On the bright side, after 10 years of doing it, you might improve the ratio to 1 hour of feeling like an idiot and 1 minute of feeling like a genius.


This practice was banned already last year in January. If you think this is grounds to invade a sovereign nation then you’ve got some twisted world views. Maybe the US should be liberated from its oppressive anti-abortion, anti-trans government, hm? Sounds like the exact same argument to me. Every country has its problems. If you think invading is going to make anything better, then fuck right off.


It’s a struggle even finding the manual these days if you don’t already know where it is / what it’s called. I was searching for a fix for an issue with my car recently, and like 90% of the results are generic AI-generated “How to fix ______” pages with no actual information specific to the car I’m searching for.


Check out the benchmark I edited into my original post. These are not user-provided strings in my case.


C++ already does that for short strings
I’ve already been discussing this. Maybe read the rest of the thread.
Also the case in the standard library
I think you’re missing the point of why. I built this to be a nearly drop-in replacement for the standard string. If it weren’t, it would need to do even more processing and work to pass the strings to anything.
discontinued because it was against the standard.
Standards don’t matter for an internal type that’s not exposed to public APIs. I’m not trying to be exactly compatible with everything under the sun. There’s no undefined behavior here, so it’s fine.
I had blanked this from my memory, but my very first programming job was to reimplement some FoxPro code in… Visual Basic. FoxPro is so strange to work in. It’s like programming in SQL, and the codebase I was in had global variables everywhere.


I don’t use 256 bytes everywhere. I use a mix of 64, 128, and 256 byte strings depending on the specific use case.
In a hot path, having the data inline is much more important than saving a few hundred bytes. Cache efficiency plus eliminating heap allocations has huge performance benefits in a game engine that’s running frames as fast as possible.


22 characters is significantly less useful than 255 characters. I use this for resource name keys, asset file paths, and a few other scenarios. The max size is configurable, so I know that nothing I am going to store is ever going to require heap allocations (really bad to be doing every frame in a game engine).
I developed this specifically after benchmarking a simpler version and noticing a significant amount of time being spent in strlen(), and it had real benefits in my case.
Admittedly just storing a struct with a static buffer and separate size would have worked pretty much the same and eliminated the 255 char limitation, but it was fun to build.


One cool trick that can be used with circular buffers is to use memory mapping to map the same block of memory to 2 consecutive virtual address blocks. That way you can read the entire contents of the buffer as if it was just a regular linear buffer with an offset.


I came up with a kind of clever data type for storing short strings in a fixed size struct so they can be stored on the stack or inline without any allocations.
It’s always null-terminated so it can be passed directly as a C-style string, but it also stores the string length without using any additional space (getting the length of a plain C string normally requires iterating to find the end).
The trick is to store the number of unused bytes in the last character of the buffer. When the string is full, there are 0 unused bytes and the size byte overlaps the null terminator.
(Only works for strings < 256 chars excluding null byte)
Implementation in C++ here: https://github.com/frustra/strayphotons/blob/master/src/common/common/InlineString.hh
Edit: Since a couple of people don’t seem to understand the performance impact of this vs regular std::string, here’s a demo: https://godbolt.org/z/34j7obnbs This generates 10000 strings like “Hello, World! 00001” via concatenation. The effect is huge in debug mode, but there are still performance benefits with optimizations turned on:
With -O3 optimization
std::string: 0.949216ms
char[256] with strlen: 0.88104ms
char[256] without strlen: 0.684734ms
With no optimization:
std::string: 3.5501ms
char[256] with strlen: 0.885888ms
char[256] without strlen: 0.687733ms
(You may need to run it a few times to get stable numbers due to random server load on godbolt)
Changing the buffer size to 32 bytes makes a negligible difference over 256 bytes in this case, though it could be slightly faster since the whole string fits in a cache line.


Have you heard of the concepts of Internet bandwidth and compression? This article isn’t talking about poor quality microphones or cameras. You could have the best camera and microphone in the world, but it won’t do you any good on dial-up, low speed “broadband”, or an unreliable connection to the Internet.
they want them to be rich, happy and healthy so they keep coming back and spending more money.
See there’s the problem right there. They don’t need customers to be any of those things to suck every last cent out of them. Corporations would love nothing more than becoming a monopoly on human essentials like food, water, housing, etc… because people will go to great lengths to afford food whether they like it or not.


I guess that makes sense it’d happen more in big buildings. The runs in most houses wouldn’t be long enough to have a noticeable induced current without the electrician adding a few extra loops for fun :)
Thanks for humoring my skepticism, it’s been interesting to think about how this would happen.
3/4 of a second is quite noticeable. Most UI animations are only 100-200ms, and if you disable them, things feel faster but less “polished”. Try it out yourself on your phone UI if you’ve got an Android.