• 12 Posts
  • 107 Comments
Joined 2 years ago
Cake day: June 7th, 2024

  • I’ve been using these for constrained, boring development tasks since they first came out. “Pro” versions too. Like converting code from one language to another, or adding small features to existing code bases. Things I don’t really want to bother taking weeks to learn, when I know I’ll only be doing them once. They work fine if you take baby steps, do functional/integration testing as you go (don’t trust their unit tests–they’re worthless), and review EVERYTHING generated. Also, make sure you have a good, working repo version you can always revert to.

    Another good use is for starting boilerplate scaffolding (like, a web server with a login page, a basic web UI, or REST APIs). But the minute you go high-level, they just shit the bed.

    The key point in that article is the “90%” one (in my experience it’s more like 75%). Taking a project from POC/tire-kicking/prototype to production is HARD. All the shortcuts you took to get to the end fast have to be re-done. Sometimes, you have to re-architect the whole thing to scale up to multiple users vs just a couple. There’s security, and realtime monitoring, and maybe compliance/regulatory things to worry about. That’s where these tools offer no help (or worse, hallucinate bad help).

    Ultimately, there’s no substitute for battle-tested, scar-tissued, human experience.







  • Arduino is based on the ‘giant loop’ model: you initialize settings in the setup() function, then poll for events (inputs, timers, handlers, etc.) in the loop() function.

    Each pass through loop() has to finish before it can be called again. So if there are timing-related actions, there’s a chance they fall out of sync or stutter. If you want to advance an animation frame, you need to maintain all the state yourself and hope the loop gets called often enough for the frame to advance on time. If you want to sync the animation to an RTC, you also have to track whether the current pass lines up with a time code before deciding whether to advance the animation. Pretty soon your giant loop gets complicated and messy.

    Another option is to look at something like SoftPWM for controlling LEDs and see how it sets up animation timing. Or use the millis() function instead of delay() to manage timing. Adafruit has a nice tutorial on that: https://learn.adafruit.com/multi-tasking-the-arduino-part-1/using-millis-for-timing
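    The millis() pattern boils down to comparing timestamps instead of blocking. Here’s a minimal sketch of the idea; fakeClock, loopOnce(), and the frame counter are names I made up for illustration, and millis() is stubbed out with a fake clock so the logic can run (and be checked) off-device. On real hardware you’d call the actual millis():

    ```cpp
    #include <cstdint>
    #include <cstdio>

    // Fake clock standing in for Arduino's millis(); on real hardware
    // you'd call millis() directly and delete these two lines.
    static uint32_t fakeClock = 0;
    uint32_t millis() { return fakeClock; }

    const uint32_t FRAME_INTERVAL_MS = 100;  // advance animation every 100 ms
    uint32_t lastFrameAt = 0;
    int frame = 0;

    // Called repeatedly, like Arduino's loop(): never blocks, just checks
    // whether enough time has passed to advance a frame. Unsigned
    // subtraction keeps this correct even when millis() wraps around.
    void loopOnce() {
        uint32_t now = millis();
        if (now - lastFrameAt >= FRAME_INTERVAL_MS) {
            lastFrameAt += FRAME_INTERVAL_MS;  // += (not = now) avoids drift
            ++frame;                           // advance the animation here
        }
    }

    int main() {
        // Simulate ~1 second of wall time, calling loopOnce() at an uneven
        // rate (7 ms steps) -- loop() rarely runs at a fixed cadence.
        while (fakeClock < 1050) {
            loopOnce();
            fakeClock += 7;
        }
        printf("frames advanced: %d\n", frame);  // 10 full 100 ms intervals fit
        return 0;
    }
    ```

    The key design point is that lastFrameAt advances by the interval rather than being reset to “now,” so occasional late loop passes don’t accumulate into drift.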

    To get more asynchronous activity going, the next option is to move to a task-based system like FreeRTOS. Here you define ‘tasks’ that yield to each other, so you can run more things concurrently. But the mental model is very different from the Arduino loop, and the toolchain is completely different too. Here’s a decent primer: https://controllerstech.com/freertos-on-arduino-tutorial-part-1/

    If your target device is an ESP32, the underlying OS is actually FreeRTOS, and Arduino is a compatibility layer on top. So you can use the Arduino IDE and toolchain to write FreeRTOS tasks, and many peripheral device drivers can be shared between the two. However, once you move your work into tasks, the single Arduino loop model no longer applies (on ESP32, loop() itself runs as just another FreeRTOS task). Examples here: https://randomnerdtutorials.com/esp32-freertos-arduino-tasks/
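    For flavor, here’s roughly what that looks like with the Arduino core on an ESP32. This is an illustrative sketch, not tested code: it targets ESP32 hardware (so it won’t run on a desktop), and the pin numbers, stack sizes, and priorities are placeholder guesses. xTaskCreatePinnedToCore(), vTaskDelay(), and pdMS_TO_TICKS() are the actual FreeRTOS/ESP32 calls:

    ```cpp
    #include <Arduino.h>

    // Blink task: runs forever, yielding to the scheduler with vTaskDelay()
    // instead of stalling everything with delay().
    void blinkTask(void *param) {
      pinMode(LED_BUILTIN, OUTPUT);  // assumes the board defines LED_BUILTIN
      for (;;) {
        digitalWrite(LED_BUILTIN, HIGH);
        vTaskDelay(pdMS_TO_TICKS(500));  // sleep 500 ms; other tasks keep running
        digitalWrite(LED_BUILTIN, LOW);
        vTaskDelay(pdMS_TO_TICKS(500));
      }
    }

    // Sensor task: polls an analog pin once a second, independent of the blink.
    void sensorTask(void *param) {
      for (;;) {
        int reading = analogRead(34);  // pin 34 is just an example
        Serial.printf("reading: %d\n", reading);
        vTaskDelay(pdMS_TO_TICKS(1000));
      }
    }

    void setup() {
      Serial.begin(115200);
      // Spawn the tasks; stack depths and priorities here are rough guesses.
      // Args: function, name, stack depth, param, priority, handle, core id.
      xTaskCreatePinnedToCore(blinkTask,  "blink",  2048, nullptr, 1, nullptr, 1);
      xTaskCreatePinnedToCore(sensorTask, "sensor", 2048, nullptr, 1, nullptr, 1);
    }

    void loop() {
      // On ESP32, loop() is itself a FreeRTOS task; with the work moved
      // into the tasks above, it can stay empty.
    }
    ```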

    From your description, it sounds like you may want to switch to FreeRTOS tasks.





  • This looks great!

    Can you use it to overlay text fields and fill them?

    Most of my uses are basic. Like filling out a PDF form that doesn’t have proper form entry fields. These are usually older government or bureaucratic/healthcare/school forms.

    I end up adding text boxes and entering values, or adding an X on top of a checkbox, adding a signature PNG file and scaling it to fit the size. Sometimes I have to add a highlight overlay. Then I save it all as a single flattened PDF file.

    Amazingly, this is hard to do in Acrobat and a lot of other apps. I end up using a janky, 10-year-old desktop app that’s no longer supported.



  • When designing large, complex systems, you try to break things down into manageable chunks. For example, the bit that deals with user login or authentication. The payment bit. Something that needs to happen periodically. That sort of thing.

    Before you know it, there are tens, or hundreds, of chunks, each talking to the others or getting triggered when something happens. Problem is: how do these bits share data? You can copy all the data between chunks, but that’s not very efficient. And if something goes wrong, you end up with a mess of inconsistent data everywhere.

    So what bits of data do you keep in a shared place? What gets copied around from place to place? And what gets used only within that one function to get the job done? Sorting this out is the job of software architects.

    The author says the more copies of something you make, the more complexity and ‘state’ management you have to deal with. He’s right, but there are ways to mitigate the problem.






  • On the AI coding IDE side, VSCode has pretty much hoovered up everyone, mainly because JetBrains offered their own AI option, which kept competitors away. On the server side, though, integrating with AI is still wide open.

    You eventually have to hit Python because of all the ML libraries. But you can run that as a separate microservice or process. Here’s a chance to do something wacky, like letting JS invoke Python ML inline, porting the main ML libraries to JS, or cross-compiling JS to CUDA (just spit-balling here). It’ll be a lot easier to try these experiments than to push changes upstream into Node.

    Plus, Bun is used by a bunch of cross-platform CLI tools, including Claude Code, so they can make sure there are no breaking changes.

    TBH, I’m surprised nobody’s snapped up Mojo (and Chris Lattner). They have a lot more advanced, AI-relevant, cross-platform tech.


  • It’s not a bad outcome. Bun is cool but has $0 revenue and some hand-wavy thing about future paid cloud services. This way, larger companies will give them a more serious shot than they would a small startup.

    It still doesn’t have a revenue story, but it’s now strapped onto the side of one of the few AI companies with a decent chance of surviving the next AI Winter. And if Anthropic goes sideways, the Bun engineers can fork the code and keep going.





  • Both use Nordic processors, and the move to Zephyr OS should make it easier to port over. But the Pebble watches have a Nordic nRF52840 with 1MB flash and 256KB RAM, while the PineTime has an nRF52832 with 512KB flash and 64KB RAM. Squeezing everything down will be a challenge.

    Pebble also has an ePaper display (B&W on the Duo, 64-color on the PT2) vs. an IPS capacitive-touch display on the PineTime. Then there’s the matter of all the peripherals (IMU, mic, speaker, compass, haptics, buttons) that need to be supported. The PineTime also has a heart-rate monitor that PebbleOS may not support (yet).

    It’s doable, but I suspect the lower flash/RAM will be a barrier. Someone might still try to do the port, given that the cheapest Pebble device is $149 and the PineTime is $27.