

I’ve come around on it somewhat at work. Recent models really are getting pretty impressive. It’s at the point where I can tell it to read a Jira ticket and implement it, and for simple ones it basically just does it. I’m not sure it’s worth the massive environmental and infrastructure detriments (or rather, I’m pretty sure it’s not), but it’s definitely a productivity boost.
It’s also creating cognitive debt though - every change it makes for me automagically is one I don’t have to think about and ‘earn’ myself. You could argue the AI compensates for that by then explaining the code to you, but I think it will lead to some bad results in the mid-to-long term.
For any personal programming, I don’t/wouldn’t use it, beyond maybe replacing Google searches. It defeats the fun of it, and costs money on top of that.





Yeah, I worry about that. Seems like everything a human can do to prove they’re human, imitation machine learning will be able to look at those inputs and outputs and fake it.
If spaces can’t filter out bots (already hard, probably impossible in the future), trust and validity completely go away. Spam will become unmanageable, and in many cases more subtle. Abuse will scale and could become unblockable.
I’m sure there will still be some good ideas for pushing back, but I’d certainly say I’m pessimistic atm.