I think some apps allow it, not sure though. You can also move to an instance defederated from .ml if that instance is more aligned with what you’d like to see. You can also just subscribe to communities and not browse “All.”
I’ve run a 2.5" HDD on a Raspberry Pi using a USB-to-SATA adapter (powered from the rPi’s USB port). I’ve also run a 3.5" HDD using an externally powered enclosure.
This works well too, and with many different models: https://github.com/guardrails-ai/guardrails
Humans used to live in socialist-like societies before agriculture. I.e. “primitive communism.” I’d argue socialism is more aligned with basic human nature than capitalism.
Code should be self-documenting. That way it never goes out of date. Here’s an example, similar to what you can expect to see in practice:
def nabla_descent(X, y, theta, alpha, delta):
    m = len(y)
    for _ in range(delta):
        h = X.dot(theta)
        nabla = (1 / m) * X.T.dot(h - y)
        theta = theta - alpha * nabla
    return theta
I (probably unreasonably) despise using web front-ends for desktop applications.
GTK is OK. Qt is very feature-rich, but that adds complexity. Both can be cross-compiled for most systems and shipped with all the required libraries pretty easily.
I haven’t used it in a long while, but I remember liking Java Swing for some reason. Java should be “write once, run anywhere.” But cross-compiling isn’t usually too hard, so I’m not sure how much that matters. There are more modern frameworks for JVM-based languages now, but I haven’t tried them.
I’ve noticed Gradio is popular in the ML community (web-tech based, and mostly used for quick demos/prototypes).
Edit: For web applications, I prefer Angular’s more traditional architecture over React’s hook architecture.
Haven’t tried Gemini; it may work. But in my experience with other LLMs, even when the text doesn’t exceed the token limit, they start making more mistakes and behaving strangely as the context grows.
I usually just use VS Code to do full-text searches, and write down notes in a note taking app. That, and browse the documentation.
Nah, LLMs have severe context window limitations. It starts to get wackier after ~1000 LOC.
Python is quite slow, so it will use more CPU cycles than many other languages. If you’re doing data-heavy stuff, it’ll probably also use more RAM than, say, C, where you can control the types and memory layout of structs.
That being said, for services I typically use FastAPI, because it’s just so quick to develop stuff in Python. I don’t do heavy computation in Python; that’s done by packages that wrap binaries compiled from C, C++, Fortran, or CUDA. If I need tight loops, I either switch to a different language entirely (Rust, lately), or I write a library and call it via ctypes.
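As a tiny illustration of the ctypes approach (using the system math library here, since it’s available everywhere; a real project would point CDLL at its own compiled .so/.dll instead):

```python
import ctypes
import ctypes.util

# Load the C math library; find_library is platform-dependent
# (on Linux this typically resolves to libm.so.6).
libm = ctypes.CDLL(ctypes.util.find_library("m"))

# Declare the C signature so ctypes converts arguments correctly.
libm.sqrt.argtypes = [ctypes.c_double]
libm.sqrt.restype = ctypes.c_double

print(libm.sqrt(2.0))  # 1.4142135623730951
```

Declaring argtypes/restype matters: without them, ctypes defaults to int conversions and you get garbage back from functions that take or return doubles.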
C# is actually pretty nice. Ecosystem, not so much, but D doesn’t really have one anyways :)
If you’re talking about naive Bayes filtering, it most definitely is an ML model. Modern spam filters use more complex ML models (or at least I know Yahoo Mail did ~15 years ago, because I saw a lecture where John Langford talked a little about it). Statistical ML is an “AI” field. Stuff like anomaly detection is also usually done with ML models.
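A minimal sketch of the naive Bayes idea behind such filters (the training messages are made-up toy data, just to show the mechanics):

```python
import math
from collections import Counter

# Toy training corpora (made-up examples)
spam = ["win money now", "free money offer", "win a free prize"]
ham = ["meeting at noon", "project status update", "lunch at noon"]

def train(docs):
    counts = Counter(w for d in docs for w in d.split())
    return counts, sum(counts.values())

spam_counts, spam_total = train(spam)
ham_counts, ham_total = train(ham)
vocab = set(spam_counts) | set(ham_counts)

def log_prob(msg, counts, total, prior):
    # Log-space to avoid underflow; Laplace smoothing so an
    # unseen word doesn't zero out the whole product.
    lp = math.log(prior)
    for w in msg.split():
        lp += math.log((counts[w] + 1) / (total + len(vocab)))
    return lp

def classify(msg):
    p_spam = log_prob(msg, spam_counts, spam_total, 0.5)
    p_ham = log_prob(msg, ham_counts, ham_total, 0.5)
    return "spam" if p_spam > p_ham else "ham"

print(classify("free money"))       # spam
print(classify("meeting at noon"))  # ham
```

It’s just counting word frequencies per class and applying Bayes’ rule with an independence assumption, but it’s unambiguously a statistical ML model: it’s fit to data and generalizes to unseen messages.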
I’ve heard high velocity rounds (such as rifle rounds) send a kind of shockwave through your body. Dunno if it’s true or not.
I’ve used them as a proxy for a web app at the last place I worked. Was just hoping they’d block unwanted/malicious traffic (not sure if it was needed, and it wasn’t my choice). I, personally, didn’t have any problems with their service.
Now, if you take a step back, and look at the big picture, they are so big and ubiquitous that they are a threat to the WWW itself. They are probably one of the most valuable targets for malicious actors and nation states. Even if Cloudflare is able to defend against infiltration and attacks in perpetuity, they have much of the net locked-in, and will enshittify to keep profits increasing in a market they’ve almost completely saturated.
Also, CAPTCHAs are annoying.
Likely transformers now (I think SD3 uses a ViT for text encoding, and ViTs are currently one of the best model architectures for image classification).
I think similar, and arguably more fine-grained, things can be done with Typescript, traditional OOP (interfaces, and maybe the Facade pattern), and perhaps dependency injection.
I’ve put together two computers in the last couple of years, one Intel (12th gen, fortunately) and one AMD. Both had stability issues, and I had to mess with the BIOS settings to get them stable. I actually had to under-clock the RAM on the AMD build (probably something to do with maxing out the RAM capacity, but I still shouldn’t need to under-clock, IMO). I think I’ll get workstation-grade components the next time I need to build a computer.
I just ask ChatGPT to review pull requests.
This is likely just stock manipulation. The interview was in June, and it was only released the day before TSMC’s earnings report.
It’s not really automation though. The store is outsourcing labor to the consumer.