• 0 Posts
  • 11 Comments
Joined 2 years ago
Cake day: June 17th, 2023

  • There are operations that treat bits like floats and operations that treat them like various kinds of ints, but the meaning of bits is in the eye of the beholder. There are even good examples of mixing and matching integer and floating point operations to clever effect, like with the infamous fast inverse square root (sketched below). I feel like people often think mathematical objects mean something beyond what they are, when often math is kind of just math and it is what it is (if that makes sense… it’s kind of like anthropomorphizing mathematical objects and viewing them through a specific lens, as opposed to just seeing them as the set of axioms that they are). That’s kind of how I feel with this stuff. You can treat the bits however you want, and it’s not like integer operations and bitwise operations have no meaning on supposedly floating point values. They do something (and mixing these different types of operations can even do useful things!), it just might not be the normal arithmetic operations you expect when you interpret the number as a float (and enjoy your accidental NaNs or whatever :P).
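    For the curious, here’s a minimal C sketch of that trick (the magic constant and overall shape follow the well-known Quake III version; `fast_rsqrt` is just an illustrative name):

    ```c
    #include <stdint.h>
    #include <string.h>
    #include <stdio.h>

    /* Fast inverse square root: reinterpret the float's bits as an
     * integer, do integer arithmetic on them, then reinterpret back.
     * memcpy is the well-defined way to type-pun in C. */
    static float fast_rsqrt(float x) {
        uint32_t bits;
        memcpy(&bits, &x, sizeof bits);       /* float bits -> integer */
        bits = 0x5f3759df - (bits >> 1);      /* the "magic" integer step */
        float y;
        memcpy(&y, &bits, sizeof y);          /* integer bits -> float */
        return y * (1.5f - 0.5f * x * y * y); /* one Newton refinement step */
    }

    int main(void) {
        printf("%f\n", fast_rsqrt(4.0f));     /* prints roughly 0.5 */
        return 0;
    }
    ```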

    The difference between static and dynamic typing being when you perform the type checking is partially why I consider it a somewhat arbitrary distinction for a language (obviously decidable static type checking is limited, though), and projects like TypeScript have shown that you can successfully bolt a static type system onto a dynamic language and provide type checking on specific parts of a program just fine. But obviously this changes what you consider to be a valid program at compile time, though maybe not what you consider to be a valid program overall if you consider programs with dynamic type errors to be invalid too (and there’s certainly precedent for that… C programs are arguably only real C programs when they’re well-defined, but detecting undefined behavior is undecidable; see the sketch below).
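    As a tiny illustration of that last point, here’s a C program that passes the type checker just fine (it’s a “valid program” as far as the compiler is concerned) but hits undefined behavior at run time; deciding whether an arbitrary program ever does this is undecidable:

    ```c
    #include <limits.h>

    /* Compiles cleanly and type checks, but signed integer overflow
     * is undefined behavior in C, so by the "well-defined programs
     * only" criterion this isn't a real C program at all. */
    int main(void) {
        int x = INT_MAX;
        x = x + 1;   /* UB: signed overflow */
        return 0;
    }
    ```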


  • I kind of feel like “untyped” is a term that doesn’t really have a proper definition right now. As far as I can tell when people say “untyped” they usually mean it as a synonym for whatever they consider “dynamically typed” to mean (which also seems to vary a bit from person to person, haha). Sometimes people say assembly is untyped exactly for this reason, but you could also consider it to have one type “bits” and all of the operations just do things on bits (although, arguably different sized registers have different types). Similarly, people sometimes consider “dynamically typed languages” to just be “unityped” (maybe monotyped is more easily distinguished from untyped, haha) languages at their core, and if you squint you can just think of the dynamic type checks as a kind of pattern matching on a giant sum type.
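    That “giant sum type” view is easy to sketch in C with a tagged union (the names here are just for illustration); the “dynamic type check” is nothing more than a switch on the tag:

    ```c
    #include <stdio.h>

    typedef enum { TAG_INT, TAG_DOUBLE, TAG_STRING } Tag;

    /* Every "dynamic" value is one case of a single big sum type. */
    typedef struct {
        Tag tag;
        union { int i; double d; const char *s; } as;
    } Value;

    static void print_value(Value v) {
        switch (v.tag) {  /* the "dynamic type check" */
        case TAG_INT:    printf("%d\n", v.as.i); break;
        case TAG_DOUBLE: printf("%f\n", v.as.d); break;
        case TAG_STRING: printf("%s\n", v.as.s); break;
        }
    }

    int main(void) {
        print_value((Value){ .tag = TAG_INT,    .as.i = 42 });
        print_value((Value){ .tag = TAG_STRING, .as.s = "hello" });
        return 0;
    }
    ```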

    In some sense values always have types because you could always classify them into types externally, and you could even consider a value to be a member of multiple types (programming languages with type systems often don’t allow this and force a unique type for every value). Because you can always classify values under a type, it feels kind of weird to refer to languages as being “untyped”, but it’s also kind of weird to refer to a language as “typed” when there isn’t really any meaningful typing information and there’s no type system checking the “types” of values. Types sort of always exist, but also sort of only exist when you actually make the distinctions and have something that you call a “type system”… In some sense the distinction between static and dynamic typing is an arbitrary implementation detail too (though, of course, it has impacts on the experience of programming, the language design makes a bit of a difference in terms of what’s decidable, and obviously the type system determines what programs you consider to be “valid”). But you can absolutely have a mix of static type checking and dynamic typing, for instance… It’s all a little more wishy-washy than people tend to think, in my opinion.


  • “1.99999…” is an integer, though! If you’re computing with arbitrary real numbers and serializing them to strings, how do you know to print “2” instead of “1.9999…”? This shouldn’t be decidable: naively, if you have a program that prints “1.” and then repeatedly runs a step of an arbitrary Turing machine, printing “9” after each step if the machine has not yet terminated and stopping otherwise, then determining whether the number being printed equals 2 would solve the halting problem (see the sketch below).
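    A runnable sketch of that argument in C. The real argument needs an arbitrary Turing machine, where halting is undecidable; `tm_step` here is a stand-in that halts after 10 steps just so the demo terminates:

    ```c
    #include <stdio.h>

    /* Stand-in for "run one step of some fixed Turing machine":
     * returns nonzero until the machine halts. Here it just halts
     * after 10 steps so the program actually finishes. */
    static int tm_step(void) {
        static int steps = 0;
        return ++steps <= 10;
    }

    int main(void) {
        printf("1.");
        while (tm_step())   /* one '9' per step while still running */
            printf("9");
        printf("\n");
        /* Prints 1.9999999999 here. If the machine never halted, the
         * 9s would go on forever and the printed value would equal 2,
         * so deciding "does the printed number equal 2?" would decide
         * whether the machine halts. */
        return 0;
    }
    ```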

    Arbitrary-precision real numbers are not represented by finite binary integers. Also, a right shift on a normal binary integer cannot tell you whether the number is even: a right shift is only division by 2 on even numbers; otherwise it’s division by 2 rounded down to the nearest integer. But if you have a binary integer and you want to know if it’s even, you can just check the least significant bit, as below.
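    Concretely, in C (parity lives in the least significant bit, not in what a shift produces):

    ```c
    #include <stdio.h>

    int main(void) {
        /* A right shift is floor division by 2, so it can't
         * distinguish parity: both 6 and 7 shift down to 3. */
        printf("%d %d\n", 6 >> 1, 7 >> 1);  /* 3 3 */

        /* The least significant bit is what encodes evenness. */
        printf("%d %d\n", 6 & 1, 7 & 1);    /* 0 1: even, odd */
        return 0;
    }
    ```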



  • You know, I was going to let this slide under the notion that we’re just ignoring the limited precision of floating point numbers… But then I thought about it, and it’s probably not right even if you were computing with real numbers! The decimal representation of a real number isn’t unique, so this could tell me that 2 (= 1.9999…) is odd. Maybe your string coercion is guaranteed to return the finite decimal representation when one exists, but I think that would be undecidable.



  • C++ is technically a completely different programming language from C, but they share a lot of similarities because C++ is sort of derived from C (and now they’ve both evolved somewhat separately). The main addition at the start was OOP being baked into C++. A typical C program is often a valid C++ program as well, but there are some subtle differences in a few areas that can cause problems (one classic example is sketched below). C++ has a lot of features compared to C: a more complex type system, a big templating system for compile-time computation, and a focus on adding low- or no-cost abstractions that make writing programs easier without incurring a high cost at run time… That said, many people do still prefer C, often for its simplicity in comparison.
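    One classic example of a subtle difference: C implicitly converts `void *` to other pointer types, while C++ does not, so this compiles as C but is rejected by a C++ compiler:

    ```c
    #include <stdlib.h>

    int main(void) {
        /* Fine in C: malloc returns void*, which converts implicitly
         * to int*. An error in C++, which requires an explicit cast
         * (or, more idiomatically, new/delete). */
        int *p = malloc(10 * sizeof *p);
        free(p);
        return 0;
    }
    ```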



  • I like Haskell, but the syntax is probably the worst part about it. The ability to define your own infix operators with arbitrary precedence and associativity is really cool and useful, but it can make code a complete mess to read, because then you have no idea how any of the operators combine. I vaguely like the syntax, there’s something kind of clean about it, but frankly if it were just a Lisp it would be so much easier for people to pick up (aside from the fact that nobody would, because it would look like Lisp).


  • Yeah. I think Forth is kind of just interesting for what it is, and it fits its niche well. If you’re looking into Forth you probably appreciate it for what it is, and it’s a super flexible language, so it can kind of be what you want it to be. It’s obviously not perfect, and it’s not the ideal fit for what most people want to do… but I guess people just don’t really expect it to be more than it is, and it’s a smaller community, so nobody is too vocal or angry about it. People will complain about other niche languages like Lisp, OCaml, Prolog, or Haskell all the time, but people don’t say much about Forth, and when somebody does talk about it, it’s pretty much all praise. The Forth people are just content, I guess!