• 0 Posts
  • 17 Comments
Joined 2 years ago
Cake day: June 18th, 2023

  • While C is certainly better for some problems in my experience, it too is very hard to use in large projects with a mix of developers, and it is unsuitable for most higher level applications in most companies.

    I think C still has its place in the world, though mostly confined to low-level embedded work, kernel space, and malware. I do believe that the market segment that used to rely on C++ is today better served by either Go or Rust, depending on the project.

    That said, while I LOVE working with Rust, it suffers from many of the same issues I mentioned for C++ in my comment above when working in a mixed skillset team.


  • wim@lemmy.sdf.org to Programmer Humor@lemmy.ml · *Permanently Deleted* · edited 1 year ago

    Everything is fine within the scope of a college course or project.

    Where C++ breaks down is in large, complicated projects where you collaborate with other developers over multiple years.

    I worked in C++ for almost a decade, and while there were a few good projects I encountered, most suffered from one or more of the following problems:

    • C++ has so many parts that everyone picks a subset they think is “good”, but no one seems to fully agree on what that subset is.
    • A side effect of the many ways C++ lets you compose or abstract your project is that it allows developers to be “clever”. This often results in code that is hard to maintain or understand, especially for other developers.
    • Good C++ is very hard. Not everyone is a C++ veteran who has read dozens of books or has a robust grasp of all its quirks and pitfalls, and those people are still assigned to your project and contribute to it. I was certainly never an expert, despite a lot of time and effort spent learning and using C++.


  • Agreed, but for many services 2 or 3 nines is acceptable.

    For the cloud storage system I worked on it wasn’t, and that had different setups for different customers, ranging from a simple 3-node system (the smallest setup, mostly for customers trialing the solution) to a 3-geo setup with at least 9 nodes across 3 different datacenters.

    For the financial system, we run a live/live/live setup, with a cluster at each of 3 different cloud operators, and the client is expected to know all of them and do failover. That obviously requires a little more complexity on the client side, but in many cases the same developers or organisations control both ends anyway.

    Netflix is obviously at another scale; I can’t comment on their needs or how their solution looks, but I think it’s fair to say they are an exceptional case.


  • Sorry, yes, that was durability. I got it mixed up in my head. Availability had lower targets.

    But I stand by the gist of my argument - you can achieve a lot with a live/live system, or a 3 node system with a master election, or…

    High availability doesn’t have to equate to high cost or complexity, if you take it into account when designing the system.



  • I used to work on an on-premises object storage system where we required double digits of “nines” availability. High availability is not rocket science. Most scenarios are covered by having 2 or 3 machines.

    I’d also wager that using the cloud properly is a different skillset than properly managing or upgrading a Linux system, not necessarily a cheaper or better one from a company point of view.


  • Got to agree with @Zushii@feddit.de here, although it depends on the scope of your service or project.

    Cloud services are good at getting you up and running quickly, but they are very, very expensive to scale up.

    I work for a financial services company, and we are paying 7-digit monthly AWS bills for an amount of work that could realistically be done with one really big dedicated server. And now that some of our customers require us to support multiple cloud providers, we’ve spent a TON of effort trying to untangle ourselves from SQS/SNS and other AWS-specific technologies.

    Clouds like to tell you:

    • Using the cloud is cheaper than running your own server
    • Using cloud services requires less manpower / labour to maintain and manage
    • It’s easier to get up and running and scale up later using cloud services

    The last item is true, but the first two only hold while you are running a small service. Scaling up on a cloud is not cost effective, and maintaining a complicated cloud architecture can be FAR more work than managing a similar centralized architecture.