If you model and infer some aspect of the user that is considered personal (e.g., de-anonymization) or sensitive (e.g., inferring sexuality) by means of an inference system, then you fall under the GDPR. Further use of these inferred data down the pipeline can be construed as unethical. If the system's operators want to be transparent about it, they have to open-source their user-modeling and decision-making system.
Fancier algorithms are not bad per se. They can be ultra-productive for many purposes. In fact, we take no issue with fancy algorithms when they are published as software libraries. But then only specially trained people can reap their fruits, and in practice those people work for Big Tech. Now, if we had user interfaces that let the user control several free parameters of the algorithms and experience different feeds, that would be kinda nice. The problem boils down to these areas:
Political interference and the proliferation of fascist “ideas” become possible only when all of the above are in play. If you take all this destructive shit away, software that lets you explore vast amounts of data with cool algorithms through a user-friendly interface would not be bad in itself.
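The kind of user-steerable feed described above is easy to imagine concretely. Here is a minimal sketch (all names, signals, and weights are hypothetical, invented for illustration): the ranking function is ordinary code, and the “free parameters” are just weights the user sets instead of the platform.

```python
from dataclasses import dataclass

@dataclass
class Post:
    author: str
    age_hours: float      # how old the post is
    likes: int            # engagement signal
    from_followed: bool   # whether the user follows the author

def score(post: Post, w_recency: float, w_engagement: float, w_social: float) -> float:
    """Combine signals with weights the *user* chooses, not the platform."""
    recency = 1.0 / (1.0 + post.age_hours)        # newer posts score higher
    engagement = post.likes / (1.0 + post.likes)  # saturating popularity signal
    social = 1.0 if post.from_followed else 0.0
    return w_recency * recency + w_engagement * engagement + w_social * social

def rank_feed(posts, w_recency=1.0, w_engagement=1.0, w_social=1.0):
    return sorted(posts,
                  key=lambda p: score(p, w_recency, w_engagement, w_social),
                  reverse=True)

posts = [
    Post("a", age_hours=1, likes=2, from_followed=True),
    Post("b", age_hours=24, likes=500, from_followed=False),
]

# "Chronological-ish" feed: the user turns engagement off entirely.
chrono = rank_feed(posts, w_recency=1.0, w_engagement=0.0, w_social=0.5)
# "Viral" feed: same data, different knobs.
viral = rank_feed(posts, w_recency=0.1, w_engagement=2.0, w_social=0.0)
```

The same data yields different feeds depending on the knobs: in the sketch, the chronological setting puts the fresh post from a followed author first, while the viral setting surfaces the heavily liked one. Nothing here requires a PhD; what is missing on centralized platforms is the interface exposing those knobs, not the technique.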
But you see, that is why we say “the medium is the message” and that “television is not a neutral technology”. As a media system, television is constructed so that a few corporations can address the masses, not the other way round, and not so that people can interact with their neighbors. For a brief point in time, the internet promised to subvert that, until centralized social media brought back the exertion of control over the messaging by a few corporations. The current alternative is the Fediverse and P2P networks. This is my analysis.