“Telegram is not a private messenger. There’s nothing private about it. It’s the opposite. It’s a cloud messenger where every message you’ve ever sent or received is in plain text in a database that Telegram the organization controls and has access to.”
“It’s like a Russian oligarch starting an unencrypted version of WhatsApp, a pixel-for-pixel clone of WhatsApp. That should be kind of a difficult brand to operate. Somehow, they’ve done a really amazing job of convincing the whole world that this is an encrypted messaging app and that the founder is some kind of Russian dissident, even though he goes there once a month, the whole team lives in Russia, and their families are there.”
“What happened in France is they just chose not to respond to the subpoena. So that’s in violation of the law. And he gets arrested in France, right? And everyone’s like, oh, France. But I think the key point is they have the data, like they can respond to the subpoenas, whereas Signal, for instance, doesn’t have access to the data and couldn’t respond to that same request. To me it’s very obvious that Russia would’ve had a much less polite version of that conversation with Pavel Durov and the Telegram team before this moment.”


It’s also important to continue educating people about the fact that Signal is incredibly problematic as well, but not in the way most people think.
The issue with Signal is that your phone number is metadata. And people who think metadata is “just” data, or that cross-referencing is some kind of sci-fi nonsense, fundamentally misunderstand how modern surveillance works.
By requiring phone numbers, Signal, despite its good encryption, inherently builds a social graph. The server operators, or anyone who gets that data, can see a map of who is talking to whom. The content is secure, but the connections are not.
Being able to map out who talks to whom is incredibly valuable. A three-letter agency can take the map of connections and overlay it with all the other data they vacuum up from other sources, such as location data, purchase histories, social media activity. If you become a “person of interest” for any reason, they instantly have your entire social circle mapped out.
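To make the cross-referencing point concrete, here’s a toy sketch in Python. All names and datasets are made up; the point is only that “who talks to whom” metadata, joined against any other dataset, instantly exposes a target’s circle even when message contents stay encrypted.

```python
# Toy illustration (all data is invented): message contents are hidden,
# but the (sender, recipient) metadata alone builds a social graph.
from collections import defaultdict

# Hypothetical metadata a server operator could log.
message_log = [
    ("alice", "bob"), ("alice", "carol"), ("bob", "dave"),
    ("carol", "alice"), ("eve", "bob"),
]

# Hypothetical data vacuumed up from another source, e.g. location records.
last_seen_city = {"bob": "Berlin", "carol": "Paris", "dave": "Berlin"}

# Build an undirected social graph from metadata alone.
graph = defaultdict(set)
for sender, recipient in message_log:
    graph[sender].add(recipient)
    graph[recipient].add(sender)

# If "alice" becomes a person of interest, her whole circle falls out at once,
# and each contact gets enriched by cross-referencing the other dataset.
target = "alice"
circle = {contact: last_seen_city.get(contact, "unknown") for contact in graph[target]}
print(circle)  # alice's contacts, each tagged with location data
```

Note this needs nothing clever: a dictionary join is all the “overlay” takes, which is why the social graph is valuable even when the encryption is perfect.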
Worse, the act of seeking out encrypted communication is itself a red flag. It’s a perfect filter: “Show me everyone paranoid enough to use crypto.” You’re basically raising your hand.
So, in a twisted way, Signal being a tool for private conversations makes it a perfect machine for mapping associations and identifying targets. The fact that Signal is operated centrally with its server located in the US, and is developed by people with connections to US intelligence, while being constantly pushed as the best solution for private communication, should give everyone pause.
The kicker is that thanks to gag orders, companies are legally forbidden from telling you if the feds come knocking for this data. So even if Signal’s intentions are pure, we’d never know how the data it collects is being used. The potential for abuse is baked right into the phone-number requirement.
Opinion: I think painting Signal in such a negative light is more harmful in the practical sense. Fragmented messaging toward a public that does not care about many of these aspects just makes them a lot more hesitant to change, from my perspective.
We as a community should, in my opinion, pick a “good enough” solution for the majority of the people we interact with. That in itself is a market force to show interest and demand for private solutions. Most people I know don’t have the tools or knowledge or time to understand nuances and all they’ll hear are conflicting messages.
For us more technically inclined people: hell yeah, let’s figure out the ideal model and bring it up to maturity so others can join when it’s fleshed out. E.g. when Lemmy came to my attention during the Reddit third-party app fiasco, I was really confused about how to sign up and use it. And I’m no stranger to tech.
Edit: spelling
I’d probably suggest Delta Chat. It’s federated and has always-on encryption, but it’s so incredibly simple and easy to onboard and use, and doesn’t require a phone number or even an email. It also works on all platforms with a single app.
There are plenty of good enough options like SimpleX Chat out there that don’t have this problem. The whole argument that people should just ignore the obvious issue with Signal is frankly weird.
Accept defects != ignore
My original comment that you replied to was explaining the defects. People are free to decide whether they want to accept them or not. Your comment is saying that it’s harmful to discuss these defects which implies that we should just ignore them.
The link is not working. Would you be so kind as to provide the article link?
oh here https://www.washingtonpost.com/technology/2021/06/15/faq-data-subpoena-investigation/
Apparently they don’t store contact info.
https://signal.org/blog/looking-back-as-the-world-moves-forward/
The problem is that you just have to trust them, because only the people who actually operate the server know what they do or do not store. “Trust me bro” is not a viable security model. As a rule, you have to assume that any info an app collects, such as your phone number, can now be used in an adversarial fashion against you.
And that is the problem with anything you don’t write yourself. And for anything you do write yourself: Are you smarter than the three-letter agencies?
There are plenty of chat services that aren’t centralized and hosted in the USA.
Sure… and my point is that you have to trust those services that aren’t hosted in the USA. It’s a choice you have to make. I’m not judging either way, just pointing it out, because what I responded to in the comment I replied to was:
Which is true of open source unless you read the code and can verify nothing nefarious exists; which is true if you use a service in a country you trust; which is true no matter what you’re doing.
Not all entities are deserving of the same level of trust - some are more trustworthy than others - but you are still making a decision to trust someone unless you write the code yourself or verify the code yourself.[1]
And had the capability and time to do so ↩︎
Not at all. Not everyone needs to audit open source; only a few interested experts do. Most importantly, auditing is possible because it’s out in the open.
The “just trust me” model of Signal means it’s impossible to audit, unless they give us their centralized database and server code.
You don’t have to trust anybody when you run your own server, or you use a server that doesn’t collect information it has no business collecting.
You have to trust the people that wrote the code.
Again, you’re trusting the authors of the code.
Which is fine, but it’s a choice to trust them.
There’s a big difference between having confidence in open source code that has been audited by many people, and knowing for a fact that the service collects specific information. In the former case, you can never be absolutely sure that the code is not malicious so there is always a risk, but in the latter case you know for a fact that the service is collecting inappropriate information and you have to trust that people operating the service are not using it in adversarial ways. These two scenarios are in no way equivalent.
It’s a choice to trust the entire open source community around the project and all the security researchers who have been looking at the code.
Frankly, I have trouble believing that you don’t understand the difference here and are making your argument in good faith.
Let’s back up to what I replied to in the first place:
I even took the time to quote that, because it’s important.
Of course there are different levels of trust. But what you said is flatly wrong and misinformation, if you want to get technical about it. Arguing in bad faith? I beg your fucking pardon, friend.
Just because it’s less likely to find nefarious code in open source doesn’t mean it doesn’t exist. There have been multiple cases of it being found in open source code. Blindly trusting something because it’s open source, or because you host it on your own server, is a very, very false sense of security, especially in the context of the larger discussion, which came about in regard to what information is exposed by certain messaging clients.
It’s also a matter of the importance of what you’re doing.
I wrote a little CRUD app a while back to track me giving my cat medication. I sanitized inputs, but I left it open without a login on my server, just an obscure URL that didn’t get published anywhere. All you could do was click a button to indicate the cat had been medicated, or another button to delete the latest entry. That was plenty of security for that. If I was writing a banking app, I’d use a bit more.
So yes, in the same way as that, hosting something you use to chat with friends about whatever is one thing; trying to communicate secretly from a country where your comms might lead to being put to death is quite another. And in the latter case, it’s important to know that no matter what you use, unless you wrote it or read all the source code, you are trusting others with your life. Perhaps you feel comfortable doing that, but you should be aware of it.
So no, this is not a discussion in bad faith at all, it is valuable on multiple levels.
What’s important is that you’re quoting me out of context, and that makes all the difference. The actual statement you’re replying to is:
The fact that you proceed to quote me out of context and then accuse me of being wrong shows that you lack even a modicum of intellectual integrity. Then you proceed to make a straw man arguing against something I never claimed.
So yes, this is very clearly a discussion in bad faith, where you’re arguing against a straw man while ignoring what I actually wrote. It’s especially incredible since I even followed up with a more detailed explanation which you just ignored:
Do better.
And the client, too.
Precisely.
And it’s worth repeating here - the level of trust needed is affected by the nature of what you might lose if that trust is broken. For non-important things, trusting a third-party company is probably fine. If you’re in a country and being found out might mean you get put to death, though, the stakes are a bit higher.
No need for that when self-hosted open source projects exist.
But again, you either read the source to confirm there’s nothing nefarious, or… you trust the programmers.
Which is not a problem, but it is a choice to trust. All I’m pointing out. :)
Well, yeah, everything is a choice when trust is the matter, but there is a difference between choosing a community project that can be audited by different transparent parties and choosing a private company on their own servers (even for source-available projects).
There’s no such social graph to speak of. Signal does not know who is speaking to whom.
Three-letter agencies have served them legal subpoenas many many times and they never turn over anything more than the above information.
Filter for…what, exactly? The hundreds of millions of people who value private and secure communications?
We do, because they publish them publicly.
The only people who know this are people operating the server. Period.
See the link I provided above.
Yup, that’s precisely what it’s a filter for.
Trust me bro is not a viable model for anybody who actually gives a shit about their privacy.
The reality of the situation is that Signal asks users for information it has no business collecting during the sign up process, and this information can be used in adversarial ways against the users. People using Signal are making a faith based judgment to trust the operators of this server.
We all know this, for reasons I’ve already stated.
Your link is broken.
100M people is not a filter…
No one said anything about that? That is not the model.
The business is connecting users. It’s one of the reasons it is the most viable private and secure chat platform. It’s why I have a dozen connections on Signal and literally 0 on every other platform. Because you actually know who’s using it. You can have the most private and secure messaging system in the world but if you can’t use it to actually chat with anyone, then what good is it?
No, we don’t all know this. What we actually know is that people like you say this and expect the rest of us to trust you blindly, which is itself concerning.
Your browser plugins are broken; the link is fine. That said, here’s the non-archived version: https://www.washingtonpost.com/technology/2021/06/15/faq-data-subpoena-investigation/
Given world population and modern data analysis capabilities it absolutely is.
That’s literally the model. Signal asks you for your phone number when you register, what happens with that information after that is only known to people operating the server. Let me know what part of that you’re still struggling to understand.
That word salad has fuck all to do with the point I made, which once again, is that you have to trust people who operate the server in how they handle this information.
Ah yes, because there’s absolutely no conceivable way to verify whom you’re connecting with aside from sharing your phone number with an American company. You couldn’t possibly use any out of band channel to verify who the person you’re communicating with is.
You have no source for that other than Signal’s “just trust us” claims.
Removed by mod
Best alternative?
It really depends on your needs and what the people you communicate with are willing to use. A few platforms are notable, in no particular order:
SimpleX Chat is probably the gold standard right now. It uses absolutely no user IDs such as phone numbers, no usernames, no random strings of text. Instead, it creates unique, pairwise decentralized message queues for every single contact you have. Because there is no global identity, there is no metadata connecting your conversations together.
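The pairwise-queue idea can be sketched in a few lines of Python. This is a toy model, not the actual SimpleX protocol: the point is just that when every contact relationship gets its own random queue ID, the server sees a pile of unrelated queues with no global identity tying them together.

```python
# Toy sketch (not the real SimpleX protocol): each contact relationship gets
# its own random queue ID, so no identifier links conversations together.
import secrets

class PairwiseMessaging:
    def __init__(self):
        self.queues = {}  # queue_id -> list of messages

    def new_contact(self):
        """Create a fresh, unlinkable queue for one contact relationship."""
        queue_id = secrets.token_hex(16)  # random ID, reused nowhere else
        self.queues[queue_id] = []
        return queue_id

    def send(self, queue_id, message):
        self.queues[queue_id].append(message)

# One user, two contacts: from the server's perspective these are two
# unrelated queues, with no phone number or username connecting them.
me = PairwiseMessaging()
q_bob = me.new_contact()
q_carol = me.new_contact()
me.send(q_bob, "hi bob")
me.send(q_carol, "hi carol")
assert q_bob != q_carol  # nothing links the two conversations
```

Contrast with Signal, where one phone number sits at the center of every queue you own.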
Session is a popular Signal alternative. It doesn’t require a phone number and routes your messages through an onion-routed decentralized network that’s similar to Tor. Since your IP address is hidden and messages are bounced through multiple nodes, no single server ever knows who is talking to whom, stripping away metadata.
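The onion-routing idea behind that is worth a quick toy sketch (this is an illustration of the general Tor-style technique, not Session’s actual implementation, and base64 stands in for real per-node encryption): the message is wrapped in one layer per relay, so each node can remove only its own layer and learns only the next hop.

```python
# Toy onion routing: one "encryption" layer per node (base64 stands in for
# real crypto). Each node peels exactly its own layer, so no single node
# sees both who sent the message and who finally receives it.
import base64

def wrap(message: str, route: list[str]) -> str:
    """Wrap a message in one layer per node, innermost layer first."""
    payload = message
    for node in reversed(route):
        payload = base64.b64encode(f"{node}|{payload}".encode()).decode()
    return payload

def peel(payload: str, node: str) -> str:
    """A node removes exactly its own layer and forwards the rest."""
    name, inner = base64.b64decode(payload).decode().split("|", 1)
    assert name == node, "layer is not addressed to this node"
    return inner

onion = wrap("hello", ["node_a", "node_b", "node_c"])
for hop in ["node_a", "node_b", "node_c"]:
    onion = peel(onion, hop)  # each hop sees only its own layer
print(onion)  # -> hello
```

In the real thing each layer is public-key encrypted to one relay, which is what strips the who-talks-to-whom metadata the thread is worried about.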
Jami is a completely decentralized, open-source platform. It uses a distributed hash table (DHT) to connect users directly to one another without a central server. Notably, it supports high-quality voice and video calls.
Session is a security downgrade. It doesn’t support forward secrecy, which is hella important.
Session actually does implement a form of forward secrecy through the Session Protocol. https://getsession.org/blog/session-protocol-v2
From the blog post you linked, it seems that forward secrecy is still in development.
I still wouldn’t use Session, for the reasons stated in these blog posts by Soatok (a cryptographer). Even if they fix(ed) these problems, I have no trust in their security implementations. Why use Session instead of something like Briar?
https://soatok.blog/2025/01/14/dont-use-session-signal-fork/ https://soatok.blog/2025/01/20/session-round-2/
I’m not advocating for using Session specifically, I just listed it as a viable alternative to Signal. Given that it’s forked from Signal presumably it’s an easier switch for people who like the general mechanics of Signal and its encryption system.
Understood.
I really want SimpleX Chat to evolve and get more features. If they ever add a lot of mod tools and the possibility to make giant servers with thousands of chatrooms, like Discord, I could see it having mass appeal due to the ease of “signup”.
yeah it definitely has some promise
heard SimpleX is really good; the only thing that bothers me is their VC funding model. It makes me feel a bit suspicious.
Yeah, I’m leery about anything where VCs are involved as well, for obvious reasons. The tech itself does seem solid though, and it is open source. If it does start moving in a sketchy direction, at least it could be forked at that point.
I like your analysis, and would love your thoughts on Matrix (assuming you have any, ofc).
People keep finding significant vulnerabilities in its cryptography, and the Matrix team tries to deflect or create straw men for why it isn’t actually a vuln. Soatok found a vulnerability in 2024 just by browsing the source code for a tiny bit of time, and again just two weeks ago after looking for a couple of hours. In both cases, Matrix responded to his vuln report with hostility, saying it wasn’t actually a vulnerability. He is sitting on another vulnerability.
Having a cleartext mode is a security downgrade, and no secure messenger should support cleartext. It only barely got functional forward secrecy recently. VoIP in most Matrix clients (and servers) still uses the Jitsi backend, which isn’t E2EE, even with the release of the newer (secure) Element Call protocol. Matrix leaks tons of metadata, such as usernames, room names, emoji reactions, and generated URL previews. Rooms aren’t encrypted by default. It’s also a UX nightmare, and oftentimes you can’t decrypt your messages.
Matrix is not secure. You’d be better off with XMPP and OMEMO, which has its own problems and isn’t secure either. Still better than Matrix.
It’s better than Signal since you don’t have to disclose any personal info, but people have pointed out some issues with federation in it. Again, it’s one of those things that may or may not matter based on your use case.
That link seems dated (Nov. 2024). If anyone finds a more current critique, please send it. I also get auto-kicked from the HLC SimpleX group, so I’m not sure what to think of them, but commando’s Matrix server was amazing before it was abandoned.