Oof. So not only are they giving their remaining managers more reports, but those managers will be expected to do lots of other, non-management work.
Sure, nothing can go wrong there... Even if they didn't have non-managerial work to do, 15+ direct reports is just too many. They're not going to get to spend enough time meeting each report's needs, not a chance.
I think as layoffs emails go, it's a pretty good one (as the current top comment points out[0]), but boy, I would not want to be working at a company like what Coinbase is turning into. Non-technical teams shipping code to prod? No thanks. "AI-native pods"? No thanks. I do like the idea of one-person teams; I was at my most productive when I was in that kind of role (though I'm not sure my experience generalizes). I get that companies are still struggling to figure out how to adapt to LLMs, but... damn.
Pretty solid severance package for the folks being laid off, though.
I do think the most efficient form of team is a "cell" of three people. One person alone is a little unstable.
If you look at Coinbase in 2020 they had roughly 1,200 employees. By 2022 they had roughly 4,500 employees.
They over-hired and now they're paring back; that's all this is.
Oof. That smacks of hubris and valley-buzzwordism.
> Leaders will own much more, with as many as 15+ direct reports.
> Every leader at Coinbase must also be a strong and active individual contributor.
So, a manager who's managing 15 people AND expected to ship -- that sounds awful for both sides.
As someone who lived through multiple rounds of layoffs at big tech companies this seemed quite generous.
No, you didn't. You watched engineers use AI to ship in days something that looks like what used to take a team weeks. After enough rounds of feature evolution, you'll realise that what they actually shipped isn't at all the same. Anthropic's C compiler, which also seemed like a good start that would have taken people much longer to deliver, ended up being impossible to turn into something actually workable.
In a year or so, software developed by "AI-native talent who can manage fleets of agents to drive outsized impact" - which is another way of saying people who ship code they don't understand and therefore haven't fixed the architectural mistakes the agents make - will become impossible to evolve, and then things will get very interesting.
AI can help software developers in many ways, but not like that.
As a security engineer, this statement fills me with dread.
I was shocked at how easy it was to train and develop a model that can replace senior leadership in a company.
The CEO was the easiest. I simply loaded the model with as much corporate jargon, double talk and the ability to talk down to people. The model nearly wrote itself.
Then, by simply ingesting the Wall Street Journal, Barron's, the Financial Times, and SEC 10-K and annual reports, I was able to compile the perfect CFO. It was able to spit out regulatory reports and answer questions on investor calls.
Strangely, the one component of the model I had to write in-house was the ability to give up part of the bonus to keep key people employed. It seems that in all of those financial reports, there were no examples of anyone the model could leverage.
> Non-technical teams are now shipping production code
if you vibe code financial systems this cannot mean anything good for your business
Can anyone share how and when they see the market getting into better shape?
Specifically, I'm curious how we would be working with AIs even if the market does improve.
How long will it take for people to realise that they are playing "pass the parcel" with a ticking explosive?
This reads like typical MBA-efficiency-idiocy taken to the extreme. Clearly this guy is so deeply isolated from the actual work that he cannot even begin to comprehend just how utterly stupid this idea is. It's one thing to push for 100x engineering "output" with "AI", but something completely different to expect a single person to be 3-4 persons in one. Pure schizophrenia - but at least companies like Coinbase which adopt the AI-first illusion will burn themselves out faster and leave the room for something new and genuinely innovative.
If you're a leader and you've said that your company is too big and has to downsize by 10+%, then you're the problem.
Firstly, the business needs to have active business and new initiatives. If you are not supporting that, you've failed.
If you're so inefficient that you need that extra 14%, you made that mistake.
If you "overhired" and didn't find a way to use that extra capacity to find the business.. you are the problem.
If you say that AI has changed your business, then 14% more people means 14% times the AI lift in extra capacity to accomplish greater things.
It's not the talent, and it's not the talent's fault for your issues. A lot of people assume that layoffs mean the removal of bad performers. The reality is that they don't.
Cheap money went away which caused companies to start asking hard questions about productivity and how much those dedicated managers were contributing.
This is going to end poorly for them. The only good managers I've had over around 20 years in the industry were 100% people managers and had no IC type of role expectations.
I've personally walked away from multiple manager role interview loops when I ask about the split only to find that they expected managers to also take on partial roles with IC engineering work. I know I can't be effective in either when having to juggle two entirely different hats, and in my anecdotal experience I've never seen anyone else do it well either.
Is this code for "we're firing all the old people"? As I understand it, I can say I'll only hire proficient English speakers (a "bona fide occupational requirement"), but I can't say I'll only hire native speakers, as that would discriminate against various protected groups. This seems like the same thing—proficiency may be a bona fide requirement, but expecting they learned this year's workflow first is age discrimination.
I don't expect ethical conduct from crypto companies and will not be sad if they are sued into oblivion.
With the number of tech leaders blabbering about this, I've come to the conclusion that the profession of the future is going to be Security Engineer.
Since roughly 2018 I reckon, at least.
Is Brian here? Can he speak more to this? What exactly are non-technical folks shipping to production?
I've got no position in Coinbase, but is that a wise thing to say as a public company? I'd be alarmed if I were a shareholder.
Experimenting or cost-cutting? Are these one-person "teams" going to be paid more for having multi-domain roles, regardless of how fast AI can churn out pseudo-MVPs?
We're going to see this become a trend beyond Coinbase, IMO. The idea that companies just want employees to be more productive is a farce. The C-suite would prefer to make no profit, have few to no employees, and get personally richer in the process.
However, I understand the rationale, as the money was not flowing in enough.
---- edit ----
When reading about "AI-native talent who can manage fleets of agents", I want to shout: hire me, and I will tell you why this won't work.
As someone who did have 15 direct reports for a while, it’s a joke.
You basically are their manager in name only. Your time is so split you can’t give any one direct report the attention they deserve. Quarterly and annual reviews are a farce because you genuinely don’t really know how people are doing, except for the signals you can pick up when you’re not in a meeting with one of your 15 reports.
Just goes to show how far up their own asses some CEOs are. Meanwhile real people just want a boss who cares. Hope Brian feels happier with an extra billion dollars or whatever this year!
* explains the reasons (financials, AI enablement)
* talks about what folks who are leaving get in detail (first) and thanks them
* talks to the folks who are staying
Layoffs are hard, no doubt, and I am not sure he's making the right choice. I see plenty of doubt about some of the actions in other comments that echoes mine. I certainly wouldn't want to have 15 direct reports and also ship production code regularly. But as CEO, it's his job to make these kinds of choices.
The proof is in the pudding as they say. We'll see how Coinbase does with this new orientation in the next year or so and that will determine if this was a wise or foolish move. Is there a flood of talent leaving? Major breaches? Business as usual with better than expected profits?
Time will tell.
While AI is likely a productivity boost, the underlying reason is not AI.
What happens when this person inevitably leaves and they have no one who knows even a little bit about the process or tools used?
Boy that's scary for a company that's effectively fintech...
Crypto was a big hype of last decade.
Every year that goes by there are fewer people interested in an old hype, and therefore a smaller and smaller market for Coinbase.
Coinbase is on a path to death. It might take 20 years, but the decline has already begun.
What I'm really intrigued by is the non technical staff deploying code to production. Now that's a gamble I want to see in the crypto space.
What's the theory on this? It seems to be common conclusion, but I don't understand why AI changes the situation here.
I understand that AI means you can do more with fewer people. Fewer people means less coordination overhead and fewer managers and fewer layers. What I don't get is why you want your managers to be doing IC work more so with AI than before. I don't see why anything changes about needing roughly 1 first line manager for every 6-8 people, or why it would be more beneficial now that the managers have production programming responsibilities.
Both before and after AI it's important that managers have real technical knowledge of the codebase. Having managers do actual production IC work in my experience has been a bad allocation of resources, though, and I don't see why AI changes that.
(a) Someone has to do the management tasks. Why do we think that isn't a full time job anymore?
(b) When managers do production IC work, in my experience it increases the load on ICs in review, because the manager one would _expect_ to not be _as_ expert as pure ICs on the codebase, and yet they are perceived as "senior". ICs then have overhead in having to manage that power imbalance in review. I have known a few extremely productive manager/ICs… but the effect on their teams was not super great. It made the manager into something of a micromanager and the actual ICs lacked autonomy.
Geeks who didn't even stand near professional sports should really shut up about anything sport related, lol.
I would really like to see a professional, established coach running around with young prodigies at the peak of their biology.
> - AI-native pods: We’ll be concentrating around AI-native talent who can manage fleets of agents to drive outsized impact. We’ll also be experimenting with reduced pod sizes, including “one person teams” with engineers, designers, and product managers all in one role.
And AI clowns will cheer and applaud this, not seeing that they're now doing the job of 5(!) people with the same salary. Why is nobody talking about this?
Also, I find it really bizarre that those neo-feudal lords see their companies as just livestock to count. They don't even count people; they just see them as numbers to reduce/scale up. Modern tsardom, but instead of being tied by official decree you're now tied by your lifestyle and family.
"Some of you may die, but that is a sacrifice I am willing to make"
It wasn't that long ago that, in SV, the dominant values were humility, kindness, and openness to all views (even if behind the scenes there was the ruthlessness demanded by capitalism). The last few years have seen this value system corrode, and it seems like it's hurting everyone: from the tech workers constantly churning for no good reason, to the tech executives sequestered in their own thought bubbles until reality finally hits them (usually too late to change).
However, do we really need them to AI-wash the fact that as a lot of companies, this company over-hired during ZIRP? Do we really need them to AI-wash the fact that the crypto hype is gone, therefore their business is smaller? “Company as intelligence” and “AI productivity” are just buzzwords so their stock price doesn’t suffer.
+ 2021 | 3,730 employees
+ 2022 | 4,706 employees
+ 2023 | 3,416 employees
+ 2024 | 3,772 employees
+ 2025 | 4,951 employees
+ 2026 | 4,250*
*Estimated following May 2026 layoffs.
So the reduction gets them closer, but still higher than where they were in 2024. Given the fact that the crypto business doesn't seem to be growing much over the last few years it can be argued that they over hired in 2025 and going back to 2024 numbers just makes sense. And as others have said in the comments, they haven't turned a profit so likely this makes business sense and the AI shine is trying to make the news less ugly for investors.
Today, not a single mention in that email.
I can't help but feel that there is a superficial chasing of trends at play here (adopting the same playbook that Block used earlier).
Question is, where will we all be in 3 years from now?
Popular conception of what a manager is is wildly unambitious.
Weekly 1:1 is performative and useless. It's not what makes a good manager. What makes a good manager is:
* Having excellent domain knowledge and judgement
* Having the respect of the team, to settle disputes
* Solving problems when needed
* Hiring and retaining an excellent team
* Picking the right things to work on
... etc. If a manager is doing these things well, I don't need a standing meeting at all. Or we can meet quarterly to check in.
Email is a thing.
Before that, the manager was essentially the best engineer on the team (or the one that wanted to get promoted). Being a manager meant you were respected directly for your skills, and you were expected to still be a full-time contributor. Being a director meant you were one of the best ICs out there. Now, being a manager or a director means you sometimes did an MBA in an unrelated field. This brought a ton of politics and nonsense meetings (because the most visible output for managers is more meetings, where they can posture).
Let's go back to what it used to be. We don't need weekly 1:1s to check on feelings. We don't need a full layer of managers syncing with each others and taking political decisions that will mainly advance them. We don't need another layer of gatekeepers.
I'm not saying all managers are bad, but this charade has been pushed a bit too far.
What needs? If you squeeze people hard enough there are no needs anymore, only responsibilities and urgent+important backlogs that have no bottom.
Welcome to 2026.
It is pretty stupid and pathetic tbh, but easier than making an effective organization.
It's because crypto goes in a cycle and now it's down. You should expect layoffs from them again in 2029/30.
Notable is what they're not doing: annual reviews. This duty is now handled by the all-seeing "intelligence" machine that can evaluate employees in real-time.
Freedom for who, exactly? Coinbase's executives, I suppose.
> COBRA is the Consolidated Omnibus Budget Reconciliation Act. It gives workers and their families who lose their health benefits the right to choose to continue group health benefits provided by their group health plan for limited periods of time under certain circumstances such as voluntary or involuntary job loss, reduction in the hours worked, transition between jobs, death, divorce, and other life events. [0]
Right?? I saw that too. My first thought is that any good managers left will be racing for the exit. You can't fake "managing 15 people" with AI. You have to actually have the 1:1s and do the performance calibrations. How are they going to have time left for IC work??
I got laid off 3 years ago and got a mere 2 weeks + 1 month of COBRA. It was a tech company, but not a big one.
We do this every day. I'm sorry to say, we are indeed shipping in days what used to take weeks.
Like the guy who "just gets math" is often NOT a good teacher.
This resonates but I can't put my finger on why for the founders of AirBnB. Do you have examples? Obviously true for Elon.
It seems like the previous generation of founders were always paranoid that their companies could/would fail in an instant, which led to the management styles of Andy Grove, Gates, Jobs etc (and I'd argue Larry and Sergey as well). That mindset meant they knew they couldn't afford to be surrounded by yes man and their egos were secure enough when challenged by their underlings.
Despite the intensity of all three, you hear stories of how Gates only respected people who could credibly argue back against him, Jobs empowered his team, etc. The current generation of founders seem to believe their own mythical BS to such an extent that anyone who disagrees with them is culled from the organization, resulting in a natural selection effect of only the yes-men survive.
F these leaders.
Companies above a certain scale- let's use Dunbar's Number as a good threshold- need full time managers to handle the necessary information flow through the company. Middle-manager is actually something that AI can't do yet, because their main job is to figure out what things everyone else around them needs to know (inside and outside their team), which requires a theory of mind that current LLM's just don't have. Is this policy change worth telling your team about? Is this feature creep worth telling other teams about? That is the decision that managers have to make dozens of times a day, and it requires a model of what various people know, to know whether this is important to them or not.
In my BU there were directors with 2 direct reports. Even at the next level up, the number of non-IC directs is only high single digits. There are many managers who were already engaging technically with the product (not PRs but playing an active role in planning work) and they have no idea what directors are actually doing...aside from attending meetings with other directors.
Almost all decision-making capacity has been moved outside of teams which has resulted in almost no actual work (because everything needs to be cleared by someone with no engagement with product) and people leaving (because promo decisions are made by people who have no idea what anyone is contributing, the worst ICs are the only ones they can retain ofc).
It is a terrible environment to work in.
I don't necessarily think the manager should be best IC but definitely someone who is genuinely talented with sufficient scope and responsibility to make good decisions/add value for ICs. There are way too many passengers today.
Also, this is true of higher-level ICs. At my work, they have no real engagement with product so have influence through ambiguous statements about the general direction that get passed around like the word of God. None of these decisions, so far, have been helpful or relevant.
It seems quite counterproductive to assume such a system would scale to everyone else, or that everyone else could possibly implement this. This is cowboy levels of human resource management, not careful engineering.
In most sports you've retired by the age of 40 and most coaches are older than that. I would say that's the reason it's common in sports, but that's the exception not the rule
I mean, I want to work... and I absolutely despise the push to keep dev wages down, even at higher levels. But the reality is, at least from my own experience, that most software orgs and projects are actually over-staffed and would operate better with fewer, more experienced staff. Rather than filling hundreds of butts in seats.
There are strengths, but if you think it's about writing a stream of code and just using it as-is, I would LOVE to compete against you.
Player-coach used to be a thing in professional sports a long, long time ago. There's a reason you don't have it anymore. A coach can't be expected to take the long-term view while also expecting to contribute. Most examples were players near the end of their career and they didn't tend to do very well.
The only place you see it is in fun adult leagues. Perhaps the message then is that Coinbase wants to be less professional and more amateur-like?
I'm reminded of when I went out for drinks with a startup-consultant friend and she mentioned that one founder she spoke with referred to his staff as "biological units" when addressing use of proceeds to hire additional staff.
The crypto market winter that started in Q4 last year led to Coinbase's ~worst quarter ever ($667M loss). Crypto has not recovered. Coinbase has done nothing to stem the outflows. That same quarter HOOD showed a net profit of $605M; and showed a $346M profit last week. COIN and HOOD are two very similar companies.
COIN's earnings are in two days. They preceded the earnings call with layoffs, which is always a bad sign. And HOOD's net income has dropped by like 40%, though they're still at least profitable. You should be prepared for COIN to announce a similar drop; except COIN wasn't even profitable before. It's going to be a bloodbath.
> Also, I find it really bizarre that those neo-feudal lords see their companies as just livestock to count. They don't even count people; they just see them as numbers to reduce/scale up. Modern tsardom, but instead of being tied by official decree you're now tied by your lifestyle and family.
People don't work somewhere like Coinbase if they're concerned about morality or mitigating the harms done to society.
This is a really strange nit. You are aware it's an analogy about skill and role. To reduce this to being about biology and the impacts of senescence on ability is weird, and doesn't really apply here.
A good manager is worth their weight in gold even if they produce zero technical output. I've had managers who were absolutely instrumental in my career as a programmer, and they did close to zero IC work.
>>Before that the manager was essentially the best engineer in the team
Yes, and it was absolutely awful. Keep the best engineer in the team as the best engineer on the team. Call them experts, distinguished, senior++, whatever, don't make them managers.
>>Let's go back to what it used to be
God, please don't.
>>We don't need weekly 1:1s to check on feelings.
Speak for yourself please. I find weekly 1:1 extremely important for the entire team, especially in fully remote roles.
Most devs aren’t working on cutting-edge, low-level, mission-critical systems, and AI is great for that kind of everyday work. Every company I personally know has been fast-shipping features that are being used daily by millions of people for the past 7 months.
We have the same thing on my team, and we also understand the limitations of AI-generated code. If you’re more or less experienced, you can easily see the “good” and “bad” sides of it. So you kinda plan it out in a way that you can “evolve AI-generated software”. I wouldn’t have said the same thing in January 2025, but times are much different now. Things are already working.
I have literally built and shipped multiple things that would have taken me many many months to do and I’ve done it in under a week.
Many of these are LLM-heavy features where the LLM can literally self-evaluate and self-optimize. I start with a general feature; it will generate adverse, synthetic data, build the feature, optimize it, then figure out new places to improve. A year ago this would have taken an entire team months; now it’s 2 or 3 days of work.
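The generate/evaluate/improve loop described above can be sketched roughly as below. This is purely illustrative: the LLM steps are replaced with deterministic stubs so the sketch runs on its own, and every function name here is hypothetical, not a real API.

```python
def generate_adversarial_cases(seeds):
    # Stand-in for LLM-generated adverse/synthetic data: append edge
    # cases a naive implementation tends to miss.
    return seeds + ["", "   ", "\t\n"]

def evaluate(feature, cases, reference):
    # Fraction of cases where the feature agrees with the desired
    # behaviour (here: "detect non-blank text").
    return sum(1 for c in cases if feature(c) == reference(c)) / len(cases)

def improve(_feature):
    # Stand-in for the LLM's self-optimisation step: return a hardened
    # implementation informed by the observed failures.
    return lambda text: bool(text.strip())

reference = lambda text: bool(text.strip())

# Round 1: a naive first cut fails on whitespace-only input.
feature = lambda text: len(text) > 0
cases = generate_adversarial_cases(["hello", "world"])
score = evaluate(feature, cases, reference)        # 3 of 5 cases pass

# Round 2: feed the failures back and re-evaluate.
if score < 1.0:
    feature = improve(feature)
final_score = evaluate(feature, cases, reference)  # all cases pass
```

The point of the loop is that the evaluation data itself is generated adversarially, so each round surfaces failures the previous implementation would never have been tested against.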
Look at the best models from Spring 2025, and compare with now (and similarly for Springs 2024 and 2025). Armstrong and lots of others are betting that this trend will continue, and if it does, the LLMs will ship code the LLMs understand, and whether any human specifically understands any particular part will mostly not matter.
https://en.wikipedia.org/wiki/Unionization_in_the_tech_secto...
They'll switch to async communications for everything, and ideally have a bot that answers "mm-hmm" like a psychologist in his chair.
More seriously, the solution is to move to a flatter org, but that's a drastic change with unknown consequences for most companies.
This sounds suboptimal to me - probably the kind of employee I would avoid for as long as possible.
But also the type of investor who is into crypto in the first place will probably love this
Crypto bros :handshake: AI bros
> You basically are their manager in name only. Your time is so split you can’t give any one direct report the attention they deserve. Quarterly and annual reviews are a farce because you genuinely don’t really know how people are doing, except for the signals you can pick up when you’re not in a meeting with one of your 15 reports.
Don't forget "No pure managers". So, it's 15+ direct reports while also being "a strong and active individual contributor".
Is there a flood of talent leaving after this one? Major breaches? Only time will tell.
Buckle up, and don’t forget your pudding!
Hope they're well paid!
There's a reason for this change. As players became elite and specialized by position, the budget for specialization expanded. At the top, teams could afford a distinct role for coaching focus. Since the stakes are really high (the difference of 1-3 points is measured in dozens of millions of dollars of impact due to relegation, a concept missing from most US elite sports), the drive to specialize is sky-high at elite levels.
Thus, soccer player-coaches have mostly disappeared at the elite level. But the role is alive and well in the semipro tier.
In roles where there's no binary, extreme outcome from specialization, like in semi-pro soccer or an ENG role at a random company, it is only natural to have someone wear multiple hats and not specialize.
I do systems programming. Before AI, feature development roughly went: design, implement, test, review, with some back edges and a lot of time spent in test and review.
AI has made the implementation part much faster, at the cost of even more time spent testing and reviewing, though still an improvement overall.
We do not see the weeks to days improvement though. The bottleneck before was testing and reviewing, and they are even bigger bottlenecks now.
What kind of work do you do, and what kind of workflow were you using before and after AI to benefit so much?
I see AI-native as those who have embraced it, and are learning to leverage it appropriately.
Huh? If it came out this year then everybody had a chance to learn it this year?
They hear this from the sellside, from activists, from the guys managing their private market allocations etc.
Plenty of us here can conceive, design, architect, build, ship and own things from soup to nuts, and feel a lot more invested in the result as a consequence.
If the compensation is good, and it feels less shackled and less bureaucratic, is that necessarily a bad thing?
And something else I don't get about these AI related layoff announcements: if AI was a productivity boost wouldn't you hire more engineers and technical staff to capture the value? Or else you're basically saying "we're a tech company that has no idea what to do with more super-engineers".
Except for that tone-deaf part at the end, where right after he talks to the people who "will be leaving" (that is, the people getting kicked out), he says that Coinbase will be stronger and healthier for this. Which makes it hard not to draw the conclusion that the people "leaving" are part of the unhealth.
The CEO probably doesn't even think that, and just wants to reduce costs. But from what was written, the implications are decidedly suboptimal.
But managers should mostly be about two things IMHO:
* Facilitating for ICs.
* COACHING. To elevate ICs and help propagate the desired "culture".
And I don't think they're trying this thing that Coinbase is trying either.
I have an example from my line of work: a full service rewrite in a new language. It would have taken forever without AI; AI makes it easier and faster. The new service has better throughput and uses fewer machines. Having a complete, full test harness that lets us ensure we are matching all the functionality of the previous service is key. AND we are keeping the old service on standby, because we know we don't know what might be wrong with the new one.
What's your example?
I have experienced areas where high productivity can be had without much loss in quality. So I can believe it. But it really depends on what you’re doing and I firmly believe many companies will run out of easy stuff that we can blaze through with AI fairly quickly. At least that’s where we seem to be heading
It has poisoned more than one company (especially startups). Its the "go big or go home" mentality. The "the market is ours to take if we just put more fuel to this fire" mentality.
I was in a startup once (Reid was an investor). The CEOs bought into blitzscaling and told the whole company we were going to "blitzscale". They hired 2 directors (with 0 reports) and had ambitions of hiring 100s of engineers. Then reality struck: there was no revenue and no path to revenue (because it was the early days of AI). The blitzscaling was "paused". The directors had 1 EM reporting to each of them. You can imagine what happened in the months after that.
I feel like managers should be able to contribute. Managing a good team isn't that hard, though managing a bad team (or a good team in the midst of a ton of bad processes) is a nightmare.
I would forget half the processes I use if I didn't document them all religiously. The benefit now is that I can save myself significant time by having an LLM help me write the docs.
/s
> Over the past year, I've watched engineers use AI to ship in days what used to take a team weeks. Non-technical teams are now shipping production code and many of our workflows are being automated.
Or maybe they have to start designing shoes first, IDK.
Maybe they're using AI for testing and reviewing more than you are, not just for coding?
Age-ism is reinforced by senior people resisting the notion that they need to change and adapt. I'm not like that (I'm 51). But I'm having a lot of tedious debates with people lately about how they don't want to use AI tools, how their job is somehow special so they can't use it, etc. Many of those people are actually quite a bit younger than me. There definitely is a pattern here of people that are a bit set in their ways not adapting and being a bit stubborn. Age-ism is unfair to people that are actually putting in the work to learn and adapt. But life is unfair.
Nobody actually has more than 6-12 months of experience with agentic coding tools at this point because the tools were pretty much unusable before then. I was using ChatGPT and a few other tools before that for occasionally copy pasting bits of code or figuring out bugs. But that's not really the same thing.
Half a year is not a huge gap to bridge if for whatever reason you are a bit behind on this. So, get on with it. It should not take you that long to catch up. Especially if you are a bit older, the best way to counter age-ism is showing that you have all the skills already.
Either it's badly named or people are trying to be included (?).
With very rare exceptions, professional athletes are just not as good athletically at 40/50 as they were at 20. They may be smarter in some ways--which maybe means they'd be better as coaches.
I'm not sure this carries over well to engineering unless you mean that the young people are willing to grind for a lot more hours on nights and weekends.
He won the 2004 Euro Championship, the 2005 FIFA Beach Soccer World Cup along with a number of top 4 places over his 15 years as player and/or coach.
“We at the coding company LovelyBeeBunny should be like the samurai of old, willing to draw our swords and die for the emperor…” etc. And it is always riddled with a complete misunderstanding of the analogous subject, whether sports, history, or warfare.
Except that he got good at his short game by the end. LLMs will get there sooner than we think.
If you're truly "managing fleets of agents" there's no way you're able to sift through the good and the bad in the output. If your AI-generated code is evolvable (which is hard to tell right now) then you're not writing it with "fleets of agents". If you are writing it with fleets of agents, I would bet it's not evolvable; you just haven't reached the breaking point yet.
I find this particularly funny. There were more than a couple of Star Trek episodes where some alien planet depends on some advanced AI or other technology that they no longer understand, and it turns out the AI is actually slowly killing them, making them sterile, etc. (e.g. https://en.wikipedia.org/wiki/When_the_Bough_Breaks_(Star_Tr... )
Sure, Star Trek is fiction, but "humans rely on a technology that they forget how to make" is a pretty recurrent theme in human history. The FOGBANK saga was pretty recent: https://en.wikipedia.org/wiki/Fogbank
It just amazes me that people think "Sure, this AI generated code is kinda broken now, but all we need is just more AI code to fix it at some unknowable point in the future because humans won't be able to understand it!"
This is how I feel. It’s building things for me that work. I don’t care how it works under the hood in many cases.
The problem is that executives could take the 15-20% productivity boost and be content, but they read stuff like this, get greedy, and they don't understand the risk they're taking.
It would be slop, but the market would love it
The extreme being people that produce only one report a month and that more than justifies their income + bonus.
The question remains, if there are no pure managers, then is this CSM / Sales shipping production code? If yes, then it's indeed scary...
> No pure managers: Every leader at Coinbase must also be a strong and active individual contributor. Managers should be like player-coaches, getting their hands dirty alongside their teams.
4 months basic severance pay + 1 month per 2 years of employment is nice? So, 5 months total severance after 2 years of working for them, or only 6 months after 4 years.
Let me guess, you are from the US if you think this is nice. As a European, I would say this is fairly standard, nothing to brag about; 3 months should be the bare minimum by law.
However, I don't think this is that unusual in SV layoff packages.
I'll stop you right there. AI is not good at systems programming, it's good at CRUD web development, which is where most people are seeing the gains.
For things like web frontends/backends, though, it works beautifully. I ship things in days that would take me weeks to write by hand, and I'm very fast at writing things by hand. The AI also ships many fewer bugs than our average senior programmer, though maybe not fewer bugs than our staff programmers.
Congratulations. But you completely missed my point. I didn't say old people can't be in tune with AI.
> I see AI-native as those who have embraced it
That's not what the word "native" means. In the human language situation I referred to, it's about the language you learned first. It's not a synonym of proficient or fluent. If you learned to code first without AI tools, you are not AI-native by any definition I would understand, no matter how good at using AI you may be.
It's not just "English-native" that makes me think they have this meaning in mind. It's also the term "digital native" that gets thrown around a lot and is absolutely about how old you are. https://en.wikipedia.org/wiki/Digital_native
{1} scottlamb: "I suspect their lofty stated goal of X is a lie, to disguise their true goal of Y, which is something common which companies find much easier and more-desirable."
{2} CityOfThrowaway: "You are wrong, because it's obvious that X is achievable... if you define 'native' in a certain way."
{3} Terr_: "Uh, what? That doesn't make sense. The feasibility of X isn't part of Scottlamb's argument. Even if we assume X is possible, it isn't evidence they actually intend X over Y."
I'm not sure exactly which children they're planning to replace all their staff with, nor how they plan to get around the child labour laws.
You might assume they aren't going to be so stupid as to try to exclude everyone who isn't new to programming. I wouldn't. They're a crypto business.
See also "digital native", a popular term which is absolutely about growing up after the technology in question was ubiquitous. https://en.wikipedia.org/wiki/Digital_native
The two extremes of company culture are status cultures and service cultures.
In a status culture the product is the internal status hierarchy. External products are largely incidental goals, and customers and markets are only valued to the extent they create metrics that can be exploited by status seekers. Likewise employees.
In a service culture the goal is customer service through high quality output and employee development.
US corps lean far more to status culture than service culture. This is excellent for short termism, but the culture often becomes dysfunctional, if not outright abusive, and sooner or later it implodes, because status cultures aren't good at accepting reality, or at accurately reading it when they do accept it.
And status cultures tend to cargo cult management, where the C-suite is comparing its status to other C-suites, and copying apparent status-raising actions without thinking them through.
In good times a status culture will overhire, because hiring more employees looks like growth. In bad times status cultures will overfire because "cutting the slack" is lowest common denominator status management.
AI is the same on steroids. You get the promise of more growth with fewer employees, and that's hard to resist, even though it's entirely speculative and could easily be catastrophic. (Company results, and especially lasting company results, are orthogonal to whether some employees get good results with AI, because what actually affects results is how predictable the improvements are, whether there are likely downsides, and whether they're structurally in the right places.)
Whether managers should also be ICs is a side issue.
The macro is not great right now. The world economy is on a razor's edge. If things unwind, we could all be in for a world of economic hurt. There aren't many levers to pull us out this time around, either.
Crypto is in an even worse state. Investors want liquidity for the uncertainty. Plus there's the looming Q-day that keeps getting pushed earlier and earlier by the experts while we're also inching nearer and nearer on the clock.
A company _is_ the sum of its people and their talents, aligned behind a mission statement.
This is so misguided, I can't help but think this 'biological unit' of a founder won't last long.
E.g. you can't just spew nonsense like "let's work together like a bee hive, everything for the Queen/CEO, no matter the personal cost to an individual" without others pointing out the stupidity of comparing humans with bees.
You can't just come up with a desirable adjective and start coming up with random scenarios in which those characteristics may occur. "Let's make the company strong as a gorilla, big as an elephant, smart as Von Neumann, bright as a Sun, as courageous as young guys from youtube fails compilations." This makes no sense whatsoever.
The GP post describes a common problem in _most_ workplaces in the market today. It’s not specific to crypto, AI, or anything in between.
Edit: it’s because the loss is an accounting loss due to mark to market adjustment, while the company is operationally profitable.
I assume that’s still not great, but not nearly as dire as the reported loss suggests, and not a sign of a dying company.
In the end, everyone is replaceable. But a king is a bit more difficult to replace, as historically shown.
I think LLMs are great, and I think people who can use them to get to the green in one and take it from there will soar, just like people who could identify a problem and solve it themselves did in the past.
- big institutional allocators
- activists
- the sellside
- guys managing their private market allocations
[1] Of course permissions are such that the tools can't do anything that would damage any of the systems.
So on one hand they are the most secure business on the Internet and on the other hand YOLO!
Do fintech customers share your ideals as to what is "critical stuff" and what isn't? How much of this business could _plausibly_ be "non critical?"
But the few years to come are going to be wild for a lot of folks out there.
I don't expect Coinbase to publish a "we're hiring everyone back" 5 years from now, but I hope at some point the media will spot those trends as they - I have no doubt - will happen, and propagate that tune.
Why not? Managers should be like left-handed specialist relievers: they come in for a short time to handle a specific issue and otherwise leave the team alone.
It'd be looking a gift horse in the mouth to whine about "well they get 22+% at XYZ"
You know, hire, stop hiring, then start firing
It's always someone higher up the ranks who wants meetings, training, or something dumb because his golf buddy sold him on Kafka support contracts in inappropriate situations, or an architect who needs to shoehorn some tech in so they can have it in their designs, ready for their next job role. I spend probably more time in meetings than coding.
Why can't I have an AI that takes my meetings for me?
The best complement to AI will be a human who is part architect (they know not to build the new system on lovable, and they understand the company's digital assets) and part business analyst (can communicate effectively and tease out and distill requirements from customer team).
That indicates someone who has top-notch communication skills and also quite a bit of experience, i.e. someone older.
The simple truth is that I had to constantly learn something new and this is how it is in this profession. We’ve been in the trenches and we did it over and over again.
Now I’m using AI full time, doing same thing I always did - shipping products.
Newcomers with their first set of skills don't understand the meta responsibility in this field: it's never about coding something, it's about shipping products to solve business needs.
They aren't saying that they don't know what to do with the AI productivity boost, but rather they think it worth taking a huge productivity hit right now so they can invest in the future. Whether their vision of the future is realistic...
That's the problem with building your castle on quicksand, when your fundamentals aren't in the same order of magnitude as the market cap you command. When all you truly offer is gambling, eventually a shinier casino will open up and eat your lunch.
This cycle is about max extraction and fraud - Legitimized by the presidential family cashing out billions in meme coins, insider trading and forks of existing protocols.
Hacks have also been hitting hard. North Korea has stolen 500m this year alone and 2b last year.
So… no thriving. Quite the opposite. Dying is a more appropriate word at this time. Some would call this an opportunity. I see more pain ahead.
No wonder Coinbase is laying off people with the excuse of AI. The reality is that volume is zero. At this stage only me and a bunch of other retail weirdos keep on buying bitcoin paycheck by paycheck…
Actually, these scenarios happen in hockey as well. Teams will pick up character guys who have been through it all who are expected to contribute more off ice than on it. Corey Perry is one who comes to mind lately but they're never given a "coach" title. It's entirely possible though that these players may be expected to be a go-between guy between the coach and younger players to help them manage the pressure or to help with encouragement. They're definitely not getting prime minutes though.
I guess that would possibly be the same expectation of a manager who still codes. I can't see them doing anything critical. It's likely picking up some minor bugs or nice-to-have, low-priority feature work. I was a manager before, and while I didn't reach 15 reports, I was up to 12 at one time. There's just none of the focus time you need for coding. Maybe that's a bit different with AI, but even then you still need to find time to make changes and validate. And that's time taken away from other higher-impact things you could be doing for the team.
Have some empathy for people losing their jobs because of upper management’s incompetence.
Not sure the focus should be on athletic sports. Chess is a better analogy for software, I think.
When I grew up those were the very definition of "not girly". Our math and comp sci faculties at uni would bend over backwards for any of the girl students.
I would agree though that academics in general were "not manly" and at school at least streams of "academic" or "sporty" existed. For boys anyway.
For the girls (less fascinated by sports) the top sporties were often top academics as well.
History has shown that being academic is always better than sporty (if you have to pick one). The "status" given to sports is often an acknowledgment that it's a poor financial path, but we can offer "status" instead.
Yes, sports metaphors can be amusing, but it's the winners we're smiling at.
The benefits of unionization extend beyond this particular situation or company.
They can help shift the balance of power back to the employee and help them guard against being squeezed by their employer to produce more or take on more work for less benefits or compensation.
American tech workers have been fortunate to avoid such aggressive practices, but working conditions will only deteriorate from here, with workers crushed between LLMs and offshoring.
It is on every single worker to make sure that they don't please the system beyond what is reasonable. Often the problem is people who overwork themselves to please, and set the bar above a reasonable amount of work. Still, when the majority does not raise their output to an unhealthy level, that must be accepted as the ceiling.
This has always been the case where I work, long before AI.
Let OP make his “hire me and I’ll tell you why your AI first approach is bunk” market.
YMMV, I suppose, but this combined with the AI nonsense just makes the dislike even harder.
Sounds tight I love the direction industry is heading lol.
For the end user it looks like an evil cash-grab, but really it's the company protecting itself from regulatory vengeance.
They’ve added tokens and altcoins to the platform, but I don’t think that’s a particularly strong long-term bet.
That doesn’t make one model universally better. There are clear tradeoffs on both sides. But it is part of the equation worth considering in response to your point.
https://www.cryptopolitan.com/user-tricked-grok-bankrbot-to-...
I must live in a different Europe then. I'd say this would be EXTREMELY generous for Europe.
Either way, I'd still be shitting my pants. 16 weeks is not a lot of time to find another job in today's environment. I know devs who have been out of work for years and had to resort to stocking shelves at Home Depot to tread water.
The boost is for what are glorified CRUD apps, where it 1000x's the tedious work. However, the choices it makes along the way quickly blow up without cleanup. Seniors know how to keep their workstation clean, or they should.
It is even more abstraction, even harder to follow the code I'm "writing" with AI.
Also I have a fear that if/when the AI tide recedes, I'll be the one caught with my pants down since I have been forced to vibe code the majority of my career. As opposed to greybeards who can fall back on their decades of knowledge.
For sure this part screams LLM
It's all lip service - either AI-generated or hand-written.
It's totally random to accuse them of using "AI-native" to fire old people.
It is not specific to a crypto company, but the element of it being a crypto company cannot be ignored. Crypto companies are not like ordinary businesses; they have unique qualities, as does the crypto industry as a whole. Ever been to a crypto conference, for example? I have read about them and seen the videos. These things have the highest concentration of scammers and the gullible in any one place.
When building software, if you can state an unambiguous goal and what rules apply you are more than halfway done. It's not uncommon to work on something for a year and discover you have been building the wrong thing. Navigating that ambiguity is where all the value in software engineering is.
If the average programmer is this bad, then there must be better-than-average programmers reviewing the code. The problem with agents is that they can produce code at a far higher volume than the average programmer.
Anyway, I don't know how well the average programmer programs, but if you commit agent-generated code without careful review, your codebase will be cooked in a year or two.
In real organizations people tend to raise their performance to the [often unreasonable] level of expectations, even when situation stops being sustainable long-term for the whole group.
Suggesting that people should simply avoid overperforming assumes a level of control they don’t really have.
What do you think will actually happen at Coinbase now? Is it more likely that people will start saying hard “no,” or that they would stretch to meet the new expectations despite the personal cost?
As difficult as it is to use CSS to centre a field, the stakes are in a different ballpark.
And surely the place you work hired with this in mind. Many places have not, and yet now expect PMs who haven’t coded in years, or in many cases not at all, to contribute to their products’ codebases.
One group are the ones who are staying. They lose teammates, they have to restructure work and fear whether there will be another round soon, which may hit them.
And then there are customers, investors, ... who need to be assured they are not dealing with a failing company.
Who actually is required?
.. fundamentally, it's only the person collecting payment.
1. What statistics support this assumption? (Either for Coinbase specifically, or "tech companies" in general.)
2. Nobody has to be a literal greybeard in order to be in the crosshairs of downsizing. Just look at Amazon's "make them quit before vesting finishes" pattern.
Have some empathy for the misled retail investor that gambled their savings to thieves?
Isn't this wrong? I thought engineered systems meant something designed with limits.
---- edit ----
TBH I will post an article; I'm finishing it. It won't be so doomy, though - more about what to avoid so you don't fail.
I don't think this is true. Humans typically prefer "thanks for the hard work, here's your severance" to "you suck, here's your severance, loser."
Humans like being treated with respect, and words are a big part of that. Money is nice, but it's not the only thing we care about.
https://www.businessinsider.com/ai-isnt-killing-software-cod...
Actually, it sounds like you’re the one who hasn’t been to a crypto conference :)
Sure, there are good player-coaches, but there are also great pure leaders. There are also very bad player-coaches. A coach trying too hard and too deep to be a player when they are less "fit" (or skilled) has historically led to problems in many cases.
What licenses of theirs were terminated? Seems to me that the regulatory oversight is a joke.
Your coins get frozen with no reason given, even internally, except "machine said no" - no one gets any slap on the wrist unless you sue real hard and happen to win, and most likely that'll be just a scratch, not noticeable enough to change any attitudes.
The Man seeing someone they don't like transferring their coins through the fintech company - that's what those companies are really concerned about, because that would be a punch in the gut the company will feel.
Thus, the incentives. Current social design doesn't punish for false positives (until they hit really high levels), only false negatives.
All I wanted to say was that I don't find 4 months particularly "nice" as a European. Though I'm sure some Europeans would find it nice, since they work for crappy companies in countries with less protection - a lose-lose situation: no US benefits (salary/taxes), no European benefits (severance pay/notice period).
AI has solved simple CRUD, yes, but CRUD was easy before.
The term that best suits "people who embrace AI-assisted programming" is AI-first programmers, which is what they literally mean by the looks of it. Clearly, they just use what they think sounds cooler.
Wow. That’s my cue to never use Coinbase again.
"We’re not building Skynet, we’re cutting costs and putting the survivors on prompt duty"
Anything in that format gives that AI feel
But the reality is it is a standard MBA driven "bottom x%" cull dressed up with some 4d chess strategy.
Did I miss some news where Coinbase literally stole people’s money, or at least did something that could reasonably be called evil?
I noticed it was especially bad for on-call and incident response; these managers get pulled in to all the incidents because of their status and supposed involvement, but are not particularly useful in those rooms, adding even more cooks to the already crowded kitchen.
It can certainly overlap with what makes a great engineer, but not most of the time.
Internal tools keep the lights on and allow customer facing code to function!
Operational tooling also isn’t a sexy thing, but it’s vital for any company to function.
Just vague nonsense about compliance that magically aligns with padding their float. In reality they are using compliance and regulatory language as a shield to prop up their numbers. They are using KYC/AML to hold your funds hostage, as it's the most plausible explanation, one that also allows them to legally seize funds under a legal-sounding rationale. The fact that they do have to perform KYC/AML, and that there are penalties for not doing so, just happens to make it a valid-enough-sounding excuse when it's applied overly aggressively, because it lines up with their other goals.
If they set the trigger to freeze funds 2x as often as they need to against innocent false positives to pass compliance checks, then it falls under plausible deniability - and even better, when the regulator comes they can say some insane bullshit about how good their KYC/AML is. If they froze funds less often but instead just stole some for a little while and then returned it, it would be more obvious a crime had been committed. It's obvious what they're up to.
Of course the KYC/AML/regulatory officers are probably just pawns in this. The executives in the crypto and fintech space tell these people to set the sensitivity up to the 9s, which does increase KYC/AML 'true positives', but the unspoken part is that money is now locked up in the company's accounts, which creates a moral hazard in their fiduciary duty. They know damn well that what it actually does is inflate their float, at the cost of a bunch of false positives. In theory that satisfies AML, because a side effect is triggering more true positives, but in reality it's merely stealing money to increase floats, not actually optimizing to meet the cutoffs to keep your license. But no one is actually going to come out and say this. It will probably take a class action suit, which I have little doubt will eventually happen, when someone admits one day that these regulatory compliance triggers were intentionally set on the sensitive side for non-regulatory reasons.
The competition is also stiff with decades of experience and network effects
The truth is these crypto shops have a pretty poor reputation in the traditional finance industry. Nobody in trading tech goes to work for them unless they offer insane salaries, because they (we) know it's an unstable place to be.
1. you get fired with 2 months notice period and they will tell you, you don't need to bother to come anymore = 2 months of severance, you can sit at home, look for job for 2 months with full salary
2. on top of this you will get also extra 2 months severance pay
so in total, de facto, 4 months of severance pay. But I understand shitty companies will expect you to work even during the notice period (especially if they are firing you) and somehow expect you to deliver the same results. Smarter companies know the reality when they are firing someone and just tell them not to bother coming in anymore. This was my case in the last 1-2 jobs I had more than 10 years ago when I was still an employee (plus they wanted to give me 1 month of severance pay, but I argued about the years I had worked there and certain operational practices which could be published, so I got 2 months, unlike my less assertive colleagues). I'm nowadays a contractor/freelancer for companies outside Europe, so no law protection for me
my wife is always employed as an employee and got fired this winter under the conditions I mentioned in points 1 & 2 and got 2+2 months after 1 year of work; two jobs ago she was fired without severance but didn't need to work during the notice period
plus I found the mention of 6 months of COBRA as some benefit funny; in Europe you are covered by insurance regardless of your job status - whether employed or unemployed, you are always covered by universal healthcare
I'd love to hear more about the positive effects of designers and PMs using AI, especially more on the PM side, if you care to go into more detail
Many founders recycle into tech jobs after they discover exactly why failure rates of startups are so brutal. Apparently 15-25% of employees aged 30–39 at major SV companies have a failed or acquihired startup in their history. Golden handcuffs can appear very pretty after you've missed out on striking gold by yourself.
If engineers already know up front, with clarity, what they need to build, and the leadership is very focused and concentrates resources on doing a few things, then increasing the rate at which LOC is written is not beneficial - because getting the product built right is what matters.
Execution of unrelated ideas seems like a natural follow on, and having managed several such "labs" efforts, it's actually a good idea but it inevitably grinds up against the lack of will to continue investing in the face of headwinds, especially since the main business line is several orders of magnitude larger than anything labs can deliver in a foreseeable timeframe.
The only way I can rationalize that so many people refuse to believe this is happening is that they are on the seller side and not the buyer side of engineering labor. This means they have blind spots to the buyer's view of the market (some sort of information asymmetry), and secondly they exhibit cognitive dissonance to protect their self-esteem as a seller.
There's not much equivalent to "fit" here, just skill, and they decided they don't want the pure leaders, they want ones that are knuckle deep in the sausage.
Good decision or not, that very basic analogy is completely fine.
I'm not convinced a polite but AI-written email hits the same note. At the very least it's unintentionally disrespectful, which isn't a direct challenge. Your boss doesn't care enough to write an email by hand, but also doesn't care enough to burn bridges and insult you.
Knowing what you don't know and knowing how to get qualified information from people around you makes up for a lot of not having a programming background.
If anything, the managers with technical backgrounds who weren't active programmers tended to significantly underestimate the difficulty of doing something because back in their day, things were different or some such nonsense.
Your support will be provided by an AI bot almost as smart as Clippy because it was trained on the marketer's corpus of emails.
The worst part of using something like Coinbase is having to do yet another bank transfer, waiting for it to clear, doing KYC/AML yet again, etc., for what for most people is just buying one or two assets (BTC or maybe ETH, probably). Instead, you can just click buy in Robinhood or Schwab along with everything else.
They are legally prevented from telling you by the regulators, at least in the US.
As far as I understand, they're often not allowed to disclose that. E.g.,
https://www.bitsaboutmoney.com/archive/seeing-like-a-bank/
> In the specific case of “Why did the bank close my account, seemingly for no reason? Why will no one tell me anything about this? Why will no one take responsibility?”, the answer is frequently that the bank is following the law. As we’ve discussed previously, banks will frequently make the “independent” “commercial decision” to “exit the relationship” with a particular customer after that customer has had multiple Suspicious Activity Reports filed. SARs can (and sometimes must!) be filed for innocuous reasons and do not necessarily imply any sort of wrongdoing.
> SARs are secret, by regulation. See 12 CFR § 21.11(k)(1) from the Office of Comptroller of the Currency...
Just a minute ago 5.5 looked at some human-written code of mine from last year and while it was making the changes I asked for it determined the existing code was too brittle (it was) and rewrote it better. It didn't mention this in its summary at the end, I only know because I often watch the thinking output as it goes past before it hides it all behind a pop-open.
There is ZERO CHANCE they have used AI unintentionally
> also doesn't care enough to burn bridges and insult you.
By actively using AI they are stating that you are so far beneath their notice that even a personal "eff you" is not worth the time. One would have to actively poke at some personally hurtful areas to come off as more insulting than the use of AI.
Went on for about a year, worse each week, before I left.
Sure, you can earn more, but there are plenty of benefits to being in Europe. For instance, how many days of vacation do you get by law in the US? What's the point of more money in the US if your employer will work you to death with no work/life balance?
I found the mention of COBRA for 6 months amusing; in most of the EU that's a permanent benefit of all citizens, not something given by an employer - your care is just paid from universal healthcare, regardless of whether you are employed or unemployed. In the US you can end up not earning enough to have good health insurance, but earning enough to not be covered by insurance for low-income people; no such thing is possible in the EU (though this doesn't really affect the IT field)
A friend of mine works for one of the major crypto firms and they're starting to deploy algorithmic trading bots on their own exchange.
The spreads on these markets can be diabolical
When someone gets their money frozen for a month only to have to perform a KYC check - and even if the KYC check was legitimate, these kinds of results are common over years - it's obvious the delay was the result of a business decision that increased their float.
I think you're conflating the requirements of the BSA with how executives use it in a hostile way against customers. They can make the deliberate decision to slow down KYC/AML officers and checks after a trigger, while putting the triggers on a hair-trigger setting, all while citing secrecy under the BSA. That is the regulatory nonsense under which they are dressing up a business, non-regulatory decision. It's there to provide plausible deniability.
The compliance officer in this case is plausibly just following the law but in reality they're just running cover for increasing the float -- maybe even unwittingly.
If interest in tokens and altcoins wanes, Coinbase may be in a weak position.
I also find I need to run an LLM code review or two against any code it produces to even get to the point where it's ready for human review.
In any case they served as an extremely valuable tool.
Put otherwise: suppose I run a bank and you deposit your paycheck. I decide our reserves are a little low, so I set the KYC/AML triggers to be even more sensitive, so that an extra 0.2% of innocent paychecks get held up an extra 4 weeks (I have also conveniently slowed down / underhired customer service), which also causes me to catch 1 or 2 more real criminals. That's not KYC/AML, even though that's the mechanism by which I claim to have held the funds. I'm not bound by BSA secrecy in that case, since the underlying trigger was increasing the float rather than actual KYC/AML compliance.
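To make the incentive concrete, here's a back-of-the-envelope sketch of that float mechanism. All the figures (deposit volume, interest rate) are invented for illustration, not taken from any real bank:

```python
# Hypothetical illustration of the "hold rate -> free float" incentive
# described above. Every number here is made up for the sketch.

annual_deposits = 10_000_000_000   # $10B of paychecks processed per year
held_fraction = 0.002              # extra 0.2% of deposits flagged and frozen
hold_weeks = 4                     # extra hold time per flagged deposit
annual_rate = 0.05                 # short-term rate earned on idle reserves

# Average extra balance sitting frozen at any given moment:
# the flagged flow spends hold_weeks/52 of the year held up.
extra_float = annual_deposits * held_fraction * (hold_weeks / 52)

# Risk-free interest earned on that float over a year.
extra_interest = extra_float * annual_rate

print(f"average extra float:      ${extra_float:,.0f}")
print(f"extra interest per year:  ${extra_interest:,.0f}")
```

Even with these modest hypothetical numbers, a 0.2% hold rate on a large deposit flow yields a seven-figure standing float, which is why the trigger sensitivity is a business lever and not just a compliance setting.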
------- re: below due to throttling ---------
I am accusing fintech and crypto businesses in general of committing mass fraud through intentionally setting KYC/AML on an artificially sensitive trigger to increase their floats, yes.
I do not know if Coinbase specifically does that -- my limited experience with them is they are one of the few fintech companies that hasn't fucked me over.
I have an absolutely massive body of evidence that leads me to that conclusion: my own transactions and frozen funds, plus a wide range of customer-service complaints showing that KYC/AML checks on frozen funds are stalled for weeks to months without any plausible explanation of what is happening. That is not a KYC/AML regulatory action but an intentional choice to raise floats for free interest and pad their numbers.
Of course, what's extraordinarily ironic here is that when a fintech claims you violated KYC/AML, it's "the law says we provide no evidence" -- but if you turn around and accuse them, the industry shills scream "without evidence" while simultaneously insisting your counterparty doesn't have to provide any! They are hypocrites: the very people accusing you without evidence condemn you for doing the same. They were the ones who set the bar that evidence isn't needed, not me.
What if you ask an AI to generate a hundred different paragraphs and choose the one that best conveys what you actually feel and want to communicate?
Is it still a perfect nothing?
You do get how that's worse, right? The person would rather spend their time arguing with the clanker than thinking about the other person and putting those thoughts into words, however unstructured they are.
Just one rebuttal ago, it was explained why it was okay to freeze customer funds without providing any evidence.
Now we are Jekyll and Hyde'ing back to getting upset about an accusation made without evidence. That was the crux of my entire case! I am being damned for allegedly using the same standard of evidence as my accuser (though I dispute that I am presenting as little as they are)!
If that's your position, then you have concluded and rested my case for me, in my favor. The entire KYC/AML argument falls apart because it fails your own requirement to present evidence alongside an accusation.
Either accusation without presented evidence is bad, in which case KYC/AML as it is used -- stalling people for weeks to months without providing evidence -- totally falls apart and I rest my case; or that standard of evidence is OK, in which case I've presented at least as much evidence as fintechs provide in their accusations against customers (nothing), and I also rest my case.
Whichever of these last two Jekyll and Hyde responses we pick, it isn't working against me.
So essentially you have three choices:
1. Spend time writing (or having a copywriter write) in corporate-fluff dialect, where the actual message is still understandable by all parties. At the cost of appearing tone deaf.
2. Spend time iterating with a bot that speaks some undefined sub-dialect of LLMinese, where the reception of the message is unknown. At the cost of appearing even more tone deaf and insulting than a corporate cog.
3. Spend time restructuring the message in your genuine voice. At the cost of maybe being heard more harshly than intended.
I fail to see how option 2 can be perceived as anything but the worst, unless you assume that the target audience does not distinguish LLMinese from actual speech.