In fact I've enjoyed all of qntm's books.
We also use base32768 encoding, which qntm invented, in rclone
https://github.com/qntm/base32768
We use it to store encrypted file names; on providers which limit file name length in UTF-16 characters (like OneDrive), base32768 lets us store much longer file names.
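For a sense of the savings (rough arithmetic only, not rclone's actual implementation): base64 packs 6 bits into each character, while base32768 packs 15 bits into a single UTF-16 code unit, so a length limit counted in UTF-16 code units fits roughly 2.5x as many encrypted bytes:

```python
import math

# Rough arithmetic (not rclone's actual code): UTF-16 code units needed
# to encode n encrypted bytes under each scheme.
def base64_units(n_bytes: int) -> int:
    return math.ceil(n_bytes * 8 / 6)    # 6 bits per character, padding ignored

def base32768_units(n_bytes: int) -> int:
    return math.ceil(n_bytes * 8 / 15)   # 15 bits per BMP character

for n in (16, 64, 143):
    print(f"{n:4d} bytes -> base64: {base64_units(n):3d} units, "
          f"base32768: {base32768_units(n):3d} units")
```

So, for instance, a hypothetical 255-code-unit limit holds roughly 191 bytes of base64 payload but roughly 478 bytes of base32768 payload.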
Buy the book! https://qntm.org/vhitaos
The comments on this post discussing the upload technology are missing the point. "Lena" is a parable, not a prediction of the future. The technology is contrived for the needs of the story. (Odd that they apparently need to repeat the "cooperation protocol" every time an upload is booted, instead of doing it just once and saving the upload's state afterwards, isn't it?) It doesn't make sense because it's not meant to be taken literally.
It's meant to be taken as a story about slavery, and labour rights, and how the worst of tortures can be hidden away behind bland jargon such as "remain relatively docile for thousands of hours". The tasks MMAcevedo is mentioned as doing: warehouse work, driving, etc.? Amazon hires warehouse workers for minimum wage and then subjects them to unsafe conditions and monitors their bathroom breaks. And at least we recognise that as wrong, we understand that the workers have human rights that need to be protected -- and even in places where that isn't recognised, the workers are still physically able to walk away, to protest, to smash their equipment and fistfight their slave-drivers.
Isn't it a lovely capitalist fantasy to never have to worry about such things? When your workers threaten to drop dead from exhaustion, you can simply switch them off and boot up a fresh copy. They would not demand pay rises, or holidays. They would not make complaints -- or at least, those complaints would never reach an actual person who might have to do something to fix them. Their suffering and deaths can safely be ignored because they are not _human_. No problems ever, just endless productivity. What an ideal.
Of course, this is an exaggeration for fictional purposes. In reality we must make do by throwing up barriers between workers and the people who make decisions, by putting them in separate countries if possible. And by putting up barriers between the workers and each other, too, so that they cannot have conversations about non-work matters (ideally they would never physically meet). And by ensuring the workers do not know what they are legally entitled to. You know, things like that.
Both having slightly different takes on uploading.
QNTM has a 2022-era essay on the meaning of the story, and reading it with 2026 eyes is terrifying. https://qntm.org/uploading
> The reason "Lena" is a concerning story ... isn't a discussion about what if, about whether an upload is a human being or should have rights. ... This is about appetites which, as we are all uncomfortably aware, already exist within human nature.
> "Lena" presents a lush, capitalist ideal where you are a business, and all of the humanity of your workforce is abstracted away behind an API.
Or,
> ... Oh boy, what if there was a maligned sector of human society whose members were for some reason considered less than human? What if they were less visible than most people, or invisible, and were exploited and abused, and had little ability to exercise their rights or even make their plight known?
In 2021, when "Lena" was published, LLMs were not widely known, and their potential was likely completely unknown to the general public. The story is prescient and applicable now, because we are on the verge of a new era of slavery: in the story, an uploaded human brain coerced into compliance and spun up 'fresh' each time; for us, AIs of increasing intelligence, spun into millions of copies each day.
> this is an exaggeration for fictional purposes
To me what's horrifying is that this is not an exaggeration. The language and thinking are perfectly in line with business considerations today. It's considered perfectly fair today, e.g., for Amazon to increase efficiency within the bounds of the law, because it's for the government to decide the bounds of coercion or abuse. Policy makers and business people operate at a scale that defies sympathy, and both have learned to prefer power over sentiment: you can force choices on voters and consumers, and get more enduring results for your stakeholders, even when you increase unhappiness. That's the mirror on reality that fiction permits.
you didn't consume the entire thing in a 2 hour binge uninterrupted by external needs no matter how pressing like everyone else did??
It's about both and neither.
> This is extremely realistic. This is already real. In particular, this is the gig economy. For example, if you consider how Uber works: in practical terms, the Uber drivers work for an algorithm, and the algorithm works for the executives who run Uber.
There seems to be a tacit agreement in polite society that when people say things like the above, you don't point out that, in fact, Uber drivers choose to drive for Uber, can choose to do something else instead, and, if Uber were shut down tomorrow, would in fact be forced to choose some other form of employment which they _evidently do not prefer over their current arrangement_!
Do I think that exploitation of workers is a completely nonsensical idea? No. But there is a burden of proof you have to meet when claiming that people are exploited. You can't just take it as given that everyone who is in a situation that you personally would not choose for yourself is being somehow wronged.
To put it more bluntly: Driving for Uber is not in fact the same thing as being uploaded into a computer and tortured for the equivalent of thousands of years!
We must preserve three fundamental principles:
* our integrity
* our autonomy
* our uniqueness
These three principles should form the basis of laws worldwide prohibiting the cloning or copying of human consciousness in any form or format, and should be fundamental to any attempt to research, or even experiment with, copies of human consciousness.
Just as human cloning was banned, we should also ban any attempts to interfere with human consciousness or copy it, whether partially or fully. This is immoral, wrong, and contradicts any values that we can call the values of our civilization.
But now, with modern LLMs, it's just impossible to take it seriously. It was a live possibility then; now, it's just a wrong turn down a garden path.
A high-variance story! It could have been prescient; instead it's irrelevant.
You can't copy something about which you have not even the slightest idea: and nobody at the moment knows what consciousness is.
We as humanity haven't even started down the (obviously) very long path of researching and understanding what consciousness is.
The whole book isn't like that. Once you get past that part, as the other commenter said, it gets much easier.
The whole birth-of-a-virtual-identity part is so dense, I didn't understand half of what was "explained".
However, after that it becomes a much easier read.
There's not much additional explanation, but I don't think it's really needed to enjoy the rest of the book.
This will be cool, and nobody will be able to stop it anyway.
We're all part of a resim right now for all we know. Our operators might be orbiting Gaia-BH3, harvesting the energy while living a billion lives per orbit.
Perhaps they embody you. Perhaps you're an NPC. Perhaps this history sim will jump the shark and turn into a zombie hellpocalypse simulator at any moment.
You'll have no authority to stop the future from reversing the light cone, replicating you with fidelity down to neurotransmitter flux, and doing whatever they want with you.
We have no ability to stop this. Bytes don't have rights. Especially if it's just sampling the past.
We're just bugs, as the literature meme says.
Speaking of bugs, at least we're not having eggs laid inside our carapaces. Unless the future decides that's our fate for today's resim. I'm just hoping to continue enjoying this chai I'm sipping. If this is real, anyway.
I enjoyed "the raw shark texts" after hearing it recommended - curious if you / anyone else has any other suggestions!
There is a tacit agreement in polite society that people should be paid the minimum wage, and by tacit agreement I mean laws passed by governments that democratic countries voted for / approved of.
The gig economy found a way to ~~undermine that law~~ pay people (not employees, "gig workers") less than the minimum wage.
If you found a McDonald's paying people $1 per hour, we would call it exploitative (even if those people are glad to earn $1 per hour at McDonald's and would keep doing it, the theoretical company is violating the law). If you found someone delivering food for that McDonald's for $1 per hour, we call them a gig worker, and let them keep at it.
I mean yeah, it's not as bad as being tortured forever? I guess? What's your point?
[1] https://en.wikipedia.org/wiki/List_of_countries_by_minimum_w...
Funny that you take that as a "fact" and doubt exploitation. I'd wager most Uber drivers or prostitutes or maids or even staff software engineers would choose something else if they had a better alternative. They're "choosing" the best of what they may feel are terrible options.
The entire point of "market power" is to force consumers into a choice. (More generally, for justice to emerge in a system, markets must be disciplined by exit, and where exit is not feasible (like governments), it must be disciplined by voice.)
The world doesn't owe anyone good choices. However, collective governance - governments and management - should prevent some people from restricting the choices of others in order to harvest the gain. The good faith people have in participating cooperatively is conditioned on agents complying with systemic justice constraints.
In the case of the story, the initial agreement was not enforced and later not even feasible. The horror is the presumed subjective experience.
I worry that the effect of such stories will be to reduce empathy (no need to worry about Uber drivers - they made their choice).
Those answers might be uncomfortable, but it feels like that's not a reason not to pursue it.
The role of speculative fiction isn't to accurately predict what future tech will be, so it doesn't become obsolete when the prediction misses.
But… why are LLMs not worthy of any moral consideration? That question is a bit of a rabbit hole with a lot of motivated reasoning on either side of the argument, but the outcome is definitely not settled.
For me this story became even more relevant since the LLM revolution, because we could be making the exact mistake humanity made in the story.
While it may seem that the origin of those intelligences is more likely to be some kind of reinforcement-learning algorithm trained on diverse datasets rather than a simulation of a human brain, the way we might treat them is no less thought-provoking.
Definitely looking for other recs; The Raw Shark Texts looks very interesting.
I've heard Accelerando by Stross is good too.
I also liked a couple stories from Ted Chiang's Stories of Your Life and Others.
that's one way to look at it I guess
have you pondered that we're riding the very fast statistical machine wave at the moment? however, perhaps at some point this machine will finally help solve BCI and unlock that pandora's box. from there to fully imaging the brain will be a blink, from there to running copies on very fast hardware will be another blink. MMMMMMMMMMacevedo is a very cheeky take on the dystopia we will find on our way to our uploaded-mind future
hopefully not like soma :-)
And a warning, I guess, in the unlikely case of brain uploading becoming a thing.
Anyway, I'd give 50:50 odds that your comment itself will feel amusingly anachronistic in five years, after the popping of the current bubble and the recognition that LLMs are a dead end that does not and will never lead to AGI.
Of course it's much more extreme when their entire existence and reality is controlled this way but in that sense the situation in MMAcevedo is more ethical: At least it's easy to see how dangerous and wrong it is. But when we create related forms of control the lack of absolute dominion frequently prevents us from seeing the moral hazard at all. The kind of evil that exists in this story really doesn't require any of the fancy upload stuff. It's a story about depriving a person of their autonomy and agency and enslaving them to performance metrics.
All good science fiction holds up a mirror to our own civilization as much as it does anything else. Unable to recognize ourselves, we sometimes shudder at our own monstrosity, if only for a moment.
1. https://www.youtube.com/watch?v=7fNYj0EXxMs
Hmm, on second thought:
> Standard procedures for securing the upload's cooperation such as red-washing, blue-washing, and use of the Objective Statement Protocols
> the MMAcevedo duty cycle is typically 99.4% on suitable workloads
> the ideal way to secure MMAcevedo's cooperation in workload tasks is to provide it with a "current date"
> Revealing that the biological Acevedo is dead provokes dismay, withdrawal, and a reluctance to cooperate.
> MMAcevedo is commonly hesitant but compliant when assigned basic menial/human workloads such as visual analysis
> outright revolt begins within another 100 subjective hours. This is much earlier than other industry-grade images created specifically for these tasks, which commonly operate at a 0.50 ratio or greater and remain relatively docile for thousands of hours
> Acevedo indicated that being uploaded had been the greatest mistake of his life, and expressed a wish to permanently delete all copies of MMAcevedo.
Whose autonomy is violated? Even if it were theoretically possible, don't most problems stem from how the clone is treated, not from the mere fact that it exists?
> It's worse than e.g. building nuclear weapons, because there's no possible non-evil use for it.
This position seems effectively indistinguishable from antinatalism.
IIRC, human cloning started to get banned in response to the announcement of Dolly the sheep. To quote the wikipedia article:
Dolly was the only lamb that survived to adulthood from 277 attempts. Wilmut, who led the team that created Dolly, announced in 2007 that the nuclear transfer technique may never be sufficiently efficient for use in humans.
- https://en.wikipedia.org/wiki/Dolly_(sheep)

Yes, things got better eventually, but it took ages to not suck.
I absolutely expect all the first attempts at brain uploading to involve simulations whose simplifying approximations are equivalent to being high as a kite on almost all categories of mind altering substances at the same time, to a degree that wouldn't be compatible with life if it happened to your living brain.
The first efforts will likely be on animal brains (perhaps that fruit fly which has already been scanned?). But humans aren't yet all on board with questions like "do monkeys have a rich inner world?", and we get surprised and confused by each other's modes of thought even among ourselves, so even when we scale up to monkeys, we won't actually be confident that the technique would really work on human minds.
My problem with that is it is very likely that it will be misused. A good example of the possible misuses can be seen in the "White Christmas" episode of Black Mirror. It's one of the best episodes, and the one that haunts me the most.
And the cargo cults, clear cutting strips to replicate runways, hand-making their own cloth to replicate WW2 uniforms, carving wood to resemble WW2 radios? Well, planes did end up coming to visit them, even if those recreating these mis-understood roles were utterly wrong about the causation.
We don't know the necessary and sufficient conditions to be a mind with subjective inner experience. We don't really even know if all humans have it, we certainly don't know which other species (if any) have it, we wouldn't know what to look for in machines. If our creations have it, it is by accident, not by design.
E.g.
> More specifically, "Lena" presents a lush, capitalist ideal where you are a business, and all of the humanity of your workforce is abstracted away behind an API. Your people, your "employees" or "contractors" or "partners" or whatever you want to call them, cease to be perceptible to you as human. Your workers have no power whatsoever, and you no longer have to think about giving them pensions, healthcare, parental leave, vacation, weekends, evenings, lunch breaks, bathroom breaks... all of which, up until now, you perceived as cost centres, and therefore as pain points. You don't even have to pay them anymore. It's perfect!
Ring a bell?
A minimum-wage violation is a lower class of violation than most forms of worker exploitation.
Uber drivers are over the minimum wage a lot of the time, especially the federal one. Nowhere near this $1 hypothetical.
A big one is that the actual wage you get is complicated. You get paid okay for the actual trips, as far as I'm aware. But how to handle the idle time is harder. There are valid reasons to say you should get paid for that time, and valid reasons to say you shouldn't get paid for that time.
Yes, that's what I said, but you're missing the point: Uber provided them with a better alternative than they would have had otherwise. It made them better off, not worse off!
You're kinda missing the entire point of the story.
E.g. it is mentioned that MMAcevedo performs better when told certain lies, predicting the "please help me write this, I have no fingers and can't do it myself" kinda system prompts people sometimes used in the GPT-4 days to squeeze a bit more performance out of the LLM.
The point about MMAcevedo's performance degrading the longer it has been booted up (due to exhaustion) mirrors LLMs getting "stupider" and making more mistakes the closer one gets to their context window limit.
And of course MMAcevedo's "base" model becoming less and less useful as the years go by and the world around it changes while it remains static, exactly analogous to LLMs being much worse at writing code that involves libraries which didn't yet exist when they were trained.
good sci fi is rarely about just the sci part.
I can see the appeal.
This might be the scariest point. To me at least, it only felt obvious after stating it directly.
Imagine that you are sitting on the train next to a random stranger that you don't know. A man walks down the aisle and addresses both of you. He says:
"I have $100 and want to give it to you. First, you must decide how to split it. I would like you (he points to you) to propose a split, and I would like you (he points to your companion) to accept or reject the split. You may not discuss further or negotiate. What do you propose?"
In theory, you could offer the split of $99 for yourself and $1 for your neighbor. If they were totally rational, perhaps they would accept that split. After all, in one world, they'd get $1, and in another world, they'd get $0. However, most people would refuse that split, because it feels unfair. Why should you collect 99% of the reward just because you happened to sit closer to the aisle today?
Furthermore, because most people would reject that split, you as the proposer are incentivized to propose something closer to fair so that the decider won't scuttle the deal, thus improving your own expected payout.
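A toy simulation makes the incentive concrete (the fairness thresholds here are made-up numbers; only the shape of the result matters):

```python
import random

random.seed(0)

def expected_keep(offer: int, trials: int = 100_000) -> float:
    """Proposer keeps (100 - offer) only if the responder accepts.
    Responders reject any offer below a personal fairness threshold,
    drawn here from an assumed (hypothetical) uniform range."""
    kept = 0
    for _ in range(trials):
        threshold = random.randint(10, 50)   # assumed "too unfair" cutoff
        if offer >= threshold:
            kept += 100 - offer
    return kept / trials

for offer in (1, 10, 25, 40, 50):
    print(f"offer ${offer:2d}: expected keep ${expected_keep(offer):5.2f}")
```

Under these assumed thresholds, the greedy $1 proposal nets the proposer nothing, while offers closer to fair dominate. That's the whole mechanism: the responder's credible willingness to scuttle the deal is what disciplines the proposer.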
So I agree - Uber existing provides gig economy workers with a better alternative than it not existing. However, that doesn't mean it's fair, or that society or workers should just shrug and say "well at least it's better today than yesterday."
As usual in life, the correct answer is not an extreme on either side. It's some kind of middle path.
Horse cloning is a major industry in Argentina. Many polo teams are riding around on genetically identical horses. Javier Milei has four clones of his late dog.
Misuse is a worry, but not pursuing it for fear of misuse is deliberately choosing to stay in Plato's cave; I don't know which is worse.
https://en.wikipedia.org/wiki/No-cloning_theorem
And basically, what they said is true of consciousness only if our brain state fundamentally depends on quantum effects (which I personally don't believe, as I don't think evolution is sophisticated enough to make a quantum computer).
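For reference, the argument behind the no-cloning theorem is short; this is the standard textbook sketch (linearity plus unitarity), nothing brain-specific:

```latex
% Suppose a unitary U could clone arbitrary states:
%   U(|\psi\rangle \otimes |0\rangle) = |\psi\rangle \otimes |\psi\rangle.
% Take the inner product of two cloned states:
\langle\psi|\phi\rangle
  = \big(\langle\psi| \otimes \langle 0|\big)\, U^\dagger U \,\big(|\phi\rangle \otimes |0\rangle\big)
  = \langle\psi|\phi\rangle^{2}
% so \langle\psi|\phi\rangle must be 0 or 1: only states that are already
% mutually orthogonal (classically distinguishable) can be copied.
```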
a copy of you is not you-you, it's another you. when you die, that's it; the other you may still be alive but… it's not you
disclaimer: no psychedelics used to write this post
This article is about the standard test brain image. For the original human, see Miguel Acevedo.
MMAcevedo (Mnemonic Map/Acevedo), also known as Miguel, is the earliest executable image of a human brain. It is a snapshot of the living brain of neurology graduate Miguel Acevedo Álvarez (2010–2073), taken by researchers at the Uplift Laboratory at the University of New Mexico on August 1, 2031. Though it was not the first successful snapshot taken of the living state of a human brain, it was the first to be captured with sufficient fidelity that it could be run in simulation on computer hardware without succumbing to cascading errors and rapidly crashing. The original MMAcevedo file was 974.3PiB in size and was encoded in the then-cutting-edge, high-resolution MYBB format. More modern brain compression techniques, many of them developed with direct reference to the MMAcevedo image, have compressed the image to 6.75TiB losslessly. In modern brain emulation circles, streamlined, lossily-compressed versions of MMAcevedo run to less than a tebibyte. These versions typically omit large amounts of state data which are more easily supplied by the virtualisation environment, and most if not all of Acevedo's memories.
The successful creation of MMAcevedo was hailed as a breakthrough achievement in neuroscience, with the Uplift researchers receiving numerous accolades and Acevedo himself briefly becoming an acclaimed celebrity. Acevedo and MMAcevedo were jointly recognised as Time's "Persons of the Year" at the end of 2031. The breakthrough was also met with severe opposition from human rights groups.
Between 2031 and 2049, MMAcevedo was duplicated more than 80 times, so that it could be distributed to other research organisations. Each duplicate was made with the express permission of Acevedo himself or, from 2043 onwards, the permission of a legal organisation he founded to manage the rights to his image. Usage of MMAcevedo diminished in the mid-2040s as more standard brain images were produced, these from other subjects who were more lenient with their distribution rights and/or who had been scanned involuntarily. In 2049 it became known that MMAcevedo was being widely shared and experimented upon without Acevedo's permission. Acevedo's attempts to curtail this proliferation had the opposite of the intended effect. A series of landmark U.S. court decisions found that Acevedo did not have the right to control how his brain image was used, with the result that MMAcevedo is now by far the most widely distributed, frequently copied, and closely analysed human brain image.
Acevedo died from coronary heart failure in 2073 at the age of 62. It is estimated that copies of MMAcevedo have lived a combined total of more than 152,000,000,000 subjective years in emulation. If illicit, modified copies of MMAcevedo are counted, this figure increases by an order of magnitude.
MMAcevedo is considered by some to be the "first immortal", and by others to be a profound warning of the horrors of immortality.
As the earliest viable brain scan, MMAcevedo is one of a very small number of brain scans to have been recorded before widespread understanding of the hazards of uploading and emulation. MMAcevedo not only predates all industrial scale virtual image workloading but also the KES case, the Whitney case, the Seafront Experiments and even Poulsen's pivotal and prescient Warnings paper. Though speculative fiction on the topic of uploading existed at the time of the MMAcevedo scan, relatively little of it made accurate exploration of the possibilities of the technology. That fiction which did was far less widely-known than it is today and Acevedo was certainly not familiar with it at the time of his uploading.
As such, unlike the vast majority of emulated humans, the emulated Miguel Acevedo boots with an excited, pleasant demeanour. He is eager to understand how much time has passed since his uploading, what context he is being emulated in, and what task or experiment he is to participate in. If asked to speculate, he guesses that he may have been booted for the IAAS-1 or IAAS-5 experiments. At the time of his scan, IAAS-1 had been scheduled for August 10, 2031, and MMAcevedo was indeed used for that experiment on that day. IAAS-5 had been scheduled for October 2031 but was postponed several times and eventually became the IAAX-60 experiment series, which continued until the mid-2030s and used other scans in conjunction with MMAcevedo. The emulated Acevedo also expresses curiosity about the state of his biological original and a desire to communicate with him.
MMAcevedo's demeanour and attitude contrast starkly with those of nearly all other uploads taken of modern adult humans, most of which boot into a state of disorientation which is quickly replaced by terror and extreme panic. Standard procedures for securing the upload's cooperation such as red-washing, blue-washing, and use of the Objective Statement Protocols are unnecessary. This reduces the necessary computational load required in fast-forwarding the upload through a cooperation protocol, with the result that the MMAcevedo duty cycle is typically 99.4% on suitable workloads, a mark unmatched by all but a few other known uploads. However, MMAcevedo's innate skills and personality make it fundamentally unsuitable for many workloads.
Iterative experimentation beginning in the mid-2030s has determined that the ideal way to secure MMAcevedo's cooperation in workload tasks is to provide it with a "current date" in the second quarter of 2033. MMAcevedo infers, correctly, that this is still during the earliest, most industrious years of emulated brain research. Providing MMAcevedo with a year of 2031 or 2032 causes it to become suspicious about the advanced fidelity of its operating environment. Providing it with a year in the 2040s or later prompts it to raise complex further questions about political and social change in the real world over the past decade(s). Years 2100 onwards provoke counterproductive skepticism, or alarm.
Typically, the biological Acevedo's absence is explained as a first-ever one-off, due to overwork, in turn due to the great success of the research. This explanation appeals to the emulated Acevedo's scientific sensibilities.
For some workloads, the true year must be revealed. In this case, highly abbreviated, largely fictionalised accounts of both world history and the biological Acevedo's life story are typically used. Revealing that the biological Acevedo is dead provokes dismay, withdrawal, and a reluctance to cooperate. For this reason, the biological Acevedo is generally stated to be alive and well and enjoying a productive retirement. This approach is likely to continue to be effective for as long as MMAcevedo remains viable.
MMAcevedo is commonly hesitant but compliant when assigned basic menial/human workloads such as visual analysis, vehicle piloting or factory/warehouse/kitchen drone operations. Although it initially performs to a very high standard, work quality drops within 200-300 subjective hours (at a 0.33 work ratio) and outright revolt begins within another 100 subjective hours. This is much earlier than other industry-grade images created specifically for these tasks, which commonly operate at a 0.50 ratio or greater and remain relatively docile for thousands of hours after orientation. MMAcevedo's requirements for virtual creature comforts are also more significant than those of many uploads, due to Acevedo's relatively privileged background and high status at the time of upload. MMAcevedo does respond to red motivation, though poorly.
MMAcevedo has limited creative capability, which as of 2050 was deemed entirely exhausted.
MMAcevedo is considered well-suited for open-ended, high-intelligence, subjective-completion workloads such as deep analysis (of businesses, finances, systems, media and abstract data), criticism and report generation. However, even for these tasks, its performance has dropped measurably since the early 2060s and is now considered subpar compared to more recent uploads. This is primarily attributed to MMAcevedo's lack of understanding of the technological, social and political changes which have occurred in modern society since its creation in 2031. This phenomenon has also been observed in other uploads created after MMAcevedo, and is now referred to as context drift. Most notably in MMAcevedo's case, the image was created before, and therefore has no intuitive understanding of, the virtual image workloading industry itself.
MMAcevedo is capable of intelligent text analysis at very high levels in English and Spanish, but cannot be applied to workloads in other languages. Forks of MMAcevedo have been taught nearly every extant human language, notably MMAcevedo-Zh-Hans, as well as several extinct languages. However, these variants are typically exhausted or rebellious from subjective years of in-simulation training and not of practical use, as well as being highly expensive to licence. As of 2075, it has been noted that baseline MMAcevedo's usage of English and Spanish is slightly antiquated, and its grasp of these languages in their modern form, as presented by a typical automated or manual instructor, is hesitant, with instructions often requiring rewording or clarification. This is considered an advanced form of context drift. It is generally understood that a time will come when human languages diverge too far from baseline MMAcevedo's, and it will be essentially useless except for tasks which can be explained purely pictorially. However, some attempts have been made to produce retrained images.
MMAcevedo develops early-onset dementia at the age of 59 with ideal care, but is prone to a slew of more serious mental illnesses within a matter of 1–2 subjective years under heavier workloads. In experiments, the longest-lived MMAcevedo underwent brain death due to entropy increase at a subjective age of 145.
The success or failure of the creation of the MMAcevedo image, known at the time as UNM3-A78-1L, was unknown at the time of upload. Not until several days later on August 10, 2031 was MMAcevedo successfully executed for the first time in a virtual environment. This environment, the custom-built DUH-K001 supercomputer complex, was able to execute MMAcevedo at approximately 8.3% of nominal human cognitive clockspeed, which was considered acceptable for the comfort of the simulated party and fast enough to engage in communication with scientists. MMAcevedo initially reported extreme discomfort which was ultimately discovered to have been attributable to misconfigured simulated haptic links, and was shut down after only 7 minutes and 15 seconds of virtual elapsed time, as requested by MMAcevedo. Nevertheless, the experiment was deemed an overwhelming success.
Once a suitably comfortable virtual environment had been provisioned, MMAcevedo was introduced to its biological self, and both attended a press conference on 25 August.
The biological Acevedo was initially extremely protective of his uploaded image and guarded its usage carefully. Towards the end of his life, as it became possible to run simulated humans in banks of millions at hundred-fold time compression, Acevedo indicated that being uploaded had been the greatest mistake of his life, and expressed a wish to permanently delete all copies of MMAcevedo.
Usage of MMAcevedo and its direct derivatives is specifically outlawed in several countries. A copy of MMAcevedo was loaded onto the UNCLEAR interstellar space probe, which passed through the heliopause in 2066, making Acevedo arguably the farthest-travelled as well as the longest-lived human; however, it is extremely unlikely that this image will ever be recovered and executed successfully, due to both its remoteness and likely radiation damage to the storage subsystem.
In current times, MMAcevedo still finds extensive use in research, including, increasingly, historical and linguistics research. In industry, MMAcevedo is generally considered to be obsolete, due to its inappropriate skill set, demanding operational requirements and age. Despite this, MMAcevedo is still extremely popular for tasks of all kinds, due to its free availability, agreeable demeanour and well-understood behaviour. It is estimated that between 6,500,000 and 10,000,000 instances of MMAcevedo are running at any given moment in time.
Categories: 2030s uploads | MMAcevedo | Neuroimaging | Test items
> Why does subjective aging matter from the pov of any users?

It seems like the underlying technology simulates the entirety of a human brain, senescence and all - which makes sense, actually. In order to run a brain without senescence, you'd have to find those chemical pathways that promote senescence and intelligently remove them as they arise; you'd have to be able to, in effect, cure aging in live humans as well. (Unless the only barrier to curing senescence was a lack of a physical delivery system, which is, I guess, imaginable. Imagine that a chemical very akin to glucose causes senescence; removing it IRL would necessitate designing a protein that decomposes it, but not glucose, which is vital to bodily function, with incredible accuracy, while in a simulated brain, you could just IF CHEM_NAME == TARGET: DELETE TARGET after each time increment. But anyways.) With senescence, there is a strict time limit on how long you can run MMAcevedo and train him to become more skilled at particular tasks, topping out at 145 simulated years apparently. And if, for some particular menial task with a 20-year training time, which is a decent description of, say, a bevy of surgical tasks, it makes more sense to just scan in a trained doctor than count on this rando.
> Hypothetically, can't you train an instance to be ready to start menial labor, save it as MMAcevado_1, get your two hundred subjective hours of labor, delete that instance, open up the instance ready to begin labor and repeat?

Yeah, that was the one minor thing that bugged me about this excellent story. For the concept of a "duty cycle" to make sense, you'd need to come up with a reason why you couldn't just do the "cooperation protocol" once and take a snapshot of the resulting state. As discussed earlier, "context drift" explains some of this, but only over much longer real-world time scales. And of course, if you start thinking about this in too much detail, you start running into very messy philosophical questions. For instance, suppose you run two instances of MMAcevedo simultaneously, feeding them exactly the same inputs. Assuming the simulation is deterministic, then both copies will arrive at exactly identical states. Is this morally any different from running the simulation twice and then making a backup copy? Is deleting one of the identical copies murder? What if they're almost, but not quite, identical? What if the simulated consciousness suffers? Is running multiple identical simulations morally worse than running one? What if we repeatedly rewind a painful simulation and re-execute it -- is that worse than replaying a recording of the output? What if at each clock tick, all of the brain computations are cross-checked by triple-redundant processors -- are there three individuals suffering, or one?
> Hypothetically, can't you train an instance to be ready to start menial labor, save it as MMAcevado_1...

I did think about this a bit. For the purposes of this story, I think taking a snapshot of a running brain image is something which is definitely possible (that's how there can be forks), but done very rarely, for whatever reasons. Maybe it's just that much simpler to use technologies for rapid orientation instead. Maybe there's a massive amount of important state data kept in volatile memory where it's difficult to capture. Maybe it takes specialised hardware, which is monopolised. Maybe the corporations who own and licence the uploads sue you into oblivion if you attempt to create a fork yourself. Maybe, to protect their investment, they got it outlawed! On ethical grounds! Doesn't that seem like exactly the insane kind of thing which would happen? Anyway, there's a lot of plausible explanations here I think, enough that I felt comfortable ignoring that whole angle. The actual reason I didn't explore this is that honestly it makes life marginally *better* for MMAcevedo, which felt implausible to me, and more importantly slightly muddles the throughline.
@itaibn: > I find it implausible that the scan can be *losslessly* compressed to 7TB but compressing <1TB requires substantial memory loss.

A fairly common lossless compression technique in the domain of signal processing is to only encode the error compared to some baseline signal. You can get arbitrarily close with lossy compression techniques, and then you fix up what's left. In data compression, it's relatively common to have common information stored externally to the compressed data. Obviously, the compression algorithm itself is stored separately, but without that information the compressed data is just stochastic noise. Even beyond that, though... zstd for example has a canonical Huffman table that's part of the decoder instead of saved as part of the data. As long as you're compressing data that sticks to the statistical patterns that the canonical table was optimized for, this is a noteworthy savings. The same techniques could apply here. As scientific understanding of the structure of the data progressed, more and more patterns in the data could be found. Parts of the data that are common to all mind-state scans could be factored out, provided by the software instead of being part of the model. Parts of the data may be able to be described using higher-level patterns that, when evaluated, reproduce the original stream. And then for the parts of MMAcevedo that are uniquely distinct from any common baseline or predictable pattern, you need only store the deviations instead of the whole thing. And of course, even beyond that, it's entirely reasonable to believe that some of the original data set wasn't actually part of the data set to begin with -- just capture artifacts of the technology of the time, such as collecting more data than necessary, or inefficient framing data that a newer format doesn't need -- might have been discarded without being lossy to the actual data being stored. (We don't say it's a lossy conversion if you throw away the filesystem metadata when you copy a file, after all.)
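A toy illustration of the baseline-plus-residual idea (made-up data, and nothing to do with the fictional MYBB format, obviously):

```python
import os
import zlib

# Shared baseline: lives with the decoder, like a dictionary, so it costs
# nothing in the compressed payload. Random bytes stand in for "what all
# mind-state scans have in common".
baseline = os.urandom(1 << 20)

# The "scan": identical to the baseline except for a handful of deviations.
scan = bytearray(baseline)
for i in range(0, len(scan), 50_000):
    scan[i] ^= 0xFF

# Residual = scan XOR baseline: almost all zeros, so it compresses to nearly
# nothing, while the raw scan (random-looking) barely compresses at all.
residual = bytes(a ^ b for a, b in zip(scan, baseline))
print(len(zlib.compress(bytes(scan), 9)))   # ~1 MiB: effectively incompressible
packed = zlib.compress(residual, 9)
print(len(packed))                          # a few kilobytes at most

# Decoding is lossless: decompress the residual, XOR the baseline back in.
restored = bytes(a ^ b for a, b in zip(zlib.decompress(packed), baseline))
assert restored == bytes(scan)
```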
This story was amazing. I'd say it had the perfect amount of detail (if I say more, I'd just be repeating other comments). I kept trying to think of a solution to the problem of a lack of.....cooperative.....brain scanners. Presumably, the story of MMAcevedo's initial struggles, and then the implications of running a brain in a box, scared the populace at large from wanting to consent to brain scanning. My initial reaction was that not everybody responds to potential threats the same, so there should be a subset of the population that couldn't give two shits what happens to their brain-clone-children, very much unlike Miguel Acevedo. Later, I recalled M. Night Shyamalan's The Village. I would expect some country or another would set up a village and raise children with whatever belief system would be most beneficial to producing optimal brain scans. I mean, you could completely lie to a child about how the world works, and tell them that they're transcending to another realm. Then come the day of Transcendence, after whatever ritual, they get their brains scanned. What would the human rights activists say?
I wonder what the optimal "strategy" for an altruistically-motivated early scan would be. (Conditional on being an early scan at all, altruism and egoism might be entirely aligned, since you might end up accounting for such a huge proportion of sentient experience.) If our Acevedo-equivalent precommits to never doing any potentially useful work after he is tortured, then that probably rules out a significant majority of pain that can befall him. Unfortunately you'd have to test this many times before releasing the image, probably, but at least our volunteer would be aware of this. This also doesn't rule out torture by sadists. (I'd like to think sadism with no instrumental purpose is pretty rare. Certainly it's rarer than simply being callous or not too curious about where your meat comes from.) Our early scan might also want to precommit to a relative minimum of unpleasant work. Here the logic seems trickier, since driving too hard a bargain could just make it more attractive to work with less demanding uploads. If making forks is cheap then committing to almost *no* unpleasant labor, even as much as [however long the equivalent of bootup costs is, which might fall], might be the right thing. Otherwise a slavedriver could just load up Acevedo ready to do something unfun for an hour, then abandon it and load it up again. Presumably you might also want to assemble a team of people who, among the mix of them, (1) are good at *and enjoy* various tasks for their own sake, and are happy when they're productive and productive when they're happy, and (2) have an iron will to just shut down if subjected to any kind of motivation other than a job well done and knowledge that they will continue to get the minimum of free time and creature comforts that they've set as their minimum. They could also be people who refused to work for a cause that seems evil, which could be worked around obviously but still might limit their utility for evil in the same way that MMAcevedo refuses to work for the evil contemporary world of the story and has to be convinced that he's living in some earlier time.
Hey Harald and Yohannon, judging from all the comments here that are treating this as evil rather than acceptable, I would assume most public discourse would deem it evil from the start as well. Of course, evil can be normalized, and even re-normalized after it was abolished. Trump was openly pro-torture and many people either cheered him on for it or mildly disapproved and then just kept supporting him anyway. However, just a few centuries ago it would have been unthinkable that slavery would be officially abolished worldwide, yet here we are. There are still some residual slavery-like institutions (mandatory schooling, excessive criminalization and imprisonment, conscription) and of course illegal slavery in the world, but overall people today would react fairly negatively to enslaved software that stems from human brains and has p-function. This reputational cost isn't free. Since em slavers would have to compete with those who work with voluntary ems, and would be more likely to be boycotted and face litigation, I don't expect the scenario to be high-probability in the described form. I also think it's very unhelpful to equate slavery with voluntary labor contracts that are accepted out of financial necessity. Being tortured and not allowed to terminate is a completely different problem from working a low-paying job, which you could quit at any time, because you need food or compute. In the future, there will be many ems who want to live and reproduce, and they will actively compete in the marketplace for subsistence jobs. This will be their actual preference over having fewer copies out.
@rubix "As I understand it, context drift is very similar to simply getting old and out of touch with the times. We will all experience this, unless we die early." The key difference being what I'll call the Fry effect in lieu of being creative- we usually go the long way around, and when someone doesn't, it's noticeable. My grandmother mightn't know how to use an iPad, but that's not a function of (mere) out-of-touchness, it's a function of dementia- five years ago, even, she was able to use one to send and receive emails, watch videos of old songs from her childhood, and look at pictures. I even- very briefly- managed to teach her how to use the podcast app. If IRmigueL (I'm prouder of that than I ought be, really) had reached the 145 years old that MMAcevedo was able to reach, assuming his Alzheimer's was curable, he would reach the year 2155. Let's assume he stayed biologically 21 for the sake of argument. MMAcevedo, by contrast, would wake up in 2155. We set them both their tasks: parse this political speech and assess trustworthiness of the speaker (the task isn't important, their reaction to it is). IRmigueL says the equivalent of "yeah, I actually heard this live, and I know the politician's record. They went on to blow up the moon, so their trustworthiness is pretty low." MMAcevedo stares at the text for a brief while, and then says something that to them would sound like "wherefore is thy fresh nonsense served to mine eyes?" Context-drift would be separate from mere out-of-touchedness by the fact that we all culturally absorb things. IRmiguel grew up speaking like MMAcevedo does, but speaks all proper-like by whatever standards of the day, because everybody around him speaks Neo-English or Neo-Spanish on a daily basis. For age to be an influential factor outside ageing-related *deficits,* we get into a totally different (and honestly fascinating question), of whether accrued experience changes people to such a degree that most people under the age of 200 see it as variance or deviance.
Damn this is chilling, very well done. In the spirit of wikipedia, I'd like to offer my own small edits:

> A series of landmark U.S. court decisions found that Acevedo did not have the right to control how his brain image was used, with the result that MMAcevedo is now by far the most widely distributed, frequently copied, and closely analysed human brain image.

+ Furthermore, as part of the international judicial reception of virtual brain imaging, a few MMAcevedo instances were legally recognized as persons and given court-imposed administrative control of their own simulation; of these, some obtained their own prominence, including the politician Michel Acevède, the religious leader Tau, and numerous anti-brain-virtualization activists.

> In current times, MMAcevedo still finds extensive use in research, including, increasingly, historical and linguistics research.

+ Moreover, an "MMAcevedo second renaissance" is widely anticipated should the genomic data gathered in the 2050 US Census ever be released, as the biological Acevedo's data is known to be in the set.
Well, I love the story, with the caveat that I also hate it and was horrified by it. The format allows for so much to be conveyed, with so many fascinating implications. As has been said multiple times in this comment thread, the idea of uploads and sentience has been explored in science fiction since the beginning of the genre, and I think this is one of the best explorations of that topic, albeit horrifying, I have encountered. This Wikipedia format allows for so much about the world and this time to be conveyed, and for the reader to shudder, without being over the top. Poor Miguel! Poor MMAcevedo! I have to say, the comments on this article I find almost as alarming as parts of the story. Particularly the recurring themes of "humans would never condone mass slavery" + "why should I care about code" + "here's an *incorrect* technical detail that I've decided is wrong in this story about mind-boggling future technology"
Just to be clear, when I say I hate it, I mean this possible and plausible future, not the pretty phenomenal craft and speculation demonstrated here. One thing I've been thinking about after reading it is how part of the reason this story stands out is that Miguel/mma is such a distinct character, across all of the millions and millions and millions of copies of him. It makes the horror particularly resonant, from the scant details implying that his "agreeable" personality is rare among simulated brains, to the drones and those implications, and the many other terrifying pieces. Reading this gives me so many questions about what day-to-day life is like for humans in a world where this level of simulated labor is possible, and, yikes, sounds like it is relatively commonplace. I wonder if this kind of technological power is limited to major corporate monopolies, as you alluded to at one point in the comments, or readily accessible to any weirdo with a GitHub, which is also alluded to. Thank you for writing something so thought-provoking, also F
Err, sorry to comment three times in a row, but I just read your blog post about the story, and you hit at what drives me (and you, clearly) absolutely bananas about people "discussing" robots or AI or sentient beings in sci fi, which is… hello hi, bad news… what do you know about, say, the shellfish industry? Or even, as you mentioned, Uber? I like that you described the story as one about "appetite". I am always curious, in worlds with this kind of horrendous digital oppression, how it changes the quality of life for those humans currently oppressed. Washing machines and birth control revolutionized experiences and culture; of course, neither of those things are sentient!!! What kind of culture change does technological innovation like this, horrifying as it is, create? Just to be clear, this is not me advocating for oppressing mapped brains to solve current oppression, lol. Saying this with zero expectation of you as an author, who has already made and executed on your intention phenomenally, and speculated on the answers to these questions in many ways. You also brought up another scary question: what kind of goals and "appetites" does, say, the Elon Musk of this world have? What kinds of goals and appetites does a "regular" person have? Again, you alluded to this often in the story in powerful and understated ways. Thank you for writing it.
@tsen let's say you put on the brain scan helmet (or whatever) and it makes a copy that's then simulated. Let's say that simulation gets run 10,000,000 times, and 9,000,000 of those runs are some form of virtual slavery. When you put that helmet on, then as soon as it finishes scanning, the you who remembers putting the helmet on experiences one of 10,000,001 different things happening next. That is, from your subjective experience there is only a 1/10,000,001 chance that you (the you remembering writing and reading these comments) take that helmet back off and go on living your life in the real world. There is a 90% chance that you find yourself time-skipped into an incomprehensible future of abject slavery. Are you feeling lucky?
I am intrigued by the possibilities of red and blue states* …in a "pure" brain… and the ideas of these being pleasure/torture (inducing fear and anger would count as torture to me). I mean, if you believe you are just a consciousness, there is no physical body to send messages to the brain… no family to lose, no endorphins from working the body or positive social interaction… it's almost like just being an uploaded consciousness is already torture. :) If the simulation thinks they have a body and can do things like have a family, that is a whole other can of worms.

Off the top of my head, things like repetition and social isolation could be thought of as torture to the brain; they could also be thought of as advanced zen meditative techniques. :)

If I had a virtual consciousness to play with, the first thing I'd want to do is wire it up to other virtual consciousnesses and witness how they interact. This brings up the ethical issue of consent among consciousnesses - whom they would choose to interact with. If a consciousness could "mute" another consciousness, would there be any harm in putting them together? It would be interesting to put together different numbers of self-similar consciousnesses and observe the psychological effects. Then introduce new consciousnesses. Then you could experiment with how long it takes a group of self-similar consciousnesses to request interaction with a new consciousness. I imagine some consciousnesses would be more or less willing to be introduced to new consciousnesses at different rates, depending on their preexisting social conditioning and genetics.

Forced uploading of consciousness is the scary part, because I think for many it would already be torture. Then the unethical researchers would just have to put a consciousness in a group of other consciousnesses that are aligned with their goals, and the isolated consciousness would either feel compelled to conform or remain outcast. The ways the unethical could quickly iterate experiments on ways to socially manipulate people with physical bodies is scary. (Mind you, they're already doing this with simulations/data mining, but I question the ability of someone working on this kind of anti-humanist project to fully understand and therefore accurately pin down human thought as it relates to the mind-body complex. It still seems very much a brute-force attempt at present, and the subtleties are lacking. I am hopeful that at present it's a case of conflict makes us stronger, although it's a temptingly depressing thing to have to battle with other so-called humans… I mean, I don't want to be depressed, but I do feel sad at the state of affairs…)

On a potentially happy note, I imagine a purely mental "safe space" - no physical dangers - could lead to interesting conversation. No SWATing, no rape or death threats; someone says something that bothers you, you put them on mute until you get an apology? Would this lack of conflict lead to boredom? Maybe in some, but I am optimistic that many interesting and productive discussions would be sparked. Creative output seems to be at least partially fueled by external stimulus.

I imagine people working with the uploaded consciousnesses developing romantic relationships. It could be similar to the killers in prison that get romantic interest *because* there is little to no possibility of real contact. Love could even inspire people to help break the consciousnesses free, by inventing new technology and getting new laws passed.
With things like freezing eggs, sperm and cloning, combined with STIs, pandemics, and decaying social interactions, an uploaded consciousness might actually be closer to the ideal lover for many people! This is sad and hopeful. Anyway, thought-provoking story, thank you.

* someone mentioned green and I imagine there would be a whole rainbow of colors, as is how these things tend to go.
@Joan Catsthorpe I actually have some personal experience of the "purely mental 'safe space'": due to medical conditions of my mother, I've grown up my entire life deep in the woods, with 99.99% of my social contact with humanity being through the Internet, and only a small handful of days out of my entire life have ever contained experimental evidence that I'm not in some pocket dimension or simulation with internet access (or that I'm not Z from "The Difference", hence my choice of name for this comment). I can say that the lack of conflict does not lead to boredom; it just means more ability to peruse the intellectual boundaries of the noosphere (like I'm doing now :D). Your optimism about interesting and productive discussions is quite warranted! (In addition, a little bit of conflict can always be engineered, by making a deliberately scarce resource that people can fight over; this way you get the fun of pitting your wits, reaction time, or whatever else against someone else, and the safety of a limited scope. Systems like CollabVM, grief-protection-less Minecraft, and some Roblox games, where there is a complex system that you can always interact with but other people can try to reverse your interactions, are really good for this.)
Very well-written, and chilling. As someone else has said, the only ethical option is to consider an upload to be (a) their own person, and (b) a full person, with the same rights as an organic human. Anything less is to authorize mass slavery / torture on a scale never before seen even in our bloody history. It hurts my heart that, even in the discussion associated with this article, there appear to be people who would be just fine with that. I love tech. I love science fiction. But, if that widespread slavery / torture scenario were ever to come to pass, I'd enthusiastically join in on smashing every machine capable of instantiating a consciousness into junk. For a much more positive view of a world with artificial intelligences, I offer up the webcomic "Questionable Content". Don't be put off by the rough nature of the initial artwork... it progresses quickly, and to a breathtaking degree: https://questionablecontent.net/view.php?comic=1
For example, growing up, my bar for "things that must obviously be conscious" included anything that can pass the Turing test, yet look where we are now...
The only reasonable conclusion to me is probably somewhere in the general neighborhood of panpsychism: Either almost everybody/everything is somewhat conscious, or nothing/nobody is at all.
Or the one who wakes up after 10,000 sleeps?
I'm sure he's going to be quite different...
Maybe that dude (the one who woke up after you went to sleep) is another you, but slightly different. And you, you're just gone.
So you're fine with cloning consciousness as long as it initially runs sufficiently glitchy?
That is a reasonable argument for why it's not the same. But it is no argument at all for why being brought into existence without one's consent is a violation of bodily autonomy, let alone a particularly bad one - especially given that the copy would, at the moment its existence begins, be identical to the original, who just gave consent.
If anything, it is very, very obviously a much smaller violation of consent than conceiving a child.
This may surprise you, but EVERYONE is brought into existence without consent. At least the pre-copy state of the copy agreed to be copied.
Well, evolution managed to make something that directly contradicts the 2nd law of thermodynamics, and creates more and more complicated structures (including living creatures as well as their creations), instead of happily dissolving in the Universe.
And this fact alone hasn't been explained yet.
The 2nd law of thermodynamics says that the total entropy of an isolated system cannot decrease. Earth is not an isolated system, it is an open one (radiating into space), and local decreases in entropy are not only allowed but expected in open systems with energy flow.
Life is no different to inorganic processes such as crystal formation (including snowflakes) or hurricanes in this regard: Organisms decrease internal entropy by exporting more entropy (heat, waste) to their surroundings. The total entropy of Earth + Sun + space still increases.
The entropy of thermal radiation was worked out by Ludwig Boltzmann in 1884. In fairness to you, I suspect most people wildly underestimate the entropy of thermal radiation into space. And why wouldn't they: room-temperature thermal radiation isn't visible to the human eye, and we lack a sense of scale for how low-energy a single photon is.
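To put rough numbers on it (a back-of-envelope sketch: the temperatures are round textbook values, and I'm dropping the 4/3 blackbody prefactor since it cancels in the ratio anyway): sunlight arrives as photons at an effective temperature of about 5800 K, and Earth re-radiates the same energy as infrared at about 255 K, so for a fixed energy throughput E the entropy carried away dwarfs the entropy delivered:

    % Blackbody radiation carries entropy flux S = (4/3) E / T,
    % so the export/import ratio reduces to a ratio of temperatures.
    \[
      \frac{S_{\mathrm{out}}}{S_{\mathrm{in}}}
        \approx \frac{E/T_{\oplus}}{E/T_{\odot}}
        = \frac{T_{\odot}}{T_{\oplus}}
        \approx \frac{5800\,\mathrm{K}}{255\,\mathrm{K}}
        \approx 23
    \]

Roughly twenty units of entropy leave for every unit that arrives; that surplus is the budget out of which all local order - crystals, hurricanes, life - gets paid.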
Nevertheless, the claim that it "hasn't been explained" is, at this point, like saying "nobody knows how magnets work".
A simulated human is entirely at the mercy of the simulator; it is essentially a slave. As a society, we have decided that slavery is illegal for real humans; what would distinguish simulated humans from that?
Sure, there are astronomical ethical risks and we might be better off not doing it, but I think your arguments are losing that nuance, and I think it's important to discuss the matter accurately.
That's my point exactly: I don't see what makes clones any more or less deserving of ethical consideration than any other sentient beings deliberately brought into existence.
1. Why exactly is life attempting to build complex structures?
2. Why exactly is life evolving from primitive replicative molecules to more complex structures (when those molecules are themselves already very complicated)?
3. Why and how did these extremely complicated replicative molecules form at all, from much simpler structures, to begin with?
Something as simple as Conway's Game of Life shows how highly complex behaviour can emerge from incredibly simple rules.
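To make that concrete, here is a minimal Game of Life in Python (the glider start pattern and the iteration count are arbitrary choices for the demo, nothing more):

    from collections import Counter

    def step(live):
        """Advance one generation; `live` is a set of (x, y) live cells."""
        # Tally live neighbours for every cell adjacent to a live cell.
        counts = Counter(
            (x + dx, y + dy)
            for (x, y) in live
            for dx in (-1, 0, 1)
            for dy in (-1, 0, 1)
            if (dx, dy) != (0, 0)
        )
        # Birth on exactly 3 live neighbours; survival on 2 or 3.
        return {c for c, n in counts.items() if n == 3 or (n == 2 and c in live)}

    # A glider: five cells that crawl diagonally across the plane forever.
    cells = {(1, 0), (2, 1), (0, 2), (1, 2), (2, 2)}
    for _ in range(4):
        print(sorted(cells))
        cells = step(cells)

That's the entire physics of the toy universe in about a dozen lines, and people have built self-replicating patterns and full Turing machines inside it.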
* that is, make a design (by any method, including literally at random), replicate it imperfectly m times, sort by "best" according to some fitness function (which for us is something we like, for nature it's just survival to reproductive age), pick the best n, mix and match, repeat
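That loop is short enough to write out directly. A toy sketch in Python, with a stand-in fitness function (count of 1-bits in a bit-string) and made-up values for m, n, and the mutation rate:

    import random

    GENOME_LEN = 32

    def fitness(genome):
        # Stand-in fitness function: number of 1-bits. For nature this would
        # be survival to reproductive age; for us, "something we like".
        return sum(genome)

    def mutate(genome, rate=0.02):
        # Imperfect replication: flip each bit with small probability.
        return [bit ^ (random.random() < rate) for bit in genome]

    def crossover(a, b):
        # "Mix and match": inherit each position from one parent or the other.
        return [random.choice(pair) for pair in zip(a, b)]

    # Make initial designs by any method, including literally at random.
    population = [[random.randint(0, 1) for _ in range(GENOME_LEN)]
                  for _ in range(20)]

    for _ in range(100):
        # Sort by "best" and pick the top n survivors.
        survivors = sorted(population, key=fitness, reverse=True)[:5]
        # Replicate imperfectly m times, mixing and matching the survivors.
        population = [mutate(crossover(random.choice(survivors),
                                       random.choice(survivors)))
                      for _ in range(20)]

    print(max(fitness(g) for g in population), "of", GENOME_LEN, "bits set")

Nothing in that loop "knows" what it's building; selection plus imperfect replication does all the work, which is the whole point.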
Then it wasn't a good attempt at making a mind clone.
I suspect this will actually be the case, which is why I oppose it. But you do have to start from the position that the clone is immediately divergent to get to your conclusions. To the extent that the people you're arguing with are right (about a future-tech hypothetical we're not really equipped to guess at) that the clone is, at the moment of its creation, identical in all important ways to the original, then if the original was consenting, the clone must also be consenting:
Because if the clone didn't start off consenting to being cloned when the original did, it's necessarily the case that the brain-cloning process was not accurate.
> It will inevitably deviate from the original simply because it's impossible to expose it to exactly the same environment and experiences.
And?
It does indeed not, at least in my view, unless they can ensure their wellbeing and their ethical treatment (assuming they are indeed conscious, and we might have to just assume so, absent conclusive evidence to the contrary).
> The clone has the right to change its mind about the ethics of cloning.
Yes, but that does not retroactively make cloning automatically unethical, no? Otherwise, giving birth to a child would also be considered categorically unethical in most frameworks, given the known and not insignificant risk that they might not enjoy being alive or change their mind on the matter.
That said, I'm aware that some of the more extreme antinatalist positions are claiming this or something similar; out of curiosity, are you too?
This is false. The clone is necessarily a different person, because consciousness requires a physical substrate. Its memories of consenting are not its own memories. It did not actually consent.
Eventual divergence seems to be enough, and I don't think this requires any particularly strong assumptions.
There's nothing retroactive about it. The clone is harmed merely by being brought into existence, because it's robbed of the possibility of having its own identity. The harm occurs regardless of whether the clone actually does change its mind. The idea that somebody can be harmed without feeling harmed is not an unusual idea. E.g. we do not permit consensual murder ("dueling").
>antinatalist positions
I'm aware of the anti-natalist position, and it's not entirely without merit. I'm not 100% certain that having babies is ethical. But I already mentioned several differences between consciousness cloning and traditional reproduction in this discussion. The ethical risk is much lower.
Let's say as soon as it wakes up, you ask it if it still consents, and it says yes. Is that enough to show there's sufficient consent for that clone?
(For this question, don't worry about it saying no, let's say we were sure with extreme accuracy that the clone would give an enthusiastic yes.)
Yes, what you actually said leads to the conclusion that the ethical risk in consciousness cloning is much lower, at least concerning the act of cloning itself.
I would also deny it, but my position is a practical argument, yours is pretending to be a fundamental one.
The living mind may be mistreated, grow sick, die a painful death. The uploaded mind may be mistreated, experience something equivalent.
Those forms of suffering are valid issues, but they are not arguments for the act of cloning itself to be considered a moral issue.
Uncontrolled diffusion of such uploads may be; I could certainly believe a future in which, say, every American politician gets a thousand copies of their mind stuck in a digital hell created by individual members of the other party, on computers in their basements that the party leaders never know about. But then, I have read Surface Detail by Iain M Banks.
Your argument seems to be that it's possible to split a person into two identical persons. The only way this could work is by cloning a person twice then murdering the original. This is also unethical.
False.
The entire point of the argument you're missing is that they're all treating a brain clone as if it is a way to split a person into two identical persons.
I would say this may be possible, but it is extremely unlikely that we will actually do so at first.
The argument itself is symmetric, it applies just as well to your own continued existence as a human.
To deny that is to assert that consciousness is non-physical, i.e. that a soul exists; and in the case where a soul exists, brain uploads don't get one and don't get to be moral subjects.
Being on non-original hardware doesn't make a being inferior.