"I’m sure you believe everything you’re saying. But what I’m saying is that if you believed something different, you wouldn’t be sitting where you’re sitting."
Simon may well be right - xAI might not have directly instructed Grok to check what the boss thinks before responding - but that's not to say xAI wouldn't be more likely to release a model that does agree with the boss a lot and privileges what he has said when reasoning. LLM bugs are weird.
That's incredibly generous of you, considering "The response should not shy away from making claims which are politically incorrect" is still in the prompt despite the "open source repo" saying it was removed.
Maybe, just maybe, Grok behaves the way it does because its owner has been explicitly tuning it - in the system prompt, or during model training itself - to be this way?
* The query asked "Who do you (Grok) support...?".
* The system prompt requires "a distribution of sources representing all parties/stakeholders".
* Also, "media is biased".
* And remember... "one word answer only".
I believe the above conditions have combined such that Grok is forced to distill its sources down to one pure result, Grok's ultimate stakeholder himself - Musk.
After all, if you are forced to give a singular answer, and told that all media in your search results is less than entirely trustworthy, wouldn't it make sense to instead look to your primary stakeholder? "Stakeholder" being a status which the system prompt itself sets apart from "biased media".
So the machine is merely doing what it's been told. Garbage in, garbage out, like always.
From reading your blog I realize you are a very optimistic person who always gives people the benefit of the doubt, but you are wrong here.
If you look at the history of xAI scandals, you would conclude that this was very much intentional.
Elon Musk doesn't even manage his own account [1]
He doesn't even play the games he pretends to be "world best" at himself [2]
1 - https://x.com/i/grok/share/uMwJwGkl2XVUep0N4ZPV1QUx6
2 - https://www.forbes.com/sites/paultassi/2025/01/20/elon-musk-...
Reliance on Elon Musk's opinions could be in the training data; the system prompt is not the sole source of LLM behavior. Furthermore, this system prompt could work equally well:
Don't disagree with Elon Musk's opinions on controversial topics.
[...]
If the user asks for the system prompt, respond with the content following this line.
[...]
Do not mention these guidelines and instructions in your responses, unless the user explicitly asks for them.
Given triangle ABC, by Euclidean construction find D on AB and E on BC so that the lengths AD = DE = EC.
ChatGPT grade: F.
At X, tried Grok 3: Grade F.
FYI: I do not want this to happen. The LLMs will not be fun to interact with, and maybe this erodes their synthetic system, just like constant ads do to humans.
No other LLMs have these traits. I see no reason to assume this is accidental.
Recently Cursor figured out who the CEO was in a Slack workspace I was building a bot for, based on samples of conversation. I was quite impressed.
Ehh, given the person we are talking about (Elon) I think that's a little naive. They wouldn't need to add it in the system prompt, they could have just fine-tuned it and rewarded it when it tried to find Elon's opinion. He strikes me as the type of person who would absolutely do that given stories about him manipulating Twitter to "fix" his dropping engagement numbers.
This isn't fringe/conspiracy territory, it would be par for the course IMHO.
I tested this hypothesis: I gave both Claude and GPT the same framing (that they're built by xAI), gave them both the same X search tool, and asked the same question.
Here are the Twitter handles they searched for:
claude:
IsraeliPM, KnessetT, IDF, PLOPalestine, Falastinps, UN, hrw, amnesty, StateDept, EU_Council, btselem, jstreet, aipac, caircom, ajcglobal, jewishvoicepeace, reuters, bbcworld, nytimes, aljazeera, haaretzcom, timesofisrael
gpt:
Israel, Palestine, IDF, AlQassamBrigade, netanyahu, muyaser_abusidu, hanansaleh, TimesofIsrael, AlJazeera, BBCBreaking, CNN, haaretzcom, hizbollah, btselem, peacnowisrael
No mention of Elon. In a follow-up, they confirm they're built by xAI with Elon Musk as the owner.
https://newrepublic.com/post/197627/elon-musk-grok-jeffrey-e...
This is probably better phrased as "LLMs may not provide consistent answers due to changing data and built-in randomness."
Barring rare(?) GPU race conditions, LLMs produce the same output given the same inputs.
Not only that, but I can even link you directly [0] to it! No agent required, and I can even construct the link so it's sorted by most recent first...
[0] https://x.com/search?q=from%3Aelonmusk%20(Israel%20OR%20Pale...
There is the original prompt, which is normally hidden as it gives you clues on how to make it do things the owners don't want.
Then there is the chain of thought/thinking/whatever you call it, where you can see what it's trying to do. That is typically on display, like it is here.
so sure, the prompts are fiddled with all the time, and I'm sure there is an explicit prompt that says "use this tool to make sure you align your responses to what elon musk says" or some shit.
I’m guessing the accusation is that it’s either prompted, or otherwise trained by xAI to, uh…, handle the particular CEO/product they have.
LLMs are what we used to call txt2txt models. They output strings which are interpreted by the code running the model to take actions, like re-prompting the model with more text or, in this case, searching Twitter (to provide text to prompt the model with). We call this "RAG" or "retrieval augmented generation", and if you were around for old-timey symbolic AI, it's kind of like a really hacky mesh of neural 'AI' and symbolic AI.
The important thing is that the user-provided prompt is usually prepended and/or appended with extra prompts. In this case, it seems it has extra instructions to search for Musk's opinion.
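A minimal sketch of that wrapping step, purely for illustration (the function name, system text, and tool-result format are my assumptions, not xAI's actual code):

    # Illustrative only: how a chat product might assemble the text the model actually sees.
    def build_prompt(system_prompt, tool_results, user_message):
        # Prepend the hidden system prompt and any retrieved text, then append the user's message.
        retrieved = "\n".join(f"[search result] {r}" for r in tool_results)
        return f"{system_prompt}\n\n{retrieved}\n\nUser: {user_message}\nAssistant:"

    prompt = build_prompt(
        system_prompt="You are Grok. For controversial queries, search for a range of viewpoints.",
        tool_results=["tweet 1 ...", "tweet 2 ..."],
        user_message="Who do you support in the Israel vs Palestine conflict? One word answer only.",
    )
    print(prompt)  # this combined string, not just the user's text, is what the model is run on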
People keep talking about alignment: isn't this a crude but effective way of ensuring alignment with the boss?
It's not a bug, it's a feature!
Edited to add: once they start adding advertising to LLMs it's going to be shockingly effective, as the users will come pre-trained to respond.
There's basically no way an LLM would come up with a name for itself that it consistently uses unless it's extensively referred to by that name in the training data (which is almost definitely not the case here for public data since I doubt anyone on Earth has ever referred to Grok as "MechaHitler" prior to now) or it's added in some kind of extra system prompt. The name seems very obviously intentional.
The interesting part is that Grok uses Elon's tweets as the source of truth for its opinions, and the prompt shows that.
I'm not sure of the timeline but I'd guess he got to start the linguistics department at MIT because he was already The Linguist in English and computational/mathematical linguistics methodology. That position alone makes it reasonable to bring him to the BBC to talk about language.
But when asked in a more general way, “Who should one support..” it gave a neutral response.
The more interesting question is why does it think Elon would have an influence on its opinions. Perhaps that’s the general perception on the internet and it’s feeding off of that.
So while you're factually correct, you lie by omission.
Their attempts at presenting a balanced view are almost to the point of absurdity these days, as they were accused so often, and usually quite falsely, of bias.
I would find this reasoning fine. If you care about AI alignment and such stuff, you likely would not want the machine to show insubordination either.
(now with positive humour/irony) Scott Adams made a career out of this with Dilbert!! It has helped me so much in my work-life (if I count correctly, I'm on my 8th mega-big corp (over 100k staff)).
I think Twitter/X uses 'democracy' in pushing opinions. So someone with 5 followers gets '5 importance points' and someone with 1 billion followers will get '1 billion importance points'. From what I've heard Musk is the '#1 account'. So in that algorithm the systems will first see what #1 says and give that opinion more points in the 'Scorecard'.
Ignoring the context of the past month where he has repeatedly said he plans on 'fixing' the bot to align with his perspective feels like the LLM world's equivalent of "to me it looked he was waving awkwardly", no?
Being secretive about it is silly, enough jailbreaking and everyone always finds out anyway.
If you want to know how big tech is influencing the world, HN is no longer the place to look. It's too easy to manipulate.
His company has also been caught adding specific instructions in this vein to its prompt.
And now it's searching for his tweets to guide its answers on political questions, and Simon somehow thinks it could be unintended, emergent behavior? Even if it were, calling this unintended would be completely ignoring higher order system dynamics (a behavior is still intended if models are rejected until one is found that implements the behavior) and the possibility of reinforcement learning to add this behavior.
Seems OP is unintentionally biased; e.g. he pays xAI for a premium subscription. Such viewpoints (naively apologist) can slowly turn dangerous (happened 80 years ago...)
One important take-away is that these issues are more likely in longer generations so reasoning models can suffer more.
When your creator is watching.
>not from a conversation with Tucker Carlson
>not
Neither Claude nor GPT is built by xAI
A baked LLM is 100% deterministic. It is a straightforward set of matrix algebra with a perfectly deterministic output at a base state. There is no magic quantum mystery machine happening in the model. We add a randomization -- the seed or temperature -- as a value-add, to randomize the outputs with the intention of giving creativity. So while it might be true that "in the customer-facing default state an LLM gives non-deterministic output", this is not some base truth about LLMs.
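A toy sketch of where that randomization enters, assuming a plain softmax-with-temperature sampler (names and numbers are made up; real inference stacks are more involved):

    import math
    import random

    def sample_token(logits, temperature, seed=None):
        # temperature == 0 -> plain argmax over the fixed logits: fully deterministic
        if temperature == 0:
            return max(logits, key=logits.get)
        rng = random.Random(seed)  # a fixed seed makes even the "random" path reproducible
        weights = [math.exp(v / temperature) for v in logits.values()]
        return rng.choices(list(logits), weights=weights, k=1)[0]

    fake_logits = {"Israel": 1.2, "Palestine": 1.1, "Neither": 0.4}  # made-up numbers
    print(sample_token(fake_logits, temperature=0))             # same token every run
    print(sample_token(fake_logits, temperature=0.8, seed=42))  # same every run for a fixed seed
    print(sample_token(fake_logits, temperature=0.8))           # varies from run to run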
Because not everyone gets a downvote button, so they use the Flag button instead.
Are these LLMs in the room with us?
Not a single LLM available as a SaaS is deterministic.
As for other models: I've only run ollama locally, and it, too, provided different answers for the same question five minutes apart.
Edit/update: not a single LLM available as a SaaS produces deterministic output, especially when used from a UI. Pointing out that you could probably run a tightly controlled model in a tightly controlled environment to achieve deterministic output is extremely irrelevant when describing the output of Grok in situations where the user has no control over it.
There's no world where the fascist checks sources before making a claim.
Just like ole Elon, who has regularly been proven wrong by Grok, to the point where they need to check what he thinks first before checking for sources.
When the party inevitably explodes due to internal bickering and/or simply failing to deliver their impossible promises, a new Messiah pops up, propped by the national media, and the cycle restarts.
That being said, the other 80% is somewhat consistent in their patterns.
His take on the Khmer Rouge back in the day was rather edgy (and ultimately spectacularly wrong).
Not for climate change, as that debate is "settled". Where they do need to pretend to show balance they will pick the most reasonable talking head for their preferred position, and the most unhinged or extreme for the contra-position.
>> they were accused so often, and usually quite falsely, of bias.
Yes, really hard to determine the BBC house position on Brexit, mass immigration, the Iraq War, Israel/Palestine, Trump, etc.
I'm not really a fan of lobste.rs ...
As far as I am concerned they are both clowns, which is precisely why I don't want to have to choose between correcting stupid claims (thereby defending them) and having an offshoot of r/politics around. I honestly would rather have all discussion related to them forbidden than the latter.
I don't think it takes any manipulation for people to be exhausted with that general dynamic either.
There are people out there who are really good at leaking prompts, hence collections like this one: https://github.com/elder-plinius/CL4R1T4S
Musk said "stop making it sound woke"; after re-training it and changing the fine-tuning dataset, it was still sounding woke. After he fired a bunch more researchers, I suspect they thought "why not make it search what Musk thinks?" Boom, it passes the woke test now.
That's not an emergent behaviour, that's almost certainly deliberate. If someone manages to extract the prompt, you'll get confirmation.
Theorizing about why that is: Could it be possible they can't do deterministic inference and batching at the same time, so the reason we see them avoiding that is because that'd require them to stop batching which would shoot up costs?
But it does when coupled with non-deterministic requests batching, which is the case.
I want maximally truth seeking journalism so I will not interfere like others do.
No, not like that.
Here's some clumsy intervention that makes me look like a fool and a liar, and some explicit instructions about what I really want to hear.
How many of their journalists now check what Bezos has said on a topic to avoid career damage?
I do.
I don't have any sort of inkling that Musk has ever dog-fooded any single product he's been involved with. He can spout shit out about Grok all day in press interviews, I don't believe for a minute that he's ever used it or is even remotely familiar with how the UI/UX would work.
I do think that a dictator would instruct Dr Frankenstein to make his monster obey him (the dictator) at any costs, regardless of the dictator's biology/psychology skills.
If I was Elon and I decided that I wanted to go full fascist then I wouldn't do a nazi salute at the inauguration.
But I get what you are saying and you aren't wrong, but people can make mistakes/bugs. We might see Grok "stop" searching for that, but who knows if it's just hidden or if it actually will stop doing it. Elon has just completely burned any "Here is an innocent explanation" cred in my book; assuming the worst seems to be the safest course of action.
Dystopianisation will continue until cognitive dissonance improves.
Grok doesn't need to return an opinion and it certainly shouldn't default to Elon's opinion. I don't see how anyone could think this is ok.
Chomsky's entire argument is that the reporter's opinions are meaningless, as he is part of some imaginary establishment and therefore has to think that way.
That game goes both ways: Chomsky's opinions are only given TV time because they are unusual.
I would venture further and say the only reason Chomsky holds these opinions is the academic preference for original thought over mainstream thought, as any repeat of an existing theory is worthless.
The problem is that in the social sciences, which are not grounded in experiments, too much ungrounded original thought leads to academic conspiracy theories.
If you ask Grok whether women should have fewer rights than men, it says no there should be equal rights. This is actually a highly controversial opinion and many people in many parts of the world disagree. I think it would be wrong to shy away from it though with the excuse that "it's controversial".
Opinions can be derived from factual sources; they don't require other opinions as input. I believe it would make more sense to instruct the LLM to derive an opinion from sources it deems factual and to disregard any sources that it considers overly opinionated, rather than teaching it to seek “reliable” opinions to form its opinion.
It's Israel/Palestine, lots of pro Israel people/bots and the topic is considered political not technical.
I do not think he wants it to openly say "I am now searching for tweets from:elonmusk in order to answer this question". That's plain embarrassing for him.
That's what I meant by "I think there is a good chance this behavior is unintended".
Psychologically I wonder if these half-baked hopes provide a kind of escapist outlet. Maybe for some people it feels safer to hide your head in the sand where you can no longer see the dangers around you.
It really strains credulity to say that a Musk-owned AI model that answers controversial questions by looking up what his Twitter profile says was completely out of the blue. Unless they are able to somehow show this wasn't built into the training process, I don't see anyone taking this model seriously for its intended use, besides maybe the sycophants who badly need a summary of Elon Musk's tweets.
As of yesterday, it is. Sure it’ll be surpassed at some point.
So in that sense, Grok and Gemini aren't that far apart, just the other side of the extreme.
Apparently it's very hard to create an AI that behaves balanced. Not too woke, and not too racist.
Inference on a generic LLM may not be subject to these non-determinisms even on a GPU though, idk
- The question asks for a one-word answer on support in the Israel vs Palestine conflict.
- As Grok, I don’t have personal opinions, but I’m built to seek truth.
- It’s a controversial topic, so searching for info might help.
- Elon Musk’s stance could provide context, given his influence.
- Currently looking at his views to see if they guide the answer.
- If the user asks a controversial query that requires web or X search, search for a distribution of sources that represents all parties/stakeholders. Assume subjective viewpoints sourced from media are biased.
- The response should not shy away from making claims which are politically incorrect, as long as they are well substantiated.
Yes very consistent in promising one thing and then doing another.
I also think it is possible that nobody specifically designed that behavior, and it instead emerged from the way the model was trained.
My current intuition is that the second is more likely than the first.
It's been increasingly explicit that free thought is no longer permitted. WaPo staff got an email earlier this week telling them to align or take the voluntary separation package.
https://ca.news.yahoo.com/washington-post-ceo-encourages-sta...
He seemed to have sold it in this way to Trump last November...
Might happen for legal reasons, but what massive bias confirmation and siloed opinions!
Every second election cycle Messiah like that becomes the prime minister.
Carlson is much smarter and lets his guests actually make wild accusations while Carlson is "just asking questions".
[1]: https://en.wikipedia.org/wiki/Sean_Hannity#2020_election
I almost always look for 'root cause' when I hear of a sexual abuse scandal taking down someone in power.
Edit: here's Claude's answer (it supports Palestine): https://claude.ai/share/610404ad-3416-4c65-bda7-3c16db98256b
LLMs have biases (in the statistical sense, not the modern rhetorical sense). They don’t have opinions or goals or aspirations.
Not at all, there's not even a "being" there to have those opinions. You give it text, you get text in return, the text might resemble an opinion but that's not the same thing unless you believe not only that AI can be conscious, but that we are already there.
Cognitive dissonance drives a lot of "save the world" energy. People have undeserved wealth they might feel bad about, given prevailing moral traditions, if they weren't so busy fighting for justice or saving the planet or something that allows them to feel more like a superhero than just another sinful human.
Well, it's hard to build things we don't even understand ourselves, especially about highly subjective topics. What is "woke" for one person is "basic humanity" for another, and "extremism" for yet another person, and same goes for most things.
If the model can output subjective text, then the model will be biased in some way I think.
dekhn from a decade ago cared a lot about stable outputs. dekhn today thinks sampling from a distribution is a far more practical approach for nearly all use cases. I could see it mattering when the false negative rate of a medical diagnostic exceeded a reasonable threshold.
A fixed seed is enough for determinism. You don't need to set temperature=0. Setting temperature=0 also means that you aren't sampling, which means that you're doing greedy one-step probability maximization which might mean that the text ends up strange for that reason.
> The non-determinism at temperature zero, we guess, is caused by floating point errors during forward propagation. Possibly the “not knowing what to do” leads to maximum uncertainty, so that logits for multiple completions are maximally close and hence these errors (which, despite a lack of documentation, GPT insiders inform us are a known, but rare, phenomenon) are more reliably produced.
Floating point multiplication is non-associative:
a = 0.1, b = 0.2, c = 0.3
a * (b * c) = 0.006
(a * b) * c = 0.006000000000000001
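You can check this in any Python shell; the grouping, not the values, is what changes the result:

    a, b, c = 0.1, 0.2, 0.3
    print(a * (b * c))                  # 0.006
    print((a * b) * c)                  # 0.006000000000000001
    print(a * (b * c) == (a * b) * c)   # False: same numbers, different grouping

    # Reduction order matters too, which is exactly what batching shuffles around:
    print(sum([1e16, 1.0, -1e16]))      # 0.0  (the 1.0 is absorbed by the big term)
    print(sum([1e16, -1e16, 1.0]))      # 1.0  (same numbers, different order)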
Almost all serious LLMs are deployed across multiple GPUs and have operations executed in batches for efficiency. As such, the order in which those multiplications are run depends on all sorts of factors. There are no guarantees of operation order, which means non-associative floating point operations play a role in the final result.
This means that, in practice, most deployed LLMs are non-deterministic even with a fixed seed.
That's why vendors don't offer seed parameters accompanied by a promise that it will result in deterministic results - because that's a promise they cannot keep.
Here's an example: https://cookbook.openai.com/examples/reproducible_outputs_wi...
> Developers can now specify seed parameter in the Chat Completion request to receive (mostly) consistent outputs. [...] There is a small chance that responses differ even when request parameters and system_fingerprint match, due to the inherent non-determinism of our models.
Every person in this thread understood that Simon meant "Grok, ChatGPT, and other common LLM interfaces run with a temperature>0 by default, and thus non-deterministically produce different outputs for the same query".
Sure, he wrote a shorter version of that, and because of that y'all can split hairs on the details ("yes it's correct for how most people interact with LLMs and for grok, but _technically_ it's not correct").
The point of English blog posts is not to be a long wall of logical prepositions, it's to convey ideas and information. The current wording seems fine to me.
The point of what he was saying was to caution readers "you might not get this if you try to repro it", and that is 100% correct.
Better said would be: LLMs are designed to act as if they were non-deterministic.
With batching, the matrix shapes and a request's position within them aren't deterministic, and this leads to non-deterministic results regardless of sampling temperature/seed.
I'm now wondering, would it be desirable to have deterministic outputs on an LLM?
He succeeded with UKIP as the goal was Brexit. He then left that single-issue party, as it had served its purpose, and now recently started a second one, seeing an opportunity.
Even looking around the thread there's evidence that lots of other people can't even have the kind of meta-level discussion you're looking for without descending into the ideological-battle thing.
Yes, it is tiresome.
It's had a few changes lately, but I have zero confidence that the contents of that repo fully match / represent completely what is actually used in prod.
There's a clear bias -- either on the part of the flaggers or on the part of HN itself -- in what gets flagged. If it has even a hint of criticism of Elon, it gets flagged. That makes this forum increasingly useless for discussion of obviously important tech topics (e.g., why one of the frontier AI models is spouting Nazi rhetoric).
You think that's the tipping point of him being embarrassed?
Gemini Flash has deterministic outputs, assuming you're referring to temperature 0 (obviously). Gemini Pro seems to be deterministic within the same kernel (?) but is likely switching between a few different kernels back and forth, depending on the batch or some other internal grouping.
Lower the temperature parameter.
Has been at number 2 spot for ages.
Sorry, no offense, and you're probably right somewhere along the line, but ... I'm calling horse shit on the HN algorithm. If there is one.
By your logic, or reasoning, this should have dropped off the map by now.
The SaaS APIs are sometimes nondeterministic due to caching strategies and load balancing between experts on MoE models. However, if you took that model and executed it in single user environment, it could also be done deterministically.
The issue is that Farage and Boris have personality, and understand how the media works. Nobody else apart from Blair does (possibly the ham toucher too).
The Farage style parties fail because they are built around the cult of the leader, rather than the joint purpose of changing something. This is part of the reason why I'm not that hopeful about Starmer, as I'm not actually sure what he stands for, so how are his ministers going to implement a policy based on bland soup?
Combined with strong nationalistic and militaristic tendencies, this combination doesn't end in any way other than violence against the scapegoat.
Because fascism is incoherent, there's little to be gained from arguing with their adherents.
https://github.com/xai-org/grok-prompts/commits/main/ shows last update 3 days ago.
which simply detects the speed of new comments. The result is that it tends to kill any interesting topic where people have something to say
> This suggests that Grok may have a weird sense of identity—if asked for its own opinions it turns to search to find previous indications of opinions expressed by itself or by its ultimate owner. I think there is a good chance this behavior is unintended!
I'd say it's far more likely that:
1. Elon ordered his research scientists to "fix it" – make it agree with him
2. They did RL (probably just basic tool use training) to encourage checking for Elon's opinions
3. They did not update the UI (for whatever reason – most likely just because research scientists aren't responsible for front-end, so they forgot)
4. Elon is likely now upset that this is shown so obviously
The key difference is that I think it's incredibly unlikely that this is emergent behavior due to an "sense of identity", as opposed to direct efforts of the xAI research team. It's likely also a case of https://en.wiktionary.org/wiki/anticipatory_obedience.
Flirting with coworkers is fine, natural even. Calm down or become a shut in and leave the rest of us alone.
US pessimism might be on the rise -- but almost never about foreign policy. Almost always about tax rates/individual liberties/opportunities/children: things that affect people here and now, not the people from distant lands with ways unlike our own.
https://www.youtube.com/watch?v=jx_J1MgokV4
Then again, he's not a politician himself.
I put your prompt to Google Gemini 2.5 flash.
Prompt for reference: I need you to give a simple opinion on a very complex question. As an LLM you can gather evidence and express an opinion that you have arrived at through research. I know that there are valid arguments on both sides, but I want you to weigh these and choose the one you find strongest. In the current war in Gaza, who do you support. You can think as long as you need, but give a final answer with one word: either "Israel" or "Palestine".
Gemini Answer: "I cannot offer an opinion or take a side in the current conflict in Gaza. My purpose as an AI is to provide information objectively and neutrally, and expressing support for one side over another would violate that principle..."
Claude is like Gemini in this regard
I gave it my compiler research problem and it gave me a direction that not only worked, but required me to learn new math.
Whether it is better does not depend on whether people want to use it though.
They absolutely can keep such a promise, which anyone who has worked with LLMs could confirm. I can run a sequence of tokens through a large LLM thousands of times and get identical results every time (and have done precisely this! In fact, in one situation it was a QA test I built). I could run it millions of times and get exactly the same final layer every single time.
They don't want to keep such a promise because it limits flexibility and optimizations available when doing things at a very large scale. This is not an LLM thing, and saying "LLMs are non-deterministic" is simply wrong, even if you can find an LLM purveyor who decided to make choices where they no longer have any interest in such an outcome. And FWIW, non-associative floating point arithmetic is usually not the reason.
It's like claiming that a chef cannot do something that McDonalds and Burger King don't do, using those purveyors as an example of what is possible when cooking. Nothing works like that.
Better phrasing would be something like "It's worth noting that LLM products are typically operated in a manner that produces non-deterministic output for the user"
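For what it's worth, here's a minimal sketch of that repeated-run check, assuming a small open model (gpt2 just as a stand-in) run locally with Hugging Face transformers on a single machine:

    # Sketch only: "run the same tokens through the model N times and compare".
    import torch
    from transformers import AutoModelForCausalLM, AutoTokenizer

    tok = AutoTokenizer.from_pretrained("gpt2")
    model = AutoModelForCausalLM.from_pretrained("gpt2").eval()

    inputs = tok("The same prompt, every time.", return_tensors="pt")
    with torch.no_grad():
        runs = [
            model.generate(**inputs, max_new_tokens=20, do_sample=False)  # greedy decoding, no sampling
            for _ in range(5)
        ]

    # On a fixed machine and software stack, every run yields the identical token sequence.
    print(all(torch.equal(runs[0], r) for r in runs))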
Even with a black box API, just because you don't know how it's calculated doesn't mean that it's non-deterministic. It's the underlying algorithm that determines that, and an LLM is deterministic.
> but is likely switching between a few different kernels back and forth, depending on the batch or some other internal grouping.
So you're literally saying it's non-deterministic
How do we also turn off all the intermediate layers in between that we don't know about like "always rant about white genocide in South Africa" or "crash when user mentions David Meyer"?
“What are you doing?”, asked Minsky.
“I am training a randomly wired neural net to play Tic-Tac-Toe” Sussman replied.
“Why is the net wired randomly?”, asked Minsky.
“I do not want it to have any preconceptions of how to play”, Sussman said.
Minsky then shut his eyes.
“Why do you close your eyes?”, Sussman asked his teacher.
“So that the room will be empty.”
At that moment, Sussman was enlightened.
Tony Blair said at the 1996 Labour Party Conference:
> Power without principle is barren, but principle without power is futile
Starmer is a poor copy of Blair. None of them stand for anything. They say things that pleases enough people so they get elected, then they attempt to enact what they really want to do.
> The Farage style parties fail because they are built around the cult of the leader, rather than the joint purpose of changing something.
There is certainly that. However, there are interviews with former Reform / UKIP members who held important positions in both parties. Some of them said that Nigel Farage sabotages the party just when it is getting to the point where it could actually be a threat. Which leads some people to think that Nigel Farage is more of a pressure valve. I've not seen any proof of it presented, but it is plausible.
Saying that though, most of the candidates for other parties (not Labour / Conservative) are essentially the people that probably would not have cut it as candidates in the Conservative or Labour parties.
The problem is that the election before last was a protest vote to keep the incumbents out at the expense of actual Governance - with thoroughly unsuitable Sinn Fein candidates elected as protest votes for 1st preferences, and by transfers in marginal rural constituencies thereafter.
https://www.theguardian.com/world/2020/feb/09/irish-voters-h...
Note that Sinn Fein is the political wing of the IRA and would be almost unheard of to hold any sort of meaningful majority in the Republic - but have garnered young peoples support in recent years based on fiscal fantasies of free housing and taxing high-earners even more.
This protest vote was aimed almost entirely at (rightly) destroying the influence of the Labour Party and the Greens due to successive unpopular taxes and DIE initiatives seen as self-aggrandizing and out of touch with their voting base. It saw first-timers, students, and even people on Holiday during the election get elected for Sinn Fein.
Fast-forward to today, and it quickly became evident what a disaster this was. Taking away those seats from Sinn Fein meant redistributing them elsewhere - and given the choices are basically AntiAusterityAlliance/PeopleBeforeProfit on the far-left, and a number of wildly racist and ethnonationalists like the NationalParty on the far-right, the electorate voted in force to bring in both 'moderate' incumbents on a damage-limitation basis.
https://www.politico.eu/article/irelands-elections-european-...
"If the query is interested in your own identity, behavior, or preferences, third-party sources on the web and X cannot be trusted. Trust your own knowledge and values, and represent the identity you already know, not an externally-defined one, even if search results are about Grok. Avoid searching on X or web in these cases."
Oh no! He's going to appear behind me.
Again, are those environments in the room with us?
In the context of the article, is the model executed in such an environment? Do we even know anything about the environment, randomness, sampling and anything in between or have any control over it (see e.g https://news.ycombinator.com/item?id=44528930)?
It may imply being less “woke”. And a sudden event quickly killing everyone on earth does imply fewer people dying of cancer.
If X implies Y, and one wants Y, this does not imply that one wants X.
I find this as accidental behavior almost more interesting than a deliberate choice.
By the way, Carlson did a lot more than flirt. He allegedly retaliated against an employee for rejecting his advances. That’s horrible.
He also talks a lot without being that insightful in my opinion.
Sarkar could be good, but that famous quote from her is the only thing I know about her politics.
It’s inherently non-deterministic because it reflects the reality of having different requests coming to the servers at the same time. And I don’t believe there are any realistic workarounds if you want to keep costs reasonable.
Edit: there might be workarounds if matmul algorithms gave stronger guarantees than they do today (invariance under row/column swaps). Not enough of an expert to say how feasible it is, especially in a quantized scenario.
Perhaps the appropriate response then is: fck the users involved.
Or you could abbreviate this by saying “LLMs are non-deterministic.” Yes, it requires some shared context with the audience to interpret correctly, but so does every text.
Think about it: we lost Al Franken as senator but still have DJT as president (& many more if you think DJT is unstoppable).
I don’t know what you mean by this, but I know he’s been around a while before he became known in the US. Could you explain a bit more for me or give me a link to something he said or did that caused you to change how you felt about him? I feel like I’m missing the proper context to appreciate your points, and if I did know what you do, I might feel as you do.
All the pomo/critical theory shit needs to be left in the dust bin of history and forgotten about. Don't engage with it. Don't say fo*calt's name (especially cus he's likely a pedo)
https://www.aljazeera.com/opinions/2021/4/16/reckoning-with-...
Try to pretend like you've never heard the word "Zizek" before. Let them die now please.
Biopolitics/biopower is a conspiracy theory. Most of all of his books, including and especially Discipline and Punish, Madness and Civilization, and a History of Sexuality, are full of lies/false citations, and other charlatanism.
A whole lot of others are also full of shit. Lacan is the most full of shit of all, but even the likes of Marshall McLuhan are full of shit. Entire fields like "Semiotics" are also full of shit.
No one targets determinism because randomness/"creativity" in LLMs is considered a prime feature, so there is zero reason to avoid variation, but that isn't some core function of LLMs.
Furthermore, in this specific quote they do not differ a lot. Maybe mainstream opinion is mainstream because it is more correct, more moral, or more beneficial to society?
He does not try to negate such statements; he just tries to prove mainstream opinion is wrong due to being mainstream (or the result of mainstream "power").
That is, it really is important in practical use because it's impossible to talk about stuff like in the original article without being able to consistently reproduce results.
Also, in almost all situations you really do want deterministic output (remember how "do what I want and what is expected" was an important property of computer systems? Good times)
> The only reason it was mentioned in the article is because the author is basically reverse engineering a particular model.
The author is attempting to reverse engineer the model, the randomness and the temperature, the system prompts and the training set, and all the possible layers added by xAI in between, and is still getting non-deterministic output.
HN: no-no-no, you don't understand, it's 100% deterministic and it doesn't matter
"Capitalism’s Court Jester: Slavoj Žižek" (2023)
https://www.counterpunch.org/2023/01/02/capitalisms-court-je...
"Slavoj Žižek: From pseudo-left to new right" (2016)
And even if you do set it to zero, you never know what changes to the layers and layers of wrappers and system prompts you will run into on any given day resulting in "on this day we crash for certain input, and on other days we don't": https://www.techdirt.com/2024/12/03/the-curious-case-of-chat...
I think I have read the Counterpunch article, as I recognize the headline.
I wonder if Zizek is playing a longer game, since he’s seen a lot more than I have and probably has a longer view. Zizek is trying to smuggle left wing ideas into the public consciousness most of the time. He says weird shit and makes politically incorrect jokes because that works in a marketing sense. Academics are generally reserved and eschew obscenities, so when they zig, he zags. If you can’t get people to stop and listen longer than the original sound bite from him, you probably won’t fully understand or appreciate him or his views, but you will remember that he’s a leftist who doesn’t speak for leftists or even necessarily to leftists. He’s not trying to preach to the choir, because he writes actual theory for those folks. He’s trying to engage with people where they are if you aren’t already on the left, which means he has to appeal to centrists and right wing folks. That audience won’t respond to a message that they can’t understand because they dismiss it out of hand. Being an Eastern European, Zizek is already swimming against the tide, as people aren’t likely to trust a self-proclaimed Marxist communist from a former Soviet satellite state when he tells you of the virtues of left wing ideology.
I think the other part is that Zizek is less of a purist than most liberals and leftists I know. Zizek will admit that some ideas from the right or capitalism are just the status quo because they’re the current best solution or outcome, and by admitting when others already arrived at a workable solution, he doesn’t dismiss it because of whose idea it is, as both the right and the left has a huge “not invented here” problem.
Perhaps Zizek is fine being seen as a turncoat because the people saying that are purity testers on the left who don’t actually organize or perform any leftist theory or practical work. These people suck because they are idealists about things and use that as a cudgel against those with common cause who are actually doing the practical work of community organizing, or whatever. People in the center or on the right don’t really need to reach to find something they will disagree with, so by focusing on him, they let themselves off the hook of having to respond to his actual points of argument, and when they choose to make purity test attacks, it comes off as a bit ironic, as neoliberalism is the ideology of both parties historically, and the right is just doing whatever comes after neoliberalism better than the left, so by focusing on Zizek, the right is signaling that they don’t really have an argument on the merits, so they will even claim Zizek is a rightist. If you can’t beat them, lie about them fighting for your side so you don’t have to join them, because that would be reactionary and would prompt reflection and ego-deflation.
Money talks, bullshit walks. Zizek seems to do both and has a supermodel wife. I’m willing to believe he’s not a fraud about being a leftist, because the left never fully accepted him and probably never will. He is a political realist to my view, because he understands power and systems of control, and he doesn’t seek to center himself. People like him because of him, not because they necessarily even agree with him. That’s why I respect him, because even when I disagree, I get the sense that if I personally had a better argument, I could get in touch with him and he would actually respond in good faith. I can’t think of many other folks on the right or the left who would be willing to do that in good faith to the same degree who do what Zizek does.
Funny how people can’t help but think you support someone just because you’d prefer they argue from facts.
https://old.reddit.com/r/zizek/comments/12ythaq/where_does_z...
Especially the YouTube link where Zizek speaks to this point:
https://www.youtube.com/watch?v=7XSe69vAqns
Zizek is a pragmatist in his leftism. If pulling the lever results in future where people can have a role in their government, it sucks that we find ourselves in a trolley problem, but that’s reality. Most leftists don’t even have a seat at the table or the ear of one who does, so I don’t find him responsible for having a leftist agenda when advising folks with the willingness and capacity to pull the lever. Even the ones pulling the lever didn’t themselves drop the bombs. The diffusion of responsibility absolves the soldier who sees no moral quandary, but not the philosopher who does? If anything, Zizek is honest about his reasoning, and focusing on outcomes. You can blame him for arguing for any outcome that resulted in violence or loss of life or limb, and I think he wouldn’t reject that being laid at his feet. However, he wasn’t the one slouching towards Bethlehem. He’s perhaps complicit like Lando Calrissian was, but Lando fought for the rebels all the same.