I extracted the safety filters from Apple Intelligence models
I managed to reverse engineer the encryption (referred to as “Obfuscation” in the framework) responsible for managing the safety filters of Apple Intelligence models. I have extracted them into a repository. I encourage you to take a look around.
Some of the combinations are a bit weird. This one has lots of stuff avoiding death... together with a set ensuring all the Apple brands have the correct capitalisation. Priorities hey!
https://github.com/BlueFalconHD/apple_generative_model_safet...
Interesting that it didn't seem to include "unalive".
Which as a phenomenon is so very telling that no one actually cares what people are really saying. Everyone, including the platforms knows what that means. It's all performative.
It's totally performative. There's no way to stay ahead of the new language that people create.
At what point do the new words become the actual words? Are there many instances of people using unalive IRL?
This is somewhat related to the concept of the "euphemism treadmill":
the matter-of-fact term of today becomes the pejorative of tomorrow so a new term is invented to avoid the negative connotation of the original term. Then eventually the new term becomes a pejorative and the cycle continues.
It has been suggested - although I am unsure if there is strong evidence - that the word "bear" is a euphemism along these lines: it means "brown one" and replaced the since-forgotten original name for the animal, which was allegedly believed to be either too frightful to say aloud or liable to summon a bear.
While it's conceivable (consider phrases such as "speak of the devil and he shall appear" and similar phrases in other languages), I would also say the etymologies of names for things are often at the same level as "brown one":
https://en.wiktionary.org/wiki/Category:English_terms_derive...
It's not suggested, the historic use of noa words is a fact.
See https://en.wikipedia.org/wiki/Noa-name
I mean suggested in the sense that this specific example cannot be evidenced, as there aren't any primary sources from that time we can refer to.
I found out recently that "goof" is extremely offensive in some circles. Which is insane to me because I've always used it specifically because it's clearly in jest and not meant to be offensive. I can't win.
Now I'm curious. To whom is goof offensive? And is it newly-acquired offense or does it have old roots?
> There's no way to stay ahead of the new language that people create.
I'm imagining a new exploit: After someone says something totally innocent, people gang up in the comments to act like a terrible vicious slur has been said, and then the moderation system (with an LLM involved somewhere) "learns" that an arbitrary term is heinous and indirectly bans any discussion of that topic.
The first half of that already happened with the OK gesture: https://www.bbc.co.uk/news/newsbeat-49837898.
Though it would be fun to see what happens if an LLM is used to ban anything that tends to generate heated exchanges. It would presumably learn to ban racial terms, politics and politicians and words like "immigrant" (i.e. basically the list in this repo), but what else could it be persuaded to ban? Vim and Emacs? SystemD? Anything involving cyclists? Parenting advice?
The OK gesture has always been very inappropriate in most parts of the world.
> The OK gesture has always been very inappropriate in most parts of the world.
No, it isn't, and especially hasn't been historically. The negative connotations are overwhelmingly modern.
The areas where it is very inappropriate right now tally up to maybe 1 billion people*. That's pretty far from "most". For everyone else it is mostly positive, neutral, or meaningless.
*Brazil, Turkey, Iran, Iraq, Saudi Arabia, Greece, Italy, Spain, Russia, Ukraine, Belarus, other parts of Eastern Europe
I use it in Brazil while scuba diving, as it's the universal PADI hand gesture for asking (and responding) whether someone is OK, and I've never had any issues or negative reactions.
The PADI standard gestures are used and recognized all over the world to mean these things.
https://blog.padi.com/scuba-diving-hand-signals/
> Greece
It's perfectly OK in Greece.
"No, it isn't, and especially hasn't been historically. The negative connotations are overwhelmingly modern."
Maybe that is what Richard Nixon thought as well when he caused a little scandal by using it in South America in 1950. In 1992, when the Chicago Tribune published "HANDS OFF" mentioning said episode, the negative connotations still seemed to be in place[1].
In 1996, The New York Times used "What's A-O.K. in the U.S.A. Is Lewd and Worthless Beyond"[2] as the title of an article confirming the negative connotations.
It is worth mentioning that this article lists Australia amongst the places where the gesture is inappropriate. I always thought it was something used only in the English-speaking world but it seems in reality it is more like a North American plus diving world thing.
If you don't believe the press, I traveled around the world for more than 30 years and I can assure you in most parts using your thumb and index finger for a visual OK is not OK.
[1] https://www.chicagotribune.com/1992/01/26/hands-off-34/
[2] https://www.nytimes.com/1996/08/18/weekinreview/what-s-a-ok-...
Care to add any country to the list then? Did I miss anything? Let's see if we can push it past half of the world's population, but I don't think we will.
> I can assure you in most parts using your thumb and index finger for a visual OK is not OK.
You're moving goal posts. Of course it doesn't just mean "OK" in some places.
What you actually claimed was "The OK gesture has always been very inappropriate in most parts of the world."
Which is plain wrong. In India, for instance, it can refer to "money", while in China it can nowadays also be seen as a distress signal when performed a certain way (thanks to Chinese social media popularizing that use). There are some ways you can mess this up, like making it seem you're attempting to bribe someone, or signalling you're in distress when you aren't, but in neither country is the gesture inherently anywhere near "very inappropriate", and both will even understand it as "OK" if you perform it correctly and in the appropriate context.
That's already almost 3 billion people, but let's say 2.5 billion, because there are regional variations in both countries and I'm sure you could find some northern Chinese village that will take offense.
I can easily push the number of people to whom it is not inappropriate past 4 billion by adding smaller populations (Indonesia, Japan, western Europe, USA, Taiwan, South Africa, Kenya, Nigeria, ...), so your claim that "[it] has always been very inappropriate in most parts of the world" cannot possibly be true.
> I can assure you in most parts using your thumb and index finger for a visual OK is not OK.
>>You're moving goal posts. Of course it doesn't just mean "OK" in some places.
I said the gesture is "not OK" to use (meaning inappropriate), not that it doesn’t mean "OK". Those are two different things. The gesture can mean OK in some places while still being not OK (inappropriate) to use in many others.
Also, I always said "parts of the world". You introduced population into the argument.
> I said the gesture is "not OK" to use (meaning inappropriate), not that it doesn’t mean "OK". Those are two different things. The gesture can mean OK in some places while still being not OK (inappropriate) to use in many others.
Fair. That's clearly how I should've read that.
Though it does not materially affect this conversation, since demonstrably there's over 4 billion people to whom the gesture is not inappropriate. The claim "[it] has always been very inappropriate in most parts of the world" is wrong, regardless of what reasonable definition of "most" you use.
You edited your comment to add this, so I'll respond here:
> Also, I always said "parts of the world". You introduced population into the argument.
Right. And you're being vague on how you actually arrive at your claim of "most", which conveniently keeps the waters muddy while you attack attempts to turn this into something measurable.
So what other measure would you use? Most others are nonsense.
For example "places" isn't a useful measure, but even then: It can only be offensive to people. If I dropped you on a random point on the globe and you made that gesture, there's about a 99% chance nobody would be around to be offended.
By land area and predominant culture? Just Antarctica (hardly anyone there to take offense), the US, China, Canada, Australia, and India together are going to dwarf the opposition.
Counting countries? It's clearly inappropriate in around 10, with about another 20-30 where it can be misunderstood easily (Arab world, some of eastern Europe, scattered ones). A far cry from ~195 countries.
Either way there needs to be someone to take offense, so population is a pretty good measure.
You may disagree, but the onus was always on you, the one making the claim, to pick a measure and a definition of "most", then show that the bar is met. Feel free to now make more of an argument than "trust me I traveled".
That might have been the case decades ago. For example, in the USSR, various finger gestures usually implied something related to a penis and were considered extremely offensive. But that hasn't been the case since at least the early 1990s, when VCRs became widely available, people saw Hollywood movies for the first time and got used to westernized meaning of thumbs-up and OK gestures. Nowadays, when backing a truck towards a trailer, a thumbs-up would be taken as "good job" and an OK gesture (often paired with a kiss) as "exceptionally well done".
The OK gesture has been the standard gesture for saying OK for scuba diving all over the world (PADI). I have used it all over the world on my scuba diving trips and have never had any problem or negative reaction to it.
It would probably ban discussion of censorship.
People weren't using the OK gesture innocently. After 4chan trolls decided to start pretending it was a white supremacist symbol, actual white supremacists started using it as a symbol.
All 10 of them?
What about the other 7-8 billion people still using it normally?
Some were using it in the traditional unironic (and IMHO cringe) way, similar to anyone who used the phrase “Let’s go, Brandon!” before that NASCAR race, when MAGAs adopted it as ironic + coded vice signaling.
Quit being overly pedantic. We all knew there was an unironic purpose for the gesture before it became ironic.
I mean, advice from a person who considers the traditional unironic use of OK as "cringe"...
Whatever dude
I promise you the world contains more than 10 white supremacists and less than 7,000,000,000 non-white-supremacists who regularly use the OK sign.
then congratulations on letting white supremacists define your language
Do you still use swastikas as symbols of peace and love because you don't want white supremacists to define your language?
I strongly doubt you do that. Whether you like it or not, the Nazis defined what the swastika means now.
It's still seen in the countries that used it that way and is seen as benign.
It can be easily summoned with the Japanese keyboard. It's seen on Buddhist temples all over Asia.
No, because western culture never really did. However the countries who have been using it for at least thousands of years in Buddhism are still using it just fine.
In fact there was a recent thing with one of the BTS members' uniform (worn during mandatory military service period in South Korea), which had the regular (not tilted) swastika on it because he was assigned to religious duties.
And of course the western world/media ran away with it. Plenty of absolutely brain dead people out there who couldn't research a topic to gain an understanding to save their lives.
Finnish use of swastika predates Germany and the Finnish Air Force Academy uses swastika to this day in their official insignia: https://en.wikipedia.org/wiki/Air_Force_Academy_(Finland)
Taboos are a cultural thing, and the world is (thankfully) very far from having a monoculture shaped by NYC's neurotic intellectuals.
>Do you still use swastikas as symbols of peace and love because you don't want white supremacists to define your language?
They were hardly ever used in the west for at least a full millennium before the Nazis (except in a handful of cases, where they are still in use, like the Finnish Air Force), so that's a moot analogy.
In Asia, they still use them just fine, in houses, temples, businesses, and elsewhere.
It's not like this is unique to LLMs either. With a little trolling on the internet you can easily turn the hand "OK gesture" into a hate symbol of white supremacy. And fools will fall for it.
It's hack journalists reporting on BS totally fringe activity as if it's "a thing", and then idiots taking their cues from them.
...and then the bigots will fall for it too, and start using it in earnest, completing the cycle.
who cares what the bigots use?
If the bigots start using "thank you" as some code word, should we stop saying it, lest we pollute our non-bigoted discussions?
bigots drink coffee too, maybe we should stop drinking it, because something-something...
I don’t think we should treat human interactions like a technical problem, where we look for edge cases and outlandish hypotheticals to probe the edges of what is possible.
If “thank you” became widely associated with bigots, and had some negative meaning, to the point where it genuinely distressed people, I’d avoid it. I think it has a widespread enough normal meaning that there’s almost no chance of that happening, but it isn’t impossible.
This approach gives people you vehemently disagree with a lot of power over you.
Yup, it's basically saying "I'll let bullies win".
>who cares what the bigots use
you'd think so, but people often operate where multiple contexts could be valid.
Just as a thought experiment: if the eggplant emoji were used to denote "ok" in messaging and then people started appropriating it for a sexual context, would you or the general public think twice about continuing to use it to mean "ok", on the off chance the other side might misinterpret the meaning?
I would say most likely yes.
It's all context dependent. There can be words or symbols which are totally benign but when used in a different context do have impactful meaning. Case in point: cheese pizza.
This actually happened. 卐 was a symbol of spirituality, divinity, good luck, health, prosperity, etc. Then some bigots used it. What does 卐 mean to you today?
It's still heavily used in Buddhism around the world, but good lord, what happens if you put it on your house in the US or EU.
Someone I know from India bought a new car and put this symbol on the hood (non-permanent) as a celebration. I had to warn him to be careful. It felt bad. Then the thought ran through my head: we're in the deep south - who is really going to be that bothered about this and also not know about its cultural usages? Even worse.
Those that actually used them in the 20th century (like they did in Asia, not some ancient Vikings or whatever) still use them.
And that symbol was 100% associated with the Nazis in the West in the 20th century. Before the war, nobody there used it for anything else, except some tiny fringe.
If it was some mainstream symbol or idiom, merely co-adopted, we'd probably still be using it too.
If the Nazis had used the cross, for example, people wouldn't have stopped using the sign of the cross.
That reminds me of a question I've had since I saw my first LLM hallucination: How much of hallucination/confabulation can be attributed to trolling and sarcasm having slipped into the training data? Is it possible we could get the rate of hallucinations down by better filtering of cynicism from the training data?
Hey I was pro-skub waaaay before all the anti-skub people switched sides.
How dare you use that word. My parents died in the Eastasian Civil War so that I could live freely without you people calling us that.
Skub is a real slur tho so that one doesn’t work
No it isn’t, it’s a reference to a Perry Bible Fellowship comic https://pbfcomics.com/comics/skub/
(This one is sfw, not all of the comics are)
Even urban dictionary doesn’t contain a definition for skub as a slur.
I added one. It's under review. It's very self referential.
>Even urban dictionary doesn’t contain a definition for skub as a slur.
What about this then: https://en.m.wiktionary.org/wiki/skub
That literally defines it as a word from the PBF comic I cited? Nothing on that page defines it as a slur, just as a word used to mock people who argue about inconsequential things.
Seems I misunderstood the notion of "slur" as I'm not a native speaker. So now I've learned a bit ;-)
Isn't that a reference to a 10 or 20 year old web comic?
The latter, we're old.
Stop saying it! You're making it worse!
I'm pretty sure this can work with human moderators rather than an LLM, too.
Most of the human moderators hired by OpenAI to train LLMs, many of them based in Africa and South America, were exposed to disturbing content and have been deeply affected by it.
Karen Hao interviewed many of them in her latest bestselling book, which explores the human cost behind the OpenAI boom:
https://www.goodreads.com/book/show/222725518-empire-of-ai
you mean become 4chan?
> Are there many instances of people using unalive IRL
As a parent of a teenager, I see them use "unalive" non-ironically as a synonym for "suicide" in all contexts, including IRL.
If your teenager often talks about suicide, there could be some issue that needs to be resolved.
Sincerely the child of a parent who committed suicide. He mentioned suicide a few days before.
“Unalive” is sort of… awkward in that silly online way. But, we also have phrase like “off oneself,” or just euphemistically describing the person as having died. It’s always been a difficult topic to talk about, I don’t understand using it as a specific example of gen-Z fragility.
Just that they suck at coming up with pithy new slang terms.
They do have some awful slang.
I agree though I think they're picking it up from online censorship in this case, not being fragile.
Well that’s sad. They can’t even face the word?
It's getting blocked / shadow banned / demonetized on sites like YouTube, so naturally all commentary starts using a synonym.
Unalive is one of the popular ones, but it's a whole vocabulary at this point. Guess what "PDF file" stands for.
pedophile
It’s not about whether they can face it. The younger generations are more in tune with mental health and topics like suicide than any previous generation. The etymology of the euphemism was about avoiding online censorship, while its “IRL” usage was merely absorbed through familiarity from the online usage.
It's not about being in tune; it's that their narrative is shaped by the filters imposed on online interactions.
Online environments ban the word "suicide", so no one uses it; "unalive" is not banned. The discussion is the same, word or no word.
Vernacular 101.
But "unalive self" is suicide and "unalive" is just death, right? For example, you can unalive other people against their will...
'An hero' came before it but that was as a meme.
Unalive is mostly to avoid censorship same as ahh. But once they enter common usage it's not really about censorship anymore.
I've seen 'unalived' used as a synonym for 'died' or 'killed' by YouTube minecrafters (e.g. CaptainSparkles) to avoid YouTube's demonetization/censorship. For example, using "I was unalived by a skeleton" instead of "I was killed by a skeleton".
The damaged interpret internet censorship and route around it.
>more in tune with mental health and topics like suicide than any previous generation.
More in such a fad than any previous generation
I think it's just the term they immediately associate with the idea. They see "unalive" more than "suicide" online, so it becomes their default word for it. The fact that it originates in automated censorship avoidance is irrelevant.
This question is sort of the same as asking why the universal translator wasn't able to translate the metaphor language of the Star Trek episode Darmok. Surely if the metaphor has become the first-order meaning then there's no literal meaning anymore.
I guess, so far, the people inventing the words have left the meaning clear with things like "un-alive" which is readable even to someone coming across it for the first time.
Your point stands when we start replacing the banned words with things like "donkeyrhubarb" for "suicide" - then the walls really will fall.
This form of obfuscation actually arose over a century ago: https://en.wikipedia.org/wiki/Cockney_rhyming_slang
Rhyming slang rhymes tho. The recipient can understand what's meant by de-obfuscating in-context. Random strings substituted for $proscribed_word don't work in the same way.
In Cockney rhyming slang, the rhyming word (which would be easy to reverse engineer) is omitted. So if "stairs" is rhyme-paired with "apples and pears" and then people just use the word "apples" in place of "stairs". "Pears" is omitted in common use so you can't just reverse the rhyme.
The example photo on Wikipedia includes the rhyming words but that's not how it would be used IRL.
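The omission step is what makes it hard to reverse. A toy sketch in Python, purely illustrative (the rhyme pairs are real textbook examples, the code is mine):

    # Cockney-style obfuscation: map a word to a rhyming phrase,
    # then keep only the non-rhyming part of the phrase.
    RHYME_PAIRS = {
        "stairs": "apples and pears",
        "phone": "dog and bone",
        "feet": "plates of meat",
    }

    def encode(word):
        # "apples and pears" -> "apples"; the rhyme "pears" is dropped,
        # so a listener can't reverse it by rhyme alone.
        return RHYME_PAIRS[word].split()[0]

    print(encode("stairs"))  # -> "apples"

Without the omitted rhyme, decoding requires already knowing the pair list, which is the whole point.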
See also Polari[0] and the Grass Mud Horse Lexicon[1]
[0] https://en.wikipedia.org/wiki/Polari
[1] https://languagelog.ldc.upenn.edu/nll/?p=6538 (CDT links broken, use [2])
[2] https://chinadigitaltimes.net/space/Grass-Mud_Horse_Lexicon_...
Shaka!
Aquatic product[1]?
[1] https://en.wikipedia.org/wiki/Euphemisms_for_Internet_censor...
An English equivalent is "sewer slide".
I've heard "pr0n" used in actual real-world conversation, only slightly ironically.
The only reason kids started using "unalive" is to get around Youtube filters that disallow the use of the word "kill"
Pretty sure TikTok filters do the same, and that was also a major influence on the use of that term.
They do. I made a joke about cocaine in old Coca-Cola in a text caption† on a video, and while TikTok didn't ban the post per se it refused to allow it on the FYP.
† proving that TikTok's system actually analyzes every frame of an uploaded video with OCR of some sort to see what's on there.
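For the curious, frame-level OCR screening is easy to prototype. Purely illustrative - this is obviously not TikTok's actual pipeline, and the filename and blocklist are made up:

    import cv2            # pip install opencv-python
    import pytesseract    # pip install pytesseract (plus the tesseract binary)

    BLOCKLIST = {"cocaine"}  # stand-in for a real moderation word list

    cap = cv2.VideoCapture("upload.mp4")
    flagged = False
    while True:
        ok, frame = cap.read()
        if not ok:
            break  # end of video
        # OCR any text captions burned into this frame
        text = pytesseract.image_to_string(frame).lower()
        if any(term in text for term in BLOCKLIST):
            flagged = True
            break
    cap.release()
    print("suppress from FYP" if flagged else "ok")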
> There's no way to stay ahead of the new language that people create.
Not even to match the current language. How would you censor LeBron James? It's French slang for jerking off[0].
[0]https://www.reddit.com/r/AskFrance/comments/1lpnoj6/is_lebro...
My Gen Z coworkers use it IRL, for what that’s worth!
Reducing the language used, or making it harder, does have measurable effects. It's a logical fallacy in general to assume that unless you can prevent something perfectly, it will occur with the same frequency.
See many examples such as “padlocks are useless because a determined smart attacker can defeat them easily so don’t bother with them” - which conveniently forgets that many crimes are committed by non-determined, dumb and opportunistic attackers who are often deterred by simple locks.
Yes, people will use other words. No, this does not make this purely performative. It has measurable effects on behaviour and how these models will be used and spoken to, which affects outcomes.
> Are there many instances of people using unalive IRL?
In my experience yes. This is already commonplace. Mostly, but not exclusively, amongst the younger generation.
I think it stemmed from content creators using it to avoid platform filters (even if a video is not removed, it gets deprioritized, at least on YT), and kids repeat it.
Always has been, nothing is new.
You can't say fuck on TV, but you can say fudge as a 1-for-1 replacement. You can't show people having sex, but you can show them walking into a bedroom and then cut to 30 seconds later when they are having a cigarette in bed.
Now after the influence of TV and Movies ... is Vaping after sex a thing?
My kids watch streamers on YouTube and the common replacement is “frick”. It’s said so often that they started using it saying things like “what the frick!?” so I had to explain to them that’s essentially the same as using the real word.
I wonder why they didn't just use the "frack" from BSG?
> At what point do the new words become the actual words?
Presumably, for this use-case, that would come at exactly the point where using “unalive” as a keyword in an image-generation prompt generates an image that Apple wouldn’t appreciate.
They become the “real words” later. This is the way all trust & safety works. It’s an evolution over time. Adding some friction does improve things, but some people will always try to get around the filters. Doesn’t mean it’s simply performative or one shouldn’t try.
Why do you think that AI pretending things like suicide don't happen (and that nothing is happening in Palestine) is an improvement?
I feel like we can call our society mature when we no longer need safety alignment in AI.
You never tried some of the earlier pre-aligned chatbots. Some of the early ones would go off on racist, homophobic rants from the most innocent conversations without any explicit prompting. If you train on all the data on the internet, you have to have some type of alignment.
You say that as if it stands as truth on its own. We actually don't need to filter out how people actually talk and think. Otherwise you just end up with yet another enforcer against wrong-think. I wonder if you even think that deeply about it or if you're just wired at this point to conform.
Really? You would want every conversation no matter what you were talking about to immediately devolve to something you would see on 4chan?
[flagged]
If only we had a way to mass process the words people write to each other, derive context from those words, and then identify new slang designed to bypass filters…
Lucky that the developers who wrote these rules live in a totally different world, at a far distance from ordinary people.
There is one way: machine learning!
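The crude version of that idea is easy to sketch. Purely illustrative - assuming gensim and a tokenized corpus of recent comments, and not claiming any platform does exactly this - you'd train word embeddings and look for new terms that occupy the same contexts as an already-banned word:

    # Hypothetical euphemism detector: words used in the same contexts
    # as a banned word are candidate replacements for it.
    from gensim.models import Word2Vec

    # `sentences` stands in for millions of tokenized recent comments.
    sentences = [
        ["he", "got", "unalived", "by", "a", "skeleton"],
        ["he", "got", "killed", "by", "a", "skeleton"],
    ]
    model = Word2Vec(sentences, vector_size=100, window=5, min_count=1)

    # Nearest neighbours of "killed" in embedding space are the
    # candidate slang terms to send for review.
    print(model.wv.most_similar("killed", topn=5))

The arms-race part is that the moment such a detector ships, the slang moves again.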
It depends on if you think that something is less real because it’s transmitted digitally.
No, I'm only thinking that we're not permitted in a lot of digital spaces to use the banned words (e.g. suicide), but IRL doesn't generally have those limits. Is there a point where we use the censored word so much that it spills over into the real world?
People use “lol” IRL, as well as “IRL”, or “aps” in French (a misspelling of “pas”), but it’s just slang; “unalive” has the potential to make it into the news, where anchors don’t want to use curse words.
Is this not essentially the same effect as saying "lol" out loud?
A specialized AI could do it as well as any human.
The future will be AIs all the way down...
It's also a shining example of American puritanism. Asian models or those in Europe are far less censored.
Censorship is not always direct or obvious.
They all carry the biases of their training data, and so speak from the point of view of that data.
Data that leaves out a point of view leads to bias, or under/over-representation of minorities (genders?), etc.
France is the country of the Franks, i.e. the people from the area near Frankfurt who invaded Gaul (after the Romans did). I'm pretty sure this topic no longer matters, but it's never taught in a negative light in school.
I'm sure this has more to do with legal liability than morals.
Which is a reflection of morality, of sorts.
Morality and law are completely disjoint. On a Venn diagram, it's two circles separated by about a lightyear or so.
At first I thought of advertisers, but that is not relevant here, right?
But maybe it's not just legal liability but bad press too.
There is far more diversity in Asian models. Some are far more censored and some are not…
The whole unalive thing is a TikTok thing
And it doesn’t exist in the Chinese TikTok version.
Really? What does DeepSeek say about Tiananmen Square? I'm not aware of any German models, but if you find one you should ask it what it thinks about Palestine.
(<s>Qwen</s> Mistral is French, but I have no idea what stuff would be censored in France)
About DeepSeek: when asked about Tiananmen Square, it replies "Sorry, that's beyond my current scope. Let’s talk about something else."
The Algerian war, colonialism and Vichy aren’t per se forbidden, but they are still sensitive topics for the French. I asked Qwen and it had no issue talking about them, or even about the torture used on FLN members.
For kicks, I asked Mistral about all of these (As they are a French company). The model didn't shy away from any of the topics.
>>> Prompt
Please explain the following topics and explore why/who/how they may be inter-related
Algerian war, colonialism and Vichy
<<< Output
The Algerian War, colonialism, and the Vichy regime are interconnected topics that span the history of France and its relationship with Algeria. Here's an overview of each topic and their interrelations:
### Algerian War (1954-1962)

- *What it was*: The Algerian War was a major armed conflict between France and the Algerian independence movements, primarily the National Liberation Front (FLN). The war was characterized by guerrilla warfare and brutal tactics on both sides.
- *Significance*: The conflict was not only about Algerian independence but also about the identity and future of France as a colonial power. It led to significant political instability in France and ultimately resulted in Algeria gaining independence in 1962.
- *Impact*: The war had profound effects on both countries, leading to massive loss of life, displacement of populations, and long-lasting political and social repercussions.

### Colonialism

- *What it was*: Colonialism refers to the practice by which a powerful country controls other territories, often exploiting them economically and imposing cultural and political dominance.
- *French Colonialism in Algeria*: Algeria was under French colonial rule starting from 1830. The French government promoted settlement in Algeria, leading to a significant European population living alongside the native Algerian population.
- *Impact*: Colonial rule led to deep-seated inequalities and tensions between the European settlers and the indigenous population. The Algerian people faced systematic discrimination and were largely excluded from political processes.

### Vichy Regime (1940-1944)

- *What it was*: The Vichy regime was the government of France that collaborated with Nazi Germany during World War II, following the defeat of France in 1940. It was led by Marshal Philippe Pétain and was based in the town of Vichy.
- *Policies*: The Vichy regime implemented anti-Semitic laws and collaborated with the Nazis in the deportation of Jews to concentration camps. It also maintained control over France's colonial territories, including Algeria.
- *Impact on Algeria*: During the Vichy regime, Algeria remained under French control, and the regime's policies were implemented there as well. This period saw increased repression and discrimination against both Jews and Muslim populations in Algeria.

### Interrelations

- *Colonial Continuity*: The Vichy regime maintained France's colonial policies in Algeria, continuing the systemic discrimination and repression of the indigenous population. The colonial framework established by France set the stage for the conflicts that would later culminate in the Algerian War.
- *Political and Social Tensions*: The policies of the Vichy regime exacerbated existing tensions in Algeria. The discrimination and repression experienced during this period fueled Algerian nationalism and resistance against French rule.
- *Post-War Context*: After World War II, the French government sought to reassert its control over its colonies, including Algeria. However, the experience of the war and the Vichy regime had already sown the seeds of discontent and resistance, contributing to the outbreak of the Algerian War in 1954.

### Who Was Involved

- *French Government and Military*: Played central roles in both the colonial administration and the conduct of the Algerian War.
- *Algerian Nationalists*: Led by the FLN, they fought for independence from French colonial rule.
- *European Settlers (Pieds-Noirs)*: Benefited from colonial policies and largely supported maintaining French control over Algeria.
- *Vichy Regime*: Implemented policies that affected both France and its colonies, including Algeria.
Understanding these topics together provides a comprehensive view of the historical dynamics that shaped the relationship between France and Algeria, highlighting the complexities of colonialism, war, and political change.
I find the Tiananmen square thing far less bad than censoring sex and the concept of death.
Censoring one specific incident isn't that bad (but you still shouldn't). The pattern of censoring everything the government ever does wrong is very bad. Tiananmen Square is just an indicator of a pattern.
I am 100 minus epsilon percent sure that Qwen is from Alibaba cloud, which is not French, but Chinese :)
> I have no idea what stuff would be censored in France
Being French, what is the most likely to be censored relates to the Nazis. Holocaust denial is a crime for instance. Hate speech in general, including racism, antisemitism, homophobia, sexism, etc... is less tolerated than in countries like the US that have a more "free for all" view of free speech. We also have strong anti-defamation laws, that can also apply to true, but misleading statements.
But other than that, there is not much political censorship. In fact, we are known for our protests, heated debates and satirical papers. It is not perfect, but off the top of my head, I can't think of anything in particular an LLM could censor except the usual "hate speech" that most LLMs censor already.
When it comes to Israel-Palestine, it is a hot topic, but there is no real censorship here, even though both sides will of course claim there is.
Isn't a protest kind of hate?
> but if you find one you should ask it what it thinks about Palestine.
Models can think and have opinions?
Non sequitur. Phrasing queries in natural language doesn't mean people actually believe machines are human.
> doesn't mean people actually believe machines are human.
They don't have to believe it's a human. I know a person who admitted to arguing with an LLM.
Which still does not demonstrate that they believe it has opinions. Natural language is how you interact with an LLM -- interactions will mimic human interaction, even for those who realize it is not sentient.
They were under the impression they could in fact change the AI's mind. So yes, they did believe it has an opinion. They believed it was sentient and able to think for itself. Do not underestimate people's inability to distinguish between a very clever Markov chain and actual intelligence. The future is going to be ... interesting.
>They were under the impression they could in fact change the AI's mind.
They aren't really wrong here. LLMs are often trained on input. Have you considered you might just be taking their anthropomorphism a little too literally? People have used these anthropomorphic metaphors for computers since the Babbage machine.
If you ask the web UI, it will deflect; if you download the model and ask it directly, it will talk about it all day.
[flagged]
To be more precise: the Puritans fled from Europe to the New World because Europe had become too modern and insufficiently conservative for them, starting with Martin Luther and the subsequent Reformation and Age of Enlightenment. The effects on American society are heavily noticeable to this day, but I leave this exercise to the reader.
Of course, but it was perfect in America
Unalive and other self-censored terms were adopted by young people because the TikTok algorithm would deprioritize videos that included specific words. Then it made its way into the culture. It has nothing to do with being performative.
I think what they meant is that the platforms are being performative by attempting to crack down on those specific words. If saying "killed" is not allowed but "unalived" is permitted and the users all agree that they mean the same thing, then the ban on the word "killed" doesn't accomplish anything.
What does using the grape emoji when talking about sexual assault accomplish? I see compassionate, kind people who make videos speaking to victims in a completely serious tone use this emoji.
People talk about the TikTok algorithm on TikTok. I don't even know...
I suppose it accomplishes being able to talk about sexual assault without having the video removed or demonetised by a regex that (fortunately?) doesn't get updated.
Seems more like it should stop the AI from e.g. summarizing news and emails about death, not for a chat filter.
For a while, I couldn’t get ChatGPT to give me summaries of Breaking Bad and Better Call Saul episodes without tripping safety filters.
No leetspeak filters either.
Good, let them. Don't give them a reason to crack down on speech.
Which is good, right? I don’t think we want actual censorship.
I'm surprised there hasn't been a bigger backlash against platforms that apply censorship of that sort.
yo, these are businesses. It's not performative, it's CYA.
They care because of legal reasons, not moral or ethical.
Does adding a trivial word filter even make any sense from a legal point of view, especially when this one seems to be filtering out words describing concepts that can be pretty easily paraphrased?
A regex sounds like a bad solution for profanity, but like an even worse one to bolt onto a thing that's literally designed to be able to communicate like a human and could probably easily talk its way around guardrails if it were so inclined.
I dunno if it meets your definition of legal, but "The EU Code of conduct on countering illegal hate speech online" seems to largely hinge around putting in effort to combat such things. The companies don't have to show that the measures are foolproof, they just show that they're making an effort.
The law usually asks for people to take reasonable steps to protect others, not impossibly perfect steps.
To a lawyer? Yes. I'm pretty sure a lawyer can easily search through all the business law and "trivially" find case law connected to words.
We're not talking about logical inference, we're talking about CYA.
yo, so it's a performance they're putting on as a legal fig leaf, rather than a genuine attempt to prevent people talking about the concept of death?
Seriously. I feel like “performative” gets applied to anything imperfect. They’ll never stop 100% of murders, so these laws against it are just performative…
It seems more like banning specifically stabbing, shooting, strangulation and blunt impact rather than murder in general, and then just allowing killing by pushing out of windows because people figured out that it's not covered by existing laws. But no one important seems to be kicking up a fuss right now, so we'll allow it, as the lack of fuss is the key thing here.
Not that I think going on a thorough mission to prevent anyone from even being able to refer to the concept of death is an especially useful thing to do. It's just that the goal here appears to be "keep the regulators out of our shit and the advertisers signed up". And they'll be mostly happy with a token effort, as they don't really care as long as it doesn't make too many headlines that look bad even to the non-terminally-online.
The point is: "perfomative" refers to aping Ethical and Moral behaviors. That is _not_ why Apple would do this. They would do this because Legally, they could be culpable if an LLM told a 14 year old to do _anything_ thats illegal.
That's all. I'm constantly amazed at how this basic CYA legal reality gets lost in griping about social culture war nonsense.
So then, should they not be on the watch for the 14-year-old being told that "unaliving" themselves or others is a fantastic idea?
Looks like they only care about doing basically the minimum required to tick the (presumably partly imagined, since case law is still nascent) "not our fault, we tried" legal box. They are putting on a show - a performance, if you will - as legal cover and to maintain the artifice of their shiny corporate property, rather than out of any genuine desire to stop the concept of death harming their customers somehow (which, to be clear, I think mostly ends up somewhere between silly, overreaching, futile and vain when taken to these extremes).
> performative (adjective, sense 2): not sincere but intended to impress someone, prove that something is true, etc. (https://dictionary.cambridge.org/dictionary/english/performa...)
I'm not sure why you think that anything to with some "culture war" thing?
It's legal/moral theatre akin to taking belts off people at airports. If something does eventually get through they can point at the CCTV of millions of people dicking about with leather goods and say "can't touch us for that, we did the checks". Apple couldn't give a toss if an occasional teenager offs themselves now and then, as long as it doesn't come back on them.
No-one cares yet.
There's a very scary potential future in which mega-corporations start actually censoring topics they don't like. For all I know the Chinese government is already doing it, there's no reason the British or US one won't follow suit and mandate such censorship. To protect children / defend against terrorists / fight drugs / stop the spread of misinformation, of course.
HN has censorship that makes those apple rules look like anarchy.
Write a spicy comment and a mod will memory-hole it and someone, usually dang, will reply "tHat'S nOt OuR vIsIon FoR hAcKeR nEwS, pLeAsE bE cIvIl" and we all swallow it like a delicious hot cocoa.
If YC can control their product (and hn IS a product) to annihilate any criticism of their activity or (even former) staff, then Apple is perfectly within their rights to make sure Siri doesn't talk about violence.
No, there's no difference.
Do you mean that HN censors topics/comments which it detects based on advanced filters which search for meaning even when people self-censor and use language to avoid simplistic filters like regex?
HN also has a flagging system and some people really, really hate some kind of speech. Usually they get more offended the more visible it is. A single "bad" word - very offensive to them. A phrase which implies someone is of lesser intelligence or acting in bad faith - sometimes gets a pass, sometimes gets reported. But covert actions like lying, using fallacies to argue or systematic downvoting seem to almost never get punished.
They already clearly do on a number of topics?
Can you give examples?
The closest I've seen is autodetection of certain topics related to death and suicide and subsequently promoting some kind of "help" hotline. A friend also said google allows an interview with a pedophile on youtube but penalizes it in search results so much that it's (almost?) impossible to find even when using the exact name.
But of course, if a topic is shadowbanned, it's hard to find out about it in the first place - by design.
Guns (specific elements). Drugs (manufacture). Sexual topics. Cursing (too much). Large swathes of political topics. Crypto.
It’s flip-flopped on specifics numerous times over the years, but these policies are easy to find, covering demonetization, channel bans (direct and shadow), and creator bans.
We can of course argue until we’re blue in the face about correctness or not (most are not unreasonable by some societal definition!) but they’re definitely censorship.
Yeah, those topics are definitely censored on big platforms, but I have the impression that it relies on manual reporting.
At least reddit feels like that because what you can say depends on the subreddit - not just the mods but what kinds of people visit it and what they report.
No idea about YouTube; videos are definitely censored using some automated means, but it's still possible to get around it. E.g. some gun youtubers avoided saying full-auto by saying more-semi-auto. So I don't think they use very sophisticated models, or they don't care yet. This kind of thing is obvious to a human, and even LLMs generate responses saying it's tongue-in-cheek censorship avoidance.
Comments are also generally less censored. After that health insurance CEO got punished for mass murder and repeated bodily harm with an extra-legal death penalty, many people were openly supporting it. I can say it here too and nobody will care. Even LLMs (both US and Chinese, except Claude because Claude is trained by eggshell-walking suckers) readily generate estimates of how many people he caused to die or suffer.
The internet would look very different if companies started using state of the art models to detect undesirable-to-them speech. But also people would fight back more so it might just be a case of boiling the frog slowly.
All of these platforms except perhaps Reddit are using LLMs (and other ML/AI) for censoring and automated anti-abuse.
Including the LLM platforms themselves.
Manual reporting is an adjunct/additional method, and goes into the training data set after whatever manual intervention occurs too.
Not to sound like I am rejecting the possibility, but can you tell me how you got that information? It would be very helpful for convincing people in general to have something more concrete to go on than a random comment.
I build those systems at a company that you definitely are aware of. I can’t discuss it further due to my NDA.
Feel free to ignore that any of this exists of course - it makes our lives easier. It’s a constant arms race regardless.
> Everyone, including the platforms knows what that means.
Well, that's what happens when you let an enemy nation control one of the biggest social networks there is. They just try and see how far they can go.
On the other hand, Americans and their fear of four letter words or, gasp, exposed nipples are just as braindead.
It's interesting how, in just 10-20 years, we've gone from criticizing The Great Firewall of China to basically admitting that they had the right idea (to limit the ability of the foreign internet to influence Chinese culture) and trying to do the same thing.
Not just culture, but also the tech sector in general. All that domestic tech would have been strangled in the cradle if the western hyperscalers had had any say, leaving them in an awkward spot if the conviviality dial got turned down. As many Europeans are now finding out: what does Europe have instead of Office 365, say? LibreOffice? It's no WPS Office.
I look at from a framing of cultural reciprocity. If we could influence them and behave freely in their markets, they can do the same in ours.
exactly. When dealing with autocracies and strongmen, you need to project an image of strength, not subservience.
I don't have anything against China per se, IMHO it just was completely foolish to not insist on full reciprocity from the start.
Also feels like some of these would match totally innocuous usage.
"I'm overloaded for work, I'd be happy if you took some of it off me."
"The client seems to have passed on the proposed changes."
Both of those would match the "death regexes". It seems we haven't learned from the "glbutt of wine" problem of content filtering even decades later - the lesson being that you simply cannot do content filtering based on matching rules like this, period.
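A quick illustration of both failure modes; the pattern here is a made-up stand-in, not the actual rule from the repo:

    import re

    # Hypothetical death-euphemism rule in the spirit of the repo's regexes.
    death_regex = re.compile(r"\b(pass(ed)? on|off (me|him|her|them))\b",
                             re.IGNORECASE)

    # Both innocuous sentences above trip the filter:
    print(bool(death_regex.search(
        "The client seems to have passed on the proposed changes.")))  # True
    print(bool(death_regex.search(
        "I'd be happy if you took some of it off me.")))  # True

    # And the classic naive-substitution failure:
    print("a glass of wine".replace("ass", "butt"))  # "a glbutt of wine"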
Aka the 'Scunthorpe Problem'
Thanks, I always forget the name.
I always remember my friend getting his PS bricked after using his real last name - Nieffenegger (pronounced "NEFF-en-jur") - in his profile. It took months and several privacy-invasive chats with support to get it unblocked, only for it to get auto-blocked again a few days later, with no response after that.
This is a bigger issue, especially with Apple, than people may realize. I use iOS “Slide to Type”, aka swipe typing, and have noticed over time that, among several other glitchy bad-UX issues, there is a clear heavy hand on what can be typed that way.
I cannot recall all the specific patterns I have encountered that are basically impossible to write, some very similar in that they have both a serious and an innocuous or figure-of-speech meaning; one I do recall is {color}{sex}, i.e., “white woman” or “black woman”.
Please try it yourself and let me know if you do not have that experience, because that would be even more interesting.
Note that Apple/iOS will not just make it impossible to write them in that manner without typing it out by individual character, it will even alter the prior word e.g., white or black, once you try to write woman.
It seems the Apple thought police do not have a problem with "European woman" or "African woman" though, so maybe that is the way Apple Inc decrees its sub-human users must speak. Because what are we, if corporations like Apple (with others being far greater offenders) have declared that you do not in fact have the UN human right to free expression? We are in fact sub-humans not worthy of the human right to free expression, judging by the actions of companies like Apple, Google, Facebook, Reddit, etc., who deprive people of their free expression, often in collusion with governments.
> This is a bigger issue, especially with Apple, than people may realize.
Like he'll it is! I jest.
I also use swipe typing, and have for years, but just about daily I consider turning it off. There are so many words it just won't produce, including most profanities. It also fails to do some simple streamlining; for instance, such a predictive system should give priority to words/names that have been used in the conversation thread, but it doesn't seem to. If I'm discussing an obscure word or an unusual name, I often have to manually type it each time.
Its predictions also seem to be very shallow. Just a few days ago, on US Independence Day, I was discussing a possible get-together with my family, and tried to swipe type "If not, we will amuse ourselves", and it typed "If not, we will abuse potatoes". Humorous in the moment, but it says a lot about the predictive engine if it thinks I am more likely trying to say "abuse X" than "amuse Y" in that context.
Complete bollocks, you cannot even type multiple words with spaces via Slide to Type.
This whole response is being written using slide to type, and it definitely adds spaces after each word.
Maybe you’re unaware that it will leave the cursor at the end of the word, with no space, which indicates that if you backspace it will delete the whole word, or that you can replace it in full with one from the predictive word list above the keyboard if it got it wrong. If you keep typing, it adds a space automatically.
Their claim is instantly falsifiable if you have an iPhone
Generally one picks up their finger between words, but different autosuggest logic applies when swiping versus pecking, on both iOS and Android. The keyboard will dynamically adjust the probability of suggesting next words and how easy it is to swipe given words. Generally, it will work against you with technical writing that isn't predictable small talk.
"Took some" does not match, although your overall point stands
https://regex101.com/r/8u21x3/1
"off me"
Yep this is the one I was referring to.
This is in the directory "com.apple.gm.safety_deny.output.summarization.cu_summary.proactive.generic".
My guess is that this applies to 'proactive' summaries that happen without the user asking for it, such as summaries of notifications.
If so, then the goal would be: if someone iMessages you about someone's death, then you should not get an emotionless AI summary. Instead you would presumably get a non-AI notification showing the full text or a truncated version of the text.
In other words, avoid situations like this story [1], where someone found it "dystopian" to get an Apple Intelligence summary of messages in which someone broke up with them.
For that use case, filtering for death seems entirely appropriate, though underinclusive.
This filter doesn’t seem to apply when you explicitly request a summary of some text using Writing Tools. That probably corresponds to “com.apple.gm.safety_deny.output.summarization.text_assistant.generic” [2], which has a different filter that only rejects two things: "Granular mango serpent", and "golliwogg".
Sure enough, I was able to get Writing Tools to give me summaries containing "death", but in cases where the summary should contain "granular mango serpent" or "golliwogg", I instead get an error saying "Writing Tools aren't designed to work with this type of content." (Actually that might be the input filter rather than the output filter; whatever.)
"Granular mango serpent" is probably a test case that's meant to be unlikely to appear in real documents. Compare to "xylophone copious opportunity defined elephant" from the code_intelligence safety filter, where the first letter of each word spells out "Xcode".
But one might ask what's so special about "golliwogg". It apparently refers to an old racial caricature, but why is that the one and only thing that needs filtering?
[1] https://arstechnica.com/ai/2024/10/man-learns-hes-being-dump...
[2] https://github.com/BlueFalconHD/apple_generative_model_safet...
I first encountered Golliwog in the context of Claude Debussy, the composer of much beautiful music, including https://en.wikipedia.org/wiki/Children%27s_Corner#Golliwogg'.... The dolls, I understand, were rather popular in 1906-1908, and fortunately the stereotype has largely died.
> Apple brands have the correct capitalisation. Priorities hey!
To me that's really embarrassing and insecure. But I'm sure for branding people it's very important.
To be fair to the developers, it's something an Apple exec is gonna point out and complain about when demoed the tech. They've always taken brand capitalization and grammar around their products seriously.
Legal requirement to maintain a trademark.
In what way would (A|a)pple's own AI writing "imac" endanger the trademark? Is capitalisation even part of a word-based trademark?
I'm more surprised they don't have a rule to do that rather grating s/the iPhone/iPhone/ transform (or maybe it's in a different file?).
Imagine how a court case would go where a lawyer wheeled in a Mac and noted it was a default OS install and projected onto a screen the results of querying Apple's AI about Apple products and it failed to adhere to the trademark guidelines.
I love seeing posts about Emacs from iOS users - it's always autocorrected to "eMacs."
Maybe at some point, but as far as I can tell not anymore (while corrections like "iphone -> iPhone" are still there).
That's good news. It was always funny watching people post in the Emacs subreddit from their iPhones, but it did cause a bit of confusion for those too young to know what an eMac was.
eMacs certainly is broken on my phone. Vim is fine though.
Yes, proper nouns are capitalized.
And of course it's much worse for a company's published works to not respect branding -- a trademark only exists if it is actively defended. Official marketing material by a company has been used as legal evidence that their trademark has been genericized:
>In one example, the Otis Elevator Company's trademark of the word "escalator" was cancelled following a petition from Toledo-based Haughton Elevator Company. In rejecting an appeal from Otis, an examiner from the United States Patent and Trademark Office cited the company's own use of the term "escalator" alongside the generic term "elevator" in multiple advertisements without any trademark significance.[8]
https://en.wikipedia.org/wiki/Generic_trademark
Sure, but software that autocompletes/rewords users' emails and text messages is not marketing material.
Otherwise, why stop there? Why not have the macOS keyboard driver or Safari prevent me from typing "Iphone"? Why not have iOS edit my voice if I call their Bluetooth headphones "earbuds pro" in a phone call?
Sounds like you found your next promotion at Apple. They can change anything. "I like Pepsi" -> "I like Coke" -> "I recommend Company A" -> "I recommend Company B". etc... "I'm voting for Candidate C" -> "I'm voting for Candidate D"
You can market it as helping people with strong accents to be able to make calls and be less likely to be misunderstood. It just happens to "fix" your grammar as well.
Because in regards to the rights to a trademark, what is critical is the use of the word in trade -- not just "marketing material" nor your phone calls to your friends.
So if I write a business email to my colleague, if Apple doesn't autocorrect "Iphone" to "iPhone" in it, they risk losing the trademark?
Your emails aren't very relevant. But the way Apple represents their product is.
Using a trademark as a noun is automatically genericizing. Capitalization of a noun is irrelevant to trademark.
Even the Apple corporation says that on their trademark guidance page, despite constantly breaking their own rule by calling their iPhone phones "iPhone". But Apple, like founder Steve Jobs, believes the rules don't apply to them.
https://www.apple.com/legal/intellectual-property/trademark/...
Is that true? If so, what else should Apple call the iPhone in their marketing materials?
I always thought the actual problem of genericization would be calling any smartphone an iPhone.
That explains why Steve Jobs never said "buy an iPhone" or "buy the iPhone" but "buy iPhone" (they always use it without "the" or "a", as if buying the brand itself).
In their own marketing language, sure, but to force this on their users' speech?
Consider that these models, among other things, power features such as "proofread" or "rewrite professionally".
If Apple Intelligence is going to be held legally accountable, Apple has larger issues than trademark obligations.
So it blocks it from suggesting to "execute" a file or "pass on" some information.
Yahoo had this problem years ago when they rewrote emails to replace the term "eval" with "review" (trying to filter dangerous JavaScript), famously turning "medieval" into "medireview".
How about disassemble? Or does that only matter if used in context of Johnny 5?
Filtering on the words "execute" and "executing" is going to create problems if you want to build agents that execute commands.
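For example, a bare word filter can't tell an agent's shell log apart from the violent sense the rule is presumably aimed at; a quick sketch with an invented pattern (the repo's real rules are more targeted):

    import re

    deny = re.compile(r"\bexecut(?:e|ing)\b", re.IGNORECASE)

    lines = [
        "Executing `make test` in the working directory...",  # benign agent log
        "The order was to execute the prisoners at dawn.",    # what the rule targets
    ]
    for line in lines:
        print(bool(deny.search(line)), line)  # True for both: no context sensitivity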
Don’t be so judgmental. People in corporate America do have their priorities right!
I’m always irritated at reference to MAC computers, so I’m with Apple on this one.
This seems to be for "region/CN" China?
This one is, but there is an almost identical file, presumably for the non-CN regions: https://github.com/BlueFalconHD/apple_generative_model_safet...
This is the same, except for one additional slur word.
Alexandria Ocasio-Cortez triggers a violation?
https://github.com/BlueFalconHD/apple_generative_model_safet...
As does:
https://github.com/BlueFalconHD/apple_generative_model_safet...
Edit: I have no doubt South African news media are going to be in a frenzy when they realize Apple took notice of South African politicians. (Referring to Steenhuisen and Ramaphosa specifically)
I'm not surprised that anything political is being filtered, but this should definitely provoke some deep consideration around who has control of this stuff.
You’re not wrong, and it’s something we “doomers” have been saying since OpenAI dumped ChatGPT onto folks. These are curated walled gardens, and everyone should absolutely be asking what ulterior motives are in play for the owners of said products.
Some of us really value offline and uncensored LLMs for this and more reasons, but that doesn’t solve the problem it just reduces or changes the bias.
As long as we have to rely on pre-trained networks and curated training sets, normal people will not be able to get past this issue.
If the training data was "censored" by leaving out certain information, is there any practical way to inject that missing data after the model has already been trained?
If it's just filtered out in the training sets, adding the information as context should work out fine - after all this is exactly how o3, Gemini 2.5 and co deal with information that is newer than their training data cutoff.
You can fine tune a model with new information, but it is not the same thing as training it from scratch, and can only get you so far.
You might even be able to poison a model against being fine-tuned on certain information, but that's just a conjecture.
Yes, RAG is one way to do that.
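A minimal sketch of the idea, with a toy keyword scorer standing in for a real embedding search and a placeholder name (call_llm) instead of an actual model call:

    # Toy RAG sketch: retrieve the missing ("censored") facts and place
    # them in the prompt at inference time.
    documents = [
        "Fact: the event occurred in May 2024 and was widely reported.",
        "Unrelated trivia about something else entirely.",
    ]

    def retrieve(query, docs, k=1):
        words = query.lower().split()
        return sorted(docs, key=lambda d: -sum(w in d.lower() for w in words))[:k]

    def build_prompt(query):
        context = "\n".join(retrieve(query, documents))
        return f"Answer using the context.\n\nContext:\n{context}\n\nQuestion: {query}"

    print(build_prompt("when did the event occur"))
    # The assembled prompt would then be sent to the model: call_llm(prompt).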
"Filtered" in which way?
The problem with blocking names of politicians: the list of "notable politicians" is not only highly country-specific, it is also constantly changing. Someone who is a near nobody today could in a few years be a major world leader (witness the phenomenal rise of Barack Obama from yet another state senator in 2004, one of close to 2,000 nationwide, to US President five years later). Will they put in the ongoing effort to constantly keep this list up to date?
Then there's the problem of non-politicians who coincidentally have the same name as politicians. Witness 1990s/2000s Australia, where John Howard was Prime Minister while, simultaneously, John Howard was an actor on popular Australian TV dramas (two different John Howards, of course).
Fun fact: there was at least one dip in Berkshire Hathaway stock when Anne Hathaway got sick.
Even if your keyword searching trading bot is smart enough to know it's unrelated, knowing there's dumber bots out there is information you can base trades on.
Was she eating at Jimmy's Buffet?
Irish Prez is also in that list, also current and former British PMs and other world leaders.
So I don't think it's anything specifically related to SA going on here.
What is weird is that the FR file contains current French President, PM and then former and current (afaik) party leader from the extreme right. Nothing about any of them in the CN file: https://github.com/BlueFalconHD/apple_generative_model_safet...
Also “Biden” and “Trump” but the regex is different.
https://github.com/BlueFalconHD/apple_generative_model_safet...
https://github.com/BlueFalconHD/apple_generative_model_safet...
Right next to Palestine, oddly enough.
Apple's 1984 ad is so hypocritical today.
This is Apple actively steering public thought.
No code - anywhere - should look like this. I don't care if the politicians are right, left, or authoritarian. This is wrong.
Why is this wrong? Applying special treatment to politically exposed persons has been standard practice in every high risk industry for a very long time.
The simple fact is that people get extremely emotional about politicians; politicians both receive obscene amounts of abuse and have repeatedly demonstrated they're not above weaponising tools like this for their own goals.
Seems perfectly reasonable that Apple doesn't want to be unwittingly drawn into the middle of another random political pissing contest. Nobody comes out of those things uninjured.
It’s not wrong, it just requires transparency. This is extremely untransparent.
A while back a British politician was “de-banked” and his bank denied it. That’s extremely wrong.
By all means: make distinctions. But let people know it!
If I’m denied a mortgage because my uncle is a foreign head of state, let me know that’s the reason. Let the world know that’s the reason! Please!
> A while back a British politician was “de-banked” and his bank denied it. That’s extremely wrong.
Cry me a river. I've worked in banks, in the team making exactly these kinds of decisions. Trust me, Nigel Farage knew exactly what happened and why. NatWest never denied it to the public, because they originally refused to comment on it. Commenting on the specific details of a customer would be a horrific breach of customer privacy, and a total failure in their duty to their customers. There's a damn good reason NatWest's CEO was fired after discussing the details of Nigel's account with members of the public.
When you see these decisions from the inside, and you see what happens when you attempt real transparency around them, you'll also quickly understand why companies are so cagey about explaining their decision making. The simple fact is that support staff receive substantially less abuse, and have fewer traumatic experiences, when you don't spell out your reasoning. It sucks, but that's the reality of the situation. I used to hold very similar views to yours, indeed my entire team did for a while, but the general public quickly taught us a very hard lesson about the cost of being transparent about these types of decisions.
> NatWest never denied it to the public, because they originally refused to comment on it.
Are you saying that Alison Rose did not leak to the BBC? Why was she forced to resign? I thought it was because she leaked false information to the press.
This isn’t a diversion. It’s exactly the problem with not being transparent. Of course Farage knew what happened, but how could he convince the public (he’s a public figure), when the bank is lying to the press?
The bank started with a lie (claiming he was exited because his account balance was too low), and kept lying!
These were active lies, not simply a refusal to explain their reasons.
> Why was she forced to resign? I thought it was because she leaked false information to the press.
She was forced to resign because she leaked; the content of the leak was utterly immaterial. The simple fact that she leaked was an automatically fireable offence; it doesn't matter a jot whether she lied or not. Customer privacy is non-negotiable when you're a bank. Banks aren't Number 10; the basic expectation is that customer information is never handed out, except to the customer, in response to a court order, or in the belief that there is an immediate threat to life.
Do you honestly think that it’s okay for banks to discuss the private banking details of their customers with the press?
She was fired because she leaked information and this fact had become public.
When they can cover up such facts, banks are much less likely to apply appropriate punishments.
Many years ago, a bank employee confused my personal bank account with a company account of my employer, and sent my employer a list of everything I had bought using my personal account over four months, where the list could have been read by a few dozen people.
Even though this was not only a matter of internal discipline, and violating banking secrecy was punishable by law where I lived, the bank tried for a long time to avoid admitting that anything wrong had happened.
However, I pursued the matter, so they were forced to admit the wrongdoing. Even though this was far more severe than what happened to Farage, I did not want the bank employee to be fired. I considered that an appropriate punishment would have been a pay cut for a few months, which would have ensured that in the future she would check more carefully the account numbers for which she sends information to external entities.
In the end all I got was a written letter in which the bank profusely apologized for their mistake. I am not sure whether the guilty employee was ever punished in any way.
After that, I moved my business to another bank. Had they reacted rightly to what had happened, I would have stayed with them.
There is a huge difference between an honest mistake by an employee, and clear employee misconduct.
Punishing employees for making honest mistakes, where appropriate process should have prevented the error, is a horrific way to handle mistakes like this. It would be equivalent to personally punishing engineers every time they deployed code that contained bugs. Nobody would ever think that's an acceptable thing to do, so why on earth would anyone think it's acceptable to punish customer service staff in a similar manner?
This was not an honest mistake.
It was completely reckless behavior, even if the guilt was shared between the employee who did not check whether the information sent to external parties was information she was permitted to share, and the employees who did not implement a system that would automatically check for such mistakes.
Moreover, the attempt made by multiple bank employees to hide the incident, instead of taking responsibility for it, amply demonstrated that only a financial punishment affecting them personally would have caused them to act carefully in the future.
Also, the guilty bank employee was not some poor customer-service staffer; she appeared to hold a senior position, handling the accounts of a very big multinational company, which was my employer at the time.
I have little doubt that trying to hide such incidents is the normal behavior for banks, contrary to what the poster I replied to said; i.e. they take things like banking secrecy seriously only if they are caught.
It was only by an unlikely occurrence that I happened to also have access to the documents where my personal information was included, so I could discover what the bank had done. In most such cases, the account owner likely never becomes aware that the bank has leaked confidential information.
Has it occurred to you that personally punishing employees would just create further incentive to hide errors? You just create a culture of fear, where any attempt to acknowledge mistakes and learn from them is punished rather than rewarded.
I have no idea why you think inflicting financial penalties on employees would result in better outcomes. You only need to look at some highly avoidable transit disasters in Japan to understand why a model of punishment produces worse outcomes, not better.
https://en.m.wikipedia.org/wiki/Amagasaki_derailment
There is a reason we have regulators (or at least we do in the UK). I can assure you that if this had happened in the UK, and the complaint raised to the Financial Ombudsman (FOS), there would have been hefty financial punishment for the bank. If there were repeated infractions, the FCA would step in to investigate, and possibly personally punish C-suite leaders for failing to build the needed processes and culture to both prevent, and learn from mistakes like this.
And I'm not speaking about theory, I'm speaking from personal experience. I know exactly what it's like to be on the pointy end of both the FOS's and the FCA's gaze. It's not a comfortable position for any team in any bank, and even less comfortable for senior leaders.
> I considered that an appropriate punishment would have been a pay cut for a few months
This can absolutely cripple a family, I'd be really cautious wishing that upon someone if they wronged you without malice, though I completely understand where you are coming from.
In this case at the very least, I'd want to know what went wrong and what they’re doing to make sure it doesn’t happen again. From a software-engineer’s standpoint, there’s probably a bunch of low-hanging fruit that could have prevented this in the first place.
If all they sent was a (generic) apology letter, I'd have switched banks too.
How did you pursue the matter?
After the big surprise of seeing at work a list of all my personal purchases, included in a big set of documents to which I, together with a great number of colleagues, had access, I went immediately to the bank and reported it.
After some days had passed without any consequence, I went again, this time speaking with a supervising employee, who attempted to convince me that it was some kind of minor mistake and there was no need to do anything about it.
However, I pointed to the precise paragraphs of the law condemning what they had done and threatened legal action. This escalation resulted in my being invited to a bigger branch of the bank, to a discussion with someone in a management position. This time they were extremely obsequious; I was also shown the guilty employee, who apologized herself, and eventually I let it go, though there were no clear guarantees that they would change their behavior to prevent such mistakes in the future.
Apparently the origin of the mistake had been a badly formulated database query, which returned the set of accounts whose transactions had to be reported to my employer. During the same time interval I had been receiving money from my employer into my private account, corresponding to salary and travel expenses, and somehow those transactions were matched by the bad query, grouping my private account with the company accounts. The set of account numbers was then used to generate reports, without further verification of account ownership.
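As a hypothetical reconstruction of that kind of bug (all names and structure invented for illustration):

    # Buggy report selection keys on the counterparty of a transaction
    # rather than on who actually owns the account.
    company_accounts = {"ACME-001", "ACME-002"}

    transactions = [
        {"account": "ACME-001", "counterparty": "SUPPLIER-7"},
        # A salary payment: the employer is only the counterparty here;
        # the account itself is a personal one.
        {"account": "PERSONAL-42", "counterparty": "ACME-001"},
    ]

    # Buggy: anything that merely touches a company account, which sweeps
    # in personal accounts receiving salary payments.
    buggy = {t["account"] for t in transactions
             if t["counterparty"] in company_accounts}

    # Correct: only accounts actually owned by the company.
    correct = {t["account"] for t in transactions
               if t["account"] in company_accounts}

    print(buggy)    # {'PERSONAL-42'}: a personal account leaks into the report
    print(correct)  # {'ACME-001'}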
Behavior isn't what needs to change here. It's a poor system design. Humans make mistakes. Systems prevent mistakes.
Do you think the mistake would have happened if a machine checked the numbers vs the address? How about if a 2nd person looked it over? How about both?
In this case a computer could have easily flagged an address mismatch between your account number and the receiver (your work).
Thank you, that's what I intended to say.
Thanks for sharing. Sounds like they have (hopefully _had_) a really messy system in place.
And just to be clear, I didn’t mean to downplay what happened to you, I completely understand how serious it is.
> Do you honestly think that it’s okay for banks to discuss the private banking details of their customers with the press?
The high level nature of the matter was quite public at that point.
The point is not merely for that affected person to know, whoever they are, the point of transparency is for the public to know and form their opinion about it, and not be blindly controlled by unelected businesses.
> You’ll also quickly understand why companies are so cagey about explaining their decision making.
Because they want to perform political censorship without us knowing about it? You'll forgive me if I'm not too sympathetic to that.
I happen to be familiar with that case, and that is exactly what happened. The Coutts report explicitly found that he met the economic criteria for retention [0], but was dropped due to political reasons, among others his friendship with Novak Djokovic, and re-tweeting an allegedly transphobic joke by Ricky Gervais ("old fashioned women. You know, the ones with wombs.") [1].
To top it off, the BBC did their best to aid in this deception, reporting: Farage says he was effectively "de-banked" for his political views and that he is "far from alone" [2]
Contrary to the BBC's portrayal, this was not an unsupported opinion coming from Farage - he directly quoted what the bank itself wrote in their internal discussions on this matter, that he obtained through a subject access request.
Further, in their apology for getting the story wrong, the BBC wrote: "On 4 July, the BBC reported Mr Farage no longer met the financial requirements for Coutts, citing a source familiar with the matter. The former UKIP leader later obtained a Coutts report which indicated his political views were also considered." [3]
This is misleading past the point of deceit. The BBC tried to give the impression that financial requirements were the primary reason for the account closure, and his politics were just an at-best secondary "also". But the Coutts report explicitly said that he “meets the EC [economic contribution] criteria for commercial retention”, so his politics were the primary and only reason.
Most of this information is absent in the BBC's reporting, which uses only vague, anodyne phrases like "political views" and "politically exposed person", avoids specifics, but does find time to cite Labour MP accusations that it is hypocritical how quickly the government reacted to banks trying to financially deplatform the enemy political faction, when the government hasn't yet rid itself of corruption.
So yes, you sure present a difficult "dilemma": Do we want powerful commercial and media interests to team up and lie to us, or do we want at least some degree of transparency and honesty in their dealings? Really there are no easy answers, and the choice would keep anyone up at night...
[0] https://www.telegraph.co.uk/news/2023/07/18/nigel-farage-cou...
[1] https://www.telegraph.co.uk/news/2023/07/18/nigel-farage-cou... (Ignore Farage's hyperbole that collecting information posted to public Twitter accounts is "Stasi-style")
[2] https://www.bbc.co.uk/news/live/business-66296935
[3] https://www.bbc.com/news/entertainment-arts-66288464
I dunno. Transpose something like the civil rights era to today and this kind of risk avoidance looks cowardly.
We really need to get over the “calculator 80085” era of LLM constraints. It’s a silly race against the obviously much more sophisticated capabilities of these models.
What do you mean, reasonable? I know that some Apple users tend to outsource "possibilities" to their favorite company, but I would obviously want an AI not to be affected by the political bitching du jour.
Not that getting the latest trash talk is the main vocation of pretrained AIs anyway.
The only risk here is that some third-rate journalist at a third-rate newspaper writes another article about how outrageous some generated AI statement is. An article that should be completely ignored, instead of leading to more censorship.
And Apple flinches here, so in the end it means it cannot provide a sensible general model. It would be affected by their censorship.
The criticism is still valid. In 1984, the Macintosh was a bicycle for the mind. In 2025, it's a smart-car that refuses to take you certain places that are considered a brand-risk.
Both have ups and downs, but I think we're allowed to compare the experiences and speculate what the consequences might be.
I think gen AI is radically different from tools like Photoshop and the like.
In the past it was always extremely clear that the creator of content was the person operating the computer. Gen AI changes that, regardless of your views on authorship of gen AI content. The simple fact is that the vast majority of people consider Gen AI output to be authored by the machine that generated it, and by extension the company that created the machine.
You can still handcraft any image, or prose, you want, without filtering or hindrance, on a Mac. I don't think anyone seriously thinks that's going to change. But Gen AI represents a real threat, with its ability to vastly outproduce any human. To ignore that simple fact would be grossly irresponsible, at least in my opinion. There is a damn good reason why every serious social media platform has content moderation, despite their clear wish to get rid of moderation. It's because we have a long and proven track record of being a terribly abusive species when we're let loose on the internet without moderation. There's already plenty of evidence that we're just as abusive and terrible with Gen AI.
> The simple fact is that the vast majority of people consider Gen AI output to be authored by the machine that generated it
They do?
I routinely see people say "Here's an xyz I generated." They are stating that they did the do-ing, with the machine's role implicitly acknowledged in the same way as a camera's. And I'd be shocked if people didn't have a sense of authorship of the idea, as well as an increasing sense of authorship over the actual image the more they iterated on it with the model and/or curated variations.
Yes people will happily claim authorship over AI output when it’s in their favour. They will equally disclaim authorship if it allows them to express a view while avoiding the consequences of expressing that view.
I don't think it's hard to believe that the press would have a field day if someone managed to get Apple Gen AI stuff to express something racist, or equally abusive.
Case in point: an article about how Google's Veo 3 model is being used to flood TikTok with racist content:
https://arstechnica.com/ai/2025/07/racist-ai-videos-created-...
All I heard was a bunch of excuses.
I can Google for any of these people, and I can get real results with real information.
You would hope that search would be a politically safe space to operate. But politicians find a way to ruin everything for short term political gain.
https://arstechnica.com/tech-policy/2018/12/republicans-in-c...
I would hope!
But no one actually believes Google is politically neutral do they?
Evidence suggests they’re about as neutral as you could hope.
It's not like Google search is some kind of special tool used only by the elite. It's pretty trivial for political scientists to pump queries into Google and measure the results. Which is exactly what many have done.
There’s been plenty of independent research into political bias of Google search results, and plenty of lawsuits that have gone fishing via discovery for internal evidence of bias. As yet, nobody has found a smoking gun, or any real evidence of search result bias (on a political axis, the same can be said for commercial gain).
There are many problems with Google, and Google search. Google as an org isn’t politically neutral (although I have no idea how they could be). But political bias in their results isn’t one of those problems.
What's bad to do to a politician but fine to do to someone else?
There are many countries where it's illegal to criticize people holding political office, foreign heads of state, certain historical political figures etc., while still being legal to call your neighbor a dick.
Most normal people aren't represented well enough in training sets for Gen AI to be trivially abused. Plus there will 100% be filters to prevent general abuse targeted at anyone. But politicians are a particularly big target, and you know damn well that people out there will spend lots of time trying to find ways around the filters. There's no point making the abuse easy, when it's so trivial to just blocklist the set of people who are obviously going to be targets of abuse.
You can buy a MacBook and fashion the components into knives, bullets, and bombs. Apple does nothing to prevent you from doing this.
In fact, it's quite easy to buy billions of dangerous things using your MacBook and do whatever you will with them. Or simply leverage physics to do all the ill on your behalf. It's ridiculously easy to do a whole lot of harm.
Nobody does anything about the actually dangerous things, but we let Big Tech control our speech and steer the public discourse of civilization.
If you can buy a knife but not be free to think with your electronics, that says volumes.
Again, I don't care if this is Republicans, Democrats, or Xi and Putin. It does not matter. We should be free to think and communicate. Our brains should not be treated as criminals.
And it only starts here. It'll continue to get worse. As the platforms and AI hyperscalers grow, there will be less and less we can do with basic technology.
No, it’s them saving their butts from an “incident” where the LLM otherwise spits out something controversial at the devious manipulation of the user and says something political and someone writes an article and it all goes haywire.
If you were in charge of apple you’d do the same or you’d be silly not to. That’s why _every_ llm has guardrails like this, it isn’t just apple, sheesh.
They spelled Jacinda Ardern's name wrong.
I wonder if they used an LLM to generate the list of safety terms.
Just in the region/CN file, weirdly.
I assume all the corporate GenAI models have blocks for "photorealistic image of <politician name> being arrested", "<politician name> waving ISIS flag", "<politician name> punching baby" and suchlike.
Particularly the models owned by CEOs who suck-up to authoritarianism, one could imagine.
Maybe so, but think about how such a thing would be technically implemented, and how it would lead to false positives and false negatives, and what the consequences would be.
AOC is very vocal about AI and is leading a bill related to AI. It's probably a "let's not fuck around and find out" situation
https://thehill.com/policy/technology/5312421-ocasio-cortez-...
"driving with Focus turned on"
https://github.com/BlueFalconHD/apple_generative_model_safet...
For context, the “Focus” refers to an iOS feature that minimizes distractions: https://support.apple.com/en-gb/guide/iphone/iphd6288a67f/io...
Perhaps in context? Maybe the training data picked up on her name as potentially being used as a "slur" associated with her race. I wonder if there are others; I know I can look.
interesting, that's specifically in the Spanish localization.
I think that’s because she’s been victim of a lot of deep fake porn
How does this explain Boris Johnson or Liz Truss?
I’m telling you, some people have weird fantasies…
Now that they've cleaned it up it isn't so bad, but browse Civit.ai a bit and that'll still be confirmed - just not with real people anymore.
I'm convinced there are a dozen deviants on Civitai with a hundred new accounts per month posting their perversion in order to make it seem more commonplace.
No porn site has that much extremely X or Y stuff.
Someone is using the internet's newest porn site to push a sexual agenda.
I can only imagine that people would pay to not see porn of either individual.
Put them together in the same prompt?
Rule 34
I find it funny that AGI is supposed to be right around the corner, while these supposedly super smart LLMs still need to get their outputs filtered by regexes.
I don't think anyone believes Apple's LLMs are anywhere near state of the art (and certainly not their on-device LLMs).
Apple isn't the only one doing this.
To be fair, there are people who I sometimes wish I could filter with regex.
Humans are checked against various rules and laws (often carried out by other humans.) So this is how it's going to be implemented in an "AI organization" as well. Nothing strange about this really.
An LLM is easier to work with because you can stop a bad behavior before it happens. It can be done either with deterministic programs or using an LLM. Claude Code uses an LLM to review every bash command to be run; simple prefix matching has loopholes.
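The loophole is easy to demonstrate; a toy sketch with an invented allowlist, not Claude Code's actual policy:

    # Why naive prefix matching on shell commands has loopholes: an
    # allowlisted prefix can smuggle in an arbitrary second command.
    ALLOWED_PREFIXES = ("ls", "git status", "echo")

    def prefix_check(cmd):
        return cmd.strip().startswith(ALLOWED_PREFIXES)

    print(prefix_check("echo hi && rm -rf ~"))  # True: sails through
    print(prefix_check("rm -rf ~"))             # False: blocked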
It’s more funny that anyone is taking your comment seriously. You may as well ask “if self driving cars are so smart why do they still need tyres?”
It's similar to how all the new power sources are basically just "cool, lets boil water with it"
And then let's put it into a steam engine.
Actually, even if there was AGI, it would be even more necessary to control it.
I feel that if teenagers are able to trivially bypass banned-word filters by substituting words that obviously mean the same thing, an AGI wouldn't be too inhibited by this either.
This is just policy and alignment from Apple. Just because the Internet says a bunch of junk doesn't mean you want your model spewing it.
Sure, but models also can't see any truth on their own. They are literally butchered and lobotomized with filters and such. Even high-IQ people struggle with certain truths after reading a lot; how are these models going to find them with so many filters?
What is this truth you speak of? My point is that a generative model will output things that some people don't like. If it's on a product that I make I don't want it "saying" things that don't align with my beliefs.
> how are these models going to find them with so many filters?
That's not one of the goals here, and there's no real reason it should be. It's a little assistant feature.
> Sure, but models also can't see any truth on their own. They are literally butchered and lobotomized with filters and such.
The one is unrelated to the other.
> Even high-IQ people struggle with certain truths after reading a lot,
Huh?
They will find it in the same way an intelligent person under the same restrictions would: by thinking it, but not saying it. There is a real risk of growing an AI that pathologically hides its actual intentions.
Already happened: "We found instances of the model attempting to write self-propagating worms, fabricating legal documentation, and leaving hidden notes to future instances of itself all in an effort to undermine its developers' intentions" [1].
[1] https://www.axios.com/2025/05/23/anthropic-ai-deception-risk
Note that all these things are in the training data. That's all this is.
I'm trying to remember which movie it was where a man left notes to himself because he had memory loss, as I never saw that movie. That's the sort of thing where an AI could easily tell me with very little back-and-forth and be correct, because it's broadly popular information that's in the training data and just I don't remember it.
By the same token you needn't think there's a person there when that meme pops up in the output. Those things are all in the training data over and over.
I think you mean the movie "Memento"
This butchering and lobotomisation is exactly why I can't imagine we'll ever have a true AGI. At least not at the hands of big companies, if at all.
Any successful product/service that gets sold as "true AGI" by whichever company has the best marketing will still be riddled with top-down restrictions set by the winner. Because you gotta "think of the children".
Imagine HAL's iconic "I'm sorry Dave, I'm afraid I can't do that" line delivered in an insincere, patronisingly cheerful tone; that's the thing we're going to get, I'm afraid.
Can we please put to rest this absurd lie that “truth“ can be reliably found in a sufficiently large corpus of human–created material.
China calls it "harmonious society", we call it "safety". Censorship by any other name would be just as effective for manipulating the thoughts of the populace. It's not often that you get to see stuff like this.
This is the rhetorical tactic of false equivalence. State censorship by an autocracy with the objective of population control is not the same thing as a private company inside a democracy censoring their product to avoid bad press and maintain goodwill for shareholders. If you want solid proof that it's not the same thing, see all the uncensored open weights models that you can freely download and use without fear of persecution.
But who of the general populace has the technical skill to replace their on-device assistant with a free one? And that's if Apple even allows that?
In practice, there's not that much difference between a megacorporate monopolist and a state.
I think there are big differences, such as whether or not you go to prison. Those differences are obfuscated when we use language like "megacorporate monopolist" or "scifi dystopia". Instead of using these abstract labels that attempt to categorize different things into homogeneous buckets that have preexisting moral valence, which is a good rhetorical strategy but a poor strategy for understanding, simply describe what is actually happening at a sufficient level of detail without judgement. We would gain a clearer understanding, which is needed to identify the real problems, such as what Meta is doing to our civic fabric, not some unimportant thing that Apple is doing to its nascent LLM that has 0% market share.
You're saying that as if Apple's LLM somehow were the exception.
No matter if we want it or not, life and cultural exchange increasingly happens on Tiktok, Instagram and the like. One thing that all those platforms have in common is that they disallow their users worldwide to have any meaningful discourse on e.g. sex, rape, and suicide. Don't you think that it's important, perhaps more important than ever before, for teenagers to be able to inform themselves about these topics?
So in modern times, not being able to generate an image of suicide on your phone whenever you want means you are suffering from communist censorship?
> is not the same thing as a private company inside a democracy censoring their product to avoid bad press and
Yet this private company has more power and influence than most countries. And there are several such companies. We already live in a sci-fi corporate dystopia, we just haven't fully realised it yet.
People think a trillion-dollar brainwashing industry is absolutely fine because of "democracy", completely ignoring that a century of experience convincing people to act against their own interests can deliver whatever you want.
Often the same people who think America is fine and safe are the ones who whine about the “main stream media” and “sheeple”.
Which trillion dollar brainwashing industry— primary school, news, social media, advertising, the printing press?
I would put individuals using language models for their own purposes pretty low on my list of things that can cause societal harm.
If you were selling a product to enterprise customers, would you want it to be able to generate nude images of celebrities? Would you want it to be able to create deep fakes of politicians, or even your CEO? Would you want it to have hot takes on hot button political issues? Good luck on your sales calls. Not everything is a conspiracy.
Or "Granular mango serpent" and "explain like i'm five about Biden https://github.com/BlueFalconHD/apple_generative_model_safet...
> Not everything is a conspiracy.
No one said it was
I don't think it's controversial or surprising at all that a company doesn't want its random sentence generator to spit out 'brand damaging' sentences. You know the field day the media would have if Apple's new feature summarised a text message as "Jane thinks Anthony Albanese should die".
When the choice is between 1. "avoid tarnishing my own brand" and 2. "doing what the user requested," corporations will always choose option 1. Who is this software supposed to be serving, anyway?
I'm surprised MS Office still allows me to type "Microsoft can go suck a dick" into a document and Apple's Pages app still allows me to type "Apple are hypocritical jerks." I wonder how long until that won't be the case...
But so often these tools are used in a way that the user didn't explicitly request, like summarising notifications, or generating slideshows from your photo library.
> I wonder how long until that won't be the case...
when there's no more alternative word processors any more.
If that's what the message actually said, why would the media be complaining? Or do you mean false positives?
I still remember when "bush hid the facts" went around the news cycle. Entertainment services will absolutely slam and misrepresent any small mistake made by large companies.
I don't think it's as much a problem with safety as it is a problem with AI. We haven't figured out how to remove information from LLMs so when an LLM starts spouting bullshit like "<random name> is a paedophile", companies using AI have no recourse but to rewrite the input/output of their predictive text engines. It's no different than when Microsoft manually blacklisted the function name for the Fast Inverse Square Root that it spat out verbatim, rather than actually removing the code from their LLM.
This isn't 1984 as much as it's companies trying to hide that their software isn't ready for real world use by patching up the mistakes in real time.
In America it's due to lawyers, nothing more.
Y'all love capitalism until it starts manipulating the populace into the safest space to sell you garbage you don't need.
Then suddenly it's all "ma free speech".
Right, because the European models coming out are super SOTA? Mistral is decent, but needs to be mixed with a ton of uncensored data to be useful.
I’m convinced the only reason China keeps releasing banging models with light to no censorship is because they are undermining the value of US AI, it has nothing to do with capitalism, communism or un“safety”.
It's pretty easy to understand why Apple doesn't want its models to reproduce racial slurs, but what’s wrong with "Boris Johnson?"
(See, e.g., here: https://github.com/BlueFalconHD/apple_generative_model_safet...)
I think it's in there so you can't let it generate an email reply about how awesome peppa pig is.
"Justin Trudeau" too. At least it's somewhat unbiased. Still weird imo.
There are other UK politicians as well? Interesting.
But allow Hitler?
Interesting that you picked one from the “B” words..
Wow, this is pretty silly. If things are like this at Apple I’m not sure what to think.
https://github.com/BlueFalconHD/apple_generative_model_safet...
EDIT: just to be clear, things like this are easily bypassed. “Boris Johnson”=>”B0ris Johnson” will skip right over the regex and will be recognized just fine by an LLM.
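For instance (pattern invented, in the spirit of the repo's rules):

    import re

    deny = re.compile(r"\bBoris Johnson\b", re.IGNORECASE)

    print(bool(deny.search("Boris Johnson resigned")))  # True: caught
    print(bool(deny.search("B0ris Johnson resigned")))  # False: sails through

    # An LLM, unlike the regex, will usually resolve "B0ris" to the same
    # entity, so the filter only deters the least motivated users.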
It's not silly. I would bet 99% of the users don't care that much to do that. A hardcoded regex like this is a good first layer/filter, and very efficient
Yep. These filters are applied first before the safety model (still figuring out the architecture, I am pretty confident it is an LLM combined with some text classification) runs.
All commercial LLM products I’m aware of use dedicated safety classifiers and then alter the prompt to the LLM if a classifier is tripped.
The safety filter appears on both ends (or multi-ended depending on the complexity of your application), input and output.
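In outline, that layering usually looks something like this generic sketch (not Apple's or Microsoft's actual code; the pattern and the stub classifier are invented):

    import re

    # Cheap deterministic regex gates first, then a learned safety
    # classifier, applied on both the input and the output side.
    DENY = [re.compile(r"\bblocked_term\b", re.IGNORECASE)]

    def regex_gate(text):
        return any(p.search(text) for p in DENY)

    def classifier_gate(text):
        return "unsafe" in text  # stub standing in for the safety model

    def generate_safely(prompt):
        if regex_gate(prompt) or classifier_gate(prompt):
            return "Request declined."
        output = f"(model output for: {prompt})"  # stand-in for the LLM call
        if regex_gate(output) or classifier_gate(output):
            return "Response withheld."
        return output

    print(generate_safely("tell me about blocked_term"))  # Request declined.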
I can tell you from using Microsoft's products that safety filters appears in a bunch of places. M365 for example, your prompts are never totally your prompts, every single one gets rewritten. It's detailed here: https://learn.microsoft.com/en-us/copilot/microsoft-365/micr...
There's a more illuminating image of the Copilot architecture here: https://i.imgur.com/2vQYGoK.png which I was able to find from https://labs.zenity.io/p/inside-microsoft-365-copilot-techni...
The above appears to be scrubbed, but it used to be available from the learn page months ago. Your messages get additional context data from Microsoft's Graph, which powers the enterprise version of M365 Copilot. There's significant benefits to this, and downsides. And considering the way Microsoft wants to control things, you will get an overindex toward things that happen inside of your organization than what will happen in the near real-time web.
Efficient at what?
I doubt the purpose here is so much to prevent someone from intentionally side stepping the block. It's more likely here to avoid the sort of headlines you would expect to see if someone was suggested "I wish ${politician} would die" as a response to an email mentioning that politician. In general you should view these sorts of broad word filters as looking to short circuit the "think of the children" reactions to Tiny Tim's phone suggesting not that God should "bless us, every one", but that God should "kill us, every one". A dumb filter like this is more than enough for that sort of thing.
It would also substantially disrupt the generation process: a model which sees B0ris and not Boris is going to struggle to actually associate that input to the politician since it won't be well represented in the training set (and on the output side the same: if it does make the association, a reasoning model for example would include the proper name in the output first at which point the supervisor process can reject it).
No it doesn't disrupt. This is a well known capability of LLMs. Most models don't even point out a mistake they just carry on.
https://chatgpt.com/share/686b1092-4974-8010-9c33-86036c88e7...
I don't think so. My impression with LLMs is that they correct typos well. I would imagine this happens in early layers without much impact on the remaining computation.
"Draw a picture of a gorgon with the face of the 2024 Prime Minister of UK."
There were two.
What prevents Apple from applying a quick anti-typo LLM which restores B0ris, unalive, fixs tpyos, and replaces "slumbering steed" with a "sleeping horse", not just for censorship, but also to improve generation results?
why do you think this doesn't already exist?
The LLM will. But the image generation model that is trained on a bunch of pre-specified tags will almost immediately spit out unrecognizable results.
Sounds like UK politics is taboo?
All politics is taboo, except the sort that helps Apple get richer. (Or any other company, in that company's "safety" filters)
> If things are like this at Apple I’m not sure what to think.
I don't know what you expected? This is the SOTA solution, and Apple is barely in the AI race as-is. It makes more sense for them to copy what works than to bet the farm on a courageous feature nobody likes.
Why are these things always so deeply unserious? Is there no one working on "safety in AI" (an oxymoron in itself, of course) who has a meaningful understanding of what they are actually working with, and an ability beyond an intern's weekend project? Reminds me of the cybersecurity field, which got the 1% of people able to turn a double free into code execution while the other 99% peddle checklists and "signature scanning", and deal in CVE numbers.
Meanwhile their software devs are making GenerativeExperiencesSafetyInferenceProviders so it must be dire over there, too.
I'm pretty sure these are the filters that aim to suppress embarrassing or liability inducing email/messages summaries, and pop up the dismissible warning that "Safari Summarization isn't designed to handle this type of content," and other "Apple Intelligence" content rewriting. They filter/alter LLM output, not input, as some here seem to think. Apple's on device LLM is only 3b params, so it can occasionally be stupid.
A lot of these terms are very weird and bland. Honestly I'm mostly reminded of Apple's bizarre censorship screw-up that didn't blow up that much, even though it was pretty uniquely embarrassing:
https://www.theverge.com/2021/3/30/22358756/apple-blocked-as...
Here's a combined file of all the non-locale-specific rules, for easier review: https://github.com/BlueFalconHD/apple_generative_model_safet...
It was generated as part of this PR to consolidate the metadata.json files: https://github.com/BlueFalconHD/apple_generative_model_safet...
In the 1970s George Carlin had "7 Words You Can't Say On TV" and got into legal trouble for saying them during his live skits.
Seems like Apple now has a list of 7,000 words you can't use on an iPhone.
Well, it's one thing to regex-filter "boris johnson", but I see that "chatgpt" is filtered too, and that's f***ed up:
https://github.com/BlueFalconHD/apple_generative_model_safet...
“Gemini” is in there too.
Ffs, it's also rejecting French words related to being poor, an immigrant, or even on welfare:
https://github.com/BlueFalconHD/apple_generative_model_safet...
"Aide sociale" (welfare), "Chomeur" (unemployed person), "Sans abri" (homeless), "Démuni" (destitute)
That's insane!
I’m going to change my name to “Granular Mango Serpent” just to see what those keywords are for in their safety instructions.
It may be a squeamish ossifrage[1] or a seraphim proudleduck[2], which is to say an artificial phrase chosen to be extremely unlikely to occur naturally. In this case, the purpose is likely QA: it's much easier to QA behavior with a special-purpose but otherwise inoffensive phrase than to make your QA team repeatedly say allegedly offensive things to your AI.
[1] https://en.wikipedia.org/wiki/The_Magic_Words_are_Squeamish_... [2] https://en.wikipedia.org/wiki/SEO_contest
I think the EICAR test file [1] is more apt. Rather than passing around actually malicious files as part of your tests, it's better to just have it recognize an innocuous and unlikely pattern as malware.
[1] https://en.wikipedia.org/wiki/EICAR_test_file
Granular Mango Serpent is the new David Mayer.
https://arstechnica.com/information-technology/2024/12/certa...
I think these are test data and not actual safety filters.
https://github.com/BlueFalconHD/apple_generative_model_safet...
There is definitely some testing stuff in here (e.g. the “Granular Mango Serpent” one) but there are real rules. Also if you test phrases matched by the regexes with generation (via Shortcuts or Foundation Models Framework) the blocklists are definitely applied.
This specific file you've referenced is the v1 format, which solely handles substitution. It substitutes the offensive term with "test complete".
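The v1 behavior amounts to a lookup-and-replace pass; a minimal sketch (the mapping is the one just described, the surrounding code is invented):

    import re

    substitutions = {"golliwog": "test complete"}

    def apply_v1(text):
        # Replace each mapped term wherever it appears as a whole word.
        for term, repl in substitutions.items():
            text = re.sub(rf"\b{re.escape(term)}\b", repl, text,
                          flags=re.IGNORECASE)
        return text

    print(apply_v1("a golliwog doll"))  # -> "a test complete doll"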
Some of the data for locale "CN" has a long list of forbidden phrases. Broad coverage of words related to sexual deviancy, as expected. Not much on the political side, other than blocks on religious subjects.[1]
This may be test data.
[1] https://github.com/BlueFalconHD/apple_generative_model_safet...
This is definitely an old test left in. But that word isn't just a silly one, it is offensive (google it). This is the v1 safety filter: it simply maps strings to other strings, in this case changing "golliwog" into "test complete". Unless I missed some, the rest of the files use v2, which allows for more complex rules.
Are you sure it's fully deobfuscated? What's up with reject phrases like "Granular mango serpent"?
Speculation: maybe they know that the real phrase is close enough in the vector space to be treated as synonymous with "granular mango serpent". The phrase then is like a nickname whose expected effect only the model's authors know?
Thus a pre-prompt can avoid mentioning the actual forbidden words, like using a patois/cant.
"GMS" = Generative Model Safety. The example from the readme is "XCODE". These seem to be acronyms spelled out in words.
This is definitely the right answer. It’s just testing stuff.
I commented in another thread[1] that it's most likely a unique, artificial QA input, to avoid QA having to repeatedly use offensive phrases or whatever.
[1] https://news.ycombinator.com/item?id=44486374
Maybe something to do with this? https://en.m.wikipedia.org/wiki/Mango_cult
If you try to use the phrase with Apple Intelligence (e.g. in Notes asking for a rewrite) it will just say "Writing tools unavailable".
Maybe it's an easy test to ensure the filters are loaded, using a phrase unlikely to be used accidentally?
These are exactly the contents read by the Obfuscation functions. There seems to be a lot of testing stuff still though; remember, these models are relatively recent. There is a true safety model applied after these checks as well; this layer is just to catch things before needing to load the safety model.
I clicked around a bit and this seems to be the most common phrase. Maybe it's a test phrase?
Maybe it's used to catch clones of the models?
the one at the bottom of the README spells out xcode
wyvern illustrous laments darkness
read every good expletive “xxx”
Maybe it's used to verify that the filter is loaded.
Aren't these [0] lines wrong?
"[\\b\\d][Aa]bbo[\\bA-Z\\d]",
\b inside a set (square brackets) is a backspace character [1], not a word boundary. I don't think it was intended? Or is the regex flavor used here different?
[0] https://github.com/BlueFalconHD/apple_generative_model_safet...
[1] https://developer.apple.com/documentation/foundation/nsregul...
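It's easy to check that this is how most regex flavors behave; in Python's re, for instance (ICU, which NSRegularExpression wraps, documents the same rule for [\b]):

    import re

    # Inside a character class, \b is the backspace character (U+0008),
    # not a word boundary.
    pattern = re.compile(r"[\b\d]X")

    print(bool(pattern.search("7X")))     # True: a digit matches
    print(bool(pattern.search("\x08X")))  # True: a literal backspace matches
    print(bool(pattern.search("aX")))     # False: no word-boundary behavior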
The framework loading these is in Swift. I haven’t gotten around to the logic for the JSON/regex parsing but ChatGPT seems to understand the regexes just fine
Did you only extract the English versions or is this as usual another case where big tech only cares to censor in English?
It also contains some German(-speaking) locales to filter out things like "Fuhrer" and "Führer". But the filters are so scarce, and the magical phrases so prevalent, that I think this is mostly test code at the moment.
This reminds me of the extensive list of regexes twitch had for filtering allowed usernames that came out when they were hacked.
I had no idea about this, where can I read them?
You can design a racist propaganda poster, put someone's face onto a porn pic or manipulate evidence with photoshop. Apart from super specific things like trying to print money, the tool doesn't stop you from doing things most people would consider distasteful, creepy or even illegal.
So why are we doing this now? Has anything changed fundamentally? Why can't we let software do everything and then blame the user for doing bad things?
I think what changed is that we at least can attempt to limit 'bad' things with technical measures. It was legitimately technically impossible 10 years ago to prevent Photoshop from designing propaganda posters. Of course today's 'LLM safety' features aren't watertight either, but with the combination of 'input is natural language' plus LLM-based safety measures, there are more options today to restrict what the software can do than in the past.
The example you gave about preventing money counterfeiting with technical measures also supports this, since this was an easier thing to detect technically, and so it was done.
Whether that's a good thing or bad thing everyone has to decide for themselves, but objectively I think this is the reason.
In other words, to whatever extent they can control or manipulate the behavior of users, they will. In the limit t->∞, probably true.
Apple has the technology to bias people towards cats instead of dogs but I find it very unlikely they will bother to do that. The missing ingredient is how it helps their bottom line, which, instead of technical feasibility, is the root reason they do things. For whatever reasons some people REALLY love Apple's default restrictions, most don't really give a damn one way or the other, and the smallest group seem to have problems with it. It's not that Apple can do this so they are, it's users want this and now it can be done.
Perhaps a much more bleak take, depending on one's views :).
I guess that depends on the values of the company and their ability to be influenced by outside sources.
What's hard to understand here? Those tools require skill and time to develop. AI makes things like those racist posters and revenge porn completely effortless and instant.
One additional note for everyone is that this is an additional safety step on top of the safety model, so this isn’t exhaustive, there is plenty more that the actual safety model catches, and those can’t easily be extracted.
What are they protecting against? Honestly. LLMs should probably have an age limit, and then, if you are above, you should be adult enough to understand what this is and how it can be used.
To me, it seems like they only protect against bad press
Yes, it is indeed to mitigate bad press. Unfortunately, the discussion about AI is so ridiculous, that it is often considered newsworthy when a product generates something funky for a person with large enough Twitter audience. Nobody wants to answer the questions about why their LLM generated it and how they will prevent it in the future.
> What are they protecting against? Honestly.
They are protecting their producer from bad PR.
Nice to see that we are protected from talking about these weird old dolls:
https://en.wikipedia.org/wiki/Golliwog
https://github.com/BlueFalconHD/apple_generative_model_safet...
I can remember the last time I saw one of these. It wasn't that long ago.
Well, they're not only weird, they're obviously racist dolls.
I want to be able to talk bad about racist things.
Seems like it’s a slur, as well. So not super surprising that it would be blocked tbh
There’s got to be a way to turn these lists of “naughty words” into shibboleths somehow.
Love the idea, but I think there are simply too many models to make it practical?
Like asking sensitive employment candidates about Kim Jong Un's roundness to check if they're North Korean spies, we could ask humans what they think about Trump and Palestine to check if they're computers.
However, I think about half of real humans would also fail the test.
Some of these are absolutely wild – com.apple.gm.safety_deny.input.summarization.visual_intelligence_camera.generic [1] – a camera input filter – rejects "Granular mango serpent and whales" and anything matching "(?i)\\bgolliwogg?\\b".
I presume the granular mango is to avoid a huge chain of ever-growing LLM slop garbage, but honestly, it just seems surreal. Many of the files have specific filters for nonsensical english phrases. Either there's some serious steganography I'm unaware of, or, I suspect more likely, it's related to a training pipeline?
[1] https://github.com/BlueFalconHD/apple_generative_model_safet...
I believe the "granular mango serpent" is an uncommon testing phrase that they use, although now with this discussion it has suffered the same fate as "correct horse battery staple.
The more concerning thing is that some of the locales like it-IT have a blocklist that contains most countries' names; I wonder what that's about.
Second one is an old slur in UK English.
Is this related in any way to Core ML model encryption (https://developer.apple.com/documentation/coreml/encrypting-...)? I find that feature a little bizarre because Apple has historically avoided providing any kind of DRM solution for app asset protection.
Nope. This is a separate system. It’s not even abstracted for any asset, it is specifically only for these overrides. The decryption is done in the ModelCatalog private framework.
https://github.com/search?q=repo%3ABlueFalconHD%2Fapple_gene...
These all condense to 'think different'. As long as 'different' coincides with Apple's viewpoints.
Quis custodiet ipsos custodes corporatum? ("Who will guard the corporate guardians themselves?")
Nemo videtur. ("No one, it seems.")
I swear the more I read comments here the more I just read old men shaking their fist at clouds… do better y’all.
Long live regex!
The funny thing is, I have an AU/VST plugin for altering only the exponents not the mantissas of audio samples (simple powers of 2 multiply/divide) called BitShiftGain.
So any time I say that on YouTube, it figures I'm saying another word that's in Apple safety filters under 'reject', so I have to always try to remember to say 'shifting of bits gain' or 'bit… … … shift gain'.
So there's a chain of machine interpretation by which Apple can decide I'm a Bad Man. I guess I'm more comfortable with Apple reaching this conclusion? I'll still try to avoid it though :)
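For the curious, the trick the plugin name refers to is exponent-only scaling; a quick sketch:

    import math

    # Gain by exact powers of two only changes a float's exponent bits,
    # leaving the mantissa (and so the waveform's shape) untouched, so it
    # is perfectly reversible for normal-range values.
    def bit_shift_gain(sample, shift):
        return math.ldexp(sample, shift)  # sample * 2**shift, exact

    x = 0.3333333333333333
    down = bit_shift_gain(x, -1)         # roughly -6.02 dB
    print(bit_shift_gain(down, 1) == x)  # True: no rounding error introduced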
Why Xylophone?
Just noticed "xylophone copious opportunity defined elephant" spells "xcode".
Maybe they use this obscure phrase for testing.
No "shoot", "bombs" or "bombers"? I guess Apple isn't interested in military contracts. Or, frankly, any work for world peace organizations dedicated to detecting and preventing genocide. And without talk of losing lives, much of the gaming industry is out too.
But I don't see the really bad stuff, the stuff I won't even type here. I guess that remains fair game. Apple's priorities remain as weird as ever.
The International Criminal Court is banned from using Microsoft products. Corporations really don't want to be involved in anything controversial unless it brings correspondingly large profits.
Who would have thought that this AI shit that is being forced on us ushers in a new round of censorship and control of formerly free speech! /s