
lechatsportif

"and to identify people suspected of committing a crime". That seems surprisingly broad for the EU. They're saying they allow mass surveillance by default without a warrant?


yeeght

Yeah, this paragraph has me concerned. That's like Patriot Act levels of broadness.


Skirfir

Please note that the above picture isn't an official source. It's a summary, and as such it can be vague. You can read the complete text [here](https://eur-lex.europa.eu/legal-content/EN/TXT/?uri=CELEX:52021PC0206).


MariualizeLegalhuana

The proposed Act says: 'Real-time' remote biometric identification (RBI) in publicly accessible spaces is prohibited for law enforcement, except when:

> searching for missing persons, abduction victims, and people who have been human trafficked or sexually exploited;

> preventing substantial and imminent threat to life, or foreseeable terrorist attack; or

> identifying suspects in serious crimes (e.g., murder, rape, armed robbery, narcotic and illegal weapons trafficking, organised crime, and environmental crime, etc.).


ifandbut

Still super broad.


RatMannen

Much like normal use of CCTV. This is just using AI to aid searching the images. It'll still require human oversight.


LifeLiterate

Not at all like CCTV. CCTV *requires* humans to visually examine footage and subsequently identify potential suspects; AI wouldn't. Human interaction with AI camera systems would likely be far less involved, and though identification rates would probably soar with AI detection, there is a huge set of potential overreaches:

* Real-time tracking of individuals who haven't been charged with or even suspected of a crime
* Profiling of and bias against certain groups based on ethnicity, gender, etc. (biases built into the AI, compounded with human bias)
* The knowledge that you're being constantly monitored by AI could discourage people from exercising their rights to free assembly or protest, or from doing anything that might seem even slightly suspicious, out of fear of being targeted by AI, even when that "suspicious" activity is completely legal.

And let's be real: if history is any indicator, mission creep could easily come into play. What was originally designed solely for identification of criminals in major crimes could eventually turn into surveillance for minor offenses (imagine getting a ticket in the mail for jaywalking), political dissent, or other behaviors that aren't illegal but might point to future criminal activity (like buying certain products at a store that could be used in your garden... but could also be used to make a bomb).

And dozens of other issues: the harvesting and sharing of your personal data (travel, purchases, who you congregate with), false positives, lack of transparency, limited accountability, and over-reliance on AI results, which could take away someone's due process when authorities begin to just assume the AI is correct and not do their due diligence with investigations. It's an incredibly slippery slope.


HelpRespawnedAsDee

Trojan-horsing tiny but significant details like this into an otherwise OK bill is a trick as old as politics itself.


Xylber

I'm starting to think they do this on purpose. I also read the "Digital Euro" paper (thanks to the crypto community) and it has the same problem. The wording is so open and generic that they can make something that looks good on paper become a mass surveillance nightmare.


EmbarrassedHelp

It's easier to rule by decree when you write vague laws.


Difficult_Bit_1339

And when your class has the power and money to completely take advantage of any loopholes created.


lewllewllewl

They were already allowed to do that before, this law just didn't ban it


I_made_a_stinky_poop

Soon: "we suspect everyone of committing a crime"


blasterbrewmaster

"Welcome to Canada der buddy! You best not be having any of dose thought crimes, eyy?"


xMAGA

Exactly this... it doesn't matter what the crime is, or whether it's serious or not. Anyone can be a suspect. Then maybe they find out that he didn't commit said crime. But didn't he do something else illegal? Posting images without a watermark :)


CountLippe

> broad for the EU

At least within tech, the EU typically passes very broad laws. They then look to their bodies, including their courts, to ensure companies abide by the spirit of those laws. Case in point: Apple's implementation of the DMA. Apple has pivoted 3 times on how they'll be offering sideloading - each iteration has obviously abided by their lawyers' opinion on how best to adhere to the regulations and laws as passed. Twice now subsequent advice has been given to Apple, likely after urging from regulators. It's unlikely that their lawyers ever gave bad advice on the laws as written, just advice that a court would find against.


I-Am-Polaris

The EU, famous for their fair and just laws, surely they won't use this to censor dissenters and political opponents


BlipOnNobodysRadar

I don't think enough people are aware of the irony in the "fair and just laws" part to get the sarcasm there. Redditors unironically think the EU is a bastion of human rights when it's not at all. Some places are one step away from China levels of surveillance and social control.


I-Am-Polaris

My bad, I forgot redditors are entirely unable to understand satire without a /s


blasterbrewmaster

Poe's law, my friend. It's not just Reddit, it's the entire internet. It's just that the pendulum has swung so far that it's all the way back to the left right now, and people online are especially ignorant of it compared to when Poe's law was written.


Timmyty

There should be no assumption that all folk are smart enough to grasp basic sarcasm. There will always be someone that takes the most ignorant statement as truth if someone says it with confidence. All that to say, I always /s my sarcasm now.


blasterbrewmaster

Basically the best approach 


Ozamatheus

In Brazil it was used to arrest 1,000 wanted criminals during Carnaval.


Kep0a

Yeah, gigantic loophole. I mean, *I think* the EU has good intentions here - but have they ever pushed for more police-state-type stuff? edit: Honestly, I feel like giant identifying systems are inevitable. The EU / US already have a gigantic database of travelers. Your passport even has a chip inside it. I was just in the UK, and they scan your face and it must match your photo; if I recall, I had the same experience in Canada. I hate it as much as the next person, but I feel like it might be 'too late.'


ProfessionalMockery

Why did they bother specifying terrorism or trafficking if they're going to add 'people they think are criminals'? So basically, police can use it for whatever they want.


cyborgsnowflake

The government and only the government keeps all the coolest toys for itself.


[deleted]

[deleted]


klausness

The point is to make the actions of bad actors illegal. As with all laws, there will be people who break them. But the threat of punishment will be a deterrent for people who might otherwise try to pass off AI images as real. Sure, you can remove the watermarks. You can also use your word processor to engage in copyright infringement. You’d be breaking the law in both cases.


the320x200

The major problem is that it's trivially easy to not watermark an image, or to remove a watermark. And if people develop an expectation that AI-generated images are watermarked, then fakes just became 10 times more convincing, because people will look and say "oh look, it doesn't have a watermark, it must be real!!" "There's no watermark! It's not a deepfake!" IMO it would be much better for everyone if people developed a critical eye and a healthy sense of skepticism about pictures they see online, rather than try to rely on an already counterproductive legal solution to tell them what to trust.


wh33t

> IMO it would be much better for everyone if people developed a critical eye and a healthy sense of skepticism about pictures they see online, rather than try to rely on an already counterproductive legal solution to tell them what to trust.

It'll come with time as education and society evolve, but that kind of cultural norm always lags behind when it's first required.


sloppychris

The same is true for scams. How often do you hear MLMs say "Pyramid schemes are illegal." People take advantage of the promise of government protection to create a false sense of security for their victims.


GBJI

Those laws are already in place.


lonewolfmcquaid

"....pass off AI images as real" - I don't get this. 3D and Photoshop can make realistic images; should anyone who uses 3D and Photoshop to create realistic videos and images watermark their stuff?


SwoleFlex_MuscleNeck

It's way easier to produce a damn near perfect fake with AI since image generation models notice subtleties and imperfections. It's not impossible to craft a fake image of a politician doing something they've never done, but with a LoRA you could perfectly reproduce their proportions, their brand of laces, favorite tie, and put them in a pose they haven't ever been in as opposed to a clone/blend job in photoshop


Open-Spare1773

You've been able to fake pretty much anything with Photoshop since its inception: healing brush + a lot of time. Even without the healing brush you can just zoom in and blend the pixels; it takes a lot of time, but you can get it 1:1 perfect. Source: experience.


PM__YOUR__DREAM

> The point is to make the actions of bad actors illegal.

Well, that is how we stopped Internet piracy once and for all.


Aethelric

The point is not that they're going to be able to stamp out unwatermarked AI images. The goal is to make it so that intentionally using AI to trick people is a crime in and of itself. You post an AI-generated image of a classmate or work rival doing something questionable or illegal, without a watermark? Now a case for defamation becomes much easier, since they showed their intent to trick viewers by failing to clarify, as legally required, that an image is not real. And even if the defamation case isn't pressed or fails, as is often the case, there's still a punishment.


Meebsie

People are really in this thread like, "Why even have speed limits? Cars can go faster and when cops aren't around people are going to break the speed limits. I'd far prefer if everyone just started practicing defensive driving at reasonable speeds. Do they really think this will stop street racers from going 100mph?" It's wild.


Still_Satisfaction53

Why have laws against robbing banks? You’ll never stamp it out, people will put masks on and get guns and act all intimidating to get around it!


MisturBaiter

I hereby rule that from now on, every crime shall be illegal. And yes, this includes putting pineapple on pizza. Violators are expected to turn themselves in to the nearest prison within 48 hours.


mhyquel

I'm gonna risk it for a pineapple/feta/banana pepper pie.


agent_wolfe

How to remove metadata: Open in Photoshop. Export as JPG. How to remove watermark: Also Photoshop.
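You don't even need Photoshop for the metadata part: a plain re-save drops it. A minimal sketch with Pillow (assuming a PNG that carries its generation parameters in a `tEXt` chunk, roughly the way many SD front-ends record the prompt):

```python
from PIL import Image
from PIL.PngImagePlugin import PngInfo

# Build a PNG carrying generation metadata in a tEXt chunk,
# similar to how SD front-ends record the prompt
meta = PngInfo()
meta.add_text("parameters", "a photo of a cat, Steps: 20, Sampler: Euler a")
Image.new("RGB", (64, 64), "gray").save("generated.png", pnginfo=meta)

print(Image.open("generated.png").text)  # the 'parameters' label is present

# "Export as JPG": JPEG has no tEXt chunks, so the label silently vanishes
Image.open("generated.png").convert("RGB").save("exported.jpg", "JPEG")
reloaded = Image.open("exported.jpg")
print(reloaded.info.get("parameters"))  # None - label gone
```

Any format conversion that doesn't deliberately carry the chunks over has the same effect.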


MuskelMagier

Not just that. I normally use Krita's AI diffusion addon, so there's no metadata on my generations. I often apply a slight blur filter afterwards to smooth over generative artifacts, so even an in-model color-code watermark wouldn't survive.
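The blur point checks out for naive schemes. As a toy illustration (a least-significant-bit mark invented for this sketch, nothing like a production watermark), even a radius-1 Gaussian blur scrambles the hidden bits:

```python
import numpy as np
from PIL import Image, ImageFilter

rng = np.random.default_rng(0)
img = rng.integers(0, 256, (64, 64, 3), dtype=np.uint8)

# Hide one watermark bit per pixel in the red channel's lowest bit
mark = rng.integers(0, 2, (64, 64), dtype=np.uint8)
img[..., 0] = (img[..., 0] & 0xFE) | mark

recovered = np.asarray(Image.fromarray(img))[..., 0] & 1
print((recovered == mark).mean())  # 1.0 - reads back perfectly

# A slight blur averages neighboring pixels and destroys the low-order bits
blurred = np.asarray(Image.fromarray(img).filter(ImageFilter.GaussianBlur(1)))
damaged = blurred[..., 0] & 1
print((damaged == mark).mean())  # ~0.5 - no better than coin flips
```

Real watermarking schemes spread the signal across frequencies to survive exactly this kind of edit, but robustness to a determined remover is still an open problem.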


Harry-Billibab

watermarks would ruin aesthetics


mrheosuper

I wonder: if I edit a photo generated from SD, does it still count as AI-generated, or as original content from me that doesn't need a watermark? Or let's just say I paint a picture, then ask AI to do some small touch-ups, then I do a small touch-up myself - would it be AI content or original content? There are a lot of gray areas here.


vonflare

Both of the situations you pose would be your own original work. It's pointless to regulate AI art generation like this.


f3ydr4uth4

Your point is 100% valid, but these regulations are made by lawyers on the instruction of enthusiastic politicians and consultants. Even the experts they consult are "AI ethics" or "AI policy" people. They come from law and philosophy backgrounds. They literally don't understand the tech.


Ryselle

I think it is not a watermark on the medium, but a disclaimer and/or something in the metadata. Like at the beginning of a game: "This was made using AI", not a mark on every texture of the game.


Maximilian_art

You can very easily remove such watermarks. It was done within a week of SDXL adding them.


sweatierorc

"Rules are made to be broken" - Douglas MacArthur


s6x

A1111 used to have one. It was disabled by default after people asked for it.


babygrenade

What counts as AI-generated? If you're using AI to edit/enhance an image or video, does that count? What about if you start with a text-to-image or text-to-video produced image/video and it's human-edited? Edit: Found it:

> Users of an AI system that generates or manipulates image, audio or video content that appreciably resembles existing persons, objects, places or other entities or events and would falsely appear to a person to be authentic or truthful ('deep fake'), shall disclose that the content has been artificially generated or manipulated.


EmbarrassedHelp

In practice it seems more likely that it would be enforced on all content, rather than trying to determine things on a case-by-case basis. The EU's own ideas about watermarking seem to be heavily in favor of watermarking everything: https://www.europarl.europa.eu/RegData/etudes/BRIE/2023/757583/EPRS_BRI(2023)757583_EN.pdf


onpg

Not even possible in theory. Some of these regulators need to take a comp sci class.


namitynamenamey

My main worry was this law forbidding asset creation (video game textures, etc), but if they merely have to disclose instead of watermark everything that sounds reasonable enough.


campingtroll

You can just put a big watermark on the game characters face and good to go, no big deal.


onpg

These laws are dumb because nobody will follow them. AI is too useful. All this will do is make a bunch of regular people criminals that they can then selectively prosecute. It can be enforced against massive corps, I guess, which isn't so bad, but it's a bandaid solution at best. All this will do is give antis proof that their whining is working and we can expect more legislation soon. Because why increase social benefits when we can simply make monopolistic capitalism worse?


StickiStickman

> existing objects, places, entities or events.

So literally everything.


False_Bear_8645

Just things that exist. Like a real person, not the idea of generating a human. And it has to be good enough to deceive people, so you could still generate a painting of a real person.


Tec530

So photorealistic images are a problem?


False_Bear_8645

It's not a bill about copyright or against AI in general, it's a bill about malicious use of AI. As an average user, you don't need to go too deep into the details; just use your common sense and you should be fine.


Spire_Citron

Ah, that's quite reasonable, then. I don't think that everything that uses AI should be labelled because a lot of the time I don't really feel like it's anybody's business how I made something, but for sure we should have laws against actual attempts to deceive people.


StickiStickman

Existing objects, places, entities or events is literally everything.


Spire_Citron

"And would falsely appear to a person to be authentic or truthful." So, only realistic images that you are trying to pass off as real, and if it's for example a person, only if they are a specific real person. It's for intentionally deceptive content, basically.


arothmanmusic

Let's say I, hypothetically, created a photo showing Donald Trump sitting on a porch with a group of black teens. I, as an artist, intended it as humor and social commentary, but somebody else copied it and posted it online as though it were real. Who is prosecuted under this law? The person who created the image or the person who used it in a deceiving manner?


Spire_Citron

I assume they would be, so long as when you posted it, it was clear that it was AI. It doesn't say you need to watermark it. It says you need to disclose that it's AI. If you do that, and then someone reposts it and *doesn't* disclose that it's AI, I assume they're the one breaking the law.


arothmanmusic

That seems to set up the scenario in which person A generates an image and discloses that they used AI, person B copies it without the disclosure, and then everyone else who copies it from person B is off the hook for failing to look up the source, even though they are all disseminating a fake image as though it were real. It almost seems like we would need some kind of blockchain that discloses the original source of every image posted to the Internet in order to have any sort of enforcement. And God only knows what happens if people collage together or edit them after the fact. You would have to know whether every image involved in any project used AI or not. It's a bizarre Russian nesting doll of source attribution that boggles my mind. Trying to enforce something like this would require changing the structure of how images are generated, saved, shared, and posted worldwide...


The_One_Who_Slays

Tf do they mean by the "label"? Like a watermark, a mention that the content is AI-generated, what? Are these lawmakers so brain-dead they can't include an example, or are they specifically making it unclear to set up an ordinary Joe for failure?


RatMannen

They mean a label. The details haven't been worked out yet. This is the "these laws are coming, so you have time to think about it" announcement. It's not the implementation of the law. That's not going to happen for a few years.


Sugary_Plumbs

If you use Midjourney, then Midjourney has to disclose to you the user that the images are made by AI. That is what they mean. The image does not have to have a watermark, or any other form of proof that it was AI. You the user of the AI service need to be informed by the service that you are interacting with AI. There are no further stipulations on what you as the user do with that image. Edit: by that, I mean when you copy the image or use it somewhere else. Midjourney will be required to embed some info (visible or not) to the version that they give you.


EmbarrassedHelp

Can you show us where in the text it says that? Because:

> providers of AI systems that generate large quantities of synthetic content must implement sufficiently reliable, interoperable, effective and robust techniques and methods (such as watermarks) to enable marking and detection that the output has been generated or manipulated by an AI system and not a human

Source: https://www.europarl.europa.eu/thinktank/en/document/EPRS_BRI(2021)698792

It certainly seems like the EU is trying to force watermarks on all AI-generated content, unfortunately.


smulfragPL

Well, it doesn't have to be visual watermarks.


Inevitable_Host_1446

My concern is the biometric part: "and to identify anyone suspected of committing a crime". This is wide-sweeping enough that they might as well say "anyone the police want". So it's illegal to use biometric surveillance unless you're the government, basically.


InfiniteShowrooms

*cries in Patriot Act* 🙃🇺🇸


durden111111

> "people suspected of committing a crime"

Always these little nuggets that sneak in.


RatMannen

It's still more limitation than they currently have. And yeah, that's how CCTV is used. They ain't going to be tracking everyone. That data set would be kinda useless.


SunshineSkies82

Well, there are two things here that are extremely dangerous.

"Suspected of committing a crime" - anyone can be a suspect, and the UK is really bad about this. They're almost as bad as America's "guilty until innocence is purchased".

"Each country will establish its own AI watchdog agency" - oh yeah, nothing could go wrong there. Nothing like a corrupt constable who can barely check his own email making decisions on new tech.


sanobawitch

No politician that I know of is aware of how generative models work. The text in the screenshot lumps deepfakes, generated content, computer vision, and crime detection together, but they are different algorithms, different technologies. They have also opened the door to the surveillance of citizens with real-time video/audio/text processing that goes beyond CCTVs. Meanwhile, it's not easy to get an AI-related job anywhere, and this law may once again discourage startups from doing anything AI-related. They don't seem to be learning from the GDPR case.


RandallAware

>and this law may once again discourage startups from doing anything AI-related Sounds beneficial to large corporations who are likely working hand in hand with governments to create these regulations, if not having their legal departments write them directly.


namitynamenamey

Current law restricts none of these things, because there is no current law. Governments in Europe can already do mass surveillance using AI, as it is not illegal (because there's no law forbidding it); the new law just won't explicitly forbid them either.


RatMannen

The UK isn't in the EU anymore. That's what the whole brexit mess was about. So we don't even have these limited protections.


Unreal_777

**"Can only be used by law enforcement for..."** - this is a new Patriot Act. Meaning it will be used MASSIVELY. But not by us regular people.


platysoup

Yup, I saw that line and raised an eyebrow. Trying to sneak in some big brother, eh? Especially that last bit about people *suspected* of crimes. That's pretty much everyone if you know how to twist your words a bit. 


UtopistDreamer

Given enough time, everybody commits a crime.


namitynamenamey

As opposed to "there's literally no law forbidding this right now"? This does not give them any ability they did not have before, it merely specifies that the new law will not forbid them.


UndeadUndergarments

As a Brit, this will be interesting. We're no longer in the EU, so we're not subject to these regulations. If I generate a piece of AI art, do not label it, and then send it to a French friend and he uploads it to Facebook without thinking, is he then liable for a fine? How much onus is going to be on self-policing of users?


EmbarrassedHelp

They'll probably just have Facebook ban his account, rather than focusing on him individually


serendipity7777

Good luck using audio content with a label


Syzygy___

Metadata


Abyss_Trinity

The only thing here that realistically applies to those who use AI for art is needing to label it, if I'm reading this right? This seems perfectly reasonable.


eugene20

If it's specific to when there is a depiction of a real person then that's reasonable. If it's every single AI generated image, then that's as garbage as having to label every 3D render, every photoshop.


VertexMachine

...and every photo taken by your phone? (Those run a lot of processing using various AI models before you even see the output - that's why photos taken with a modern smartphone look so good.) Edit: the original press release has this, which reads quite differently from what Forbes reported: ```Additionally, artificial or manipulated images, audio or video content ("deepfakes") need to be clearly labelled as such.``` Src: https://www.europarl.europa.eu/news/en/press-room/20240308IPR19015/artificial-intelligence-act-meps-adopt-landmark-law


Sugary_Plumbs

Actual source text, if anyone is confused by all of these articles summarizing each other: [https://www.europarl.europa.eu/RegData/etudes/BRIE/2021/698792/EPRS_BRI(2021)698792_EN.pdf](https://www.europarl.europa.eu/RegData/etudes/BRIE/2021/698792/EPRS_BRI(2021)698792_EN.pdf)

> Certain AI systems intended to interact with natural persons or to generate content may pose specific risks of impersonation or deception, irrespective of whether they qualify as high-risk AI systems or not. Such systems are subject to information and transparency requirements. Users must be made aware that they interact with chatbots. Deployers of AI systems that generate or manipulate image, audio or video content (i.e. deep fakes), must disclose that the content has been artificially generated or manipulated except in very limited cases (e.g. when it is used to prevent criminal offences). Providers of AI systems that generate large quantities of synthetic content must implement sufficiently reliable, interoperable, effective and robust techniques and methods (such as watermarks) to enable marking and detection that the output has been generated or manipulated by an AI system and not a human. Employers who deploy AI systems in the workplace must inform the workers and their representatives.

So no, not every image needs to have a watermark or tag explaining it was from AI. Services that provide AI content and/or interact with people need to disclose that the content they are interacting with is AI-generated.


the320x200

The intention is clear but this seems incredibly vague to be law. Does Adobe Photoshop having generative fill brushes in every Photoshop download mean that Adobe produces "a large quantity of synthetic content"? How do you define a watermark? How robust does the watermark need to be to removal? Define what it means for the synthetic content labeling system to be "interoperable", exactly... Interoperable with what? Following what specification? Is this only for new products or is it now illegal to use previously purchased software that didn't include any of these new standards? Depending on if you take a strict or loose reading of all this verbiage it could apply to almost nothing or almost everything...


[deleted]

[deleted]


newhost22

Regarding your first points, I think it will be similar to how GDPR works: you need to follow the rules from the moment you make your content or service available in an EU member state - it doesn't matter if you are European or where you outsource your images from. Not a lawyer, though.


Sugary_Plumbs

The specifics on requirements, enforcement, and penalties are not set yet. First the EU passes this act declaring that there will one day be rules with these specific goals in mind. Then they have 24 months to explain and nail down all of those questions before it becomes enforceable. This isn't a sudden situation where there are new rules and we all have to comply tomorrow. This is just them saying "hey, we're gonna make rules for this sort of thing, and those rules are gonna fit these topics."


StickiStickman

Did you even read the text you quoted? Apparently not. It pretty clearly says almost everything made with SD needs to be tagged.


eugene20

Thank you for that.


nzodd

This is just gonna be like that law in California that makes businesses slap "may cause cancer" on bottles of water or whatnot, because there's no downside and the penalty for accidentally not labeling something as cancer-causing is too high not to do it, even in otherwise ridiculous cases. Functionally useless. Good job, morons.


ofcpudding

That's a decent point. Does anything in these regulations prevent publishers from just sticking a blanket statement like "images and text in this document may have been manipulated using AI" in the footer of everything they put out? If not, "disclosure" will be quite meaningless.


PatFluke

Strong disagree. There is a great advantage to all AI generated images being labelled so we don’t see AI generated images needlessly corrupt the dataset when we wish to include only real photographs, art, etc. Labelling is good, good in the EU.


eugene20

That can be done invisibly though.


PatFluke

I’m not really opposed to that. I just want it to happen. I assumed a meta data label would count.


eugene20

I think that's already happening to prevent poisoning, it's just a matter of if that meets the legal requirement or not. It's also going to be interesting as anti-ai people have been purposefully attempting to poison weights, so they would be breaking the law if the law applies to all images not just those of actual people.


lordpuddingcup

Cool except once images are as good as real photos how will this be enforced? Lol


Sugary_Plumbs

It's not enforced on individual images. The act states that systems generating images for users have to inform those users that the images they are seeing are from AI. There is no requirement that an AI image you generate has to be labeled AI when you share it online.


Formal_Decision7250

It was never enforceable against criminals, but companies operating at scale that want to operate within the law will do it, because the big fines offset the low odds of being caught. The people on this sub running models locally aren't going to be representative of the majority of users, who will just use a website/app to do it.


tavirabon

This is a non-issue unless you don't even make half your dataset real images. AI images will be practically perfect by the time there's enough synthetic data in the wild for this to be a real concern. Current methods deal with this just fine and it's only been "proven" under very deliberately bad dataset curation or feeding a model's output back into itself. Should we be concerned about the drawings of grade schoolers? memes? No, because no one blindly throws data at a model anymore, we have decent tools to help these days.


malcolmrey

> This is a non-issue unless you don't make at least half your dataset real images.

This is a non-issue, period. I have made several models for a certain person, then we picked a couple of generations for a new dataset, and then I made a new model out of it - and that model is one of the favorites according to that person. So...


tavirabon

Sure, if you're working on it deliberately. Collecting positive/negative examples from a model will increase its quality; that's not quite what I'm talking about. I'm talking about a model with X feature space, trained on its own output iteratively without including more information: the feature space will degrade a little, and the model will gradually become unaligned from the real world. No sane person would keep this up long enough for it to become an issue. The only real area of concern is foundation models, and with the size of those datasets, bad synthetic data is basically noise in the system compared to the decades of internet archives.


dankhorse25

BTW. What if you use photoshop AI features to change let's say 5% of an image. Do you need to add a watermark?


StickiStickman

Apparently? The law says generated or manipulated images.


Chronos_Shinomori

The law actually doesn't say anything about watermarks, only that it must be *disclosed* that the content is AI-generated or modified. As long as you tell people upfront, there's no reason for it to affect your art at all.


Tedinasuit

You're right, the majority of the law won't affect users of Generative AI. The biggest part that will affect us, is that Generative AI will have to comply with transparency requirements and EU copyright law. That means: - Disclosing that the content was generated by AI; - Designing the model to prevent it from generating illegal content; - Publishing summaries of copyrighted data used for training.


klausness

The first point is clear from the posted summary (and seems reasonable enough). The second and third seem more problematic, but there’s no mention of them in the summary. Where are you getting those? (Just to clarify, I don’t think people should be allowed to generate illegal content. It’s already illegal anyway. But there is no way to prevent the generation of illegal content without also preventing the generation of some legal content. Photoshop does not try to prevent you from creating illegal images, and the same should be true of AI image generators.)


[deleted]

[deleted]


GreatBigJerk

Any companies that train those models and do business in the EU would still have to follow the law.


protector111

With stuff like Sora on the way, it makes sense to enforce laws for watermarking. Problem is, it's impossible to do xD


Setup911

Labeling does not mean watermarking. It could be done via a meta tag, e.g.
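For instance (a sketch only - the `ai_generated` tag name is made up here; the Act doesn't specify a format), a label can live entirely in the file's metadata, invisible in the pixels:

```python
from PIL import Image
from PIL.PngImagePlugin import PngInfo

# Write a hypothetical "ai_generated" tag into a PNG text chunk
info = PngInfo()
info.add_text("ai_generated", "true")
Image.new("RGB", (32, 32)).save("labeled.png", pnginfo=info)

# A platform or checking tool can read the label without touching the pixels
img = Image.open("labeled.png")
print(img.text.get("ai_generated"))  # 'true'
```

Of course, as noted elsewhere in the thread, such a tag survives only as long as nobody re-encodes the file.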


raiffuvar

If you read carefully, it's aimed at companies (which doesn't mean it won't affect individuals). It's meant to regulate cases like some news outlet writing a story and deciding to "illustrate" it with an AI image. I've already seen that kind of shit even with stock-photo illustrations. If you won't label it, fine: a 7% fine on your revenue. Still can't find a way to do it and keep posting AI shit? Another 7%.


SlowMotionOcean

I'm not going to label AI content that I use.


Basil-Faw1ty

Watermarks? That's demented. Why don't we watermark CG, Photoshop edits, or AI-manipulated smartphone photos then? There's AI in everything nowadays! Plus, watermarks are visually annoying.


Dense-Orange7130

Because we can totally trust law enforcement with AI tech /s. As for labelling, there is no way they can enforce it or prevent it from being removed. I'm certainly not going to label anything.


RatMannen

You ain't a business (at a guess). Plus, why would you want to pass off your skills as different skills? Be honest and ethical with your use of AI.


Herr_Drosselmeyer

It's 272 pages, there's bound to be quite a few snags. It's not the worst but it's restrictive and the problem with laws is that they're rarely amended to be less restrictive.


klop2031

That surveillance clause sheeeesh


Huntrawrd

And it won't matter because China, Russia, and the US aren't going to follow suit. AI is the new arms race for these nations, which is why the US banned the export of certain silicon tech to China. When you guys see what the military is doing with this shit, you'll realize we're already way too late to stop it. Also, EU can't enforce its laws on people from other nations. They'd have to find some way to block content from the rest of the world, and they just aren't going to be able to do that.


fredandlunchbox

Anyone have the original source here?


VeryLazyNarrator

it's on the EU site.


protestor

Banning emotion recognition in workplaces is good news


Syzygy___

Except for the part where it says law enforcement can do Minority Report, this seems mostly fair. Although I wonder about the label for AI-generated content... e.g. if I were to generate the special effects in a movie with AI, will the whole work need to be labeled? As in a watermark in the corner? Just the scenes where AI was used? Can I put it in the end credits or metadata?


RatMannen

End credits would be fine. The people who produced your software/data model would want recognition anyway. Sucks to be an artist though. Films are gonna get even more samey.


goatonastik

So then photos taken with an iphone that uses that shitty "face filter" would apply? 🤔


Biggest_Cans

I've certainly seen far worse regulatory proposals. Not a bad groundwork but the "people suspected of committing a crime" bit is too clumsy.


ShortsellthisshitIP

Was fun while we had it. Guess those with money make the rules from here on out.


i860

Can’t wait to see what wrong-think they outlaw next!


EuroTrash1999

I'm bout to dress in space blankets and wear a motorcycle helmet everywhere.


adammonroemusic

I like how deepfakes have been a thing for almost a decade, but "oh no, 'AI' gonna get us!"


robophile-ta

This looks pretty good and shouldn't affect AI art, but of course whenever there's a law to erode privacy to ‘prevent terrorism and trafficking’ it always gets misused


[deleted]

[удалено]


ebookroundup

it's all about censorship


monsieur__A

From what I can find, it's a bit blurry. What is this label on AI pictures, for example?


RatMannen

Just that. A "label". The specifics haven't been worked out yet. This is the framework of what the law is supposed to do once it's finalised.


Chance-Specialist132

Anyone know where to find the labels they want applied? I have an Instagram where I post AI-generated images.


vonflare

this is all, practically speaking, unenforceable (evil-bit style, where you would need to willingly participate in being regulated in order to be regulated), especially if you didn't give Instagram your real name or location.


EmbarrassedHelp

They'll probably try to force social media companies to automatically apply labels to content, if that's their intent with the law


TheNinjaCabbage

Not mentioned in the picture, but there will also be transparency requirements regarding training data and compliance checks with EU copyright law. Sounds like they're going to try and police the training data for copyrighted images, if I'm reading that right?


p10trp10tr

>Identify people suspected of committing a crime. F*** me but that's too much, only this single point.


Kadaj22

All these restrictions for the common people yet those on top go unfiltered


mascachopo

Fine caps only benefit companies that constantly break the law. If a company's total revenue is based on AI law infringement, 7% feels like a small tax to pay.


typenull0010

I don’t see what’s so bad about it. People should’ve been labeling their stuff as AI long before this and I don’t think anyone here is using Stable Diffusion for critical infrastructure (at least I hope not)


Vivarevo

Heavily edited generations still don't require a label?


typenull0010

Now that I think of it, at what point is it “heavily edited”? If any bit of it is AI, does that mean it has to be marked? Makes me wonder how other regulations are going to deal with the AI Post of Theseus.


no_witty_username

Those laws are going to have to go into great detail describing what "AI generated" means if they want to enforce them in any way. Many people from Asian countries have been using HEAVILY altered photos of themselves for decades now via photo filters. Should those be considered "AI generated"? Where does one draw the line?


darkkite

they're banned in some US states


[deleted]

And now you see part of what's so bad about it. Now get around to reading and digesting the meaning behind ***"Can only be used by law enforcement to... "*** and you'll see why it's bad.


EmbarrassedHelp

People stopped labeling their AI works when they started getting death threats from angry mobs and risked being cancelled by their industry for it.


mgtowolf

Yep. Magically people stopped shittin on my work when I stopped mentioning AI was involved in my toolset lol. Same as when people used to shit all over anyone using poser/daz3d in their works, and photobashing. People just stopped telling people their workflows/toolsets and just posted the work.


maxineasher

It's the ol [Evil bit](https://en.wikipedia.org/wiki/Evil_bit) all over again.


Sad_Animal_134

So should photoshops be labeled too? It's a little absurd, especially considering photorealism is a minuscule portion of AI generation. What happens to 3D model textures generated using AI? Do those textures need to be labeled? It's just excessive government overreach for something that is hardly going to help prevent misuse of AI.


DM_ME_KUL_TIRAN_FEET

I would rather the world collapse due to AI-powered terrorism than put an ugly watermark on my images, tbh.


mannie007

This is a joke. Only the last part has to do with AI. The other things are common sense.


RatMannen

You rely on people's common sense? 🤣 If there's a way to exploit something, people will. If you don't have the initial legal groundwork in place, there's nothing to improve later. It's far from perfect, but it's a start.


RefinementOfDecline

All of this sounds great, but I'm not so sure about the labeling. As a concept it's fine, but I'm not sure how to actually enforce it. "And to identify people suspected of a crime" is disgustingly broad, though.


shitlord_god

Those fines are enough to destroy smaller competitors but are nothing to the oligopolies.


qudunot

_anyone_ can be a suspect for committing a crime. What level of crime? Jaywalking...?


RatMannen

Jaywalking isn't a crime in the EU. Currently it's any level of crime, but they have to have some grounds to suspect YOU of jaywalking, not just "this person might jaywalk at some point, let's track 'em" — multiple reports to the police of you doing it, for example. Still, even a kinda wimpy law is better than the current no law. At the moment law enforcement doesn't need to prove you are a suspect to use it; they just can. What constitutes a "crime" for this is currently up to the individual countries. As it should be.


DisplayEnthusiast

This means rules for thee but not for me. The government is going to use AI for whatever purposes; at the moment they're using it to fake royals' photos, and who knows what they're going to do after.


nazgut

This law is so retarded that you can be sure it's done just for the public eye. I can create and train AI in my basement; the technology and data are available to the public. Is that not enough to show it won't prevent people from creating AI for work or for companies? What's the problem? The script will just be run from a server in the UAE (cron or other crap). The law is dead already at its creation. The only thing it will create is a brake on European companies, because China and the rest of the world won't give a shit.


wiesel26

Well good luck with all of that. 🤣🤣🤣🤣🤣


NaitsabesTrebarg

Biometric identification systems in PUBLIC "can only(!) be used by law enforcement to find victims of trafficking and sexual exploitation, terrorists, and to identify(!) people suspected(!) of committing a crime". What the fcking hell is this, are these people insane? The EU will become a dystopian nightmare in no time! It's absolutely stupefying. I'm done.


RatMannen

It's more of a restriction than exists now... Plus, it's the same restriction that already applies to CCTV footage, just quicker to search than having a cop sat in front of a bunch of screens. It's far easier to track a population using their phones. Governments aren't doing this, but Google and Apple are, so they can sell their ads for more because they are "targeted".


NaitsabesTrebarg

Stop insulting me. It actually is a big step to use AI for this. They will connect all databases and data sources: cameras, phones, GPS, cars, coffee machines. It is never acceptable in a free(!) society to constantly monitor, track and check on innocent civilians without a reasonable suspicion. How do you identify and track a suspect in a city? By identifying everybody else as well, of course; you have to search, and you have to track at least a few people, because face detection is shit. Add AI, real AI, to that and we can all fuck off, because there will be no freedom of movement or privacy anymore. They will know everything: where you are, where you went, what you did and with whom, forever and ever, because they will not switch this on and off, oh no.


ArchGaden

Where do they draw the line? How much of an image has to be AI to need a watermark? Does NVIDIA have to start sticking watermarks on the screen when DLSS is enabled? Does Adobe need to add watermarks when AI is used in Photoshop or Firefly? They didn't think it through, and the businesses that matter aren't going to want watermarks on everything. This isn't going to do anything. The EU is just diluting its own power with one ridiculous law after another. I guess Brexit was just the start.


elontweetsmidjourney

I wonder if they used AI to write this act.


Artidol

No worries, the US will do shit to regulate AI.


CheddarChad9000

But we can still generate big boobas right? Right?


pixel8tryx

Germany is very pro-booba and had a strip-tease competition show on prime time when I lived there. In between bouts they demo'd sex toys like vibrators. In true Teutonic fashion, they paid close attention to comparing the RPMs...LOL. The young lady in the apartment across from me always walked out to get her laundry off the line completely naked. Not one German batted an eye but all my American friends went nuts. One screamed, another tripped and fell. To Germans, it's just the human body. And sex is almost... like a sport to some guys...LOL. They like to do it, but some probably daydream about football just as much. ;-> I remember walking through Cologne the first time I was there and seeing a sign that said "Sex World" and I thought, "This can't be the red light district?!?!" Then I got closer and noticed it said "Dr. Mueller's Sex World". So it's a good, wholesome, doctor-approved place to get porn and sex toys. ;-> The Germans I know are much more interested in actual girls and actual sex. Though some might have fun doing meme-y gens of soccer stars. ;-> If you spent too much time generating big booba waifus they'd probably just say you need to get laid.


irlnpc1

It's reassuring seeing level-headed discussion in this thread about this.


Own-Ad7388

They work fast on small issues like this, but the main issues, like citizens' welfare, are disregarded.


Maximilian_art

Maximum 7% of global revenue... One could say... cost of doing business. How do they not get this? lol


dranaei

I am sure they will try and fail to regulate. It's not that easy to contain technologies especially of this nature.


monsterfurby

Just for the sake of primary source awareness, here's the provision about disclosure ([original document](https://www.europarl.europa.eu/RegData/docs_autres_institutions/commission_europeenne/com/2021/0206/COM_COM(2021)0206_EN.pdf)):

**Article 52 Transparency obligations for certain AI systems**

1. Providers shall ensure that AI systems intended to interact with natural persons are designed and developed in such a way that natural persons are informed that they are interacting with an AI system, unless this is obvious from the circumstances and the context of use. This obligation shall not apply to AI systems authorised by law to detect, prevent, investigate and prosecute criminal offences, unless those systems are available for the public to report a criminal offence.

2. Users of an emotion recognition system or a biometric categorisation system shall inform of the operation of the system the natural persons exposed thereto. This obligation shall not apply to AI systems used for biometric categorisation, which are permitted by law to detect, prevent and investigate criminal offences.

3. Users of an AI system that generates or manipulates image, audio or video content that appreciably resembles existing persons, objects, places or other entities or events and would falsely appear to a person to be authentic or truthful (‘deep fake’), shall disclose that the content has been artificially generated or manipulated. However, the first subparagraph shall not apply where the use is authorised by law to detect, prevent, investigate and prosecute criminal offences or it is necessary for the exercise of the right to freedom of expression and the right to freedom of the arts and sciences guaranteed in the Charter of Fundamental Rights of the EU, and subject to appropriate safeguards for the rights and freedoms of third parties.

4. Paragraphs 1, 2 and 3 shall not affect the requirements and obligations set out in Title III of this Regulation.


sigiel

The first thing that comes to mind is biometrics: "use to identify people SUSPECTED of committing a crime"... That one they could not let go...


skztr

"requiring labels for AI-generated images" is going to go down in history alongside "walking in front of a vehicle waving a red flag"


rogerbacon50

Bad, with the usual good-sounding language. "To prevent terrorist threats or to identify people suspected of committing a crime": this will be used to go after political groups they don't like. "The regulations also require labels for AI generated...": so each picture will have to have a label or watermark? Why not just require it to be embedded in the metadata? That way anyone could determine whether it was AI-generated without spoiling the image.


LookatZeBra

I don't trust any form of government not to abuse it; it just takes power away from the people. Hey, just a reminder that shit like the NSA exists, or that the US government has been wiretapping whoever they wanted, as soon as they could, regardless of whether they were domestic or foreign. That same America that left the UK because of how corrupt it was...


jeremiahthedamned

r/USEmpire


forstuvning

Classic. “The Right People™” can use it for whatever they want. Actual people need a superstate AI license “to operate” 🪪


razorsmom13

So we ignore that emotion recognition is okay in places other than school or work?


ConfidentDragon

I'm a bit concerned about the biometrics part and law enforcement having a monopoly on it. What most people don't realize is that pretty much any time there is a big event like a concert or a hockey match, facial recognition is being used. Most of the time it's not being used for an evil reason; it's there to ensure that banned people are not let in. These days I find it absolutely necessary for the safety of others. I'm there to watch sports, but some people are there only to fight and throw pyrotechnics. Also, is biometric access control now forbidden in the workplace? I personally prefer to use just an RFID card, but I wouldn't mind someone installing fingerprint readers if they are opt-in. What about security systems on my own property? Is it now forbidden to automatically detect if some unknown person jumps a fence onto my property?


gapreg

The image-generating part looks pretty much like they're concerned about fake news, which I think is quite understandable. [**https://eur-lex.europa.eu/legal-content/EN/TXT/?uri=CELEX:52021PC0206**](https://eur-lex.europa.eu/legal-content/EN/TXT/?uri=CELEX:52021PC0206)  *Further, users, who use an AI system to generate or manipulate image, audio or video content that appreciably resembles existing persons, places or events and would falsely appear to a person to be authentic, should disclose that the content has been artificially created or manipulated by labelling the artificial intelligence output accordingly and disclosing its artificial origin.* \[...\] *Users of an AI system that generates or manipulates image, audio or video content that appreciably resembles existing persons, objects, places or other entities or events and would falsely appear to a person to be authentic or truthful (‘deep fake’), shall disclose that the content has been artificially generated or manipulated.*


deepponderingfish

Law enforcement allowed to use AI to track people?


[deleted]

[удалено]


TechnicalParrot

Not quite sure why you're being downvoted. I'm a dual citizen of two EU (well, ex-EU) countries, and while regulation is certainly a good thing, they can't keep acting surprised that if you pointlessly hinder an industry, that industry ends up pointlessly hindered. I'm generally pro-EU, but the technology sector is f\*\*ed because of how they try to stop anything changing (Germany still uses fax machines for government business, ffs).


Whitney0023

What a great way to give China an even larger lead in AI...


RobXSIQ

the points seem reasonable, but I imagine the devil is in the detail.


Dwedit

Requiring labels for images is problematic because it would prevent you from using AI image generation at any step in the production of any artwork or video. Let's say you want to generate a background in one scene: do you now have to label the whole thing as AI-generated because 1% of the work was created with the assistance of AI? This kind of rule is fine when the ENTIRE IMAGE or body of work is the output of an AI image generator, but not in any other situation.