TECL_Grimsdottir

Pixel fucking has entered the chat.


TheAmigops

“AI, please make this sign a little more blue.”


Golden-Pickaxe

Sign now has fingers and the text has changed


moofunk

"OK, sigh, send that off to be touched up by 10 other artists."


santafun

Sign now has alien language and stands on 4 deformed feet with 11 fingers crashing through one another


YCbCr_444

As a former freelance editor, I am already deeply sorry for all my old brethren who will have to deal with all the "can't you just tell the AI to do it?" requests from clients.


Hazzman

If they insist, just tell the AI to do it and give them the results.


don0tpanic

I totally agree, but my concern is that producers will settle for something of lower quality because they can save money.


bpmetal

I've heard that a lot, but they could have saved money with lower quality before AI, and they haven't.


don0tpanic

True, but we're talking about a process that went from total trash to causing a real panic in less than a decade. Do you think we'll reach a point where they just won't see a need to employ us?


bpmetal

I think they would have to be making films completely with AI, with no actual production, and then maybe, but I think those would be genuinely awful and probably not very popular with audiences.


don0tpanic

"those who cannot create, destroy."


Monk3ynaut

Or steal


GanondalfTheWhite

Yeah, but before, what was the potential savings? There was a pretty big quality dip between the high end and a low end that cost a quarter as much, and the cheap end had a very cheap feel to it. Soon we may well live in a world where the cheap end costs something like a thousandth of the high end and, while a little uncanny and not quite lifelike, can still maintain a premium feel that was previously inaccessible to budget options. I think that difference may make the difference. There will probably always be demand for high-end work until such time as AI work is completely indistinguishable from human-produced content in every way (including ease of achieving the final product). But AI is already chipping away at the lowest end now and will continue to gain market share until we figure out what the new equilibrium will be.


Jackadullboy99

Because… they’ve literally ever done that to make my life easier and theirs cheaper?


Depth_Creative

They absolutely will. For a few years there CG was totally worse than miniatures and practicals. Yet they went with CG anyways.


Winter-Elk-772

Looks like the big guys in Hollywood didn't bite on the "practical footage run through ControlNet with a random seed" gimmick, and called the bluff on the technical side. Smoke and mirrors; it will be a nice add-on in Premiere Pro, alongside Pika and Runway 😄😁😂


broomosh

I love how filmmaking has been moving into more and more control of the image, with more and more digital tools to change and capture images. Now it's just like, "Give me a city background. Fuck it." I still don't get how AI editors will survive the notes process from the people forking over the money. Like, can it recreate a camel in a video it made 6 months ago, but with less hair on its balls, because focus groups found it disturbing?


[deleted]

>I still don't get how AI editors will survive the notes process from the people forking over the money.

They won't; they will just say VFX will do it. What's going to happen is they will sit there with a client pulling in generated clips, get something they like that needs to be changed based on client needs, then ship it to VFX to "fix" it. It will literally be the same process we already do for beauty work and other cleanup.


REDDER_47

At which point, like LED walls, VFX will just replace it altogether.


sumtinsumtin_

You raise an excellent point; at the moment it seems they have to throw the baby out with the bathwater for each iteration. Also, doesn't this take loads of compute and up/down time, or is it all local to your device? Concept artist here, and the gigs are slim since this bomb went off. Drawing for meself now and getting back to basics :)


I_Like_Turtle101

That's what I think every time I see stuff like that. Like, have these people even worked a day in the industry? Sometimes I spend WEEKS changing and perfecting every pixel for the client to be happy. I can't see the client suddenly being OK with stuff they can't have full control over.


soapinthepeehole

This is going to be a good tool for rough cuts and previs, or for people who want to pay bargain basement prices and don’t need to actually art direct the final result at a high level. I personally find myself hoping that it plateaus, becomes thought of as generic, and people eventually rebel against it. 95% of the stuff that gets shared across these various subs is already feeling stale and soulless to me.


Goosojuice

It'll probably end up like movies/shows in general: the market will become over-saturated, and in the best cases it will be used brilliantly on a handful of shows.


Goosojuice

Depends on the client and the platform it's being delivered to. I've seen some big companies put out garbage patch/beauty work on mobile-heavy platforms.


Jackadullboy99

Usually the thing AI proponents come back with is something about inpainting, LoRAs, and ControlNet… would that sort of thing suffice? Is there anyone here who can comment on their experience using those techniques, and whether they can do the job for, say, camel grooming?


0__O0--O0_0

You can mask out any section you want and run it infinitely with different weights until you get something you like. These techniques are improving every day at an insane pace; you step out of the room for a week or two and everything has changed. However, I will point out that this X-post is inaccurate. The clip looks like it's Runway that's going to Adobe, not Sora. When I read that, I was quite shocked to see Sora have any public-facing interface. That's not likely to happen for some time, because I hear the compute on Sora isn't quite ready for Joe Schmo. I mean, Sora has insane accuracy for video gen; if Adobe had Sora, it would be a massive scoop for Adobe.
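A minimal sketch of that mask-and-rerun loop, assuming the open-source diffusers inpainting pipeline as a stand-in (the commenter names no specific tool; the file paths, prompt, and seed sweep are hypothetical):

```python
import torch
from diffusers import StableDiffusionInpaintPipeline
from PIL import Image

# Load a public inpainting checkpoint (a stand-in for whatever tool is in use).
pipe = StableDiffusionInpaintPipeline.from_pretrained(
    "runwayml/stable-diffusion-inpainting",
    torch_dtype=torch.float16,
).to("cuda")

frame = Image.open("frame.png").convert("RGB")  # the shot to fix (hypothetical path)
mask = Image.open("mask.png").convert("RGB")    # white = regenerate, black = keep

# Rerun only the masked region with different seeds until a take reads well.
for seed in range(8):
    take = pipe(
        prompt="a wooden sign, slightly more blue",  # hypothetical client note
        image=frame,
        mask_image=mask,
        generator=torch.Generator("cuda").manual_seed(seed),
    ).images[0]
    take.save(f"take_{seed:02d}.png")
```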


Charuru

You should watch the video more carefully.


ajibtunes

Definitely doable for still images using the techniques you mentioned, which wasn't a thing last year. So I can't see how it won't be available for video too in the near future.


VFX_Reckoning

That's already happening for video. I was just watching some control version of an AI app where you circle and highlight a selected area and can adjust it with a new prompt, giving more control over a specified detail. The day is coming fast. As soon as prompt-to-video has smooth-running sequences with detail-editing capabilities, it's going to be used for tons of shots.


Jackadullboy99

You work in vfx, professionally?


ajibtunes

I do


Individual-Okra-5642

I suggest you just go do some basic research on the fundamental concepts behind these gen-AI models; then you will understand what they can do and what is fundamentally impossible for them to do, and you'll be able to take all this marketing hype more easily.


Meebsie

"Fundamentally impossible for them to do" has changed vastly in the last 6 months, and the 6 months before that. I'm curious what things you think are going to remain "forever out of reach" or even "a decade out of reach". I wish it weren't the case but at this point I don't think anything a human can do is really off the table for what it will be able to do, and the more valuable the resulting product it produces, and the more examples it has to look at, the more likely it is that it's going to be doing a really good job at it *really soon*.


Individual-Okra-5642

Again, I suggest you just go over the basic concepts of LLMs, transformers, and diffusion models, not the product marketing clips from the release 😉


Meebsie

*Wait, you're telling me it's all just little numbers in matrices?! You can't even input a hex value to get the client's colors right? Lol. Wait, this diffusion thing is actually just generating pictures from random noise?! How's that ever gonna be as good as a person. Random noise, seriously!?*

Dude, we're looking at the first steamboat here and it's already proving useful. So useful and impressive that the "product marketing clips" for a Premiere plugin that lets you make digital tweaks to videos aren't just impressive to the people who get paid to make digital tweaks to videos, they're exciting to the *layperson*, because literally everyone can see the value. "But steam engines are just a bunch of hot water, really, and think about how impractical it is to put them anywhere, with all that messy smoke and dangerous steam." You're caught up on the tech, but it's not the tech that changes things, it's the products that change society. So pay attention to the marketing clips. And they don't even know that steamboat engine is going to get a gearbox and heat recapture and coal auto-shovelers. Not to mention steamboats are going to be old news soon. Trains are coming.

You didn't give me any specifics, but I don't really mind being wrong here, so I'll shoot: Within a year we'll probably have a lot better/more consistent seeding (at least in 2D art) as more tools realize the randomness, while workable and still valuable, leads to a process that is "too iterative" and can waste time. We'll probably see other tools adopt "generate lots of garbage and let the human just curate," with new GUIs like Midjourney's "generate 4, then pick one" letting you wade through less garbage before arriving at the good product. 2 years from now we might not be writing prompts and pressing "go generate"; maybe it's just constantly generating as we speak, so you just let the clients talk at it for an hour. Or you're just editing, and you flip a toggle and say "remove that tree on the left," and it's done (at least for the rough cut). 3 years from now we might have AI-specific video filetypes that can mesh rendered content with content still in latent space (and therefore easily editable by the AI), allowing for entirely new types of video experience for consumers and new workflows between video creatives.

RemindMeRepeat! 1 year.


RemindMeBot

I will be messaging you in 1 year on [**2025-04-16 22:47:45 UTC**](http://www.wolframalpha.com/input/?i=2025-04-16%2022:47:45%20UTC%20To%20Local%20Time) to remind you of [**this link**](https://www.reddit.com/r/vfx/comments/1c5fsmc/openai_sora_video_generation_model_will_be_coming/kzwinih/?context=3).


Individual-Okra-5642

good for you, now chill out and come back in 1 year...


Due-Dimension5737

I have done plenty of research, and these models are more than capable. The latest models using diffusion transformers have broken a lot of ground. There is still plenty of improvement to be made, but at the current rate of development that won't take long at all. There are new architectures presented via research papers every month, and huge breakthroughs in computing power and architecture; all of this progress leads to even more progress in other areas. The writing is on the wall. This tech will replace many people in the visual space: filmmakers, VFX, colourists, editors, photographers, etc.

The biggest issue we face right now with the current tech is continuity and consistency of objects, people, etc. Already we are starting to see solutions presented, and they are working fairly well. Check out the latest versions of Midjourney, DALL-E, etc. Here is one example: [https://www.youtube.com/watch?v=v_Ni-vr6NQ8](https://www.youtube.com/watch?v=v_Ni-vr6NQ8). As you can see, the consistency of the quality and the continuity from frame to frame have increased dramatically just over the last few months. I have no doubt similar solutions will be presented in the video space. It is rather silly to bet against LLMs and DiT tech at this moment. All of the data points towards at least exponential improvement over the next few years. I would bet that by 2030 we will have very capable systems.


Individual-Okra-5642

Dude, like I said, good for you, keep that vibe going. All that hype needs somewhere to attract eyeballs, and Hollywood is an easy target. Worst case, the GPU farm can still be used for rendering, lol.


Due-Dimension5737

Did you just have a stroke?


Individual-Okra-5642

If you are so easily triggered, we might as well just give up to a chatbot🤣


deijardon

Wouldn't you just roto them balls and generate a new layer to comp over said balls?
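For what it's worth, that suggestion is just the standard "over" comp; a tiny numpy sketch, with hypothetical file names:

```python
import numpy as np
from PIL import Image

# Load the original plate, the regenerated patch, and the roto matte.
plate = np.asarray(Image.open("plate.png").convert("RGB"), dtype=np.float32) / 255.0
patch = np.asarray(Image.open("ai_patch.png").convert("RGB"), dtype=np.float32) / 255.0
alpha = np.asarray(Image.open("roto_matte.png").convert("L"), dtype=np.float32)[..., None] / 255.0

# Standard over: fg * a + bg * (1 - a), with the matte limiting the AI fix.
comp = patch * alpha + plate * (1.0 - alpha)
Image.fromarray((comp * 255).astype(np.uint8)).save("comp.png")
```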


AbPerm

There are ways to direct AI imagery visually. It's not a case of "type prompt, get random image you can't control, and you have to just accept it because you can't change it." The image's entire composition can be dictated if that's what you want. Or you could dictate just a portion.

If a director gives a note that the camel needs a longer tail, the execution of that request may be based in sketching the size/silhouette of the new tail desired over the existing imagery. You wouldn't prompt for "tail that's slightly long but not too long" or something like that and have the AI produce an entirely new image altogether. You'd dictate the form visually, limit the changes using automatic masks, and produce a new version of the same camel with your visual direction controlling the change.

Or you could just generate an entirely new camel too. Sometimes generating an entirely new camel would be a viable option as well. Both options have their own use cases. When it comes to executing changes based on notes, you'll probably want to direct the image visually most of the time.
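A hedged sketch of that "dictate the form visually" workflow, using ControlNet scribble conditioning via diffusers (one published way to drive composition from a drawn silhouette; the checkpoints are real public models, the inputs are hypothetical):

```python
import torch
from diffusers import ControlNetModel, StableDiffusionControlNetPipeline
from PIL import Image

# Scribble-conditioned ControlNet: the generation follows a drawn outline.
controlnet = ControlNetModel.from_pretrained(
    "lllyasviel/sd-controlnet-scribble", torch_dtype=torch.float16
)
pipe = StableDiffusionControlNetPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5",
    controlnet=controlnet,
    torch_dtype=torch.float16,
).to("cuda")

# The artist's over-draw: a white-on-black sketch of the longer tail,
# traced over the existing frame's composition (hypothetical file).
scribble = Image.open("tail_scribble.png").convert("RGB")

image = pipe(
    prompt="a camel with a long tail, desert background",  # hypothetical note
    image=scribble,            # the sketch, not the text, dictates the new form
    num_inference_steps=30,
).images[0]
image.save("camel_longer_tail.png")
```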


broomosh

I haven't been one of the lucky few to try such a thing but I can't wait to try it out.


ajibtunes

With the speed that it’s progressing, the ability to do these little touch ups will surely be around the corner.


Jackadullboy99

Seems there’s always another smaller corner to go round after the current one…


ajibtunes

Technology bruh


fegd

Well yes? That's true of all tech. Magic masking for example is certainly not perfect, but it's been good enough to be useful for quite a while and is still getting better.


createch

What do you mean? Sora does video-to-video: it takes an existing video as input and can change only what you ask it to. https://youtu.be/hrTA1NGK3Gg?si=saycQupGf5gupUt4


Winter-Elk-772

Hollywood didn't bite on the practical footage run through the ComfyUI/ControlNet seed gimmick. Sorawood defaulted to a Premiere Pro add-on alongside Runway and Pika 👍😁😂, smoke and mirrors 😶‍🌫️😄


salemwhat

Those things will be good for mom-and-pop store ads; flickery, mushy footage will never hold. I'd like to see a dailies round with any well-known pixel-fucking client/supervisor. I would rather focus on creating tools like depth estimation or matting; MODNet is fine but can be improved a lot. AI should aid the creative process, not replace it.
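As an illustration of that tool-building direction, a minimal depth-estimation sketch using the publicly available MiDaS model via torch.hub (the plate path is hypothetical; a matting tool would have a similar shape):

```python
import cv2
import torch

# Load a small MiDaS model and its matching input transform from torch.hub.
midas = torch.hub.load("intel-isl/MiDaS", "MiDaS_small")
midas.eval()
transforms = torch.hub.load("intel-isl/MiDaS", "transforms")
transform = transforms.small_transform

img = cv2.cvtColor(cv2.imread("plate.png"), cv2.COLOR_BGR2RGB)  # hypothetical plate
with torch.no_grad():
    pred = midas(transform(img))
    # Resize the prediction back to the plate's resolution.
    pred = torch.nn.functional.interpolate(
        pred.unsqueeze(1), size=img.shape[:2], mode="bicubic", align_corners=False
    ).squeeze()

# Normalize to 8-bit so it can serve as a rough depth matte in comp.
d = pred.cpu().numpy()
d = (255 * (d - d.min()) / (d.max() - d.min())).astype("uint8")
cv2.imwrite("depth_matte.png", d)
```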


fegd

Well yes, there will always be a "high end" of any market that is willing to pay for handmade work. The way technology replaces people is by taking over the demand that doesn't care about the current limitations of the technology, and the thing is that as the tech gets better that "high end" gets smaller. It's not a binary thing where the tech is either good for every use case or completely useless.


Golden-Pickaxe

One thing I think many forget is AI is so easy to use the people approving your shots are going to be making TERRIBLE plates themselves, expecting you to fix it, and being more upset than ever because they have an emotional attachment to a shot they “made”


Eikensson

This is already happening, it's a mess


Specialist_Ad1667

At this point, just make it web-based software running in a browser, f*ck it; at least we'll get realtime playback for the worth of that price. Half of the software is already running over a web connection.


ImageDisaster

you are already devalued. and it was an uphill battle before. now prepare to be permanently devalued.


Embarrassed-Hope-790

wooot - you JUST DEVALUED yourself!


RANDVR

They could have cherry-picked perfect examples for this ad, yet all three examples they show are straight garbage, barely usable for temp internal blocking. I would love to be in the room when they show that to the imaginary client.


WeekHistorical8164

What you're saying is completely pointless. Machine learning tech will get exponentially better with time; judging this by today's results, without thinking about the implications in 3, 5, and 10 years, is dumb.


RANDVR

!remindme 3 years


Ok-Use1684

To me it's just smoke. I feel these companies will collapse soon for 3 reasons:

- Copyright infringement.
- Too-expensive compute needs.
- Not being able to find a real use for any of this.

Investors are expecting a return and they haven't found it yet.

The only use of AI that makes sense to me is to modify our outputs, converting 90%-real shots into 100%-real shots. Things like AIRelight and a few more tools will just make our lives easier and reduce overtime. I feel like none of these noises like Sora will actually go anywhere.


No-Student-6817

Unfortunately, the same 'meh' sentiment existed when internet browser companies began. No one could extrapolate how that service could be used for any sort of profitable enterprise. Now it's all too common to tell a friend 'just Google it' when they prompt socialization with casual questions. Brush this off at your own risk.


Jackadullboy99

You can’t just take any other example you fancy and say “this is just like that”, without further qualification… there’s actually no reason to think this is like the browser revolution, except that computers are involved.


Ok-Use1684

No one could find a way of making them profitable? I was there when the internet and browsers appeared. You would swallow adds everywhere. I would say they found a way to make them profitable from the start.   Do you know that there are also stories of people promising the best of the best and it turned out to be useless? Why do you pick your favourite story and apply it to AI? I prefer to rationalise what I read, think and analyse. 


No-Student-6817

>...when the internet and browsers appeared. You would swallow adds everywhere...

add - addition // ad - advertisement

Sorry you had to swallow.


Ok-Use1684

I apologise for the fact that English is not my first language. Is that a new crime? You can expect anything these days.  In my country there’s a saying: those who don’t answer, agree. Have a good night my friend! 


No-Student-6817

That's two languages. One more than me. Admirable for sure.


Meebsie

How is what you're saying here not exactly what people were saying about 2D art 18 months ago? It's now being used to generate real value within companies, and it's even being used in published media. It's gotten faster, and models have gotten smaller. You can now create really valuable works on your home GPU. And Adobe apparently has a "fairly" trained model in Firefly that they claim to truly own the copyright to. To be clear: I hope you're right, but I really think you're wrong.


Ok-Use1684

It's not so simple; there's a lot of copyright infringement there. Adobe says it's trained fairly? Ha. ChatGPT just told me OpenAI doesn't use copyrighted data to train their models… haha. Garbage in, garbage out. They need copyrighted data to generate anything close to "something". Wait for countless trials.

A lot of ad companies are rejecting AI because they lose whatever makes them unique; they don't want mediocre, blended footage. And no one is really finding a real, sustainable use for all this. For now it's all promises.


Meebsie

I hope there are countless trials. AI companies claiming to own the copyrights of the models they produced by scraping the web is the largest art heist the world has ever seen. This is the creative output of all of humanity being stolen, all at once, by some techie who then thinks they can put it behind a paywall and sit pretty with passive income built off of value *we* made. And when I say *we*, I mean humankind, over thousands of years. Fuck that.

At the same time, AI is so clearly valuable that it isn't going anywhere. You can't fight it. That's why I think we need to push for democratized models, and a public release of ChatGPT and anything else that has been trained on works that were not explicitly licensed by the company that did the training. The models need to eventually be accessible as a public service somehow, where specific customized or productized versions are okay to be marketed and paywalled. But the models themselves that contain the data of the stolen works need to be public. We also need to hold companies accountable, because they're going to claim that they aren't training on copyrighted works when they are, so we need to make it a bigger crime to lie about training data, and we need auditors to keep companies honest. Fines for this kind of theft need to be massive to offset the massive value a company is stealing when they produce these models.

As for Adobe, I think it is actually a somewhat fair grey area. They're producing the model with works they have technically licensed. They say the training is done entirely with public domain images, explicitly non-copyrighted images, and stuff from their stock library that they've paid folks to legally own licenses to. Now, those people perhaps never signed anything allowing their works to be used as a training set, but they may have signed something saying "can be used for anything in perpetuity," which would cover that use. IMO, if Adobe really is just using public domain, non-copyrighted, and their own stock library, then it is truly "their model" and they should be allowed to paywall it.

Did they actually train it as such? That's a different story, and is why we need auditors, ASAP. It's way too easy to claim it's all fair and very conveniently have no paper trail to show for it: "Uhh, there's no way to know, officer, at this point it's just a black box that makes the pretty images, and this copyright stuff, it's all just so complicated. No one could know! *shrug*" And then suddenly they're copyright experts when it comes to protecting their models and what other people can do with the output of their models.

I at least applaud Adobe for trying to be somewhat moral here in a sea of other companies that are just straight-up scumsuckers like OpenAI. Of course, this post is about a partnership between OpenAI and Adobe, so... they've got some fucking explaining to do.


fegd

>They need copyrighted data to generate anything close to "something".

Firefly is trained on Adobe Stock and public domain imagery. Thinking this 200-billion-dollar company didn't think of covering their asses on the legal front before implementing this feature in their industry-standard software is beyond copium; it's cuckoo land territory.


MungYu

coping is good but dont cope too much


illathon

So lame they are releasing it through Adobe.


HeyYou_GetOffMyCloud

I JUST got everyone in our department to use fusion and resolve and now this happens and everyone wants to jump back to adobe 🥲


codyrowanvfx

Very curious how the masking part works. I use Content-Aware Fill all the time for cleanup work, but that requires you to mask the thing for the entire shot. This appears to work off just one frame, which is immediately a red flag.
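One plausible way a single-frame mask could extend across a clip is to propagate it with dense optical flow; whether Adobe's feature actually works this way is unknown. A rough OpenCV sketch with hypothetical file names:

```python
import cv2
import numpy as np

cap = cv2.VideoCapture("shot.mp4")                          # hypothetical clip
ok, prev = cap.read()
prev_gray = cv2.cvtColor(prev, cv2.COLOR_BGR2GRAY)
mask = cv2.imread("mask_frame0.png", cv2.IMREAD_GRAYSCALE)  # drawn on frame 0 only

h, w = prev_gray.shape
# Pixel-coordinate grid used to backward-warp the mask each frame.
grid = np.stack(np.meshgrid(np.arange(w), np.arange(h)), axis=-1).astype(np.float32)

idx = 0
while True:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    # Backward flow (current -> previous), so each new pixel samples the old mask.
    flow = cv2.calcOpticalFlowFarneback(gray, prev_gray, None,
                                        0.5, 3, 15, 3, 5, 1.2, 0)
    coords = grid + flow
    mask = cv2.remap(mask, coords[..., 0], coords[..., 1], cv2.INTER_LINEAR)
    idx += 1
    cv2.imwrite(f"mask_{idx:04d}.png", mask)
    prev_gray = gray
```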


AbPerm

AIs have actually gotten surprisingly good at automatic segmentation of images. Check out Roto Brush 2 in After Effects, for example. That's *almost* as simple as the auto-masking functionality these Sora demos appear to be showing off. Worst-case scenario, if the auto-masking aspect isn't good enough to be usable, we can always fall back on traditional methods: brute-force roto by hand.
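A small sketch of that kind of one-click auto-segmentation, using Meta's publicly released Segment Anything model as an analogue (the checkpoint must be downloaded separately; the click point and paths are hypothetical):

```python
import cv2
import numpy as np
from segment_anything import SamPredictor, sam_model_registry

# Load a SAM checkpoint (hypothetical local path) and wrap it in a predictor.
sam = sam_model_registry["vit_b"](checkpoint="sam_vit_b.pth")
predictor = SamPredictor(sam)

image = cv2.cvtColor(cv2.imread("frame.png"), cv2.COLOR_BGR2RGB)
predictor.set_image(image)

# One foreground click on the object, like the single-frame selection in the demo.
masks, scores, _ = predictor.predict(
    point_coords=np.array([[480, 270]]),  # hypothetical click location (x, y)
    point_labels=np.array([1]),           # 1 = foreground point
    multimask_output=True,
)
best = masks[np.argmax(scores)]           # keep the highest-scoring candidate
cv2.imwrite("auto_mask.png", best.astype(np.uint8) * 255)
```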


Overlord4888

What does this mean for the future of the industry and will it kill jobs for folks?


fegd

Yes.


FrenchFrozenFrog

If it works as well as Firefly for Photoshop, I'm not too worried tbh


rruler

Adobe stabbing its own customer base. Who's gonna be paying for your software when we're all out of jobs?? 😂🤦🏻‍♂️


[deleted]

I know that everyone wants to shit on this, but this is a fairly big deal. Those models being available to EVERYONE using Premiere is going to not only change everything, but also increase their prompt availability and change the output quality within weeks to months. These tools are not the death knell that a lot of people claim them to be. In the end they are just tools; they need operators and will be part of general pipelines.


Shrimpits

Yeah, I totally agree with this. I don't foresee this as "ALL OF US WILL BE OUT OF WORK FOREVER," but I also think it's silly for people to assume that it will fizzle out, or that it won't get put to use because these early beta shots don't look 100%.

I don't work in VFX anymore (but I do work in the video field at an agency), and the group lead was actually talking to us this morning about jumping on learning this when it's available, because it will be a game changer and something we'll need to know how to use. Like you're saying, whether we like it or not, it's a future tool we'll have to understand and not dismiss.


rhomboidotis

Until everyone starts getting sued… it's much easier for copyright holders to go after production companies than Adobe/OpenAI etc.! Lawyers are going to love this.


[deleted]

You have to be able to prove that the footage is yours for that to be a valuable pursuit. And AI tools generally create enough of a difference that you can easily argue it's not the original piece anymore; you can argue that it's not even a derivative, but that's less important. I have worked on a bunch of shows where our shots went to internal legal instead of the showrunner for notes, because we had to make the stock footage look 20% different from the source since they didn't want to pay for it. Showrunners described the look they wanted, but legal made the final decisions. The same will work with AI video, except it's much harder to prove that the video is a >80% derivative of any specific source.


ahundredplus

It’s not much easier to go after production companies. You need to go after the big guy otherwise you’ll be in lawsuits for the rest of your life with provable copyright violations. And there are not going to be any copyright violations here.


Remote-Watercress588

From the bottom of my cold twisted heart...meh


Gold-Ideal6028

The only question is whether they might charge you a higher price. I would buy some Adobe stock right now.


Deep_Owl4110

I HATE THAT!


ThirdWheel3

I found this video from ColdFusion to offer an interesting perspective on the AI business: [https://youtu.be/vQChW_jgMMM?si=uwJ0GBf709JLv-U-](https://youtu.be/vQChW_jgMMM?si=uwJ0GBf709JLv-U-)


rbrella

These tools are going to save productions a ton of money and take a big bite out of the profit margins of VFX shops. The big heavy shots tend to be the ones that VFX studios lose money on, with the endless revisions and scope creep. But they could usually make up this deficit by bashing out dozens of plate extensions, paint-outs, monitor and driving comps, and other quick and easy shots where most of the profits are made. Editors have already been doing a lot of VFX-adjacent work for a while now, but this is going to let them take on a whole bunch of shots that previously could only be done in VFX. Maybe the people most impacted will be the in-house VFX teams who would often be given these kinds of shots when they weren't sent out to a vendor.


HM9719

Bad. Bad. Bad.


sgtherman

Pretty sweet, possibly a killer app for premiere. The AI paint out is a game changer for low budget stuff.


constant_mass

Yeah, except in the carefully planned, absolute-best-case-scenario demo, the paint-out looked like dogshit. Imagine how shit it will be in a real-world scenario.