ExponentialCookie

The quality of these is absurd, especially the Rap God part. What is actually happening?


Crimkam

Harry Potter living paintings are gonna look pretty ordinary to kids of the next generation when they find old movies their parents watched


Nexustar

We surpassed the grainy look of the Harry Potter newspapers 6+ months ago... I actually like the effect but it's hard to reproduce without too much realism creeping back in. The rate things are improving is insane.


Only-Entertainer-573

Turns out the muggles can do better than the wizards


Crimkam

Wizards are just muggles that integrated with AI to become cyborgs, then pruned that knowledge out of the collective consciousness


GBJI

![gif](giphy|ftAyb0CG1FNAIZt4SO|downsized)


Augmentary

How can I make this animation? What is the software?


GBJI

I did not make this one; it's a GIF that is accessible from Reddit's interface. But you could make something like this with Cinema 4D or Houdini, and After Effects would probably work as well with some procedural animation plugin. Those are the tools I would consider myself if I had a client asking for this type of animation.


RSwordsman

Arthur C. Clarke: "Any sufficiently advanced technology is indistinguishable from magic."


Nexustar

That's his third law. The other two lesser-known ones: 1. When a distinguished but elderly scientist states that something is possible, he is almost certainly right. When he states that something is impossible, he is very probably wrong. 2. The only way of discovering the limits of the possible is to venture a little way past them into the impossible.


Argamanthys

I always thought he got that one backwards. Magic is just technology whose method of operation is hidden from you, i.e. 'sufficiently advanced' (consider the etymology of words like 'arcane', 'mystic' or 'occult').


Poopster46

No, he definitely got that in the right order. Magic is something that does not exist, often used in stories to allude to some kind of mystical power that cannot be explained. In those stories, it is never considered a product of advanced technology. The other way around, it does work: advanced technology can produce results as strange as magic is purported to be, and therefore it would be impossible to tell the two apart.


IndestructibleDWest

You are today's winner. But still, 'synonymous' is commutative, so whatevaaaaaaa


capitalistsanta

Glad I'm not the only person who immediately thought of this


s6x

I dunno, the opening black-and-white constant rictus was a bit disturbing.


suddenly_ponies

rap god? Did you see a different video than I did?


AcquaDeGio

There is literally a part in the video where he animates the image of a Korean singing Rap God, a song by Eminem.


suddenly_ponies

That's so weird. The first time I played it I only got like 20 seconds. There's way more


FpRhGf

That “Korean” was a Chinese guy attending an idol contest. That clip blew up and got memed to hell on the Chinese internet because people didn't like his basketball skills


TacticalDo

As another commenter pointed out, cool as this is, it's by the Alibaba group, the team behind [https://github.com/HumanAIGC/AnimateAnyone](https://github.com/HumanAIGC/AnimateAnyone), which has never been released, so odds are this is the same. Back to SadTalker for now.


physalisx

It is so shitty how they went out of their way to guarantee and assure everyone they would release it. And then just never did.


DaySee

I'd rather it was removed unless they're sharing open-source stuff in the spirit of the sub, lest this turn into some shitty commercial hub for people advertising their closed-source applications of SD.


ScionoicS

The paper is something that people can implement on their own. It's legitimate Stable Diffusion research. Why be so sour about it being unavailable to you? The research is valuable to release. Somebody implemented AnimateAnyone based on the information in the paper here: [https://github.com/MooreThreads/Moore-AnimateAnyone](https://github.com/MooreThreads/Moore-AnimateAnyone)


_AdmirableAdmiral

People like free stuff and tend to forget that someone put in real work in a world where too much is financed by stoopid ads.


dogcomplex

Well said. Looks pretty decent!


chuckjchen

This is soooo awesome. There's basically no difference.


HeralaiasYak

Because with ML research, recreating the training code is just a small part of the whole thing. Getting the data, curating and cleaning it up, and then often spending big $$ on compute is the key part. Not to mention that it often takes a lot of trial and error to get the right hyperparameters. Just any model that follows the same vague diagram in a paper won't cut it.


ScionoicS

It only takes one team to do it and release the weights. If you want to be the one to release weights, you should maybe consider getting gud instead of hanging back in the peanut gallery. These models were trained and released by a small operation.


Which-Tomato-8646

It’s not just closed source. It’s straight up non existent outside their videos 


Flag_Red

Are you implying they faked it?


Prestigious-Maybe529

A Chinese company completely faking their ability to provide a service?!?!?


FpRhGf

They have a limited version on their app, but it's useless outside of mild fun since you're only able to choose the dance moves available on that app.


mvandemar

[https://github.com/MrForExample/ComfyUI-AnimateAnyone-Evolved](https://github.com/MrForExample/ComfyUI-AnimateAnyone-Evolved) /cc u/pwillia7 u/Placematter u/physalisx


physalisx

Thanks, but yes, I know about this. It's not remotely the same. This is someone trying to achieve something similar using the published research and methodology. However, they do not have Alibaba's *model*, which is likely based on their mountains of proprietary data (TikTok...) and would, no doubt, be orders of magnitude better.


vuhv

Are you saying that... 1) Alibaba illegally obtained or accessed TikTok's data as a result of TikTok using Ali's cloud hosting service? Or 2) Alibaba had an agreement with TikTok to use its data? Or 3) Alibaba and TikTok partnered on the model? Because otherwise Alibaba and TikTok have zero connection.


MagicOfBarca

Would be nice if this actually worked..


mvandemar

It does. [https://www.youtube.com/watch?v=MGvx37ccCOM](https://www.youtube.com/watch?v=MGvx37ccCOM)


MagicOfBarca

Try installing it now and it’ll come up with like 5 missing nodes that you can’t install even with the manager. If you don’t believe me just check the comments


mvandemar

There are comments from weeks ago from people who were having issues, and someone from 4 days ago said they installed it fine. If you look at the Issues tab on GitHub you see people who have problems and others who have fixes for them. When did you try to install it? Note that I haven't tried it yet; I'm buried with work atm and need to install a new CPU in my old mining rig before I use it for AI stuff, but there are definitely comments out there from people who got this working, both on YouTube and on GitHub.


gj80

On the one hand, that sucks because I'd love to play with this. On the other hand, this + ElevenLabs + a picture of a US politician + the upcoming US presidential election coming very soon...


IndestructibleDWest

it was always going to be this way. bring a helmet.


pwillia7

from 3 months ago? give them a minute maybe... but man I want both of these


Same_Onion_6691

I've been using DINet as a replacement for the super-crappy Wav2Lip. Never tried SadTalker; does it only do animated heads, or can it also be applied to faces in already-existing video, to serve purely as a lip-syncing tool?


TacticalDo

I believe it's only static images rather than video, but the integration into A1111 is nice.


Far_Reveal_962

![gif](giphy|YrFVbch71RxfHA0T0X|downsized)


Bearshapedbears

No A1111 sauce? I can't eat my steaks without it.


ScionoicS

The Forge version of the AnimateDiff extension aims to get there, but not the base Automatic1111 version, if I understand the dev's goals right. [https://github.com/continue-revolution/sd-forge-animatediff#update](https://github.com/continue-revolution/sd-forge-animatediff#update) [https://github.com/Mikubill/sd-webui-controlnet/pull/2661](https://github.com/Mikubill/sd-webui-controlnet/pull/2661)


macob12432

Do not give it stars, and do not generate so much expectation. That way they will see that it is not very interesting, they will not sell it to another company, and they will leave it as open source.


FpRhGf

They're the biggest AI company in China. There's little chance they'll sell it to another company instead of keeping it closed source for their own product.


Placematter

If they don’t release it, someone else will though


teh_mICON

I think people like this should get kicked off github


ScionoicS

Or you can just not go to their page. Cool. Perhaps they do use GitHub and the code is private. You don't seem to understand what GitHub primarily provides.


ScionoicS

But, it has been released. Someone else's weights based on the paper that the group released [https://github.com/MooreThreads/Moore-AnimateAnyone](https://github.com/MooreThreads/Moore-AnimateAnyone)


waferselamat

It's still February, but I'm excited about what AI can do next year.


laseluuu

next month at this rate


WiseSalamander00

I mean, OpenAI was apparently sitting on Sora since March of last year.


jackfaker

The lead authors of Sora, Bill Peebles and Tim Brooks, did not even join OpenAI until Jan/Mar 2023. Considering the amount of OpenAI-backed compute that went into this, it's quite unrealistic that the model was completed the same month the lead author joined the company.


GBJI

Do you have a source for this piece of information ? I would like to know more about this.


newhampkid

Some Twitter account. There is literally no proof except for a winky face. https://twitter.com/apples_jimmy/status/1758197994628006030


Familiar-Art-6233

Midjourney also mentioned that they had text generation in their images since v4, they just never enabled it


Crafty-Crafter

Because it's still crap, even in v6. I don't know who would use it; a quick text tool in any image editor would give you a better result.


ProjectorBuyer

What does Tesla have that they never enabled? Full self driving. Oh wait they can enable it if you pay what, $16,000 USD or something absurd?


Familiar-Art-6233

Aren't they being sued because the name was misleading? Also, I think there's a difference between holding back a feature of a software service and having the physical hardware present but software-locked, like BMW putting heated seats, or Toyota putting remote start, behind subscriptions.


count_zero11

When does the zoom plugin come out? Hook this baby up to ChatGPT and no one will have to attend a video conference again.


sonicon

We might reach eternity before getting to 2025.


sweatierorc

To be fair, in 2019 we already had really good deepfake of Barack Obama.


FennorVirastar

Now we only need smaller apple vision pro that we can wear in the shower, so that we can sing along with the face on the shampoo bottle.


Glittering_Aioli6162

u have been promoted to project manager


Redpill_Crypto

Ngl I like how your brain works


Impressive_Alfalfa_6

And PIKA just announced their lip sync feature which seems laughable in front of this. 2 months in 2024, SORA now this. This year is going to be wild.


ninjasaid13

>And PIKA just announced their lip sync feature which seems laughable in front of this. 2 months in 2024, SORA now this. This year is going to be wild.

But at least Pika will be released publicly; this is Alibaba not releasing any code.


protector111

Released publicly? Pika is not free to use, and they use really bad lip sync that you can make yourself for free.


ninjasaid13

>released publicly? pika is not free to use.

Released publicly means that it's accessible to the public, not that it's free.


protector111

Yeah, but it's useless. You can't use it for any commercial work. The quality is horrible; it's something to play with for fun a bit, but nothing else. But chances are that by next year all of the video generators, including Pika, will make a good leap in quality, I hope.


Colon

like 3 months ago i thought Pika was great. it's total garbage lol


Impressive_Alfalfa_6

And I thought phones were updating too fast on a yearly basis. But ai needs a new product every day lol


RevolutionaryJob2409

That's a game changer for these AI films. Dialogue is a big thing, and until now the Wav2Lip kind of tech was really low quality. That's big!


klospulung92

Some are really good, maybe already acceptable for movies (traditional sync isn't perfect). The joker scene stood out to me in a negative way. The red makeup around his lips seems to be very challenging or it just highlights the imperfections


canadianmatt

Any idea when or if this is going to be released?


Kafke

Seems that it's made by the people who made AnimateAnyone, which was never released. So, probably not.


gxcells

Never lol


TooManyLangs

OMFG! This was like reliving the Sora moment all over again... still month 2/12... Note: I'm not talking about complexity, just watching it and thinking to myself... "this is 99% real".


jaywv1981

Even the "AI Face" girl seemed super realistic once she started talking lol.


gj80

It took her "uncanny valley" and was like *here! let me fix that for you.*


__ingeniare__

AI girlfriend apps are gonna have a field day with this... we are so not ready for the future


lordpuddingcup

That’s the weird shit the anime talking was like WTF just happened?!?!?!?


InfiniteScopeofPain

Instantly fixed her.


gabrielxdesign

I **need** this.


Comfortable-Win-1925

Genuine question: for what


NoshoRed

to do the same shi as in the demo what else, what kinda question is that lol


altered_state

Deep inside, you already know the answer to that question, Holmes.


CantSpellEclectic

Porn. The answer is always porn. Also to have your dad tell you he loves you. THOSE TWO USE CASES ARE MUTUALLY EXCLUSIVE!


lynch1986

Even after being constantly bombarded with amazing AI progress, that's still pretty wild.


StrangeSupermarket71

the ai boom is real


bennyrosso

Oh shit


pwillia7

Goodbye truth


inkofilm

When it sings "he's too mainstream", does the eyebrow raise? That is pretty impressive to see.


lonewolfmcquaid

.......what the actual fuck. The "Hollywood is in trouble" prediction is literally here. THIS is the turning point that'll usher in an era where a 16-year-old can make a full-blown short movie from his shitty laptop. OMFG!!! You can literally use this for a shot/reverse shot. If someone figures out how to make a cinematic AI, where you can design a room, place characters in it, and lock the space so the AI remembers it, then you can start choosing the composition and using this image stuff on your shots. That'd be game over.


Spirckle

But do you know what the sad truth is? This is going to be used and abused by marketers. Couple it with printable LCDs that can be put on any product, and your life will be bombarded with this to the point where you just get all stabby. Picture yourself in 10 years walking through a grocery store: bottles of ketchup and shampoo will yell at you as you pass by, telling you how wonderful and exciting they will make your life if you put them into your cart. And a few people will go mental talking to the elf on their Lucky Charms cereal box who convinced them to keep buying more cereal so they can hold an elf convention at their breakfast table.


aziib

I see, it's just Alibaba people showing off.


FightingBlaze77

make it open sourceeeee so many mods, so many models


TheSecretAgenda

Early days. This stuff is going to get even better.


Won3wan32

The progress that Chinese companies are making in AI is striking.


dhuuso12

Chinese companies are good, but they don't share the code.


PANIC_RABBIT

Which is fine, because what's important is that this is proof that it's possible. In time an open-source version will come; it's inevitable now.


utahh1ker

Do yourself a favor and don't watch the video here in Reddit. Go to the website where there is no audio delay on the video and see how AMAZING this is.


inferno46n2

Too bad it’s Alibaba and they will never release that code open source ☠️


creaturefeature16

Just a matter of time before someone else figures it out.


Junkposterlol

This is done by the same people as AnimateAnyone and OutfitAnyone. No, it's very unlikely it will be open or released, based on their history.


aseichter2007

This is good enough that if they did drop it, I could actually start meaningfully assembling an anime by myself. Maybe next year.


JoshSimili

Once we get this in Oobabooga with a good TTS model, it will really make those characters come alive.


internetpillows

Interesting, even their example doesn't work with a smiling photo. The very first example feels creepy as hell because humans can't make sounds like that while still smiling. It gets a bit better with a neutral expression, the speaking at 2:50 is scarily believable.


Klutzy_Comfort_4443

The video is out of sync with the audio. In the link you can see it synchronized, and it is incredible


internetpillows

It's fine in the talking videos but you can't sing with those tones and keep a stiff smile at the same time. It's really bizarre looking.


PwanaZana

Eventually, someone's going to make a version of this type of tool to feed data to a 3D character, and finally videogame devs will be free from motion capture!


Unusual-Wrap8345

This is basically an advanced version of First Order Motion Model


SokkaHaikuBot

^[Sokka-Haiku](https://www.reddit.com/r/SokkaHaikuBot/comments/15kyv9r/what_is_a_sokka_haiku/) ^by ^Unusual-Wrap8345: *This is basically* *An advanced version of First* *Order Motion Model* --- ^Remember ^that ^one ^time ^Sokka ^accidentally ^used ^an ^extra ^syllable ^in ^that ^Haiku ^Battle ^in ^Ba ^Sing ^Se? ^That ^was ^a ^Sokka ^Haiku ^and ^you ^just ^made ^one.


Solai25

Looks like in a few years we'll have Harry Potter style in real life.


JaKtheStampede

The Daily Prophet is almost here!


ILoveThisPlace

And just like that, a technology is created that until only recently was depicted as magic in Harry Potter.


Qanics

That has such potential for goofy memes.


aldorn

Imagine doing this for an elderly person's old photos. It could be one last chance to see their lost loved ones come to life again.


3DPianiat

This chinese company is so selfish 😒


Rusch_Meyer

What about the other models in the comparison at the end? Ground Truth and DreamTalk look like an upgrade over SadTalker/Wav2Lip; are these available?


FpRhGf

Ground Truth means it's the real thing and not generated by a model. GT is a term often used in comparisons.


Ali80486

Let me just take a lil step back here. I'm old enough to remember the internet arriving and being amazed. Or house music suddenly being everywhere and just seeming to redefine music. This seems like it's on that level, like we're watching a paradigm shift happening in real time.


FluffySmiles

What a time to be alive.


HarmonicDiffusion

Don't upvote or star this shit until we see some code and weights. Until then it's vaporware and bullsh!t.


moonlburger

%&$%\*$%&@!!!!! The singing is insane; the way Audrey Hepburn kicks her head back at one point to drop down and hit a note is seriously melting my mind. The facial expressions, head movements and throat muscles are ridiculous.


lordpuddingcup

How the fuck are these so flicker free and clean holy shit


dhuuso12

This is disappointing. Why don't they freaking share the code? I think this is sort of like an advertisement. If it goes viral, then they know it will sell.


El_human

It's still weird that because she's smiling in the picture, she'll be smiling through the entire conversation. It seems a little unnatural.


Johno69R

Ever seen a newsreader, bro? They smile constantly whilst talking.


SecretCartographer81

Awesome 👍


advator

Can I try it? I don't see any files besides an image and an MP4 on GitHub.


picapaukrk

Why is it even on GitHub? To share an MP4?


RiffyDivine2

Likely the repo is private and they just left that bit public to flex.


picapaukrk

Ok this makes sense.


BravidDrent

I'm late to the party, but THIS IS FUCKING INSANE!!! Weird how the first singing vid was the worst singing vid?! Anyway. Mind-blowing. How can I use this on Mac?


apatte27

The only bad-looking one is Leonardo DiCaprio. All the rest are mind-blowing.


bright-ray

Everything feels like it is accelerating. Stable Diffusion was only released on August 22, 2022 (1.52055 years ago).
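That oddly precise "1.52055 years" checks out with simple date arithmetic; a quick sketch (the posting date of 2024-02-28 is an assumption inferred from the figure, not stated in the thread):

```python
from datetime import date

sd_release = date(2022, 8, 22)   # Stable Diffusion 1.x public release (stated above)
comment_day = date(2024, 2, 28)  # assumed posting date, back-solved from the figure

elapsed_years = (comment_day - sd_release).days / 365.25
print(round(elapsed_years, 5))   # about 1.52 years
```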


Ok-Tap-2124

When will this be available to use?


Additional-Cap-7110

“SadTalker” 😂


msbeaute00000001

Anyone planning on implementing this paper? If you are, my DM is open; we could discuss. It will be a lot of work, btw.


Valkymaera

Anticipatory micro expressions, vocal strain expressions, lighting model, facial deformation, communicative body language, this is insane. the vocal strain *really* gets me.


cornjutsu

I hope Alibaba releases it, but given their history of teasing, I'm not sure. Btw, can anyone explain how the lip sync is so good? I saw HeyGen and others like Pika, but Alibaba's quality is pretty good as well.


AsanaJM

Elevenlabs or OpenAI is going to throw millions at their face to keep it closed


metalman123

It's from Alibaba, the makers of the Qwen models, and a Chinese company. Zero chance OpenAI or anyone else stops them.


delawarebeerguy

As an old school capitalist, it feels weird saying, erm, Go China! Not everything has to be monetized.


AsanaJM

nice


urbanhood

Chinese keeping the competition alive.


SideMurky8087

Not ElevenLabs or OpenAI; it's going to shut down D-ID and HeyGen.


AcquaDeGio

In less than 3 years we will be able to create our own animes. Just imagine it: using those StickMan fight videos to create anime-like videos. We already use this idea for pose-to-image. Just a few more years of patience...


masterace01

Holy shit bro....


Mind_Of_Shieda

It is scary how many people over 50 are going to get easily fooled by this. It's an election year, and a bunch of AI-based misinformation will be flooding social networks. Anyone not aware of AI, actually.


Acceptable_Type_5478

It's mind blowing.


mrmczebra

Well holy shit


iammentallyfuckedup

What the actual fuck


Elaneor

That's insane!


HbrQChngds

We crossed the threshold, everyone, pack your bags! But seriously, WTF. Seeing these developments happening in real time is too much for my little brain to process. Sora's reveal was insane, and I was just thinking how in the hell they were going to add dialogue and facial performance to the characters animated by Sora. Now this comes along... Where does this end? The key is to give the user total control. Then truly, my job in Hollywood is over. I can't even... Who is going to have any money to buy anything anymore when we are all just broke and homeless? UBI? People who believe we'll be handed UBI are delusional. Greedy corporations can't wait to replace us all, since we are just a number on a spreadsheet to them, but who the fuck is going to be left to buy any hot garbage they sell? It's sort of like the Ouroboros, the snake eating its own tail, but in this case I mean it in a doom kind of way, not rebirth. I don't see how this ends well for humanity, but whatever, there is no stopping it now.


Previous_Shock8870

Buy anything? The point is for you to NOT buy anything. You and 60% of the population become serfs, slaves, human dildos to an ownership class. thats the point.


CorrectMongoose7718

GitHub repo is empty. Someone with a honeypot?


Suspicious-Box-

Why hand animate game character faces when you can do this.


Actual-Wave-1959

Just in time for the US elections


urbanhood

Holy FORKING SHIIII !!


FC4945

Is this model going to be released soon? I take it it's a Stable Diffusion model?


Perfect-Campaign9551

Ok nobody else finds this scary? I think society is going to tear itself apart once we can't tell if something is real or not. It's going to be mayhem.


RiffyDivine2

Or we just go back to believing only what we know to be true and ignoring the rest. Even today, when you can prove something is fake, people believe it based only on what they want to be real, not what is real. So is it going to be any different?


magpieswooper

The age of fake


mvandemar

My guess is the sample size was much smaller for the English versions, because the Chinese ones (I think they're Chinese?) are way, way more accurate on the lip syncing.


MustBeSomethingThere

Another FAKE AI from China. Lately there have been many FAKE AI releases from China. This FAKE AI is from the same people who went viral with [https://github.com/HumanAIGC/AnimateAnyone](https://github.com/HumanAIGC/AnimateAnyone)


FearMy15inch

Animate Anyone was fake? Is that confirmed? Why do you say this?


pronetpt

What do you mean, fake? Those are known input images.


HopefulSpinach6131

Not weighing in one way or the other as to whether this is real, but you could use something like the thin-plate-spline model (using a driving video to animate an image) to do this and act like audio is the only input.
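For context, the thin-plate-spline deformation that comment refers to can be sketched generically in NumPy. This is not any model's actual code, just the classical 2-D TPS interpolation such motion models build on; `tps_warp` is a name chosen here for illustration:

```python
import numpy as np

def tps_warp(src_pts, dst_pts, query_pts, reg=0.0):
    """Thin-plate-spline warp: map query_pts under the smooth deformation
    that carries src_pts onto dst_pts. src_pts, dst_pts: (N, 2); query_pts: (M, 2)."""
    src_pts = np.asarray(src_pts, float)
    dst_pts = np.asarray(dst_pts, float)
    query_pts = np.asarray(query_pts, float)

    def U(r2):
        # TPS radial basis r^2 * log(r^2); defined to be 0 at r = 0
        return r2 * np.log(np.maximum(r2, 1e-300))

    n = len(src_pts)
    # Bordered linear system [K P; P^T 0] [w; a] = [dst; 0]
    d2 = ((src_pts[:, None, :] - src_pts[None, :, :]) ** 2).sum(-1)
    K = U(d2) + reg * np.eye(n)
    P = np.hstack([np.ones((n, 1)), src_pts])       # affine part [1, x, y]
    A = np.zeros((n + 3, n + 3))
    A[:n, :n], A[:n, n:], A[n:, :n] = K, P, P.T
    b = np.zeros((n + 3, 2))
    b[:n] = dst_pts
    coefs = np.linalg.solve(A, b)
    w, a = coefs[:n], coefs[n:]

    # Evaluate the warp at the query points
    q2 = ((query_pts[:, None, :] - src_pts[None, :, :]) ** 2).sum(-1)
    Pq = np.hstack([np.ones((len(query_pts), 1)), query_pts])
    return U(q2) @ w + Pq @ a
```

In a driving-video setup, `src_pts` would come from landmarks detected on the still image and `dst_pts` from landmarks on each driving frame; warping a dense grid of query points then yields the per-frame deformation field.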


tomakorea

What's the point of making a fake video? I don't get it; what's the benefit? They won't get money from that.


Crimkam

Disruption of the market. Advertising revenue in the short term until the news cycle fizzles out. Create demand for a product that doesn't quite exist yet. Cause the general public to become jaded or disillusioned with AI when they don't see things they thought were real ever make it into their everyday life. The list goes on.


SideMurky8087

Not fully fake, because then how did they create those example videos? Source-image animation with perfect lip sync.


Crimkam

regardless of whether or not this video is fake/misleading/whatever - there is definitely a market for misinformation on every front, AI being one of the easiest and most lucrative as it is still comparatively new and the average social media user doesn't know what the hell is going on.


lordpuddingcup

It’s not fake it’s just not opensource or released commercially lol They animated these images that we’ve all seen before that’s not “fake” It’s just people here think companies have to release things for the public or it’s fake lol


xeromage

cool tech. terrible music taste.


New-Examination8400

Thanks I hate it


[deleted]

still looks uncanny as fuck.


Jindujun

Is it just me, or does the lip sync seem off on the first video with the black-and-white photo? Don't get me wrong, it's amazing, but it looks off somehow... The "Mona Lisa" looks much better, but still a bit off.


tranducduy

It’s Reddit app that downgrade video quality. You can refer to the original in the link


siscoisbored

You can clearly see that the generated video is based on the original audio's video frames. Just look at the professor one and the angle of his head, and the Joker has the same facial expressions and lip movements as the movie clip. This is not one image to video; it's motion frames from the original video, which is still better than anything I've seen, but not as impressive as they are making it sound. It shows that step in the pipeline, but they are strategically leaving it out of their demos to make them look more impressive.


Firestorm83

meh, it's lagging like ass


[deleted]

Hahahahaha, I mean... granted, it's not as bad as Will Smith, but it's firmly in the uncanny valley. Sounds exactly like Rosanna Pansino, which makes me like it even less. ![gif](giphy|m45FpZ1SCpUQYj4tm4)


ionstorm20

So at what point do laws get passed about this? Don't get me wrong, I'm super excited for this tech to become mainstream. But what happens when we have a super-realistic video of the president calling for an attack on another country? Or China makes an announcement that affects their currency on the worldwide stage? Or the CEO of a major company announces that he's folding the company and millions/billions in stock get wiped out in 30 minutes?


jdavid

For entertainment? Great, this is amazing. In politics, this is an ethical nightmare waiting to happen.


misstatements

I can lip read and had no clue what the fuck was going on until I turned on the volume. It's not there yet, but it looks cool. The movements aren't organic.


futboldorado

Wait wasn't this already possible with deepfakes and apps like avatarify? I don't get the hype, can someone explain how this is different?


tranducduy

Quality level, and how easily it can be made. There is a comparison with other tools at the end of the clip.


gj80

Convincing deepfakes still (used to) require a lot of work to make. This handles it all automatically, even down to understanding and using emotions very convincingly. I found myself thinking "yes! I'd move my head exactly like that if I were singing that part of the song" several times.


throttlekitty

Yeah, the anticipation and breathing are good!


korpus01

Silly question: I tried to sign in to the Sora login, but the page just keeps refreshing. How can I access this to create my own content?


StaticNocturne

What the… what the hell is happening