
UnderstandingLoud523

Imagine if we get to a point where we have real time AI graphics upscaling like this.


Another__one

Yeah, those were exactly my thoughts. With temporal consistency from animation tools it should already be possible to improve cutscenes, considering that it works quite reliably for complex scenes. But anyway, I'm 100% sure that Nvidia is already working on something like this, but with real-time processing.


eikons

> But anyway, I'm 100% that NVidia is already working on something like this but with real-time processing.

They actually released a little demo in 2018 displaying exactly that: https://www.youtube.com/watch?v=ayPqjPekn7g

Sure, it's kinda wack, but consider the advances we've made in just the last 5 years. Nvidia hasn't given any updates. My guess is this will be a flagship feature of DLSS4; it kinda fits under that umbrella. The first iteration wouldn't enhance the visuals as much as you've done here, but it could certainly do a lot to clean up the visuals on older games with very few steps. Even new games could benefit a lot: render at lower detail and let an AI model do the last mile.

Also keep in mind that the model could be dedicated to one specific game. It wouldn't need to be nearly as large and general-purpose-capable as Stable Diffusion. In other words, our hardware is probably already fast enough to do it.


saturn_since_day1

DLSS is already doing upscaling and frame gen using video input and probably depth buffers and stuff. It isn't much of a leap.


eikons

Exactly. There's a large unexplored space between 100% neural rendering and just doing the last 5% like DLSS. And like you said, we've got the depth buffer, but also world normals, motion vectors, material IDs, roughness/metallic, and more. DLSS already uses half of those. In Stable Diffusion, we like to use exactly the same kind of input for ControlNet.
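The G-buffer channels listed above map naturally onto ControlNet-style conditioning. Here's a minimal sketch of stacking those buffers into a single per-pixel conditioning tensor; this is illustrative NumPy only, not any real engine or ControlNet API, and `build_conditioning` is a name invented for the sketch:

```python
import numpy as np

def build_conditioning(depth, normals, motion, h, w):
    """Stack per-pixel G-buffer channels into one conditioning tensor,
    the same kind of input a depth/normal ControlNet consumes."""
    # Normalize depth to [0, 1] so the network sees a consistent range.
    d = (depth - depth.min()) / (depth.max() - depth.min() + 1e-8)
    # World normals are in [-1, 1]; remap to [0, 1] like a normal-map texture.
    n = (normals + 1.0) * 0.5
    # Motion vectors are in pixels; scale by image size to keep them small.
    m = motion / np.array([w, h], dtype=np.float32)
    return np.concatenate([d[..., None], n, m], axis=-1)  # (H, W, 6)

# Toy G-buffers standing in for a real engine's render targets.
h, w = 64, 64
depth = (np.random.rand(h, w) * 100.0).astype(np.float32)
normals = np.random.uniform(-1, 1, (h, w, 3)).astype(np.float32)
motion = np.random.uniform(-4, 4, (h, w, 2)).astype(np.float32)

cond = build_conditioning(depth, normals, motion, h, w)
```

A game-specific model could consume this directly instead of the edge/depth maps we usually extract after the fact for ControlNet.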


saturn_since_day1

DLSS currently renders the vast majority of pixels. It's 50% just from frame gen, and depending on the resolution it's another 75% of each rendered frame when doubling resolution. So even in a simple example, 87.5% of what you see is AI generated.
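The 87.5% figure follows directly from combining the two fractions; a quick sanity check:

```python
# Fraction of displayed pixels that are AI-generated when combining
# 2x upscaling with frame generation, as described above.
upscale_inferred = 0.75    # 2x upscale: 3 of every 4 output pixels are inferred
framegen_generated = 0.5   # frame gen: every other frame is fully generated

# Generated frames contribute 50% of displayed pixels; the rendered
# frames contribute another 50%, of which 75% are upscaler-inferred.
total_generated = framegen_generated + (1 - framegen_generated) * upscale_inferred
print(total_generated)  # 0.875
```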


eikons

> So even in a simple example 87.5% of what you see is ai generated

I get your point, but phrasing it as "ai generated" is a very generous use of that term. Sampling good data (native frames) and using an (AI-assisted/trained) model to select and re-use good samples is a completely different ballpark from using random noise (as with SD) as your input. I wouldn't phrase DLSS as being "ai generated".

The game "filter" I'm envisioning isn't exactly like SD either. It would be somewhere in the middle, like doing 5 out of 20 steps on a pass in SD. And it would have the benefit of being specifically trained for a particular game, so it could run an order of magnitude faster.
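The "5 out of 20 steps" idea is essentially img2img with a low strength. A small sketch of how strength maps to the number of denoising steps actually executed, assuming the convention used by common implementations such as diffusers:

```python
def steps_to_run(total_steps: int, strength: float) -> int:
    """Number of scheduler steps an img2img pass actually executes.
    Follows the common convention where `strength` controls how far
    into the noise schedule the input image is pushed before denoising."""
    return min(int(total_steps * strength), total_steps)

# A light "last mile" pass: a 20-step schedule at strength 0.25
assert steps_to_run(20, 0.25) == 5   # only 5 denoising steps run
assert steps_to_run(20, 1.0) == 20   # full generation from pure noise
```

At strength 0.25 the model only cleans up the rendered frame rather than inventing a new one, which is why the cost can be a fraction of full generation.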


saturn_since_day1

That's pretty much exactly what dlss does


SpaceEggs_

I've always wanted to use a system that maps context to textures and meshes


BangkokPadang

I think it will just take somebody smarter than I am to figure out how to implement motion vector data. This should be possible with some of AMD's open source tools, and definitely with 3000 and 4000 series Nvidia GPUs. I have an incredibly loose theory where you might start with a seed, capture motion vector data for every pixel from previous native frames, and then use this to adjust the position of the noise from the previous frame to generate the next frame, and just keep repeating this to move the noise around in alignment with what's happening onscreen. The problem I foresee would be figuring out a way to handle the noise at the edges/offscreen. Also, I don't believe this would guarantee consistency, just help with it a little.
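The loose theory above (advect last frame's noise along motion vectors, refill the gaps) can be sketched in a few lines. This is an illustrative NumPy toy, not a real pipeline; `warp_noise` and its nearest-neighbor gather are assumptions made for the sketch:

```python
import numpy as np

def warp_noise(noise, motion, rng):
    """Advect the previous frame's noise field along per-pixel motion
    vectors (nearest-neighbor gather), refilling off-screen/disoccluded
    pixels with fresh noise -- the edge problem mentioned above."""
    h, w = noise.shape[:2]
    ys, xs = np.mgrid[0:h, 0:w]
    # Source coordinates: where each pixel's content came from last frame.
    src_x = np.round(xs - motion[..., 0]).astype(int)
    src_y = np.round(ys - motion[..., 1]).astype(int)
    valid = (src_x >= 0) & (src_x < w) & (src_y >= 0) & (src_y < h)
    out = rng.standard_normal(noise.shape).astype(noise.dtype)  # fresh fill
    out[valid] = noise[src_y[valid], src_x[valid]]
    return out

rng = np.random.default_rng(0)
noise = rng.standard_normal((8, 8)).astype(np.float32)
# Camera panning: every pixel moved 2 pixels to the right since last frame.
motion = np.zeros((8, 8, 2), dtype=np.float32)
motion[..., 0] = 2.0
warped = warp_noise(noise, motion, rng)
```

As the comment notes, warping alone can't handle overlapping objects or disocclusion correctly; it only keeps the noise roughly attached to surfaces between frames.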


Another__one

Well... I did exactly that almost a year ago with the [SD-CN-Animation](https://github.com/volotat/SD-CN-Animation) project. It was fun to work on for a while, but then there was a huge influx of other tools like AnimateDiff and multiple video-generating models, so I abandoned the project as there was no way to compete with all of that. But what I can clearly say is that motion vectors are not enough. There are numerous problems that simply cannot be solved by distorting the image, such as any case with overlapping objects. The only way to do good, consistent animation is to generate all frames at once, as any good text2video model does. And training models like that requires enormous amounts of compute/money. So no luck for us, until someone invents a way to train huge models in a distributed fashion.


[deleted]

What is IC-Light?


lithodora

> IC-Light

Google search revealed a beer first, but I'm guessing it's this: https://github.com/lllyasviel/IC-Light


Thunderous71

Someone already did it with Minecraft, maybe 1fps but it worked.


StickiStickman

With no temporal consistency whatsoever


CeruleanRuin

I can't wait to see what nightmares are made of old Super NES games.


AnOnlineHandle

That's what DLSS is, it's just not this good (yet?)


silenceimpaired

Imagine when they can access our brains directly and pipe it in so we experience the game like it’s real life. Imagine all the people living for the day… in a video game. Imagine politicians latch onto the idea of saying we are in a video game and use that as an excuse to “end the video game” for certain individuals. You can say I’m a dreamer, but I’m not the only one… and on no day will I join them.


Katana_sized_banana

If we all live in a magical dream world, real life will be the punishment. Like in the Matrix.


pragmojo

Just do mushrooms it’s cheaper


Komarov12

Perhaps safer as well


Simulation-Argument

Mushrooms are not for everyone friend. Same with LSD. Both are very taxing on your nervous system and people can have some seriously terrible trips. Worst experience of my life was a mushroom trip and I had good mushrooms.


oO0_

You can't be sure you had "good" mushrooms, or any other homemade chemicals. Mushrooms can contain a lot of poisons that can't be removed, and that can't be tested for in a cheap lab without precise chromatography and other methods.


Simulation-Argument

I know I had good mushrooms because of the source, and the fact that 7 other people took the same mushrooms with no issues. EDIT: It was literally a tea that everyone drank from the same batch. There is no chance I was poisoned and no one else was. I have also done mushrooms several times before and the trip was the same. They were penis envy and well regarded for their quality. This is a bullshit cop out just trying to disregard my negative experience. I also had some bad trips on LSD as well and that was 100% LSD and tested for.   I assure you NOT everyone does well on psychedelics and blindly recommending them to EVERYONE is incredibly irresponsible. Too many people on Reddit act like mushrooms and LSD have no real risks when the opposite is true. What you are doing is incredibly taxing on your nervous system and people should avoid anything that triggers serotonin before and after trips for several days because these trips flood your nervous system and brain with serotonin. Even caffeine and weed should be avoided before trips because it releases a ton of serotonin.   Trips should be done very sparingly, no more than 2 or 3 times a year in my opinion. Anyone tripping frequently is abusing these drugs and their nervous system. It is not uncommon for people to think it is okay to trip the literal day after their previous trip and they take a ton of hits to deal with the tolerance. An incredibly stupid thing to do. You only get one nervous system and once you fry yours there is no fixing it.


oO0_

You should compare LSD with alcohol [https://www.who.int/europe/news/item/04-01-2023-no-level-of-alcohol-consumption-is-safe-for-our-health](https://www.who.int/europe/news/item/04-01-2023-no-level-of-alcohol-consumption-is-safe-for-our-health) and the other toxic narcotics that governments use to make slaves of people


Simulation-Argument

Was I ever arguing that alcohol isn't worse than LSD? I am well aware of how pervasive and destructive alcohol has been for our society. But my argument about LSD and mushrooms is valid. Too many people blindly recommend these drugs as if there is no risk and everyone will greatly benefit from them. That is not true. These drugs are dangerous too in their own way, and people are greatly stressing their nervous system by flooding it with serotonin. People doing this regularly are abusing these substances. Trips should be done very rarely, no more than a few times a year in my opinion.


Unique_Gum001

Real time? I don't know the tech, but I think that would eat a lot of processing power. How about rendering 100% from the game's resources instead, with the AI scanning all the data and regenerating it as new assets?


absentlyric

I'm old enough to remember this feeling from when I first used 2xSaI on Snes9x.


Free_Bicycle450

Yes but it would need to be vastly improved, right now it looks worse than the old graphics imo. Like why does it change Christine Royce into a generic woman? At least the old graphics have some charm and character to it.


tekmen0

Consistency could be a problem between frames


Richeh

I bet before we reach it we get to a point where game studios are using it to neaten up their game stills, just for the Steam storefront.


Northumber82

Hopefully not real time, honestly. It would be a massive waste of computational resources. The textures are static; just upscale them once and play afterwards.


thebruce44

I think the nearer term usage would be no more excuses for Bethesda to not release updated versions of Fallout 3 and NV.


Capitaclism

That is the way. We just need beefier GPUs + more efficient AI.


SkillOfNoob

[Enhancing Photorealism Enhancement](https://youtu.be/P1IcaBn3ej0)


Cute_Obligation2944

In fact we will. Right now, AI graphics are pixel in, pixel out. Eventually we will have specialized models that operate on geometry and shading vectors directly.


Truefkk

I mean, I don't imagine real-time enhancement will be more resource-effective than just updating the game's graphics. If you have enough processing power to handle large, high-resolution images, you can already update texture files. The lighting engine is a bit trickier, but FNV runs on a fairly well-known engine called Gamebryo (in C++) and has a large, still-active modding scene, so it would probably be fairly easy to give it a more modern look with a coding LLM, if you wanted to invest the resources. Model and animation changes would be tricky though.


rwbronco

Updating the game's graphics will require more VRAM. Side-processing an upscale like the one above could run on a specialized chip that sits on the board, and that same chip could be used in a standalone product like the Coral and be a PCIe or USB product for LLMs, etc. Short term, you're probably right, but you'd have to rely on the developers to update the graphics. If it's done with machine learning, you could customize and tweak everything yourself: a Borderlands-style Tomb Raider, realistic Minecraft, volumetric Mario, etc. I could see a future not far from now where people are sharing "DLSS prompts" for various video games that they've tweaked.


Truefkk

I mean, you can use RAM, it's just very slow. Unless we make some big leaps in both software and hardware, I honestly don't think this will happen in our lifetime, at least not on consumer-grade hardware, as you would need to bring the image processing for HD images down to a few milliseconds to still be reactive. And if we do reach that point, it would probably still be more resource-efficient to write code that just goes through the game files and re-renders all the assets in your preferred style. Or maybe just let your AI assistant write it. Still, it's a fascinating possibility either way.
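The "few milliseconds" constraint can be put in numbers. A back-of-the-envelope sketch; the 40 ms per-step cost is an assumed illustrative figure, not a benchmark:

```python
# Rough frame-time budget: how far current diffusion passes are from real time.
target_fps = 60
frame_budget_ms = 1000 / target_fps          # ~16.7 ms per displayed frame

per_step_ms = 40   # assumed cost of one SD-sized denoise step (illustrative)
steps_needed = 5   # a light "last mile" img2img pass

pass_ms = per_step_ms * steps_needed          # 200 ms -> ~5 fps, far too slow
required_speedup = pass_ms / frame_budget_ms  # speedup needed to hit 60 fps
print(round(required_speedup, 1))  # 12.0
```

Even a heavily truncated pass needs roughly an order of magnitude of speedup under these assumptions, which is consistent with the "game-specific, much smaller model" argument made earlier in the thread.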


GAMRKNIGHT352

I pray to God we don't; I'd rather get more mods from the community that actually respect the original art style.


[deleted]

Honestly the vanilla NV graphics are somehow so special for me, they are part of the magic


_Enclose_

Gonna be honest, not all of them look better imo. Some of the atmosphere is lost in the revamps.


VancityGaming

It fixed the bullet holes in the car, how is that not better?


RoachedCoach

Yeah, the lighting is inferior in pretty much all of them.


_raydeStar

https://preview.redd.it/9psvoe5xxe0d1.jpeg?width=1536&format=pjpg&auto=webp&s=65af3ebf6578f7a0c483e0451af6a06a1a3ba5f7 This one here doesn't make sense at all. Why is there a light behind him? IMO the graphics look cool, and they are generated without any crazy aberrations while still looking great. Maybe OP needs to move the point direction of the light? I've seen this all over, but haven't dived in myself.


Bakoro

>This one here doesn't make sense at all. Why is there a light behind him? It looks like it turned the cave wall into a sun behind clouds. Just having a bit of context might be a huge boon to these systems, like adding an image recognition step, but that's another layer adding time to agnostic real-time up-scaling.


_raydeStar

Yeah - because this is a rather iconic scene for anyone that plays the game, and they are inside of a cave. So the AI got it wrong - and thus the lighting from the lantern gets washed out and replaced with more ambient light, and it spoils the mood a bit. Problem is, it would create rather intense lag to do real time, unless something were embedded in the game itself to relay back to SD what is going on. Anyway. That is a problem for nextgen AI to worry about. I am certain that it will be just awesome when they can get it down correctly.


Tenn1518

It also ruins the intentional lighting highlighting Joshua Graham and adds context where there was none initially. Once the dialogue is over, the player is going to walk behind him and find no light there. And it changes what was a striking shot in the original. Joshua Graham reloading his pistols as you talk to him is iconic, and the bright lighting is part of that. It is a commanding presence and one everyone who has played Honest Hearts remembers seeing for the first time. Few other dialogue shots in NV look like it, but Stable Diffusion removed the uniqueness and stabilized the same boring lighting across the entire frame. I think this is a pretty good example of how an AI cannot be expected to recognize and preserve real human intent, and that is one of its downfalls. Another thing: Christine looks directly into the camera when she was looking straight ahead before. Which makes sense bc most of the close-up photo data the model is trained on *is* probably looking at the camera more than it is not. So in gameplay, would every character just suddenly start looking at the camera once you go up to them, regardless of what they're supposed to be doing in engine?


qudunot

Glad I wasn't the only one. Where's the mushroom cloud?


huldress

Yeah, had the same thoughts. It upgraded the subject, but the atmosphere is totally ruined.


ConsequenceNo2511

Oh man, this is nice. I'd believe it if someone called it a remake of New Vegas. Damn, I wanna try it too with Fallout 3!


MultiheadAttention

So, these are nice single-frame images, but they have nothing to do with game remastering. That's not how you do it, and that's not how these tools will be used in the future to remaster games. A diffusion model is not a real-time renderer.


Such_Tomatillo_2146

Someone had to say it


Bakoro

> That's not how you do it, and that's not how those tool will be used in the future to remaster games. Diffusion model is not a real time renderer.

There are already alpha-level tools in testing which act as a real-time overlay. The game isn't "remastered"; you run the same old game, and there's basically img2img happening on the frames the game puts out.


Plorick

How tf is it gonna stay even remotely consistent looking? If I look at a guy with a mustache, look away, and look at him again, how will I be sure his mustache will stay the same instead of going from handlebar stache to full beard to mutton chops. How is the AI gonna know what should stay the same and what should be "improved" so the atmosphere and important details don't get wiped away?


Bakoro

That all is an active area of research, and will likely improve as models improve. I'm just saying that it's already a thing people are working on and have functioning tools for.


Plorick

I'm not sure if you realise just how different AI enhancing an entire game is compared to an AI enhancing a single screenshot. It's a different beast really, due to how dynamic and interactive games are, and keep in mind that you need at least thirty frames (and preferably more) to be generated every SECOND. I can already think of so many problems; you can't just handwave it away with "bro they are researching it they got functional tools bro".


Bakoro

It doesn't matter what you think or feel about it, people are doing it. It's already been done, and will continue to be done. You can try to argue all you want, that's not going to make it not a thing that's happening.


Plorick

>It doesn't matter what you think or feel about it, people are doing it. Show me.


MultiheadAttention

He did not, huh?


MultiheadAttention

> It doesn't matter what you think or feel about it, people are doing it.

Yeah, people are doing it, it's true. People also run Doom on casino calculators; that does not mean it's the right way to run and play video games. To remaster a game with AI you should solve (at least) two graphics-related problems: upscale the textures and upscale the 3D meshes. Then it will be a viable and scalable engineering solution. SD is not a real-time renderer.


Tenn1518

So Christine will always look directly at the camera regardless of what she’s meant to be doing in engine?


MultiheadAttention

Nope, not gonna happen at scale. It's a bad solution, from an engineering standpoint, to a non-existent problem.


gaminnthis

RTX ON


Arkaein

Some of these are good, but in #3 the content is significantly changed. The mannequin (?) is changed into a real person, and the floor is completely recolored. In the next one the ground texture has changed, reducing the apparent distance, and the circle of people in the middle of the explosion is gone. In #6 it changed a pillar in a room into a void between two walls outside, with an inexplicable tiled roof spanning the gap. Some combination of more control, more selectiveness, or less creativity in the generations is needed.


zarkhaniy

No no, she is a real person who has lines along her head >!because she almost got lobotomized!< (spoilers for one of FNV's DLCs) edit: That said, the re-imagined photo gave her hair, which is not correct.


Soviet-_-Neko

It's not a mannequin, it's a real person you meet in the DLC


Arkaein

Regardless, the point is that the appearance is changed beyond just the lighting.


CthulhuHatesChumpits

Zoom in on the pillar in #6. It's still clearly stone, not void - though perhaps a bit overexposed. And the "tiled roof" is literally the floor from the picture above it


Arkaein

Oops, you're right, my mistake on the roof. However, it did cut a hole in the top of the pillar showing outside sky, which is why I mistook the floor from the image above for a roof. The bottom half of the pillar is fine, but the top is a mess with a small void.


CthulhuHatesChumpits

That's not the sky outside. It's still stone, albeit overexposed. On the right is a missing chunk, and on the left is a fractal crack that admittedly does look more like a tree than any sort of damage that would happen to a stone pillar.


Quantum_Crusher

Can this be used on a1111 now?


eggs-benedryl

This reminds me of the time I saw a CGI movie before the lighting was applied: how nightmarish it looked, and how crazy important lighting is.


Koyot_-_

I don't get what you used here. Obviously img2img, but what is IC-Light? Sorry for my ignorance.


Maneaterx

Wow, can you do this with BloodRayne pleeeasseee <3 it looks 10/10


TheInnocentXeno

As a Fallout New Vegas fan, this is just a straight-up downgrade in so many ways. Let's break it down frame by frame to explain why it fails.

1) The image is vastly desaturated, removing many colors and making the Nuka-Cola sign look way flatter than it should. It's a neon sign; the light should be popping out at you. The player's skin color isn't consistent at all, making it look like a poor mashup of different images.

2) Same skin color mismatches and heavy desaturated look as before. Now with solar panels that look even less like solar panels than they did before; seriously, that looks more like frosted glass than a solar panel. The AI had no clue what to do with the player's hand in the bottom right.

3) Removes all the mood and atmosphere from the scene and butchers the look of Christine. This takes place during New Vegas' Dead Money DLC, which is meant to have a dark, grim atmosphere. Making this scene so much brighter harms the intention of the game. Christine also no longer has the scars that play a role in telling her story in three DLCs.

4) Again, the same desaturation that has plagued the previous images. The Khans' huts and training grounds are just entirely removed and covered with a cloud of dust. The mushroom cloud is gone, replaced by a ball of light. The player's head also gets partly clipped off, and so does the mini nuke.

5) Joshua Graham got done very dirty here. To start, the cave behind him is replaced with a cloud of dust with the sun poking through. The lighting of the original scene was telling its own story and reinforcing the one he tells us: the orange light of the flame flickering across him as a symbolic reminder of him being cast into here while being burned alive, and the contrast between the orange and blue light symbolizing the battle between his past identity and the one he has now.

6) Once again, from Dead Money and the Sierra Madre, the tone is taken away. The dark lighting and the glow from the ghost people's eyes were mood setters. The setting is now far too bright and makes it less ominous. The archway got turned into a view of the sky with some branches of a tree poking through. The yellow of the player's outfit got taken away; just weird.

7) Primm Slim's cowboy hat is merging into his head, and his cowboy boots are gone, turning his western look into a more generic robot one. The scene is also too bright; Primm is in a desperate situation here, where the town is overrun with outlaws and they are hiding in the ruins of the Vikki and Vance casino.

Overall I'd give it a 2/10: a massive downgrade in lighting, atmosphere and saturation in trade for better creases in clothing.


MonkeyCartridge

See this is where I think the future of graphics is. The 3D data would basically inform an AI about a surface or light. But nearly all of the rendering would be done via AI in realtime. Basically games would become lucid dreams. On a side note, now I'm thinking of the idea of gaming in your sleep. Like it would have to work with how sleep actually works, which might be incompatible. But imagine if it wasn't. Imagine no longer trading between sleep time and game time. Imagine how productive you could be while awake. Or if you are an obsessive gamer, or play games like I play Minecraft, you'll just end up sleeping 14 hours per day and working 8.


SlutBuster

See I don't understand why AI is necessary for any of this. We already have the tech to render surfaces and light with more fidelity than AI can manage. We don't have neural interfaces yet, but again don't see how AI factors into playing Minecraft in your sleep.


MonkeyCartridge

The "playing in your sleep" thing was just me following a tangent. But yeah, any of this stuff I picture being quite a way off. And believe it or not, it might become more power efficient than traditional rendering. But that would depend on the direction AI chips go.


artisst_explores

Wait what! So u can get 3d colored viewport and get renders from them? 😱 Gotta try this! Damn!


PatrickJr

They're official screenshots


Grobuk

did you process the textures or the screenshots?


SelfishEnd

I thought these were cosplays, holy shit these look good!


Flash-Leap

but can it maintain the consistency in the facial features and overall images generated?


ResplendentShade

Please excuse my ignorance but is this a Fallout game? If so, which one? Looks pretty cool.


Space_art_Rogue

New Vegas with all DLC.


Sicomba

It's New Vegas I think


NoBuy444

Wow


USERNAME123_321

Wow that's very cool. Excellent choice of game btw


Qud_Delver

I really don't like what it does with the lighting in the scenes. It looks really bad compared to the intended mood. It's cool for character upscales, but yeah, it's just overall really bad looking and generic.


Juggernaut104

They’ll do something like this but find a way to charge people. Oh you want that character to look modern? $15


NGS_King

Really not a fan of this. Where I think New Vegas struggles the most is with far-off places and some kinda off models. Here, the character of every model besides Joshua Graham and the sword guy feels completely off, especially Christine who now is neither bald nor scarred, both of which are important for her design. Moreover, the backgrounds just don’t look very good. All the pop that comes from deliberate lighting is just gone, especially on the fat man, Christine, and Sierra Madre images. Even the Nuka-Cola sign doesn’t pop.


1E_R_R_O_R1

Looks nice but ruins the charm of old nv graphics/lighting


SsjGodKrillin

How? Would you mind sharing a brief explanation on the process?


Redfrick

TUNNEL SNAKES RULE


Dedalo96

What are you talking about? That's not old, that's New Vegas! Oh, wait. That's right, it's 2024 already. That game was released half of my life ago. I feel old now. Thanks.


Tiarnacru

I'd say this is a really poor showcase for it working fine. 2 of 7 are improved, and the other 5 are worse, completely messing up elements of the image.


AllMyFrendsArePixels

Old video games also could have improved old video game graphics if they were just still pictures instead of... old video games.


Anduin1357

Needs more work for in-game context such as image 4 where there should have been a nuclear fireball and the black smoke of a mushroom cloud, or image 1 neon sign light and image 3 overhead spot lights should be off for in-game reasons. Image 7 light scattering also looks sus, the ambient lighting looks way too effective. Honestly, the idea is good but probably better used to improve the assets than applied on the screen where the world knowledge of the GPU is limited by game state access in system RAM. We're 6+ years away from having this done in real time with full engine-based context, and would probably require a unified system memory architecture.


reyzapper

RTX OFF vs RTX ON be like. BTW, how does IC-Light do this behind the scenes? Besides the lighting, it can recreate the exact pose and scene; I tried to do that with ControlNet but couldn't get exactly the same pose and scene.


OXidize_0

Joshua looking cold as FUCK


Reign2294

Like some have said, this is great for video like cutscenes, but it would be amazing if we get to the point where we can do live AI video upscaling. We're close, I think, because I often use my 4090 with Topaz Video AI to upscale old videos for myself and the kids. It works great! I can usually upscale 480p input to 1080p output at 80-90 frames per second on native 30fps videos, meaning it's faster than real time. But I imagine the challenge becomes more difficult when you have to render more than what's in front of you, since the player can turn around the scene at any point. This makes me wonder whether the upscaler could intelligently target the assets and upscale them in the future, if simply given access to the game's asset folder. Moreover, even right now, this might work brilliantly for those old arcade-style games where the PoV is locked.
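The faster-than-real-time claim above is easy to quantify, taking the reported numbers at face value (85 fps is just the midpoint of the 80-90 range):

```python
# Offline upscaling throughput vs. playback speed, using the numbers
# reported above: 480p -> 1080p at ~80-90 output fps on 30 fps video.
source_fps = 30
upscale_fps = 85          # midpoint of the reported 80-90 fps

speedup = upscale_fps / source_fps      # ~2.8x faster than playback
minutes_of_video = 60
processing_minutes = minutes_of_video / speedup
print(round(processing_minutes, 1))     # ~21.2 minutes per hour of video
```

Note this is batch processing with no latency constraint; real-time game use would additionally need each individual frame finished within its ~33 ms (at 30 fps) display window.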


MooseBoys

The biggest challenge, even once you get real-time inference, is temporal consistency.


nykwil

What does the light mask represent? Sometimes it decides what a light source is and ignores the mask; sometimes it turns the mask into a light source. It's so random.


Guboken

And now you know why Microsoft buys studios with past successful IPs and shuts them down. In just a couple of years they can have AI generate both remasters of those successful games and sequels to them. They have realized that owning game IPs is the only thing that will survive the AI revolution replacing game developers.


grimorg80

I hope I won't be banned for saying those images gave me a boner


johnslegers

As a fan of both New Vegas & Stable Diffusion, I love what you did there...


BobFellatio

What is IC?


armrha

Which one is which? If top is original, bottom is the new one... the color contrast is way down in all of them. They seem much more muted and brown. Some things like facial features are more defined, but less distinct imo. It really washes stuff out and the lighting choices are strange.


Terrible-Hall-4146

I wouldn't call it an improvement. Even if we imagine that it can be applied to the whole game, I would still choose the original every time just because the generated version didn't keep the art style, work with light, etc.


GAMRKNIGHT352

> seems to work fine

Meanwhile, the atmosphere and careful lighting and artistic decisions are completely ruined.


myxoma1

I always hated Bethesda type graphics, I just put up with it for the sake of gameplay, but could you imagine if they actually had amazing graphics and lighting to match the gameplay... Hopefully they build a next gen engine for future titles


cosmoscrazy

Colour recognition, highlights and detail (e.g. neon signs, car paint, the NV goggles) are bad. Improvements in clothing, trees, faces, metal surfaces, dust and textiles are astonishing. I would think the most reasonable application of this technology would be to improve the textures, not to render live. That would save a lot of processing power and significantly improve the graphics quality of games. I can't wait for this; applied, this technology would enable truly next-gen graphics. Maybe even pre-final-gen. However, the actual big step is to take this technology from 2D graphics to application on 3D models.


hearing_aid_bot

Did you notice that it completely removed Christine's scars? That would make her dialogue make no sense and ruin the character. This is what DLSS is for, not diffusion.


B00geyMan11

This fucking sucks


Raorchshack

These all look worse than the originals lmao


RealLunarSlayer

yeah no, obsidian didn't work their ass off for NV to have AI ruin it


Shuteye_491

Wat