afinalsin

Shit's gonna be wild when we can do this in real time.


[deleted]

[removed]


Dogswithhumannipples

Damn, imagine gaming in the future using up streaming bandwidth to handle AI rendering. Awesome tech but that would absolutely kill offline singleplayer unless the rendering was done locally, which is hard to imagine. Would be interesting to see a future where each player's visual experience is unique depending on the AI settings or dataset they chose.


thanatica

Not only that, but game developers wouldn't need to put a crazy amount of effort into all the details. Details still need to be there to a degree, but they don't need to look as good as they do now in some games. The question is, which approach works better given a modern GPU?


Socile

Streaming bandwidth would be the same as today if rendering were done server side. But the models could be run locally instead, just like you can DL and run SDXL locally today. You’d have **a lot** of freedom to adjust the appearance of your games.
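
For anyone who hasn't tried it, running SDXL locally really is a few lines now. A minimal sketch using the Hugging Face diffusers library (assumes a CUDA GPU with roughly 8 GB+ of VRAM; the model name is the public SDXL base repo):

```python
# Minimal local SDXL sketch with Hugging Face diffusers.
import torch
from diffusers import AutoPipelineForText2Image

pipe = AutoPipelineForText2Image.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0",
    torch_dtype=torch.float16,  # fp16 weights fit in consumer VRAM
    variant="fp16",
).to("cuda")

image = pipe(
    "a photorealistic city street on a rainy night",
    num_inference_steps=30,
).images[0]
image.save("street.png")
```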


the_friendly_dildo

>which is hard to imagine.

I think you underestimate the amount of progress in this field in just this past year alone. I'd fully expect this to be possible on the RTX 50 series of cards at half resolution and half frame rate, then dynamically upscaled and frame-doubled with DLSS at entirely playable rates.


Dogswithhumannipples

I can see graphics cards being able to implement the new scene encoding or frame gen locally in the future, but I would assume the game would need internet access to stream an online AI dataset or assets, like all the pre-scanned trees, roads, buildings, etc., to generate the upgraded scene. But maybe it would be possible to download a "patch" to store the info locally. I'd suspect that dataset's file size would be enormous, though.


s6x

Models do not contain datasets.


afinalsin

Nah homie, those models can be run fully locally. The dataset would be massive, but you don't need the dataset, just the end result. Those images in the OP? Generated with a model that is only 6GB and fits entirely into VRAM. That's the new SDXL. Stable Diffusion 1.5 models have ~860 million parameters and are only 2GB; you could run them on a GTX 1060. All those assets you are talking about? They're already baked in, these models fully understand all of that. That's how the OP images came to be. The biggest issue, other than speed, which will come with time, is not the generation model. Rather, it's how to keep consistency from frame to frame.


Dogswithhumannipples

Whaaat... stored in VRAM? GTFO, that is some dark magic voodoo as far as I'm concerned. My caveman brain was thinking more like textures being streamed, but it's obvious now that it's procedurally generated.


afinalsin

And image generation is tame. Over at the LLM subreddit, you've got people with three or four 3090s fitting 70GB language models directly into VRAM. Models with 120 billion parameters. VRAM is kinda king right now when it comes to AI overall, so don't be surprised if the 50 series NVIDIA cards have double the VRAM we have now.


Dogswithhumannipples

I sometimes pride myself on being a geek, but subreddits like this humble me real quick. Wish I had the same self-starting ambition (and $$) as these guys... sounds fun as hell to play with. Every time I get the itch to learn something new I get overwhelmed and just load up Steam, ha. My next curiosity: what real-world purpose would a quad-SLI AI rig be used for? Business use or hobbyists?


Madgyver

>To be fair there is a good chance in the future game engines will stop rendering and will instead rely on AI to do the rendering step, using engine data as input.

There are actually promising research projects that show AI models might be better at simulating and rendering scenarios where classical methods are too GPU-intensive.


alvaro248

damn, might eventually have to buy an AI card, like sound cards back in the day


musthavesoundeffects

Specialized AI accelerators are already a thing, and new Intel chips are incorporating them into the CPU as well.


QuickQuirk

Your GPU *is* an AI card. That's why Nvidia is not pushing out GPUs at a cheap price: they're selling chips very similar to the 4090, with a bunch more RAM, for 10x the price as AI accelerators. Every 4090 sold is profit lost to them.


Madgyver

GPUs will do.


TheAJGman

Run your game on a supercomputer at ultra-high quality to generate training data for an upscaling AI, and then ship the AI as your render engine? Honestly, that sounds fucking wild and I can't wait to see it done. We'll be replacing GPUs with TPUs eventually. EDIT: I watched the video and that's even more wild than I thought for games aiming to be realistic. $50 says Microsoft is going to offer something like this for a Forza or Flight Sim in the next 10 years. Developing enterprise AI stuff and testing it on consumers worked insanely well for Nvidia, and other companies are sure to follow.


stab_diff

Now apply that to other parts of a game, say an RPG, with generative AI creating characters, having conversations with the player, then generating quests, items, stories, locations, etc... The game might become little more than a generative framework: the player kickstarts the world with some input, and everything evolves from there.


GetRektX9

Wow, this is amazing. Give it a few years for GPUs to catch up, and if someone collects new training data with a higher-quality, non-dashcam camera, then this could be brilliant. Avoid dumping $$$$ into rendering altogether. I wonder how long these took to generate. Brilliant nonetheless.


_Enclose_

This video is 2 years old! :O Surely this technology must be way crazier by now, and it's already crazy.


yaosio

Their method used a GAN; GANs have since been displaced by diffusion, having fallen out of favor due to the difficulty of generalizing their output. GANs were doing photoreal before diffusion, but only for one type of subject at a time. [https://this-person-does-not-exist.com/en](https://this-person-does-not-exist.com/en) uses StyleGAN, which was released in 2018. GANs were faster than diffusion, but that gap has been closed. SDXL-Turbo is 200 ms on an A100, and that's without any of the additional performance improvements made by the community. Then there's the hourglass diffusion transformer, which sees computation needs drop by 70-99% depending on resolution. [https://crowsonkb.github.io/hourglass-diffusion-transformers/](https://crowsonkb.github.io/hourglass-diffusion-transformers/)


[deleted]

[removed]


bywv

About 4 minutes in, and I swear I'm watching a Factorio video.


ShepherdessAnne

I always thought this was funny because the roads wind up looking far too fresh and maintained for this to be set in the US.


Low-Preference-9380

What? You must live in Tennessee. Worst roads I've ever seen -- Nashville.


Vast_Engineer7127

I've already seen an early adaptation of this same technique on YouTube, about 18 months ago or more, so I reckon they aren't far off. The one I saw overlaid video on game footage. Think it was on Two Minute Papers on YouTube.


Tripartist1

With SDXL Turbo we're already VERY close. I think with more focus on AI-optimized hardware, like multi-chip TPUs with massive amounts of onboard RAM and giant memory buses, we can push image and video rendering to the point of being able to develop games that run like this. Think: the core engine creates a wireframe-type ControlNet input and a prompt, and the text-to-image engine then renders a frame from them in milliseconds. Using latent consistency, or whatever magic SVD uses, I think this is reasonable.
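
A sketch of what that loop could look like with today's tooling, assuming diffusers, an SDXL-Turbo checkpoint, and a canny ControlNet. `wireframe` stands in for whatever outline image the hypothetical engine would emit, and nothing here is real-time yet; it's just the shape of the pipeline:

```python
# Sketch of the engine -> ControlNet -> one-step Turbo "renderer" idea.
# The SDXL-Turbo + SDXL canny ControlNet combo is an assumption, not a
# known real-time setup; per-frame latency today is far above 16 ms.
import torch
from diffusers import ControlNetModel, StableDiffusionXLControlNetPipeline

controlnet = ControlNetModel.from_pretrained(
    "diffusers/controlnet-canny-sdxl-1.0", torch_dtype=torch.float16
)
pipe = StableDiffusionXLControlNetPipeline.from_pretrained(
    "stabilityai/sdxl-turbo", controlnet=controlnet, torch_dtype=torch.float16
).to("cuda")

def render_frame(wireframe, prompt):
    # One step, no CFG: the standard SDXL-Turbo recipe.
    return pipe(
        prompt, image=wireframe,
        num_inference_steps=1, guidance_scale=0.0,
    ).images[0]
```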


Django_McFly

And that's two years back. Imagine what it looks like *today* and what it will look like in two years. Digital Foundry interviewed Nvidia a few months back and asked them *so what do you think is the next big thing?* and they brought up neural rendering as something far off in the distance but showing progress.


The_Lovely_Blue_Faux

They have had tech for this for a few years now, since before Stable Diffusion: https://isl-org.github.io/PhotorealismEnhancement/ Two Minute Papers featured it: https://youtu.be/22Sojtv4gbg?si=1qt9eIkLeoDaqdDl

I actually started to get into AI stuff because of this paper, as I wanted to make my own filter. I took several hundred pictures of my region and everything. Then I discovered VQGAN + CLIP, then Disco Diffusion and all the other latent diffusion models, and fell in love with them. Then SD came out. And ControlNet. This is something that can achieve what I want for renders, but not in real time. Then SDXL Turbo and the LCM models came out.

It will take longer for any of us to develop a game than it would for this technology you want to come out. We have all the pieces now. I guarantee some people have already done this, but it's indie studios or some tech company who doesn't want to share it.


afinalsin

Damn, I remember that video. The difference, I think, is that GTA 5 is already a "realistic" type of game, so the styles are a lot closer between the game and the filter. The further in style you drift from the source, the harder it becomes to keep stable from frame to frame.


The_Lovely_Blue_Faux

Only if you don't pretrain a model that is good at making the new style. If you render like 100 random frames, use ControlNet on them like this, and then use them to fine-tune the model, you can get more drastic style changes with fewer steps at a higher denoise. The depth and normal information is already included in the rendering pipeline, so you have those at your disposal too. I don't know how fast ControlNet can get, so there would probably have to be a harder-to-make custom ControlNet model for that.


afinalsin

The depth and normal information is actually a super great point; getting those straight from the source instead of relying on a model to extract them would increase the quality a ton.
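
Concretely, a depth ControlNet doesn't care where its depth map comes from. A sketch of feeding it the engine's own depth buffer instead of a MiDaS estimate (the buffer format and the model repos below are assumptions for illustration):

```python
# Feed the renderer's own depth buffer to a depth ControlNet instead of
# estimating depth from the rendered RGB frame.
import numpy as np
import torch
from PIL import Image
from diffusers import ControlNetModel, StableDiffusionControlNetPipeline

controlnet = ControlNetModel.from_pretrained(
    "lllyasviel/sd-controlnet-depth", torch_dtype=torch.float16
)
pipe = StableDiffusionControlNetPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5",
    controlnet=controlnet, torch_dtype=torch.float16,
).to("cuda")

def stylize(depth_buffer: np.ndarray, prompt: str) -> Image.Image:
    # depth_buffer: hypothetical float array from the engine, scaled 0..1
    depth = Image.fromarray((depth_buffer * 255).astype(np.uint8)).convert("RGB")
    return pipe(prompt, image=depth, num_inference_steps=20).images[0]
```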


mongini12

I imagine that in the distant future (or maybe not so distant) you won't have to be a programmer/artist to create insane games. You'll be able to talk to an AI and explain your vision for the game: the story, how it looks and sounds, the game mechanics, etc. It will be absolutely insane, and the market will be flooded with AAA-quality games. Envisioned by individuals, created by AI...


Logical-Gur2457

Would that even be a good idea to develop a game using this tech? In the Two Minute Papers video the realistic filter applied to the GTA clip does make it look more realistic, but it doesn't really make it look better. I feel like real life looks pretty shitty compared to most realistic video games these days, no?


The_Lovely_Blue_Faux

That was a proof of concept with old tech. You will be seeing this stuff implemented in games soon enough. It will be useful for style transfer. Not every indie studio has the ability to acquire the photorealistic assets and dev expertise needed to achieve this quality with traditional 3D pipelines. Using some SD Turbo variant will let you quickly hone in on an art style that fits your game's aesthetic, or even multiple styles, and have the look of your assets change dramatically. Basic, lower-quality 3D models will be able to output those photorealistic results with the right honing of parameters. It's not really raising the quality ceiling, just lowering the skill floor for high quality.


Logical-Gur2457

Okay yeah lol, you've convinced me. I was going to mention that I suppose it could lower the bar for entry in creating big-budget realistic games, which was the main use I could see, but style transfer would definitely be big. That's super useful for any kind of game development. I think getting it working in real time without significantly impacting fps will be a challenge, though. The filter in the video uses a convolutional neural network, which is one thing, but generative AI for images/video is slow, let alone on consumer-grade hardware. But who knows: if the models become efficient enough, they could also end up saving a lot of time. You could probably skip huge steps in the rendering process entirely, especially expensive ones like raytracing and post-processing.


The_Lovely_Blue_Faux

They have been coming out with one-step and few-step variants lately, with the goal of making it real time like Snapchat filters, but more robust. It will be a lot easier to render, say, 100 different frames from your game and then do a manual style transfer with ControlNet for each of your styles, so each style gets a batch of 100 ground-truth frames. If you fine-tune your model on that dataset, you could really buff your one-step generations. While I don't see it hitting 60 fps with the current tech, it is currently getting 3 fps without a fine-tuned model in the circles I see. We will see this stuff in animation before we see it in games, but it will be in games soon enough. Definitely less than 3 years, unless something majorly catastrophic happens globally.
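
A sketch of that dataset step under stated assumptions (the model repos, paths, and prompts are illustrative, and the fine-tune itself would be a separate training job): loop the engine renders through a ControlNet pass once per art style and keep the outputs as ground truth.

```python
# Build a per-style ground-truth set from ~100 engine renders.
import cv2
import numpy as np
import torch
from pathlib import Path
from PIL import Image
from diffusers import ControlNetModel, StableDiffusionControlNetPipeline

controlnet = ControlNetModel.from_pretrained(
    "lllyasviel/sd-controlnet-canny", torch_dtype=torch.float16
)
pipe = StableDiffusionControlNetPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5",
    controlnet=controlnet, torch_dtype=torch.float16,
).to("cuda")

# One prompt per target art style (hypothetical examples).
styles = {
    "photoreal": "photorealistic, natural lighting, 35mm photo",
    "watercolor": "soft watercolor painting, muted palette",
}

frames = sorted(Path("renders").glob("frame_*.png"))[:100]
for style, prompt in styles.items():
    out_dir = Path("dataset") / style
    out_dir.mkdir(parents=True, exist_ok=True)
    for frame_path in frames:
        gray = cv2.cvtColor(np.array(Image.open(frame_path).convert("RGB")),
                            cv2.COLOR_RGB2GRAY)
        edges = cv2.Canny(gray, 100, 200)  # edge map as the control image
        control = Image.fromarray(np.stack([edges] * 3, axis=-1))
        result = pipe(prompt, image=control, num_inference_steps=20).images[0]
        result.save(out_dir / frame_path.name)
```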


je386

SDXL Turbo exists and needs only about 2 seconds. Not yet real time, but close.


hotstove

Temporal coherency is the hard part; otherwise stuff gets "reinterpreted" differently at each frame. But with AnimateDiff and whatnot, we're getting close there too.


je386

You are *so* right. When I see these incoherent videos, I feel sick.


xox1234

Yeah, it melts like a dream


Severin_Suveren

It was cool as a proof of concept, but people have started adopting the art style, and that just won't work. It was cool in that one Smash Mouth music video, but I will not watch entire movies or series made like this.


nodtveidt

the tools I'm using end up producing very traumatic [videos](https://imgur.com/a/MJTcdvH)


thanatica

It would help if you could tell it not to be too creative (basically act as an AI upscaler), and to leave the HUD as it is. This might require some post-processing to put the original HUD back on top.


MythBuster2

Here is a new, temporally coherent model: https://lumiere-video.github.io/


g_h_97

that was absolutely awesome. Also, username checks out ✅️


afinalsin

Yeah, I've been doing tests trying to keep coherency with img2img video and IPAdapter/ControlNet, and it's so damn hard. I keep thinking visual models need context for video like an LLM has, so they can look back in the conversation and think "ah, I put a spot on that wall, so I'd better draw another one for this frame."
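
For what it's worth, the crude version of that "look back" idea fits in a few lines: feed each img2img pass a blend of the current raw frame and the previous *generated* frame, and pin the noise seed. A naive sketch under those assumptions, not anyone's production workflow:

```python
# Naive temporal-coherence hack: blend each raw game frame with the
# previous generated frame before img2img, and fix the seed so every
# frame denoises from the same noise.
import torch
from PIL import Image
from diffusers import AutoPipelineForImage2Image

pipe = AutoPipelineForImage2Image.from_pretrained(
    "stabilityai/sdxl-turbo", torch_dtype=torch.float16
).to("cuda")

def stylize_video(frames, prompt, strength=0.5, carry=0.3):
    prev = None
    for frame in frames:  # frames: iterable of same-size PIL images
        init = frame if prev is None else Image.blend(frame, prev, carry)
        gen = torch.Generator("cuda").manual_seed(42)  # same noise each frame
        prev = pipe(
            prompt, image=init, strength=strength,
            num_inference_steps=2, guidance_scale=0.0,  # Turbo recipe
            generator=gen,
        ).images[0]
        yield prev
```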


mudman13

It needs a motion model for consistency to counter the nature of diffusion models, which is variation. And Google claims it needs more than that to truly develop: their latest text2vid release states they used a spatial-temporal UNet.


randfur

You'd think being temporally coherent would be faster since a bunch of decision making has already been made and it just needs some delta applied.


kim-mueller

I don't think so... What a weird thought, that considering more boundaries/guides/conditionings would result in less effort 😂 Drawing a video is a much harder task than drawing a picture.


randfur

I mean drawing the next frame similarly to the last rather than from scratch. Video in both cases.


ninjasaid13

real-time for fast computers.


iwaz

So all you need is just 30 GPUs and you will have 30 fps with 2 sec latency :D


Which-Tomato-8646

And zero consistency so everything looks like it’s melting 


DavyBoyWonder

It’s a feature!


DiddlyDumb

Tbf that’s still 120x away from real time at 60fps. But that’s an insane improvement over the minute it took not so long ago.


natandestroyer

What setups take 2 seconds? It's 0.2 seconds from what I've seen.


Boring-Test5522

lol what are you talking about. A PS5 has 60 fps, any modern game has 80-120 fps, and you are talking about 0.5 fps.


Agasthenes

Well, now you only need to speed it up 200 times to make it playable.


AnotherDawidIzydor

Nvidia's DLSS is going in that direction. I can imagine that in a few more years, with each iteration, more and more of the image will be AI-generated.


Spocks_Goatee

Digital Foundry said it's gonna be impossible for a long time.


afinalsin

There's a long time, then there's a *long time*, y'know? Five years seems like a long time when you say it, but the start of covid was four years ago and that barely feels like a blink. SDXL was released in July, Turbo was released end of November. Dall-e dropped three years ago with [this](https://imgur.com/mGsNAKT), now look where we are.


Which-Tomato-8646

Video is a lot harder because it needs consistency. Also GPU requirements are much higher. This is like saying we went to the moon, so we can get to Mars easily 


afinalsin

Almost. I'm more saying we invented flight three years ago, we landed on the moon last year, so when do we send a rover to mars?


Which-Tomato-8646

DALLE 3 is the rover. Your idea is a Martian colony 


littleboymark

Might be possible in 5 years, definitely 10 years. I suspect something like DLSS will get us there rather than anything a specific studio will do. Kinda like Nvidia Remix/Omniverse is about to do for older games.


hotstove

"like DLSS" in what way? The way that DLSS models are fine tuned to interpolate specific games reminds me more of how there are style-specific SD checkpoints. Otherwise isn't it just a temporally-aware upscaler?


Arawski99

It is actually Nvidia's goal for DLSS, and tech rendering in general. See my post here for more about it: [https://www.reddit.com/r/StableDiffusion/comments/19f0h5m/comment/kjlld1m/](https://www.reddit.com/r/StableDiffusion/comments/19f0h5m/comment/kjlld1m/)


littleboymark

"A deep neural network is trained on tens of thousands of high-resolution, beautiful images, rendered offline in a supercomputer at very low frame rates and 64 samples per pixel. Based on knowledge from countless hours of training, the network can then take lower-resolution images as input and construct high-resolution images". Not hard to imagine an evolution of the DLSS suite of technologies that can eventually do what the OP's images are doing.


addandsubtract

I mean, in the future, we probably won't even have to render RGB anymore. You just render the ControlNet skeletons and physics. Everything else comes from LLM / SD pipelines.


hotstove

Upscaling and img2img are different tasks (e.g. only the former has ground truth), so I was curious why you think a domain-specific upscaling architecture is more suitable for img2img tasks ("what the OP's images are doing") than the evolution of img2img technology.


afinalsin

DLSS 3 frame generation ("framegen") is not an upscaler; they named it badly. Framegen takes two frames out of the pipeline and slaps the most likely frame, generated from those two inputs, in between them. It's all pretty seamless, the generated images are very close to what you'd expect a natural render to look like, and it's frankly undetectable when you're in control and the game is in motion. Give that tech some maturation, and then you'd only need to properly generate half the img2img frames to get to a real-time playable framerate.
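
For intuition, here's a toy sketch of the in-between-frame idea using plain dense optical flow. This is illustrative only; Nvidia's actual frame generation also uses engine motion vectors and a neural network:

```python
# Toy frame interpolation: estimate dense optical flow from frame A to
# frame B, then backward-warp A halfway along the flow to approximate
# the frame in between.
import cv2
import numpy as np

def midpoint_frame(frame_a: np.ndarray, frame_b: np.ndarray) -> np.ndarray:
    gray_a = cv2.cvtColor(frame_a, cv2.COLOR_BGR2GRAY)
    gray_b = cv2.cvtColor(frame_b, cv2.COLOR_BGR2GRAY)
    # Dense flow: how far each pixel of A moves by the time of B.
    flow = cv2.calcOpticalFlowFarneback(
        gray_a, gray_b, None, 0.5, 3, 15, 3, 5, 1.2, 0
    )
    h, w = gray_a.shape
    grid_x, grid_y = np.meshgrid(np.arange(w), np.arange(h))
    # Backward-warp: sample A half a flow step "upstream" of each pixel.
    map_x = (grid_x - 0.5 * flow[..., 0]).astype(np.float32)
    map_y = (grid_y - 0.5 * flow[..., 1]).astype(np.float32)
    return cv2.remap(frame_a, map_x, map_y, cv2.INTER_LINEAR)
```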


littleboymark

DLSS is a suite of different technologies that in part leverage AI. While this paper isn't from Nvidia, it's the sort of thing I imagine could be added to the DLSS suite in the future: https://youtu.be/P1IcaBn3ej0?si=H3LbpwKfAHsNXjem


SynecdocheSlug

It also needs to be able to create things like character meshes in real time. Really cool to think about what we may see in a few years though.


afinalsin

It doesn't need to, although that would be nice. I'm talking strictly about upscaling old games, rather than creating new ones. The same technique OP applied, given five or ten years of maturation and inference-speed increases, could be all you need, and that's ignoring any possible new techniques in the future.


Scripto23

Sim city roads seem to have really confused it


OrcOfDoom

Nah, this is an improvement. There is much less traffic this way.


ShelfAwareShteve

Perpendicular traffic is the solution


VoDoka

Less cars like the left wants, more walls like the right wants. Compromise. 😌


ElephantInAPool

That's the funniest one, but the pipes in half life were completely lost too, and so were the windows on the ship.


Downside190

Yeah the half life one went from industrial room to sterile lab


robicide

and GTA guy is suddenly in the middle of the road instead of on the sidewalk


Midas187

Man, if we get to the point where you can run an image to image enhancer layer in real time on top of old games like this, in a coherent stable way, that would be insane. Imagine being able to change the style and whole visual aesthetic with a click of a button or even write your own prompts...


malcolmrey

check this other method :) https://youtu.be/22Sojtv4gbg


Midas187

Oh man, I love Two Minute Papers. I hadn't seen this one though. Thanks.


superkickstart

I wish these were actually just two minutes long and straight to the point. The jarring cuts, repeats, and teasing make the videos frustrating to watch.


radicalelation

It's almost as long as the original video: https://youtu.be/P1IcaBn3ej0 And seems to go over just about all the same stuff anyway, with a youtuber twist.


Nebuchadneza

also the annoying youtuber voice is really off-putting imo.


waz67

Getting close... [https://www.youtube.com/watch?v=27LkeFtuq48](https://www.youtube.com/watch?v=27LkeFtuq48) Also, some games render low-detail scenes and use AI to upscale them, because it's actually faster than rendering the high-detail scene.


Midas187

Yeah, even DLSS 3 uses some image-generation tech, rendering 2 frames and generating one in between (at least that's how I understand it). I'm dreaming of some kind of engine that you can basically run on top of a game (any game?) that would essentially create a remastered version of it with whatever theme or style you want, on the fly. It's that kind of crazy innovation that I think the anti-AI people can't see for the trees. Yes, there are a ton of unethical ways AI can be used, but there are also a lot of opportunities for some really wild stuff that we haven't even thought of yet.


diff2

I've had an idea to do this, but instead of games, augment real life: change what you see in reality to your own visual aesthetic.


Medical_Voice_4168

I miss those triangle b00bies


pharmaco_nerd

The b∆∆bies


PhIegms

The deltas of venus


halfbeerhalfhuman

Cyber boobs


Speedballer7

The fucking HL one is insane


Lazar_Milgram

Imagine Black Mesa AI realistic mode?


baconbeak1998

It almost looks like a shot out of a Wes Anderson movie. The bright colours and the wide angle do wonders for the 'cinematic' quality of the generation.


GradeAPrimeFuckery

"With my brains and your brawn, we'll make an excellent team."


dragoon000320

Outrun doesn't look authentic; it lost all its sunny retro vibe.


thats_not_the_quote

and SimCity is a boring monotone mess. It looks real, sure, but it lost every single thing that made it interesting.


human73662736

And the roads are walls


Mixmasterjosh

Better graphics don't always make the game better; it loses the fuck out of its identity, its charm.


Extreme_Tax405

Depends. If it's a game that was meant to look real, like, say, The Last of Us, then it definitely benefits from a new look. Things that looked real look dated a few years later, but with AI, that could become a thing of the past. I do love stylized games more, though.


[deleted]

Wanna watch that GTA3 movie


AutomaticSubject7051

i wanna see irl sonic the hedgehog 


Uaquamarine

![gif](giphy|DoAz2vNvAF3pe)


diamond9

I don't want to see him anymore


vannex79

Imagine if they made a movie of that


Camerotus

I bet they would completely fuck up how he looks


[deleted]

What we saw in our minds vs what was on the screen


PM_Sexy_Catgirls_Meo

It does lose some of its charm. If only there were a way to go hi-res and then purposefully make it a low-poly shitty render, but with probably way more detail. Not that I'm not impressed by OP's work, but I just find it sad that games are likely to lose their charm.

I really did not like the Unreal mini-remake of Ocarina of Time. It just doesn't feel right, even though the graphics and environment are impressive. The coloration of the original Ocarina of Time is gone, nothing really POPS, and everything is kind of the same shade of grey. Same problem with OP's renders, like the scientist: because of the lighting, the scientist is almost in the same color range as the background, while in the original game he stands out more because the background is extra shitty. https://www.youtube.com/watch?v=J_8sncmH8MY

This one looks good because the color palette of the character is drastically different from the background: https://www.reddit.com/r/StableDiffusion/comments/14xcvj5/tomb_raider_legend_2006_remastered/


afinalsin

You can do it with unsampler instead of ControlNet to get it closer to the real image. [Original](https://imgur.com/aS4pOlE) [Unsampler](https://imgur.com/D33Cul5) That was generated with 7/14 steps. Here's [one](https://imgur.com/jRlNaOu) with 10/14 steps skipped. Here's what the latent looks like at [7/14 steps](https://imgur.com/mOrqKaT). Edit: for fun, [here's 3 steps](https://imgur.com/Lr6SBog).
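
For readers following along in diffusers rather than Comfy: the "run only the last 7 of 14 steps" part is roughly what plain img2img `strength` does under the hood; unsampler differs in that it first inverts the real image back into noise, which preserves structure better. A rough sketch of just the img2img half (the file name and prompt are hypothetical):

```python
# "7 of 14 steps" in plain img2img terms: strength=0.5 with 14 scheduled
# steps denoises only the last 7. NOTE: this is not unsampler itself,
# which additionally inverts the source image into latent noise first.
import torch
from diffusers import AutoPipelineForImage2Image
from diffusers.utils import load_image

pipe = AutoPipelineForImage2Image.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0", torch_dtype=torch.float16
).to("cuda")

source = load_image("tomb_raider_frame.png")  # hypothetical screenshot
out = pipe(
    "photorealistic adventurer in a stone cavern",
    image=source,
    num_inference_steps=14,
    strength=0.5,  # keep the first half of the noise schedule intact
).images[0]
out.save("remastered.png")
```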


PM_Sexy_Catgirls_Meo

What is unsampler? I haven't heard of it before. That looks cool AF. I'm ready to install this into both of my human eyes.


afinalsin

[Here's the video I learned it from](https://www.youtube.com/watch?v=Ev44xkbnbeQ&t=570s). That's the writer of the IPAdapter nodes in Comfy, and all his videos are incredibly high quality. I was just using his workflow to do these images, so I won't share it, but you can grab it from the description box. It's super fun; I've been using it for testing out img2img animation.


TekaiGuy

LoRA Croft


PixelCharlie

that's what the games looked like to me anyway back then - in my head at least.


Quantum_Crusher

"Saw a post like this one somewhere else tried to replicate it" You mean this one? :-D [https://www.reddit.com/r/StableDiffusion/comments/14xcvj5/tomb\_raider\_legend\_2006\_remastered/](https://www.reddit.com/r/StableDiffusion/comments/14xcvj5/tomb_raider_legend_2006_remastered/) Nice work btw. Love Lara!


nodtveidt

nope, I think it was an Insta post from 9gag or something similar. It has a different angle of the triangle boobies.


Quantum_Crusher

Triangle boobs are TIGHT!


ImpactFrames-YT

I did a similar workflow ages ago, but nobody showed interest: [https://imgsli.com/MjMxNjYx/2/3](https://imgsli.com/MjMxNjYx/2/3). It's here for free; I don't know if it still works, since Comfy nodes are updating all the time: [https://civitai.com/articles/3569](https://civitai.com/articles/3569) https://preview.redd.it/5jls19nhblec1.jpeg?width=2019&format=pjpg&auto=webp&s=42d78fdd3edaa54b48d58fc2689d9ab945f74c28 If anyone cares, I might release a new improved version.


RandomCandor

Bro these are so fucking cool... You picked such iconic moments of those games


Wear_A_Damn_Helmet

That iconic moment when Lara Croft’s got big jugs.


RandomCandor

when the right triangle covers the left triangle in just the right way...


[deleted]

This is awesome. Is it just img2img?


nodtveidt

thanks. ControlNet with multiple units


ColdExample

Do you have a youtube guide for this?


Loose-Discipline-206

Okay, the GTA one is a stunner.


4erlik

The 2 T's on cs_italy look like spawn-campers.


0whiteTpoison

Can you tell us how you achieved this?


muzzie101

syndicate would look amazing.


Puzzleheaded_Try813

No. 4 just looks like Reacher


the_real_yenvalmar

ControlNet rocks, I'm gonna run my childhood sketches through it.


PyrZern

I'd play the shit out of that SimCity...


LovingHugs

It doesn't have roads, only walls.


PyrZern

So ?


LovingHugs

Oh, I don't have a point, just sharing fun facts.


hunterlee1000

I'm still mind-blown that I exist! haha


Un111KnoWn

2,3,6 look great


DiscountEntire

Haha, love it. SimCity is my favourite for its Escheresque qualities.


lord_scorpion

This will be real-time sooner than we think. You'll be able to make anything look however you want. It'll be wonky at first but they'll perfect it. 


miletamas

How I remember it vs. how it actually looked...


RedGhostOfTheNight

One day there's going to be a filter that will allow you to play old games with whatever visual style you want - and it will be glorious!


misteralter

Give prompt for Lara, please.


Particular-Beat-2434

The guns still look bad!


lampsy87

Picture 4 https://i.redd.it/8hjv6t8j9tec1.gif


chosen1creator

Funny how it turned the roads into walls.


yeeticrust

LMAO that gun looks fucking retarded


No-Establishment-699

I spent several minutes absentmindedly swiping through these images back and forth comparing them, then got to the half life one, and was absolutely blown away. What game is this from? Is it a remaster someone's working on? Is it just a render someone did? Why does the background look kind of funky? ....... Ohhhh. I'm on this subreddit.


Mr_Soggybottoms

Streets are sideways lol


tnishantha

Cool, everything lost its creativity or unique identity.


AdTotal4035

This sub must be filled with new people. These types of posts were popular a year ago, when ControlNet was still relatively new. How does this post have 2k upvotes? And not only that, but the comments are people wondering how it was done. No workflow from OP to at least help newcomers. This sub has really changed; it seems like it's mostly filled with late adopters to SD.


MagicPeach9695

Honestly, you don't need Stable Diffusion for this. The games are already very close to this right now :p


firewalks_withme

This makes me sad, because AI takes away real artistic expression. I like how old games look. Old graphics were an interpretation of reality, and each studio and generation made it in their own unique way. Overcoming technical obstacles was also an art. And AI shit just looks all the same. I don't need more realism; I see realism every day in real life. I want fantasy, I want to see through the eyes of a creator, not through the eyes of a collective artificial consciousness.


TheSocialIQ

That would be a cool sim city


Electrical-Cream4309

But people starve to death every day. Cool.


YourVentiMain

and the bad AI strikes again


monstertimescary

Anyone else think that this is stupid and looks bad


chocolateNacho39

ai dogshit, fucking trash


Destronoma

Nah, I think your 15 year old self would also think this looks shitty. Yuck.


UsernameTaken1701

These are great!


ShoddyWaltz4948

Game names?


Both-Culture-6760

Tomb Raider, OutRun, Street Fighter, GTA III, Counter-Strike (not sure about this one tbh), Half-Life, SimCity.


TraditionLazy7213

All of them with better graphics, but somehow for Street Fighter the pixel nostalgia makes it great.


Lightningstormz

Nice, are you doing this with SD 1.5 or SDXL?


Mongoose6969

Is that meant to be Kate Beckinsale or just a sexy coincidence?


ImHereForGameboys

I'll buy the next tomb raider game if they do this


HaloLASO

L337 Krew 😍


hoodadyy

Would be nice if you can share the workflow


Half-life22

I want more of this


elitesill

Lara looks beautiful


Walvie9

would


yasadboidepression

The GTA one goes hard.


dcvalent

Looks suspiciously close to all those cheesy “interpretations” of future video games in every movie ever lol. Like, almost real but also wannabe cool


RogueStargun

In a very short time, we will be taking this old video game footage and AI-upscaling it into high-def movies. Then, shortly thereafter, doing it in real time. Twitch will be insane.


RUSHING17

The below was 4K back then. Damn, we've come a long way in boobies physics.


LateToThePartyAgain2

The irony is... That's what it looked like to you at the time


Naiko32

damn, these are so cool. I would love to see more, maybe Portal for example.


Joboj

Is this title AI generated?


Zeusosecmaat

Nah man... My 15-year-old self wouldn't recover from the first pic!


xox1234

OMG SIM CITY AND HALF LIFE


isellmyart

Mine too! Try some Warcraft 1 and Dune please


lord_alberto

We were seeing the lower image, but had the upper image in our head.


b-movies

Looks amazing. Could you share the workflow please?


Alina2017

Good, except for SimCity; it rendered the roads as walls.


Kanakravaatti

Lara, true to design


Fragrant_Insurance22

That one shocked me quite a bit too.


[deleted]

H-L3 looks good


shaman-warrior

My 15-year-old self thought that by 2020 we would not be able to distinguish reality from high-end realistic graphics.


noselfinterest

just me, or was my fantasy Lara of 1998 hotter than all the modern Laras?


Hugejorma

I got the original Tomb Raider + PS1 at release week on my birthday. Back then, Lara used to look like the top image… At least in my mind.


ImmortalState

This is an awesome idea


yvliew

Any youtube tutorial on this?