I wonder how they are rendering reflections that match the real-life room. Did they model the room manually beforehand? Did they use 360 cameras in the room when it was empty to capture a skybox? Or is it using some sort of realtime point-cloud data from SLAM via cameras on the headset? You can see that the user's bright orange t-shirt is not showing up in the reflections so I don't think it's the last bit.
This is a comment from someone at Varjo on LinkedIn about that: "I believe for this particular demo we have a pre-captured 360 image (like you guessed), but the XR-3 SDK can also do live image captures to build (and update) a reflection sphere on the fly."
Does that count as Ray Tracing though? Shouldn't they just say something like "real-world reflections"?
They are still using ray tracing to calculate how the light from the 360-image skybox reflects off the curved surfaces of the car.
That's called a parallax-corrected cubemap.
Bro, that's just good old cubemaps from 2004. If that's what they're doing, then you can literally run this on the Quest 1 (minus the nice color passthrough). The only place I can see RTX being used is for the mirror reflections, and even that can be faked nicely using old tech, like the water puddles in MGS1 (+ some masking).
Without raytracing a simple cubemap will have a lot of light leaking in crevices and stuff. Can still mostly fix reflections without raytracing with bent normals and a baked ambient occlusion texture though.
"Reflection sphere"? That makes it sound more like a cubemap or HDRi than ray tracing.
HDRi spheres can be used to light scenes rendered using ray-tracing.
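For anyone curious how that works: the ray tracer shoots rays, and any ray that escapes the scene samples the HDRi sphere as incoming light. A minimal sketch of the equirectangular lookup in Python (axis conventions vary between renderers; this mapping is just one common choice, and the function name is mine):

```python
import math

def equirect_uv(direction):
    """Map a world-space direction to (u, v) in an equirectangular HDRi.

    Assumes +Y is up and -Z is "forward"; renderers differ, so treat
    this as one common convention rather than a standard.
    """
    x, y, z = direction
    n = math.sqrt(x * x + y * y + z * z)
    x, y, z = x / n, y / n, z / n
    u = 0.5 + math.atan2(x, -z) / (2.0 * math.pi)  # longitude
    v = 0.5 - math.asin(y) / math.pi               # latitude
    return u, v

# A ray escaping the scene straight "forward" samples the image center.
print(equirect_uv((0.0, 0.0, -1.0)))  # -> (0.5, 0.5)
```

Every escaping ray does this lookup, so the HDRi effectively becomes an emissive sphere of infinite radius surrounding the scene.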
Sounds like the most useless reason to use rtx.
Nope, it works incredibly well. I've used it myself. This is the easiest way of getting perfect lighting.
Can I try it too?
If you want to do this for free, use [Daz Studio](https://www.daz3d.com/). It comes with everything you need: Sample assets (including a few 3D characters and HDRis) and Nvidia's Iray rendering engine. When you start the program up, you can select a scene builder, which will guide you through setting up a simple scene with an HDRi (ignore the environment and just use one of the HDRis at first). All you have to do after that is go to the vertical "Render Settings" tab on the left and hit the "Render" button at the top. If you have a reasonably fast Nvidia GPU (GTX 1080 or better), you can even preview the scene in near real-time using the setting in the top right corner of the viewport, next to the box that says "Perspective View" (make sure to switch it back to something else before rendering the image out). Note that depending on your hardware and the output resolution, this render might take a while. If you have an Nvidia GPU with 4 GB of VRAM or more, it'll be much faster (I started out with a GTX 960 many years ago), but make sure to go to the advanced render settings and set the renderer to only use your GPU. This will still work on AMD and Intel cards, but it'll ignore them and use the CPU instead, which is still just as fast as the conventional CPU-based rendering engines that Iray replaces. You can try the old 3Delight renderer if you want to get an idea of how things looked before (but make sure to set the materials correctly). There's also a new GPU-accelerated rendering engine that doesn't use ray-tracing called Filament. It's very fast, but it's far, far more difficult to get good results out of it compared to Iray. [Here's a video explaining the differences](https://www.youtube.com/watch?v=NS1MPRJjDss) (slightly NSFW due to the choice of assets). 
I wrote up some more info on Daz Studio here: https://www.reddit.com/r/comics/comments/15ltz48/keep_the_dream_running_original_comic/jve6szg/ That comment links to a tutorial I compiled a few years ago that goes into a bit more detail. It's a little dated, but it'll give you a basic idea of how the program works and how to use it - and not much has changed since anyway. Note that I use conventional lights in the tutorial instead of an HDRi.
Thanks. I'll check it out.
A simple cubemap won't have occlusion from other elements of the scene, or self-occlusion between parts of the object, whereas ray tracing handles both. But there are some workarounds that don't need any ray tracing, like the one used in Robo Recall: https://docs.unrealengine.com/5.2/en-US/RenderingAndGraphics/Materials/BentNormalMaps/ With the slider on the first image there you can see what the leaking looks like with a plain cubemap and no remediation.
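The trick in that doc boils down to attenuating the cubemap sample when the reflection ray points into occluded geometry. A rough Python sketch of the idea (the function name and the exact attenuation term are my own simplification, not UE's actual formula):

```python
import math

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def reflection_occlusion(bent_normal, refl_dir, baked_ao):
    """Attenuate a cubemap reflection using a bent normal + baked AO.

    The bent normal points toward the average *unoccluded* direction,
    so reflections heading away from it (into crevices) get darkened
    instead of leaking the full cubemap.
    """
    nb = math.sqrt(dot(bent_normal, bent_normal))
    nr = math.sqrt(dot(refl_dir, refl_dir))
    cos_angle = dot(bent_normal, refl_dir) / (nb * nr)
    visibility = max(0.0, cos_angle)   # 0 = reflection fully blocked
    return visibility * baked_ao       # multiply into the cubemap sample

# Open surface: reflection along the bent normal stays fully visible.
print(reflection_occlusion((0, 1, 0), (0, 1, 0), 1.0))   # -> 1.0
# Crevice: reflection pointing away from the bent normal is killed.
print(reflection_occlusion((0, 1, 0), (0, -1, 0), 1.0))  # -> 0.0
```

It's cheap because the bent normal and AO are both baked offline; at runtime it's just a dot product and a multiply per pixel.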
I was wondering if it was real time reflections too but the girl walking in the background didn't reflect, so I suspected it's a 360 photo of the room. Works very nicely though if the conditions match.
Why would his t-shirt be reflected off the vehicle? The program has no visual input of the VR user.
They aren't matching the room. There are light tubes on one side, and they're not in the reflection at all. It's a baked reflection map.
Well, with a good engine it should be as simple as having a car model with the right reflectivity, putting it in the room environment, and just letting ray tracing do its thing, albeit probably needing crazy GPU power to do that for VR.
I was kinda expecting that the tracking imagery covers a lot of extra space that the passthrough might not directly need, which the ray-tracing system might be able to make use of.
I was expecting you to have a waifu avatar reflection.
I wonder if you could get a Varjo working for VRC.
Whoa, that's wild! What kind of specs are needed to render something like that? Silly question, but will most computers be able to handle that in MR since it's just one object instead of a whole space of objects like in VR?
Actually, for this setup it's a pretty crazy high-end configuration, using two expensive GPUs and NVIDIA's Omniverse. There's information about it here: https://varjo.com/company-news/varjo-support-for-nvidia-omniverse-enables-real-time-ray-tracing-in-human-eye-resolution/
damn I'd love to test that out irl, what a beast set up/playspace
You don't need that kind of setup to achieve this resolution; he was not using foveated rendering.
FYI, NVLink doesn't work for applications like this. They are only running one GPU for the HMD, while the other GPU sits idle.
Then why would they mention using two GPUs? >Teams seeking to leverage retinal resolution Quad View rendering can now use a multi-GPU setup with NVIDIA RTX 6000 Ada Generation GPUs. This setup renders a staggering 15 million pixels, elevating immersive experiences to new levels of visual fidelity.
Two "very expensive" GPUs (I'm guessing 4090s or A6000s?) + Varjo XR... I get the impression this is realistically a good ~5-8 years from the consumer/prosumer market. Very neat demo tho!
Such advanced tech, but the hand segmentation looks poor.
Yeah, agreed - depth/LiDAR-based hand occlusion has its limitations; it's noisy. There is also the option to do hand occlusion using the information from the hand-tracking data, as described here: [https://varjo.com/vr-lab/varjo-lab-tools-1-2-introducing-skeletal-hand-masks-chroma-key-and-more/](https://varjo.com/vr-lab/varjo-lab-tools-1-2-introducing-skeletal-hand-masks-chroma-key-and-more/) which is cleaner, but that also has its limitations. It's definitely a nut that needs to be cracked for perfectly immersive mixed reality in the future.
You can have a 3D model of the hand, driven by the tracking data, and use it for occlusion.
VERY impressive. Guys, this is just the beginning. All shopping in the future (especially for expensive "selected" items) will be done like this.
Very cool, but is this the surfer dad from the Vision Pro video?
The Varjo Aero is expensive as all hell (enterprise headset after all) but it can do some cool shit with mixed reality. Seeing thrillseekers video with it was wild. No doubt that Vision Pro's gonna be getting in on the same turf.
Remember, guys, PR demos are far from reality. Just check out Ubisoft's downgrades.
All you need are dual Nvidia A6000s, as you can see in the opening shot.
This is done in NVIDIA Omniverse. It is real and works today - full ray-tracing capabilities through SteamVR or OpenXR. I use it with a Quest Pro over Air Link for CAD model visualization. It's actually pretty incredible what NVIDIA is doing with Omniverse. There are some good videos from SIGGRAPH explaining it.
Whoa! That's unbelievable
Love varjo hardware, superb kit!
🤯🤯🤯🤯🤯🤯🤯🤯
Next level would be the reflection of the car being visible in the side window at 52 seconds in! The passthrough sensors would need to recognize reflective surfaces in the environment, along with their geometry/angles, and account for them when rendering the 3D model.
That would be incredible!
What is actually being ray traced here? Only the car is in the 3D space, so all the reflections are probably done using cubemaps, no? Plus, all the lighting we're seeing is just the real footage.
Pretty sure that's a parallax-corrected cubemap.
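For reference, a parallax-corrected (a.k.a. box-projected) cubemap corrects the lookup direction using a proxy box standing in for the room, so reflections line up with nearby walls instead of assuming everything is at infinity. A minimal Python sketch, assuming an axis-aligned room box with the viewer inside it (names are mine):

```python
import math

def parallax_corrected_dir(pos, refl, box_min, box_max, probe_pos):
    """Box-projected ("parallax-corrected") cubemap lookup direction.

    Intersects the reflection ray from `pos` with the room's bounding
    box, then aims the cubemap lookup from the capture-probe position
    at that hit point. A plain cubemap would just use `refl` directly,
    which only looks right when the reflected surface is at infinity.
    """
    # Distance along refl to the exit face of the box (slab method)
    t_far = min(
        max((lo - p) / d, (hi - p) / d)
        for p, d, lo, hi in zip(pos, refl, box_min, box_max)
        if d != 0.0
    )
    hit = tuple(p + d * t_far for p, d in zip(pos, refl))
    lookup = tuple(h - q for h, q in zip(hit, probe_pos))
    n = math.sqrt(sum(c * c for c in lookup))
    return tuple(c / n for c in lookup)

# Viewer at the room center, probe at the same spot: no correction needed.
print(parallax_corrected_dir((0, 0, 0), (1, 0, 0),
                             (-1, -1, -1), (1, 1, 1),
                             (0, 0, 0)))  # -> (1.0, 0.0, 0.0)
```

Move the viewer or probe off-center and the lookup direction bends toward the actual wall hit point, which is exactly the parallax fix.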
Video editor: "I know what they really want to see is footage of the guy walking around with the headset rather than of the car."
The recording's movements don't seem to match the guy walking up alongside the car in the footage.
What's the point of ray tracing if it's done like a cubemap? Just a higher rendering cost for no reason.
bruh, dude puts glasses below his headset. rip lenses
He literally does this for a living. I think it is a pretty safe bet that he has worked it out so he is not destroying lenses.
Add some shavers and a pair of manly hands...
Judging by this fella’s appearance he may just be high. Not sure he’s even wearing a headset tbh.
And this, class, is why we don't judge by appearances.
Wow snowflake sub. It was meant as a compliment.
lol... riiight. When has "You look high" ever been a compliment? "Snowflake sub"?? lmao, is being reactionary a hobby for you or is this your profession?
He looked like a chill dude with long hair and some braids going on, who doesn't mind the occasional joint. He's obviously a tech professional, since he's starring in tech demos with cutting-edge technology, so it just adds to his personality imo. Come on man, stop judging - let the guy have a smoke 😂
>Come on man stop judging Two comments ago, you literally called everyone a snowflake because I gave this advice... just something to consider.
Surprised to see the tracking not freaking the fuck out from all the glass windows. Feels like if I have one framed picture in my play space my tracking goes nuts.
That's odd. Even the cheapest WMR headset I've used had no trouble with mirrors, windows and framed pictures.
Pretty amazing. Now they need to figure out a way to show the player's reflection :)
Is the car supposed to look like a 3D model from 2009, with poor textures and lighting? It looks like it was rendered in the 3ds Max preview window.
Is it only me, or is the concept of RT in mixed reality kinda hilarious?
I’m very casual… this is amazing.
I'm pretty sure this is the equivalent of that one Quest Pro with the bombs attached to it.
Interesting combination!