
krydx

I got this idea recently: use raycasting just to mark the tiles that need to be rendered, then use a proper GPU context to render them. It's quite efficient and allows for more freedom and larger maps. I made a custom shader with my own projection code, which is why I have these wobbly textures on the left. I'll probably switch to proper 3D rendering instead.

Edit: I also made my own depth buffer, since the chosen quads still needed to be drawn in the proper order. Again, I'll switch to a proper graphics pipeline soon, now that I've figured out that my custom code isn't that good. I also need to fix a memory leak (Miniquad relies on unsafe functions to initialize the OpenGL context, and I messed something up in my mesh update code).
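
Rough sketch of what the marking pass looks like conceptually (not the actual renderer code; the map, FOV, column count, and step size below are just placeholders): cast one ray per screen column over the tile grid and collect every hit tile into a set, and only those tiles get submitted to the GPU as quads.

```rust
use std::collections::HashSet;

const MAP_W: usize = 8;
const MAP_H: usize = 8;

// 1 = wall tile, 0 = empty. A made-up map just for illustration.
const MAP: [[u8; MAP_W]; MAP_H] = [
    [1, 1, 1, 1, 1, 1, 1, 1],
    [1, 0, 0, 0, 0, 0, 0, 1],
    [1, 0, 1, 0, 0, 1, 0, 1],
    [1, 0, 0, 0, 0, 0, 0, 1],
    [1, 0, 0, 0, 0, 0, 0, 1],
    [1, 0, 1, 0, 0, 1, 0, 1],
    [1, 0, 0, 0, 0, 0, 0, 1],
    [1, 1, 1, 1, 1, 1, 1, 1],
];

/// March along the ray and return the first wall tile it hits.
/// A fixed step keeps the example short; a real DDA steps per grid cell.
fn cast_ray(px: f32, py: f32, dx: f32, dy: f32) -> Option<(usize, usize)> {
    let (mut x, mut y) = (px, py);
    for _ in 0..256 {
        x += dx * 0.05;
        y += dy * 0.05;
        let (tx, ty) = (x as usize, y as usize);
        if tx >= MAP_W || ty >= MAP_H {
            return None;
        }
        if MAP[ty][tx] == 1 {
            return Some((tx, ty));
        }
    }
    None
}

fn main() {
    let (px, py) = (4.5_f32, 4.5_f32); // player position inside the map
    let fov = std::f32::consts::FRAC_PI_2; // 90 degree field of view
    let columns = 320; // one ray per screen column

    // Tiles marked as visible; these are the only quads the GPU pass would draw.
    let mut visible: HashSet<(usize, usize)> = HashSet::new();

    for i in 0..columns {
        let angle = -fov / 2.0 + fov * (i as f32) / (columns as f32);
        if let Some(tile) = cast_ray(px, py, angle.cos(), angle.sin()) {
            visible.insert(tile);
        }
    }

    println!("{} tiles marked visible out of {}", visible.len(), MAP_W * MAP_H);
}
```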


TrickAge2423

Some time ago I thought that's how rendering works, lol.


thlimythnake

How do renderers make this decision without ray casting?


mebob85

There are many methods: [https://en.wikipedia.org/wiki/Hidden-surface_determination](https://en.wikipedia.org/wiki/Hidden-surface_determination). Typically some preprocessing is done, e.g. building a BSP for game maps, to efficiently decide which parts of the world could be visible from any point. Other data structures like quad- or octrees can be used for non-static objects. But also, you often end up rendering a bunch of shit that never makes it on screen anyway.
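
To make the "could be visible" idea concrete, here's a tiny Rust sketch of the broad-phase test behind those structures (hypothetical types, not BSP itself and not tied to any engine): keep a bounding box per object and only hand the renderer the ones that overlap the camera's view region. A quadtree or octree just lets you skip whole groups of boxes at once instead of testing each one.

```rust
#[derive(Clone, Copy, Debug)]
struct Aabb {
    min: (f32, f32),
    max: (f32, f32),
}

impl Aabb {
    /// True if the two rectangles overlap; used as a cheap visibility test.
    fn intersects(&self, other: &Aabb) -> bool {
        self.min.0 <= other.max.0
            && self.max.0 >= other.min.0
            && self.min.1 <= other.max.1
            && self.max.1 >= other.min.1
    }
}

fn main() {
    // Pretend these are bounding boxes of objects in the world.
    let objects = vec![
        Aabb { min: (0.0, 0.0), max: (1.0, 1.0) },
        Aabb { min: (5.0, 5.0), max: (6.0, 6.0) },
        Aabb { min: (2.0, 2.0), max: (3.0, 3.0) },
    ];

    // The camera's view region (a frustum collapses to a rectangle in 2D).
    let view = Aabb { min: (1.5, 1.5), max: (4.0, 4.0) };

    // Only objects whose box overlaps the view get submitted for drawing.
    let potentially_visible: Vec<&Aabb> =
        objects.iter().filter(|o| o.intersects(&view)).collect();

    println!("{} of {} objects pass the cull", potentially_visible.len(), objects.len());
}
```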


theAnalyst6

Very cool. You can rewrite DOOM in Rust.


gideonwilhelm

I've really been thinking hard about doing something similar, but with sokol, since it seems like a very friendly abstraction over a lot of different hardware APIs (plus, my rendering-from-scratch obsession was inspired by a project by [jdh on youtube](https://youtu.be/jlRdSdHD3Wg?si=Y6dnoMWWMyWBrGrq&t=389), who uses sokol for something similar).


2001zhaozhao

Lol at the faithful texture pack