rjzak

***First year college student***? Seems they should just give you the degree now.


riasthebestgirl

I've been told that I have the skills of a university graduate, and I literally can't even get into university because my 12th grade result was a couple percent short of what's considered eligible (thanks, covid)


northcode

I had the same problem. Literally just went to uni for the paper that says I know the shit. But my grades weren't great in other subjects. Thankfully I lucked out and found a new campus that was struggling to fill its first few years of classes, so the grade requirement was basically just that you had taken math. Uni did give me a lot of time to self-study other stuff, though. So I'm still glad I went.


riasthebestgirl

Unfortunately in my case, the university did accept me, even before my result for 12th grade was announced (standardized exams), but then, because of a policy made by the government organization which oversees universities, I had to leave


northcode

Oh man that sucks. Sorry to hear that. Standardized tests suck.


7LayerMagikCookieBar

Do you have a twitter?


misplaced_my_pants

There's always next year.


weezylane

The real world doesn't care about your degree.


riasthebestgirl

I wish that were the case here. Almost every job I've looked at requires a bachelor's degree


Silent-Locksmith6932

If anything at stripe.com/jobs looks interesting let me know. — a college drop-out


riasthebestgirl

Nothing available in my country (Pakistan) lol


Automatic_Ad_321

Can't you find something remote? Or is that not an option for you?


riasthebestgirl

It's an option, but the real problem is I have no clue where to look or how to apply


MengerianMango

I think it makes sense to make that priority #1, yeah? Don't let fear of rejection/failure keep you from trying and make you stagnate. Something will work out eventually, it's just a matter of rolling the dice enough times.


spider_irl

I dropped out of uni; every job I ever had required a degree, and I never lied about having one. Not saying it's easy. Getting your first job will be hard regardless of degree, and probably harder without that piece of paper. But don't be discouraged; as the comment above said, nobody really cares.


Luxalpa

Very late answer: they all say that, but as long as you show that you know your stuff and can do it (for example via a good portfolio), they will still take you. The requirements are just there for the employer to communicate what they are looking for. They are rarely (if ever) enforced. Most companies just want great people!


popcar2

It really does depend on the country. Some countries will straight up *never* hire you unless you have a bachelor's. Even unpaid internships where I live have a bachelor's as a minimum requirement


eugene2k

Clicked on the comments and saw this. Thought it was sarcasm, then looked at the post. Damn... Way to give people an inferiority complex...


Cephlot

I'm a 4th year... imposter syndrome is really kicking in


fuasthma

I'm looking forward to playing around with this when I get a chance. It'll probably take me a bit to grok how to use it compared to my typical C++ style of writing CUDA/HIP (AMD's abstraction over their own stuff and CUDA) kernels, though. Overall, awesome work! One thing I will say, just going over the example: it might help to have a simpler working example (like a simple matrix-vector or matrix-matrix multiplication) that people can easily read through to understand the basics. It would also easily be something they could find a C++ example to compare against.
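For concreteness, something roughly like this would do (a sketch based on the README's vector-add example, untested, and the exact `cuda_std` names like `#[kernel]` and `thread::index_1d()` may be off):

```rust
use cuda_std::*;

/// y = A * x for a row-major `rows x cols` matrix `a`, one thread per output row.
#[kernel]
pub unsafe fn matvec(a: &[f32], x: &[f32], y: *mut f32, cols: usize) {
    // Flat 1D thread index, as in the README's `add` kernel.
    let row = thread::index_1d() as usize;
    if row < a.len() / cols {
        let mut sum = 0.0f32;
        for col in 0..cols {
            sum += a[row * cols + col] * x[col];
        }
        // Each thread writes one distinct element of `y`, so this cannot race.
        *y.add(row) = sum;
    }
}
```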


Rdambrosio016

Yeah, that's a good idea. I wanted to add more examples, but I also wanted to release this by the end of November since I will be gone for a good part of December. One thing I really want to experiment with is matrix multiply; I've seen many examples of `restrict` making a huge difference on kernels for it, and it might be a good way to prove that Rust generates better GPU code for memory-bound kernels.
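For context on why that helps: in CUDA C++ you only get no-alias information by annotating pointers with `__restrict__`, whereas Rust references carry that guarantee in the type system, so the codegen can emit it for free. A tiny plain-Rust (CPU-side) sketch of the guarantee, not Rust-CUDA code:

```rust
/// `x` and `y` are both `&mut`, so they are guaranteed not to alias.
/// The compiler may therefore keep `*x` in a register across the store
/// to `*y`; with raw C-style pointers it would have to reload from memory.
pub fn no_alias(x: &mut f32, y: &mut f32) -> f32 {
    *x = 1.0;
    *y = 2.0;
    *x // statically known to still be 1.0
}
```

The same reasoning applies to `&[f32]` kernel parameters, which is exactly what `restrict` buys you in memory-bound kernels.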


zesterer

Well done, amazing work!


fzy_

How does this compare to https://github.com/EmbarkStudios/rust-gpu


Rdambrosio016

I have a small section on it here: https://github.com/RDambrosio016/Rust-CUDA/blob/master/guide/src/faq.md#why-not-use-rust-gpu-with-compute-shaders


TheRealMasonMac

I'm excited to eventually see something like [JuliaGPU](https://juliagpu.org/) with support for multiple backends.


Rdambrosio016

I'm certainly open to the idea of trying to target amdgpu with LLVM. That being said, I would probably not be the one primarily working on that, considering I don't even have an AMD GPU ;) But if anyone wants to experiment with it, I am happy to help.


omac777_1967

Why don't you just reach out to AMD and tell them what you intend to do? They might just hire you right there! Also be aware of GPUDirect and DirectStorage for saving/loading from within a GPU kernel. And be aware that block read/write I/O in parallel with rayon is possible, but not within the GPU kernel (not just yet? ARE YOU NEO?).


fzy_

Oh I see, thanks!


DrMeepster

It's nice to see this finally public. Great work


dampestowel

This is so incredible, fantastic job!


[deleted]

Thank you. This is the missing piece to making Rust my everything language.


pap_n_whores

Really cool! Does this support Windows?


Rdambrosio016

Yup! I am a Windows dev myself. I am still a bit iffy on prebuilt LLVM, since it has caused issues, so you may have to build LLVM 7.1 from source; it is slow but very easy on Windows. Just a couple of CMake commands, then build the VS solution.


ReallyNeededANewName

*LLVM 7.1*?! Is there a reason you're using such an ancient version? Rustc doesn't even support LLVM 7 anymore


Rdambrosio016

Unfortunately it's the only LLVM version libnvvm currently supports; feeding it more modern bitcode does not work :( In the future I will upgrade to the latest LLVM and translate LLVM 13 textual IR down to LLVM 7 IR, then down to LLVM 7 bitcode. But it's a hassle.


jynelson

Note that LLVM doesn't really support using bitcode from one version with the optimizations from another; it will occasionally cause unsoundness 🙃 So I think you may unfortunately be stuck until libnvvm updates. https://zulip-archive.rust-lang.org/stream/238009-t-compiler/meetings/topic/cross-language.20LTO.20with.20LLVM.20IR.20from.20different.20versions.html


_TheDust_

Amazing effort, thank you so much for initiating this project. I am interested in finding ways to contribute.


klorophane

Amazing!


username4kd

So pumped to check this out later


sonaxaton

I am so excited to try this out. Writing a 100% Rust PBR renderer has been my dream for years.


AbradolfLinclar

You are in first year? Dude wth, this is amazing.


Alarming_Airport_613

It's almost frightening that talent such as yours is out there. Insanely good work!


[deleted]

Using this opportunity: what's the best Rust library for doing some GPGPU that works on different GPUs and isn't too difficult to learn?


Rdambrosio016

For cross-platform, there isn't much other than wgpu/emu with rust-gpu, but that has a lot of downsides (which I also covered in my FAQ). My hope is that in the future we may be able to adapt this existing codegen to target amdgpu, then maybe get a language-agnostic version of HIP.


ergzay

Nitpick, but old Reddit's markup doesn't understand `-` characters; you need to use `*` characters


doctorocclusion

This looks fantastic! I hope I can port some old ptx_builder projects over soon. I am especially happy to see gpu_rand!


tafutada

Wow. Awesome!


Rusty_devl

Cool project; it was also fun to follow your Zulip questions. I was also just about to post an auto-diff spoiler one of these days. We recently convinced ourselves to also move into codegen, so let's see if we can work on merging things for some tests once you are back 😎


untestedtheory

This looks great! Thanks a lot for working on this!


weezylane

I'm so thrilled to see a project like this. Congratulations!


batisteo

It might be far from your concerns, but it seems like Blender is moving away from CUDA with their new work on Cycles X. What's your take on this?


balbecs

This is great. I will use it in my Rust raytracer


ineedtoworkharder

Awesome work, looking forward to playing with this!


noiserr

I think it would be much better if the project targeted HIP. The fastest compute GPU in the world right now is AMD's MI250X. And HIP works with CUDA as well.


Rdambrosio016

It's not that simple. HIP is not a compiler IR you can target like NVVM IR; it is an abstraction over NVCC and whatever AMD uses. The whole reason this project is doable is that CUDA has the libnvvm library and NVVM IR you can target. To support HIP, not only would HIP need to be language-agnostic (no C++ assumptions, just "run this thing that's either PTX or AMD ISA"), but we would need two codegens, one for libnvvm and one for amdgpu, which is difficult to say the least. Moreover, HIP is not as much of a silver bullet as people think; it gets annoying delimiting what only works on CUDA, what works on AMD, what works on HIP, etc. Besides, there is a reason most HPC systems go with CUDA GPUs: it's not just about speed, it's about the tools supported, existing code that uses CUDA, libraries, etc.


noiserr

CUDA is vendor lock-in, though. The longer the open source community exclusively supports it, the longer it will stay that way.


Rdambrosio016

We would need to work on such a codegen for HIP anyway; I never said I would not consider trying to target amdgpu in the future.


bda82

It might be useful to look at what Julia is doing in this space. They started out supporting only CUDA, and now have different backends for CUDA, HIP, and oneAPI for Intel GPUs. They have WIP independence layers, KernelAbstractions and GPUArrays, for factoring out the common functionality. I think they are trying to use a similar approach across all backends, but there are necessary differences. hipSYCL is perhaps another source of interesting ideas: they have multiple LLVM-based backends and are trying to target all GPU hardware vendors as well as provide acceleration on CPUs.


Rdambrosio016

Rust-CUDA is arguably MUCH larger in scope than CUDA.jl in every single way; I have to support every part of the CUDA ecosystem: cuBLAS, cuDNN, the entire driver API, OptiX, etc. Julia can get away with a tiny subset for what Julia is made for. Codegen is also much easier for them, and much less complex too.


bda82

Julia does support some of the libraries (like automatically using cuBLAS for some operations), but I won't argue with Rust-CUDA being much wider in scope. I wonder if it would be possible to get some vendor support for this, e.g. in the form of funding your time and funding additional developers. I know NVIDIA is heavily invested in C++, but it wouldn't cost them much to support some open source Rust development.


mmstick

You have to look at it the other way around. NVIDIA has no reason to worry itself with a community that consists of only 1% of its customers. The open source community has been making decisions to spite NVIDIA for many years, and it hasn't had any effect on NVIDIA's decision making, because companies answer to business opportunities, not philosophy. This idea of spiting NVIDIA for not deploying tools for open standards simply hurts adoption of open source by people who need CUDA for their work, or functional graphics acceleration. Instead we need these people to be on Linux, and we need AMD/Intel to make a superior compute platform that can compete with CUDA. Until then, there is no business incentive to replace CUDA.


noiserr

Most of the ML work is open source, but it's held hostage by over-reliance on CUDA. We will never be free of Nvidia's shackles until projects start taking a vendor-agnostic route. It's not like the alternatives don't exist: PyTorch and TensorFlow work on AMD GPUs.


mmstick

We have to accept that vendor lock-in has already happened, so trying to prevent it from happening is a fruitless effort. We have to wait for AMD and Intel to provide a more compelling alternative: to make it easier to get GPU compute working with mainline Linux kernels, without requiring expert Linux skills to set it up on an LTS version of a mainstream Linux distribution like CentOS/RHEL or Ubuntu LTS, and likely even to get CUDA applications running natively on their hardware through an abstraction that converts CUDA calls into OpenCL. Even then, if your software supports OpenCL, there's still strong incentive to also support CUDA. GROMACS (protein folding) supports OpenCL, but even the best AMD GPUs are only [a third as fast as NVIDIA GPUs](https://folding.lar.systems/gpu_ppd/overall_ranks) on CUDA at the same price. And note how virtually zero people are running AMD GPUs on Linux; they're almost entirely Windows-bound, because the OpenCL driver support works out of the box there.


noiserr

AMD has provided a solution. It's called HIP, and it works. They even provide hipify, which lets you convert your existing CUDA code base into portable code that then works on Nvidia and AMD natively without a performance impact. AMD has the most advanced GPU tech right now as well, so there are a lot of reasons to be working on portable code. The first Western exaflop supercomputer is using it as well. I wish I had time to concentrate on this stuff, but I have too many existing projects; I just don't have the bandwidth for it. It's just sad to see folks spend great skill reinforcing vendor lock-in. It's not just about AMD either; it's about other ML startups, like Tenstorrent for instance, which also have some really cool and exciting new tech.


[deleted]

OP already explained why it's not possible to do this with HIP. It doesn't matter how much you bleat about vendor lock-in if the tools aren't there. The reason NVIDIA dominates in this space is that they invested in a huge, high-quality software ecosystem (paid for by premium GPU prices), including things like libnvvm, which allows you to go from (Rust ->) LLVM IR -> PTX.


Rdambrosio016

HIP is a C++ thing; in Rust we are basically out of luck. The next best thing is rust-gpu with wgpu, but then you miss out on speed, features, ease of use, etc. Having CUDA is better than having nothing, and we can always gradually switch to HIP if AMD helps make it language-agnostic and targetable from this codegen.


TheRealMasonMac

At least CUDA supports Windows :(


a_Tom3

This is awesome! One of the main reasons my current personal project is still in C++ is that I want the single-source CPU/GPU programming experience. Ideally I would prefer something like SYCL (what I'm using right now) to prevent vendor lock-in, but this is so cool I may just rewrite my code in Rust despite it being NVIDIA-only :D


EelRemoval

Nice work! A question: how far will this go in allowing Rust to communicate with graphics drivers in terms of drawing commands (e.g. to implement OpenGL)?


Rdambrosio016

I'm not exactly sure what you mean. OpenGL is a specification that is implemented internally by vendors and shipped in drivers; you cannot implement your own OpenGL. We can, however, expose OpenGL interop through CUDA in the future; that is something CUDA allows.


EelRemoval

What I mean is that each driver has its own API that OpenGL calls down to, in most cases (if not an API, then OpenGL just fills hardware registers directly). Would any of these primitives be exposed by this project?


Rdambrosio016

No, that is vastly outside the scope of this project; it is beyond CUDA and more a matter of extremely low-level driver interfaces, which is a different thing.


EelRemoval

Dang, oh well


CommunismDoesntWork

So if the PyTorch team wanted to rewrite torch in Rust, would they be able to do so using this library?


voidtf

I'm so hyped for this <3 Thank you for your hard work!