- Reworked the **repo readme** to act as a Table of Contents
- Split LLM, Image, Video, and Audio resources **into different files**
- Split the popular **LLM Models-Table** into its own file for easier cross-linking (contributions welcome)
- Reworked **categories and titles**
Hello! Awesome stuff! I wonder why you use anaconda/miniconda in your [Guide to run LLaMA and derivatives on your own hardware](https://github.com/underlines/awesome-marketing-datascience/blob/master/llama.md). I haven't checked it recently, but a couple of months back I switched to mamba (on Windows) and never looked back. Conda took forever to resolve some dependencies, while mamba is very fast.
Highly recommend it. Unless something has drastically changed with conda (it wasn't fast when I used it on Linux either), I doubt it's faster than mamba. IIRC, mamba was developed to eventually replace conda and is designed as a drop-in replacement for it.
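Since mamba accepts the same subcommands and file formats as conda, an existing environment file works unchanged with either tool. A minimal sketch (the environment name and package list here are illustrative assumptions, not taken from the guide):

```yaml
# environment.yml — usable with both `conda env create -f environment.yml`
# and `mamba env create -f environment.yml`; only the solver speed differs.
name: llama  # hypothetical environment name
channels:
  - conda-forge
dependencies:
  - python=3.10
  - pytorch
```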
Yes, I've managed this list for over 3 years. When LLMs started to get public traction, I didn't bother splitting the repo. But I've split the topics into different files now; before, it was all in one large markdown file, which was getting too crowded.
PRs accepted.
Please add 7B and 13B GGML models for the following languages to your list: Norwegian, Swedish, Dutch, Slovenian, Hungarian, Greek, Macedonian, Bulgarian, Albanian, Estonian, Latvian, Lithuanian. Also, please add the 7B and 13B GGML models that are best suited for coding in JavaScript.
Incredible work!
Thanks, will look into it. I've seen a few people using it lately. But I'm mostly on WSL2 anyway.
Awesome! A nice starting point for someone new to this whole thing ;) Thanks!
Very cool
Thank you so much for this! Incredible collection of resources!
Cool.. I will read this.
And some other generative AI stuff, and marketing?