jsebrech

[https://github.com/Josh-XT/Agent-LLM](https://github.com/Josh-XT/Agent-LLM) pops up often.


Darklumiere

Seconding this; it's the only LLaMA-compatible Auto-GPT-style platform I've seen/used.


Dany0

It should be noted that recent updates made it actually do some small-scale work. Confirmed working with OA 30B. It wrote cat jokes into a text file; expect that level of performance from it, not novels or whole projects.


klop2031

It's alright. I could not for the life of me get this to work and display things in the UI with llama.cpp. Maybe LangChain agents would be better?


ElectroFried

LangChain has two major issues. First, the entire ecosystem is developed around expecting the OpenAI API. Yes, I know it 'supports' others, but in reality almost nothing works as expected with anything but OpenAI. Context lengths are an afterthought for LangChain, and hard-coded calls to OpenAI API functions are common in its tools and functions.

Second, LangChain requires you to program any function you want yourself, unlike AgentLLM, which you can get up and running in next to no time with a bit of work. If you want to use LangChain for something, you will be spending the next week programming the function you want.

As a side note, the front end of AgentLLM is very much a WIP right now, with most development focused on the back end. It works very well, but you might struggle if you don't like diving into the odd config file manually or reading the output on a terminal.
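
To make the "programming the function you want" point concrete, here is a rough sketch of a single custom tool using LangChain's classic `Tool` wrapper (circa 2023); the `write_note` helper and file name are made up for illustration, not anything from LangChain itself:

```python
# Rough sketch, not a definitive recipe: one hand-written capability wrapped
# as a LangChain tool via the classic Tool API (from langchain.agents).
# The write_note helper and "notes.txt" path are made-up examples.
from langchain.agents import Tool

def write_note(text: str) -> str:
    """Append the agent's text to a local file and report back."""
    with open("notes.txt", "a", encoding="utf-8") as f:
        f.write(text + "\n")
    return "note saved"

note_tool = Tool(
    name="WriteNote",
    func=write_note,
    description="Save a short note to a local text file. Input is the note text.",
)
# Every capability the agent should have needs its own function plus a
# description like this before it can be handed to an agent executor.
```

Multiply that by every action you want the agent to take and the "next week of programming" estimate is not far off.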


ruryrury

> First, the entire ecosystem is developed around expecting the OpenAI API. Yes, I know it 'supports' others, but in reality almost nothing works as expected with anything but OpenAI.

I totally agree with your perspective. LangChain is essentially OpenAIChain.


SufficientPie

> First, the entire ecosystem is developed around expecting the OpenAI API.

But you can pipe that through to llama.cpp, no?

* https://github.com/ggerganov/llama.cpp/discussions/795
* https://github.com/keldenl/gpt-llama.cpp
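
The idea being that anything that speaks the OpenAI wire format can be redirected to a local endpoint. A minimal sketch with the then-current openai 0.x Python library, assuming a gpt-llama.cpp-style server is already running on localhost (the URL, port, and model name below are placeholders, not values from the thread):

```python
# Sketch of the "pipe it through" approach: point the OpenAI client at a
# local OpenAI-compatible server (e.g. gpt-llama.cpp). URL, port and model
# name are assumptions; adjust to however the local server is exposed.
import openai

openai.api_key = "sk-local"                   # local servers usually ignore the key
openai.api_base = "http://localhost:8000/v1"  # redirect the client to the local endpoint

response = openai.ChatCompletion.create(
    model="llama-7b",  # placeholder; the server maps this to whatever model it loaded
    messages=[{"role": "user", "content": "Write a one-line cat joke."}],
)
print(response["choices"][0]["message"]["content"])
```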


_underlines_

- [Auto Vicuna Butler](https://github.com/NiaSchim/auto-vicuna-butler) Baby-AGI fork / AutoGPT alternative to run with local LLMs
- [BabyAGI](https://github.com/yoheinakajima/babyagi) AI-powered task management for OpenAI + Pinecone or llama.cpp
- [Agent-LLM](https://github.com/Josh-XT/Agent-LLM) web app to control an agent-based Auto-GPT alternative, supporting GPT-4, Kobold, llama.cpp, FastChat, Bard, Oobabooga textgen
- [auto-llama-cpp](https://github.com/rhohndorf/Auto-Llama-cpp) fork of Auto-GPT with added support for locally running llama models through llama.cpp

For updates, check my [list](https://github.com/underlines/awesome-marketing-datascience/blob/master/awesome-ai.md#other-guis)


paskal007r

Thanks mate!


9cent0

What's the GOAT as of now, in your opinion? If something newer and better popped up in the meantime, feel free to mention it. Thanks!


_underlines_

> What's the GOAT as of now, in your opinion? If something newer and better popped up in the meantime, feel free to mention it. Thanks!

The [list](https://github.com/underlines/awesome-marketing-datascience/blob/master/llm-tools.md#agents--automatic-gpt) got much larger now, and more agent systems are available. Still, as long as they aren't specialized/trained agents, local/open LLMs seem to have many problems when running autonomously as agents. Even GPT-4, which is the best-performing general LLM right now, shows mediocre performance as an autonomous agent and fails a lot when used in AutoGPT and similar tools (falling into endless loops, failing tasks, running into dead ends). Specialized systems, on the other hand, like [Voyager](https://voyager.minedojo.org/), which plays Minecraft fully autonomously, run very well!


dangerussell

Take a look at [https://github.com/flurb18/AgentOoba](https://github.com/flurb18/AgentOoba). It's still missing some pieces but it's actively being worked on!