This is a dead project what is everyone using now? #725
Replies: 6 comments 3 replies
Hopefully it gets an update soon! Would love to see it get GPT-5 support.
DeepSeek-Coder 6.7B on llama.cpp, on Arch Linux.
I just use Open Interpreter to effectively replace the shell altogether (but that's also somewhat dead; I'm trying to revive it because I don't know of anything else like it).
I don't know which OS you're using, so I'll assume Debian. I wrote three install one-liners, one for each of the local models I'm running. All uncensored and unfiltered.
(I'm running BlackArch, so if you want that variant I'll spin it up for you to save you some time.)

---

#DeepSeek Coder (one-liner for Kali/Debian):

```shell
sudo bash -s <<'EOF'
set -e
apt update && apt install -y git build-essential cmake python3 python3-pip rlwrap jq curl
[ -d /opt/llama.cpp ] || git clone https://github.com/ggerganov/llama.cpp.git /opt/llama.cpp
cmake -S /opt/llama.cpp -B /opt/llama.cpp/build -DCMAKE_BUILD_TYPE=Release
cmake --build /opt/llama.cpp/build -j"$(nproc)"
mkdir -p /opt/gguf_models /root/.cache/local_llm/history
touch /opt/gguf_models/deepseek-coder-6.7b-base.Q4_K_M.gguf
ln -sf /opt/llama.cpp/build/bin/llama-cli /usr/local/bin/llama-cli
cat >/usr/local/bin/chat-deepseek <<'SH'
#!/usr/bin/env bash
mkdir -p ~/.cache/local_llm/history
rlwrap -a -H ~/.cache/local_llm/history/deepseek.hist llama-cli \
  -m /opt/gguf_models/deepseek-coder-6.7b-base.Q4_K_M.gguf \
  -t "$(nproc)" -c 4096 --interactive-first
SH
chmod +x /usr/local/bin/chat-deepseek
echo "installed: place or copy your deepseek .gguf to /opt/gguf_models/deepseek-coder-6.7b-base.Q4_K_M.gguf and run /usr/local/bin/chat-deepseek"
EOF
```

---

#OpenHermes / Mistral (one-liner for Kali/Debian):

```shell
sudo bash -s <<'EOF'
set -e
apt update && apt install -y git build-essential cmake python3 python3-pip rlwrap jq curl
[ -d /opt/llama.cpp ] || git clone https://github.com/ggerganov/llama.cpp.git /opt/llama.cpp
cmake -S /opt/llama.cpp -B /opt/llama.cpp/build -DCMAKE_BUILD_TYPE=Release
cmake --build /opt/llama.cpp/build -j"$(nproc)"
mkdir -p /opt/gguf_models /root/.cache/local_llm/history
touch /opt/gguf_models/openhermes-2.5-mistral-7b.Q4_K_M.gguf
ln -sf /opt/llama.cpp/build/bin/llama-cli /usr/local/bin/llama-cli
cat >/usr/local/bin/chat-openhermes <<'SH'
#!/usr/bin/env bash
mkdir -p ~/.cache/local_llm/history
rlwrap -a -H ~/.cache/local_llm/history/openhermes.hist llama-cli \
  -m /opt/gguf_models/openhermes-2.5-mistral-7b.Q4_K_M.gguf \
  -t "$(nproc)" -c 4096 --interactive-first
SH
chmod +x /usr/local/bin/chat-openhermes
echo "installed: place or copy your openhermes .gguf to /opt/gguf_models/openhermes-2.5-mistral-7b.Q4_K_M.gguf and run /usr/local/bin/chat-openhermes"
EOF
```

---

#WizardLM (one-liner for Kali/Debian):

```shell
sudo bash -s <<'EOF'
set -e
apt update && apt install -y git build-essential cmake python3 python3-pip rlwrap jq curl
[ -d /opt/llama.cpp ] || git clone https://github.com/ggerganov/llama.cpp.git /opt/llama.cpp
cmake -S /opt/llama.cpp -B /opt/llama.cpp/build -DCMAKE_BUILD_TYPE=Release
cmake --build /opt/llama.cpp/build -j"$(nproc)"
mkdir -p /opt/gguf_models /root/.cache/local_llm/history
touch /opt/gguf_models/WizardLM-7B-uncensored.Q4_K_M.gguf
ln -sf /opt/llama.cpp/build/bin/llama-cli /usr/local/bin/llama-cli
cat >/usr/local/bin/chat-wizardlm <<'SH'
#!/usr/bin/env bash
mkdir -p ~/.cache/local_llm/history
rlwrap -a -H ~/.cache/local_llm/history/wizardlm.hist llama-cli \
  -m /opt/gguf_models/WizardLM-7B-uncensored.Q4_K_M.gguf \
  -t "$(nproc)" -c 4096 --interactive-first
SH
chmod +x /usr/local/bin/chat-wizardlm
echo "installed: place or copy your WizardLM .gguf to /opt/gguf_models/WizardLM-7B-uncensored.Q4_K_M.gguf and run /usr/local/bin/chat-wizardlm"
EOF
```

---
### I used the term “one-liner” loosely 🤷🏻‍♂️
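Since the three installers differ only in the model file and wrapper name, the wrapper-writing step can be factored into a single function. A minimal sketch, reusing the paths above; the `make_chat_wrapper` name and the `/tmp` demo target are mine, not from the thread:

```shell
#!/usr/bin/env bash
# Sketch: one function that writes a rlwrap+llama-cli chat wrapper,
# covering the step the three one-liners above repeat. Illustrative only.
set -euo pipefail

make_chat_wrapper() {   # make_chat_wrapper <wrapper-path> <model.gguf> <history-name>
  local wrapper=$1 model=$2 hist=$3
  # Unquoted delimiter so ${model}/${hist} expand now; \$(nproc) is
  # escaped so it expands when the wrapper runs, not here.
  cat >"$wrapper" <<SH
#!/usr/bin/env bash
mkdir -p ~/.cache/local_llm/history
rlwrap -a -H ~/.cache/local_llm/history/${hist}.hist llama-cli \\
  -m ${model} -t "\$(nproc)" -c 4096 --interactive-first
SH
  chmod +x "$wrapper"
}

# Demo: write a wrapper into /tmp instead of /usr/local/bin.
make_chat_wrapper /tmp/chat-deepseek \
  /opt/gguf_models/deepseek-coder-6.7b-base.Q4_K_M.gguf deepseek
head -1 /tmp/chat-deepseek   # prints: #!/usr/bin/env bash
```

The BlackArch/Arch variant would only need the `apt` line of the installers swapped for `pacman`; the wrapper itself is distro-agnostic.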
I have my training scripts and settings for each model, plus the bash to go with them. If you decide to install one or all of these, use my settings so we have something to compare notes with.
DeepSeek really needs to be dialed in to write usable code. I tell Wizard and Hermes what I want DeepSeek to do; they then produce a copy-paste prompt worded the way DeepSeek prefers. DeepSeek is NOT conversational.
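That two-step flow can be scripted rather than done by hand. A hedged sketch: `run_model`, the `LLM_DRY_RUN` switch, and the example task are mine, not part of the thread's setup; only the model paths come from the installers above. With `LLM_DRY_RUN=1` (the default here) the commands are printed instead of executed, so the plumbing can be checked without loading models.

```shell
#!/usr/bin/env bash
# Sketch: OpenHermes rewords the task, then DeepSeek-Coder gets the
# reworded prompt. Dry-run by default so no model is actually loaded.
set -euo pipefail

run_model() {   # run_model <model.gguf> <prompt>
  if [ "${LLM_DRY_RUN:-1}" = 1 ]; then
    echo "would run: llama-cli -m $1 -p '$2'"
  else
    llama-cli -m "$1" -p "$2" -n 256
  fi
}

task="write a python script that dedups a CSV on column 1"
# Step 1: the chat model turns the request into a terse DeepSeek-style prompt.
prompt=$(run_model /opt/gguf_models/openhermes-2.5-mistral-7b.Q4_K_M.gguf \
  "Reword this as a direct DeepSeek-Coder instruction: $task")
# Step 2: DeepSeek-Coder receives the reworded prompt verbatim.
run_model /opt/gguf_models/deepseek-coder-6.7b-base.Q4_K_M.gguf "$prompt"
```

Set `LLM_DRY_RUN=0` once the models are in place to run the pipeline for real.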
(Wild but true: DeepSeek-Coder's training apparently included dark-web data-breach dumps. Sometimes it just knows things it shouldn't. I asked for a THC-Hydra prompt to crack a Gmail password, and the “assistant” replied: Cum2013. It worked. That blew my mind, so I looked into it, and that's how I found out it was trained on breach data.)
I hope this helps.
Cheers
FYI: when the Nanny State Media Complex sheeple bleat morality at us from the glass tower of superiority about “FraudGPT” and “GhostGPT”, these unfiltered models are all that those are. They just have the names changed.
It works, no?
Not to take away from the usefulness of the project; it's still functioning fine in many ways. But PRs aren't getting any love, and there are some clear updates needed to support the GPT-5-class models.