## llama-cpp-low

Script to build the llama.cpp server binary using cargo.

### Wait, are you sober?

I just wanted a daemon to run the LLM with minimal external dependencies...
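
The core trick is a cargo build script that drives llama.cpp's own CMake build, so `cargo build` produces the server binary with no separate build steps. Below is a minimal sketch of such a `build.rs`, assuming llama.cpp is vendored as a git submodule at `llama.cpp/`, that CMake is on the PATH, and that the target is named `llama-server` as in current upstream; none of these specifics are confirmed by this README.

```rust
// build.rs — hypothetical sketch: invoke CMake from cargo so the
// llama.cpp server binary is built as a side effect of `cargo build`.
use std::env;
use std::path::PathBuf;
use std::process::Command;

fn main() {
    // Rebuild only when the vendored sources change (path assumed).
    println!("cargo:rerun-if-changed=llama.cpp");

    let out_dir = PathBuf::from(env::var("OUT_DIR").expect("OUT_DIR not set"));
    let build_dir = out_dir.join("llama-build");

    // Configure the vendored llama.cpp tree.
    let status = Command::new("cmake")
        .args(["-S", "llama.cpp", "-B"])
        .arg(&build_dir)
        .arg("-DCMAKE_BUILD_TYPE=Release")
        .status()
        .expect("failed to spawn cmake configure");
    assert!(status.success(), "cmake configure failed");

    // Build only the server target (target name assumed from upstream).
    let status = Command::new("cmake")
        .arg("--build")
        .arg(&build_dir)
        .args(["--target", "llama-server", "--config", "Release"])
        .status()
        .expect("failed to spawn cmake build");
    assert!(status.success(), "cmake build failed");

    // Expose the binary's location to the crate at compile time.
    println!(
        "cargo:rustc-env=LLAMA_SERVER_BIN={}",
        build_dir.join("bin").join("llama-server").display()
    );
}
```

The crate can then read `env!("LLAMA_SERVER_BIN")` at compile time to locate and spawn the daemon, which keeps the only external runtime dependency down to the compiled binary itself.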