llama-cpp-vulkan
Inference of Meta's LLaMA model (and others) in pure C/C++
- Name
- llama-cpp
- Main Program
- llama
- Programs
- convert_hf_to_gguf.py
- llama
- llama-batched-bench
- llama-bench
- llama-cli
- llama-completion
- llama-cvector-generator
- llama-export-lora
- llama-fit-params
- llama-gguf-split
- llama-imatrix
- llama-mtmd-cli
- llama-perplexity
- llama-quantize
- llama-server
- llama-tokenize
- llama-tts
(example invocations of these programs are sketched after this listing)
- Homepage
- Version
- 7898
- License
- Maintainers
- Platforms
- i686-cygwin
- x86_64-cygwin
- x86_64-darwin
- aarch64-darwin
- i686-freebsd
- x86_64-freebsd
- aarch64-freebsd
- x86_64-solaris
- aarch64-linux
- armv5tel-linux
- armv6l-linux
- armv7a-linux
- armv7l-linux
- i686-linux
- loongarch64-linux
- m68k-linux
- microblaze-linux
- microblazeel-linux
- mips-linux
- mips64-linux
- mips64el-linux
- mipsel-linux
- powerpc-linux
- powerpc64-linux
- powerpc64le-linux
- riscv32-linux
- riscv64-linux
- s390-linux
- s390x-linux
- x86_64-linux
- aarch64-netbsd
- armv6l-netbsd
- armv7a-netbsd
- armv7l-netbsd
- i686-netbsd
- m68k-netbsd
- mipsel-netbsd
- powerpc-netbsd
- riscv32-netbsd
- riscv64-netbsd
- x86_64-netbsd
- i686-openbsd
- x86_64-openbsd
- x86_64-redox
- Defined
- Source
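The package installs the command-line tools listed under Programs. Below is a minimal sketch of pulling it into a NixOS configuration; it assumes the package is exposed under the attribute `llama-cpp-vulkan` (matching this listing's title) and that a GGUF model file is supplied separately — neither detail is confirmed by the listing itself.

```nix
# Minimal sketch: add the Vulkan-enabled llama.cpp build to a NixOS system.
# Assumption: the nixpkgs attribute is `llama-cpp-vulkan`, as in this listing's title.
{ pkgs, ... }:
{
  environment.systemPackages = [
    pkgs.llama-cpp-vulkan  # provides llama-cli, llama-server, llama-quantize, ...
  ];
}
```

For a one-off shell on flake-enabled Nix, something like `nix shell nixpkgs#llama-cpp-vulkan` should suffice. From there, `llama-cli -m model.gguf -p "Hello"` runs a prompt against a local GGUF model and `llama-server -m model.gguf` exposes an HTTP inference endpoint; exact flags and defaults depend on the llama.cpp version this package builds.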