LlamaLib v2.0.2
Cross-platform library for local LLMs
LLM service implementation with server capabilities.
Classes

  class LLMService
      Runtime loader for LLM libraries.
Macros

  #define LLAMALIB_INF(...)   LOG_TMPL(GGML_LOG_LEVEL_INFO, -1, __VA_ARGS__)
      Info-level logging macro for LlamaLib.
Typedefs

  using server_http_res_ptr = std::unique_ptr<server_http_res>
  using handler_t = std::function<server_http_res_ptr(const server_http_req &req)>
Functions

  void LLMService_Registry(LLMProviderRegistry *existing_instance)
      Set the registry for LLMService (C API).
  LLMService *LLMService_Construct(const char *model_path, int num_slots=1, int num_threads=-1, int num_GPU_layers=0, bool flash_attention=false, int context_size=4096, int batch_size=2048, bool embedding_only=false, int lora_count=0, const char **lora_paths=nullptr)
      Construct an LLMService instance (C API).
  LLMService *LLMService_From_Command(const char *params_string)
      Create an LLMService from a command string (C API).
  const char *LLMService_Command(LLMService *llm_service)
      Return the construct command (C API).
  void LLMService_InjectErrorState(ErrorState *error_state)
LLM service implementation with server capabilities.
Provides a concrete implementation of LLMProvider with HTTP server functionality, parameter parsing, and integration with the llama.cpp backend.
Definition in file LLM_service.h.
#define LLAMALIB_INF(...)   LOG_TMPL(GGML_LOG_LEVEL_INFO, -1, __VA_ARGS__)

Info-level logging macro for LlamaLib.
Definition at line 16 of file LLM_service.h.
using handler_t = std::function<server_http_res_ptr(const server_http_req &req)>
Definition at line 26 of file LLM_service.h.
using server_http_res_ptr = std::unique_ptr<server_http_res>
Definition at line 25 of file LLM_service.h.