AI Voice Control for Home Assistant (Fully Local)

Long, but interesting read :smile:.

First of all, for me the inference times of wyoming-faster-whisper on my GTX 1080 are within 1 second. What speeds did you get before with faster-whisper?

For Piper, the Dutch models also aren't great; I am using the Belgian (Dutch) voices as an alternative, which are quite good! You can get in touch with @synesthesiam about improving the Polish Piper models. I am planning on improving the Dutch ones myself.

For satellites I am still testing a lot. Examples are the M5Stack Atom Echo and an ESP32 with an INMP441 microphone. These devices are only for testing and are definitely not good enough for the actual use case, since their performance degrades a lot over distance. Personally, I find the Onju Voice way too expensive (since you also have to order 5 at a time). The most important thing for me is a quality microphone; is the Onju Voice microphone good enough for longer distances and noise?

Now for the LLM part: the changes made in Extended OpenAI were not mine; credit goes to the person I mentioned in the story. I currently don't have a PC nearby, so I can help again when I do. Is the problem fixed already?


Hello, thanks for your guide.
I have the same problem as rvsh2: when I ask it to turn a light on/off, for example, it works (the light really turns on/off), but Assist returns an error:

Sorry, I had a problem talking to OpenAI: Error code: 500 - {'error': {'message': '[{'type': 'literal_error', 'loc': ('body', 'messages', 4, 'typed-dict', 'role'), 'msg': "Input should be 'system'", 'input': 'assistant', 'ctx': {'expected': "'system'"}}, {'type': 'missing', 'loc': ('body', 'messages', 4, 'typed-dict', 'content'), 'msg': 'Field required', 'input': {'role': 'assistant', 'tool_calls': [{'id': 'call_kQSk0g3TEXZnOFVgdOAwS8Kj', 'function': {'arguments': '{"list": [{"domain": "light", "service": "turn_off", "service_data": {"entity_id": "light.lampadario"}}]}', 'name': 'execute_services'}, 'type': 'function'}]}}, {'type': 'literal_error', 'loc': ('body', 'messages', 4, 'typed-dict', 'role'), 'msg': "Input should be 'user'", 'input': 'assistant', 'ctx': {'expected': "'user'"}}, {'type': 'missing', 'loc': ('body', 'messages', 4, 'typed-dict', 'content'), 'msg': 'Field required', 'input': {'role': 'assistant', 'tool_calls': [{'id': 'call_kQSk0g3TEXZnOFVgdOAwS8Kj', 'function': {'arguments': '{"list": [{"domain": "light", "service": "turn_off", "service_data": {"entity_id": "light.lampadario"}}]}', 'name': 'execute_services'}, 'type': 'function'}]}}, {'type': 'missing', 'loc': ('body', 'messages', 4, 'typed-dict', 'content'), 'msg': 'Field required', 'input': {'role': 'assistant', 'tool_calls': [{'id': 'call_kQSk0g3TEXZnOFVgdOAwS8Kj', 'function': {'arguments': '{"list": [{"domain": "light", "service": "turn_off", "service_data": {"entity_id": "light.lampadario"}}]}', 'name': 'execute_services'}, 'type': 'function'}]}}, {'type': 'literal_error', 'loc': ('body', 'messages', 4, 'typed-dict', 'role'), 'msg': "Input should be 'tool'", 'input': 'assistant', 'ctx': {'expected': "'tool'"}}, {'type': 'missing', 'loc': ('body', 'messages', 4, 'typed-dict', 'content'), 'msg': 'Field required', 'input': {'role': 'assistant', 'tool_calls': [{'id': 'call_kQSk0g3TEXZnOFVgdOAwS8Kj', 'function': {'arguments': '{"list": [{"domain": "light", "service": "turn_off", "service_data": {"entity_id": "light.lampadario"}}]}', 'name': 'execute_services'}, 'type': 'function'}]}}, {'type': 'missing', 'loc': ('body', 'messages', 4, 'typed-dict', 'tool_call_id'), 'msg': 'Field required', 'input': {'role': 'assistant', 'tool_calls': [{'id': 'call_kQSk0g3TEXZnOFVgdOAwS8Kj', 'function': {'arguments': '{"list": [{"domain": "light", "service": "turn_off", "service_data": {"entity_id": "light.lampadario"}}]}', 'name': 'execute_services'}, 'type': 'function'}]}}, {'type': 'literal_error', 'loc': ('body', 'messages', 4, 'typed-dict', 'role'), 'msg': "Input should be 'function'", 'input': 'assistant', 'ctx': {'expected': "'function'"}}, {'type': 'missing', 'loc': ('body', 'messages', 4, 'typed-dict', 'content'), 'msg': 'Field required', 'input': {'role': 'assistant', 'tool_calls': [{'id': 'call_kQSk0g3TEXZnOFVgdOAwS8Kj', 'function': {'arguments': '{"list": [{"domain": "light", "service": "turn_off", "service_data": {"entity_id": "light.lampadario"}}]}', 'name': 'execute_services'}, 'type': 'function'}]}}, {'type': 'missing', 'loc': ('body', 'messages', 4, 'typed-dict', 'name'), 'msg': 'Field required', 'input': {'role': 'assistant', 'tool_calls': [{'id': 'call_kQSk0g3TEXZnOFVgdOAwS8Kj', 'function': {'arguments': '{"list": [{"domain": "light", "service": "turn_off", "service_data": {"entity_id": "light.lampadario"}}]}', 'name': 'execute_services'}, 'type': 'function'}]}}]', 'type': 'internal_server_error', 'param': None, 'code': None}}

Normal questions like "who is Batman?", on the other hand, don't have this issue; Assist returns the correct answer.

I hope you can help me with this problem and sorry for my English.

Did you install llama-cpp-python from my build files and Extended OpenAI from my fork?

Yes for the first; for Extended OpenAI I replaced the __init__.py with yours.

Could you please share your Extended OpenAI prompt and tell me what you use for the other options, like model, max tokens, top P, etc.?
Thanks

Make sure that you have restarted Home Assistant and that the integration is correctly installed.

Settings and prompt can be left at their defaults. If you define your own YAML functions, make sure you enable "Use Tools". The context threshold can be set to 8000, since this is the limit of the Functionary LLM.

One last thing just to be sure: when you add the OpenAI extension, what do you use? A random API key and a URL like http://ip-of-docker:8000/v1, right?

The API key doesn't matter; it is just a placeholder for the OpenAI API and can be any value. The URL is also fine: since you can talk to the LLM, that should not be the problem.
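
If you want to rule out the connection itself, here is a minimal sketch (my own, not from the guide; the base_url, API key and model name are placeholders you would adapt) that talks to the llama-cpp-python server directly through the OpenAI Python client:

from openai import OpenAI

# Point the client at the local llama-cpp-python server instead of api.openai.com.
client = OpenAI(
    base_url="http://ip-of-docker:8000/v1",  # same URL as configured in Extended OpenAI
    api_key="sk-anything",                   # placeholder; the local server does not check it
)

# A plain chat request; if this returns text, the URL/key side is fine.
response = client.chat.completions.create(
    model="functionary-small-v2.4",          # whatever model name the server exposes
    messages=[{"role": "user", "content": "Say hello"}],
)
print(response.choices[0].message.content)

If that prints a reply, the integration settings are not the issue.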

Hi, I started from scratch and installed your Docker image and your fork of Extended OpenAI again, but I have the same problem. I also use your function block with the standard prompt. Do you have any suggestions? Thanks

Did you also test it with the standard Extended OpenAI integration?
Can you also send your llama-cpp-python logs here, to see how the model is loaded?

Hi, yes, last week I also tested with the standard Extended OpenAI, with the same results. This is the log: I initially asked to turn off a light called "lampadario". It turned off the light but responded with an error. Then I asked two common questions, like "who is Batman", and it answered without errors.

==========
== CUDA ==
==========

CUDA Version 12.2.2

Container image Copyright (c) 2016-2023, NVIDIA CORPORATION & AFFILIATES. All rights reserved.

This container image and its contents are governed by the NVIDIA Deep Learning Container License.
By pulling and using the container, you accept the terms and conditions of this license:
https://developer.nvidia.com/ngc/nvidia-deep-learning-container-license

A copy of this license is made available in this container at /NGC-DL-CONTAINER-LICENSE for your convenience.

None of PyTorch, TensorFlow >= 2.0, or Flax have been found. Models won't be available and only tokenizers, configuration and file/data utilities can be used.
/usr/local/lib/python3.10/dist-packages/huggingface_hub/file_download.py:1132: FutureWarning: `resume_download` is deprecated and will be removed in version 1.0.0. Downloads always resume when possible. If you want to force a new download, use `force_download=True`.
  warnings.warn(
You set `add_prefix_space`. The tokenizer needs to be converted from the slow tokenizers
Special tokens have been added in the vocabulary, make sure the associated word embeddings are fine-tuned or trained.
llama_model_loader: loaded meta data with 25 key-value pairs and 291 tensors from /var/model/functionary-small-v2.4.Q4_0.gguf (version GGUF V3 (latest))
llama_model_loader: Dumping metadata keys/values. Note: KV overrides do not apply in this output.
llama_model_loader: - kv   0:                       general.architecture str              = llama
llama_model_loader: - kv   1:                               general.name str              = .
llama_model_loader: - kv   2:                           llama.vocab_size u32              = 32004
llama_model_loader: - kv   3:                       llama.context_length u32              = 32768
llama_model_loader: - kv   4:                     llama.embedding_length u32              = 4096
llama_model_loader: - kv   5:                          llama.block_count u32              = 32
llama_model_loader: - kv   6:                  llama.feed_forward_length u32              = 14336
llama_model_loader: - kv   7:                 llama.rope.dimension_count u32              = 128
llama_model_loader: - kv   8:                 llama.attention.head_count u32              = 32
llama_model_loader: - kv   9:              llama.attention.head_count_kv u32              = 8
llama_model_loader: - kv  10:     llama.attention.layer_norm_rms_epsilon f32              = 0.000010
llama_model_loader: - kv  11:                       llama.rope.freq_base f32              = 1000000.000000
llama_model_loader: - kv  12:                          general.file_type u32              = 2
llama_model_loader: - kv  13:                       tokenizer.ggml.model str              = llama
llama_model_loader: - kv  14:                      tokenizer.ggml.tokens arr[str,32004]   = ["<unk>", "<s>", "</s>", "<0x00>", "<...
llama_model_loader: - kv  15:                      tokenizer.ggml.scores arr[f32,32004]   = [0.000000, 0.000000, 0.000000, 0.0000...
llama_model_loader: - kv  16:                  tokenizer.ggml.token_type arr[i32,32004]   = [2, 3, 3, 6, 6, 6, 6, 6, 6, 6, 6, 6, ...
llama_model_loader: - kv  17:                tokenizer.ggml.bos_token_id u32              = 1
llama_model_loader: - kv  18:                tokenizer.ggml.eos_token_id u32              = 2
llama_model_loader: - kv  19:            tokenizer.ggml.unknown_token_id u32              = 0
llama_model_loader: - kv  20:            tokenizer.ggml.padding_token_id u32              = 2
llama_model_loader: - kv  21:               tokenizer.ggml.add_bos_token bool             = true
llama_model_loader: - kv  22:               tokenizer.ggml.add_eos_token bool             = false
llama_model_loader: - kv  23:                    tokenizer.chat_template str              = {% for message in messages %}\n{% if m...
llama_model_loader: - kv  24:               general.quantization_version u32              = 2
llama_model_loader: - type  f32:   65 tensors
llama_model_loader: - type q4_0:  225 tensors
llama_model_loader: - type q6_K:    1 tensors
llm_load_vocab: special tokens definition check successful ( 263/32004 ).
llm_load_print_meta: format           = GGUF V3 (latest)
llm_load_print_meta: arch             = llama
llm_load_print_meta: vocab type       = SPM
llm_load_print_meta: n_vocab          = 32004
llm_load_print_meta: n_merges         = 0
llm_load_print_meta: n_ctx_train      = 32768
llm_load_print_meta: n_embd           = 4096
llm_load_print_meta: n_head           = 32
llm_load_print_meta: n_head_kv        = 8
llm_load_print_meta: n_layer          = 32
llm_load_print_meta: n_rot            = 128
llm_load_print_meta: n_embd_head_k    = 128
llm_load_print_meta: n_embd_head_v    = 128
llm_load_print_meta: n_gqa            = 4
llm_load_print_meta: n_embd_k_gqa     = 1024
llm_load_print_meta: n_embd_v_gqa     = 1024
llm_load_print_meta: f_norm_eps       = 0.0e+00
llm_load_print_meta: f_norm_rms_eps   = 1.0e-05
llm_load_print_meta: f_clamp_kqv      = 0.0e+00
llm_load_print_meta: f_max_alibi_bias = 0.0e+00
llm_load_print_meta: f_logit_scale    = 0.0e+00
llm_load_print_meta: n_ff             = 14336
llm_load_print_meta: n_expert         = 0
llm_load_print_meta: n_expert_used    = 0
llm_load_print_meta: causal attn      = 1
llm_load_print_meta: pooling type     = 0
llm_load_print_meta: rope type        = 0
llm_load_print_meta: rope scaling     = linear
llm_load_print_meta: freq_base_train  = 1000000.0
llm_load_print_meta: freq_scale_train = 1
llm_load_print_meta: n_yarn_orig_ctx  = 32768
llm_load_print_meta: rope_finetuned   = unknown
llm_load_print_meta: ssm_d_conv       = 0
llm_load_print_meta: ssm_d_inner      = 0
llm_load_print_meta: ssm_d_state      = 0
llm_load_print_meta: ssm_dt_rank      = 0
llm_load_print_meta: model type       = 8B
llm_load_print_meta: model ftype      = Q4_0
llm_load_print_meta: model params     = 7.24 B
llm_load_print_meta: model size       = 3.83 GiB (4.54 BPW) 
llm_load_print_meta: general.name     = .
llm_load_print_meta: BOS token        = 1 '<s>'
llm_load_print_meta: EOS token        = 2 '</s>'
llm_load_print_meta: UNK token        = 0 '<unk>'
llm_load_print_meta: PAD token        = 2 '</s>'
llm_load_print_meta: LF token         = 13 '<0x0A>'
ggml_cuda_init: GGML_CUDA_FORCE_MMQ:   no
ggml_cuda_init: CUDA_USE_TENSOR_CORES: yes
ggml_cuda_init: found 1 CUDA devices:
  Device 0: NVIDIA GeForce RTX 3080, compute capability 8.6, VMM: yes
llm_load_tensors: ggml ctx size =    0.30 MiB
llm_load_tensors: offloading 32 repeating layers to GPU
llm_load_tensors: offloading non-repeating layers to GPU
llm_load_tensors: offloaded 33/33 layers to GPU
llm_load_tensors:        CPU buffer size =    70.32 MiB
llm_load_tensors:      CUDA0 buffer size =  3847.57 MiB
warning: failed to mlock 74473472-byte buffer (after previously locking 0 bytes): Cannot allocate memory
Try increasing RLIMIT_MEMLOCK ('ulimit -l' as root).
..................................................................................................
llama_new_context_with_model: n_ctx      = 4096
llama_new_context_with_model: n_batch    = 192
llama_new_context_with_model: n_ubatch   = 192
llama_new_context_with_model: flash_attn = 0
llama_new_context_with_model: freq_base  = 1000000.0
llama_new_context_with_model: freq_scale = 1
llama_kv_cache_init:      CUDA0 KV buffer size =   512.00 MiB
llama_new_context_with_model: KV self size  =  512.00 MiB, K (f16):  256.00 MiB, V (f16):  256.00 MiB
llama_new_context_with_model:  CUDA_Host  output buffer size =     0.14 MiB
llama_new_context_with_model:      CUDA0 compute buffer size =   111.00 MiB
llama_new_context_with_model:  CUDA_Host compute buffer size =     6.00 MiB
llama_new_context_with_model: graph nodes  = 1030
llama_new_context_with_model: graph splits = 2
AVX = 1 | AVX_VNNI = 0 | AVX2 = 1 | AVX512 = 1 | AVX512_VBMI = 1 | AVX512_VNNI = 1 | FMA = 1 | NEON = 0 | ARM_FMA = 0 | F16C = 1 | FP16_VA = 0 | WASM_SIMD = 0 | BLAS = 1 | SSE3 = 1 | SSSE3 = 1 | VSX = 0 | MATMUL_INT8 = 0 | LLAMAFILE = 1 | 
Model metadata: {'tokenizer.chat_template': '{% for message in messages %}\n{% if message[\'role\'] == \'user\' or message[\'role\'] == \'system\' %}\n{{ \'<|from|>\' + message[\'role\'] + \'\n<|recipient|>all\n<|content|>\' + message[\'content\'] + \'\n\' }}{% elif message[\'role\'] == \'tool\' %}\n{{ \'<|from|>\' + message[\'name\'] + \'\n<|recipient|>all\n<|content|>\' + message[\'content\'] + \'\n\' }}{% else %}\n{% set contain_content=\'no\'%}\n{% if message[\'content\'] is not none %}\n{{ \'<|from|>assistant\n<|recipient|>all\n<|content|>\' + message[\'content\'] }}{% set contain_content=\'yes\'%}\n{% endif %}\n{% if \'tool_calls\' in message and message[\'tool_calls\'] is not none %}\n{% for tool_call in message[\'tool_calls\'] %}\n{% set prompt=\'<|from|>assistant\n<|recipient|>\' + tool_call[\'function\'][\'name\'] + \'\n<|content|>\' + tool_call[\'function\'][\'arguments\'] %}\n{% if loop.index == 1 and contain_content == "no" %}\n{{ prompt }}{% else %}\n{{ \'\n\' + prompt}}{% endif %}\n{% endfor %}\n{% endif %}\n{{ \'<|stop|>\n\' }}{% endif %}\n{% endfor %}\n{% if add_generation_prompt %}{{ \'<|from|>assistant\n<|recipient|>\' }}{% endif %}', 'tokenizer.ggml.add_eos_token': 'false', 'tokenizer.ggml.padding_token_id': '2', 'tokenizer.ggml.unknown_token_id': '0', 'tokenizer.ggml.eos_token_id': '2', 'general.quantization_version': '2', 'tokenizer.ggml.model': 'llama', 'general.architecture': 'llama', 'llama.rope.freq_base': '1000000.000000', 'llama.context_length': '32768', 'general.name': '.', 'llama.vocab_size': '32004', 'general.file_type': '2', 'tokenizer.ggml.add_bos_token': 'true', 'llama.embedding_length': '4096', 'llama.feed_forward_length': '14336', 'llama.attention.layer_norm_rms_epsilon': '0.000010', 'llama.rope.dimension_count': '128', 'tokenizer.ggml.bos_token_id': '1', 'llama.attention.head_count': '32', 'llama.block_count': '32', 'llama.attention.head_count_kv': '8'}
INFO:     Started server process [27]
INFO:     Waiting for application startup.
INFO:     Application startup complete.
INFO:     Uvicorn running on http://0.0.0.0:8000 (Press CTRL+C to quit)

llama_print_timings:        load time =     163.55 ms
llama_print_timings:      sample time =       1.33 ms /     9 runs   (    0.15 ms per token,  6746.63 tokens per second)
llama_print_timings: prompt eval time =     598.92 ms /  1069 tokens (    0.56 ms per token,  1784.87 tokens per second)
llama_print_timings:        eval time =     107.30 ms /     8 runs   (   13.41 ms per token,    74.56 tokens per second)
llama_print_timings:       total time =    2119.51 ms /  1077 tokens
from_string grammar:
brightness-kv ::= ["] [b] [r] [i] [g] [h] [t] [n] [e] [s] [s] ["] space [:] space string 
space ::= space_7 
string ::= ["] string_8 ["] space 
char ::= [^"\] | [\] char_4 
char_4 ::= ["\/bfnrt] | [u] [0-9a-fA-F] [0-9a-fA-F] [0-9a-fA-F] [0-9a-fA-F] 
entity-id-kv ::= ["] [e] [n] [t] [i] [t] [y] [_] [i] [d] ["] space [:] space string 
root ::= [{] space brightness-kv [,] space entity-id-kv [}] space 
space_7 ::= [ ] | 
string_8 ::= char string_8 | 

brightness-kv ::= "\"brightness\"" space ":" space string
char ::= [^"\\] | "\\" (["\\/bfnrt] | "u" [0-9a-fA-F] [0-9a-fA-F] [0-9a-fA-F] [0-9a-fA-F])
entity-id-kv ::= "\"entity_id\"" space ":" space string
root ::= "{" space brightness-kv "," space entity-id-kv "}" space
space ::= " "?
string ::= "\"" char* "\"" space
Llama.generate: prefix-match hit

llama_print_timings:        load time =     163.55 ms
llama_print_timings:      sample time =      93.98 ms /    23 runs   (    4.09 ms per token,   244.73 tokens per second)
llama_print_timings: prompt eval time =       0.00 ms /     1 tokens (    0.00 ms per token,      inf tokens per second)
llama_print_timings:        eval time =     244.12 ms /    23 runs   (   10.61 ms per token,    94.22 tokens per second)
llama_print_timings:       total time =     573.83 ms /    24 tokens
Llama.generate: prefix-match hit

llama_print_timings:        load time =     163.55 ms
llama_print_timings:      sample time =       0.14 ms /     1 runs   (    0.14 ms per token,  7042.25 tokens per second)
llama_print_timings: prompt eval time =      49.87 ms /    21 tokens (    2.37 ms per token,   421.09 tokens per second)
llama_print_timings:        eval time =       0.00 ms /     1 runs   (    0.00 ms per token,      inf tokens per second)
llama_print_timings:       total time =      81.86 ms /    22 tokens
INFO:     192.168.22.24:50818 - "POST /v1/chat/completions HTTP/1.1" 200 OK
Exception: [{'type': 'literal_error', 'loc': ('body', 'messages', 2, 'typed-dict', 'role'), 'msg': "Input should be 'system'", 'input': 'assistant', 'ctx': {'expected': "'system'"}}, {'type': 'literal_error', 'loc': ('body', 'messages', 2, 'typed-dict', 'role'), 'msg': "Input should be 'user'", 'input': 'assistant', 'ctx': {'expected': "'user'"}}, {'type': 'dict_type', 'loc': ('body', 'messages', 2, 'typed-dict', 'function_call'), 'msg': 'Input should be a valid dictionary', 'input': None}, {'type': 'literal_error', 'loc': ('body', 'messages', 2, 'typed-dict', 'role'), 'msg': "Input should be 'tool'", 'input': 'assistant', 'ctx': {'expected': "'tool'"}}, {'type': 'missing', 'loc': ('body', 'messages', 2, 'typed-dict', 'tool_call_id'), 'msg': 'Field required', 'input': {'content': None, 'role': 'assistant', 'function_call': None, 'tool_calls': [{'id': 'call_UO4QZqOKuDPa6tT3FKGEg2cf', 'function': {'arguments': '{"brightness": "0", "entity_id": "light.lampadario"}', 'name': 'set_light_brightness'}, 'type': 'function'}]}}, {'type': 'literal_error', 'loc': ('body', 'messages', 2, 'typed-dict', 'role'), 'msg': "Input should be 'function'", 'input': 'assistant', 'ctx': {'expected': "'function'"}}, {'type': 'missing', 'loc': ('body', 'messages', 2, 'typed-dict', 'name'), 'msg': 'Field required', 'input': {'content': None, 'role': 'assistant', 'function_call': None, 'tool_calls': [{'id': 'call_UO4QZqOKuDPa6tT3FKGEg2cf', 'function': {'arguments': '{"brightness": "0", "entity_id": "light.lampadario"}', 'name': 'set_light_brightness'}, 'type': 'function'}]}}]
Traceback (most recent call last):
  File "/usr/local/lib/python3.10/dist-packages/llama_cpp/server/errors.py", line 171, in custom_route_handler
    response = await original_route_handler(request)
  File "/usr/local/lib/python3.10/dist-packages/fastapi/routing.py", line 315, in app
    raise validation_error
fastapi.exceptions.RequestValidationError: [{'type': 'literal_error', 'loc': ('body', 'messages', 2, 'typed-dict', 'role'), 'msg': "Input should be 'system'", 'input': 'assistant', 'ctx': {'expected': "'system'"}}, {'type': 'literal_error', 'loc': ('body', 'messages', 2, 'typed-dict', 'role'), 'msg': "Input should be 'user'", 'input': 'assistant', 'ctx': {'expected': "'user'"}}, {'type': 'dict_type', 'loc': ('body', 'messages', 2, 'typed-dict', 'function_call'), 'msg': 'Input should be a valid dictionary', 'input': None}, {'type': 'literal_error', 'loc': ('body', 'messages', 2, 'typed-dict', 'role'), 'msg': "Input should be 'tool'", 'input': 'assistant', 'ctx': {'expected': "'tool'"}}, {'type': 'missing', 'loc': ('body', 'messages', 2, 'typed-dict', 'tool_call_id'), 'msg': 'Field required', 'input': {'content': None, 'role': 'assistant', 'function_call': None, 'tool_calls': [{'id': 'call_UO4QZqOKuDPa6tT3FKGEg2cf', 'function': {'arguments': '{"brightness": "0", "entity_id": "light.lampadario"}', 'name': 'set_light_brightness'}, 'type': 'function'}]}}, {'type': 'literal_error', 'loc': ('body', 'messages', 2, 'typed-dict', 'role'), 'msg': "Input should be 'function'", 'input': 'assistant', 'ctx': {'expected': "'function'"}}, {'type': 'missing', 'loc': ('body', 'messages', 2, 'typed-dict', 'name'), 'msg': 'Field required', 'input': {'content': None, 'role': 'assistant', 'function_call': None, 'tool_calls': [{'id': 'call_UO4QZqOKuDPa6tT3FKGEg2cf', 'function': {'arguments': '{"brightness": "0", "entity_id": "light.lampadario"}', 'name': 'set_light_brightness'}, 'type': 'function'}]}}]
INFO:     192.168.22.24:50818 - "POST /v1/chat/completions HTTP/1.1" 500 Internal Server Error
Exception: [{'type': 'literal_error', 'loc': ('body', 'messages', 2, 'typed-dict', 'role'), 'msg': "Input should be 'system'", 'input': 'assistant', 'ctx': {'expected': "'system'"}}, {'type': 'literal_error', 'loc': ('body', 'messages', 2, 'typed-dict', 'role'), 'msg': "Input should be 'user'", 'input': 'assistant', 'ctx': {'expected': "'user'"}}, {'type': 'dict_type', 'loc': ('body', 'messages', 2, 'typed-dict', 'function_call'), 'msg': 'Input should be a valid dictionary', 'input': None}, {'type': 'literal_error', 'loc': ('body', 'messages', 2, 'typed-dict', 'role'), 'msg': "Input should be 'tool'", 'input': 'assistant', 'ctx': {'expected': "'tool'"}}, {'type': 'missing', 'loc': ('body', 'messages', 2, 'typed-dict', 'tool_call_id'), 'msg': 'Field required', 'input': {'content': None, 'role': 'assistant', 'function_call': None, 'tool_calls': [{'id': 'call_UO4QZqOKuDPa6tT3FKGEg2cf', 'function': {'arguments': '{"brightness": "0", "entity_id": "light.lampadario"}', 'name': 'set_light_brightness'}, 'type': 'function'}]}}, {'type': 'literal_error', 'loc': ('body', 'messages', 2, 'typed-dict', 'role'), 'msg': "Input should be 'function'", 'input': 'assistant', 'ctx': {'expected': "'function'"}}, {'type': 'missing', 'loc': ('body', 'messages', 2, 'typed-dict', 'name'), 'msg': 'Field required', 'input': {'content': None, 'role': 'assistant', 'function_call': None, 'tool_calls': [{'id': 'call_UO4QZqOKuDPa6tT3FKGEg2cf', 'function': {'arguments': '{"brightness": "0", "entity_id": "light.lampadario"}', 'name': 'set_light_brightness'}, 'type': 'function'}]}}]
Traceback (most recent call last):
  File "/usr/local/lib/python3.10/dist-packages/llama_cpp/server/errors.py", line 171, in custom_route_handler
    response = await original_route_handler(request)
  File "/usr/local/lib/python3.10/dist-packages/fastapi/routing.py", line 315, in app
    raise validation_error
fastapi.exceptions.RequestValidationError: [{'type': 'literal_error', 'loc': ('body', 'messages', 2, 'typed-dict', 'role'), 'msg': "Input should be 'system'", 'input': 'assistant', 'ctx': {'expected': "'system'"}}, {'type': 'literal_error', 'loc': ('body', 'messages', 2, 'typed-dict', 'role'), 'msg': "Input should be 'user'", 'input': 'assistant', 'ctx': {'expected': "'user'"}}, {'type': 'dict_type', 'loc': ('body', 'messages', 2, 'typed-dict', 'function_call'), 'msg': 'Input should be a valid dictionary', 'input': None}, {'type': 'literal_error', 'loc': ('body', 'messages', 2, 'typed-dict', 'role'), 'msg': "Input should be 'tool'", 'input': 'assistant', 'ctx': {'expected': "'tool'"}}, {'type': 'missing', 'loc': ('body', 'messages', 2, 'typed-dict', 'tool_call_id'), 'msg': 'Field required', 'input': {'content': None, 'role': 'assistant', 'function_call': None, 'tool_calls': [{'id': 'call_UO4QZqOKuDPa6tT3FKGEg2cf', 'function': {'arguments': '{"brightness": "0", "entity_id": "light.lampadario"}', 'name': 'set_light_brightness'}, 'type': 'function'}]}}, {'type': 'literal_error', 'loc': ('body', 'messages', 2, 'typed-dict', 'role'), 'msg': "Input should be 'function'", 'input': 'assistant', 'ctx': {'expected': "'function'"}}, {'type': 'missing', 'loc': ('body', 'messages', 2, 'typed-dict', 'name'), 'msg': 'Field required', 'input': {'content': None, 'role': 'assistant', 'function_call': None, 'tool_calls': [{'id': 'call_UO4QZqOKuDPa6tT3FKGEg2cf', 'function': {'arguments': '{"brightness": "0", "entity_id": "light.lampadario"}', 'name': 'set_light_brightness'}, 'type': 'function'}]}}]
INFO:     192.168.22.24:50818 - "POST /v1/chat/completions HTTP/1.1" 500 Internal Server Error
Exception: [{'type': 'literal_error', 'loc': ('body', 'messages', 2, 'typed-dict', 'role'), 'msg': "Input should be 'system'", 'input': 'assistant', 'ctx': {'expected': "'system'"}}, {'type': 'literal_error', 'loc': ('body', 'messages', 2, 'typed-dict', 'role'), 'msg': "Input should be 'user'", 'input': 'assistant', 'ctx': {'expected': "'user'"}}, {'type': 'dict_type', 'loc': ('body', 'messages', 2, 'typed-dict', 'function_call'), 'msg': 'Input should be a valid dictionary', 'input': None}, {'type': 'literal_error', 'loc': ('body', 'messages', 2, 'typed-dict', 'role'), 'msg': "Input should be 'tool'", 'input': 'assistant', 'ctx': {'expected': "'tool'"}}, {'type': 'missing', 'loc': ('body', 'messages', 2, 'typed-dict', 'tool_call_id'), 'msg': 'Field required', 'input': {'content': None, 'role': 'assistant', 'function_call': None, 'tool_calls': [{'id': 'call_UO4QZqOKuDPa6tT3FKGEg2cf', 'function': {'arguments': '{"brightness": "0", "entity_id": "light.lampadario"}', 'name': 'set_light_brightness'}, 'type': 'function'}]}}, {'type': 'literal_error', 'loc': ('body', 'messages', 2, 'typed-dict', 'role'), 'msg': "Input should be 'function'", 'input': 'assistant', 'ctx': {'expected': "'function'"}}, {'type': 'missing', 'loc': ('body', 'messages', 2, 'typed-dict', 'name'), 'msg': 'Field required', 'input': {'content': None, 'role': 'assistant', 'function_call': None, 'tool_calls': [{'id': 'call_UO4QZqOKuDPa6tT3FKGEg2cf', 'function': {'arguments': '{"brightness": "0", "entity_id": "light.lampadario"}', 'name': 'set_light_brightness'}, 'type': 'function'}]}}]
Traceback (most recent call last):
  File "/usr/local/lib/python3.10/dist-packages/llama_cpp/server/errors.py", line 171, in custom_route_handler
    response = await original_route_handler(request)
  File "/usr/local/lib/python3.10/dist-packages/fastapi/routing.py", line 315, in app
    raise validation_error
fastapi.exceptions.RequestValidationError: [{'type': 'literal_error', 'loc': ('body', 'messages', 2, 'typed-dict', 'role'), 'msg': "Input should be 'system'", 'input': 'assistant', 'ctx': {'expected': "'system'"}}, {'type': 'literal_error', 'loc': ('body', 'messages', 2, 'typed-dict', 'role'), 'msg': "Input should be 'user'", 'input': 'assistant', 'ctx': {'expected': "'user'"}}, {'type': 'dict_type', 'loc': ('body', 'messages', 2, 'typed-dict', 'function_call'), 'msg': 'Input should be a valid dictionary', 'input': None}, {'type': 'literal_error', 'loc': ('body', 'messages', 2, 'typed-dict', 'role'), 'msg': "Input should be 'tool'", 'input': 'assistant', 'ctx': {'expected': "'tool'"}}, {'type': 'missing', 'loc': ('body', 'messages', 2, 'typed-dict', 'tool_call_id'), 'msg': 'Field required', 'input': {'content': None, 'role': 'assistant', 'function_call': None, 'tool_calls': [{'id': 'call_UO4QZqOKuDPa6tT3FKGEg2cf', 'function': {'arguments': '{"brightness": "0", "entity_id": "light.lampadario"}', 'name': 'set_light_brightness'}, 'type': 'function'}]}}, {'type': 'literal_error', 'loc': ('body', 'messages', 2, 'typed-dict', 'role'), 'msg': "Input should be 'function'", 'input': 'assistant', 'ctx': {'expected': "'function'"}}, {'type': 'missing', 'loc': ('body', 'messages', 2, 'typed-dict', 'name'), 'msg': 'Field required', 'input': {'content': None, 'role': 'assistant', 'function_call': None, 'tool_calls': [{'id': 'call_UO4QZqOKuDPa6tT3FKGEg2cf', 'function': {'arguments': '{"brightness": "0", "entity_id": "light.lampadario"}', 'name': 'set_light_brightness'}, 'type': 'function'}]}}]
INFO:     192.168.22.24:50818 - "POST /v1/chat/completions HTTP/1.1" 500 Internal Server Error
Llama.generate: prefix-match hit

llama_print_timings:        load time =     163.55 ms
llama_print_timings:      sample time =       0.47 ms /     3 runs   (    0.16 ms per token,  6396.59 tokens per second)
llama_print_timings: prompt eval time =      86.51 ms /   129 tokens (    0.67 ms per token,  1491.12 tokens per second)
llama_print_timings:        eval time =      21.75 ms /     2 runs   (   10.87 ms per token,    91.97 tokens per second)
llama_print_timings:       total time =     289.46 ms /   131 tokens
Llama.generate: prefix-match hit

llama_print_timings:        load time =     163.55 ms
llama_print_timings:      sample time =       0.83 ms /     6 runs   (    0.14 ms per token,  7246.38 tokens per second)
llama_print_timings: prompt eval time =       0.00 ms /     1 tokens (    0.00 ms per token,      inf tokens per second)
llama_print_timings:        eval time =      65.27 ms /     6 runs   (   10.88 ms per token,    91.93 tokens per second)
llama_print_timings:       total time =     123.12 ms /     7 tokens
INFO:     192.168.22.24:36850 - "POST /v1/chat/completions HTTP/1.1" 200 OK
Llama.generate: prefix-match hit

llama_print_timings:        load time =     163.55 ms
llama_print_timings:      sample time =       0.43 ms /     3 runs   (    0.14 ms per token,  7025.76 tokens per second)
llama_print_timings: prompt eval time =      86.71 ms /   130 tokens (    0.67 ms per token,  1499.20 tokens per second)
llama_print_timings:        eval time =      22.07 ms /     2 runs   (   11.04 ms per token,    90.61 tokens per second)
llama_print_timings:       total time =     290.21 ms /   132 tokens
Llama.generate: prefix-match hit

llama_print_timings:        load time =     163.55 ms
llama_print_timings:      sample time =      13.05 ms /    85 runs   (    0.15 ms per token,  6511.91 tokens per second)
llama_print_timings: prompt eval time =       0.00 ms /     1 tokens (    0.00 ms per token,      inf tokens per second)
llama_print_timings:        eval time =     905.27 ms /    85 runs   (   10.65 ms per token,    93.90 tokens per second)
llama_print_timings:       total time =    1750.42 ms /    86 tokens
INFO:     192.168.22.24:36088 - "POST /v1/chat/completions HTTP/1.1" 200 OK

It seems that llama-cpp-python made some commits regarding Functionary, so my Docker build might not be needed anymore. Did you also try a standard llama-cpp-python installation? (You can modify my Dockerfile and remove line 27, where it copies the llama_types file.)

Hi, thanks for your suggestion. I removed the container, edited the Dockerfile, and built and composed again, but I get the same error. Here is the log:


==========
== CUDA ==
==========

CUDA Version 12.2.2

Container image Copyright (c) 2016-2023, NVIDIA CORPORATION & AFFILIATES. All rights reserved.

This container image and its contents are governed by the NVIDIA Deep Learning Container License.
By pulling and using the container, you accept the terms and conditions of this license:
https://developer.nvidia.com/ngc/nvidia-deep-learning-container-license

A copy of this license is made available in this container at /NGC-DL-CONTAINER-LICENSE for your convenience.

None of PyTorch, TensorFlow >= 2.0, or Flax have been found. Models won't be available and only tokenizers, configuration and file/data utilities can be used.
/usr/local/lib/python3.10/dist-packages/huggingface_hub/file_download.py:1132: FutureWarning: `resume_download` is deprecated and will be removed in version 1.0.0. Downloads always resume when possible. If you want to force a new download, use `force_download=True`.
  warnings.warn(

tokenizer_config.json:   0% 0.00/2.86k [00:00<?, ?B/s]
tokenizer_config.json: 100% 2.86k/2.86k [00:00<00:00, 6.54MB/s]

tokenizer.model:   0% 0.00/493k [00:00<?, ?B/s]
tokenizer.model: 100% 493k/493k [00:00<00:00, 4.38MB/s]
tokenizer.model: 100% 493k/493k [00:00<00:00, 4.32MB/s]

tokenizer.json:   0% 0.00/1.80M [00:00<?, ?B/s]
tokenizer.json: 100% 1.80M/1.80M [00:00<00:00, 2.30MB/s]
tokenizer.json: 100% 1.80M/1.80M [00:00<00:00, 2.28MB/s]

added_tokens.json:   0% 0.00/95.0 [00:00<?, ?B/s]
added_tokens.json: 100% 95.0/95.0 [00:00<00:00, 286kB/s]

special_tokens_map.json:   0% 0.00/660 [00:00<?, ?B/s]
special_tokens_map.json: 100% 660/660 [00:00<00:00, 7.13MB/s]
You set `add_prefix_space`. The tokenizer needs to be converted from the slow tokenizers
Special tokens have been added in the vocabulary, make sure the associated word embeddings are fine-tuned or trained.
llama_model_loader: loaded meta data with 25 key-value pairs and 291 tensors from /var/model/functionary-small-v2.4.Q4_0.gguf (version GGUF V3 (latest))
llama_model_loader: Dumping metadata keys/values. Note: KV overrides do not apply in this output.
llama_model_loader: - kv   0:                       general.architecture str              = llama
llama_model_loader: - kv   1:                               general.name str              = .
llama_model_loader: - kv   2:                           llama.vocab_size u32              = 32004
llama_model_loader: - kv   3:                       llama.context_length u32              = 32768
llama_model_loader: - kv   4:                     llama.embedding_length u32              = 4096
llama_model_loader: - kv   5:                          llama.block_count u32              = 32
llama_model_loader: - kv   6:                  llama.feed_forward_length u32              = 14336
llama_model_loader: - kv   7:                 llama.rope.dimension_count u32              = 128
llama_model_loader: - kv   8:                 llama.attention.head_count u32              = 32
llama_model_loader: - kv   9:              llama.attention.head_count_kv u32              = 8
llama_model_loader: - kv  10:     llama.attention.layer_norm_rms_epsilon f32              = 0.000010
llama_model_loader: - kv  11:                       llama.rope.freq_base f32              = 1000000.000000
llama_model_loader: - kv  12:                          general.file_type u32              = 2
llama_model_loader: - kv  13:                       tokenizer.ggml.model str              = llama
llama_model_loader: - kv  14:                      tokenizer.ggml.tokens arr[str,32004]   = ["<unk>", "<s>", "</s>", "<0x00>", "<...
llama_model_loader: - kv  15:                      tokenizer.ggml.scores arr[f32,32004]   = [0.000000, 0.000000, 0.000000, 0.0000...
llama_model_loader: - kv  16:                  tokenizer.ggml.token_type arr[i32,32004]   = [2, 3, 3, 6, 6, 6, 6, 6, 6, 6, 6, 6, ...
llama_model_loader: - kv  17:                tokenizer.ggml.bos_token_id u32              = 1
llama_model_loader: - kv  18:                tokenizer.ggml.eos_token_id u32              = 2
llama_model_loader: - kv  19:            tokenizer.ggml.unknown_token_id u32              = 0
llama_model_loader: - kv  20:            tokenizer.ggml.padding_token_id u32              = 2
llama_model_loader: - kv  21:               tokenizer.ggml.add_bos_token bool             = true
llama_model_loader: - kv  22:               tokenizer.ggml.add_eos_token bool             = false
llama_model_loader: - kv  23:                    tokenizer.chat_template str              = {% for message in messages %}\n{% if m...
llama_model_loader: - kv  24:               general.quantization_version u32              = 2
llama_model_loader: - type  f32:   65 tensors
llama_model_loader: - type q4_0:  225 tensors
llama_model_loader: - type q6_K:    1 tensors
llm_load_vocab: special tokens definition check successful ( 263/32004 ).
llm_load_print_meta: format           = GGUF V3 (latest)
llm_load_print_meta: arch             = llama
llm_load_print_meta: vocab type       = SPM
llm_load_print_meta: n_vocab          = 32004
llm_load_print_meta: n_merges         = 0
llm_load_print_meta: n_ctx_train      = 32768
llm_load_print_meta: n_embd           = 4096
llm_load_print_meta: n_head           = 32
llm_load_print_meta: n_head_kv        = 8
llm_load_print_meta: n_layer          = 32
llm_load_print_meta: n_rot            = 128
llm_load_print_meta: n_embd_head_k    = 128
llm_load_print_meta: n_embd_head_v    = 128
llm_load_print_meta: n_gqa            = 4
llm_load_print_meta: n_embd_k_gqa     = 1024
llm_load_print_meta: n_embd_v_gqa     = 1024
llm_load_print_meta: f_norm_eps       = 0.0e+00
llm_load_print_meta: f_norm_rms_eps   = 1.0e-05
llm_load_print_meta: f_clamp_kqv      = 0.0e+00
llm_load_print_meta: f_max_alibi_bias = 0.0e+00
llm_load_print_meta: f_logit_scale    = 0.0e+00
llm_load_print_meta: n_ff             = 14336
llm_load_print_meta: n_expert         = 0
llm_load_print_meta: n_expert_used    = 0
llm_load_print_meta: causal attn      = 1
llm_load_print_meta: pooling type     = 0
llm_load_print_meta: rope type        = 0
llm_load_print_meta: rope scaling     = linear
llm_load_print_meta: freq_base_train  = 1000000.0
llm_load_print_meta: freq_scale_train = 1
llm_load_print_meta: n_yarn_orig_ctx  = 32768
llm_load_print_meta: rope_finetuned   = unknown
llm_load_print_meta: ssm_d_conv       = 0
llm_load_print_meta: ssm_d_inner      = 0
llm_load_print_meta: ssm_d_state      = 0
llm_load_print_meta: ssm_dt_rank      = 0
llm_load_print_meta: model type       = 8B
llm_load_print_meta: model ftype      = Q4_0
llm_load_print_meta: model params     = 7.24 B
llm_load_print_meta: model size       = 3.83 GiB (4.54 BPW) 
llm_load_print_meta: general.name     = .
llm_load_print_meta: BOS token        = 1 '<s>'
llm_load_print_meta: EOS token        = 2 '</s>'
llm_load_print_meta: UNK token        = 0 '<unk>'
llm_load_print_meta: PAD token        = 2 '</s>'
llm_load_print_meta: LF token         = 13 '<0x0A>'
ggml_cuda_init: GGML_CUDA_FORCE_MMQ:   no
ggml_cuda_init: CUDA_USE_TENSOR_CORES: yes
ggml_cuda_init: found 1 CUDA devices:
  Device 0: NVIDIA GeForce RTX 3080, compute capability 8.6, VMM: yes
llm_load_tensors: ggml ctx size =    0.30 MiB
llm_load_tensors: offloading 32 repeating layers to GPU
llm_load_tensors: offloading non-repeating layers to GPU
llm_load_tensors: offloaded 33/33 layers to GPU
llm_load_tensors:        CPU buffer size =    70.32 MiB
llm_load_tensors:      CUDA0 buffer size =  3847.57 MiB
warning: failed to mlock 74473472-byte buffer (after previously locking 0 bytes): Cannot allocate memory
Try increasing RLIMIT_MEMLOCK ('ulimit -l' as root).
..................................................................................................
llama_new_context_with_model: n_ctx      = 4096
llama_new_context_with_model: n_batch    = 192
llama_new_context_with_model: n_ubatch   = 192
llama_new_context_with_model: flash_attn = 0
llama_new_context_with_model: freq_base  = 1000000.0
llama_new_context_with_model: freq_scale = 1
llama_kv_cache_init:      CUDA0 KV buffer size =   512.00 MiB
llama_new_context_with_model: KV self size  =  512.00 MiB, K (f16):  256.00 MiB, V (f16):  256.00 MiB
llama_new_context_with_model:  CUDA_Host  output buffer size =     0.14 MiB
llama_new_context_with_model:      CUDA0 compute buffer size =   111.00 MiB
llama_new_context_with_model:  CUDA_Host compute buffer size =     6.00 MiB
llama_new_context_with_model: graph nodes  = 1030
llama_new_context_with_model: graph splits = 2
AVX = 1 | AVX_VNNI = 0 | AVX2 = 1 | AVX512 = 1 | AVX512_VBMI = 1 | AVX512_VNNI = 1 | FMA = 1 | NEON = 0 | ARM_FMA = 0 | F16C = 1 | FP16_VA = 0 | WASM_SIMD = 0 | BLAS = 1 | SSE3 = 1 | SSSE3 = 1 | VSX = 0 | MATMUL_INT8 = 0 | LLAMAFILE = 1 | 
Model metadata: {'tokenizer.chat_template': '{% for message in messages %}\n{% if message[\'role\'] == \'user\' or message[\'role\'] == \'system\' %}\n{{ \'<|from|>\' + message[\'role\'] + \'\n<|recipient|>all\n<|content|>\' + message[\'content\'] + \'\n\' }}{% elif message[\'role\'] == \'tool\' %}\n{{ \'<|from|>\' + message[\'name\'] + \'\n<|recipient|>all\n<|content|>\' + message[\'content\'] + \'\n\' }}{% else %}\n{% set contain_content=\'no\'%}\n{% if message[\'content\'] is not none %}\n{{ \'<|from|>assistant\n<|recipient|>all\n<|content|>\' + message[\'content\'] }}{% set contain_content=\'yes\'%}\n{% endif %}\n{% if \'tool_calls\' in message and message[\'tool_calls\'] is not none %}\n{% for tool_call in message[\'tool_calls\'] %}\n{% set prompt=\'<|from|>assistant\n<|recipient|>\' + tool_call[\'function\'][\'name\'] + \'\n<|content|>\' + tool_call[\'function\'][\'arguments\'] %}\n{% if loop.index == 1 and contain_content == "no" %}\n{{ prompt }}{% else %}\n{{ \'\n\' + prompt}}{% endif %}\n{% endfor %}\n{% endif %}\n{{ \'<|stop|>\n\' }}{% endif %}\n{% endfor %}\n{% if add_generation_prompt %}{{ \'<|from|>assistant\n<|recipient|>\' }}{% endif %}', 'tokenizer.ggml.add_eos_token': 'false', 'tokenizer.ggml.padding_token_id': '2', 'tokenizer.ggml.unknown_token_id': '0', 'tokenizer.ggml.eos_token_id': '2', 'general.quantization_version': '2', 'tokenizer.ggml.model': 'llama', 'general.architecture': 'llama', 'llama.rope.freq_base': '1000000.000000', 'llama.context_length': '32768', 'general.name': '.', 'llama.vocab_size': '32004', 'general.file_type': '2', 'tokenizer.ggml.add_bos_token': 'true', 'llama.embedding_length': '4096', 'llama.feed_forward_length': '14336', 'llama.attention.layer_norm_rms_epsilon': '0.000010', 'llama.rope.dimension_count': '128', 'tokenizer.ggml.bos_token_id': '1', 'llama.attention.head_count': '32', 'llama.block_count': '32', 'llama.attention.head_count_kv': '8'}
INFO:     Started server process [27]
INFO:     Waiting for application startup.
INFO:     Application startup complete.
INFO:     Uvicorn running on http://0.0.0.0:8000 (Press CTRL+C to quit)

llama_print_timings:        load time =    1167.58 ms
llama_print_timings:      sample time =       0.71 ms /     3 runs   (    0.24 ms per token,  4207.57 tokens per second)
llama_print_timings: prompt eval time =    1633.05 ms /  1069 tokens (    1.53 ms per token,   654.60 tokens per second)
llama_print_timings:        eval time =    4772.44 ms /     2 runs   ( 2386.22 ms per token,     0.42 tokens per second)
llama_print_timings:       total time =    7802.39 ms /  1071 tokens
Llama.generate: prefix-match hit

llama_print_timings:        load time =    1167.58 ms
llama_print_timings:      sample time =      16.19 ms /   105 runs   (    0.15 ms per token,  6485.89 tokens per second)
llama_print_timings: prompt eval time =       0.00 ms /     1 tokens (    0.00 ms per token,      inf tokens per second)
llama_print_timings:        eval time =    1130.92 ms /   105 runs   (   10.77 ms per token,    92.84 tokens per second)
llama_print_timings:       total time =    2178.45 ms /   106 tokens
INFO:     192.168.22.24:49774 - "POST /v1/chat/completions HTTP/1.1" 200 OK
Llama.generate: prefix-match hit

llama_print_timings:        load time =    1167.58 ms
llama_print_timings:      sample time =       0.72 ms /     5 runs   (    0.14 ms per token,  6983.24 tokens per second)
llama_print_timings: prompt eval time =      87.12 ms /   130 tokens (    0.67 ms per token,  1492.14 tokens per second)
llama_print_timings:        eval time =      43.32 ms /     4 runs   (   10.83 ms per token,    92.34 tokens per second)
llama_print_timings:       total time =     329.79 ms /   134 tokens
from_string grammar:
char ::= [^"\] | [\] char_1 
char_1 ::= ["\/bfnrt] | [u] [0-9a-fA-F] [0-9a-fA-F] [0-9a-fA-F] [0-9a-fA-F] 
list ::= [[] space list_8 []] space 
space ::= space_19 
list_4 ::= list-item list_7 
list-item ::= [{] space list-item-domain-kv [,] space list-item-service-kv [,] space list-item-service-data-kv [}] space 
list_6 ::= [,] space list-item 
list_7 ::= list_6 list_7 | 
list_8 ::= list_4 | 
list-item-domain-kv ::= ["] [d] [o] [m] [a] [i] [n] ["] space [:] space string 
list-item-service-kv ::= ["] [s] [e] [r] [v] [i] [c] [e] ["] space [:] space string 
list-item-service-data-kv ::= ["] [s] [e] [r] [v] [i] [c] [e] [_] [d] [a] [t] [a] ["] space [:] space list-item-service-data 
string ::= ["] string_20 ["] space 
list-item-service-data ::= [{] space list-item-service-data-entity-id-kv [}] space 
list-item-service-data-entity-id-kv ::= ["] [e] [n] [t] [i] [t] [y] [_] [i] [d] ["] space [:] space string 
list-kv ::= ["] [l] [i] [s] [t] ["] space [:] space list 
root ::= [{] space root_18 [}] space 
root_17 ::= list-kv 
root_18 ::= root_17 | 
space_19 ::= [ ] | 
string_20 ::= char string_20 | 

char ::= [^"\\] | "\\" (["\\/bfnrt] | "u" [0-9a-fA-F] [0-9a-fA-F] [0-9a-fA-F] [0-9a-fA-F])
list ::= "[" space (list-item ("," space list-item)*)? "]" space
list-item ::= "{" space list-item-domain-kv "," space list-item-service-kv "," space list-item-service-data-kv "}" space
list-item-domain-kv ::= "\"domain\"" space ":" space string
list-item-service-data ::= "{" space list-item-service-data-entity-id-kv "}" space
list-item-service-data-entity-id-kv ::= "\"entity_id\"" space ":" space string
list-item-service-data-kv ::= "\"service_data\"" space ":" space list-item-service-data
list-item-service-kv ::= "\"service\"" space ":" space string
list-kv ::= "\"list\"" space ":" space list
root ::= "{" space  (list-kv )? "}" space
space ::= " "?
string ::= "\"" char* "\"" space
Llama.generate: prefix-match hit

llama_print_timings:        load time =    1167.58 ms
llama_print_timings:      sample time =     157.62 ms /    39 runs   (    4.04 ms per token,   247.43 tokens per second)
llama_print_timings: prompt eval time =       0.00 ms /     1 tokens (    0.00 ms per token,      inf tokens per second)
llama_print_timings:        eval time =     424.33 ms /    39 runs   (   10.88 ms per token,    91.91 tokens per second)
llama_print_timings:       total time =     987.77 ms /    40 tokens
Llama.generate: prefix-match hit

llama_print_timings:        load time =    1167.58 ms
llama_print_timings:      sample time =       0.14 ms /     1 runs   (    0.14 ms per token,  6896.55 tokens per second)
llama_print_timings: prompt eval time =      60.03 ms /    38 tokens (    1.58 ms per token,   633.00 tokens per second)
llama_print_timings:        eval time =       0.00 ms /     1 runs   (    0.00 ms per token,      inf tokens per second)
llama_print_timings:       total time =     110.12 ms /    39 tokens
INFO:     192.168.22.24:44866 - "POST /v1/chat/completions HTTP/1.1" 200 OK
Exception: [{'type': 'literal_error', 'loc': ('body', 'messages', 2, 'typed-dict', 'role'), 'msg': "Input should be 'system'", 'input': 'assistant', 'ctx': {'expected': "'system'"}}, {'type': 'literal_error', 'loc': ('body', 'messages', 2, 'typed-dict', 'role'), 'msg': "Input should be 'user'", 'input': 'assistant', 'ctx': {'expected': "'user'"}}, {'type': 'dict_type', 'loc': ('body', 'messages', 2, 'typed-dict', 'function_call'), 'msg': 'Input should be a valid dictionary', 'input': None}, {'type': 'literal_error', 'loc': ('body', 'messages', 2, 'typed-dict', 'role'), 'msg': "Input should be 'tool'", 'input': 'assistant', 'ctx': {'expected': "'tool'"}}, {'type': 'missing', 'loc': ('body', 'messages', 2, 'typed-dict', 'tool_call_id'), 'msg': 'Field required', 'input': {'content': None, 'role': 'assistant', 'function_call': None, 'tool_calls': [{'id': 'call_ES0hdHmriocpwwkwi7IePMTg', 'function': {'arguments': '{"list": [{"domain": "light", "service": "turn_off", "service_data": {"entity_id": "light.lampadario"}}]}', 'name': 'execute_services'}, 'type': 'function'}]}}, {'type': 'literal_error', 'loc': ('body', 'messages', 2, 'typed-dict', 'role'), 'msg': "Input should be 'function'", 'input': 'assistant', 'ctx': {'expected': "'function'"}}, {'type': 'missing', 'loc': ('body', 'messages', 2, 'typed-dict', 'name'), 'msg': 'Field required', 'input': {'content': None, 'role': 'assistant', 'function_call': None, 'tool_calls': [{'id': 'call_ES0hdHmriocpwwkwi7IePMTg', 'function': {'arguments': '{"list": [{"domain": "light", "service": "turn_off", "service_data": {"entity_id": "light.lampadario"}}]}', 'name': 'execute_services'}, 'type': 'function'}]}}]
Traceback (most recent call last):
  File "/usr/local/lib/python3.10/dist-packages/llama_cpp/server/errors.py", line 171, in custom_route_handler
    response = await original_route_handler(request)
  File "/usr/local/lib/python3.10/dist-packages/fastapi/routing.py", line 315, in app
    raise validation_error
fastapi.exceptions.RequestValidationError: [{'type': 'literal_error', 'loc': ('body', 'messages', 2, 'typed-dict', 'role'), 'msg': "Input should be 'system'", 'input': 'assistant', 'ctx': {'expected': "'system'"}}, {'type': 'literal_error', 'loc': ('body', 'messages', 2, 'typed-dict', 'role'), 'msg': "Input should be 'user'", 'input': 'assistant', 'ctx': {'expected': "'user'"}}, {'type': 'dict_type', 'loc': ('body', 'messages', 2, 'typed-dict', 'function_call'), 'msg': 'Input should be a valid dictionary', 'input': None}, {'type': 'literal_error', 'loc': ('body', 'messages', 2, 'typed-dict', 'role'), 'msg': "Input should be 'tool'", 'input': 'assistant', 'ctx': {'expected': "'tool'"}}, {'type': 'missing', 'loc': ('body', 'messages', 2, 'typed-dict', 'tool_call_id'), 'msg': 'Field required', 'input': {'content': None, 'role': 'assistant', 'function_call': None, 'tool_calls': [{'id': 'call_ES0hdHmriocpwwkwi7IePMTg', 'function': {'arguments': '{"list": [{"domain": "light", "service": "turn_off", "service_data": {"entity_id": "light.lampadario"}}]}', 'name': 'execute_services'}, 'type': 'function'}]}}, {'type': 'literal_error', 'loc': ('body', 'messages', 2, 'typed-dict', 'role'), 'msg': "Input should be 'function'", 'input': 'assistant', 'ctx': {'expected': "'function'"}}, {'type': 'missing', 'loc': ('body', 'messages', 2, 'typed-dict', 'name'), 'msg': 'Field required', 'input': {'content': None, 'role': 'assistant', 'function_call': None, 'tool_calls': [{'id': 'call_ES0hdHmriocpwwkwi7IePMTg', 'function': {'arguments': '{"list": [{"domain": "light", "service": "turn_off", "service_data": {"entity_id": "light.lampadario"}}]}', 'name': 'execute_services'}, 'type': 'function'}]}}]
INFO:     192.168.22.24:44866 - "POST /v1/chat/completions HTTP/1.1" 500 Internal Server Error
Exception: [{'type': 'literal_error', 'loc': ('body', 'messages', 2, 'typed-dict', 'role'), 'msg': "Input should be 'system'", 'input': 'assistant', 'ctx': {'expected': "'system'"}}, {'type': 'literal_error', 'loc': ('body', 'messages', 2, 'typed-dict', 'role'), 'msg': "Input should be 'user'", 'input': 'assistant', 'ctx': {'expected': "'user'"}}, {'type': 'dict_type', 'loc': ('body', 'messages', 2, 'typed-dict', 'function_call'), 'msg': 'Input should be a valid dictionary', 'input': None}, {'type': 'literal_error', 'loc': ('body', 'messages', 2, 'typed-dict', 'role'), 'msg': "Input should be 'tool'", 'input': 'assistant', 'ctx': {'expected': "'tool'"}}, {'type': 'missing', 'loc': ('body', 'messages', 2, 'typed-dict', 'tool_call_id'), 'msg': 'Field required', 'input': {'content': None, 'role': 'assistant', 'function_call': None, 'tool_calls': [{'id': 'call_ES0hdHmriocpwwkwi7IePMTg', 'function': {'arguments': '{"list": [{"domain": "light", "service": "turn_off", "service_data": {"entity_id": "light.lampadario"}}]}', 'name': 'execute_services'}, 'type': 'function'}]}}, {'type': 'literal_error', 'loc': ('body', 'messages', 2, 'typed-dict', 'role'), 'msg': "Input should be 'function'", 'input': 'assistant', 'ctx': {'expected': "'function'"}}, {'type': 'missing', 'loc': ('body', 'messages', 2, 'typed-dict', 'name'), 'msg': 'Field required', 'input': {'content': None, 'role': 'assistant', 'function_call': None, 'tool_calls': [{'id': 'call_ES0hdHmriocpwwkwi7IePMTg', 'function': {'arguments': '{"list": [{"domain": "light", "service": "turn_off", "service_data": {"entity_id": "light.lampadario"}}]}', 'name': 'execute_services'}, 'type': 'function'}]}}]
Traceback (most recent call last):
  File "/usr/local/lib/python3.10/dist-packages/llama_cpp/server/errors.py", line 171, in custom_route_handler
    response = await original_route_handler(request)
  File "/usr/local/lib/python3.10/dist-packages/fastapi/routing.py", line 315, in app
    raise validation_error
fastapi.exceptions.RequestValidationError: [{'type': 'literal_error', 'loc': ('body', 'messages', 2, 'typed-dict', 'role'), 'msg': "Input should be 'system'", 'input': 'assistant', 'ctx': {'expected': "'system'"}}, {'type': 'literal_error', 'loc': ('body', 'messages', 2, 'typed-dict', 'role'), 'msg': "Input should be 'user'", 'input': 'assistant', 'ctx': {'expected': "'user'"}}, {'type': 'dict_type', 'loc': ('body', 'messages', 2, 'typed-dict', 'function_call'), 'msg': 'Input should be a valid dictionary', 'input': None}, {'type': 'literal_error', 'loc': ('body', 'messages', 2, 'typed-dict', 'role'), 'msg': "Input should be 'tool'", 'input': 'assistant', 'ctx': {'expected': "'tool'"}}, {'type': 'missing', 'loc': ('body', 'messages', 2, 'typed-dict', 'tool_call_id'), 'msg': 'Field required', 'input': {'content': None, 'role': 'assistant', 'function_call': None, 'tool_calls': [{'id': 'call_ES0hdHmriocpwwkwi7IePMTg', 'function': {'arguments': '{"list": [{"domain": "light", "service": "turn_off", "service_data": {"entity_id": "light.lampadario"}}]}', 'name': 'execute_services'}, 'type': 'function'}]}}, {'type': 'literal_error', 'loc': ('body', 'messages', 2, 'typed-dict', 'role'), 'msg': "Input should be 'function'", 'input': 'assistant', 'ctx': {'expected': "'function'"}}, {'type': 'missing', 'loc': ('body', 'messages', 2, 'typed-dict', 'name'), 'msg': 'Field required', 'input': {'content': None, 'role': 'assistant', 'function_call': None, 'tool_calls': [{'id': 'call_ES0hdHmriocpwwkwi7IePMTg', 'function': {'arguments': '{"list": [{"domain": "light", "service": "turn_off", "service_data": {"entity_id": "light.lampadario"}}]}', 'name': 'execute_services'}, 'type': 'function'}]}}]
INFO:     192.168.22.24:44866 - "POST /v1/chat/completions HTTP/1.1" 500 Internal Server Error
Exception: [{'type': 'literal_error', 'loc': ('body', 'messages', 2, 'typed-dict', 'role'), 'msg': "Input should be 'system'", 'input': 'assistant', 'ctx': {'expected': "'system'"}}, {'type': 'literal_error', 'loc': ('body', 'messages', 2, 'typed-dict', 'role'), 'msg': "Input should be 'user'", 'input': 'assistant', 'ctx': {'expected': "'user'"}}, {'type': 'dict_type', 'loc': ('body', 'messages', 2, 'typed-dict', 'function_call'), 'msg': 'Input should be a valid dictionary', 'input': None}, {'type': 'literal_error', 'loc': ('body', 'messages', 2, 'typed-dict', 'role'), 'msg': "Input should be 'tool'", 'input': 'assistant', 'ctx': {'expected': "'tool'"}}, {'type': 'missing', 'loc': ('body', 'messages', 2, 'typed-dict', 'tool_call_id'), 'msg': 'Field required', 'input': {'content': None, 'role': 'assistant', 'function_call': None, 'tool_calls': [{'id': 'call_ES0hdHmriocpwwkwi7IePMTg', 'function': {'arguments': '{"list": [{"domain": "light", "service": "turn_off", "service_data": {"entity_id": "light.lampadario"}}]}', 'name': 'execute_services'}, 'type': 'function'}]}}, {'type': 'literal_error', 'loc': ('body', 'messages', 2, 'typed-dict', 'role'), 'msg': "Input should be 'function'", 'input': 'assistant', 'ctx': {'expected': "'function'"}}, {'type': 'missing', 'loc': ('body', 'messages', 2, 'typed-dict', 'name'), 'msg': 'Field required', 'input': {'content': None, 'role': 'assistant', 'function_call': None, 'tool_calls': [{'id': 'call_ES0hdHmriocpwwkwi7IePMTg', 'function': {'arguments': '{"list": [{"domain": "light", "service": "turn_off", "service_data": {"entity_id": "light.lampadario"}}]}', 'name': 'execute_services'}, 'type': 'function'}]}}]
Traceback (most recent call last):
  File "/usr/local/lib/python3.10/dist-packages/llama_cpp/server/errors.py", line 171, in custom_route_handler
    response = await original_route_handler(request)
  File "/usr/local/lib/python3.10/dist-packages/fastapi/routing.py", line 315, in app
    raise validation_error
fastapi.exceptions.RequestValidationError: [{'type': 'literal_error', 'loc': ('body', 'messages', 2, 'typed-dict', 'role'), 'msg': "Input should be 'system'", 'input': 'assistant', 'ctx': {'expected': "'system'"}}, {'type': 'literal_error', 'loc': ('body', 'messages', 2, 'typed-dict', 'role'), 'msg': "Input should be 'user'", 'input': 'assistant', 'ctx': {'expected': "'user'"}}, {'type': 'dict_type', 'loc': ('body', 'messages', 2, 'typed-dict', 'function_call'), 'msg': 'Input should be a valid dictionary', 'input': None}, {'type': 'literal_error', 'loc': ('body', 'messages', 2, 'typed-dict', 'role'), 'msg': "Input should be 'tool'", 'input': 'assistant', 'ctx': {'expected': "'tool'"}}, {'type': 'missing', 'loc': ('body', 'messages', 2, 'typed-dict', 'tool_call_id'), 'msg': 'Field required', 'input': {'content': None, 'role': 'assistant', 'function_call': None, 'tool_calls': [{'id': 'call_ES0hdHmriocpwwkwi7IePMTg', 'function': {'arguments': '{"list": [{"domain": "light", "service": "turn_off", "service_data": {"entity_id": "light.lampadario"}}]}', 'name': 'execute_services'}, 'type': 'function'}]}}, {'type': 'literal_error', 'loc': ('body', 'messages', 2, 'typed-dict', 'role'), 'msg': "Input should be 'function'", 'input': 'assistant', 'ctx': {'expected': "'function'"}}, {'type': 'missing', 'loc': ('body', 'messages', 2, 'typed-dict', 'name'), 'msg': 'Field required', 'input': {'content': None, 'role': 'assistant', 'function_call': None, 'tool_calls': [{'id': 'call_ES0hdHmriocpwwkwi7IePMTg', 'function': {'arguments': '{"list": [{"domain": "light", "service": "turn_off", "service_data": {"entity_id": "light.lampadario"}}]}', 'name': 'execute_services'}, 'type': 'function'}]}}]
INFO:     192.168.22.24:44866 - "POST /v1/chat/completions HTTP/1.1" 500 Internal Server Error

I was able to reproduce the issue when rebuilding the Docker image. The issue lies with llama-cpp-python, so installing an older version of llama-cpp-python should fix it. You can open an issue on their GitHub repository if you want this to be resolved.

For a quick fix, modify the Dockerfile to install an older version of llama-cpp-python (e.g. try v0.2.64). Replace line 24 with:
RUN CMAKE_ARGS="-DLLAMA_CUBLAS=on" pip install llama-cpp-python==0.2.64
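
If you are unsure how to apply that change, here is a minimal sketch, assuming the server is started with docker compose and the service is named llama-cpp-python (the service name is an assumption, adjust it to your setup):

# After editing line 24 of the Dockerfile, rebuild the image and restart the container:
docker compose build --no-cache
docker compose up -d
# Verify the pinned version inside the running container (service name is an assumption):
docker compose exec llama-cpp-python pip show llama-cpp-python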

Thanks a lot, I can confirm that with your “quick fix” it works now!

1 Like

I'm getting a 400 Bad Request from HA to llama-cpp-python. Would anyone be able to help out here? I have the same GTX 1080 card as you and was pumped to see your integration with MASS as well.

==========
== CUDA ==
==========

CUDA Version 12.1.1

Container image Copyright (c) 2016-2023, NVIDIA CORPORATION & AFFILIATES. All rights reserved.

This container image and its contents are governed by the NVIDIA Deep Learning Container License.
By pulling and using the container, you accept the terms and conditions of this license:
https://developer.nvidia.com/ngc/nvidia-deep-learning-container-license

A copy of this license is made available in this container at /NGC-DL-CONTAINER-LICENSE for your convenience.

None of PyTorch, TensorFlow >= 2.0, or Flax have been found. Models won't be available and only tokenizers, configuration and file/data utilities can be used.
tokenizer_config.json: 100% 2.86k/2.86k [00:00<00:00, 11.1MB/s]
tokenizer.model: 100% 493k/493k [00:00<00:00, 6.05MB/s]
tokenizer.json: 100% 1.80M/1.80M [00:00<00:00, 14.9MB/s]
added_tokens.json: 100% 95.0/95.0 [00:00<00:00, 407kB/s]
special_tokens_map.json: 100% 660/660 [00:00<00:00, 2.87MB/s]
Special tokens have been added in the vocabulary, make sure the associated word embeddings are fine-tuned or trained.
functionary-small-v2.4.Q4_0.gguf: 100% 4.11G/4.11G [00:58<00:00, 69.7MB/s]
llama_model_loader: loaded meta data with 25 key-value pairs and 291 tensors from /root/.cache/huggingface/hub/models--meetkai--functionary-small-v2.4-GGUF/snapshots/a0d171eb78e02a58858c464e278234afbcf85c5c/./functionary-small-v2.4.Q4_0.gguf (version GGUF V3 (latest))
llama_model_loader: Dumping metadata keys/values. Note: KV overrides do not apply in this output.
llama_model_loader: - kv   0:                       general.architecture str              = llama
llama_model_loader: - kv   1:                               general.name str              = .
llama_model_loader: - kv   2:                           llama.vocab_size u32              = 32004
llama_model_loader: - kv   3:                       llama.context_length u32              = 32768
llama_model_loader: - kv   4:                     llama.embedding_length u32              = 4096
llama_model_loader: - kv   5:                          llama.block_count u32              = 32
llama_model_loader: - kv   6:                  llama.feed_forward_length u32              = 14336
llama_model_loader: - kv   7:                 llama.rope.dimension_count u32              = 128
llama_model_loader: - kv   8:                 llama.attention.head_count u32              = 32
llama_model_loader: - kv   9:              llama.attention.head_count_kv u32              = 8
llama_model_loader: - kv  10:     llama.attention.layer_norm_rms_epsilon f32              = 0.000010
llama_model_loader: - kv  11:                       llama.rope.freq_base f32              = 1000000.000000
llama_model_loader: - kv  12:                          general.file_type u32              = 2
llama_model_loader: - kv  13:                       tokenizer.ggml.model str              = llama
llama_model_loader: - kv  14:                      tokenizer.ggml.tokens arr[str,32004]   = ["<unk>", "<s>", "</s>", "<0x00>", "<...
llama_model_loader: - kv  15:                      tokenizer.ggml.scores arr[f32,32004]   = [0.000000, 0.000000, 0.000000, 0.0000...
llama_model_loader: - kv  16:                  tokenizer.ggml.token_type arr[i32,32004]   = [2, 3, 3, 6, 6, 6, 6, 6, 6, 6, 6, 6, ...
llama_model_loader: - kv  17:                tokenizer.ggml.bos_token_id u32              = 1
llama_model_loader: - kv  18:                tokenizer.ggml.eos_token_id u32              = 2
llama_model_loader: - kv  19:            tokenizer.ggml.unknown_token_id u32              = 0
llama_model_loader: - kv  20:            tokenizer.ggml.padding_token_id u32              = 2
llama_model_loader: - kv  21:               tokenizer.ggml.add_bos_token bool             = true
llama_model_loader: - kv  22:               tokenizer.ggml.add_eos_token bool             = false
llama_model_loader: - kv  23:                    tokenizer.chat_template str              = {% for message in messages %}\n{% if m...
llama_model_loader: - kv  24:               general.quantization_version u32              = 2
llama_model_loader: - type  f32:   65 tensors
llama_model_loader: - type q4_0:  225 tensors
llama_model_loader: - type q6_K:    1 tensors
llm_load_vocab: special tokens definition check successful ( 263/32004 ).
llm_load_print_meta: format           = GGUF V3 (latest)
llm_load_print_meta: arch             = llama
llm_load_print_meta: vocab type       = SPM
llm_load_print_meta: n_vocab          = 32004
llm_load_print_meta: n_merges         = 0
llm_load_print_meta: n_ctx_train      = 32768
llm_load_print_meta: n_embd           = 4096
llm_load_print_meta: n_head           = 32
llm_load_print_meta: n_head_kv        = 8
llm_load_print_meta: n_layer          = 32
llm_load_print_meta: n_rot            = 128
llm_load_print_meta: n_embd_head_k    = 128
llm_load_print_meta: n_embd_head_v    = 128
llm_load_print_meta: n_gqa            = 4
llm_load_print_meta: n_embd_k_gqa     = 1024
llm_load_print_meta: n_embd_v_gqa     = 1024
llm_load_print_meta: f_norm_eps       = 0.0e+00
llm_load_print_meta: f_norm_rms_eps   = 1.0e-05
llm_load_print_meta: f_clamp_kqv      = 0.0e+00
llm_load_print_meta: f_max_alibi_bias = 0.0e+00
llm_load_print_meta: f_logit_scale    = 0.0e+00
llm_load_print_meta: n_ff             = 14336
llm_load_print_meta: n_expert         = 0
llm_load_print_meta: n_expert_used    = 0
llm_load_print_meta: causal attn      = 1
llm_load_print_meta: pooling type     = 0
llm_load_print_meta: rope type        = 0
llm_load_print_meta: rope scaling     = linear
llm_load_print_meta: freq_base_train  = 1000000.0
llm_load_print_meta: freq_scale_train = 1
llm_load_print_meta: n_yarn_orig_ctx  = 32768
llm_load_print_meta: rope_finetuned   = unknown
llm_load_print_meta: ssm_d_conv       = 0
llm_load_print_meta: ssm_d_inner      = 0
llm_load_print_meta: ssm_d_state      = 0
llm_load_print_meta: ssm_dt_rank      = 0
llm_load_print_meta: model type       = 8B
llm_load_print_meta: model ftype      = Q4_0
llm_load_print_meta: model params     = 7.24 B
llm_load_print_meta: model size       = 3.83 GiB (4.54 BPW) 
llm_load_print_meta: general.name     = .
llm_load_print_meta: BOS token        = 1 '<s>'
llm_load_print_meta: EOS token        = 2 '</s>'
llm_load_print_meta: UNK token        = 0 '<unk>'
llm_load_print_meta: PAD token        = 2 '</s>'
llm_load_print_meta: LF token         = 13 '<0x0A>'
ggml_cuda_init: failed to initialize CUDA: forward compatibility was attempted on non supported HW
llm_load_tensors: ggml ctx size =    0.15 MiB
llm_load_tensors: offloading 32 repeating layers to GPU
llm_load_tensors: offloading non-repeating layers to GPU
llm_load_tensors: offloaded 33/33 layers to GPU
llm_load_tensors:        CPU buffer size =  3917.89 MiB
..................................................................................................
llama_new_context_with_model: n_ctx      = 4096
llama_new_context_with_model: n_batch    = 192
llama_new_context_with_model: n_ubatch   = 192
llama_new_context_with_model: freq_base  = 1000000.0
llama_new_context_with_model: freq_scale = 1
ggml_cuda_host_malloc: warning: failed to allocate 512.00 MiB of pinned memory: forward compatibility was attempted on non supported HW
llama_kv_cache_init:        CPU KV buffer size =   512.00 MiB
llama_new_context_with_model: KV self size  =  512.00 MiB, K (f16):  256.00 MiB, V (f16):  256.00 MiB
ggml_cuda_host_malloc: warning: failed to allocate 0.14 MiB of pinned memory: forward compatibility was attempted on non supported HW
llama_new_context_with_model:        CPU  output buffer size =     0.14 MiB
ggml_cuda_host_malloc: warning: failed to allocate 111.00 MiB of pinned memory: forward compatibility was attempted on non supported HW
llama_new_context_with_model:  CUDA_Host compute buffer size =   111.00 MiB
llama_new_context_with_model: graph nodes  = 1030
llama_new_context_with_model: graph splits = 1
AVX = 1 | AVX_VNNI = 0 | AVX2 = 1 | AVX512 = 1 | AVX512_VBMI = 0 | AVX512_VNNI = 0 | FMA = 1 | NEON = 0 | ARM_FMA = 0 | F16C = 1 | FP16_VA = 0 | WASM_SIMD = 0 | BLAS = 1 | SSE3 = 1 | SSSE3 = 1 | VSX = 0 | MATMUL_INT8 = 0 | 
Model metadata: {'tokenizer.chat_template': '{% for message in messages %}\n{% if message[\'role\'] == \'user\' or message[\'role\'] == \'system\' %}\n{{ \'<|from|>\' + message[\'role\'] + \'\n<|recipient|>all\n<|content|>\' + message[\'content\'] + \'\n\' }}{% elif message[\'role\'] == \'tool\' %}\n{{ \'<|from|>\' + message[\'name\'] + \'\n<|recipient|>all\n<|content|>\' + message[\'content\'] + \'\n\' }}{% else %}\n{% set contain_content=\'no\'%}\n{% if message[\'content\'] is not none %}\n{{ \'<|from|>assistant\n<|recipient|>all\n<|content|>\' + message[\'content\'] }}{% set contain_content=\'yes\'%}\n{% endif %}\n{% if \'tool_calls\' in message and message[\'tool_calls\'] is not none %}\n{% for tool_call in message[\'tool_calls\'] %}\n{% set prompt=\'<|from|>assistant\n<|recipient|>\' + tool_call[\'function\'][\'name\'] + \'\n<|content|>\' + tool_call[\'function\'][\'arguments\'] %}\n{% if loop.index == 1 and contain_content == "no" %}\n{{ prompt }}{% else %}\n{{ \'\n\' + prompt}}{% endif %}\n{% endfor %}\n{% endif %}\n{{ \'<|stop|>\n\' }}{% endif %}\n{% endfor %}\n{% if add_generation_prompt %}{{ \'<|from|>assistant\n<|recipient|>\' }}{% endif %}', 'tokenizer.ggml.add_eos_token': 'false', 'tokenizer.ggml.padding_token_id': '2', 'tokenizer.ggml.unknown_token_id': '0', 'tokenizer.ggml.eos_token_id': '2', 'general.quantization_version': '2', 'tokenizer.ggml.model': 'llama', 'general.architecture': 'llama', 'llama.rope.freq_base': '1000000.000000', 'llama.context_length': '32768', 'general.name': '.', 'llama.vocab_size': '32004', 'general.file_type': '2', 'tokenizer.ggml.add_bos_token': 'true', 'llama.embedding_length': '4096', 'llama.feed_forward_length': '14336', 'llama.attention.layer_norm_rms_epsilon': '0.000010', 'llama.rope.dimension_count': '128', 'tokenizer.ggml.bos_token_id': '1', 'llama.attention.head_count': '32', 'llama.block_count': '32', 'llama.attention.head_count_kv': '8'}
INFO:     Started server process [27]
INFO:     Waiting for application startup.
INFO:     Application startup complete.
INFO:     Uvicorn running on http://0.0.0.0:8000 (Press CTRL+C to quit)
Exception: Requested tokens (4503) exceed context window of 4096
Traceback (most recent call last):
  File "/usr/local/lib/python3.10/dist-packages/llama_cpp/server/errors.py", line 171, in custom_route_handler
    response = await original_route_handler(request)
  File "/usr/local/lib/python3.10/dist-packages/fastapi/routing.py", line 278, in app
    raw_response = await run_endpoint_function(
  File "/usr/local/lib/python3.10/dist-packages/fastapi/routing.py", line 191, in run_endpoint_function
    return await dependant.call(**values)
  File "/usr/local/lib/python3.10/dist-packages/llama_cpp/server/app.py", line 462, in create_chat_completion
    ] = await run_in_threadpool(llama.create_chat_completion, **kwargs)
  File "/usr/local/lib/python3.10/dist-packages/starlette/concurrency.py", line 42, in run_in_threadpool
    return await anyio.to_thread.run_sync(func, *args)
  File "/usr/local/lib/python3.10/dist-packages/anyio/to_thread.py", line 56, in run_sync
    return await get_async_backend().run_sync_in_worker_thread(
  File "/usr/local/lib/python3.10/dist-packages/anyio/_backends/_asyncio.py", line 2144, in run_sync_in_worker_thread
    return await future
  File "/usr/local/lib/python3.10/dist-packages/anyio/_backends/_asyncio.py", line 851, in run
    result = context.run(func, *args)
  File "/usr/local/lib/python3.10/dist-packages/llama_cpp/llama.py", line 1658, in create_chat_completion
    return handler(
  File "/usr/local/lib/python3.10/dist-packages/llama_cpp/llama_chat_format.py", line 2066, in functionary_v1_v2_chat_handler
    completion = create_completion(stop=stops)
  File "/usr/local/lib/python3.10/dist-packages/llama_cpp/llama_chat_format.py", line 1962, in create_completion
    completion = cast(llama_types.Completion, llama.create_completion(
  File "/usr/local/lib/python3.10/dist-packages/llama_cpp/llama.py", line 1494, in create_completion
    completion: Completion = next(completion_or_chunks)  # type: ignore
  File "/usr/local/lib/python3.10/dist-packages/llama_cpp/llama.py", line 972, in _create_completion
    raise ValueError(
ValueError: Requested tokens (4503) exceed context window of 4096
INFO:     192.168.1.68:58058 - "POST /v1/chat/completions HTTP/1.1" 400 Bad Request

Also, I noticed there is a v2.5 of the functionary small model. Has anyone tested that out? meetkai/functionary-small-v2.5-GGUF · Hugging Face

Also, when setting up Extended OpenAI it shows gpt-3.5-turbo-1106 in that first field. I presume we set that to meetkai/functionary-small-v2.4-GGUF, or is it just functionary-small-v2.4-GGUF?

Here are my function calls. Any help would be great.

- spec:
    name: execute_services
    description: Use this function to execute service of devices in Home Assistant.
    parameters:
      type: object
      properties:
        list:
          type: array
          items:
            type: object
            properties:
              domain:
                type: string
                description: The domain of the service
              service:
                type: string
                description: The service to be called
              service_data:
                type: object
                description: The service data object to indicate what to control.
                properties:
                  entity_id:
                    type: string
                    description: The entity_id retrieved from available devices. It
                      must start with domain, followed by dot character.
                required:
                - entity_id
            required:
            - domain
            - service
            - service_data
  function:
    type: native
    name: execute_service
- spec:
    name: set_light_color
    description: Sets a color value for a light entity. Only call this function
      when the user explicitly gives a color, and not warm, cold or cool.
    parameters:
      type: object
      properties:
        color:
          type: string
          description: The color to set
        entity_id:
          type: string
          description: The light entity_id retrieved from available devices. 
            It must start with the light domain, followed by dot character.
      required:
      - color
      - entity_id
  function:
    type: script
    sequence:
    - service: light.turn_on
      data:
        color_name: '{{color}}'
      target:
        entity_id: '{{entity_id}}'

- spec:
    name: set_light_brightness
    description: Sets a brightness value for a light entity. Only call this
      function when the user explicitly gives you a percentage value.
    parameters:
      type: object
      properties:
        brightness:
          type: string
          description: The brightness percentage to set.
        entity_id:
          type: string
          description: The light entity_id retrieved from available devices. 
            It must start with the light domain, followed by dot character.
      required:
      - brightness
      - entity_id
  function:
    type: script
    sequence:
    - service: light.turn_on
      data:
        brightness_pct: '{{brightness}}'
      target:
        entity_id: '{{entity_id}}'

- spec:
    name: set_light_warm
    description: Sets a light entity to its warmest temperature.
    parameters:
      type: object
      properties:
        entity_id:
          type: string
          description: The light entity_id retrieved from available devices. 
            It must start with the light domain, followed by dot character.
      required:
      - entity_id
  function:
    type: script
    sequence:
    - service: light.turn_on
      data:
        kelvin: '{{state_attr(entity_id, "min_color_temp_kelvin")}}'
      target:
        entity_id: '{{entity_id}}'

- spec:
    name: set_light_cold
    description: Sets a light entity to its coldest or coolest temperature,
      only call this function when user explicitly asks for cold or cool temperature of the light.
    parameters:
      type: object
      properties:
        entity_id:
          type: string
          description: The light entity_id retrieved from available devices. 
            It must start with the light domain, followed by dot character.
      required:
      - entity_id
  function:
    type: script
    sequence:
    - service: light.turn_on
      data:
        kelvin: '{{state_attr(entity_id, "max_color_temp_kelvin")}}'
      target:
        entity_id: '{{entity_id}}'
- spec:
    name: play_track_on_media_player
    description: Plays any track (name or artist of song) on a given media player
    parameters:
      type: object
      properties:
        track:
          type: string
          description: The track to play
        entity_id:
          type: string
          description: The media_player entity_id retrieved from available devices. 
            It must start with the media_player domain, followed by dot character.
      required:
      - track
      - entity_id
  function:
    type: script
    sequence:
    - service: mass.play_media
      data:
        media_id: '{{track}}'
        media_type: track
      target:
        entity_id: '{{entity_id}}'
- spec:
    name: play_playlist_on_media_player
    description: Plays any playlist on a given media player
    parameters:
      type: object
      properties:
        playlist:
          type: string
          description: The name of the playlist to play
        entity_id:
          type: string
          description: The media_player entity_id retrieved from available devices. 
            It must start with the media_player domain, followed by dot character.
      required:
      - playlist
      - entity_id
  function:
    type: script
    sequence:
    - service: mass.play_media
      data:
        media_id: '{{playlist}}'
        media_type: playlist
      target:
        entity_id: '{{entity_id}}'

Also, when setting up Extended OpenAI it shows gpt-3.5-turbo-1106 in that first field. I presume we set that to meetkai/functionary-small-v2.4-GGUF, or is it just functionary-small-v2.4-GGUF?

This doesn’t matter; the field can be given any name.

Make sure that you check “Use Tools” in the Extended OpenAI configuration, and set the Context Threshold to 8000. Let me know if this fixes your issue.

Also, I noticed there is a v2.5 of the functionary small model. Has anyone tested that out?

I noticed it yesterday, will try it out soon!

Thanks for the quick reply. I noticed that in my .env file I see the N_CTX=4092 value. How does this pertain to the 8000 that we set in Extended OpenAI? I'm wondering if I need to set that N_CTX to 8000 as well, maybe?

Here’s my .env:

USE_MLOCK=0
HF_MODEL_REPO_ID=meetkai/functionary-medium-v2.4-GGUF
MODEL=functionary-medium-v2.4.Q4_0.gguf
HF_PRETRAINED_MODEL_NAME_OR_PATH=meetkai/functionary-medium-v2.4-GGUF
CHAT_FORMAT=functionary-v2
N_GPU_LAYERS=33
N_CTX=4092
N_BATCH=192
N_THREADS=6

I was just about to try functionary-medium here, but all the failures I was seeing pertained to -small.

When I bump N_CTX to 8000, it seems like things just hang and Extended OpenAI doesn't respond.

==========
== CUDA ==
==========

CUDA Version 12.1.1

Container image Copyright (c) 2016-2023, NVIDIA CORPORATION & AFFILIATES. All rights reserved.

This container image and its contents are governed by the NVIDIA Deep Learning Container License.
By pulling and using the container, you accept the terms and conditions of this license:
https://developer.nvidia.com/ngc/nvidia-deep-learning-container-license

A copy of this license is made available in this container at /NGC-DL-CONTAINER-LICENSE for your convenience.

None of PyTorch, TensorFlow >= 2.0, or Flax have been found. Models won't be available and only tokenizers, configuration and file/data utilities can be used.
tokenizer_config.json: 100% 2.86k/2.86k [00:00<00:00, 11.7MB/s]
tokenizer.model: 100% 493k/493k [00:00<00:00, 6.06MB/s]
tokenizer.json: 100% 1.80M/1.80M [00:00<00:00, 11.2MB/s]
added_tokens.json: 100% 95.0/95.0 [00:00<00:00, 479kB/s]
special_tokens_map.json: 100% 660/660 [00:00<00:00, 3.28MB/s]
Special tokens have been added in the vocabulary, make sure the associated word embeddings are fine-tuned or trained.
functionary-small-v2.4.Q4_0.gguf: 100% 4.11G/4.11G [00:58<00:00, 69.8MB/s]
llama_model_loader: loaded meta data with 25 key-value pairs and 291 tensors from /root/.cache/huggingface/hub/models--meetkai--functionary-small-v2.4-GGUF/snapshots/a0d171eb78e02a58858c464e278234afbcf85c5c/./functionary-small-v2.4.Q4_0.gguf (version GGUF V3 (latest))
llama_model_loader: Dumping metadata keys/values. Note: KV overrides do not apply in this output.
llama_model_loader: - kv   0:                       general.architecture str              = llama
llama_model_loader: - kv   1:                               general.name str              = .
llama_model_loader: - kv   2:                           llama.vocab_size u32              = 32004
llama_model_loader: - kv   3:                       llama.context_length u32              = 32768
llama_model_loader: - kv   4:                     llama.embedding_length u32              = 4096
llama_model_loader: - kv   5:                          llama.block_count u32              = 32
llama_model_loader: - kv   6:                  llama.feed_forward_length u32              = 14336
llama_model_loader: - kv   7:                 llama.rope.dimension_count u32              = 128
llama_model_loader: - kv   8:                 llama.attention.head_count u32              = 32
llama_model_loader: - kv   9:              llama.attention.head_count_kv u32              = 8
llama_model_loader: - kv  10:     llama.attention.layer_norm_rms_epsilon f32              = 0.000010
llama_model_loader: - kv  11:                       llama.rope.freq_base f32              = 1000000.000000
llama_model_loader: - kv  12:                          general.file_type u32              = 2
llama_model_loader: - kv  13:                       tokenizer.ggml.model str              = llama
llama_model_loader: - kv  14:                      tokenizer.ggml.tokens arr[str,32004]   = ["<unk>", "<s>", "</s>", "<0x00>", "<...
llama_model_loader: - kv  15:                      tokenizer.ggml.scores arr[f32,32004]   = [0.000000, 0.000000, 0.000000, 0.0000...
llama_model_loader: - kv  16:                  tokenizer.ggml.token_type arr[i32,32004]   = [2, 3, 3, 6, 6, 6, 6, 6, 6, 6, 6, 6, ...
llama_model_loader: - kv  17:                tokenizer.ggml.bos_token_id u32              = 1
llama_model_loader: - kv  18:                tokenizer.ggml.eos_token_id u32              = 2
llama_model_loader: - kv  19:            tokenizer.ggml.unknown_token_id u32              = 0
llama_model_loader: - kv  20:            tokenizer.ggml.padding_token_id u32              = 2
llama_model_loader: - kv  21:               tokenizer.ggml.add_bos_token bool             = true
llama_model_loader: - kv  22:               tokenizer.ggml.add_eos_token bool             = false
llama_model_loader: - kv  23:                    tokenizer.chat_template str              = {% for message in messages %}\n{% if m...
llama_model_loader: - kv  24:               general.quantization_version u32              = 2
llama_model_loader: - type  f32:   65 tensors
llama_model_loader: - type q4_0:  225 tensors
llama_model_loader: - type q6_K:    1 tensors
llm_load_vocab: special tokens definition check successful ( 263/32004 ).
llm_load_print_meta: format           = GGUF V3 (latest)
llm_load_print_meta: arch             = llama
llm_load_print_meta: vocab type       = SPM
llm_load_print_meta: n_vocab          = 32004
llm_load_print_meta: n_merges         = 0
llm_load_print_meta: n_ctx_train      = 32768
llm_load_print_meta: n_embd           = 4096
llm_load_print_meta: n_head           = 32
llm_load_print_meta: n_head_kv        = 8
llm_load_print_meta: n_layer          = 32
llm_load_print_meta: n_rot            = 128
llm_load_print_meta: n_embd_head_k    = 128
llm_load_print_meta: n_embd_head_v    = 128
llm_load_print_meta: n_gqa            = 4
llm_load_print_meta: n_embd_k_gqa     = 1024
llm_load_print_meta: n_embd_v_gqa     = 1024
llm_load_print_meta: f_norm_eps       = 0.0e+00
llm_load_print_meta: f_norm_rms_eps   = 1.0e-05
llm_load_print_meta: f_clamp_kqv      = 0.0e+00
llm_load_print_meta: f_max_alibi_bias = 0.0e+00
llm_load_print_meta: f_logit_scale    = 0.0e+00
llm_load_print_meta: n_ff             = 14336
llm_load_print_meta: n_expert         = 0
llm_load_print_meta: n_expert_used    = 0
llm_load_print_meta: causal attn      = 1
llm_load_print_meta: pooling type     = 0
llm_load_print_meta: rope type        = 0
llm_load_print_meta: rope scaling     = linear
llm_load_print_meta: freq_base_train  = 1000000.0
llm_load_print_meta: freq_scale_train = 1
llm_load_print_meta: n_yarn_orig_ctx  = 32768
llm_load_print_meta: rope_finetuned   = unknown
llm_load_print_meta: ssm_d_conv       = 0
llm_load_print_meta: ssm_d_inner      = 0
llm_load_print_meta: ssm_d_state      = 0
llm_load_print_meta: ssm_dt_rank      = 0
llm_load_print_meta: model type       = 8B
llm_load_print_meta: model ftype      = Q4_0
llm_load_print_meta: model params     = 7.24 B
llm_load_print_meta: model size       = 3.83 GiB (4.54 BPW) 
llm_load_print_meta: general.name     = .
llm_load_print_meta: BOS token        = 1 '<s>'
llm_load_print_meta: EOS token        = 2 '</s>'
llm_load_print_meta: UNK token        = 0 '<unk>'
llm_load_print_meta: PAD token        = 2 '</s>'
llm_load_print_meta: LF token         = 13 '<0x0A>'
ggml_cuda_init: failed to initialize CUDA: forward compatibility was attempted on non supported HW
llm_load_tensors: ggml ctx size =    0.15 MiB
llm_load_tensors: offloading 32 repeating layers to GPU
llm_load_tensors: offloading non-repeating layers to GPU
llm_load_tensors: offloaded 33/33 layers to GPU
llm_load_tensors:        CPU buffer size =  3917.89 MiB
..................................................................................................
llama_new_context_with_model: n_ctx      = 8000
llama_new_context_with_model: n_batch    = 192
llama_new_context_with_model: n_ubatch   = 192
llama_new_context_with_model: freq_base  = 1000000.0
llama_new_context_with_model: freq_scale = 1
ggml_cuda_host_malloc: warning: failed to allocate 1000.00 MiB of pinned memory: forward compatibility was attempted on non supported HW
llama_kv_cache_init:        CPU KV buffer size =  1000.00 MiB
llama_new_context_with_model: KV self size  = 1000.00 MiB, K (f16):  500.00 MiB, V (f16):  500.00 MiB
ggml_cuda_host_malloc: warning: failed to allocate 0.14 MiB of pinned memory: forward compatibility was attempted on non supported HW
llama_new_context_with_model:        CPU  output buffer size =     0.14 MiB
ggml_cuda_host_malloc: warning: failed to allocate 205.36 MiB of pinned memory: forward compatibility was attempted on non supported HW
llama_new_context_with_model:  CUDA_Host compute buffer size =   205.36 MiB
llama_new_context_with_model: graph nodes  = 1030
llama_new_context_with_model: graph splits = 1
AVX = 1 | AVX_VNNI = 0 | AVX2 = 1 | AVX512 = 1 | AVX512_VBMI = 0 | AVX512_VNNI = 0 | FMA = 1 | NEON = 0 | ARM_FMA = 0 | F16C = 1 | FP16_VA = 0 | WASM_SIMD = 0 | BLAS = 1 | SSE3 = 1 | SSSE3 = 1 | VSX = 0 | MATMUL_INT8 = 0 | 
Model metadata: {'tokenizer.chat_template': '{% for message in messages %}\n{% if message[\'role\'] == \'user\' or message[\'role\'] == \'system\' %}\n{{ \'<|from|>\' + message[\'role\'] + \'\n<|recipient|>all\n<|content|>\' + message[\'content\'] + \'\n\' }}{% elif message[\'role\'] == \'tool\' %}\n{{ \'<|from|>\' + message[\'name\'] + \'\n<|recipient|>all\n<|content|>\' + message[\'content\'] + \'\n\' }}{% else %}\n{% set contain_content=\'no\'%}\n{% if message[\'content\'] is not none %}\n{{ \'<|from|>assistant\n<|recipient|>all\n<|content|>\' + message[\'content\'] }}{% set contain_content=\'yes\'%}\n{% endif %}\n{% if \'tool_calls\' in message and message[\'tool_calls\'] is not none %}\n{% for tool_call in message[\'tool_calls\'] %}\n{% set prompt=\'<|from|>assistant\n<|recipient|>\' + tool_call[\'function\'][\'name\'] + \'\n<|content|>\' + tool_call[\'function\'][\'arguments\'] %}\n{% if loop.index == 1 and contain_content == "no" %}\n{{ prompt }}{% else %}\n{{ \'\n\' + prompt}}{% endif %}\n{% endfor %}\n{% endif %}\n{{ \'<|stop|>\n\' }}{% endif %}\n{% endfor %}\n{% if add_generation_prompt %}{{ \'<|from|>assistant\n<|recipient|>\' }}{% endif %}', 'tokenizer.ggml.add_eos_token': 'false', 'tokenizer.ggml.padding_token_id': '2', 'tokenizer.ggml.unknown_token_id': '0', 'tokenizer.ggml.eos_token_id': '2', 'general.quantization_version': '2', 'tokenizer.ggml.model': 'llama', 'general.architecture': 'llama', 'llama.rope.freq_base': '1000000.000000', 'llama.context_length': '32768', 'general.name': '.', 'llama.vocab_size': '32004', 'general.file_type': '2', 'tokenizer.ggml.add_bos_token': 'true', 'llama.embedding_length': '4096', 'llama.feed_forward_length': '14336', 'llama.attention.layer_norm_rms_epsilon': '0.000010', 'llama.rope.dimension_count': '128', 'tokenizer.ggml.bos_token_id': '1', 'llama.attention.head_count': '32', 'llama.block_count': '32', 'llama.attention.head_count_kv': '8'}
INFO:     Started server process [27]
INFO:     Waiting for application startup.
INFO:     Application startup complete.
INFO:     Uvicorn running on http://0.0.0.0:8000 (Press CTRL+C to quit)
ggml_cuda_host_malloc: warning: failed to allocate 26.44 MiB of pinned memory: forward compatibility was attempted on non supported HW

I have no idea what all that means. The unsupported hardware bit catches my eye though. For more info: I'm running Debian Bookworm in Docker on a Dell Precision 5820 with the GeForce GTX 1080 card.

Thanks for the quick reply. I noticed that in my .env file I see the N_CTX=4092 value. How does this pertain to the 8000 that we set in Extended OpenAI? I'm wondering if I need to set that N_CTX to 8000 as well, maybe?

Those settings were copied from this video, but were configured for an Nvidia Jetson board. I did not focus on tweaking those values since it all worked. I noticed that setting the Context Threshold > 8000 in the Extended OpenAI settings would give issues once the context grew (so when you keep the chat in Home Assistant open and have a long conversation). Thus, having N_CTX <= 8000 in llama-cpp-python is fine. I am currently investigating whether increasing N_BATCH can improve the inference speeds (but it seems I am limited by the delay of the model's function calling).

So, one thing at a time: leave the llama-cpp-python settings as they were. Make sure you use the modified Extended OpenAI integration with Context Threshold = 8000 and “Use Tools” checked.
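
To summarize the two places where context is configured, a rough sketch with the values from this thread (treat them as a starting point, not a definitive recommendation):

# .env for llama-cpp-python — left as it was:
N_CTX=4092
N_BATCH=192
# Extended OpenAI Conversation options in Home Assistant (set via the UI, shown here as comments):
# Context Threshold = 8000
# Use Tools = checked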

Oh, I missed this one. What is the maximum CUDA version that your GPU driver supports? You can check it with this command:

nvidia-smi
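
The top of the nvidia-smi output reports both the driver version and the highest CUDA version that driver supports; a trimmed, illustrative example (your numbers will differ):

+-------------------------------------------------------------------------------+
| NVIDIA-SMI 535.183.01        Driver Version: 535.183.01    CUDA Version: 12.2  |
+-------------------------------------------------------------------------------+

If the CUDA version reported there is lower than the 12.1.1 used in the container, that would likely explain the “forward compatibility was attempted on non supported HW” warnings in your log: as far as I know, CUDA forward compatibility is only offered for data-center GPUs, not for a GeForce GTX 1080, which would also explain why the buffers in your log end up allocated on the CPU instead of the GPU.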