Hi, thanks for your suggestion. I removed the container, edited the Dockerfile, and ran the build and compose again, but I get the same error. Here is the log:
==========
== CUDA ==
==========
CUDA Version 12.2.2
Container image Copyright (c) 2016-2023, NVIDIA CORPORATION & AFFILIATES. All rights reserved.
This container image and its contents are governed by the NVIDIA Deep Learning Container License.
By pulling and using the container, you accept the terms and conditions of this license:
https://developer.nvidia.com/ngc/nvidia-deep-learning-container-license
A copy of this license is made available in this container at /NGC-DL-CONTAINER-LICENSE for your convenience.
None of PyTorch, TensorFlow >= 2.0, or Flax have been found. Models won't be available and only tokenizers, configuration and file/data utilities can be used.
/usr/local/lib/python3.10/dist-packages/huggingface_hub/file_download.py:1132: FutureWarning: `resume_download` is deprecated and will be removed in version 1.0.0. Downloads always resume when possible. If you want to force a new download, use `force_download=True`.
warnings.warn(
tokenizer_config.json: 0% 0.00/2.86k [00:00<?, ?B/s]
tokenizer_config.json: 100% 2.86k/2.86k [00:00<00:00, 6.54MB/s]
tokenizer.model: 0% 0.00/493k [00:00<?, ?B/s]
tokenizer.model: 100% 493k/493k [00:00<00:00, 4.38MB/s]
tokenizer.json: 0% 0.00/1.80M [00:00<?, ?B/s]
tokenizer.json: 100% 1.80M/1.80M [00:00<00:00, 2.30MB/s]
added_tokens.json: 0% 0.00/95.0 [00:00<?, ?B/s]
added_tokens.json: 100% 95.0/95.0 [00:00<00:00, 286kB/s]
special_tokens_map.json: 0% 0.00/660 [00:00<?, ?B/s]
special_tokens_map.json: 100% 660/660 [00:00<00:00, 7.13MB/s]
You set `add_prefix_space`. The tokenizer needs to be converted from the slow tokenizers
Special tokens have been added in the vocabulary, make sure the associated word embeddings are fine-tuned or trained.
llama_model_loader: loaded meta data with 25 key-value pairs and 291 tensors from /var/model/functionary-small-v2.4.Q4_0.gguf (version GGUF V3 (latest))
llama_model_loader: Dumping metadata keys/values. Note: KV overrides do not apply in this output.
llama_model_loader: - kv 0: general.architecture str = llama
llama_model_loader: - kv 1: general.name str = .
llama_model_loader: - kv 2: llama.vocab_size u32 = 32004
llama_model_loader: - kv 3: llama.context_length u32 = 32768
llama_model_loader: - kv 4: llama.embedding_length u32 = 4096
llama_model_loader: - kv 5: llama.block_count u32 = 32
llama_model_loader: - kv 6: llama.feed_forward_length u32 = 14336
llama_model_loader: - kv 7: llama.rope.dimension_count u32 = 128
llama_model_loader: - kv 8: llama.attention.head_count u32 = 32
llama_model_loader: - kv 9: llama.attention.head_count_kv u32 = 8
llama_model_loader: - kv 10: llama.attention.layer_norm_rms_epsilon f32 = 0.000010
llama_model_loader: - kv 11: llama.rope.freq_base f32 = 1000000.000000
llama_model_loader: - kv 12: general.file_type u32 = 2
llama_model_loader: - kv 13: tokenizer.ggml.model str = llama
llama_model_loader: - kv 14: tokenizer.ggml.tokens arr[str,32004] = ["<unk>", "<s>", "</s>", "<0x00>", "<...
llama_model_loader: - kv 15: tokenizer.ggml.scores arr[f32,32004] = [0.000000, 0.000000, 0.000000, 0.0000...
llama_model_loader: - kv 16: tokenizer.ggml.token_type arr[i32,32004] = [2, 3, 3, 6, 6, 6, 6, 6, 6, 6, 6, 6, ...
llama_model_loader: - kv 17: tokenizer.ggml.bos_token_id u32 = 1
llama_model_loader: - kv 18: tokenizer.ggml.eos_token_id u32 = 2
llama_model_loader: - kv 19: tokenizer.ggml.unknown_token_id u32 = 0
llama_model_loader: - kv 20: tokenizer.ggml.padding_token_id u32 = 2
llama_model_loader: - kv 21: tokenizer.ggml.add_bos_token bool = true
llama_model_loader: - kv 22: tokenizer.ggml.add_eos_token bool = false
llama_model_loader: - kv 23: tokenizer.chat_template str = {% for message in messages %}\n{% if m...
llama_model_loader: - kv 24: general.quantization_version u32 = 2
llama_model_loader: - type f32: 65 tensors
llama_model_loader: - type q4_0: 225 tensors
llama_model_loader: - type q6_K: 1 tensors
llm_load_vocab: special tokens definition check successful ( 263/32004 ).
llm_load_print_meta: format = GGUF V3 (latest)
llm_load_print_meta: arch = llama
llm_load_print_meta: vocab type = SPM
llm_load_print_meta: n_vocab = 32004
llm_load_print_meta: n_merges = 0
llm_load_print_meta: n_ctx_train = 32768
llm_load_print_meta: n_embd = 4096
llm_load_print_meta: n_head = 32
llm_load_print_meta: n_head_kv = 8
llm_load_print_meta: n_layer = 32
llm_load_print_meta: n_rot = 128
llm_load_print_meta: n_embd_head_k = 128
llm_load_print_meta: n_embd_head_v = 128
llm_load_print_meta: n_gqa = 4
llm_load_print_meta: n_embd_k_gqa = 1024
llm_load_print_meta: n_embd_v_gqa = 1024
llm_load_print_meta: f_norm_eps = 0.0e+00
llm_load_print_meta: f_norm_rms_eps = 1.0e-05
llm_load_print_meta: f_clamp_kqv = 0.0e+00
llm_load_print_meta: f_max_alibi_bias = 0.0e+00
llm_load_print_meta: f_logit_scale = 0.0e+00
llm_load_print_meta: n_ff = 14336
llm_load_print_meta: n_expert = 0
llm_load_print_meta: n_expert_used = 0
llm_load_print_meta: causal attn = 1
llm_load_print_meta: pooling type = 0
llm_load_print_meta: rope type = 0
llm_load_print_meta: rope scaling = linear
llm_load_print_meta: freq_base_train = 1000000.0
llm_load_print_meta: freq_scale_train = 1
llm_load_print_meta: n_yarn_orig_ctx = 32768
llm_load_print_meta: rope_finetuned = unknown
llm_load_print_meta: ssm_d_conv = 0
llm_load_print_meta: ssm_d_inner = 0
llm_load_print_meta: ssm_d_state = 0
llm_load_print_meta: ssm_dt_rank = 0
llm_load_print_meta: model type = 8B
llm_load_print_meta: model ftype = Q4_0
llm_load_print_meta: model params = 7.24 B
llm_load_print_meta: model size = 3.83 GiB (4.54 BPW)
llm_load_print_meta: general.name = .
llm_load_print_meta: BOS token = 1 '<s>'
llm_load_print_meta: EOS token = 2 '</s>'
llm_load_print_meta: UNK token = 0 '<unk>'
llm_load_print_meta: PAD token = 2 '</s>'
llm_load_print_meta: LF token = 13 '<0x0A>'
ggml_cuda_init: GGML_CUDA_FORCE_MMQ: no
ggml_cuda_init: CUDA_USE_TENSOR_CORES: yes
ggml_cuda_init: found 1 CUDA devices:
Device 0: NVIDIA GeForce RTX 3080, compute capability 8.6, VMM: yes
llm_load_tensors: ggml ctx size = 0.30 MiB
llm_load_tensors: offloading 32 repeating layers to GPU
llm_load_tensors: offloading non-repeating layers to GPU
llm_load_tensors: offloaded 33/33 layers to GPU
llm_load_tensors: CPU buffer size = 70.32 MiB
llm_load_tensors: CUDA0 buffer size = 3847.57 MiB
warning: failed to mlock 74473472-byte buffer (after previously locking 0 bytes): Cannot allocate memory
Try increasing RLIMIT_MEMLOCK ('ulimit -l' as root).
..................................................................................................
llama_new_context_with_model: n_ctx = 4096
llama_new_context_with_model: n_batch = 192
llama_new_context_with_model: n_ubatch = 192
llama_new_context_with_model: flash_attn = 0
llama_new_context_with_model: freq_base = 1000000.0
llama_new_context_with_model: freq_scale = 1
llama_kv_cache_init: CUDA0 KV buffer size = 512.00 MiB
llama_new_context_with_model: KV self size = 512.00 MiB, K (f16): 256.00 MiB, V (f16): 256.00 MiB
llama_new_context_with_model: CUDA_Host output buffer size = 0.14 MiB
llama_new_context_with_model: CUDA0 compute buffer size = 111.00 MiB
llama_new_context_with_model: CUDA_Host compute buffer size = 6.00 MiB
llama_new_context_with_model: graph nodes = 1030
llama_new_context_with_model: graph splits = 2
AVX = 1 | AVX_VNNI = 0 | AVX2 = 1 | AVX512 = 1 | AVX512_VBMI = 1 | AVX512_VNNI = 1 | FMA = 1 | NEON = 0 | ARM_FMA = 0 | F16C = 1 | FP16_VA = 0 | WASM_SIMD = 0 | BLAS = 1 | SSE3 = 1 | SSSE3 = 1 | VSX = 0 | MATMUL_INT8 = 0 | LLAMAFILE = 1 |
Model metadata: {'tokenizer.chat_template': '{% for message in messages %}\n{% if message[\'role\'] == \'user\' or message[\'role\'] == \'system\' %}\n{{ \'<|from|>\' + message[\'role\'] + \'\n<|recipient|>all\n<|content|>\' + message[\'content\'] + \'\n\' }}{% elif message[\'role\'] == \'tool\' %}\n{{ \'<|from|>\' + message[\'name\'] + \'\n<|recipient|>all\n<|content|>\' + message[\'content\'] + \'\n\' }}{% else %}\n{% set contain_content=\'no\'%}\n{% if message[\'content\'] is not none %}\n{{ \'<|from|>assistant\n<|recipient|>all\n<|content|>\' + message[\'content\'] }}{% set contain_content=\'yes\'%}\n{% endif %}\n{% if \'tool_calls\' in message and message[\'tool_calls\'] is not none %}\n{% for tool_call in message[\'tool_calls\'] %}\n{% set prompt=\'<|from|>assistant\n<|recipient|>\' + tool_call[\'function\'][\'name\'] + \'\n<|content|>\' + tool_call[\'function\'][\'arguments\'] %}\n{% if loop.index == 1 and contain_content == "no" %}\n{{ prompt }}{% else %}\n{{ \'\n\' + prompt}}{% endif %}\n{% endfor %}\n{% endif %}\n{{ \'<|stop|>\n\' }}{% endif %}\n{% endfor %}\n{% if add_generation_prompt %}{{ \'<|from|>assistant\n<|recipient|>\' }}{% endif %}', 'tokenizer.ggml.add_eos_token': 'false', 'tokenizer.ggml.padding_token_id': '2', 'tokenizer.ggml.unknown_token_id': '0', 'tokenizer.ggml.eos_token_id': '2', 'general.quantization_version': '2', 'tokenizer.ggml.model': 'llama', 'general.architecture': 'llama', 'llama.rope.freq_base': '1000000.000000', 'llama.context_length': '32768', 'general.name': '.', 'llama.vocab_size': '32004', 'general.file_type': '2', 'tokenizer.ggml.add_bos_token': 'true', 'llama.embedding_length': '4096', 'llama.feed_forward_length': '14336', 'llama.attention.layer_norm_rms_epsilon': '0.000010', 'llama.rope.dimension_count': '128', 'tokenizer.ggml.bos_token_id': '1', 'llama.attention.head_count': '32', 'llama.block_count': '32', 'llama.attention.head_count_kv': '8'}
INFO: Started server process [27]
INFO: Waiting for application startup.
INFO: Application startup complete.
INFO: Uvicorn running on http://0.0.0.0:8000 (Press CTRL+C to quit)
llama_print_timings: load time = 1167.58 ms
llama_print_timings: sample time = 0.71 ms / 3 runs ( 0.24 ms per token, 4207.57 tokens per second)
llama_print_timings: prompt eval time = 1633.05 ms / 1069 tokens ( 1.53 ms per token, 654.60 tokens per second)
llama_print_timings: eval time = 4772.44 ms / 2 runs ( 2386.22 ms per token, 0.42 tokens per second)
llama_print_timings: total time = 7802.39 ms / 1071 tokens
Llama.generate: prefix-match hit
llama_print_timings: load time = 1167.58 ms
llama_print_timings: sample time = 16.19 ms / 105 runs ( 0.15 ms per token, 6485.89 tokens per second)
llama_print_timings: prompt eval time = 0.00 ms / 1 tokens ( 0.00 ms per token, inf tokens per second)
llama_print_timings: eval time = 1130.92 ms / 105 runs ( 10.77 ms per token, 92.84 tokens per second)
llama_print_timings: total time = 2178.45 ms / 106 tokens
INFO: 192.168.22.24:49774 - "POST /v1/chat/completions HTTP/1.1" 200 OK
Llama.generate: prefix-match hit
llama_print_timings: load time = 1167.58 ms
llama_print_timings: sample time = 0.72 ms / 5 runs ( 0.14 ms per token, 6983.24 tokens per second)
llama_print_timings: prompt eval time = 87.12 ms / 130 tokens ( 0.67 ms per token, 1492.14 tokens per second)
llama_print_timings: eval time = 43.32 ms / 4 runs ( 10.83 ms per token, 92.34 tokens per second)
llama_print_timings: total time = 329.79 ms / 134 tokens
from_string grammar:
char ::= [^"\] | [\] char_1
char_1 ::= ["\/bfnrt] | [u] [0-9a-fA-F] [0-9a-fA-F] [0-9a-fA-F] [0-9a-fA-F]
list ::= [[] space list_8 []] space
space ::= space_19
list_4 ::= list-item list_7
list-item ::= [{] space list-item-domain-kv [,] space list-item-service-kv [,] space list-item-service-data-kv [}] space
list_6 ::= [,] space list-item
list_7 ::= list_6 list_7 |
list_8 ::= list_4 |
list-item-domain-kv ::= ["] [d] [o] [m] [a] [i] [n] ["] space [:] space string
list-item-service-kv ::= ["] [s] [e] [r] [v] [i] [c] [e] ["] space [:] space string
list-item-service-data-kv ::= ["] [s] [e] [r] [v] [i] [c] [e] [_] [d] [a] [t] [a] ["] space [:] space list-item-service-data
string ::= ["] string_20 ["] space
list-item-service-data ::= [{] space list-item-service-data-entity-id-kv [}] space
list-item-service-data-entity-id-kv ::= ["] [e] [n] [t] [i] [t] [y] [_] [i] [d] ["] space [:] space string
list-kv ::= ["] [l] [i] [s] [t] ["] space [:] space list
root ::= [{] space root_18 [}] space
root_17 ::= list-kv
root_18 ::= root_17 |
space_19 ::= [ ] |
string_20 ::= char string_20 |
char ::= [^"\\] | "\\" (["\\/bfnrt] | "u" [0-9a-fA-F] [0-9a-fA-F] [0-9a-fA-F] [0-9a-fA-F])
list ::= "[" space (list-item ("," space list-item)*)? "]" space
list-item ::= "{" space list-item-domain-kv "," space list-item-service-kv "," space list-item-service-data-kv "}" space
list-item-domain-kv ::= "\"domain\"" space ":" space string
list-item-service-data ::= "{" space list-item-service-data-entity-id-kv "}" space
list-item-service-data-entity-id-kv ::= "\"entity_id\"" space ":" space string
list-item-service-data-kv ::= "\"service_data\"" space ":" space list-item-service-data
list-item-service-kv ::= "\"service\"" space ":" space string
list-kv ::= "\"list\"" space ":" space list
root ::= "{" space (list-kv )? "}" space
space ::= " "?
string ::= "\"" char* "\"" space
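For context, the grammar above constrains the model's output to a JSON object of the shape `{"list": [{"domain", "service", "service_data": {"entity_id"}}]}`, which matches the arguments the model actually produced later in the log. A quick sanity check of that shape (the sample string below is copied from the `arguments` field in the exception further down):

```python
import json

# Sample output matching the grammar's shape:
# root = { "list": [ { "domain", "service", "service_data": {"entity_id"} } ] }
sample = ('{"list": [{"domain": "light", "service": "turn_off", '
          '"service_data": {"entity_id": "light.lampadario"}}]}')
obj = json.loads(sample)

# The grammar only permits a single optional "list" key at the root...
assert set(obj) == {"list"}
# ...and each list item carries exactly these three keys.
assert set(obj["list"][0]) == {"domain", "service", "service_data"}
```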
Llama.generate: prefix-match hit
llama_print_timings: load time = 1167.58 ms
llama_print_timings: sample time = 157.62 ms / 39 runs ( 4.04 ms per token, 247.43 tokens per second)
llama_print_timings: prompt eval time = 0.00 ms / 1 tokens ( 0.00 ms per token, inf tokens per second)
llama_print_timings: eval time = 424.33 ms / 39 runs ( 10.88 ms per token, 91.91 tokens per second)
llama_print_timings: total time = 987.77 ms / 40 tokens
Llama.generate: prefix-match hit
llama_print_timings: load time = 1167.58 ms
llama_print_timings: sample time = 0.14 ms / 1 runs ( 0.14 ms per token, 6896.55 tokens per second)
llama_print_timings: prompt eval time = 60.03 ms / 38 tokens ( 1.58 ms per token, 633.00 tokens per second)
llama_print_timings: eval time = 0.00 ms / 1 runs ( 0.00 ms per token, inf tokens per second)
llama_print_timings: total time = 110.12 ms / 39 tokens
INFO: 192.168.22.24:44866 - "POST /v1/chat/completions HTTP/1.1" 200 OK
Exception: [{'type': 'literal_error', 'loc': ('body', 'messages', 2, 'typed-dict', 'role'), 'msg': "Input should be 'system'", 'input': 'assistant', 'ctx': {'expected': "'system'"}}, {'type': 'literal_error', 'loc': ('body', 'messages', 2, 'typed-dict', 'role'), 'msg': "Input should be 'user'", 'input': 'assistant', 'ctx': {'expected': "'user'"}}, {'type': 'dict_type', 'loc': ('body', 'messages', 2, 'typed-dict', 'function_call'), 'msg': 'Input should be a valid dictionary', 'input': None}, {'type': 'literal_error', 'loc': ('body', 'messages', 2, 'typed-dict', 'role'), 'msg': "Input should be 'tool'", 'input': 'assistant', 'ctx': {'expected': "'tool'"}}, {'type': 'missing', 'loc': ('body', 'messages', 2, 'typed-dict', 'tool_call_id'), 'msg': 'Field required', 'input': {'content': None, 'role': 'assistant', 'function_call': None, 'tool_calls': [{'id': 'call_ES0hdHmriocpwwkwi7IePMTg', 'function': {'arguments': '{"list": [{"domain": "light", "service": "turn_off", "service_data": {"entity_id": "light.lampadario"}}]}', 'name': 'execute_services'}, 'type': 'function'}]}}, {'type': 'literal_error', 'loc': ('body', 'messages', 2, 'typed-dict', 'role'), 'msg': "Input should be 'function'", 'input': 'assistant', 'ctx': {'expected': "'function'"}}, {'type': 'missing', 'loc': ('body', 'messages', 2, 'typed-dict', 'name'), 'msg': 'Field required', 'input': {'content': None, 'role': 'assistant', 'function_call': None, 'tool_calls': [{'id': 'call_ES0hdHmriocpwwkwi7IePMTg', 'function': {'arguments': '{"list": [{"domain": "light", "service": "turn_off", "service_data": {"entity_id": "light.lampadario"}}]}', 'name': 'execute_services'}, 'type': 'function'}]}}]
Traceback (most recent call last):
File "/usr/local/lib/python3.10/dist-packages/llama_cpp/server/errors.py", line 171, in custom_route_handler
response = await original_route_handler(request)
File "/usr/local/lib/python3.10/dist-packages/fastapi/routing.py", line 315, in app
raise validation_error
fastapi.exceptions.RequestValidationError: [{'type': 'literal_error', 'loc': ('body', 'messages', 2, 'typed-dict', 'role'), 'msg': "Input should be 'system'", 'input': 'assistant', 'ctx': {'expected': "'system'"}}, {'type': 'literal_error', 'loc': ('body', 'messages', 2, 'typed-dict', 'role'), 'msg': "Input should be 'user'", 'input': 'assistant', 'ctx': {'expected': "'user'"}}, {'type': 'dict_type', 'loc': ('body', 'messages', 2, 'typed-dict', 'function_call'), 'msg': 'Input should be a valid dictionary', 'input': None}, {'type': 'literal_error', 'loc': ('body', 'messages', 2, 'typed-dict', 'role'), 'msg': "Input should be 'tool'", 'input': 'assistant', 'ctx': {'expected': "'tool'"}}, {'type': 'missing', 'loc': ('body', 'messages', 2, 'typed-dict', 'tool_call_id'), 'msg': 'Field required', 'input': {'content': None, 'role': 'assistant', 'function_call': None, 'tool_calls': [{'id': 'call_ES0hdHmriocpwwkwi7IePMTg', 'function': {'arguments': '{"list": [{"domain": "light", "service": "turn_off", "service_data": {"entity_id": "light.lampadario"}}]}', 'name': 'execute_services'}, 'type': 'function'}]}}, {'type': 'literal_error', 'loc': ('body', 'messages', 2, 'typed-dict', 'role'), 'msg': "Input should be 'function'", 'input': 'assistant', 'ctx': {'expected': "'function'"}}, {'type': 'missing', 'loc': ('body', 'messages', 2, 'typed-dict', 'name'), 'msg': 'Field required', 'input': {'content': None, 'role': 'assistant', 'function_call': None, 'tool_calls': [{'id': 'call_ES0hdHmriocpwwkwi7IePMTg', 'function': {'arguments': '{"list": [{"domain": "light", "service": "turn_off", "service_data": {"entity_id": "light.lampadario"}}]}', 'name': 'execute_services'}, 'type': 'function'}]}}]
INFO: 192.168.22.24:44866 - "POST /v1/chat/completions HTTP/1.1" 500 Internal Server Error
[The same exception, traceback, and "500 Internal Server Error" response repeat twice more for the subsequent requests.]
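For what it's worth, the 500s look like the server's request validation rejecting the assistant message that carries `tool_calls` with `content: None` (it tries each message typed-dict in turn, hence the string of `literal_error`/`missing` entries). Below is a minimal reconstruction of that message, taken from the `input` field of the exception above; only the variable name is mine:

```python
# The message (index 2 in "messages") that the server's typed-dict
# validation rejects: an assistant turn with content=None plus tool_calls.
failing_message = {
    "role": "assistant",
    "content": None,        # None content is part of what the validator trips on
    "function_call": None,  # rejected by the dict_type check as well
    "tool_calls": [
        {
            "id": "call_ES0hdHmriocpwwkwi7IePMTg",
            "type": "function",
            "function": {
                "name": "execute_services",
                "arguments": ('{"list": [{"domain": "light", "service": "turn_off", '
                              '"service_data": {"entity_id": "light.lampadario"}}]}'),
            },
        }
    ],
}
```

So the generation side works; it is the follow-up request that echoes this assistant message back which fails validation.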