Attaching to embedding-model-1, qdrant-1, web-1
qdrant-1 | _ _
qdrant-1 | __ _ __| |_ __ __ _ _ __ | |_
qdrant-1 | / _` |/ _` | '__/ _` | '_ \| __|
qdrant-1 | | (_| | (_| | | | (_| | | | | |_
qdrant-1 | \__, |\__,_|_| \__,_|_| |_|\__|
qdrant-1 | |_|
qdrant-1 |
qdrant-1 | Version: 1.14.0, build: 3617a011
qdrant-1 | Access web UI at http://localhost:6333/dashboard
qdrant-1 |
qdrant-1 | 2025-07-02T15:10:35.506524Z INFO storage::content_manager::consensus::persistent: Loading raft state from ./storage/raft_state.json
qdrant-1 | 2025-07-02T15:10:35.518777Z INFO storage::content_manager::toc: Loading collection: make_this_parameterizable_per_api_call
embedding-model-1  | 2025-07-02T15:10:35.596107Z  INFO text_embeddings_router: router/src/main.rs:189: Args { model_id: "nom**-**/*****-*****-****-v1.5", revision: None, tokenization_workers: None, dtype: None, pooling: None, max_concurrent_requests: 512, max_batch_tokens: 16384, max_batch_requests: None, max_client_batch_size: 32, auto_truncate: false, default_prompt_name: None, default_prompt: None, hf_api_token: None, hf_token: None, hostname: "e2a69498f1f8", port: 80, uds_path: "/tmp/text-embeddings-inference-server", huggingface_hub_cache: Some("/data"), payload_limit: 2000000, api_key: None, json_output: false, disable_spans: false, otlp_endpoint: None, otlp_service_name: "text-embeddings-inference.server", prometheus_port: 9000, cors_allow_origin: None }
embedding-model-1  | 2025-07-02T15:10:35.740037Z  INFO download_artifacts: text_embeddings_core::download: core/src/download.rs:20: Starting download
embedding-model-1  | 2025-07-02T15:10:35.740549Z  INFO download_artifacts:download_pool_config: text_embeddings_core::download: core/src/download.rs:53: Downloading `1_Pooling/config.json`
embedding-model-1  | 2025-07-02T15:10:35.763406Z  INFO download_artifacts:download_new_st_config: text_embeddings_core::download: core/src/download.rs:77: Downloading `config_sentence_transformers.json`
embedding-model-1  | 2025-07-02T15:10:35.767840Z  INFO download_artifacts: text_embeddings_core::download: core/src/download.rs:40: Downloading `config.json`
embedding-model-1  | 2025-07-02T15:10:35.776679Z  INFO download_artifacts: text_embeddings_core::download: core/src/download.rs:43: Downloading `tokenizer.json`
embedding-model-1  | 2025-07-02T15:10:35.783314Z  INFO download_artifacts: text_embeddings_core::download: core/src/download.rs:47: Model artifacts downloaded in 43.234041ms
embedding-model-1  | 2025-07-02T15:10:35.835549Z  INFO text_embeddings_router: router/src/lib.rs:193: Maximum number of tokens per request: 8192
embedding-model-1  | 2025-07-02T15:10:35.842553Z  INFO text_embeddings_core::tokenization: core/src/tokenization.rs:38: Starting 12 tokenization workers
qdrant-1 | 2025-07-02T15:10:35.868768Z INFO collection::shards::local_shard: Recovering shard ./storage/collections/make_this_parameterizable_per_api_call/0: 0/1 (0%)
qdrant-1 | 2025-07-02T15:10:35.879600Z INFO collection::shards::local_shard: Recovered collection make_this_parameterizable_per_api_call: 1/1 (100%)
qdrant-1 | 2025-07-02T15:10:35.884894Z INFO storage::content_manager::toc: Loading collection: lcg_collection
embedding-model-1  | 2025-07-02T15:10:35.930772Z  INFO text_embeddings_router: router/src/lib.rs:235: Starting model backend
embedding-model-1  | 2025-07-02T15:10:35.932198Z  INFO text_embeddings_backend: backends/src/lib.rs:534: Downloading `model.onnx`
qdrant-1 | 2025-07-02T15:10:36.020276Z INFO collection::shards::local_shard: Recovering shard ./storage/collections/lcg_collection/0: 0/1 (0%)
qdrant-1 | 2025-07-02T15:10:36.024511Z INFO collection::shards::local_shard: Recovered collection lcg_collection: 1/1 (100%)
qdrant-1 | 2025-07-02T15:10:36.026516Z INFO qdrant: Distributed mode disabled
qdrant-1 | 2025-07-02T15:10:36.026590Z INFO qdrant: Telemetry reporting enabled, id: ede99c89-8385-4654-bba9-f9e4fb4d9cb4
qdrant-1 | 2025-07-02T15:10:36.026701Z INFO qdrant: Inference service is not configured.
qdrant-1 | 2025-07-02T15:10:36.027025Z INFO qdrant::actix: TLS disabled for REST API
qdrant-1 | 2025-07-02T15:10:36.027081Z INFO qdrant::actix: Qdrant HTTP listening on 6333
qdrant-1 | 2025-07-02T15:10:36.027091Z INFO actix_server::builder: Starting 11 workers
qdrant-1 | 2025-07-02T15:10:36.027104Z INFO actix_server::server: Actix runtime found; starting in Actix runtime
qdrant-1 | 2025-07-02T15:10:36.029868Z INFO qdrant::tonic: Qdrant gRPC listening on 6334
qdrant-1 | 2025-07-02T15:10:36.029877Z INFO qdrant::tonic: TLS disabled for gRPC API
embedding-model-1  | 2025-07-02T15:10:36.129959Z  WARN text_embeddings_backend: backends/src/lib.rs:538: Could not download `model.onnx`: request error: HTTP status client error (404 Not Found) for url (https://huggingface.co/nomic-ai/nomic-embed-text-v1.5/resolve/main/model.onnx)
embedding-model-1  | 2025-07-02T15:10:36.130090Z  INFO text_embeddings_backend: backends/src/lib.rs:539: Downloading `onnx/model.onnx`
embedding-model-1  | 2025-07-02T15:10:36.130691Z  INFO text_embeddings_backend: backends/src/lib.rs:548: Downloading `model.onnx_data`
embedding-model-1  | 2025-07-02T15:10:36.260482Z  WARN text_embeddings_backend: backends/src/lib.rs:552: Could not download `model.onnx_data`: request error: HTTP status client error (404 Not Found) for url (https://huggingface.co/nomic-ai/nomic-embed-text-v1.5/resolve/main/model.onnx_data)
embedding-model-1  | 2025-07-02T15:10:36.260613Z  INFO text_embeddings_backend: backends/src/lib.rs:553: Downloading `onnx/model.onnx_data`
embedding-model-1  | 2025-07-02T15:10:36.389432Z  WARN text_embeddings_backend: backends/src/lib.rs:557: Could not download `onnx/model.onnx_data`: request error: HTTP status client error (404 Not Found) for url (https://huggingface.co/nomic-ai/nomic-embed-text-v1.5/resolve/main/onnx/model.onnx_data)
embedding-model-1  | 2025-07-02T15:10:36.389651Z  INFO text_embeddings_backend: backends/src/lib.rs:349: Model ONNX weights downloaded in 457.376709ms
embedding-model-1  | 2025-07-02T15:10:39.389505Z  WARN text_embeddings_router: router/src/lib.rs:263: Backend does not support a batch size > 8
embedding-model-1  | 2025-07-02T15:10:39.389569Z  WARN text_embeddings_router: router/src/lib.rs:264: forcing `max_batch_requests=8`
embedding-model-1  | 2025-07-02T15:10:39.390984Z  WARN text_embeddings_router: router/src/lib.rs:313: Invalid hostname, defaulting to 0.0.0.0
embedding-model-1  | 2025-07-02T15:10:39.419125Z  INFO text_embeddings_router::http::server: router/src/http/server.rs:1847: Starting HTTP server: 0.0.0.0:80
embedding-model-1  | 2025-07-02T15:10:39.419140Z  INFO text_embeddings_router::http::server: router/src/http/server.rs:1848: Ready