An open source embedding vector similarity search engine powered by Faiss, NMSLIB and Annoy

milvus banner

Click to take a quick look at our demos: Image Search | Chatbots | Chemical Structure Search

Milvus is an open-source vector database built to power AI applications and embedding similarity search. Milvus makes unstructured data search more accessible, and provides a consistent user experience regardless of the deployment environment.

Milvus was released under the open-source Apache License 2.0 in October 2019. It is currently an incubation-stage project under LF AI & Data Foundation.

  • Blazing Fast

Average query latency measured in milliseconds on datasets of ten million vectors.

Supports CPU SIMD, GPU, and FPGA acceleration, fully utilizing available hardware resources for cost efficiency.

  • Easy to Use

Rich APIs designed for data science workflows (a minimal usage sketch follows this feature list).

Consistent cross-platform UX from laptop, to local cluster, to cloud.

Embed real-time search and analytics into virtually any application.

  • Stable and Resilient

Milvus’ built-in replication and failover/failback features ensure data and applications can maintain business continuity in the event of a disruption.

  • High Elasticity

Component-level scalability makes it possible to only scale where necessary.

  • Community Backed

With over 1,000 enterprise users, 5,000+ stars on GitHub, and an active open-source community, you’re not alone when you use Milvus.
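
A minimal sketch of such a workflow with the pymilvus 2.x SDK is shown below. The host/port, collection name, schema, and index parameters are illustrative assumptions, not prescriptions from this README.

    # Hedged pymilvus 2.x sketch: connect, define a schema, insert, index, and search.
    import random
    from pymilvus import (
        connections, Collection, CollectionSchema, FieldSchema, DataType,
    )

    connections.connect(host="localhost", port="19530")  # assumed local deployment

    fields = [
        FieldSchema(name="id", dtype=DataType.INT64, is_primary=True),
        FieldSchema(name="embedding", dtype=DataType.FLOAT_VECTOR, dim=128),
    ]
    collection = Collection("demo", CollectionSchema(fields))

    # Insert a small batch of random vectors with explicit primary keys.
    ids = list(range(1000))
    vectors = [[random.random() for _ in range(128)] for _ in ids]
    collection.insert([ids, vectors])

    # Build an index, load the collection into memory, then run a similarity search.
    collection.create_index("embedding", {"index_type": "IVF_FLAT",
                                          "metric_type": "L2",
                                          "params": {"nlist": 128}})
    collection.load()
    results = collection.search(vectors[:1], anns_field="embedding",
                                param={"metric_type": "L2", "params": {"nprobe": 16}},
                                limit=5)
    print(results[0].ids)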

IMPORTANT The master branch is for the development of Milvus v2.0. On March 9th, 2021, we released Milvus v1.0, the first stable version of Milvus with long-term support. To use Milvus v1.0, switch to branch 1.0.

Getting Started

Demos

  • Image Search: Images made searchable. Instantaneously return the most similar images from a massive database.
  • Chatbots: Interactive digital customer service that saves users time and businesses money.
  • Chemical Structure Search: Blazing fast similarity search, substructure search, or superstructure search for a specified molecule.

Contributing

Contributions to Milvus are welcome from everyone. See Guidelines for Contributing for details on submitting patches and the contribution workflow. See our community repository to learn about our governance and access more community resources.

Documentation

Milvus Docs

For documentation about Milvus, see Milvus Docs.

SDK

The implemented SDKs and their API documentation are listed below:

Recommended Articles

Contact

Join the Milvus community on Slack Channel to share your suggestions, advice, and questions with our engineering team. You can also submit questions to our FAQ page.

Subscribe to our mailing lists:

Follow us on social media:

License

Milvus is licensed under the Apache License, Version 2.0. View a copy of the License file.

Acknowledgments

Milvus depends on the following open-source projects:

  • Thanks to FAISS for the excellent similarity search library.
  • Thanks to etcd for providing great open-source tools.
  • Thanks to Pulsar for its great distributed pub/sub messaging platform.
  • Thanks to RocksDB for the powerful storage engine.
Owner
The Milvus Project
The open source vector database designed for AI applications
Comments
  • Refactor QueryCoord


  • [Bug]: Crash caused by memory explosion


    Is there an existing issue for this?

    • [X] I have searched the existing issues

    Environment

    - Milvus version:2.0.2
    - Deployment mode(standalone or cluster):standalone
    - SDK version(e.g. pymilvus v2.0.0rc2):java 2.0.4
    - OS(Ubuntu or CentOS): ubuntu
    - CPU/Memory: 24C/32G
    - GPU: 
    - Others:
    

    Current Behavior

    Adding or deleting entities in the collection while it is being queried causes memory usage to rise abnormally until the process eventually crashes.

    Expected Behavior

    Querying should not be affected by concurrent entity changes.

    Steps To Reproduce

    1. Create a collection: 768-dim, IVF_SQ8 index, nlist 256, IP metric.
    2. Insert 100,000 entities into the collection.
    3. Load the collection and query it entity by entity.
    4. While the queries are running, keep adding and deleting entities (a hedged pymilvus sketch of these steps follows).
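
    Below is a hedged pymilvus 2.x sketch of these steps (the original report uses the Java SDK; collection/field names and batching are illustrative assumptions):

        import random
        import threading
        from pymilvus import (
            connections, Collection, CollectionSchema, FieldSchema, DataType,
        )

        connections.connect(host="localhost", port="19530")

        # Step 1: 768-dim collection with an IVF_SQ8 index, nlist=256, IP metric.
        fields = [
            FieldSchema(name="id", dtype=DataType.INT64, is_primary=True),
            FieldSchema(name="vec", dtype=DataType.FLOAT_VECTOR, dim=768),
        ]
        collection = Collection("repro", CollectionSchema(fields))
        collection.create_index("vec", {"index_type": "IVF_SQ8",
                                        "metric_type": "IP",
                                        "params": {"nlist": 256}})

        # Step 2: insert ~100k entities in batches.
        ids = list(range(100_000))
        for start in range(0, len(ids), 10_000):
            chunk = ids[start:start + 10_000]
            vecs = [[random.random() for _ in range(768)] for _ in chunk]
            collection.insert([chunk, vecs])
        collection.load()

        # Step 4: concurrently delete and re-insert entities while queries run.
        def mutate_loop():
            for i in ids:
                collection.delete(expr=f"id in [{i}]")
                collection.insert([[i], [[random.random() for _ in range(768)]]])

        threading.Thread(target=mutate_loop, daemon=True).start()

        # Step 3: query entity by entity; server memory reportedly keeps rising.
        for i in ids:
            collection.query(expr=f"id == {i}")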
    

    Milvus Log

    No response

    Anything else?

    No response

  • Support diskann index for vector field


    issue: #19092

    For the DISKANN index, the SDK adds a new index type (DISKANN) and a new search parameter (search_list); the valid range of search_list is (topk, min(topk*10, 65535)).

    Python Example:

        # 'collection', 'vec_data', and 'nq' are assumed to be defined earlier;
        # "xxxx" below is a placeholder for the vector field name.
        metric_type = 'L2'
        index_type = 'DISKANN'
        index_param = {
            "metric_type": metric_type,
            "index_type": index_type,
            "params": {}          # DISKANN needs no extra build params here
        }

        res = collection.create_index("float_vector", index_param)

        # search_list must lie in (topk, min(topk*10, 65535)); with limit=5, 7 is valid.
        search_params = {"metric_type": metric_type, "params": {"search_list": 7}}
        results = collection.search(
            vec_data[:nq],
            anns_field="xxxx",
            param=search_params,
            limit=5,
            consistency_level="Strong"
        )
    

    Signed-off-by: xige-16 [email protected]

  • [Bug]: [Nightly] Auto compaction sometimes runs slowly


    Is there an existing issue for this?

    • [X] I have searched the existing issues

    Environment

    - Milvus version: latest
    - Deployment mode(standalone or cluster): standalone & cluster
    - SDK version(e.g. pymilvus v2.0.0rc2): pymilvus rc9.22
    - OS(Ubuntu or CentOS): 
    - CPU/Memory: 
    - GPU: 
    - Others:
    

    Current Behavior

    Auto compaction was triggered (segment count reached 10), but compaction could not be completed within 60s.

    [2021-12-09T00:38:55.908Z] >               raise BaseException(1, "Ccompact auto-merge more than 60s")
    [2021-12-09T00:38:55.908Z] E               BaseException: (1, 'Ccompact auto-merge more than 60s')
    

    Expected Behavior

    When auto compaction is triggered (segment count reaches 10), compaction should complete in a relatively short time.

    Steps To Reproduce

    Run nightly: https://ci.milvus.io:18080/jenkins/blue/organizations/jenkins/milvus-nightly-ci/detail/master/256/pipeline/106

    Logs: https://ci.milvus.io:18080/jenkins/blue/organizations/jenkins/milvus-nightly-ci/detail/master/256/artifacts

    Failed case: testcases/test_compaction.py::TestCompactionOperation::test_compact_threshold_auto_merge

    Failed collection name: compact_QmYxDTZS

    TimeLine:

    [2021-12-08T22:07:42.854Z] [gw2] [ 17%] FAILED testcases/test_compaction.py::TestCompactionOperation::test_compact_threshold_auto_merge 
    

    Anything else?

    No response

  • After the distributed Milvus system is continuously inserted, the memory usage of the write node has been very high.




    Describe the bug: After continuous inserts into the distributed Milvus system, memory usage of the write node stays above 90%.

    Steps/Code to reproduce behavior

    1. Milvus version: 1.1.0
    2. Database: MySQL
    3. Milvus cluster: 20 read-only nodes (32C CPU + 56G memory each), 1 read-write node (32C CPU + 120G memory)
    4. Start the Milvus service, create a new test collection, and create the SQ8 index on it. Here are the params:

        { "colName": "test", "dimension": 128, "indexFileSize": 4096, "metricType": 1 }
        { "colName": "test", "nlist": 4096, "indexType": 3 }

    5. Continuously insert 200 million entities at 5000 QPS into the write node, spread evenly across 10 partitions (a hedged SDK sketch of steps 4-5 follows).
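
    Steps 4-5 roughly correspond to the following hedged sketch with the legacy Milvus 1.x Python SDK (pymilvus 1.x); mapping metricType 1 to L2 and indexType 3 to IVF_SQ8 is an assumption based on the SDK's enums, and host/port/partition names are illustrative:

        import random
        from milvus import Milvus, IndexType, MetricType

        client = Milvus(host="127.0.0.1", port="19530")

        # { "colName": "test", "dimension": 128, "indexFileSize": 4096, "metricType": 1 }
        client.create_collection({
            "collection_name": "test",
            "dimension": 128,
            "index_file_size": 4096,
            "metric_type": MetricType.L2,
        })

        # { "colName": "test", "nlist": 4096, "indexType": 3 }  ->  IVF_SQ8
        client.create_index("test", IndexType.IVF_SQ8, {"nlist": 4096})

        # Step 5 (one batch): insert 128-dim vectors into one of the 10 partitions.
        client.create_partition("test", "partition_0")
        vectors = [[random.random() for _ in range(128)] for _ in range(5000)]
        client.insert("test", vectors, partition_tag="partition_0")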

    During the execution of the above steps, the following problems will occur

    1. Memory usage of the write node keeps rising until it exceeds 90%, and it stays above 90% even after all tasks have finished.
    2. During insertion, the actual number of features inserted is inconsistent with the number queried through MySQL, differing by 20% to 30%, though they eventually converge.
    3. As memory usage grows, insertion becomes slower and slower; inserting 100 million features takes about 6 hours.
    4. During insertion, the client reports almost all segments as IDMAP index type, indicating the index is still being built, yet CPU usage stays below 10%.

    Expected behavior

    While writes are executing, the memory usage of the write node should not be this high and the CPU usage should be higher. After the writes complete, the memory footprint of the write node should drop close to 0.

    Method of installation

    • [x] Docker/cpu
    • [ ] Docker/gpu
    • [ ] Build from source

    Environment details

    • Hardware/Software conditions (OS, CPU, GPU, Memory) Centos 32 CPU 120G Memory

    • Milvus version (master or released version)

    1.1.0

    Configuration file (settings made in server_config.yaml or milvus.yaml):

    # Copyright (C) 2019-2020 Zilliz. All rights reserved.
    #
    # Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance
    # with the License. You may obtain a copy of the License at
    #
    # http://www.apache.org/licenses/LICENSE-2.0
    #
    # Unless required by applicable law or agreed to in writing, software distributed under the License
    # is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express
    # or implied. See the License for the specific language governing permissions and limitations under the License.
    
    version: 0.5
    
    #----------------------+------------------------------------------------------------+------------+-----------------+
    # Cluster Config       | Description                                                | Type       | Default         |
    #----------------------+------------------------------------------------------------+------------+-----------------+
    # enable               | If running with Mishards, set true, otherwise false.       | Boolean    | false           |
    #----------------------+------------------------------------------------------------+------------+-----------------+
    # role                 | Milvus deployment role: rw / ro                            | Role       | rw              |
    #----------------------+------------------------------------------------------------+------------+-----------------+
    cluster:
      enable: true
      role: rw
    
    #----------------------+------------------------------------------------------------+------------+-----------------+
    # General Config       | Description                                                | Type       | Default         |
    #----------------------+------------------------------------------------------------+------------+-----------------+
    # timezone             | Use UTC-x or UTC+x to specify a time zone.                 | Timezone   | UTC+8           |
    #----------------------+------------------------------------------------------------+------------+-----------------+
    # meta_uri             | URI for metadata storage, using SQLite (for single server  | URI        | sqlite://:@:/   |
    #                      | Milvus) or MySQL (for distributed cluster Milvus).         |            |                 |
    #                      | Format: dialect://username:password@host:port/database     |            |                 |
    #                      | Keep 'dialect://:@:/', 'dialect' can be either 'sqlite' or |            |                 |
    #                      | 'mysql', replace other texts with real values.             |            |                 |
    #----------------------+------------------------------------------------------------+------------+-----------------+
    general:
      timezone: UTC+8
      meta_uri: mysql
    
    #----------------------+------------------------------------------------------------+------------+-----------------+
    # Network Config       | Description                                                | Type       | Default         |
    #----------------------+------------------------------------------------------------+------------+-----------------+
    # bind.address         | IP address that Milvus server monitors.                    | IP         | 0.0.0.0         |
    #----------------------+------------------------------------------------------------+------------+-----------------+
    # bind.port            | Port that Milvus server monitors. Port range (1024, 65535) | Integer    | 19530           |
    #----------------------+------------------------------------------------------------+------------+-----------------+
    # http.enable          | Enable HTTP server or not.                                 | Boolean    | true            |
    #----------------------+------------------------------------------------------------+------------+-----------------+
    # http.port            | Port that Milvus HTTP server monitors.                     | Integer    | 19121           |
    #                      | Port range (1024, 65535)                                   |            |                 |
    #----------------------+------------------------------------------------------------+------------+-----------------+
    network:
      bind.address: 0.0.0.0
      bind.port: 19530
      http.enable: true
      http.port: 19121
    
    #----------------------+------------------------------------------------------------+------------+-----------------+
    # Storage Config       | Description                                                | Type       | Default         |
    #----------------------+------------------------------------------------------------+------------+-----------------+
    # path                 | Path used to save meta data, vector data and index data.   | Path       | /var/lib/milvus |
    #----------------------+------------------------------------------------------------+------------+-----------------+
    # auto_flush_interval  | The interval, in seconds, at which Milvus automatically    | Integer    | 1 (s)           |
    #                      | flushes data to disk.                                      |            |                 |
    #                      | 0 means disable the regular flush.                         |            |                 |
    #----------------------+------------------------------------------------------------+------------+-----------------+
    storage:
      path: /var/lib/milvus
      auto_flush_interval: 1
    
    #----------------------+------------------------------------------------------------+------------+-----------------+
    # WAL Config           | Description                                                | Type       | Default         |
    #----------------------+------------------------------------------------------------+------------+-----------------+
    # enable               | Whether to enable write-ahead logging (WAL) in Milvus.     | Boolean    | true            |
    #                      | If WAL is enabled, Milvus writes all data changes to log   |            |                 |
    #                      | files in advance before implementing data changes. WAL     |            |                 |
    #                      | ensures the atomicity and durability for Milvus operations.|            |                 |
    #----------------------+------------------------------------------------------------+------------+-----------------+
    # recovery_error_ignore| Whether to ignore logs with errors that happens during WAL | Boolean    | false           |
    #                      | recovery. If true, when Milvus restarts for recovery and   |            |                 |
    #                      | there are errors in WAL log files, log files with errors   |            |                 |
    #                      | are ignored. If false, Milvus does not restart when there  |            |                 |
    #                      | are errors in WAL log files.                               |            |                 |
    #----------------------+------------------------------------------------------------+------------+-----------------+
    # buffer_size          | Sum total of the read buffer and the write buffer in Bytes.| String     | 256MB           |
    #                      | buffer_size must be in range [64MB, 4096MB].               |            |                 |
    #                      | If the value you specified is out of range, Milvus         |            |                 |
    #                      | automatically uses the boundary value closest to the       |            |                 |
    #                      | specified value. It is recommended you set buffer_size to  |            |                 |
    #                      | a value greater than the inserted data size of a single    |            |                 |
    #                      | insert operation for better performance.                   |            |                 |
    #----------------------+------------------------------------------------------------+------------+-----------------+
    # path                 | Location of WAL log files.                                 | String     |                 |
    #----------------------+------------------------------------------------------------+------------+-----------------+
    wal:
      enable: true
      recovery_error_ignore: false
      buffer_size: 1GB
      path: /var/lib/milvus/wal
    
    #----------------------+------------------------------------------------------------+------------+-----------------+
    # Cache Config         | Description                                                | Type       | Default         |
    #----------------------+------------------------------------------------------------+------------+-----------------+
    # cache_size           | The size of CPU memory used for caching data for faster    | String     | 4GB             |
    #                      | query. The sum of 'cache_size' and 'insert_buffer_size'    |            |                 |
    #                      | must be less than system memory size.                      |            |                 |
    #----------------------+------------------------------------------------------------+------------+-----------------+
    # insert_buffer_size   | Buffer size used for data insertion.                       | String     | 1GB             |
    #                      | The sum of 'insert_buffer_size' and 'cache_size'           |            |                 |
    #                      | must be less than system memory size.                      |            |                 |
    #----------------------+------------------------------------------------------------+------------+-----------------+
    # preload_collection   | A comma-separated list of collection names that need to    | StringList |                 |
    #                      | be pre-loaded when Milvus server starts up.                |            |                 |
    #                      | '*' means preload all existing tables (single-quote or     |            |                 |
    #                      | double-quote required).                                    |            |                 |
    #----------------------+------------------------------------------------------------+------------+-----------------+
    cache:
      cache_size: 80GB
      insert_buffer_size: 10GB
      preload_collection:
    
    #----------------------+------------------------------------------------------------+------------+-----------------+
    # GPU Config           | Description                                                | Type       | Default         |
    #----------------------+------------------------------------------------------------+------------+-----------------+
    # enable               | Use GPU devices or not.                                    | Boolean    | false           |
    #----------------------+------------------------------------------------------------+------------+-----------------+
    # cache_size           | The size of GPU memory per card used for cache.            | String     | 1GB             |
    #----------------------+------------------------------------------------------------+------------+-----------------+
    # gpu_search_threshold | A Milvus performance tuning parameter. This value will be  | Integer    | 1000            |
    #                      | compared with 'nq' to decide if the search computation will|            |                 |
    #                      | be executed on GPUs only.                                  |            |                 |
    #                      | If nq >= gpu_search_threshold, the search computation will |            |                 |
    #                      | be executed on GPUs only;                                  |            |                 |
    #                      | if nq < gpu_search_threshold, the search computation will  |            |                 |
    #                      | be executed on CPUs only.                                  |            |                 |
    #                      | The SQ8H index is special, if nq < gpu_search_threshold,   |            |                 |
    #                      | the search will be executed on both CPUs and GPUs.         |            |                 |
    #----------------------+------------------------------------------------------------+------------+-----------------+
    # search_devices       | The list of GPU devices used for search computation.       | DeviceList | gpu0            |
    #                      | Must be in format gpux.                                    |            |                 |
    #----------------------+------------------------------------------------------------+------------+-----------------+
    # build_index_devices  | The list of GPU devices used for index building.           | DeviceList | gpu0            |
    #                      | Must be in format gpux.                                    |            |                 |
    #----------------------+------------------------------------------------------------+------------+-----------------+
    
    gpu:
      enable: false
      cache_size: 1GB
      gpu_search_threshold: 1000
      search_devices:
        - gpu0
      build_index_devices:
        - gpu0
    
    #----------------------+------------------------------------------------------------+------------+-----------------+
    # FPGA Config           | Description                                               | Type       | Default         |
    #----------------------+------------------------------------------------------------+------------+-----------------+
    # enable               | Use FPGA devices or not.                                   | Boolean    | false           |
    #----------------------+------------------------------------------------------------+------------+-----------------+
    # search_devices       | The list of FPGA devices used for search computation.      | DeviceList | fpga0           |
    #                      | Must be in format fpgax.                                   |            |                 |
    #----------------------+------------------------------------------------------------+------------+-----------------+
    
    fpga:
       enable: false
       search_devices:
         - fpga0
    #----------------------+------------------------------------------------------------+------------+-----------------+
    # Logs Config          | Description                                                | Type       | Default         |
    #----------------------+------------------------------------------------------------+------------+-----------------+
    # level                | Log level in Milvus. Must be one of debug, info, warning,  | String     | debug           |
    #                      | error, fatal                                               |            |                 |
    #----------------------+------------------------------------------------------------+------------+-----------------+
    # trace.enable         | Whether to enable trace level logging in Milvus.           | Boolean    | true            |
    #----------------------+------------------------------------------------------------+------------+-----------------+
    # path                 | Absolute path to the folder holding the log files.         | String     |                 |
    #----------------------+------------------------------------------------------------+------------+-----------------+
    # max_log_file_size    | The maximum size of each log file, size range              | String     | 1024MB          |
    #                      | [512MB, 4096MB].                                           |            |                 |
    #----------------------+------------------------------------------------------------+------------+-----------------+
    # log_rotate_num       | The maximum number of log files that Milvus keeps for each | Integer    | 0               |
    #                      | logging level, num range [0, 1024], 0 means unlimited.     |            |                 |
    #----------------------+------------------------------------------------------------+------------+-----------------+
    logs:
      level: debug
      trace.enable: true
      path: /var/lib/milvus/logs
      max_log_file_size: 1024MB
      log_rotate_num: 20
    
    #----------------------+------------------------------------------------------------+------------+-----------------+
    # Metric Config        | Description                                                | Type       | Default         |
    #----------------------+------------------------------------------------------------+------------+-----------------+
    # enable               | Enable monitoring function or not.                         | Boolean    | false           |
    #----------------------+------------------------------------------------------------+------------+-----------------+
    # address              | Pushgateway address                                        | IP         | 127.0.0.1       +
    #----------------------+------------------------------------------------------------+------------+-----------------+
    # port                 | Pushgateway port, port range (1024, 65535)                 | Integer    | 9091            |
    #----------------------+------------------------------------------------------------+------------+-----------------+
    metric:
      enable: false
      address: 127.0.0.1
      port: 9091
    
    


  • [Bug]: [chaos][standalone]Milvus hangs at creating index after standalone pod kill chaos deleted


    Is there an existing issue for this?

    • [X] I have searched the existing issues

    Environment

    - Milvus version: `91be4b1`
    - Deployment mode(standalone or cluster): standalone
    - SDK version(e.g. pymilvus v2.0.0rc2):2.0.0rc8.dev23
    - OS(Ubuntu or CentOS): 
    - CPU/Memory: 
    - GPU: 
    - Others:
    

    Current Behavior

    Milvus hangs at search after standalone pod kill chaos deleted

    Expected Behavior

    All operations work well

    Steps To Reproduce

    1. cd `tests/python_client/chaos`
    
    2. modify param value and run script `chaos_test.sh`
    pod="standalone"
    chaos_type="pod_kill"
    
    A common way to reproduce:
    deploy a standalone Milvus with Helm
    use `kubectl delete` to kill the standalone pod multiple times (about 5-10 times in 1 min)
    run hello_milvus.py to check the results
    
    
    

    Anything else?

    standalone.log

  • [Bug]: [benchmark][cluster]  loading 1billion vectors failed, raise error: collection 429437365324811585 has not been loaded to memory or load failed


    Is there an existing issue for this?

    • [X] I have searched the existing issues

    Environment

    - Milvus version:master-20211129-cb952d6
    - Deployment mode(standalone or cluster):cluster
    - SDK version(e.g. pymilvus v2.0.0rc2):pymilvus-2.0.0rc9.dev7
    - OS(Ubuntu or CentOS): 
    - CPU/Memory: 
    - GPU: 
    - Others:
    

    Current Behavior

    client pod:benchmark-tag-8k2zj-989751322

    client logs:

    [2021-11-30 05:05:01,227] [   DEBUG] - Row count: 999875199 in collection: <sift_1b_128_l2> (milvus_benchmark.client:416)
    [2021-11-30 05:05:01,228] [   DEBUG] - 999875199 (milvus_benchmark.runners.base:89)
    [2021-11-30 05:05:01,229] [    INFO] - {'total_time': 49243.11, 'rps': 20307.41, 'ni_time': 2.46} (milvus_benchmark.runners.base:151)
    [2021-11-30 05:05:01,346] [   DEBUG] - Start flush. (milvus_benchmark.runners.locust:428)
    [2021-11-30 05:05:04,305] [   DEBUG] - Milvus flush run in 2.96s (milvus_benchmark.client:52)
    [2021-11-30 05:05:04,305] [   DEBUG] - Fulsh done, during time: 2.96 (milvus_benchmark.runners.locust:431)
    [2021-11-30 05:05:04,311] [   DEBUG] - Row count: 999925199 in collection: <sift_1b_128_l2> (milvus_benchmark.client:416)
    [2021-11-30 05:05:04,311] [   DEBUG] - 999925199 (milvus_benchmark.runners.locust:432)
    [2021-11-30 05:05:04,312] [   DEBUG] - Start build index for last file (milvus_benchmark.runners.locust:434)
    [2021-11-30 05:05:04,313] [    INFO] - Building index start, collection_name: sift_1b_128_l2, index_type: IVF_SQ8, metric_type: L2 (milvus_benchmark.client:273)
    [2021-11-30 05:05:04,313] [    INFO] - {'nlist': 1024} (milvus_benchmark.client:275)
    [2021-11-30 05:05:04,314] [   DEBUG] - collection: sift_1b_128_l2 Index params: {'index_type': 'IVF_SQ8', 'metric_type': 'L2', 'params': {'nlist': 1024}} (milvus_benchmark.client:281)
    [2021-11-30 05:05:52,106] [   DEBUG] - Building index done, collection_name: sift_1b_128_l2, response: Status(code=0, message='') (milvus_benchmark.client:283)
    [2021-11-30 05:05:52,107] [   DEBUG] - Milvus create_index run in 47.79s (milvus_benchmark.client:52)
    [2021-11-30 05:05:52,107] [   DEBUG] - {'flush_time': 2.96, 'build_time': 47.79} (milvus_benchmark.runners.locust:438)
    [2021-11-30 05:05:52,112] [   DEBUG] - Row count: 999925199 in collection: <sift_1b_128_l2> (milvus_benchmark.client:416)
    [2021-11-30 05:05:52,112] [    INFO] - 999925199 (milvus_benchmark.runners.locust:439)
    [2021-11-30 05:05:52,113] [    INFO] - Start load collection (milvus_benchmark.runners.locust:440)
    [2021-11-30 07:13:25,695] [   ERROR] - Error: <BaseException: (code=1, message=err: rpc error: code = Unknown desc = collection 429437365324811585 has not been loaded to memory or load failed
    , /usr/local/go/src/runtime/extern.go:216 runtime.Callers
    /go/src/github.com/milvus-io/milvus/internal/util/trace/stack_trace.go:25 github.com/milvus-io/milvus/internal/util/trace.StackTraceMsg
    /go/src/github.com/milvus-io/milvus/internal/util/trace/stack_trace.go:43 github.com/milvus-io/milvus/internal/util/trace.StackTrace
    /go/src/github.com/milvus-io/milvus/internal/distributed/querycoord/client/client.go:215 github.com/milvus-io/milvus/internal/distributed/querycoord/client.(*Client).recall
    /go/src/github.com/milvus-io/milvus/internal/distributed/querycoord/client/client.go:297 github.com/milvus-io/milvus/internal/distributed/querycoord/client.(*Client).ShowCollections
    /go/src/github.com/milvus-io/milvus/internal/proxy/task.go:2836 github.com/milvus-io/milvus/internal/proxy.(*showCollectionsTask).Execute
    /go/src/github.com/milvus-io/milvus/internal/proxy/task_scheduler.go:458 github.com/milvus-io/milvus/internal/proxy.(*taskScheduler).processTask
    /go/src/github.com/milvus-io/milvus/internal/proxy/task_scheduler.go:486 github.com/milvus-io/milvus/internal/proxy.(*taskScheduler).definitionLoop
    /usr/local/go/src/runtime/asm_amd64.s:1374 runtime.goexit
    )> (pymilvus.client.grpc_handler:69)
    [2021-11-30 07:13:25,734] [   ERROR] - Error: <BaseException: (code=1, message=err: rpc error: code = Unknown desc = collection 429437365324811585 has not been loaded to memory or load failed
    , /usr/local/go/src/runtime/extern.go:216 runtime.Callers
    /go/src/github.com/milvus-io/milvus/internal/util/trace/stack_trace.go:25 github.com/milvus-io/milvus/internal/util/trace.StackTraceMsg
    /go/src/github.com/milvus-io/milvus/internal/util/trace/stack_trace.go:43 github.com/milvus-io/milvus/internal/util/trace.StackTrace
    /go/src/github.com/milvus-io/milvus/internal/distributed/querycoord/client/client.go:215 github.com/milvus-io/milvus/internal/distributed/querycoord/client.(*Client).recall
    /go/src/github.com/milvus-io/milvus/internal/distributed/querycoord/client/client.go:297 github.com/milvus-io/milvus/internal/distributed/querycoord/client.(*Client).ShowCollections
    /go/src/github.com/milvus-io/milvus/internal/proxy/task.go:2836 github.com/milvus-io/milvus/internal/proxy.(*showCollectionsTask).Execute
    /go/src/github.com/milvus-io/milvus/internal/proxy/task_scheduler.go:458 github.com/milvus-io/milvus/internal/proxy.(*taskScheduler).processTask
    /go/src/github.com/milvus-io/milvus/internal/proxy/task_scheduler.go:486 github.com/milvus-io/milvus/internal/proxy.(*taskScheduler).definitionLoop
    /usr/local/go/src/runtime/asm_amd64.s:1374 runtime.goexit
    )> (pymilvus.client.grpc_handler:69)
    [2021-11-30 07:13:25,735] [   ERROR] - <BaseException: (code=1, message=err: rpc error: code = Unknown desc = collection 429437365324811585 has not been loaded to memory or load failed
    , /usr/local/go/src/runtime/extern.go:216 runtime.Callers
    /go/src/github.com/milvus-io/milvus/internal/util/trace/stack_trace.go:25 github.com/milvus-io/milvus/internal/util/trace.StackTraceMsg
    /go/src/github.com/milvus-io/milvus/internal/util/trace/stack_trace.go:43 github.com/milvus-io/milvus/internal/util/trace.StackTrace
    /go/src/github.com/milvus-io/milvus/internal/distributed/querycoord/client/client.go:215 github.com/milvus-io/milvus/internal/distributed/querycoord/client.(*Client).recall
    /go/src/github.com/milvus-io/milvus/internal/distributed/querycoord/client/client.go:297 github.com/milvus-io/milvus/internal/distributed/querycoord/client.(*Client).ShowCollections
    /go/src/github.com/milvus-io/milvus/internal/proxy/task.go:2836 github.com/milvus-io/milvus/internal/proxy.(*showCollectionsTask).Execute
    /go/src/github.com/milvus-io/milvus/internal/proxy/task_scheduler.go:458 github.com/milvus-io/milvus/internal/proxy.(*taskScheduler).processTask
    /go/src/github.com/milvus-io/milvus/internal/proxy/task_scheduler.go:486 github.com/milvus-io/milvus/internal/proxy.(*taskScheduler).definitionLoop
    /usr/local/go/src/runtime/asm_amd64.s:1374 runtime.goexit
    )> (milvus_benchmark.main:117)
    [2021-11-30 07:13:25,742] [   ERROR] - Traceback (most recent call last):
      File "main.py", line 86, in run_suite
        runner.prepare(**cases[0])
      File "/src/milvus_benchmark/runners/locust.py", line 442, in prepare
        self.milvus.load_collection()
      File "/src/milvus_benchmark/client.py", line 48, in wrapper
        result = func(*args, **kwargs)
      File "/src/milvus_benchmark/client.py", line 478, in load_collection
        return self._milvus.load_collection(collection_name, timeout=timeout)
      File "/usr/local/lib/python3.6/site-packages/pymilvus/client/stub.py", line 58, in handler
        raise e
      File "/usr/local/lib/python3.6/site-packages/pymilvus/client/stub.py", line 42, in handler
        return func(self, *args, **kwargs)
      File "/usr/local/lib/python3.6/site-packages/pymilvus/client/stub.py", line 322, in load_collection
        return handler.load_collection("", collection_name=collection_name, timeout=timeout, **kwargs)
      File "/usr/local/lib/python3.6/site-packages/pymilvus/client/grpc_handler.py", line 75, in handler
        raise e
      File "/usr/local/lib/python3.6/site-packages/pymilvus/client/grpc_handler.py", line 67, in handler
        return func(self, *args, **kwargs)
      File "/usr/local/lib/python3.6/site-packages/pymilvus/client/grpc_handler.py", line 823, in load_collection
        self.wait_for_loading_collection(collection_name, timeout)
      File "/usr/local/lib/python3.6/site-packages/pymilvus/client/grpc_handler.py", line 75, in handler
        raise e
      File "/usr/local/lib/python3.6/site-packages/pymilvus/client/grpc_handler.py", line 67, in handler
        return func(self, *args, **kwargs)
      File "/usr/local/lib/python3.6/site-packages/pymilvus/client/grpc_handler.py", line 841, in wait_for_loading_collection
        return self._wait_for_loading_collection_v2(collection_name, timeout)
      File "/usr/local/lib/python3.6/site-packages/pymilvus/client/grpc_handler.py", line 868, in _wait_for_loading_collection_v2
        raise BaseException(response.status.error_code, response.status.reason)
    pymilvus.client.exceptions.BaseException: <BaseException: (code=1, message=err: rpc error: code = Unknown desc = collection 429437365324811585 has not been loaded to memory or load failed
    , /usr/local/go/src/runtime/extern.go:216 runtime.Callers
    /go/src/github.com/milvus-io/milvus/internal/util/trace/stack_trace.go:25 github.com/milvus-io/milvus/internal/util/trace.StackTraceMsg
    /go/src/github.com/milvus-io/milvus/internal/util/trace/stack_trace.go:43 github.com/milvus-io/milvus/internal/util/trace.StackTrace
    /go/src/github.com/milvus-io/milvus/internal/distributed/querycoord/client/client.go:215 github.com/milvus-io/milvus/internal/distributed/querycoord/client.(*Client).recall
    /go/src/github.com/milvus-io/milvus/internal/distributed/querycoord/client/client.go:297 github.com/milvus-io/milvus/internal/distributed/querycoord/client.(*Client).ShowCollections
    /go/src/github.com/milvus-io/milvus/internal/proxy/task.go:2836 github.com/milvus-io/milvus/internal/proxy.(*showCollectionsTask).Execute
    /go/src/github.com/milvus-io/milvus/internal/proxy/task_scheduler.go:458 github.com/milvus-io/milvus/internal/proxy.(*taskScheduler).processTask
    /go/src/github.com/milvus-io/milvus/internal/proxy/task_scheduler.go:486 github.com/milvus-io/milvus/internal/proxy.(*taskScheduler).definitionLoop
    /usr/local/go/src/runtime/asm_amd64.s:1374 runtime.goexit
    )>
     (milvus_benchmark.main:118)
    [2021-11-30 07:13:25,748] [   DEBUG] - {'_version': '0.1', '_type': 'metric', 'run_id': 1638173528, 'mode': 'local', 'server': <milvus_benchmark.metrics.models.server.Server object at 0x7f000ea82358>, 'hardware': <milvus_benchmark.metrics.models.hardware.Hardware object at 0x7f000ea82208>, 'env': <milvus_benchmark.metrics.models.env.Env object at 0x7f000ea82128>, 'status': 'RUN_FAILED', 'err_message': '', 'collection': {'dimension': 128, 'metric_type': 'l2', 'dataset_name': 'sift_1b_128_l2', 'collection_size': 1000000000, 'other_fields': None, 'ni_per': 50000, 'shards_num': None}, 'index': {'index_type': 'ivf_sq8', 'index_param': {'nlist': 1024}}, 'search': None, 'run_params': {'task': {'types': [{'type': 'query', 'weight': 20, 'params': {'top_k': 10, 'nq': 10, 'search_param': {'nprobe': 16}}}, {'type': 'load', 'weight': 1}, {'type': 'get', 'weight': 2, 'params': {'ids_length': 10}}], 'connection_num': 1, 'clients_num': 20, 'spawn_rate': 2, 'during_time': 864000}, 'connection_type': 'single'}, 'metrics': {'type': 'locust_random_performance', 'value': {}}, 'datetime': '2021-11-29 08:12:08.166657', 'type': 'metric'} (milvus_benchmark.metric.api:29)
    

    Expected Behavior

    No response

    Steps To Reproduce

    No response

    Anything else?

    argo task: benchmark-tag-8k2zj

    test yaml: client-configmap: client-random-locust-search-84h-1b server-configmap: server-cluster-8c64m-datanode2-indexnode4-querynode6-nocompaction

    server:

    NAME                                                         READY   STATUS      RESTARTS   AGE     IP             NODE                      NOMINATED NODE   READINESS GATES
    benchmark-tag-8k2zj-1-etcd-0                                 1/1     Running     0          23h     10.97.17.186   qa-node014.zilliz.local   <none>           <none>
    benchmark-tag-8k2zj-1-etcd-1                                 1/1     Running     0          23h     10.97.17.187   qa-node014.zilliz.local   <none>           <none>
    benchmark-tag-8k2zj-1-etcd-2                                 1/1     Running     0          23h     10.97.17.185   qa-node014.zilliz.local   <none>           <none>
    benchmark-tag-8k2zj-1-milvus-datacoord-599c7f4cc8-ghg4z      1/1     Running     0          23h     10.97.8.105    qa-node006.zilliz.local   <none>           <none>
    benchmark-tag-8k2zj-1-milvus-datanode-d865f756f-9x4bq        1/1     Running     0          23h     10.97.11.185   qa-node009.zilliz.local   <none>           <none>
    benchmark-tag-8k2zj-1-milvus-datanode-d865f756f-lnlnv        1/1     Running     0          23h     10.97.11.184   qa-node009.zilliz.local   <none>           <none>
    benchmark-tag-8k2zj-1-milvus-indexcoord-8547476c8f-wvfrp     1/1     Running     0          23h     10.97.9.58     qa-node007.zilliz.local   <none>           <none>
    benchmark-tag-8k2zj-1-milvus-indexnode-6b5d94dc7d-8d8cs      1/1     Running     0          23h     10.97.5.172    qa-node003.zilliz.local   <none>           <none>
    benchmark-tag-8k2zj-1-milvus-indexnode-6b5d94dc7d-cxt5m      1/1     Running     0          23h     10.97.10.101   qa-node008.zilliz.local   <none>           <none>
    benchmark-tag-8k2zj-1-milvus-indexnode-6b5d94dc7d-f6s6k      1/1     Running     0          23h     10.97.15.124   qa-node012.zilliz.local   <none>           <none>
    benchmark-tag-8k2zj-1-milvus-indexnode-6b5d94dc7d-lh785      1/1     Running     0          23h     10.97.13.106   qa-node010.zilliz.local   <none>           <none>
    benchmark-tag-8k2zj-1-milvus-proxy-79bdcb98bd-fww7f          1/1     Running     0          23h     10.97.9.54     qa-node007.zilliz.local   <none>           <none>
    benchmark-tag-8k2zj-1-milvus-querycoord-6fdc5c6566-9z29v     1/1     Running     0          23h     10.97.8.104    qa-node006.zilliz.local   <none>           <none>
    benchmark-tag-8k2zj-1-milvus-querynode-98996b9b8-4mbhq       1/1     Running     0          23h     10.97.12.141   qa-node015.zilliz.local   <none>           <none>
    benchmark-tag-8k2zj-1-milvus-querynode-98996b9b8-brhtn       1/1     Running     0          23h     10.97.12.142   qa-node015.zilliz.local   <none>           <none>
    benchmark-tag-8k2zj-1-milvus-querynode-98996b9b8-cmzmm       1/1     Running     0          23h     10.97.14.236   qa-node011.zilliz.local   <none>           <none>
    benchmark-tag-8k2zj-1-milvus-querynode-98996b9b8-fxprl       1/1     Running     0          23h     10.97.15.125   qa-node012.zilliz.local   <none>           <none>
    benchmark-tag-8k2zj-1-milvus-querynode-98996b9b8-l8lm8       1/1     Running     0          23h     10.97.10.102   qa-node008.zilliz.local   <none>           <none>
    benchmark-tag-8k2zj-1-milvus-querynode-98996b9b8-lqq78       1/1     Running     0          23h     10.97.3.114    qa-node001.zilliz.local   <none>           <none>
    benchmark-tag-8k2zj-1-milvus-rootcoord-7f4d679bc4-8klcn      1/1     Running     0          23h     10.97.9.55     qa-node007.zilliz.local   <none>           <none>
    benchmark-tag-8k2zj-1-minio-0                                1/1     Running     0          23h     10.97.9.60     qa-node007.zilliz.local   <none>           <none>
    benchmark-tag-8k2zj-1-minio-1                                1/1     Running     0          23h     10.97.4.101    qa-node002.zilliz.local   <none>           <none>
    benchmark-tag-8k2zj-1-minio-2                                1/1     Running     0          23h     10.97.6.85     qa-node004.zilliz.local   <none>           <none>
    benchmark-tag-8k2zj-1-minio-3                                1/1     Running     0          23h     10.97.6.86     qa-node004.zilliz.local   <none>           <none>
    benchmark-tag-8k2zj-1-pulsar-autorecovery-58f6b6bbd6-hlpgd   1/1     Running     0          23h     10.97.8.106    qa-node006.zilliz.local   <none>           <none>
    benchmark-tag-8k2zj-1-pulsar-bastion-55b7db56-cd664          1/1     Running     0          23h     10.97.15.123   qa-node012.zilliz.local   <none>           <none>
    benchmark-tag-8k2zj-1-pulsar-bookkeeper-0                    1/1     Running     0          23h     10.97.9.62     qa-node007.zilliz.local   <none>           <none>
    benchmark-tag-8k2zj-1-pulsar-bookkeeper-1                    1/1     Running     0          23h     10.97.13.107   qa-node010.zilliz.local   <none>           <none>
    benchmark-tag-8k2zj-1-pulsar-broker-5f7c58cc86-9zwx9         1/1     Running     0          23h     10.97.14.235   qa-node011.zilliz.local   <none>           <none>
    benchmark-tag-8k2zj-1-pulsar-proxy-7cb5577568-w6v9p          2/2     Running     0          23h     10.97.7.247    qa-node005.zilliz.local   <none>           <none>
    benchmark-tag-8k2zj-1-pulsar-zookeeper-0                     1/1     Running     0          23h     10.97.9.61     qa-node007.zilliz.local   <none>           <none>
    benchmark-tag-8k2zj-1-pulsar-zookeeper-1                     1/1     Running     0          23h     10.97.6.87     qa-node004.zilliz.local   <none>           <none>
    benchmark-tag-8k2zj-1-pulsar-zookeeper-2                     1/1     Running     0          23h     10.97.6.88     qa-node004.zilliz.local   <none>           <none>
    benchmark-tag-8k2zj-1-pulsar-zookeeper-metadata-dq9xb        0/1     Completed   0          23h     10.97.3.113    qa-node001.zilliz.local   <none>           <none>
    
  • Milvus search not working after restarting docker


    Is there an existing issue for this?

    • [X] I have searched the existing issues

    Current Behavior

    Running the reverse image search demo, the images are added to the collection and can be searched.

    When Docker is restarted, the collection still exists (I verified this manually through milvus_cli), but the collection cannot be searched.

    It hangs at the milvus_client.search_vectors method call:

    https://github.com/milvus-io/bootcamp/blob/master/solutions/reverse_image_search/quick_deploy/server/src/operations/search.py#L13

        def do_search(table_name, img_path, model, milvus_client, mysql_cli):
            try:
                if not table_name:
                    table_name = DEFAULT_TABLE
                feat = model.resnet50_extract_feat(img_path)
                vectors = milvus_client.search_vectors(table_name, [feat], TOP_K)
                vids = [str(x.id) for x in vectors[0]]
                paths = mysql_cli.search_by_milvus_ids(vids, table_name)
                distances = [x.distance for x in vectors[0]]
                return paths, distances
            except Exception as e:
                LOGGER.error(" Error with search : {}".format(e))
                sys.exit(1)

    When loading the collection via milvus_cli, the load progress always remains at 0.

    As per the Docker Compose file, the volumes are mounted and contain data generated by Milvus.
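
    A hedged pymilvus 2.x way to check the load state programmatically (the collection name here is an illustrative assumption, not the demo's actual table name):

        from pymilvus import connections, utility, Collection

        connections.connect(host="localhost", port="19530")

        # Reportedly stays at 0 after the restart.
        print(utility.loading_progress("reverse_image_search"))

        # Hangs if loading never completes, matching the search hang described above.
        Collection("reverse_image_search").load()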

    Expected Behavior

    Vector search should work after restarting Docker.

    Steps To Reproduce

    No response

    Environment

    - Milvus version: Milvusdb/milvus v2.0.0-rc6-20210910-020f109
    minio/minio RELEASE.2020-12-03T00-03-10Z
    - Deployment mode(standalone or cluster): standalone
    - SDK version(e.g. pymilvus v2.0.0rc2): pymilvus==2.0.0rc6, milvus-cli==0.1.6
    - OS(Ubuntu or CentOS): Windows 10 20H2 19042.1237
    - CPU/Memory: 32GB
    - GPU: Nvidia MX250
    - Others:
    

    Anything else?

    No response

  • Use singleton Params


    Signed-off-by: Enwei Jiao [email protected]

    Issue: #18300

    What changed in this PR

    1. Define a global variable Params in paramtable/runtime.go.
    2. Make all components use the global Params instead of their private copies.
    3. Use only one NodeID per process, even in standalone mode; the NodeID is saved in the global Params.
  • [Cherry-Pick] BulkInsert now completes when segments are flushed, without checking indexes


    A combination of cherry picking: (1) https://github.com/milvus-io/milvus/pull/21253, and (2) https://github.com/milvus-io/milvus/pull/21578

    /kind improvement

  • [Enhancement]: GetIndexState fails when IndexName is not set


    Is there an existing issue for this?

    • [X] I have searched the existing issues

    What would you like to be added?

    milvus-standalone | [2023/01/09 11:07:35.471 +00:00] [DEBUG] [proxy/impl.go:2376] ["GetIndexState received"] [traceID=31bc39db25db440b] [role=proxy] [db=] [collection=resnet50_vehicle_bigtruck_5] [field=] ["index name"=]

    milvus-standalone | [2023/01/09 11:07:35.471 +00:00] [DEBUG] [proxy/impl.go:2407] ["GetIndexState enqueued"] [traceID=31bc39db25db440b] [role=proxy] [MsgID=438635713120174085] [BeginTs=438635713120174085] [EndTs=438635713120174085] [db=] [collection=resnet50_vehicle_bigtruck_5] [field=] ["index name"=]

    milvus-standalone | [2023/01/09 11:07:35.472 +00:00] [INFO] [indexcoord/index_coord.go:490] ["IndexCoord get index state"] [collectionID=437684186007170009] [indexName=_default_idx]

    milvus-standalone | [2023/01/09 11:07:35.472 +00:00] [ERROR] [indexcoord/index_coord.go:506] ["IndexCoord get index state fail"] [collectionID=437684186007170009] [indexName=_default_idx] ["fail reason"="there is no index on collection: 437684186007170009 with the index name: _default_idx"] [stack="github.com/milvus-io/milvus/internal/indexcoord.(*IndexCoord).GetIndexState\n\t/go/src/github.com/milvus-io/milvus/internal/indexcoord/index_coord.go:506\ngithub.com/milvus-io/milvus/internal/distributed/indexcoord.(*Server).GetIndexState\n\t/go/src/github.com/milvus-io/milvus/internal/distributed/indexcoord/service.go:256\ngithub.com/milvus-io/milvus/internal/proto/indexpb._IndexCoord_GetIndexState_Handler.func1\n\t/go/src/github.com/milvus-io/milvus/internal/proto/indexpb/index_coord.pb.go:2448\ngithub.com/milvus-io/milvus/internal/util/logutil.UnaryTraceLoggerInterceptor\n\t/go/src/github.com/milvus-io/milvus/internal/util/logutil/grpc_interceptor.go:22\ngithub.com/grpc-ecosystem/go-grpc-middleware.ChainUnaryServer.func1.1.1\n\t/go/pkg/mod/github.com/grpc-ecosystem/[email protected]/chain.go:25\ngithub.com/grpc-ecosystem/go-grpc-middleware/tracing/opentracing.UnaryServerInterceptor.func1\n\t/go/pkg/mod/github.com/grpc-ecosystem/[email protected]/tracing/opentracing/server_interceptors.go:38\ngithub.com/grpc-ecosystem/go-grpc-middleware.ChainUnaryServer.func1.1.1\n\t/go/pkg/mod/github.com/grpc-ecosystem/[email protected]/chain.go:25\ngithub.com/grpc-ecosystem/go-grpc-middleware.ChainUnaryServer.func1\n\t/go/pkg/mod/github.com/grpc-ecosystem/[email protected]/chain.go:34\ngithub.com/milvus-io/milvus/internal/proto/indexpb._IndexCoord_GetIndexState_Handler\n\t/go/src/github.com/milvus-io/milvus/internal/proto/indexpb/index_coord.pb.go:2450\ngoogle.golang.org/grpc.(*Server).processUnaryRPC\n\t/go/pkg/mod/google.golang.org/[email protected]/server.go:1283\ngoogle.golang.org/grpc.(*Server).handleStream\n\t/go/pkg/mod/google.golang.org/[email protected]/server.go:1620\ngoogle.golang.org/grpc.(*Server).serveStreams.func1.2\n\t/go/pkg/mod/google.golang.org/[email protected]/server.go:922"] milvus-standalone | [2023/01/09 11:07:35.472 +00:00] [DEBUG] [proxy/impl.go:2443] ["GetIndexState done"] [traceID=31bc39db25db440b] [role=proxy] [MsgID=438635713120174085] [BeginTs=438635713120174085] [EndTs=438635713120174085] [db=] [collection=resnet50_vehicle_bigtruck_5] [field=] ["index name"=_default_idx]

    Why is this needed?

    No response

    Anything else?

    No response

  • [Bug]: datanode panic after refresh `common.chanNamePrefix.cluster` to true


    Is there an existing issue for this?

    • [X] I have searched the existing issues

    Environment

    - Milvus version: master-20230109-9fd9d9cc
    - Deployment mode(standalone or cluster): cluster
    - MQ type(rocksmq, pulsar or kafka):  pulsar
    - SDK version(e.g. pymilvus v2.0.0rc2):
    - OS(Ubuntu or CentOS): 
    - CPU/Memory: 
    - GPU: 
    - Others:
    

    Current Behavior

    1. etcd refresh config
    ./etcdctl --endpoints 10.101.251.162:2379 put config-operator/config/common/chanNamePrefix/cluster true
    
    2. etcd get config keys:
    ./etcdctl --endpoints 10.101.251.162:2379 get --prefix config-operator/config/
    config-operator/config/common.retentionDuration
    0
    config-operator/config/common.topKLimit
    1000
    config-operator/config/common/chanNamePrefix/cluster
    true
    config-operator/config/common/defaultIndexName
    my_index
    config-operator/config/common/entityExpiration
    20
    config-operator/config/dataCoord.compaction.enableAutoCompaction
    false
    config-operator/config/dataCoord/segment/smallProportion
    0.8
    config-operator/config/proxy.maxNameLength
    120
    config-operator/config/proxy/maxNameLength
    128
    config-operator/config/quotaAndLimits/ddl/enabled
    123
    
    3. Run case: flush collection and datanode panics
    @pytest.mark.tags(CaseLabel.L0)
        def test_query(self):
            """
            target: test query
            method: query with term expr
            expected: verify query result
            """
            # create collection, insert default_nb, load collection
            collection_w, vectors = self.init_collection_general(prefix, insert_data=True)[0:2]
            int_values = vectors[0][ct.default_int64_field_name].values.tolist()
            pos = 5
            term_expr = f'{ct.default_int64_field_name} in {int_values[:pos]}'
            res = vectors[0].iloc[0:pos, :1].to_dict('records')
            collection_w.query(term_expr, check_task=CheckTasks.check_query_results, check_items={exp_res: res})
    
            replicas = collection_w.get_replicas()
            log.debug(replicas)
    
    4. datanode panic log:
    config-operator-milvus-datacoord-557955486b-v4l8f              1/1     Running     0               9h
    config-operator-milvus-datanode-76cc748cc7-f5gbr               0/1     Error       6 (29m ago)     9h
    config-operator-milvus-indexcoord-655d8cb95b-g5zfb             1/1     Running     0               9h
    config-operator-milvus-indexnode-6f87d98599-f8tn5              1/1     Running     0               9h
    config-operator-milvus-proxy-9b4858975-wkm7h                   1/1     Running     0               9h
    config-operator-milvus-querycoord-746bd5f4b4-9g7vd             1/1     Running     0               9h
    config-operator-milvus-querynode-6c9d5b88f5-9zdq7              1/1     Running     0               9h
    config-operator-milvus-rootcoord-6788475874-nnj2m              1/1     Running     0               9h
    


    config-operator-milvus-datanode-76cc748cc7-f5gbr_pre.log

    5. del keys
    ./etcdctl --endpoints 10.101.251.162:2379 del config-operator/config/common/chanNamePrefix/cluster
    1
    ./etcdctl --endpoints 10.101.251.162:2379 get config-operator/config/common/chanNamePrefix/cluster
    
    

    After deleting the config key, the datanode returns to Running, but the data previously inserted into the collection is missing: num_entities is 0 and queries return empty results.

    Expected Behavior

    No response

    Steps To Reproduce

    No response

    Milvus Log

    No response

    Anything else?

    No response

  • [Bug]: Manually delete the session key, but the component does not exit


    Is there an existing issue for this?

    • [X] I have searched the existing issues

    Environment

    - Milvus version: master
    - Deployment mode(standalone or cluster): cluster
    - MQ type(rocksmq, pulsar or kafka):    
    - SDK version(e.g. pymilvus v2.0.0rc2):
    - OS(Ubuntu or CentOS): 
    - CPU/Memory: 
    - GPU: 
    - Others:
    

    Current Behavior

    After manually deleting the session key of IndexCoord, DataCoord, or a node, the component does not exit.

    Expected Behavior

    After the session key is manually deleted, the component should exit within one minute at the latest.

    Steps To Reproduce

    No response

    Milvus Log

    No response

    Anything else?

    No response

  • [Bug]: Manually creating an index on an int64 scalar field gets stuck


    Is there an existing issue for this?

    • [X] I have searched the existing issues

    Environment

    - Milvus version:  2.2.0
    - Deployment mode(standalone or cluster):  cluster
    - MQ type(rocksmq, pulsar or kafka):    pulsar
    - SDK version(e.g. pymilvus v2.0.0rc2):  V2.2.0
    - OS(Ubuntu or CentOS):  Ubuntu
    - CPU/Memory: 
    - GPU: 
    - Others:
    

    Current Behavior

    After creating the collection, manually creating an index on the int64 scalar field in ATTU just keeps spinning and never finishes.

    Expected Behavior

    Building an index on a scalar field should complete quickly.

    Steps To Reproduce

    1. Create a collection and insert data
    2. Release the collection in ATTU
    3. Click to create a scalar index
    4. The spinner keeps spinning indefinitely
    

    Milvus Log

    No response

    Anything else?

