An Elasticsearch Migration Tool

Elasticsearch cross-version data migration.

Dec 3rd, 2020: [EN] Cross version Elasticsearch data migration with ESM

Features:

  • Cross-version migration supported
  • Overwrite index name
  • Copy index settings and mappings
  • Support HTTP basic auth
  • Support dumping an index to a local file
  • Support loading an index from a local file
  • Support HTTP proxy
  • Support sliced scroll (Elasticsearch 5.0+)
  • Support running in the background
  • Generate testing data by randomizing the source document id
  • Support renaming field names
  • Support unifying document type names
  • Support specifying which _source fields to return from the source
  • Support query string queries to filter the source data
  • Support renaming source fields while bulk indexing
  • Load generation with --repeat_times and --regenerate_id to amplify the data size

ESM is fast!

Benchmark environment: a 3-node cluster (3 × c5d.4xlarge: 16 cores, 32 GB RAM, 10 Gbps network)

root@ip-172-31-13-181:/tmp# ./esm -s https://localhost:8000 -d https://localhost:8000 -x logs1kw -y logs122 -m elastic:medcl123 -n elastic:medcl123 -w 40 --sliced_scroll_size=60 -b 5 --buffer_count=2000000  --regenerate_id
[12-19 06:31:20] [INF] [main.go:506,main] start data migration..
Scroll 10064570 / 10064570 [=================================================] 100.00% 55s
Bulk 10062602 / 10064570 [==================================================]  99.98% 55s
[12-19 06:32:15] [INF] [main.go:537,main] data migration finished.

It migrated 10,000,000 documents within a minute (Nginx logs generated from kibana_sample_data_logs).

Examples:

copy index index_name from 192.168.1.x to 192.168.1.y:9200

./bin/esm  -s http://192.168.1.x:9200   -d http://192.168.1.y:9200 -x index_name  -w=5 -b=10 -c 10000

copy index src_index from 192.168.1.x to 192.168.1.y:9200 and save it as dest_index

./bin/esm -s http://localhost:9200 -d http://localhost:9200 -x src_index -y dest_index -w=5 -b=100

use HTTP basic auth for the target cluster

./bin/esm -s http://localhost:9200 -x "src_index" -y "dest_index"  -d http://localhost:9201 -n admin:111111

copy settings and override the shard count

./bin/esm -s http://localhost:9200 -x "src_index" -y "dest_index"  -d http://localhost:9201 -m admin:111111 -c 10000 --shards=50  --copy_settings

copy settings and mappings, recreate the target index, filter the source with a query, and refresh after migration

./bin/esm -s http://localhost:9200 -x "src_index" -q=query:phone -y "dest_index"  -d http://localhost:9201  -c 10000 --shards=5  --copy_settings --copy_mappings --force  --refresh

dump Elasticsearch documents to a local file

./bin/esm -s http://localhost:9200 -x "src_index"  -m admin:111111 -c 5000 -q=query:mixer  --refresh -o=dump.bin 

load data from a dump file and bulk-index it into another Elasticsearch instance

./bin/esm -d http://localhost:9200 -y "dest_index"   -n admin:111111 -c 5000 -b 5 --refresh -i=dump.bin

use an HTTP proxy for the target connection

 ./bin/esm -d http://123345.ap-northeast-1.aws.found.io:9200 -y "dest_index"   -n admin:111111  -c 5000 -b 1 --refresh  -i dump.bin  --dest_proxy=http://127.0.0.1:9743

use sliced scroll (available in Elasticsearch 5.0+) to speed up scrolling, and update the shard count

 ./bin/esm -s=http://192.168.3.206:9200 -d=http://localhost:9200 -n=elastic:changeme -f --copy_settings --copy_mappings -x=bestbuykaggle  --sliced_scroll_size=5 --shards=50 --refresh

migrate from 5.x to 6.x and unify all types to doc

./esm -s http://source_es:9200 -x "source_index*"  -u "doc" -w 10 -b 10 -t "10m" -d https://target_es:9200 -m elastic:passwd -n elastic:passwd -c 5000

to migrate to version 7.x, you may need to rename _type to _doc

./esm -s http://localhost:9201 -x "source" -y "target"  -d https://localhost:9200 --rename="_type:type,age:myage" -u "_doc"

filter the migration with a range query

./esm -s https://192.168.3.98:9200 -m elastic:password -o json.out -x kibana_sample_data_ecommerce -q "order_date:[2020-02-01T21:59:02+00:00 TO 2020-03-01T21:59:02+00:00]"

range query on a keyword field, with quote escaping

./esm -s https://192.168.3.98:9200 -m test:123 -o 1.txt -x test1  -q "@timestamp.keyword:[\"2021-01-17 03:41:20\" TO \"2021-03-17 03:41:20\"]"

generate testing data: if input.json contains 10 documents, the following command will ingest 100 documents, which is handy for testing (see the input sketch after the command)

./bin/esm -i input.json -d  http://localhost:9201 -y target-index1  --regenerate_id  --repeat_times=10 
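For reference, a minimal sketch of what a newline-delimited JSON input might look like; the field names author and title are illustrative, not taken from ESM. This assumes --input_file_type=json_line treats each line as one source document; the default input type is dump, which is presumably the format written by -o.

{"author": "alice", "title": "hello world"}
{"author": "bob", "title": "hello again"}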

select source fields

 ./bin/esm -s http://localhost:9201 -x my_index -o dump.json --fields=author,title

rename fields while bulk indexing

./bin/esm -i dump.json -d  http://localhost:9201 -y target-index41  --rename=title:newtitle

use buffer_count to control the memory used by ESM, and use gzip to compress network traffic

./esm -s https://localhost:8000 -d https://localhost:8000 -x logs1kw -y logs122 -m elastic:medcl123 -n elastic:medcl123 --regenerate_id -w 20 --sliced_scroll_size=60 -b 5 --buffer_count=1000000 --compress false 

Download

https://github.com/medcl/esm/releases

Compile:

If the downloaded binaries do not fit your environment, you may try to compile it yourself (see the sketch below). Go is required.

make build

  • go version >= 1.7
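A minimal build sketch, assuming the source lives at https://github.com/medcl/esm (the repository behind the releases page above) and a Go toolchain (>= 1.7) is on the PATH:

# clone the source; prebuilt binaries are on the releases page
git clone https://github.com/medcl/esm.git
cd esm
# build using the repository's Makefile target
make build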

Options

Usage:
  esm [OPTIONS]

Application Options:
  -s, --source=                    source elasticsearch instance, ie: http://localhost:9200
  -q, --query=                     query against source elasticsearch instance, filter data before migrate, ie: name:medcl
  -d, --dest=                      destination elasticsearch instance, ie: http://localhost:9201
  -m, --source_auth=               basic auth of source elasticsearch instance, ie: user:pass
  -n, --dest_auth=                 basic auth of target elasticsearch instance, ie: user:pass
  -c, --count=                     number of documents at a time: ie "size" in the scroll request (10000)
      --buffer_count=              number of buffered documents in memory (100000)
  -w, --workers=                   concurrency number for bulk workers (1)
  -b, --bulk_size=                 bulk size in MB (5)
  -t, --time=                      scroll time (1m)
      --sliced_scroll_size=        size of sliced scroll, to make it work, the size should be > 1 (1)
  -f, --force                      delete destination index before copying
  -a, --all                        copy indexes starting with . and _
      --copy_settings              copy index settings from source
      --copy_mappings              copy index mappings from source
      --shards=                    set a number of shards on newly created indexes
  -x, --src_indexes=               indexes name to copy,support regex and comma separated list (_all)
  -y, --dest_index=                indexes name to save, allow only one indexname, original indexname will be used if not specified
  -u, --type_override=             override type name
      --green                      wait for both hosts cluster status to be green before dump. otherwise yellow is okay
  -v, --log=                       setting log level,options:trace,debug,info,warn,error (INFO)
  -o, --output_file=               output documents of source index into local file
  -i, --input_file=                indexing from local dump file
      --input_file_type=           the data type of input file, options: dump, json_line, json_array, log_line (dump)
      --source_proxy=              set proxy to source http connections, ie: http://127.0.0.1:8080
      --dest_proxy=                set proxy to target http connections, ie: http://127.0.0.1:8080
      --refresh                    refresh after migration finished
      --fields=                    filter source fields, comma separated, ie: col1,col2,col3,...
      --rename=                    rename source fields, comma separated, ie: _type:type, name:myname
  -l, --logstash_endpoint=         target logstash tcp endpoint, ie: 127.0.0.1:5055
      --secured_logstash_endpoint  target logstash tcp endpoint was secured by TLS
      --repeat_times=              repeat the data from source N times to dest output, use align with parameter regenerate_id to amplify the data size
  -r, --regenerate_id              regenerate id for documents, this will override the exist document id in data source
      --compress                   use gzip to compress traffic
  -p, --sleep=                     sleep N seconds after finished a bulk request (-1)

Help Options:
  -h, --help                       Show this help message


FAQ

  • Scroll ID too long: update elasticsearch.yml on the source cluster:
http.max_header_size: 16k
http.max_initial_line_length: 8k

Versions

From   To
1.x    1.x, 2.x, 5.x, 6.x, 7.x
2.x    1.x, 2.x, 5.x, 6.x, 7.x
5.x    1.x, 2.x, 5.x, 6.x, 7.x
6.x    1.x, 2.x, 5.0, 6.x, 7.x
7.x    1.x, 2.x, 5.x, 6.x, 7.x
Owner

Medcl (Developer | Evangelist | Consultant)

Comments
  • The query doesn't take effect.

    Source query: GET source-index/_search?q=@timestamp:[2018-03-08T00:00:00 TO 2018-03-08T02:00:00] returns: { "took": 1, "timed_out": false, "_shards": { "total": 5, "successful": 5, "failed": 0 }, "hits": { "total": 54653, "max_score": 1, "hits": .....

    ./esm -s http://192.168.0.21:9900 -x "source-index" q=@timestamp:[2018-03-08T00:00:00 TO 2018-03-08T02:00:00] -y "dest-index" -d http://192.168.0.185:19400 --sliced_scroll_size=5

    Result: Scroll 150000 / 334089 [===================>------------------------] 44.90% 3s Scroll 334089 / 334089 [=========================================] 100.00% 1m17s Bulk 334062 / 334089 [===========================================] 99.99% 1m53s [03-26 11:08:04] [INF] [main.go:410,main] data migration finished.

    It still did a full copy.

  • What does this error mean: panic: runtime error: invalid memory address or nil pointer dereference?

     ./esm --source=http://192.168.10.141:9200 --dest=http://192.168.10.26:9200 --source_auth=elastic:elastic --src_indexes=* --copy_mappings --copy_settings --shards=3 --log=debug
    [01-23 11:56:24] [DBG] [main.go:536,ClusterVersion] {
      "name" : "es-node03",
      "cluster_name" : "es",
      "cluster_uuid" : "Bi43wnN8QECy8o9lmxJ1IQ",
      "version" : {
        "number" : "7.12.0",
        "build_flavor" : "default",
        "build_type" : "rpm",
        "build_hash" : "78722783c38caa25a70982b5b042074cde5d3b3a",
        "build_date" : "2021-03-18T06:17:15.410153305Z",
        "build_snapshot" : false,
        "lucene_version" : "8.8.0",
        "minimum_wire_compatibility_version" : "6.8.0",
        "minimum_index_compatibility_version" : "6.0.0-beta1"
      },
      "tagline" : "You Know, for Search"
    }
    
    [01-23 11:56:24] [DBG] [main.go:120,main] source es is V7,7.12.0
    [01-23 11:56:35] [DBG] [scroll.go:131,Next] scroll result is empty
    [01-23 11:56:35] [DBG] [main.go:197,func1] closing doc chan
    [01-23 11:56:35] [DBG] [main.go:536,ClusterVersion] {
      "name" : "es-node01",
      "cluster_name" : "es",
      "cluster_uuid" : "GHAC3nVXTkmyba_6MYCXIA",
      "version" : {
        "number" : "7.16.3",
        "build_flavor" : "default",
        "build_type" : "deb",
        "build_hash" : "4e6e4eab2297e949ec994e688dad46290d018022",
        "build_date" : "2022-01-06T23:43:02.825887787Z",
        "build_snapshot" : false,
        "lucene_version" : "8.10.1",
        "minimum_wire_compatibility_version" : "6.8.0",
        "minimum_index_compatibility_version" : "6.0.0-beta1"
      },
      "tagline" : "You Know, for Search"
    }
    
    [01-23 11:56:35] [DBG] [main.go:264,main] target es is V7,7.16.3
    [01-23 11:56:35] [DBG] [main.go:294,main] start process with mappings
    [01-23 11:56:35] [DBG] [v0.go:53,ClusterHealth] http://192.168.10.141:9200/_cluster/health
    [01-23 11:56:35] [DBG] [v0.go:54,ClusterHealth] {"cluster_name":"es","status":"green","timed_out":false,"number_of_nodes":4,"number_of_data_nodes":4,"active_primary_shards":280,"active_shards":560,"relocating_shards":0,"initializing_shards":0,"unassigned_shards":0,"delayed_unassigned_shards":0,"number_of_pending_tasks":0,"number_of_in_flight_fetch":0,"task_max_waiting_in_queue_millis":0,"active_shards_percent_as_number":100.0}
    [01-23 11:56:35] [DBG] [v0.go:53,ClusterHealth] http://192.168.10.26:9200/_cluster/health
    [01-23 11:56:35] [DBG] [v0.go:54,ClusterHealth] {"cluster_name":"es","status":"green","timed_out":false,"number_of_nodes":3,"number_of_data_nodes":3,"active_primary_shards":398,"active_shards":530,"relocating_shards":0,"initializing_shards":0,"unassigned_shards":0,"delayed_unassigned_shards":0,"number_of_pending_tasks":0,"number_of_in_flight_fetch":0,"task_max_waiting_in_queue_millis":0,"active_shards_percent_as_number":100.0}
    Scroll 10000 / 1755555077 [>-----------------------------------------------------------------------------------------------------------------------------------------------------]   0.00% 0s
    Output  0 / 1755555077 [------------------------------------------------------------------------------------------------------------------------------------------------------------]   0.00%
    panic: runtime error: invalid memory address or nil pointer dereference
    [signal SIGSEGV: segmentation violation code=0x1 addr=0x90 pc=0x585820]
    
    goroutine 1 [running]:
    regexp.(*Regexp).doExecute(0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0xc02c7f9ca0, 0x11, 0x0, 0x0, ...)
    	/usr/local/go/src/regexp/exec.go:527 +0x560
    regexp.(*Regexp).doMatch(...)
    	/usr/local/go/src/regexp/exec.go:514
    regexp.(*Regexp).MatchString(...)
    	/usr/local/go/src/regexp/regexp.go:525
    main.(*ESAPIV7).GetIndexMappings(0xc0000963c0, 0xac3800, 0x7ffd9b061707, 0x1, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0)
    	/Users/medcl/go/src/infini.sh/esm/v7.go:180 +0x825
    main.main()
    	/Users/medcl/go/src/infini.sh/esm/main.go:325 +0xb2a
    
    
  • -q query can't find documents from source

    My updated_at field mapping is:

    "updated_at": { "type": "date", "format": "yyyy-MM-dd HH:mm:ss" },


    When I use -q updated_at:"[2020-06-01T00:00:00 TO 2020-06-29T23:59:59]"

    it prints the error: [07-01 16:54:57] [ERR] [main.go:161,main] can't find documents from source.

    But the documents do contain data in that range:

    "_id": "35227656", "_score": 0, "_source": { "updated_at": "2020-06-29 14:58:38" } },


    When I use -q updated_at:"[2020-06-01 00:00:00 TO 2020-06-29 23:59:59]" it prints the error: "reason":"Failed to parse query [updated_at:[2020-06-01 00:00:00 TO 2020-06-29 23:59:59]]",

    Why?

  • [10-21 16:51:45] [ERR] [v0.go:66,Bulk] data is empty, skip

    esm -s http://xxxx:9200 -x "base" -d http://XXXX:8200 -c 10000 -w=5 -b=50 --shards=5 --copy_settings --copy_mappings --force reports: [10-21 16:51:45] [ERR] [v0.go:66,Bulk] data is empty, skip. Is this normal?

  • It can't run in the background; it only works in a terminal.

    Every run has to be done in a terminal to work. When run in the background, it errors with:

    panic: Can't get terminal settings: inappropriate ioctl for device

    goroutine 1 [running]: main.main() /Users/medcl/BTSync/github/elasticsearch-migration/main.go:174 +0x4b7b

    How to reproduce, my shell script:

    for i in $datelist; do
      for x in `curl -s ${es1}/_cat/indices | awk '{print $3}' | grep ${i}`; do
        /opt/bin/linux64/esm -s ${es1} -d ${es2} -w 5 -b 40 -c 10000 -x $x -f --copy_mappings >/dev/null
        if [ "$?" -eq 0 ]; then
          es1_doc_num=`curl -s ${es1}/_cat/indices/$x | awk '{print $(NF-3)}'`
          es2_doc_num=`curl -s ${es2}/_cat/indices/$x | awk '{print $(NF-3)}'`
          doc_diff=`expr ${es1_doc_num} - ${es2_doc_num}`
          echo "$x is migrationed done, the ${x}_doc_diff is $doc_diff "
        else
          echo "$x is migrated err"
        fi
      done
    done

    Running this script with & to put it in the background fails, and it also fails when run from crontab. Thanks.

  • A couple of questions about migration; I can't find the information anywhere.

    • While the migration is in progress, do I need to stop reads and writes to my old cluster?

    • When the migration is done, can I just connect to the new cluster and continue using it? That is, will all the indexes be in place, with new writes continuing to the same (new) indexes?

    • Can I stop the migration at any time? Can it somehow damage my old cluster/indexes? Thanks for the help.

  • Migrating index data from 5.5 to 7.10 fails with an error

    ESM version: v0.6.1. I first used elasticdump to export the settings and mapping from ES 5.5, modified them (mainly removed the type), and imported them into ES 7.10 without any problem.

    Importing with esm fails with: {"index":{"_index":".monitoring-kibana-6-2022.06.03","_type":"doc","_id":"AYErz4uccUkBh-HnCVHG","status":400,"error":{"type":"illegal_argument_exception","reason":"mapper [cluster_uuid] cannot be changed from type [keyword] to [text]"}}}

    The command: migrator-linux-amd64 -s http://localhost::9200 -m elastic:elastic -d http://localhost:9201 -x ".monitoring-kibana-6-2022.06.03"

    Part of the mapping:

    ".monitoring-kibana-6-2022.06.03" : {
        "mappings" : {
          "dynamic" : "false",
          "properties" : {
            "cluster_uuid" : {
              "type" : "keyword"
            },
            "kibana_stats" : {
    

    Part of the data: {"_id":"AYEm3blscUkBh-HnCRDJ","_index":".monitoring-kibana-6-2022.06.03","_score":1.0,"_source":{"cluster_uuid":"bnipfW_oSuuNKSwhx3Hqeg","kibana_stats":{"concurrent_connections":98,"kibana":{"host":"test","name":"test","snapshot":false,"status":"green","transport_address":"0.0.0.0:5601","uuid":"e8f6a016-52c9-4ed5-8ea7-ef05ffcbc461","version":"5.5.3"},"os":{"load":{"15m":0.14013671875,"1m":0.3447265625,"5m":0.21240234375},"memory":{"free_in_bytes":298958848,"total_in_bytes":1929297920,"used_in_bytes":1630339072},"uptime_in_millis":68987584000},"process":{"event_loop_delay":0.6322479248046875,"memory":{"heap":{"size_limit":1501560832,"total_in_bytes":118280192,"used_in_bytes":105647368},"resident_set_size_in_bytes":149434368},"uptime_in_millis":5471499671},"requests":{"disconnects":0,"status_codes":{"302":1},"total":1},"response_times":{"average":8,"max":8},"timestamp":"2022-06-03T00:01:13.946Z"},"source_node":{"attributes":{"ml.enabled":"true","ml.max_open_jobs":"10"},"name":"KLn5bOO","uuid":"KLn5bOOATXWmSFYMUyptrA"},"timestamp":"2022-06-03T00:01:14.088Z","type":"kibana_stats"},"_type":"doc"}

    What is the cause?

  • Document count dropped noticeably when migrating from 5.5 to 7.11

    esm -s http://192.168.8.100:9200 -x "resources" -y "search" -d http://192.168.2.14:9200 --rename="_type:type" -u "_doc" -w=5 -b=1

    [03-16 11:06:42] [INF] [main.go:474,main] start data migration.. Scroll 1533416 / 1533416 [=======================================================================================================] 100.00% 7m44s Bulk 1532879 / 1533416 [=========================================================================================================] 99.96% 7m44s [03-16 11:14:27] [INF] [main.go:505,main] data migration finished.

    The data is roughly 500+ MB. In ES 7.11 I see only 1,397,748 documents, fewer than 1,533,416, and I don't know what caused this. My old index has multiple types; could that be the reason?

  • --copy_mapping is incomplete?

    source version: 5.6.16, dest version: 5.6.16

    ./bin/linux64/esm -s http://127.0.0.1:9202 -d http://127.0.0.2:9200 -x _all --copy_settings --copy_mappings --shards=4 --refresh

    source settings: "number_of_replicas": "1"; dest settings: "number_of_replicas": "0"

    I found a problem with this setting in my example, and I don't know whether more settings are affected.

  • An HTTP line is larger than 4096 bytes: is the index content too long?

    Test versions: ES 6.5.4 >> ES 7.2.0. Commands tested:

    ./esm -s http://10.27.69.118:9200 -d http://10.81.176.31:9200 -a -w=5 -b=10 -c=10000
    ./esm -s http://10.27.69.118:9200 -d http://10.81.176.31:9200 -a -w=5 -b=10 -c=1000
    ./esm -s http://10.27.69.118:9200 -d http://10.81.176.31:9200 -a -w=5 -b=5 -c=1000
    ./esm -s http://10.27.69.118:9200 -d http://10.81.176.31:9200 -a -w=5 -b=5 -c=100

    [08-21 09:08:31] [ERR] [scroll.go:49,Next] {"error":{"root_cause":[{"type":"too_long_frame_exception","reason":"An HTTP line is larger than 4096 bytes."}],"type":"too_long_frame_exception","reason":"An HTTP line is larger than 4096 bytes."},"status":400}

    How should I understand this error, and how can I fix it? Is it a matter of setting --bulk_size?

  • help

    @medcl

    When I run esm with parameters like

    ./esm -s http://old-es-ip -d http://new-es-ip -x log-cmdb -y log-cmdb-prod -u doc -w 20 -b 1 -t "100m" --log error --repeat_times=2
    

    I got some errors:

    [05-18 15:27:15] [ERR] [v0.go:80,Bulk] server error: <html>
    <head><title>413 Request Entity Too Large</title></head>
    <body bgcolor="white">
    <center><h1>413 Request Entity Too Large</h1></center>
    <hr><center>nginx/1.14.0</center>
    </body>
    </html>
    

    How can I fix it?

    I thought it was because of nginx, but -b was already 1 MB.

  • Can the source cluster's mappings be copied to the target cluster during migration?

    The errors are as follows:

    [10-21 18:22:02] [ERR] [main.go:436,main] server error: {"error":"Incorrect HTTP method for uri [//wcrm_groupchat] and method [PUT], allowed: [POST]","status":405}
    [10-21 18:22:02] [ERR] [main.go:436,main] server error: {"error":"Incorrect HTTP method for uri [//wcrm_customer] and method [PUT], allowed: [POST]","status":405}
    [10-21 18:22:02] [ERR] [main.go:436,main] server error: {"error":"Incorrect HTTP method for uri [//wcrm_staff] and method [PUT], allowed: [POST]","status":405}
    [10-21 18:22:02] [ERR] [main.go:436,main] server error: {"error":"Incorrect HTTP method for uri [//es_comment] and method [PUT], allowed: [POST]","status":405}
    [10-21 18:22:02] [ERR] [main.go:436,main] server error: {"error":"Incorrect HTTP method for uri [//.security-7] and method [PUT], allowed: [POST]","status":405}
    [10-21 18:22:02] [ERR] [main.go:436,main] server error: {"error":"Incorrect HTTP method for uri [//es_content] and method [PUT], allowed: [POST]","status":405}
    [10-21 18:22:02] [ERR] [buffer.go:61,String] http://159.75.227.164:19200//.security-7/_mapping
    [10-21 18:22:02] [ERR] [v7.go:224,UpdateIndexMapping] 
    [10-21 18:22:02] [ERR] [v7.go:221,UpdateIndexMapping] server error: {"error":{"root_cause":[{"type":"illegal_argument_exception","reason":"Types cannot be provided in put mapping requests, unless the include_type_name parameter is set to true."}],"type":"illegal_argument_exception","reason":"Types cannot be provided in put mapping requests, unless the include_type_name parameter is set to true."},"status":400}
    panic: server error: {"error":{"root_cause":[{"type":"illegal_argument_exception","reason":"Types cannot be provided in put mapping requests, unless the include_type_name parameter is set to true."}],"type":"illegal_argument_exception","reason":"Types cannot be provided in put mapping requests, unless the include_type_name parameter is set to true."},"status":400}
    
    
  • 420 million documents in total, but it exited on its own after syncing only 10 million

    [linux@linux43 linux]$ ./esm -s http://localhost:9200 -d http://localhost:9204 -x my-data-2021-39 -w=10 -c 8888 -m name:pw -n name:pw my-data-2021-39

    [09-26 18:27:01] [INF] [main.go:474,main] start data migration.. Scroll 10016776 / 423557337 [===>------------------------------------------------------------------------] 2.36% 1h11m3s Bulk 10008837 / 423557337 [===>-------------------------------------------------------------------------] 2.36% 1h11m3s [09-26 19:38:04] [INF] [main.go:505,main] data migration finished.
