Elastic

This is a development branch that is actively being worked on. DO NOT USE IN PRODUCTION! If you want to use stable versions of Elastic, please use Go modules for the 7.x release (or later) or a dependency manager like dep for earlier releases.

Elastic is an Elasticsearch client for the Go programming language.

See the wiki for additional information about Elastic.

Releases

The release branches (e.g. release-branch.v7) are actively being worked on and can break at any time. If you want to use stable versions of Elastic, please use Go modules.

Here's the version matrix:

Elasticsearch version  Elastic version  Package URL                    Remarks
7.x                    7.0              github.com/olivere/elastic/v7  Use Go modules.
6.x                    6.0              github.com/olivere/elastic     Use a dependency manager (see below).
5.x                    5.0              gopkg.in/olivere/elastic.v5    Actively maintained.
2.x                    3.0              gopkg.in/olivere/elastic.v3    Deprecated. Please update.
1.x                    2.0              gopkg.in/olivere/elastic.v2    Deprecated. Please update.
0.9-1.3                1.0              gopkg.in/olivere/elastic.v1    Deprecated. Please update.

Example:

You have installed Elasticsearch 7.0.0 and want to use Elastic. As listed above, you should use Elastic 7.0 (code is in release-branch.v7).

To use the required version of Elastic in your application, you should use Go modules to manage dependencies. Make sure to use a version such as 7.0.0 or later.

To use Elastic, import:

import "github.com/olivere/elastic/v7"
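To pin the dependency, a go.mod along these lines works (the module name and the exact 7.x version shown are illustrative; pick the latest 7.x tag):

```
module example.com/myapp

go 1.17

require github.com/olivere/elastic/v7 v7.0.0
```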

Elastic 7.0

Elastic 7.0 targets Elasticsearch 7.x which was released on April 10th 2019.

As always with a major version, there are a lot of breaking changes. We used this as an opportunity to clean up and refactor Elastic, as we already did in earlier (major) releases.

Elastic 6.0

Elastic 6.0 targets Elasticsearch 6.x which was released on 14th November 2017.

Notice that there are a lot of breaking changes in Elasticsearch 6.0 and we used this as an opportunity to clean up and refactor Elastic as we did in the transition from earlier versions of Elastic.

Elastic 5.0

Elastic 5.0 targets Elasticsearch 5.0.0 and later. Elasticsearch 5.0.0 was released on 26th October 2016.

Notice that there are a lot of breaking changes in Elasticsearch 5.0, and we used this as an opportunity to clean up and refactor Elastic as we did in the transition from Elastic 2.0 (for Elasticsearch 1.x) to Elastic 3.0 (for Elasticsearch 2.x).

Furthermore, the jump in version numbers will give us a chance to be in sync with the Elastic Stack.

Elastic 3.0

Elastic 3.0 targets Elasticsearch 2.x and is published via gopkg.in/olivere/elastic.v3.

Elastic 3.0 will only get critical bug fixes. You should update to a recent version.

Elastic 2.0

Elastic 2.0 targets Elasticsearch 1.x and is published via gopkg.in/olivere/elastic.v2.

Elastic 2.0 will only get critical bug fixes. You should update to a recent version.

Elastic 1.0

Elastic 1.0 is deprecated. You should really update Elasticsearch and Elastic to a recent version.

However, if you cannot update for some reason, don't worry. Version 1.0 is still available. All you need to do is go-get it and change your import path as described above.

Status

We have been using Elastic in production since 2012. Elastic is stable, but the API changes now and then. We strive for API compatibility; however, Elasticsearch sometimes introduces breaking changes and we sometimes have to adapt.

Having said that, there have been no big API changes that required you to rewrite your application wholesale. More often than not it's a matter of renaming APIs and adding/removing features so that Elastic stays in sync with Elasticsearch.

Elastic has been used in production from Elasticsearch 0.90 up to recent 7.x versions. We recently switched to GitHub Actions for testing. Before that, we used Travis CI successfully for years.

Elasticsearch has quite a few features. Most of them are implemented by Elastic. I add features and APIs as required. It's straightforward to implement missing pieces. I'm accepting pull requests :-)

Having said that, I hope you find the project useful.

Getting Started

The first thing you do is to create a Client. The client connects to Elasticsearch on http://127.0.0.1:9200 by default.

You typically create one client for your app. Here's a complete example of creating a client, creating an index, adding a document, executing a search etc.

An example is available here.

Here's a link to a complete working example for v6.
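For orientation, here is a hedged sketch of that flow with the v7 API: create a client, make sure an index exists, index a document, and run a term query. It assumes a running Elasticsearch on the default http://127.0.0.1:9200; the tweets index and the tweet struct are made up for illustration.

```go
package main

import (
	"context"
	"fmt"
	"log"

	"github.com/olivere/elastic/v7"
)

// tweet is an illustrative document type.
type tweet struct {
	User    string `json:"user"`
	Message string `json:"message"`
}

func main() {
	ctx := context.Background()

	// Connect to the default URL (http://127.0.0.1:9200).
	client, err := elastic.NewClient()
	if err != nil {
		log.Fatal(err)
	}

	// Create the index if it does not exist yet.
	exists, err := client.IndexExists("tweets").Do(ctx)
	if err != nil {
		log.Fatal(err)
	}
	if !exists {
		if _, err := client.CreateIndex("tweets").Do(ctx); err != nil {
			log.Fatal(err)
		}
	}

	// Index a document and wait for it to become searchable.
	_, err = client.Index().
		Index("tweets").
		Id("1").
		BodyJson(tweet{User: "olivere", Message: "Take Five"}).
		Refresh("wait_for").
		Do(ctx)
	if err != nil {
		log.Fatal(err)
	}

	// Search with a term query.
	res, err := client.Search().
		Index("tweets").
		Query(elastic.NewTermQuery("user", "olivere")).
		Do(ctx)
	if err != nil {
		log.Fatal(err)
	}
	fmt.Printf("found %d hits\n", res.TotalHits())
}
```

Because it talks to a live cluster, this sketch is not self-checking; run it against a local Elasticsearch to try it out.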

Here are a few tips on how to get used to Elastic:

  1. Head over to the Wiki for detailed information and topics like e.g. how to add a middleware or how to connect to AWS.
  2. If you are unsure how to implement something, read the tests (all _test.go files). They not only serve as a guard against changes, but also as a reference.
  3. The recipes contain small examples of how to implement something, e.g. bulk indexing, scrolling, etc.

API Status

Document APIs

  • Index API
  • Get API
  • Delete API
  • Delete By Query API
  • Update API
  • Update By Query API
  • Multi Get API
  • Bulk API
  • Reindex API
  • Term Vectors
  • Multi termvectors API

Search APIs

  • Search
  • Search Template
  • Multi Search Template
  • Search Shards API
  • Suggesters
    • Term Suggester
    • Phrase Suggester
    • Completion Suggester
    • Context Suggester
  • Multi Search API
  • Count API
  • Validate API
  • Explain API
  • Profile API
  • Field Capabilities API

Aggregations

  • Metrics Aggregations
    • Avg
    • Boxplot (X-pack)
    • Cardinality
    • Extended Stats
    • Geo Bounds
    • Geo Centroid
    • Matrix stats
    • Max
    • Median absolute deviation
    • Min
    • Percentile Ranks
    • Percentiles
    • Rate (X-pack)
    • Scripted Metric
    • Stats
    • String stats (X-pack)
    • Sum
    • T-test (X-pack)
    • Top Hits
    • Top metrics (X-pack)
    • Value Count
    • Weighted avg
  • Bucket Aggregations
    • Adjacency Matrix
    • Auto-interval Date Histogram
    • Children
    • Composite
    • Date Histogram
    • Date Range
    • Diversified Sampler
    • Filter
    • Filters
    • Geo Distance
    • Geohash Grid
    • Geotile grid
    • Global
    • Histogram
    • IP Range
    • Missing
    • Nested
    • Parent
    • Range
    • Rare terms
    • Reverse Nested
    • Sampler
    • Significant Terms
    • Significant Text
    • Terms
    • Variable width histogram
  • Pipeline Aggregations
    • Avg Bucket
    • Bucket Script
    • Bucket Selector
    • Bucket Sort
    • Cumulative cardinality (X-pack)
    • Cumulative Sum
    • Derivative
    • Extended Stats Bucket
    • Inference bucket (X-pack)
    • Max Bucket
    • Min Bucket
    • Moving Average
    • Moving function
    • Moving percentiles (X-pack)
    • Normalize (X-pack)
    • Percentiles Bucket
    • Serial Differencing
    • Stats Bucket
    • Sum Bucket
  • Aggregation Metadata

Indices APIs

  • Create Index
  • Delete Index
  • Get Index
  • Indices Exists
  • Open / Close Index
  • Shrink Index
  • Rollover Index
  • Put Mapping
  • Get Mapping
  • Get Field Mapping
  • Types Exists
  • Index Aliases
  • Update Indices Settings
  • Get Settings
  • Analyze
    • Explain Analyze
  • Index Templates
  • Indices Stats
  • Indices Segments
  • Indices Recovery
  • Indices Shard Stores
  • Clear Cache
  • Flush
    • Synced Flush
  • Refresh
  • Force Merge

Index Lifecycle Management APIs

  • Create Policy
  • Get Policy
  • Delete Policy
  • Move to Step
  • Remove Policy
  • Retry Policy
  • Get Ilm Status
  • Explain Lifecycle
  • Start Ilm
  • Stop Ilm

cat APIs

  • cat aliases
  • cat allocation
  • cat count
  • cat fielddata
  • cat health
  • cat indices
  • cat master
  • cat nodeattrs
  • cat nodes
  • cat pending tasks
  • cat plugins
  • cat recovery
  • cat repositories
  • cat thread pool
  • cat shards
  • cat segments
  • cat snapshots
  • cat templates

Cluster APIs

  • Cluster Health
  • Cluster State
  • Cluster Stats
  • Pending Cluster Tasks
  • Cluster Reroute
  • Cluster Update Settings
  • Nodes Stats
  • Nodes Info
  • Nodes Feature Usage
  • Remote Cluster Info
  • Task Management API
  • Nodes hot_threads
  • Cluster Allocation Explain API

Query DSL

  • Match All Query
  • Inner hits
  • Full text queries
    • Match Query
    • Match Phrase Query
    • Match Phrase Prefix Query
    • Multi Match Query
    • Common Terms Query
    • Query String Query
    • Simple Query String Query
  • Term level queries
    • Term Query
    • Terms Query
    • Terms Set Query
    • Range Query
    • Exists Query
    • Prefix Query
    • Wildcard Query
    • Regexp Query
    • Fuzzy Query
    • Type Query
    • Ids Query
  • Compound queries
    • Constant Score Query
    • Bool Query
    • Dis Max Query
    • Function Score Query
    • Boosting Query
  • Joining queries
    • Nested Query
    • Has Child Query
    • Has Parent Query
    • Parent Id Query
  • Geo queries
    • GeoShape Query
    • Geo Bounding Box Query
    • Geo Distance Query
    • Geo Polygon Query
  • Specialized queries
    • Distance Feature Query
    • More Like This Query
    • Script Query
    • Script Score Query
    • Percolate Query
  • Span queries
    • Span Term Query
    • Span Multi Term Query
    • Span First Query
    • Span Near Query
    • Span Or Query
    • Span Not Query
    • Span Containing Query
    • Span Within Query
    • Span Field Masking Query
  • Minimum Should Match
  • Multi Term Query Rewrite

Modules

  • Snapshot and Restore
    • Repositories
    • Snapshot get
    • Snapshot create
    • Snapshot delete
    • Restore
    • Snapshot status
    • Monitoring snapshot/restore status
    • Stopping currently running snapshot and restore
  • Scripting
    • GetScript
    • PutScript
    • DeleteScript

Sorting

  • Sort by score
  • Sort by field
  • Sort by geo distance
  • Sort by script
  • Sort by doc

Scrolling

Scrolling is supported via a ScrollService. It supports an iterator-like interface. The ClearScroll API is implemented as well.

A pattern for efficiently scrolling in parallel is described in the Wiki.
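A minimal iterator-style loop might look like the sketch below (the tweets index name is illustrative, and it assumes a running cluster); ScrollService.Do returns io.EOF once the scroll is exhausted:

```go
package main

import (
	"context"
	"fmt"
	"io"
	"log"

	"github.com/olivere/elastic/v7"
)

func main() {
	ctx := context.Background()
	client, err := elastic.NewClient()
	if err != nil {
		log.Fatal(err)
	}

	scroll := client.Scroll("tweets").Size(100)
	for {
		res, err := scroll.Do(ctx)
		if err == io.EOF {
			break // no more results
		}
		if err != nil {
			log.Fatal(err)
		}
		for _, hit := range res.Hits.Hits {
			fmt.Println(string(hit.Source)) // raw _source of each hit
		}
	}
	// Release scroll resources on the server when done.
	_ = scroll.Clear(ctx)
}
```

The parallel-scrolling pattern mentioned above builds on the same loop, with one goroutine per slice.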

How to contribute

Read the contribution guidelines.

Credits

Thanks a lot for the great folks working hard on Elasticsearch and Go.

Elastic uses portions of the uritemplates library by Joshua Tacoma, backoff by Cenk Altı and leaktest by Ian Chiles.

LICENSE

MIT-LICENSE. See LICENSE or the LICENSE file provided in the repository for details.

Comments
  • "No ElasticSearch Node Available"

    Please use the following questions as a guideline to help me answer your issue/question without further inquiry. Thank you.

    Which version of Elastic are you using?

    [ ] elastic.v2 (for Elasticsearch 1.x)
    [x] elastic.v3 (for Elasticsearch 2.x)

    Please describe the expected behavior

    NewClient(elastic.SetURL("http://:9200")) would correctly generate a new Client object connecting to the node

    Please describe the actual behavior

    "no ElasticSearch node available"

    Any steps to reproduce the behavior?

    elastic.NewClient(elastic.SetURL("http://:9200"))

  • Problems on connect

    @dashaus, I copied it over from #57:

    Hi, I have the same problem here:

    panic: main: conn db: no Elasticsearch node available

    goroutine 1 [running]:
    log.Panicf(0x84de50, 0x11, 0xc2080c7e90, 0x1, 0x1)
        /usr/local/go/src/log/log.go:314 +0xd0
    main.init·1()
        /Users/emilio/go/src/monoculum/init.go:40 +0x348
    main.init()
        /Users/emilio/go/src/monoculum/main.go:334 +0xa4
    
    goroutine 526 [select]:
    net/http.(*persistConn).roundTrip(0xc2088ad1e0, 0xc2086a9d50, 0x0, 0x0, 0x0)
        /usr/local/go/src/net/http/transport.go:1082 +0x7ad
    net/http.(*Transport).RoundTrip(0xc20806c000, 0xc2086f6000, 0xc20873ff50, 0x0, 0x0)
        /usr/local/go/src/net/http/transport.go:235 +0x558
    net/http.send(0xc2086f6000, 0xed4f18, 0xc20806c000, 0x21, 0x0, 0x0)
        /usr/local/go/src/net/http/client.go:219 +0x4fc
    net/http.(*Client).send(0xc08b00, 0xc2086f6000, 0x21, 0x0, 0x0)
        /usr/local/go/src/net/http/client.go:142 +0x15b
    net/http.(*Client).doFollowingRedirects(0xc08b00, 0xc2086f6000, 0x97cd00, 0x0, 0x0, 0x0)
        /usr/local/go/src/net/http/client.go:367 +0xb25
    net/http.(*Client).Do(0xc08b00, 0xc2086f6000, 0xc20873fce0, 0x0, 0x0)
        /usr/local/go/src/net/http/client.go:174 +0xa4
    github.com/olivere/elastic.(*Client).sniffNode(0xc208659d10, 0xc208569920, 0x15, 0x0, 0x0, 0x0)
        /Users/emilio/go/src/github.com/olivere/elastic/client.go:543 +0x16a
    github.com/olivere/elastic.func·014(0xc208569920, 0x15)
        /Users/emilio/go/src/github.com/olivere/elastic/client.go:508 +0x47
    created by github.com/olivere/elastic.(*Client).sniff
        /Users/emilio/go/src/github.com/olivere/elastic/client.go:508 +0x744
    
    goroutine 525 [chan receive]:
    database/sql.(*DB).connectionOpener(0xc2086de960)
        /usr/local/go/src/database/sql/sql.go:589 +0x4c
    created by database/sql.Open
        /usr/local/go/src/database/sql/sql.go:452 +0x31c
    
    goroutine 529 [IO wait]:
    net.(*pollDesc).Wait(0xc2084fe370, 0x72, 0x0, 0x0)
        /usr/local/go/src/net/fd_poll_runtime.go:84 +0x47
    net.(*pollDesc).WaitRead(0xc2084fe370, 0x0, 0x0)
        /usr/local/go/src/net/fd_poll_runtime.go:89 +0x43
    net.(*netFD).Read(0xc2084fe310, 0xc208709000, 0x1000, 0x1000, 0x0, 0xed4d48, 0xc2086a9ec8)
        /usr/local/go/src/net/fd_unix.go:242 +0x40f
    net.(*conn).Read(0xc20896a800, 0xc208709000, 0x1000, 0x1000, 0x0, 0x0, 0x0)
        /usr/local/go/src/net/net.go:121 +0xdc
    net/http.noteEOFReader.Read(0xef0410, 0xc20896a800, 0xc2088ad238, 0xc208709000, 0x1000, 0x1000, 0xeb7010, 0x0, 0x0)
        /usr/local/go/src/net/http/transport.go:1270 +0x6e
    net/http.(*noteEOFReader).Read(0xc208569b40, 0xc208709000, 0x1000, 0x1000, 0xc207f6957f, 0x0, 0x0)
        <autogenerated>:125 +0xd4
    bufio.(*Reader).fill(0xc2088f3c80)
        /usr/local/go/src/bufio/bufio.go:97 +0x1ce
    bufio.(*Reader).Peek(0xc2088f3c80, 0x1, 0x0, 0x0, 0x0, 0x0, 0x0)
        /usr/local/go/src/bufio/bufio.go:132 +0xf0
    net/http.(*persistConn).readLoop(0xc2088ad1e0)
        /usr/local/go/src/net/http/transport.go:842 +0xa4
    created by net/http.(*Transport).dialConn
        /usr/local/go/src/net/http/transport.go:660 +0xc9f
    
    goroutine 530 [select]:
    net/http.(*persistConn).writeLoop(0xc2088ad1e0)
        /usr/local/go/src/net/http/transport.go:945 +0x41d
    created by net/http.(*Transport).dialConn
        /usr/local/go/src/net/http/transport.go:661 +0xcbc
    
    

    This occurs sometimes... not always...

    curl -XGET 127.0.0.1:9200/_nodes/http?pretty=1
    {
      "cluster_name" : "elasticsearch",
      "nodes" : {
        "3l_Ing0oSfWu5U63US5kxg" : {
          "name" : "Rattler",
          "transport_address" : "inet[192.168.1.91/192.168.1.91:9300]",
          "host" : "Mac-Emilio",
          "ip" : "192.168.1.91",
          "version" : "1.3.4",
          "build" : "a70f3cc",
          "http_address" : "inet[/192.168.1.91:9200]",
          "http" : {
            "bound_address" : "inet[/0:0:0:0:0:0:0:0:9200]",
            "publish_address" : "inet[/192.168.1.91:9200]",
            "max_content_length_in_bytes" : 104857600
          }
        }
      }
    }
    
  • Problems With Sniffing

    I'm running Elasticsearch v1.4.4 in a Docker container. I kept having trouble getting the client to work properly. I was trying to run the sample in the README (obviously pointing to my Docker container instead of localhost). It was taking ~30 seconds to create the client, and then would fail to create the index with the error: no Elasticsearch node available.

    As soon as I turned off sniffing when creating the client (elastic.SetSniff(false)), everything worked perfectly. It doesn't really bother me that I have to turn sniffing off, but I wanted to put this issue out to see if anyone else had seen an issue like this.

    P.S. @olivere - The documentation is awesome! :+1:

  • cannot go import elastic.v5.  v7 import error

    Please use the following questions as a guideline to help me answer your issue/question without further inquiry. Thank you.

    Which version of Elastic are you using?

    [ ] elastic.v5 (for Elasticsearch 5.x)

    Please describe the expected behavior

    import the elastic v5 version

    Please describe the actual behavior

    The import resolves to the elastic v7 version instead, and fails with an error like: cannot find package "github.com/olivere/elastic/v7/config"

    Any steps to reproduce the behavior?

  • Can't put document into AWS ES service.

    Which version of Elastic are you using?

    elastic.v2 (for Elasticsearch 1.x)

    Please describe the expected behavior

    Successful document put into the index.

    Please describe the actual behavior

    Error is returned:

    elastic: Error 403 (Forbidden)
    

    Any steps to reproduce the behavior?

    Setup:

        creds := credentials.NewEnvCredentials()
        signer := v4.NewSigner(creds)
        awsClient, err := aws_signing_client.New(signer, nil, "es", "us-west-2")
        if err != nil {
            return nil, err
        }
    
        return elastic.NewClient(
            elastic.SetURL(...),
            elastic.SetScheme("https"),
            elastic.SetHttpClient(awsClient),
            elastic.SetSniff(false),
        )
    

    Put:

        _, err = e.Client.Index().Index(indexName).Type(indexType).
            Id(doc.ID).
            BodyJson(doc).
            Do()
    

    Not sure if this is elastic or aws_signing_client issue.

  • Default branch, release-branch.v6 has some compile/import problems

    I am about to switch to go modules but it appears the branch release-branch.v6 has some import problems when you just try to use it using old-fashioned GOPATH...

    When you compile you get:

    /go/src/github.com/olivere/elastic/client.go:24:2: cannot find package "github.com/olivere/elastic/v6/config" in any of:
            /usr/local/go/src/github.com/olivere/elastic/v6/config (from $GOROOT)
            /go/src/github.com/olivere/elastic/v6/config (from $GOPATH)
    /go/src/github.com/olivere/elastic/bulk.go:14:2: cannot find package "github.com/olivere/elastic/v6/uritemplates" in any of:
            /usr/local/go/src/github.com/olivere/elastic/v6/uritemplates (from $GOROOT)
            /go/src/github.com/olivere/elastic/v6/uritemplates (from $GOPATH)
    

    I realize you recommend using a dependency manager; however, prior to your latest update it still worked okay.

  • When the context is cancelled the node is marked dead

    Version

    elastic.v5 (for Elasticsearch 5.x)

    How to reproduce:

    package main
    
    import (
    	"context"
    	"gopkg.in/olivere/elastic.v5"
    	"log"
    	"os"
    	"time"
    )
    
    func main() {
    
    	var err error
    
    	client, err := elastic.NewClient(
    		elastic.SetURL("https://httpbin.org/delay/3?"), // every request will take about 3 seconds
    		elastic.SetHealthcheck(false),
    		elastic.SetSniff(false),
    		elastic.SetErrorLog(log.New(os.Stderr, "", log.LstdFlags)),
    		elastic.SetInfoLog(log.New(os.Stdout, "", log.LstdFlags)),
    	)
    	if err != nil {
    		log.Fatal(err)
    	}
    
    	ctx, _ := context.WithTimeout(context.Background(), 1*time.Second) // requests will time out after 1 second
    
    	log.Println("Running request")
    
    	_, err = client.Get().Index("whatever").Id("1").Do(ctx)
    
    	if err != nil {
    		log.Println("Error: " + err.Error())
    	}
    
    	log.Println("Running second request")
    
    	_, err = client.Get().Index("whatever").Id("1").Do(ctx)
    
    	if err != nil {
    		log.Println("Error: " + err.Error())
    	}
    
    }
    

    Actual

    2017/03/17 08:02:33 Running request
    2017/03/17 08:02:34 elastic: https://httpbin.org/delay/3? is dead
    2017/03/17 08:02:34 Error: context deadline exceeded
    2017/03/17 08:02:34 Running second request
    2017/03/17 08:02:34 elastic: all 1 nodes marked as dead; resurrecting them to prevent deadlock
    2017/03/17 08:02:34 Error: no Elasticsearch node available
    

    Expected

    Something like (I edited that "log" myself):

    2017/03/17 08:02:33 Running request
    2017/03/17 08:02:34 Error: context deadline exceeded
    2017/03/17 08:02:34 Running second request
    2017/03/17 08:02:37 GET https://httpbin.org/delay/3?/whatever/_all/1 [status:200, request:3.500s]
    
  • Pattern for unit testing with interfaces?

    Hi!

    Apologies if this is an already answered question, I was unable to find a satisfactory answer online. I am trying to find a way to write unit tests for one of my services however I feel this example could apply outside of unit tests to more general encapsulation of code.

    I want to mock the client so I can simulate a specific type of request (in my case bulk requests) without going all the way to a test ElasticSearch instance. Ideally there would be an interface to allow me to generate a mock in my tests. For example what I want is an interface like so:

    type IBulkClient interface {
        Bulk() IBulkService // return the BulkService interface
        ...
    }
    
    type IBulkService interface {
       Add(requests ...E.BulkableRequest) IBulkService
       Do() (IBulkResponse, error) // return the BulkResponse interface (not shown here)
       ...
    }
    

    This would allow me to mock BulkClient and BulkService to better test my code. The reason why I can't do this myself right now is that the real BulkService.Add() returns a *BulkService which screws up the interface as I want my IBulkClient to return another interface not a pointer to a struct.

    Here is a go playground with the issue I am talking about and here is it working with an interface reference rather than a pointer to a struct.

    My ultimate question is this: Is it possible for the API to provide interfaces for all its structs? This would allow for better unit testing and also allow the user to better encapsulate their code. If there is a reason why there shouldn't be interfaces how do you recommend writing unit tests that don't go all the way to ElasticSearch or mocking the response at an http level?

  • Elasticsearch 7: hits.total is an object in the search response

    In Elasticsearch 7, hits.total is an object in the search response. This breaks searches in the current version of the library with an unmarshalling error: json: cannot unmarshal object into Go struct field SearchHits.total of type int64.

    There is a request parameter (rest_total_hits_as_int=true) that can be added to get back the old behaviour, but I don't think this library currently has an easy way of adding this parameter to requests.

  • how to get search result full raw json?

    Please use the following questions as a guideline to help me answer your issue/question without further inquiry. Thank you.

    Which version of Elastic are you using?

    elastic.v5 (for Elasticsearch 5.x)

    Please describe the expected behavior

    searchResult, err := client.Search()...Do(...)

    searchResult.RawJson() // get full result raw json

    Please describe the actual behavior

    Did not find this method

    Any steps to reproduce the behavior?

    // Search with a term query
    termQuery := elastic.NewTermQuery("user", "olivere")
    searchResult, err := client.Search().
        Index("twitter").        // search in index "twitter"
        Query(termQuery).        // specify the query
        Sort("user", true).      // sort by "user" field, ascending
        From(0).Size(10).        // take documents 0-9
        Pretty(true).            // pretty print request and response JSON
        Do(context.Background()) // execute
    if err != nil {
        // Handle error
        panic(err)
    }

  • Function Undefined error

    Please use the following questions as a guideline to help me answer your issue/question without further inquiry. Thank you.

    Which version of Elastic are you using?

    [ ] elastic.v6 (for Elasticsearch 6.x)

    Getting undefined error for NewV4SigningClient(cred,clusterLocation)

  • Generating nested histogram query

    Please use the following questions as a guideline to help me answer your issue/question without further inquiry. Thank you.

    Which version of Elastic are you using?

    [x] elastic.v7 (for Elasticsearch 7.x)

    I'm having difficulty generating the desired histogram query with sub aggregations.

    This is the query I want to generate.

    {
      "size": 0,
      "aggs": {
        "range": {
          "histogram": {
            "field": "counter",
            "interval": 2
          },
          "aggs": {
            "nested": {
              "nested": {
                "path": "deposits"
              },
              "aggs": {
                "scripts": {
                  "avg": {
                    "script": {
                      "lang": "painless", 
                      "source": "return doc['deposits.depositA'].value + doc['deposits.depositB'].value"
                    }
                  }
                }
              }
            }
          }
        }
      }
    }
    

    You'll have to excuse me if this is a dumb question, since I'm new to both elasticsearch and this package. But this is as far as I've gotten:

    nested := NewNestedAggregation().Path("deposits")
    
    scriptq := elastic.NewScriptQuery(NewScript("return doc['deposits.depositA'].value + doc['deposits.depositB'].value"))
    hist := NewHistogramAggregation().Field("counter").Interval(2).SubAggregation("avg", scriptq)
    
    client.Search().Index(indexName).Aggregation() ... // this is where I get stuck
    

    I'm not sure of the type of aggregation I can attach in Aggregation().

    I've been reviewing the _test files of this package to get some idea on what to do but a lot of this is still going over my head.

    Any help would be greatly appreciated.
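For what it's worth, the aggregation tree above can likely be composed by nesting SubAggregation calls from the outside in; the sketch below mirrors the JSON structure (the myindex index name is illustrative, and it assumes a running cluster):

```go
package main

import (
	"context"
	"log"

	"github.com/olivere/elastic/v7"
)

func main() {
	client, err := elastic.NewClient()
	if err != nil {
		log.Fatal(err)
	}

	// Innermost: an avg aggregation driven by a painless script.
	avg := elastic.NewAvgAggregation().
		Script(elastic.NewScript("return doc['deposits.depositA'].value + doc['deposits.depositB'].value").Lang("painless"))

	// Wrap it in a nested aggregation on the "deposits" path.
	nested := elastic.NewNestedAggregation().
		Path("deposits").
		SubAggregation("scripts", avg)

	// Outermost: the histogram, with the nested agg as a sub-aggregation.
	hist := elastic.NewHistogramAggregation().
		Field("counter").
		Interval(2).
		SubAggregation("nested", nested)

	res, err := client.Search().
		Index("myindex").
		Size(0).
		Aggregation("range", hist).
		Do(context.Background())
	if err != nil {
		log.Fatal(err)
	}
	_ = res // inspect res.Aggregations here
}
```

The pattern is the same at every level: any aggregation can be attached to another via SubAggregation, and the outermost one is handed to Search().Aggregation(name, agg).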

  • Does olivere support "REST API compatibility"

    Does olivere support the concept of REST API compatibility? Although it is a good practice not to make cross-communication between client and server which are of different versions, sometimes in practice, we encounter the need to use the same client to ping elasticsearch servers of different versions.

    As mentioned in the page, in order to request REST API compatibility, the client needs to specify in the header about Accept and "Content-Types":

    Accept: "application/vnd.elasticsearch+json;compatible-with=7"
    Content-Type: "application/vnd.elasticsearch+json;compatible-with=7"
    

    This is not a bug, but a question about current or potential future feature of the library. Does olivere already support this?

    Which version of Elastic are you using?

    [x] elastic.v7 (for Elasticsearch 7.x)
    [x] elastic.v6 (for Elasticsearch 6.x)
    [ ] elastic.v5 (for Elasticsearch 5.x)
    [ ] elastic.v3 (for Elasticsearch 2.x)
    [ ] elastic.v2 (for Elasticsearch 1.x)

    Please describe the expected behavior

    elastic v7 is able to make requests to both Elasticsearch 7 and 8 servers with the same endpoint, expecting the same behavior.

    Please describe the actual behavior

    NA, as this issue is a question rather than a bug.

    Any steps to reproduce the behavior?

    NA, as this issue is a question rather than a bug.

  • Completely disabling backoff

    Please use the following questions as a guideline to help me answer your issue/question without further inquiry. Thank you.

    Which version of Elastic are you using?

    [x] elastic.v7 (for Elasticsearch 7.x)

    Please describe the expected behavior

    Once backoff is disabled, we expect all data sent to the disabled (unavailable) cluster to be discarded, so that we only process the cluster's response in the after method and store all the failed values in a DLQ. Instead, the data sent at the time of the ES failure is kept and submitted once the connection to the ES cluster is restored.

    Please describe the actual behavior

    We have a constant flow of values that are kept in different ES clusters. We added code that is supposed to handle the case where one of the ES clusters fails, by sending those values to a dead letter queue (DLQ) and reading them back once the cluster is restored. The issue we are having now is double records. Even without a backoff policy defined (or with something like StopBackoff), once the ES cluster is restored, BulkProcessor sends all the values processed from the source into ES at the same time as we start to process the DLQ. This means that all the records ingested by the app while the ES cluster is off simply pile up on top of each other, instead of being discarded after the "after" method has been called. We've tried to overcome the issue by setting up StopBackoff like this:

    processor, err = client.BulkProcessor().
    			Workers(o.Workers).
    			BulkActions(o.BatchSize).
    			BulkSize(o.BatchBytes).
    			FlushInterval(o.FlushInterval).
    			RetryItemStatusCodes(o.RetryItemStatusCodes...).
                            Backoff(elastic.StopBackoff{}).
    			Stats(o.WantStats).
    			After(after). // call "after" after every commit
    			Do(o.Ctx)
    

    But that didn't help, since it seems to only generate errors one after another without doing much.

    This behavior would have been fine if we were processing a low volume of messages, but when a lot of messages are stored in memory, the app eventually crashes and all the stored messages are lost.

    Any steps to reproduce the behavior?

    1. Set up the BulkProcessor with StopBackoff defined as the backoff policy.
    2. Disable the ES cluster while the BulkProcessor app keeps running and ingesting messages from the source (Kafka).
    3. Send some values to the app's ingestion channel.
    4. Enable the ES cluster.
    5. All the values sent to the BulkProcessor while the ES cluster was disabled can be seen in the ES cluster.

    Suggested solutions

    I see that we only clear records in one place, where we call s.Reset(), and that part of the code seems never to be reached if the cluster is off. Is there a way to clear out the records once the after method has finished? Or could a setting be added that allows doing so?

  • `Client::PerformRequest` dumps response before checking for `MaxResponseSize`

    Which version of Elastic are you using?

    [ ] elastic.v7 (for Elasticsearch 7.x)

    Please describe the expected behavior

    Client::dumpResponse should check for MaxResponseSize before dumping the response, similar to what Client::newResponse does. Currently the library can cause subtle OOM situations even if the response is bounded by a MaxResponseSize limit, unless the trace logger is also nil.

    Consumers should be able to set a trace logger without risking OOMs. At the very least, the documentation should make clear that MaxResponseSize only guards against OOM exceptions when the trace logger is nil.

    Please describe the actual behavior

    The library can cause OOM exceptions if a trace logger is set, regardless of the MaxResponseSize limit.

    Any steps to reproduce the behavior?

    Instantiate a client with both MaxResponseSize and the trace logger set to non-zero values. You will see that the process uses a lot of memory even if MaxResponseSize is set to a low value.
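    A sketch of the suggested fix: only include the body in the trace dump when the response's Content-Length is known and within the limit, mirroring the size check that the non-tracing read path performs. The function name dumpResponseBounded and its signature are illustrative, not the library's actual API; it uses only the standard library's httputil.DumpResponse.

    ```go
    package main

    import (
    	"fmt"
    	"net/http"
    	"net/http/httputil"
    	"strings"
    )

    // dumpResponseBounded dumps a response for trace logging, but skips
    // the body whenever maxSize > 0 and the declared Content-Length is
    // unknown or exceeds maxSize, so tracing cannot buffer an
    // arbitrarily large body into memory.
    func dumpResponseBounded(resp *http.Response, maxSize int64) []byte {
    	withBody := maxSize <= 0 ||
    		(resp.ContentLength >= 0 && resp.ContentLength <= maxSize)
    	out, err := httputil.DumpResponse(resp, withBody)
    	if err != nil {
    		return nil
    	}
    	return out
    }

    func main() {
    	resp := &http.Response{
    		Status: "200 OK", StatusCode: 200,
    		Proto: "HTTP/1.1", ProtoMajor: 1, ProtoMinor: 1,
    		Header:        http.Header{},
    		Body:          http.NoBody,
    		ContentLength: 1 << 30, // pretend the body is 1 GiB
    	}
    	dump := dumpResponseBounded(resp, 1024) // 1 KiB trace limit
    	// The status line is still logged; the oversized body is not.
    	fmt.Println(strings.Contains(string(dump), "200 OK"))
    }
    ```
    
    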

  • How to add a tokenizer uax_url_email ?

    How to add a tokenizer uax_url_email ?

    [x] elastic.v6 (for Elasticsearch 6.x)

    Please describe the expected behavior

    I need to search for an email address.

    Please describe the actual behavior

    Without the uax_url_email tokenizer configured in Elasticsearch, my email address is not found.

    How to add a tokenizer uax_url_email ?
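    The uax_url_email tokenizer is built into Elasticsearch, so nothing special is needed from the client side: it is referenced from a custom analyzer in the index settings, and the settings are passed as the body of a create-index request. A minimal sketch, assuming the issue's elastic.v6 setup (the index name, analyzer name, and field name below are illustrative; the "_doc" mapping level is the ES 6.x shape):

    ```go
    package main

    import (
    	"encoding/json"
    	"fmt"
    )

    // Index settings registering a custom analyzer that uses the
    // built-in uax_url_email tokenizer, which keeps URLs and email
    // addresses as single tokens instead of splitting them on "@"
    // and ".".
    const settings = `{
      "settings": {
        "analysis": {
          "analyzer": {
            "email_analyzer": {
              "type": "custom",
              "tokenizer": "uax_url_email",
              "filter": ["lowercase"]
            }
          }
        }
      },
      "mappings": {
        "_doc": {
          "properties": {
            "email": {"type": "text", "analyzer": "email_analyzer"}
          }
        }
      }
    }`

    func main() {
    	// Sanity check that the settings document is well-formed JSON.
    	fmt.Println(json.Valid([]byte(settings))) // prints "true"

    	// With an elastic client and a context, this would be applied as:
    	//   client.CreateIndex("users").BodyString(settings).Do(ctx)
    }
    ```

    After indexing, a term or match query on the email field should then find complete addresses such as "jane@example.com" as single tokens.
    
    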
