Go Client for Pilosa

Go client for the Pilosa high-performance distributed index.

What's New?

See: CHANGELOG

Requirements

  • Go 1.12 or higher.

Install

Download the library into your GOPATH using:

go get github.com/pilosa/go-pilosa

After that, you can import the library in your code using:

import "github.com/pilosa/go-pilosa"

Usage

Quick overview

Assuming a Pilosa server is running at localhost:10101 (the default):

package main

import (
	"fmt"

	"github.com/pilosa/go-pilosa"
)

func main() {
	// Create the default client
	client := pilosa.DefaultClient()

	// Retrieve the schema
	schema, err := client.Schema()
	if err != nil {
		panic(err)
	}

	// Create an Index object
	myindex := schema.Index("myindex")

	// Create a Field object
	myfield := myindex.Field("myfield")

	// Make sure the index and the field exist on the server
	err = client.SyncSchema(schema)
	if err != nil {
		panic(err)
	}

	// Send a Set query. If err is non-nil, response will be nil.
	_, err = client.Query(myfield.Set(5, 42))
	if err != nil {
		panic(err)
	}

	// Send a Row query. If err is non-nil, response will be nil.
	response, err := client.Query(myfield.Row(5))
	if err != nil {
		panic(err)
	}

	// Get the result
	result := response.Result()
	// Act on the result
	if result != nil {
		columns := result.Row().Columns
		fmt.Println("Got columns: ", columns)
	}

	// You can batch queries to improve throughput
	response, err = client.Query(myindex.BatchQuery(
		myfield.Row(5),
		myfield.Row(10)))
	if err != nil {
		fmt.Println(err)
		return
	}

	for _, result := range response.Results() {
		// Act on the result
		fmt.Println(result.Row().Columns)
	}
}

Documentation

Data Model and Queries

See: Data Model and Queries

Executing Queries

See: Server Interaction

Importing and Exporting Data

See: Importing and Exporting Data

Other Documentation

Contributing

See: CONTRIBUTING

License

See: LICENSE

Comments
  • Adds support for importing keys

    This PR works against Pilosa master ~This PR works against: https://github.com/pilosa/pilosa/pull/1557 There are 3 failing tests regarding exports with keys, but those tests should pass once https://github.com/pilosa/pilosa/issues/1580 is resolved.~

  • Broken import with Pilosa repo master

    Trying to import using the Pilosa cluster master branch returns the error "starting field import for segment: doing import: Server error 415 Unsupported Media Type body:'Unsupported media type'". The server expects "application/x-protobuf", but the client sends "application/x-binary" to the roaring import endpoint.

  • Unable to read back bits set in pilosa

    I'm running a Pilosa Docker container (https://hub.docker.com/u/pilosa/) on my OSX host.

    When running the code from the quick overview on your GitHub page, I do not retrieve any results back. However, I can see in the Pilosa web UI that bit 42 is set on row 5 of myindex, myframe. The code itself is not capable of reading the bit back out. It does not return an error, but rather empty result arrays:

    go run cmd/test2/main.go
    Got bits:  []
    []
    []
    

    I have the same behaviour in the real application I'm developing. I can set bits and retrieve them from the web UI, but simple Bitmap queries from code do not work.

  • add client method for /info endpoint

    This unpacks additional server data which is added to the server by a corresponding PR, and adds a direct call for the /info endpoint as a convenience feature. This is intended to be helpful for benchmarking-type functionality.

  • add import logging functionality

    This adds a new client option to enable logging of all import requests. It gob encodes the index, path, shard, and request body to a file for all requests. The index and shard are needed so that the code replaying the requests can send them to a cluster of a different size and still figure out which nodes a request needs to go to.

    I just realized that we're currently writing out every request to every node, but when we read back in, we're sending each request we read in to every node that should receive it. This is a bit of a problem if replication is > 1. I'll think about how best to address that, but this is the basic idea.

  • Add QueryOptions.Slices support.

    This commit adds support for specifying individual slices when executing a query:

    client.Query(bitmap, &QueryOptions{Slices: []uint64{0, 3}})
    

    Fixes https://github.com/pilosa/pilosa/issues/641

  • Mass update

    • Added custom Pilosa server address support for running tests.
    • Updated travis config to run tests using https too.
    • Added client.HttpRequest function which sends an HTTP request to a Pilosa server.
    • Using /recalculate-caches endpoint to decrease integration test times by 2 * 10 secs.
  • WIP: adds support for importing values via the `import-value` endpoint.

    Added a new import function: client.ImportValueFrame. The following code imports data into field foo in index i, frame f.

        client := pilosa.DefaultClient()
    
        // Retrieve the schema
        schema, err := client.Schema()
    
        text := `10,7
            11,5
            2,3
            7,1`
        iterator := pilosa.NewCSVValueIterator(strings.NewReader(text))
    
        index, _ := schema.Index("i", nil)
        frame, _ := index.Frame("f", nil)
        field := "foo"
    
        err = client.ImportValueFrame(frame, field, iterator, 10000)
        if err != nil {
            panic(err)
        }
    

    TODO:

    • [ ] Write tests.
    • [ ] Update the documentation.
  • Error: can't skip unknown wire type 7 for internal.QueryResponse

    Hi, I got the following error: proto: can't skip unknown wire type 7 for internal.QueryResponse

    func BenchmarkIntersectSegments(b *testing.B) {
    	q := index.Intersect(frame.Bitmap(2), frame.Bitmap(3))
    	if q.Error() != nil {
    		b.Error(q.Error())
    		return
    	}
    	response, err := client.Query(q, nil)
    	if err != nil {
    		b.Error(err) // this is where the error happens
    		return
    	}
    	if response.ErrorMessage != "" {
    		b.Error(response.ErrorMessage)
    		return
    	}
    
    	for _, result := range response.Results() {
    		if len(result.Bitmap.Bits) == 0 {
    			b.Error("bitmap is 0")
    			return
    		}
    	}
    	return
    }
    

    The query works in the webUI: Intersect(Bitmap(segment_id=2,frame='segments'), Bitmap(segment_id=3,frame='segments'))

  • performance tweaks

    This is some low-hanging fruit for improving client runtime. Actual wall-clock time is barely affected, which makes me suspect that some of the time is spent server-side, but I haven't diagnosed it more closely; I just wanted to get a bunch of the easy things out there for review.

    Before:

        real    0m43.058s
        user    1m12.904s
        sys     0m1.072s
    

    After:

        real    0m40.063s
        user    0m29.443s
        sys     0m0.888s
    

    This certainly frees up a lot of CPU time, but the impact on wall-clock time is trivial thus far.

  • How to use "Rows(field)"

    Rows(<FIELD>, previous=<UINT|STRING>, limit=<UINT>, column=<UINT|STRING>, from=<TIMESTAMP>, to=<TIMESTAMP>)

    I try to execute this method, but error happened. see following:

    curl "localhost:10101/index/repository/query" -XPOST -d 'Rows(stargazer)'
    {"error":"parsing: parsing: \nparse error near IDENT (line 1 symbol 6 - line 1 symbol 15):\n\"stargazer\"\n"}

    Am I using it wrong? Or are there special requirements for the field?

  • ImportField doesn't work.  Time out in case of Pilosa in Linux container on Docker Desktop for Windows

    Hello! The Linux container and the Windows host have different IP addresses, and unfortunately Docker Desktop for Windows can't route traffic to Linux containers (see https://docs.docker.com/docker-for-windows/networking/).

    At the same time, we have "(c *Client) fetchFragmentNodes(indexName string, shard uint64) ([]fragmentNode, error)", which makes an HTTP request to "/internal/fragment/nodes?shard=%d&index=%s". As a result, "fragmentNodeURIs.Host" contains the IP address of the Linux container, and during the import this address is unreachable from the Windows host where the Pilosa client is trying to run the import.

  • client.go ExperimentalReplayImport() races against client.go logImport.func1()

    go version go1.14.4 darwin/amd64

    At tip, 28cb67f61c4a7db69c0907c64d8b3363b587ad9f, running against a tip pilosa/pilosa server (at 9dc1775b93464f78acc8573cfad2f405b1175fb5), make test-all-race detected the following race:

    (base) jaten@Jasons-MacBook-Pro ~/go/src/github.com/pilosa/go-pilosa (master) $ make test-all-race
    PILOSA_BIND=http://:10101 /Applications/Xcode.app/Contents/Developer/usr/bin/make test-all TESTFLAGS=-race
    PILOSA_BIND=http://:10101 go test -count=1 ./... -race
    ==================
    WARNING: DATA RACE
    Write at 0x00c00012caa0 by goroutine 11:
      bytes.(*Buffer).Read()
          /usr/local/go/src/bytes/buffer.go:297 +0x4a
      io.ReadAtLeast()
          /usr/local/go/src/io/io.go:310 +0x98
      io.ReadFull()
          /usr/local/go/src/io/io.go:329 +0x93
      encoding/gob.decodeUintReader()
          /usr/local/go/src/encoding/gob/decode.go:120 +0x40
      encoding/gob.(*Decoder).recvMessage()
          /usr/local/go/src/encoding/gob/decoder.go:81 +0xa7
      encoding/gob.(*Decoder).decodeTypeSequence()
          /usr/local/go/src/encoding/gob/decoder.go:143 +0x1f2
      encoding/gob.(*Decoder).DecodeValue()
          /usr/local/go/src/encoding/gob/decoder.go:211 +0x17f
      encoding/gob.(*Decoder).Decode()
          /usr/local/go/src/encoding/gob/decoder.go:188 +0x236
      github.com/pilosa/go-pilosa.(*Client).ExperimentalReplayImport()
          /Users/jaten/go/src/github.com/pilosa/go-pilosa/client.go:1302 +0x396
      github.com/pilosa/go-pilosa.TestImportWithReplayErrors.func1()
          /Users/jaten/go/src/github.com/pilosa/go-pilosa/client_internal_it_test.go:159 +0x5b
    
    Previous write at 0x00c00012caa0 by goroutine 77:
      bytes.(*Buffer).Write()
          /usr/local/go/src/bytes/buffer.go:169 +0x42
      encoding/gob.(*Encoder).writeMessage()
          /usr/local/go/src/encoding/gob/encoder.go:82 +0x41a
      encoding/gob.(*Encoder).EncodeValue()
          /usr/local/go/src/encoding/gob/encoder.go:253 +0x881
      encoding/gob.(*Encoder).Encode()
          /usr/local/go/src/encoding/gob/encoder.go:176 +0x5b
      github.com/pilosa/go-pilosa.(*Client).logImport.func1()
          /Users/jaten/go/src/github.com/pilosa/go-pilosa/client.go:1237 +0x2b9
    
    Goroutine 11 (running) created at:
      github.com/pilosa/go-pilosa.TestImportWithReplayErrors()
          /Users/jaten/go/src/github.com/pilosa/go-pilosa/client_internal_it_test.go:158 +0x929
      testing.tRunner()
          /usr/local/go/src/testing/testing.go:991 +0x1eb
    
    Goroutine 77 (finished) created at:
      github.com/pilosa/go-pilosa.(*Client).logImport()
          /Users/jaten/go/src/github.com/pilosa/go-pilosa/client.go:1225 +0xfb
      github.com/pilosa/go-pilosa.(*Client).importRoaringBitmap()
          /Users/jaten/go/src/github.com/pilosa/go-pilosa/client.go:885 +0x98a
      github.com/pilosa/go-pilosa.(*Client).importColumnsRoaring()
          /Users/jaten/go/src/github.com/pilosa/go-pilosa/client.go:606 +0x5fb
      github.com/pilosa/go-pilosa.(*Client).importColumns()
          /Users/jaten/go/src/github.com/pilosa/go-pilosa/client.go:533 +0x946
      github.com/pilosa/go-pilosa.(*Client).importColumns-fm()
          /Users/jaten/go/src/github.com/pilosa/go-pilosa/client.go:521 +0xca
      github.com/pilosa/go-pilosa.importRecords()
          /Users/jaten/go/src/github.com/pilosa/go-pilosa/import_manager.go:204 +0x1be
      github.com/pilosa/go-pilosa.recordImportWorker()
          /Users/jaten/go/src/github.com/pilosa/go-pilosa/import_manager.go:166 +0x5eb
    ==================
    ==================
    WARNING: DATA RACE
    Read at 0x00c00012ca80 by goroutine 11:
      bytes.(*Buffer).empty()
          /usr/local/go/src/bytes/buffer.go:69 +0x5f
      bytes.(*Buffer).Read()
          /usr/local/go/src/bytes/buffer.go:298 +0x94
      io.ReadAtLeast()
          /usr/local/go/src/io/io.go:310 +0x98
      io.ReadFull()
          /usr/local/go/src/io/io.go:329 +0x93
      encoding/gob.decodeUintReader()
          /usr/local/go/src/encoding/gob/decode.go:120 +0x40
      encoding/gob.(*Decoder).recvMessage()
          /usr/local/go/src/encoding/gob/decoder.go:81 +0xa7
      encoding/gob.(*Decoder).decodeTypeSequence()
          /usr/local/go/src/encoding/gob/decoder.go:143 +0x1f2
      encoding/gob.(*Decoder).DecodeValue()
          /usr/local/go/src/encoding/gob/decoder.go:211 +0x17f
      encoding/gob.(*Decoder).Decode()
          /usr/local/go/src/encoding/gob/decoder.go:188 +0x236
      github.com/pilosa/go-pilosa.(*Client).ExperimentalReplayImport()
          /Users/jaten/go/src/github.com/pilosa/go-pilosa/client.go:1302 +0x396
      github.com/pilosa/go-pilosa.TestImportWithReplayErrors.func1()
          /Users/jaten/go/src/github.com/pilosa/go-pilosa/client_internal_it_test.go:159 +0x5b
    
    Previous write at 0x00c00012ca80 by goroutine 77:
      bytes.(*Buffer).tryGrowByReslice()
          /usr/local/go/src/bytes/buffer.go:108 +0x196
      bytes.(*Buffer).Write()
          /usr/local/go/src/bytes/buffer.go:170 +0x8f
      encoding/gob.(*Encoder).writeMessage()
          /usr/local/go/src/encoding/gob/encoder.go:82 +0x41a
      encoding/gob.(*Encoder).EncodeValue()
          /usr/local/go/src/encoding/gob/encoder.go:253 +0x881
      encoding/gob.(*Encoder).Encode()
          /usr/local/go/src/encoding/gob/encoder.go:176 +0x5b
      github.com/pilosa/go-pilosa.(*Client).logImport.func1()
          /Users/jaten/go/src/github.com/pilosa/go-pilosa/client.go:1237 +0x2b9
    
    Goroutine 11 (running) created at:
      github.com/pilosa/go-pilosa.TestImportWithReplayErrors()
          /Users/jaten/go/src/github.com/pilosa/go-pilosa/client_internal_it_test.go:158 +0x929
      testing.tRunner()
          /usr/local/go/src/testing/testing.go:991 +0x1eb
    
    Goroutine 77 (finished) created at:
      github.com/pilosa/go-pilosa.(*Client).logImport()
          /Users/jaten/go/src/github.com/pilosa/go-pilosa/client.go:1225 +0xfb
      github.com/pilosa/go-pilosa.(*Client).importRoaringBitmap()
          /Users/jaten/go/src/github.com/pilosa/go-pilosa/client.go:885 +0x98a
      github.com/pilosa/go-pilosa.(*Client).importColumnsRoaring()
          /Users/jaten/go/src/github.com/pilosa/go-pilosa/client.go:606 +0x5fb
      github.com/pilosa/go-pilosa.(*Client).importColumns()
          /Users/jaten/go/src/github.com/pilosa/go-pilosa/client.go:533 +0x946
      github.com/pilosa/go-pilosa.(*Client).importColumns-fm()
          /Users/jaten/go/src/github.com/pilosa/go-pilosa/client.go:521 +0xca
      github.com/pilosa/go-pilosa.importRecords()
          /Users/jaten/go/src/github.com/pilosa/go-pilosa/import_manager.go:204 +0x1be
      github.com/pilosa/go-pilosa.recordImportWorker()
          /Users/jaten/go/src/github.com/pilosa/go-pilosa/import_manager.go:166 +0x5eb
    ==================
    --- FAIL: TestImportWithReplayErrors (0.43s)
        testing.go:906: race detected during execution of test
    go-pilosa 2020/06/22 15:51:20 invalidating shard node cache, uri mismatch at 0 old: [zzz://:0], new: [http://localhost:10101]
    go-pilosa 2020/06/22 15:51:58 299 pilosa/2.0 "FAKE WARNING: Deprecated PQL version: PQL v2 will remove support for SetBit() in Pilosa 2.1. Please update your client to support Set() (See https://docs.pilosa.com/pql#versioning)." "Sat, 25 Aug 2019 23:34:45 GMT"
    FAIL
    FAIL	github.com/pilosa/go-pilosa	44.363s
    ok  	github.com/pilosa/go-pilosa/csv	0.051s
    ?   	github.com/pilosa/go-pilosa/examples/multicol-csv-import	[no test files]
    ?   	github.com/pilosa/go-pilosa/gopilosa_pbuf	[no test files]
    ok  	github.com/pilosa/go-pilosa/gpexp	2.752s
    ?   	github.com/pilosa/go-pilosa/lru	[no test files]
    FAIL
    make[1]: *** [test-all] Error 1
    make: *** [test-all-race] Error 2
    (base) jaten@Jasons-MacBook-Pro ~/go/src/github.com/pilosa/go-pilosa (master) $ git log|head
    commit 6bc638d761338d5189736e9beed8546a4bc6e5ce
    Merge: d55c16e 28cb67f
    Author: Travis Turner <[email protected]>
    Date:   Sat Nov 30 22:00:22 2019 -0600
    
        Merge pull request #262 from travisturner/groupby-having
        
        add having clause support to GroupByBuilder
    
    commit 28cb67f61c4a7db69c0907c64d8b3363b587ad9f
    (base) jaten@Jasons-MacBook-Pro ~/go/src/github.com/pilosa/go-pilosa (master) $
    
  • need to fix perf issue with integer import in importbatch.go

    Detailed in a TODO comment as usual.

    		// TODO(jaffee) I think this may be very inefficient. It looks
    		// like we're copying the `ids` and `values` slices over
    		// themselves (an O(n) operation) for each nullIndex so this
    		// is effectively O(n^2). What we could do is iterate through
    		// ids and values each once, while simultaneously iterating
    		// through nullindices and keeping track of how many
    		// nullIndices we've passed, and so how far back we need to
    		// copy each item.
    		//
    		// It was a couple weeks ago that I wrote this code, and I
    		// vaguely remember thinking about this, so I may just be
    		// missing something now. We should benchmark on what should
    		// be a bad case (an int field which is mostly null), and see
    		// if the improved implementation helps a lot.
    

    Now I've actually run into it:

    		// Update: I ran into this on a largish batch size (4M) with a
    		// very small percentage of nils (0.5%) - was very obvious in
    		// the CPU profile
    
  • CacheSize is type int, but should be uint32

    In Pilosa, the CacheSize is a uint32. In go-pilosa it's an int. At this point, changing it breaks the go-pilosa API, but I wanted to ticket it just to keep track of it.
