Berkeley Tree Database (BTrDB) server

BTrDB

The Berkeley TRee DataBase is a high-performance time series database designed to support high-density data storage applications.

We are now doing binary and container releases with (mostly) standard semantic versioning. The only variation is that we use odd-numbered minor version numbers to indicate an unstable/development release series. Therefore the meanings of the version numbers are:

  • Major: an increase in major version number indicates that there is no backwards compatibility with existing databases. To upgrade, we recommend using the migration tool.
  • Minor: minor versions are compatible on disk, but may have an incompatible network API. Therefore, while it is safe to upgrade to a new minor version number, you may also need to upgrade other programs that connect to BTrDB. Furthermore, odd-numbered minor versions should be considered unstable and for development use only; patch releases within an odd-numbered minor version may not be compatible with each other.
  • Patch: patch releases on an odd-numbered minor version are not necessarily compatible with each other in any way. Patch releases on an even-numbered minor version are guaranteed to be compatible in both the on-disk format and the network API.

While using odd-numbered versions to indicate development releases is a somewhat archaic practice, it allows us to use our production release system for development, which reduces the odds of a discrepancy between the well-tested development binaries/containers and the subsequently released production version. Note that we flag all development releases as "pre-release" on GitHub.

The main distribution of BTrDB v4 is smartgridstore, which is BTrDB packaged for deployment on Kubernetes, along with some (optional) utilities for working with synchrophasors. You can follow the installation guide at https://docs.smartgrid.store.

If you are interested in deploying BTrDB, please file an issue on this repository so we can get an idea of what types of deployments we should focus on supporting. Smartgrid.store is focused on very large city- or country-scale data aggregation from smart grid devices, but BTrDB can support other use cases.

Comments
  • BTrDB API service crash

    This isn't a fully baked issue yet since I'm having trouble reproducing the behavior; however, I thought I'd log it in case I can refine the information later.

    I seem to have been able to take down our BTrDB API service while sending curl requests to our deployed copy of Mr Plotter. While I haven't been able to completely crash it again, I did notice the following.

    Specifically, I was sending POSTs to the /csv endpoint while experimenting with different JSON values in the payload to see their effect on the data returned. I noticed that when sending a request with an aligned-windows queryType ("windows"), I could get a non-deterministic response from the DB. Sometimes I would get data, sometimes I would get an error message, and sometimes I'd get an error trying to connect.

    Here's the JSON payload with the auth token removed:

    json={"StartTime":1494405229456,"EndTime":1494405345800,"UUIDS":["2e59878a-62f8-57ff-8834-16de04a45d62"],"Labels":["NavyYard/ pmu3003151/ L1MAG"],"QueryType":"windows","WindowText":"3","WindowUnit":"seconds","UnitofTime":"ms","PointWidth":44,"_token":""}
    

    The primary thing that seemed to cause it to fail was changing the PointWidth value, but I'm not sure if I'm simply seeing patterns in randomness.

    Here is the curl command I was sending (with auth token stripped again):

    curl --request POST \
      --url https://viz.predictivegrid.com/csv \
      --header 'accept: text/csv' \
      --header 'accept-encoding: gzip, deflate, br' \
      --header 'accept-language: en-US,en;q=0.8' \
      --header 'cache-control: no-cache' \
      --header 'content-type: text/plain' \
      --header 'dnt: 1' \
      --header 'origin: https://viz.predictivegrid.com' \
      --header 'postman-token: 32b4c31a-fe7c-156c-2d40-cea40fa1b1e3' \
      --header 'referer: https://viz.predictivegrid.com/' \
      --header 'upgrade-insecure-requests: 1' \
      --header 'user-agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10_11_6) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/61.0.3163.100 Safari/537.36' \
      --data 'json={"StartTime":1494405229456,"EndTime":1494405345800,"UUIDS":["2e59878a-62f8-57ff-8834-16de04a45d62"],"Labels":["NavyYard/ pmu3003151/ L1MAG"],"QueryType":"windows","WindowText":"3","WindowUnit":"seconds","UnitofTime":"ms","PointWidth":44,"_token":""}'
    

    That either resulted in data being returned, or this error message:

    Could not complete CSV query: rpc error: code = Internal desc = transport is closing
    

    Or it flat out refused to connect.

    As I said, I haven't been able to reproduce a full crash, but when it did crash, all API requests stopped responding (e.g., I could no longer query for streams), although Jerry (@pingthings) relayed that he didn't see anything unusual in the logs.

    I'll update this if I figure out more.

  • Installation

    I have a Kubernetes cluster with Ceph as well. Are there any docs on getting BTrDB set up?

    Also, what is the state of the project? Is it alpha, beta, or pretty stable?

    Thanks!

  • How to replace the kubernetes controller manager container using minikube

    I'm installing BTrDB on my laptop, using ceph/daemon and minikube as suggested, following the instructions at https://docs.smartgrid.store/prerequisites.html#connecting-kubernetes-to-ceph.

    For kubeadm

    edit the file /etc/kubernetes/manifests/kube-controller-manager.json. In that file, change the specified image to btrdb/kubernetes-controller-manager-rbd:1.6.2

    However, I cannot find the equivalent configuration file for minikube. Any suggestions?

    Thanks

  • Aligned Window Query Start/End Inclusivity is Inconsistent

    When BTrDB receives an aligned windows query, it rounds the start/end timestamps down to the nearest point boundary.

    If pw < 32, it returns all points from start (inclusive) to end (exclusive), after rounding down. If pw >= 32, it returns all points from start (inclusive) to end (inclusive), after rounding down.
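
    For context, here is a minimal, self-contained Go sketch of the alignment arithmetic described above (this is an illustration, not the BTrDB implementation; alignDown is a hypothetical helper). With an exclusive end, windows start at alignDown(start), alignDown(start) + 2^pw, and so on, stopping before alignDown(end); an inclusive end returns one additional window starting at alignDown(end).

    package main

    import "fmt"

    // alignDown rounds a timestamp down to the nearest 2^pw boundary by clearing
    // the low pw bits.
    func alignDown(t int64, pw uint8) int64 {
        return t &^ ((int64(1) << pw) - 1)
    }

    func main() {
        // Timestamps taken from the reproduction script below; both are already
        // pointwidth-aligned at pw=31 and pw=32.
        start, end := int64(1458040387951132672), int64(1458040396541067264)
        var pw uint8 = 31
        // Exclusive-end behavior: the window starting at alignDown(end, pw) is
        // not emitted; inclusive-end behavior would emit one more window.
        for w := alignDown(start, pw); w < alignDown(end, pw); w += int64(1) << pw {
            fmt.Println("window starting at", w)
        }
    }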

    I believe this is a bug. This causes rendering artifacts in the WaveViewer plotter.

    The following Python 3 script reproduces this issue (I've emailed it to Michael). All timestamps used are pointwidth-aligned, so the rounding-down behavior isn't tested.

    #!/usr/bin/env python3
    import btrdb4
    import time
    import uuid
    
    uu = uuid.UUID("80741975-8247-4d12-9b73-becf9eb8ff31")
    
    conn = btrdb4.Connection("compound-3.cs.berkeley.edu:4410")
    b = conn.newContext()
    
    print("Connected to compound-3.cs.berkeley.edu:4410")
    
    s = b.streamFromUUID(uu)
    assert(s.exists())
    
    
    
    print("All of the endpoints of these queries are pointwidth-aligned.")
    
    
    
    print()
    print("Making a query at pw=31 from 1458040387951132672 to 1458040396541067264")
    for statpoint, version in s.alignedWindows(1458040387951132672, 1458040396541067264, 31):
        print(statpoint)
    
    print()
    print("Making a query at pw=31 from 1458040387951132672 to 1458040398688550912")
    for statpoint, version in s.alignedWindows(1458040387951132672, 1458040398688550912, 31):
        print(statpoint)
    
    print()
    print("The above two queries show that BTrDB is treating the endpoint as exclusive.")
    print("The point at 1458040396541067264 is not in the result of the first query, but such a point does exist with count != 0.")
    
    
    
    print()
    print("Making a query at pw=31 from 1458040387951132672 to 1458040396541067264")
    for statpoint, version in s.alignedWindows(1458040387951132672, 1458040396541067264, 31):
        print(statpoint)
    
    print()
    print("Making a query at pw=32 from 1458040387951132672 to 1458040396541067264")
    for statpoint, version in s.alignedWindows(1458040387951132672, 1458040396541067264, 32):
        print(statpoint)
    
    print()
    print("The last query shows that BTrDB is treating the endpoint as inclusive.")
    print("The point at 1458040396541067264 is in the result of the query.")
    
    
    
    print()
    print("In conclusion, it seems that at pw=32, BTrDB treats the endpoint as inclusive, but at pw=31, BTrDB treats the endpoint as exclusive.")
    
    
  • Installation error

    I was hoping to evaluate BTrDB but ran into difficulty during installation while following the instructions in the README on the master branch. The instructions say to execute:

    go get github.com/SoftwareDefinedBuildings/btrdb/server
    

    However, this results in the error:

    package github.com/SoftwareDefinedBuildings/btrdb/server: cannot find package "github.com/SoftwareDefinedBuildings/btrdb/server" in any of:
    /usr/lib/go/src/github.com/SoftwareDefinedBuildings/btrdb/server (from $GOROOT)
    

    While I'm not a Go developer, it seems as if the installation expects a server folder under the repository root that no longer exists, and I'm unsure how to continue. Using the alternate installation method also errors out, as shown:

    $ go get -d ./... && go install ./btrdbd
    package ./btrdb
        imports github.com/SoftwareDefinedBuildings/btrdb/internal/bstore: use of internal package not allowed
    package ./btrdb/btrdbd
        imports github.com/SoftwareDefinedBuildings/btrdb/internal/bstore: use of internal package not allowed
    package ./btrdb/internal/bstore
        imports github.com/SoftwareDefinedBuildings/btrdb/internal/bprovider: use of internal package not allowed
    package ./btrdb/internal/bstore
        imports github.com/SoftwareDefinedBuildings/btrdb/internal/cephprovider: use of internal package not allowed
    package ./btrdb/internal/bstore
        imports github.com/SoftwareDefinedBuildings/btrdb/internal/fileprovider: use of internal package not allowed
    package ./btrdb/internal/cephprovider
        imports github.com/SoftwareDefinedBuildings/btrdb/internal/bprovider: use of internal package not allowed
    package ./btrdb/internal/fileprovider
        imports github.com/SoftwareDefinedBuildings/btrdb/internal/bprovider: use of internal package not allowed
    package ./btrdb/qtree
        imports github.com/SoftwareDefinedBuildings/btrdb/internal/bstore: use of internal package not allowed
    gocode/src/github.com/stretchr/graceful/tests/main.go:7:2: cannot find package "github.com/codegangsta/negroni" in any of:
        /usr/lib/go/src/github.com/codegangsta/negroni (from $GOROOT)
        /home/allen/gocode/src/github.com/codegangsta/negroni (from $GOPATH)
    gocode/src/github.com/stretchr/graceful/tests/main.go:8:2: cannot find package "github.com/tylerb/graceful" in any of:
        /usr/lib/go/src/github.com/tylerb/graceful (from $GOROOT)
        /home/allen/gocode/src/github.com/tylerb/graceful (from $GOPATH)
    gocode/src/golang.org/x/net/html/charset/charset.go:20:2: cannot find package "golang.org/x/text/encoding" in any of:
        /usr/lib/go/src/golang.org/x/text/encoding (from $GOROOT)
        /home/allen/gocode/src/golang.org/x/text/encoding (from $GOPATH)
    gocode/src/golang.org/x/net/html/charset/charset.go:21:2: cannot find package "golang.org/x/text/encoding/charmap" in any of:
        /usr/lib/go/src/golang.org/x/text/encoding/charmap (from $GOROOT)
        /home/allen/gocode/src/golang.org/x/text/encoding/charmap (from $GOPATH)
    gocode/src/golang.org/x/net/html/charset/charset.go:22:2: cannot find package "golang.org/x/text/encoding/htmlindex" in any of:
        /usr/lib/go/src/golang.org/x/text/encoding/htmlindex (from $GOROOT)
        /home/allen/gocode/src/golang.org/x/text/encoding/htmlindex (from $GOPATH)
    gocode/src/golang.org/x/net/html/charset/charset.go:23:2: cannot find package "golang.org/x/text/transform" in any of:
        /usr/lib/go/src/golang.org/x/text/transform (from $GOROOT)
        /home/allen/gocode/src/golang.org/x/text/transform (from $GOPATH)
    gocode/src/golang.org/x/net/http2/h2i/h2i.go:38:2: cannot find package "golang.org/x/crypto/ssh/terminal" in any of:
        /usr/lib/go/src/golang.org/x/crypto/ssh/terminal (from $GOROOT)
        /home/allen/gocode/src/golang.org/x/crypto/ssh/terminal (from $GOPATH)
    gocode/src/gopkg.in/mgo.v2/dbtest/dbserver.go:13:2: cannot find package "gopkg.in/tomb.v2" in any of:
        /usr/lib/go/src/gopkg.in/tomb.v2 (from $GOROOT)
        /home/allen/gocode/src/gopkg.in/tomb.v2 (from $GOPATH)
    

    This was on a clean, fully updated install of Ubuntu 14.04, with librados-dev and MongoDB already installed. I found the FAST paper extremely interesting and was hoping someone could point me in the right direction. Thanks.

  • BTrDB Single Node Installation

    We are working on a single-node installation of BTrDB. We have gotten the basic Ceph installation from https://docs.smartgrid.store/ working (from the Prerequisites section), but we are having trouble finding a good guide for installing Kubernetes on a single node, as well as knowing which features we need to install (e.g., do we need 'flannel' for networking?). Can you recommend one?

  • Could not read allocator for cold pool! Has the DB been created properly?

    I tried to run BTrDB in Docker with the image btrdb/db:4.6.0 but got this error:

    db1_1  | [CRITICAL]cephprovider.go:489 > Could not read allocator for cold pool! Has the DB been created properly?
    db1_1  | panic: Could not read allocator for cold pool! Has the DB been created properly?
    db1_1  |
    db1_1  | goroutine 1 [running]:
    db1_1  | github.com/op/go-logging.(*Logger).Panic(0xc420149260, 0xc420011920, 0x1, 0x1)
    db1_1  | 	/home/immesys/w/go/src/github.com/op/go-logging/logger.go:188 +0xc7
    db1_1  | github.com/SoftwareDefinedBuildings/btrdb/internal/cephprovider.(*CephStorageProvider).Initialize(0xc420083a20, 0xf610e0, 0xc420196270)
    db1_1  | 	/home/immesys/w/go/src/github.com/SoftwareDefinedBuildings/btrdb/internal/cephprovider/cephprovider.go:489 +0x8f4
    db1_1  | github.com/SoftwareDefinedBuildings/btrdb/internal/bstore.NewBlockStore(0xf610e0, 0xc420196270, 0x80, 0xd, 0x0)
    db1_1  | 	/home/immesys/w/go/src/github.com/SoftwareDefinedBuildings/btrdb/internal/bstore/blockstore.go:133 +0x303
    db1_1  | github.com/SoftwareDefinedBuildings/btrdb.NewQuasar(0xf610e0, 0xc420196270, 0x1, 0xd, 0x0)
    db1_1  | 	/home/immesys/w/go/src/github.com/SoftwareDefinedBuildings/btrdb/quasar.go:116 +0x4d
    db1_1  | main.main()
    db1_1  | 	/home/immesys/w/go/src/github.com/SoftwareDefinedBuildings/btrdb/btrdbd/main.go:113 +0x312
    

    My Ceph pool list:

    ceph osd dump | grep pool
    pool 0 'rbd' replicated size 1 min_size 1 crush_ruleset 0 object_hash rjenkins pg_num 64 pgp_num 64 last_change 2 flags hashpspool stripe_width 0
    pool 1 'cephfs_data' replicated size 1 min_size 1 crush_ruleset 0 object_hash rjenkins pg_num 8 pgp_num 8 last_change 10 flags hashpspool crash_replay_interval 45 stripe_width 0
    pool 2 'cephfs_metadata' replicated size 1 min_size 1 crush_ruleset 0 object_hash rjenkins pg_num 8 pgp_num 8 last_change 9 flags hashpspool stripe_width 0
    pool 3 '.rgw.root' replicated size 1 min_size 1 crush_ruleset 0 object_hash rjenkins pg_num 8 pgp_num 8 last_change 11 owner 18446744073709551615 flags hashpspool stripe_width 0
    pool 4 'default.rgw.control' replicated size 1 min_size 1 crush_ruleset 0 object_hash rjenkins pg_num 8 pgp_num 8 last_change 13 owner 18446744073709551615 flags hashpspool stripe_width 0
    pool 5 'default.rgw.data.root' replicated size 1 min_size 1 crush_ruleset 0 object_hash rjenkins pg_num 8 pgp_num 8 last_change 15 owner 18446744073709551615 flags hashpspool stripe_width 0
    pool 6 'default.rgw.gc' replicated size 1 min_size 1 crush_ruleset 0 object_hash rjenkins pg_num 8 pgp_num 8 last_change 16 owner 18446744073709551615 flags hashpspool stripe_width 0
    pool 7 'default.rgw.lc' replicated size 1 min_size 1 crush_ruleset 0 object_hash rjenkins pg_num 8 pgp_num 8 last_change 17 owner 18446744073709551615 flags hashpspool stripe_width 0
    pool 8 'default.rgw.log' replicated size 1 min_size 1 crush_ruleset 0 object_hash rjenkins pg_num 8 pgp_num 8 last_change 18 owner 18446744073709551615 flags hashpspool stripe_width 0
    pool 11 'btrdb' replicated size 1 min_size 1 crush_ruleset 0 object_hash rjenkins pg_num 128 pgp_num 128 last_change 27 flags hashpspool stripe_width 0
    

    I am using the btrdb pool.

    And the status of the Ceph cluster:

    ceph -s
        cluster 136dff26-fe9d-4513-b5f3-7ba1b290a609
         health HEALTH_OK
         monmap e2: 1 mons at {ccb1.circeboard.com=10.174.7.90:6789/0}
                election epoch 7, quorum 0 ccb1.circeboard.com
          fsmap e5: 1/1/1 up {0=mds-ccb1.circeboard.com=up:active}
            mgr no daemons active
         osdmap e28: 1 osds: 1 up, 1 in
                flags sortbitwise,require_jewel_osds,require_kraken_osds
          pgmap v148: 256 pgs, 10 pools, 3829 bytes data, 224 objects
                40092 kB used, 102209 MB / 102249 MB avail
                     256 active+clean
    

    Does anyone know what I should do?

  • psl.pqube.DEVICEID

    The current implementation expects that the device will always be defined as psl.pqube.DEVICEID, which is not optimal considering the device may not be a PQube, etc.

    In manifest: add psl.pqube.p300001 path=myDevice

    I like the idea of the dot notation in the name for logical groupings or as a label.

  • Unexpected (very) short read

    I get this panic whenever I try to read from or write to BTrDB. Reads and writes happen through giles2. Any ideas? I have already updated both giles and BTrDB to the most recent versions. It started when I moved my database files to a new drive, but the problem persists when I use my original drive.

    Cheers! Jonathan

    2016/09/27 11:23:28 cpinterface.go:59 ▶ cpnp connection
    2016/09/27 11:23:32 fileprovider.go:225 ▶ Unexpected (very) short read
    panic: Unexpected (very) short read

    goroutine 35 [running]:
    panic(0x833100, 0xc8200f4a10)
        /usr/lib/go/src/runtime/panic.go:464 +0x3e6
    github.com/op/go-logging.(*Logger).Panic(0xc8200589c0, 0xc8200f4980, 0x1, 0x1)
        /home/jofu/go/src/github.com/op/go-logging/logger.go:188 +0xc8
    github.com/SoftwareDefinedBuildings/btrdb/internal/fileprovider.(*FileStorageProvider).Read(0xc82004ccb0, 0xc8203a6048, 0x10, 0x5b8, 0x36400004255f6b3, 0xc8203e2000, 0x5007, 0x5007, 0x0, 0x0, ...)
        /home/jofu/go/src/github.com/SoftwareDefinedBuildings/btrdb/internal/fileprovider/fileprovider.go:225 +0x3ba
    github.com/SoftwareDefinedBuildings/btrdb/internal/bstore.(*BlockStore).ReadDatablock(0xc82007c120, 0xc8203a6048, 0x10, 0x5b8, 0x36400004255f6b3, 0x11dd4, 0x38, 0xf000000000000000, 0x0, 0x0)
        /home/jofu/go/src/github.com/SoftwareDefinedBuildings/btrdb/internal/bstore/blockstore.go:308 +0x19c
    github.com/SoftwareDefinedBuildings/btrdb/qtree.(*QTree).LoadNode(0xc8200f8630, 0x36400004255f6b3, 0x11dd4, 0x538, 0xf000000000000000, 0xc8200f8600, 0x0, 0x0)
        /home/jofu/go/src/github.com/SoftwareDefinedBuildings/btrdb/qtree/qtree_utils.go:120 +0xbb
    github.com/SoftwareDefinedBuildings/btrdb/qtree.NewReadQTree(0xc82007c120, 0xc8203a6048, 0x10, 0x5b8, 0xffffffffffffffff, 0x0, 0x0, 0x0)
        /home/jofu/go/src/github.com/SoftwareDefinedBuildings/btrdb/qtree/qtree_utils.go:187 +0x165
    github.com/SoftwareDefinedBuildings/btrdb.(*Quasar).QueryValuesStream(0xc8200129b0, 0xc8203a6048, 0x10, 0x5b8, 0x147822aee5ce5355, 0x147823c64b68efbc, 0xffffffffffffffff, 0xc8200f8240, 0x20, 0x0)
        /home/jofu/go/src/github.com/SoftwareDefinedBuildings/btrdb/quasar.go:194 +0x5a
    github.com/SoftwareDefinedBuildings/btrdb/cpinterface.(*CPInterface).dispatchCommands.func1(0xc8200f8240, 0xc820119f28, 0x7f845e403bb0, 0xc820021108, 0xc8200129b0, 0x0, 0x0)
        /home/jofu/go/src/github.com/SoftwareDefinedBuildings/btrdb/cpinterface/cpinterface.go:94 +0xa12
    created by github.com/SoftwareDefinedBuildings/btrdb/cpinterface.(*CPInterface).dispatchCommands
        /home/jofu/go/src/github.com/SoftwareDefinedBuildings/btrdb/cpinterface/cpinterface.go:441 +0x475

  • Fix handling of faulty inserts

    Hello, I noticed that while BTrDB does not appear to be intended to support overwriting records (duplicate timestamps within one insert, or inserting records with already-existing timestamps), this restriction is not strictly enforced. There is a check in the QtreeNode InsertValues function, but it catches these errors only in some cases. What I think I have experienced firsthand are these situations:

    • Duplicates are successfully inserted (that is, multiple records with the same timestamp end up in one version of the tree) and written without any notice, silently corrupting the state. The system continues to ingest data and answer queries, but all queries targeting windows that contain these duplicates return incorrect statistics.
    • Duplicates are inserted successfully, but one of the following (correct, duplicate-free) inserts triggers the error. The correct insert is truncated and correct records are lost, while the duplicates remain in the tree.
    • In some special cases the duplicates can make an otherwise shallow tree grow very long branches (for example, when all the duplicates fall within a very small time interval), which might be impossible to delete or shorten in later versions without also deleting the correct records from previous inserts that share that leaf.

    My proposed changes fix most of this behavior. I had some trouble understanding the error-returning and panicking patterns in the insert path, so hopefully simply returning an error is OK. There should not be too big a performance penalty from using those two hashtables, and I removed one sort in the ConvertToCore function :)
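
    For illustration, the kind of hashtable-based duplicate check described above might look roughly like the following sketch (hypothetical names; this is not the actual patch):

    package main

    import "fmt"

    // Record is a hypothetical stand-in for a raw (time, value) point.
    type Record struct {
        Time  int64
        Value float64
    }

    // rejectDuplicates returns an error if the batch contains two records with the
    // same timestamp, or if a timestamp already exists in the stream (checked via
    // the caller-supplied existsInStream function). On error, nothing is inserted.
    func rejectDuplicates(batch []Record, existsInStream func(int64) bool) error {
        seen := make(map[int64]struct{}, len(batch))
        for _, r := range batch {
            if _, dup := seen[r.Time]; dup {
                return fmt.Errorf("duplicate timestamp %d within insert", r.Time)
            }
            if existsInStream(r.Time) {
                return fmt.Errorf("timestamp %d already exists in stream", r.Time)
            }
            seen[r.Time] = struct{}{}
        }
        return nil
    }

    func main() {
        // A batch with an internal duplicate is rejected as a whole.
        batch := []Record{{Time: 1, Value: 1.0}, {Time: 1, Value: 2.0}}
        fmt.Println(rejectDuplicates(batch, func(int64) bool { return false }))
    }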

    These changes make BTrDB err out on any duplicate-containing insert and may even throw away the whole batch from the PQM, which might be a bit too harsh, but I think it is still better than the situations I encountered. A different approach is also possible: simply choose one of the duplicated records and write only that one along with all the non-duplicate records from the batch (possibly with slightly different treatment if, for example, the duplicates occur inside one insert call). If you feel that approach would be better, I can rework the change to that behavior.

    What this currently does not fix is the situation where there are two records with duplicated timestamps: one in the tree and one in the PQM buffer. Queries on the latest version of a stream (with uncommitted records) return incorrect statistics, but this is not really fixable, as the query does not even read raw records (it is of course fixable for the rawvalues query, but I didn't handle that either).

    I tested my changes reasonably well using the tests from the btrdb Go library (there was one test case that deliberately inserted duplicates, which confused me a little) and the whole thing seems to be working OK. However, I am not a Go expert, so I will be grateful for any suggestions, as this is a somewhat larger change and surely more tweaks and reviews are necessary.

    I hope this helps the project! Filip

  • server crash on restart

    I had to restart a server pod, which did not work.

    The process did not shut down, so I had to kill it.

    When trying to start it again, the server crashes during journal checking after emitting "panic: (404: stream does not exist)".

    I have put the Ceph pools aside so I can provide them for further analysis.

    logs-from-btrdb-in-btrdb-0 .txt

  • GPL vs Apache vs BSD-*

    I'm a bit surprised to see this is under the GPL. I expected it would be under a BSD or Apache license, especially given the funding source. I wouldn't say it's a deal-breaker, but it does give me pause when even considering this database, because future use will be architecturally limited (e.g., it can't go fully embedded or be reworked as an in-memory storage library), and it will complicate distribution of other (more permissively licensed) software. That's not a great database feature. Especially given the integration with Apache Spark and the DoE funding, I would have expected the Apache license to be more appropriate.

  • Does BtrDB support non-scalar data?

    Love the idea behind btrdb. I get thousands of ticks per second of financial data, but I often only want to get the aggregates out.

    The problem is that I also want to use this for computed data. For example, as time series come in, I create yield curves. These yield curve structures require two numeric vectors to describe. Is it possible to store data that is more than scalar in BTrDB, and, as a related question, can I pass in my own aggregation functions (since in this vector case, mean, min, max, etc. will require custom functions)?

  • Support grafana

    Could BTrDB support the ever-popular visualization front end Grafana? I know this is probably a Grafana issue, but I wouldn't mind knowing how to pull values via HTTP for Grafana, or whether it is even possible.

  • flush open trees under starvation

    When the open_trees pool is starved, a scavenger should pick the best tree and flush it to open up a slot. At the moment the client must wait until a tree times out.
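
    A rough sketch of the idea, under the assumption that open trees live in a bounded slot pool (these names are hypothetical and not the actual BTrDB internals):

    package main

    import "sync"

    // openTree is a hypothetical handle for a stream's open (unflushed) tree.
    type openTree struct {
        lastTouched int64
        flush       func() // commits buffered points and releases the slot
    }

    type treePool struct {
        mu    sync.Mutex
        slots int
        open  []*openTree
    }

    // makeRoom is the scavenger step: when every slot is occupied, flush the
    // least-recently-touched tree immediately instead of making the caller wait
    // until a tree times out on its own.
    func (p *treePool) makeRoom() {
        p.mu.Lock()
        defer p.mu.Unlock()
        if len(p.open) == 0 || len(p.open) < p.slots {
            return // a slot is already free
        }
        best := 0 // "best" here simply means least recently touched
        for i, t := range p.open {
            if t.lastTouched < p.open[best].lastTouched {
                best = i
            }
        }
        p.open[best].flush()
        p.open = append(p.open[:best], p.open[best+1:]...)
    }

    func main() {
        p := &treePool{slots: 1, open: []*openTree{{lastTouched: 42, flush: func() {}}}}
        p.makeRoom() // flushes the single open tree so a new one can be opened
    }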

  • Support for streaming data on an edge device

    We are evaluating BTrDB (and others) with an eye towards using it as short-term storage (~1 day) on an edge device. Such a device would be a Linux server with limited memory and CPU (2-4 cores, 4-8 GB) and would typically be responsible for ingesting data at machine resolution. The simple nature of what was described in the README for the 3.x distribution therefore appealed to us, and having read the Berkeley paper, this paradigm seems the most promising to us for time series acquisition. A stripped-down container, perhaps with clustering ability and (ideally) some method for expiring data, particularly at fine-grained levels, would be ideal.
