Vertica-sql-go - Official native Go client for the Vertica Analytics Database.

vertica-sql-go

vertica-sql-go is a native Go adapter for the Vertica (http://www.vertica.com) database.

Please check out release notes to learn about the latest improvements.

vertica-sql-go has been tested with Vertica 11.0.1 and Go 1.13/1.14/1.15/1.16.

Installation

Source code for vertica-sql-go can be found at:

https://github.com/vertica/vertica-sql-go

Alternatively, you can use 'go get' to install the package into your local Go environment.

go get github.com/vertica/vertica-sql-go

Usage

As this library is written to Go's standard database/sql interface, usage is compliant with its methods and behavioral expectations.

Importing

First ensure that you have the library checked out in your standard Go hierarchy and import it.

import (
    "context"
    "database/sql"

    // Import the driver for its side effect of registering itself with database/sql.
    _ "github.com/vertica/vertica-sql-go"
)

Setting the Log Level

The vertica-sql-go driver supports multiple log levels, as defined in the following table:

Log Level (int)   Log Level Name   Description
0                 TRACE            Show function calls, plus all below
1                 DEBUG            Show low-level functional operations, plus all below
2                 INFO             Show important state information, plus all below
3                 WARN (default)   Show non-breaking abnormalities, plus all below
4                 ERROR            Show breaking errors, plus all below
5                 FATAL            Show process-breaking errors
6                 NONE             Disable all log messages

These levels can be set programmatically by calling the logger package itself:

logger.SetLogLevel(logger.DEBUG)

or by setting the environment variable VERTICA_SQL_GO_LOG_LEVEL to one of the integer values in the table above. This must be set before the process using the driver starts, as the global log level is read from the environment on start-up.

Example:

export VERTICA_SQL_GO_LOG_LEVEL=3

Setting the Log File

By default, log messages are sent to stdout, but the vertica-sql-go driver can also output to a file in cases where stdout is not available. Simply set the environment variable VERTICA_SQL_GO_LOG_FILE to your desired output location.

Example:

export VERTICA_SQL_GO_LOG_FILE=/var/log/vertica-sql-go.log

Creating a connection

connDB, err := sql.Open("vertica", myDBConnectString)

where myDBConnectString is of the form:

vertica://(user):(password)@(host):(port)/(database)?(queryArgs)

Currently supported query arguments are:

use_prepared_statements
    Whether to use client-side query interpolation or server-side argument binding.
    1 = (default) use server-side bindings
    0 = use client-side interpolation (LESS SECURE)

connection_load_balance
    Whether to enable connection load balancing on the client side.
    0 = (default) disable load balancing
    1 = enable load balancing

tlsmode
    The SSL/TLS policy for this connection.
    'none' (default) = don't use SSL/TLS for this connection
    'server' = server must support SSL/TLS, but skip verification (INSECURE!)
    'server-strict' = server must support SSL/TLS
    {customName} = use a custom registered tls.Config (see "Using custom TLS config" section below)

backup_server_node
    A list of backup hosts for the client to try to connect to if the primary host is unreachable.
    A comma-separated list of backup host:port pairs, e.g. 'host1:port1,host2:port2,host3:port3'
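
For example, a connection string that enables load balancing, strict TLS, and a pair of backup hosts could be assembled as below. This is a minimal sketch; the host names, credentials, and database name are placeholders to substitute with your own.

    // Hypothetical values; substitute your own host, credentials and database.
    connStr := "vertica://dbadmin:mypassword@vertica01:5433/mydb?" +
        "connection_load_balance=1&tlsmode=server-strict&backup_server_node=vertica02:5433,vertica03:5433"
    connDB, err := sql.Open("vertica", connStr)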

To ping the server and validate a connection (since sql.Open() does not necessarily establish a connection at that moment), simply call the PingContext() method.

ctx := context.Background()

err = connDB.PingContext(ctx)

If there is an error in connection, the error result will be non-nil and contain a description of whatever problem occurred.

Using custom TLS config

Custom TLS config(s) can be registered for TLS / SSL encrypted connection to the server. Here is an example of registering and using a tls.Config:

import (
    "crypto/tls"
    "crypto/x509"
    "database/sql"
    "fmt"
    "io/ioutil"
    "net/url"

    vertigo "github.com/vertica/vertica-sql-go"
)

// Register tls.Config
rootCertPool := x509.NewCertPool()
pem, err := ioutil.ReadFile("/certs/ca.crt")
if err != nil {
    LOG.Warningln("ERROR: failed reading cert file", err)
}
if ok := rootCertPool.AppendCertsFromPEM(pem); !ok {
    LOG.Warningln("ERROR: Failed to append PEM")
}
tlsConfig := &tls.Config{RootCAs: rootCertPool, ServerName: host}
vertigo.RegisterTLSConfig("myCustomName", tlsConfig)

// Connect using tls.Config
var rawQuery = url.Values{}
rawQuery.Add("tlsmode", "myCustomName")
var query = url.URL{
    Scheme:   "vertica",
    User:     url.UserPassword(user, password),
    Host:     fmt.Sprintf("%s:%d", host, port),
    Path:     databaseName,
    RawQuery: rawQuery.Encode(),
}
sql.Open("vertica", query.String())

Performing a simple query

Performing a simple query is merely a matter of using that connection to create a query and iterate its results. Here is an example of a query that should always work.

rows, err := connDB.QueryContext(ctx, "SELECT * FROM v_monitor.cpu_usage LIMIT 5")
if err != nil {
    // handle the error
}

defer rows.Close()

IMPORTANT : Just as with connections, you should always Close() the results cursor once you are done with it. It's often easier to just defer the closure, for convenience.

Performing a query with arguments

This is done in a similar manner on the client side.

rows, err := connDB.QueryContext(ctx, "SELECT name FROM MyTable WHERE id=?", 21)

Behind the scenes, this will be handled in one of two ways, based on whether or not you requested client interpolation in the connection string.

With client interpolation enabled, the client library will create a new query string with the arguments already in place, and submit it as a simple query.

With client interpolation disabled (default), the client library will use the full server-side parse(), describe(), bind(), execute() cycle.
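
If you want to opt in to client-side interpolation, request it via the use_prepared_statements connection string argument described above. A minimal sketch with placeholder credentials and host:

    // use_prepared_statements=0 enables client-side interpolation (LESS SECURE).
    connDB, err := sql.Open("vertica",
        "vertica://dbadmin:mypassword@localhost:5433/mydb?use_prepared_statements=0")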

Named Arguments

rows, err := connDB.QueryContext(ctx, "SELECT name FROM MyTable WHERE id=@id and something=@example", sql.Named("id", 21), sql.Named("example", "hello"))

Named arguments are emulated by the driver: they are converted to positional arguments, and the named values you supply are slotted into the required positions. This still allows server-side prepared statements, as @id and @example above are replaced by ? before being sent. If you use named arguments, all arguments must be named; do not mix positional and named arguments. All named arguments are normalized to upper case, which means @param, @PaRaM, and @PARAM are treated as equivalent.

Reading query result rows

As outlined in the Go database/sql documentation, reading the results of a query is done via a loop, bounded by a .Next() iterator.

for rows.Next() {
    var nodeName string
    var startTime string
    var endTime string
    var avgCPU float64

    if err := rows.Scan(&nodeName, &startTime, &endTime, &avgCPU); err != nil {
        // handle the error
    }

    // Use these values for something here.
}

If you need to examine the names of the columns, simply access the Columns() method of the rows object.

columnNames, _ := rows.Columns()

for _, columnName := range columnNames {
        // use the column name here.
}

Paging in Data

By default, the query results are cached in memory allowing for rapid iteration of result row content. This generally works well, but in the case of exceptionally large result sets, you could run out of memory.

If such a query needs to be performed, it is recommended that you tell the driver that you wish to cache that data in a temporary file, so its results can be "paged in" as you iterate the results. The data is stored in a process-read-only file in the OS's temp directory.

To enable result paging, simply create a VerticaContext and use it to perform your query.

vCtx := NewVerticaContext(context.Background())

// Only keep 50000 rows in memory at once.
vCtx.SetInMemoryResultRowLimit(50000)

rows, _ := connDB.QueryContext(
    vCtx,
    "SELECT a, b, c, d, e FROM result_cache_test ORDER BY a")

defer rows.Close()

// Use rows result as normal.

If you want to disable paging on the same context altogether, simply set the row limit to 0 (the default).
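
For example, to turn paging off again on a context you have already configured:

    // A limit of 0 (the default) keeps all result rows in memory.
    vCtx.SetInMemoryResultRowLimit(0)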

Performing a simple execute call

This is very similar to a simple query, but has a slightly different result type. A simple execute() might look like this:

res, err = connDB.ExecContext(ctx, "DROP TABLE IF EXISTS MyTable")

In this instance, res will contain information (such as 'rows affected') about the result of this execution.

Performing an execute with arguments

This, again, looks very similar to the query-with-arguments use case and is subject to the same effects of client-side interpolation.

res, err := connDB.ExecContext(
        ctx,
        "INSERT INTO MyTable VALUES (?)", 21)

Server-side prepared statements

IMPORTANT : Vertica does not support executing a command string containing multiple statements using server-side prepared statements.

If you wish to reuse queries or executions, you can prepare them once and supply arguments only.

// Prepare the query.
stmt, err := connDB.PrepareContext(ctx, "SELECT id FROM MyTable WHERE name=?")

// Execute it with this argument.
rows, err = stmt.Query("Joe Perry")

NOTE : This method is subject to the use_prepared_statements setting. If client-side interpolation is requested, the statement is simply stored on the client and interpolated with arguments each time it's used. If client-side interpolation is not requested (the default), the statement is parsed and described on the server as expected.
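
A slightly fuller sketch of preparing once and reusing the statement with different arguments (error handling abbreviated; Close() releases the statement when you are finished with it; the names queried are just sample values):

    stmt, err := connDB.PrepareContext(ctx, "SELECT id FROM MyTable WHERE name=?")
    if err != nil {
        // handle the error
    }
    defer stmt.Close()

    // Reuse the same prepared statement with different arguments.
    for _, name := range []string{"Joe Perry", "Brad Whitford"} {
        rows, err := stmt.QueryContext(ctx, name)
        if err != nil {
            // handle the error
        }
        // ... consume rows ...
        rows.Close()
    }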

Transactions

The vertica-sql-go driver supports basic transactions as defined by the Go standard library.

// Define the options for this transaction state
opts := &sql.TxOptions{
    Isolation: sql.LevelDefault,
    ReadOnly:  false,
}

// Begin the transaction.
tx, err := connDB.BeginTx(ctx, opts)
// You can either commit it.
err = tx.Commit()
// Or roll it back.
err = tx.Rollback()

The following transaction isolation levels are supported:

  • sql.LevelReadUncommitted
  • sql.LevelReadCommitted
  • sql.LevelSerializable
  • sql.LevelRepeatableRead
  • sql.LevelDefault

The following transaction isolation levels are unsupported:

  • sql.LevelSnapshot
  • sql.LevelLinearizable

Although Vertica supports the grammars for these transaction isolation levels, they are internally promoted to stronger isolation levels.
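
Putting it together, a minimal sketch of a typical transaction flow (the INSERT is illustrative and reuses MyTable from the earlier examples):

    tx, err := connDB.BeginTx(ctx, opts)
    if err != nil {
        // handle the error
    }

    if _, err = tx.ExecContext(ctx, "INSERT INTO MyTable VALUES (?)", 21); err != nil {
        // Something went wrong; undo everything performed in this transaction.
        tx.Rollback()
    } else {
        // Make the changes permanent and visible to other sessions.
        err = tx.Commit()
    }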

COPY modes Supported

COPY FROM STDIN

vertica-sql-go supports copying from stdin. This allows you to write a command-line tool that accepts stdin as an input and passes it to Vertica for processing. An example:

_, err = connDB.ExecContext(ctx, "COPY stdin_data FROM STDIN DELIMITER ','")

This will process input from stdin until an EOF is reached.

COPY FROM STDIN with alternate stream

In your code, you may also supply a different io.Reader (such as an *os.File) from which to supply your data. Simply create a new VerticaContext, set the copy input stream, and provide this context to the execute call. An example:

fp, err := os.OpenFile("./resources/csv/sample_data.csv", os.O_RDONLY, 0600)
...
vCtx := NewVerticaContext(ctx)
vCtx.SetCopyInputStream(fp)

_, err = connDB.ExecContext(vCtx, "COPY stdin_data FROM STDIN DELIMITER ','")

If you provide a VerticaContext but don't set a copy input stream, the driver will fall back to os.Stdin.

Full Example

By following the above instructions, you should be able to successfully create a connection to your Vertica instance and perform the operations you require. A complete example program is listed below:

package main

import (
    "context"
    "database/sql"
    "os"

    _ "github.com/vertica/vertica-sql-go"
    "github.com/vertica/vertica-sql-go/logger"
)

func main() {
    // Have our logger output INFO and above.
    logger.SetLogLevel(logger.INFO)

    var testLogger = logger.New("samplecode")

    ctx := context.Background()

    // Create a connection to our database. Connection is lazy and won't
    // happen until it's used.
    connDB, err := sql.Open("vertica", "vertica://dbadmin:@localhost:5433/dbadmin")

    if err != nil {
        testLogger.Fatal(err.Error())
        os.Exit(1)
    }

    defer connDB.Close()

    // Ping the database connection to force it to attempt to connect.
    if err = connDB.PingContext(ctx); err != nil {
        testLogger.Fatal(err.Error())
        os.Exit(1)
    }

    // Query a standard metric table in Vertica.
    rows, err := connDB.QueryContext(ctx, "SELECT * FROM v_monitor.cpu_usage LIMIT 5")

    if err != nil {
        testLogger.Fatal(err.Error())
        os.Exit(1)
    }

    defer rows.Close()

    // Iterate over the results and print them out.
    for rows.Next() {
        var nodeName string
        var startTime string
        var endTime string
        var avgCPU float64

        if err = rows.Scan(&nodeName, &startTime, &endTime, &avgCPU); err != nil {
            testLogger.Fatal(err.Error())
            os.Exit(1)
        }

        testLogger.Info("%s\t%s\t%s\t%f", nodeName, startTime, endTime, avgCPU)
    }

    testLogger.Info("Test complete")

    os.Exit(0)
}

License

Apache 2.0 License, please see LICENSE for details.

Contributing guidelines

Have a bug or an idea? Please see CONTRIBUTING.md for details.

Benchmarks

You can run a benchmark and profile it with a command like:

    go test -bench '^BenchmarkRowsWithLimit$' -benchmem -memprofile memprofile.out -cpuprofile profile.out -run=none

and then explore it with go tool pprof. The -run part excludes the tests for brevity.

Acknowledgements

  • @grzm (Github)
  • @watercraft (Github)
  • @fbernier (Github)
  • @mlh758 (Github) for the awesome work filling in and enhancing the driver in many important ways.
  • Tom Wall (Vertica) for the infinite patience and deep knowledge.
  • The creators and contributors of the vertica-python library, and members of the Vertica team, for their help in understanding the wire protocol.
Comments
  • response batching / cursor / lazy queries

    Hey,

    It does not seem possible right now, but it would be nice if we could fetch rows in batches for very large responses. I don't know the exact Vertica facilities that could be leveraged to achieve this, but I guess there is a way.

    Thanks for your work!

  • Request Cancellation

    This driver does not honor the cancellation of a context and will leave a query running forever.

    It looks like there is a cancellation message that can be sent if you open another connection (judging by the Python driver) which will close out a long running query. All operations seem to pass through QueryContextRaw so monitoring the context there should be sufficient.

    A question I have about implementation - what does sending a cancellation do if the query is already sending rows? Will Vertica send an error message? Complete?

  • Possible issue with wrong rows returned from current stmt results

    I'm seeing an issue which looks like the driver mismatching prepared statements and result sets when running concurrently. The Prometheus free/sql_exporter runs the queries for each "scrape" concurrently (in go routines). The rows returned by stmt.QueryContext(ctx) are not always the rows associated with the prepared statement.

    This is all with the patch in #21 applied.

    Here's the function that executes each query, with some debug logging and comments added by me (un-mutilated, release version):

    // run executes the query on the provided database, in the provided context.
    func (q *Query) run(ctx context.Context, conn *sql.DB) (*sql.Rows, errors.WithContext) {
            // `q` is a struct which contains the query to be executed at q.config.Query
    	r, _ := regexp.Compile("\\s+")
    	queryString := r.ReplaceAllString(q.config.Query, " ")
    	if q.conn != nil && q.conn != conn {
    		panic(fmt.Sprintf("[%s] Expecting to always run on the same database handle", q.logContext))
    	}
    
    	if q.stmt == nil {
    		stmt, err := conn.PrepareContext(ctx, q.config.Query)
    		if err != nil {
    			return nil, errors.Wrapf(q.logContext, err, "prepare query failed")
    		}
    		q.conn = conn
    		q.stmt = stmt
    	}
            // checking query associated with the prepared statement `stmt`
    	stmtValue := reflect.ValueOf(q.stmt)
    	stmtQueryString := r.ReplaceAllString(reflect.Indirect(stmtValue).FieldByName("query").String(), " ")
    	rows, err := q.stmt.QueryContext(ctx)
    	columns, _ := rows.Columns()
    	log.Infof("run query [%s] stmt.query same? %t columns %q", queryString, (stmtQueryString == queryString), columns)
    	return rows, errors.Wrap(q.logContext, err)
    }
    

    Here's an annotated sample of the associated logging output for an exporter that issues 5 queries for each scrape.

    ///////
    // first scrape after launching sql_exporter
    ///////
    
    // XXX columns don't match query
    I0821 17:06:34.977006   22312 query.go:131] run query [SELECT current_epoch, ahm_epoch, last_good_epoch, last_good_epoch - ahm_epoch as ahm_epoch_lag, designed_fault_tolerance, current_fault_tolerance, wos_used_bytes, ros_used_bytes FROM v_monitor.system ] stmt.query same? true columns ["mode" "scope" "object_name" "count"]
    
    // XXX columns missing
    I0821 17:06:35.071003   22312 query.go:131] run query [SELECT lower(locks.lock_mode) as mode, lower(locks.lock_scope) as scope, object_name, count(*) FROM v_monitor.locks GROUP BY lower(lock_mode), lower(lock_scope), object_name; ] stmt.query same? true columns []
    
    // OK
    I0821 17:06:35.244324   22312 query.go:131] run query [SELECT count(*) FROM v_monitor.delete_vectors ] stmt.query same? true columns ["count"]
    
    // OK
    I0821 17:06:35.247949   22312 query.go:131] run query [SELECT EXTRACT(epoch FROM current_timestamp) AS epoch ] stmt.query same? true columns ["epoch"]
    
    // OK
    I0821 17:06:35.261840   22312 query.go:131] run query [SELECT node_name, lower(node_state) as node_state, count(*) FROM v_catalog.nodes GROUP BY node_name, node_state ] stmt.query same? true columns ["node_name" "node_state" "count"]
    
    ///////
    // Second scrape
    ///////
    
    // XXX columns missing
    I0821 17:06:41.483174   22312 query.go:131] run query [SELECT lower(locks.lock_mode) as mode, lower(locks.lock_scope) as scope, object_name, count(*) FROM v_monitor.locks GROUP BY lower(lock_mode), lower(lock_scope), object_name; ] stmt.query same? true columns []
    
    // OK
    I0821 17:06:41.589738   22312 query.go:131] run query [SELECT count(*) FROM v_monitor.delete_vectors ] stmt.query same? true columns ["count"]
    
    // OK
    I0821 17:06:41.595501   22312 query.go:131] run query [SELECT current_epoch, ahm_epoch, last_good_epoch, last_good_epoch - ahm_epoch as ahm_epoch_lag, designed_fault_tolerance, current_fault_tolerance, wos_used_bytes, ros_used_bytes FROM v_monitor.system ] stmt.query same? true columns ["current_epoch" "ahm_epoch" "last_good_epoch" "ahm_epoch_lag" "designed_fault_tolerance" "current_fault_tolerance" "wos_used_bytes" "ros_used_bytes"]
    
    // OK
    I0821 17:06:41.672825   22312 query.go:131] run query [SELECT EXTRACT(epoch FROM current_timestamp) AS epoch ] stmt.query same? true columns ["epoch"]
    
    // OK
    I0821 17:06:41.791733   22312 query.go:131] run query [SELECT node_name, lower(node_state) as node_state, count(*) FROM v_catalog.nodes GROUP BY node_name, node_state ] stmt.query same? true columns ["node_name" "node_state" "count"]
    
    ///////
    // third scrape
    ///////
    
    // OK
    I0821 17:06:53.111116   22312 query.go:131] run query [SELECT EXTRACT(epoch FROM current_timestamp) AS epoch ] stmt.query same? true columns ["epoch"]
    
    // OK
    I0821 17:06:53.113658   22312 query.go:131] run query [SELECT count(*) FROM v_monitor.delete_vectors ] stmt.query same? true columns ["count"]
    
    // OK
    I0821 17:06:53.210031   22312 query.go:131] run query [SELECT node_name, lower(node_state) as node_state, count(*) FROM v_catalog.nodes GROUP BY node_name, node_state ] stmt.query same? true columns ["node_name" "node_state" "count"]
    
    // XXX mismatched columns
    I0821 17:06:53.222417   22312 query.go:131] run query [SELECT current_epoch, ahm_epoch, last_good_epoch, last_good_epoch - ahm_epoch as ahm_epoch_lag, designed_fault_tolerance, current_fault_tolerance, wos_used_bytes, ros_used_bytes FROM v_monitor.system ] stmt.query same? true columns ["mode" "scope" "object_name" "count"]
    
    // OK
    I0821 17:06:53.305371   22312 query.go:131] run query [SELECT lower(locks.lock_mode) as mode, lower(locks.lock_scope) as scope, object_name, count(*) FROM v_monitor.locks GROUP BY lower(lock_mode), lower(lock_scope), object_name; ] stmt.query same? true columns ["mode" "scope" "object_name" "count"]
    

    Note that the errors aren't consistent between scrapes.

    I haven't been able to produce a test case isolated to just the Vertica driver, though I don't know if that's due to the bug actually being in the sql_exporter code (which I think is unlikely, as I think they would have come across it before now, and the code that runs the queries looks pretty straightforward), my limited Go experience, or an inherent difficulty in reproducing concurrency bugs. I'd rather not open an issue without an isolated test case, but at this point I'm not sure how to produce one.

    Any insight or suggestions, including pointers on making a test case, would be appreciated.

  • vertica-go-sql connector hangs when processing failed exec command.

    This is a bit of a complex test, but nonetheless it fails predictably.

    The situation is that during the development of a UDx source plugin we found that simply throwing an error from the UDSource::processWithMetadata function causes the vertica-sql-go client to never return from the call and hangs the session.

    // The below implementation on the source causes the vertica go client to hang indefinitely.
    StreamState TestSource::processWithMetadata(ServerInterface &srvInterface,
                                                DataBuffer &output,
                                                LengthBuffer &output_lengths)
    {
        vt_report_error(2, "FORCE process with Metadata error");
        return StreamState::DONE;
    }

    Now we use the vertica-go-sql sample app that's provided, with the appropriate connection string changed, plus the following additional snippet...

    // Query a standard metric table in Vertica.
    _, err = connDB.ExecContext(ctx, "COPY public.from_test SOURCE TestSource() PARSER KafkaJSONParser(flatten_arrays=True, flatten_maps=True) DIRECT NO COMMIT;")

    if err != nil {
        testLogger.Fatal(err.Error())
        os.Exit(1)
    }
    testLogger.Info("Test complete")
    

    Running the above snippet from vsql, the proper error is returned every time and there are no hangs. The vertica.log file shows that TestSource was executed and returns the same logs as when executed from the vsql command. (Attachment: testudx.tar.gz)

  • Question: Do you support Vertica 8.1.x?

    I read from the following link that Vertica-sql-go supports Vertica 8.0 and later, but I am actually getting errors:

    Error: [08000] Unsupported frontend protocol 3.8: server supports 3.0 to 3.6

    Here is the link I mentioned: https://www.vertica.com/blog/the-vertica-sql-driver-for-go/

    It will be great if you can clarify. Thank you!

  • Interpolation with Named Parameters

    I am trying to run a query using named parameters like this:

    select
    *
    from
    some_table
    where
    client like @CLIENT_NAME
    

    The @ symbol appears to be the correct syntax based on the docs here but when I run this with this adapter like this:

    queryTerm := "test"
    rows, err := stmt.QueryContext(ctx, sql.Named("CLIENT_NAME", queryTerm))
    

    I get Error: [42703] Column "CLIENT_NAME" does not exist

  • Move row storage logic to separate package

    I thought it might be easier to manage the file caching vs rows in memory by putting the two paths behind an interface so rows.go just has to worry about picking a place to put the data.

    Row message can just be a byte slice, which reduces memory pressure a bit.

    One thing that will change here is that if you set a memory row limit, a temp file will always be created, even if you don't actually reach that limit. I'm assuming people use this setting for queries they assume will return a lot of rows and it keeps the logic a little simpler.

    I made the buffer around the file a bit bigger to reduce the amount of syscalls which helps the performance of writing data off to the file quite a bit.

    Before and after:

    1,000,000 Rows and limit:
    BenchmarkRowsWithLimit-12    	       3	 405907456 ns/op	215970194 B/op	 9994027 allocs/op
    BenchmarkRowsWithLimit-12    	       4	 285466348 ns/op	192031624 B/op	 8994028 allocs/op
    
    BenchmarkRows-12    	     381	   3139275 ns/op	 2545353 B/op	  100019 allocs/op
    BenchmarkRows-12    	     411	   2873378 ns/op	 2304814 B/op	   90020 allocs/op
    
    BenchmarkRowsWithLimit-12    	     264	   4154224 ns/op	 2177694 B/op	   99967 allocs/op
    BenchmarkRowsWithLimit-12    	     372	   3034079 ns/op	 1999121 B/op	   89968 allocs/op
    

    @fbernier Does this make a practical difference for your data set? The benchmarks look good but it would be nice to see how it holds up with a real data set.

  • Add unit tests for interpolate function

    Fix string interpolation containing single quotes

    Fix interpolation with multiple parameters

    Fix a couple of linter issues in readme and stmt.go

    I see there are also tests in driver_test.go but those seem to actually hit a database and I don't have a development database set up to run against.

  • Implement ping and test connection state

    Fixes #51

    Right now the documentation here is a little misleading because the driver does not actually implement Pinger, and so Ping will never return an error. Implementing Ping allows us to cleanly implement SessionResetter so the sql package can check the connection state.

    Try running this gist before and after this commit. You'll need to fix the replace directive in go.mod to match wherever you have your local copy of this repo. You'll also need a user mike identified by pass (or change the gist's connection string). Start the gist program, then run select close_user_sessions('mike'); to kill the session. Before the patch, every tick will error with an EOF. After the patch one tick will error with an EOF and future connections will work.

    A next step would be internally implementing a retry.

  • Rollback severity seems to end the connection

    Check for ROLLBACK severity and guard against it in Close

    Fixes #61 Fixes #48

    Most of the changes in Close are whitespace; I swapped the if check to an if-return guard and then un-indented the body of it.

  • does not handle sql comments with question mark

    I had a large query that had a question mark in the SQL comments (after --). This resulted in an error "sql: expected 1 arguments, got 0". It took me a while to track this down, but the question mark is used by this driver's NumInput() function to indicate the presence of an argument. As alexbrainman/odbc doesn't have this issue I suspect there is another way to evaluate this property. For now I have removed the question mark and the query seems to be working correctly.

  • Conn.close() causes a panic when there is no valid connection

    Mar 22 20:20:00.807275 ERROR driver: cannot connect to vertica-default.secopstn01.us-west-1.internal.mp.idi-dev.cyberreseng.com:5433 (dial tcp 10.35.87.182:5433: connect: connection refused)

    panic: runtime error: invalid memory address or nil pointer dereference
    [signal SIGSEGV: segmentation violation code=0x1 addr=0x28 pc=0x7075e9]

    goroutine 36 [running]:
    database/sql.(*Conn).close(0x10?, {0x0?, 0x0?})
        /usr/local/go/src/database/sql/sql.go:2108 +0x49
    database/sql.(*Conn).Close(0xc000037b70?)
        /usr/local/go/src/database/sql/sql.go:2129 +0x1d
    main.(*vertica).CopyFilesFromS3(0xc000068820, {0xc00001e4c0, 0x1, 0x81c2bd?}, {0xc000258c00, 0x5b0}, 0xc000064100)
        /src/vertica.go:144 +0x299
    main.(*ingestor).ProcessBatch(0xc0000900e0, {0xc00009e3c0, 0x1, 0x1}, 0xc000064100)
        /src/ingestor.go:94 +0x283
    main.(*ingestor).Run(0xc0000900e0, 0x3)
        /src/ingestor.go:55 +0x1b2
    created by main.main
        /src/main.go:92 +0x885

  • incorrect parsing of timestamp offset for time zones with non integer offset

    Hi!

    In connection.go line 499 it is assumed that the Vertica time zone offset is an integer number of hours. For time zones like India (+530) this code fails. Attempts after that to read a timestamp from the database fail because the appended time zone information is incomplete.

    a fix would be something like:

    func getTimeZoneOffset(str string) string {
        for i := len(str) - 1; i >= 0 && i >= len(str)-8; i-- {
            ch := str[i]
            if ch == '+' || ch == '-' {
                return str[i:]
            }
        }
        return "+00"
    }
    

    many thanks!

  • Hung connection to vertica

    Hi! Can anybody help with understanding this problem? I've got code like in the examples:

    	connDB, err := sql.Open("vertica", dwhConnection)
    	if err != nil {
    		return fmt.Errorf("failed to connect to dwh: %w", err)
    	}
    	defer connDB.Close()
    
    	vCtx := vertigo.NewVerticaContext(context.Background())
    	// Only keep 40,000 rows in memory at once.
    	if err = vCtx.SetInMemoryResultRowLimit(40000); err != nil {
    		return fmt.Errorf("failed to setup vertica context 'in memory row limit' param: %w", err)
    	}
    
    	if err = connDB.PingContext(vCtx); err != nil {
    		return fmt.Errorf("failed to ping dwh vertica database: %w", err)
    	}
    

    I set the environment variable VERTICA_SQL_GO_LOG_LEVEL=1 to debug the connection:

    Dec  8 22:35:08.397069 DEBUG connection: Established socket connection to vertica-proxy:5435
    Dec  8 22:35:08.397491 DEBUG connection: -> Startup (packet): ProtocolVersion:00030009, DriverName='vertica-sql-go', DriverVersion='1.2.0', UserName='*******', Database='DWH', SessionID='vertica-sql-go-1.2.0-7362-1638992108', ClientPID=7362
    Dec  8 22:35:08.444401 DEBUG connection: <- Authentication: 3, extraAuthData 0 byte(s)
    Dec  8 22:35:08.444528 DEBUG connection: -> Password: *********
    Dec  8 22:35:08.507618 DEBUG connection: <- Authentication: 0, extraAuthData 0 byte(s)
    Dec  8 22:35:08.523473 DEBUG connection: <- ParameterStatus: client_locale='en_US@collation=binary'
    Dec  8 22:35:08.523540 DEBUG connection: <- ParameterStatus: client_label='vertica-sql-go-1.2.0-7362-1638992108'
    Dec  8 22:35:08.523576 DEBUG connection: <- ParameterStatus: server_version='v9.2.1-5'
    Dec  8 22:35:08.523618 DEBUG connection: <- ParameterStatus: long_string_types='on'
    Dec  8 22:35:08.523648 DEBUG connection: <- ParameterStatus: protocol_version='196616'
    Dec  8 22:35:08.523676 DEBUG connection: <- ParameterStatus: standard_conforming_strings='on'
    Dec  8 22:35:08.523710 DEBUG connection: <- KeyData: BackendPID=15975925, CancelKey=320BD015'
    Dec  8 22:35:08.523746 DEBUG connection: <- Notice: (7) notice(s)
    Dec  8 22:35:08.523779 DEBUG connection: <- ReadyForQuery: TransactionState='I'
    Dec  8 22:35:08.523842 DEBUG stmt: stmt.QueryContextRaw(): select now()::timestamptz
    Dec  8 22:35:08.523986 DEBUG connection: -> Query: Query='select now()::timestamptz'
    Dec  8 22:35:08.573319 DEBUG connection: <- RowDesc: 1 column(s)
    Dec  8 22:35:08.573379 DEBUG connection: <- Cmd Completed: 
    Dec  8 22:35:08.573404 DEBUG connection: <- ReadyForQuery: TransactionState='T'
    Dec  8 22:35:08.573424 DEBUG connection: Setting server timezone offset to +03
    Dec  8 22:35:08.573524 DEBUG connection: -> Parse: PreparedName='S736216389921082019727887', Command='select 1 as test', NumArgs=0
    Dec  8 22:35:08.573587 DEBUG connection: -> Describe: TargetType=S, TargetName='S736216389921082019727887'
    Dec  8 22:35:08.573614 DEBUG connection: -> Flush
    Dec  8 22:35:08.622399 DEBUG connection: <- ParseComplete
    Dec  8 22:35:08.622474 DEBUG connection: <- ParameterDesc: 0 parameter(s) described: []
    Dec  8 22:35:08.622516 DEBUG connection: <- RowDesc: 1 column(s)
    Dec  8 22:35:08.622623 DEBUG connection: -> Close: TargetType=S, TargetName='S736216389921082019727887'
    Dec  8 22:35:08.622677 DEBUG connection: -> Flush
    Dec  8 22:35:08.622724 DEBUG connection: <- Cmd Description: tag=SELECT, hasRewrite=false, rewrite=''
    

    So, that's it. After that, nothing else happens.

  • no RowsAffected available after DDL with COPY DML

    I am getting 'no RowsAffected available' after a DDL statement.

    Also, 'no RowsAffected available' is very annoying for a simple Exec command; no other driver behaves that way, including the Go ODBC driver. Imagine we want to run a SQL script: now we need to add extra logic to tell DML from DDL to decide whether to call RowsAffected. Very annoying.

  • Need to add connection args to support result caching

    We've added several enhancements to support result spooling to disk when results cannot fit into memory. Would like to add a few connection string arguments to globally set them for a connection.
