Gorgonia is a library that helps facilitate machine learning in Go.

Gorgonia is a library that helps facilitate machine learning in Go. Write and evaluate mathematical equations involving multidimensional arrays easily. If this sounds like Theano or TensorFlow, it's because the idea is quite similar. Specifically, the library is pretty low-level, like Theano, but has higher goals like TensorFlow.

Gorgonia:

  • Can perform automatic differentiation
  • Can perform symbolic differentiation
  • Can perform gradient descent optimizations
  • Can perform numerical stabilization
  • Provides a number of convenience functions to help create neural networks
  • Is fairly quick (comparable to Theano and Tensorflow's speed)
  • Supports CUDA/GPGPU computation (OpenCL not yet supported, send a pull request)
  • Will support distributed computing

Goals

The primary goal for Gorgonia is to be a highly performant machine learning/graph computation-based library that can scale across multiple machines. It should bring the appeal of Go (simple compilation and deployment process) to the ML world. It's a long way from that goal at the moment; however, the first baby steps are already there.

The secondary goal for Gorgonia is to provide a platform for exploring non-standard deep-learning and neural network related ideas. This includes things like neo-Hebbian learning, corner-cutting algorithms, evolutionary algorithms and the like.

Why Use Gorgonia?

The main reason to use Gorgonia is developer comfort. If you're using a Go stack extensively, you now have the ability to create production-ready machine learning systems in an environment that you are already familiar and comfortable with.

ML/AI at large is usually split into two stages: the experimental stage, where one builds various models, tests and retests; and the deployment stage, where a model, after being tested and played with, is deployed. This necessitates different roles, like data scientist and data engineer.

Typically the two phases have different tools: Python (PyTorch, etc.) is commonly used for the experimental stage, and then the model is rewritten in some more performant language like C++ (using dlib, mlpack, etc.). Of course, nowadays the gap is closing and people frequently share tools between the two phases. TensorFlow is one such tool that bridges the gap.

Gorgonia aims to do the same, but for the Go environment. Gorgonia is currently fairly performant - its speeds are comparable to PyTorch's and TensorFlow's CPU implementations. GPU implementations are a bit finicky to compare due to the heavy cgo tax, but rest assured that this is an area of active improvement.

Getting started

Installation

The package is go-gettable: go get -u gorgonia.org/gorgonia.

Gorgonia is compatible with Go modules.
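
As a quick sanity check that the installation worked, a minimal program along these lines (a hypothetical file, not one of the official examples) should compile and run:

package main

import (
	"fmt"

	"gorgonia.org/gorgonia"
)

func main() {
	// Build a graph with a single scalar node, just to prove that the
	// package is installed and compiles.
	g := gorgonia.NewGraph()
	x := gorgonia.NewScalar(g, gorgonia.Float64, gorgonia.WithName("x"))
	_ = g
	fmt.Println(x.Name()) // prints: x
}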

Documentation

Up-to-date documentation, references and tutorials are present on the official Gorgonia website at https://gorgonia.org.

Keeping Updated

The Gorgonia project has a Slack channel on the Gophers Slack, as well as a Twitter account. Official updates and announcements will be posted to those two channels.

Usage

Gorgonia works by creating a computation graph, and then executing it. Think of it as a programming language, but one limited to mathematical functions, and with no branching capability (no if/then or loops). In fact, this is the dominant paradigm that the user should get used to thinking in. The computation graph is an AST.

Microsoft's CNTK, with its BrainScript, is perhaps the best example of the idea that building a computation graph and running it are different things, and that the user should be in different modes of thought when doing each.

Whilst Gorgonia's implementation doesn't enforce the separation of thought as far as CNTK's BrainScript does, the syntax does help a little bit.

Here's an example - say you want to define a math expression z = x + y. Here's how you'd do it:

package gorgonia_test

import (
	"fmt"
	"log"

	. "gorgonia.org/gorgonia"
)

// Basic example of representing mathematical equations as graphs.
//
// In this example, we want to represent the following equation
//		z = x + y
func Example_basic() {
	g := NewGraph()

	var x, y, z *Node
	var err error

	// define the expression
	x = NewScalar(g, Float64, WithName("x"))
	y = NewScalar(g, Float64, WithName("y"))
	if z, err = Add(x, y); err != nil {
		log.Fatal(err)
	}

	// create a VM to run the program on
	machine := NewTapeMachine(g)
	defer machine.Close()

	// set initial values then run
	Let(x, 2.0)
	Let(y, 2.5)
	if err = machine.RunAll(); err != nil {
		log.Fatal(err)
	}

	fmt.Printf("%v", z.Value())
	// Output: 4.5
}

You might note that it's a little more verbose than other packages of a similar nature. For example, instead of compiling to a callable function, Gorgonia specifically compiles into a program which requires a *TapeMachine to run. It also requires a manual Let(...) call.

The author would like to contend that this is a Good Thing - it shifts one's thinking towards machine-based thinking. It helps a lot in figuring out where things might go wrong.

Additionally, there is no support for branching - that is to say, there are no conditionals (if/else) or loops. The aim is not to build a Turing-complete computer.


More examples are present in the examples subfolder of the project, and step-by-step tutorials are present on the main website.

Using CUDA

Gorgonia comes with CUDA support out of the box. Please see the reference documentation about how CUDA works on the gorgonia.org website, or jump to the tutorial.
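
To give a feel for what this looks like, here is a minimal, illustrative sketch (not the official CUDA example): the graph-building code is unchanged from the basic example above, and CUDA is selected at compile time via the cuda build tag (the same -tags='cuda' flag that appears in the issue logs further down this page).

// +build cuda

package main

import (
	"fmt"
	"log"

	G "gorgonia.org/gorgonia"
)

func main() {
	g := G.NewGraph()
	x := G.NewMatrix(g, G.Float64, G.WithShape(2, 2), G.WithName("x"), G.WithInit(G.GlorotU(1)))
	y := G.NewMatrix(g, G.Float64, G.WithShape(2, 2), G.WithName("y"), G.WithInit(G.GlorotU(1)))
	z := G.Must(G.Mul(x, y))

	// When built with `go build -tags='cuda'`, the tape machine dispatches
	// supported operations to the GPU; without the tag, the same code runs
	// on the CPU.
	vm := G.NewTapeMachine(g)
	defer vm.Close()
	if err := vm.RunAll(); err != nil {
		log.Fatal(err)
	}
	fmt.Println(z.Value())
}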

About Gorgonia's development process

Versioning

We use semver 2.0.0 for our versioning. Before 1.0, Gorgonia's APIs are expected to change quite a bit. API is defined by the exported functions, variables and methods. For the developers' sanity, there are minor differences to semver that we will apply prior to version 1.0. They are enumerated below:

  • The MINOR number will be incremented every time there is a deleterious break in API. This means any deletion, or any change in function signature or interface methods will lead to a change in MINOR number.
  • Additive changes will NOT change the MINOR version number prior to version 1.0. This means that if new functionality were added that does not break the way you use Gorgonia, there will not be an increment in the MINOR version. There will be an increment in the PATCH version.
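To illustrate the two rules above with hypothetical version numbers: deleting an exported function or changing its signature at v0.9.4 would produce v0.10.0, whereas adding a new exported function that breaks nothing would only produce v0.9.5.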

API Stability

Gorgonia's API is as of right now, not considered stable. It will be stable from version 1.0 forwards.

Go Version Support

Gorgonia supports 2 versions below the master branch of Go. This means Gorgonia will support the current released version of Go, and up to 4 previous versions - provided nothing breaks. Where possible, a shim will be provided (for things like the new sort APIs or math/bits, which came out in Go 1.9).

The current version of Go is 1.13.1. The earliest version Gorgonia supports is Go 1.11.x, but Gonum supports only 1.12+. Therefore, the minimum Go version required to run the master branch is Go 1.12 or newer.

Hardware and OS supported

Gorgonia runs on:

  • linux/AMD64
  • linux/ARM7
  • linux/ARM64
  • win32/AMD64
  • darwin/AMD64
  • freeBSD/AMD64

If you have tested Gorgonia on another platform, please update this list.

Hardware acceleration

Gorgonia uses hand-written assembly instructions to accelerate some mathematical operations. Unfortunately, only amd64 is supported.

Contributing

Since you are most probably reading this on GitHub, GitHub will form the major part of the workflow for contributing to this package.

See also: CONTRIBUTING.md

Contributors and Significant Contributors

All contributions are welcome. However, there is a new class of contributor, called Significant Contributors.

A Significant Contributor is one who has shown deep understanding of how the library works and/or its environs. Here are examples of what constitutes a Significant Contribution:

  • Wrote significant amounts of documentation pertaining to why/the mechanics of particular functions/methods and how the different parts affect one another
  • Wrote code, and tests around the more intricately connected parts of Gorgonia
  • Wrote code and tests, and have at least 5 pull requests accepted
  • Provided expert analysis on parts of the package (for example, you may be a floating point operations expert who optimized one function)
  • Answered at least 10 support questions.

The Significant Contributors list will be updated once a month (if anyone even uses Gorgonia, that is).

How To Get Support

The best way to get support right now is to open a ticket on GitHub.

Frequently Asked Questions

Why are there seemingly random runtime.GC() calls in the tests?

The answer to this is simple - the design of the package uses CUDA in a particular way: specifically, a CUDA device and context is tied to a VM, instead of to the package as a whole. This means that for every VM created, a different CUDA context is created per device. This way all the operations will play nicely with other applications that may be using CUDA (this needs to be stress-tested, however).

The CUDA contexts are only destroyed when the VM gets garbage collected (with the help of a finalizer function). In the tests, about 100 VMs get created, and garbage collection for the most part can be considered random. This leads to cases where the GPU runs out of memory as there are too many contexts being used.

Therefore, at the end of any test that may use the GPU, a runtime.GC() call is made to force garbage collection, freeing GPU memory.
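
As an illustration, a GPU-touching test in this style typically ends along the following lines; the test name and body are hypothetical, and the usual testing and runtime imports are assumed:

func TestSomeCUDAOp(t *testing.T) {
	g := NewGraph()
	x := NewScalar(g, Float64, WithName("x"))
	y := NewScalar(g, Float64, WithName("y"))
	Must(Add(x, y))

	m := NewTapeMachine(g)
	defer m.Close()
	Let(x, 1.0)
	Let(y, 2.0)
	if err := m.RunAll(); err != nil {
		t.Fatal(err)
	}

	// Force finalizers of any dead VMs to run so that their CUDA contexts
	// (and the GPU memory they hold) are released before the next test.
	runtime.GC()
}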

In production, one is unlikely to start that many VMs, so it's not really a problem. If it does become one, open a ticket on GitHub, and we'll look into adding a Finish() method for the VMs.

Licence

Gorgonia is licenced under a variant of Apache 2.0. It's for all intents and purposes the same as the Apache 2.0 licence, with the exception of not being able to commercially profit directly from the package unless you're a Significant Contributor (for example, by providing commercial support for the package). It's perfectly fine to profit directly from a derivative of Gorgonia (for example, if you use Gorgonia as a library in your product).

Everyone is still allowed to use Gorgonia for commercial purposes (example: using it in a software for your business).

Dependencies

There are very few dependencies that Gorgonia uses - and they're all pretty stable, so as of now there isn't a need for vendoring tools. This is the list of external packages that Gorgonia calls, ranked in order of how much this package relies on them (subpackages are omitted):

| Package | Used For | Vitality | Notes | Licence |
|---|---|---|---|---|
| gonum/graph | Sorting *ExprGraph | Vital. Removal means Gorgonia will not work | Development of Gorgonia is committed to keeping up with the most updated version | gonum license (MIT/BSD-like) |
| gonum/blas | Tensor subpackage linear algebra operations | Vital. Removal means Gorgonia will not work | Development of Gorgonia is committed to keeping up with the most updated version | gonum license (MIT/BSD-like) |
| cu | CUDA drivers | Needed for CUDA operations | Same maintainer as Gorgonia | MIT/BSD-like |
| math32 | float32 operations | Can be replaced by float32(math.XXX(float64(x))) | Same maintainer as Gorgonia, same API as the built-in math package | MIT/BSD-like |
| hm | Type system for Gorgonia | Gorgonia's graphs are pretty tightly coupled with the type system | Same maintainer as Gorgonia | MIT/BSD-like |
| vecf64 | Optimized []float64 operations | Can be generated in the tensor/genlib package. However, plenty of optimizations have been made/will be made | Same maintainer as Gorgonia | MIT/BSD-like |
| vecf32 | Optimized []float32 operations | Can be generated in the tensor/genlib package. However, plenty of optimizations have been made/will be made | Same maintainer as Gorgonia | MIT/BSD-like |
| set | Various set operations | Can be easily replaced | Stable API for the past 1 year | set licence (MIT/BSD-like) |
| gographviz | Used for printing graphs | Graph printing is only vital to debugging. Gorgonia can survive without, but with a major (but arguably nonvital) feature loss | Last update 12th April 2017 | gographviz licence (Apache 2.0) |
| rng | Used to implement helper functions to generate initial weights | Can be replaced fairly easily. Gorgonia can do without the convenience functions too | | rng licence (Apache 2.0) |
| errors | Error wrapping | Gorgonia won't die without it. In fact Gorgonia has also used goerrors/errors in the past | Stable API for the past 6 months | errors licence (MIT/BSD-like) |
| gonum/mat | Compatibility between Tensor and Gonum's Matrix | | Development of Gorgonia is committed to keeping up with the most updated version | gonum license (MIT/BSD-like) |
| testify/assert | Testing | Can do without but will be a massive pain in the ass to test | | testify licence (MIT/BSD-like) |

Various Other Copyright Notices

These are the packages and libraries which inspired Gorgonia, and from which code was adapted, in the process of writing it (the Go packages that were used were already declared above):

| Source | How it's Used | Licence |
|---|---|---|
| Numpy | Inspired large portions. Directly adapted algorithms for a few methods (explicitly labelled in the docs) | MIT/BSD-like. Numpy Licence |
| Theano | Inspired large portions. (Unsure: number of directly adapted algorithms) | MIT/BSD-like. Theano's licence |
| Caffe | im2col and col2im directly taken from Caffe. Convolution algorithms inspired by the original Caffe methods | Caffe Licence |
Comments
  • Masked tensor

    As promised, I set about trying to implement basic masked array functionality.

    To begin with, created a new iterator type 'MultIterator', which is designed to iterate over multiple arrays simultaneously, with the same syntax as 'FlatIterator', to allow switching between them (it uses an array of FlatIterators internally). For single non-masked arrays, MultIterator is about 20% slower than FlatIterator. However, it only calculates offsets for unique shapes/stride combinations, and so when indexing arrays of same shape, a single FlatIterator is shared between them all, allowing significant compute savings.

    func BenchmarkFlatIteratorMulti6(b *testing.B) {
    	ap := make([]*AP, 6)
    	for j := 0; j < 6; j++ {
    		ap[j] = NewAP(Shape{30, 60, 10}, []int{1000000, 15000, 50})
    	}
    	it := NewMultIterator(ap...)
    	for n := 0; n < b.N; n++ {
    		for _, err := it.Next(); err == nil; _, err = it.Next() {
    		}
    		it.Reset()
    	}
    	DestroyMultIterator(it)
    }
    

    You could create a MultIterator from tensors directly:

    T1 := New(Of(Float64), WithShape(3, 20), WithMaskStrides([]bool{true, true}))
    T2 := New(Of(Float64), WithShape(3, 20), WithMaskStrides([]int{20,1}))
    T3 := New(Of(Float64), FromScalar(7))
    it := MultIteratorFromDense(T1, T2, T3)
    

    It also means that you don't have to worry when creating functions of multiple arguments in which the same array could be repeated as different arguments - in that case naive use of FlatIterator could cause the array to be iterated multiple times in a single loop iteration; with MultIterator that cannot happen.

    As for the mask, for the time being I opted to simply add a []bool to the Dense struct, and an additional stride int to AP. MultIterator supports masked operations, such as NextValid() or NextInvalid(), in addition to Next(). Examples of usage can be seen in dense_maskmethods_test.go and iterator_test.go.

    func TestMaskedIteration(t *testing.T) {
    	assert := assert.New(t)
    	T := New(Of(Float64), WithShape(2, 3, 4, 5))
    	assert.True(len(T.mask) < 1)
    	dataF64 := T.Data().([]float64)
    	for i := range dataF64 {
    		dataF64[i] = float64(i)
    	}
    	for i := 0; i < 5; i++ {
    		T.MaskedEqual(float64(i) * 10.0)
    	}
    
    	it := MultIteratorFromDense(T)
    
    	j := 0
    	for _, err := it.Next(); err == nil; _, err = it.Next() {
    		j++
    	}
    	it.Reset()
    	assert.True(j == 120)
    
    	j = 0
    	for _, err := it.NextValid(); err == nil; _, err = it.NextValid() {
    		j++
    	}
    	it.Reset()
    	assert.True(j == 115)
    
    	j = 0
    	for _, err := it.NextInvalid(); err == nil; _, err = it.NextInvalid() {
    		j++
    	}
    	it.Reset()
    	assert.True(j == 5)
    }
    

    I did not want to spend too much time going further before agreeing on the basics. While I show some basic mask setting operations in dense_maskmethods.go, I only do this for float64 tensors as a demonstration - the functionality would have to be implemented in genlib at some point, which would take me some time to do properly as this is my first time using text/template.

    I also did not optimize masked iteration; there are smarter ways to find the next valid/invalid element, e.g. by processing >= 8 bytes at once, but I figure that it's best to leave that until the structure is agreed upon.

  • [WIP] work on getting the gorgonia to use the errors package

    This is the work in progress for the integration of the errors package into gorgonia, which addresses #46.

    As this work touches a large surface area, I wanted to open this branch early to mark the progress, so that I can get some feedback (if any) about the approach.

    Once all the work is done, I will be squashing all the commits into one with a meaningful commit message to keep everything in master clean, so please note that I may be doing a force push at some point.
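
    For context, the change is largely mechanical: bare error returns get wrapped with extra context. Below is a hypothetical before/after sketch, assuming the errors package in question is github.com/pkg/errors (as listed in the dependency table earlier on this page); doWork and the message text are made up and not actual Gorgonia code.

    package main
    
    import (
    	"fmt"
    
    	"github.com/pkg/errors"
    )
    
    // doWork stands in for any internal function that can fail.
    func doWork() error {
    	return errors.New("shape mismatch")
    }
    
    func main() {
    	if err := doWork(); err != nil {
    		// Instead of `return err`, wrap the error so the failure site and
    		// extra context (plus a stack trace) are preserved.
    		wrapped := errors.Wrapf(err, "while type-checking node %q", "x")
    		fmt.Println(wrapped) // while type-checking node "x": shape mismatch
    	}
    }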

  • [fix] Using the iterator of the new Gonum API

    This change allows the v0.9.2-working2 branch to compile and work with the latest evolution of the Gonum API. The implementation relies on the OrderedNode implementation of the iterator package.

  • The Broadcast function is exported but not usable outside of the package

    I need to implement an "add" operator for two tensors with a broadcasting mechanism as described here.

    The Broadcast function seems to be a perfect fit for this. Moreover, the test is partially implementing what I am trying to do. But neither the ʘBinaryOperatorType nor any other binOp implementations are exported.

    Therefore the Broadcast function can only be used within the Gorgonia package.

    Maybe we should make it private to avoid confusion in the documentation and expose "Broadcasted version" of some operators instead? What do you think?

  • Iterator.Chan() considered harmful

    sketch space for describing how to create a chan int of negative length, and how to reproduce it

    Background/Context of the Issue

    Gorgonia is a library for representing and executing mathematical equations, and performing automatic differentiation. It's like TensorFlow and PyTorch for Go. It's currently undergoing a major internal refactor (that will not affect the public APIs much).

    I was improving the backend tensor package by splitting up the data structure into a data structure plus a pluggable execution engine, instead of having built-in methods (see also #128). The reason is to make it easier to swap out execution backends (CPU, GPU... even a networked CPU: an actual experiment I did was to run a small neural network on a Raspberry Pi with all computation offloaded to my workstation, and vice versa, which turned out to be a supremely bad idea).

    Another reason was due to the fact that I wanted to do some experiments at my work which use algorithms that involve sparse tensors (see also #127) for matrix factorization tasks.

    Lastly, I wanted to clean up the generics support of the tensor package. The current master branch of the tensor package had a lot of code to support arbitrary tensor types. With the split of execution engines and data structure, more of this support could be offloaded to the execution engine instead. This package provides a default execution engine (type StdEng struct{}: https://github.com/chewxy/gorgonia/blob/debugrace/tensor/defaultengine.go), which could be extended (example: https://github.com/chewxy/gorgonia/blob/debugrace/tensor/example_extension_test.go) . The idea was to have an internal/execution package which held all the code for the default execution engine.

    Data Structures

    The most fundamental data structure is storage.Header, which is an analogue of a Go slice: it's a three-word structure. It was chosen because it is a ridiculously simple structure that can store Go-allocated memory, C-allocated memory and device-allocated memory (like CUDA).

    On top of storage.Header is tensor.array. It's essentially a storage.Header with an additional field for the type. The v field will eventually be phased out once the refactor is complete.

    On top of tensor.array are the various implementations of tensor.Tensor. Chief amongst these is the tensor.Dense struct. Essentially it's a tensor.array coupled with some access patterns and meta information.

    Access to the data in the tensor.Tensor can be achieved by use of Iterators. The Iterator basically assumes that the data is held in a flat slice, and returns the next index on the slice. There are auxiliary methods like NextValidity to handle special case tensors like masked tensors, where some elements are masked from operations.
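
    To make the layering concrete, here is a rough sketch of the hierarchy described above; the field names are illustrative only and differ from the actual definitions in the package.

    // Illustrative sketch of the layering, not the real definitions.
    package sketch
    
    import "unsafe"
    
    // Header is the slice-like, three-word structure that can point at Go-,
    // C- or device-allocated memory.
    type Header struct {
    	Ptr unsafe.Pointer // start of the backing memory
    	L   int            // length
    	C   int            // capacity
    }
    
    // array is a Header plus the element type.
    type array struct {
    	Header
    	t Dtype
    }
    
    // Dense couples an array with access-pattern metadata (shape, strides)
    // and other meta information.
    type Dense struct {
    	array
    	shape   []int
    	strides []int
    }
    
    // Dtype identifies the element type (e.g. float64).
    type Dtype byte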

    The bug happens in the Chan method of the FlatIterator type.

    How to reproduce

    The branch where the bug is known to exist is the debugrace branch, which can be found here: 1dee6d2 .

    1. git checkout debugrace
    2. Run tests with various GOMAXPROCS like so: GOMAXPROCS=1 go test -run=. . Try it with various GOMAXPROCS, one of them is bound to trigger an issue.
    3. The test won't panic, because I have added a recover statement here https://github.com/chewxy/gorgonia/blob/debugrace/tensor/dense_viewstack_specializations.go#L636. Removing the deferred function causes an index-out-of-bounds panic.
    4. All the tests must be run to trigger the issue.
    5. The issue is found in the test for the Stack function: https://github.com/chewxy/gorgonia/blob/debugrace/tensor/dense_matop_test.go#L768 . If only the stack test is run (for example GOMAXPROCS=1 go test -run=Stack), it is unlikely the problem will show up (I wrote a tiny python script to run it as many times as possible with many GOMAXPROCS configurations and none of them caused an error).

    You should get an index-out-of-bounds panic similar to the one in the screenshot attached to the original issue (image not reproduced here).

    Environments

    I've managed to reproduce the issue on OS X, with Go 1.8 and on Ubuntu 16.10 with Go 1.8.2 and Go tip (whatever gvm thinks is Go tip). I've no access to Go on a windows box so I can't test it on Windows.

    Magic and Unsafe Use

    As part of the refactoring, there are a few magic bits being used. Here I attempt to list them all (may not be exhaustive):

    • The Go slice structure is re-implemented in https://github.com/chewxy/gorgonia/blob/debugrace/tensor/internal/storage/header.go. Note that here an unsafe.Pointer is used instead of a uintptr as in the standard reflect.SliceHeader. This is because I want Go to keep a reference to the actual slice. This may affect the runtime and memory allocation; I'm not too sure.
    • //go:linkname is used in some internal packages (specific example here: https://github.com/chewxy/gorgonia/blob/debugrace/tensor/internal/execution/generic_arith_vv.go). It's basically just a rename of functions in github.com/chewxy/vecf32 and github.com/chewxy/vecf64. Those packages contain optional AVX/SSE related vector operations like arithmetics. However, those have to be manually invoked via a build tag. By default it uses go algorithms, not SSE/AVX operations.
    • //go:linkname is used in unsafe.go: https://github.com/chewxy/gorgonia/blob/debugrace/tensor/unsafe.go#L105. However, it should be noted that memmove is never called, as after some tests I decided it would be too unsafe to use (this also explains why there are comments that say TODO: implement memmove).
    • There are several naughty pointer arithmetics at play:

    What I suspect

    I suspect that there may be some naughty things happening in memory (because it only happens when all the tests are run). The problem is I don't know exactly where to start looking.

  • Adopt "dep" as the official installation mechanism

    With the possible integration of the dep package manager into the Go toolchain, it may be worth adopting it as the official installation method for Gorgonia. There are a good number of packages to install to fully utilize Gorgonia, which may scare people new to Go and/or programming in general.

    • [x] provide a Gopkg.toml with explicit versions of the libraries whenever possible
    • [x] provide a Gopkg.lock
    • [x] Add the vendor directory to the .gitignore file so that it is not checked in.
    • [ ] Provide an installation section in the readme showing how to use dep to install Gorgonia and how to test if your installation was successful.
  • 1.15.3 "Import Cycle Not Allowed" on convnet example w/ Cuda

    Hello,

    I get the following error when trying to run the convnet cuda example.

    package command-line-arguments
    	imports gorgonia.org/gorgonia
    	imports gorgonia.org/gorgonia: import cycle not allowed
    

    My file structure is as below;

    ├── project
    │ ├── convnet.go
    │ └── cudamodules.go
    

    Note that if I roll back Go to 1.13.9 (Which is fine so by no means urgent), this error does not present itself.

    However... Without piggybacking too much of this issue for another, when I run the following command in the project directory, on version 1.13.9, I get the following output which indefinitely hangs.

    >>>/usr/local/go-1.13/bin/go run -tags='cuda' .
    2020/10/21 10:03:07 Using CUDA build
    2020/10/21 10:03:07 gorgonia. true
    2020/10/21 10:03:08 p0 (100, 32, 14, 14)
    2020/10/21 10:03:08 p2 shape (100, 128, 3, 3)
    2020/10/21 10:03:08 r2 shape (100, 1152)
    2020/10/21 10:03:08 l2 shape (100, 1152) | (1152, 625)
    2020/10/21 10:03:08 l3 name Dropout 0.55(%15) :: Matrix float64 | a3 name ReLU(%14) :: Matrix float64
    2020/10/21 10:03:08 DONE
    2020/10/21 10:03:08 m.out.Shape (100, 10), y.Shape (100, 10)
    2020/10/21 10:03:08 Batches 600
    Epoch 0 0 / 600 [------------------------------------------------------]   0.00%
    

    Running nvidia-smi I can see it has allocated memory to it, but seems to just hang

    0      6477      C   /tmp/go-build071137485/b001/exe/goHide       470MiB
    

    It could be that I'm doing something wrong, but I built the cudamodules using the cudagen tool. The only strange thing is that I have to remove //+build cuda from the top of convnet.go prior to running the cudagen tool, otherwise I get the following:

    2020/10/21 10:07:50 failed to get name of package in working directory. Error: exit status 1. go list error: package .: build constraints exclude all Go files in /path/to/project.

    I then add it back in afterwards and it runs, but hangs as mentioned.

    Apologies if the latter is just me being a Go novice!

  • Test broadcast add

    See #301

    Note that this branch currently fails, but I believe that it is due to an implementation error in the broadcasting. Specifically, I was unable to perform the following operation:

    given a tensor a with shape (2,) and a tensor b with shape (2,2,2), broadcast-add them, such that the result c has shape (2,2,2). This stems from the fact that when I try to broadcast a into shape (2,2,2) (using left=[]byte{1, 2}) in the broadcastAdd, it panics.

    This situation is demonstrated on the second commit of this MR.

    This behavior is not consistent with when we do the same operation where b has shape (2,2) and a is broadcasted using left=[]byte{1}, which is valid (as per test named "vec-mat").

    Note that I am assuming that we are following the same broadcasting rules as numpy.
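
    For reference, numpy's rule aligns shapes from the trailing axis: (2,) is treated as (1, 1, 2), which is compatible with (2, 2, 2) and is stretched along the two leading axes, so the broadcast-add of a (2,) tensor and a (2, 2, 2) tensor should indeed produce a (2, 2, 2) result, exactly as described above.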

  • go get -u gorgonia assembly failed error

    Hey guys, whenever I try to use go get gorgonia, I keep hitting the following error:

    gorgonia.org/gorgonia

    asm: asmins: illegal 64: 00000 (C:\Users\manikandank\go\src\gorgonia.org\gorgonia\mathutils.s:7) MOVQ a+4(FP), SI
    asm: asmins: illegal in mode 32: 00000 (C:\Users\manikandank\go\src\gorgonia.org\gorgonia\mathutils.s:7) MOVQ a+4(FP), SI (24 18)
    asm: asmins: illegal 64: 00005 (C:\Users\manikandank\go\src\gorgonia.org\gorgonia\mathutils.s:8) MOVQ b+12(FP), CX
    asm: asmins: illegal in mode 32: 00005 (C:\Users\manikandank\go\src\gorgonia.org\gorgonia\mathutils.s:8) MOVQ b+12(FP), CX (24 15)
    asm: asmins: illegal 64: 00010 (C:\Users\manikandank\go\src\gorgonia.org\gorgonia\mathutils.s:9) MOVQ SI, AX
    asm: asmins: illegal in mode 32: 00010 (C:\Users\manikandank\go\src\gorgonia.org\gorgonia\mathutils.s:9) MOVQ SI, AX (18 14)
    asm: asmins: illegal 64: 00013 (C:\Users\manikandank\go\src\gorgonia.org\gorgonia\mathutils.s:10) CMPQ CX, $-1
    asm: asmins: illegal in mode 32: 00013 (C:\Users\manikandank\go\src\gorgonia.org\gorgonia\mathutils.s:10) CMPQ CX, $-1 (15 5)
    asm: asmins: illegal 64: 00019 (C:\Users\manikandank\go\src\gorgonia.org\gorgonia\mathutils.s:13) CQO
    asm: asmins: illegal in mode 32: 00019 (C:\Users\manikandank\go\src\gorgonia.org\gorgonia\mathutils.s:13) CQO (1 1)
    asm: asmins: illegal 64: 00021 (C:\Users\manikandank\go\src\gorgonia.org\gorgonia\mathutils.s:14) IDIVQ CX
    asm: asmins: illegal in mode 32: 00021 (C:\Users\manikandank\go\src\gorgonia.org\gorgonia\mathutils.s:14) IDIVQ CX (15 1)
    asm: asmins: illegal 64: 00024 (C:\Users\manikandank\go\src\gorgonia.org\gorgonia\mathutils.s:15) MOVQ AX, q+20(FP)
    asm: asmins: illegal in mode 32: 00024 (C:\Users\manikandank\go\src\gorgonia.org\gorgonia\mathutils.s:15) MOVQ AX, q+20(FP) (14 24)
    asm: asmins: illegal 64: 00029 (C:\Users\manikandank\go\src\gorgonia.org\gorgonia\mathutils.s:16) MOVQ DX, r+28(FP)
    asm: asmins: illegal in mode 32: 00029 (C:\Users\manikandank\go\src\gorgonia.org\gorgonia\mathutils.s:16) MOVQ DX, r+28(FP) (21 24)
    asm: asmins: illegal 64: 00035 (C:\Users\manikandank\go\src\gorgonia.org\gorgonia\mathutils.s:20) NEGQ AX
    asm: asmins: illegal in mode 32: 00035 (C:\Users\manikandank\go\src\gorgonia.org\gorgonia\mathutils.s:20) NEGQ AX (1 14)
    asm: asmins: illegal 64: 00038 (C:\Users\manikandank\go\src\gorgonia.org\gorgonia\mathutils.s:21) MOVQ AX, q+20(FP)
    asm: asmins: illegal in mode 32: 00038 (C:\Users\manikandank\go\src\gorgonia.org\gorgonia\mathutils.s:21) MOVQ AX, q+20(FP) (14 24)
    asm: asmins: illegal 64: 00043 (C:\Users\manikandank\go\src\gorgonia.org\gorgonia\mathutils.s:22) MOVQ $0, r+28(FP)
    asm: asmins: illegal in mode 32: 00043 (C:\Users\manikandank\go\src\gorgonia.org\gorgonia\mathutils.s:22) MOVQ $0, r+28(FP) (2 24)
    asm: assembly failed

  • How to do a prediction after training

    Hi,

    I have tried the example code CONVNET. I am able to run it. After the program finishes the training, is there a way to predict immediately?

    I have tried for a few days without any luck. I could not find any sample code which does prediction after training, or which saves and loads the result after training. Hope you can help me.

    func main() {
    	flag.Parse()
    	parseDtype()
    	rand.Seed(1337)
    
    	// intercept Ctrl+C
    	sigChan := make(chan os.Signal, 1)
    	signal.Notify(sigChan, syscall.SIGINT, syscall.SIGTERM)
    	doneChan := make(chan bool, 1)
    
    	var inputs, targets tensor.Tensor
    	var err error
    
    	go func() {
    		log.Println(http.ListenAndServe("localhost:6060", nil))
    	}()
    
    	trainOn := *dataset
    	if inputs, targets, err = mnist.Load(trainOn, loc, dt); err != nil {
    		log.Fatal(err)
    	}
    
    	// the data is in (numExamples, 784).
    	// In order to use a convnet, we need to massage the data
    	// into this format (batchsize, numberOfChannels, height, width).
    	//
    	// This translates into (numExamples, 1, 28, 28).
    	//
    	// This is because the convolution operators actually understand height and width.
    	//
    	// The 1 indicates that there is only one channel (MNIST data is black and white).
    	numExamples := inputs.Shape()[0]
    	bs := *batchsize
    	// todo - check bs not 0
    
    	if err := inputs.Reshape(numExamples, 1, 28, 28); err != nil {
    		log.Fatal(err)
    	}
    	g := gorgonia.NewGraph()
    	x := gorgonia.NewTensor(g, dt, 4, gorgonia.WithShape(bs, 1, 28, 28), gorgonia.WithName("x"))
    	y := gorgonia.NewMatrix(g, dt, gorgonia.WithShape(bs, 10), gorgonia.WithName("y"))
    	m := newConvNet(g)
    	if err = m.fwd(x); err != nil {
    		log.Fatalf("%+v", err)
    	}
    	losses := gorgonia.Must(gorgonia.HadamardProd(m.out, y))
    	cost := gorgonia.Must(gorgonia.Mean(losses))
    	cost = gorgonia.Must(gorgonia.Neg(cost))
    
    	// we wanna track costs
    	var costVal gorgonia.Value
    	gorgonia.Read(cost, &costVal)
    
    	if _, err = gorgonia.Grad(cost, m.learnables()...); err != nil {
    		log.Fatal(err)
    	}
    
    	// debug
    	// ioutil.WriteFile("fullGraph.dot", []byte(g.ToDot()), 0644)
    	// prog, _, _ := gorgonia.Compile(g)
    	// log.Printf("%v", prog)
    	// logger := log.New(os.Stderr, "", 0)
    	// vm := gorgonia.NewTapeMachine(g, gorgonia.BindDualValues(m.learnables()...), gorgonia.WithLogger(logger), gorgonia.WithWatchlist())
    
    	vm := gorgonia.NewTapeMachine(g, gorgonia.BindDualValues(m.learnables()...))
    	solver := gorgonia.NewRMSPropSolver(gorgonia.WithBatchSize(float64(bs)))
    
    	// pprof
    	// handlePprof(sigChan, doneChan)
    
    	var profiling bool
    	if *cpuprofile != "" {
    		f, err := os.Create(*cpuprofile)
    		if err != nil {
    			log.Fatal(err)
    		}
    		profiling = true
    		pprof.StartCPUProfile(f)
    		defer pprof.StopCPUProfile()
    	}
    	go cleanup(sigChan, doneChan, profiling)
    
    	batches := numExamples / bs
    	log.Printf("Batches %d", batches)
    	bar := pb.New(batches)
    	bar.SetRefreshRate(time.Second)
    	bar.SetMaxWidth(80)
    
    	for i := 0; i < *epochs; i++ {
    		bar.Prefix(fmt.Sprintf("Epoch %d", i))
    		bar.Set(0)
    		bar.Start()
    		for b := 0; b < batches; b++ {
    			start := b * bs
    			end := start + bs
    			if start >= numExamples {
    				break
    			}
    			if end > numExamples {
    				end = numExamples
    			}
    
    			var xVal, yVal tensor.Tensor
    			if xVal, err = inputs.Slice(sli{start, end}); err != nil {
    				log.Fatal("Unable to slice x")
    			}
    
    			if yVal, err = targets.Slice(sli{start, end}); err != nil {
    				log.Fatal("Unable to slice y")
    			}
    			if err = xVal.(*tensor.Dense).Reshape(bs, 1, 28, 28); err != nil {
    				log.Fatalf("Unable to reshape %v", err)
    			}
    
    			gorgonia.Let(x, xVal)
    			gorgonia.Let(y, yVal)
    			if err = vm.RunAll(); err != nil {
    				log.Fatalf("Failed at epoch  %d: %v", i, err)
    			}
    			solver.Step(m.learnables())
    			vm.Reset()
    			bar.Increment()
    		}
    		log.Printf("Epoch %d | cost %v", i, costVal)
    
    	}
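
    A minimal sketch of one way to do this, reusing the graph and VM built above: after the training loop, bind a batch of new data to x, run one more forward pass, and read the output node's value (the same pattern the basic example uses with z.Value()). The names m, x, y, vm and bs come from the code above; testX and yDummy are hypothetical tensors you would prepare yourself.

    	// Hypothetical prediction pass after training (illustrative only).
    	// testX is a (bs, 1, 28, 28) tensor prepared like the training inputs;
    	// yDummy is any (bs, 10) tensor, since y must be bound even though its
    	// value is irrelevant for prediction.
    	gorgonia.Let(x, testX)
    	gorgonia.Let(y, yDummy)
    	if err = vm.RunAll(); err != nil {
    		log.Fatalf("prediction failed: %v", err)
    	}
    	fmt.Printf("predictions: %v\n", m.out.Value())
    	vm.Reset()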
    
  • Export tapeMachine / lispMachine

    I know in vm.go you emphasize that tapeMachine and lispMachine are not exported. But since one is able to get these types anyway through e.g. NewTapeMachine it feels awkward.

    My specific "issue" is that I am playing around with your library (thank you very much btw) and want to do stuff like

    var machine G.TapeMachine
    if option_enabled {
    	machine = G.NewTapeMachine(g, option)
    } else {
    	machine = G.NewTapeMachine(g)
    }

  • Load model from redisai

    @auxten Hello, I have trained CIFAR-10 with a PyTorch LeNet and put the weights into RedisAI. I load the model from RedisAI and write the weights to the Gorgonia nodes. With the forward function I only get 10% accuracy.

  • There is an inexplicable error when running convnet_cuda, and I have no clue to solve it. Can you provide some ideas?

    2022/11/02 16:14:34 Batches 600
    Epoch 0 0 / 600 [------------------------------------------------------] 0.00%
    Exception 0xc0000006 0x0 0xe05ba6400 0x13f4632 PC=0x13f4632

    gorgonia.org/gorgonia.CloneValue({0x17161e8, 0xe05ba6400}) E:/code/selfCode/gorgonia/values_utils.go:94 +0xf2 fp=0xc00052cff8 sp=0xc00052cd78 pc=0x13f4632 gorgonia.org/gorgonia.constantDV({0x17161e8, 0xe05ba6400}) E:/code/selfCode/gorgonia/dual.go:123 +0xd4 fp=0xc00052d0c8 sp=0xc00052cff8 pc=0x13640f4 gorgonia.org/gorgonia.dvUnit({0x17161e8, 0xe05ba6400}) E:/code/selfCode/gorgonia/dual.go:160 +0xe5 fp=0xc00052d150 sp=0xc00052d0c8 pc=0x1364365 gorgonia.org/gorgonia.(*execOp).exec(0xc000328730, 0xc000186000) E:/code/selfCode/gorgonia/vm_tape_cuda.go:164 +0x18d6 fp=0xc00052d9d0 sp=0xc00052d150 pc=0x14021b6 gorgonia.org/gorgonia.(*tapeMachine).runall(0xc000186000, 0xc00031a120, 0xc00031a180) E:/code/selfCode/gorgonia/vm_tape.go:262 +0x28a fp=0xc00052dfa0 sp=0xc00052d9d0 pc=0x13f9b0a gorgonia.org/gorgonia.(*tapeMachine).RunAll.func2() E:/code/selfCode/gorgonia/vm_tape.go:223 +0x47 fp=0xc00052dfe0 sp=0xc00052dfa0 pc=0x13f97e7 runtime.goexit() D:/software/go/src/runtime/asm_amd64.s:1571 +0x1 fp=0xc00052dfe8 sp=0xc00052dfe0 pc=0x610a41 created by gorgonia.org/gorgonia.(*tapeMachine).RunAll E:/code/selfCode/gorgonia/vm_tape.go:223 +0x225

    goroutine 1 [select, locked to thread]: gorgonia.org/gorgonia.(*tapeMachine).RunAll(0xc000186000) E:/code/selfCode/gorgonia/vm_tape.go:225 +0x34d main.main() E:/code/selfCode/gorgonia/examples/convnet_cuda/main.go:296 +0x1d74

    goroutine 20 [syscall]: os/signal.signal_recv() D:/software/go/src/runtime/sigqueue.go:151 +0x2f os/signal.loop() D:/software/go/src/os/signal/signal_unix.go:23 +0x1d created by os/signal.Notify.func1.1 D:/software/go/src/os/signal/signal.go:151 +0x2e

    goroutine 21 [IO wait]: internal/poll.runtime_pollWait(0xc00032a000?, 0x72) D:/software/go/src/runtime/netpoll.go:302 +0x45 internal/poll.(*pollDesc).wait(0xc0003201b8, 0x72, 0x0) D:/software/go/src/internal/poll/fd_poll_runtime.go:83 +0x88 internal/poll.execIO(0xc000320018, 0xc0000955a0) D:/software/go/src/internal/poll/fd_windows.go:175 +0x2d0 internal/poll.(*FD).acceptOne(0xc000320000, 0x304, {0xc00032a000, 0x2, 0x2}, 0xc000320018) D:/software/go/src/internal/poll/fd_windows.go:942 +0xfd internal/poll.(*FD).Accept(0xc000320000, 0xc0000959c8) D:/software/go/src/internal/poll/fd_windows.go:976 +0x43f net.(*netFD).accept(0xc000320000) D:/software/go/src/net/fd_windows.go:139 +0xc5 net.(*TCPListener).accept(0xc000306090) D:/software/go/src/net/tcpsock_posix.go:139 +0x55 net.(*TCPListener).Accept(0xc000306090) D:/software/go/src/net/tcpsock.go:288 +0x67 net/http.(*Server).Serve(0xc000106000, {0x1713638, 0xc000306090}) D:/software/go/src/net/http/server.go:3039 +0x4c8 net/http.(*Server).ListenAndServe(0xc000106000) D:/software/go/src/net/http/server.go:2968 +0x165 net/http.ListenAndServe({0x162f82a, 0xe}, {0x0, 0x0}) D:/software/go/src/net/http/server.go:3222 +0xf6 main.main.func1() E:/code/selfCode/gorgonia/examples/convnet_cuda/main.go:186 +0x2d created by main.main E:/code/selfCode/gorgonia/examples/convnet_cuda/main.go:185 +0x1b4

    goroutine 25 [select, locked to thread]: gorgonia.org/gorgonia/cuda.(*Engine).Run(0xc0000001e0) E:/code/selfCode/gorgonia/cuda/external.go:248 +0x2d1 created by gorgonia.org/gorgonia/cuda.(*Engine).doInit E:/code/selfCode/gorgonia/cuda/external.go:168 +0x128c

    goroutine 26 [chan receive]: gorgonia.org/gorgonia.(*ExternMetadata).collectWork(0xc000186000, 0x0, 0xc00008e9c0) E:/code/selfCode/gorgonia/cuda.go:283 +0x39 created by gorgonia.org/gorgonia.(*ExternMetadata).init E:/code/selfCode/gorgonia/cuda.go:256 +0x678

    goroutine 43 [select]: main.cleanup(0xc0000e0660, 0xc0000dd180, 0x0) E:/code/selfCode/gorgonia/examples/convnet_cuda/main.go:324 +0xb3 created by main.main E:/code/selfCode/gorgonia/examples/convnet_cuda/main.go:258 +0x1334

    goroutine 44 [select]: gopkg.in/cheggaaa/pb%2ev1.(*ProgressBar).refresher(0xc000854000) C:/Users/Administrator/go/pkg/mod/gopkg.in/cheggaaa/[email protected]/pb.go:493 +0xbd created by gopkg.in/cheggaaa/pb%2ev1.(*ProgressBar).Start C:/Users/Administrator/go/pkg/mod/gopkg.in/cheggaaa/[email protected]/pb.go:124 +0x14c rax 0xc0001b4110 rbx 0x26d3eeab3e0 rcx 0xe05ba6400 rdi 0x0 rsi 0x0 rbp 0xc00052cfe8 rsp 0xc00052cd78 r8 0xc0001b4110 r9 0x1 r10 0x0 r11 0x0 r12 0xc00052cdf8 r13 0x0 r14 0xc0003196c0 r15 0x20 rip 0x13f4632 rflags 0x10202 cs 0x33 fs 0x53 gs 0x2b

    Debugger finished with the exit code 0

    That's the error message. There is an inexplicable error when running convnet_cuda, and I have no clue to solve it. Can you provide some ideas?

  • Stacking multiple tensors

    Hi all,

    New to golang and wanted to get a feel for the language. My goal is to read a geospatial raster (using godal) as a multi-dimensional tensor via gorgonia/tensor. I was able to read the raster and convert it into a list of t.Dense, but I'm kinda stuck on how to merge them together. Any suggestions?

    package main
    
    import (
    	"fmt"
    
    	"github.com/airbusgeo/godal"
    	t "gorgonia.org/tensor"
    )
    
    func main() {
    
    	godal.RegisterAll()
    	hDataset, err := godal.Open("data/LT5_19980329_sub.tif")
    	if err != nil {
    		panic(err)
    	}
    	structure := hDataset.Structure()
    	fmt.Printf("Size is %dx%dx%d\n", structure.SizeX, structure.SizeY, structure.NBands)
    
    	bands := hDataset.Bands()
    	count := len(bands)
    	fmt.Printf("Number of Bands: %d\n", count)
    
    	bandArrays := make([]*t.Dense, 0)
    
    	for i := range bands {
    		band := bands[i]
    		buf := make([]int16, structure.SizeX*structure.SizeY)
    		band.Read(0, 0, buf, structure.SizeY, structure.SizeX)
    		bandArray := t.New(t.WithShape(structure.SizeY, structure.SizeX, 1), t.WithBacking(buf))
    		bandArrays = append(bandArrays, bandArray)
    	}
    
    	fmt.Println("DONE!")
    
    }
    

    Would also appreciate any tips to improve this snippet if it is suboptimal.
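
    One possible direction, sketched under the assumption that *t.Dense has a Stack method of roughly the form Stack(axis int, others ...*Dense) (*Dense, error); stacking is referenced elsewhere in this document, but the exact signature here is an assumption, not a statement of the actual API:

    	// Hypothetical: stack the per-band tensors along a new leading axis.
    	stacked, err := bandArrays[0].Stack(0, bandArrays[1:]...)
    	if err != nil {
    		panic(err)
    	}
    	fmt.Println(stacked.Shape()) // expected to be (NBands, SizeY, SizeX, 1)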

  • Getting started code failed to run

    Hi, I've copied the getting started code from here: https://gorgonia.org/getting-started/ and when I run it with go1.18 I get this:

    panic: Something in this program imports go4.org/unsafe/assume-no-moving-gc to declare that it assumes a non-moving garbage collector, but your version of go4.org/unsafe/assume-no-moving-gc hasn't been updated to assert that it's safe against the go1.18 runtime. If you want to risk it, run with environment variable ASSUME_NO_MOVING_GC_UNSAFE_RISK_IT_WITH=go1.18 set. Notably, if go1.18 adds a moving garbage collector, this program is unsafe to use.

    I've found a few things on the internet, but I would rather not disable it.
