onnx-go gives you the ability to import a pre-trained neural network into Go without being tied to a specific framework or library.

This is a Go Interface to Open Neural Network Exchange (ONNX).

Overview

onnx-go contains primitives to decode an ONNX binary model into a computation backend, so you can use it like any other library in your Go code. For more information about ONNX, please visit onnx.ai.

The implementation of the ONNX specification is partial for the import and non-existent for the export.

Vision statement

For the Go developer who needs to add machine learning capabilities to their code, onnx-go is a package that facilitates the use of neural network models (software 2.0). Unlike other computation libraries, this package does not require special data-science skills.

Warning: the API is experimental and may change.

Disclaimer

This is a new version of the API.
The tweaked version of Gorgonia has been removed; onnx-go is now compatible with the master branch of Gorgonia.
Some operators are not yet available, though.

A utility has been added to run models from the zoo;
check the `examples` subdirectory.

Install

Install it via go get:

go get github.com/owulveryck/onnx-go

onnx-go is compatible with Go modules.

Example

These examples assume that you have a pre-trained model.onnx file available. You can download pre-trained models from the ONNX model zoo.

Very simple example

This example does nothing but decode the graph into a simple backend. You can then do whatever you want with the generated graph.

// Create a backend receiver
backend := simple.NewSimpleGraph()
// Create a model and set the execution backend
model := onnx.NewModel(backend)

// Read the ONNX model
b, err := ioutil.ReadFile("model.onnx")
if err != nil {
	log.Fatal(err)
}
// Decode it into the model
err = model.UnmarshalBinary(b)

Simple example to run a pre-trained model

This example uses Gorgonia as a backend.

import "github.com/owulveryck/onnx-go/backend/x/gorgonnx"

At the present time, Gorgonia does not implement all of the ONNX operators. Therefore, most of the models from the model zoo will not work. Things will improve little by little as more operators are added to the backend.

You can find a list of tested examples and a coverage report here.

func Example_gorgonia() {
	// Create a backend receiver
	backend := gorgonnx.NewGraph()
	// Create a model and set the execution backend
	model := onnx.NewModel(backend)

	// Read the ONNX model
	b, err := ioutil.ReadFile("model.onnx")
	if err != nil {
		log.Fatal(err)
	}
	// Decode it into the model
	err = model.UnmarshalBinary(b)
	if err != nil {
		log.Fatal(err)
	}
	// Set the first input; the number of inputs depends on the model.
	// input is a tensor.Tensor whose shape matches the model's expected input.
	model.SetInput(0, input)
	err = backend.Run()
	if err != nil {
		log.Fatal(err)
	}
	output, err := model.GetOutputTensors()
	if err != nil {
		log.Fatal(err)
	}
	// Write the first output to stdout
	fmt.Println(output[0])
}

Model zoo

In the examples subdirectory, you will find a utility to run a model from the zoo, as well as a sample utility to analyze a picture with Tiny YOLO v2.

Internal

ONNX protobuf definition

The protobuf definition of ONNX is compiled into Go with the classic protoc tool. The definition can be found in the internal directory. It is not exposed, to avoid external dependencies on this repo; the generated code may later switch to a more efficient compiler such as gogo protobuf, and that change should be transparent to users of this package.

Execution backend

In order to execute the neural network, you need a backend able to execute a computation graph (for more information on computation graphs, please read this blog post).

This picture represents the mechanism:

Schema

onnx-go does not provide any executable backend, but a simple backend that builds an information graph is provided as a reference example (see the simple subpackage). Gorgonia is the main target backend of onnx-go.

Backend implementation

A backend is basically a weighted directed graph that can apply an Operation to its nodes. It should fulfill this interface:

type Backend interface {
	OperationCarrier
	graph.DirectedWeightedBuilder
}

type OperationCarrier interface {
	// ApplyOperation applies an operation to graph nodes.
	// graph.Node is variadic because some operations produce
	// multiple outputs (for example, a Split operation returns n nodes).
	ApplyOperation(Operation, ...graph.Node) error
}
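To make this concrete, here is a minimal, self-contained sketch of a toy backend in the spirit of this interface. The Node and Operation types below are simplified stand-ins for gonum's graph.Node and onnx-go's Operation, and the recorder logic is purely illustrative; a real backend would wire actual computations into its graph:

```go
package main

import "fmt"

// Node is a stand-in for gonum's graph.Node.
type Node interface{ ID() int64 }

// Operation mirrors onnx-go's Operation: a name plus a map of attributes.
type Operation struct {
	Name       string
	Attributes map[string]interface{}
}

type node struct{ id int64 }

func (n node) ID() int64 { return n.id }

// Graph is a toy backend: it builds nodes and records which operation
// was applied to which node, instead of running a real computation.
type Graph struct {
	nextID int64
	ops    map[int64]string
}

func NewGraph() *Graph { return &Graph{ops: make(map[int64]string)} }

// NewNode mimics the node-building half of graph.DirectedWeightedBuilder.
func (g *Graph) NewNode() Node {
	n := node{id: g.nextID}
	g.nextID++
	return n
}

// ApplyOperation plays the role of OperationCarrier: it tags the target
// nodes with the operation name.
func (g *Graph) ApplyOperation(op Operation, nodes ...Node) error {
	if len(nodes) == 0 {
		return fmt.Errorf("%v: no target node", op.Name)
	}
	for _, n := range nodes {
		g.ops[n.ID()] = op.Name
	}
	return nil
}

func main() {
	g := NewGraph()
	n := g.NewNode()
	if err := g.ApplyOperation(Operation{Name: "Conv"}, n); err != nil {
		panic(err)
	}
	fmt.Println(g.ops[n.ID()])
}
```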

An Operation is represented by its name and a map of attributes. For example, the Conv operator, as described in the ONNX spec, is represented like this:

convOperator := Operation{
	Name: "Conv",
	Attributes: map[string]interface{}{
		"auto_pad":  "NOTSET",
		"dilations": []int64{1, 1},
		"group":     1,
		"pads":      []int64{1, 1},
		"strides":   []int64{1, 1},
	},
}
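As an illustration of how a backend might consume such an Operation, here is a minimal, self-contained sketch. The Operation type mirrors onnx-go's, but the apply dispatcher is a hypothetical stand-in: a real backend would wire the computation into its graph nodes instead of printing.

```go
package main

import "fmt"

// Operation mirrors onnx-go's Operation: a name plus a map of attributes.
type Operation struct {
	Name       string
	Attributes map[string]interface{}
}

// apply is a hypothetical dispatcher: it routes an Operation to an
// implementation by name and reads the attributes it needs.
func apply(op Operation) error {
	switch op.Name {
	case "Conv":
		strides, ok := op.Attributes["strides"].([]int64)
		if !ok {
			return fmt.Errorf("Conv: missing or malformed strides attribute")
		}
		fmt.Println("applying Conv with strides", strides)
		return nil
	default:
		return fmt.Errorf("operator %q is not implemented", op.Name)
	}
}

func main() {
	convOperator := Operation{
		Name: "Conv",
		Attributes: map[string]interface{}{
			"strides": []int64{1, 1},
		},
	}
	if err := apply(convOperator); err != nil {
		fmt.Println(err)
	}
}
```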

Besides operators, a node can carry a value. Values are described as tensor.Tensor. To carry data, a node of the graph should fulfill this interface:

type DataCarrier interface {
	SetTensor(t tensor.Tensor) error
	GetTensor() tensor.Tensor
}
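For illustration, a node carrying a value might look like the following sketch. The Tensor interface here is a stand-in for gorgonia.org/tensor's tensor.Tensor, reduced to the single method the sketch needs, and the node type is hypothetical:

```go
package main

import "fmt"

// Tensor is a stand-in for gorgonia.org/tensor's tensor.Tensor,
// reduced to the single method this sketch uses.
type Tensor interface {
	Shape() []int
}

type dense struct{ shape []int }

func (d *dense) Shape() []int { return d.shape }

// node is a hypothetical graph node that fulfills the DataCarrier
// interface: it simply stores and returns a tensor.
type node struct {
	id int64
	t  Tensor
}

func (n *node) ID() int64 { return n.id }

func (n *node) SetTensor(t Tensor) error {
	n.t = t
	return nil
}

func (n *node) GetTensor() Tensor { return n.t }

func main() {
	n := &node{id: 0}
	if err := n.SetTensor(&dense{shape: []int{1, 1, 28, 28}}); err != nil {
		panic(err)
	}
	fmt.Println(n.GetTensor().Shape())
}
```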

Backend testing

onnx-go provides some utilities to test a backend. Visit the testbackend package for more info.

Contributing

Contributions are welcome. A contribution guide will eventually be written. Meanwhile, you can raise an issue or send a PR. You can also contact me via Twitter or on the Gophers' Slack (I am @owulveryck on both).

This project is intended to be a safe, welcoming space for collaboration, and contributors are expected to adhere to the Contributor Covenant code of conduct.

Author

Olivier Wulveryck

License

MIT.

Comments
  • Fix build at Raspberry PI

    Is your feature request related to a problem? Please describe. I'm having some errors during the build of the YOLO example with a Raspberry Pi.

    Describe the solution you'd like The solution is quite easy and most of the problems are related to gorgonia. I've already opened an issue. https://github.com/gorgonia/gorgonia/issues/311

The only thing that I had to do in onnx-go was to update this dependency: replace github.com/chewxy/math32 => github.com/chewxy/math32 v1.0.1.

Performance: I haven't done any profiling yet, but performance is really slow on the Pi. Is it possible to use the Pi's GPU to speed things up? On my Mac, the YOLO example takes something like 2s; on the Pi, 50s.

    I did some tests with some other projects like https://github.com/shizukachan/darknet-nnpack and got something like 500ms, 100x faster.

  • Gorgonia's evaluation of the MNIST model does not give expected result

    This is related to the issue #2 I have with gorgonnx (the previous test implementation of onnx-to-gorgonia).

The problem is exactly the same with the new version of the unmarshaler (from the directed-graph branch).

    To investigate, I will check every operator to see where the bug is hidden.

To do so, I have created a test file here. This file contains the evaluated input and output of all the nodes that compose the MNIST model (from the ONNX model zoo).

    The next task is to evaluate all the tests to see if the results are ok.

    To help me, each test function generates a "numpy" compatible tensor for input and output. For simple operators, that should be enough to run them within python and to compute the expected result.

    Any help welcome.

    HOWTO:

    • go-get this repository
    • checkout the directed-graph branch
    • cd into examples/gorgonia
• go run mnist.go runs the (unsuccessful) test (Gorgonia has been vendored in this directory)
• go test generates a numpy subdirectory with the test files
    • find which operator is not ok

    Remark: I did not export the attributes of the Convolution operator yet, but you can find their values in the internal/examples/mnist directory

  • Analyse (and enhance) performances on multiple predictions

I have a demo model with 39 inputs. It takes 0.5s to predict 10000 data points using Keras; with onnx-go it takes 5s.

    	for i := 0; i < 10000; i++ {
    		model.SetInput(0, input)
    		err = backend.Run()
    		model.GetOutputTensors()
    	}
    

Am I making some mistake here?

  • refactor(internal/onnx/ir): rename internal/pb-onnx to internal/onnx/ir

In internal/pb-onnx, the package name is pb but the path ends in pb-onnx.

    In order to improve maintainability, I propose

    • to change the path to internal/onnx/<packagename>,
    • to rename it ir instead of pb to be compliant with the content (see @owulveryck comment)
  • Is it possible to make onnx-go run distributedly?

    Is your feature request related to a problem? Please describe. I am a graduate student who needs a graduation project. I am thinking about whether it is necessary and feasible to make the program run distributedly.

    Describe the solution you'd like Possibly with the help of github.com/chrislusf/gleam ?

  • Installation error

When trying to install this package in a Go-modules-enabled project (golang:latest image), I got the following error:

    go: extracting github.com/tensorflow/tensorflow v1.13.1
    # github.com/owulveryck/onnx-go/internal/pb-onnx
    /go/pkg/mod/github.com/owulveryck/[email protected]/internal/pb-onnx/onnx.proto3.pb.go:22:11: undefined: proto.ProtoPackageIsVersion3
    # gorgonia.org/gorgonia
    /go/pkg/mod/gorgonia.org/[email protected]/graph.go:569:2: cannot use e (type edge) as type graph.Edge in return argument:
            edge does not implement graph.Edge (missing ReversedEdge method)
    /go/pkg/mod/gorgonia.org/[email protected]/node.go:437:16: n.shape.CalcStrides undefined (type tensor.Shape has no field or method CalcStrides)
    /go/pkg/mod/gorgonia.org/[email protected]/node.go:756:15: cannot use e (type edge) as type graph.Edge in argument to n.g.SetEdge:
            edge does not implement graph.Edge (missing ReversedEdge method)
    /go/pkg/mod/gorgonia.org/[email protected]/utils.go:147:28: undefined: tensor.InfChecker
    /go/pkg/mod/gorgonia.org/[email protected]/utils.go:190:28: undefined: tensor.NaNChecker
    
  • [TinyYolo v2] Bug in maxpool?

This commit allows the Tiny YOLO v2 model to be compiled and executed with Gorgonia.

    Sadly the execution does not give the expected result:

    ➜  model_zoo_executor git:(tiny-yolov2) ✗  export MODELDIR=~/Documents/tiny_yolov2
    ➜  model_zoo_executor git:(tiny-yolov2) ✗ go run main.go -model $MODELDIR/model.onnx -input $MODELDIR/test_data_set_0/input_0.pb -output $MODELDIR/test_data_set_0/output_0.pb
    
            Error Trace:    main.go:72
                                                    proc.go:200
                                                    asm_amd64.s:1337
            Error:          Max difference between -0.17929432 and 0.056231752 allowed is 0.005, but difference was -0.23552606999874115
            Messages:       the two tensors should be equal.
    exit status 1
    

    According to this blog post the architecture should be:

    Layer         kernel  stride  output shape
    ---------------------------------------------
    Input                          (416, 416, 3)
    Convolution    3×3      1      (416, 416, 16)
    MaxPooling     2×2      2      (208, 208, 16)
    Convolution    3×3      1      (208, 208, 32)
    MaxPooling     2×2      2      (104, 104, 32)
    Convolution    3×3      1      (104, 104, 64)
    MaxPooling     2×2      2      (52, 52, 64)
    Convolution    3×3      1      (52, 52, 128)
    MaxPooling     2×2      2      (26, 26, 128)
    Convolution    3×3      1      (26, 26, 256)
    MaxPooling     2×2      2      (13, 13, 256)
    Convolution    3×3      1      (13, 13, 512)
    MaxPooling     2×2      1      (13, 13, 512)
    Convolution    3×3      1      (13, 13, 1024)
    Convolution    3×3      1      (13, 13, 1024)
    Convolution    1×1      1      (13, 13, 125)
    ---------------------------------------------
    

    After setting some logs, the architecture of the decoded network is:

    +Convolution             (3, 3)          [1 1]           (1, 16, 416, 416)
    +MaxPooling              (2, 2)          [2 2]           (1, 16, 208, 208)
    +Convolution             (3, 3)          [1 1]           (1, 32, 208, 208)
    +MaxPooling              (2, 2)          [2 2]           (1, 32, 104, 104)
    +Convolution             (3, 3)          [1 1]           (1, 64, 104, 104)
    +MaxPooling              (2, 2)          [2 2]           (1, 64, 52, 52)
    +Convolution             (3, 3)          [1 1]           (1, 128, 52, 52)
    +MaxPooling              (2, 2)          [2 2]           (1, 128, 26, 26)
    +Convolution             (3, 3)          [1 1]           (1, 256, 26, 26)
    +MaxPooling              (2, 2)          [2 2]           (1, 256, 13, 13)
    +Convolution             (3, 3)          [1 1]           (1, 512, 13, 13)
    -MaxPooling              (2, 2)          [1 1]           (1, 512, 14, 14)
    -Convolution             (3, 3)          [1 1]           (1, 1024, 14, 14)
    -Convolution             (3, 3)          [1 1]           (1, 1024, 14, 14)
    -Convolution             (1, 1)          [1 1]           (1, 125, 14, 14)
    

    The last layer using the Maxpool operator does not give the correct output size. The padding used is computed from the auto_pad argument but seems ok (padding is [1,1]).

    It requires more investigation; maybe a bug in Gorgonia.

Note: the computation is slow, but make it work, then make it fast.

    cc @chewxy

  • Broadcasting is consuming a lot of memory in Gorgonnx/Gorgonia

    Bench

    I've created this simple benchmark with the MNIST model to analyze the behavior of the code:

    package onnx_test
    
    import (
            "testing"
    
            "github.com/owulveryck/onnx-go"
            "github.com/owulveryck/onnx-go/backend/x/gorgonnx"
            "github.com/owulveryck/onnx-go/internal/examples/mnist"
            "gorgonia.org/tensor"
    )
    
    func BenchmarkUnmarshalBinary(b *testing.B) {
            input := tensor.New(tensor.WithShape(1, 1, 28, 28), tensor.Of(tensor.Float32))
            for n := 0; n < b.N; n++ {
                    // Create a backend receiver
                    backend := gorgonnx.NewGraph()
                    // Create a model and set the execution backend
                    model := onnx.NewModel(backend)
    
                    // Decode it into the model
                    err := model.UnmarshalBinary(mnist.GetMnist())
                    if err != nil {
                            b.Fatal(err)
                    }
                    // Set the first input, the number depends of the model
                    model.SetInput(0, input)
                    err = backend.Run()
                    if err != nil {
                            b.Fatal(err)
                    }
            }
    }
    

    Running this with go test -bench=. -benchmem -memprofile memprofile.out -cpuprofile profile.out -benchtime=10s generates two files to decode with the go profiler.

    CPU

The result for the CPU is displayed here: mnist cpu flamegraph

    There are possible enhancements, but nothing obvious.

    Memory

    The result for the Memory usage is more interesting. It shows that the repeatOp of Gorgonia is using a lot of memory. The repeatOp is the foundation of the broadcasting.

This op seems to copy a lot of data.

    gorgonia.Tensor

The analysis points to this function from the tensor package as being involved in the extra memory consumption:

    https://github.com/gorgonia/tensor/blob/8eeece33868236224d51e7362e36a68642870bd2/array.go#L34-L51

Especially this call to val.Interface():

    	return array{
    		Header: hdr,
    		t:      t,
    		v:      val.Interface(),
    	}
    

According to the comment, this field is not even mandatory for the array.

    // array is the underlying generic array.
    type array struct {
    	storage.Header             // the header - the Go representation (a slice)
    	t              Dtype       // the element type
    	v              interface{} // an additional reference to the underlying slice. This is not strictly necessary, but does improve upon anything that calls .Data()
    }
    

On top of that, the reflect package from the stdlib carries a TODO about a possible enhancement in the packEface function (packEface converts v to the empty interface):

    		if v.flag&flagAddr != 0 {
    			// TODO: pass safe boolean from valueInterface so
    			// we don't need to copy if safe==true?
    			c := unsafe_New(t)
    			typedmemmove(t, c, ptr)
    			ptr = c
    		}
    

The safe flag is true when calling the Interface() function:

    // Interface returns v's current value as an interface{}.
    // It is equivalent to:
    //	var i interface{} = (v's underlying value)
    // It panics if the Value was obtained by accessing
    // unexported struct fields.
    func (v Value) Interface() (i interface{}) {
    	return valueInterface(v, true)
    }
    

This suggests that avoiding the copy would significantly improve performance.

    cc @chewxy

  • Added Squeeze operator

    Hi,

    I've added the Squeeze operator, following a previous PR on Unsqueeze. Specifications are here: https://github.com/onnx/onnx/blob/master/docs/Operators.md#squeeze

I'll try to add more tests, but do you have any tips on generating the model and embedding it as binary in the test?

    Thanks!

  • Implement operator Softmax for backend Gorgonia/Gorgonnx

    Why is this operator needed?

    This operator is needed at least to run the inception v1 model;

    Implementation

    Link to existing material on the backend

    Expected problems?

    • Two versions of the operator exist in Gorgonia; we should decide whether we need the stable or the non-stable version
• The Softmax operator of ONNX carries one attribute (the axis for the softmax); this attribute does not exist in Gorgonia, so the full implementation of the operator may require tweaking Gorgonia.

    Tests

    go test -run=ONNX/TestSoftmax

  • Work in progress

    About

This is just the interface between the ONNX structures/files and the Go ecosystem. There is also an ongoing effort to implement a backend in Gorgonia, which is a computation library for Go.

    So far, the TensorProto import is partially implemented into the tensor lib but I am waiting for more progress before I ask for a PR and a merge to the master branch.

    Regarding the Model and the Graph structures, I have started a POC which is quick'n'dirty by now. (if you are interested, the code is here). My goal is to be able to run the MNIST example. So far I have generated an ExprGraph and I can run it, but the result is wrong. I am doing some bug hunting.

    Next

    Once the POC is working, I will do some PR, and start a complete integration process of ONNX into Gorgonia (it may need some tooling and enhanced testing). Meanwhile, if you have any idea for enhancing the onnx-go repo, please feel free to open an issue or a PR.

    cc @jspisak @prasanthpul @lupesko @bddppq

  • Implement operator `PReLU` for backend `Gorgonia`

  • Tape machine does not reset properly for some models

    For some models, I get the correct result the first time running, but subsequent runs return the wrong results. Maybe there is some state in the tape machine that does not reset?

    For example, using the following Julia code, I created a simple one-layer test network and saved it as ONNX (testmodel.zip).

    using Flux, ONNXNaiveNASflux, Random
    Random.seed!(0)
    testmodel = Dense(rand(Float32, 2, 8))
    save("testmodel.onnx", testmodel)
    

    When running the following Go code

    package main
    
    import (
    	"fmt"
    	"io/ioutil"
    	"log"
    	"math/rand"
    
    	"github.com/owulveryck/onnx-go"
    	"github.com/owulveryck/onnx-go/backend/x/gorgonnx"
    	"gorgonia.org/tensor"
    )
    
    func randSlice(n int) []float32 {
    	x := make([]float32, n)
    	for i := 0; i < n; i++ {
    		x[i] = rand.Float32()
    	}
    	return x
    }
    
    func main() {
    	rand.Seed(0)
    
    	backend := gorgonnx.NewGraph()
    	model := onnx.NewModel(backend)
    
    	b, _ := ioutil.ReadFile("testmodel.onnx")
    	input := tensor.New(tensor.WithShape(1, 8), tensor.Of(tensor.Float32), tensor.WithBacking(randSlice(8)))
    
    	// b, _ := ioutil.ReadFile("mnist-12.onnx")
    	// input := tensor.New(tensor.WithShape(1, 1, 28, 28), tensor.Of(tensor.Float32))
    
    	// b, _ := ioutil.ReadFile("resnet50-v1-12.onnx")
    	// input := tensor.New(tensor.WithShape(1, 3, 300, 300), tensor.Of(tensor.Float32))
    
    	err := model.UnmarshalBinary(b)
    	if err != nil {
    		log.Fatal(err)
    	}
    	model.SetInput(0, input)
    	fmt.Println(input)
    
    	for i := 0; i < 5; i++ {
    		err = backend.Run()
    		if err != nil {
    			log.Fatal(err)
    		}
    		output, err := model.GetOutputTensors()
    		if err != nil {
    			log.Fatal(err)
    		}
    		fmt.Println(output[0])
    	}
    }
    

    I get the printout

    R[0.94519615  0.24496509  0.65595627  0.05434384   0.3675872  0.28948045   0.1924386  0.65533215]
    R[1.382385  2.133724]
    R[ 2.76477  4.267448]
    R[4.1471553   6.401172]
    R[5.5295405   8.534896]
    R[ 6.911926  10.668619]
    

    The first result 1.382385 2.133724 is the same as what I get in Julia with the same input vector, but the subsequent runs produce ever increasing values. Indeed, it looks like the output is not reset to zero and the results simply accumulate.

This seems to always happen for models I create in Julia, but also for some other ones, e.g., resnet50-v1-12.onnx. However, other models, e.g., mnist-12.onnx, seem not to have this issue. I am running Go 1.19.3 and onnx-go v0.5.0.

    I do not know if this is an issue with the ONNX files, the models, the tape machine, or if I simply have done something wrong in the Go code. Any help is appreciated. Thanks.

  • Support for empty tensors

    Context

I'm trying to load a feature pyramid network on top of a resnet model into onnx-go. The FPN uses an ONNX Resize operator because it needs to upsample the feature maps. The Resize operator has an input (roi) that is optional.

I'm using torch. When exporting a torch Resize operator to ONNX, the roi parameter is not used (it is only used for the tf_crop_and_resize coordinate transformation mode), but the torch ONNX export emits a constant with an empty tensor value: it has [0] as dims and no float_data or raw_data. Since this parameter isn't used at all, its value should not matter.

    The bug

When loading such an ONNX model, onnx-go crashes with "No data found".

To generate the ONNX file, I'm using this:

    import os
    
    import torch
    from torchvision.transforms import transforms
    
    torch.onnx.export(
        transforms.Resize((100, 100)),
        torch.zeros((1, 3, 200, 200)),
        "model.onnx",
        opset_version=11,
        verbose=True,
    )
    
    Output
    graph(%img : Float(1, 3, 200, 200, strides=[120000, 40000, 200, 1], requires_grad=0, device=cpu),
          %12 : Long(2, strides=[1], requires_grad=0, device=cpu)):
      %2 : Long(4, strides=[1], device=cpu) = onnx::Shape(%img)
      %3 : Long(1, strides=[1], device=cpu) = onnx::Constant[value={0}]()
      %4 : Long(1, strides=[1], device=cpu) = onnx::Constant[value={0}]()
      %5 : Long(1, strides=[1], device=cpu) = onnx::Constant[value={2}]()
      %6 : Long(2, strides=[1], device=cpu) = onnx::Slice(%2, %4, %5, %3)
      %8 : Long(4, strides=[1], device=cpu) = onnx::Concat[axis=0](%6, %12)
      %9 : Float(0, strides=[1], device=cpu) = onnx::Constant[value=[ CPUFloatType{0} ]]()
      %10 : Float(0, strides=[1], device=cpu) = onnx::Constant[value=[ CPUFloatType{0} ]]()
      %11 : Float(*, *, *, *, strides=[30000, 10000, 100, 1], requires_grad=0, device=cpu) = onnx::Resize[coordinate_transformation_mode="pytorch_half_pixel", cubic_coeff_a=-0.75, mode="linear", nearest_mode="floor"](%img, %9, %10, %8) # /home/pieter/projects/orbisk/pytorch-image-classification/.venv/lib/python3.8/site-packages/torch/nn/functional.py:3731:0
      return (%11)
    

To load it, I'm using:

    func main() {
    	// Create a backend receiver
    	backend := gorgonnx.NewGraph()
    
    	// Create a model and set the execution backend
    	model := onnx.NewModel(backend)
    
    	// read the onnx model
    	b, err := os.ReadFile("model.onnx")
    	if err != nil {
    		log.Fatal("error reading file ", err)
    	}
    
    	// Decode it into the model
    	err = model.UnmarshalBinary(b)
    	if err != nil {
    		log.Fatal("error loading model ", err)
    	}
    }
    

    Output:

    2022/11/16 16:35:11 error loading model No data found
    

    Why this happens

The onnx::Resize operator takes %9 and %10 as inputs. These are of type Float(0) and don't have any data. These tensors cannot be read properly by onnx-go.

    The error happens here: https://github.com/owulveryck/onnx-go/blob/master/internal/onnx/ir/tensor.go#L113

    Solution

I think this can be solved by adding a check for the dimensionality of the tensor to generateConsOptsFromFloat64Tensor and the like: if it is zero, an empty Gorgonia tensor should be created.

    I do have some time to work on this (work project) if this solution is acceptable.

  • Question: unsqueeze: axes in not an []int64

    Hello @owulveryck,

I am trying to run this ONNX model:

    model

    with this code:

    package main
    
    import (
    	"fmt"
    	"github.com/owulveryck/onnx-go"
    	"github.com/owulveryck/onnx-go/backend/x/gorgonnx"
    	"gorgonia.org/tensor"
    	"io/ioutil"
    	"log"
    )
    
    func main() {
    	backend := gorgonnx.NewGraph()
    	model := onnx.NewModel(backend)
    
    	var err error
    	var b []byte
    	var output []tensor.Tensor
    
    	if b, err = ioutil.ReadFile("model.onnx"); err != nil {
    		log.Fatal(err)
    	}
    
    	if err = model.UnmarshalBinary(b); err != nil {
    		log.Fatal(err)
    	}
    
    	var acosGroupTensor tensor.Tensor
    	if acosGroupTensor, err = tensor.Argmax(
    		tensor.New(tensor.WithShape(1, 5), tensor.Of(tensor.Float32), tensor.WithBacking([]float32{0, 1, 0, 0, 0})),
    		1,
    	); err != nil {
    		log.Fatal(err)
    	}
    
    	if err = model.SetInput(0, acosGroupTensor); err != nil {
    		log.Fatal(err)
    	}
    
    	var acosRatioTensor = tensor.New(tensor.WithShape(1, 4), tensor.Of(tensor.Float32), tensor.WithBacking([]float32{0, 0, -1, 0}))
    
    	if err = model.SetInput(1, acosRatioTensor); err != nil {
    		log.Fatal(err)
    	}
    
    	var salesRatioTensor = tensor.New(tensor.WithShape(1, 4), tensor.Of(tensor.Float32), tensor.WithBacking([]float32{0, 0, -1, 0}))
    
    	if err = model.SetInput(2, salesRatioTensor); err != nil {
    		log.Fatal(err)
    	}
    
    	if err = backend.Run(); err != nil {
    		log.Fatal(err)
    	}
    
    	if output, err = model.GetOutputTensors(); err != nil {
    		log.Fatal(err)
    	}
    	// write the first output to stdout
    	fmt.Println(output[0])
    }
    

It looks like I am missing something pretty small; I would love to get some help ❤️ Thank you.

  • run() function calls newMachine() everytime

    Hi @owulveryck

In the code of Run() below, NewMachine is called every time. Is it required to be called every time? I guess it should only happen when g.m is nil?

    // if g.m == nil {
    g.m = xvm.NewMachine(g.exprgraph)
    defer g.m.Close()
    // g.m = gorgonia.NewTapeMachine(g.exprgraph)
    // }

Thanks, Manjunath

  • poor performance (run model)

My ONNX file is very simple (lightweight), but running the model still takes a lot of time (1ms) to process the data. How can I improve the performance of running the model? Could you please help me with any solution to this issue? Thank you.
