An idiomatic Go binding to the C++ core of PyTorch

GoTorch


GoTorch reimplements PyTorch high-level APIs, including modules and functionals, in idiomatic Go. This enables deep learning programming in Go and Go+. This project is in its very early stage.

Easy Switch

Writing deep learning systems in Go is as efficient as in Python. The DCGAN training programs in GoTorch and PyTorch call similar APIs, have similar program structure, and have a similar number of lines. Go+ has a syntax similar to Python. The Go+ compiler translates Go+ programs into Go source programs. It is a joy to write Go+ programs that call Go packages like GoTorch.

We plan to build a translator that migrates existing PyTorch models written in Python into GoTorch.

Benefits

  1. Higher runtime efficiency. Go programs run as efficiently as C++.

  2. Training and prediction in the same language. No longer training in Python and online prediction in C++. All in Go/Go+. No TensorFlow graphs or PyTorch tracing.

  3. Same data processing code for training and prediction. No need to wrap OpenCV functions into TensorFlow operators in C++ for prediction and Python for training.

  4. Supports many machine learning paradigms, including adversarial, reinforcement, and imitation learning -- those we cannot split into training and prediction.

  5. Same program for edge and cloud. GoTorch programs compile and run on phones and self-driving cars as they do on servers and desktops.

The Tech Stack

GoTorch works with the following open-source communities to form Go+Torch.

  • the Go+ community,
  • the PyTorch community, and
  • the TensorFlow XLA ecosystem.

The following figure shows the stack of technologies.

Go+ applications   # users write DL applications in Go+,
     │             # whose syntax is as concise as Python
 [Go+ compiler]
     ↓
Go source code -→ GoTorch -→ libtorch -→ pytorch/xla -→ XLA ops
     │
 [Go compiler]
     ↓
executable binary  # x86_64, ARM, CUDA, TPU
                   # Linux, macOS, Android, iOS

Documentation

Comments
  • torch.nn.Module in Go

    torch.nn.Module in Go

    The PyTorch API has a key concept -- torch.nn.Module. Many built-in and user-defined models are classes derived from torch.nn.Module. The only method to override is forward(x).

    Usually, a torch.nn.Module-derived class has data members representing the model parameters. For example, nn.Linear, the PyTorch implementation of the fully-connected layer, has W and B -- the weights and the bias, respectively.

    In Go/Go+, the concept that corresponds to a Python base class is an interface. So, we provide type Module interface to mimic torch.nn.Module.
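
    A minimal sketch of what such an interface could look like (the method set and the Linear example below are illustrative assumptions, not the final API):

      // Module mimics torch.nn.Module: anything that can run a
      // forward pass is a Module.
      type Module interface {
          Forward(x Tensor) Tensor
      }

      // Linear is a fully-connected layer holding its parameters,
      // analogous to nn.Linear in PyTorch.
      type Linear struct {
          W Tensor // weights
          B Tensor // bias
      }

      func (l *Linear) Forward(x Tensor) Tensor {
          return MM(x, l.W).Add(l.B) // MM and Add are hypothetical tensor ops
      }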

    Then, we need a solution to free up tensors when a model's life is over.

  • Why The Monad Pattern Looks Promising in Go+Torch Design

    Why The Monad Pattern Looks Promising in Go+Torch Design

    Monad is a programming pattern that records the output of each function call in a data structure, so that we can free them all at once afterward. It applies to many programming languages. Let us see why it is important to Go+Torch.

    Go uses the pattern extensively, see https://www.innoq.com/en/blog/golang-errors-monads/ for an example.
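
    For instance, the well-known errWriter idiom records the first error in a struct so subsequent calls become no-ops and the caller checks only once at the end. A minimal sketch:

      type errWriter struct {
          w   io.Writer
          err error
      }

      func (ew *errWriter) write(buf []byte) {
          if ew.err != nil {
              return // a previous call failed; skip the rest of the chain
          }
          _, ew.err = ew.w.Write(buf)
      }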

    Case Study 1: Free Tensors

    We now allocate Tensor objects using new to keep the reference count in the shared_ptr field of the C++ Tensor class: https://github.com/wangkuiyi/gotorch/blob/4ade9aa9ce84ae2532df260dd910c2b7bcf1e47a/cgotorch/cgotorch.cc#L16 The newed Tensor objects would cause a memory leak if we don't recycle them.

    Assume that Go has a frontend API similar to the C++ one. Then, following the C++ MNIST example, let's think about the problems below.

    Problem 1: Destruct Tensors Created In The Train Loop To Avoid Memory Leak

    1. Tensors Created In the C++ train loop: The train loop in mnist.cpp is like:

      for (auto& batch : data_loader) {
          auto data = batch.data.to(device), targets = batch.target.to(device);  // `data` and `targets` are `Tensor`s
          optimizer.zero_grad();
          auto output = model.forward(data);  // `output` is a `Tensor`
          auto loss = torch::nll_loss(output, targets);  // `loss` is a `Tensor`
          AT_ASSERT(!std::isnan(loss.template item<float>()));
          loss.backward();
          optimizer.step();
          //...
      }
      

      We can see that these Tensors have to be created:

      1. data and targets as the features and labels of the dataset
      2. output as the predictions of the data
      3. loss

      We can use defer to destruct Tensors in the train loop

      Because data, targets, output, and loss are all stack variables, they are created and destroyed in each iteration of the C++ train loop. This implies that the libtorch framework takes ownership of the Tensors if necessary. As a result, a naive gotorch API can use defer to recycle the reference-counted Tensors. That is, the following imaginary code would work okay.

      // We need this nested function to make `defer` work as expected.
      func step(batch *Batch) {
          // `data`, `targets`, `output`, and `loss` are `Tensor`s.
          data := batch.Data.To(device)
          defer data.Close()
          targets := batch.Target.To(device)
          defer targets.Close()
          optimizer.ZeroGrad()
          output := model.Forward(data)
          defer output.Close()
          loss := torch.NllLoss(output, targets)
          defer loss.Close()
          loss.Backward()
          optimizer.Step()
          // ...
      }
      for batch := range data_loader {
          step(batch)
      }
      

      The defers are a bit tedious; maybe we can improve the syntax of Go+ to save typing.

    2. Tensors Created In the C++ forward Method: The forward method is called by the train loop above. In the C++ MNIST example, it looks like:

      torch::Tensor forward(torch::Tensor x) {
          x = torch::relu(torch::max_pool2d(conv1->forward(x), 2));
          x = torch::relu(
              torch::max_pool2d(conv2_drop->forward(conv2->forward(x)), 2));
          x = x.view({-1, 320});
          x = torch::relu(fc1->forward(x));
          x = torch::dropout(x, /*p=*/0.5, /*training=*/is_training());
          x = fc2->forward(x);
          return torch::log_softmax(x, /*dim=*/1);
        }
      

      We can use defer to destruct Tensors in the Forward function (in a tricky way)

      Similar to the train loop above, x is a Tensor on the stack and is destroyed at the end of the function scope. The difference is that x is reassigned multiple times, so we cannot simply use defer x.Close() here. A workaround is to require users to use a different idiom; for a naive example:

      func (net *Net) Forward(x torch.Tensor) torch.Tensor { // the argument x is recycled in the train loop
          var tensors []torch.Tensor
          defer func() {
              for _, t := range tensors {
                  t.Close()
              }
          }()
          x = torch.Relu(torch.MaxPool2d(net.conv1.Forward(x), 2))
          tensors = append(tensors, x)
          x = torch.Relu(
              torch.MaxPool2d(net.conv2_drop.Forward(net.conv2.Forward(x)), 2))
          tensors = append(tensors, x)
          x = x.View([]int{-1, 320})
          tensors = append(tensors, x)
          x = torch.Relu(net.fc1.Forward(x))
          tensors = append(tensors, x)
          x = torch.Dropout(x, /*p=*/ 0.5, /*training=*/ net.isTraining())
          tensors = append(tensors, x)
          x = net.fc2.Forward(x)
          tensors = append(tensors, x)
          return torch.LogSoftmax(x, /*dim=*/ 1) // the return value is recycled in the train loop
      }
      

      Obviously, this is not very elegant.

      Should we do the bookkeeping of Tensors in C++?

      A better way is to keep the tensors array in C++ rather than in Go. For example, we can use a std::vector to record each C++ Tensor created through the Go API, and provide a torch.CleanTensors function for users to call at the end of the train loop. However, this solution is harder to design properly; for example, we have to take goroutines into consideration to avoid corrupting the std::vector.
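
      To make the idea concrete, here is the same bookkeeping expressed on the Go side as a sketch (recordTensor is an illustrative name, and a real design would likely need per-goroutine registries rather than one global lock):

        var (
            tensorsMu sync.Mutex
            tensors   []Tensor
        )

        // recordTensor is called by every wrapper that creates a C++ Tensor.
        func recordTensor(t Tensor) Tensor {
            tensorsMu.Lock()
            defer tensorsMu.Unlock()
            tensors = append(tensors, t)
            return t
        }

        // CleanTensors closes every recorded Tensor; users would call it
        // at the end of each train-loop iteration.
        func CleanTensors() {
            tensorsMu.Lock()
            defer tensorsMu.Unlock()
            for _, t := range tensors {
                t.Close()
            }
            tensors = tensors[:0]
        }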

    Case Study 2: Record Errors

    Few functions in libtorch have the noexcept tag. This implies that most libtorch functions may throw an exception, so we have to expose an error return value from these functions' wrappers in Go. Recall the step function above:

    func step(batch *Batch) {
        // `data`, `targets`, `output`, `loss` are `Tensor`s.
        data := batch.Data.To(device)
        defer data.Close()
        // ...
    }
    

    It may become the following in production code:

    func step(batch *Batch) error {
        // `data`, `targets`, `output`, `loss` are `Tensor`s.
        data, err := batch.Data.To(device)
        if err != nil {
            return ...
        }
        defer data.Close()
        // ...
    }
    

    That is, the user should check whether there's an error on each line, which may be tedious too. Go+ has a neat syntax to unwrap errors, but I cannot think of an elegant way to solve the problem for the time being. See also the previous discussions: https://github.com/goplus/gop/issues/307#issuecomment-663396846, https://github.com/goplus/gop/issues/307#issuecomment-663942929
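
    A conventional Go workaround, shown here only as a sketch, is a panicking helper so that prototypes stay terse at the cost of turning errors into panics:

      // MustTensor unwraps a (Tensor, error) pair, panicking on error.
      // Reasonable in examples and tests; questionable in production loops.
      func MustTensor(t Tensor, err error) Tensor {
          if err != nil {
              panic(err)
          }
          return t
      }

      // data := MustTensor(batch.Data.To(device))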

  • Test Go GC on Tensors

    Test Go GC on Tensors

    This example program calls runtime.SetFinalizer with a torch.Tensor to set a finalizer that calls Tensor.Close() and prints the message "Closed Tensor".
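
    A sketch of such an experiment (the exact constructor is an assumption; finalizers run asynchronously, hence the sleep):

      package main

      import (
          "fmt"
          "runtime"
          "time"

          torch "github.com/wangkuiyi/gotorch"
      )

      func main() {
          t := torch.RandN([]int64{2, 3}, false)
          runtime.SetFinalizer(&t, func(t *torch.Tensor) {
              t.Close()
              fmt.Println("Closed Tensor")
          })
          runtime.GC()                       // t is unused below, so it may be finalized
          time.Sleep(100 * time.Millisecond) // give the finalizer goroutine a chance to run
      }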

  • Compare different frontend language training on MNIST dataset

    Compare different frontend language training on MNIST dataset

    Just like writing a program to print "Hello World" is our first lesson in coding, training a model for handwriting recognition on the MNIST database is usually the first lesson in deep learning.

    This issue compares how we train the model in various frontend languages: C++, Go, Python, and Go+Torch.

     C++
    #include <torch/torch.h>
    
    #include <cstddef>
    #include <cstdio>
    #include <iostream>
    #include <string>
    #include <vector>
    
    struct Net: torch::nn::Module {
      Net()
          : conv1(torch::nn::Conv2dOptions(1, 10, /*kernel_size=*/5)),
            conv2(torch::nn::Conv2dOptions(10, 20, /*kernel_size=*/5)),
            dropout1(0.25),
            dropout2(0.5),
            fc1(320, 50),
            fc2(50, 10) {
        register_module("conv1", conv1);
        register_module("conv2", conv2);
        register_module("dropout1", dropout1);
        register_module("dropout2", dropout2);
        register_module("fc1", fc1);
        register_module("fc2", fc2);
      }
    
      torch::Tensor forward(torch::Tensor x) {
        x = conv1->forward(x);
        x = torch::relu(x);
        x = conv2->forward(x);
        x = torch::relu(x);
        x = torch::max_pool2d(x, 2);
        x = dropout1(x);
        x = torch::flatten(x, 1);
        x = fc1(x);
        x = torch::relu(x);
        x = dropout2(x);
        x = fc2(x);
        return torch::log_softmax(x, 1);
      }
    
      torch::nn::Conv2d conv1;
      torch::nn::Conv2d conv2;
      torch::nn::Dropout dropout1;
      torch::nn::Dropout dropout2;
      torch::nn::Linear fc1;
      torch::nn::Linear fc2;
    };
    
    auto main() -> int {
      Net model;
      model.train();
      auto sgd = torch::optim::SGD(
          model.parameters(), torch::optim::SGDOptions(0.01).momentum(0.5));
      sgd.zero_grad();
      auto data = torch::rand({2, 3, 224, 224});
      auto target = torch::randint(1, 10, {2, });
      auto output = model.forward(data);
      auto loss = torch::nll_loss(output, target);
      loss.backward();
      sgd.step();
      std::printf("Loss: %.6f", loss.template item<float>());
    }
    
     Go

     package main

     import (
     	"fmt"

     	torch "github.com/wangkuiyi/gotorch"
     )

     type Net struct {
     	torch.Module
     	conv1    torch.Conv2d
     	conv2    torch.Conv2d
     	dropout1 torch.Dropout
     	dropout2 torch.Dropout
     	fc1      torch.Linear
     	fc2      torch.Linear
     }

     func NewNet() *Net {
     	n := &Net{
     		conv1:    torch.Conv2d(1, 10, 5),
     		conv2:    torch.Conv2d(10, 20, 5),
     		dropout1: torch.Dropout(0.25),
     		dropout2: torch.Dropout(0.5),
     		fc1:      torch.Linear(9216, 128),
     		fc2:      torch.Linear(128, 10),
     	}
     	n.registerModule()
     	return n
     }

     func (n *Net) registerModule() {
     	n.RegisterModule("conv1", n.conv1)
     	n.RegisterModule("conv2", n.conv2)
     	n.RegisterModule("dropout1", n.dropout1)
     	n.RegisterModule("dropout2", n.dropout2)
     	n.RegisterModule("fc1", n.fc1)
     	n.RegisterModule("fc2", n.fc2)
     }

     func (n *Net) Forward(x torch.Tensor) torch.Tensor {
     	x = n.conv1.Forward(x)
     	x = torch.Relu(x)
     	x = n.conv2.Forward(x)
     	x = torch.Relu(x)
     	x = torch.MaxPool2d(x, 2)
     	x = n.dropout1.Forward(x)
     	x = torch.Flatten(x, 1)
     	x = n.fc1.Forward(x)
     	x = torch.Relu(x)
     	x = n.dropout2.Forward(x)
     	x = n.fc2.Forward(x)
     	return torch.LogSoftmax(x, 1)
     }

     func main() {
     	model := NewNet()
     	model.Train()
     	sgd := torch.NewSGD(model.Parameters(), 0.01, 0.5)
     	sgd.ZeroGrad()
     	data := torch.Rand([]int64{2, 1, 28, 28})
     	target := torch.RandInt(1, 10, []int64{2})
     	output := model.Forward(data)
     	loss := torch.NllLoss(output, target)
     	loss.Backward()
     	sgd.Step()
     	fmt.Printf("Loss: %.6f\n", loss.Item())
     }
    
     Python
    from __future__ import print_function
    import torch
    import torch.nn as nn
    import torch.nn.functional as F
    import torch.optim as optim
    
    class Net(nn.Module):
        def __init__(self):
            super(Net, self).__init__()
            self.conv1 = nn.Conv2d(1, 32, 3, 1)
            self.conv2 = nn.Conv2d(32, 64, 3, 1)
            self.dropout1 = nn.Dropout2d(0.25)
            self.dropout2 = nn.Dropout2d(0.5)
            self.fc1 = nn.Linear(9216, 128)
            self.fc2 = nn.Linear(128, 10)
    
        def forward(self, x):
            x = self.conv1(x)
            x = F.relu(x)
            x = self.conv2(x)
            x = F.relu(x)
            x = F.max_pool2d(x, 2)
            x = self.dropout1(x)
            x = torch.flatten(x, 1)
            x = self.fc1(x)
            x = F.relu(x)
            x = self.dropout2(x)
            x = self.fc2(x)
            output = F.log_softmax(x, dim=1)
            return output
    
    
    model = Net()
    model.train()
    optimizer = optim.Adadelta(model.parameters(), lr=0.1)
    data = torch.rand((2, 1, 28, 28))
    target = torch.randint(1, 10, (2,))
    output = model(data)
    loss = F.nll_loss(output, target)
    loss.backward()
    optimizer.step()
    print("Loss: {:.6f}".format(loss.item()))
    
     Go+Torch

     package main

     import (
     	torch "github.com/wangkuiyi/gotorch"
     )

     type Net struct {
     	torch.Module
     	conv1    torch.Conv2d
     	conv2    torch.Conv2d
     	dropout1 torch.Dropout
     	dropout2 torch.Dropout
     	fc1      torch.Linear
     	fc2      torch.Linear
     }

     func NewNet() *Net {
     	return &Net{
     		conv1:    torch.Conv2d(1, 10, 5),
     		conv2:    torch.Conv2d(10, 20, 5),
     		dropout1: torch.Dropout(0.25),
     		dropout2: torch.Dropout(0.5),
     		fc1:      torch.Linear(9216, 128),
     		fc2:      torch.Linear(128, 10),
     	}
     }

     func (n *Net) Forward(x torch.Tensor) torch.Tensor {
     	x = n.conv1.Forward(x)
     	x = torch.Relu(x)
     	x = n.conv2.Forward(x)
     	x = torch.Relu(x)
     	x = torch.MaxPool2d(x, 2)
     	x = n.dropout1.Forward(x)
     	x = torch.Flatten(x, 1)
     	x = n.fc1.Forward(x)
     	x = torch.Relu(x)
     	x = n.dropout2.Forward(x)
     	x = n.fc2.Forward(x)
     	return torch.LogSoftmax(x, 1)
     }

     model := NewNet()
     model.Train()
     sgd := torch.NewSGD(model.Parameters(), 0.01, 0.5)
     sgd.ZeroGrad()
     data := torch.Rand([]int64{2, 1, 28, 28})
     target := torch.RandInt(1, 10, []int64{2})
     output := model.Forward(data)
     loss := torch.NllLoss(output, target)
     loss.Backward()
     sgd.Step()
     println("Loss:", loss.Item())
    
  • Try to fix to_tensor precison

    Try to fix to_tensor precison

    Not sure if https://github.com/wangkuiyi/gotorch/issues/288 is due to the float precision problem -- it would be helpful if that issue described how to reproduce it -- but I guess this PR might be the reason.

  • Decoding jpg diff between Go image library and Python PIL library

    Decoding jpg diff between Go image library and Python PIL library

    I use the ToTensor transform to read the same image in GoTorch and PyTorch:

    The last three Tensor values in GoTorch:

    0.0788  0.0936  0.0936
    

    In PyTorch:

    0.0784, 0.0941, 0.0941
    

    There is a small difference.
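
    The difference most likely comes from the two JPEG decoders rather than from ToTensor itself. A quick way to inspect what Go's standard decoder produces (a standalone sketch; input.jpg is a placeholder name, and the division mirrors ToTensor's scaling to [0, 1]):

      package main

      import (
          "fmt"
          "image/jpeg"
          "log"
          "os"
      )

      func main() {
          f, err := os.Open("input.jpg")
          if err != nil {
              log.Fatal(err)
          }
          defer f.Close()
          img, err := jpeg.Decode(f)
          if err != nil {
              log.Fatal(err)
          }
          // RGBA returns 16-bit channel values; normalize to [0, 1].
          r, g, b, _ := img.At(0, 0).RGBA()
          fmt.Printf("%.4f %.4f %.4f\n",
              float64(r)/0xffff, float64(g)/0xffff, float64(b)/0xffff)
      }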

  • Add pickle_load/save C++ example

    Add pickle_load/save C++ example

    Run this program

    cd example/pickle/
    make
    ./pickle
    

    The result is

    Generated tensor = -0.5005  0.5228 -0.9541  0.9453
     0.6573 -0.1432 -0.5520 -0.9114
    -2.1619 -0.7022  0.3464  0.1554
    [ CPUFloatType{3,4} ]
    Encoded buffer size = 747
    Loaded tensor = -0.5005  0.5228 -0.9541  0.9453
     0.6573 -0.1432 -0.5520 -0.9114
    -2.1619 -0.7022  0.3464  0.1554
    [ CPUFloatType{3,4} ]
    

    It seems to work.

  • CircleCI runs mandatory Linux test, Travis CI runs optional macOS

    CircleCI runs mandatory Linux test, Travis CI runs optional macOS

    • CircleCI runs Linux tests, pre-commit checks, and codecov reporting. Required to pass before merging.
    • Travis CI runs macOS tests. No pre-commit checks, no codecov reporting. It takes forever to install clang-format in Travis CI macOS VM image as it upgrades too many Homebrew packages. It is NOT required to pass Travis CI before merging.
  • Which levels of abstractions in C++ should be exposed to Go

    Which levels of abstractions in C++ should be exposed to Go

    There are three levels of abstractions:

    • Level 1: native functions form the low-level API. It contains many basic mathematical operations.

    • Level 2: nn.functional is the middle-level API. It is closer to deep learning: it composes basic mathematical operations into complex neural network operations.

    • Level 3: nn.module is the high-level API. A module contains state, such as parameters and buffers. It's a C++ class.

    Let's take padding operator as an example:

    |                 | expose to Go       | API                                           | contains state              |
    | --------------- | ------------------ | --------------------------------------------- | --------------------------- |
    | native function | C++ function, easy | low-level API, flexible, few users may use it | No                          |
    | nn.functional   | C++ function, easy | middle-level API, most users use it           | No                          |
    | nn.module       | C++ class, hard    | high-level API, most users use it             | Yes, parameters and buffers |

    Another interesting thing: nn.functional will try to fuse some basic native functions. Here is an example of nn.functional.linear.

    I am wondering which levels of abstraction in C++ I am supposed to expose to Go.
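
    To make the trade-off concrete, here is a sketch of the two easy cases on the Go side, using the padding example (all names are illustrative assumptions):

      // Levels 1 and 2 are stateless, so a native function or an
      // nn.functional maps to a plain Go function wrapping a cgo call.
      func Pad(input Tensor, padding []int64, mode string) Tensor {
          return ccallPad(input, padding, mode) // hypothetical cgo shim
      }

      // Level 3 holds state, so an nn.module maps to a Go struct
      // whose fields own the parameters and buffers.
      type ConstantPad2d struct {
          Padding []int64
          Value   float64
      }

      func (p *ConstantPad2d) Forward(x Tensor) Tensor {
          return Pad(x, p.Padding, "constant")
      }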

  • Reduce OS threads in ResNet training

    Reduce OS threads in ResNet training

    Before this PR, the OS thread count would reach 700+ at 60 iterations and continue to grow. This PR reduces it to about 190, and it stays stable in later iterations.
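
    For reference, one way to watch the thread count from inside the training loop with only the standard library (note that the threadcreate profile counts threads ever created, which only approximates the number of live threads):

      package main

      import (
          "log"
          "runtime/pprof"
      )

      // logThreads prints how many OS threads the runtime has created so far.
      func logThreads(iter int) {
          log.Printf("iteration %d: %d OS threads created",
              iter, pprof.Lookup("threadcreate").Count())
      }

      func main() {
          logThreads(0) // call this once per training iteration
      }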

  • fix initialize lr set to 0 in ResNet  training

    fix initialize lr set to 0 in ResNet training

    In Go, arithmetic on untyped constants stays integral until a float is involved: 1 / 2 evaluates to the integer 0, so an expression like 0.01 * (1 / 2) is 0, while 0.01 * 1 / 2 is 0.005. An int variable is never implicitly converted to float; it needs an explicit float64() conversion. This is why the initial learning rate came out as 0. https://play.golang.org/p/f39Vw-9p23E
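
    A minimal demonstration of the pitfall (runnable as-is):

      package main

      import "fmt"

      func main() {
          // Untyped integer constants: 1/2 truncates to 0 before the multiply.
          fmt.Println(0.01 * (1 / 2)) // prints 0
          // Keep the arithmetic in float and the fraction survives.
          fmt.Println(0.01 * 1 / 2) // prints 0.005
          // An int variable needs an explicit conversion:
          epoch := 3
          // fmt.Println(0.01 * epoch)        // compile error: 0.01 truncated to int
          fmt.Println(0.01 * float64(epoch)) // prints roughly 0.03
      }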

  • Do we support Embedding LSTM GRU Transformer Attention Layers for NLP?

    Do we support Embedding LSTM GRU Transformer Attention Layers for NLP?

    Hi, I want to use Go to build Torch neural networks and models, but these layers are not found in this project. Could you bring them over from the libtorch implementation into Go? Thanks.

  • Random errors in mnist

    Random errors in mnist

    I run the mnist test command and sometimes encounter errors; the following error does not always occur. How can I resolve this problem?

    go run mnist.go train -epoch 30
    2022/04/07 22:28:37 CUDA is valid
    2022/04/07 22:28:52 Train Epoch: 0, Loss: 0.0039, throughput: 7698.638501 samples/sec
    2022/04/07 22:28:52 Test average loss: 0.0078, Accuracy: 83.32%
    2022/04/07 22:28:59 Train Epoch: 1, Loss: 0.0116, throughput: 9216.783733 samples/sec
    2022/04/07 22:29:00 Test average loss: 0.0064, Accuracy: 86.60%
    2022/04/07 22:29:06 Train Epoch: 2, Loss: 0.0541, throughput: 9619.654450 samples/sec
    2022/04/07 22:29:07 Test average loss: 0.0053, Accuracy: 88.97%
    ......
    2022/04/07 22:32:24 Train Epoch: 16, Loss: 0.0003, throughput: 12521.215569 samples/sec
    2022/04/07 22:32:24 Test average loss: 0.0016, Accuracy: 96.81%
    2022/04/07 22:32:29 Train Epoch: 17, Loss: 0.0015, throughput: 12425.405242 samples/sec
    2022/04/07 22:32:30 Test average loss: 0.0016, Accuracy: 96.92%
    2022/04/07 22:32:34 Train Epoch: 18, Loss: 0.0002, throughput: 13145.095818 samples/sec
    2022/04/07 22:32:35 Test average loss: 0.0015, Accuracy: 97.01%
    fatal error: unexpected signal during runtime execution
    [signal SIGSEGV: segmentation violation code=0x80 addr=0x0 pc=0x7efcd98f2d58]
    
    runtime stack:
    runtime.throw({0x5b77ff?, 0xbed93271bed93271?})
            /usr/local/go/src/runtime/panic.go:992 +0x71
    runtime.sigpanic()
            /usr/local/go/src/runtime/signal_unix.go:802 +0x3a9
    
    goroutine 38 [syscall, locked to thread]:
    runtime.cgocall(0x5568b0, 0xc00014f528)
            /usr/local/go/src/runtime/cgocall.go:157 +0x5c fp=0xc00014f500 sp=0xc00014f4c8 pc=0x42265c
    github.com/wangkuiyi/gotorch._Cfunc_Div(0x7efc501303a0, 0x70226360, 0xc000010610)
            _cgo_gotypes.go:423 +0x4d fp=0xc00014f528 sp=0xc00014f500 pc=0x513bed
    github.com/wangkuiyi/gotorch.Div.func1({0xc03f800000?}, {0x2?}, 0x3f80000000000002?)
            /home/xjun/GOPATH/pkg/mod/github.com/wangkuiyi/[email protected]/tensor_ops.go:97 +0x9b fp=0xc00014f570 sp=0xc00014f528 pc=0x519d7b
    github.com/wangkuiyi/gotorch.Div({0x0?}, {0xc00001c210?})
            /home/xjun/GOPATH/pkg/mod/github.com/wangkuiyi/[email protected]/tensor_ops.go:97 +0x45 fp=0xc00014f5b0 sp=0xc00014f570 pc=0x519ca5
    github.com/wangkuiyi/gotorch.(*Tensor).Div(...)
            /home/xjun/GOPATH/pkg/mod/github.com/wangkuiyi/[email protected]/tensor_ops.go:104
    github.com/wangkuiyi/gotorch/vision/transforms.(*NormalizeTransformer).Run(0xc0000d0000, {0xc0000105e0})
            /home/xjun/GOPATH/pkg/mod/github.com/wangkuiyi/[email protected]/vision/transforms/normalize.go:40 +0x45 fp=0xc00014f5d8 sp=0xc00014f5b0 pc=0x54e605
    runtime.call16(0xc000100690, 0xc0000105f0, 0x0, 0x0, 0x0, 0x10, 0xc00014fb08)
            /usr/local/go/src/runtime/asm_amd64.s:701 +0x49 fp=0xc00014f5f8 sp=0xc00014f5d8 pc=0x47d529
    runtime.reflectcall(0x5a8a80?, 0xc0000105e0?, 0x2?, 0x5b12c3?, 0x0?, 0x12?, 0x5a8a80?)
            <autogenerated>:1 +0x3c fp=0xc00014f638 sp=0xc00014f5f8 pc=0x481a3c
    reflect.Value.call({0x58c8a0?, 0xc0000d0000?, 0x0?}, {0x5ae89c, 0x4}, {0xc0002520a8, 0x1, 0x0?})
            /usr/local/go/src/reflect/value.go:556 +0x845 fp=0xc00014fc28 sp=0xc00014f638 pc=0x49fb65
    reflect.Value.Call({0x58c8a0?, 0xc0000d0000?, 0x0?}, {0xc0002520a8, 0x1, 0x1})
            /usr/local/go/src/reflect/value.go:339 +0xbf fp=0xc00014fca0 sp=0xc00014fc28 pc=0x49f0df
    github.com/wangkuiyi/gotorch/vision/transforms.(*ComposeTransformer).Run(0xc000144200?, {0xc0005c9e60?, 0xc0005c9e38?, 0x4326b1?})
            /home/xjun/GOPATH/pkg/mod/github.com/wangkuiyi/[email protected]/vision/transforms/transforms.go:30 +0x1d8 fp=0xc00014fdb0 sp=0xc00014fca0 pc=0x54ec58
    github.com/wangkuiyi/gotorch/vision/imageloader.(*ImageLoader).collateMiniBatch(0xc000676000, {0xc000144200?, 0x40, 0x40}, {0xc000146000, 0x40, 0x40})
            /home/xjun/GOPATH/pkg/mod/github.com/wangkuiyi/[email protected]/vision/imageloader/imageloader.go:230 +0x1c8 fp=0xc00014feb0 sp=0xc00014fdb0 pc=0x550848
    github.com/wangkuiyi/gotorch/vision/imageloader.(*ImageLoader).samplesToMinibatches(0xc000676000)
            /home/xjun/GOPATH/pkg/mod/github.com/wangkuiyi/[email protected]/vision/imageloader/imageloader.go:182 +0x225 fp=0xc00014ff98 sp=0xc00014feb0 pc=0x550025
    github.com/wangkuiyi/gotorch/vision/imageloader.New.func2()
            /home/xjun/GOPATH/pkg/mod/github.com/wangkuiyi/[email protected]/vision/imageloader/imageloader.go:98 +0x1d fp=0xc00014ffb0 sp=0xc00014ff98 pc=0x54f83d
    github.com/wangkuiyi/gotorch/vision/imageloader.newWorkingThreadGroup.func2()
            /home/xjun/GOPATH/pkg/mod/github.com/wangkuiyi/[email protected]/vision/imageloader/imageloader.go:308 +0x43 fp=0xc00014ffe0 sp=0xc00014ffb0 pc=0x550e63
    runtime.goexit()
            /usr/local/go/src/runtime/asm_amd64.s:1571 +0x1 fp=0xc00014ffe8 sp=0xc00014ffe0 pc=0x47f201
    created by github.com/wangkuiyi/gotorch/vision/imageloader.newWorkingThreadGroup
            /home/xjun/GOPATH/pkg/mod/github.com/wangkuiyi/[email protected]/vision/imageloader/imageloader.go:305 +0x31
    
    goroutine 1 [runnable, locked to thread]:
    github.com/wangkuiyi/gotorch._Cfunc_ItemFloat64(0x70235d70, 0xc000198000)
            _cgo_gotypes.go:654 +0x4d
    github.com/wangkuiyi/gotorch.Tensor.Item.func2({0xc000056db0?}, 0xc000056dd8?)
            /home/xjun/GOPATH/pkg/mod/github.com/wangkuiyi/[email protected]/tensor_ops.go:214 +0x4c
    github.com/wangkuiyi/gotorch.Tensor.Item({0xc000012040?})
            /home/xjun/GOPATH/pkg/mod/github.com/wangkuiyi/[email protected]/tensor_ops.go:214 +0x198
    main.train({0x5b805f, 0x2e}, {0x5b7e42, 0x2d}, 0x320, {0x5b36c9, 0x1b})
            /dev/shm/gotorch_projects/test1/mnist.go:84 +0x432
    main.main()
            /dev/shm/gotorch_projects/test1/mnist.go:52 +0x428
    
    goroutine 37 [runnable, locked to thread]:
    bufio.(*Reader).ReadByte(0xc0005eab40)
            /usr/local/go/src/bufio/bufio.go:262 +0x7a
    compress/flate.(*decompressor).huffSym(0xc000662000, 0xc000662028)
            /usr/local/go/src/compress/flate/inflate.go:719 +0x102
    compress/flate.(*decompressor).huffmanBlock(0x8c7040?)
            /usr/local/go/src/compress/flate/inflate.go:494 +0x45
    compress/flate.(*decompressor).Read(0xc000662000, {0xc000114928, 0x200, 0x4af737?})
            /usr/local/go/src/compress/flate/inflate.go:347 +0x7b
    compress/gzip.(*Reader).Read(0xc00011e580, {0xc000114928, 0x200, 0x200})
            /usr/local/go/src/compress/gzip/gunzip.go:251 +0x7a
    io.ReadAtLeast({0x5e5d78, 0xc00011e580}, {0xc000114928, 0x200, 0x200}, 0x200)
            /usr/local/go/src/io/io.go:331 +0x9a
    io.ReadFull(...)
            /usr/local/go/src/io/io.go:350
    archive/tar.(*Reader).readHeader(0xc000114900)
            /usr/local/go/src/archive/tar/reader.go:344 +0x51
    archive/tar.(*Reader).next(0xc000114900)
            /usr/local/go/src/archive/tar/reader.go:76 +0x106
    archive/tar.(*Reader).Next(0xc000114900)
            /usr/local/go/src/archive/tar/reader.go:51 +0x31
    github.com/wangkuiyi/gotorch/vision/imageloader.(*ImageLoader).readSamples(0xc000676000)
            /home/xjun/GOPATH/pkg/mod/github.com/wangkuiyi/[email protected]/vision/imageloader/imageloader.go:140 +0x8c
    github.com/wangkuiyi/gotorch/vision/imageloader.New.func1()
            /home/xjun/GOPATH/pkg/mod/github.com/wangkuiyi/[email protected]/vision/imageloader/imageloader.go:97 +0x1d
    github.com/wangkuiyi/gotorch/vision/imageloader.newWorkingThreadGroup.func1()
            /home/xjun/GOPATH/pkg/mod/github.com/wangkuiyi/[email protected]/vision/imageloader/imageloader.go:302 +0x43
    created by github.com/wangkuiyi/gotorch/vision/imageloader.newWorkingThreadGroup
            /home/xjun/GOPATH/pkg/mod/github.com/wangkuiyi/[email protected]/vision/imageloader/imageloader.go:299 +0x25
    
    goroutine 98 [chan send]:
    github.com/wangkuiyi/gotorch/vision/imageloader.(*ImageLoader).shuffleSamples(0xc000676120)
            /home/xjun/GOPATH/pkg/mod/github.com/wangkuiyi/[email protected]/vision/imageloader/imageloader.go:209 +0x245
    created by github.com/wangkuiyi/gotorch/vision/imageloader.New
            /home/xjun/GOPATH/pkg/mod/github.com/wangkuiyi/[email protected]/vision/imageloader/imageloader.go:88 +0x3e5
    
    goroutine 40 [chan send, locked to thread]:
    github.com/wangkuiyi/gotorch/vision/imageloader.(*ImageLoader).readSamples(0xc000676120)
            /home/xjun/GOPATH/pkg/mod/github.com/wangkuiyi/[email protected]/vision/imageloader/imageloader.go:162 +0x24c
    github.com/wangkuiyi/gotorch/vision/imageloader.New.func1()
            /home/xjun/GOPATH/pkg/mod/github.com/wangkuiyi/[email protected]/vision/imageloader/imageloader.go:97 +0x1d
    github.com/wangkuiyi/gotorch/vision/imageloader.newWorkingThreadGroup.func1()
            /home/xjun/GOPATH/pkg/mod/github.com/wangkuiyi/[email protected]/vision/imageloader/imageloader.go:302 +0x43
    created by github.com/wangkuiyi/gotorch/vision/imageloader.newWorkingThreadGroup
            /home/xjun/GOPATH/pkg/mod/github.com/wangkuiyi/[email protected]/vision/imageloader/imageloader.go:299 +0x25
    
    goroutine 41 [chan send, locked to thread]:
    github.com/wangkuiyi/gotorch/vision/imageloader.(*ImageLoader).samplesToMinibatches(0xc000676120)
            /home/xjun/GOPATH/pkg/mod/github.com/wangkuiyi/[email protected]/vision/imageloader/imageloader.go:182 +0x24e
    github.com/wangkuiyi/gotorch/vision/imageloader.New.func2()
            /home/xjun/GOPATH/pkg/mod/github.com/wangkuiyi/[email protected]/vision/imageloader/imageloader.go:98 +0x1d
    github.com/wangkuiyi/gotorch/vision/imageloader.newWorkingThreadGroup.func2()
            /home/xjun/GOPATH/pkg/mod/github.com/wangkuiyi/[email protected]/vision/imageloader/imageloader.go:308 +0x43
    created by github.com/wangkuiyi/gotorch/vision/imageloader.newWorkingThreadGroup
            /home/xjun/GOPATH/pkg/mod/github.com/wangkuiyi/[email protected]/vision/imageloader/imageloader.go:305 +0x31
    
    goroutine 21 [chan receive, locked to thread]:
    github.com/wangkuiyi/gotorch/vision/imageloader.newWorkingThreadGroup.func1()
            /home/xjun/GOPATH/pkg/mod/github.com/wangkuiyi/[email protected]/vision/imageloader/imageloader.go:301 +0x52
    created by github.com/wangkuiyi/gotorch/vision/imageloader.newWorkingThreadGroup
            /home/xjun/GOPATH/pkg/mod/github.com/wangkuiyi/[email protected]/vision/imageloader/imageloader.go:299 +0x25
    
    goroutine 22 [chan receive, locked to thread]:
    github.com/wangkuiyi/gotorch/vision/imageloader.newWorkingThreadGroup.func2()
            /home/xjun/GOPATH/pkg/mod/github.com/wangkuiyi/[email protected]/vision/imageloader/imageloader.go:307 +0x52
    created by github.com/wangkuiyi/gotorch/vision/imageloader.newWorkingThreadGroup
            /home/xjun/GOPATH/pkg/mod/github.com/wangkuiyi/[email protected]/vision/imageloader/imageloader.go:305 +0x31
    
    goroutine 33 [chan send]:
    github.com/wangkuiyi/gotorch/vision/imageloader.(*ImageLoader).shuffleSamples(0xc000676000)
            /home/xjun/GOPATH/pkg/mod/github.com/wangkuiyi/[email protected]/vision/imageloader/imageloader.go:209 +0x245
    created by github.com/wangkuiyi/gotorch/vision/imageloader.New
            /home/xjun/GOPATH/pkg/mod/github.com/wangkuiyi/[email protected]/vision/imageloader/imageloader.go:88 +0x3e5
    exit status 2
    
  • Unable to install go torch, including windows and ubuntu

    Unable to install go torch, including windows and ubuntu

    gcc: error: /root/go/pkg/mod/github.com/wangkuiyi/[email protected]/cgotorch/libtorch/lib: No such file or directory

    How did you install it?

  • add image-recordio-gen cmd and RecordIO reader

    add image-recordio-gen cmd and RecordIO reader

    The image-recordio-gen command converts an image folder with a label txt file into the RecordIO file format.

    Let's take the MNIST dataset as an example. We could download it from https://github.com/myleott/mnist_png.git.

    The dataset contains two directories: training and testing. We need to make a label file, which maps a class string to an int index. The following is the label file for the MNIST dataset.

    0
    1
    2
    3
    4
    5
    6
    7
    8
    9
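
    For illustration, here is how such a label file could be read in Go, where the line number is the class index (a sketch; the real command may parse it differently):

      // loadLabels maps each class string to its line index.
      func loadLabels(path string) (map[string]int, error) {
          f, err := os.Open(path)
          if err != nil {
              return nil, err
          }
          defer f.Close()
          labels := map[string]int{}
          s := bufio.NewScanner(f)
          for i := 0; s.Scan(); i++ {
              labels[s.Text()] = i
          }
          return labels, s.Err()
      }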
    

    Then, we could run the image-recordio-gen command:

    $GOPATH/bin/image-recordio-gen -label=$MNIST/label.txt -dataset=$MNIST/training -output=$MNIST/train_record -recordsPerShard=1500
    

    We could find the RecordIO shard files in the train_record directory:

    data-00000
    data-00001
    ...
    ...
    
  • add launch utility

    add launch utility

    This PR depends on #375

    First, install gotorch

    go install ./...
    

    Then, use the launch tool to run 2 processes on a single node:

    $GOPATH/bin/launch -nprocPerNode=2 -masterAddr=127.0.0.1 -masterPort=11111 -trainingCmd="$GOPATH/bin/allreduce"
    

    It will run the allreduce example. Then, 0.log and 1.log will be created.

    Check the 0.log:

    cat 0.log
    2020/11/02 17:54:47  2  4
     6  8
    [ CPUFloatType{2,2} ]
    

    You will find the values are allreduced correctly.
