Cog: Standard machine learning models

Define your models in a standard format, store them in a central place, run them anywhere.

  • Standard interface for a model. Define all your models with Cog, in a standard format. It's not just the graph – it also includes code, pre-/post-processing, data types, Python dependencies, system dependencies – everything.
  • Store models in a central place. No more hunting for the right model file on S3. Cog models are in one place with a content-addressable ID.
  • Run models anywhere: Cog models run anywhere Docker runs: your laptop, Kubernetes, cloud platforms, batch processing pipelines, etc. And, you can use adapters to convert the models to on-device formats.

Cog does a few things to make your life easier:

  • Automatic Docker image. Define your environment with a simple format, and Cog will generate CPU and GPU Docker images using best practices and efficient base images.
  • Automatic HTTP service. Cog will generate an HTTP service from the definition of your model, so you don't need to write a Flask server in the right way.
  • No more CUDA hell. Cog knows which CUDA/cuDNN/PyTorch/TensorFlow/Python combos are compatible and will pick the right versions for you.

How does it work?

  1. Define how inferences are run on your model:
import cog
from pathlib import Path
import torch

class ColorizationModel(cog.Model):
    def setup(self):
        self.model = torch.load("./weights.pth")

    @cog.input("input", type=Path, help="Grayscale input image")
    def run(self, input):
        # ... pre-processing ...
        output = self.model(processed_input)
        # ... post-processing ...
        return processed_output
  2. Define the environment it runs in with cog.yaml:
model: "model.py:ColorizationModel"
environment:
  python_version: "3.8"
  python_requirements: "requirements.txt"
  system_packages:
   - libgl1-mesa-glx
   - libglib2.0-0
  3. Push it to a repository and build it:
$ cog build
--> Uploading '.' to repository http://10.1.2.3/colorization... done
--> Building CPU Docker image... done
--> Building GPU Docker image... done
--> Built model b6a2f8a2d2ff

This has:

  • Created a ZIP file containing your code + weights + environment definition, and assigned it a content-addressable SHA256 ID.
  • Pushed this ZIP file up to a central repository so it never gets lost and can be run by anyone.
  • Built two Docker images (one for CPU and one for GPU) that contain the model in a reproducible environment, with the correct versions of Python, your dependencies, CUDA, etc.

Now, anyone who has access to this repository can run inferences on this model:

$ cog infer b6a2f8a2d2ff -i @input.png -o @output.png
--> Pulling GPU Docker image for b6a2f8a2d2ff... done
--> Running inference... done
--> Written output to output.png

It is also just a Docker image, so you can run it as an HTTP service wherever Docker runs:

$ cog show b6a2f8a2d2ff 
...
Docker image (GPU):  registry.hooli.net/colorization:b6a2f8a2d2ff-gpu
Docker image (CPU):  registry.hooli.net/colorization:b6a2f8a2d2ff-cpu

$ docker run -d -p 8000:8000 --gpus all registry.hooli.net/colorization:b6a2f8a2d2ff-gpu

$ curl http://localhost:8000/infer -F input=@input.png
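
For reference, here is the same request from Python, as a minimal sketch: it simply mirrors the curl call above, and the assumption that the response body is the output image (rather than, say, JSON) is mine, not something stated in this document.

import requests

# Post the input file to the model's HTTP service, mirroring `curl -F input=@input.png`.
with open("input.png", "rb") as f:
    resp = requests.post("http://localhost:8000/infer", files={"input": f})
resp.raise_for_status()

# Assumption: the body is the output image itself; save it like `-o output.png` above.
with open("output.png", "wb") as out:
    out.write(resp.content)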

Why are we building this?

It's really hard for researchers to ship machine learning models to production. Dockerfiles, pre-/post-processing, API servers, CUDA versions. More often than not the researcher has to sit down with an engineer to get the damn thing deployed.

By defining a standard model, all that complexity is wrapped up behind a standard interface. Other systems in your machine learning stack just need to support Cog models and they'll be able to run anything a researcher dreams up.

At Spotify, we built a system like this for deploying audio deep learning models. We realized this was a repeating pattern: Uber, Coinbase, and others have built similar systems. So, we're making an open source version.

The hard part is defining a model interface that works for everyone. We're releasing this early so we can get feedback on the design and find collaborators. Hit us up if you're interested in using it or want to collaborate with us. We're on Discord, or you can email us at [email protected].

Install

No binaries yet! You'll need Go 1.16, then run:

make install

This installs the cog binary to $GOPATH/bin/cog.

Comments
  • No CUDA runtime is found / Found no NVIDIA driver on your system.

    No CUDA runtime is found / Found no NVIDIA driver on your system.

    When running a GPU Cog model:

    $ sudo cog predict r8.im/allenhung1025/looptest@sha256:f5cd715e99046e0513fe2b4034e8f7d8c102525b02f49efb52b05f46fcb9ea83
    Starting Docker image r8.im/allenhung1025/looptest@sha256:f5cd715e99046e0513fe2b4034e8f7d8c102525b02f49efb52b05f46fcb9ea83 and running setup()...
    No CUDA runtime is found, using CUDA_HOME='/usr/local/cuda'
    Traceback (most recent call last):
    ...
    AssertionError: 
    Found no NVIDIA driver on your system. Please check that you
    have an NVIDIA GPU and installed a driver from
    http://www.nvidia.com/Download/index.aspx
    ⅹ Failed to get container status: exit status 1
    

    But CUDA is fine:

    $ cd /usr/local/cuda/samples/1_Utilities/deviceQuery
    $ sudo make
    $ ./deviceQuery
    Detected 1 CUDA Capable device(s)
    
    Device 0: "NVIDIA GeForce RTX 2080 Ti"
      CUDA Driver Version / Runtime Version          11.4 / 11.2
      CUDA Capability Major/Minor version number:    7.5
      Total amount of global memory:                 11016 MBytes (11551440896 bytes)
      (68) Multiprocessors, ( 64) CUDA Cores/MP:     4352 CUDA Cores
    ...
    Result = PASS
    
  • cog: add support for "environment" to set environment variables in the generated Dockerfile

    cog: add support for "environment" to set environment variables in the generated Dockerfile

    ℹ️ Docs preview: https://github.com/hangtwenty/cog/blob/cog-environment-variables/docs/yaml.md#environment


    Change 1/2: support for environment (ENV)

    Closes issue #291, "Environment variables in cog.yaml." In Dockerfile parlance, it's support for ENV. This supersedes my previous PR, #361.

    This enables adding environment to build in cog.yaml:

    build:
      environment:
        - EXAMPLE=/src/example
        - DEBUG=1
    

    And those become ENV directives in the Dockerfile.

    Change 2/2: A single default environment variable to configure caching for PyTorch and some other libraries (XDG_CACHE_HOME)

    One of the libraries used most often with cog is PyTorch. You don't have to configure anything for it, and now the caching should "just work" in most cases. Likewise for other popular libraries.

    Variables you DON'T need to set

    You don't need to set any of the following because the sane default for $XDG_CACHE_HOME will take care of it. (Because it's part of a standard, several libraries use it for their defaults.)

    • pytorch: You don't need to set PYTORCH_TRANSFORMERS_CACHE (docs)
    • Hugging Face: You don't need to set TRANSFORMERS_CACHE (docs)
      • You shouldn't have to set PYTORCH_PRETRAINED_BERT_CACHE (docs)
    • pip: no variables needed! Before this PR, cog already configured the pip/apt-get cache directories

    You don't need to set TORCH_HOME

    If TORCH_HOME is set, that'll take precedence over XDG_CACHE_HOME — see pytorch docs, huggingface docs. So, if you set TORCH_HOME to something else, just be sure to set it within /src/, like /src/.cache-torch. That way, it will get cached across runs.
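
    To illustrate why this default helps (a hedged sketch, not part of the PR; the model and torch.hub call below are just placeholders), a predictor that downloads weights in setup() will hit the cache on later runs, because torch.hub resolves its directory from XDG_CACHE_HOME unless TORCH_HOME overrides it:

    import torch

    class Predictor:
        def setup(self):
            # torch.hub stores downloads under $TORCH_HOME, which defaults to
            # $XDG_CACHE_HOME/torch -- so with the default set by this PR the
            # weights are fetched once and reused across runs.
            self.model = torch.hub.load("pytorch/vision", "resnet18", pretrained=True)
            self.model.eval()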

  • Refactor `Makefile`

    Refactor `Makefile`

    This PR implements some changes I needed to get a Homebrew formula working (see https://github.com/replicate/cog/issues/822), specifically:

    • Defining constants for executables like PYTHON. When Homebrew installs Python 3, my understanding is that it links to python3, keeping the default python (2.7) executable accessible in the default PATH. Because Python 3 is a prerequisite for building from source, we need a way to override this when running make.
    • Adding an install target. In this PR, it's configured to be overridden by conventional PREFIX / BINDIR constants. It also supports staged installs using a DESTDIR constant.

    This PR includes some other changes to keep the rest of the Makefile more consistent:

    • Adding an uninstall target, to keep symmetry with install.
    • Defining and using constants for GO and other executables.
    • Adding a new default all target that builds cog as a prerequisite.
    • Adding go clean to the clean target.

    Finally, I noticed a TODO comment from @bfirsh for the test-integration target:

    TODO(bfirsh): use local copy of cog so we don't have to install globally

    With the new cog target, we can run integration tests against local executables without installing them by prepending PWD to PATH. (afbd925bdb2458291493dc6b939d37b9a7d018e5)

  • Include timestamps in Redis responses

    Include timestamps in Redis responses

    Timestamps are useful for calculating how long a prediction has taken to run. There are other ways we could get this data (using HTTP streams instead of queueing, and doing accurate timing from the outside), but this is probably the simplest to get started with and arguably the simplest for all sorts of users in the future.

    Signed-off-by: Dominic Baggott [email protected]

    Resolves #589
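
    For illustration only (the key names below are hypothetical, not the actual fields added by this PR), a consumer can derive the prediction duration directly from timestamps in the response:

    from datetime import datetime

    # Hypothetical response payload with ISO 8601 timestamps.
    response = {
        "status": "success",
        "started_at": "2022-06-01T12:00:01.500000+00:00",
        "completed_at": "2022-06-01T12:00:04.250000+00:00",
    }

    started = datetime.fromisoformat(response["started_at"])
    completed = datetime.fromisoformat(response["completed_at"])
    print(f"prediction took {(completed - started).total_seconds():.2f}s")  # 2.75s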

  • build error

    build error

    When I build a model with cog, I always get this error and the build fails. How can I handle this?

     => ERROR [stage-0  3/12] RUN curl https://pyenv.run | bash &&  git clone https://github.com/momo-lab/pyenv-install-latest.git "$(pyenv root)"/plugin  125.3s
    ------
     > [stage-0  3/12] RUN curl https://pyenv.run | bash &&         git clone https://github.com/momo-lab/pyenv-install-latest.git "$(pyenv root)"/plugins/pyenv-install-latest &&        pyenv install-latest "3.8.2" &&         pyenv global $(pyenv install-latest --print "3.8.2"):
    #8 1.093   % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
    #8 1.093                                  Dload  Upload   Total   Spent    Left  Speed
    100   270  100   270    0     0    102      0  0:00:02  0:00:02 --:--:--   102
    #8 3.785 curl: (7) Failed to connect to raw.githubusercontent.com port 443: Connection refused
    #8 3.788 /bin/sh: 1: pyenv: not found
    #8 3.811 Cloning into '/plugins/pyenv-install-latest'...
    #8 124.1 fatal: unable to access 'https://github.com/momo-lab/pyenv-install-latest.git/': gnutls_handshake() failed: The TLS connection was non-properly terminated.
    ------
    executor failed running [/bin/sh -c curl https://pyenv.run | bash &&    git clone https://github.com/momo-lab/pyenv-install-latest.git "$(pyenv root)"/plugins/pyenv-install-latest &&        pyenv install-latest "3.8.2" &&         pyenv global $(pyenv install-latest --print "3.8.2")]: exit code: 128
    ⅹ Failed to build Docker image: exit status 1
    
    
  • Initial working version using Pydantic for type annotations

    Initial working version using Pydantic for type annotations

    The exploration in #343 had served its purpose and was getting very messy, so I thought I would create a new pull request for the real implementation.

    This is an implementation of #193, #205, and #259 using Pydantic for the type system, FastAPI for the model server, and OpenAPI for the schema.

    Models now look a bit like this:

    from cog import Predictor, Input, Path
    import torch
    
    class ColorizationPredictor(Predictor):
        def setup(self):
            self.model = torch.load("./weights.pth")
    
        def predict(self,
              image: Path = Input(title="Grayscale input image")
        ) -> Path:
            processed_input = preprocess(image)
            output = self.model(processed_input)
            return postprocess(output)
    

    The HTTP model server is a work in progress and will be finalized as part of future work.
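
    To make the Pydantic-to-OpenAPI connection concrete, here is a minimal sketch (not Cog's actual code) of how a typed input like the one above can be turned into a JSON Schema with plain Pydantic -- roughly the mechanism FastAPI uses when it builds the OpenAPI document:

    from pathlib import Path
    import pydantic

    # Build a Pydantic model equivalent to the predict() signature above,
    # then emit its JSON Schema.
    Input = pydantic.create_model(
        "Input",
        image=(Path, pydantic.Field(..., title="Grayscale input image")),
    )

    print(Input.schema_json(indent=2))
    # Prints roughly:
    # {"title": "Input", "type": "object",
    #  "properties": {"image": {"title": "Grayscale input image",
    #                           "type": "string", "format": "path"}},
    #  "required": ["image"]}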

    The Redis queue worker remains in place and its API is unchanged. As part of future work we will implement an AMQP API with a better design.

    Commits are significant and this should be reviewed commit by commit. This is an intentionally large and long-running branch -- main is currently used as the documentation for Replicate.

    Todo

    • [x] cog predict with new API
    • [x] Input ordering?
    • [x] Ensure HTTP server is single threaded.
    • [x] HTTP URLs for input and output
    • [x] Add choices option to Input to generate an Enum.
    • [x] cog init template
    • [x] Update getting started docs
    • [x] Update reference docs
    • [x] Rename Predictor to BasePredictor so we can do from cog import BasePredictor, Input, Path without colliding with a Predictor subclass.
    • [x] Better errors if you get output type wrong.
    • [x] ~Reliably set Cog version. In development, the version is set to 0.0.1 which is pretty useless. It should probably be hardcoded somewhere, and the release process needs updating. We need this to be able to switch between old and new Cog API in Replicate.~ We're just going to support version dev everywhere which means "latest".
    • [x] Update examples https://github.com/replicate/cog-examples/pull/8
    • [x] Update the OpenAPI Outputs to be a consistent place. (Simple outputs currently end up in Request object schema)
    • [x] Test returning object as output
    • [x] Test arbitrary JSON objects in output
    • [x] Is go.sum up to date?

    Future (Pydantic + OpenAPI)

    • [ ] Tell users to upgrade if they use @cog.input() with new Cog.
    • [ ] Document nouns to make sure we're all on the same page. "Input"/"Inputs"?
    • [ ] Document how to migrate @input() to Pydantic type annotations.
    • [ ] Go through and fix any TODOs
    • [ ] Make input validation errors work for cog predict in a simple way until we lock down HTTP API
    • [ ] Make input errors work for Redis API & Replicate
    • [ ] Integrate with Replicate - https://github.com/replicate/replicate-web/pull/907
    • [ ] Force users to pick an output type -- throw an error if it's not set. (Couldn't just be Any, but they need to be explicit!)
    • [ ] Clean up temporary files -- both input and output
    • [ ] Do we want to make cog predict with an existing image backwards compatible? ('cos there are old models on Replicate)
    • [ ] What happens if you use pathlib.Path as an input type? Does it work? Should we throw an error? (This might happen if you import wrong package / collide imports)
    • [ ] What happens if you return pathlib.Path from a predict function? Does this work? Should it throw an error?
    • [ ] Should we have a fixed set of supported inputs/outputs somehow?
    • [ ] Do we want to force users to put extra properties in additionalProperties? Should we even allow extra properties to cog.Input()?
    • [ ] Merge examples PR https://github.com/replicate/cog-examples/pull/8

    Far-flung future

    • Vendor Pydantic and FastAPI
    • Add back support for PyTorch and Tensorflow tensors as output. This was removed because it was fiddly to get working with Pydantic. You now need to convert them to numpy arrays.
    • Add back timing information to server.
    • Better error if you forget an output type
      pydantic.error_wrappers.ValidationError: 1 validation error for Prediction
      output
        unexpected value; permitted: None (type=value_error.const; given=hello foo; permitted=(None,))
      
    • Multiple file outputs for cog predict. It currently just outputs the plain JSON if there isn't a single file output.
    • Logging, as described in comment in #343
    • Review request/response objects and make sure we're happy with them.
      • This might be an opportunity to rename the statuses and ensure they're consistent ("success" vs "failed").
      • Do we want the input type to have a second input level so we can have top-level request options?
    • Do we need separate URLs for input/output schema, or can they be reliably fetched from OpenAPI?
    • [ ] Finalize & document HTTP API
    • [ ] Do we want to version the prediction API, or let the client handle it? https://github.com/replicate/cog/issues/94
    • [ ] Test arbitrary objects in input
    • [ ] What if people run new models from a directory of code with old Cog? What happens if we do that? Do we care? #286
    • [ ] What if people run new models from an image with old Cog? Maybe we throw a simple error from /predict?

    Based on (& blocked by)

    • #383
    • #385

    Refs

    Closes #58 Closes #113 Closes #193 Closes #205 Closes #212 Closes #246 Closes #259 Closes #262 Closes #263 Closes #304 Closes #306 Closes #327 Closes #328

  • Throw error if invalid keys exist in cog.yaml

    Throw error if invalid keys exist in cog.yaml

    Fix for:

    Throw error if invalid keys exist in cog.yaml #34

    Approaches:

    1) Use hard-coded key pairs to test the yaml file

    Create a list of key pairs or a struct to validate the cog.yaml file

    Pros:

    • Simple & straightforward approach
    • Easy to test and validate it works

    Cons:

    • Hard to implement complex logic
    • Needs more code changes
    • Need to learn Go and have knowledge of internal systems to make changes
    • Hard to keep track of & validate the different versions of the cog.yaml file

    2) Use a JSON Schema to validate

    This approach uses a JSON Schema to validate the cog.yaml file.

    Pros:

    • Uses an industry-standard approach to validate the yaml file
    • Most cases will need a simple change to the JSON schema
    • No need to learn Go to understand the various fields in cog.yaml
    • The JSON Schema serves as the documentation.

    Cons:

    • Dependencies on external libraries and validation mechanism
    • Would be hard to understand for people not familiar with JSON Schemas
    • Requires editing a large JSON file to make changes

    Approach Selected: "Using JSONSchema"

    Decided to implement the JSON Schema approach based on the pros & cons, especially keeping the end user in mind and the open-source nature of the project.

    The approach will not enforce new rules but only validates that the keys present in the yaml are defined in the schema.
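
    For illustration only -- the actual implementation is in Go, and the real schema ships with cog -- the idea can be sketched with a YAML loader and a JSON Schema validator. The schema fragment below is made up, and the error wording differs from cog's output:

    import yaml
    from jsonschema import Draft7Validator

    # Made-up fragment of a cog.yaml schema: known keys are declared and
    # "additionalProperties": false rejects anything else, without enforcing
    # new rules on otherwise valid files.
    SCHEMA = {
        "type": "object",
        "additionalProperties": False,
        "properties": {
            "build": {
                "type": "object",
                "additionalProperties": False,
                "properties": {
                    "gpu": {"type": "boolean"},
                    "python_version": {"type": ["string", "number"]},
                    "python_packages": {"type": "array", "items": {"type": "string"}},
                    "system_packages": {"type": "array", "items": {"type": "string"}},
                },
            },
            "predict": {"type": "string"},
        },
    }

    def validate(path="cog.yaml"):
        config = yaml.safe_load(open(path)) or {}
        for error in Draft7Validator(SCHEMA).iter_errors(config):
            # e.g. "Additional properties are not allowed ('buildd' was unexpected)"
            print("ⅹ", error.message)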

    How to generate the initial JSON schema

    • Create a yaml file containing all the fields present in cog.yaml
    • Convert the yaml to JSON using this tool: https://onlineyamltools.com/convert-yaml-to-json
    • Then generate the JSON Schema using this tool: https://www.jsonschema.net/home

    The schema required a few changes:

    • Remove the required field
    • Allow both number and string for python_version field

    Validate command

    The PR also includes a validate command to verify any cog.yaml in the current directory.

    Usage:

    cog validate
    

    Examples:

    Empty cog.yaml

    No error will be thrown, and cog will use the default config generated by the code below:

    func DefaultConfig() *Config {
    	return &Config{
    		Build: &Build{
    			GPU:           false,
    			PythonVersion: "3.8",
    		},
    	}
    }
    
    

    These are all valid cog.yaml files:

    • No File
    • Empty file
    • build:

    Bad yaml file example & errors

    cog.yaml:

    buildd:
    

    Output: ⅹ (root) Additional property buildd is not allowed

    cog.yaml:

    buildd:
      python_version: "3.8"
      python_packages:
        - "torch=1.8.0"
    

    Output: ⅹ (root) Additional property buildd is not allowed

    cog.yaml:

    build:
      python_versions: "3.8"
      python_packages:
        - "torch=1.8.0"
    

    Output:

    ⅹ build Additional property python_versions is not allowed

    Signed-off-by: Shashank Agarwal 2 [email protected]

  • Keeping the homebrew formula up to date

    Keeping the homebrew formula up to date

    brew install cog is a thing now! See https://github.com/replicate/cog/pull/849 and https://github.com/Homebrew/homebrew-core/pull/117929

    It looks like it's pinned at 0.5.1. How should we go about keeping that up to date as we release new versions of Cog? My guess at a couple of options:

    • We bump that number whenever we do a release. Does that mean opening a PR on homebrew-core each time? 😬
    • We point it at a tarball off the main branch, then promise ourselves to keep main stable. Then we'd never have to update the formula. Too yolo maybe?
  • Design & document HTTP API

    Design & document HTTP API

    This is a work-in-progress implementation of #413. I was considering creating an issue to discuss the design of the HTTP API, but I figure a pull request and some documentation might be the most productive medium for doing this.

    The text of the documentation is incomplete and might be partial sentences. Mainly looking here for feedback on the API, then we can complete all the sentences.

    I think it is mostly there, but a few design questions:

    1. Do we like our request and response objects?
    2. Do we like our status names? (Maybe failure instead of failed? Is there some prior art here we can build upon instead of inventing something new?)
    3. #428
    4. Is this compatible with #437 and #443?
    5. File handling is a work in progress and I will create a separate piece of documentation about that.
    6. Is this compatible with a potential future async version?
    7. Do we want to version it? #94
  • Show a replicate.com link after pushing to r8.im

    Show a replicate.com link after pushing to r8.im

    Show a message after a successful push. If the push was to the Replicate registry then also show a link to the model page on replicate.com.

    [Screenshot: the message shown after a successful push, including a replicate.com link]

    (It's using localhost:8100 as the replicate registry in development – in the wild that'd be matching against and replacing r8.im)

    The path not taken

    I talked to @bfirsh about displaying a message from replicate.com rather than hardcoding this into Cog. Unfortunately, there's no existing call made to replicate.com after the docker push command has completed, and there's no way I know of to make docker push display more info by feeding it a suitable response from the server.

    So, although it's much harder to update this when it's in Cog (because rolling out new code changes requires people to update their version), this seems like the right place to have it for now.

    Fixes #210

  • Proposed design for public AMQP queueing API

    Proposed design for public AMQP queueing API

    Context

    We currently have a private queue-based redis API that replicate.com uses to talk to Cog models, and we want to turn that into a well-defined public API that uses standard protocols (in this case AMQP).

    Requirements

    • Run predictions from JSON inputs in a request queue
    • Put JSON output of prediction on a response queue specified in request
    • Read input files from a URL (HTTP/S3/GCS) and write output files to a particular location in an S3/GCS bucket
    • Record log output of prediction, in some way that can be read as the prediction is running
    • Put in-progress outputs from the model onto the response queue, as the model is running
    • Validate inputs and outputs based on a JSON Schema
    • Include metrics about the run in the response: model start-up time, request processing time, and system metrics like CPU, RAM, and GPU memory used.
    • Include errors and logs in the response if the prediction fails.
    • Cancel a running prediction.

    Strawman design

    At a high-level, the AMQP API uses the same request and response objects as the HTTP API, defined with an OpenAPI schema. But, instead of HTTP requests and responses, these objects are sent over AMQP request and response queues.

    Requests

    • The message body is the same format as the HTTP API request body being implemented in #378. It contains the input and any other metadata.
    • Message body encoded as JSON (content_type property set to application/json).
    • The response queue is specified with the reply_to property on the message.
    • Files are represented as URIs, prefixed with either file://, http://, https://, s3://, or gs://.
    • Example request body:
    {
      "input": {
        "image": "s3://hooli-input-bucket/hotdog.jpg",
        "disentanglement_threshold": 5
      },
      "output_file_prefix": "s3://hooli-output-bucket/5245/"
    }
    

    Responses

    • The message body is the same format as the HTTP API response body. It contains the status, output, logs, metrics, and other metadata.
    • Files are represented as URIs, like in the request. The model will upload them to the output_file_prefix defined in the request. Credentials are provided at runtime via environment variables.
    • The response will be sent to the reply_to queue specified in the request.
    • Message body encoded as JSON (content_type property set to application/json).
    • Includes any message IDs present in the request (message_id, correlation_id properties).
    • Includes timing information and other performance metrics (setup time, prediction time, CPU time, memory usage, etc).
    • Includes logs. For each log message, or at some log-update frequency, this message will be sent multiple times, with status: "processing".
    • For models that return in-progress output as they are running, this message will be sent multiple times, with status: "processing" and the "output" key set to the output.
    • Example response body:
    {
      "status": "success",
      "output": "s3://hooli-output-bucket/5245/output.jpg",
      "metrics": {
        "setup_time": <time>,
        "prediction_time": <time>
      },
      "logs": <array of strings, one per line>
    }
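
    To make the message flow concrete, here is a rough worker sketch using pika (illustration only, not part of this design: the queue name, the run_prediction() call, and the ack handling are placeholders). It consumes a request, runs the prediction, and publishes the response object to the reply_to queue:

    import json
    import pika

    def handle_request(channel, method, properties, body):
        request = json.loads(body)                 # same shape as the HTTP request body
        output = run_prediction(request["input"])  # placeholder for the actual model call
        response = {"status": "success", "output": output}
        channel.basic_publish(
            exchange="",
            routing_key=properties.reply_to,       # response queue named in the request
            properties=pika.BasicProperties(
                content_type="application/json",
                correlation_id=properties.correlation_id,  # echo message IDs back
            ),
            body=json.dumps(response),
        )
        channel.basic_ack(delivery_tag=method.delivery_tag)

    connection = pika.BlockingConnection(pika.ConnectionParameters("localhost"))
    channel = connection.channel()
    channel.basic_consume(queue="cog-requests", on_message_callback=handle_request)
    channel.start_consuming()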
    

    Schema

    The AMQP API is specified with a subset of the OpenAPI/JSON Schema used for the HTTP API.

    It is available both as an org.cogmodel.openapi_schema label on the image, and at /openapi.json in the HTTP API.

    The relevant subset looks like this:

    {
      "components": {
        "schemas": {
          "Input": {
            "title": "Input",
            "required": ["image", "disentanglement_threshold"],
            "type": "object",
            "properties": {
              "image": {
                "description": "Image to manipulate",
                "type": "string",
                "format": "uri",
                "x-order": 0
              },
              "disentanglement_threshold": {
                "type": "number",
                "x-order": 1
              }
            }
          },
          "Output": {
            "title": "Output",
            "type": "string",
            "format": "uri"
          }
        }
      }
    }
    
    

    Big differences between old and new API

    • New system would use a well-defined, public AMQP API to run predictions instead of the current system of redis queues. Rationale: we want to use standard tools and make it easy to run predictions in other workflows.
    • Old system had separate queues for responses, logs, and timing info, new system puts all of that info in the response queue. Rationale: easier to keep track of 1 queue?

    Todo

    • Cancel running predictions
    • Error handling
    • Pass credentials for file downloads/uploads with a request for scoping
    • Backwards compatibility on Replicate

    Some references worth reading:

    Edited by @preeth1, @bfirsh, ... (maintainers -- add your name here! consider this a wiki.)

  • Bump github.com/mattn/go-isatty from 0.0.16 to 0.0.17

    Bump github.com/mattn/go-isatty from 0.0.16 to 0.0.17

    Bumps github.com/mattn/go-isatty from 0.0.16 to 0.0.17.

    Commits

    Dependabot compatibility score

    Dependabot will resolve any conflicts with this PR as long as you don't alter it yourself. You can also trigger a rebase manually by commenting @dependabot rebase.


    Dependabot commands and options

    You can trigger Dependabot actions by commenting on this PR:

    • @dependabot rebase will rebase this PR
    • @dependabot recreate will recreate this PR, overwriting any edits that have been made to it
    • @dependabot merge will merge this PR after your CI passes on it
    • @dependabot squash and merge will squash and merge this PR after your CI passes on it
    • @dependabot cancel merge will cancel a previously requested merge and block automerging
    • @dependabot reopen will reopen this PR if it is closed
    • @dependabot close will close this PR and stop Dependabot recreating it. You can achieve the same result by closing it manually
    • @dependabot ignore this major version will close this PR and stop Dependabot creating any more for this major version (unless you reopen the PR or upgrade to it yourself)
    • @dependabot ignore this minor version will close this PR and stop Dependabot creating any more for this minor version (unless you reopen the PR or upgrade to it yourself)
    • @dependabot ignore this dependency will close this PR and stop Dependabot creating any more for this dependency (unless you reopen the PR or upgrade to it yourself)
  • Bump github.com/getkin/kin-openapi from 0.110.0 to 0.112.0

    Bump github.com/getkin/kin-openapi from 0.110.0 to 0.112.0

    Bumps github.com/getkin/kin-openapi from 0.110.0 to 0.112.0.

    Release notes

    Sourced from github.com/getkin/kin-openapi's releases.

    v0.112.0

    What's Changed

    New Contributors

    Full Changelog: https://github.com/getkin/kin-openapi/compare/v0.111.0...v0.112.0

    v0.111.0

    What's Changed

    New Contributors

    Full Changelog: https://github.com/getkin/kin-openapi/compare/v0.110.0...v0.111.0

    Commits
    • 46e0df8 openapi3filter: use option to skip setting defaults on validation (#708)
    • a0b67a0 openapi3: continue validation on valid oneOf properties (#721)
    • 1f680b5 feat: improve error reporting for bad/missing discriminator (#718)
    • 1490eae openapi3: introduce (Paths).InMatchingOrder() paths iterator (#719)
    • de2455e openapi3: unexport ValidationOptions fields and add some more (#717)
    • 3be535f openapi3filter: validate non-string headers (#712)
    • 25a5fe4 Leave allocation capacity guessing to the runtime (#716)
    • 2975a21 openapi3: patch YAML serialization of dates (#698)
    • 35bb627 Fix links to OpenAPI spec after GitHub changes (#714)
    • 6a3b779 Fix inconsistent processing of server variables in gorillamux router (#705)
    • Additional commits viewable in compare view

  • Allow redis connection URL to connect to redis.

    Allow redis connection URL to connect to redis.

    Related to #747

    This implements the --redis-url CLI option to connect to redis, as described in the comments of #747.

    It's useful because allowing a URL as an alternative to just host/port allows for more options when using redis:

    • Using SSL to connect
    • Using username/password to connect
    • Using a different database

    The existing --redis-host and --redis-port options now show a deprecation warning but still work as before.
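
    As a quick illustration of what a single URL can encode (hedged example -- the URL below is made up, and cog itself is Go, not redis-py): the rediss:// scheme enables TLS, the userinfo part carries credentials, and the trailing path selects a database, none of which fit into a plain host/port pair.

    import redis

    # TLS (rediss://), username/password auth, and database 2, all in one URL.
    client = redis.Redis.from_url("rediss://worker:s3cret@redis.example.com:6380/2")
    client.ping()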

  • Is Redis worker supposed to exit after completing a single job?

    Is Redis worker supposed to exit after completing a single job?

    I've been trying to make the Redis worker work with no success, unless I'm misunderstanding something. I'm using this example with my own cog image: https://github.com/replicate/cog-redis-example

    I add a message to the queue and the worker picks it up. The logs show that it processes it, but then it exits with the message:

    Shutting down worker: bye bye!

    No error messages, just that. I'm wondering if this is the intended behaviour of a cog. I expected it to continue listening for messages, but maybe I understood the worker wrong. If this is the intended behaviour, is there any way to make it not exit after a single job completes?

  • Bump github.com/docker/docker from 20.10.21+incompatible to 20.10.22+incompatible

    Bump github.com/docker/docker from 20.10.21+incompatible to 20.10.22+incompatible

    Bumps github.com/docker/docker from 20.10.21+incompatible to 20.10.22+incompatible.

    Release notes

    Sourced from github.com/docker/docker's releases.

    v20.10.22

    Bug fixes and enhancements

    • Improve error message when attempting to pull an unsupported image format or OCI artifact (moby/moby#44413, moby/moby#44569).
    • Fix an issue where the host's ephemeral port-range was ignored when selecting random ports for containers (moby/moby#44476).
    • Fix ssh: parse error in message type 27 errors during docker build on hosts using OpenSSH 8.9 or above (moby/moby#3862).
    • seccomp: block socket calls to AF_VSOCK in default profile (moby/moby#44564).

    Packaging Updates

    Commits
    • 42c8b31 Merge pull request #44656 from thaJeztah/20.10_containerd_binary_1.6.13
    • ff29c40 update containerd binary to v1.6.13
    • 0234322 Merge pull request #44488 from thaJeztah/20.10_backport_update_gotestsum
    • edca413 [20.10] update gotestsum to v1.8.2
    • 6112b23 Merge pull request #44476 from sbuckfelder/20.10_UPDATE
    • 194e73f Merge pull request #44607 from thaJeztah/20.10_containerd_binary_1.6.12
    • a9fdcd5 [20.10] update containerd binary to v1.6.12 (addresses CVE-2022-23471)
    • 48f955d Merge pull request #44597 from thaJeztah/20.10_containerd_1.6.11
    • 50d4d98 Merge pull request #44569 from thaJeztah/20.10_backport_relax_checkSupportedM...
    • 17451d2 Merge pull request #44593 from thaJeztah/20.10_update_go_1.18.9
    • Additional commits viewable in compare view

  • Bump github.com/docker/cli from 20.10.21+incompatible to 20.10.22+incompatible

    Bump github.com/docker/cli from 20.10.21+incompatible to 20.10.22+incompatible

    Bumps github.com/docker/cli from 20.10.21+incompatible to 20.10.22+incompatible.

    Commits
    • 3a2c30b Merge pull request #3919 from thaJeztah/20.10_update_engine
    • 47649fb vendor: github.com/docker/docker v20.10.21
    • 3b562e9 vendor: github.com/moby/buildkit v0.8.4-0.20221020190723-eeb7b65ab7d6
    • e7cdabe Merge pull request #3918 from thaJeztah/20.10_docs_backports
    • 5106d8e Merge pull request #3917 from thaJeztah/20.10_backport_update_gotestsum
    • ce10682 Remove deprecated note
    • 058f7df docs: docker inspect --size
    • 226a2fd docs: docker inspect: reformat with prettier
    • 42eca75 docs: use correct separator in --security-opt
    • 0c8ce43 docs: fix misleading example of setting an env variable for a single command
    • Additional commits viewable in compare view
