Kubernetes Native Serverless Framework


kubeless is a Kubernetes-native serverless framework that lets you deploy small bits of code without having to worry about the underlying infrastructure plumbing. It leverages Kubernetes resources to provide auto-scaling, API routing, monitoring, troubleshooting and more.

Kubeless stands out because it uses a Custom Resource Definition to create functions as custom Kubernetes resources. An in-cluster controller watches these custom resources and launches runtimes on demand. The controller dynamically injects the function's code into the runtimes and makes it available over HTTP or via a PubSub mechanism.
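
For illustration, a Function object looks roughly like this (a minimal sketch following the kubeless.io/v1beta1 schema; exact fields vary between releases):

    apiVersion: kubeless.io/v1beta1
    kind: Function
    metadata:
      name: hello
    spec:
      runtime: python2.7        # language runtime that will host the code
      handler: hello.handler    # <module>.<function> entry point
      function: |               # function source the controller injects into the runtime
        def handler(event, context):
            return "Hello world!"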

Kubeless is purely open-source and unaffiliated with any commercial organization. Chime in at any time, we would love the help and feedback!

Quick start

Check out the instructions for quickly setting up Kubeless here.
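
For the impatient, installation boils down to creating a kubeless namespace and applying the release manifest (the version below is only an example; check the releases page for the current one):

    $ export RELEASE=v1.0.8
    $ kubectl create ns kubeless
    $ kubectl create -f https://github.com/kubeless/kubeless/releases/download/$RELEASE/kubeless-$RELEASE.yaml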

Building

Consult the developer's guide for a complete set of instructions to build kubeless.

Compatibility Matrix with Kubernetes

Kubeless fully supports Kubernetes versions greater than 1.9 (tested up to 1.15). For other versions some Kubeless features may not be available. Our CI runs tests against two different platforms: GKE (1.12) and Minikube (1.15). Other platforms are supported, but full compatibility cannot be assured.

Roadmap

We would love to get your help; feel free to lend a hand. We are currently looking to implement the following high-level features:

  • Add other runtimes; currently Golang, Python, NodeJS, Ruby, PHP, .NET and Ballerina are supported. We also provide a way to use custom runtimes. Please check this doc for more details.
  • Investigate other message buses (e.g. SQS, RabbitMQ)
  • Optimize function startup time
  • Add distributed tracing (maybe using Istio)

Community

Issues: If you find any issues, please file them.

Slack: We're fairly active on Slack, and you can find us in the #kubeless channel.

Comments
  • How does kubeless achieve parallelism for function execution?

    Kubeless provides the following cmd to deploy a function

    $ kubeless function deploy test --runtime python2.7 \
                                    --handler test.foobar \
                                    --from-file test.py \
                                    --trigger-topic test-topic
    

    This cmd tells k8s to create a corresponding deployment for the function. However, it doesn't specify the replica count, so only one consumer will be created for the function handler. A Kafka cluster uses multiple consumers to achieve parallelism; does kubeless support multiple consumers?

    thanks!
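
    (A note for anyone else reading: since kubeless backs each function with an ordinary Deployment, one untested workaround is to scale that Deployment directly and check whether the replicas join the same consumer group; whether they actually split the Kafka partitions depends on the runtime/trigger implementation:)

    $ kubectl scale deployment test --replicas=3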

  • Document custom runtime flavors

    Issue Ref: #900

    Description:

    Document how to create custom runtimes based on official ones.

    TODOs:

    • [X] Ready to review
    • ~~[ ] Automated Tests~~
    • [X] Docs
  • Use existing Kafka cluster

    I'm a maintainer of https://github.com/Yolean/kubernetes-kafka, so naturally we have a cluster already :) Also https://github.com/kubernetes/charts/tree/master/incubator/kafka is quite widely adopted, judging by the number of pulls from https://hub.docker.com/r/solsson/kafka/.

    I disagree with https://github.com/kubeless/kubeless/issues/32. Kafka is an excellent choice of events backend, with semantics that fit nicely with serverless function execution (you can strive for exactly-once). It also makes it easy to integrate with other services in a streaming platform.

    Can Kubeless use an existing Kafka cluster? You'd basically only need to specify any requirements on Kafka config, and decouple Kafka from the rest of Kubeless through a bootstrap-brokers config. Maybe have some naming convention for topics.
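
    Something like the following is what I have in mind (entirely hypothetical config keys, just to illustrate the decoupling):

    # hypothetical kubeless config fragment pointing at an existing cluster
    kafka:
      bootstrapBrokers: bootstrap.kafka.svc.cluster.local:9092
      topicPrefix: kubeless.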

  • Can not create Java functions

    Is this a BUG REPORT or FEATURE REQUEST?: BUG REPORT

    What happened: Creating a Java function leaves the function NOT_READY and its pod marked as Init:CrashLoopBackOff

    What you expected to happen: Function should have been created successfully

    How to reproduce it (as minimally and precisely as possible): Create a Java file, Hello.java:

    package io.kubeless;
    
    import io.kubeless.Event;
    import io.kubeless.Context;
    
    import java.lang.RuntimeException;
    
    public class Hello {
        public String hello(Event event, Context context) {
            return "Hello world!";
        }
    }
    

    Add pom.xml

    <project xmlns="http://maven.apache.org/POM/4.0.0" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/xsd/maven-4.0.0.xsd">
      <modelVersion>4.0.0</modelVersion>
      <artifactId>function</artifactId>
      <name>function</name>
      <version>1.0-SNAPSHOT</version>
      <dependencies>
        <dependency>
          <groupId>io.kubeless</groupId>
          <artifactId>params</artifactId>
          <version>1.0-SNAPSHOT</version>
        </dependency>
      </dependencies>
      <parent>
        <groupId>io.kubeless</groupId>
        <artifactId>kubeless</artifactId>
        <version>1.0-SNAPSHOT</version>
      </parent>
    </project>
    

    Deploy function

    kubeless function deploy hello-java -f Hello.java -d pom.xml --handler Hello.hello -r java1.8
    INFO[0000] Deploying function...
    INFO[0000] Function hello-java submitted for deployment
    INFO[0000] Check the deployment status executing 'kubeless function ls hello-java'
    

    Kubeless function list

    kubeless function ls
    NAME            NAMESPACE       HANDLER         RUNTIME         DEPENDENCIES                                            STATUS
    hello           default         hello.world     python2.7                                                               1/1 READY
    hello-java      default         Hello.hello     java1.8         <project xmlns="http://maven.apache.org/POM/4.0.0"      0/1 NOT READY
                                                                    xmlns:xsi="http://www.w3.org/2001/XMLSchema-ins...
                                                                    xsi:schemaLocation="http://maven.apache.org/POM...
                                                                    http://maven.apache.org/xsd/maven-4.0.0.xsd">
                                                                      <modelVersion>4.0.0</modelVersion>
                                                                      <artifactId>function</artifactId>
                                                                      <name>function</name>
                                                                      <version>1.0-SNAPSHOT</version>
                                                                      <dependencies>
                                                                        <dependency>
                                                                          <groupId>io.kubeless</groupId>
                                                                          <artifactId>params</artifactId>
                                                                          <version>1.0-SNAPSHOT</version>
                                                                        </dependency>
                                                                      </dependencies>
                                                                      <parent>
                                                                        <groupId>io.kubeless</groupId>
                                                                        <artifactId>kubeless</artifactId>
                                                                        <version>1.0-SNAPSHOT</version>
                                                                      </parent>
                                                                    </project>
    nodejs          default         node.less       nodejs8                                                                 1/1 READY
    

    Pod status

    NAME                                                READY     STATUS                  RESTARTS   AGE
    hello-94657547f-4th28                               1/1       Running                 0          1h
    hello-java-57c87d8cd-s9l65                          0/1       Init:CrashLoopBackOff   3          1m
    nodejs-798475b6cf-g6k4h                             1/1       Running                 0          38m
    

    Pod description

    kubectl.exe describe pods hello-java-57c87d8cd-s9l65
    Name:           hello-java-57c87d8cd-s9l65
    Namespace:      default
    Node:           docker-for-desktop/192.168.65.3
    Start Time:     Tue, 27 Nov 2018 13:36:45 +0530
    Labels:         created-by=kubeless
                    function=hello-java
                    pod-template-hash=137438478
    Annotations:    prometheus.io/path=/metrics
                    prometheus.io/port=8080
                    prometheus.io/scrape=true
    Status:         Pending
    IP:             10.1.16.188
    Controlled By:  ReplicaSet/hello-java-57c87d8cd
    Init Containers:
      prepare:
        Container ID:  docker://348af8d0208fb87aeabdc962da9fa1c3b225b0a84ee5933c3a9f7e0c9fd91503
        Image:         kubeless/unzip@sha256:f162c062973cca05459834de6ed14c039d45df8cdb76097f50b028a1621b3697
        Image ID:      docker-pullable://kubeless/unzip@sha256:f162c062973cca05459834de6ed14c039d45df8cdb76097f50b028a1621b3697
        Port:          <none>
        Host Port:     <none>
        Command:
          sh
          -c
        Args:
          echo '73cae57c53af18d70df64876eda1d4efd7b14e490fecbbf50851ad930bfc9b49  /src/Hello.java' > /tmp/func.sha256 && sha256sum -c /tmp/func.sha256 && cp /src/Hello.java /kubeless/Hello.java && cp /src/pom.xml /kubeless
        State:          Terminated
          Reason:       Completed
          Exit Code:    0
          Started:      Tue, 27 Nov 2018 13:36:47 +0530
          Finished:     Tue, 27 Nov 2018 13:36:47 +0530
        Ready:          True
        Restart Count:  0
        Environment:    <none>
        Mounts:
          /kubeless from hello-java (rw)
          /src from hello-java-deps (rw)
          /var/run/secrets/kubernetes.io/serviceaccount from default-token-l2t6z (ro)
      compile:
        Container ID:  docker://e6281ba7ec3c188b56ea31ffa0234e8b35096d32e5e530ef9c586e243d894cf5
        Image:         kubeless/java-init@sha256:a14d846bfe53f359f706a260b95f0a9a755883b053dbd17b724e7a3cdff5bae6
        Image ID:      docker-pullable://kubeless/java-init@sha256:a14d846bfe53f359f706a260b95f0a9a755883b053dbd17b724e7a3cdff5bae6
        Port:          <none>
        Host Port:     <none>
        Command:
          sh
          -c
        Args:
          /compile-function.sh
        State:       Waiting
          Reason:    CrashLoopBackOff
        Last State:  Terminated
          Reason:    Error
          Message:   [INFO] Scanning for projects...
    [ERROR] [ERROR] Some problems were encountered while processing the POMs:
    [ERROR] Child module /kubeless/function/params of /kubeless/function/pom.xml does not exist @
    [ERROR] Child module /kubeless/function/function of /kubeless/function/pom.xml does not exist @
    [ERROR] Child module /kubeless/function/handler of /kubeless/function/pom.xml does not exist @
     @
    [ERROR] The build could not read 1 project -> [Help 1]
    [ERROR]
    [ERROR]   The project io.kubeless:kubeless:1.0-SNAPSHOT (/kubeless/function/pom.xml) has 3 errors
    [ERROR]     Child module /kubeless/function/params of /kubeless/function/pom.xml does not exist
    [ERROR]     Child module /kubeless/function/function of /kubeless/function/pom.xml does not exist
    [ERROR]     Child module /kubeless/function/handler of /kubeless/function/pom.xml does not exist
    [ERROR]
    [ERROR] To see the full stack trace of the errors, re-run Maven with the -e switch.
    [ERROR] Re-run Maven using the -X switch to enable full debug logging.
    [ERROR]
    [ERROR] For more information about the errors and possible solutions, please read the following articles:
    [ERROR] [Help 1] http://cwiki.apache.org/confluence/display/MAVEN/ProjectBuildingException
    
          Exit Code:    1
          Started:      Tue, 27 Nov 2018 13:40:09 +0530
          Finished:     Tue, 27 Nov 2018 13:40:11 +0530
        Ready:          False
        Restart Count:  5
        Environment:
          KUBELESS_INSTALL_VOLUME:  /kubeless
          KUBELESS_FUNC_NAME:       hello
        Mounts:
          /kubeless from hello-java (rw)
          /var/run/secrets/kubernetes.io/serviceaccount from default-token-l2t6z (ro)
    Containers:
      hello-java:
        Container ID:
        Image:          kubeless/java@sha256:d2a59e50e8181174ad3c6096cd5d3ce82f46b7e22a6f3a109b0816787e7190d9
        Image ID:
        Port:           8080/TCP
        Host Port:      0/TCP
        State:          Waiting
          Reason:       PodInitializing
        Ready:          False
        Restart Count:  0
        Liveness:       http-get http://:8080/healthz delay=3s timeout=1s period=30s #success=1 #failure=3
        Environment:
          FUNC_HANDLER:             hello
          MOD_NAME:                 Hello
          FUNC_TIMEOUT:             180
          FUNC_RUNTIME:             java1.8
          FUNC_MEMORY_LIMIT:        0
          FUNC_PORT:                8080
          KUBELESS_INSTALL_VOLUME:  /kubeless
        Mounts:
          /kubeless from hello-java (rw)
          /var/run/secrets/kubernetes.io/serviceaccount from default-token-l2t6z (ro)
    Conditions:
      Type           Status
      Initialized    False
      Ready          False
      PodScheduled   True
    Volumes:
      hello-java:
        Type:    EmptyDir (a temporary directory that shares a pod's lifetime)
        Medium:
      hello-java-deps:
        Type:      ConfigMap (a volume populated by a ConfigMap)
        Name:      hello-java
        Optional:  false
      default-token-l2t6z:
        Type:        Secret (a volume populated by a Secret)
        SecretName:  default-token-l2t6z
        Optional:    false
    QoS Class:       BestEffort
    Node-Selectors:  <none>
    Tolerations:     node.kubernetes.io/not-ready:NoExecute for 300s
                     node.kubernetes.io/unreachable:NoExecute for 300s
    Events:
      Type     Reason                 Age              From                         Message
      ----     ------                 ----             ----                         -------
      Normal   Scheduled              4m               default-scheduler            Successfully assigned hello-java-57c87d8cd-s9l65 to docker-for-desktop
      Normal   SuccessfulMountVolume  4m               kubelet, docker-for-desktop  MountVolume.SetUp succeeded for volume "hello-java"
      Normal   SuccessfulMountVolume  4m               kubelet, docker-for-desktop  MountVolume.SetUp succeeded for volume "default-token-l2t6z"
      Normal   SuccessfulMountVolume  4m               kubelet, docker-for-desktop  MountVolume.SetUp succeeded for volume "hello-java-deps"
      Normal   Pulled                 4m               kubelet, docker-for-desktop  Container image "kubeless/unzip@sha256:f162c062973cca05459834de6ed14c039d45df8cdb76097f50b028a1621b3697" already present on machine
      Normal   Created                4m               kubelet, docker-for-desktop  Created container
      Normal   Started                4m               kubelet, docker-for-desktop  Started container
      Normal   Created                2m (x4 over 4m)  kubelet, docker-for-desktop  Created container
      Normal   Started                2m (x4 over 4m)  kubelet, docker-for-desktop  Started container
      Warning  BackOff                2m (x6 over 3m)  kubelet, docker-for-desktop  Back-off restarting failed container
      Normal   Pulled                 2m (x5 over 4m)  kubelet, docker-for-desktop  Container image "kubeless/java-init@sha256:a14d846bfe53f359f706a260b95f0a9a755883b053dbd17b724e7a3cdff5bae6" already present on machine
    

    Anything else we need to know?:

    Environment:

    • Kubernetes version (use kubectl version):
    Client Version: version.Info{Major:"1", Minor:"10", GitVersion:"v1.10.3", GitCommit:"2bba0127d85d5a46ab4b778548be28623b32d0b0", GitTreeState:"clean", BuildDate:"2018-05-21T09:17:39Z", GoVersion:"go1.9.3", Compiler:"gc", Platform:"windows/amd64"}
    Server Version: version.Info{Major:"1", Minor:"10", GitVersion:"v1.10.3", GitCommit:"2bba0127d85d5a46ab4b778548be28623b32d0b0", GitTreeState:"clean", BuildDate:"2018-05-21T09:05:37Z", GoVersion:"go1.9.3", Compiler:"gc", Platform:"linux/amd64"}
    
    • Kubeless version (use kubeless version): Kubeless version: v1.0.0
    • Cloud provider or physical cluster: Using Kubernetes on the Docker edge channel for Windows.
  • Kubeless is creating one pod per function - is this normal for serverless?

    I observed that kubeless is creating one pod per function, even though both functions are based on the python2.7 runtime. Is this normal? Or do you have any plan to refactor to use a pool of runtime pods in the future?

  • Function can not be created/updated: Kubernetes 1.16

    BUG REPORT:

    What happened: Unable to deploy function on k8s 1.16

    How to reproduce it (as minimally and precisely as possible): Follow example function for python

    time="2019-10-02T10:43:20Z" level=info msg="Processing change to Function default/hello" pkg=function-controller
    time="2019-10-02T10:43:20Z" level=error msg="Function can not be created/updated: the server could not find the requested resource" pkg=function-controller
    time="2019-10-02T10:43:20Z" level=error msg="Error processing default/hello (will retry): the server could not find the requested resource" pkg=function-controller
    time="2019-10-02T10:43:20Z" level=info msg="Processing change to Function default/hello" pkg=function-controller
    time="2019-10-02T10:43:20Z" level=error msg="Function can not be created/updated: the server could not find the requested resource" pkg=function-controller
    time="2019-10-02T10:43:20Z" level=error msg="Error processing default/hello (will retry): the server could not find the requested resource" pkg=function-controller
    time="2019-10-02T10:43:20Z" level=info msg="Processing change to Function default/hello" pkg=function-controller
    time="2019-10-02T10:43:21Z" level=error msg="Function can not be created/updated: the server could not find the requested resource" pkg=function-controller
    time="2019-10-02T10:43:21Z" level=error msg="Error processing default/hello (will retry): the server could not find the requested resource" pkg=function-controller
    time="2019-10-02T10:43:21Z" level=info msg="Processing change to Function default/hello" pkg=function-controller
    time="2019-10-02T10:43:22Z" level=error msg="Function can not be created/updated: the server could not find the requested resource" pkg=function-controller
    time="2019-10-02T10:43:22Z" level=error msg="Error processing default/hello (will retry): the server could not find the requested resource" pkg=function-controller
    time="2019-10-02T10:43:22Z" level=info msg="Processing change to Function default/hello" pkg=function-controller
    time="2019-10-02T10:43:23Z" level=error msg="
    time="2019-10-02T10:43:23Z" level=error msg="Error processing default/hello (will retry): the server could not find the requested resource" pkg=function-controller
    time="2019-10-02T10:43:23Z" level=info msg="Processing change to Function default/hello" pkg=function-controller
    time="2019-10-02T10:43:24Z" level=error msg="Function can not be created/updated: the server could not find the requested resource" pkg=function-controller
    time="2019-10-02T10:43:24Z" level=error msg="Error processing default/hello (giving up): the server could not find the requested resource" pkg=function-controller
    ERROR: logging before flag.Parse: E1002 10:43:24.900416       1 function_controller.go:185] the server could not find the requested resource
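
    (Possibly relevant: Kubernetes 1.16 removed several long-deprecated API versions, e.g. Deployments under extensions/v1beta1, apps/v1beta1 and apps/v1beta2, and clients that still request them get exactly this "the server could not find the requested resource" error. Two standard kubectl checks to see what the server still serves:)

    $ kubectl api-versions | grep -E 'apps|extensions'
    $ kubectl get crd functions.kubeless.io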
    

    Environment:

    • Kubernetes version (use kubectl version): 1.16.0
    • Kubeless version (use kubeless version): v1.0.4-dirty
    • Cloud provider or physical cluster: minikube and physical cluster
  • CrashLoopBackOff

    Is this a BUG REPORT or FEATURE REQUEST?: BUG REPORT

    What happened: Initially created kubeless with the RBAC manifest:

    kubectl get pod -n kubeless
    NAME                                           READY     STATUS             RESTARTS   AGE
    kubeless-controller-manager-7c7bcb8db4-l26kr   0/3       CrashLoopBackOff   6          116s

    kubectl logs kubeless-controller-manager-7c7bcb8db4-l26kr -n kubeless -c kubeless-function-controller
    time="2019-01-08T06:59:40Z" level=info msg="Running Kubeless controller manager version: v1.0.1"
    time="2019-01-08T07:00:10Z" level=fatal msg="Unable to read the configmap: Error while fetching config location: Get https://10.96.0.1:443/apis/apiextensions.k8s.io/v1beta1/customresourcedefinitions/functions.kubeless.io: dial tcp 10.96.0.1:443: i/o timeout"

    What you expected to happen:

    How to reproduce it (as minimally and precisely as possible):

    Anything else we need to know?:

    curl https://10.96.0.1:443/apis/apiextensions.k8s.io/v1beta1/customresourcedefinitions/functions.kubeless.io
    curl: (60) SSL certificate problem: unable to get local issuer certificate
    More details here: https://curl.haxx.se/docs/sslcerts.html

    curl performs SSL certificate verification by default, using a "bundle" of Certificate Authority (CA) public keys (CA certs). If the default bundle file isn't adequate, you can specify an alternate file using the --cacert option. If this HTTPS server uses a certificate signed by a CA represented in the bundle, the certificate verification probably failed due to a problem with the certificate (it might be expired, or the name might not match the domain name in the URL). If you'd like to turn off curl's verification of the certificate, use the -k (or --insecure) option.

    Environment:

    • Kubernetes version (use kubectl version): v1.13.1
    • Kubeless version (use kubeless version): v1.0.1
    • Cloud provider or physical cluster: physical cluster
  • decoupling Function, Trigger and Runtimes

    Issue Ref: [Issue number related to this PR or None]

    The basic issues this PR addresses are detailed/discussed in this proposal.

    Description:

    This PR is the result of a collaborative development effort from @andresmgot, @ngtuna and @murali-reddy.

    What problems does this PR address?

    decoupling function and event sources

    In the current design of Kubeless, there is just the Function abstraction, represented as a CRD. This object is used to specify both function details (like function handler, code, runtime etc.) and event source details (like the Kafka topic for which the function should be called, cron job etc.).

    This PR brings the notion of Triggers into Kubeless. A Trigger basically represents the details of an event source and the associated function that needs to be called when an event occurs. The existing supported event sources (Kafka, cron job, HTTP) are modelled as separate triggers, and each trigger gets a separate CRD controller. This separation at the API layer provides a clean separation of concerns between functions and event sources.

    This separation also enables Kubeless to support n:m associations between functions and event sources. For example, the Kafka trigger object uses a label selector to express the set of functions that should be associated with the trigger, as sketched below.
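
    Here is roughly what that association looks like (field names follow the kubeless.io/v1beta1 KafkaTrigger schema this PR introduces; treat the exact spec as illustrative, not authoritative):

    apiVersion: kubeless.io/v1beta1
    kind: KafkaTrigger
    metadata:
      name: test-trigger
    spec:
      topic: test-topic        # Kafka topic to consume from
      functionSelector:        # label selector matching one or more functions
        matchLabels:
          created-by: kubeless
          function: test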

    decoupling event source listener and runtimes

    In the current Kubeless architecture, the pod that is deployed to run the function also contains the event source listener code (like a Kafka client subscribing to a particular topic). The problem with this approach is that we need to maintain a separate image for each combination of language, language version and event source.

    With this PR, each pod that is deployed to run a function exposes an HTTP endpoint, irrespective of event source and language type. The function signature has also been altered to carry event data and context information.
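
    With the new signature, a handler looks along these lines (Python sketch; the exact keys in event are indicative of the shape, not a spec):

    def handler(event, context):
        # event carries the payload plus metadata about where it came from
        print(event['data'])
        return "OK"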

    design changes

    Some notes on the design to help the reviewer.

    • Each trigger object has a corresponding CRD controller. The controller watches for add/delete/update events from the k8s API server and processes the update to reflect the desired state expressed by the user.
    • Trigger object CRD controllers also watch for Function object updates. A controller will take action only if required. For example, when a function is deleted, the Kafka trigger controller checks whether any Kafka trigger object is associated with it; if so, it stops sending topic messages to the function service.
    • By default, CRD objects (like core objects) are deleted by the API server as soon as they are deleted through the API, so it's not possible for controllers to sync/clean up once the object is gone. Kubernetes provides the declarative pattern of Finalizers, a mechanism through which an interested controller can request a soft delete of an API object. This PR leverages Finalizers to process deleted objects. Please see the PVC protection controller for a reference.
    • kubeless-controller has been renamed to kubeless-controller-manager, and includes controllers for HTTP triggers and cronjob triggers.
    • A new deployment manifest is added for Kafka. When deployed, it creates a separate deployment, kafka-trigger-controller, which contains the Kafka trigger CRD controller.

    Known Issues

    • CI is failing for the GKE and Kafka builds. I will fix it, but review of the PR can get started.
    • Some inconsistent behaviour in edge cases was seen with the API machinery for CRD controllers (for instance https://github.com/kubernetes/kubernetes/issues/60538). If any issue is found we need to work around it or get it fixed upstream.
    • There is no explicit CLI support for trigger creation in this PR; kubeless function deploy implicitly creates the trigger objects as well. You can use the API to create trigger objects directly. CLI support will be added in a separate PR.

    TODOs:

    • [x] Ready to review
    • [x] Automated Tests
    • [ ] Docs
  • Unable to use Kafka topic after minikube restart

    I have deployed the basic function that shows the received context from here and the function works properly.

    But it seems that the function's CPU usage is too high.

    If I do docker stats inside the minikube instance I get 89% CPU usage.

    CONTAINER           CPU %               MEM USAGE / LIMIT       MEM %               NET I/O               BLOCK I/O           PIDS
    4fd0c2093ed2        89.26%              11.46 MiB / 1.953 GiB   0.57%               238.5 MB / 313.3 MB   0 B / 0 B           2
    

    And from within the container I can see that the python process is responsible for that spike in usage.

    top - 00:20:21 up 18 min,  0 users,  load average: 1.22, 1.32, 1.01
    Tasks:   3 total,   2 running,   1 sleeping,   0 stopped,   0 zombie
    %Cpu(s): 40.1 us,  8.6 sy,  0.0 ni, 48.1 id,  0.0 wa,  0.0 hi,  3.2 si,  0.0 st
    KiB Mem:   2048076 total,  1923396 used,   124680 free,    52064 buffers
    KiB Swap:  1023996 total,   275216 used,   748780 free.   551052 cached Mem
    
      PID USER      PR  NI    VIRT    RES    SHR S  %CPU %MEM     TIME+ COMMAND
        1 root      20   0  176188   8844   1320 R  73.6  0.4  12:09.61 python
       24 root      20   0   21972   3524   3020 S   0.0  0.2   0:00.00 bash
       29 root      20   0   23696   2756   2280 R   0.0  0.1   0:00.00 top
    

    And this is the small handler that can be found in the container

    cat /kubeless/pubsub.py 
    def handler(context):
        return context
    

    Is there any way to change the logging level to see what's going on in the function wrapper itself?

  • Scheduled function not running periodically in k8s 1.8

    I have a simple python function:

    $ cat hello.py
    def handler():
        print("Hello, I'm a cronjob!")
    
    

    Deployed with kubeless (0.2.3) to a GKE cluster (1.8.1-gke.0):

    $ kubeless function deploy hello --runtime python2.7 --from-file hello.py --handler hello.handler --schedule "* * * * *"
    
    $ kubeless function ls
    NAME 	NAMESPACE	HANDLER      	RUNTIME  	TYPE     	TOPIC	DEPENDENCIES
    hello	default  	hello.handler	python2.7	Scheduled	     	            
    
    $ kubectl get pod
    NAME                     READY     STATUS        RESTARTS   AGE
    hello-68f8bbd8cb-88krg   1/1       Terminating   0          38s
    hello-68f8bbd8cb-gr9jq   0/1       Terminating   0          41s
    hello-68f8bbd8cb-p726b   1/1       Running       0          36s
    hello-68f8bbd8cb-xwl5l   1/1       Terminating   0          42s
    
    $ kubeless function logs hello
    Bottle v0.12.13 server starting up (using CherryPyServer())...
    Listening on http://0.0.0.0:8080/
    Hit Ctrl-C to quit.
    
    10.8.1.1 - - [31/Oct/2017:09:49:23 +0000] "GET /healthz HTTP/1.1" 200 2 "" "kube-probe/1.8+" 0/149
    10.8.1.1 - - [31/Oct/2017:09:49:53 +0000] "GET /healthz HTTP/1.1" 200 2 "" "kube-probe/1.8+" 0/236
    10.8.1.1 - - [31/Oct/2017:09:50:23 +0000] "GET /healthz HTTP/1.1" 200 2 "" "kube-probe/1.8+" 0/115
    10.8.1.1 - - [31/Oct/2017:09:50:53 +0000] "GET /healthz HTTP/1.1" 200 2 "" "kube-probe/1.8+" 0/279
    10.8.1.1 - - [31/Oct/2017:09:51:23 +0000] "GET /healthz HTTP/1.1" 200 2 "" "kube-probe/1.8+" 0/101
    10.8.1.1 - - [31/Oct/2017:09:51:53 +0000] "GET /healthz HTTP/1.1" 200 2 "" "kube-probe/1.8+" 0/102
    10.8.1.1 - - [31/Oct/2017:09:52:23 +0000] "GET /healthz HTTP/1.1" 200 2 "" "kube-probe/1.8+" 0/128
    10.8.1.1 - - [31/Oct/2017:09:52:53 +0000] "GET /healthz HTTP/1.1" 200 2 "" "kube-probe/1.8+" 0/131
    10.8.1.1 - - [31/Oct/2017:09:53:23 +0000] "GET /healthz HTTP/1.1" 200 2 "" "kube-probe/1.8+" 0/117
    10.8.1.1 - - [31/Oct/2017:09:53:53 +0000] "GET /healthz HTTP/1.1" 200 2 "" "kube-probe/1.8+" 0/117
    
    $ kubectl get cronjobs
    No resources found.
    
    

    The function pod is running but is never invoked periodically, and no CronJob is created in the Kubernetes cluster.

  • kubeless on Openshift/Minishift 3.6

    I'm trying to run kubeless on the latest Minishift release:

    oc v3.6.0+c4dd4cf
    kubernetes v1.6.1+5115d708d7
    features: Basic-Auth

    Server https://192.168.64.5:8443
    openshift v3.6.0+c4dd4cf
    kubernetes v1.6.1+5115d708d7

    For kubeless version 0.2.3 there seems to be a compatibility issue with Openshift. As far as I understand, kubeless uses the new Custom Resource Definition API (Kubernetes v1.7.0+, according to the release notes) from version 0.2.0 on. This probably breaks compatibility with Openshift, because version 3.6 is based on Kubernetes 1.6. I run into this issue:

    ➜ ~ oc create -f https://github.com/kubeless/kubeless/releases/download/v0.2.3/kubeless-rbac-v0.2.3.yaml
    clusterrole "kubeless-controller-deployer" created
    clusterrolebinding "kubeless-controller-deployer" created
    service "broker" created
    statefulset "kafka" created
    service "kafka" created
    statefulset "zoo" created
    deployment "kubeless-controller" created
    serviceaccount "controller-acct" created
    service "zoo" created
    service "zookeeper" created
    error: unable to recognize "https://github.com/kubeless/kubeless/releases/download/v0.2.3/kubeless-rbac-v0.2.3.yaml": no matches for apiextensions.k8s.io/, Kind=CustomResourceDefinition
    

    I can deploy the 0.1.0 kubeless version on the Openshift cluster without errors. However, if I then run a kubeless function ls command I get a ...server could not find... error. The same happens if I try to deploy something with kubeless.

    ➜  oc create -f https://github.com/kubeless/kubeless/releases/download/v0.1.0/kubeless-rbac-v0.1.0.yaml
    deployment "kubeless-controller" created
    clusterrole "kubeless-controller-deployer" created
    service "kafka" created
    service "zoo" created
    service "zookeeper" created
    statefulset "zoo" created
    serviceaccount "controller-acct" created
    clusterrolebinding "kubeless-controller-deployer" created
    service "broker" created
    statefulset "kafka" created
    thirdpartyresource "function.k8s.io" created
    
    ➜  ./kubeless_1.0 version
    Kubeless version: v0.1.0 (a1e1fab)
    ➜  ./kubeless_1.0 function ls
    FATA[0000] the server could not find the requested resource (get functions.k8s.io) 
    

    Does anyone know if/how I can make kubeless work on a minishift instance? Thank you for your help!
