Lazyload Module (Golang)

[TOC]

Chinese introduction (中文介绍)

Install & Use

Make sure slime-boot has been installed.
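
For example, you can check its pod in the mesh-operator namespace:

$ kubectl get pod -n mesh-operator | grep slime-boot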

  1. Install the lazyload module and additional components via the slime-boot configuration:

    Example

apiVersion: config.netease.com/v1alpha1
kind: SlimeBoot
metadata:
  name: lazyload
  namespace: mesh-operator
spec:
  image:
    pullPolicy: Always
    repository: docker.io/slimeio/slime-lazyload
    tag: {{your_lazyload_tag}}
  module:
    - name: lazyload
      fence:
        enable: true
        wormholePort:
          - "{{your_port}}" # replace with your application's service ports; extend the list for multiple ports
      metric:
        prometheus:
          address: {{prometheus_address}} # replace with your Prometheus address
          handlers:
            destination:
              query: |
                sum(istio_requests_total{source_app="$source_app",reporter="destination"})by(destination_service)
              type: Group
  component:
    globalSidecar:
      enable: true
      type: namespaced
      namespace:
        - {{your_namespace}} # replace with your service's namespace; extend the list for multiple namespaces
      resources:
        requests:
          cpu: 200m
          memory: 200Mi
        limits:
          cpu: 200m
          memory: 200Mi
      image:
        repository: {{your_sidecar_repo}}
        tag: {{your_sidecar_tag}}           
    pilot:
      enable: true
      resources:
        requests:
          cpu: 200m
          memory: 200Mi
        limits:
          cpu: 200m
          memory: 200Mi
      image:
        repository: {{your_pilot_repo}}
        tag: {{your_pilot_tag}}
  2. Make sure all components are running:
$ kubectl get po -n mesh-operator
NAME                                    READY     STATUS    RESTARTS   AGE
global-sidecar-pilot-796fb554d7-blbml   1/1       Running   0          27s
lazyload-fbcd5dbd9-jvp2s                1/1       Running   0          27s
slime-boot-68b6f88b7b-wwqnd             1/1       Running   0          39s
$ kubectl get po -n {{your_namespace}}
NAME                              READY     STATUS    RESTARTS   AGE
global-sidecar-785b58d4b4-fl8j4   1/1       Running   0          68s
  3. Enable lazyload.

Apply servicefence resource to enable lazyload.

apiVersion: microservice.slime.io/v1alpha1
kind: ServiceFence
metadata:
  name: {{your_svc}}
  namespace: {{your_namespace}}
spec:
  enable: true
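
Assuming the manifest above is saved as servicefence.yaml, apply it with:

$ kubectl apply -f servicefence.yaml
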
  4. Make sure the sidecar has been generated. Execute kubectl get sidecar {{your_svc}} -n {{your_namespace}} -oyaml; you should see a sidecar generated for the corresponding service, as follows:
apiVersion: networking.istio.io/v1beta1
kind: Sidecar
metadata:
  name: {{your_svc}}
  namespace: {{your_ns}}
  ownerReferences:
  - apiVersion: microservice.slime.io/v1alpha1
    blockOwnerDeletion: true
    controller: true
    kind: ServiceFence
    name: {{your_svc}}
spec:
  egress:
  - hosts:
    - istio-system/*
    - mesh-operator/*
    - '*/global-sidecar.{{your_ns}}.svc.cluster.local'
  workloadSelector:
    labels:
      app: {{your_svc}}

Other installation options

Disable global-sidecar

In a service mesh with the allow_any outbound traffic policy enabled, the global-sidecar component can be omitted. Use the following configuration:

Example

Instructions:

Not using the global-sidecar component may result in the first call not following the pre-defined routing rules: the request falls back to istio's underlying logic (typically passthrough) and is sent directly to the clusterIP, with VirtualService rules temporarily bypassed.

Scenario:

Service A accesses service B, but service B's VirtualService directs requests for service B to service C. Since there is no global-sidecar to handle this, the first request is passed through by istio to service B via PassthroughCluster; what should have been a response from service C becomes an erroneous response from service B. After the first request, B is added to A's ServiceFence, and A learns, by watching B's VirtualService, that the request is directed to C. C is then added to A's ServiceFence as well, and all requests after the first are successfully answered by C.

apiVersion: config.netease.com/v1alpha1
kind: SlimeBoot
metadata:
  name: lazyload
  namespace: mesh-operator
spec:
  image:
    pullPolicy: Always
    repository: docker.io/slimeio/slime-lazyload
    tag: {{your_lazyload_tag}}
  module:
    - fence:
        enable: true
        wormholePort:
        - "{{your_port}}" # replace to your application service ports, and extend the list in case of multi ports
      name: slime-fence
      global:
        misc:
          global-sidecar-mode: no      
      metric:
        prometheus:
          address: {{prometheus_address}} # replace with your Prometheus address
          handlers:
            destination:
              query: |
                sum(istio_requests_total{source_app="$source_app",reporter="destination"})by(destination_service)
              type: Group

Use cluster-unique global-sidecar

Example

Instructions:

In k8s, short-domain access traffic can only come from the same namespace, and cross-namespace access must carry namespace information. The cluster-unique global-sidecar is usually not in the same namespace as the business services, so its envoy config lacks short-domain entries. Therefore the cluster-unique global-sidecar cannot successfully forward short-domain access requests within the same namespace, resulting in a timeout "HTTP/1.1 0 DC downstream_remote_disconnect" error.

So in this case, inter-application access should carry namespace information, as in the example below.
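
For example, assuming a service svc-b in namespace testns (both hypothetical), callers should use the namespace-qualified name:

$ curl svc-b:9080/path          # short domain: not forwardable by the cluster-unique global-sidecar
$ curl svc-b.testns:9080/path   # namespace-qualified: forwarded correctly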

apiVersion: config.netease.com/v1alpha1
kind: SlimeBoot
metadata:
  name: lazyload
  namespace: mesh-operator
spec:
  image:
    pullPolicy: Always
    repository: docker.io/slimeio/slime-lazyload
    tag: {{your_lazyload_tag}}
  module:
    - fence:
        enable: true
        wormholePort:
        - "{{your_port}}" # replace to your application service ports, and extend the list in case of multi ports
      name: slime-fence
      global:
        misc:
          global-sidecar-mode: cluster      
      metric:
        prometheus:
          address: {{prometheus_address}} # replace with your Prometheus address
          handlers:
            destination:
              query: |
                sum(istio_requests_total{source_app="$source_app",reporter="destination"})by(destination_service)
              type: Group
  component:
    globalSidecar:
      enable: true
      type: cluster
      image:
        repository: {{your_sidecar_repo}}
        tag: {{your_sidecar_tag}}      
    pilot:
      enable: true
      image:
        repository: {{your_pilot_repo}}
        tag: {{your_pilot_tag}}     

Introduction of features

Automatic ServiceFence generation based on namespace/service label

Fence supports automatic generation based on labels, i.e. you can define the scope of the "fence enabled" functionality by applying the label slime.io/serviceFenced.

  • namespace level

    • true: ServiceFence CRs are created for all services in this namespace that do not already have one
    • other values: no action
  • service level

    • true: a ServiceFence CR is generated for this service
    • false: no ServiceFence CR is generated for this service

    Both of the above override the namespace-level setting (label).

    • other values: use the namespace-level configuration

Automatically generated ServiceFence CRs are recorded with the standard label app.kubernetes.io/created-by=fence-controller, which enables state-association changes. ServiceFence CRs without this label are currently considered manually configured and are not affected by the labels above.
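
For example, the auto-generated ServiceFence CRs can be listed by this label:

$ kubectl get servicefence -A -l app.kubernetes.io/created-by=fence-controller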

Example

Namespace testns has three services: svc1, svc2 and svc3.

  • Label testns with slime.io/serviceFenced=true: ServiceFence CRs are generated for all three services
  • Label svc2 with slime.io/serviceFenced=false: only the CRs for svc1 and svc3 remain
  • Remove this label from svc2: all three CRs are restored
  • Remove app.kubernetes.io/created-by=fence-controller from the CR of svc3, then remove the label on testns: only the CR of svc3 remains
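
A sketch of the corresponding kubectl label commands for the steps above (a trailing "-" removes a label):

$ kubectl label namespace testns slime.io/serviceFenced=true
$ kubectl label service svc2 -n testns slime.io/serviceFenced=false
$ kubectl label service svc2 -n testns slime.io/serviceFenced-
$ kubectl label servicefence svc3 -n testns app.kubernetes.io/created-by-
$ kubectl label namespace testns slime.io/serviceFenced-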

Sample configuration

apiVersion: v1
kind: Namespace
metadata:
  creationTimestamp: "2021-03-16T09:36:25Z"
  labels:
    istio-injection: enabled
    slime.io/serviceFenced: "true"
  name: testns
  resourceVersion: "79604437"
  uid: 5a34b780-cd95-4e43-b706-94d89473db77
---
apiVersion: v1
kind: Service
metadata:
  annotations: {}
  labels:
    app: svc2
    service: svc2
    slime.io/serviceFenced: "false"
  name: svc2
  namespace: testns
  resourceVersion: "79604741"
  uid: b36f04fe-18c6-4506-9d17-f91a81479dd2

Custom undefined traffic dispatch

By default, lazyload/fence sends traffic that envoy cannot match to a route (default or undefined traffic) to the global sidecar, to temporarily cope with the missing service data that lazy loading inevitably faces. This solution is limited by technical details and cannot handle traffic whose target (e.g. domain name) is outside the cluster (see issue slime/issues/3: "[Configuration Lazy Loading]: Failed to access external service #3").

Based on this background, this feature was designed to be used in more flexible business scenarios as well. The general idea is to assign different default traffic to different targets for correct processing by means of domain matching.

Sample configuration:

module:
  - name: fence
    fence:
      wormholePort:
      - "80"
      - "8080"
      dispatches: # new field
      - name: 163
        domains:
        - "www.163.com"
        cluster: "outbound|80||egress1.testns.svc.cluster.local" # standard istio cluster format: <direction>|<svcPort>|<subset>|<svcFullName>, normally direction is outbound and subset is empty      
      - name: baidu
        domains:
        - "*.baidu.com"
        - "baidu.*"
        cluster: "{{ (print .Values.foo \ ". \" .Values.namespace ) }}" # you can use template to construct cluster dynamically
      - name: sohu
        domains:
        - "*.sohu.com"
        - "sodu.*"
        cluster: "_GLOBAL_SIDECAR" # a special name which will be replaced with actual global sidecar cluster
      - name: default
        domains:
        - "*"
        cluster: "PassthroughCluster"  # a special istio cluster which will passthrough the traffic according to orgDest info. It's the default behavior of native istio.

foo: bar # values referenced by the template above (e.g. .Values.foo)

In this example, we dispatch a portion of the traffic to a specified cluster, let another part go to the global sidecar, and let the rest keep the native istio behavior: passthrough.

Note:

  • In custom assignment scenarios, if you want to keep the original logic "all other undefined traffic goes to global sidecar", you need to explicitly configure the last item as above

Example

Install Istio (1.8+)

Set Tag

$latest_tag equals the latest tag. The shell scripts and yaml files use this version by default.

$ export latest_tag=$(curl -s https://api.github.com/repos/slime-io/lazyload/tags | grep 'name' | cut -d\" -f4 | head -1)

Install Slime

$ /bin/bash -c "$(curl -fsSL https://raw.githubusercontent.com/slime-io/lazyload/$latest_tag/install/samples/lazyload/easy_install_lazyload.sh)"

Confirm all components are running.

$ kubectl get slimeboot -n mesh-operator
NAME       AGE
lazyload   2m20s
$ kubectl get pod -n mesh-operator
NAME                                    READY   STATUS             RESTARTS   AGE
global-sidecar-pilot-7bfcdc55f6-977k2   1/1     Running            0          2m25s
lazyload-b9646bbc4-ml5dr                1/1     Running            0          2m25s
slime-boot-7b474c6d47-n4c9k             1/1     Running            0          4m55s
$ kubectl get po -n default
NAME                              READY   STATUS    RESTARTS   AGE
global-sidecar-59f4c5f989-ccjjg   1/1     Running   0          3m9s

Install Bookinfo

First, set the namespace of the current context to the one where Bookinfo will be deployed. Here we use the default namespace.

$ kubectl label namespace default istio-injection=enabled
$ kubectl apply -f "https://raw.githubusercontent.com/slime-io/lazyload/$latest_tag/install/config/bookinfo.yaml"

Confirm all pods are running.

$ kubectl get po -n default
NAME                              READY   STATUS    RESTARTS   AGE
details-v1-79f774bdb9-6vzj6       2/2     Running   0          60s
global-sidecar-59f4c5f989-ccjjg   1/1     Running   0          5m12s
productpage-v1-6b746f74dc-vkfr7   2/2     Running   0          59s
ratings-v1-b6994bb9-klg48         2/2     Running   0          59s
reviews-v1-545db77b95-z5ql9       2/2     Running   0          59s
reviews-v2-7bf8c9648f-xcvd6       2/2     Running   0          60s
reviews-v3-84779c7bbc-gb52x       2/2     Running   0          60s

Then we can visit productpage from pod/ratings by executing curl productpage:9080/productpage.
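
For example (deployment and container names per the Bookinfo defaults):

$ kubectl exec deploy/ratings-v1 -n default -c ratings -- curl -s productpage:9080/productpage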

You can also create a gateway and visit productpage from outside, as shown in Open the application to outside traffic.
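
If you have an Istio release directory at hand, the standard Bookinfo gateway manifest can be applied, e.g.:

$ kubectl apply -f samples/bookinfo/networking/bookinfo-gateway.yaml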

Enable Lazyload

Create lazyload for productpage.

$ kubectl apply -f "https://raw.githubusercontent.com/slime-io/lazyload/$latest_tag/install/samples/lazyload/servicefence_productpage.yaml"

Confirm servicefence and sidecar already exist.

$ kubectl get servicefence -n default
NAME          AGE
productpage   12s
$ kubectl get sidecar -n default
NAME          AGE
productpage   22s
$ kubectl get sidecar productpage -n default -oyaml
apiVersion: networking.istio.io/v1beta1
kind: Sidecar
metadata:
  creationTimestamp: "2021-08-04T03:54:35Z"
  generation: 1
  name: productpage
  namespace: default
  ownerReferences:
  - apiVersion: microservice.slime.io/v1alpha1
    blockOwnerDeletion: true
    controller: true
    kind: ServiceFence
    name: productpage
    uid: d36e4be7-d66c-4f77-a9ff-14a4bf4641e6
  resourceVersion: "324118"
  uid: ec283a14-8746-42d3-87d1-0ee4538f0ac0
spec:
  egress:
  - hosts:
    - istio-system/*
    - mesh-operator/*
    - '*/global-sidecar.default.svc.cluster.local'
  workloadSelector:
    labels:
      app: productpage

First Visit and Observation

Visit the productpage website, and use kubectl logs -f productpage-xxx -c istio-proxy -n default to observe the access log of productpage.

[2021-08-06T06:04:36.912Z] "GET /details/0 HTTP/1.1" 200 - via_upstream - "-" 0 178 43 43 "-" "Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/92.0.4515.107 Safari/537.36" "48257260-1f5f-92fa-a18f-ff8e2b128487" "details:9080" "172.17.0.17:9080" outbound|9080||global-sidecar.default.svc.cluster.local 172.17.0.11:45422 10.101.207.55:9080 172.17.0.11:56376 - -
[2021-08-06T06:04:36.992Z] "GET /reviews/0 HTTP/1.1" 200 - via_upstream - "-" 0 375 1342 1342 "-" "Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/92.0.4515.107 Safari/537.36" "48257260-1f5f-92fa-a18f-ff8e2b128487" "reviews:9080" "172.17.0.17:9080" outbound|9080||global-sidecar.default.svc.cluster.local 172.17.0.11:45428 10.106.126.147:9080 172.17.0.11:41130 - -

It is clear that the backend of productpage is global-sidecar.

Now we get the sidecar yaml.

$ kubectl get sidecar productpage -oyaml
apiVersion: networking.istio.io/v1beta1
kind: Sidecar
metadata:
  creationTimestamp: "2021-08-06T03:23:05Z"
  generation: 2
  name: productpage
  namespace: default
  ownerReferences:
  - apiVersion: microservice.slime.io/v1alpha1
    blockOwnerDeletion: true
    controller: true
    kind: ServiceFence
    name: productpage
    uid: 27853fe0-01b3-418f-a785-6e49db0d201a
  resourceVersion: "498810"
  uid: e923e426-f0f0-429a-a447-c6102f334904
spec:
  egress:
  - hosts:
    - '*/details.default.svc.cluster.local'
    - '*/reviews.default.svc.cluster.local'
    - istio-system/*
    - mesh-operator/*
    - '*/global-sidecar.default.svc.cluster.local'
  workloadSelector:
    labels:
      app: productpage

Details and reviews are already added into sidecar!

Second Visit and Observation

Visit the productpage website again, and use kubectl logs -f productpage-xxx -c istio-proxy -n default to observe the access log of productpage.

[2021-08-06T06:05:47.068Z] "GET /details/0 HTTP/1.1" 200 - via_upstream - "-" 0 178 46 46 "-" "Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/92.0.4515.107 Safari/537.36" "1c1c8e23-24d3-956e-aec0-e4bcff8df251" "details:9080" "172.17.0.6:9080" outbound|9080||details.default.svc.cluster.local 172.17.0.11:58522 10.101.207.55:9080 172.17.0.11:57528 - default
[2021-08-06T06:05:47.160Z] "GET /reviews/0 HTTP/1.1" 200 - via_upstream - "-" 0 379 1559 1558 "-" "Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/92.0.4515.107 Safari/537.36" "1c1c8e23-24d3-956e-aec0-e4bcff8df251" "reviews:9080" "172.17.0.10:9080" outbound|9080||reviews.default.svc.cluster.local 172.17.0.11:60104 10.106.126.147:9080 172.17.0.11:42280 - default

The backends are details and reviews now.

Uninstall

Uninstall bookinfo.

$ kubectl delete -f "https://raw.githubusercontent.com/slime-io/lazyload/$latest_tag/install/config/bookinfo.yaml"

Uninstall slime.

$ /bin/bash -c "$(curl -fsSL https://raw.githubusercontent.com/slime-io/lazyload/$latest_tag/install/samples/lazyload/easy_uninstall_lazyload.sh)"

Remarks

If you want to use customized shell scripts or yaml files, please set $custom_tag_or_commit.

$ export custom_tag_or_commit=xxx

If the command includes a yaml file, use $custom_tag_or_commit instead of $latest_tag.

#$ kubectl apply -f "https://raw.githubusercontent.com/slime-io/lazyload/$latest_tag/install/config/bookinfo.yaml"
$ kubectl apply -f "https://raw.githubusercontent.com/slime-io/lazyload/$custom_tag_or_commit/install/config/bookinfo.yaml"

If the command includes a shell script, add $custom_tag_or_commit as a parameter to the shell script.

#$ /bin/bash -c "$(curl -fsSL https://raw.githubusercontent.com/slime-io/lazyload/$latest_tag/install/samples/smartlimiter/easy_install_limiter.sh)"
$ /bin/bash -c "$(curl -fsSL https://raw.githubusercontent.com/slime-io/lazyload/$latest_tag/install/samples/smartlimiter/easy_install_limiter.sh)" $custom_tag_or_commit
Comments
  • Bugfix for auto fence enable

    1. A no-auto-fence label is added to the global-sidecar, so a fence is no longer auto-created for the global-sidecar when fencing is enabled at the namespace level.
    2. The auto-created fence name now equals svc.spec.selector instead of svc.name. For example, for svc/details with spec.selector[app]: details-new, the previous version created fence/details, while the current version creates fence/details-new.
  • Static configuration capability enhancement of Servicefence.Spec

    Lazyload supports dynamic updating of service dependencies via metric, but in some scenarios, users want to add some static service dependencies when lazy loading is enabled, such as all services under a namespace, or services with certain labels. We will carefully consider the actual requirements and implement similar functionality through fence's spec field.

  • Support auto enable lazyload via AutoFence param

    related to https://github.com/slime-io/lazyload/pull/36

    Support for enabling lazyload for services, either manually or automatically, via the autoFence parameter. Enabling lazyload here refers to the creation of the serviceFence resource, which generates the Sidecar CR.

    Support for specifying whether lazyload is globally enabled in automatic mode via the defaultCreateFence parameter.

    The configuration is as follows

    ---
    apiVersion: config.netease.com/v1alpha1
    kind: SlimeBoot
    metadata:
      name: lazyload
      namespace: mesh-operator
    spec:
      module:
        - name: lazyload
          kind: lazyload
          enable: true
          general:
            autoFence: true # true for automatic mode, false for manual mode; defaults to manual mode
            defaultCreateFence: true # default behaviour in auto mode: true creates the ServiceFence, false does not; defaults to not creating
      # ...
    

    Auto mode

    Auto mode is entered when the autoFence parameter is true. The set of services for which lazyload is enabled in auto mode is adjusted along three dimensions, listed below.

    Service Level - label slime.io/serviceFenced

    • false: not auto enable

    • true: auto enable

    • other values or empty: use namespace level configuration

    Namespace Level - label slime.io/serviceFenced

    • false: not auto enable for this namespace
    • true: auto enable for this namespace
    • other values or empty: use global level configuration

    Global Level - defaultCreateFence param of lazyload module

    • false: not auto enable for all
    • true: auto enable for all

    Priority: Service Level > Namespace Level > Global Level
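
    The decision logic can be summarized in a small Go sketch (illustrative only, not the module's actual code):

    // autoFenceEnabled reports whether a ServiceFence should be auto-created
    // for a service, given its own label value, its namespace's label value
    // and the global defaultCreateFence parameter, per the priority above.
    func autoFenceEnabled(svcLabel, nsLabel string, defaultCreateFence bool) bool {
        switch svcLabel {
        case "true":
            return true
        case "false":
            return false
        }
        switch nsLabel {
        case "true":
            return true
        case "false":
            return false
        }
        return defaultCreateFence
    }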

    Note: auto-generated ServiceFence resources are labeled with app.kubernetes.io/created-by=fence-controller, which enables state-association changes. ServiceFence resources without this label are considered manually configured and are not affected by the labels above.

    Example

    Namespace testns has 3 services, svc1, svc2, svc3

    • When autoFence is true and defaultCreateFence is true, ServiceFence resources are auto-generated for all three services
    • Label the namespace with slime.io/serviceFenced: "false": all three ServiceFence disappear
    • Label svc1 with slime.io/serviceFenced: "true": a ServiceFence is created for svc1
    • Delete the labels on the Namespace and the Service: the three ServiceFence are created again

    Sample configuration

    apiVersion: v1
    kind: Namespace
    metadata:
      labels:
        istio-injection: enabled
        slime.io/serviceFenced: "false"
      name: testns
    ---
    apiVersion: v1
    kind: Service
    metadata:
      annotations: {}
      labels:
        app: svc1
        service: svc1
        slime.io/serviceFenced: "true"
      name: svc1
      namespace: testns
    

    Manual mode

    When the autoFence parameter is false, lazyload is enabled in manual mode, requiring the user to create the ServiceFence resource manually. This enablement is Service level.

  • Improve global-sidecar request forwarding capabilities

    Problem Solved

    Previously, global-sidecar calculated the forwarding destination address from the request header "Slime-Orig-Dest". When that header was empty, forwarding was rejected.

    This PR extends the global-sidecar usage scenarios. When the request header "Slime-Orig-Dest" is empty, global-sidecar uses request.host plus the port on which it accepted the request as the destination address, and forwards the request directly without generating an accesslog metric.
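
    A minimal Go sketch of this fallback destination choice (hypothetical helper, not the actual global-sidecar code):

    package main

    import (
        "fmt"
        "net"
        "net/http"
    )

    // forwardDest picks the forwarding destination: prefer the
    // "Slime-Orig-Dest" header; otherwise fall back to the request host
    // plus the port on which the global-sidecar accepted the request.
    func forwardDest(r *http.Request, listenPort int) string {
        if dest := r.Header.Get("Slime-Orig-Dest"); dest != "" {
            return dest
        }
        host := r.Host // may already include a port
        if h, _, err := net.SplitHostPort(host); err == nil {
            host = h
        }
        return fmt.Sprintf("%s:%d", host, listenPort)
    }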

    [Figure: disaster-recovery-arc] Note that the picture is not fully accurate: traffic forwarded by global-sidecar is routed through the sidecarProxy when it reaches the destination, unless traffic hijacking has been disabled.

    This allows business traffic to be directed to the global-sidecar for correct forwarding when there is a problem with some business sidecarProxy. This implies some sense of disaster recovery capability.

    Notes

    • general.wormholePort defines which service ports' requests should be forwarded, joined by ";"
    • replicas of global-sidecar can be defined in component.globalSidecar.replicas
    • resources of global-sidecar can be defined in component.globalSidecar.resources

    Usage Example

    ---
    apiVersion: config.netease.com/v1alpha1
    kind: SlimeBoot
    metadata:
      name: lazyload
      namespace: mesh-operator
    spec:
      image:
        pullPolicy: Always
        repository: registry.cn-hangzhou.aliyuncs.com/slimeio/slime-lazyload
        tag: master-78f3d09_linux_amd64-dirty_e09cde4
      module:
        - name: lazyload-test
          kind: lazyload
          enable: true
          general:
            wormholePort: # replace with your application's service ports; extend the list for multiple ports
              - "9080"
              - "9090"
          global:
            misc:
              globalSidecarMode: cluster
              metricSourceType: accesslog
      component:
        globalSidecar:
          enable: true
          sidecarInject:
            enable: true # should be true
            mode: pod
            labels:
              sidecar.istio.io/inject: "true"
          replicas: 2
          resources:
            requests:
              cpu: 200m
              memory: 200Mi
            limits:
              cpu: 400m
              memory: 400Mi
          image:
            repository: registry.cn-hangzhou.aliyuncs.com/slimeio/slime-global-sidecar
            tag: master-5d96f4d_linux_amd64
          probePort: 28888
    
  • Support domain alias in fence module config

    New feature

    This feature supports adding to the SidecarScope not only the name of a dependent target service but also a set of aliases derived from it by the regex conversion rules in the new domainAlias field.

    message Fence {
      // ...  
      // domain alias rule
      repeated DomainAlias domainAlias = 5;
    }
    
    message DomainAlias {
      string pattern = 1;
      repeated string templates = 2;
    }
    

    Both static and dynamic dependent target services are supported.

    Usage example

    Add domainAlias to the fence module config, with the pattern in single quotes:

    apiVersion: config.netease.com/v1alpha1
    kind: SlimeBoot
    metadata:
      name: lazyload
      namespace: mesh-operator
    spec:
      module:
        - name: lazyload-test
          kind: lazyload
          enable: true
          general:
            wormholePort: # replace with your application's service ports; extend the list for multiple ports
              - "9080"
            domainAliases: 
              - pattern: '(?P<service>[^\.]+)\.(?P<namespace>[^\.]+)\.svc\.cluster\.local$'
                templates:
                  - "$namespace.$service.mailsaas"
      #...
    
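
    A minimal Go sketch of how such a pattern/template conversion behaves (illustrative only, using Go regexp named capture groups, not the module's actual implementation):

    package main

    import (
        "fmt"
        "regexp"
    )

    // expandAliases matches host against pattern and expands each
    // template's $name references with the captured groups.
    func expandAliases(pattern string, templates []string, host string) []string {
        re := regexp.MustCompile(pattern)
        m := re.FindStringSubmatchIndex(host)
        if m == nil {
            return nil
        }
        var out []string
        for _, t := range templates {
            out = append(out, string(re.ExpandString(nil, t, host, m)))
        }
        return out
    }

    func main() {
        p := `(?P<service>[^\.]+)\.(?P<namespace>[^\.]+)\.svc\.cluster\.local$`
        // prints [default.details.mailsaas]
        fmt.Println(expandAliases(p, []string{"$namespace.$service.mailsaas"}, "details.default.svc.cluster.local"))
    }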

    serviceFence example

    apiVersion: microservice.slime.io/v1alpha1
    kind: ServiceFence
    metadata:
      name: ratings
      namespace: default
    spec:
      enable: true
      host:
        details.default.svc.cluster.local: # static dependent service
          stable: {}
    status:
      domains:
        default.details.mailsaas: # static dependent service converted result
          hosts:
          - default.details.mailsaas
        default.productpage.mailsaas: # dynamic dependent service converted result
          hosts:
          - default.productpage.mailsaas
        details.default.svc.cluster.local:
          hosts:
          - details.default.svc.cluster.local
        productpage.default.svc.cluster.local:
          hosts:
          - productpage.default.svc.cluster.local
      metricStatus:
        '{destination_service="productpage.default.svc.cluster.local"}': "1" # dynamic dependent service
    

    sidecar example

    apiVersion: networking.istio.io/v1beta1
    kind: Sidecar
    metadata:
      name: ratings
      namespace: default
    spec:
      egress:
      - hosts:
        - '*/default.details.mailsaas' # static dependent service converted result
        - '*/default.productpage.mailsaas' # dynamic dependent service converted result
        - '*/details.default.svc.cluster.local'
        - '*/productpage.default.svc.cluster.local'
        - istio-system/*
        - mesh-operator/*
      workloadSelector:
        labels:
          app: ratings
    
  • Update the rule of auto complete target service name

    If service A requests a service outside the cluster, like baidu.com, the previous version of Lazyload would wrongly add '*/baidu.com.svc.cluster.local' to sidecar A. Now Lazyload determines whether to append the suffix svc.cluster.local or cluster.local according to the service list in the cluster, so it adds '*/baidu.com' to sidecar A.

  • Cluster mode supports all namespaces in mesh to use lazyload automatically

    In cluster mode, all namespaces in the service mesh can use Lazyload; there is no need to explicitly specify a list of namespaces as in namespace mode anymore.

  • Support prometheus as lazyload metric source in istio 1.12+

    In istio 1.12+, the way metadata_exchange filters are inserted has changed (related PR: Add metadata exchange as native code), so these filters are also inserted into the global-sidecar.

    As a result, the Prometheus metric we want, service A -> service B, is reported as service A -> global-sidecar plus global-sidecar -> service B. In the previous version, Lazyload could no longer derive the right relationships in Prometheus metric mode.

    In this PR, we remove the metadata_exchange filters from the global-sidecar via an EnvoyFilter named global-sidecar-metadata-exchange-remove, which solves this problem.
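
    A hedged sketch of what such a removal EnvoyFilter could look like (the workload label and filter names here are assumptions, not the module's verbatim resource):

    apiVersion: networking.istio.io/v1alpha3
    kind: EnvoyFilter
    metadata:
      name: global-sidecar-metadata-exchange-remove
      namespace: mesh-operator
    spec:
      workloadSelector:
        labels:
          app: global-sidecar        # assumed label of the global-sidecar workload
      configPatches:
      - applyTo: HTTP_FILTER
        match:
          context: SIDECAR_INBOUND
          listener:
            filterChain:
              filter:
                name: envoy.filters.network.http_connection_manager
                subFilter:
                  name: istio.metadata_exchange   # assumed filter name
        patch:
          operation: REMOVE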

    Note

    All istio 1.12+ versions (latest 1.13.2 so far) have a problem with the EnvoyFilter REMOVE operation, so lazyload with Prometheus metrics cannot run on these versions (related issue: EnvoyFilter VIRTUAL_HOST REMOVE patch operation will destroy route configuration #36357). The last bugfix commit is "fix filter removal for http filter" from 2022.03.16, not contained in any istio release.

    We provide a new istiod image registry.cn-hangzhou.aliyuncs.com/slimeio/pilot:1.13.2-bugfix based on istio 1.13.2 in order to run lazyload with prometheus metric.

  • Complete fence module config migration

    Preceding work: Move fence module config from framework to module.

    • Migrate the namespace and dispatches fields to the fence module.
    • Prefer the general field for filling the fence module config. If general is empty, fall back to the deprecated fence field in the slimeboot operator.
    • A valid name or kind of the module is required to use lazyload.
  • Make dynamic service dependency info durable

    related to https://github.com/slime-io/slime/pull/138

    Step 1: Make dynamic service dependency info durable. On startup, lazyload generates a cache containing dynamic service dependency info from the existing ServiceFences, whose status.metricStatus carries the useful info. After this change, dynamic relationships are no longer lost when the lazyload pod is restarted or deleted. This step has been completed in this PR.

    Step 2: Make dynamic service dependency info time-sensitive. After step 1, lazyload never deletes dynamic service dependency info. Suppose a dynamic dependency is not invoked again for a long time after a certain occurrence; we should then have a mechanism to delete it. One approach is to record the time of the most recent invocation for all service calls, set a global time threshold that triggers deletion, and also allow users to specify a dedicated threshold for specific services. This will be done in subsequent development.

  • Support for disable ServiceFence auto generating

    related to https://github.com/slime-io/slime/pull/137

    In previous versions, namespace- and service-level ServiceFence auto-generation was always enabled. Now it can be disabled through a new field in the fence config, as below. The default value is false.

    {
    	"general": {
    		"wormholePort": [
    			"9090"
    		],
    		"disableAutoFence": true
    	},
    	"name": "lazyload"
    }
    
  • Can slimeboot_no_global_sidecar.yaml be used?

    Lazyload can read metrics from accesslog and from prometheus, but the examples for all modes deploy the global-sidecar. Can slimeboot_no_global_sidecar.yaml be used?

  • Make dynamic service dependency persistence

    Background

    Lazyload has two metric sources: Prometheus and AccessLog. In Prometheus mode, the data is stored in Prometheus and the LazyLoad Controller queries the results in real time without storing them, so this mode does not require a persistence change. In AccessLog mode, the global-sidecar sends the AccessLog containing the service dependencies to the LazyLoad Controller after it has completed the fallback forwarding. The Controller receives and processes the data and stores it in memory as a map. Once the Controller restarts, all in-memory data is lost; when the ServiceFence is then updated, its Status.MetricStatus is cleared and the dynamic service dependencies are lost.

    Thinking

    Step 1 Simple Persistence

    The problem arises because the cache map is initialised empty. In fact, the status.metricStatus of each ServiceFence is a good persistent record of the dynamic service dependencies, so initialization can be completed by the reverse conversion servicefence.status.metricStatus -> cache map. Afterwards, updates still flow one way: cache map -> servicefence.status.metricStatus.
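
    A minimal Go sketch of that reverse initialization, assuming simplified stand-in types (the real ServiceFence type lives in the slime API packages):

    // ServiceFence is a simplified stand-in for the real CRD type.
    type ServiceFence struct {
        Namespace, Name string
        Status          struct {
            MetricStatus map[string]string
        }
    }

    // rebuildCache initializes the in-memory cache map from the
    // status.metricStatus of the existing ServiceFences.
    func rebuildCache(fences []ServiceFence) map[string]map[string]string {
        cache := make(map[string]map[string]string, len(fences))
        for _, sf := range fences {
            key := sf.Namespace + "/" + sf.Name
            cache[key] = make(map[string]string, len(sf.Status.MetricStatus))
            for metric, v := range sf.Status.MetricStatus {
                cache[key][metric] = v
            }
        }
        return cache
    }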

    Step 2 Time-sensitive Persistence

    In Step 1, the contents of the cache map are never deleted. Suppose a dynamic dependency is not invoked again for a long time after a certain occurrence; we should then have a mechanism to delete it. One approach is to record when the accesslog metric is generated, set a global time threshold that triggers deletion, and delete the metric when the condition is met; we could even allow users to specify a dedicated threshold for a particular service. One problem with this: once a metric is in the cache map, subsequent calls go direct and no longer pass through the global-sidecar, so the same accesslog metric is never generated again. Lazyload would then see the dependency as inactive regardless of whether calls are actually being made, resulting in all metrics being periodically deleted.

    Design

    Step 1 Simple Persistence

    AccessLogConvertorConfig adds an InitCache field. When initializing the producer in lazyload, fetch the existing ServiceFence information with the help of the DynamicClient provided by bootstrap.Environment, then convert that info into the initial initCache.

    Step 2 Time-sensitive Persistence

    TODO

  • The compatibility of Lazyload with Istio

    Update (2022.04.25)

    It has been verified that Istio 1.13.3 has fixed all relevant problems. We currently recommend versions 1.13.3+, 1.11.3 or 1.10.4.

    Update (2022.03.22)

    There are still some bugs with the EnvoyFilter REMOVE operation in Istio releases 1.13.1 ~ 1.13.2. The last bugfix commit is "fix filter removal for http filter" from 2022.03.16, not contained in any istio release.

    Lazyload with prometheus metric cannot run on these versions, while lazyload with accesslog metric has no problem.

    We provide a new istiod image registry.cn-hangzhou.aliyuncs.com/slimeio/pilot:1.13.2-bugfix based on istio 1.13.2 in order to run lazyload with prometheus metric.

    As far as we know, this problem will be solved in istio 1.13.3+.

    Update (2022.03.08)

    It has been verified that this issue has been fixed in istio 1.13. The currently recommended versions of istio are:

    • 1.13 All official versions
    • 1.11.0 ~ 1.11.3
    • 1.10.0 ~ 1.10.4
    • 1.9.0 ~ 1.9.8

    Versions not recommended for use are:

    • 1.12 All versions
    • 1.11.4+
    • 1.10.5 ~ 1.10.6
    • 1.9.9

    After testing, Lazyload currently has compatibility issues with the latest Istio versions. The cause is most likely that new code in Istio makes EnvoyFilter's REMOVE operation on VIRTUAL_HOST fail. This prevents the Lazyload module's fallback EnvoyFilter to-global-sidecar from taking effect.

    Istio community issue tracking this problem: EnvoyFilter VIRTUAL_HOST REMOVE patch operation will destroy route configuration #36357

    PR suspected of introducing the problem: do not build catch all virtual host every time #35449

    Theoretically, any version of Istio released after October 2, 2021 could have the problem. The versions currently known to be affected are:

    • 1.12.0, 1.12.1
    • 1.11.4, 1.11.5
    • 1.10.5, 1.10.6
    • 1.9.9

    We are currently recommending versions 1.11.3 or 1.10.4. We will continue to follow up and update this issue as progress is made.

  • Support ServiceEntry

    Currently lazyload supports k8s Service. In some scenarios, ServiceEntry is also useful. Therefore, we will seriously consider solutions to support enabling lazyload for ServiceEntry.
