The Ultimate Engineer Toolbox YouTube 🔨 🔧

A collection of tools and hands-on walkthroughs with source code.
The ultimate Swiss Army knife for DevOps, Developers and Platform Engineers


| Steps | Playlist 📺 | Source :octocat: |
| --- | --- | --- |
| Learn Kubernetes ❄️ | Kubernetes Guide | source |
| Learn about CI/CD tools 🐳 | CI/CD Guide | |
| Deploy Kubernetes to the cloud | Cloud Guide | source |
| Monitoring Kubernetes 🔍 | Cloud Guide | source |
| Guide to Logging 📃 | Cloud Guide | source |
| Guide to ServiceMesh 🌐 | Cloud Guide | source |

Docker Development Basics

| Step ✔️ | Video 🎥 | Source Code :octocat: |
| --- | --- | --- |
| Working with Dockerfiles (.NET, Golang, Python, NodeJS) | Docker 1 | source |
| Working with code (.NET, Golang, Python, NodeJS) | Docker 1 | source |
| Docker Multistage explained | Docker 1 | source |
| Debugging Go in Docker | Docker 1 | source |
| Debugging .NET in Docker | Docker 1 | source |
| Debugging Python in Docker | Docker 1 | source |
| Debugging NodeJS in Docker | Docker 1 | source |

Engineering Toolbox 🔨 🔧

Check out the toolbox website


Comments
  • Fatal config file error for sentinel

    Hey @marcel-dempers, I copied your sentinel script exactly as-is and I'm getting the following output:

    Wed, Dec 1 2021 8:47:24 am | *** FATAL CONFIG FILE ERROR (Redis 6.2.3) ***
    Wed, Dec 1 2021 8:47:24 am | Reading the configuration file, at line 4
    Wed, Dec 1 2021 8:47:24 am | >>> 'sentinel monitor mymaster 6379 2'
    Wed, Dec 1 2021 8:47:24 am | Unrecognized sentinel configuration statement.

    Any ideas? I'm going through the docs right now but not seeing anything obviously wrong.

    apiVersion: apps/v1
    kind: StatefulSet
    metadata:
      name: sentinel
    spec:
      serviceName: sentinel
      replicas: 3
      selector:
        matchLabels:
          app: sentinel
      template:
        metadata:
          labels:
            app: sentinel
        spec:
          initContainers:
          - name: config
            image: redis:6.2.3-alpine
            command: [ "sh", "-c" ]
            args:
              - |
                REDIS_PASSWORD=a-very-complex-password-here
                nodes=redis-0.redis.redis.svc.cluster.local,redis-1.redis.redis.svc.cluster.local,redis-2.redis.redis.svc.cluster.local
    
                for i in ${nodes//,/ }
                do
                    echo "finding master at $i"
                    MASTER=$(redis-cli --no-auth-warning --raw -h $i -a $REDIS_PASSWORD info replication | awk '{print $1}' | grep master_host: | cut -d ":" -f2)
                    if [ "$MASTER" == "" ]; then
                        echo "no master found"
                        MASTER=
                    else
                        echo "found $MASTER"
                        break
                    fi
                done
                echo "sentinel monitor mymaster $MASTER 6379 2" >> /tmp/master
                echo "port 5000
                sentinel resolve-hostnames yes
                sentinel announce-hostnames yes
                $(cat /tmp/master)
                sentinel down-after-milliseconds mymaster 5000
                sentinel failover-timeout mymaster 60000
                sentinel parallel-syncs mymaster 1
                sentinel auth-pass mymaster $REDIS_PASSWORD
                " > /etc/redis/sentinel.conf
                cat /etc/redis/sentinel.conf
            volumeMounts:
            - name: redis-config
              mountPath: /etc/redis/
          containers:
          - name: sentinel
            image: redis:6.2.3-alpine
            command: ["redis-sentinel"]
            args: ["/etc/redis/sentinel.conf"]
            ports:
            - containerPort: 5000
              name: sentinel
            volumeMounts:
            - name: redis-config
              mountPath: /etc/redis/
            - name: data
              mountPath: /data
          volumes:
          - name: redis-config
            emptyDir: {}
      volumeClaimTemplates:
      - metadata:
          name: data
        spec:
          accessModes: [ "ReadWriteOnce" ]
          storageClassName: "longhorn"
          resources:
            requests:
              storage: 50Mi
    ---
    apiVersion: v1
    kind: Service
    metadata:
      name: sentinel
    spec:
      clusterIP: None
      ports:
      - port: 5000
        targetPort: 5000
        name: sentinel
      selector:
        app: sentinel
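    The "Unrecognized sentinel configuration statement" at line 4 lines up with the quoted config: 'sentinel monitor mymaster 6379 2' has no host field, which is what the init script above produces when $MASTER ends up empty. A minimal standalone sketch of that failure mode and a guard the script could add (simulated values, not a confirmed fix):

```shell
# Sketch: the init script writes "sentinel monitor mymaster $MASTER 6379 2".
# When MASTER is empty, the host drops out and only 5 tokens remain,
# which sentinel rejects at startup.
MASTER=""   # simulate redis-cli finding no master
line="sentinel monitor mymaster $MASTER 6379 2"

# shellcheck disable=SC2086
set -- $line                  # word-split the generated directive
if [ "$#" -ne 6 ]; then
  echo "refusing to write sentinel.conf: monitor line has $# tokens: $line"
  # a real init container could 'exit 1' here so the pod restarts and retries
else
  echo "ok: $line"
fi
```

    Failing fast in the init container, instead of writing a broken sentinel.conf, makes the restart loop surface the real problem: the master lookup against the redis pods returned nothing.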
    
  • errors while running in k8s cluster in GKE

    ➜ POC-admission-controller git:(master) ✗ kubectl logs -f example-webhook-78c8bc67b7-p95gd -n k8s-controller
    panic: pods is forbidden: User "system:serviceaccount:k8s-controller:example-webhook" cannot list resource "pods" in API group "" at the cluster scope

    goroutine 1 [running]:
    main.test()
            /app/test.go:14 +0x1a8
    main.main()
            /app/main.go:93 +0x392
    ➜ POC-admission-controller git:(master) ✗

    @marcel-dempers
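    For context, the panic above is a plain RBAC denial: the webhook's service account has no cluster-scope permission to list pods. A sketch of the kind of grant that addresses it, with names taken from the error message and the rule list kept deliberately minimal (verify against what the controller actually needs):

```yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: example-webhook-pod-reader   # hypothetical name
rules:
- apiGroups: [""]                    # core API group, as in the error
  resources: ["pods"]
  verbs: ["get", "list", "watch"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: example-webhook-pod-reader   # hypothetical name
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: example-webhook-pod-reader
subjects:
- kind: ServiceAccount
  name: example-webhook              # from the error message
  namespace: k8s-controller
```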

  • Deployment failing in Kubernetes Deployments for Beginners

    Hey Marcel,

    Your videos are awesome.

    I was trying to follow the steps from your deployments video in my local environment but had a few issues. It looks like the deployments.yaml in your repo has changed a bit from the one shown in the deployments video.

    The current deployments.yaml in your master branch has a couple of issues:

    • the configmap & secrets volumes don't exist
    • the aimvector/golang:1.0.0 image fails due to missing /configs/config.json file

    Cheers,

    Tim

  • docker build is failing on slave

    Hello,

    I followed your tutorial, but when I tried to build a new job in pipeline, it results in an error

    Started by user jenkins
    Obtained jenkins/JenkinsFile from git https://github.com/marcel-dempers/docker-development-youtube-series.git
    Running in Durability level: MAX_SURVIVABILITY
    [Pipeline] Start of Pipeline
    [Pipeline] node
    Still waiting to schedule task
    ‘jenkins-slave-mbm4d’ is offline
    Agent jenkins-slave-mbm4d is provisioned from template Kubernetes Pod Template
    ---
    apiVersion: "v1"
    kind: "Pod"
    metadata:
      annotations: {}
      labels:
        jenkins: "slave"
        jenkins/label: "jenkins-slave"
      name: "jenkins-slave-mbm4d"
    spec:
      containers:
      - env:
        - name: "JENKINS_SECRET"
          value: "********"
        - name: "JENKINS_TUNNEL"
          value: "jenkins:50000"
        - name: "JENKINS_AGENT_NAME"
          value: "jenkins-slave-mbm4d"
        - name: "JENKINS_NAME"
          value: "jenkins-slave-mbm4d"
        - name: "JENKINS_AGENT_WORKDIR"
          value: "/home/jenkins/agent"
        - name: "JENKINS_URL"
          value: "http://jenkins/"
        image: "aimvector/jenkins-slave"
        imagePullPolicy: "IfNotPresent"
        name: "jnlp"
        resources:
          limits: {}
          requests: {}
        securityContext:
          privileged: false
        tty: true
        volumeMounts:
        - mountPath: "/var/run/docker.sock"
          name: "volume-0"
          readOnly: false
        - mountPath: "/home/jenkins/agent"
          name: "workspace-volume"
          readOnly: false
        workingDir: "/home/jenkins/agent"
      hostNetwork: false
      nodeSelector:
        beta.kubernetes.io/os: "linux"
      restartPolicy: "Never"
      securityContext: {}
      volumes:
      - hostPath:
          path: "/var/run/docker.sock"
        name: "volume-0"
      - emptyDir:
          medium: ""
        name: "workspace-volume"
    
    Running on jenkins-slave-mbm4d in /home/jenkins/agent/workspace/test
    [Pipeline] {
    [Pipeline] stage
    [Pipeline] { (test pipeline)
    [Pipeline] sh
    + echo hello
    hello
    + git clone https://github.com/marcel-dempers/docker-development-youtube-series.git
    Cloning into 'docker-development-youtube-series'...
    + cd ./docker-development-youtube-series/golang
    + docker build . -t test
    time="2019-12-24T02:08:35Z" level=error msg="failed to dial gRPC: cannot connect to the Docker daemon. Is 'docker daemon' running on this host?: dial unix /var/run/docker.sock: connect: permission denied"
    context canceled
    [Pipeline] }
    [Pipeline] // stage
    [Pipeline] }
    [Pipeline] // node
    [Pipeline] End of Pipeline
    ERROR: script returned exit code 1
    Finished: FAILURE
    

    It is failing at

    time="2019-12-24T02:08:35Z" level=error msg="failed to dial gRPC: cannot connect to the Docker daemon. Is 'docker daemon' running on this host?: dial unix /var/run/docker.sock: connect: permission denied" context canceled
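    That permission error is about the mounted socket, not the Docker daemon itself: the agent container's user typically isn't in the group that owns /var/run/docker.sock on the node. A small standalone sketch of the check (generic file logic; the path default matches the pod template above):

```shell
# Sketch: does the current user share a group with the Docker socket?
sock="${DOCKER_SOCK:-/var/run/docker.sock}"

if [ -e "$sock" ]; then
  sock_gid="$(stat -c '%g' "$sock")"          # numeric GID owning the socket
  if id -G | tr ' ' '\n' | grep -qx "$sock_gid"; then
    echo "current user can reach $sock (gid $sock_gid)"
  else
    echo "current user lacks gid $sock_gid; add the jenkins user to that group"
  fi
else
  echo "$sock not present on this host"
fi
```

    Common remedies are building the agent image so the jenkins user belongs to a group matching the node's docker GID, or (less safely) widening the socket's permissions on the node.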

  • Debugging containerized python flask app with non-standard code organization with VS Code

    I am trying to debug my Python 3 Flask app using VS Code. I have the Docker and Python extensions installed, running in VS Code's Remote - WSL mode. The relevant part of the docker-compose.yml is:

      web:
        build: 
          context: .
        image: web
        container_name: web
        ports:
          - 5004:5000
        command: python manage.py run -h 0.0.0.0
        volumes:
          - .:/usr/src/app
        environment:
          - FLASK_DEBUG=1
          - APP_SETTINGS=project.server.config.DevelopmentConfig
        networks:
          - webnet
    

    When I follow the instructions as per your video, I end up commenting out the command statement in my docker-compose file and adding a new stage to my Dockerfile like so:

    ENV FLASK_APP=manage.py
    
    # ###########START NEW IMAGE : DEBUGGER ###################
    FROM base as debug
    RUN pip install ptvsd
    
    WORKDIR /usr/src/app
    CMD python -m ptvsd --host 0.0.0.0 --port 5678 --wait --multiprocess -m flask run -h 0.0.0.0 -p 5000
    

    and a launch.json file like:

    // .vscode/launch.json
    {
      "configurations": [
        {
          "name": "Python Attach",
          "type": "python",
          "request": "attach",
          "pathMappings": [
            {
              "localRoot": "${workspaceFolder}",
              "remoteRoot": "/usr/src/app"
            }
          ], 
          "port": 5678, 
          "host": "127.0.0.1"
        }
      ]
    }
    

    Now here comes the kicker. My app is composed like so:

    manage.py        (has the statement app = create_app())
    project
      server
        - __init__.py   (this is where create_app() is defined using Flask() with appropriate config)
        - config.py
        main            (rest of the app)

    When I try to debug this, I can apply breakpoints in manage.py and they trigger fine on app start, but my breakpoints in the views, which are located in .\project\server\main\views.py, do not get triggered. I am guessing this has to do either with how I am initiating the debug sub-process, or with my pathMappings in launch.json.

    Any suggestions to debug this are appreciated. Sorry about the long post.
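    Two hedged guesses worth ruling out here, rather than a confirmed fix: the Flask reloader spawning a child process the debugger does not follow (try flask run --no-reload while debugging), and the path mapping not covering the package the views are imported from. A sketch of a narrower mapping, with paths assumed from the layout described above:

```json
{
  "configurations": [
    {
      "name": "Python Attach (project package)",
      "type": "python",
      "request": "attach",
      "port": 5678,
      "host": "127.0.0.1",
      "pathMappings": [
        {
          "localRoot": "${workspaceFolder}/project",
          "remoteRoot": "/usr/src/app/project"
        }
      ]
    }
  ]
}
```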

  • Bump yajl-ruby from 1.4.1 to 1.4.3 in /monitoring/logging/fluentd/kubernetes/dockerfiles

    Bumps yajl-ruby from 1.4.1 to 1.4.3.


    Dependabot will resolve any conflicts with this PR as long as you don't alter it yourself. You can also trigger a rebase manually by commenting @dependabot rebase.


    Dependabot commands and options

    You can trigger Dependabot actions by commenting on this PR:

    • @dependabot rebase will rebase this PR
    • @dependabot recreate will recreate this PR, overwriting any edits that have been made to it
    • @dependabot merge will merge this PR after your CI passes on it
    • @dependabot squash and merge will squash and merge this PR after your CI passes on it
    • @dependabot cancel merge will cancel a previously requested merge and block automerging
    • @dependabot reopen will reopen this PR if it is closed
    • @dependabot close will close this PR and stop Dependabot recreating it. You can achieve the same result by closing it manually
    • @dependabot ignore this major version will close this PR and stop Dependabot creating any more for this major version (unless you reopen the PR or upgrade to it yourself)
    • @dependabot ignore this minor version will close this PR and stop Dependabot creating any more for this minor version (unless you reopen the PR or upgrade to it yourself)
    • @dependabot ignore this dependency will close this PR and stop Dependabot creating any more for this dependency (unless you reopen the PR or upgrade to it yourself)
    • @dependabot use these labels will set the current labels as the default for future PRs for this repo and language
    • @dependabot use these reviewers will set the current reviewers as the default for future PRs for this repo and language
    • @dependabot use these assignees will set the current assignees as the default for future PRs for this repo and language
    • @dependabot use this milestone will set the current milestone as the default for future PRs for this repo and language

    You can disable automated security fix PRs for this repo from the Security Alerts page.

  • nginx ingress deployment issue

    I've followed the YouTube video and deployed using the yaml files, but I had to modify some v1beta1 to v1.

    My services don't get an external IP and the nginx-ingress-controller pods get status CrashLoopBackOff. I can see that the health check fails.

    I'm not running this on Docker Desktop. Instead I've installed four Ubuntu 22.04 VMs on a Proxmox server.

    Events:
      Type     Reason     Age                From               Message
      ----     ------     ----               ----               -------
      Normal   Scheduled  58s                default-scheduler  Successfully assigned ingress-nginx/nginx-ingress-controller-9776fbf95-7lsr6 to k8s-worker1
      Normal   Pulling    57s                kubelet            Pulling image "quay.io/kubernetes-ingress-controller/nginx-ingress-controller:0.33.0"
      Normal   Pulled     36s                kubelet            Successfully pulled image "quay.io/kubernetes-ingress-controller/nginx-ingress-controller:0.33.0" in 20.653688932s
      Warning  Unhealthy  36s                kubelet            Readiness probe failed: Get "http://10.244.1.6:10254/healthz": dial tcp 10.244.1.6:10254: connect: connection refused
      Normal   Created    16s (x3 over 36s)  kubelet            Created container nginx-ingress-controller
      Normal   Started    16s (x3 over 36s)  kubelet            Started container nginx-ingress-controller
      Normal   Pulled     16s (x2 over 35s)  kubelet            Container image "quay.io/kubernetes-ingress-controller/nginx-ingress-controller:0.33.0" already present on machine
      Warning  BackOff    7s (x7 over 34s)   kubelet            Back-off restarting failed container
    

    This is a list of everything in my cluster:

    NAMESPACE              NAME                                             READY   STATUS             RESTARTS        AGE
    default                pod/nginx-deployment-6595874d85-9lqkl            1/1     Running            0               160m
    default                pod/nginx-deployment-6595874d85-j8qj4            1/1     Running            0               160m
    default                pod/nginx-deployment-6595874d85-rd5gv            1/1     Running            0               160m
    default                pod/nginx-server                                 1/1     Running            0               160m
    example-app            pod/example-deploy-7988897cb8-jdbxg              1/1     Running            0               88m
    example-app            pod/example-deploy-7988897cb8-xrgf9              1/1     Running            0               88m
    ingress-nginx          pod/nginx-ingress-controller-9776fbf95-7lsr6     0/1     CrashLoopBackOff   6 (3m40s ago)   9m39s
    ingress-nginx          pod/nginx-ingress-controller-9776fbf95-lgbpx     0/1     CrashLoopBackOff   6 (3m32s ago)   9m39s
    kube-system            pod/coredns-6d4b75cb6d-jzkbt                     1/1     Running            0               3h30m
    kube-system            pod/coredns-6d4b75cb6d-z68cf                     1/1     Running            0               3h30m
    kube-system            pod/etcd-k8s-master                              1/1     Running            3               3h30m
    kube-system            pod/kube-apiserver-k8s-master                    1/1     Running            3               3h30m
    kube-system            pod/kube-controller-manager-k8s-master           1/1     Running            0               3h30m
    kube-system            pod/kube-flannel-ds-6kx27                        1/1     Running            0               3h26m
    kube-system            pod/kube-flannel-ds-qr6bp                        1/1     Running            0               164m
    kube-system            pod/kube-flannel-ds-rk4m2                        1/1     Running            0               174m
    kube-system            pod/kube-flannel-ds-tdjhs                        1/1     Running            0               3h2m
    kube-system            pod/kube-proxy-6t8g8                             1/1     Running            0               164m
    kube-system            pod/kube-proxy-984kj                             1/1     Running            0               3h30m
    kube-system            pod/kube-proxy-ddnlc                             1/1     Running            0               3h2m
    kube-system            pod/kube-proxy-ld844                             1/1     Running            0               174m
    kube-system            pod/kube-scheduler-k8s-master                    1/1     Running            3               3h30m
    kubernetes-dashboard   pod/dashboard-metrics-scraper-7bfdf779ff-sszdv   1/1     Running            0               157m
    kubernetes-dashboard   pod/kubernetes-dashboard-6cdd697d84-7jht4        1/1     Running            0               157m
    
    NAMESPACE              NAME                                TYPE           CLUSTER-IP       EXTERNAL-IP   PORT(S)                      AGE
    default                service/kubernetes                  ClusterIP      10.96.0.1        <none>        443/TCP                      3h30m
    default                service/nginx-http                  ClusterIP      10.104.87.158    <none>        80/TCP                       159m
    example-app            service/example-service             LoadBalancer   10.98.61.110     <pending>     80:31537/TCP                 84m
    ingress-nginx          service/ingress-nginx               LoadBalancer   10.96.36.79      <pending>     80:32451/TCP,443:30304/TCP   50m
    kube-system            service/kube-dns                    ClusterIP      10.96.0.10       <none>        53/UDP,53/TCP,9153/TCP       3h30m
    kubernetes-dashboard   service/dashboard-metrics-scraper   ClusterIP      10.104.144.188   <none>        8000/TCP                     157m
    kubernetes-dashboard   service/kubernetes-dashboard        ClusterIP      10.108.35.49     <none>        443/TCP                      157m
    
    NAMESPACE     NAME                             DESIRED   CURRENT   READY   UP-TO-DATE   AVAILABLE   NODE SELECTOR            AGE
    kube-system   daemonset.apps/kube-flannel-ds   4         4         4       4            4           <none>                   3h26m
    kube-system   daemonset.apps/kube-proxy        4         4         4       4            4           kubernetes.io/os=linux   3h30m
    
    NAMESPACE              NAME                                        READY   UP-TO-DATE   AVAILABLE   AGE
    default                deployment.apps/nginx-deployment            3/3     3            3           160m
    example-app            deployment.apps/example-deploy              2/2     2            2           88m
    ingress-nginx          deployment.apps/nginx-ingress-controller    0/2     2            0           9m39s
    kube-system            deployment.apps/coredns                     2/2     2            2           3h30m
    kubernetes-dashboard   deployment.apps/dashboard-metrics-scraper   1/1     1            1           157m
    kubernetes-dashboard   deployment.apps/kubernetes-dashboard        1/1     1            1           157m
    
    NAMESPACE              NAME                                                   DESIRED   CURRENT   READY   AGE
    default                replicaset.apps/nginx-deployment-6595874d85            3         3         3       160m
    example-app            replicaset.apps/example-deploy-7988897cb8              2         2         2       88m
    ingress-nginx          replicaset.apps/nginx-ingress-controller-9776fbf95     2         2         0       9m39s
    kube-system            replicaset.apps/coredns-6d4b75cb6d                     2         2         2       3h30m
    kubernetes-dashboard   replicaset.apps/dashboard-metrics-scraper-7bfdf779ff   1         1         1       157m
    kubernetes-dashboard   replicaset.apps/kubernetes-dashboard-6cdd697d84        1         1         1       157m
    

    Can you guess what's most likely the issue here?
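    One general triage habit with CrashLoopBackOff: the Warning events and the previous container's logs (kubectl logs --previous) carry the real reason; the readiness-probe failure above only says the process never came up. A toy sketch of pulling the Warning lines out of a describe dump (pure text filtering; the embedded string stands in for live kubectl output):

```shell
# Sketch: filter Warning events from `kubectl describe pod` output.
# Live usage would be:  kubectl describe pod <name> -n ingress-nginx | awk '$1 == "Warning"'
describe_output='  Normal   Started    16s  kubelet  Started container nginx-ingress-controller
  Warning  Unhealthy  36s  kubelet  Readiness probe failed: connection refused
  Warning  BackOff    7s   kubelet  Back-off restarting failed container'

printf '%s\n' "$describe_output" | awk '$1 == "Warning" { print }'
```

    It may also be worth checking whether the 0.33.0 controller image supports the cluster's Kubernetes version; that image predates several API removals.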

  • security issue on redis/kubernetes

    https://github.com/marcel-dempers/docker-development-youtube-series/blob/master/storage/redis/kubernetes/sentinel/sentinel-statefulset.yaml#L22

    I think it's a security issue, but maybe not; I'm new to Kubernetes. This line seems wrong; I'd guess the password should come from a Secret or ConfigMap.
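    For reference, the conventional pattern this comment is pointing at: keep the password in a Secret and surface it to the init script as an environment variable instead of a literal. A sketch (the secret name and key are hypothetical):

```yaml
apiVersion: v1
kind: Secret
metadata:
  name: redis-auth               # hypothetical name
type: Opaque
stringData:
  redis-password: a-very-complex-password-here
---
# In the StatefulSet's init container, the hardcoded line would then be
# replaced by an env var sourced from the secret:
# env:
# - name: REDIS_PASSWORD
#   valueFrom:
#     secretKeyRef:
#       name: redis-auth
#       key: redis-password
```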

  • standalone-prometheus is unable to scrape example-app

    Hello,

    I'm following your prometheus-operator example under 1.14.8, since there isn't one in 1.18.4. When I add the standalone-prometheus instance, the example-app target shows the error http://:5000/metrics. See the attached screenshot. The code is straight out of your repo. I have double-checked all of the labels and they all look correct. Is this because the app is not serving the /metrics endpoint? This is the example from kubernetes/deployments, services, secrets, configmaps, right? Did something change here? Is there another example application to try?

    I tried using your python-application on port 80 and received a different error shown in the second screenshot. Is there anyway to update an example that works?

    Screen Shot 2021-08-19 at 5 13 11 PM

    Screen Shot 2021-08-19 at 5 43 58 PM

  • Could not connect to Redis at redis-0:6379: Name does not resolve

    Hi,

    while deploying sentinel I am getting the below error:

    finding master at redis-0.redis.redis.svc.cluster.local
    Could not connect to Redis at redis-0.redis.redis.svc.cluster.local:6379: Name does not resolve
    no master found
    finding master at redis-1.redis.redis.svc.cluster.local
    Could not connect to Redis at redis-1.redis.redis.svc.cluster.local:6379: Name does not resolve
    no master found
    finding master at redis-2.redis.redis.svc.cluster.local
    Could not connect to Redis at redis-2.redis.redis.svc.cluster.local:6379: Name does not resolve
    no master found
    port 5000
    sentinel monitor mymaster 6379 2
    sentinel down-after-milliseconds mymaster 5000
    sentinel failover-timeout mymaster 60000
    sentinel parallel-syncs mymaster 1
    sentinel auth-pass mymaster a-very-complex-password-here

    Here are my nodes:

    kubectl get nodes
    NAME        STATUS   ROLES                      AGE    VERSION
    rbqn01h02   Ready    controlplane,etcd,worker   295d   v1.17.5
    rbqn01h03   Ready    controlplane,etcd,worker   295d   v1.17.5
    rbqn01h04   Ready    controlplane,etcd,worker   295d   v1.17.5
    rbqn04h02   Ready    controlplane,etcd,worker   295d   v1.17.5

    Here are my redis pods:

    kubectl get pods -n redis-cluster1
    NAME      READY   STATUS    RESTARTS   AGE
    redis-0   1/1     Running   0          22h
    redis-1   1/1     Running   0          22h
    redis-2   1/1     Running   0          22h

    I even tried the sentinel deployment after adding the above-mentioned nodes to the init container startup script, like so:

    REDIS_PASSWORD=a-very-complex-password-here nodes=

    Please suggest what I need to put in nodes so that sentinel can connect to the redis cluster.
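    One mismatch visible in the output above: the pods live in namespace redis-cluster1, but the script's FQDNs (redis-0.redis.redis.svc.cluster.local) embed the namespace redis. StatefulSet pod DNS follows the pattern <pod>.<headless-service>.<namespace>.svc.cluster.local, so the middle segments must match the actual service and namespace. A sketch of deriving the list (the headless service name redis is an assumption from the tutorial):

```shell
# Sketch: build pod FQDNs as <pod>.<service>.<namespace>.svc.cluster.local
service="redis"               # headless service name (assumption)
namespace="redis-cluster1"    # namespace from `kubectl get pods -n redis-cluster1`

nodes=""
for i in 0 1 2; do
  nodes="${nodes}${nodes:+,}redis-${i}.${service}.${namespace}.svc.cluster.local"
done
echo "$nodes"
```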

  • TLS error in basic secret injection video

    Hey Marcel,

    I was trying to replicate the setup from the video "Basic secret injection for microservices on Kubernetes using Vault". I got to the point of starting the example app deployment & found that the pod starts but stays in the "Init:0/1" status.

    The vault injector pod logs show that it received the mutating webhook call:

    kubectl -n vault-example logs vault-example-agent-injector-7cdd648787-tv4lb
    2020-08-12T22:55:14.523Z [INFO] handler: Starting handler.. Listening on ":8080"...
    Updated certificate bundle received. Updating certs...
    2020-08-12T23:08:00.894Z [INFO] handler: Request received: Method=POST URL=/mutate?timeout=30s

    The logs from the vault pod show a TLS error:

    kubectl -n vault-example logs vault-example-0
    ==> Vault server configuration:

             Api Address: https://10.244.0.6:8200
                     Cgo: disabled
         Cluster Address: https://10.244.0.6:8201
              Listener 1: tcp (addr: "0.0.0.0:8200", cluster address: "[::]:8201", max_request_duration: "1m30s", max_request_size: "33554432", tls: "enabled")
               Log Level: info
                   Mlock: supported: true, enabled: false
           Recovery Mode: false
                 Storage: file
                 Version: Vault v1.3.1
    

    2020-08-12T22:50:10.226Z [INFO] proxy environment: http_proxy= https_proxy= no_proxy=

    ==> Vault server started! Log data will stream in below:

    2020-08-12T22:50:50.416Z [INFO] core.cluster-listener: starting listener: listener_address=[::]:8201
    2020-08-12T22:50:50.416Z [INFO] core.cluster-listener: serving cluster requests: cluster_listen_address=[::]:8201
    2020-08-12T22:50:50.416Z [INFO] core: post-unseal setup starting
    2020-08-12T22:50:50.417Z [INFO] core: loaded wrapping token key
    2020-08-12T22:50:50.417Z [INFO] core: successfully setup plugin catalog: plugin-directory=
    2020-08-12T22:50:50.418Z [INFO] core: successfully mounted backend: type=system path=sys/
    2020-08-12T22:50:50.418Z [INFO] core: successfully mounted backend: type=identity path=identity/
    2020-08-12T22:50:50.419Z [INFO] core: successfully mounted backend: type=cubbyhole path=cubbyhole/
    2020-08-12T22:50:50.421Z [INFO] core: successfully enabled credential backend: type=token path=token/
    2020-08-12T22:50:50.421Z [INFO] core: restoring leases
    2020-08-12T22:50:50.421Z [INFO] rollback: starting rollback manager
    2020-08-12T22:50:50.422Z [INFO] identity: entities restored
    2020-08-12T22:50:50.422Z [INFO] expiration: lease restore complete
    2020-08-12T22:50:50.422Z [INFO] identity: groups restored
    2020-08-12T22:50:50.422Z [INFO] core: post-unseal setup complete
    2020-08-12T22:50:50.423Z [INFO] core: vault is unsealed
    2020-08-12T23:01:10.547Z [INFO] core: enabled credential backend: path=kubernetes/ type=kubernetes
    2020-08-12T23:05:51.876Z [INFO] core: successful mount: namespace= path=secret/ type=kv
    2020-08-12T23:06:38.902Z [INFO] http: TLS handshake error from 127.0.0.1:52998: remote error: tls: unknown certificate

    And the logs from the init container show an error trying to authenticate with vault:

    kubectl -n vault-example logs basic-secret-74b4fdbcdc-2zmtl -c vault-agent-init
    ==> Vault server started! Log data will stream in below:

    ==> Vault agent configuration:
    2020-08-12T23:08:01.568Z [INFO] sink.file: creating file sink
    2020-08-12T23:08:01.568Z [INFO] sink.file: file sink configured: path=/home/vault/.token mode=-rw-r-----
    2020-08-12T23:08:01.568Z [INFO] auth.handler: starting auth handler
    2020-08-12T23:08:01.568Z [INFO] auth.handler: authenticating
    2020-08-12T23:08:01.568Z [INFO] sink.server: starting sink server
    2020-08-12T23:08:01.568Z [INFO] template.server: starting template server
    Cgo: disabled
    Log Level: info
    Version: Vault v1.3.1

    2020/08/12 23:08:01.569034 [INFO] (runner) creating new runner (dry: false, once: false)
    2020/08/12 23:08:01.569618 [WARN] (clients) disabling vault SSL verification
    2020/08/12 23:08:01.569658 [INFO] (runner) creating watcher
    2020-08-12T23:08:11.580Z [ERROR] auth.handler: error authenticating: error="Put https://vault-example.vault-example.svc:8200/v1/auth/kubernetes/login: dial tcp: lookup vault-example.vault-example.svc on 10.96.0.10:53: read udp 10.244.0.8:50821->10.96.0.10:53: read: connection refused" backoff=2.156164762
    2020-08-12T23:08:13.703Z [INFO] auth.handler: authenticating
    2020-08-12T23:08:23.712Z [ERROR] auth.handler: error authenticating: error="Put https://vault-example.vault-example.svc:8200/v1/auth/kubernetes/login: dial tcp: lookup vault-example.vault-example.svc on 10.96.0.10:53: read udp 10.244.0.8:41477->10.96.0.10:53: i/o timeout" backoff=2.29257713

    In terms of TLS - I used the exact TLS config/process indicated in your ssl_generate_self_signed.txt file.

    Any suggestions would be greatly appreciated.

    Thanks

    Tim

  • facing errors while trying ingress controller YT video

    Hello, I am following the YouTube video "Kubernetes Ingress Explained for Beginners". One step involves running the command kubectl apply -f ingress/controller/traefik, which throws the following errors:

    configmap/traefik-config unchanged
    serviceaccount/traefik-ingress-controller unchanged
    service/traefik-ingress-service unchanged
    service/traefik-web-ui unchanged
    resource mapping not found for name: "traefik-ingress-controller" namespace: "kube-system" from "ingress/controller/traefik/traefik-deployment.yaml": no matches for kind "Deployment" in version "extensions/v1beta1"
    ensure CRDs are installed first
    resource mapping not found for name: "traefik-ingress-controller" namespace: "" from "ingress/controller/traefik/traefik-rbac.yaml": no matches for kind "ClusterRole" in version "rbac.authorization.k8s.io/v1beta1"
    ensure CRDs are installed first
    resource mapping not found for name: "traefik-ingress-controller" namespace: "" from "ingress/controller/traefik/traefik-rbac.yaml": no matches for kind "ClusterRoleBinding" in version "rbac.authorization.k8s.io/v1beta1"
    ensure CRDs are installed first
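    Those "no matches for kind" errors mean the manifests target API versions the cluster no longer serves: Deployment under extensions/v1beta1 was removed in Kubernetes 1.16, and the v1beta1 RBAC kinds in 1.22. A sketch of the header-level migration (only the fields that change; everything else in each manifest stays as in the repo):

```yaml
# Deployment: extensions/v1beta1 -> apps/v1
# (apps/v1 also requires an explicit spec.selector.matchLabels)
apiVersion: apps/v1
kind: Deployment
---
# ClusterRole / ClusterRoleBinding: rbac.authorization.k8s.io/v1beta1 -> v1
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
```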
    
    
  • fix: docker build failing because of deprecated go version

    When following the video I noticed that the docker build of both apps (consumer and publisher) didn't work, so I adapted the Dockerfiles to include the *.mod and *.sum files that are required by most modern versions of Go. Tested both changes successfully before committing.
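    For anyone hitting the same failure: since modules became the default, a Go build stage needs go.mod/go.sum copied in before compiling. A sketch of the usual pattern; the image tag and paths are illustrative, not the exact diff in this PR:

```dockerfile
FROM golang:1.21-alpine AS build
WORKDIR /src

# Copy module files first so the dependency download layer caches
# independently of source-code changes.
COPY go.mod go.sum ./
RUN go mod download

COPY . .
RUN go build -o /out/app .
```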

  • Redis Kubernetes: Redis connecting to slave instead of master

    When connecting to Redis from code or the CLI (from a different pod) using redis://redis as the host/URL, it connects to a replica instead of the master. This causes a READONLY You can't write against a read only replica error. Any idea how to solve this? How does the DNS resolver know the master's IP?
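    For what it's worth, a plain redis://redis against the service round-robins across every pod behind it, replicas included; clients normally ask Sentinel for the current master (SENTINEL get-master-addr-by-name mymaster) or parse INFO replication, which is what the tutorial's init script does with awk/grep. A small sketch of that parsing step in Python (pure string handling over a sample payload; the hostnames are illustrative):

```python
def master_host(info_replication):
    """Extract master_host from a Redis `INFO replication` payload.

    Returns None when no master_host line is present, i.e. the queried
    node is itself the master -- mirroring the script's "no master found".
    """
    for line in info_replication.splitlines():
        if line.startswith("master_host:"):
            return line.split(":", 1)[1].strip()
    return None

# A replica reports its master (hypothetical values):
replica_info = "role:slave\nmaster_host:redis-0.redis.redis.svc.cluster.local\nmaster_port:6379\n"
print(master_host(replica_info))                         # -> redis-0.redis.redis.svc.cluster.local
print(master_host("role:master\nconnected_slaves:2\n"))  # -> None
```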

  • Update ssl_generate_self_signed.txt

    I was not able to download the cfssl & cfssljson utilities by just using curl; it worked after adding the curl -L option. More info here - https://unix.stackexchange.com/a/321751/537201

  • add 'include_timestamp true' in the elastic part of the configmap to …

    Hi Marcel,

    First of all, this is a great chance to thank you for all your great work! On my Kubernetes learning path, your publications are of utmost help!

    I started to search for solutions to concentrate Kubernetes pod logs and, as usual, your tutorial was precise and easy to follow.

    I encountered two troubles:

    One was very straightforward: as I work on "full Linux" environments, I missed a 'chmod' in the Dockerfile (I wouldn't write for this small thing!). I lost a lot of time on the second trouble, and I saw that it didn't work for you either in the YouTube video: the log records in Elasticsearch were missing the timestamps, and logs without a timestamp are nearly of no use.

    The solution is really simple (when you find it :) ): one tag was missing in the elastic-fluent.conf part of the configmap: include_timestamp true

    As a 'bonus' (but you may or may not like it), I added a slightly modified version of counter.yaml, named counter-err.yaml, which also 'randomly' puts data on stderr.

    I hope that these changes suit you.

    Cheers,

    Pascal (from France)
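The fix above lands in the Elasticsearch output section of the fluentd configmap. A sketch of roughly what that block would look like with the missing tag added (host, port, and index name here are assumptions, not the repo's actual values; `include_timestamp true` is a real fluent-plugin-elasticsearch option that adds a `@timestamp` field to each record):

```
<match **>
  @type elasticsearch
  host elasticsearch
  port 9200
  index_name fluentd
  include_timestamp true
</match>
```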

A simple go application that uses Youtube Data API V3 to show the real-time stats for a youtube channel such as the subs, views, avg. earnings etc.

Youtube-channel-monitor A simple go application that uses Youtube Data API V3 to show the real-time stats for a youtube channel such as the subs, view

Dec 30, 2021
⚔ Personal Golang starter kit with an engineer research perspective, expressjs developer friendly, and aims for rapid app development.

Goku (WIP; Author Only) ⚔ Personal Golang starter kit with an engineer research perspective, expressjs developer friendly, and aims for rapid app deve

Jan 6, 2022
This is an assignment for Intern-Software Engineer, Backend Go from LINE MAN Wongnai. It is created with Go and the GIN framework

COVID-19-API-Assignment Create by Chayaphon Bunyakan, Email: [email protected] Run the API by typing the following command go run main.go Run t

Jan 9, 2022
Fluxcdproj - The Ultimate Swiss Army knife for DevOps, Developers and Platform Engineers

Fluxcdproj - The Ultimate Swiss Army knife for DevOps, Developers and Platform Engineers

Feb 1, 2022
A toolbox for debugging docker container and kubernetes with web UI.

A toolbox for debugging Docker container and Kubernetes with visual web UI. You can start the debugging journey on any docker container host! You can

Oct 20, 2022
A shields.io API for your youtube channel to protect your api key

Youtube-Channel-Badge A shields.io API for your youtube channel to protect your

Dec 23, 2021
A go (golang) library to search videos in YouTube.

YT Search A go (golang) library to search videos in YouTube. Installation go get github.com/AnjanaMadu/YTSearch Usage package main import ( "fmt"

Oct 1, 2022
For YouTube videos that have subtitles or auto-generated subtitles, shows you the hour, minute, and second at which a word you search for occurs, so you can quickly find what you are looking for.

YouTube Subtitles For YouTube videos that have subtitles or auto-generated subtitles, shows you the hour, minute, and second

Mar 4, 2022
A long-running Go program that watches a Youtube playlist for new videos, and downloads them using yt-dlp or other preferred tool.

ytdlwatch A long-running Go program that watches a Youtube playlist for new videos, and downloads them using yt-dlp or other preferred tool. Ideal for

Jul 25, 2022
A tiny Go library + client for downloading Youtube videos. The library is capable of fetching Youtube video metadata, in addition to downloading videos.

A tiny Go library + client (command line Youtube video downloader) for downloading Youtube videos. The library is capable of fetching Youtube video metadata, in addition to downloading videos. If ffmpeg is available, client can extract MP3 audio from downloaded video files.

Oct 14, 2022
A youtube library for retrieving metadata, and obtaining direct links to video-only/audio-only/mixed versions of videos on YouTube in Go.

A youtube library for retrieving metadata, and obtaining direct links to video-only/audio-only/mixed versions of videos on YouTube in Go. Install go g

Dec 10, 2022
This project will help you to create Live img.shields.io Badges which will Count YouTube Stats (Subscriber, Views, Videos) without YouTube API

Free YouTube Stats Badge This project will help you to create Live img.shields.io Badges which will Count YouTube Stats (Subscriber, Views, Videos) wi

Oct 11, 2022
A non-go engineer tries to write Go to solve Advent of Code

Wherein an engineer (who primarily uses Kotlin, Java, Scala and C#) tries to teach themselves Go by solving Advent of Code challenges. It's... not pre

Dec 9, 2021
Shopify Production Engineer Intern Challenge - Summer 2022

shopify-pe ---------- A tiny inventory management web-application. DESCRIPTION The API backend for this application is written in `go'. It handle

Jan 17, 2022
customer.io full stack engineer take home project

customer.io full stack engineer take home project

Jan 21, 2022
Owldetect - Take home challenge for Haraj Solutions Engineer candidates

OwlDetect Welcome to Haraj take home challenge! In this challenge you will be as

Feb 17, 2022