Apptainer: Application containers for Linux

NOTE: The apptainer repo is currently working towards a v1.0.0 release and is not ready for production in its current state. Until then, use the Singularity repo for a production-ready version.

What is Apptainer?

Apptainer is an open source container platform designed to be simple, fast, and secure. Many container platforms are available, but Apptainer is designed for ease-of-use on shared systems and in high performance computing (HPC) environments. It features:

  • An immutable single-file container image format, supporting cryptographic signatures and encryption.
  • Integration over isolation by default. Easily make use of GPUs, high speed networks, and parallel filesystems on a cluster or server.
  • Mobility of compute. The single file SIF container format is easy to transport and share.
  • A simple, effective security model. You are the same user inside a container as outside, and cannot gain additional privilege on the host system by default.
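
For a first taste of the workflow, here is a minimal definition file (a sketch; the `alpine:3` base image is an illustrative choice). Assuming Apptainer is installed, it would be built with `apptainer build hello.sif hello.def` and run with `apptainer run hello.sif`:

```
Bootstrap: docker
From: alpine:3

%post
    echo "installed at build time" > /etc/hello-note

%runscript
    cat /etc/hello-note
```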

Apptainer is open source software, distributed under the BSD License.

Check out talks about Apptainer and some use cases of Apptainer on our website.

Getting Started with Apptainer

To install Apptainer from source, see the installation instructions. For other installation options, see our guide.

System administrators can learn how to configure Apptainer, and get an overview of its architecture and security features in the administrator guide.

For users, see the user guide for details on how to run and build containers with Apptainer.

Contributing to Apptainer

Community contributions are always greatly appreciated. To start developing Apptainer, check out the guidelines for contributing.

Please note we have a code of conduct. Please follow it in all your interactions with the project members and users.

Our roadmap, other documents, and user/developer meeting information can be found in the apptainer community page.

We also welcome contributions to our user guide and admin guide.

Support

To get help with Apptainer, check out the Apptainer Help web page.

Go Version Compatibility

Apptainer aims to maintain support for the two most recent stable versions of Go. This corresponds to the Go Release Maintenance Policy and Security Policy, ensuring critical bug fixes and security patches are available for all supported language versions.

Citing Apptainer

Apptainer can be cited using its former name Singularity.

The Singularity software may be cited using our Zenodo DOI 10.5281/zenodo.1310023:

Singularity Developers (2021) Singularity. 10.5281/zenodo.1310023 https://doi.org/10.5281/zenodo.1310023

This is an 'all versions' DOI for referencing Singularity in a manner that is not version-specific. You may wish to reference the particular version of Singularity used in your work. Zenodo creates a unique DOI for each release, and these can be found in the 'Versions' sidebar on the Zenodo record page.

Please also consider citing the original publication describing Singularity:

Kurtzer GM, Sochat V, Bauer MW (2017) Singularity: Scientific containers for mobility of compute. PLoS ONE 12(5): e0177459. https://doi.org/10.1371/journal.pone.0177459

License

Unless otherwise noted, this project is licensed under a 3-clause BSD license found in the license file.

Owner
The Apptainer Container Project
Comments
  • Segmentation fault with docker image with apptainer (on an image that did not segfault with singularity)

    Version of Apptainer

    $ singularity --version
    apptainer version 1.0.2-1.el8
    

    I'm trying this command with the following image in docker hub, getting a Segmentation fault.

    $ SINGULARITY_CACHEDIR=/tmp APPTAINER_CACHEDIR=/tmp singularity exec docker://madminertool/madminer-workflow-ph:0.5.3beta16 echo hello
    <snip>
    INFO:    Using cached SIF image
    Segmentation fault
    

    Expected behavior

    The container should run and display "hello".

    Actual behavior

    I get a Segmentation fault instead. Singularity 3.8.7 works though

    $ SINGULARITY_CACHEDIR=/tmp singularity exec docker://madminertool/madminer-workflow-ph:0.5.3beta16 echo hello
    INFO:    Creating SIF file...
    hello
    $ singularity --version
    singularity version 3.8.7-1.el7
    

    Steps to reproduce this behavior

    You can try the following command:

    $ SINGULARITY_CACHEDIR=/tmp APPTAINER_CACHEDIR=/tmp singularity exec docker://madminertool/madminer-workflow-ph:0.5.3beta16 echo hello
    

    What OS/distro are you running

    Red Hat Enterprise Linux release 8.6 (Ootpa)

    How did you install Apptainer

    RPM source

    Name         : apptainer
    Version      : 1.0.2
    Release      : 1.el8
    Architecture : x86_64
    Size         : 128 M
    Source       : apptainer-1.0.2-1.el8.src.rpm
    Repository   : @System
    From repo    : @commandline
    Summary      : Application and environment virtualization
    URL          : https://apptainer.org
    License      : BSD and LBNL BSD and ASL 2.0
    Description  : Apptainer provides functionality to make portable
                 : containers that can be used across host environments.
    
  • Apptainer 1.1.0 doesn't work with lhcathome atlas unlike apptainer 1.0.3

    Version of Apptainer

    apptainer-1.1.0~rc.1-1

    Expected behavior

    LHC@home ATLAS native works normally, just as on apptainer version 1.0.3-1.

    Новый файл1.txt (attachment)

    Actual behavior

    Новый файл.txt (attachment)

    Steps to reproduce this behavior

    Install apptainer-1.1.0~rc.1-1.el8.x86_64.rpm

    What OS/distro are you running

    VERSION="8.6 (Green Obsidian)"
    ID="rocky"
    ID_LIKE="rhel centos fedora"
    VERSION_ID="8.6"
    PLATFORM_ID="platform:el8"
    PRETTY_NAME="Rocky Linux 8.6 (Green Obsidian)"
    ANSI_COLOR="0;32"
    CPE_NAME="cpe:/o:rocky:rocky:8:GA"
    HOME_URL="https://rockylinux.org/"
    BUG_REPORT_URL="https://bugs.rockylinux.org/"
    ROCKY_SUPPORT_PRODUCT="Rocky Linux"
    ROCKY_SUPPORT_PRODUCT_VERSION="8"
    REDHAT_SUPPORT_PRODUCT="Rocky Linux"
    REDHAT_SUPPORT_PRODUCT_VERSION="8"
    
    
    

    How did you install Apptainer

    From EPEL Testing x86_64 https://centos.pkgs.org/8/epel-testing-x86_64/apptainer-1.1.0~rc.1-1.el8.x86_64.rpm.html

  • Singularity pull image question

    Greetings the support team:

    This is referring back to issue https://github.com/apptainer/singularity/issues/5792. I realize that ticket has been closed. I reviewed the posts and https://github.com/apptainer/singularity/issues/5329; my issue is different from the one @DrDaveD had. My command is a regular pull, "singularity pull docker://godlovedc/lolcow", tried on both singularity versions 3.5.2 and 3.7.0. Both get stuck on "Getting image source signatures". There is no informative message after this line.

    I have tried a few things. I thought it mattered that the account used to pull the docker image is a service account with no docker authentication credential (there is no .docker inside $HOME). However, on the same system, I notice a regular user without .docker inside his $HOME can pull the image without a problem.

    I also thought it might matter that the account's $HOME resides on an NFS file system, i.e. /home/account is a symbolic link to an NFS file system. However, I notice another regular user account with $HOME on an NFS file system can pull the image without a problem.

    I wonder if there is a similar ticket addressing this. Is there any other debugging approach to identify the problem? Thank you very much.

  • Fails to run unsquashfs

    Version of Apptainer

    apptainer version 1.1.0
    

    Expected behavior

    $ apptainer exec --no-home docker://centos:7 echo "Hello world!"
    INFO:    Using cached SIF image
    INFO:    Converting SIF file to temporary sandbox...
    Hello world!
    INFO:    Cleaning up image...
    

    Actual behavior

    $ apptainer exec --no-home docker://centos:7 echo "Hello world!"
    INFO:    Using cached SIF image
    INFO:    squashfuse not found, will not be able to mount SIF
    INFO:    fuse2fs not found, will not be able to mount EXT3 filesystems
    INFO:    Converting SIF file to temporary sandbox...
    FATAL:   while extracting /home/cburr/.apptainer/cache/oci-tmp/c73f515d06b0fa07bb18d8202035e739a494ce760aa73129f60f4bf2bd22b407: root filesystem extraction failed: extract command failed: WARNING: passwd file doesn't exist in container, not updating
    WARNING: group file doesn't exist in container, not updating
    WARNING: Skipping mount /etc/hosts [binds]: /etc/hosts doesn't exist in container
    WARNING: Skipping mount /etc/localtime [binds]: /etc/localtime doesn't exist in container
    WARNING: Skipping mount proc [kernel]: /proc doesn't exist in container
    WARNING: Skipping mount /tmp/tmp.fkKaxYs3K9/envs/test/var/apptainer/mnt/session/var/tmp [tmp]: /var/tmp doesn't exist in container
    WARNING: Skipping mount /tmp/tmp.fkKaxYs3K9/envs/test/var/apptainer/mnt/session/etc/resolv.conf [files]: /etc/resolv.conf doesn't exist in container
    /tmp/tmp.fkKaxYs3K9/envs/test/bin/unsquashfs: error while loading shared libraries: libz.so.1: cannot open shared object file: No such file or directory
    : exit status 127
    
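    The exit status 127 together with the loader message points at broken shared-library resolution for the bundled unsquashfs. One hedged way to confirm this (the binary path is whatever `command -v unsquashfs` resolves to in the active environment):

    ```shell
    # List any shared libraries the active unsquashfs binary cannot resolve.
    # An unresolved dependency shows up as "not found" in ldd's output.
    bin="$(command -v unsquashfs)"
    ldd "$bin" | grep 'not found' || echo "all shared libraries resolved"
    ```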

    Steps to reproduce this behavior

    I'm going to assume you're unfamiliar with conda. In that case you can use the standalone micromamba binary to make an environment:

    # Create a micromamba installation in /tmp
    export MAMBA_ROOT_PREFIX=$(mktemp -d)
    cd $MAMBA_ROOT_PREFIX
    curl -Ls https://micro.mamba.pm/api/micromamba/linux-64/latest | tar -xvj bin/micromamba
    eval "$(./bin/micromamba shell hook -s posix)"
    
    # Install apptainer
    micromamba create --name test -c chrisburr/label/apptainer-issue -c conda-forge apptainer
    # Activate the environment
    micromamba activate test
    # Run the test
    apptainer exec --no-home docker://centos:7 echo "Hello world!"
    

    What OS/distro are you running

    It should be independent as everything is coming from conda-forge but just in case:

    $ cat /etc/os-release
    NAME="Arch Linux"
    PRETTY_NAME="Arch Linux"
    ID=arch
    BUILD_ID=rolling
    ANSI_COLOR="38;2;23;147;209"
    HOME_URL="https://archlinux.org/"
    DOCUMENTATION_URL="https://wiki.archlinux.org/"
    SUPPORT_URL="https://bbs.archlinux.org/"
    BUG_REPORT_URL="https://bugs.archlinux.org/"
    LOGO=archlinux-logo
    

    How did you install Apptainer

    I'm building apptainer for conda-forge: https://github.com/conda-forge/staged-recipes/pull/20641

  • Fail to find GPU driver

    Version of Apptainer

    The version was reported in a screenshot attachment.

    Expected behavior

    The error was reported in a screenshot attachment. I can't find the Nvidia driver inside the container, while outside the container I'm able to find it.

    What other configuration should I do besides using --nv?

  • squashfuse Performance

    Version of Apptainer

    $ apptainer --version
    apptainer version 1.1.0~rc.2-1.el7
    

    Expected behavior

    When using SIF images with unprivileged Apptainer, execution time should be similar to unprivileged Singularity.

    Actual behavior

    Apptainer's move to squashfuse for unprivileged (user namespace) mounts of SIF images has significantly increased the execution time of some containers, compared to automatically unpacking SIF images to a temporary sandbox as unprivileged Singularity did. I believe this is primarily a concern for containers running multiple processes/threads, as it seems there is a single squashfuse process to handle all of the parallel I/O requests and decompression.

    Steps to reproduce this behavior

    apptainer run -i -c -e -B /tmp/atlasgen:/results -B /tmp docker://gitlab-registry.cern.ch/hep-benchmarks/hep-workloads/atlas-gen-bmk:v2.1 -W --threads 1 --events 200

    This is an ATLAS event generation benchmark container that will run a process per logical core on the host. Execution times on a system with 2x AMD EPYC 7351 CPUs (64 logical cores total):

      • Singularity with user namespaces (unpack to sandbox): ~24 min
      • Apptainer with setuid (privileged squashfs mount): ~25 min
      • Apptainer with user namespaces (squashfuse mount): ~2 hours 50 minutes

    During execution, I see the squashfuse process using 100% of a single CPU core during most of the run.

    Ideally the default behavior would be to revert to automatically unpacking SIF images when used unprivileged.

    What OS/distro are you running

    Scientific Linux 7

    How did you install Apptainer

    RPM from EPEL testing repo.

  • [SOLVED] Running a graphical Nvidia GPU accelerated program in a container gives GLIBC version mismatch error

    Version of Apptainer

    apptainer version 1.0.3-2.fc36

    Expected behavior

    1. A GUI window (glxgears) should pop up with host NVIDIA GPU-accelerated graphics.
    2. nvidia-smi command on the host should show glxgears as a process utilizing the GPU.

    Actual behavior

    I get the following errors

    /usr/bin/glxgears: /lib64/libc.so.6: version `GLIBC_2.34' not found (required by /.singularity.d/libs/libGLX.so.0)
    /usr/bin/glxgears: /lib64/libc.so.6: version `GLIBC_2.34' not found (required by /.singularity.d/libs/libGLdispatch.so.0)
    

    Steps to reproduce this behavior

    1. Create a Container Definition file
    cat << EOF >> glx_test.def
    Bootstrap: docker
    From: centos:centos7
    
    %post
        yum -y install xauth xeyes glx-utils mesa-dri-drivers
    
    %environment
        export LIBGL_DEBUG=verbose
        export LC_ALL=C
    EOF
    
    2. Build a container from the definition file: apptainer build --fakeroot glxgears_test.sif glx_test.def

    3. Run glxgears in the container with GPU support: apptainer exec --nv glxgears_test.sif glxgears

    What OS/distro are you running

    $ cat /etc/os-release
    NAME="Fedora Linux"
    VERSION="36 (Workstation Edition)"
    ID=fedora
    VERSION_ID=36
    VERSION_CODENAME=""
    PLATFORM_ID="platform:f36"
    PRETTY_NAME="Fedora Linux 36 (Workstation Edition)"
    ANSI_COLOR="0;38;2;60;110;180"
    LOGO=fedora-logo-icon
    CPE_NAME="cpe:/o:fedoraproject:fedora:36"
    HOME_URL="https://fedoraproject.org/"
    DOCUMENTATION_URL="https://docs.fedoraproject.org/en-US/fedora/f36/system-administrators-guide/"
    SUPPORT_URL="https://ask.fedoraproject.org/"
    BUG_REPORT_URL="https://bugzilla.redhat.com/"
    REDHAT_BUGZILLA_PRODUCT="Fedora"
    REDHAT_BUGZILLA_PRODUCT_VERSION=36
    REDHAT_SUPPORT_PRODUCT="Fedora"
    REDHAT_SUPPORT_PRODUCT_VERSION=36
    PRIVACY_POLICY_URL="https://fedoraproject.org/wiki/Legal:PrivacyPolicy"
    VARIANT="Workstation Edition"
    VARIANT_ID=workstation
    

    Nvidia drivers, CUDA, and the Nvidia container toolkit are installed on the host. The output of nvidia-smi on the host is:

    +-----------------------------------------------------------------------------+
    | NVIDIA-SMI 515.57       Driver Version: 515.57       CUDA Version: 11.7     |
    |-------------------------------+----------------------+----------------------+
    | GPU  Name        Persistence-M| Bus-Id        Disp.A | Volatile Uncorr. ECC |
    | Fan  Temp  Perf  Pwr:Usage/Cap|         Memory-Usage | GPU-Util  Compute M. |
    |                               |                      |               MIG M. |
    |===============================+======================+======================|
    |   0  NVIDIA GeForce ...  Off  | 00000000:01:00.0 Off |                  N/A |
    | N/A   34C    P8    N/A /  N/A |      4MiB /  2048MiB |      0%      Default |
    |                               |                      |                  N/A |
    +-------------------------------+----------------------+----------------------+
                                                                                   
    +-----------------------------------------------------------------------------+
    | Processes:                                                                  |
    |  GPU   GI   CI        PID   Type   Process name                  GPU Memory |
    |        ID   ID                                                   Usage      |
    |=============================================================================|
    |    0   N/A  N/A      1858      G   /usr/libexec/Xorg                   2MiB |
    +-----------------------------------------------------------------------------+
    

    The output of nvidia-container-cli list on the host is

    /dev/nvidiactl
    /dev/nvidia-uvm
    /dev/nvidia-uvm-tools
    /dev/nvidia-modeset
    /dev/nvidia0
    /usr/bin/nvidia-smi
    /usr/bin/nvidia-debugdump
    /usr/bin/nvidia-persistenced
    /usr/bin/nvidia-cuda-mps-control
    /usr/bin/nvidia-cuda-mps-server
    /usr/lib64/libnvidia-ml.so.515.57
    /usr/lib64/libnvidia-cfg.so.515.57
    /usr/lib64/libcuda.so.515.57
    /usr/lib64/libnvidia-opencl.so.515.57
    /usr/lib64/libnvidia-ptxjitcompiler.so.515.57
    /usr/lib64/libnvidia-allocator.so.515.57
    /usr/lib64/libnvidia-compiler.so.515.57
    /usr/lib64/libnvidia-ngx.so.515.57
    /usr/lib64/libnvidia-encode.so.515.57
    /usr/lib64/libnvidia-opticalflow.so.515.57
    /usr/lib64/libnvcuvid.so.515.57
    /usr/lib64/libnvidia-eglcore.so.515.57
    /usr/lib64/libnvidia-glcore.so.515.57
    /usr/lib64/libnvidia-tls.so.515.57
    /usr/lib64/libnvidia-glsi.so.515.57
    /usr/lib64/libnvidia-fbc.so.515.57
    /usr/lib64/libnvidia-rtcore.so.515.57
    /usr/lib64/libnvoptix.so.515.57
    /usr/lib64/libGLX_nvidia.so.515.57
    /usr/lib64/libEGL_nvidia.so.515.57
    /usr/lib64/libGLESv2_nvidia.so.515.57
    /usr/lib64/libGLESv1_CM_nvidia.so.515.57
    /usr/lib64/libnvidia-glvkspirv.so.515.57
    /lib/firmware/nvidia/515.57/gsp.bin
    

    How did you install Apptainer

    I installed apptainer from the official repository using the following command: sudo dnf install -y apptainer

  • fix: check valid path for --pwd

    Signed-off-by: Pablo Caderno [email protected]

    Description of the Pull Request (PR):

    Changed the default behavior of switching to a different directory when os.Chdir(e.EngineConfig.OciConfig.Process.Cwd) fails at container start: the --pwd path is now validated instead.

    Whilst this approach should fix the issue, it might be "too strict" for other cases.
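
    As a rough sketch of the stricter behavior (illustrative only; the real change lives in the apptainer engine, and enterWorkdir is a hypothetical helper name):

    ```go
    package main

    import (
    	"fmt"
    	"os"
    )

    // enterWorkdir fails instead of silently falling back to another
    // directory when the requested working directory cannot be entered.
    func enterWorkdir(cwd string) error {
    	if err := os.Chdir(cwd); err != nil {
    		return fmt.Errorf("could not change directory to %s: %w", cwd, err)
    	}
    	return nil
    }

    func main() {
    	// A --pwd pointing at a nonexistent path is now reported as an error.
    	if err := enterWorkdir("/nonexistent-pwd"); err != nil {
    		fmt.Println("refusing to start container:", err)
    	}
    }
    ```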

    This fixes or addresses the following GitHub issues:

    • Fixes https://github.com/apptainer/singularity/issues/6086
  • Allow unprivileged users to build images by default

    Description of the Pull Request (PR):

    ~This patch removes several Geteuid() != 0 checks and allows unprivileged users to build images by default.~ This patch introduces a new flag -N|--unprivileged, with which some Geteuid() != 0 checks are skipped.

    Motivations

    • Nowadays, package managers such as Nix and Gentoo ebuild, and many other projects, don't require root privileges to work, while still benefiting from or relying on customized directory trees (not available in AppImages). As a mature container format, Apptainer serves as a reliable way to wrap the build result, making it portable and runnable on network-based file systems where I/O on small files is terribly slow.

    • As a practice of Apptainer's philosophy of the same privilege inside and outside, it is the user's responsibility to provide a sufficient privilege level for their toolchain. This patch also enables users to build in restricted environments where there is no way to get root privilege and no setuid executables are available.

    • From the security perspective, a program should be given the minimum level of privilege it needs, which is known as the Principle of Least Privilege. This makes container building (especially the %setup and %files sections) much safer.

    Function parameter changes

    • runSectionScript(name string, script types.Script) -> runSectionScript(name string, script types.Script, unprivileged bool)

    • Full(ctx context.Context) -> Full(ctx context.Context, unprivileged bool)

    • Use b.Full(ctx, false) in ConvertOCIToSIF

    Status

    This patch compiles and runs locally, and it works as expected so far, but it is still a draft pending discussion, consensus, and documentation updates.

    This fixes or addresses the following GitHub issues:

    • Fixes #215

  • symbol lookup error: singularity: undefined symbol: seccomp_notify_respond

    Version of Apptainer


    singularity: symbol lookup error: singularity: undefined symbol: seccomp_notify_respond
    

    Installed via EPEL: apptainer-1.1.0-1.el8.x86_64

    Expected behavior

    Expected version string to be printed.

    Actual behavior

    apptainer threw an error about a missing symbol. No version string produced.

    Steps to reproduce this behavior

    Just do: apptainer --version

    What OS/distro are you running

    $ cat /etc/os-release
    NAME="Red Hat Enterprise Linux"
    VERSION="8.1 (Ootpa)"
    ID="rhel"
    ID_LIKE="fedora"
    VERSION_ID="8.1"
    PLATFORM_ID="platform:el8"
    PRETTY_NAME="Red Hat Enterprise Linux 8.1 (Ootpa)"
    ANSI_COLOR="0;31"
    CPE_NAME="cpe:/o:redhat:enterprise_linux:8.1:GA"
    HOME_URL="https://www.redhat.com/"
    BUG_REPORT_URL="https://bugzilla.redhat.com/"
    
    REDHAT_BUGZILLA_PRODUCT="Red Hat Enterprise Linux 8"
    REDHAT_BUGZILLA_PRODUCT_VERSION=8.1
    REDHAT_SUPPORT_PRODUCT="Red Hat Enterprise Linux"
    REDHAT_SUPPORT_PRODUCT_VERSION="8.1"
    
    

    How did you install Apptainer

    RPM via EPEL: apptainer-1.1.0-1.el8.x86_64

  • Prevent forced override of PS1 if already set

    Description of the Pull Request (PR):

    When users explicitly set a PS1 value in the %environment section, 99-base.sh forcibly overrides that PS1 value. This PR prevents the forced override of PS1, allowing users to set their preferred PS1 value for their image.
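
    The guard can be sketched like this (illustrative; the default prompt string is an assumption, not the literal 99-base.sh content):

    ```shell
    # Only assign the default prompt when the image's %environment has not
    # already set PS1 (the default string here is illustrative).
    if [ -z "${PS1:-}" ]; then
        PS1="Apptainer> "
    fi
    export PS1
    ```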

    This fixes or addresses the following GitHub issues:

    • Fixes #925

  • Latest apptainer RPM in EPEL 8 does not Provide: Singularity

    Version of Apptainer

    N/A

    Expected behavior

    On a fresh EL8 machine with EPEL enabled, I expect to be able to yum install singularity and get the apptainer or apptainer-suid package. Specifically, we had yum install singularity in a Dockerfile and tripped over this in one of our automatic builds.

    Actual behavior

    Instead installation fails:

    [root@bb6ee73afd77 /]# yum install singularity
    CentOS Stream 8 - AppStream                                                                                                                                                                                                                            145 kB/s |  27 MB     03:08    
    CentOS Stream 8 - BaseOS                                                                                                                                                                                                                               3.1 MB/s |  26 MB     00:08    
    CentOS Stream 8 - Extras                                                                                                                                                                                                                                56 kB/s |  18 kB     00:00    
    CentOS Stream 8 - Extras common packages                                                                                                                                                                                                               948  B/s | 5.2 kB     00:05    
    CentOS Stream 8 - PowerTools                                                                                                                                                                                                                           3.4 MB/s | 5.5 MB     00:01    
    Extra Packages for Enterprise Linux 8 - x86_64                                                                                                                                                                                                         7.8 MB/s |  13 MB     00:01    
    Extra Packages for Enterprise Linux Modular 8 - x86_64                                                                                                                                                                                                 861 kB/s | 733 kB     00:00    
    No match for argument: singularity
    Error: Unable to find a match: singularity
    
    Contents of /etc/yum.repos.d/epel.repo:

    [epel]
    name=Extra Packages for Enterprise Linux $releasever - $basearch
    # It is much more secure to use the metalink, but if you wish to use a local mirror
    # place it's address here.
    #baseurl=https://download.example/pub/epel/$releasever/Everything/$basearch
    metalink=https://mirrors.fedoraproject.org/metalink?repo=epel-$releasever&arch=$basearch&infra=$infra&content=$contentdir
    enabled=1
    gpgcheck=1
    countme=1
    gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-EPEL-8
    
    [epel-debuginfo]
    name=Extra Packages for Enterprise Linux $releasever - $basearch - Debug
    # It is much more secure to use the metalink, but if you wish to use a local mirror
    # place it's address here.
    #baseurl=https://download.example/pub/epel/$releasever/Everything/$basearch/debug
    metalink=https://mirrors.fedoraproject.org/metalink?repo=epel-debug-$releasever&arch=$basearch&infra=$infra&content=$contentdir
    enabled=0
    gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-EPEL-8
    gpgcheck=1
    
    [epel-source]
    name=Extra Packages for Enterprise Linux $releasever - $basearch - Source
    # It is much more secure to use the metalink, but if you wish to use a local mirror
    # place it's address here.
    #baseurl=https://download.example/pub/epel/$releasever/Everything/SRPMS
    metalink=https://mirrors.fedoraproject.org/metalink?repo=epel-source-$releasever&arch=$basearch&infra=$infra&content=$contentdir
    enabled=0
    gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-EPEL-8
    gpgcheck=1
    

    Steps to reproduce this behavior

    1. Start an EL8 container, e.g. podman run quay.io/almalinux/almalinux:8
    2. Install epel-release in the container
    3. Run yum install singularity

    What OS/distro are you running

    [root@bb6ee73afd77 /]# cat /etc/os-release
    NAME="CentOS Stream"
    VERSION="8"
    ID="centos"
    ID_LIKE="rhel fedora"
    VERSION_ID="8"
    PLATFORM_ID="platform:el8"
    PRETTY_NAME="CentOS Stream 8"
    ANSI_COLOR="0;31"
    CPE_NAME="cpe:/o:centos:centos:8"
    HOME_URL="https://centos.org/"
    BUG_REPORT_URL="https://bugzilla.redhat.com/"
    REDHAT_SUPPORT_PRODUCT="Red Hat Enterprise Linux 8"
    REDHAT_SUPPORT_PRODUCT_VERSION="CentOS Stream"
    [root@bb6ee73afd77 /]# cat /etc/yum.repos.d/epel.repo 
    

    How did you install Apptainer

    N/A

  • RFE: Templating Support for Definition File

    Description

    If we can pass values at build time to replace variables inside the definition file, before the definition file is actually processed to build the image, that brings flexibility to how we interact with definition files. I call this feature "Templating".

    These days, OSS projects usually distribute not only source code but also binaries and container images when they release. In the CI/CD era, creating binaries is written in the form of scripts that cover a wide range of OSes and a variety of dependencies. Such a "recipe" can switch OS versions, library versions, etc. Widely known examples in HPC are Spack recipes.

    If the Apptainer definition file supported a "Templating" feature:

    • Less integration effort for OSS maintainers: they can reuse existing "recipe" scripts when adding an Apptainer definition file to their release artifacts.
    • It allows users to alter the base image and/or its version (not supported in Apptainer yet).
    • It allows users to alter script "switches" (kind of supported through APPTAINERENV_VARS, but not for every section of the definition file).
    • It allows users to alter variables in every section in a consistent way (not supported in Apptainer yet).

    This is inspired by Dockerfile's ARG and the --build-arg (--build-arg-file) option of the build command.

    UI Changes

    Add --build-arg and --build-arg-file options to build command

    --build-arg

    apptainer build --build-arg OS_VER=8.7 --build-arg APP_VERSION=2206 app.sif app.def
    

    --build-arg-file

    apptainer build --build-arg-file build-args app.sif app.def
    

    build-args file

    OS_VER=8.7
    APP_VERSION=2206
    

    Internal Behavioural Changes

    When the --build-arg or --build-arg-file option comes in:

    1. process the build args first
    2. fill the template to create the actual definition file
    3. process the definition file
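
    The fill step can be prototyped in plain shell (a sketch under the assumption of simple KEY=VALUE args; only the listed keys are substituted, so build-time variables such as ${APPTAINER_ROOTFS} are left for Apptainer to expand):

    ```shell
    # Prototype of steps 1-2: read KEY=VALUE pairs from a build-args file
    # and substitute the corresponding ${KEY} references in the template.
    workdir="$(mktemp -d)"
    cd "$workdir"

    cat > build-args <<'EOF'
    OS_VER=8.7
    APP_VERSION=2206
    EOF

    cat > app.def.in <<'EOF'
    Bootstrap: docker
    From: rockylinux:${OS_VER}
    EOF

    cp app.def.in app.def
    while IFS='=' read -r key value; do
        [ -n "$key" ] || continue
        sed -i "s|\${$key}|$value|g" app.def    # GNU sed in-place edit
    done < build-args

    cat app.def
    ```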

    How the definition file would look:

    Bootstrap: docker
    From: rockylinux:${OS_VER}
    
    %setup
        touch /${OS_VER}
        touch ${APPTAINER_ROOTFS}/file2
    
    %files
        /script-${OS_VER}.sh
        /dir-${OS_VER} /opt
    
    %environment
        export OS_VER=${OS_VER}
    
    %post
        /script-${OS_VER}.sh ${APP_VERSION}
    
    %runscript
        echo "Container was created $NOW"
        echo "Arguments received: $*"
        exec echo "$@"
    
    %startscript
        nc -lp ${LISTEN_PORT:-'8080'}
    
    %test
        grep -q NAME=\"Rocky\" /etc/os-release
        if [ $? -eq 0 ]; then
            echo "Container base is Rocky Linux as expected."
        else
            echo "Container base is not Rocky Linux."
            exit 1
        fi
    
    %labels
        Author ${AUTHOR:-}
        Version ${VERSION:-}
    
    %help
        This is a ${DEMO:-} for templating definition file
    

    I am pleased to hear any comments and ideas from the community here.

  • better support for startscript.sh (feature request)

    Version of Apptainer

    $ apptainer --version
    apptainer version 1.1.3
    

    Expected behavior

    I'd like to run apptainer instance start docker://some/container or similar and get the entrypoint to run as the startscript.

    Actual behavior

    AFAIK, there is no good way to get the container to recognize the docker entrypoint as the startscript. Instead, apptainer shoves all of the entrypoint stuff into the runscript by default, and if you run the container as an instance it does not execute the entrypoint.

    I've tried a few different workarounds, but so far nothing is ideal. One idea is to force the container to execute the runscript when the instance is started. I tried this by creating a wrapper script (on the host system) like so:

    echo "/bin/sh /.singularity.d/runscript" >startscript
    
    chmod 750 startscript
    

    Then bind mounting it into the metadata directory at runtime like so:

    export APPTAINER_BINDPATH=startscript:/.singularity.d/startscript
    

    This seems to work, but it is very brittle and there are potential side effects for running other containers.

    The second thing I tried was to rebuild the container with a def file of the following format.

    Bootstrap: docker                                                               
    From: some/container                                                
                                                                                       
    %post                                                                           
        cp /.singularity.d/runscript /.singularity.d/startscript
    

    This also works, but it produces an intermediate build artifact that may be unwanted.

    Solution?

    I think some discussion is needed about the correct way to approach this. Maybe the user could pass a flag to the instance start command to tell it to treat the runscript as the startscript. Or maybe Apptainer should intelligently copy the Docker CMD and ENTRYPOINT content to the startscript instead of the runscript when the user is executing the container as an instance. This second option would probably be a sensible default, but it would also be a breaking change for folks currently using instance start on OCI containers.

    What say the developers?

  • error: can't mount image /proc/self/fd/9: failed to mount squashfs filesystem


    I am running apptainer version 1.1.3-1.el7 on an HPC system.

    Expected behaviour

    I am currently running Nextflow pipelines (https://github.com/nf-core/rnaseq) using Singularity images; the pipeline pulls and mounts the images automatically (I have run pipelines successfully before).

    Actual behaviour

    However, when mounting one of the images (in one of the later steps of the pipeline, NFCORE_RNASEQ:RNASEQ:MARK_DUPLICATES_PICARD:PICARD_MARKDUPLICATES), I get the following error:

    Command error: 
               WARNING: DEPRECATED USAGE: Forwarding SINGULARITYENV_TMPDIR as environment variable will not be supported in the future, use APPTAINERENV_TMPDIR instead 
               WARNING: DEPRECATED USAGE: Forwarding SINGULARITYENV_NXF_DEBUG as environment variable will not be supported in the future, use APPTAINERENV_NXF_DEBUG instead 
               FATAL:   container creation failed: mount hook function failure: mount /proc/self/fd/9->/var/apptainer/mnt/session/rootfs error: while mounting image /proc/self/fd/9: squashfuse_ll exited with status 255: Something went wrong trying to read the squashfs image.
    

    I have seen this error in other issues on lower versions of Singularity (https://github.com/apptainer/singularity/issues/5408). I have also ruled out lack-of-space errors by running in different locations of the HPC cluster with more space.
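    As a quick local sanity check (a diagnostic sketch, not an Apptainer feature): a raw SquashFS image begins with the magic bytes `hsqs`, and a truncated or corrupt download will typically fail this check. Note that a SIF file wraps its squashfs partition inside other headers, so this applies to the raw `.img`/`.squashfs` files that Nextflow caches:

```python
import os
import tempfile

SQUASHFS_MAGIC = b"hsqs"  # little-endian magic at offset 0

def looks_like_squashfs(path):
    """Return True if the file starts with the SquashFS magic bytes."""
    with open(path, "rb") as f:
        return f.read(4) == SQUASHFS_MAGIC

# Demo on a stand-in file; in practice, point this at a cached image.
with tempfile.NamedTemporaryFile(delete=False, suffix=".img") as f:
    f.write(SQUASHFS_MAGIC + b"\x00" * 28)
    fake_image = f.name

print(looks_like_squashfs(fake_image))  # True for the stand-in file
os.remove(fake_image)
```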

    Steps to reproduce

    nextflow run nf-core/rnaseq -profile singularity -r 3.9 --input samplesheet_test.csv --outdir test_dir --aligner star_rsem --save_align_intermeds --genome hg38 --skip_trimming --skip_umi_extract

    I am sorry I can't give any files or better reproduction since the data is confidential.

    Machine I am working on

    NAME="CentOS Linux"
    VERSION="7 (Core)"
    ID="centos"
    ID_LIKE="rhel fedora"
    VERSION_ID="7"
    PRETTY_NAME="CentOS Linux 7 (Core)"
    ANSI_COLOR="0;31"
    CPE_NAME="cpe:/o:centos:centos:7"
    HOME_URL="https://www.centos.org/"
    BUG_REPORT_URL="https://bugs.centos.org/"
    
    CENTOS_MANTISBT_PROJECT="CentOS-7"
    CENTOS_MANTISBT_PROJECT_VERSION="7"
    REDHAT_SUPPORT_PRODUCT="centos"
    REDHAT_SUPPORT_PRODUCT_VERSION="7"
    

    Installing apptainer

    It was installed beforehand on the HPC system by the admins.

  • Ubuntu Focal Image: `/lib/x86_64-linux-gnu/libc.so.6: version `GLIBC_2.34' not found`


    Version of Apptainer

    apptainer version 1.1.4

    Expected behavior

    EDIT: Updated to avoid pydrake (a large dependency) and instead use glxgears

    For example, in Apptainer with --nv and glxgears installed:

    $ glxgears
    # No error
    

    Actual behavior

    $ glxgears
    glxgears: /lib/x86_64-linux-gnu/libc.so.6: version `GLIBC_2.34' not found (required by /.singularity.d/libs/libGLdispatch.so.0)
    glxgears: /lib/x86_64-linux-gnu/libc.so.6: version `GLIBC_2.34' not found (required by /.singularity.d/libs/libGLX.so.0)
    

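    The underlying issue appears to be that `--nv` binds host libraries (built against the host's newer glibc) into a container whose glibc is older. One way to confirm the mismatch is to print the libc version on the host and again inside the container (e.g. under `apptainer exec`); a minimal stdlib check:

```python
import platform

# Report the C library this interpreter is linked against.
# Run once on the host and once inside the container to compare.
lib, version = platform.libc_ver()
print(lib, version)  # e.g. "glibc 2.35" on a Jammy host vs "glibc 2.31" in Focal
```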
    Steps to reproduce this behavior

    See https://github.com/EricCousineau-TRI/repro/tree/f1ff2000abdc631a94a750cdb0c2d66d33ff4857/bug/apptainer_issue945

    Look at repro.sh; most notably:

    ${apptainer_bin} build --fakeroot --sandbox ./repro.sandbox ./repro.Apptainer
    
    # Succeeds.
    ${apptainer_bin} exec ./repro.sandbox ${test_script}
    
    # Fails.
    ${apptainer_bin} exec --nv ./repro.sandbox ${test_script}
    

    What OS/distro are you running

    $ cat /etc/os-release
    PRETTY_NAME="Ubuntu 22.04.1 LTS"
    NAME="Ubuntu"
    VERSION_ID="22.04"
    VERSION="22.04.1 LTS (Jammy Jellyfish)"
    VERSION_CODENAME=jammy
    ID=ubuntu
    ID_LIKE=debian
    HOME_URL="https://www.ubuntu.com/"
    SUPPORT_URL="https://help.ubuntu.com/"
    BUG_REPORT_URL="https://bugs.launchpad.net/ubuntu/"
    PRIVACY_POLICY_URL="https://www.ubuntu.com/legal/terms-and-policies/privacy-policy"
    UBUNTU_CODENAME=jammy
    

    How did you install Apptainer

    From source, using this script: https://github.com/EricCousineau-TRI/repro/blob/f1ff2000abdc631a94a750cdb0c2d66d33ff4857/shell/apptainer_stuff/build_and_install_apptainer.sh

  • e2e tests don't work with Ubuntu 22.04


    In #901 the Ubuntu version used to run things in CI was updated from 20.04 to 22.04, but the e2e tests were left at 20.04 because it caused problems for the fakeroot command binds and for cgroup tests. Look into what can be done about those problems and upgrade the e2e tests to use 22.04.
