We're working with Windows and Mac M1 machines to develop locally using Docker, and we need to fetch and install a .deb package within our Docker environment.
The package needs to be amd64 or arm64 depending on the architecture in use.
Is there a way to determine this in the Dockerfile, i.e.
if xyz === 'arm64'
RUN wget http://...../arm64.deb
else
RUN wget http://...../amd64.deb
First, check whether there is an easier way to do it with the package manager.
You can use the arch command to get the architecture (equivalent to uname -m). The problem is that it does not return the values you expect (aarch64 instead of arm64 and x86_64 instead of amd64), so you have to convert it; I have done that with sed.
RUN arch=$(arch | sed s/aarch64/arm64/ | sed s/x86_64/amd64/) && \
    wget "http://...../${arch}.deb"
Note: You should add the following code before running this command to prevent unexpected behavior in case of failure in the piping chain. See Hadolint DL4006 for more information.
SHELL ["/bin/bash", "-o", "pipefail", "-c"]
Later versions of Docker expose information about the current architecture through automatically defined build arguments:
BUILDPLATFORM — platform of the node performing the build, e.g. linux/amd64
BUILDOS — OS component of BUILDPLATFORM, e.g. linux
BUILDARCH — e.g. amd64, arm64, riscv64
BUILDVARIANT — used to set the ARM variant, e.g. v7
TARGETPLATFORM — the value set with the --platform flag on build
TARGETOS — OS component of TARGETPLATFORM, e.g. linux
TARGETARCH — architecture component of TARGETPLATFORM, e.g. arm64
TARGETVARIANT — variant component of TARGETPLATFORM, e.g. v7
Note that you may need to export DOCKER_BUILDKIT=1 on headless machines for these to work; they are enabled by default in Docker Desktop.
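For example, on a headless machine (cross-building a foreign architecture may additionally require QEMU/binfmt emulation to be set up):
export DOCKER_BUILDKIT=1
docker build --platform linux/arm64 .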
Use them in RUN commands like so (the ARG must be declared first to be visible inside the build stage):
ARG TARGETARCH
RUN wget "http://...../${TARGETARCH}.deb"
You can even branch a Dockerfile across architectures by combining named build stages with ONBUILD, like this (note the stage names must match the possible TARGETARCH values):
FROM debian:bullseye AS build_amd64
# Intel-specific stuff goes here
ONBUILD RUN apt-get update && apt-get install -y ./intel_software.deb

FROM debian:bullseye AS build_arm64
# ARM-specific stuff goes here
ONBUILD RUN apt-get update && apt-get install -y ./arm_software.deb

# choose the stage to use for the rest of the image
FROM build_${TARGETARCH}
# rest of the Dockerfile follows
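The branch is then selected by the platform you build for, for example (image name is a placeholder):
docker buildx build --platform linux/amd64 -t myimage .
docker buildx build --platform linux/arm64 -t myimage .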
Related
I'm writing a Dockerfile which needs to install different pip wheels, depending on the PyTorch version and the CUDA version installed in the base image:
RUN pip install torch-scatter -f https://data.pyg.org/whl/torch-${TORCH}+${CUDA}.html
RUN pip install torch-sparse -f https://data.pyg.org/whl/torch-${TORCH}+${CUDA}.html
For this reason, I need to capture these versions at build time, into the environment variables TORCH and CUDA.
I know I could use the ENV command in a Dockerfile to assign environment variables:
ENV TORCH=1.12.0
ENV CUDA=113
(note that CUDA must not contain any dot, unlike TORCH), then build the container. Now, if I log into the running Docker container, I can get these versions from the command line:
python -c "import torch; print(torch.__version__)"
>>> 1.12.0
python -c "import torch; print(torch.version.cuda)"
>>> 11.3
However, I don't want to hardcode the versions in the Dockerfile, because if I change the base image, the hardcoded values will be wrong, I will try installing the wrong wheels, and installation will fail. I want to find them at build time, and assign them to TORCH & CUDA. How can I do it?
You can use ordinary shell syntax to set environment variables within a RUN command, with the limitation that those settings are lost at the end of that command. So within a single RUN command you can use shell command substitution to set a variable and use it, but its value will no longer be available afterwards.
# all within a single RUN line
RUN TORCH=$(python -c "import torch; print(torch.__version__)"); \
    CUDA=$(python -c "import torch; print(torch.version.cuda)" | sed 's/\.//g'); \
    pip install torch-scatter -f https://data.pyg.org/whl/torch-${TORCH}+${CUDA}.html; \
    pip install torch-sparse -f https://data.pyg.org/whl/torch-${TORCH}+${CUDA}.html
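The variables vanish at the end of that RUN line, so if you need the detected versions in more than one RUN command, one workaround (a sketch, not part of the original answer) is to persist them to files and re-read them in later commands:
RUN python -c "import torch; print(torch.__version__)" > /tmp/torch_version && \
    python -c "import torch; print(torch.version.cuda)" | sed 's/\.//g' > /tmp/cuda_version
# later RUN commands can re-read the captured versions
RUN pip install torch-scatter -f "https://data.pyg.org/whl/torch-$(cat /tmp/torch_version)+$(cat /tmp/cuda_version).html"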
Let's say we have a services.proto with our gRPC service definitions, for example:
service Foo {
  rpc Bar (BarRequest) returns (BarReply) {}
}

message BarRequest {
  string test = 1;
}

message BarReply {
  string test = 1;
}
We could compile this locally to Go by running something like
$ protoc --go_out=. --go_opt=paths=source_relative \
    --go-grpc_out=. --go-grpc_opt=paths=source_relative \
    services.proto
My concern though is that running this last step might produce inconsistent output depending on the installed version of the protobuf compiler and the Go plugins for gRPC. For example, two developers working on the same project might have slightly different versions installed locally.
It would seem reasonable to me to address this by containerizing the protoc step. For example, with a Dockerfile like this...
FROM golang:1.18
WORKDIR /src
RUN apt-get update && apt-get install -y protobuf-compiler
RUN go install google.golang.org/protobuf/cmd/protoc-gen-go@v1.26
RUN go install google.golang.org/grpc/cmd/protoc-gen-go-grpc@v1.1
CMD protoc --go_out=. --go_opt=paths=source_relative --go-grpc_out=. --go-grpc_opt=paths=source_relative services.proto
... we can run the protoc step inside a container:
docker run --rm -v $(pwd):/src $(docker build -q .)
After wrapping the previous command in a shell script, developers can run it on their local machine, giving them deterministic, reproducible output. It can also run in a CI/CD pipeline.
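For instance, the wrapper could be as small as this (the script name genproto.sh is arbitrary):
#!/bin/sh
# genproto.sh - build the image quietly, then run protoc inside it against the current directory
set -e
docker run --rm -v "$(pwd)":/src "$(docker build -q .)"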
My question is, is this a sound approach and/or is there an easier way to achieve the same outcome?
NB, I was surprised to find that the official grpc/go image does not come with protoc preinstalled. Am I off the beaten path here?
My question is, is this a sound approach and/or is there an easier way to achieve the same outcome?
It is definitely a good approach. I do the same, not only to have a consistent setup across the team, but also to ensure we can produce the same output on different OSs.
There is an easier way to do that, though.
Look at this repo: https://github.com/jaegertracing/docker-protobuf
The image is on Docker Hub, but you can create your own image if you prefer.
I use this command to generate Go:
docker run --rm -u $(id -u) \
    -v${PWD}/protos/:/source \
    -v${PWD}/v1:/output \
    -w/source jaegertracing/protobuf:0.3.1 \
    --proto_path=/source \
    --go_out=paths=source_relative,plugins=grpc:/output \
    -I/usr/include/google/protobuf \
    /source/*
I am experimenting with Docker's buildx and noticed that everything seems to be straightforward except for one thing: my Dockerfile needs to pull certain packages depending on the architecture.
For example, here's a piece of the Dockerfile:
FROM XYZ
# Set environment variable for non-interactive install
ARG DEBIAN_FRONTEND=noninteractive
# Run basic commands to update the image and install basic stuff.
RUN apt update && \
    apt dist-upgrade -y -o Dpkg::Options::="--force-confdef" -o Dpkg::Options::="--force-confold" && \
    apt autoremove -y && \
    apt clean -y && \
    ...
    # Install amazon-ssm-agent
    mkdir /tmp/ssm && \
    curl https://s3.amazonaws.com/ec2-downloads-windows/SSMAgent/latest/debian_amd64/amazon-ssm-agent.deb -o /tmp/ssm/amazon-ssm-agent.deb && \
As you can see from above, the command is set to pull down the Amazon SSM agent using a hard-coded link.
What's the best way to approach this? Should I just modify this Dockerfile to create a bunch of if conditions?
Docker automatically defines a set of ARGs for you when you're using the BuildKit backend (which is now the default). You need to declare that ARG, and then (within the RUN command) you can use an environment variable $TARGETOS to refer to the target operating system (the documentation suggests linux or windows).
FROM ...
# Must be explicitly declared, and after FROM
ARG TARGETOS
# Then it can be used like a normal environment variable
RUN curl https://s3.amazonaws.com/ec2-downloads-$TARGETOS/...
There is a similar $TARGETPLATFORM if you need to build either x86 or ARM images, but its value (e.g. linux/amd64, linux/arm64, linux/arm/v7) doesn't necessarily match what's in this URL, so you may need to reconstruct the Debian architecture string. You can set a shell variable within a single RUN command and it will last until the end of that command, but no longer.
ARG TARGETPLATFORM
RUN DEBARCH="${TARGETPLATFORM#linux/}"; \
    if [ "$DEBARCH" = "arm" ] || [ "$DEBARCH" = "arm/v7" ]; then DEBARCH=armhf; fi; \
    curl .../debian_$DEBARCH/...
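Applied to the amazon-ssm-agent example above, that could look like this (assuming the S3 bucket publishes a debian_arm64 build alongside debian_amd64, which is worth verifying):
ARG TARGETARCH
# TARGETARCH resolves to amd64 or arm64 here, matching the directory naming in the URL
RUN mkdir /tmp/ssm && \
    curl https://s3.amazonaws.com/ec2-downloads-windows/SSMAgent/latest/debian_${TARGETARCH}/amazon-ssm-agent.deb -o /tmp/ssm/amazon-ssm-agent.deb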
Scenario
I want to develop ansible roles. Those roles should be validated through a CI/CD process with molecule, using docker as the driver. This validation step should cover multiple Linux flavours (e.g. centos/ubuntu/debian) times the supported ansible versions.
The tests should then be executed such that the role is verified with
centos:7 + ansible:2.5
centos:7 + ansible:2.6
centos:7 + ansible:2.7
...
ubuntu:1604 + ansible:2.5
ubuntu:1604 + ansible:2.6
ubuntu:1604 + ansible:2.7
...
The issues at hand
there is no official ansible image available
how to best test a role for ansible version compatibility?
Issue 1: no official ansible images
The official images by the ansible team have been deprecated for about 3 years now:
https://hub.docker.com/r/ansible/ubuntu14.04-ansible
https://hub.docker.com/r/ansible/centos7-ansible
In addition, the link that the deprecated images point to for finding new ansible images is quite useless due to the sheer number of results:
https://hub.docker.com/search/?q=ansible&page=1&isAutomated=0&isOfficial=0&pullCount=1&starCount=0
Is there a well-maintained ansible docker image by the community (or ansible) that fills the void?
Preferably with multiple versions that can be pulled, and with a CI process that builds and validates the created images regularly.
Why am I looking for ansible images? I do not want to reinvent the wheel (if possible). I would like to use the images to test ansible roles via molecule for version incompatibility.
I searched but could not find anything truly useful. What images are you using to run ansible in a container/orchestrator? Do you build and maintain the images yourself?
e.g. https://hub.docker.com/r/ansibleplaybookbundle/apb-base/tags
looked promising; however, the images there are also over 7 months old (at least).
Issue 2: how to best test a role for ansible version compatibility?
Is creating docker images for each combination of OS and ansible version the best way to test via molecule and docker as driver? Or is there a smarter way to test backward compatibility of ansible roles with multiple OS times different ansible versions?
I already test my ansible roles with molecule and docker as the driver. Those tests currently cover only the functionality of the role on various Linux distros, not ansible backward compatibility, i.e. running the role with older ansible versions.
Here is an example role with travis tests for centos7/ubuntu1604/ubuntu1804, based on geerlingguy's ntp role: https://github.com/Gepardec/ansible-role-ntp
Solution
In order to test ansible roles with multiple versions of ansible and python on various Linux flavors, we can use
molecule for our ansible role functionality
docker as our abstraction layer on which we run the target system for our ansible role
tox to set up generic virtualenvs and test our various combinations without side effects
travis to automate it all
This will be quite a long/detailed answer. You can check out an example ansible role with the whole setup here
https://github.com/ckaserer/ansible-role-example
https://travis-ci.com/ckaserer/ansible-role-example
Step 1: test ansible role with molecule
Molecule docs: https://molecule.readthedocs.io/en/stable/
Fixes Issue 1: no official ansible images
I could create ansible images for every distro I would like to test, as Jeff Geerling describes in his blog posts.
The clear downside of this approach: The images will need maintenance (eventually)
However, with molecule we can combine base images with a Dockerfile.j2 (Jinja2 template) to create images with the minimal requirements for running ansible. With this approach we can use the official Linux distro images from Docker Hub and do not need to maintain a Docker repo for each Linux distro and its various versions.
Here are the important bits in molecule.yml:
platforms:
  - name: instance-${TOX_ENVNAME}
    image: ${MOLECULE_DISTRO:-'centos:7'}
    command: /sbin/init
    volumes:
      - /sys/fs/cgroup:/sys/fs/cgroup:ro
    privileged: true
The default Dockerfile.j2 from molecule is already good, but I have a few additions:
# Molecule managed
{% if item.registry is defined %}
FROM {{ item.registry.url }}/{{ item.image }}
{% else %}
FROM {{ item.image }}
{% endif %}

{% if item.env is defined %}
{% for var, value in item.env.items() %}
{% if value %}
ENV {{ var }} {{ value }}
{% endif %}
{% endfor %}
{% endif %}

RUN if [ $(command -v apt-get) ]; then apt-get update && apt-get install -y python sudo bash ca-certificates iproute2 && apt-get clean; \
    elif [ $(command -v zypper) ]; then zypper refresh && zypper install -y python sudo bash python-xml iproute2 && zypper clean -a; \
    elif [ $(command -v apk) ]; then apk update && apk add --no-cache python sudo bash ca-certificates; \
    elif [ $(command -v xbps-install) ]; then xbps-install -Syu && xbps-install -y python sudo bash ca-certificates iproute2 && xbps-remove -O; \
    elif [ $(command -v swupd) ]; then swupd bundle-add python3-basic sudo iproute2; \
    elif [ $(command -v dnf) ] && grep -q '^NAME=Fedora' /etc/os-release; then dnf makecache && dnf --assumeyes install python sudo python-devel python*-dnf bash iproute && dnf clean all; \
    elif [ $(command -v dnf) ] && grep -q '^NAME="CentOS Linux"' /etc/os-release; then dnf makecache && dnf --assumeyes install python36 sudo platform-python-devel python*-dnf bash iproute && dnf clean all && ln -s /usr/bin/python3 /usr/bin/python; \
    elif [ $(command -v yum) ]; then yum makecache fast && yum install -y python sudo yum-plugin-ovl bash iproute && sed -i 's/plugins=0/plugins=1/g' /etc/yum.conf && yum clean all; \
    fi
# Centos:8 + ansible 2.7 failed with error: "The module failed to execute correctly, you probably need to set the interpreter"
# Solution: ln -s /usr/bin/python3 /usr/bin/python
By default, this will test the role with centos:7. However, we can set the environment variable MOLECULE_DISTRO to whichever image we would like to test and run it via
MOLECULE_DISTRO=ubuntu:18.04 molecule test
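If you want to try several distros locally, a simple shell loop works (the image list here is arbitrary):
for distro in centos:7 ubuntu:18.04 debian:10; do
  MOLECULE_DISTRO=$distro molecule test
done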
Summary
We use official distro images from docker hub to test our ansible role via molecule.
The files used in this step
molecule.yml https://github.com/ckaserer/ansible-role-example/blob/master/molecule/default/molecule.yml
Dockerfile.j2 https://github.com/ckaserer/ansible-role-example/blob/master/molecule/default/Dockerfile.j2
Source
https://molecule.readthedocs.io/en/stable/examples.html#systemd-container
https://www.jeffgeerling.com/blog/2019/how-i-test-ansible-configuration-on-7-different-oses-docker
https://www.jeffgeerling.com/blog/2018/testing-your-ansible-roles-molecule
Step 2: Check compatibility for your role (python version x ansible version x linux distros)
Fixes Issue 2: how to best test a role for ansible version compatibility?
Let's use tox to create virtual environments to avoid side effects while testing various compatibility scenarios.
Here are the important bits in tox.ini:
[tox]
minversion = 3.7
envlist = py{3}-ansible{latest,29,28}-{ alpinelatest,alpine310,alpine39,alpine38, centoslatest,centos8,centos7, debianlatest,debian10,debian9,debian8, fedoralatest,fedora30,fedora29,fedora28, ubuntulatest,ubuntu2004,ubuntu1904,ubuntu1804,ubuntu1604 }
# only test currently supported ansible versions
# https://docs.ansible.com/ansible/latest/reference_appendices/release_and_maintenance.html#release-status
skipsdist = true

[base]
passenv = *
deps =
    -rrequirements.txt
    ansible25: ansible==2.5
    ...
    ansiblelatest: ansible
commands =
    molecule test
setenv =
    TOX_ENVNAME={envname}
    MOLECULE_EPHEMERAL_DIRECTORY=/tmp/{envname}

[testenv]
passenv =
    {[base]passenv}
deps =
    {[base]deps}
commands =
    {[base]commands}
setenv =
    ...
    centoslatest: MOLECULE_DISTRO="centos:latest"
    centos8: MOLECULE_DISTRO="centos:8"
    centos7: MOLECULE_DISTRO="centos:7"
    ...
    {[base]setenv}
The entirety of requirements.txt
docker
molecule
By simply executing
tox
a virtual env will be created for each compatibility combination defined in tox.ini by
envlist = py{3}-ansible{latest,29,28}-{ alpinelatest,alpine310,alpine39,alpine38, centoslatest,centos8,centos7, debianlatest,debian10,debian9,debian8, fedoralatest,fedora30,fedora29,fedora28, ubuntulatest,ubuntu2004,ubuntu1904,ubuntu1804,ubuntu1604 }
which in this particular case translates to: python3 x ansible version x linux distro
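To inspect or run individual combinations instead of the full matrix, tox's standard flags work here (the env name follows the envlist pattern above):
tox -l                             # list all generated environments
tox -e py3-ansiblelatest-centos7   # run a single combination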
Great! We have created tests for compatibility checks with the added benefit of always testing with ansible latest to notice breaking changes early on.
The files used in this step
tox.ini https://github.com/ckaserer/ansible-role-example/blob/master/tox.ini
requirements.txt https://github.com/ckaserer/ansible-role-example/blob/master/requirements.txt
Source
https://tox.readthedocs.io/en/latest/
https://molecule.readthedocs.io/en/stable/testing.html#tox
Step 3: CI with travis
Running the checks locally is good, running in a CI tool is great. So let's do that.
For this purpose, the following bits in the .travis.yml are important:
---
version: ~> 1.0
os: linux
language: python
python:
  - "3.8"
  - "3.7"
  - "3.6"
  - "3.5"
services: docker
cache:
  pip: true
  directories:
    - .tox
install:
  - pip install tox-travis
env:
  jobs:
    # ansible:latest - check for breaking changes
    ...
    - TOX_DISTRO="centoslatest" TOX_ANSIBLE="latest"
    - TOX_DISTRO="centos8" TOX_ANSIBLE="latest"
    - TOX_DISTRO="centos7" TOX_ANSIBLE="latest"
    ...
    # ansible: check version compatibility
    # only test currently supported ansible versions
    # https://docs.ansible.com/ansible/latest/reference_appendices/release_and_maintenance.html#release-status
    - TOX_DISTRO="centoslatest" TOX_ANSIBLE="{29,28}"
    - TOX_DISTRO="centos8" TOX_ANSIBLE="{29,28}"
    - TOX_DISTRO="centos7" TOX_ANSIBLE="{29,28}"
    ...
script:
  - tox -e $(echo py${TRAVIS_PYTHON_VERSION} | tr -d .)-ansible${TOX_ANSIBLE}-${TOX_DISTRO}
  # remove logs/pycache before caching .tox folder
  - |
    rm -r .tox/py*/log/*
    find . -type f -name '*.py[co]' -delete -o -type d -name __pycache__ -delete
First, we specified language: python to run builds with multiple versions of python, as defined in the python: list.
We need docker, so we add it via services: docker.
The tests will take quite some time, so let's cache pip and the virtualenvs created by tox:
cache:
  pip: true
  directories:
    - .tox
We need tox...
install:
  - pip install tox-travis
And finally, we define all our test cases
env:
  jobs:
    # ansible:latest - check for breaking changes
    ...
    - TOX_DISTRO="centoslatest" TOX_ANSIBLE="latest"
    ...
Note that I have separate jobs for latest and for the distinct versions. That is on purpose: I want to see at a glance what broke, whether it is a version incompatibility or an upcoming change in ansible's latest release.
The files used in this step
https://github.com/ckaserer/ansible-role-example/blob/master/.travis.yml
Source
https://docs.travis-ci.com/user/caching/
Bonus: run tox in parallel
You can run the tests in parallel (e.g. 3 tests simultaneously) by executing
tox -p 3
However, this will not give the output from molecule. You can enable that with
tox -p 3 -o true
The obvious downside to this approach is the pain of figuring out which line belongs to which process in the parallel execution.
Source
https://tox.readthedocs.io/en/latest/example/basic.html#parallel-mode
No real answer here, but some ideas:
Ansible Silo might have fit, but it has seen no commits for a year.
And it's not exactly what you're looking for, but Ansible Runner is meant to be a fit for the "run ansible" use case. And it's part of Ansible Tower / AWX, so it should last.
Runner is intended to be most useful as part of automation and tooling that needs to invoke Ansible and consume its results.
They do mention executing from a container here
The design of Ansible Runner makes it especially suitable for controlling the execution of Ansible from within a container for single-purpose automation workflows
But the issue for you is that the official ansible/ansible-runner container is tagged after ansible-runner version, and ansible itself is installed through pip install ansible at container build time.
Is there a Dockerfile for installing cl-json (or another Quicklisp library) on Docker? Most installation instructions I've seen require user input on commands that have no --noinput flag, making them difficult to run from a Dockerfile.
In addition, many of the instructions appear out of date or reference broken links and non-existent resources. It would be convenient to use a Dockerfile that installs the library in a consistent way, e.g. with Quicklisp.
Here is a possible Dockerfile for an application based on SBCL.
FROM dparnell/minimal-sbcl
RUN sbcl --noinform \
    --disable-ldb \
    --lose-on-corruption \
    --eval "(ql:quickload '(buildapp))" \
    --eval '(buildapp:build-buildapp "/bin/buildapp")'

RUN buildapp --load /opt/quicklisp/setup.lisp \
    --eval "(ql:quickload '(cl-json))" \
    --output bin/executable
CMD executable
I am basing the image on dparnell/minimal-sbcl, which comes with Quicklisp pre-installed.
I then run SBCL once to build buildapp (that could be a separate Docker image).
Then I run buildapp, load quicklisp/setup.lisp and install cl-json. You can load as many dependencies as you want with quickload, but I'd recommend defining your own system.asd file and listing the dependencies there.
https://lispcookbook.github.io/cl-cookbook/testing.html#continuous-integration
In this tutorial, we use Gitlab CI with the daewok/lisp-devel Docker image, which includes several Lisp implementations and Quicklisp, so we can run a Lisp and (ql:quickload "cl-json") right away.
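A minimal Dockerfile sketch along those lines (the tag and the assumption that Quicklisp is loaded in the image's default SBCL init follow the tutorial; verify against the image's documentation):
FROM daewok/lisp-devel:latest
COPY . /app
WORKDIR /app
# --non-interactive makes a failed quickload abort the build instead of dropping into the debugger
RUN sbcl --non-interactive --eval '(ql:quickload "cl-json")'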