Issue creating a Packer debian image with systemd enabled - docker

I am using WSL2 on Windows 10. I have the most recent version of WSL (v1.0.3), which supports systemd (systemctl), and I am using Ubuntu 20.04 as my default distribution.
I have a functioning Dockerfile to build a Debian image with systemd enabled. It works perfectly fine. I found it at this link: https://github.com/robertdebock/docker-debian-systemd.
However, I was tasked with redoing this Dockerfile in Packer, since Packer is our standard for building images instead of Dockerfiles. I am having issues: even though I feel I've copied everything from the Dockerfile into my HCL2 code, when I launch the container with docker-compose it comes up saying that systemctl isn't available. systemd is definitely installed in the Packer-built image.
Here is my HCL2 code:
packer {
  required_plugins {
    docker = {
      version = ">= 0.0.7"
      source  = "github.com/hashicorp/docker"
    }
  }
}
source "docker" "debian" {
image = "debian:11"
commit = true
changes = [
"ENV container docker",
"ENV DEBIAN_FRONTEND noninteractive",
"VOLUME /sys/fs/cgroup"
]
}
build {
  name = "${var.docker_login_server}/mycompany-debian-11-docker"
  sources = [
    "source.docker.debian"
  ]

  provisioner "shell" {
    environment_vars = []
    inline = [
      "apt-get update",
      "apt-get install --no-install-recommends -y systemd systemd-sysv procps sudo ca-certificates unzip wget curl",
      "apt-get clean",
      "curl 'https://awscli.amazonaws.com/awscli-exe-linux-x86_64.zip' -o 'awscliv2.zip'",
      "unzip awscliv2.zip",
      "sudo ./aws/install",
      "sudo adduser --disabled-password --gecos '' admin",
      "rm -rf /var/lib/apt/lists/* /tmp/* /var/tmp/*",
      "rm -rf /lib/systemd/system/multi-user.target.wants/*",
      "rm -rf /etc/systemd/system/*.wants/*",
      "rm -rf /lib/systemd/system/local-fs.target.wants/*",
      "rm -rf /lib/systemd/system/sockets.target.wants/*udev*",
      "rm -rf /lib/systemd/system/sockets.target.wants/*initctl*",
      "rm -rf /lib/systemd/system/sysinit.target.wants/systemd-tmpfiles-setup*",
      "rm -rf /lib/systemd/system/systemd-update-utmp*"
    ]
  }

  post-processors {
    post-processor "docker-tag" {
      repository = "${var.docker_login_server}/mycompany-debian-11-docker"
      tags       = ["11"]
    }
    post-processor "docker-push" {
      login               = true
      login_username      = "${var.docker_login_username}"
      login_password      = "${var.docker_login_password}"
      login_server        = "${var.docker_login_server}"
      keep_input_artifact = false
    }
  }
}
Note on the above: if I add the following line to the source -> changes[] section, the image fails to run with a syntax error (below), so I removed it for now.
"CMD /lib/systemd/systemd"
Container error: /bin/sh: 1: Syntax error: "(" unexpected
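(A note on that error, hedged since it is not verified against the Packer docker plugin: Docker's CMD has both a shell form and an exec/JSON-array form, and the exec form never passes through /bin/sh, so it may sidestep whatever is mangling the string. In HCL the changes entry would look roughly like this:)
changes = [
  "ENV container docker",
  "ENV DEBIAN_FRONTEND noninteractive",
  "VOLUME /sys/fs/cgroup",
  "CMD [\"/lib/systemd/systemd\"]"
]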
Once the image is successfully built in Packer and pushed to our internal image repo, I use docker-compose to run it like this:
services:
  debian:
    container_name: debian
    image: {internalRepoUrl}/mycompany-debian-11-docker:11
    tty: true
    volumes:
      - /sys/fs/cgroup:/sys/fs/cgroup:ro
    privileged: true
When I open a terminal on the container and try to run systemctl, I get this error:
System has not been booted with systemd as init system (PID 1). Can't operate.
Failed to connect to bus: Host is down
However, if I launch the image created with the Dockerfile, it works fine. It seems there is simply something I need to do in my Packer HCL that isn't obvious to me. Does anyone have any suggestions?
See the above problem description. I tried replicating in Packer what works in the Dockerfile. I was expecting the Debian container to come up with systemctl functioning, the same way it does when I create the image using the Dockerfile referenced in the link above.
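(A quick check that may confirm the diagnosis, untested: the error says systemd is not running as PID 1, so overriding the command in the compose file so that systemd starts as the init process should make systemctl work even before the image's CMD/ENTRYPOINT is fixed in Packer:)
services:
  debian:
    # ...same as the compose file above...
    command: /lib/systemd/systemd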

Related

Running sonarqube as container with same network as host

I am trying to run a SonarQube container that is created from the Dockerfile below:
FROM node:15-buster
################
# Install java #
################
RUN apt-get update && \
    DEBIAN_FRONTEND=noninteractive \
    apt-get -y install openjdk-11-jre-headless && \
    apt-get clean && \
    rm -rf /var/lib/apt/lists/*
############################
# Install SonarQube client #
############################
WORKDIR /root
RUN apt-get install -y curl grep sed unzip
RUN curl --insecure -o ./sonarscanner.zip -L https://binaries.sonarsource.com/Distribution/sonar-scanner-cli/sonar-scanner-cli-4.4.0.2170-linux.zip
RUN unzip -q sonarscanner.zip
RUN rm sonarscanner.zip
RUN mv sonar-scanner-4.4.0.2170-linux sonar-scanner
ENV SONAR_RUNNER_HOME=/root/sonar-scanner
ENV PATH $PATH:/root/sonar-scanner/bin
# Include Sonar configuration and project paths
COPY ./sonar/sonar-runner.properties ./sonar-scanner/conf/sonar-scanner.properties
# Ensure Sonar uses the provided Java for musl instead of a borked glibc one
RUN sed -i 's/use_embedded_jre=true/use_embedded_jre=false/g' /root/sonar-scanner/bin/sonar-scanner
The SonarQube link is not reachable from the container, even though I confirmed the network checks, e.g. the server is reachable from my Jenkins host. It is only from inside the SonarQube container that the link is unreachable:
ERROR: SonarQube server [https://sonar.***.com] can not be reached
Below is my Jenkinsfile stage for Sonarqube:
stage('SonarQube') {
    agent {
        dockerfile {
            filename 'sonar/Dockerfile'
            args '-u root:root'
        }
    }
    steps {
        withCredentials([string(credentialsId: 'trl-mtr-sonar-login', variable: 'SONAR_LOGIN')]) {
            script {
                sh 'sonar-scanner -Dsonar.login="$SONAR_LOGIN" -Dsonar.projectBaseDir=. || true'
            }
        }
    }
}
The 'withCredentials' plugin is used in the snippet above. I want the container to use the same network as the host. While browsing I found the manual commands to do this, as well as the docker.image(...).inside(...) step, but I still cannot work out how to combine them in my SonarQube pipeline:
# Start a container attached to a specific network
docker run --network [network] [container]
# Attach a running container to a network
docker network connect [network] [container]
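(A hedged sketch of one way to do this: since the declarative dockerfile agent above already passes extra docker run options through args, the host network can presumably be requested the same way; whether --network host is acceptable in your environment is an assumption here:)
agent {
    dockerfile {
        filename 'sonar/Dockerfile'
        args '-u root:root --network host'
    }
}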
I also created the stage below, but even that seems to be failing:
stage('SonarTests') {
    steps {
        docker.image('sonar/Dockerfile').inside('-v /var/run/docker.sock:/var/run/docker.sock --entrypoint="" --net bridge') {
            sh 'sonar-scanner -Dsonar.login="$SONAR_LOGIN" -Dsonar.projectBaseDir=. || true'
        }
    }
}
Could someone please assist here?
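(For the scripted attempt above, a hedged sketch: docker.image() expects an image name rather than a Dockerfile path, so the image has to be built first; the image name and the -f flag below are assumptions about the layout:)
script {
    def scannerImage = docker.build('sonar-scanner-ci', '-f sonar/Dockerfile .')
    scannerImage.inside('--network host') {
        sh 'sonar-scanner -Dsonar.login="$SONAR_LOGIN" -Dsonar.projectBaseDir=. || true'
    }
}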

Is it possible to add an installer, run it and delete it during one build step in Docker?

I'm trying to create a Docker image from a pretty large installer binary (300+ MB). I want to add the installer to the image, install it, and delete the installer. This doesn't seem to be possible:
COPY huge-installer.bin /tmp
RUN /tmp/huge-installer.bin
RUN rm /tmp/huge-installer.bin # <- has no effect on the image size
Using multiple build stages doesn't seem to solve this, since I need to run the installer in the final image. If I could execute the installer directly from a previous build stage, without copying it, that would solve my problem, but as far as I know that's not possible.
Is there any way to avoid including the full weight of the installer in the final image?
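(One option worth noting, hedged since it depends on BuildKit being available: RUN --mount=type=bind,from=... lets a later stage execute a file taken from an earlier stage without ever copying it into a layer, which is essentially the "run it directly from a previous build stage" idea. A sketch, with debian:11 standing in for whatever base image is actually used:)
# syntax=docker/dockerfile:1
FROM debian:11 AS installer
COPY huge-installer.bin /huge-installer.bin

FROM debian:11
# The bind mount exists only for the duration of this RUN step and leaves nothing behind in the layer
RUN --mount=type=bind,from=installer,source=/huge-installer.bin,target=/tmp/huge-installer.bin \
    /tmp/huge-installer.bin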
I ended up solving this by using the built-in HTTP server in Python to make the project directory available to the image over HTTP.
Inside the Dockerfile, I can run commands like this, piping scripts directly to bash using curl:
RUN curl "http://127.0.0.1:${SERVER_PORT}/installer-${INSTALLER_VERSION}.bin" | bash
Or save binaries, run them and delete them in one step:
RUN curl -O "http://127.0.0.1:${SERVER_PORT}/binary-${INSTALLER_VERSION}.bin" && \
    ./binary-${INSTALLER_VERSION}.bin && \
    rm binary-${INSTALLER_VERSION}.bin
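(Note: for ${SERVER_PORT} and ${INSTALLER_VERSION} to have values inside those RUN steps, they presumably need ARG declarations near the top of the Dockerfile, with the version passed as another --build-arg; the default below just mirrors the Makefile value:)
ARG SERVER_PORT=8580
ARG INSTALLER_VERSION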
I use a Makefile to start the server and stop it after the build, but you can use a build script instead.
Here's a Makefile example:
SHELL := bash
IMAGE_NAME := app-test
VERSION := 1.0.0
SERVER_PORT := 8580

.ONESHELL:
.PHONY: build
build:
	# Kills the HTTP server when the build is done
	function cleanup {
		pkill -f "python3 -m http.server.*${SERVER_PORT}"
	}
	trap cleanup EXIT
	# Starts an HTTP server that makes the contents of the project directory
	# available to the image
	python3 -m http.server -b 127.0.0.1 ${SERVER_PORT} &>/dev/null &
	sleep 1
	EXTRA_ARGS=""
	# Allows skipping the build cache by setting NO_CACHE=1
	if [[ -n $$NO_CACHE ]]; then
		EXTRA_ARGS="--no-cache"
	fi
	docker build $$EXTRA_ARGS \
		--network host \
		--build-arg SERVER_PORT=${SERVER_PORT} \
		-t ${IMAGE_NAME}:latest \
		.
	docker tag ${IMAGE_NAME}:latest ${IMAGE_NAME}:${VERSION}
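With that in place the build is just make build, and setting the variable mentioned in the comment skips the Docker build cache:
make build
NO_CACHE=1 make build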
I think the best way is to download the binary from a website and then run it:
RUN wget -O /tmp/huge-installer.bin http://myweb/huge-installer.bin && \
    chmod +x /tmp/huge-installer.bin && /tmp/huge-installer.bin && rm /tmp/huge-installer.bin
This way your image layer will not contain the binary you downloaded.
I didn't test this thoroughly, but wouldn't an approach like the one below be viable? (Besides LinPy's answer, which is far easier if you are able to just do it that way.)
Dockerfile:
FROM alpine:latest
COPY entrypoint.sh /tmp/entrypoint.sh
RUN \
    echo "I am an image that can run your huge installer binary!" \
    && echo "I will only function when you give it to me as a volume mount."
ENTRYPOINT [ "/tmp/entrypoint.sh" ]
entrypoint.sh:
#!/bin/sh
/tmp/your-installer # install your stuff here
while true; do
    echo "installer finished, commit me now!"
    sleep 5
done
Then run:
$ docker build -t foo-1 .
$ docker run --rm --name foo-1 -d -v $(pwd)/your-installer:/tmp/your-installer foo-1
$ docker logs -f foo-1
# once it echoes "commit me now!", run the next command
$ docker commit foo-1 foo-2
$ docker stop foo-1
Since the installer was only mounted as a volume, the image foo-2 should not contain it anymore. You could also go and build another Dockerfile based on foo-2 to change the entrypoint, for example.
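(As a small addition: instead of a follow-up Dockerfile, the entrypoint can also be swapped at commit time with --change; the /bin/sh entrypoint here is only an example of whatever the final image should actually run:)
$ docker commit --change 'ENTRYPOINT ["/bin/sh"]' foo-1 foo-2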
Cf. docker commit

Visual Studio Code Remote - Containers - change shell

When launching an attached container in "VS Code Remote Development", has anyone found a way to change the container's shell when launching the VS Code integrated terminal?
It seems to run something similar to:
docker exec -it <containername> /bin/bash
I am looking for the equivalent of
docker exec -it <containername> /bin/zsh
The only settings I found for Attached containers are
"remote.containers.defaultExtensions": []
I worked around it with
RUN echo "if [ -t 1 ]; then" >> /root/.bashrc
RUN echo "exec zsh" >> /root/.bashrc
RUN echo "fi" >> /root/.bashrc
Still would be interested in knowing if there was a way to set this per container.
I use a Docker container for my development environment and set the shell to bash in my Dockerfile:
# …
ENTRYPOINT ["bash"]
Yet when VS Code connected to my container it insisted on using the /bin/ash shell, which was driving me crazy. However, the fix (at least for me) was very simple, though not obvious:
From the .devcontainer.json reference.
All I needed to do in my case was to add the following entry in my .devcontainer.json file:
{
  …
  "settings": {
    "terminal.integrated.shell.*": "/bin/bash"
  }
  …
}
Complete .devcontainer.json file (FYI)
{
  "name": "project-blueprint",
  "dockerComposeFile": "./docker-compose.yml",
  "service": "dev",
  "workspaceFolder": "/workspace/dev",
  "postCreateCommand": "yarn",
  "settings": {
    "terminal.integrated.shell.*": "/bin/bash"
  }
}
I'd like to contribute to this thread, since I spent a decent amount of time combing the web for a good solution to this involving VS Code's newer terminal.integrated.profiles.linux API.
Note that as of 20 Jan 2022 both the commented and the uncommented JSON work; the uncommented lines are the new, non-deprecated way to get this working with Dev Containers.
{
  "settings": {
    // "terminal.integrated.shell.linux": "/bin/zsh"
    "terminal.integrated.defaultProfile.linux": "zsh",
    "terminal.integrated.profiles.linux": {
      "zsh": {
        "path": "/bin/zsh"
      }
    }
  }
}
If anyone is interested, I also figured out how to get Oh My Zsh built into the image.
Dockerfile:
# Setup Stage - set up the ZSH environment for optimal developer experience
FROM node:16-alpine AS setup
RUN npm install -g expo-cli
# Let scripts know we're running in Docker (useful for containerized development)
ENV RUNNING_IN_DOCKER true
# Use the unprivileged `node` user (pre-created by the Node image) for safety (and because it has permission to install modules)
RUN mkdir -p /app \
    && chown -R node:node /app
# Set up ZSH and our preferred terminal environment for containers
RUN apk --no-cache add zsh curl git
# Set up ZSH as the unprivileged user (we just need to start it, it'll initialize our setup itself)
USER node
# set up oh my zsh
RUN cd ~ && wget https://raw.githubusercontent.com/ohmyzsh/ohmyzsh/master/tools/install.sh && sh install.sh
# initialize ZSH
RUN /bin/zsh ~/.zshrc
# Switch back to root
USER root

"rsync" was not detected as installed in your guest machine

I'm trying to setup Vagrant with docker as a provider but when running
vagrant up --provider=docker --debug
I get this error:
"rsync" was not detected as installed in your guest machine. This
is required for rsync synced folders to work. In addition to this,
Vagrant doesn't know how to automatically install rsync for your
machine, so you must do this manually.
Full log here:
http://pastebin.com/zCTSqibM
Vagrantfile
require 'yaml'
Vagrant.configure("2") do |config|
user_config = YAML.load_file 'user_config.yml'
config.vm.provider "docker" do |d|
d.build_dir = "."
d.has_ssh = true
d.ports = user_config['port_mapping']
d.create_args = ["--dns=127.0.0.1","--dns=8.8.8.8", "--dns=8.8.4.4"]
d.build_args = ['--no-cache=true'] end
config.vm.hostname = "dev"
config.ssh.username = "it" config.ssh.port = 22 config.ssh.private_key_path = ["./initial_ssh_key", user_config['ssh_private_key_path']] config.ssh.forward_agent = true
end
Dockerfile
FROM debian:jessie MAINTAINER IT <it#email.com>
RUN echo 'exit 0' > /usr/sbin/policy-rc.d
RUN echo 'debconf debconf/frontend select Noninteractive' | debconf-set-selections
RUN apt-get update RUN apt-get upgrade -y RUN apt-get install sudo apt-utils -y
RUN apt-get -y install sysvinit-core sysvinit sysvinit-utils RUN cp /usr/share/sysvinit/inittab /etc/inittab RUN apt-get remove -y --purge
--auto-remove systemd libpam-systemd systemd-sysv
RUN apt-get install ssh -y
RUN addgroup --system it RUN adduser --system --disabled-password
--uid 1000 --shell /bin/bash --home /home/it it RUN adduser it it RUN adduser it sudo
RUN echo "it ALL=(ALL) NOPASSWD: ALL" >> /etc/sudoers
ADD initial_ssh_key.pub /home/it/.ssh/authorized_keys RUN chown it:it /home/it/ -R RUN echo "Host * \n\tStrictHostKeyChecking no" >> /etc/ssh/ssh_config
CMD exec /sbin/init
Note:
I'm on Mac OS X 10.12 and have installed Vagrant, VirtualBox, and Docker. I have rsync installed and added to my PATH on the host machine.
Also, the same vagrant and docker configs works perfectly on a ubuntu host.
How do I install rsync in the guest machine? Or is something else wrong with my config? Any ideas?
You may want to give the alternative boot2docker box a try: https://github.com/dduportal/boot2docker-vagrant-box
It contains rsync, while hashicorp/boot2docker, which is used by default, seems to lack it!
If you do so, you must add the following line to your docker provider config (adapted to your system, of course):
d.vagrant_vagrantfile = "../path/to/Vagrantfile"
This is because you're changing the docker provider host vm as described in the vagrant docker provider documentation.
Try adding rsync to your Dockerfile, in one of your existing apt-get lines. Linux hosts use NFS by default; that's why it works on your Ubuntu host.
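For example, against the Dockerfile above, the existing ssh line could simply become:
RUN apt-get install ssh rsync -y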
Normally Vagrant tries to install rsync on the guest machine, and if that fails it notifies you with that error message. More info on the Vagrant website (3rd paragraph of the "Prerequisites" chapter).

Running docker-compose on a docker gitlab-ci-multi-runner

I have a project running on Docker with docker-compose for dev environment.
I want to get it running on GitLabCI with a gitlab-ci-multi-runner "Docker mode" instance.
Here is my .gitlab-ci.yml file:
image: soullivaneuh/docker-bash

before_script:
  - apk add --update bash curl
  - curl --silent --location https://github.com/docker/compose/releases/download/1.5.2/docker-compose-`uname -s`-`uname -m` > /usr/local/bin/docker-compose
  - chmod +x /usr/local/bin/docker-compose
  - ./configure
  - docker-compose up -d
Note that soullivaneuh/docker-bash image is just a docker image with bash installed.
The script fails on docker-compose up -d command:
gitlab-ci-multi-runner 0.7.2 (998cf5d)
Using Docker executor with image soullivaneuh/docker-bash ...
Pulling docker image soullivaneuh/docker-bash:latest ...
Running on runner-1ee5079f-project-3-concurrent-1 via sd-59984...
Fetching changes...
Removing app/config/parameters.yml
Removing docker-compose.env
HEAD is now at 5c5e7ff remove docker service
From https://git.dummy.net/project/project
5c5e7ff..45e643d docker-ci -> origin/docker-ci
Checking out 45e643dd as docker-ci...
Previous HEAD position was 5c5e7ff... remove docker service
HEAD is now at 45e643d... Remove docker info commands
$ apk add --update bash curl
fetch http://dl-4.alpinelinux.org/alpine/v3.2/main/x86_64/APKINDEX.tar.gz
OK: 10 MiB in 28 packages
$ curl --silent --location https://github.com/docker/compose/releases/download/1.5.2/docker-compose-`uname -s`-`uname -m` > /usr/local/bin/docker-compose
$ chmod +x /usr/local/bin/docker-compose
$ ./configure
$ docker-compose up -d
bash: line 30: /usr/local/bin/docker-compose: No such file or directory
ERROR: Build failed with: exit code 1
I have absolutely no idea why this is failing.
Thanks for your help.
The No such file or directory error is misleading. I've hit it many times while trying to run dynamically linked binaries on Alpine Linux (which it appears you are using).
The problem (as I understand it) is that the binary was compiled and linked against glibc, but Alpine uses musl, not glibc.
You could use ldd /usr/local/bin/docker-compose to see which libraries are missing (or run it with strace if all else fails).
To get it working, it might be easier to install from python source (https://docs.docker.com/compose/install/#install-using-pip), which is what the official compose image does (https://github.com/docker/compose/blob/master/Dockerfile.run).
Or you could use an image built on debian or some other distro that uses glibc.
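(For example, the pip route might look like this in the before_script; the package names, such as py3-pip, depend on the Alpine release and are assumptions on my part:)
before_script:
  - apk add --update bash curl py3-pip
  - pip3 install docker-compose
  - ./configure
  - docker-compose up -d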
