Unable to Build Docker Image Using GitHub Actions - docker

I'm currently trying to build a Docker image using GitHub Actions (CI). I can successfully build it on my machine and on several other x86_64 machines, which I believe is also the architecture GitHub Actions runs on, but when building in CI I hit the following error:
standard_init_linux.go:219: exec user process caused: exec format error
The command '/bin/sh -c apt-get update && apt-get install -y build-essential psmisc ifupdown omxplayer x11-xserver-utils xserver-xorg libraspberrypi0 libraspberrypi-dev raspberrypi-kernel-headers cec-utils libpng12-dev git-core wget --no-install-recommends && apt-get clean && rm -rf /var/lib/apt/*' returned a non-zero code: 1
I've searched multiple other threads here, but I wasn't able to find anything useful and I'm not quite sure what else to try. Any help or suggestions would be much appreciated.
Relevant Files:
This is the full build log
This is the Dockerfile
This is the CI file
This is the full repository

Your base image is invalid for amd64:
$ docker image inspect balenalib/raspberry-pi-debian-node:latest-jessie
...
"Architecture": "amd64",
...
$ docker run -it --rm balenalib/raspberry-pi-debian-node:latest-jessie /bin/bash
...
root@2eb37d8359ed:/# dpkg --print-architecture
armhf
That base image won't run on systems without qemu's binfmt_misc configured to run binaries for other platforms.
It's actually not a multi-platform base image at all; it is only designed to run on systems with qemu set up (note that the media type is a single manifest, not a manifest list):
$ regctl image manifest --list balenalib/raspberry-pi-debian-node:latest-jessie
{
  "schemaVersion": 2,
  "mediaType": "application/vnd.docker.distribution.manifest.v2+json",
  "config": {
    "mediaType": "application/vnd.docker.container.image.v1+json",
    "size": 11726,
    "digest": "sha256:5ec0839ecb046f260ad72751d0c4b08c7a085b147a519619e5a54876643a3231"
  },
  "layers": [
    {
      "mediaType": "application/vnd.docker.image.rootfs.diff.tar.gzip",
      "size": 40222636,
      "digest": "sha256:d84b7435af12678c551b7489227b74c994981386b5bc4875ec512e11f28249c5"
    },
    ...
And the image configuration has more pointers to qemu:
$ regctl image inspect balenalib/raspberry-pi-debian-node:latest-jessie
{
  "created": "2019-05-02T22:50:58.241895826Z",
  "architecture": "amd64",
  "os": "linux",
  "config": {
    "Env": [
      "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin",
      "LC_ALL=C.UTF-8",
      "DEBIAN_FRONTEND=noninteractive",
      "UDEV=off",
      "QEMU_CPU=arm1176",
      "NODE_VERSION=11.14.0",
      "YARN_VERSION=1.12.3"
    ],
    ...
This won't work on hosts without qemu's binfmt_misc set up. For building within a GitHub Actions workflow, you can use the setup-qemu action:
- name: Set up QEMU
  id: qemu
  uses: docker/setup-qemu-action@v1
  with:
    image: tonistiigi/binfmt:latest
    platforms: all
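For completeness, the same binfmt registration the action performs can also be done by hand, which is useful for reproducing the CI build locally. A minimal sketch, assuming the tonistiigi/binfmt image referenced above and a Dockerfile in the current directory (the image tag below is just a placeholder):

# Register qemu's binfmt_misc handlers for foreign architectures (privileged, one-time per boot).
docker run --privileged --rm tonistiigi/binfmt --install all

# Confirm an arm handler is now registered.
ls /proc/sys/fs/binfmt_misc/ | grep qemu-arm

# The armhf base image can now run under emulation, so the build should proceed.
docker build -t my-rpi-image .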

Related

It seems you are running Vue CLI inside a container

I'm trying to run my Vue.js app using VS Code Remote - Containers. It's deployed and I can access it via the URL localhost:8080/, but if I update a JS file it doesn't recompile and doesn't even hot-reload.
devcontainer.json
{
  "name": "Aquawebvue",
  "dockerFile": "Dockerfile",
  "appPort": [3000],
  "runArgs": ["-u", "node"],
  "settings": {
    "workbench.colorTheme": "Cobalt2",
    "terminal.integrated.automationShell.linux": "/bin/bash"
  },
  "postCreateCommand": "yarn",
  "extensions": [
    "esbenp.prettier-vscode",
    "wesbos.theme-cobalt2"
  ]
}
Dockerfile
FROM node:12.13.0
RUN npm install -g prettier
After opening the container and running 'yarn serve' in the terminal, it builds and deploys successfully, but I get this warning:
It seems you are running Vue CLI inside a container.
Since you are using a non-root publicPath, the hot-reload socket
will not be able to infer the correct URL to connect. You should
explicitly specify the URL via devServer.public.
VS Code has a pre-defined .devcontainer configuration for Vue projects. It can be found on GitHub. You can add it automatically by running the command Add Development Container Configuration Files... > Show All Definitions > Vue.
Dockerfile
# [Choice] Node.js version: 14, 12, 10
ARG VARIANT=14
FROM mcr.microsoft.com/vscode/devcontainers/javascript-node:0-${VARIANT}
RUN su node -c "umask 0002 && npm install -g http-server @vue/cli @vue/cli-service-global"
WORKDIR /app
EXPOSE 8080
# [Optional] Uncomment this section to install additional OS packages.
# RUN apt-get update && export DEBIAN_FRONTEND=noninteractive \
# && apt-get -y install --no-install-recommends <your-package-list-here>
devcontainer.json
{
  "name": "Vue (Community)",
  "build": {
    "dockerfile": "Dockerfile",
    "context": "..",
    // Update 'VARIANT' to pick a Node version: 10, 12, 14
    "args": { "VARIANT": "14" }
  },
  // Set *default* container specific settings.json values on container create.
  "settings": {
    "terminal.integrated.shell.linux": "/bin/zsh"
  },
  // Add the IDs of extensions you want installed when the container is created.
  "extensions": [
    "dbaeumer.vscode-eslint",
    "octref.vetur"
  ],
  // Use 'forwardPorts' to make a list of ports inside the container available locally.
  "forwardPorts": [
    8080
  ],
  // Use 'postCreateCommand' to run commands after the container is created.
  // "postCreateCommand": "uname -a",
  // Comment out connect as root instead. More info: https://aka.ms/vscode-remote/containers/non-root.
  "remoteUser": "node"
}
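The hot-reload warning itself can also be addressed directly by telling the dev server its public URL, as the message suggests. A hedged sketch, written from inside the container's project root; the port and values are assumptions based on the template's forwarded port 8080, not something from the devcontainer definition:

# Write a minimal vue.config.js in the project root (Vue CLI picks it up automatically).
# host 0.0.0.0 makes the server reachable from outside the container;
# devServer.public is the URL the hot-reload socket should connect back to.
cat > vue.config.js <<'EOF'
module.exports = {
  devServer: {
    host: '0.0.0.0',
    port: 8080,
    public: 'localhost:8080'
  }
};
EOF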

Docker hub image fails but building its Dockerfile works. What is happening?

I have used Docker Compose a lot recently, but this time I found a container I really want to use whose Docker Hub image is not compatible with my arm/v6 Raspberry Pi.
Using it anyway results in
standard_init_linux.go:219: exec user process caused: exec format error
Strangely, copying the Dockerfile and building it with
build:
  context: ./ttrss-docker/src/app
results in the app working well. But for some reason, I can't use the Docker Hub image.
In case it matters, the Dockerfile is this, and the Docker Hub image is this.
FROM alpine:3.12
EXPOSE 9000/tcp
RUN apk add --no-cache dcron php7 php7-fpm \
php7-pdo php7-gd php7-pgsql php7-pdo_pgsql php7-mbstring \
php7-intl php7-xml php7-curl php7-session \
php7-dom php7-fileinfo php7-json \
php7-pcntl php7-posix php7-zip php7-openssl \
git postgresql-client sudo
ADD startup.sh /
ADD updater.sh /
ADD index.php /
ADD dcron.sh /
ADD backup.sh /etc/periodic/weekly/backup
RUN sed -i.bak 's/^listen = 127.0.0.1:9000/listen = 9000/' /etc/php7/php-fpm.d/www.conf
RUN sed -i.bak 's/\(memory_limit =\) 128M/\1 256M/' /etc/php7/php.ini
RUN mkdir -p /var/www
CMD /startup.sh
Question: if I don't use the Docker Hub image, can Watchtower update my container?
If not, does anyone know what's happening and how I can achieve a container that updates via Watchtower?
Many thanks :)
The image you are pulling has only been built for a single architecture: amd64. The resulting binaries and libraries are not usable on other platforms like ARM used by the Raspberry Pi. Below are the debugging steps to verify this.
The manifest is application/vnd.docker.distribution.manifest.v2+json:
$ regctl image manifest --list cthulhoo/ttrss-fpm-pgsql-static
{
  "schemaVersion": 2,
  "mediaType": "application/vnd.docker.distribution.manifest.v2+json",
  "config": {
    "mediaType": "application/vnd.docker.container.image.v1+json",
    "size": 4257,
    "digest": "sha256:916ae5126809992b922c5db0f41e62a40be245703685e19f51797db95f312e81"
  },
  ...
Checking the architecture of that image:
$ regctl image inspect cthulhoo/ttrss-fpm-pgsql-static --format '{{.Architecture}}'
amd64
This would need to be fixed by the image creator by building the image for ARM platforms as well, which is what you see with the Alpine base image:
$ regctl image manifest --list alpine:3.12
{
  "schemaVersion": 2,
  "mediaType": "application/vnd.docker.distribution.manifest.list.v2+json",
  "manifests": [
    {
      "mediaType": "application/vnd.docker.distribution.manifest.v2+json",
      "size": 528,
      "digest": "sha256:074d3636ebda6dd446d0d00304c4454f468237fdacf08fb0eeac90bdbfa1bac7",
      "platform": {
        "architecture": "amd64",
        "os": "linux"
      }
    },
    {
      "mediaType": "application/vnd.docker.distribution.manifest.v2+json",
      "size": 528,
      "digest": "sha256:096ebf69d65b5dcb3756fcfb053e6031a3935542f20cd7a8b7c59e1b3cb71558",
      "platform": {
        "architecture": "arm",
        "os": "linux",
        "variant": "v6"
      }
    },
    {
      "mediaType": "application/vnd.docker.distribution.manifest.v2+json",
      "size": 528,
      "digest": "sha256:299294be8699c1b323c137f972fd0aa5eaa4b95489c213091dcf46ef39b6c810",
      "platform": {
        "architecture": "arm",
        "os": "linux",
        "variant": "v7"
      }
    },
    ...
Building multi-platform images is often done with buildx. The regctl command used above is part of my regclient project.
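For reference, a sketch of what a multi-platform buildx build and push could look like; the builder name, tag, and platform list below are placeholders, not taken from the question:

# Create and select a BuildKit builder capable of multi-platform builds (one-time setup).
docker buildx create --name multiarch --use

# Build for amd64 and the Raspberry Pi's arm/v6 in one pass and push the
# resulting manifest list to the registry.
docker buildx build \
  --platform linux/amd64,linux/arm/v6 \
  -t yourrepo/ttrss-fpm-pgsql-static:latest \
  --push .

Note that cross-building this way also relies on the qemu/binfmt setup discussed in the first answer above.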

Packer fails my docker build with error "sudo: not found" despite sudo being present

I'm trying to build a Packer image with Docker on it, and I want Docker to create a Docker image with a custom script. The relevant portion of my code is (note that the top builder double-checks that sudo is installed):
{
  "type": "shell",
  "inline": [
    "apt-get install sudo"
  ]
},
{
  "type": "docker",
  "image": "python:3",
  "commit": true,
  "changes": [
    "RUN pip install Flask",
    "CMD [\"python\", \"echo.py\"]"
  ]
}
The relevant portion of my screen output is:
==> docker: Provisioning with shell script: /var/folders/s8/g1_gobbldygook/T/packer-shell23453453245
    docker: /tmp/script_1234.sh: 3: /tmp/script_1234.sh: sudo: not found
==> docker: Killing the container: 34234hashvomit234234
Build 'docker' errored: Script exited with non-zero exit status: 127
The script in question is not one of mine. It's some randomly generated script that has a different series of four numbers every time I build. I'm new to both packer and docker, so maybe it's obvious what the problem is, but it's not to me.
There seem to be a few problems with your Packer input. Since you haven't included the complete input file it's hard to tell for sure, but I notice a couple of things:
You probably need to run apt-get update before calling apt-get install sudo. Without that, even if the image has cached package metadata it is probably stale. If I try to build an image using your input it fails with:
E: Unable to locate package sudo
While not a problem in this context, it's good to explicitly include -y on the apt-get command line when you're running it non-interactively:
apt-get -y install sudo
In situations where apt-get is attached to a terminal, this will prevent it from prompting for confirmation. This is not a necessary change to your input, but I figure it's good to be explicit.
Based on the docs and on my testing, you can't include a RUN statement in the changes block of a docker builder. That fails with:
Stderr: Error response from daemon: run is not a valid change command
Fortunately, we can move that pip install command into a shell provisioner.
With those changes, the following input successfully builds an image:
{
  "builders": [{
    "type": "docker",
    "image": "python:3",
    "commit": true
  }],
  "provisioners": [{
    "type": "shell",
    "inline": [
      "apt-get update",
      "apt-get -y install sudo",
      "pip install Flask"
    ]
  }],
  "post-processors": [[ {
    "type": "docker-tag",
    "repository": "packer-test",
    "tag": "latest"
  } ]]
}
(NB: Tested using Packer v1.3.5)
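To sanity-check the result, something along these lines should work; the template file name is hypothetical, while packer-test:latest comes from the docker-tag post-processor above:

# Build from the corrected template (file name is just an example).
packer build packer-python.json

# Confirm Flask was installed into the committed image.
docker run --rm packer-test:latest python -c "import flask; print(flask.__version__)"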

Ansible not executing main.yml

I am using Ansible local inside a Packer script to configure a Docker image. I have a role test that has a main.yml file that's supposed to output some information and create a directory to see that the script actually ran. However, the main.yml doesn't seem to get run.
Here is my playbook.yml:
---
- name: apply configuration
  hosts: all
  remote_user: root
  roles:
    - test
test/tasks/main.yml:
---
- name: Test output
  shell: echo 'testing output from test'
- name: Make test directory
  file: path=/test state=directory owner=root
When running this via packer build packer.json I get the following output from the portion related to Ansible:
docker: Executing Ansible: cd /tmp/packer-provisioner-ansible-local/59a33ccb-bd9f-3b49-65b0-4cc20783f193 && ANSIBLE_FORCE_COLOR=1 PYTHONUNBUFFERED=1 ansible-playbook /tmp/packer-provisioner-ansible-local/59a33ccb-bd9f-3b49-65b0-4cc20783f193/playbook.yml --extra-vars "packer_build_name=docker packer_builder_type=docker packer_http_addr=" -c local -i /tmp/packer-provisioner-ansible-local/59a33ccb-bd9f-3b49-65b0-4cc20783f193/packer-provisioner-ansible-local037775056
docker:
docker: PLAY [apply configuration] *****************************************************
docker:
docker: TASK [setup] *******************************************************************
docker: ok: [127.0.0.1]
docker:
docker: PLAY RECAP *********************************************************************
docker: 127.0.0.1 : ok=1 changed=0 unreachable=0 failed=0
I used to run a different more useful role this way and it worked fine. I hadn't run this for a few months and now it stopped working. Any ideas what I am doing wrong? Thank you!
EDIT:
here is my packer.json:
{
  "builders": [
    {
      "type": "docker",
      "image": "ubuntu:latest",
      "commit": true,
      "run_command": [ "-d", "-i", "-t", "--name", "{{user `ansible_host`}}", "{{.Image}}", "/bin/bash" ]
    }
  ],
  "provisioners": [
    {
      "type": "shell",
      "inline": [
        "apt-get -y update",
        "apt-get -y install ansible"
      ]
    },
    {
      "type": "ansible-local",
      "playbook_file": "ansible/playbook.yml",
      "playbook_dir": "ansible",
      "role_paths": [
        "ansible/roles/test"
      ]
    }
  ]
}
This seems to be due to a bug in Packer. Everything works as expected with any Packer version other than 1.0.4. I recommend either downgrading to 1.0.3 or installing the yet-to-be-released 1.1.0 version.
My best guess is that this is being caused by a known and fixed issue about how directories get copied by the docker builder when using the Ansible local provisioner.
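If you go the downgrade route, a rough sketch of pinning 1.0.3; the download URL follows HashiCorp's usual release layout, so adjust the OS/arch suffix as needed:

# Check which Packer version is currently on the PATH.
packer --version

# Fetch and unpack the known-good 1.0.3 release (Linux amd64 shown).
wget https://releases.hashicorp.com/packer/1.0.3/packer_1.0.3_linux_amd64.zip
sudo unzip -o packer_1.0.3_linux_amd64.zip -d /usr/local/bin/
packer --version   # should now report 1.0.3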

docker pull failed. manifest invalid: manifest invalid - artifactory

Docker 1.9.1 pull on CentOS 7 is failing when pulling from a private V2 registry.
$ docker -v
Docker version 1.9.1, build 78ee77d/1.9.1
$ docker pull web-docker.bin-repo.hostname.com/web-dev:latest
Trying to pull repository web-docker.bin-repo.hostname.com/web-dev ...
failed
manifest invalid: manifest invalid
The same command works fine on OS X with Docker 1.10.3. Can anyone tell me why this isn't working and how to troubleshoot further?
Update: here is the manifest it's trying to pull. It can pull v1 manifests, but it fails on v2 manifests like the one below.
{
  "schemaVersion": 2,
  "mediaType": "application/vnd.docker.distribution.manifest.v2+json",
  "config": {
    "mediaType": "application/octet-stream",
    "size": 7503,
    "digest": "sha256:58672cb2c8c6d44c1271a5ca38e60a4ab29fb60050bc76995ce662c126509036"
  },
  "layers": [
    {
      "mediaType": "application/vnd.docker.image.rootfs.diff.tar.gzip",
      "size": 32,
      "digest": "sha256:a3ed95caeb02ffe68cdd9fd84406680ae93d633cb16422d00e8a7c22955b46d4"
    },
    {
      "mediaType": "application/vnd.docker.image.rootfs.diff.tar.gzip",
      "size": 72038766,
      "digest": "sha256:35d9d5d11536c0c6843ecd106dc710b5c54b8198aa28710e73dba2cbe555847f"
    },
    {
      "mediaType": "application/vnd.docker.image.rootfs.diff.tar.gzip",
      "size": 19361671,
      "digest": "sha256:f7de7971859186e93100b41fbba5513771737ba65f57c62404130646bd41b96b"
    },
    {
      "mediaType": "application/vnd.docker.image.rootfs.diff.tar.gzip",
      "size": 108814795,
      "digest": "sha256:0041a80e34f1271571619554f6833c06e0ef75d39f152f5fe44ba75bf7e25ae2"
    },
    {
      "mediaType": "application/vnd.docker.image.rootfs.diff.tar.gzip",
      "size": 157895786,
      "digest": "sha256:ec3cfa9c22f7e6497a0eacf85c86bf8eb5fdec35d096298f9efb43827a393472"
    }
  ]
}
What I observed is that this issue occurs whenever you push the same image artifact a second time with the same SHA.
To solve it, I would recommend granting permission to overwrite/delete the manifest file in Artifactory.
The problem resolved itself after upgrading to a newer version of Docker (version 1.10.3, build 20f81dd). The standard yum repo lags behind in versions, so add the Docker repo and get the latest version of Docker:
sudo yum update
Add the yum repo:
sudo tee /etc/yum.repos.d/docker.repo <<-'EOF'
[dockerrepo]
name=Docker Repository
baseurl=https://yum.dockerproject.org/repo/main/centos/$releasever/
enabled=1
gpgcheck=1
gpgkey=https://yum.dockerproject.org/gpg
EOF
Install the docker-engine:
sudo yum install docker-engine
Start the daemon:
sudo service docker start
Add the insecure-registry flag (if the private registry does not have a cert):
sudo vi /usr/lib/systemd/system/docker.service
ExecStart=/usr/bin/docker daemon --insecure-registry web-docker.bin-repo.hostname.com -H fd://
Reload the daemon:
sudo systemctl daemon-reload
Pull from the private registry:
sudo docker pull web-docker.bin-repo.hostname.com/web-dev:latest
latest: Pulling from web-dev
a3ed95caeb02: Pull complete
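As a side note, on newer Docker releases the insecure-registry setting is usually placed in /etc/docker/daemon.json instead of being appended to ExecStart in the systemd unit. A minimal sketch of the equivalent configuration:

# Declare the private registry as insecure in the daemon config instead of
# editing the systemd unit.
sudo tee /etc/docker/daemon.json <<'EOF'
{
  "insecure-registries": ["web-docker.bin-repo.hostname.com"]
}
EOF

# Apply the change.
sudo systemctl restart docker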
