I have a Docker image that contains all ARM binaries, except for a statically linked x86 QEMU executable. It is specifically designed for doing ARM builds while on x86 hardware.
The base image is show0k/miniconda-armv7. Since I don't use Conda, but do need Python, I then build atop it with this Dockerfile:
FROM show0k/miniconda-armv7
MAINTAINER savanni@cloudcity.io
RUN [ "cross-build-start" ]
RUN apt-get update
RUN apt-get -y upgrade
RUN apt-get -y install python3 python3-pip python3-venv ssh git iputils-ping
RUN [ "cross-build-end" ]
I can start this image perfectly on my machine and even run the build commands.
But when I go to CircleCI, my container either gets hung up in a queue after "Spin up Environment", or I frequently end up with this error message:
Unexpected preparation error: Error response from daemon: Container d366de1282a32a79bca5265a8a97f573c8949f2838be231abcd234e5694d8d0b is not running (the container ID is different every time)
This is my CircleCI configuration file:
---
version: 2
jobs:
  build:
    docker:
      - image: savannidgerinel/arm-python:latest
    working_directory: ~/repo
    steps:
      - run:
          name: test the image
          command: /bin/uname -a
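For reference, the local check that does succeed looks roughly like this (the tag comes from the config above and the command mirrors the test step; the exact invocation is illustrative):
# build the image from the Dockerfile above
docker build -t savannidgerinel/arm-python:latest .
# run the same command the CircleCI step runs; on x86 this goes through the bundled QEMU binary
docker run --rm savannidgerinel/arm-python:latest /bin/uname -a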
I am trying to run a GUI using PyQt5 in a Docker container. Everything works fine, but when I actually run the container using the docker-compose up command I get an error that says:
qt.qpa.xcb: could not connect to display
qt.qpa.plugin: Could not load the Qt platform plugin "xcb" in "" even though it was found.
This application failed to start because no Qt platform plugin could be initialized. Reinstalling the application may fix this problem.
Available platform plugins are: eglfs, linuxfb, minimal, minimalegl, offscreen, vnc, xcb.
Can someone help me fix this?
Note: I have tried these solutions and none of them worked for me:
Solution 1
Solution 2
Solution 3
This is my Dockerfile:
FROM ubuntu:latest
# Preparing work environment
ADD server.py .
ADD test.py .
RUN apt-get update
RUN apt-get -y upgrade
RUN apt-get -y install python3
RUN apt-get -y install python3-pip
# Install PyQt5
RUN apt-get -y install python3-pyqt5
This is the docker-compose part:
test:
  container_name: test
  image: image
  command: python3 test.py
  ports:
    - 4000:4000/tcp
  networks:
    pNetwork1:
      ipv4_address: 10.1.0.3
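For context on what the error means: inside the container there is no X server for Qt to connect to. A commonly suggested setup (not from the original post; the host paths and DISPLAY value below are assumptions about a Linux host running X11) forwards the host display into the container, roughly like this:
test:
  container_name: test
  image: image
  command: python3 test.py
  environment:
    - DISPLAY=${DISPLAY}              # assumes the host's X display, e.g. :0
  volumes:
    - /tmp/.X11-unix:/tmp/.X11-unix   # share the host X11 socket with the container
  ports:
    - 4000:4000/tcp
  networks:
    pNetwork1:
      ipv4_address: 10.1.0.3
The host may also need to allow local connections to X, e.g. by running xhost +local: before starting the container.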
I'm creating a gitlab-ci deployment stage that requires some libraries that don't exist in my image. In this example I'm adding ssh (in the real world, I want to add many more libs):
image: adoptopenjdk/maven-openjdk11
...
deploy:
  stage: deploy
  script:
    - which ssh || (apt-get update -y && apt-get install -y ssh)
    - chmod 600 ${SSH_PRIVATE_KEY}
...
Question: how can I tell the GitLab runner to cache the image that I'm building in the deploy stage, and reuse it for all future deployment runs? Because as written, the library installation takes place for each and every deployment, even if nothing changed between runs.
GitLab can only cache files/directories, but because of the way apt works, there is no easy way to tell it to cache installs you've done this way. You also cannot "cache" the image.
There are two options I see:
Option 1: Create or use a Docker image that already includes your dependencies.
FROM adoptopenjdk/maven-openjdk11
RUN apt update && apt install -y foo bar baz
Then build and push the image to Docker Hub, and change the image: in the yaml:
image: membersound/maven-openjdk11-with-deps:latest
OR simply choose an image that already has all the dependencies you want. There are many useful Docker images out there with common tools preinstalled. For example, octopusdeploy/worker-tools comes with many runtimes and tools installed (Java, Python, the AWS CLI, kubectl, and much more).
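If you go the custom-image route, the build and push is a one-time (or occasional) task outside the pipeline; it might look like this, reusing the image name from above (registry and tag are up to you):
# build from the Dockerfile above and push it somewhere the runner can pull from
docker build -t membersound/maven-openjdk11-with-deps:latest .
docker push membersound/maven-openjdk11-with-deps:latest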
Option 2: Attempt to cache the .deb packages and install from them (beware, this is ugly).
Commit a bash script like the following to a file such as install-deps.sh:
#!/usr/bin/env bash
PACKAGES="wget jq foo bar baz"
if [ ! -d "./.deb_packages" ]; then
  mkdir -p ./.deb_packages
  # download the packages into apt's cache without installing them
  apt update && apt --download-only install -y ${PACKAGES}
  cp /var/cache/apt/archives/*.deb ./.deb_packages
fi
# install from the cached .deb files
apt install -y ./.deb_packages/*.deb
This should cause the .deb files to be cached in the directory ./.deb_packages. You can then configure GitLab to cache that directory so the packages can be reused later.
my_job:
  before_script:
    - ./install-deps.sh
  script:
    - ...
  cache:
    paths:
      - ./.deb_packages
I'm trying to create one image for different architectures, namely for amd64 and arm64.
The Dockerfile I've created is identical for both.
When I build from this Dockerfile on my main machine, which is amd64, the resulting image runs on all other amd64 machines. However, when I attempt to run this image on arm64, I get exec errors.
The main culprit seems to be my use of Ubuntu as the base image (FROM ubuntu:latest), which somehow "knows" which architecture I'm building on. As a result, I end up with a different image depending on the architecture I'm building on.
This isn't an issue in itself. After all, I can build once on amd64 and again on arm64.
What I'd like to do is push one image per architecture and have the correct one pulled automatically on other machines, without having to maintain two sets of Dockerfiles. Another way to put it: I'd really like to know how the team at Ubuntu configures their images so that :latest pulls both the latest version and the correct architecture.
Any advice would be appreciated!
Edit: For reference, I'm using Docker 19.03.5. The Dockerfile looks like this:
FROM ubuntu:latest
COPY /requirements.txt /tmp/
RUN apt-get update && \
    apt-get install -y python3-pip python3-dev build-essential libssl-dev libffi-dev libxml2-dev libxslt1-dev && \
    apt-get install -y libtiff5-dev libjpeg8-dev zlib1g-dev
RUN cd /usr/local/bin && \
    ln -s /usr/bin/python3 python && \
    pip3 install --upgrade pip
RUN pip install lxml && \
    pip install -r /tmp/requirements.txt && \
    pip install gunicorn
I'd recommend using BuildKit with buildx, which is available in 19.03. First, you probably want some setup on a Linux host using qemu and binfmt_misc for cross compiling; without it, you would need a build node for each platform you want to build. For binfmt_misc to work inside a container, two details are important: first, you need the static user binaries, and second, the --fix-binary flag needs to be used when registering them with the kernel. The first comes down to the package name you install, e.g. on Debian the package is qemu-user-static. The second may require a version of the package from an unstable release. E.g. here are a few bug reports to get the change included:
https://bugs.debian.org/cgi-bin/bugreport.cgi?bug=868030
https://bugs.launchpad.net/ubuntu/+source/qemu/+bug/1815100
Once you've done this, you can verify the --fix-binary result by looking for the F flag in /proc/sys/fs/binfmt_misc/*.
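On a plain Linux host, one common way to register the handlers with --fix-binary is the tonistiigi/binfmt helper image (just one route, shown as an example); afterwards the F flag should be present:
# register qemu handlers for all supported architectures (requires privileges)
docker run --privileged --rm tonistiigi/binfmt --install all
# each registered handler should list an F in its flags line
grep -H 'flags:' /proc/sys/fs/binfmt_misc/qemu-*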
Next, you need to set up a buildx worker. That can be done with:
docker buildx create --driver docker-container --name local --use \
  unix:///var/run/docker.sock
docker buildx inspect --bootstrap local
You should see something like the following from the inspect; note the multiple platforms:
$ docker buildx inspect --bootstrap local
[+] Building 54.1s (1/1) FINISHED
=> [internal] booting buildkit 54.1s
=> => pulling image moby/buildkit:buildx-stable-1 45.4s
=> => creating container buildx_buildkit_local0 8.7s
Name: local
Driver: docker-container
Nodes:
Name: local0
Endpoint: unix:///var/run/docker.sock
Status: running
Platforms: linux/amd64, linux/arm64, linux/riscv64, linux/ppc64le, linux/s390x, linux/386, linux/arm/v7, linux/arm/v6
Now you can perform a build for multiple architectures. The $image_and_tag must point to an external registry where buildx can push the image. You cannot have a multi-arch image locally, because images in the local docker store must be a single platform, but a registry like Docker Hub does support multi-arch manifests:
docker buildx build --platform linux/amd64,linux/arm64 \
  --output type=registry -t $image_and_tag .
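To confirm that what you pushed is a multi-arch manifest (the same mechanism that lets ubuntu:latest resolve to the right architecture on each machine), you can inspect the tag; the output below is trimmed and illustrative:
$ docker buildx imagetools inspect $image_and_tag
Name:      docker.io/<user>/<image>:latest
MediaType: application/vnd.docker.distribution.manifest.list.v2+json
Manifests:
  Platform: linux/amd64
  ...
  Platform: linux/arm64
  ...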
And you can even test those other images using the qemu cross platform support:
docker container run --platform linux/arm64 $image_and_tag
Note that you may need to enable experimental CLI options in docker; I forget which features have not made it to GA yet. In ~/.docker/config.json, add:
{
  "auths": {
    ...
  },
  "experimental": "enabled"
}
Or you can export a variable (adding to your .bashrc to make it persistent):
export DOCKER_CLI_EXPERIMENTAL=enabled
Note: Docker Desktop has included settings for qemu/binfmt_misc for a while, so in that environment you can skip straight to the buildx steps. Buildx can also be run as a standalone tool; see the repo for more details: https://github.com/docker/buildx
I'm working on a Jetson TK1 deployment scheme where I use Docker to create the root filesystem, which then gets flashed onto the device.
The way this works is I create an armhf image from the NVIDIA-provided sample filesystem plus a qemu-arm-static binary, which I can then build upon using standard Docker tools.
I then have a "flasher" image which copies the contents of the filesystem, creates an ISO image, and flashes it onto my device.
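For reference, the base-image step described above is the usual qemu-static rootfs pattern; a minimal sketch (the tarball name and qemu binary location are assumptions, adjust to your layout) looks like:
# the NVIDIA sample filesystem repacked as a tarball next to this Dockerfile
FROM scratch
ADD sample_rootfs.tar.gz /
# statically linked qemu so the ARM binaries in the rootfs can run on the x86 build host
COPY qemu-arm-static /usr/bin/qemu-arm-static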
The problem I'm having is that I get inconsistent results between installing apt packages via a Docker RUN statement and entering a container interactively and installing the same packages there.
I.e.:
# docker build -t jetsontk1:base .
Dockerfile:
FROM jetsontk1:base1
RUN apt update
RUN apt install build-essential cmake
# or
RUN /bin/bash -c 'apt install build-essential cmake -y'
vs:
docker run -it jetsontk1:base1 /bin/bash
# apt update
# apt install build-essential cmake
When I install via the Dockerfile build, I get the following error:
Processing triggers for man-db (2.6.7.1-1) ...
terminate called after throwing an instance of 'std::length_error'
what(): basic_string::append
qemu: uncaught target signal 6 (Aborted) - core dumped
Aborted (core dumped)
The command '/bin/sh -c /bin/bash -c 'apt install build-essential cmake -y'' returned a non-zero code: 134
I have no issues when installing applications manually from inside the container, but there's no point in using Docker to manage this image-building process if I can't do apt installs. :/
The project can be found here: https://github.com/dtmoodie/tk1_builder
The current state, with the issue as I presented it, is at commit 8e22c0d5ba58e9fdab38e675eed417d73ae0aad9.
I'm building a custom Docker image for use in CI. It's based on Node Alpine and only includes git and the Node CLI used for publishing the app, expo-cli, yet the resulting image takes up 1.27 GB of space.
This is the CLI
https://github.com/expo/expo-cli
The corresponding folder in my node_modules is 16.4 MB
The Dockerfile:
FROM node:alpine
RUN set -xe \
    && apk add --no-cache git \
    && git --version && node -v && yarn -v
RUN yarn global add expo-cli@2.18.3
The build context itself is tiny:
docker build . -t no-unsafe
Sending build context to Docker daemon 4.096kB
Step 1/4 : FROM node:alpine
...
Using Dive to analyze the image, it reports that RUN yarn global add expo-cli@2.18.3 adds 1.1 GB to the image.
Inspecting that layer specifically shows which folders are the culprit.
I don't understand how this image ends up so huge.
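For anyone reproducing the analysis, Dive is simply pointed at the tag built above (the tag matches the build command shown earlier):
# walk the image layer by layer; the expo-cli layer is the one reported at ~1.1 GB
dive no-unsafe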