I'm trying to build a Docker image using GitLab CI for the linux/arm/v7 platform, but unfortunately I'm facing the following error:
[3/7] RUN apt-get update
ERROR: executor failed running [/dev/.buildkit_qemu_emulator /bin/sh -c apt-get update]: failed to copy xattrs: failed to set xattr "security.selinux" on /tmp/buildkit-qemu-emulator135475847/dev/.buildkit_qemu_emulator: operation not supported
------
> [3/7] RUN apt-get update:
------
failed to solve: rpc error: code = Unknown desc = executor failed running [/dev/.buildkit_qemu_emulator /bin/sh -c apt-get update]: failed to copy xattrs: failed to set xattr "security.selinux" on /tmp/buildkit-qemu-emulator135475847/dev/.buildkit_qemu_emulator: operation not supported
Cleaning up file based variables
ERROR: Job failed: exit code 1
My .gitlab-ci.yml looks like this:
image: jdrouet/docker-with-buildx:stable

variables:
  DOCKER_HOST: tcp://docker:2375/
  DOCKER_DRIVER: overlay2

services:
  - docker:dind

build:
  stage: build
  before_script:
    - docker info
    - docker login -u gitlab-ci-token -p $CI_JOB_TOKEN $CI_REGISTRY
  script:
    - docker buildx create --use
    - docker buildx build --push --platform linux/arm/v7 -t $CI_REGISTRY_IMAGE .
And my Dockerfile is the following:
ARG NODE_VERSION=lts-slim
FROM --platform=linux/arm/v7 node:${NODE_VERSION}
WORKDIR /home/node
RUN apt-get update
RUN apt-get install -y build-essential python
RUN npm install --global npm node-gyp
COPY . .
ARG NODE_ENV=production
ENV NODE_ENV ${NODE_ENV}
RUN npm ci
CMD ["npm", "start"]
Does anyone have any idea how I can solve this issue?
Disable SELinux on the host and retry your Docker build.
See https://www.tecmint.com/disable-selinux-in-centos-rhel-fedora/ for instructions.
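For example, on a CentOS/RHEL/Fedora host, this is a minimal sketch, assuming you have root access to the machine running the GitLab runner:

# Switch SELinux to permissive mode immediately (reverts on reboot)
sudo setenforce 0
# Verify the current mode
getenforce
# To disable it permanently, set SELINUX=disabled in /etc/selinux/config and reboot
sudo sed -i 's/^SELINUX=enforcing/SELINUX=disabled/' /etc/selinux/config
sudo reboot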
I have a React/Django app that's dockerized. There are two stages to the GitLab CI process: Build_Node and Build_Image. Build_Node just builds the React app and stores it as an artifact; Build_Image runs docker build to build the actual Docker image, and it depends on the Node step because it copies the built files into the image.
However, building the image takes a long time whenever the package dependencies (apt or pip) have changed, because everything has to be reinstalled.
Is there a way to split the Docker build job into multiple parts, so that I can, say, install the apt and pip packages in the Dockerfile while Build_Node is still running, and then finish the Docker build once that stage is done?
gitlab-ci.yml:
stages:
  - Build Node Frontend
  - Build Docker Image

services:
  - docker:18.03-dind

variables:
  DOCKER_DRIVER: overlay2
  DOCKER_HOST: tcp://localhost:2375
  DOCKER_TLS_CERTDIR: ""

build_node:
  stage: Build Node Frontend
  only:
    - staging
    - production
  image: node:14.8.0
  variables:
    GIT_SUBMODULE_STRATEGY: recursive
  artifacts:
    paths:
      - http
  cache:
    key: "node_modules"
    paths:
      - frontend/node_modules
  script:
    - cd frontend
    - yarn install --network-timeout 100000
    - CI=false yarn build
    - mv build ../http

build_image:
  stage: Build Docker Image
  only:
    - staging
    - production
  image: docker
  script:
    # - sleep 10000
    - tar -cvf app.tar api/ discordbot/ helpers/ http/
    - docker login -u $CI_REGISTRY_USER -p $CI_REGISTRY_PASSWORD $CI_REGISTRY
    # - docker pull $CI_REGISTRY_IMAGE:latest
    # - docker build --network=host --cache-from $CI_REGISTRY_IMAGE:latest --tag $CI_REGISTRY_IMAGE:$CI_COMMIT_SHORT_SHA --tag $CI_REGISTRY_IMAGE:latest .
    - docker build --network=host --tag $CI_REGISTRY_IMAGE:$CI_COMMIT_SHORT_SHA --tag $CI_REGISTRY_IMAGE:latest .
    - docker push $CI_REGISTRY_IMAGE:$CI_COMMIT_SHORT_SHA
    - docker push $CI_REGISTRY_IMAGE:latest
Dockerfile:
FROM python:3.7-slim
# Add user
ARG APP_USER=abc
RUN groupadd -r ${APP_USER} && useradd --no-log-init -r -g ${APP_USER} ${APP_USER}
WORKDIR /app
ENV PYTHONUNBUFFERED=1
EXPOSE 80
EXPOSE 8080
ADD requirements.txt /app/
RUN set -ex \
    && BUILD_DEPS=" \
       gcc \
    " \
    && RUN_DEPS=" \
       ffmpeg \
       postgresql-client \
       nginx \
       dumb-init \
    " \
    && apt-get update && apt-get install -y $BUILD_DEPS \
    && pip install --no-cache-dir --default-timeout=100000 -r /app/requirements.txt \
    && apt-get purge -y --auto-remove -o APT::AutoRemove::RecommendsImportant=false $BUILD_DEPS \
    && apt-get update && apt-get install -y --no-install-recommends $RUN_DEPS \
    && rm -rf /var/lib/apt/lists/*
# Set uWSGI settings
ENV UWSGI_WSGI_FILE=/app/api/api/wsgi.py UWSGI_HTTP=:8000 UWSGI_MASTER=1 UWSGI_HTTP_AUTO_CHUNKED=1 UWSGI_HTTP_KEEPALIVE=1 UWSGI_LAZY_APPS=1 UWSGI_WSGI_ENV_BEHAVIOR=holy PYTHONUNBUFFERED=1 UWSGI_WORKERS=2 UWSGI_THREADS=4
ENV UWSGI_STATIC_EXPIRES_URI="/static/.*\.[a-f0-9]{12,}\.(css|js|png|jpg|jpeg|gif|ico|woff|ttf|otf|svg|scss|map|txt) 315360000"
ENV PYTHONPATH=$PYTHONPATH:/app/api:/app
ENV DB_PORT=5432 DB_NAME=shittywizard DB_USER=shittywizard DB_HOST=localhost
ADD nginx.conf /etc/nginx/nginx.conf
# Set entrypoint
ADD entrypoint.sh /
RUN chmod 755 /entrypoint.sh
ENTRYPOINT ["dumb-init", "--", "/entrypoint.sh"]
ADD app.tar /app/
RUN python /app/api/manage.py collectstatic --noinput
Sure! Check out the GitLab docs on stages and on building Docker images with GitLab CI.
If you define multiple jobs within the same stage, they will run in parallel. For example, the following pipeline would build the Node and base-image artifacts in parallel and then build the final image using both artifacts.
stages:
  - build
  - bundle

build-node:
  stage: build
  script:
    - # steps to build node and push to artifact registry

build-base-image:
  stage: build
  script:
    - # steps to build image and push to artifact registry

bundle-node-in-image:
  stage: bundle
  script:
    - # pull image artifact
    - # download node artifact
    - # build image on top of base image with node artifacts embedded
Note that all the pushing, pulling, starting, and stopping might not save you time, depending on your image size relative to your build time, but this will do what you're asking for.
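As a minimal sketch of the bundle stage (the registry path and file names here are hypothetical; adjust them to your project), the final Dockerfile could simply start from the pre-built base image and copy the Node artifact in:

# Dockerfile.bundle (hypothetical name): builds on the base image pushed by build-base-image
FROM registry.gitlab.com/your-group/your-repo/base:latest
# http/ is the artifact produced by the build-node job
COPY http/ /app/http/

and the bundle job would then build and push the combined image:

bundle-node-in-image:
  stage: bundle
  image: docker
  script:
    - docker login -u gitlab-ci-token -p $CI_JOB_TOKEN $CI_REGISTRY
    - docker build -f Dockerfile.bundle -t $CI_REGISTRY_IMAGE:$CI_COMMIT_SHORT_SHA .
    - docker push $CI_REGISTRY_IMAGE:$CI_COMMIT_SHORT_SHA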
I created a Docker image that runs automated tests, and I run it from a GitLab CI script. Everything works except the report file: I cannot get the report file out of the container and into the job's artifacts. The docker cp command is not working. My GitLab script and Dockerfile are below.
GitLab script:
stages:
  - test

test:
  image: docker:latest
  services:
    - name: docker:dind
      entrypoint: ["env", "-u", "DOCKER_HOST"]
      command: ["dockerd-entrypoint.sh"]
  variables:
    DOCKER_HOST: tcp://docker:2375/
    DOCKER_DRIVER: overlay2
    # See https://github.com/docker-library/docker/pull/166
    DOCKER_TLS_CERTDIR: ""
  stage: test
  before_script:
    - docker login -u "xxx" -p "yyy" docker.io
  script:
    - docker run --name authContainer "rrr/image:0.0.1"
  after_script:
    - docker cp authContainer:/artifacts $CI_PROJECT_DIR/artifacts/
  artifacts:
    when: always
    paths:
      - $CI_PROJECT_DIR/artifacts/test-result.xml
    reports:
      junit:
        - $CI_PROJECT_DIR/artifacts/test-result.xml
Dockerfile:
FROM mcr.microsoft.com/dotnet/core/sdk:2.1
COPY /publish /Spinelle.AutomaticTests
WORKDIR /Spinelle.AutomaticTests
RUN apt-get update -y
RUN apt-get install -y unzip
RUN wget https://dl.google.com/linux/direct/google-chrome-stable_current_amd64.deb
RUN dpkg -i google-chrome-stable_current_amd64.deb; apt-get -fy install
RUN curl https://chromedriver.storage.googleapis.com/84.0.4147.30/chromedriver_linux64.zip -o /usr/local/bin/chromedriver
RUN unzip -o /usr/local/bin/chromedriver -d /Spinelle.AutomaticTests
RUN chmod 777 /Spinelle.AutomaticTests
CMD dotnet vstest /Parallel Spinelle.AutomaticTests.dll --TestAdapterPath:. --logger:"nunit;LogFilePath=/artifacts/test-result.xml;MethodFormat=Class;FailureBodyFormat=Verbose"
I'm trying to use a custom dependency cache in Bitbucket Pipelines for an image built from a Dockerfile.
This is my bitbucket-pipelines.yml:
hml:
  - step:
      caches:
        - node-cache
      name: Tests and build
      services:
        - docker
      volumes:
        - "$BITBUCKET_CLONE_DIR/node_modules:/root/node_modules"
        - "$BITBUCKET_CLONE_DIR:/code"
      script:
        # - apt update
        # - apt-get install -y curl
        # - curl -L "https://github.com/docker/compose/releases/download/1.25.0/docker-compose-$(uname -s)-$(uname -m)" -o /usr/local/bin/docker-compose
        # - chmod +x /usr/local/bin/docker-compose
        # - echo 'DIALOGFLOW_PROJECT_ID=a' > .env
        # - docker-compose up -d
        # - docker exec api npm run test
        - docker image inspect $(docker image ls -aq) --format {{.Size}} | awk '{totalSizeInBytes += $0} END {print totalSizeInBytes}'
        - echo $BITBUCKET_CLONE_DIR/node_modules
        - docker build -t cloudia/api .
        - docker save --output api.docker cloudia/api
      artifacts:
        - api.docker
  - step:
      name: Deploy
      services:
        - docker
      deployment: staging
      script:
        - apt-get update
        - apt-get install -y curl unzip python jq
        - curl "https://s3.amazonaws.com/aws-cli/awscli-bundle.zip" -o "awscli-bundle.zip"
        - mv awscli-bundle.zip /tmp/awscli-bundle.zip
        - unzip /tmp/awscli-bundle.zip -d /tmp
        - /tmp/awscli-bundle/install -i /usr/local/aws -b /usr/local/bin/aws
        - docker load --input ./api.docker
        - chmod +x ./deploy_hml.sh
        - ./deploy_hml.sh

definitions:
  caches:
    node-cache: node_modules
  services:
    docker:
      memory: 2048
And here's my Dockerfile:
FROM node:10.15.3
WORKDIR /code
# Using some comments for tests
COPY [ "package*.json", "/code/" ]
RUN npm install --silent
COPY . /code
RUN npm run build
EXPOSE 5000
CMD npm start
My pipeline runs without problems, but the cache is not working. This is the message I get when the pipeline runs:
Cache "node-cache": Downloading
Cache "node-cache": Not found
How can I set up the cache when the dependencies are installed by docker build from a Dockerfile?
I am a novice with Docker. I am trying to create a Docker image and run it as a container, so I did the following:
My Dockerfile is:
FROM ubuntu:16.04
# # Front stack
# RUN apt-get install -y npm && \
# npm install -g #angular/cli
FROM python:3.6
RUN apt-get update
RUN apt-get install -y libpython-dev curl build-essential unzip python-dev libaio-dev libaio1 vim \
    rpm2cpio cpio python-pip dos2unix
RUN mkdir /code
COPY ./requirements.txt /code/requirements.txt
RUN pip install -r /code/requirements.txt
RUN pip install --upgrade pip
COPY . /code/
WORKDIR /code
ENV PYTHONPATH=/code/py_lib
CMD ["bash", "-c", "tail -f /dev/null"]
My docker-compose file is:
version: '3.5'

services:
  testsample:
    image: toto/test-sample
    restart: unless-stopped
    env_file:
      - .env
    command: bash -c "pip3 install -r requirements.txt && tail -f /dev/null"
    # command: bash -c "tail -f /dev/null"
    volumes:
      - .:/code
I executed these commands:
docker build . -f Dockerfile
docker images
docker-compose up
This gave me an error:
Pulling testsample (toto/test-sample:)...
ERROR: The image for the service you're trying to recreate has been removed. If you continue, volume data could be lost. Consider backing up your data before continuing.
Continue with the new image? [yN]y
Pulling testsample (toto/test-sample:)...
ERROR: pull access denied for toto/test-sample, repository does not exist or may require 'docker login': denied: requested access to the resource is denied
I tried docker login and I am able to connect. So what could be causing this problem?
You have to provide a tag name when building a Docker image from a Dockerfile, like the following:
docker build -t toto/test-sample -f Dockerfile .
-t specifies the tag name
-f specifies the name of the Dockerfile (optional in this case, since Dockerfile is the default name)
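For instance, if your Dockerfile lived somewhere else under a non-default name (a hypothetical path, just to illustrate -f), you would point -f at it explicitly:

docker build -t toto/test-sample -f docker/Dockerfile.dev .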
If you put the Dockerfile in the same directory as your docker-compose.yml file, you can do the following:
version: '3.5'

services:
  testsample:
    image: toto/test-sample
    build:
      context: .
      dockerfile: Dockerfile
    restart: unless-stopped
    env_file:
      - .env
    volumes:
      - .:/code
Then, do:
docker-compose up --build -d
Otherwise, if you are simply having problems building the image, you just need to do:
docker build -t toto/test-sample .
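Either way, you can quickly confirm that the image exists locally under the expected name before starting Compose:

docker image ls toto/test-sample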
The build command should be:
docker build -t toto/test-sample .
Here's my .gitlab-ci.yml:
stages:
  - containerize
  - compile

build_image:
  image: docker
  stage: containerize
  script:
    - docker build -t compiler_image_v0 .

compile:
  image: compiler_image_v0
  stage: compile
  script:
    - make
  artifacts:
    when: on_success
    paths:
      - output/
    expire_in: 1 day
The build_image job runs correctly; the created image is listed when I run docker images on the machine hosting the runners. But the second job fails with this error:
ERROR: Job failed: Error response from daemon: pull access denied for compiler_image_v0, repository does not exist or may require 'docker login' (executor_docker.go:168:1s)
What's going on?
This is my Dockerfile:
FROM ubuntu:18.04
WORKDIR /app
# Ubuntu packages
RUN apt-get update
RUN DEBIAN_FRONTEND=noninteractive apt-get -y install apt-utils subversion g++ make cmake unzip
RUN DEBIAN_FRONTEND=noninteractive apt-get -y install libgtk2.*common libpango-1* libasound2* xserver-xorg
RUN DEBIAN_FRONTEND=noninteractive apt-get -y install cpio
RUN DEBIAN_FRONTEND=noninteractive apt-get -y install bash
RUN DEBIAN_FRONTEND=noninteractive apt-get -y install autoconf automake perl m4
# Intel Fortran compiler
RUN mkdir /intel
COPY parallel_studio_xe_2018_3_pro_for_docker.zip /intel
RUN cd /intel && unzip /intel/parallel_studio_xe_2018_3_pro_for_docker.zip
RUN cd /intel/parallel_studio_xe_2018_3_pro_for_docker && ./install.sh --silent=custom_silent.cfg
RUN rm -rf /intel
The compile stage tries to pull the image compiler_image_v0. This image exists only temporarily, inside the Docker-in-Docker container of the containerize stage. Your GitLab repository includes a container registry, so you can push the built image in the containerize stage and then pull it in the compile stage. Furthermore, you should use the fully qualified name of your private GitLab registry; otherwise Docker Hub is used by default.
You can change your .gitlab-ci.yml to log in to the registry, push the image, and reference it by its fully qualified name:
stages:
  - containerize
  - compile

build_image:
  image: docker
  stage: containerize
  script:
    - docker login -u gitlab-ci-token -p $CI_JOB_TOKEN $CI_REGISTRY
    - docker build -t registry.gitlab.com/group-name/repo-name:compiler_image_v0 .
    - docker push registry.gitlab.com/group-name/repo-name:compiler_image_v0

compile:
  image: registry.gitlab.com/group-name/repo-name:compiler_image_v0
  stage: compile
  script:
    - make
  artifacts:
    when: on_success
    paths:
      - output/
    expire_in: 1 day
This would overwrite the image on each build, but you could add some versioning, as sketched below.
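For example, a minimal sketch of such versioning, assuming the same placeholder registry path as above: tag each build with GitLab's predefined $CI_COMMIT_SHORT_SHA variable so that older images remain available.

build_image:
  image: docker
  stage: containerize
  script:
    - docker login -u gitlab-ci-token -p $CI_JOB_TOKEN $CI_REGISTRY
    # One tag per commit, so earlier images are not overwritten
    - docker build -t registry.gitlab.com/group-name/repo-name:compiler_image_$CI_COMMIT_SHORT_SHA .
    - docker push registry.gitlab.com/group-name/repo-name:compiler_image_$CI_COMMIT_SHORT_SHA

The compile job would then reference the same registry.gitlab.com/group-name/repo-name:compiler_image_$CI_COMMIT_SHORT_SHA tag in its image: key.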