docker-compose apt-get permission denied, but regular docker build can apt-get

I am running dockerd with userns-remap set to "default".
I have downloaded the docker-compose binary and chown'ed it to my user, which is in the docker group.
I have a Dockerfile based on ubuntu:focal.
I need docker-compose in order to pass some variables, mount drives, etc.
I am running SELinux in enforcing mode (mode 1).
I am running docker-compose and docker commands as a user that is in the "docker" group but not in the "wheel" group.
Issue:
when running:
docker-compose --verbose up --build
I get the error:
=> ERROR [2/9] RUN apt-get update -y 1.7s
[2/9] RUN apt-get update -y:
#0 1.615 Reading package lists...
#0 1.688 E: List directory /var/lib/apt/lists/partial is missing. - Acquire (13: Permission denied)
failed to solve: executor failed running [/bin/sh -c apt-get update -y]: exit code: 100
For some reason, when I run:
docker build .
the build works flawlessly.
I think this might be an issue with the host rather than the files, but here are the files anyway.
docker-compose.yaml
version: "3.9"
services:
  node:
    container_name: node
    build: .
    env_file:
      - settings.env
    volumes:
      - /opt/node:/home/node
    restart: unless-stopped
    user: root
    ports:
      - "10226:10226"
Dockerfile
FROM ubuntu:focal
LABEL maintainer="Me"
USER root
RUN apt-get update -y
Any idea what might be causing this? Any ideas on how I would go about debugging this?
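Since plain docker build succeeds while the compose build fails, one difference worth eliminating first is the builder backend: docker-compose v1 builds with its own (non-BuildKit) code path unless told otherwise. A minimal debugging sketch, assuming a BuildKit-capable Docker engine; the ausearch check applies only if SELinux auditing is enabled:

```shell
# Make compose v1 delegate builds to the docker CLI with BuildKit,
# i.e. the same path a plain `docker build .` uses:
export DOCKER_BUILDKIT=1
export COMPOSE_DOCKER_CLI_BUILD=1
# then retry the failing build:
#   docker-compose --verbose up --build
# if it still fails, look for SELinux denials around the build:
#   sudo ausearch -m avc -ts recent
```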

Error while creating mount source path when using docker-compose in Windows

I am trying to dockerize my React-Flask app by dockerizing each one of them and using docker-compose to put them together.
Here is what the Dockerfile for each app looks like:
React - Frontend
FROM node:latest
WORKDIR /frontend/
ENV PATH /frontend/node_modules/.bin:$PATH
COPY package.json /frontend/package.json
COPY . /frontend/
RUN npm install --silent
RUN npm install react-scripts@3.0.1 -g --silent
CMD ["npm", "run", "start"]
Flask - Backend
#Using ubuntu as our base
FROM ubuntu:latest
#Install commands in ubuntu, including pymongo for DB handling
RUN apt-get update -y
RUN apt-get install -y python-pip python-dev build-essential
RUN python -m pip install 'pymongo[srv]'
#Unsure of COPY command's purpose, but WORKDIR points to /backend
COPY . /backend
WORKDIR /backend/
RUN pip install -r requirements.txt
#Run order for starting up the backend
ENTRYPOINT ["python"]
CMD ["app.py"]
Each of them works fine when I just use docker build and docker run; I've checked that they work when they are built and run independently. However, when I run docker-compose up with a docker-compose.yml that looks like
# Docker Compose
version: '3.7'
services:
  frontend:
    container_name: frontend
    build:
      context: frontend
      dockerfile: Dockerfile
    ports:
      - "3000:3000"
    volumes:
      - '.:/frontend'
      - '/frontend/node_modules'
  backend:
    build: ./backend
    ports:
      - "5000:5000"
    volumes:
      - .:/code
Gives me the error below
Starting frontend ... error
Starting dashboard_backend_1 ... error
ERROR: for frontend Cannot start service frontend: error while creating mount source path '/host_mnt/c/Users/myid/Desktop/dashboard': mkdir /host_mnt/c: file exists
ERROR: for backend Cannot start service backend: error while creating mount source path '/host_mnt/c/Users/myid/Desktop/dashboard': mkdir /host_mnt/c: file exists
ERROR: Encountered errors while bringing up the project.
Did this happen because I am using Windows? What can be the issue? Thanks in advance.
For me the only thing that worked was restarting the Docker daemon.
Check if this is related to docker/for-win issue 1560
I had the same issue. I was able to resolve it by running:
docker volume rm -f [name of docker volume with error]
Then restarting docker, and running:
docker-compose up -d --build
I first tried these same steps without restarting Docker (only restarting my computer), and that didn't resolve the issue.
What resolved it for me was removing the volume with the error, restarting Docker, then building again.
Other cause:
On Windows this may be due to a user password change. Uncheck the box to stop sharing the drive and then allow Docker to detect that you are trying to mount the drive and share it.
Also mentioned:
I just ran docker-compose down and then docker-compose up. Worked for me.
I ran docker container prune and pressed y to remove all stopped containers, and the issue was gone.
I saw this after I deleted a folder I'd shared with docker and recreated one with the same name. I think this deleted the permissions. To resolve it I:
Unshared the folder in docker settings
Restarted docker
Ran docker container prune
Ran docker-compose build
Ran docker-compose up.
Restarting the docker daemon will work.
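The scattered fixes above can be distilled into one sequence. A sketch, wrapped in a function so nothing runs until it is called; the volume name is a placeholder (list yours with docker volume ls), and the daemon restart step is manual:

```shell
recover_mount_error() {
  docker volume rm -f "$1"        # remove the volume named in the error
  # restart the Docker daemon here (Docker Desktop: Restart from the tray menu)
  docker-compose up -d --build    # rebuild images and recreate the containers
}
# usage: recover_mount_error dashboard_data
```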

docker-compose execute command

I'm trying to put some commands in my docker-compose file to be run in my container, and they don't work.
I map a volume from host to container where I have a root certificate; all I want to do is run update-ca-certificates so that /etc/ssl/certs in the container is updated with my cert, but it is not happening.
I tried to solve this in a Dockerfile, and I can see that the command runs, but it seems the cert is not present at that point and only appears after I log in to the container.
What I end up doing is getting into the container and running the needed commands manually.
This is the piece of my docker-compose file that I have been trying to use:
build:
  context: .
  dockerfile: Dockerfile
command: >
  sh -c "ls -la /usr/local/share/ca-certificates &&
  update-ca-certificates"
security_opt:
  - seccomp:unconfined
volumes:
  - "c:/certs_for_docker:/usr/local/share/ca-certificates"
In the same way, I cannot run apt update or anything like it, but after connecting to the container with docker exec -it test_alerting_comp /bin/bash I can pull anything from any repo.
My goal is to execute any needed commands at build time, so that when I log in to the container the packages I will use are already installed and the root cert is updated. Thanks.
Why don't you do the package update/install and copy the certificates in the Dockerfile?
Dockerfile
...
RUN apt-get update && apt-get -y install whatever
COPY ./local/certificates /usr/local/share/my-certificates
RUN your-command-for-certificates
docker-compose.yml
version: "3.7"
services:
  your-service:
    build: ./dir
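The underlying distinction: command: in the compose file runs when the container starts, while RUN and COPY execute at build time, which is why baking the certificate into the image works. A sketch of verifying the result after a rebuild; the service name is the placeholder from the snippet above, and the grep target assumes a Debian/Ubuntu-style combined CA bundle:

```shell
verify_certs() {
  docker-compose build your-service
  # count certificates in the container's bundle after update-ca-certificates:
  docker-compose run --rm your-service \
    grep -c "BEGIN CERTIFICATE" /etc/ssl/certs/ca-certificates.crt
}
# usage: verify_certs
```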

Docker error: Service 'app' failed to build

I am new to Docker, currently following a book to learn Django.
Is it necessary to be in a virtual environment when running the command below?
I have gone through basic Docker videos, which say it saves each app as an image. But where are these images saved?
Does the line WORKDIR /usr/src/app refer to the current PC's root directory or to the Docker image's?
ADD is placed before RUN in the Dockerfile.
$ sudo docker-compose build
But I got this error:
ERROR: Service 'app' failed to build: ADD failed: stat /var/lib/docker/tmp/docker-builder912263941/config/requirements.txt: no such file or directory
Dockerfile
FROM python:3
RUN apt-get update \
&& apt-get install -y --no-install-recommends \
mysql-client default-libmysqlclient-dev
WORKDIR /usr/src/app
ADD config/requirements.txt ./
RUN pip3 install --upgrade pip; \
pip3 install -r requirements.txt
RUN django-admin startproject myproject .;\
mv ./myproject ./origproject
docker-compose.yml
version: '2'
services:
  db:
    image: 'mysql:5.7'
  app:
    build: .
    command: python3 manage.py runserver 0.0.0.0:8000
    volumes:
      - './project:/usr/src/app/myproject'
      - './media:/usr/src/app/media'
      - './static:/usr/src/app/static'
      - './templates:/usr/src/app/templates'
      - './apps/external:/usr/src/app/external'
      - './apps/myapp1:/usr/src/app/myapp1'
      - './apps/myapp2:/usr/src/app/myapp2'
    ports:
      - '8000:8000'
    links:
      - db
requirements.txt
Pillow~=5.2.0
mysqlclient~=1.3.0
Django~=2.1.0
Is it necessary to be in virtual environment when running the below
command?
No, the docker build environment is isolated from the host. Any virtualenv on the host is ignored in the build context and in the resulting image.
I have gone through docker basic videos which says it saves each apps
as images. But where these images are saved?.
The images are stored somewhere under /var/lib/docker, but they aren't meant to be browsed manually. You can send images somewhere with docker push <image:tag> or save them to a file with docker save <image:tag> -o <image>.tar
Does this line make the current pc root directory or dockers Image ' WORKDIR > /usr/src/app'
That line changes the current working directory in the image.
ERROR: Service 'app' failed to build: ADD failed: stat /var/lib/docker/tmp/docker-builder912263941/config/requirements.txt: no such file or directory
This error means that config/requirements.txt does not exist in the directory where the build is run. Adjust the path in the Dockerfile accordingly.
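In other words, ADD/COPY sources are resolved relative to the build context (the directory given to docker build, or the build: key in compose), not relative to the Dockerfile or to wherever you run compose from. A sketch of the layout the Dockerfile above expects, assuming build: . as in the compose file:

```dockerfile
# Expected context layout:
#   ./Dockerfile
#   ./config/requirements.txt   <-- must exist inside the context,
#                                   or ADD fails with "no such file or directory"
ADD config/requirements.txt ./
```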

Docker compose: run command variable substitution doesn't work

Problem
Substitution doesn't work for the build phase
Files
docker-compose.yml (only kibana part):
kibana:
  build:
    context: services/kibana
    args:
      KIBANA_VERSION: "${KIBANA_VERSION}"
  entrypoint: >
    /scripts/wait-for-it.sh elasticsearch:9200
    -s --timeout=${ELASTICSEARCH_INIT_TIMEOUT}
    -- /usr/local/bin/kibana-docker
  environment:
    ELASTICSEARCH_URL: http://elasticsearch:9200
  volumes:
    - ./scripts/wait-for-it.sh:/scripts/wait-for-it.sh
  ports:
    - "${KIBANA_HTTP_PORT}:5601"
  links:
    - elasticsearch
  depends_on:
    - elasticsearch
  networks:
    - frontend
    - backend
  restart: always
Dockerfile for the services/kibana:
ARG KIBANA_VERSION=6.2.3
FROM docker.elastic.co/kibana/kibana:${KIBANA_VERSION}
USER root
RUN yum install -y which && yum clean all
USER kibana
COPY kibana.yml /usr/share/kibana/config/kibana.yml
RUN ./bin/kibana-plugin install https://github.com/sivasamyk/logtrail/releases/download/v0.1.27/logtrail-${KIBANA_VERSION}-0.1.27.zip
COPY logtrail.json /usr/share/kibana/plugins/logtrail/logtrail.json
EXPOSE 5601
Env file (only kibana part):
KIBANA_VERSION=6.2.3
KIBANA_HTTP_PORT=5601
KIBANA_ELASTICSEARCH_HOST=elasticsearch
KIBANA_ELASTICSEARCH_PORT=9200
Actual output (Problem is here: substitution doesn't work)
#docker-compose up --force-recreate --build kibana
.........
Step 8/10 : RUN ./bin/kibana-plugin install https://github.com/sivasamyk/logtrail/releases/download/v0.1.27/logtrail-${KIBANA_VERSION}-0.1.27.zip
---> Running in d28b1dcb6348
Attempting to transfer from https://github.com/sivasamyk/logtrail/releases/download/v0.1.27/logtrail--0.1.27.zip
Attempting to transfer from https://artifacts.elastic.co/downloads/kibana-plugins/https://github.com/sivasamyk/logtrail/releases/download/v0.1.27/logtrail--0.1.27.zip/https://github.com/sivasamyk/logtrail/releases/download/v0.1.27/logtrail--0.1.27.zip-6.2.3.zip
Plugin installation was unsuccessful due to error "No valid url specified."
ERROR: Service 'kibana' failed to build: The command '/bin/sh -c ./bin/kibana-plugin install https://github.com/sivasamyk/logtrail/releases/download/v0.1.27/logtrail-${KIBANA_VERSION}-0.1.27.zip' returned a non-zero code: 70
Expected output (something similar):
Step 8/10 : RUN ./bin/kibana-plugin install https://github.com/sivasamyk/logtrail/releases/download/v0.1.27/logtrail-${KIBANA_VERSION}-0.1.27.zip
---> Running in d28b1dcb6348
Attempting to transfer from https://github.com/sivasamyk/logtrail/releases/download/v0.1.27/logtrail-6.2.3-0.1.27.zip
I found the answer five minutes after posting this question ... damn.
The solution is simple but works: redeclare the ARG inside the build stage (after FROM), because an ARG defined before FROM goes out of scope once the stage starts. See:
ARG KIBANA_VERSION=6.2.3
FROM docker.elastic.co/kibana/kibana:${KIBANA_VERSION}
USER root
RUN yum install -y which && yum clean all
USER kibana
ARG KIBANA_VERSION=${KIBANA_VERSION}
COPY kibana.yml /usr/share/kibana/config/kibana.yml
RUN ./bin/kibana-plugin install https://github.com/sivasamyk/logtrail/releases/download/v0.1.27/logtrail-${KIBANA_VERSION}-0.1.27.zip
COPY logtrail.json /usr/share/kibana/plugins/logtrail/logtrail.json
EXPOSE 5601
The solution is the following line, redeclared after FROM:
ARG KIBANA_VERSION=${KIBANA_VERSION}
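The reason this works is ARG scoping, not the USER change: an ARG declared before the first FROM is in scope only for the FROM line itself, and every build stage starts with an empty ARG scope, so the stage must redeclare any build arg it uses. A minimal sketch:

```dockerfile
ARG KIBANA_VERSION=6.2.3                  # in scope only for the FROM below
FROM docker.elastic.co/kibana/kibana:${KIBANA_VERSION}
ARG KIBANA_VERSION                        # redeclare to use it inside the stage
RUN echo "building against Kibana ${KIBANA_VERSION}"
```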

Docker-compose build misses some package content in container

I'm working on building containers running a monitoring application (Centreon).
When I build my container manually (with docker run) and when I build from my Dockerfile, I get different results: some scripts used by the application are missing.
Here's my Dockerfile:
FROM centos:centos7
LABEL Author="AurelienH."
LABEL Description="DOCKERFILE : Creates a Docker Container for a Centreon poller"
#Update and install requirements
RUN yum update -y
RUN yum install -y wget nano httpd git
#Install Centreon repo
RUN yum install -y --nogpgcheck http://yum.centreon.com/standard/3.4/el7/stable/noarch/RPMS/centreon-release-3.4-4.el7.centos.noarch.rpm
#Install Centreon
RUN yum install -y centreon-base-config-centreon-engine centreon centreon-pp-manager centreon-clapi
RUN yum install -y centreon-widget*
RUN yum clean all
#PHP Time Zone
RUN echo -n "date.timezone = Europe/Paris" > /etc/php.d/php-timezone.ini
#Supervisor
RUN yum install -y python-setuptools
RUN easy_install supervisor
COPY /cfg/supervisord.conf /etc/
RUN yum clean all
EXPOSE 22 80 5667 5669
CMD ["/usr/bin/supervisord", "--configuration=/etc/supervisord.conf"]
The difference I see is in the /usr/lib/nagios/plugins folder: some scripts are missing there. When I execute the exact same commands in a container I'm running interactively, I can find the files.
Maybe it has something to do with write permissions for the user that executes the commands with docker-compose?
EDIT :
docker-compose file :
version: "3"
services:
  centreon:
    build: ./centreon
    depends_on:
      - mariadb
    container_name: sinelis-central
    volumes:
      - ./central-broker-config:/etc/centreon-broker
      - ./central-centreon-plugins:/usr/lib/centreon/plugins
      - ./central-engine-config:/etc/centreon-engine
      - ./central-logs-broker:/var/log/centreon-broker
      - ./central-logs-engine:/var/log/centreon-engine
      - ./central-nagios-plugins:/usr/lib/nagios/plugins
      - ./central-ssh-key:/home/centreon/.ssh/
    ports:
      - "80:80"
      - "5667:5667"
      - "5669:5669"
      - "22:22"
    deploy:
      restart_policy:
        window: 300s
    links:
      - mariadb
  mariadb:
    image: mariadb
    container_name: sinelis-mariadb
    environment:
      MYSQL_ROOT_PASSWORD: passwd2017
    deploy:
      restart_policy:
        window: 300s
To run the container I use the docker run -it centos:centos7 command
It doesn't matter what you put in your image at that location; you will always see the contents of your volume mount:
- ./central-nagios-plugins:/usr/lib/nagios/plugins
Docker does not initialize host volumes to the contents of the image, and once you have data in the volume, Docker does no initialization with any volume type.
Keep in mind the build happens on an image without any of the other configuration in the compose file applied; no volumes are mounted for you to update. Then when you run your container, you overlay the directories of the image with the volumes you selected. Build time and run time are two separate phases.
Edit: to have a named volume point to a host directory, you can define a bind-mount volume. This will not create the directory if it does not already exist (the attempt to mount the volume will fail and the container will not start). But if the directory is empty, it will be initialized to the contents of your image:
version: "3"
volumes:
  central-nagios-plugins:
    driver: local
    driver_opts:
      type: none
      o: bind
      device: /usr/lib/nagios/plugins
services:
  centreon:
    ....
    volumes:
      ...
      - central-nagios-plugins:/usr/lib/nagios/plugins
    ...
It will be up to you to empty the contents of this volume when you want it to be reinitialized with the contents of your image, and merging multiple versions of this directory would also be a process you'd need to create yourself.
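With the named-volume/bind setup above, a sketch of forcing re-initialization from the image; the path and service name are taken from the compose files above. Destructive, so it is wrapped in a function and nothing runs until it is called:

```shell
reinit_nagios_plugins() {
  docker-compose stop centreon
  sudo rm -rf /usr/lib/nagios/plugins/*          # empty the host directory
  docker-compose up -d --force-recreate centreon # repopulate from the image
}
# usage: reinit_nagios_plugins
```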
