I am using an AWS EC2 instance and have installed Docker and docker-compose on Amazon Linux.
Now I have a docker-compose.yml file which runs the command mkdir -p /workspace/.m2/repositories. This command requires sudo, otherwise it gives a permissions error.
I tried adding sudo inside docker-compose but it gave me an error saying
sudo: command not found
I can run this command manually and comment it out in the docker-compose.yml file, but I would like to know: is there any way to run this command from inside the docker-compose.yml file?
I may have a solution for you. I think you can extend the strongbox image in a custom Dockerfile to solve this issue.
Create a new Dockerfile, like this one:
Dockerfile
FROM strongboxci/alpine:jdk8-mvn-3.5
USER root
RUN mkdir -p /workspace/.m2/repositories
RUN chown jenkins:jenkins /workspace/.m2/repositories
USER jenkins
Then build the image with something like this:
docker build -t mystrongbox:01 .
And finally update the docker-compose.yml file to this:
docker-compose.yml
version: '2'
services:
  strongbox-from-web-core:
    image: mystrongbox:01
    command:
      - /bin/bash
      - -c
      - |
        echo ""
        echo "[NOTICE] This will take at least 2 to 5 minutes to start depending on your machine and connection!"
        echo ""
        echo " Open http://localhost:48080/storages to browse the repository contents."
        echo ""
        sleep 5
        mkdir -p /workspace/.m2/repositories
        mvn clean install -DskipTests -Dmaven.repo.local=/workspace/.m2/repositories
        cd strongbox-web-core
        mvn spring-boot:run -Dmaven.repo.local=/workspace/.m2/repositories
    ports:
      - 48080:48080
    volumes:
      - ./:/workspace
    working_dir: /workspace
Finally try again with:
docker-compose up
Then the directory will already exist in the image, with ownership set to the jenkins user.
I'm one of the developers at strongbox/strongbox. We're thrilled that someone is trying out our Docker images for development :)
Now this command requires sudo, otherwise it gives permissions error.
What you are experiencing is likely a permission issue. Our Docker images run as user.group = 1000.1000 (which is usually the first user on many distributions). I suspect that your UID/GID is different, which you can check with id -u and id -g. If it's something other than 1000.1000, you would need to do a workaround:
Create a user & group with IDs 1000.1000:
groupadd -g 1000 jenkins
useradd -u 1000 -g 1000 -s /bin/bash -m jenkins
Chown/chmod the cloned strongbox project like this:
chown -R 1000:1000 /path/to/strongbox-project
chmod -R 775 /path/to/strongbox-project
Try docker-compose up again
This image does not have sudo installed, so you wouldn't be able to execute it. However, you shouldn't need it either, because /workspace is mounted from your filesystem (this is the strongbox project) and the container will write /workspace/.m2/repositories into the volume.
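The UID/GID check described above can be sketched as a small shell snippet (the 1000.1000 values are the ones the strongbox images are said to run as):

```shell
#!/bin/sh
# Sketch: warn if the current user does not match the UID/GID (1000.1000)
# that the strongbox images run as, per the answer above.
uid=$(id -u)
gid=$(id -g)
if [ "$uid" -eq 1000 ] && [ "$gid" -eq 1000 ]; then
    echo "UID/GID match 1000.1000 - no workaround needed"
else
    echo "UID/GID is $uid.$gid - apply the chown/chmod workaround"
fi
```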
Related
I want to run a specific docker-compose file without entering the sudo password, and without adding the user who runs the command to the docker group, for security reasons.
I thought about using NOPASSWD in the sudoers file and running a bash script called "bash-dockercompose-up.sh" that simply runs docker-compose up -d.
However, it needs sudo before docker-compose up -d in order to connect to the Docker host.
This is my /etc/sudoers file:
exampleuser ALL=(root) NOPASSWD:/usr/bin/bash-dockercompose-up.sh
OK, I was able to run it using the official Python SDK library:
https://docs.docker.com/engine/api/sdk/
I created a python script called "service-up.py"
service-up.py
import docker

# Connect to the Docker daemon using the environment's configuration
client = docker.from_env()

# Look up the container and start it
container = client.containers.get('id or name here')
container.start()
Then compile it into a binary and set the setuid bit on it, so a non-root user can run it:
pyinstaller service-up.py
Move into the dist folder where the file is located and run:
chown root:root service-up
chmod 4755 service-up
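As a sanity check: the leading 4 in mode 4755 is the setuid bit, which is what lets a non-root user run the binary with root's privileges. (Linux ignores the setuid bit on interpreted scripts, which is why the script is compiled to a binary first.) A minimal illustration, using a scratch file in place of the real binary:

```shell
#!/bin/sh
# Demonstrate what mode 4755 means: the leading 4 sets the setuid bit,
# so the file executes with its owner's privileges (root, after the chown).
touch demo-binary
chmod 4755 demo-binary
stat -c '%a' demo-binary   # prints 4755
rm -f demo-binary
```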
I have a docker-compose file with a service called 'app'. When I try to run it, I don't see the service with docker ps, but I do with docker ps -a.
I looked at the logs:
docker logs my_app_1
python: can't open file '//apps/index.py': [Errno 2] No such file or directory
In order to debug, I wanted to be able to see the home directory and the files and directories contained there when the app attempts to run.
Is there a command I can add to docker-compose that would show me the pwd and ls -l of the container when it attempts to run index.py?
My Dockerfile:
FROM python:3
COPY . .
RUN pip install -r requirements.txt
CMD ["python", "apps/index.py"]
My docker-compose.yaml:
version: '3.1'
services:
  app:
    build:
      context: ./app
      dockerfile: ./Dockerfile
    depends_on:
      - db
    ports:
      - 8050:8050
My directory structure:
my_app:
* docker-compose.yaml
* app
  * Dockerfile
  * apps
    * index.py
You can add a RUN statement to the application's Dockerfile to run these commands.
Example:
FROM python:3
COPY . .
RUN pip install -r requirements.txt
# Run your commands
RUN pwd && ls -l
CMD ["python", "apps/index.py"]
Then you can check the logs of the build process and view the results.
I hope this answer helps you.
If you're just trying to debug an image you've already built, you can docker-compose run an alternate command:
docker-compose run app \
  ls -l ./apps
You don't need to modify anything in your Dockerfile to be able to do this (assuming it uses CMD correctly; see below).
If you need to do more intensive debugging, you can docker-compose run apps sh (or, if your image has it, bash) to get an interactive shell. The container will include any mounted volumes and be on the same Docker network as the named container, but won't have published ports.
Note that the command here replaces the CMD in the Dockerfile. If your image uses ENTRYPOINT for its main command, or if it has a complete command split between ENTRYPOINT and CMD (especially, if you have ENTRYPOINT ["python"]), these need to be combined into a single CMD for this to work. If your ENTRYPOINT is a wrapper script that does some first-time setup and then runs the CMD, this approach will work fine; the debugging ls or sh will run after the first-time setup happens.
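To make the ENTRYPOINT/CMD point concrete, a sketch of the combined form (assuming the python image and paths from the question) would be:

```dockerfile
# Before (overriding the command only replaces "apps/index.py"):
#   ENTRYPOINT ["python"]
#   CMD ["apps/index.py"]
# After: everything in a single CMD, so a command passed to
# `docker-compose run app ...` replaces the whole invocation.
FROM python:3
COPY . .
RUN pip install -r requirements.txt
CMD ["python", "apps/index.py"]
```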
There is a docker-compose setup that uses an image built from a base Dockerfile for the application.
The Dockerfile looks similar to the one below. Some lines are omitted for brevity.
FROM ubuntu:18.04
RUN set -e -x ;\
apt-get -y update ;\
apt-get -y upgrade ;
...
USER service
When using this image in docker-compose and adding a named volume to the service, the folder in the named volume is not accessible, failing with the message Permission denied. The relevant part of the docker-compose file is below.
version: "3.1"
services:
  myapp:
    image: myappimage
    command:
      - /myapp
    ports:
      - 12345:1234
    volumes:
      - logs-folder:/var/log/myapp
volumes:
  logs-folder:
My assumption was that the USER service line is the issue, which I confirmed by setting user: root on the myapp service.
Now, my question: I would like to avoid manually creating the volume and setting permissions. I would like this to be automated using docker-compose.
Is this possible, and if yes, how can it be done?
Yes, there is a trick. Not really in the docker-compose file, but in the Dockerfile. You need to create the /var/log/myapp folder and set its permissions before switching to the service user:
FROM ubuntu:18.04
RUN useradd myservice
RUN mkdir /var/log/myapp
RUN chown myservice:myservice /var/log/myapp
...
USER myservice:myservice
Docker-compose will preserve permissions.
See Docker Compose mounts named volumes as 'root' exclusively
I had a similar issue, but mine was related to a file shared via a volume with a service I was not building from a Dockerfile, but pulling. I had shared a shell script that I used in docker-compose, but when I executed it, it did not have permission.
I resolved it by using chmod in the command of docker-compose:
command: -c "chmod a+x ./app/wait-for-it.sh && ./app/wait-for-it.sh -t 150 -h ..."
volumes:
  - ./wait-for-it.sh:/app/wait-for-it.sh
You can change the volume source's permissions to avoid the Permission denied error:
chmod a+x logs-folder
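For directories, the execute bit is what allows a user to traverse into them, which is what a+x adds. A minimal illustration, using a scratch directory in place of the real logs-folder:

```shell
#!/bin/sh
# Illustrate the effect of `chmod a+x` on a directory: the execute bit
# is what allows entering (traversing) the directory.
mkdir -p logs-folder
chmod 744 logs-folder          # owner can enter; group/others cannot
chmod a+x logs-folder          # now everyone can traverse it
stat -c '%a' logs-folder       # prints 755
rmdir logs-folder
```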
I have a simple Dockerfile
FROM python:3.8-slim-buster
RUN apt-get update && apt-get install -y \
    curl \
    gcc \
    make \
    python3-psycopg2 \
    postgresql-client \
    libpq-dev
RUN mkdir -p /var/www/myapp
WORKDIR /var/www/myapp
COPY . /var/www/myapp
RUN chmod 700 ./scripts/*.sh
And an associated docker-compose file
version: "3"
volumes:
  postgresdata:
services:
  myapp:
    image: ralston3/myapp_api:prod-latest
    tty: true
    command: /bin/bash -c "/var/www/myapp/scripts/myscript.sh && echo 'hello world'"
    ports:
      - 8000:8000
    volumes:
      - .:/var/www/myapp
    environment:
      - SOME_ENV_VARS=SOME_VARIABLE
      # ... more here
    depends_on:
      - redis
      - postgresql
      # ... other docker services defined below
When I run docker-compose via:
docker-compose -f /path/to/docker-compose.yml up
My myapp container/service fails with myapp_myapp_1 exited with code 127, along with the error myapp_1 | /bin/sh: 1: /var/www/myapp/scripts/myscript.sh: not found.
Further, if I exec into the myapp container via docker exec -it {CONTAINER_ID} /bin/bash, I can clearly see that all of my files are there. I can literally run /var/www/myapp/scripts/myscript.sh and it works fine.
However, there seems to be some issue with docker-compose (which could totally be my mistake). I'm just confused as to how I can exec into the container and clearly see the files there, yet docker-compose exits with 127 saying "No such file or directory".
You are bind-mounting the current directory into /var/www/myapp, so it may be that your local directory is hiding/overwriting the container directory. Try removing the volumes declaration for your myapp service; if that works, then you know the bind mount is causing the issue.
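A quick way to test this (a sketch against the compose service from the question) is to comment out the bind mount and re-run docker-compose:

```yaml
services:
  myapp:
    image: ralston3/myapp_api:prod-latest
    tty: true
    command: /bin/bash -c "/var/www/myapp/scripts/myscript.sh && echo 'hello world'"
    # Temporarily disabled, to check whether the bind mount is
    # shadowing the files that were copied into the image:
    # volumes:
    #   - .:/var/www/myapp
```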
Unrelated to your question, but a problem you will also encounter: you're installing Python a second time, on top of the version pre-installed in the python Docker image.
Either switch to debian:buster as the base image, or don't bother installing anything with apt-get and instead just pip install your dependencies, like psycopg2.
See https://pythonspeed.com/articles/official-python-docker-image/ for explanation why you don't need to do this.
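A sketch of the slimmed-down Dockerfile along those lines (assuming psycopg2 is the only reason for the apt-get packages; psycopg2-binary ships prebuilt wheels, so gcc and libpq-dev become unnecessary):

```dockerfile
FROM python:3.8-slim-buster

WORKDIR /var/www/myapp

# psycopg2-binary bundles libpq, so no compiler or -dev packages are needed
COPY requirements.txt .
RUN pip install --no-cache-dir psycopg2-binary -r requirements.txt

COPY . .
RUN chmod 700 ./scripts/*.sh
```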
In my case there were two stages: builder and runner.
I was building an executable in builder and running that executable using the alpine image in runner.
My mistake was that I didn't use the alpine version for the builder. For example, I used golang:1.20, but when I used golang:1.20-alpine the problem went away.
Make sure you use the correct version and tag!
So I have used the default Docker image for TestCafe, which on Docker Hub is testcafe/testcafe, and I have to run a few TestCafe scripts.
However, I need the screenshot that fires on error to be uploaded somewhere I can look at it later, after the container is done running.
I am using the Imgur script, which uses bash, so I reworked a few things to make it sh-compatible, and everything works except that I need curl. I tried running
apk add curl
but I'm getting the error
ERROR: Unable to lock database: Permission denied ERROR: Failed to open apk database:
Now I know this means that I do not have permission to do this, but can I get around it? Is there some way to become root (this is in a Bitbucket pipeline)?
I do NOT really want to create my own Docker image.
Also note that all the questions I have found relating to this are about installing while creating the image; my question is how to do this after the image is created. Thanks. (A fine answer would be another way to save the screenshot, but preferably not with ssh.)
For those seeing this error using a Dockerfile (and coming here via a Google search): add the following line to your Dockerfile:
USER root
Hopefully this will help anyone who is not interested in creating a new container.
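For this particular case, a minimal sketch (assuming the testcafe/testcafe image from the question, which is Alpine-based given the apk error) would look like:

```dockerfile
FROM testcafe/testcafe
# Switch to root so apk can write its database
USER root
RUN apk add --no-cache curl
# Optionally switch back to the image's original non-root user here
```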
If you are trying to enter your Docker container like so:
docker exec -it <containername> /bin/sh
Instead, try this:
docker exec -it --user=root <containername> /bin/sh
docker exec -it --user=root {containername} bash
With this I was able to execute apk update.
The best fix is to place USER <youruser> AFTER the lines where your docker build is failing. In most cases it is safe to add the USER line directly above the command or entrypoint.
For example:
FROM python:3.8.0-alpine
RUN addgroup -S app && adduser -S -G app app
RUN apk add --no-cache libmaxminddb postgresql-dev gcc musl-dev
ADD . .
USER app
ENTRYPOINT ["scripts/entrypoint.sh"]
CMD ["scripts/gunicorn.sh"]
For those seeing this error when running through a Jenkins pipeline script (and coming here via a Google search), use the following when starting your Docker image:
node('docker') {
    docker.image('golang:1.14rc1-alpine3.11').inside(' -u 0') {
        sh 'apk add curl'
        ...
    }
}
For a Docker container it is easy:
docker exec -it --user root container-name sh
For Kubernetes pods, it is a bit more complicated. If your image is built with a non-root user and you also cannot run pods as root inside your cluster, you need to install the packages with this method:
1. Identify the user which the pod is using
2. Create a new Dockerfile
3. Configure it as such:
FROM pod-image-name:pod-image-tag
USER root
RUN apk update && apk add curl
USER the-original-pod-user
Then build it
docker build -t pod-image-name:pod-image-tag-with-curl .
And change the image of your deployment/pod inside the cluster from pod-image-name:pod-image-tag to pod-image-name:pod-image-tag-with-curl
I resolved the same problem by executing the docker build -t command as the root user:
# docker build -t $DOCKER_IMAGE .