Edit apache configuration in docker

First-time Docker user here. I'm using this image: https://github.com/dgraziotin/osx-docker-lamp
I want to make Apache in that container use a configuration file from the host system. How do I do that?
I know I can use nsenter, but I think my changes will get deleted when the container is turned off.
Thank you

The best solution is to use a volume.
docker pull dgraziotin/lamp
You need to copy /etc/apache2/ from the container to a directory on the host. Then you can do this:
cd ~
mkdir conf
docker run -i -t --rm -v ~/conf:/tmp/conf dgraziotin/lamp:latest /bin/bash
Inside the container, run:
ls /tmp/conf
cd /etc/apache2/
tar -cf /tmp/conf/apache-conf.tar *
exit
On the host:
cd conf
tar -xf apache-conf.tar
cd ..
# alter your configuration in this file and save
vi conf/apache2.conf
# run your container : daemon mode
docker run -d -p 9180:80 --name web-01 -v ~/conf:/etc/apache2 dgraziotin/lamp:latest
docker ps
To list the configuration directory's contents inside the container, use:
docker exec web-01 ls -lAt /etc/apache2/
total 72
-rw-r--r-- 1 root root 1779 Jul 17 20:24 envvars
drwxr-xr-x 2 root root 4096 Apr 10 11:46 mods-enabled
drwxr-xr-x 2 root root 4096 Apr 10 11:45 sites-available
-rw-r--r-- 1 root root 7136 Apr 10 11:45 apache2.conf
drwxr-xr-x 2 root root 4096 Apr 10 11:45 mods-available
drwxr-xr-x 2 root root 4096 Apr 10 11:44 conf-enabled
drwxr-xr-x 2 root root 4096 Apr 10 11:44 sites-enabled
drwxr-xr-x 2 root root 4096 Apr 10 11:44 conf-available
-rw-r--r-- 1 root root 320 Jan 7 2014 ports.conf
-rw-r--r-- 1 root root 31063 Jan 3 2014 magic
Use docker exec web-01 cat /etc/apache2/apache2.conf to view a file's contents inside the container.
Open the web page (http://localhost:9180 with the port mapping above) to test your environment.
I hope this helps you.

You should use a Dockerfile to generate a new image containing your desired configuration. For example:
FROM dgraziotin/lamp
COPY my-config-file /some/configuration/file
This assumes that there is a file my-config-file located in the same directory as the Dockerfile. Then run:
docker build -t myimage .
And once the build completes you will have an image named myimage available locally.
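You can then run the new image just like the original one. A quick sketch, reusing the port mapping from the first answer (the container name web-02 is only illustrative):
docker run -d -p 9180:80 --name web-02 myimage
docker exec web-02 cat /some/configuration/file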

Related

How can I use "docker run --user" but with root privileges

I have a Docker image which contains an analysis pipeline. To run this pipeline, I need to provide input data and I want to keep the outputs. This pipeline must be able to be run by other users than myself, on their own laptops.
Briefly, my root (/) folder structure is as follows:
total 72
drwxr-xr-x 1 root root 4096 May 29 15:38 bin
drwxr-xr-x 2 root root 4096 Feb 1 17:09 boot
drwxr-xr-x 5 root root 360 Jun 1 15:31 dev
drwxr-xr-x 1 root root 4096 Jun 1 15:31 etc
drwxr-xr-x 2 root root 4096 Feb 1 17:09 home
drwxr-xr-x 1 root root 4096 May 29 15:49 lib
drwxr-xr-x 2 root root 4096 Feb 24 00:00 lib64
drwxr-xr-x 2 root root 4096 Feb 24 00:00 media
drwxr-xr-x 2 root root 4096 Feb 24 00:00 mnt
drwxr-xr-x 1 root root 4096 Mar 12 19:38 opt
drwxr-xr-x 1 root root 4096 Jun 1 15:24 pipeline
dr-xr-xr-x 615 root root 0 Jun 1 15:31 proc
drwx------ 1 root root 4096 Mar 12 19:38 root
drwxr-xr-x 3 root root 4096 Feb 24 00:00 run
drwxr-xr-x 1 root root 4096 May 29 15:38 sbin
drwxr-xr-x 2 root root 4096 Feb 24 00:00 srv
dr-xr-xr-x 13 root root 0 Apr 29 10:14 sys
drwxrwxrwt 1 root root 4096 Jun 1 15:25 tmp
drwxr-xr-x 1 root root 4096 Feb 24 00:00 usr
drwxr-xr-x 1 root root 4096 Feb 24 00:00 var
The pipeline scripts are in /pipeline and are packaged into the image with a "COPY . /pipeline" instruction in my Dockerfile.
For various reasons, this pipeline (which is a legacy pipeline) is set up so that the input data must be in a folder such as /pipeline/project. To run my pipeline, I use:
docker run --rm --mount type=bind,source=$(pwd),target=/pipeline/project --user "$(id -u):$(id -g)" pipelineimage:v1
In other words, I mount a folder with the data to /pipeline/project. I found I needed to use --user to ensure the output files would have the correct permissions, i.e. that I would have read/write/execute access on my host computer after the container exits.
The pipeline runs, but I have one issue: one particular piece of software used by the pipeline automatically tries to create (and I can't change that) one folder in $HOME (so /, which I showed above) and one folder in my WORKDIR (which I have set up in my Dockerfile to be /pipeline). These attempts fail, and I'm guessing it's because I am not running the pipeline as root. But I need to use --user to make sure my outputs have the correct permissions, i.e. that I don't require sudo rights to read these outputs etc.
My question is: how am I meant to handle this? It seems that by using --user, I have the correct permissions set for the mounted folder (/pipeline/projects) where many output files are successfully made, no problems there. But how can I ensure the other 2 folders are correctly made outside of that mount?
I have tried the following without success:
Doing "COPY --chown=myhostuid:mygroupid . /pipeline/". This works, but I have to hardcode my uid and gid, so it won't work if another colleague tries to run the image.
Adding a new user with sudo rights and making it run the image: "RUN useradd -r newuser -g sudo" (I also tried the "root" group, but no success). This just gives me outputs that require sudo rights to read/write/execute, which is not what I want.
Am I missing something? I don't understand why it's "easy" to handle permissions for a mounted folder but so much harder for the other folders in a container. Thanks.
If your software doesn't rely on relative paths (~/, ./), you can just set $HOME and WORKDIR to a directory that any user can write to:
ENV HOME=/tmp
WORKDIR /tmp
If you can't do that, you can pass the uid/gid via the environment to an entrypoint script running as root, chown/chmod as necessary, then drop privileges to run the pipeline (runuser, su, sudo, setuidgid).
For example (untested):
entrypoint.sh
#!/bin/bash
[[ -v "RUN_UID" ]] || { echo "unset RUN_UID" >&2; exit 1; }
[[ -v "RUN_GID" ]] || { echo "unset RUN_GID" >&2; exit 1; }
# chown, chmod, set env, etc.
chown $RUN_UID:$RUN_GID "/path/that/requires/write/permissions"
export HOME=/tmp
# Run the pipeline as a non-root user.
sudo -E -u "#$RUN_UID" -g "#$RUN_GID" /path/to/pipeline
Dockerfile
...
COPY entrypoint.sh /usr/local/bin/entrypoint.sh
ENTRYPOINT ["/usr/local/bin/entrypoint.sh"]
Finally, pass the user and group IDs via the environment when running:
docker run --rm --mount type=bind,source=$(pwd),target=/pipeline/project -e RUN_UID=$(id -u) -e RUN_GID=$(id -g) pipelineimage:v1

Files inside a docker image disappear when mounting a volume

My Docker image has several files inside its /tmp directory.
Example
/tmp # ls -al
total 4684
drwxrwxrwt 1 root root 4096 May 19 07:09 .
drwxr-xr-x 1 root root 4096 May 19 08:13 ..
-rw-r--r-- 1 root root 156396 Apr 24 07:12 6359688847463040695.jpg
-rw-r--r-- 1 root root 150856 Apr 24 06:46 63596888545973599910.jpg
-rw-r--r-- 1 root root 142208 Apr 24 07:07 63596888658550828124.jpg
-rw-r--r-- 1 root root 168716 Apr 24 07:12 63596888674472576435.jpg
-rw-r--r-- 1 root root 182211 Apr 24 06:51 63596888734768961426.jpg
-rw-r--r-- 1 root root 322126 Apr 24 06:47 6359692693565384673.jpg
-rw-r--r-- 1 root root 4819 Apr 24 06:50 635974329998579791105.png
When I run this image as a container:
sudo docker run -v /home/media/simple_dir2:/tmp -d simple_backup
The expected behavior is that if I run ls -al /home/media/simple_dir2
the files show up.
But the actual behavior is that nothing exists in /home/media/simple_dir2.
On the other hand, if I run the same image without the volume option such as:
sudo docker run -d simple_backup
And enter that container using:
sudo docker exec -it <simple_backup container id> /bin/sh
ls -al /tmp
Then the files exist.
TL;DR
I want to mount a volume (directory) on the host, and have it filled with the files which are inside of the docker image.
My env
Ubuntu 18.04
Docker 19.03.6
From: https://docs.docker.com/storage/bind-mounts/
Mount into a non-empty directory on the container
If you bind-mount into a non-empty directory on the container, the directory’s existing contents are obscured by the bind mount. This can be beneficial, such as when you want to test a new version of your application without building a new image. However, it can also be surprising and this behavior differs from that of docker volumes.
"So, if host os's directory is empty, then container's directory will override is that right?"
Nope, it doesn't compare them for which one has files; it just overrides the folder on the container with the one on the host no matter what.
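If you want the host directory to start out with the files that are baked into the image, you have to put them there yourself, or use a named volume, which Docker pre-populates from the image the first time it is used. A rough sketch using the image and paths from the question (the volume name simple_vol is only illustrative):
# copy the image's /tmp contents out to the host directory once
id=$(sudo docker create simple_backup)
sudo docker cp "$id":/tmp/. /home/media/simple_dir2/
sudo docker rm "$id"
sudo docker run -v /home/media/simple_dir2:/tmp -d simple_backup
# alternatively, a named volume gets populated from the image's /tmp on first use
sudo docker run -v simple_vol:/tmp -d simple_backup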

Docker: file permissions with --volume bind mount

I'm following the guidelines from: https://denibertovic.com/posts/handling-permissions-with-docker-volumes/ to set up a --volume bind mount in my container and create a user in the guest container with the same UID as my host user, the theory being that my container user should then be able to access the mount. It's not working for me and I'm looking for some pointers on what to try next.
More background details:
My Dockerfile starts from an alpine base and adds python dev packages. It copies across an entrypoint.sh script per the guidelines from denibertovic. It then jumps to the entrypoint.sh script.
FROM alpine
RUN apk update
RUN apk add bash
RUN apk add python3
RUN apk add python3-dev
RUN apk add su-exec
COPY entrypoint.sh /usr/local/bin/entrypoint.sh
RUN chmod +x /usr/local/bin/entrypoint.sh
ENTRYPOINT ["/usr/local/bin/entrypoint.sh"]
The entrypoint.sh script adds a user to the container with the UID passed in as an environment variable.
#!/bin/bash
# Add local user
# Either use the LOCAL_USER_ID if passed in at runtime or
# fallback
USER_ID=${LOCAL_USER_ID:-9001}
echo "Starting with UID : $USER_ID"
adduser -s /bin/bash -u $USER_ID -H -D user
export HOME=/home/user
su-exec user "$@"
The container builds no problem.
I then run it with the following command line:
sudo docker run -it -e LOCAL_USER_ID=`id -u` -v `realpath ../..`:/ws django-runtime /bin/bash
You'll see that I'm passing in my host UID to be mapped to the container user's UID and I'm asking for a volume bind mount from my local working directory to the /ws mountpoint in the container.
From the bash shell inside the container I can see that /ws is owned by the 'user' UID matching my own 'id'. However, when I go to list the contents of /ws I get a Permission Denied error as follows:
[dleclair@localhost runtime]$ sudo docker run -it -e LOCAL_USER_ID=`id -u` -v `realpath ../..`:/ws django-runtime /bin/bash
[sudo] password for dleclair:
Starting with UID : 1000
bash-5.0$ id
uid=1000(user) gid=1000(user) groups=1000(user)
bash-5.0$ ls -la .
total 0
drwxr-xr-x 1 root root 27 Feb 8 09:15 .
drwxr-xr-x 1 root root 27 Feb 8 09:15 ..
-rwxr-xr-x 1 root root 0 Feb 8 09:15 .dockerenv
drwxr-xr-x 1 root root 18 Feb 8 07:44 bin
drwxr-xr-x 5 root root 360 Feb 8 09:15 dev
drwxr-xr-x 1 root root 91 Feb 8 09:15 etc
drwxr-xr-x 2 root root 6 Jan 16 21:52 home
drwxr-xr-x 1 root root 17 Jan 16 21:52 lib
drwxr-xr-x 5 root root 44 Jan 16 21:52 media
drwxr-xr-x 2 root root 6 Jan 16 21:52 mnt
drwxr-xr-x 2 root root 6 Jan 16 21:52 opt
dr-xr-xr-x 119 root root 0 Feb 8 09:15 proc
drwx------ 2 root root 6 Jan 16 21:52 root
drwxr-xr-x 1 root root 21 Feb 8 07:44 run
drwxr-xr-x 1 root root 21 Feb 8 08:22 sbin
drwxr-xr-x 2 root root 6 Jan 16 21:52 srv
dr-xr-xr-x 13 root root 0 Feb 8 01:58 sys
drwxrwxrwt 2 root root 6 Jan 16 21:52 tmp
drwxr-xr-x 1 root root 19 Feb 8 07:44 usr
drwxr-xr-x 1 root root 19 Jan 16 21:52 var
drwxrwxr-x 5 user user 111 Feb 8 02:15 ws
bash-5.0$
bash-5.0$
bash-5.0$ cd /ws
bash-5.0$ ls -la
ls: can't open '.': Permission denied
total 0
bash-5.0$
Appreciate any pointers anyone can offer. Thanks!
After more searching I found the answer to my problem here: Permission denied on accessing host directory in Docker and here: http://www.projectatomic.io/blog/2015/06/using-volumes-with-docker-can-cause-problems-with-selinux/.
In short, the problem was that the default SELinux labels on the volume mount were blocking access to the mounted files. The solution was to add a ':Z' suffix to the -v command-line argument to make Docker set the appropriate SELinux labels on the mounted files and allow access.
The command line therefore became:
sudo docker run -it -e LOCAL_USER_ID=`id -u` -v `realpath ../..`:/ws:Z django-runtime /bin/bash
Worked like a charm.
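As a side note, Docker also accepts a lowercase :z suffix: :Z applies a private SELinux label that only this one container can use, while :z applies a shared label so the same bind mount can be used by several containers. For example:
sudo docker run -it -e LOCAL_USER_ID=`id -u` -v `realpath ../..`:/ws:z django-runtime /bin/bash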

creating my own docker image for archlinux and how to use it for development

I am trying to learn Docker, so I am creating an Arch Linux image. At the moment I am not worried about size, but I am stuck on how to go further and use this image as the development environment for a project. My goal is to create and use different Arch Linux images for my different projects separately.
1) Switch to root in the terminal
2) mkdir archlinux
3) pacstrap -i -c -d ./archlinux base
4) echo 'en_US.UTF-8 UTF-8' > ./archlinux/etc/locale.gen
5) arch-chroot ./archlinux locale-gen
6) echo 'LANG=en_US.UTF-8' > ./archlinux/etc/locale.conf
Now the total size of the folder archlinux is 899 MB.
Now I am importing it as a Docker image:
cd archlinux
tar -c . | docker import - example_archlinux
tar: ./etc/pacman.d/gnupg/S.gpg-agent: socket ignored
tar: ./etc/pacman.d/gnupg/S.gpg-agent.extra: socket ignored
tar: ./etc/pacman.d/gnupg/S.gpg-agent.ssh: socket ignored
tar: ./etc/pacman.d/gnupg/S.scdaemon: socket ignored
tar: ./etc/pacman.d/gnupg/S.gpg-agent.browser: socket ignored
sha256:2b3ed6536389a1184f402ff5a9d20380a3f4aa2c49bdee31df9c7c10186eb889
Now I list the Docker images:
# docker images
REPOSITORY TAG IMAGE ID CREATED SIZE
example_archlinux latest 2b3ed6536389 About a minute ago 881MB
Now I try to run the image:
# docker run -ti example_archlinux:latest /bin/bash
[root@3863ba31186b /]#
# docker run -ti example_archlinux:latest ls -al
total 52
drwxr-xr-x 1 root root 4096 Oct 16 08:32 .
drwxr-xr-x 1 root root 4096 Oct 16 08:32 ..
-rwxr-xr-x 1 root root 0 Oct 16 08:32 .dockerenv
lrwxrwxrwx 1 root root 7 Jan 5 2018 bin -> usr/bin
drwxr-xr-x 2 root root 4096 Oct 16 08:01 boot
drwxr-xr-x 5 root root 360 Oct 16 08:32 dev
drwxr-xr-x 1 root root 4096 Oct 16 08:32 etc
drwxr-xr-x 2 root root 4096 Jan 5 2018 home
lrwxrwxrwx 1 root root 7 Jan 5 2018 lib -> usr/lib
lrwxrwxrwx 1 root root 7 Jan 5 2018 lib64 -> usr/lib
drwxr-xr-x 2 root root 4096 Jan 5 2018 mnt
drwxr-xr-x 2 root root 4096 Jan 5 2018 opt
dr-xr-xr-x 275 root root 0 Oct 16 08:32 proc
drwxr-x--- 3 root root 4096 Oct 16 08:01 root
drwxr-xr-x 2 root root 4096 Oct 16 08:01 run
lrwxrwxrwx 1 root root 7 Jan 5 2018 sbin -> usr/bin
drwxr-xr-x 4 root root 4096 Oct 16 08:01 srv
dr-xr-xr-x 13 root root 0 Oct 16 08:32 sys
drwxrwxrwt 2 root root 4096 Oct 16 08:01 tmp
drwxr-xr-x 8 root root 4096 Oct 16 08:10 usr
drwxr-xr-x 12 root root 4096 Oct 16 08:01 var
It's great, it's working.
Q1: Will Docker not ask for the root login and password, assuming I have set a root password?
I want to create my Django + nginx + PostgreSQL + Redis + git setup. I will install and set up the required packages.
So I am testing whether the run command will save the folders created:
# docker run -ti example_archlinux:latest /bin/bash
[root@9f4e56ce38c5 /]# mkdir hare
[root@9f4e56ce38c5 /]# exit
# docker run -ti example_archlinux:latest ls /hare
ls: cannot access '/hare': No such file or directory
I have the main question:
Q2: I created a folder, but after I exit it is not there anymore.
What is the best way to use a Docker image for my development?
I can't afford for my files to be gone after I exit.
So is there any way that the container is created permanently and I can work in it for my development?
OR
Where should I create my source code, on the host or in Docker? I want everything in one place.
Q1: I never tried setting the root password. But usually, when running the container, you'll be logged in as root, unless you use the USER Dockerfile instruction, which is the more secure approach. More about it here
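For example, a minimal sketch of the USER instruction on top of the image from the question (the user name "dev" is only illustrative; it is not something the image already contains):
FROM example_archlinux:latest
RUN useradd -m dev
USER dev
WORKDIR /home/dev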
Q2: Every time you remove your container, everything inside it is destroyed. So you'll lose the files you've created unless you bind a volume to your host. Volumes are the standard way to go. You can define a volume, for instance, on your docker run command:
docker run -ti -v /host/source/folder:/desired/guest/folder example_archlinux:latest ls -al
Now you can add/remove/change files from either the container or the host, and the changes will persist. There won't be duplicate files; both simply have access to the same ones.
more details here
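Also note that every docker run starts a brand-new container, which is why your /hare folder was gone: the second run used a different container rather than re-entering the first one. If you want to keep working inside a single container, reuse it by name instead of running the image again, or snapshot its state into a new image with docker commit. A sketch (the names arch-dev and example_archlinux:dev are only illustrative):
docker run -ti --name arch-dev example_archlinux:latest /bin/bash
# ... create files, then exit ...
docker start -ai arch-dev
docker commit arch-dev example_archlinux:dev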

Docker run with "-v" create another shared directory

I have a strange problem running a Docker container.
It is working OK if I run:
docker run -it -v /home/drleo/pythonCourses:/home/pythonCurses /redpmorg/python-courses
But if I run the container with the publish option, Docker will create a new folder in my /home/drleo directory with the SAME name, pythonCourses, owned by root but obviously empty:
docker run -it -p 127.0.0.1:8080:8080 -v /home/drleo/pythonCourses:/home/pythonCurses /redpmorg/python-courses
-rw-r--r-- 1 drleo drleo 675 May 6 2016 .profile
drwxr-xr-x 2 drleo drleo 4096 May 6 2016 Public
drwxr-xr-x 2 root root 4096 Feb 16 13:08 pyhtonCourses
drwxrwxr-x 2 drleo drleo 4096 Feb 16 13:08 pythonCourses
-rwxrwxr-x 1 drleo drleo 71 Jan 20 22:35 reset-network
The question is why? Thanks!
You seem to have a typo somewhere: python != pyhton.
Double-check your command history.
