I try to execute the following line:
docker exec --user www-data nextcloud_docker php /var/www/html/occ db:convert-filecache-bigint
which returns a prompt:
This can take up to hours, depending on the number of files in your instance!
Continue with the conversion (y/n)? [n]
Unfortunately, the docker exec command then ends (returns to the shell) before I can answer the prompt, so the conversion never starts.
How can I solve this?
Thanks.
You can try setting the -i flag on the docker command and piping a 'y' into it, like this:
echo y | docker exec -i --user www-data nextcloud_docker php /var/www/html/occ db:convert-filecache-bigint
or you can run the command fully interactively with the -it flags, like this:
docker exec -it --user www-data nextcloud_docker php /var/www/html/occ db:convert-filecache-bigint
occ has a -n switch.
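For example, with the container from the question, something like this should run the conversion without any prompt (-n is the shorthand for --no-interaction):
docker exec --user www-data nextcloud_docker php /var/www/html/occ db:convert-filecache-bigint -n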
I run it from cron, including the update. I have these lines in /home/update-nextcloud-inside-container.sh inside my container:
#!/bin/bash
date
# Temporarily give www-data a real shell so that "su -c ... www-data" works
sed -i 's~www-data:/var/www:/usr/sbin/nologin~www-data:/var/www:/bin/bash~g' /etc/passwd
su -c "cd /var/www/nextcloud; php /var/www/nextcloud/updater/updater.phar --no-interaction" www-data
su -c "cd /var/www/nextcloud; ./occ db:add-missing-indices -n" www-data
su -c "cd /var/www/nextcloud; ./occ db:convert-filecache-bigint -n" www-data
# Switch www-data back to the nologin shell
sed -i 's~www-data:/var/www:/bin/bash~www-data:/var/www:/usr/sbin/nologin~g' /etc/passwd
and the host cron launches a script with these lines:
ActiveContainer=$(/home/myusername/bin/lsdocker.sh | grep next )
docker exec -i ${ActiveContainer} /home/update-nextcloud-inside-container.sh
I see now that I'm missing the step of taking the instance offline before running convert-filecache-bigint. I'll have to add that, as sketched below.
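Something like this should do it (a sketch along the same lines as above, not tested yet), wrapping the conversion in occ maintenance:mode:
su -c "cd /var/www/nextcloud; ./occ maintenance:mode --on" www-data
su -c "cd /var/www/nextcloud; ./occ db:convert-filecache-bigint -n" www-data
su -c "cd /var/www/nextcloud; ./occ maintenance:mode --off" www-data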
Edit: (lsdocker.sh is a script that uses docker ps to list just the active container names)
When executing this command, I can't leave out either -i or -t and still get bash to work.
sudo docker exec -it 69e937450dab bash
What exactly do these flags do? When would I need the command without them?
The flags -i and -t are required to run an interactive shell session in the container:
-i makes the session interactive by keeping STDIN open even if not attached
-t allocates a pseudo-TTY, allowing you to interact with the container using a terminal
I will answer it myself:
Normal execution without any flags:
[ec2-user@ip-172-31-109-14 ~]$ sudo docker exec 69e937450dab ls
bin
boot
dev
docker-entrypoint.d
docker-entrypoint.sh
etc
If your command needs input, like cat does, you can try:
[ec2-user@ip-172-31-109-14 ~]$ echo test | sudo docker exec 69e937450dab cat
Nothing will show, because there is no input stream going to the container. Forwarding one is exactly what the -i flag does:
[ec2-user@ip-172-31-109-14 ~]$ echo test | sudo docker exec -i 69e937450dab cat
test
Now suppose you want to start bash as the process:
sudo docker exec 69e937450dab bash
You will see nothing, because the process starts in the container without a terminal attached. Adding the -t flag does the trick:
[ec2-user@ip-172-31-109-14 ~]$ sudo docker exec -t 69e937450dab bash
root@69e937450dab:/#
But this does not help much on its own, because we still need an input stream that carries our commands to the bash process. Therefore, we need to combine the two flags:
[ec2-user@ip-172-31-109-14 ~]$ sudo docker exec -i -t 69e937450dab bash
root@69e937450dab:/# ls
bin boot dev docker-entrypoint.d docker-entrypoint.sh etc hi home lib lib64 media mnt opt proc root run sbin srv sys tmp usr var
root@69e937450dab:/#
Small recap:
-t attaches the bash process to our terminal
-i lets us send input via STDIN (for example, from the keyboard) to the bash process in the container
Omit -i for commands that do not need any input. Omit -t (and bash) when you just want to run a single non-interactive command.
Is it possible to save the selected working directory in a Docker container before exiting it?
By default, Docker does not remember the directory that was selected before the shell exited.
In the example below, I changed the directory inside the container to /home.
Example:
> docker exec -it loving_mccarthy /bin/bash
root@6bd70522dd17:/# cd /home
root@6bd70522dd17:/home# exit
exit
> docker exec -it loving_mccarthy /bin/bash
/#
You can achieve this behavior with a bash function that runs on shell exit (via trap) and writes the current working directory into .bashrc; the next time you start a bash terminal in the container, you can return to that directory with cd -.
Here's a full example:
Start the container:
$ docker run -it --name=example -d ubuntu /bin/bash
Run docker exec the first time, and configure the function for trap:
$ docker exec -it example /bin/bash
root@ecfc612fe6b8:/# set_workdir_on_exit() { echo "export OLDPWD=$PWD" >> $HOME/.bashrc ;}
root@ecfc612fe6b8:/# trap 'set_workdir_on_exit' EXIT
root@ecfc612fe6b8:/# cd /home
root@ecfc612fe6b8:/home# exit
Run docker exec the second time:
$ docker exec -it example /bin/bash
root@6a491ad1aee5:/# cd -
/home
root@6a491ad1aee5:/home#
What's wrong with the following statement?
sudo docker exec myDockerName ls -lt /var/lib/myApp/data/myFolder/debian*.gz
This returns No such file or directory,
but running the same command from inside the container returns the desired results:
sudo docker exec -it myDockerName bash
ls -lt /var/lib/myApp/data/myFolder/debian*.gz
Does ls work differently under docker exec?
I think passing your command as a single string to bash -c solves this. docker exec runs ls directly, without a shell inside the container, so the * wildcard is never expanded there (ls itself does not expand globs); wrapping the command in bash -c lets a shell inside the container do the expansion:
sudo docker exec myDockerName bash -c "ls -lt /var/lib/myApp/data/myFolder/debian*.gz"
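If the image does not have bash, the same idea works with sh -c (assuming the image ships a POSIX sh); single quotes are fine here too and keep the host shell away from the pattern:
sudo docker exec myDockerName sh -c 'ls -lt /var/lib/myApp/data/myFolder/debian*.gz'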
By default, when you run
docker run -it [myimage]
OR
docker attach [mycontainer]
you connect to the terminal as the root user, but I would like to connect as a different user. Is this possible?
For docker run:
Simply add the option --user <user> to change to another user when you start the docker container.
docker run -it --user nobody busybox
For docker attach or docker exec:
Since these commands attach to or execute inside an existing container, by default they use the user that container is already running as.
docker run -it busybox # CTRL-P/Q to quit
docker attach <container id> # then you have root user
/ # id
uid=0(root) gid=0(root) groups=10(wheel)
docker run -it --user nobody busybox # CTRL-P/Q to quit
docker attach <container id>
/ $ id
uid=99(nobody) gid=99(nogroup)
If you really want to attach as a particular user, then either:
start the container with that user (docker run --user <user>, or specify it in your Dockerfile with USER), or
change the user inside the container using su, for example as sketched below.
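For example, assuming the container's default user is root, the target user already exists in the container, and bash is available (www-data is just an illustration), something like this gives you a shell as that user:
docker exec -it <container id> su -s /bin/bash www-data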
You can run a shell in a running docker container using a command like:
docker exec -it --user root <container id> /bin/bash
As an updated answer from 2020: the --user / -u option takes a username or UID (format: <name|uid>[:<group|gid>]).
Then it works for me like this:
docker exec -it -u root:root container /bin/bash
Reference: https://docs.docker.com/engine/reference/commandline/exec/
You can specify USER in the Dockerfile. All subsequent actions will be performed using that account. You can specify USER one line before the CMD or ENTRYPOINT if you only want to use that user when launching a container (and not when building the image). When you start a container from the resulting image, you will attach as the specified user.
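A minimal sketch of that pattern (the image name and user below are made up for illustration), generating and building such a Dockerfile from the shell:
cat > Dockerfile <<'EOF'
FROM ubuntu:22.04
# Build steps before USER still run as root
RUN useradd --create-home appuser
# Everything from here on, including the container's CMD, runs as appuser
USER appuser
CMD ["bash"]
EOF
docker build -t myimage-nonroot .
# The interactive shell now belongs to appuser instead of root
docker run -it myimage-nonroot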
The only way I am able to make it work is by:
docker run -it -e USER=$USER -v /etc/passwd:/etc/passwd -v `pwd`:/siem mono bash
su - magnus
So I have to both pass the $USER environment variable and bind-mount the host's /etc/passwd file. This way I can compile in the /siem folder and keep the files there owned by my own user rather than root.
My solution:
#!/bin/bash
# Command string to run inside the container, passed as the script's arguments.
user_cmds="$@"

# Lower-case names are used because UID is a read-only variable in bash.
user_uid=$(id -u "$USER")
user_gid=$(id -g "$USER")

# Generate a temporary wrapper script in the current directory,
# which is bind-mounted as /cmd inside the container.
RUN_SCRIPT=$(mktemp -p "$(pwd)")
(
cat << EOF
addgroup --gid $user_gid $USER
useradd --no-create-home --home /cmd --gid $user_gid --uid $user_uid $USER
cd /cmd
runuser -l $USER -c "${user_cmds}"
EOF
) > "$RUN_SCRIPT"
trap "rm -rf $RUN_SCRIPT" EXIT

# The image's entrypoint is expected to run this string through a shell.
docker run -v "$(pwd)":/cmd --rm my-docker-image "bash /cmd/$(basename "${RUN_SCRIPT}")"
This allows the user to run arbitrary commands using the tools provided by my-docker-image. Note how the user's current working directory is volume-mounted to /cmd inside the container.
I am using this workflow to let my dev team cross-compile C/C++ code for an arm64 target whose BSP I maintain (my-docker-image contains the cross-compiler, sysroot, make, cmake, etc.). With this, a user can simply do something like:
cd /path/to/target_software
cross_compile.sh "mkdir build; cd build; cmake ../; make"
Where cross_compile.sh is the script shown above. The addgroup/useradd machinery allows user-ownership of any files/directories created by the build.
While this works for us, it seems sort of hacky. I'm open to alternative implementations...
For docker-compose, in the docker-compose.yml:
version: '3'
services:
  app:
    image: ...
    user: ${UID:-0}
    ...
In .env:
UID=1000
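To keep that value in line with your host user, you can generate the .env entry, for example (this overwrites .env, so append instead if it already holds other variables):
echo "UID=$(id -u)" > .env
docker-compose up -d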
Execute a command as the www-data user: docker exec -t --user www-data container bash -c "ls -la"
This solved my use case, which was: "Compile webpack stuff in a nodejs container on Windows running Docker Desktop with WSL2, and have the built assets owned by your currently logged-in user."
docker run -u 1000 -v "$PWD":/build -w /build node:10.23 /bin/sh -c 'npm install && npm run build'
Based on the answer by eigenfield. Thank you!
Also this material helped me understand what is going on.
When running the command below on the command line (in a terminal), it executes fine:
$ sudo docker exec -it 5570dc09b58 bash
But the same command fails with the following error when run from a shell script file:
FATA[0000] cannot enable tty mode on non tty input
Scripts may be forced to run in interactive mode with the -i option or with a #!/bin/bash -i header.
So adding a shebang with the -i option to the script should work:
#!/bin/bash -i
docker exec -it ed3d9e46b8ee date
Run the script as usual:
chmod +x run.sh
sudo ./run.sh
Output:
Thu Apr 2 14:06:00 UTC 2015
You are not running docker in a terminal, so you should remove -t from -it:
sudo docker exec -i 5570dc09b58 bash
See a more detailed answer here.
There are a few images that do not provide an interactive shell or bash at all.
Example: the Docker image mockserver/mockserver.
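In that case you can try sh as a fallback; if the image ships no shell at all (distroless-style images), docker exec can only run binaries that actually exist in the image. The container id below is illustrative:
docker exec -it <container id> sh     # works only if the image ships a /bin/sh
docker exec -it <container id> bash   # on shell-less images this typically fails with "executable file not found in $PATH"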