Process is hanging all the time - docker

Here is my script:
mogo()
{
sshpass -p 'abc123' ssh -tt -q -o StrictHostKeyChecking=no admin@192.168.10.145 <<'SSH_EOF'
sudo docker exec -it $(sudo docker ps --filter name=mongo --format "{{.Names}}") bash -c "mongodump -d saas -u abc -p abc123 -o md1/"
logout
SSH_EOF
touch /home/admin/11jul20
}
I am calling the above script from a cron job to take a backup.
Issue: the process created by the above script hangs forever, and
the touch command after logout is never executed.
Manual workaround: if I terminate the process manually with the kill command, the touch command runs and the file 11jul20 gets created.
If I remove the single quotes around 'SSH_EOF', the sudo docker command does not take the backup, but the touch command runs.
Kindly help me understand what is wrong.

I suspect that your issue is sudo's password prompt going to the pseudo-terminal allocated by your ssh -tt (which is not a real terminal), so the script blocks forever waiting for input that never arrives.
I avoid sshpass and never install it (it's a security risk), so I can't test your script.
However, the following will work.
1. Make your ssh login account part of the docker group, so sudo is no longer needed:
sudo usermod -a -G docker admin
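You can check that the membership took with id (note that a group change only becomes effective at the admin user's next login):
id admin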
As an aside, instead of using an 'admin' account, it would be a lot more secure to create a special login account (maintenance) on your linux box that can perform only the admin tasks needed.
2. Use the following script:
mogo()
{
ssh -T -q -o StrictHostKeyChecking=no admin@192.168.10.145 <<SSH_EOF
docker exec -i \$(docker ps --filter name=mongo --format "{{.Names}}") bash -c "mongodump -d saas -u $MGO_USER -p $MGO_PWD -o md1/"
logout
SSH_EOF
touch /home/admin/11jul20
}
Note that there is no need for a pseudo-terminal, so we disable it (-T); for the same reason docker exec gets -i rather than -it. I'd also look at not disabling StrictHostKeyChecking.
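For instance, rather than disabling the check, you can record the host key once up front on the machine that runs the cron job (a small sketch; the IP is the one from your script):
ssh-keyscan -H 192.168.10.145 >> ~/.ssh/known_hosts
After that, ssh can verify the host without the -o StrictHostKeyChecking=no option.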
3. Set your environment.
Keep your passwords and other secrets in environment variables (see the twelve-factor app methodology), never in your scripts. That's a minimum.
For instance, the following will be injected into your heredoc script.
IMPORTANT: make sure you don't use single quotes around SSH_EOF, otherwise the variable replacement is not performed.
export MGO_USER=abc
export MGO_PWD=abc123
Docker also has a secrets store, and other open source vaults are available, but with more complexity.
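As a rough sketch of the Docker option (docker secret requires the engine to run in swarm mode; mgo_pwd is a made-up name):
printf '%s' 'abc123' | docker secret create mgo_pwd -
Swarm services then see the value as the file /run/secrets/mgo_pwd instead of it travelling through the environment.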
4. Call your backup script:
mogo

Related

How do you block console, root or other users, access to a docker container?

I tried installing Puppet and changing the root user's shell to /sbin/nologin, but I can still get right into the console.
It is a CentOS 7 container.
Is Docker using a socket for the connection? Could I use SELinux to block the socket? If I do, I fear that I will also stop Docker from being able to communicate with the container at all. I have been reading through Docker security articles but have not found a good solution.
My end goal is for the container to be an ephemeral 'black box' when it comes up. My particular use case is a local web app, so no console access will be required.
You could try to remove all terminal commands (bash, sh, and so on) from the container:
docker exec -it [container-id] /bin/rm -R /bin/*
At that point you will not be able to use docker exec -it [container-id] bash to get a console in the container.
If you want to be more gentle about it, you can remove only the shells you have (and leave all the other commands, like rm, available):
docker exec -it [container-id] /bin/rm -R /bin/bash
docker exec -it [container-id] /bin/rm -R /bin/sh
... and so on
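If it worked, a later attempt to get a shell should fail with an error along these lines:
docker exec -it [container-id] bash
# OCI runtime exec failed: exec: "bash": executable file not found in $PATH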

How to use Deployer with Docker (Laradock)

I created a fresh DigitalOcean server with Docker on it (using Laradock) and got my Laravel website working well.
Now I want to automate my deployments using Deployer.
I think my only problem is that I can't get Deployer to run docker exec -it $(docker-compose ps -q php-fpm) bash;, which is the command I successfully use by hand to enter the appropriate Docker container (after using SSH to connect from my local machine to the DigitalOcean server).
When Deployer tries to run it, I get this error message:
➤ Executing task execphpfpm
[1.5.6.6] > cd /root/laradock && (pwd;)
[1.5.6.6] < /root/laradock
[1.5.6.6] > cd /root/laradock && (docker exec -it $(docker-compose ps -q php-fpm) bash;)
[1.5.6.6] < the input device is not a TTY
➤ Executing task deploy:failed
• done on [1.5.6.6]
✔ Ok [3ms]
➤ Executing task deploy:unlock
[1.5.6.6] > rm -f ~/daily/.dep/deploy.lock
• done on [1.5.6.6]
✔ Ok [188ms]
In Client.php line 99:
[Deployer\Exception\RuntimeException (1)]
The command "cd /root/laradock && (docker exec -it $(docker-compose ps -q php-fpm) bash;)" failed.
Exit Code: 1 (General error)
Host Name: 1.5.6.6
================
the input device is not a TTY
Here are the relevant parts of my deploy.php:
host('1.5.6.6')
    ->user('root')
    ->identityFile('~/.ssh/id_rsa2018-07-09')
    ->forwardAgent(true)
    ->stage('production')
    ->set('deploy_path', '~/{{application}}');
before('deploy:prepare', 'execphpfpm');
task('execphpfpm', function () {
    cd('/root/laradock');
    run('pwd;');
    run('docker exec -it $(docker-compose ps -q php-fpm) bash;');
    run('pwd');
});
I've already spent a day and a half reading countless articles and trying many different variations, e.g. replacing the -it flag with -i, setting export COMPOSE_INTERACTIVE_NO_CLI=1, or replacing the whole docker exec command with docker-compose exec php-fpm bash;.
I expect that I'm missing something fairly simple. Docker is widely used, and Deployer seems popular too.
To use Laravel Deployer you should connect via ssh directly to the workspace container.
You can expose the container's ssh port:
https://laradock.io/documentation/#access-workspace-via-ssh
Let's say you've forwarded the container's ssh port 22 to host port 2222. In that case you need to configure Deployer to use port 2222.
Also remember to set proper secure SSH keys, not the default ones.
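For example, the host definition from the question could then look like this (a sketch for Deployer 6, assuming the container's sshd is mapped to host port 2222):
host('1.5.6.6')
    ->port(2222)
    ->user('root')
    ->identityFile('~/.ssh/id_rsa2018-07-09');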
You should try
docker-compose exec -T php-fpm bash;
The -T option will
Disable pseudo-tty allocation. By default docker-compose exec allocates a TTY.
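For the setup in the question, a non-interactive command run this way might look like (the artisan call is just an example):
docker-compose exec -T php-fpm php artisan migrate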
In my particular case I had separate containers for PHP and Composer. That is why I could not connect to the container via SSH while deploying.
So I configured the bin/php and bin/composer parameters like this:
set('bin/php', 'docker exec php php');
set('bin/composer', 'docker run --volume={{release_path}}:/app composer');
Notice that here we use exec for the persistent php container, which is already running at that moment, and run to start a new instance of the composer container, which stops after installing the dependencies.

Error "The input device is not a TTY"

I am running the following command from my Jenkinsfile. However, I get the error "The input device is not a TTY".
docker run -v $PWD:/foobar -it cloudfoundry/cflinuxfs2 /foobar/script.sh
Is there a way to run the script from the Jenkinsfile without doing interactive mode?
I basically have a file called script.sh that I would like to run inside the Docker container.
Remove the -it from your cli to make it non interactive and remove the TTY. If you don't need either, e.g. running your command inside of a Jenkins or cron script, you should do this.
Or you can change it to -i if you have input piped into the docker command that doesn't come from a TTY. If you have something like xyz | docker ... or docker ... <input in your command line, do this.
Or you can change it to -t if you want TTY support but don't have it available on the input device. Do this for apps that check for a TTY to enable color formatting of the output in your logs, or for when you later attach to the container with a proper terminal.
Or if you need an interactive terminal and aren't running in a terminal on Linux or MacOS, use a different command line interface. PowerShell is reported to include this support on Windows.
What is a TTY? It's a terminal interface that supports escape sequences, moving the cursor around, etc, that comes from the old days of dumb terminals attached to mainframes. Today it is provided by the Linux command terminals and ssh interfaces. See the wikipedia article for more details.
To see the difference of running a container with and without a TTY, run a container without one: docker run --rm -i ubuntu bash. From inside that container, install vim with apt-get update; apt-get install vim. Note the lack of a prompt. When running vim against a file, try to move the cursor around within the file.
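A quick way to see the difference is to ask the container what is attached to its stdin:
docker run --rm -i ubuntu tty    # prints: not a tty
docker run --rm -it ubuntu tty   # prints something like: /dev/pts/0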
For docker run: DON'T USE the -it flag
(as BMitch said).
And it's not exactly what you are asking, but it may also be useful for others:
For docker-compose exec, use the -T flag!
The -T flag helps people who are using docker-compose exec (it disables pseudo-tty allocation).
For example:
docker-compose -f /srv/backend_bigdata/local.yml exec -T postgres backup
or
docker-compose exec -T mysql mysql -uuser_name -ppassword database_name < dir/to/db_backup.sql
For those who struggle with this error and git bash on Windows: just use PowerShell, where -it works perfectly.
If you are using git bash on Windows, you just need to put
winpty
before your docker line:
winpty docker exec -it some_container bash
In order for docker to allocate a TTY (the -t option) you already need to be in a TTY when docker run is called. Jenkins executes its jobs not in a TTY.
Having said that, you may also want to run the same script locally. In that case it can be really convenient to have a TTY allocated so you can send signals like Ctrl+C while it runs.
To fix this make your script optionally use the -t option, like so:
test -t 1 && USE_TTY="-t"
docker run ${USE_TTY} ...
When using git bash:
1) I execute the command:
docker exec -it 726fe4999627 /bin/bash
I get the error:
the input device is not a TTY. If you are using mintty, try prefixing the command with 'winpty'
2) then I execute the command:
winpty docker exec -it 726fe4999627 /bin/bash
I get another error:
OCI runtime exec failed: exec failed: container_linux.go:344: starting container process caused "exec: \"D:/Git/usr/bin/bash.exe\": stat D:/Git/usr/bin/bash.exe: no such file or directory": unknown
3) third, I execute:
winpty docker exec -it 726fe4999627 bash
It worked.
When using PowerShell, everything worked well.
Using docker-compose exec -T fixed the problem for me in Jenkins:
docker-compose exec -T containerName php script.php
Same case here: I was running the command from a .sh (bash) script and from a Python .py script,
and I got the same "The input device is not a TTY" error.
In my case, I was trying to take a dump from a running container in my production environment, with authentication and some arguments passed in,
writing out the .bak file of my MSSQL database container.
Removing -it from the command fixed it; if you want to keep it interactive, keep -i.
If you are using Windows, try cmd; for me it works. Also check that Docker is started.
My Jenkins pipeline step shown below failed with the same error.
steps {
    echo 'Building ...'
    sh 'sh ./Tools/build.sh'
}
In my "build.sh" script file, the docker run command produced this error when executed by the Jenkins job, although it worked fine when the script was run in a shell terminal. The error happened because of the -t option passed to docker run, which tries to allocate a terminal and fails if there is no terminal to allocate.
In my case I changed the script to pass the -t option only if a terminal can be detected. Here is the code after the changes:
DOCKER_RUN_OPTIONS="-i --rm"
# Only allocate tty if we detect one
if [ -t 0 ] && [ -t 1 ]; then
DOCKER_RUN_OPTIONS="$DOCKER_RUN_OPTIONS -t"
fi
docker run $DOCKER_RUN_OPTIONS --name my-container-name my-image-tag
I know this is not directly answering the question at hand, but this is for anyone who comes upon this question while using WSL, running Docker for Windows, and cmder or ConEmu.
The trick is not to use the Docker that is installed on Windows at /mnt/c/Program Files/Docker/Docker/resources/bin/docker.exe, but rather to install the Ubuntu/Linux Docker client. It's worth pointing out that you can't run the Docker daemon itself from within WSL, but you can connect to Docker for Windows from the Linux Docker client.
Install Docker on Linux
sudo apt-get install apt-transport-https ca-certificates curl software-properties-common
curl -fsSL https://download.docker.com/linux/ubuntu/gpg | sudo apt-key add -
sudo add-apt-repository "deb [arch=amd64] https://download.docker.com/linux/ubuntu $(lsb_release -cs) stable"
sudo apt-get update
sudo apt-get install docker-ce
Connect to Docker for Windows on port 2375, which needs to be enabled in the Docker for Windows settings:
docker -H localhost:2375 run -it -v /mnt/c/code:/var/app -w "/var/app" centos:7
Or set the DOCKER_HOST variable, which will allow you to omit the -H switch:
export DOCKER_HOST=tcp://localhost:2375
You should now be able to connect interactively with a tty terminal session.
In Jenkins, I'm using docker-compose exec -T, e.g.:
docker-compose exec -T app php artisan migrate
winpty works as long as you don't specify volumes to be mounted, such as .:/mountpoint or ${pwd}:/mountpoint.
The best workaround I have found is to use the git bash terminal inside Visual Studio Code to start and stop containers or docker-compose.
For those using Pyinvoke, see this documentation, which I'll syndicate here in case the link dies:
99% of the time, adding pty=True to your run call will make things work as you were expecting. Read on for why this is (and why pty=True is not the default).
Command-line programs often change behavior depending on whether a controlling terminal is present; a common example is the use or disuse of colored output. When the recipient of your output is a human at a terminal, you may want to use color, tailor line length to match terminal width, etc.
Conversely, when your output is being sent to another program (shell pipe, CI server, file, etc) color escape codes and other terminal-specific behaviors can result in unwanted garbage.
Invoke’s use cases span both of the above - sometimes you only want data displayed directly, sometimes you only want to capture it as a string; often you want both. Because of this, there is no “correct” default behavior re: use of a pseudo-terminal - some large chunk of use cases will be inconvenienced either way.
For use cases which don’t care, direct invocation without a pseudo-terminal is faster & cleaner, so it is the default.
Instead of using -it, use --tty (the long form of -t by itself; the error comes from combining -i with -t when stdin is not a terminal).
So your docker run should look like this:
docker run -v $PWD:/foobar --tty cloudfoundry/cflinuxfs2 /foobar/script.sh
Use only the -i flag rather than -it; it still lets you see what is going on inside the container:
docker exec -i $USER bash <<EOF
apt install nano -y
EOF
You might see a warning, but the output from inside the container still shows up on your terminal.

How to clean Docker container logs?

I read my Docker container's log output using
docker logs -f <container_name>
I log lots of data to the log in my Node.js app via calls to console.log(). I need to clean the log, because it has gotten too long and the docker logs command first runs through all the existing lines before getting to the end. How do I clear it to make it short again? I'd like to see a command like:
docker logs clean <container_name>
But it doesn't seem to exist.
First, if you just need to see less output, you can have docker only show you the more recent lines:
docker logs --since 30s -f <container_name_or_id>
Or you can put a number of lines to limit:
docker logs --tail 20 -f <container_name_or_id>
To delete the logs on a Docker for Linux install, you can run the following for a single container:
echo "" > $(docker inspect --format='{{.LogPath}}' <container_name_or_id>)
Note that this requires root, and I do not recommend this. You could potentially corrupt the logfile if you null the file in the middle of docker writing a log to the same file. Instead you should configure docker to rotate the logs.
Lastly, you can configure docker to automatically rotate logs with the following in an /etc/docker/daemon.json file:
{
"log-driver": "json-file",
"log-opts": {"max-size": "10m", "max-file": "3"}
}
That allows docker to keep up to 3 log files per container, with each file limited to 10 megs (so a limit between 20 and 30 megs of logs per container). You will need to run systemctl reload docker to apply those changes. Note that they become the defaults for newly created containers only; they do not apply to already created containers, which you will need to remove and recreate for the settings to take effect.
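If you'd rather not change the daemon-wide defaults, the same json-file driver options can be set per container when it is created (the image name here is just an example):
docker run -d --log-driver json-file --log-opt max-size=10m --log-opt max-file=3 nginx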
The best script I found is
sudo sh -c 'truncate -s 0 /var/lib/docker/containers/*/*-json.log'
It cleans all logs and you don't need to stop the containers.
Credit goes to https://bytefreaks.net/applications/docker/horrible-solution-how-to-delete-all-docker-logs
If you want to remove all log files, not only a specific container's logs, you can use:
docker system prune
But note that this does not clear the logs of running containers.
This is not the ideal solution, but until Docker builds in a command to do it, this is a good workaround.
Create a script file docker-clean-logs.sh with this content:
#!/bin/bash
rm $(docker inspect $1 | grep -G '"LogPath": "*"' | sed -e 's/.*"LogPath": "//g' | sed -e 's/",//g');
Grant the execute permission to it:
chmod +x ./docker-clean-logs.sh
Stop the Docker container that you want to clean:
docker stop <container_name>
Then run the above script:
./docker-clean-logs.sh <container_name>
And finally run your container again:
docker start ...
Credit goes to the user sgarbesi on this page: https://github.com/docker/compose/issues/1083
You can use logrotate as explained in this article
https://sandro-keil.de/blog/2015/03/11/logrotate-for-docker-container/
This needs to be done before launching the container.
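As a sketch, a logrotate rule along the lines of that article could look like this (the path assumes the default json-file log driver; copytruncate matters because the Docker daemon keeps the log file open):
/var/lib/docker/containers/*/*.log {
  daily
  rotate 7
  compress
  missingok
  copytruncate
}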
Run:
docker inspect {containerId}
Copy the LogPath value, then:
truncate -s 0 {LogPath}
Solution for a docker swarm service:
logging:
  options:
    max-size: "10m"
    max-file: "10"
In order to do this on OSX, you need to get to the virtual machine the Docker containers are running in.
You can use the walkerlee/nsenter image to run commands inside the VM like so:
docker run --rm -it --privileged --pid=host walkerlee/nsenter -t 1 -m -u -i -n sh
Combining that with a simplified version of the accepted answer you get:
#!/bin/sh
docker run --rm -it --privileged --pid=host walkerlee/nsenter -t 1 -m -u -i -n \
cp /dev/null $(docker inspect -f '{{.LogPath}}' $1)
Save it, chmod +x it, run it.
As far as I can tell this doesn't require the container to be stopped. Also, it clears out the log file (instead of deleting it) avoiding errors when doing docker logs right after cleanup.
On Windows 10 none of the solutions worked for me; I kept getting 'No such file or directory'.
This worked:
1) Get the container ID (inspect the container)
2) In File Explorer, open docker-desktop-data (in WSL)
3) Navigate to version-pack-data\community\docker\containers\CONTAINER_ID
4) Stop the container
5) Open the CONTAINER_ID-json.log file and trim it, or just create a blank file with the same name

Connect to docker container as user other than root

By default, when you run
docker run -it [myimage]
or
docker attach [mycontainer]
you connect to the terminal as the root user, but I would like to connect as a different user. Is this possible?
For docker run:
Simply add the option --user <user> to change to another user when you start the docker container.
docker run -it --user nobody busybox
For docker attach or docker exec:
Since these commands attach/execute into the existing process, they use that process's current user directly.
docker run -it busybox # CTRL-P/Q to quit
docker attach <container id> # then you have root user
/ # id
uid=0(root) gid=0(root) groups=10(wheel)
docker run -it --user nobody busybox # CTRL-P/Q to quit
docker attach <container id>
/ $ id
uid=99(nobody) gid=99(nogroup)
If you really want to attach as the user of your choice, then either
start the container with that user (docker run --user <user>, or mention it in your Dockerfile using USER), or
change the user inside the container using su <user>.
You can run a shell in a running docker container using a command like:
docker exec -it --user root <container id> /bin/bash
As an updated answer from 2020: the --user / -u option takes a username or UID (format: <name|uid>[:<group|gid>]).
It works for me like this:
docker exec -it -u root:root container /bin/bash
Reference: https://docs.docker.com/engine/reference/commandline/exec/
You can specify USER in the Dockerfile. All subsequent actions will be performed using that account. You can specify USER one line before the CMD or ENTRYPOINT if you only want to use that user when launching a container (and not when building the image). When you start a container from the resulting image, you will attach as the specified user.
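A minimal Dockerfile sketch of that (the base image and user name are just examples):
FROM ubuntu:20.04
RUN useradd --create-home appuser
# everything from here on, including containers started from the image, runs as appuser
USER appuser
CMD ["bash"]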
The only way I am able to make it work is by:
docker run -it -e USER=$USER -v /etc/passwd:/etc/passwd -v `pwd`:/siem mono bash
su - magnus
So I have to both pass the $USER environment variable and bind-mount the /etc/passwd file. That way I can compile in the /siem folder and keep ownership of the files there, instead of them being owned by root.
My solution:
#!/bin/bash
user_cmds="$@"
# use our own variable names: UID is a read-only variable in bash
group_id=$(id -g $USER)
user_id=$(id -u $USER)
RUN_SCRIPT=$(mktemp -p $(pwd))
(
cat << EOF
addgroup --gid $group_id $USER
useradd --no-create-home --home /cmd --gid $group_id --uid $user_id $USER
cd /cmd
runuser -l $USER -c "${user_cmds}"
EOF
) > $RUN_SCRIPT
trap "rm -rf $RUN_SCRIPT" EXIT
docker run -v $(pwd):/cmd --rm my-docker-image "bash /cmd/$(basename ${RUN_SCRIPT})"
This allows the user to run arbitrary commands using the tools provided by my-docker-image. Note how the user's current working directory is volume-mounted to /cmd inside the container.
I am using this workflow to allow my dev team to cross-compile C/C++ code for the arm64 target, whose BSP I maintain (my-docker-image contains the cross-compiler, sysroot, make, cmake, etc). With this, a user can simply do something like:
cd /path/to/target_software
cross_compile.sh "mkdir build; cd build; cmake ../; make"
Where cross_compile.sh is the script shown above. The addgroup/useradd machinery allows user ownership of any files/directories created by the build.
While this works for us, it seems sort of hacky. I'm open to alternative implementations...
For docker-compose, in the docker-compose.yml:
version: '3'
services:
  app:
    image: ...
    user: ${UID:-0}
    ...
In .env:
UID=1000
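A side note on why the .env file is needed: bash sets $UID but does not export it, so docker-compose does not see it by default. Two ways to provide it (a sketch):
echo "UID=$(id -u)" >> .env   # persist it for compose
export UID                    # or export the existing shell variable for the current session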
Execute a command as the www-data user: docker exec -t --user www-data container bash -c "ls -la"
This solved my use case, which was: "Compile webpack stuff in a Node.js container on Windows running Docker Desktop with WSL2, and have the built assets owned by your currently logged-in user."
docker run -u 1000 -v "$PWD":/build -w /build node:10.23 /bin/sh -c 'npm install && npm run build'
Based on the answer by eigenfield. Thank you!
This material also helped me understand what is going on.
