running 'docker-compose exec ...' within GitLab CI - docker

I'm trying to run a command on a container, like docker-compose exec xyz, from a .gitlab-ci.yml file.
The error, which I don't understand, reads the input device is not a TTY, and then it exits.
How can I troubleshoot this?

A TTY is effectively STDIN: you're executing a command (I'm guessing with the -it flags) that expects some input on STDIN after the exec command (like typing a password, or running bash interactively in a running container). As it's a build pipeline, it errors because you haven't provided anything. Otherwise, can you please provide some more info about your input?
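As the answers further down show for the same error, docker-compose exec accepts a -T flag that disables pseudo-TTY allocation, which is usually what a CI job needs. A minimal .gitlab-ci.yml sketch, assuming the stack is brought up in the same job; the job name, the service name xyz, and the command are placeholders:

run-in-container:
  script:
    - docker-compose up -d
    - docker-compose exec -T xyz sh -c "echo running without a TTY"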

Related

docker exec -> permission denied when executing through ssh

I am trying to execute a command on a Docker container that is running on a remote server. I am running most commands through ssh and they all work correctly. However, this command modifies the file /etc/environment and I get a "permission denied" error.
The command in question is docker exec container_id echo 'WDS_SOCKET_PORT=XXXXX' >> /etc/environment
If I run the command from the docker host, it works
If I run a simple command remotely using ssh user@ip docker exec container_id ls, it works
If I run this command remotely using ssh user@ip docker exec container_id echo 'WDS_SOCKET_PORT=XXXXX' >> /etc/environment, I get sh: 1: cannot create /etc/environment: Permission denied
I tried adding the option -u 0 to the docker exec command with no luck.
I don't mind making changes to the Dockerfile since I can kill, remove or recreate this container with no problem.
The error isn't coming from docker or ssh; it's coming from the shell that parses the command you want to run. The >> redirection is handled by that shell before docker ever sees it, so you end up trying to modify the file on the host rather than inside the container. To do I/O redirection inside the container, you need to run a shell there and let that shell parse the command.
ssh user@ip "docker exec container_id /bin/sh -c 'echo \"WDS_SOCKET_PORT=XXXXX\" >> /etc/environment'"
EDIT: Note that the whole docker command should be surrounded by quotes. I believe this is because ssh might otherwise parse different parts of the command as parameters of the docker command. This way, each sub-command is clearly delimited.
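To confirm the line was actually appended, you can read the file back the same way; this reuses the placeholder user@ip and container_id from the question:

ssh user@ip "docker exec container_id cat /etc/environment"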

Docker exec error "executable file not found in $PATH: unknown" when running a command as a string

Below is the command I am trying to run:
docker exec sandbox_1 'influxd-ctl sandbox_1:8091'
I understand that this apparently means the container will execute it with a different shell that does have the necessary $PATH, but I'm not sure how to deal with that.
For what it's worth, I tried influxd-ctl without the single quotes and it didn't read the rest of the command.
docker exec sandbox_1 influxd-ctl sandbox_1:8091
Thoughts?
Update: I also tried running bash -c <string> as the command I passed to exec but that didn't seem to work either.
Single quotes shouldn't be used here. The exec command takes the command and its arguments as separate arguments.
The correct command in your case should be:
docker exec <container> influxd-ctl <container>:8091
You can also test the command when having a shell inside the container like this:
docker exec -it <container> bash
You should then get a root shell like this (provided bash is available inside the container; otherwise another shell can be used instead):
root@<container>:~#
Note: The working dir might be different based on where it was set in the Dockerfile used to build the image of the container.
In the now-interactive shell inside the container, you can try your command directly, without docker exec passing arguments around.
root@<container>:~# influxd-ctl <container>:8091
If you find that your command doesn't work there, then probably the influxd-ctl command expects different parameters from what you are suggesting.
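For example, still in that interactive shell, you can first check that the binary is actually on the PATH (assuming which is available in the image); if it prints nothing, the executable itself is the problem rather than its arguments:

root@<container>:~# which influxd-ctl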

docker-compose exec python the input device is not a TTY in AWS EC2 UserData

I am using EC2 UserData to bootstrap the instance.
Tracking the log of the bootstrap execution, /var/log/cloud-init-output.log, I found that the script stopped at:
+ docker-compose exec web python /var/www/flask/app/db_fixtures.py
the input device is not a TTY
It seems like this command is running in interactive mode, but why? And how can I force non-interactive mode for this command (docker-compose exec)?
Citing from the docker-compose exec docs:
Commands are by default allocating a TTY, so you can use a command such as docker-compose exec web sh to get an interactive prompt.
To disable this behavior, you can either use the -T flag to disable pseudo-TTY allocation:
docker-compose exec -T web python /var/www/flask/app/db_fixtures.py
Or set the COMPOSE_INTERACTIVE_NO_CLI environment variable to 1 before running docker-compose exec:
export COMPOSE_INTERACTIVE_NO_CLI=1
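Applied to the UserData script from the question, the fix might look like the sketch below; the project directory is an assumption about where the docker-compose.yml lives:

#!/bin/bash
# assumed project directory containing docker-compose.yml
cd /var/www/flask
docker-compose up -d
docker-compose exec -T web python /var/www/flask/app/db_fixtures.py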

What executes the RUN instruction in exec mode

In a Dockerfile, the RUN instruction has two forms, shell and exec:
# shell form
RUN <command>
# exec form
RUN ["executable", "param1", "param2"]
When the shell form is used, the <command> is run inside a shell by prepending a proper shell invocation to it (i.e. sh -c "<command>").
So far so good. The question is: how does the exec form work? How are commands executed without a shell? I haven't found a satisfying answer in the official docs.
The exec form of the command runs your command with the same OS syscall that Docker would use to run the shell itself. It's just doing the namespaced version of the fork/exec that Linux uses to run any process. The shell itself is a convenience that provides PATH handling, variable expansion, IO redirection, and other scripting features, but these aren't required to run processes at the OS level. This question may help you understand how Linux runs processes.
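A quick way to see what the missing shell means in practice: in the exec form there is no shell to expand variables or interpret redirection, so the literal strings are passed straight to the executable. For example, assuming a base image that has /bin/echo:

# shell form: /bin/sh expands $HOME before echo runs
RUN echo $HOME
# exec form: no shell, so echo receives the literal string "$HOME"
RUN ["echo", "$HOME"]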
This looks like a Dockerfile.
With the RUN syntax, the commands run one at a time in the container environment, and the default shell for that environment (usually bash) is spawned for each command. Running a command via sh -c does effectively the same thing: it spawns a shell to interpret the command.
In shell form, the command runs inside a shell via /bin/sh -c:
RUN apt-get update
The exec form allows execution of commands in images that don't have /bin/sh:
RUN ["apt-get", "update"]
The shell form is easier to write and you can use shell parsing of variables, for example:
CMD sudo -u ${USER} java ....
The exec form does not require the image to have a shell.
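If you want the exec form but still need shell parsing of variables, the usual pattern is to name the shell explicitly as the executable; this sketch reuses the hypothetical command from the example above:

CMD ["sh", "-c", "sudo -u ${USER} java ...."]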

How to get docker exec stdout to be as verbose as running command in container?

If I run a command using docker's exec command, like so:
docker exec container gulp
It simply runs the command, but nothing is output to my terminal window.
However, if I actually go into the container and run the command manually:
docker exec -ti container bash
gulp
I see gulp's output:
[13:49:57] Using gulpfile ~/code/services/app/gulpfile.js
[13:49:57] Starting 'scripts'...
[13:49:57] Starting 'styles'...
[13:49:58] Starting 'emailStyles'...
...
How can I run my first command and still have the output sent to my terminal window?
Side note: I see the same behavior with npm installs, forever restarts, etc. So, it is not just a gulp issue, but likely something with how docker is mapping the stdout.
How can I run my first command and still have the output sent to my terminal window?
You need to make sure docker run is launched with the -t option in order to allocate a pseudo tty.
Then a docker exec without -t would still work.
I discuss docker exec -it here, which references "Fixing the Docker TERM variable issue".
docker@machine:/c/Users/vonc/prog$ d run --name test -dit busybox
2b06a0ebb573e936c9fa2be7e79f1a7729baee6bfffb4b2cbf36e818b1da7349
docker@machine:/c/Users/vonc/prog$ d exec test echo ok
ok
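Applied to the gulp example from the question, that would mean starting the container with -t (and -d to keep it in the background) and then calling exec as before; the image name here is just a placeholder:

docker run --name container -dit some-image
docker exec container gulp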
