DB2 Client in Docker

We need to create a docker container that also has the db2 client installed. This container will also have some shell scripts that make use of the db2 client.
We take a base CentOS image and then add the db2 client via a RUN command:
COPY db2rtcl_nr.rsp /db2install/
RUN cd /db2install && curl -o ibm_data_server_runtime_client_linuxx64_v11.1.tar.gz http://public_file_server.com/downloads/appTools/installs/db2/ibm_data_server_runtime_client_linuxx64_v11.1.tar.gz && \
tar -xvf ibm_data_server_runtime_client_linuxx64_v11.1.tar.gz && \
rm -f ibm_data_server_runtime_client_linuxx64_v11.1.tar.gz && \
rtcl/db2setup -u db2rtcl_nr.rsp -f sysreq && \
chown -R 1000:1000 /opt/ibm/db2/V11.1
ENV PATH="$PATH:/opt/ibm/db2/V11.1/bin"
The image builds ok with no errors.
However, when I run and connect to the container via an interactive shell:
docker run -it --entrypoint=/bin/bash db2Container
and try to invoke the db2 CLI with
db2
I get the error:
DB21018E A system error occurred. The command line processor could not
continue processing.
What's confusing is that if I immediately run a bash shell and then invoke the db2 CLI, it works:
bash
db2
(c) Copyright IBM Corporation 1993,2007
Command Line Processor for DB2 Client 11.1.0
You can issue database manager commands and SQL statements from the command
prompt. For example:
db2 => connect to sample
db2 => bind sample.bnd
For general help, type: ?.
For command help, type: ? command, where command can be
the first few keywords of a database manager command. For example:
? CATALOG DATABASE for help on the CATALOG DATABASE command
? CATALOG for help on all of the CATALOG commands.
To exit db2 interactive mode, type QUIT at the command prompt. Outside
interactive mode, all commands must be prefixed with 'db2'.
To list the current command option settings, type LIST COMMAND OPTIONS.
For more detailed help, refer to the Online Reference Manual.
db2 =>
Things I have tried to diagnose the issue:
When I first drop into the interactive shell session, I type
env > /tmp/env1.txt
I then type bash and run
env > /tmp/env2.txt
When I diff the files, they are virtually identical EXCEPT for the variable:
SHLVL=2
which I know just indicates that the second shell is a nested shell
When I first drop into the interactive shell session, I type
set > /tmp/set1.txt
I then type bash and run
set > /tmp/set2.txt
When I diff the files, they are virtually identical EXCEPT for the SHLVL variable again
Why is the db2 CLI accessible after I bash in the container but not in the initial session when I have used docker run -it?
We are attempting to use this container as an executable container that has the db2 client in it to connect to external DB2 databases. We are NOT trying to run a db2 DB in a container.
What I am starting to suspect is that the issue lies in how the entrypoint is defined in our Dockerfile.
Using:
ENTRYPOINT cat /dev/null && /usr/bin/bash
the DB2 client is available when I run docker run -it ContainerName without having to immediately type bash
BUT it does not work when I try to run the container as an executable docker run ContainerName
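Presumably this is because the shell form of ENTRYPOINT is wrapped in /bin/sh -c, so anything passed on the docker run command line is ignored; Docker effectively runs something like:
/bin/sh -c 'cat /dev/null && /usr/bin/bash'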
The closest I have come to the solution is this modification to the Dockerfile:
ENTRYPOINT []
CMD ["/bin/bash"]
When I run the container as an executable (docker run ContainerName db2 list command options) it works. However, now if I docker run -it ContainerName I don't immediately have db2 commands available without typing bash once. This is still problematic since this container will have shell scripts in it that need to be able to run db2 commands.

After some more googling, I found this article: https://engineeringblog.yelp.com/2016/01/dumb-init-an-init-for-docker.html
Using their GitHub page example, I updated our Dockerfile with:
RUN wget -O /usr/local/bin/dumb-init https://github.com/Yelp/dumb-init/releases/download/v1.2.2/dumb-init_1.2.2_amd64
RUN chmod +x /usr/local/bin/dumb-init
and also updated our Dockerfile's entrypoint with:
ENTRYPOINT ["/usr/local/bin/dumb-init", "--"]
CMD ["/bin/bash"]
The result has been that my dummy shell script (which has a db2 command in it) inside the container works when the container is called as an executable:
docker run myContainer /scripts/dummyDB2connect.sh
AND I can also interactively spin up and connect to the container to run db2 commands without having to type the extra bash command.
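For reference, the dummy script is just a thin wrapper around the CLP; its contents are roughly like the sketch below (illustrative only -- the hostname, port, database name, and credentials are made up and are not the real file):
#!/bin/bash
set -e
# catalog the remote node and database, then prove the client can connect
db2 "CATALOG TCPIP NODE remnode REMOTE db2host.example.com SERVER 50000"
db2 "CATALOG DATABASE sampledb AT NODE remnode"
# DB2_PASSWORD is assumed to be passed in with docker run -e
db2 "CONNECT TO sampledb USER dbuser USING ${DB2_PASSWORD}"
db2 "SELECT 1 FROM sysibm.sysdummy1"
db2 "TERMINATE"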

Related

docker container exits out immediately with a script attached

I'm trying to add a script to a docker run command. The command I'm using is:
docker run -dit --name 1.4 ubuntu sh -c 'echo "Input website:"; read website; echo "Searching.."; sleep 1; curl http://$website;'
and then install curl, then enter a website as input, and it should reply to me as per the course I'm studying. But running this exact command makes the container exit immediately.
Any guidance on why that would be?
Also, how should I send the input to the container so it can use it afterwards? Do I just attach to it after installing curl in the terminal?
I'm going to recommend an extremely different workflow from what you suggest. Rather than manually installing software and trying to type arguments into the stdin of a shell script, you can build this into a reusable Docker image and provide its options as environment variables.
In comments you describe a workflow where you first start a container, then get a debugging shell inside of it, and then install curl. Unless you're really truly debugging, this is a pretty unusual workflow: anything you install this way will get lost as soon as the container exits, and you'll have to repeat this step every time you re-run the container. Instead, create a new empty directory, and inside that create a file named Dockerfile (exactly that name, no extension, capital D) containing
# Start our new image from this base
FROM ubuntu
# Install any OS-level packages we need.
# DEBIAN_FRONTEND=noninteractive avoids post-installation questions;
# --no-install-recommends avoids installing unneeded extra packages;
# --assume-yes (-y) skips the "are you sure" prompt.
RUN apt-get update \
 && DEBIAN_FRONTEND=noninteractive \
    apt-get install --no-install-recommends --assume-yes curl
Rather than try to read from the container's input, you can take the URL as an environment variable. In most cases the best way to give the main command to a container is by specifying it in the Dockerfile. You can imagine running a larger script or program here as well, and it would take the same environment-variable setting (using Python's os.environ, Node's process.env, Ruby's ENV, etc.).
In our case, let's make the main container command be the single curl command that you're trying to run. We haven't specified the value of the environment variable yet, and that's okay: this shell command isn't evaluated until the container actually runs.
# at the end of the Dockerfile
CMD curl "$website"
Now let's build and run it. When we do launch the container, we need to provide that $website environment variable value, which we can do with a docker run -e option.
# Build the image, giving it a name, from the content in the current directory:
docker build -t my/curl .
# Run it, deleting the container when done, and pass the option as an environment variable:
docker run \
  --rm \
  -e website=https://stackoverflow.com \
  my/curl
So note that we're starting the container in the foreground (no -d option) since we want to see its output and we expect it to exit promptly; we're cleaning up the container when it's done; we're not trying to pass a full shell script as a command-line argument; and we are providing our options on the command line, so we don't need to make the container's stdin work (no -i or -t option).
A Docker container is a wrapper around a single process. When that process exits, the container exits too. In this example, the thing you want the container to do is run a curl command; that's not a long-running process, hence docker run --rm but not -d. There's no "afterwards" here: if you need to query a different Web site, launch a new container. It's very normal to destroy and recreate containers, especially since there are many options that can only be specified when you first start a container.
With the image and container we've built here, in fact, it's useful to think about them as analogous to the /usr/bin/curl binary on your host. You build it once into a reusable artifact (here the Docker image), and you run multiple instances of it (curl commands or new Docker containers) giving options on the command line at startup time. You do not typically "get a shell" inside a curl command-line invocation, and I'd similarly avoid docker exec outside of debugging tasks.
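For instance, sticking with the image built above, re-running against different sites is just a matter of changing the option at startup:
docker run --rm -e website=https://example.com my/curl
docker run --rm -e website=https://docs.docker.com my/curl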
You can also use the alpine/curl image to run curl without needing to install anything.
First start the container in detached mode with the -d flag.
Then run your script with the exec subcommand.
docker run -d --name 1.4 alpine/curl sleep 600
docker exec -it 1.4 sh -c 'echo "Input website:"; read website; echo "Searching.."; sleep 1; curl http://$website;'

Execute local shell script using docker run interactive

Can I execute a local shell script within a docker container using docker run -it?
Here is what I can do:
$ docker run -it 5ee0b7440be5
bash-4.2# echo "Hello"
Hello
bash-4.2# exit
exit
I have a shell script on my local machine
hello.sh:
echo "Hello"
I would like to execute the local shell script within the container and read the value returned:
$ docker run -it 5e3337440be5 #Some way of passing a reference to hello.sh to the container.
Hello
A specific design goal of Docker is that you can't. A container can't access the host filesystem at all, except to the extent that an administrator explicitly mounts parts of the filesystem into the container. (See @tentative's answer for a way to do this for your use case.)
In most cases this means you need to COPY all of the scripts and support tools into your image. You can create a container running any command you want, and one typical approach is to set the image's CMD to do "the normal thing the container will normally do" (like run a Web server) but to allow running the container with a different command (an admin task, a background worker, ...).
# Dockerfile
FROM alpine
...
COPY hello.sh /usr/local/bin
...
EXPOSE 80
CMD httpd -f -h /var/www
docker build -t my/image .
docker run -d -p 8000:80 --name web my/image
docker run --rm --name hello my/image \
hello.sh
In normal operation you should not need docker exec, though it's really useful for debugging. If you are in a situation where you're really stuck, you need more diagnostic tools to understand how to reproduce a situation, and you don't have a choice but to look inside the running container, you can also docker cp the script or tool into the container before you docker exec there. If you do this, remember that the image also needs to contain any dependencies for the tool (interpreters like Python or GNU Bash, C shared libraries), and that any docker cp'd files will be lost when the container exits.
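A sketch of that debugging-only pattern, reusing the web container started above (the script name here is just an example):
docker cp ./diagnose.sh web:/tmp/diagnose.sh
docker exec web /tmp/diagnose.sh
# anything copied this way is gone once the container is removed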
You can use a bind-mount to mount a local file into the container and execute it. When you do that, however, be aware that you'll need to provide the container process with write/execute access to the folder or specific script you want to run. Depending on your objective, using Docker for this purpose may not be the best idea.
See @David Maze's answer for reasons why. However, here's how you can do it:
Assuming you're on a Unix based system and the hello.sh script is in your current directory, you can mount that single script to the container with -v $(pwd)/hello.sh:/home/hello.sh.
This command will mount the file into your container, set the working directory to the folder where you mounted it, and start a shell:
docker run -it -v $(pwd)/hello.sh:/home/hello.sh --workdir /home ubuntu:20.04 /bin/sh
root@987eb876b:/home# ./hello.sh
Hello World!
This command will run that script directly and save the output into the variable output:
output=$(docker run -it -v $(pwd)/hello.sh:/home/hello.sh ubuntu:20.04 /home/hello.sh)
echo $output
Hello World!
References for more information:
https://docs.docker.com/storage/bind-mounts/#start-a-container-with-a-bind-mount
https://docs.docker.com/storage/bind-mounts/#use-a-read-only-bind-mount

Docker run uses host PATH when chaining commands

I have written an image that bundles utils to run commands using several CLIs. I want to run this as an executable as follows:
docker run my_image cli command
Where cli is my custom CLI and command is a command for that CLI.
When I build my image I have the following instruction in the Dockerfile:
ENV PATH="/cli/scripts:${PATH}"
The above works if I do not chain commands to the container. If I chain commands it stops working:
docker run my_image cli command && cli anothercommand
Command 'cli' not found, but can be installed with...
Where the first command works and the other fails.
So the logical conclusion is that cli is missing from PATH. I tried to verify that with:
docker run my_image printenv PATH
This actually outputs the container's PATH, and everything looks alright. So I tried to chain this command too:
docker run my_image printenv PATH && printenv PATH
And sure enough, this outputs first the container's PATH and then the PATH of my system.
What is the reason for this? How do I work around it?
When you type a command into your shell, your local shell processes it first before any command gets run. It sees (reformatted)
docker run my_image cli command \
&& \
cli anothercommand
That is, your host's shell picks up the &&, so the host first runs docker run and then runs cli anothercommand (if the container exited successfully).
You can tell the container to run a shell, and then the container shell will handle things like command chaining, redirections, and environment variables:
docker run my_image sh -c 'cli command && cli anothercommand'
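A quick way to see the difference for yourself, assuming any small image such as alpine is available:
docker run --rm alpine echo hello && echo world
# "world" here is printed by your host shell, after the container exits
docker run --rm alpine sh -c 'echo hello && echo world'
# both echoes run inside the container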
If this is more than occasional use, also consider writing this into a shell script
#!/bin/sh
set -e
cli command
cli anothercommand
COPY the script into your Docker image, and then you can docker run my_image cli_commands.sh or some such.
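The corresponding Dockerfile additions might look like this (the script name is just an example; /cli/scripts is already on PATH via the ENV instruction above):
COPY cli_commands.sh /cli/scripts/
RUN chmod +x /cli/scripts/cli_commands.sh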

Update PATH in Centos Docker image (alternative to Dockerfile ENV)

I'm provisioning a Docker CentOS image with Packer and using bash scripts instead of a Dockerfile to configure the image (this seems to be the Packer way). What I can't seem to figure out is how to update the PATH variable so that my custom binaries can be executed like this:
docker run -i -t <container> my_binary
I have tried putting an .sh file in the /etc/profile.d/ folder and also writing directly to /etc/environment, but none of that seems to take effect.
I'm suspecting it has something to do with what shell Docker uses when executing commands in a disposable container. I thought it was the Bourne shell, but as mentioned earlier, neither the /etc/profile.d/ nor the /etc/environment approach worked.
UPDATE:
As I understand now, it is not possible to change environment variables in a running container due to reasons explained in @tgogos' answer. However, I don't believe this is an issue in my case, since after Packer is done provisioning the image, it commits it and uploads it to Docker Hub. A more accurate example would be as follows:
$ docker run -itd --name test centos:6
$ docker exec -it test /bin/bash
[root@006a9c3195b6 /]# echo 'echo SUCCESS' > /root/test.sh
[root@006a9c3195b6 /]# chmod +x /root/test.sh
[root@006a9c3195b6 /]# echo 'export PATH=/root:$PATH' > /etc/profile.d/my_settings.sh
[root@006a9c3195b6 /]# echo 'PATH=/root:$PATH' > /etc/environment
[root@006a9c3195b6 /]# exit
$ docker commit test test-image:1
$ docker exec -it test-image:1 test.sh
Expecting to see SUCCESS printed but getting
OCI runtime exec failed: exec failed: container_linux.go:296: starting container process caused "exec: \"test.sh\": executable file not found in $PATH": unknown
UPDATE 2
I have updated PATH in ~/.bashrc, which lets me execute the following:
$ docker run -it test-image:1 /bin/bash
[root@8f821c7b9b82 /]# test.sh
SUCCESS
However running docker run -it test-image:1 test.sh still results in
docker: Error response from daemon: OCI runtime create failed: container_linux.go:296: ...
I can confirm that my image CMD is set to "/bin/bash". So can someone explain why running docker run -it test-image:1 test.sh doesn't source ~/.bashrc?
A few good points are mentioned at:
How to set an environment variable in a running docker container (also check the link to the relevant github issue).
and Docker - Updating Environment Variables of a Container
where @BMitch mentions:
Destroy your container and start a new one up with the new environment variable using docker run -e .... It's identical to changing an environment variable on a running process, you stop it and restart with a new value passed in.
and in the comments section, he adds:
Docker doesn't provide a way to modify an environment variable in a running container because the OS doesn't provide a way to modify an environment variable in a running process. You need to destroy and recreate.
update: (see the comments section)
You can use
docker commit --change "ENV PATH=your_new_path_here" test test-image:1
/etc/profile is only read by bash when invoked by a login shell.
For more information about which files are read by bash on startup see this article.
EDIT: If you change the last line in your example to
docker exec -it test bash -lc test.sh
it works as you expect.
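To see why the -l flag matters, compare the following against the committed image from the question (a sketch; on CentOS a login shell sources /etc/profile, which in turn sources /etc/profile.d/*.sh):
docker run --rm test-image:1 bash -lc 'echo $PATH'
# includes /root, because the login shell picked up /etc/profile.d/my_settings.sh
docker run --rm test-image:1 bash -c 'echo $PATH'
# does not include /root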

Error "The input device is not a TTY"

I am running the following command from my Jenkinsfile. However, I get the error "The input device is not a TTY".
docker run -v $PWD:/foobar -it cloudfoundry/cflinuxfs2 /foobar/script.sh
Is there a way to run the script from the Jenkinsfile without doing interactive mode?
I basically have a file called script.sh that I would like to run inside the Docker container.
Remove the -it from your cli to make it non-interactive and remove the TTY. If you don't need either, e.g. running your command inside of a Jenkins or cron script, you should do this.
Or you can change it to -i if you have input piped into the docker command that doesn't come from a TTY. If you have something like xyz | docker ... or docker ... <input in your command line, do this.
Or you can change it to -t if you want TTY support but don't have it available on the input device. Do this for apps that check for a TTY to enable color formatting of the output in your logs, or for when you later attach to the container with a proper terminal.
Or if you need an interactive terminal and aren't running in a terminal on Linux or MacOS, use a different command line interface. PowerShell is reported to include this support on Windows.
What is a TTY? It's a terminal interface that supports escape sequences, moving the cursor around, etc, that comes from the old days of dumb terminals attached to mainframes. Today it is provided by the Linux command terminals and ssh interfaces. See the wikipedia article for more details.
To see the difference of running a container with and without a TTY, run a container without one: docker run --rm -i ubuntu bash. From inside that container, install vim with apt-get update; apt-get install vim. Note the lack of a prompt. When running vim against a file, try to move the cursor around within the file.
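Spelled out, that experiment looks like this:
docker run --rm -i ubuntu bash
# inside the container (note that no prompt appears):
apt-get update; apt-get install vim
vim /etc/hosts
# without a TTY, cursor movement and screen redrawing in vim misbehave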
For docker run, DON'T USE the -it flag
(as BMitch said)
And it's not exactly what you are asking, but would be also useful for others:
For docker-compose exec, use the -T flag!
The -T flag would help people who are using docker-compose exec! (It disables pseudo-tty allocation.)
For example:
docker-compose -f /srv/backend_bigdata/local.yml exec -T postgres backup
or
docker-compose exec -T mysql mysql -uuser_name -ppassword database_name < dir/to/db_backup.sql
For those who struggle with this error and git bash on Windows, just use PowerShell where -it works perfectly.
If you are using git bash on Windows, you just need to put
winpty
before your 'docker line':
winpty docker exec -it some_container bash
In order for docker to allocate a TTY (the -t option) you already need to be in a TTY when docker run is called. Jenkins does not execute its jobs in a TTY.
Having said that, the script you are running within Jenkins you may also want to run locally. In that case it can be really convenient to have a TTY allocated so you can send signals like ctrl+c when running it locally.
To fix this make your script optionally use the -t option, like so:
test -t 1 && USE_TTY="-t"
docker run ${USE_TTY} ...
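Applied to the command from the question at the top of this thread, that might look like:
USE_TTY=""
test -t 1 && USE_TTY="-t"
docker run -v $PWD:/foobar ${USE_TTY} cloudfoundry/cflinuxfs2 /foobar/script.sh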
When using git bash:
1) I execute the command:
docker exec -it 726fe4999627 /bin/bash
and I get the error:
the input device is not a TTY. If you are using mintty, try prefixing the command with 'winpty'
2) Then I execute the command:
winpty docker exec -it 726fe4999627 /bin/bash
and I get another error:
OCI runtime exec failed: exec failed: container_linux.go:344: starting container process caused "exec: \"D:/Git/usr/bin/
bash.exe\": stat D:/Git/usr/bin/bash.exe: no such file or directory": unknown
3) Third, I execute:
winpty docker exec -it 726fe4999627 bash
and it works.
When using PowerShell, everything worked well.
Using docker-compose exec -T fixed the problem for me via Jenkins
docker-compose exec -T containerName php script.php
Same case here. I am running the following command through a .sh script (bash) and a Python .py file.
However, I get the same error: "The input device is not a TTY".
In my case, I'm trying to take a dump from a running container of my "production" env, with authentication and some arguments passed in,
and then grab the .bak output file from my mssql database container.
Remove -it from the command. If you want to keep it interactive then keep -i.
You can check my .sh file and the long command taking the dump.
If using Windows, try with cmd; for me it works. Check that Docker is started.
My Jenkins pipeline step shown below failed with the same error.
steps {
echo 'Building ...'
sh 'sh ./Tools/build.sh'
}
In my "build.sh" script file "docker run" command output this error when it was executed by Jenkins job. However it was working OK when the script ran in the shell terminal.The error happened because of -t option passed to docker run command that as I know tries to allocate terminal and fails if there is no terminal to allocate.
In my case I have changed the script to pass -t option only if a terminal could be detected. Here is the code after changes :
DOCKER_RUN_OPTIONS="-i --rm"
# Only allocate tty if we detect one
if [ -t 0 ] && [ -t 1 ]; then
DOCKER_RUN_OPTIONS="$DOCKER_RUN_OPTIONS -t"
fi
docker run $DOCKER_RUN_OPTIONS --name my-container-name my-image-tag
I know this is not directly answering the question at hand, but this may help anyone who comes upon this question who is using WSL, running Docker for Windows, and cmder or ConEmu.
The trick is not to use the Docker that is installed on Windows at /mnt/c/Program Files/Docker/Docker/resources/bin/docker.exe, but rather to install the Ubuntu/Linux Docker client. It's worth pointing out that you can't run Docker itself from within WSL, but you can connect to Docker for Windows from the Linux Docker client.
Install Docker on Linux
sudo apt-get install apt-transport-https ca-certificates curl software-properties-common
curl -fsSL https://download.docker.com/linux/ubuntu/gpg | sudo apt-key add -
sudo add-apt-repository "deb [arch=amd64] https://download.docker.com/linux/ubuntu $(lsb_release -cs) stable"
sudo apt-get update
sudo apt-get install docker-ce
Connect to Docker for Windows on port 2375, which needs to be enabled from the settings in Docker for Windows.
docker -H localhost:2375 run -it -v /mnt/c/code:/var/app -w "/var/app" centos:7
Or set the DOCKER_HOST variable, which will allow you to omit the -H switch:
export DOCKER_HOST=tcp://localhost:2375
You should now be able to connect interactively with a tty terminal session.
In Jenkins, I'm using docker-compose exec -T, e.g.:
docker-compose exec -T app php artisan migrate
winpty works as long as you don't specify volumes to be mounted such as .:/mountpoint or ${pwd}:/mountpoint
The best workaround I have found is to use the git-bash plugin inside Visual Studio Code and use the terminal to start and stop containers or docker-compose.
For those using Pyinvoke see this documentation which I'll syndicate here in case the link dies:
99% of the time, adding pty=True to your run call will make things work as you were expecting. Read on for why this is (and why pty=True is not the default).
Command-line programs often change behavior depending on whether a controlling terminal is present; a common example is the use or disuse of colored output. When the recipient of your output is a human at a terminal, you may want to use color, tailor line length to match terminal width, etc.
Conversely, when your output is being sent to another program (shell pipe, CI server, file, etc) color escape codes and other terminal-specific behaviors can result in unwanted garbage.
Invoke’s use cases span both of the above - sometimes you only want data displayed directly, sometimes you only want to capture it as a string; often you want both. Because of this, there is no “correct” default behavior re: use of a pseudo-terminal - some large chunk of use cases will be inconvenienced either way.
For use cases which don’t care, direct invocation without a pseudo-terminal is faster & cleaner, so it is the default.
Instead of using -it use --tty
So your docker run should look like this:
docker run -v $PWD:/foobar --tty cloudfoundry/cflinuxfs2 /foobar/script.sh
Use only the -i flag rather than the -it flag, which can help you see what is going on inside the container.
docker exec -i $USER bash <<EOF
apt install nano -y
EOF
You might see a warning, but it shows you the output on the terminal from inside the container.
