Docker postgres client and commands - docker

I am building a Dockerfile that pulls a PostgreSQL client. What I would like to do is connect to the db and execute some SQL commands (all defined in the Dockerfile, with no interaction). If I execute something like the command below, I am able to connect:
docker run -it --rm jbergknoff/postgresql-client postgresql://username:password@10.1.0.173:5432/db
but I do not know how to replicate this inside the Dockerfile:
FROM jbergknoff/postgresql-client
RUN postgresql://username:password@10.1.0.173:5432/db // error
where the RUN command gives me error.

The reason is very clear: the Postgres service is not running at this stage, i.e. during build time.
RUN postgresql://username:password@10.1.0.173:5432/db // error
So this will not work the way you are expecting.
This is a common problem with such a client image or a custom one. I suggest using the official image instead, so you don't have to bother with a custom entrypoint script; you just need to copy your SQL script to /docker-entrypoint-initdb.d at build time, or mount it to that path at runtime.
FROM postgres
COPY my_initdb.sql /docker-entrypoint-initdb.d/
That's it: the official image will take care of this script and run it once the Postgres service is up and running.
If you want to make the client-image approach work, you must place such a command in an entrypoint script, at a point where the Postgres service is running and able to handle connections.
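A minimal sketch of such an entrypoint, reusing the host and credentials from above (the /commands.sql path is a placeholder for a file you copy into the image, and pg_isready/psql are assumed to be available in the client image):
#!/bin/sh
set -e
# wait until the server accepts connections
until pg_isready -h 10.1.0.173 -p 5432; do
  sleep 1
done
# run the SQL commands non-interactively
psql postgresql://username:password@10.1.0.173:5432/db -f /commands.sql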
Initialization scripts
If you would like to do additional initialization in an image derived
from this one, add one or more *.sql, *.sql.gz, or *.sh scripts under
/docker-entrypoint-initdb.d (creating the directory if necessary).
After the entrypoint calls initdb to create the default postgres user
and database, it will run any *.sql files, run any executable *.sh
scripts, and source any non-executable *.sh scripts found in that
directory to do further initialization before starting the service.

How to make docker download external file references at initialisation

New user of Docker here; I tried to figure it out myself with Google but...
I have a Node.js/Express app with this structure:
|- modules
|- lib
|- node_modules : nodejs modules
|- server.js
|- docker-compose.yml
|- dockerfile
In the lib directory I have reference files (for example the GeoIP2 database: https://dev.maxmind.com/geoip/geoip2/geolite2/) that I don't want to include in my Docker image, as it would be huge and quickly outdated.
Instead, I want my Docker image to download the latest version of the file from a URL every time it runs, so I don't have to build another image every time I need to update this reference file.
I tried adding a curl command in the Dockerfile like this:
CMD curl https://test.com/huge_file.mmdb --output lib/huge_file.mmdb
Or executing a shell script, but either it has no effect, or it adds the file to the image, which I don't want.
Any idea how I can ask Docker to download a file from the internet and add it to the container at startup?
Thanks!
The first thing you need to do for this case is to mount some external storage into your container.
docker run -v $PWD/huge_files:/app/lib ...
This causes whatever is in the huge_files directory locally to be visible inside the container, replacing what was in the lib directory before.
Once you've done this, the easiest thing to do is to manage this whole process from the host. Download the files at your convenience; possibly have a shell script to update the files and start the container. The advantage of doing this is that you control exactly when it happens; the downside is that it's a manual step.
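A minimal sketch of such a host-side script, reusing the URL and paths from above (the image name is a placeholder):
#!/bin/sh
set -e
# refresh the reference file, then start the container with it mounted
mkdir -p huge_files
curl -fSL https://test.com/huge_file.mmdb --output huge_files/huge_file.mmdb
docker run -v "$PWD/huge_files:/app/lib" my-express-app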
If you want the container to do it on startup, you can write an entrypoint script to do this. A Docker ENTRYPOINT is a program that gets run as the main container process; it gets run instead of the CMD, but it gets passed the CMD as arguments. Frequently an entrypoint will be a shell script that ends by replacing itself with the CMD, exec "$@". You can use this for any first-time setup your container needs:
#!/bin/sh
# Download the file if it doesn't exist
if [ ! -f lib/huge_file.mmdb ]; then
curl https://test.com/huge_file.mmdb --output lib/huge_file.mmdb
fi
# Switch to the container command
exec "$#"
The downside to this approach is that it may perform a large download outside the user's control and, if nothing is mounted into the container, it may repeat the same download on every container startup.
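For completeness, wiring that script into the image could look roughly like this (the entrypoint.sh name, the base image, and the node server.js CMD are assumptions about your app):
FROM node:18
WORKDIR /app
COPY . .
COPY entrypoint.sh /entrypoint.sh
RUN chmod +x /entrypoint.sh
ENTRYPOINT ["/entrypoint.sh"]
# the CMD is passed to the entrypoint as "$@" and exec'd after the download check
CMD ["node", "server.js"]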

Docker - run command on host during build

My query is similar to this Execute Command on host during docker build but I need my container to be running when I execute the command.
Background - I'm trying to create a base image for the database part of an application, using the mysql:8.0 image. The installation instructions for the product require me to run a DDL script to create the database (done, by copying the .sql file to the entrypoint directory), but the second step involves running a Java-based application which reads various config files to insert the required data into the running database. I would like this second step to be captured in the Dockerfile somehow, so I can then build a new base image containing the tables and the initial data.
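That first step looks roughly like this (the script name is a placeholder; the official mysql image, like postgres, runs *.sql files from /docker-entrypoint-initdb.d when the data directory is first initialized):
FROM mysql:8.0
# hypothetical DDL script; executed on first initialization of the database
COPY create_schema.sql /docker-entrypoint-initdb.d/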
Things I've thought of:
Install Java and copy the quite large config tool to the container and exec the appropriate command, but I want to avoid installing Java into the database container, and certainly into the subsequent image, if I can.
I could run the config tool on the host manually and connect to the running container, but my understanding is that this would only apply to the running container - I couldn't get this into a new image? It needs to be done from the Dockerfile for docker build to work.
I suspect docker just isn't designed for this.

docker ubuntu sourceing after starting image

I built myself an image for ROS. I run it while mounting my original home from the host, plus some tricks to get graphics working as well. After starting the shell inside Docker I always need to execute two source commands. One of the files to be sourced is actually inside the container, but the other resides in my home, which only gets mounted on starting the container. I would like to have these two files sourced automatically.
I tried adding
RUN bash -c "source /opt/ros/indigo/setup.bash"
to the image file, but this did not actually source it. Using CMD instead of RUN didn't drop me into the container's shell (I assume it finished executing source and then exited?). I don't even have an idea how to source the file that is only available after startup. What would I need to do?
TL;DR: you need to perform this step as part of your CMD or ENTRYPOINT, and for something like a source command, you need a step after that in the shell to run your app, or whatever shell you'd like. If you just want a bash shell as your command, then put your source command inside something like your .bashrc file. Or you can run something like:
bash -c "source /opt/ros/indigo/setup.bash && bash"
as your command.
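If you'd rather not type that every time, a sketch of an entrypoint script that does the same thing could look like this (the path of the home-directory file is only a guess, since that file only exists after your home is mounted):
#!/bin/bash
# source the in-image setup file
source /opt/ros/indigo/setup.bash
# source the file from the mounted home directory, if present
if [ -f "$HOME/catkin_ws/devel/setup.bash" ]; then
  source "$HOME/catkin_ws/devel/setup.bash"
fi
# run whatever command was passed to the container (e.g. bash)
exec "$@"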
One of the files to be sourced are actually inside the container, but the other resides in my home, which only gets mounted on starting the container.
...
I tried adding ... to the image file
Images are built using temporary containers that only see your Dockerfile instructions and the context sent with that to run the build. Containers use that built image and all of your configuration, like volumes, to run your application. There's a hard divider between those two steps, image build and container run, and your volumes are not available during that image build step.
Each of those RUN steps being performed for the image build are done in a temporary container that only stores the output of the filesystem when it's finished. Changes to your environment, a cd into another directory, spawned processes or services in the background, or anything else not written to the filesystem when the command spawned by RUN exits, will be lost. This is one reason you will see commands chained together in a single long RUN command, and it's why you have ENV and WORKDIR commands in the Dockerfile.
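A small illustration of that last point (the directory and script names are made up):
# this works: the cd and the script run in the same shell
RUN cd /opt/ros && ./do_something.sh
# this does not: the cd is lost when the first RUN's container exits
RUN cd /opt/ros
RUN ./do_something.sh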

How to execute docker commands after a process has started

I wrote a Dockerfile for a service (I have a CMD pointing to a script that starts the process), but I cannot run any other commands after the process has started. I tried using '&' to run the process in the background so that the other commands would run after the process has started, but that isn't working. Any idea on how to achieve this?
For example, consider I started a database server and wanted to run some scripts only after the database process has started, how do I do that?
Edit 1:
My specific use case is that I am running a RabbitMQ server as a service, and once the service starts in a container I want to create a new user, make him an administrator, and delete the default guest user. I can do it manually by logging into the Docker container, but I wanted to automate it by appending these commands to the shell script that starts the RabbitMQ service, and that's not working.
Any help is appreciated!
Regards
Specifically around your problem with RabbitMQ - you can create a rabbitmq.config file and copy it over when creating the Docker image.
In that file you can specify both a default_user and a default_pass that will be created when the database is set up from scratch; see https://www.rabbitmq.com/configure.html
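A rough sketch (the exact file name and format depend on your RabbitMQ version; newer versions use the ini-style rabbitmq.conf, and the values here are placeholders):
# rabbitmq.conf
default_user = admin
default_pass = change-me
# and in the Dockerfile:
FROM rabbitmq:3-management
COPY rabbitmq.conf /etc/rabbitmq/rabbitmq.conf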
As for the general problem - you can change the entrypoint to a script that runs whatever you need and then the service you want, instead of the service's own run script.
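A sketch of such an entrypoint for the RabbitMQ case, with the user name and password as placeholders and no error handling:
#!/bin/sh
# start the broker in the background
rabbitmq-server &
# wait until it answers
until rabbitmqctl status >/dev/null 2>&1; do
  sleep 1
done
# create the new administrator and remove the default guest user
rabbitmqctl add_user admin change-me
rabbitmqctl set_user_tags admin administrator
rabbitmqctl delete_user guest
# keep the broker as the container's long-running process
wait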
I partially understood your question. Based on what I gathered from it, I would recommend adding a COPY command to the Dockerfile to copy the script you want to run into the image. Once you build the image and run the container, start the db service, then exec into the container and run the script manually.
If you have a CMD command in the Dockerfile, it will be overridden by any command you specify at execution time. So I don't think you have any other option for running the script, unless you leave the CMD out of the Dockerfile.

Switching Between Root and Non-Root Users in Docker

So I'm trying to deploy a Django app on Minikube. But in one of the containers, the image requires me to be root for certain tasks, then switch to the postgres user to create some databases, and then switch back to root to run more commands.
I know I can use the USER functionality in Docker, but that messes up certain tasks depending on which user I'm in. I have also tried running su - postgres, but that returns an error saying the command has to be run from a terminal.
Any thoughts on how to fix this?
The typical tool for this is gosu. When included in your container, you'd run gosu postgres $cmd where the command is whatever you need to run. If it's the only command you need to have running in the container at the end of your entrypoint script, then you'd exec gosu postgres $cmd. The gosu page includes details of why you'd use their tool, the main reasons being TTY and signal handling. Note that the end of their readme also lists a few other alternatives which are worth considering.
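A sketch of how that looks in an entrypoint script (the root setup script and the database name are placeholders):
#!/bin/sh
set -e
# work that has to run as root
/usr/local/bin/root-setup.sh
# a one-off command as the postgres user
gosu postgres createdb myapp
# hand off the main container command to the postgres user
exec gosu postgres "$@"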
If, say, the container is based on the official Postgres image, you can try creating a script for all your root tasks and COPY that script to the container's /docker-entrypoint-initdb.d folder. Any .sql and .sh scripts in this folder will be executed AFTER the entrypoint calls initdb, with gosu postgres, as seen in the entrypoint script.
If you need to sandwich the initdb between two sets of root tasks, then you will have to carve your own entrypoint script.
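One way to carve that is a wrapper that does the root work first and then hands off to the stock entrypoint, which in recent official images lives at /usr/local/bin/docker-entrypoint.sh (the pre-task script here is a placeholder, and root tasks that must happen after initdb would still need another mechanism):
#!/bin/bash
set -e
# root-level work before the database is initialized
/usr/local/bin/pre-initdb-root-tasks.sh
# the stock entrypoint runs initdb, the /docker-entrypoint-initdb.d scripts,
# and then starts postgres with the arguments passed along
exec /usr/local/bin/docker-entrypoint.sh "$@"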
