Can't replace pg_hba.conf when creating my custom container - docker

I need the following:
1. Create a custom image based on the postgres:9.6 image
2. Create a custom container from my custom image
3. Replace the original pg_hba.conf file with my custom file
The difference between these two files is only this line:
In the original pg_hba.conf file:
host all all 127.0.0.1/0 trust
In my pg_hba.conf file:
host all all 0.0.0.0/0 trust
Here is my Dockerfile:
FROM postgres:9.6.24
ENV POSTGRES_HOST_AUTH_METHOD=trust
# Create folder Downloads in Docker
WORKDIR /Downloads
COPY /plv8_v.2.x ./Downloads
RUN dpkg -i Downloads/plv8-96_2.1.0-2_amd64.deb
RUN dpkg -i Downloads/v8_3.14.5.10-26_amd64.deb
COPY /postgresql /usr/share/postgresql/9.6/extension/
# Overwrite file pg_hba.conf to avoid password prompt
COPY pg_hba.conf /var/lib/postgresql/data/
I create my custom image with this command:
docker build -t my_image .
The image is created successfully. Nice.
Then I create my custom container from my custom image:
docker run --name my_container --restart=always -d -p 127.0.0.1:5432:5432 my_image
But the container does not start.
If I comment out this line
COPY pg_hba.conf /var/lib/postgresql/data/
the problem goes away and the container starts successfully.
I want to avoid PostgreSQL's password prompt; that's why I want to replace pg_hba.conf.
So how can I do this?

I found a solution.
There is no need to copy pg_hba.conf into the Docker container at all; it's enough to use an environment variable:
ENV POSTGRES_HOST_AUTH_METHOD=trust
(The reason the COPY broke the container is most likely that the official postgres entrypoint only runs initdb when the data directory is empty, so pre-copying a file into /var/lib/postgresql/data/ prevents initialization.)
So here is the resulting Dockerfile:
FROM postgres:9.6.24
ENV POSTGRES_HOST_AUTH_METHOD=trust
# Create folder Downloads in Docker
WORKDIR /Downloads
COPY /plv8_v.2.x ./Downloads
RUN dpkg -i Downloads/plv8-96_2.1.0-2_amd64.deb
RUN dpkg -i Downloads/v8_3.14.5.10-26_amd64.deb
COPY /postgresql /usr/share/postgresql/9.6/extension/
And now it works. It no longer asks me for a PostgreSQL password.
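For cases where trust authentication isn't enough and you really need your own pg_hba.conf, a workable pattern is to let initdb run first and install the file from an init script, since the official postgres image executes scripts from /docker-entrypoint-initdb.d/ after initializing the data directory. A minimal sketch (the script name 01-replace-hba.sh is my own invention):
FROM postgres:9.6.24
COPY pg_hba.conf /tmp/pg_hba.conf
COPY 01-replace-hba.sh /docker-entrypoint-initdb.d/
And 01-replace-hba.sh:
#!/bin/bash
# Runs after initdb has created the cluster; $PGDATA is set by the official image.
cp /tmp/pg_hba.conf "$PGDATA/pg_hba.conf"
The entrypoint restarts the real server after the init scripts run, so the replaced pg_hba.conf is what the final server reads.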

Related

Why can some directories in a docker container be mounted to share files out while others cannot

I'm a new learner of Docker. I came across a problem while trying to make my own Docker image.
Here's the thing: I created a new Dockerfile to build my own MySQL image, in which I declared MYSQL_ROOT_PASSWORD and put some init scripts into the container.
Here is my Dockerfile:
FROM mysql:5.7
MAINTAINER CarbonFace<553127022@qq.com>
ENV TZ Asia/Shanghai
ENV MYSQL_ROOT_PASSWORD Carbon#mysqlRoot7
ENV INIT_DATA_DIR /initData/sql
ENV INIT_SQL_FILE_0 privileges.sql
ENV INIT_SQL_FILE_1 carbon_user_sql.sql
ENV INIT_SQL_FILE_2 carbonface_sql.sql
COPY ./my.cnf /etc/mysql/conf.d/
RUN mkdir -p $INIT_DATA_DIR
COPY ./sqlscript/$INIT_SQL_FILE_0 $INIT_DATA_DIR/
COPY ./sqlscript/$INIT_SQL_FILE_1 $INIT_DATA_DIR/
COPY ./sqlscript/$INIT_SQL_FILE_2 $INIT_DATA_DIR/
COPY ./sqlscript/$INIT_SQL_FILE_0 /docker-entrypoint-initdb.d/
COPY ./sqlscript/$INIT_SQL_FILE_1 /docker-entrypoint-initdb.d/
COPY ./sqlscript/$INIT_SQL_FILE_2 /docker-entrypoint-initdb.d/
CMD ["mysqld"]
I'm trying to build a Docker image that contains my own config file, so that when the config directory is mounted the file shows up in the local directory and can be modified.
I'm really confused about what happens when I start my container with this image as in the official description. Here are my commands:
docker run -dp 3306:3306 \
  -v /usr/local/mysql/data:/var/lib/mysql \
  -v /usr/local/mysql/conf:/etc/mysql/conf.d \
  --name mysql mysql:<my built tag>
I'm trying to mount
/usr/local/mysql/conf to /etc/mysql/conf.d in the container, which is documented as the location for custom config files.
I supposed that my custom config file my.cnf, which was copied into the image during docker build, would show up in my local directory /usr/local/mysql/conf,
since I had already copied it into the image, as you can see in my Dockerfile.
But it turns out that the directory is empty, and /etc/mysql/conf.d is also overwritten by the local directory.
Before I run my container, both /usr/local/mysql/conf and /usr/local/mysql/data are completely empty.
OK fine, I've been told that a volume-mounted directory overwrites the files inside the container.
But how can the empty data directory show the data files from inside the container, while the empty conf directory overwrites the conf.d directory in the container?
It makes no sense.
I was very confused and would really appreciate it if someone could explain why this happens.
My OS is macOS Big Sur and I use the latest Docker.
A host-directory bind mount, -v /host/path:/container/path, always hides the contents of the image and replaces it with the host directory. If the host directory is empty, the container directory will be the same empty directory at container startup time.
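You can see the hiding behavior with a quick experiment (a sketch; the empty host directory name is arbitrary):
mkdir empty
# without the mount this prints the image's bundled .cnf files;
# with the empty bind mount it prints nothing
docker run --rm -v "$PWD/empty:/etc/mysql/conf.d" mysql:5.7 ls /etc/mysql/conf.d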
The Docker Hub mysql container has an involved entrypoint script that checks to see if the data directory is empty, and if so, initializes the database; abstracted out:
#!/bin/sh
# (actually hundreds of lines of shell code, with more options)
if [ ! -d /var/lib/mysql/mysql ]; then
  mysql_install_db
  # (...and start a temporary database server and run the
  # /docker-entrypoint-initdb.d scripts)
fi
# then run the main container command
exec "$@"
Beyond that, the mere presence of a volume doesn't cause files to be copied (with one exception, at one specific point in the lifecycle, for named volumes). If you need to copy content from a container to the host, you either need to do it manually with docker cp or have the container code do it.
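For example, you could seed the host config directory from the image once, before starting the real container (a sketch; substitute your actual image tag for mysql:<my built tag>):
docker create --name seed mysql:<my built tag>
docker cp seed:/etc/mysql/conf.d/my.cnf /usr/local/mysql/conf/
docker rm seed
After that, the bind mount of /usr/local/mysql/conf will contain my.cnf, and edits on the host will be visible in the container.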

Get build files to persist on host after docker-compose build is run

I'm trying to run a docker-compose build command with a Dockerfile and a docker-compose.yml file.
Inside the docker-compose.yml file, I'm trying to bind the local folder ./dist on the host machine to the folder /app/dist in the container.
version: '3.8'
services:
  dev:
    build:
      context: .
    volumes:
      - ./dist:/app/dist # I'm expecting files changed or added in the container's /app/dist to be reflected in the host's ./dist folder
Inside the Dockerfile, I build some files with an npm script, and I want them to be available on the host machine once the build is finished. I also touch a new file /app/dist/test.md as a simple test to see if the file ends up on the host machine, but it does not.
FROM node:8.17.0-alpine as example
RUN mkdir /app
WORKDIR /app
COPY . /app
RUN npm install
RUN npm run dist
RUN touch /app/dist/test.md
Is there a way to do this? I also tried using the "long syntax" as mentioned in the Docker Compose v3 documentation: https://docs.docker.com/compose/compose-file/compose-file-v3/
The easiest way to do this is to install Node and run the npm commands directly on the host.
$BREW_OR_APT_GET_OR_YUM_OR_SOMETHING install node
npm install
npm run dist
# done
There's not an easy way to use a Dockerfile to build host content. The Dockerfile can't write out directly to the host filesystem; if you use a volume mount, the host volume hides the container content before anything else happens.
That means, if you want to use this approach, you need to launch a temporary container to get the content out. You can do it with a one-off container, mounting the host directory somewhere other than /app, making the main container command be cp:
sudo docker build -t myimage .
sudo docker run --rm \
-v "$PWD/dist:/out" \
myimage \
cp -a /app/dist /out
Or, if you specifically wanted to use docker cp (which copies directories recursively by default):
sudo docker build -t myimage .
sudo docker create --name to-copy myimage
sudo docker cp to-copy:/app/dist ./dist
sudo docker rm to-copy
Note that either of these sequences is more complex than just installing local Node via a package manager, and requires administrator permissions (you could use the same technique to overwrite any host file, including /etc/shadow with its encrypted passwords).
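If you'd rather stay inside Compose, the same copy-out trick can be written as the service's command, mounting the host directory somewhere other than /app (a sketch based on the question's layout):
version: '3.8'
services:
  dev:
    build:
      context: .
    volumes:
      - ./dist:/out
    command: cp -a /app/dist/. /out/
Then docker-compose run --rm dev performs the copy and leaves the build output in the host's ./dist.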

Cannot find files generated in the Docker container on the localhost

Let's consider such a directory tree. (Note: a directory name ends with \)
root\
|
-- some stuff
|
-- application\
| |
| -- app_stuff
| |
| -- out\
| |
| -- main.cpp
|
-- some stuff
I'm trying to build this app via docker.
The Dockerfile looks like:
FROM emscripten/emsdk:latest
RUN apt-get -q update
RUN mkdir /app
WORKDIR /app
COPY . /app
RUN em++ application/main.cpp -o application/out/app.html
RUN pip3 install aiohttp
RUN pip3 install aiohttp_jinja2
RUN pip3 install jinja2
RUN ls application/out
The docker-compose looks like:
version: '3.8'
services:
  application:
    build: .
    volumes:
      - ./application/out:/app/application/out
    command: python3 application/entry.py
    ports:
      - "8080:8080"
As you may notice in the Dockerfile (RUN em++ application/main.cpp -o application/out/app.html), while Docker processes it, new files are generated into the out directory. However, once it's done I can't find those files.
Note: these files do appear in application\out inside the container.
...
Step 10/10 : RUN ls application/out
---> Running in 603f6b99f4b0
app.html
app.js
app.wasm
...
Where have I made a mistake?
The Dockerfile gives instructions on how to build a docker image, and not on what happens in the live container.
If you mount a volume, either via docker-compose or via a docker run command, either way, the volume will only be mounted once the container is created.
So what happens is:
first Docker creates the image, executing the commands in the Dockerfile, and stores it as an image
then Docker creates a container using the stored image
then Docker mounts the volumes you defined in the docker-compose.yml file (at this point, anything already present in the target directory is hidden behind the contents of the host directory)
then the entrypoint or cmd command is run (so here that would be python3 application/entry.py)
So if you need the output files to end up in your host directory, you need to create them, or copy them, in the entrypoint script.
You can create a file called myscript.sh with the following:
#!/bin/bash
em++ /app/application/main.cpp -o /app/application/out/app.html
python3 /app/application/entry.py
In your Dockerfile you remove the line RUN em++ application/main.cpp -o application/out/app.html and replace it with
COPY ./myscript.sh /
RUN chmod +x /myscript.sh
ENTRYPOINT /myscript.sh
(the chmod makes sure the copied script is executable)
and you remove the line command: python3 application/entry.py from your docker-compose.yml file.
You can use CMD rather than ENTRYPOINT if you prefer; that's just a matter of personal preference.
A docker-compose volume can link a directory on the host to a directory inside a container. You are overriding the /app/application/out directory inside the container with a volume to the host's ./application/out, effectively hiding any contents of /app/application/out that originated in your built image.
Given the context, I presume your host's ./application/out directory is empty, so you are covering the container's /app/application/out directory with nothing. You can test this by removing the volumes tag and seeing whether the application is able to find the files under /app/application/out afterwards.
Unrelated to your issue: take into consideration that your apt-get update command caches the Debian remote repository lists in your built image; this adds wasted space to the final image. See this post about deleting the cached lists.
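The usual pattern is to clean the lists in the same RUN layer that created them (a sketch; <your packages> stands for whatever you actually install):
RUN apt-get -q update \
 && apt-get install -y --no-install-recommends <your packages> \
 && rm -rf /var/lib/apt/lists/*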

Trying to edit a file inside a docker container but it returns to the default file with no changes

I was getting an error like the following:
Blocked host: xx.xxx.xxx To allow requests to xx.xxx.xxx, add the following to your environment configuration: config.hosts << "xx.xxx.xxx"
As mentioned in a lot of posts, I have to edit the config/environments/development.rb file from inside the container and add the line config.hosts << "xx.xxx.xxx", but when I edit the file with vim and restart the server, the default file comes back with no changes.
You can always manage the files outside of Docker.
Let's say your container name is: uitest
Make a folder for it on your computer.
mkdir -p ~/docker/uitest
cd ~/docker/uitest
Then copy the files that you need to modify there.
docker cp uitest:/config/environments/development.rb .
docker cp uitest:/etc/hosts .
Make the wanted file changes on your local machine.
Then you can either copy the files back in the container.
docker cp development.rb uitest:/config/environments/development.rb
docker cp hosts uitest:/etc/hosts
or, if practical, remove the old container
docker rm uitest
and recreate it with the volumes:
docker run -dit --name uitest \
  -v "$PWD/development.rb:/config/environments/development.rb" \
  -v "$PWD/hosts:/etc/hosts" \
  uitest/image
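To confirm the edit took effect in the recreated container (a sketch; paths as in the question):
docker exec uitest grep -n "config.hosts" /config/environments/development.rb
# should print the config.hosts << "xx.xxx.xxx" line you added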

Docker: No such file or directory

[root@mymachine redisc]# ls
app.py Dockerfile redis.conf redis-server requirements.txt
[root@mymachine redisc]# cat Dockerfile
# Use an official Python runtime as a parent image
FROM python:2.7-slim
#FROM alpine:3.7
# Define mountable directories.
VOLUME ["/x/build/"]
# Set the working directory to /app
WORKDIR /app
# Copy the current directory contents into the container at /app
ADD . /app
# Make port 6379 available to the world outside this container
EXPOSE 6379
# Define environment variable
ENV NAME Redis
# Run redis-server when the container launches
CMD ["/app/redis-server", "/app/redis_rtp.conf"]
I've built the image as myredis:
[root@mymachine redisc]# docker run -p 6379:6379 myredis
*** FATAL CONFIG FILE ERROR ***
Reading the configuration file, at line 104
>>> 'logfile /x/build/redis/logs/redis_6379_container.log'
Can't open the log file: No such file or directory
The above gave me an error, so I tried supplying the path:
[root@mymachine redisc]# docker run -p 6379:6379 -v /x/build/redis/log myredis
It gave me the same error, but the dir exists:
[root@mymachine9 redisc]# ls /x/build/redis/logs/
redis2_6379.log redis_6379.log
Why isn't the dir accessible from the container? How can I fix it?
Thank you
VOLUME ["/x/build/"] means that you want to mount /x/build/ dir of a container into the host OS.
In contrast, I think you expect that the container mounts /x/build/ of host OS into the container.
That is why I asked [root#mymachine9 redisc]# ls /x/build/redis/logs/ is in the container or host OS and that is why docker returns the error No such file or directory.
Because docker will just have just empty /x/build/ dir.
(if the base image doesn't have the /x/build/, docker will create the dir)
For example,
# Add into your Dockerfile
RUN mkdir -p /testDir && touch /testDir/test && echo "test1234" >> /testDir/test
VOLUME ["/testDir"]
---------
# Run a container
$ docker run --name test image_name
# Check mount position
$ docker inspect test -f {{.Mounts}}
[{volume 09e3cedef5ceeef0cbd944785e0ea629d4c65a20b10d1384bbd50a1c67879845 /var/lib/docker/volumes/09e3cedef5ceeef0cbd944785e0ea629d4c65a20b10d1384bbd50a1c67879845/_data /testDir local true }]
# Move to mount position
$ cd /var/lib/docker/volumes/09e3cedef5ceeef0cbd944785e0ea629d4c65a20b10d1384bbd50a1c67879845/_data
# Check if the content is from testDir of base image.
$ ls
test
$ cat test
test1234
As @fernandezcuesta comments, you can use the bind option for your purpose:
-v /x/build/redis/logs:/x/build/redis/logs
or
--mount type=bind,source=/x/build/redis/logs,target=/x/build/redis/logs
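Applied to the question, the run command would look like this (a sketch, reusing the myredis image name from above):
docker run -p 6379:6379 \
  -v /x/build/redis/logs:/x/build/redis/logs \
  myredis
Now the logfile path from the config resolves inside the container because the host directory is bind-mounted at the same path.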
-----Edit-----
For now there is no way to use the bind option in a Dockerfile, that is, while building an image. Refer to this issue; it explains why Docker doesn't support that.
In short:
bind mounts are linked to the host; since Dockerfiles can be shared, this would break portability (a Dockerfile with your bind mounts won't work on my machine)
bind mounts are more related to run time than to build time.
