Access db as localhost from docker container - docker

I'm pretty new to Docker but am trying to use it to clean up some of my projects. One such project is a fairly simple PHP/MySQL application. I've "docker-ized" the app by adding a docker-compose.yml with db and php services. Here's my docker-compose.yml:
version: '2'
services:
  php:
    build: .
    ports:
      - "80:80"
      - "443:443"
    volumes:
      - ./public_html:/var/www/html
    links:
      - db
  db:
    image: mysql:5.5
    ports:
      - "3306:3306"
    environment:
      MYSQL_USER: root
      MYSQL_PASSWORD:
      MYSQL_ROOT_PASSWORD:
      MYSQL_ALLOW_EMPTY_PASSWORD: 'yes'
    volumes:
      - /c/dockerdata:/var/lib/mysql
This works correctly; however, I have to change all my PHP scripts to use "db" instead of "localhost" when connecting to the MySQL database. I'm adding Docker just to clean up development, so I'm trying to avoid changing the PHP code itself. Is there a way I can configure this so I can use localhost or 127.0.0.1 to connect?

Docker doesn't allow you to modify /etc/hosts at image build time (a known issue), but you can edit it at container startup via the entrypoint.
Create an entrypoint.sh script:
#!/bin/bash
# /etc/hosts is managed by Docker (bind-mounted), so sed -i on it fails;
# edit a copy and copy the result back instead.
cp /etc/hosts /tmp/hosts
sed -e '/localhost/ s/^#*/#/' -i /tmp/hosts # comment out the localhost entry
cp /tmp/hosts /etc/hosts
# add your command here to run the php application
Add execute permissions to entrypoint.sh
chmod +x entrypoint.sh
Add these two lines to your Dockerfile:
ADD entrypoint.sh /entrypoint.sh
ENTRYPOINT /entrypoint.sh
Now do step 2) from my other answer.

You can achieve this in two steps:
1) Add this CMD to your Dockerfile:
CMD sed -e '/localhost/ s/^#*/#/' -i /etc/hosts
2) Replace 'db' with 'localhost' in docker-compose.yml:
    links:
      - db
    db:
      image: mysql:5.5
as
    links:
      - localhost
    localhost:
      image: mysql:5.5
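The sed expression used above can be sanity-checked on any file; it prefixes '#' to every line mentioning localhost, effectively commenting out the container's own localhost entry (the file below is a throwaway demo):

```shell
# Build a fake hosts file and comment out its localhost line,
# exactly what the entrypoint does to /etc/hosts.
printf '127.0.0.1 localhost\n172.17.0.2 db\n' > /tmp/hosts.demo
sed -e '/localhost/ s/^#*/#/' -i /tmp/hosts.demo
cat /tmp/hosts.demo
# the localhost line is now '#127.0.0.1 localhost'; the db line is untouched
```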

Related

Not able to mount Docker Volume for PhpMyAdmin

I've installed MySQL and PhpMyAdmin on Docker.
The MySQL volume mount works perfectly fine, but I also want the container's /var/www/html/libraries and /var/www/html/themes folders to be saved/persisted to my host, so that if I change any file, it stays that way.
This is my docker-compose.yml
version: '3.5'
services:
  mysql:
    container_name: mysql
    image: mysql
    restart: always
    volumes:
      - ./var/lib/mysql:/var/lib/mysql
    environment:
      MYSQL_ROOT_PASSWORD: root
  phpmyadmin:
    container_name: phpmyadmin
    image: phpmyadmin/phpmyadmin:latest
    restart: always
    volumes:
      - ./phpmyadmin/libraries:/var/www/html/libraries # Here's the problem
      - ./phpmyadmin/themes:/var/www/html/themes # Here's the problem
    environment:
      PMA_HOST: mysql
The current problem is that it does create the folders /phpmyadmin/libraries and /phpmyadmin/themes, but they're empty, and the container's directories (/var/www/html/libraries, /var/www/html/themes) also become empty.
I'm very new to Docker, and currently I've no clue :(
Many thanks in advance.
Your problem is that /var/www/html is populated at build time while volumes are mounted at run time, which causes /var/www/html to be overwritten by what you have locally (i.e. nothing).
You need to extend the Dockerfile for PHPMyAdmin to delay populating those directories until after the volumes have been mounted. You'll need something like this setup:
Modify docker-compose.yml to the following:
...
phpmyadmin:
  container_name: phpmyadmin
  build:
    # Use the Dockerfile located at ./build/phpmyadmin/Dockerfile to build this image
    context: ./build/phpmyadmin
    dockerfile: Dockerfile
  restart: always
  volumes:
    - ./phpmyadmin/libraries:/var/www/html/libraries
    - ./phpmyadmin/themes:/var/www/html/themes
  environment:
    PMA_HOST: mysql
Create a file at ./build/phpmyadmin/Dockerfile with this content:
FROM phpmyadmin/phpmyadmin:latest
# Move the directories you want into a temporary directory
RUN mv /var/www/html /tmp/
# Modify the start up of this image to use a custom script
COPY ./custom-entrypoint.sh /custom-entrypoint.sh
RUN chmod +x /custom-entrypoint.sh
ENTRYPOINT ["/custom-entrypoint.sh"]
CMD ["apache2-foreground"]
Create a custom entrypoint at ./build/phpmyadmin/custom-entrypoint.sh with this content:
#!/bin/sh
# Copy over the saved files
cp -r /tmp/html /var/www
# Kick off the original entrypoint
exec /docker-entrypoint.sh "$@"
Then you can build and start everything with docker-compose up --build.
Note: this will probably cause issues for you if you're trying to version control these directories - you'll probably need to modify custom-entrypoint.sh.
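The copy-back step can be rehearsed without Docker. This sketch (throwaway paths under /tmp) mimics what custom-entrypoint.sh does: content staged away at "build time" is copied into an initially empty "mounted" directory at "run time":

```shell
# Stage files (stands in for `RUN mv /var/www/html /tmp/` at build time)
mkdir -p /tmp/demo/tmp/html /tmp/demo/var/www
echo '<?php phpinfo();' > /tmp/demo/tmp/html/index.php
# Later, at container start, copy the saved tree back over the empty mount
# (stands in for `cp -r /tmp/html /var/www` in custom-entrypoint.sh)
cp -r /tmp/demo/tmp/html /tmp/demo/var/www
ls /tmp/demo/var/www/html
```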

Possible to share folders between container and container?

I want to know how to share an application folder between containers.
I found articles about how to share a folder between a container and the host, but I could not find anything about container-to-container sharing.
My goal: I want to edit the frontend application's code from the backend, so I need to share the folder.
Any solution?
My config is like this:
/
|-- docker-compose.yml
|-- backend application
|   |-- Dockerfile
|-- frontend application
    |-- Dockerfile
And my docker-compose.yml is like this:
version: '3'
services:
  railsApp: # <- backend application
    build: .
    command: bundle exec rails s -p 5000 -b '0.0.0.0'
    volumes:
      - code_share:/var/web/railsApp
    ports:
      - "3000:3000"
  reactApp: # <- frontend application
    build: .
    command: yarn start
    volumes:
      - code_share:/var/web/reactApp
    ports:
      - "3000:3000"
volumes:
  code_share:
You are already mounting a named volume into both your frontend and backend.
With this configuration, /var/web/railsApp and /var/web/reactApp will see exactly the same content.
So whenever you write to /var/web/reactApp in your frontend application container, the changes will also be reflected under /var/web/railsApp in the backend.
To achieve what you want (having railsApp and reactApp under /var/web), try mounting a folder from the host machine into both containers (and make sure each application writes into its respective /var/web folder correctly):
mkdir -p /var/web/railsApp /var/web/reactApp
then adjust your compose file:
version: '3'
services:
  railsApp: # <- backend application
    build: .
    command: bundle exec rails s -p 5000 -b '0.0.0.0'
    volumes:
      - /var/web:/var/web
    ports:
      - "5000:5000" # rails listens on 5000; this also avoids clashing with reactApp's 3000
  reactApp: # <- frontend application
    build: .
    command: yarn start
    volumes:
      - /var/web:/var/web
    ports:
      - "3000:3000"
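If you would rather keep the shared code inside the project folder (so a local editor can reach it) than under /var/web on the host, relative bind mounts work the same way. This is only a sketch; the host paths are illustrative:

```yaml
services:
  railsApp:
    volumes:
      - ./railsApp:/var/web/railsApp
  reactApp:
    volumes:
      - ./reactApp:/var/web/reactApp
```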

docker-compose: wait for a container to be run before running another container

In this example, I want to run the prisma container only when the mysql container is listening on mysql:3306. I tried to use wait-for-it.sh, but how can I use it inside the prisma container?
https://github.com/vishnubob/wait-for-it
version: '3.7'
services:
  prisma:
    image: prismagraphql/prisma:1.34.8
    restart: always
    depends_on:
      - mysql
    ports:
      - '4466:4466'
    environment:
      PRISMA_CONFIG: |
        port: 4466
        databases:
          default:
            connector: mysql
            host: mysql
            user: root
            password: prisma
            rawAccess: true
            port: 3306
            migrations: true
  mysql:
    image: mysql:5.7
    restart: always
    environment:
      MYSQL_ROOT_PASSWORD: prisma
    volumes:
      - ./persistence/test/mysql:/var/lib/mysql
  redis:
    image: redis:5-alpine
    command: redis-server
    ports:
      - 6379:6379
    volumes:
      - ./persistence/test/redis:/data
    hostname: redis
    restart: always
    # env_file: ${ENV_FILE}
If you want to use wait-for-it.sh to wait for the service mysql:3306 to become available, you will have to build your own image FROM prismagraphql/prisma:1.34.8 and COPY wait-for-it.sh into that image. After that you will have to create a custom startup script, which calls wait-for-it.sh and then execs the main prisma process.
e.g. Dockerfile
FROM prismagraphql/prisma:1.34.8
COPY wait-for-it.sh /
COPY entrypoint.sh /
RUN chmod +x /entrypoint.sh
ENTRYPOINT ["/entrypoint.sh"]
e.g. entrypoint.sh
#!/usr/bin/env bash
/wait-for-it.sh mysql:3306 # add a timeout if you want, e.g. `-t 10`
exec /app/start.sh "$@"
The tricky part is finding out the startup script inside images. Sometimes you will find a public Dockerfile in the project's repo, or you can inspect the image, e.g. docker image inspect prismagraphql/prisma:1.34.8 --format '{{.Config.Entrypoint}}'
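For reference, the core of what wait-for-it.sh does is just a retry loop around a TCP connect. A minimal sketch in bash (it uses bash's /dev/tcp feature, so it needs bash, not plain sh; wait-for-it.sh adds proper timeouts and logging on top of this idea):

```shell
#!/usr/bin/env bash
# Retry a TCP connect until it succeeds or the attempts run out.
wait_for() {
  host="$1"; port="$2"; tries="${3:-30}"
  for _ in $(seq "$tries"); do
    # the subshell opens fd 3 on the target; it closes again when the subshell exits
    if (exec 3<>"/dev/tcp/$host/$port") 2>/dev/null; then
      return 0
    fi
    sleep 1
  done
  return 1
}
```

Used inside an entrypoint it would look like `wait_for mysql 3306 30 && exec /app/start.sh`.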

Wait for a docker container to be ready with command in docker-compose.yml

I have a mysql db and a prisma image in my docker-compose.yml. I want prisma to wait for the db to be ready, because otherwise prisma keeps restarting and it won't work at all. I know from here that I can use ./wait-for-it.sh, but I was not able to connect the pieces after searching for a while.
version: '3'
services:
  prisma:
    image: prismagraphql/prisma:1.25
    restart: unless-stopped
    ports:
      - "4001:4466"
    depends_on:
      - db
    # I added this command
    command: ["./wait-for-it.sh", "db:33061", "--"]
    environment:
      PRISMA_CONFIG: |
        managementApiSecret: server.secret.123
        port: 4466
        databases:
          default:
            connector: mysql
            active: true
            host: db
            port: 3306
            user: ***
            password: ***
  db:
    image: mysql:5.7
    restart: unless-stopped
    command: --default-authentication-plugin=mysql_native_password
    environment:
      MYSQL_USER: ***
      MYSQL_ROOT_PASSWORD: ***
    ports:
      - "33061:3306"
    volumes:
      - /docker/mysql:/var/lib/mysql
I added the command above but nothing changed, not even an error in the logs, though as I understand it, the command is run inside the container.
How do I get ./wait-for-it.sh into the container?
Can this even work this way with command, or does it depend on the prisma image?
Otherwise, how would I achieve the waiting?
I just have the docker-compose file and want to do docker-compose up -d.
Now I found out how to include wait-for-it.sh in the container.
I downloaded wait-for-it.sh into the project folder and then created a file called Dockerfile with the contents:
FROM prismagraphql/prisma:1.25
COPY ./wait-for-it.sh /app/wait-for-it.sh
RUN chmod +x /app/wait-for-it.sh
ENTRYPOINT ["/bin/sh","-c","/app/wait-for-it.sh db:3306 -t 30 -- /app/start.sh"]
In my docker-compose.yml I replaced image: prismagraphql/prisma:1.25 with build: ., which causes a new build from the Dockerfile in my project path.
Now the new image is built from the prisma image with wait-for-it.sh copied in; the ENTRYPOINT is overridden, and prisma waits until the db is ready.
You are confusing internal and external ports. The database is visible on port 3306 inside your network, so you have to wait on db:3306 and not on 33061.
Publishing a port has no effect inside the user-defined bridge network created by default by docker-compose: all ports are visible to containers inside the network by default, and publishing a port only makes it reachable from outside the network.
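In other words, in the mapping below (the same one as in the question), the left-hand 33061 only exists for the host; other containers on the network always use the right-hand container port:

```yaml
db:
  ports:
    - "33061:3306" # host:container - other containers connect to db:3306
```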
Also, check what the ENTRYPOINT of the image prismagraphql/prisma:1.25 is. If it is not /bin/sh -c or another type of shell, your command won't get executed.
UPD
If the ENTRYPOINT in the base image is different from /bin/sh -c, you can override it. Supposing it is /bin/sh -c /app/start.sh, you could do the following in docker-compose.yml:
...
services:
  prisma:
    entrypoint: ["/bin/sh", "-c", "./wait-for-it.sh db:3306 && /app/start.sh"]
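As an aside, recent Compose versions can wait on a healthcheck instead of baking wait-for-it.sh into the image. A sketch (depends_on conditions require Compose file format 2.x or a recent implementation of the Compose spec; mysqladmin ping is a common MySQL liveness probe):

```yaml
services:
  db:
    image: mysql:5.7
    healthcheck:
      test: ["CMD", "mysqladmin", "ping", "-h", "127.0.0.1"]
      interval: 5s
      timeout: 3s
      retries: 10
  prisma:
    depends_on:
      db:
        condition: service_healthy
```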

Docker container exited with code 0 after docker-compose up -d

I know this question has been asked in various situations, but I'm still stuck despite everything I've read on the Internet.
I want a script executed after the container "mywebsite" is built, and I used ENTRYPOINT for that. I know that in normal use, after the ENTRYPOINT command is executed, the container "mywebsite" exits. I tried several tricks to avoid the exit, unfortunately without success.
In my Dockerfile I have this:
FROM php:7.1.17-apache
[...]
WORKDIR /var/www
COPY docker-entrypoint.sh /var/www/docker-entrypoint.sh
ENTRYPOINT ["sh", "/var/www/docker-entrypoint.sh"]
Then in my docker-entrypoint.sh I have this:
#!/bin/bash
set -e
cd www
chown -R www-data:www-data sites modules themes
exec "$@"
And here is my docker-compose.yml:
version: '3.3'
services:
  mywebsite:
    build: .
    extra_hosts:
      - "mywebsite.local:127.0.0.1"
    hostname: mywebsite
    domainname: local
    ports:
      - 8088:80
    volumes:
      - ./www:/var/www/www
      - ./vendor:/var/www/vendor
      - ./scripts:/var/www/scripts
    links:
      - database:database
    restart: always
    tty: true
  database:
    image: mysql:5.5.49
    container_name: mysql-container
    ports:
      - 3307:3306
    volumes:
      - ./www/dumps/mywebsite.sql:/docker-entrypoint-initdb.d/dump.sql
    restart: always
    command: --max_allowed_packet=32505856
    environment:
      MYSQL_ROOT_PASSWORD: root
      MYSQL_DATABASE: mywebsite
When built, all steps are fine and everything is set properly, but the container "mywebsite" keeps exiting. (The "database" service runs fine.)
I added tty: true and exec "$@", but neither works.
You can end your entrypoint with a command like tail -f /dev/null.
I often use this directly in my docker-compose.yml with command: tail -f /dev/null, and it is an easy way to keep the container running.
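Another thing worth checking: with ENTRYPOINT ["sh", "/var/www/docker-entrypoint.sh"] and no CMD in the Dockerfile, "$@" inside the script expands to nothing, so exec "$@" runs nothing and the script simply ends, which stops the container. A quick way to see the expansion, plus a hedged fallback pattern (apache2-foreground is the stock command in php:*-apache images):

```shell
# Simulate an entrypoint script that received no CMD arguments.
set --                               # positional parameters are now empty
echo "args: [$*]"                    # nothing for `exec "$@"` to run
# Fall back to a default long-running command so PID 1 stays alive:
cmd="${1:-apache2-foreground}"
echo "would exec: $cmd"
```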
I had the same problem when creating my own image from a postgis image. The problem was that I had added an ENTRYPOINT. When I removed the ENTRYPOINT and built the image again, docker-compose started my container and postgis was accepting connections.
Dockerfile:
FROM postgis/postgis:12-master
COPY organisation.sql /docker-entrypoint-initdb.d/ # COPY needs a destination; init scripts go here
#ENTRYPOINT ["docker-entrypoint.sh"] # This was the problem
In docker-compose I did not need any command or tty.
version: "3.7"
services:
  mydb:
    image: mydb:latest
    container_name: mytest
    ports:
      - "5432:5432"
    environment:
      POSTGRES_DB: postgres
      POSTGRES_USER: postgres
      POSTGRES_PASSWORD: secret
