I have created a container with docker-compose; it runs without a problem and the database connects correctly, but the volumes are not in sync between the container and the local folder. I started it with the following compose file:
version: "3"
services:
web:
image: "prestashop/prestashop:1.7"
ports:
- "8080:80"
environment:
- PS_DEV_MODE=1
- XDEBUG_CONFIG
- PS_INSTALL_AUTO=1
- PS_FOLDER_INSTALL=installDirHasToBeRenamed
- PS_FOLDER_ADMIN=admin1234
- ADMIN_MAIL=admin#shop.com
- ADMIN_PASSWD=admin1234
- DB_SERVER=database
- DB_NAME=prestashop
- DB_USER=root
- DB_PASSWD=admin
- PS_INSTALL_DB=1
- PS_ERASE_DB=1
networks:
- network
volumes:
- .:/var/www/html/modules/mymoduledir
# Do not forget to remove directory when removing containers with `docker-compose rm`
- .docker/web:/var/www/html
database:
# MySQL 5.7 is recommended for the Docker version
# https://hub.docker.com/r/prestashop/prestashop/
image: "mysql:5.7"
ports:
- "3307:3306"
environment:
- MYSQL_ROOT_PASSWORD=admin
networks:
- network
networks:
network:
This is a PrestaShop 1.7 module, and the given directory is the one where it will be installed, so I tried the following things to make this work:
I used chmod on the container's side so the folder could be used, but it did not work; PrestaShop still wasn't able to use that folder.
Starting fresh, I deleted the folder; this way it was possible to install the module, but the local changes didn't show up in the container.
Please replace . with $(pwd). Note that for your database you don't have any volumes mounted.
volumes:
  - $(pwd):/var/www/html/modules/mymoduledir
In any case, the best practice would be a layout like:
rootfolder
|- docker-compose.yaml
|- web/
|   |- directory_to_mount
|- postgres/
|   |- directory_to_mount
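With such a layout the mounts would then point at those subfolders, for example (a sketch only; the folder names follow the tree above, and since the question's database is MySQL the database mount target shown here is /var/lib/mysql):
services:
  web:
    volumes:
      - ./web/directory_to_mount:/var/www/html/modules/mymoduledir
  database:
    volumes:
      # the tree shows a "postgres/" folder name, but the target path matches
      # the MySQL image used in the question
      - ./postgres/directory_to_mount:/var/lib/mysql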
I have this docker-compose file:
services:
  db:
    # We use a mariadb image which supports both amd64 & arm64 architecture
    image: mariadb:10.6.4-focal
    # If you really want to use MySQL, uncomment the following line
    #image: mysql:8.0.27
    command: '--default-authentication-plugin=mysql_native_password'
    volumes:
      - db_data:/var/lib/mysql
    restart: always
    environment:
      - MYSQL_ROOT_PASSWORD=somewordpress
      - MYSQL_DATABASE=wordpress
      - MYSQL_USER=wordpress
      - MYSQL_PASSWORD=wordpress
    expose:
      - 3306
      - 33060
  wordpress:
    image: wordpress:latest
    volumes:
      - wp_data:/var/www/html
    ports:
      - 80:80
    restart: always
    environment:
      - WORDPRESS_DB_HOST=db
      - WORDPRESS_DB_USER=wordpress
      - WORDPRESS_DB_PASSWORD=wordpress
      - WORDPRESS_DB_NAME=wordpress
volumes:
  db_data:
  wp_data:
I run this and install WordPress, but I want to learn to make templates and plugins, so I need to edit the WP files. How can I do that?
After starting your containers using the compose file:
List the running containers: docker ps
Check which of the running containers uses the wordpress:latest image and copy its container id
Enter the container by running docker exec -it <your-container-id> /bin/sh
Now you have a shell session inside the container and can edit the files with vi (not the most ideal workflow).
Look up docker volumes if you want to edit the files locally and have them be mapped inside of the containers.
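If you go that route, a minimal sketch (the ./wp-content path is an illustrative assumption, not something from the compose file above) would be to add a bind mount to the wordpress service so host edits show up immediately:
  wordpress:
    image: wordpress:latest
    volumes:
      - wp_data:/var/www/html
      # bind-mount a local folder over wp-content so theme and plugin edits
      # made on the host are visible inside the container right away
      - ./wp-content:/var/www/html/wp-content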
I have the following compose file, where I'm sharing some generated HTML data from the Jenkins container to the host drive, and the Nginx container reads this data back from the host drive. I'm using Ubuntu Server 18.04 on AWS.
The problem is that I can read the contents of jenkins/workspace/allure-report only once. After the HTML data is updated it becomes inaccessible to Nginx, which throws a 403 status code.
I tried all the possible solutions but nothing works. The only ugly workaround is to restart the Nginx container after every update of the HTML data. I don't like this approach and am looking for some built-in Docker feature to resolve this.
What didn't help: sharing a volume directly between the containers without using the docker host drive, using the rslave option, using a separate Docker volume as a buffer between the two containers... I believe it should be much easier!
version: '2'
services:
  jenkins:
    container_name: jenkins
    image: "jenkins/jenkins"
    ports:
      - "8088:8080"
      - "50000:50000"
    env_file:
      - variables.env
    volumes:
      - ./jenkins:/var/jenkins_home
  selenoid:
    container_name: selenoid
    network_mode: bridge
    image: "aerokube/selenoid"
    # default directory for browsers.json is /etc/selenoid/
    command: -listen :4444 -conf /etc/selenoid/browsers.json -video-output-dir /opt/selenoid/video/ -timeout 3m
    ports:
      - "4444:4444"
    env_file:
      - variables.env
    volumes:
      - $PWD:/etc/selenoid/ # assumed current dir contains browsers.json
      - /var/run/docker.sock:/var/run/docker.sock
  selenoid-ui:
    container_name: selenoid-ui
    network_mode: bridge
    image: "aerokube/selenoid-ui"
    links:
      - selenoid
    ports:
      - "8080:8080"
    env_file:
      - variables.env
    command: ["--selenoid-uri", "http://selenoid:4444"]
  nginx:
    container_name: nginx
    image: "nginx"
    ports:
      - "80:80"
    volumes:
      - ./jenkins/workspace/allure-report:/usr/share/nginx/html:ro,rslave
Found the solution: the easiest way to get access to the dynamic data is to use volumes_from in the container you want to read from.
When I configured my compose file like that I faced another issue: the 403 status was gone, but the data was static. That was my fault, though; I wasn't using the "cp -r" command correctly, so my data had only been copied once.
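For reference, a rough sketch of what that nginx service could look like with the version '2' file above (the read-only flag is an assumption; note that volumes_from makes nginx inherit Jenkins' /var/jenkins_home mount, so the Nginx config or a copy step still has to point at the report inside that path):
  nginx:
    container_name: nginx
    image: "nginx"
    ports:
      - "80:80"
    # inherit all volumes of the jenkins service (read-only) instead of
    # bind-mounting ./jenkins/workspace/allure-report from the host
    volumes_from:
      - jenkins:ro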
I have a docker-compose LAMP stack comprising three services: a webserver, PHP and MySQL.
The apache2 webroot inside the container is shared to my local machine using a volume like so:
volumes:
  - ./public_html:/usr/local/apache2/htdocs
When the stack is running, though, I can't edit files inside the shared volume, since my local user is different from the user inside the apache2 container. Additionally, the installer of my CMS (ProcessWire) is unable to acquire permissions on the required install directories.
The Apache container uses the Alpine-based 2.4.35 image.
I've built my docker-compose file according to this tutorial:
https://medium.com/@thivi/creating-a-lamp-stack-using-docker-compose-13ca4e3950e1
Below I have attached my docker-compose.yml.
version: '3.7'
services:
  apache:
    build: './apache'
    restart: always
    ports:
      - 80:80
      - 443:443
    networks:
      - frontend
      - backend
    volumes:
      - ./public_html:/usr/local/apache2/htdocs
      - ./cert/:/usr/local/apache2/cert/
    depends_on:
      - php
      - mysql
  php:
    build: './php'
    restart: always
    networks:
      - backend
    volumes:
      - ./public_html:/usr/local/apache2/htdocs
      - ./tmp:/usr/local/tmp
  mysql:
    build: './mysql'
    restart: always
    ports:
      - 3306:3306
    expose:
      - 3306
    networks:
      - backend
    volumes:
      - ./database:/var/lib/mysql
networks:
  backend:
  frontend:
Is there any way to fix this issue? I'd be grateful for answers; I've been dealing with this for the past two days without getting anywhere, and I'm also kind of surprised that such an essential feature as directory sharing is so complicated.
/edit:
I've also noticed something interesting: when I execute a bash inside the Apache container, the ownership of Apache's document root is set to nobody:nobody, which probably also isn't right.
I'm running this on Debian 9.
I'm using sudo docker volume create db to create a volume I'm using in my docker-compose.yml. But I still get the error db_1_d89b59353579 | mkdir: cannot create directory '/var/lib/mysql': Permission denied.
How can I set permissions for the user using that volume? And how do I find out which user that is?
Docker-Compose:
version: '2'
volumes:
  nextcloud:
  db:
services:
  db:
    image: mariadb
    command: --transaction-isolation=READ-COMMITTED --binlog-format=ROW
    restart: always
    volumes:
      - db:/var/lib/mysql:z
    environment:
      - MYSQL_ROOT_PASSWORD=***
      - MYSQL_PASSWORD=***
      - MYSQL_DATABASE=nextcloud
      - MYSQL_USER=nextcloud
  app:
    image: nextcloud
    ports:
      - 8080:80
    links:
      - db
    volumes:
      - nextcloud:/var/www/html
    restart: always
I have an install.sh file where I run:
...
sudo docker volume create db
sudo docker-compose build
docker-compose up -d
Try to first change the mounts to local folders and see if that fixes your issue:
version: '2'
volumes:
  nextcloud:
  db:
services:
  db:
    ...
    volumes:
      - ./db:/var/lib/mysql
    environment:
      - MYSQL_ROOT_PASSWORD=***
      - MYSQL_PASSWORD=***
      - MYSQL_DATABASE=nextcloud
      - MYSQL_USER=nextcloud
  app:
    ...
    volumes:
      - ./nextcloud:/var/www/html
    restart: always
If it does, then check that the volumes are correctly removed by docker-compose down. Run docker volume ls. If they still persist, remove them by hand and re-run your containers with the volumes.
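As a rough sketch of that check (volume names are illustrative; docker volume ls shows the real ones, usually prefixed with the compose project name, and docker-compose down -v removes named volumes along with the containers):
docker-compose down              # stop and remove the containers
docker volume ls                 # look for leftover volumes such as <project>_db
docker volume rm <project>_db    # remove a stale volume by hand if it persists
docker-compose up -d             # re-create the containers with fresh volumes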
Regarding the difference between mounting to a volume (db:/var/lib/mysql) and mounting to a host path (./db:/var/lib/mysql):
In the first case it is a named volume managed by Docker. It is meant for persistence, but getting to the files is a bit trickier. In the second case it is a path on the host, which makes it a lot easier to retrieve the persisted files. I recommend running "docker-compose config" for both situations and looking at how docker-compose internally transforms the statement in each case.
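Roughly, this is the kind of difference you would see; the output shape below is only a sketch (exact formatting varies by Compose version), and the absolute path is whatever directory the compose file lives in:
$ docker-compose config
services:
  db:
    volumes:
      # host-path mount: the relative ./db is resolved to an absolute host path
      - /home/me/project/db:/var/lib/mysql:rw
      # a named volume would instead stay a reference, e.g. db:/var/lib/mysql:rw,
      # backed by the Docker-managed volume declared below
volumes:
  db: {}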
I have 2 applications that are separate codebases, and they each have their own database on the same db server instance.
I am trying to replicate this in docker, locally on my laptop. I want to be able to have both apps use the same database instance.
I would like
both apps to start in docker at the same time
both apps to be able to access the database on localhost
the database data is persisted
be able to view the data in the database using an IDE on localhost
So each of my apps has its own dockerfile and docker-compose file.
On app1, I start the docker instance of the app which is tied to the database. It all starts fine.
When I try to start app2, I get the following error:
ERROR: for app2_mssql_1 Cannot start service mssql: driver failed programming external connectivity on endpoint app2_mssql_1 (12d550c8f032ccdbe67e02445a0b87bff2b2306d03da1d14ad5369472a200620): Bind for 0.0.0.0:1433 failed: port is already allocated
How can I have them both running at the same time? Both apps need to be able to access each other's database tables!
Here are the docker-compose.yml files.
app1:
version: "3"
services:
web:
build:
context: .
args:
volumes:
- .:/app
ports:
- "3000:3000"
depends_on:
- mssql
mssql:
image: 'microsoft/mssql-server-linux'
ports:
- '1433:1433'
environment:
- ACCEPT_EULA=Y
- SA_PASSWORD=P455w0rd!
volumes:
- app1_db:/var/lib/mssql/data
volumes:
app1_db:
and here is app2:
version: "3"
services:
web:
build:
context: .
args:
volumes:
- .:/app
ports:
- "3000:3000"
depends_on:
- mssql
mssql:
image: 'microsoft/mssql-server-linux'
ports:
- '1433:1433'
environment:
- ACCEPT_EULA=Y
- SA_PASSWORD=P455w0rd!
volumes:
- app2_db:/var/lib/mssql/data
volumes:
app2_db:
Should I be using the same volume in each docker-compose file?
I guess the problem is that in each app I am spinning up two different db instances, when in reality I just want one that is used by all my apps?
The ports section in a docker-compose file binds the container port to the host's port, which causes the port conflict in your case.
You need to remove the ports section from at least one of the compose files. This way, docker-compose can be up for both and you can access both apps at the same time. But remember that the two apps will be placed on separate network bridges.
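A minimal sketch of that change in one of the compose files (which project keeps its published port is up to you; inside its own project the database stays reachable as mssql:1433):
  mssql:
    image: 'microsoft/mssql-server-linux'
    # no "ports:" entry: 1433 is not published on the host, so it no longer
    # conflicts with the other project; containers here still use mssql:1433
    environment:
      - ACCEPT_EULA=Y
      - SA_PASSWORD=P455w0rd!
    volumes:
      - app2_db:/var/lib/mssql/data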
How docker-compose up works:
Suppose your app is in a directory called myapp, and your docker-compose.yml defines a web service and a db service.
When you run docker-compose up, the following happens:
A network called myapp_default is created.
A container is created using web’s configuration. It joins the network myapp_default under the name web.
A container is created using db’s configuration. It joins the network myapp_default under the name db.
If you run the second docker-compose.yml in a different folder myapp2, then the network will be myapp2_default.
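For illustration, after bringing up both projects a docker network ls might look roughly like this (the IDs are made up; the names depend on the folder names):
$ docker network ls
NETWORK ID     NAME              DRIVER    SCOPE
1a2b3c4d5e6f   bridge            bridge    local
2b3c4d5e6f7a   myapp_default     bridge    local
3c4d5e6f7a8b   myapp2_default    bridge    local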
The current configuration creates two volumes, two database containers and two apps. If you can make them run in the same network and run the database as a single container, it will work.
I don't think you are expecting two database containers and two volumes.
Approach 1:
A single docker-compose.yml that contains everything:
version: "3"
services:
app1:
build:
context: .
args:
volumes:
- .:/app # give the path depending up on the docker file of app1.
ports:
- "3030:3000"
depends_on:
- mssql
app2:
build:
context: .
args:
volumes:
- .:/app # give the path depending up on the docker file of app2.
ports:
- "3032:3000"
depends_on:
- mssql
mssql:
image: 'microsoft/mssql-server-linux'
ports:
- '1433:1433'
environment:
- ACCEPT_EULA=Y
- SA_PASSWORD=SqlServer1234!
volumes:
- app_docker_db:/var/lib/mssql/data
volumes:
app_docker_db:
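With this single-compose layout, both apps reach the database through the service name mssql rather than localhost. If the apps take their connection settings from the environment, that could look roughly like this (the variable names are illustrative assumptions, not something the apps above are known to read):
  app1:
    environment:
      - DB_HOST=mssql     # the compose service name, resolvable inside the network
      - DB_PORT=1433
      - DB_USER=sa
      - DB_PASSWORD=SqlServer1234!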
Approach 2:
To isolate it further while still running them as separate compose files, create three compose files that share a network.
docker-compose.yml for the database, with the network:
version: "3"
services:
mssql:
image: 'microsoft/mssql-server-linux'
ports:
- '1433:1433'
environment:
- ACCEPT_EULA=Y
- SA_PASSWORD=SqlServer1234!
volumes:
- app_docker_db:/var/lib/mssql/data
networks:
- test_network
volumes:
app_docker_db
networks:
test_network:
docker-compose.yml for app1
Remove the database container and add the lines below to your compose file:
version: "3"
services:
app1:
build:
context: .
args:
volumes:
- .:/app # give the path depending up on the docker file of app1.
ports:
- "3030:3000"
networks:
default:
external:
name: my-pre-existing-network
Do the same for the other app by adapting its docker-compose file in the same way.
There are many other ways to structure the compose files; see the Compose networking documentation on configuring the default network and using a pre-existing network.
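If you go the pre-existing network route, the rough order of operations would be (the file names here are only examples; the network name must match what the compose files reference):
docker network create my-pre-existing-network    # create the shared network once
docker-compose -f docker-compose.db.yml up -d    # start the database on that network
docker-compose -f docker-compose.app1.yml up -d  # start app1 on the same network
docker-compose -f docker-compose.app2.yml up -d  # start app2 on the same network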
You're publishing the same port (1433) twice to the host machine (this is what "ports: ..." does). That is not possible, as it would claim the same port on your host twice (which is what the error message says).
I think the most common approach in these cases is to link your DBs to your apps (see https://docs.docker.com/compose/compose-file/#links). By doing this your applications can still access the databases on their usual port (1433), but the databases are no longer accessible from the host (only from the containers linked to them).
Another error I see in your docker-compose files is that both applications are published on the same host port. This is also not possible, for the same reason. I would suggest changing one of them to "3001:3000", so you can access that application on host port 3001.
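Putting both suggestions together, app2's compose file might look roughly like this (a sketch only; the build context, password and volume name are taken from the question):
version: "3"
services:
  web:
    build:
      context: .
    volumes:
      - .:/app
    ports:
      - "3001:3000"   # published on host port 3001 so it doesn't clash with app1
    links:
      - mssql
  mssql:
    image: 'microsoft/mssql-server-linux'
    # no published port: only containers in this project reach it, as mssql:1433
    environment:
      - ACCEPT_EULA=Y
      - SA_PASSWORD=P455w0rd!
    volumes:
      - app2_db:/var/lib/mssql/data
volumes:
  app2_db: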