I've been working on a site using Laravel 5.8 which runs in a Docker container. Usually I've been able to save my local changes and the site on my localhost reflects them, but now my changes aren't seen on the site.
I'm running docker-compose up -d and it starts up, creating the laravel network, the php container and the nginx container, but my local changes just won't show.
Should I be running a different command?
docker-compose file:
version: '3'

networks:
  laravel:

services:
  nginx:
    image: nginx:latest
    container_name: nginx
    ports:
      - "8080:80"
    volumes:
      - ./:/var/www
      - ./resources/docker/nginx:/etc/nginx/conf.d
    depends_on:
      - php
    networks:
      - laravel

  php:
    image: quay.io/testRepo/docker-php-iaccess-odbc:7.3-devel
    container_name: php
    volumes:
      - ./:/var/www
    environment:
      - PHP_OPACHE_ENABLE=0
    ports:
      - "9000:9000"
    networks:
      - laravel
docker-compose volume mounting requires either a full host path or the version 3 long-form bind configuration.
https://docs.docker.com/compose/compose-file/#volumes
On Linux/Unix OSes the pwd CLI command can be used as a shortcut to build the full path.
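For example, the nginx volume from the question could be written with the long-form bind syntax; this is only a sketch assuming the compose file sits in the project root:

services:
  nginx:
    volumes:
      # long-form bind mount instead of the short "./:/var/www" string
      # (long syntax requires compose file format "3.2" or newer)
      - type: bind
        source: ./        # or an absolute path, e.g. built with pwd / ${PWD} on Linux
        target: /var/www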
Hello, I want to publish the file "index.php" from the local path "C:\html\index.php" with docker-compose.yml.
On localhost I get the typical Apache "It works" page, but I do not get the content of my local folder. What am I doing wrong?
Here is my docker-compose file:
version: "3"
services:
# --- MySQL 5.7
#
mysql:
container_name: "dstack-mysql"
image: bitnami/mysql:5.7
environment:
- MYSQL_ROOT_PASSWORD=root
- MYSQL_USER=admin
- MYSQL_PASSWORD=root
ports:
- '3306:3306'
php:
container_name: "dstack-php"
image: bitnami/php-fpm:8.1
# --- Apache 2.4
#
apache:
container_name: "dstack-apache"
image: bitnami/apache:2.4
ports:
- '80:8080'
- '443:8443'
depends_on:
- php
volumes:
- C:/html:/var/www/html
phpmyadmin:
container_name: "dstack-phpmyadmin"
image: bitnami/phpmyadmin:latest
depends_on:
- mysql
ports:
- '81:8080'
- '8143:8443'
environment:
- DATABASE_HOST=host.docker.internal
volumes:
dstack-mysql:
driver: local
Update:
volumes:
  - ./html:/var/www/html
Doesn't work.
I want a web development Docker environment where I edit the file C:\html\index_hello.html on my computer and see the changes in the browser at localhost:8080. My expectation is that I can open http://localhost:8080/index_hello.html in the browser. Did I do something wrong? Do I have to edit other files, e.g. apache.conf?
I would suggest avoiding hardcoded directories and using relative paths instead.
If you place your docker-compose.yml in your C:/html folder and change your volume to read:
volumes:
  - .:/var/www/html
then run the following:
cd C:/html
docker-compose up -d
you are telling docker-compose to use ., meaning the current directory.
If you put the docker-compose.yml in the C:/ directory instead, you can change the volume to:
volumes:
  - ./html:/var/www/html
Then the docker-compose command remains the same.
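Put together, and assuming the compose file lives in C:/ next to the html folder, the apache service from the question would look roughly like this (a sketch, not the complete file):

services:
  apache:
    image: bitnami/apache:2.4
    ports:
      - '80:8080'
    volumes:
      - ./html:/var/www/html   # resolved relative to the directory containing docker-compose.yml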
I'm fairly new to Docker, and I'm trying to create a local development LAMP stack (more exactly Apache, MariaDB, PHP) using docker-compose, existing Docker images from Docker Hub, and no Dockerfile if possible, to be used with several local web projects.
I'd like to map my local web project directory /Users/myusername/projects/myprojectname to the default document root of the Apache container (which seems to be /app for the Apache image I'm using).
Here is my docker-compose.yml file:
version: "3"
services:
mariadb:
image: mariadb:10.5
container_name: mariadb
restart: always
ports:
- 8889:3306
volumes:
- ./mysql:/var/lib/mysql
environment:
- MYSQL_ROOT_PASSWORD=root
- MYSQL_USER=localmysqluser
- MYSQL_PASSWORD=localmysqlpwd
php:
image: bitnami/php-fpm:7.4
container_name: php
ports:
- 9000:9000
volumes:
- /Users/myusername/projects/myprojectname:/app
apache:
image: bitnami/apache:latest
container_name: apache
restart: always
ports:
- 8080:80
volumes:
- ./apache-vhosts/myapp.conf:/vhosts/myapp.conf:ro
- /Users/myusername/projects/myprojectname:/app
depends_on:
- mariadb
- php
But when I do docker-compose up -d and then browse to http://localhost:8080/, I get no data at all. Where am I going wrong? Is my docker-compose.yml configuration wrong, or is it a matter of filesystem permissions?
I've been looking at this similar question, but I'd prefer not using any Dockerfile if possible.
Further question: is it possible to make a local directory /Users/myusername/projects/ browsable by Apache in my local browser?
As answered by J. Song, the exposed port of this Apache Docker image is 8080, not 80.
So we just need to change the port mapping of the Apache service to 8080:8080 instead of 8080:80.
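In other words, only the port mapping of the apache service changes; something like:

  apache:
    image: bitnami/apache:latest
    ports:
      - 8080:8080   # host port 8080 -> container port 8080, where this image actually listens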
There is a Ruby on Rails application which uses MongoDB and PostgreSQL databases. When I run it locally everything works fine, however when I try to run it in a remote container, it throws this error message:
2021-03-14T20:22:27.985+0000 Failed: error connecting to db server: no reachable servers
The docker-compose.yml file defines the following services:
redis, mongodb, db and rails
I start the remote containers with the following commands:
docker-compose build - build successful
docker-compose up -d - containers are up and running
When I connect to the rails container and try to run
bundle exec rake aws:restore_db
the error mentioned above is thrown. I don't know what is wrong here. The mongodb container is up and running.
The docker-compose.yml is shown below:
version: '3.4'

services:
  redis:
    image: redis:5.0.5

  mongodb:
    image: mongo:3.6.13
    volumes:
      - mongo-data:/data/db

  db:
    image: postgres:11.3
    volumes:
      - db-data:/var/lib/postgresql/data

  rails:
    build: .
    image: proj:latest
    depends_on:
      - db
      - mongodb
      - redis
    volumes:
      - .:/proj
    ports:
      - "3000:3000"
    tty: true
    stdin_open: true
    env_file:
      - .env/development.env

volumes:
  db-data:
  mongo-data:
This is how I start all four remote containers:
$ docker-compose up -d
Starting proj_db_1 ... done
Starting proj_redis_1 ... done
Starting proj_mongodb_1 ... done
Starting proj_rails_1 ... done
Please help me understand how the remote containers should interact with each other.
Your configuration should point to the services by name and not to a port on localhost. For example, if you were connecting to redis as localhost:6380 or 127.0.0.1:6380, you now need to use redis:6380.
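For example (hypothetical, since the Rails config files aren't shown in the question), a config/mongoid.yml for this stack would point at the mongodb service name rather than localhost:

# config/mongoid.yml -- a sketch; the database name and Mongoid layout are assumptions
development:
  clients:
    default:
      database: proj_development   # assumed database name
      hosts:
        - mongodb:27017            # the compose service name, not localhost/127.0.0.1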
If this still does not help, you can try to add links between the containers so that the names given to them as services can be resolved. The file would then look something like this:
version: '3.4'

services:
  redis:
    image: redis:5.0.5
    networks:
      - front-end
    links:
      - "mongodb:mongodb"
      - "db:db"
      - "rails:rails"

  mongodb:
    image: mongo:3.6.13
    volumes:
      - mongo-data:/data/db
    networks:
      - front-end
    links:
      - "redis:redis"
      - "db:db"
      - "rails:rails"

  db:
    image: postgres:11.3
    volumes:
      - db-data:/var/lib/postgresql/data
    networks:
      - front-end
    links:
      - "redis:redis"
      - "mongodb:mongodb"
      - "rails:rails"

  rails:
    build: .
    image: proj:latest
    depends_on:
      - db
      - mongodb
      - redis
    volumes:
      - .:/proj
    ports:
      - "3000:3000"
    tty: true
    stdin_open: true
    env_file:
      - .env/development.env
    networks:
      - front-end
    links:
      - "redis:redis"
      - "mongodb:mongodb"
      - "db:db"

volumes:
  db-data:
  mongo-data:

networks:
  front-end:
The links will allow hostnames to be defined in the containers.
The link flag is legacy, and in new versions of docker-engine it's not required for user-defined networks. Also, the links will be ignored in the case of a Docker Swarm deployment. However, since there are still old installations of Docker and docker-compose, this is one thing to try when troubleshooting.
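For comparison, on a current Docker Engine the same name resolution usually works with a shared user-defined network alone, without any links; a minimal sketch of that variant:

version: '3.4'

services:
  mongodb:
    image: mongo:3.6.13
    networks:
      - front-end

  rails:
    build: .
    networks:
      - front-end   # "mongodb" resolves by service name on this shared network

networks:
  front-end: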
I have a Docker image in a GitLab registry.
When I run (after logging in on a target machine)
docker run -d -p 8081:8080/tcp gitlab.somedomain.com:5050/root/app
the Laravel app is available, running and reachable. Things like php artisan config:clear are working. When I enter the container everything looks fine.
But I don't have any services running alongside it. So I had the idea to create a yml file, docker-compose-gitlab.yml, and run it with docker-compose to set things up:
version: '3'

services:
  mysql:
    image: mysql:5.7
    container_name: my-mysql
    environment:
      - MYSQL_ROOT_PASSWORD=***
      - MYSQL_DATABASE=dbname
      - MYSQL_USER=username
      - MYSQL_PASSWORD=***
    volumes:
      - ./data/mysql:/var/lib/mysql
    ports:
      - "3307:3306"

  application:
    image: gitlab.somedomain.com:5050/root/app:latest
    build:
      context: .
      dockerfile: ./Dockerfile
    container_name: my-app
    ports:
      - "8081:8080"
    volumes:
      - .:/application
    env_file: .env.docker
    working_dir: /application
    depends_on:
      - mysql
    links:
      - mysql
Calling docker-compose --verbose -f docker-compose-gitlab.yml up shows me that the mysql service is created and working; the app also seems to be created but then fails ... exiting with code 0 - no further message.
If I add commands to my yml like php artisan config:clear, the error gets even less clear to me: it says it cannot find artisan, and it seems as if the command is executed outside the container ... exiting with code 1. (artisan is a helper executed via php.)
When I call docker-compose with -d and then run docker ps, I can only see mysql running but not the app.
When I use both strategies, the problem is that the two containers do not share a common network and so cannot work together.
What did I miss? Is this the wrong strategy?
The problem is that I had a leftover volume directive which overwrites my entire application with an empty directory.
You can just leave that out.
version: '3'

services:
  mysql:
    image: mysql:5.7
    container_name: my-mysql
    environment:
      - MYSQL_ROOT_PASSWORD=***
      - MYSQL_DATABASE=dbname
      - MYSQL_USER=username
      - MYSQL_PASSWORD=***
    volumes:
      - ./data/mysql:/var/lib/mysql
    ports:
      - "3307:3306"

  application:
    image: gitlab.somedomain.com:5050/root/app:latest
    build:
      context: .
      dockerfile: ./Dockerfile
    container_name: my-app
    ports:
      - "8081:8080"
    ## volumes:
    ##   - .:/application   ## this would overwrite the app
    env_file: .env.docker
    working_dir: /application
    depends_on:
      - mysql
    links:
      - mysql
You can debug the network of the containers by listing the networks with docker network ls,
then, when the list is shown, inspect the compose network with docker inspect <ComposeNetworkID>.
Once you are sure that your services are not in the same network, remove your containers and recreate them with docker-compose -f docker-compose-gitlab.yml up.
If you notice they are in the same network, try using the container name instead of localhost to reach each other.
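Put together, the debugging flow looks roughly like this (the actual network name reported by docker network ls depends on your project directory):

# list all Docker networks; compose networks are usually named <project>_default
docker network ls

# inspect the compose network to see which containers are attached to it
docker inspect <ComposeNetworkID>

# if the services are not on the same network, remove and recreate the stack
docker-compose -f docker-compose-gitlab.yml down
docker-compose -f docker-compose-gitlab.yml up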
I have a docker-compose LAMP stack composed of three services: a webserver, php and mysql.
The apache2 webroot inside the container is shared to my local machine using a volume like so:
volumes:
  - ./public_html:/usr/local/apache2/htdocs
When the stack is running though, I can't edit files inside the shared volume, since my local user is different from the user inside the apache2 container. Additionally, the installer of my CMS (ProcessWire) is unable to acquire permissions to the required install directories.
The Apache container uses Apache 2.4.35 on Alpine.
I've build my docker-compose file according to this tutorial:
https://medium.com/#thivi/creating-a-lamp-stack-using-docker-compose-13ca4e3950e1
Below I have attached my docker-compose.yml.
version: '3.7'

services:
  apache:
    build: './apache'
    restart: always
    ports:
      - 80:80
      - 443:443
    networks:
      - frontend
      - backend
    volumes:
      - ./public_html:/usr/local/apache2/htdocs
      - ./cert/:/usr/local/apache2/cert/
    depends_on:
      - php
      - mysql

  php:
    build: './php'
    restart: always
    networks:
      - backend
    volumes:
      - ./public_html:/usr/local/apache2/htdocs
      - ./tmp:/usr/local/tmp

  mysql:
    build: './mysql'
    restart: always
    ports:
      - 3306:3306
    expose:
      - 3306
    networks:
      - backend
    volumes:
      - ./database:/var/lib/mysql

networks:
  backend:
  frontend:
Is there any way to fix this issue? I'd be grateful for answers; I've been dealing with this issue for the past two days without getting anywhere, and I'm also kind of surprised that such an essential feature as directory sharing is this complicated.
/edit:
I've also noticed something interesting: when I execute a bash shell inside the apache container, the ownership of Apache's document root is set to nobody:nobody, which probably also isn't right.