I'm trying to set up my environment to develop Phoenix apps using Docker.
Until this point everything has been great, except for the VIRTUAL_HOST part: I'd like to access my app by visiting app.dev instead of localhost:4000.
I'm using this docker-compose.yml file:
version: '2'
services:
  proxy:
    image: jwilder/nginx-proxy
    ports:
      - 80:80
  postgres:
    image: postgres:latest
    restart: always
    environment:
      - POSTGRES_USER=postgres
      - POSTGRES_PASSWORD=root
  web:
    build: .
    command: mix phx.server
    volumes:
      - .:/app
    ports:
      - 4000:4000
    depends_on:
      - postgres
    environment:
      - MIX_ENV=dev
      - VIRTUAL_HOST=app.dev
      - VIRTUAL_PORT=4000
      - POSTGRES_USER=postgres
      - POSTGRES_PASSWORD=root
    links:
      - postgres
When I try to access app.dev, I get "site can't be reached".
Edit #1
For using VIRTUAL_HOST, do I really need the reverse proxy, or would a simple DNS entry or something similar be enough?
Edit #2
OK, that's strange: when I curl app.dev I get the HTML content, but I can't access app.dev from the browser.
You don't need nginx; you just need to add app.dev to your /etc/hosts file:
127.0.0.1 app.dev
(This likely also explains the browser-only failure in edit #2: recent Chrome versions ship the .dev TLD on the HSTS preload list, so the browser silently upgrades to https:// and refuses the plain-HTTP site even though curl works. A suffix like .test or .localhost avoids that.)
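A quick sketch of the change and a way to verify it from a shell (assuming a Linux or macOS host, with the nginx-proxy container listening on port 80):

# append the hosts entry (requires sudo)
echo "127.0.0.1 app.dev" | sudo tee -a /etc/hosts

# verify: this should return the Phoenix app's HTML via the proxy
curl http://app.dev/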
I have a Laravel app that lives in Docker, and I want to integrate Elasticsearch into it.
This is how my docker-compose.yaml looks:
version: '3'
services:
  laravel:
    build: ./docker/build
    container_name: laravel
    restart: unless-stopped
    privileged: true
    ports:
      - 8084:80
      - "22:22"
    volumes:
      - ./docker/settings:/settings
      - ../2agsapp:/var/www/html
      # - vendor:/var/www/html/vendor
      - ./docker/temp:/backup
      - composer_cache:/root/.composer/cache
    environment:
      - ENABLE_XDEBUG=true
    links:
      - mysql
  mysql:
    image: mariadb:10.2
    container_name: mysql
    volumes:
      - ./docker/db_config:/etc/mysql/conf.d
      - ./db:/var/lib/mysql
    ports:
      - "8989:3306"
    environment:
      - MYSQL_USER=dev
      - MYSQL_PASSWORD=dev
      - MYSQL_ROOT_PASSWORD=root
      - MYSQL_DATABASE=laravel
    command: --innodb_use_native_aio=0
  phpmyadmin:
    container_name: pma_laravel
    image: phpmyadmin/phpmyadmin:latest
    environment:
      - MYSQL_USER=dev
      - MYSQL_ROOT_PASSWORD=root
      - MYSQL_PASSWORD=dev
      - MYSQL_DATABASE=laravel
      - PMA_HOST=mysql
    ports:
      - 8083:80
    links:
      - mysql
  es:
    image: docker.elastic.co/elasticsearch/elasticsearch:7.10.1
    ports:
      - "9200:9200"
      - "9300:9300"
    environment:
      - discovery.type=single-node
volumes:
  storage:
  composer_cache:
I run docker-compose up -d and then hit a really strange issue.
If I execute curl localhost:9200 inside the laravel container, it returns Failed to connect to localhost port 9200: Connection refused.
But if I run curl localhost:9200 outside of Docker, it returns the expected response.
Maybe I don't understand how this works; I hope someone can help me.
When you want to access another container from within a container, you should use the container's service name, not localhost.
If you are inside laravel and want to access Elasticsearch, you should run:
curl es:9200
Since you mapped port 9200 to localhost (the ports section in docker-compose), that port is available from your local machine as well; that's why curling 9200 from the local machine works.
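A quick way to see both paths side by side (a sketch, assuming the stack is up via docker-compose and curl is available inside the laravel image):

# container-to-container: the es service name resolves on the compose network
docker-compose exec laravel curl http://es:9200

# host-to-container: goes through the published 9200 port
curl http://localhost:9200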
I have a simple Docker orchestration to serve Magento files.
Here is my docker-compose.yml:
version: "3"
services:
php:
build: docker/php
volumes:
- .:/var/www/html
db:
image: mariadb:10.4
ports:
- 3306:3306
environment:
MYSQL_ROOT_PASSWORD: password
MYSQL_DATABASE: magento
volumes:
- ./docker/mysql/databases:/var/lib/mysql
nginx:
image: nginx:alpine
ports:
- 8009:80
links:
- php:phpfpm
volumes:
- ./docker/nginx/conf.d:/etc/nginx/conf.d
- ./docker/nginx/magento.conf:/etc/nginx/magento.conf
- .:/var/www/html
elasticsearch:
image: bitnami/elasticsearch:7
ports:
- 9200:9200
volumes:
- ./docker/elasticsearch/data:/bitnami/elasticsearch/data
As you can see, I want to serve the store on my host machine's port 8009.
My question is: what should I set as the base URL when running the setup:install command to install my Magento store?
If I set it to something like localhost or 127.0.0.1, then when I access localhost:8009 it redirects me to 127.0.0.1, which is obviously the wrong location.
When you map a custom port in Docker, the base URL needs to include that port, i.e. URL:port.
Example:
base_url=http://localhost:8009/
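For instance, a hedged sketch of the install command run inside the php container (the database flags mirror the compose file above; any other flags your shop needs are omitted, and the path to bin/magento assumes the code base is mounted at the container's working directory):

docker-compose exec php bin/magento setup:install \
  --base-url=http://localhost:8009/ \
  --db-host=db \
  --db-name=magento \
  --db-user=root \
  --db-password=password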
I've created a PrestaShop store on a server. Is there any way to use Docker for my store and migrate it to another server using Docker? I know that I'll need docker-compose, but to be honest I don't know what to do with the files on the current server.
OK, so I dug into the problem, and the solution to my question is below. What I did was pull the original PrestaShop image and copy my files into it.
The next step was to use the mariadb image. I had a backup.sql file exported from the previous store's phpMyAdmin:
version: '2'
services:
  prestashop:
    image: prestashop
    ports:
      - 80:80
    links:
      - mariadb:mariadb
    depends_on:
      - mariadb
    volumes:
      - ./src:/var/www/html
      - ./src/modules:/var/www/html/modules
      - ./src/themes:/var/www/html/themes
      - ./src/override:/var/www/html/override
    environment:
      - PS_DEV_MODE=1
      - DB_SERVER=mariadb
      - DB_USER=root
      - DB_PASSWD=root
      - DB_NAME=prestashop
      - PS_INSTALL_AUTO=0
  mariadb:
    image: mariadb
    volumes:
      # mount the dump as a file inside the init directory so it is imported on first start
      - ./backup.sql:/docker-entrypoint-initdb.d/backup.sql
    environment:
      - MYSQL_ROOT_PASSWORD=root
      - MYSQL_DATABASE=prestashop
  phpmyadmin:
    image: phpmyadmin/phpmyadmin
    links:
      - mariadb
    ports:
      - 81:80
    environment:
      - PMA_HOST=mariadb
      - PMA_USER=root
      - PMA_PASSWORD=root
The biggest issue is the IP with docker-machine. Keep in mind that if you are using Docker Toolbox your IP is 192.168.99.100, but in Docker for Windows the stack is reachable on localhost (so just type localhost).
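If you are on Docker Toolbox and unsure of the VM's address, docker-machine can print it (this assumes the default machine name):

# prints the IP of the VM running the Docker daemon
docker-machine ip default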
You can use this docker-compose.yml:
version: "3"
services:
prestashop:
image: prestashop/prestashop
networks:
mycustomnetwork:
ports:
- 82:80
links:
- mariadb:mariadb
depends_on:
- mariadb
volumes:
- ./src:/var/www/html
- ./src/modules:/var/www/html/modules
- ./src/themes:/var/www/html/themes
- ./src/override:/var/www/html/override
environment:
- PS_DEV_MODE=1
- DB_SERVER=mariadb
- DB_USER=root
- DB_PASSWD=mycustompassword
- DB_NAME=prestashop
- PS_INSTALL_AUTO=0
mariadb:
image: mariadb
networks:
mycustomnetwork:
volumes:
- presta_db:/var/lib/mysql
environment:
- MYSQL_ROOT_PASSWORD=mycustompassword
- MYSQL_DATABASE=prestashop
phpmyadmin:
image: phpmyadmin/phpmyadmin
networks:
mycustomnetwork:
links:
- mariadb:mariadb
ports:
- 1235:80
depends_on:
- mariadb
environment:
- PMA_HOST=mariadb
- PMA_USER=root
- PMA_PASSWORD=mycustompassword
volumes:
presta_db:
networks:
mycustomnetwork:
external: true
Replace mycustomnetwork and mycustompassword with your own values.
Then run docker-compose up.
Web URL: localhost:82
phpMyAdmin URL: localhost:1235
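Note that the compose file marks the network as external, so it must exist before you bring the stack up:

# create the external network once, then start the stack
docker network create mycustomnetwork
docker-compose up -d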
You can follow this tutorial to set up PrestaShop in a Docker environment:
https://hub.docker.com/r/prestashop/prestashop/
You will need to add your current files to the PrestaShop container and most likely import your database into a MySQL container. docker-compose will be used to launch those containers together. Once this is done, you will be able to deploy the whole thing anywhere.
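For the database part, one hedged way to import the old dump into the running mariadb service (assuming the root/root credentials from the compose file above and a backup.sql in the current directory):

# stream the dump into the mysql client inside the mariadb container
docker-compose exec -T mariadb mysql -uroot -proot prestashop < backup.sql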
You should also include a bridge network in your compose file; some examples that might work are here: https://runnable.com/docker/docker-compose-networking
This way the db can be configured to be accessible only by prestashop on the local Docker network, without being exposed outside. PrestaShop's DB_SERVER can also point to the service name of the running container, in case its IP changes. All you would leave exposed is port 80 on the app, as in the sketch below.
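A minimal sketch of that idea (the backend network name is illustrative): mariadb publishes no ports, so only prestashop, which shares the bridge network, can reach it, and it does so by service name.

version: '3'
services:
  prestashop:
    image: prestashop/prestashop
    ports:
      - 80:80                # the only port exposed to the outside
    environment:
      - DB_SERVER=mariadb    # service name, not an IP
    networks:
      - backend
  mariadb:
    image: mariadb
    environment:
      - MYSQL_ROOT_PASSWORD=root
      - MYSQL_DATABASE=prestashop
    networks:
      - backend              # no ports section: reachable only on this network
networks:
  backend:
    driver: bridge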
version: '3'
services:
  app:
    build: .
    ports:
      - "8000:8000"
    volumes:
      - .:/srv/redditaurus
    environment:
      - REDDIT_KEY=${REDDIT_KEY}
      - REDDIT_SECRET=${REDDIT_SECRET}
    links:
      - mysql:mysql
  mysql:
    image: mysql
    environment:
      MYSQL_ROOT_PASSWORD: root
    ports:
      - "3306:3306"
    # volumes:
    #   - ./mysql:/var/lib/mysql/
  nginx:
    image: nginx
    ports:
      - "80:80"
This is my docker-compose.yml. The weirdest thing is happening: I can visit localhost:8000 and get the redditaurus app without any issue. However, if I try localhost:80, or localhost:3306 from a MySQL client, I get access denied or ERR_EMPTY_RESPONSE.
If I try 0.0.0.0:80, I get the default nginx page, so that's okay, but why won't localhost work?
MySQL refuses to be served on either localhost or 0.0.0.0. I've tried accessing it from Sequel Pro, from inside a linked container, and from my host machine's console, and nothing can get into it. If I exec into the SQL container, I can log in just fine, so it's not a password issue.
Why can't I get to my containers normally? :(
You are missing some configuration properties. Try this:
version: '3'
services:
  app:
    build: .
    ports:
      - "8000:8000"
    volumes:
      - .:/srv/redditaurus
    environment:
      - REDDIT_KEY=${REDDIT_KEY}
      - REDDIT_SECRET=${REDDIT_SECRET}
    links:
      - mysql:mysql
  mysql:
    image: mysql
    entrypoint: ['/entrypoint.sh', '--default-authentication-plugin=mysql_native_password']
    environment:
      MYSQL_ROOT_PASSWORD: root
      MYSQL_ALLOW_EMPTY_PASSWORD: "YES"
    ports:
      - "3306:3306"
  nginx:
    image: nginx
    ports:
      - "80:80"
If you want to connect to MySQL from a terminal, run this (--protocol tcp forces a TCP connection instead of the default local socket, which does not exist on your host):
mysql -uroot -proot --protocol tcp
Next, your nginx binding on port 80 does work correctly.
The problem here is not docker-compose; it is likely in your OS configuration.
I used the mysql:5.7 tag in docker-compose, and that allowed the container to work. I guess the latest tag has some issue with my local environment.
Still not sure what's up with nginx, but it's not an issue.
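That squares with the entrypoint hint in the answer above: the latest mysql image (8.x) defaults to caching_sha2_password, which many older clients cannot negotiate, while 5.7 still defaults to mysql_native_password. The pin is a one-line change:

mysql:
  image: mysql:5.7   # 5.7 defaults to mysql_native_password, avoiding the handshake failure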
I have an app composed of multiple Rails projects that I am trying to dockerize. Each app starts on a different Rails port:
main app: 1665
admin: 3002
website: 3001
...
This is my docker-compose.yml file:
version: '2'
services:
  db:
    image: postgres:9.6
    container_name: acme_db
    hostname: db.myapp.dev
    ports:
      - "5432:5432"
    volumes:
      - myapp_pgdata:/var/lib/postgresql/data/pgdata
    environment:
      - PGDATA=/var/lib/postgresql/data/pgdata
      - VIRTUAL_HOST=db.myapp.dev
    networks:
      - generic
  myapp:
    image: acme/myapp
    container_name: acme_myapp
    hostname: app.myapp.dev
    command: rails s -p 1665 -b '0.0.0.0'
    volumes:
      - ./myapp:/usr/src/app
      - $SSH_AUTH_SOCK:/tmp/ssh_auth_sock
    ports:
      - "1665:1665"
    depends_on:
      - db
    environment:
      - SSH_AUTH_SOCK=/tmp/ssh_auth_sock
      - RAILS_ENV=development
      - VIRTUAL_HOST=myapp.dev
    networks:
      - generic
  admin:
    image: acme/admin
    container_name: acme_admin
    hostname: admin2.myapp.dev
    command: rails s -p 3002 -b '0.0.0.0'
    volumes:
      - ./admin2:/usr/src/app
      - $SSH_AUTH_SOCK:/tmp/ssh_auth_sock
    ports:
      - "3002:3002"
    depends_on:
      - myapp
    environment:
      - SSH_AUTH_SOCK=/tmp/ssh_auth_sock
      - RAILS_ENV=development
      - VIRTUAL_HOST=admin2.myapp.dev
    networks:
      - generic
  website:
    image: acme/website
    container_name: acme_website
    hostname: web.myapp.dev
    command: rails s -p 3001 -b '0.0.0.0'
    volumes:
      - ./website:/usr/src/app
      - $SSH_AUTH_SOCK:/tmp/ssh_auth_sock
    ports:
      - "3001:3001"
    environment:
      - SSH_AUTH_SOCK=/tmp/ssh_auth_sock
      - RAILS_ENV=development
      - VIRTUAL_HOST=myapp.dev
    networks:
      - generic
volumes:
  myapp_pgdata:
    external: true
networks:
  generic:
    external: true
Running each app on its own works fine, but I have a problem when the applications need to communicate with each other. For instance, the website needs to forward an HTTP request to the main app, and when it does, it tries to resolve this URI: http://app.myapp.dev:1665/register, and the resolved IP is 127.0.0.1 instead of the myapp Docker container's IP.
How can I manage this situation? Should I use completely different hostnames for each container? Ideally I would like to avoid DNS resolution, so that Rails hits app.myapp.dev:1665 directly instead of resolving app.myapp.dev to 127.0.0.1 and then hitting 127.0.0.1:1665.
By the way, I am using jwilder/nginx-proxy to resolve container hostnames from my laptop.
Any thoughts?
Your setup allows your containers to resolve each other by their service names through the network set up by docker-compose.
So website should be able to hit the main app through http://myapp:1665/register.
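A quick sanity check from the host (a sketch; it assumes curl is available inside the website image):

# the myapp service name resolves on the shared "generic" network
docker-compose exec website curl http://myapp:1665/register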