First of all, thank you for your time.
I was trying my hand at Docker when I saw this article:
http://jasonwilder.com/blog/2014/03/25/automated-nginx-reverse-proxy-for-docker/
Please have a look at my docker-compose.yml file. I am using the images below:
jwilder/nginx-proxy:latest
grafana/grafana:4.6.2
version: "2"
services:
proxy:
build: ./proxy
container_name: proxy
restart: always
volumes:
- /etc/localtime:/etc/localtime:ro
- /var/run/docker.sock:/tmp/docker.sock:ro
ports:
- 80:80
- 443:443
grafana:
build: ./grafana
container_name: grafana
volumes:
- grafana-data:/var/lib/grafana
environment:
VIRTUAL_HOST: grafana.localhost
GF_SECURITY_ADMIN_PASSWORD: password
depends_on:
- proxy
volumes:
grafana-data:
So when I do docker-compose up -d on my local system, I am able to access the Grafana container.
Now I have to deploy this Docker app on AWS. How do I access the Grafana container on EC2 with VIRTUAL_HOST?
Any help or idea on how to do this will be appreciated! Thanks!
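In case it helps: nginx-proxy routes requests by the HTTP Host header, so on EC2 the usual approach is a DNS record plus a real domain in VIRTUAL_HOST. A minimal sketch, assuming the instance's security group already allows inbound port 80 and EC2_PUBLIC_IP is a placeholder for its public IP:

# Quick smoke test from your machine; the explicit Host header lets
# nginx-proxy match VIRTUAL_HOST before any DNS is in place:
curl -H "Host: grafana.localhost" http://EC2_PUBLIC_IP/

# For browser access, set VIRTUAL_HOST to a domain you control, e.g.
#   VIRTUAL_HOST: grafana.example.com
# and create a DNS A record pointing grafana.example.com at the instance.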
I use Docker with WSL2 on a Debian VM and I'm trying to install Passbolt.
I followed the steps in this guide: https://help.passbolt.com/hosting/install/ce/docker.html.
When I run docker-compose up, it works and I can reach the database with telnet, but it's impossible to reach the Passbolt instance, either with telnet or with my browser.
It's strange because both containers, mariadb and passbolt, are running.
This is my docker-compose.yml:
version: '3.4'
services:
  db:
    image: mariadb:10.3
    env_file:
      - env/mysql.env
    volumes:
      - database_volume:/var/lib/mysql
    ports:
      - "127.0.0.1:3306:3306"
  passbolt:
    image: passbolt/passbolt:latest-ce
    # Alternatively you can use rootless:
    # image: passbolt/passbolt:latest-ce-non-root
    tty: true
    container_name: passbolt
    restart: always
    depends_on:
      - db
    env_file:
      - env/passbolt.env
    volumes:
      - gpg_volume:/etc/passbolt/gpg
      - images_volume:/usr/share/php/passbolt/webroot/img/public
    command: ["/usr/bin/wait-for.sh", "-t", "0", "db:3306", "--", "/docker-entrypoint.sh"]
    ports:
      - 80:80
      - 443:443
    # Alternatively for non-root images:
    #   - 80:8080
    #   - 443:4433
volumes:
  database_volume:
  gpg_volume:
  images_volume:
If anybody can help me, thanks!
Your docker-compose file looks quite ordinary and I don't see any issues.
Can you please attach your passbolt.env and mysql.env (with any sensitive information removed, of course)?
Also, the passbolt.conf (VirtualHost) might be useful.
Make sure that the DNS A record is valid and that you have no firewall blocks.
Error logs would be appreciated as well.
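For completeness, these are the commands I would use to gather that information (a sketch; the service name matches the compose file above):

docker-compose ps                   # confirm both containers are actually up
docker-compose logs -f -t passbolt  # follow passbolt's logs with timestamps
curl -vk https://localhost/         # check whether the web server answers at all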
I'm very much new to the Docker world. I have a docker-compose file which works fine for me.
But how do I create these Docker images and run them on an EC2 instance?
Any help would be appreciated.
PS: I don't want to use ECS or ECR for this. I hope Docker Hub will work fine for storing and retrieving these images (correct me if I'm wrong).
Thanks.
version: "3"
services:
app:
image: node:12.13.1
volumes:
- ./:/app
working_dir: /app
depends_on:
- mongo
- nats
environment:
NODE_ENV: development
ports:
- 3000:3000
command: npm run dev
app_2:
image: node:12.13.1
volumes:
- ../app_2/:/app
working_dir: /app_2
depends_on:
- mongo
- nats
links:
- mongo
environment:
NODE_ENV: development
ports:
- 4000:4000
command: npm run dev
mongo:
image: mongo
expose:
- 27017
ports:
- "27017:27017"
volumes:
- ./data/db:/data/db
nats:
image: 'nats:2.1.2'
expose:
- "4222"
ports:
- "8222:8222"
hostname: nats-server
Install docker and docker-compose, then just run docker-compose up. Just remember to open port 4000 of the EC2 instance and make it accessible from your IP, or from any other IPs that need it.
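And yes, Docker Hub is fine for storing the images. A rough sketch of that workflow, assuming a Docker Hub account named youruser (a placeholder) and a Dockerfile per app that COPYs the source in, since the bind-mounted ./ source won't exist on the instance:

# On your machine: build and push each image
docker build -t youruser/app:1.0 .
docker push youruser/app:1.0

# On the EC2 instance: change the compose file's image: fields to
# youruser/app:1.0, then pull and start everything
docker-compose pull
docker-compose up -d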
I'm using ecs-cli to deploy my docker-compose.yml to ECS with SSL support.
When I run the command, it shows me that the container is running, but when I browse to the URL I get a 404 error.
Why?
This is my docker-compose.yml:
version: '2'
services:
  tester-cluster:
    image: yeasy/simple-web:latest
    environment:
      VIRTUAL_HOST: mydomin.net
      LETSENCRYPT_HOST: mydomin.net
      LETSENCRYPT_EMAIL: mydomin#gmail.com
  nginx-proxy:
    image: jwilder/nginx-proxy
    ports:
      - '80:80'
      - '443:443'
    volumes:
      - '/etc/nginx/vhost.d'
      - '/usr/share/nginx/html'
      - '/var/run/docker.sock:/tmp/docker.sock:ro'
      - '/etc/nginx/certs'
  letsencrypt-nginx-proxy-companion:
    image: jrcs/letsencrypt-nginx-proxy-companion
    volumes:
      - '/var/run/docker.sock:/var/run/docker.sock:ro'
    volumes_from:
      - 'nginx-proxy'
You will have to set WORDPRESS_DB_HOST for the WordPress server as well. It will be something similar to the following:
WORDPRESS_DB_HOST: mysql:3306
Note that the hostname is the name of the db container.
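For example (a sketch; it assumes the database service in the compose file is named mysql):

wordpress:
  environment:
    WORDPRESS_DB_HOST: mysql:3306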
You can view container logs by running the following:
docker-compose logs -f -t
I am using Docker version 1.12.3 and docker-compose version 1.8.1. I have some services, containing for example elasticsearch, rabbitmq, and a webapp.
My problem is that a service cannot access another service by its hostname, because docker-compose does not put all service hosts in the /etc/hosts file. I don't know their IPs, because they are assigned during the docker-compose up phase.
I use the networks feature as described at https://docs.docker.com/compose/networking/ instead of links, because I have circular references and links don't support that. But using networks does not put all service hostnames into each service node's /etc/hosts file. I set container_name, I set hostname, but nothing happened. What am I missing?
Here is my docker-compose.yml:
version: '2'
services:
  elasticsearch1:
    image: elasticsearch:5.0
    container_name: "elasticsearch1"
    hostname: "elasticsearch1"
    command: "elasticsearch -E cluster.name=GameOfThrones -E node.name='Ned Stark' -E discovery.zen.ping.unicast.hosts=elasticsearch1,elasticsearch2,elasticsearch3"
    volumes:
      - "/opt/elasticsearch/data"
    ports:
      - "9200:9200"
      - "9300:9300"
    networks:
      - webapp
  elasticsearch2:
    image: elasticsearch:5.0
    container_name: "elasticsearch2"
    hostname: "elasticsearch2"
    command: "elasticsearch -E cluster.name=GameOfThrones -E node.name='Daenerys Targaryen' -E discovery.zen.ping.unicast.hosts=elasticsearch1,elasticsearch2,elasticsearch3"
    volumes:
      - "/opt/elasticsearch/data"
    networks:
      - webapp
  elasticsearch3:
    image: elasticsearch:5.0
    container_name: "elasticsearch3"
    hostname: "elasticsearch3"
    command: "elasticsearch -E cluster.name=GameOfThrones -E node.name='John Snow' -E discovery.zen.ping.unicast.hosts=elasticsearch1,elasticsearch2,elasticsearch3"
    volumes:
      - "/opt/elasticsearch/data"
    networks:
      - webapp
  rabbit1:
    image: harbur/rabbitmq-cluster
    container_name: "rabbit1"
    hostname: "rabbit1"
    environment:
      - ERLANG_COOKIE=abcdefg
    networks:
      - webapp
  rabbit2:
    image: harbur/rabbitmq-cluster
    container_name: "rabbit2"
    hostname: "rabbit2"
    environment:
      - ERLANG_COOKIE=abcdefg
      - CLUSTER_WITH=rabbit1
      - ENABLE_RAM=true
    networks:
      - webapp
  rabbit3:
    image: harbur/rabbitmq-cluster
    container_name: "rabbit3"
    hostname: "rabbit3"
    environment:
      - ERLANG_COOKIE=abcdefg
      - CLUSTER_WITH=rabbit1
    networks:
      - webapp
  my_webapp:
    image: my_webapp:0.2.0
    container_name: "my_webapp"
    hostname: "my_webapp"
    command: "supervisord -c /etc/supervisor/supervisord.conf -n"
    environment:
      - DYNACONF_SETTINGS=settings.prod
    ports:
      - "8000:8000"
    tty: true
    networks:
      - webapp
networks:
  webapp:
    driver: bridge
This is how I understood that they can't communicate with each other: I get this error on elasticsearch cluster initialization:
Caused by: java.net.UnknownHostException: elasticsearch3
And this is how I run docker-compose:
docker-compose up
If the container expects the hostname to be available immediately when it starts, that is likely why it's failing.
The hostname isn't going to exist until the other containers start. You can use an entrypoint script to wait until all the hostnames are available, then exec elasticsearch ..., as sketched below.
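Something along these lines (a sketch; getent and the exact service names are assumptions based on the compose file above):

#!/bin/sh
# entrypoint.sh: block until the peer hostnames resolve, then hand off
for host in elasticsearch1 elasticsearch2 elasticsearch3; do
  until getent hosts "$host" > /dev/null 2>&1; do
    echo "waiting for $host to become resolvable..."
    sleep 1
  done
done
exec elasticsearch "$@"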
Today I switched from "Docker Toolbox" to "Docker for Mac", because Docker now finally has write access to my user directory (which didn't work with "Docker Toolbox"). Yay!
But this change also means that all containers now run under my localhost and not under Docker's IP as before (e.g. 192.168.99.100).
Since my localhost listens on various ports by default (80, 443, ...) and I don't want to keep adding newly created ports that don't conflict with the standard ones to my local dev domains (e.g. example.dev:8443), I wonder how to run my containers as before.
I read about network configs and tried a lot of things (creating a new host network, exposing ports with an IP in front of them, ...), but didn't get it working.
What kind of config do I need to run my app container under the IP 192.168.99.100? That's my docker-compose.yml so far.
version: '2'
services:
  app:
    build:
      context: .
      dockerfile: Dockerfile
    depends_on:
      - mysql
      - redis
      - memcached
    ports:
      - 80:80
      - 443:443
      - 22:22
      - 3000:3000
      - 3001:3001
    volumes:
      - ./app/:/app/
      - /tmp/debug/:/tmp/debug/
      - ./:/docker/
    volumes_from:
      - storage
    # cap and privileged needed for slowlog
    cap_add:
      - SYS_PTRACE
    privileged: true
    env_file:
      - etc/environment.yml
      - etc/environment.development.yml
  mysql:
    build:
      context: docker/mysql/
      dockerfile: MariaDB-10
    ports:
      - 3306:3306
    volumes_from:
      - storage
    volumes:
      - ./data/mysql:/var/lib/mysql
      - /tmp/debug/:/tmp/debug/
    env_file:
      - etc/environment.yml
      - etc/environment.development.yml
  redis:
    build: docker/redis/
    volumes_from:
      - storage
    env_file:
      - etc/environment.yml
      - etc/environment.development.yml
  memcached:
    build: docker/memcached/
    volumes_from:
      - storage
    env_file:
      - etc/environment.yml
      - etc/environment.development.yml
  storage:
    build: docker/storage/
    volumes:
      - /storage
You need to declare "networks:" for each of your services, e.g.:
version: '2'
services:
  app:
    image: xxxx:xxx
    ports:
      - "80:80"
    networks:
      - my-network
  mysql:
    image: xxxx:xxx
    networks:
      - my-network
networks:
  my-network:
    driver: bridge
Then, from inside your app configuration, you can use "mysql" as the hostname of the database server.
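For instance, a hypothetical connection string (user, password, and database name are placeholders):

DATABASE_URL=mysql://user:password@mysql:3306/mydb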
You can define a network in your compose file, then add all of your services to that network.
https://docs.docker.com/compose/networking/
But I would suggest you just use different ports now that you are running natively, e.g. 8080:80.
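If you really want the containers reachable under a dedicated IP again, one workaround (this assumes you're on macOS; I haven't verified it on your setup) is to alias the address onto the loopback interface and bind the published ports to it:

sudo ifconfig lo0 alias 192.168.99.100

and then in docker-compose.yml:

ports:
  - "192.168.99.100:80:80"
  - "192.168.99.100:443:443"

That keeps 80 and 443 on localhost free for whatever the host itself is serving.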