I am new to Docker and I am trying to dockerize an application I have written in Go. It is a simple web server that interacts with RabbitMQ and MongoDB.
It takes its credentials from a TOML file and loads them into a config struct before starting the application server on port 3000. These are the credentials:
mongo_server = "localhost"
database = "collect_db"
rabbitmq_server = "amqp://guest:guest@localhost:5672/"
If it can't connect to these URLs it fails with an error. The following is my docker-compose.yml:
version: '3'
services:
  rabbitmq:
    image: rabbitmq
    ports:
      - 5672:5672
  mongodb:
    image: mongo
    ports:
      - 27017:27017
  web:
    build: .
    image: palash2504/collect
    container_name: collect_service
    ports:
      - 3000:3000
    depends_on:
      - rabbitmq
      - mongodb
    links: [rabbitmq, mongodb]
But it fails to connect to RabbitMQ on the URL used for local development, i.e. amqp://guest:guest@localhost:5672/.
I realise that the RabbitMQ container might be running on a different address than the one provided in the config file.
I would like to know the correct way to set env credentials so that my app can connect to RabbitMQ.
Also, what would be the best approach for changing my application code to initialize connections to external services? I was thinking about ditching the config.toml file and using os.Getenv and os.Setenv to get the URLs for the connections.
Localhost addresses are resolved, well, locally: inside a container, localhost points at the container itself, so it will never reach services running in other containers.
Services on the same Compose network can reach each other by service name. So from the web container you can target mongodb, for example.
You might give this a shot:
mongo_server = "mongodb"
database = "collect_db"
rabbitmq_server = "amqp://guest:guest@rabbitmq:5672/"
It is advisable to set service target environment variables in the compose file itself:
# docker-compose.yml
# ...other stuff...
  web:
    # ...other stuff...
    environment:
      RABBITMQ_SERVER: rabbitmq
      MONGO_SERVER: mongodb
    depends_on:
      - rabbitmq
      - mongodb
This gives you a single place to make adjustments to the configuration.
As a side note, links: [rabbitmq, mongodb] can be removed; Compose services can already reach each other by name. I would also advise against overriding the container name (remove container_name: collect_service) unless it is really necessary.
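On the application side, a minimal sketch in Go of what reading those variables with os.Getenv could look like, falling back to the local-development defaults (the struct and helper names here are assumptions, since the original config code isn't shown):

package main

import (
	"fmt"
	"log"
	"os"
)

// Config mirrors the values from config.toml; field names are assumptions.
type Config struct {
	MongoServer string
	Database    string
	RabbitMQURL string
}

// getenvDefault reads an environment variable, falling back to def
// when it is unset, so local runs keep their old defaults.
func getenvDefault(key, def string) string {
	if v := os.Getenv(key); v != "" {
		return v
	}
	return def
}

func loadConfig() Config {
	rabbitHost := getenvDefault("RABBITMQ_SERVER", "localhost")
	return Config{
		MongoServer: getenvDefault("MONGO_SERVER", "localhost"),
		Database:    getenvDefault("DATABASE", "collect_db"),
		RabbitMQURL: fmt.Sprintf("amqp://guest:guest@%s:5672/", rabbitHost),
	}
}

func main() {
	cfg := loadConfig()
	log.Printf("mongo=%s db=%s rabbitmq=%s", cfg.MongoServer, cfg.Database, cfg.RabbitMQURL)
}

Run locally it behaves like the old config.toml; inside Compose, RABBITMQ_SERVER and MONGO_SERVER resolve to the service names on the Docker network.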
Related
I have two containers: one for Cypress and another for my web app. I have them both set up in a docker-compose.yml file like so:
version: '3.2'
services:
  pa-portal:
    image: web_app_image
    container_name: pa_portal
    volumes:
      - productDB:/web_app/db
    ports:
      - "8080:8080"
  cypress:
    image: "cypress/included:4.4.0"
    depends_on:
      - pa-portal
    environment:
      - CYPRESS_baseUrl=http://pa-portal:8080
    working_dir: /cypress-testing
    volumes:
      - ./:/cypress-testing
volumes:
  productDB:
From the Cypress (testing framework) container I am able to access the web app using http://pa-portal:8080, but from a browser on my host the only way I can access the web app launched by the pa_portal container is via localhost:8080.
Why are there different URLs depending on where I am accessing from? Is there some fundamental knowledge I need to do some research on?
Everything is working as designed.
The service name is only resolvable WITHIN the Docker network; it doesn't work like a hosts entry outside of that scope.
To get what you want, look into Traefik.
You can set it up as a Docker container, add labels to your compose services, and Traefik will route the domain name you want to your container.
I googled a simple how-to for that, but the Traefik docs are fine too:
https://www.digitalocean.com/community/tutorials/how-to-use-traefik-as-a-reverse-proxy-for-docker-containers-on-ubuntu-16-04
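A rough sketch of what that could look like, assuming Traefik v2 label syntax; the hostname pa-portal.localhost is illustrative, not taken from the question:

version: '3.2'
services:
  traefik:
    image: traefik:v2.4
    command:
      # Read routing rules from container labels, listen on port 80.
      - --providers.docker=true
      - --entrypoints.web.address=:80
    ports:
      - "80:80"
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock:ro
  pa-portal:
    image: web_app_image
    labels:
      # Route requests for pa-portal.localhost to this service's port 8080.
      - traefik.http.routers.pa-portal.rule=Host(`pa-portal.localhost`)
      - traefik.http.services.pa-portal.loadbalancer.server.port=8080

With this running, a host browser could use http://pa-portal.localhost while Cypress keeps using http://pa-portal:8080 inside the network.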
I've created a project in Node.js to store and fetch credentials from CyberArk Conjur (using its REST API).
But to test the application I'm struggling to set up the Conjur server.
The problem is that the server runs fine within its Docker container, but I can't access it from outside (the host machine); port mapping is not working.
Or is there any Conjur server hosted on the Internet for public use?
All I want is to test API calls.
As of this writing, the Conjur Node.js API is not actively supported. Here are some suggestions for testing the APIs.
Can you share the command or docker-compose file you're using to start Conjur?
Method 1
If you're using the setup from the Conjur Quickstart Guide, your docker-compose.yml file should look something like:
...
  conjur:
    image: cyberark/conjur
    container_name: conjur_server
    command: server
    environment:
      DATABASE_URL: postgres://postgres@database/postgres
      CONJUR_DATA_KEY:
      CONJUR_AUTHENTICATORS:
    depends_on:
      - database
    restart: on-failure

  proxy:
    image: nginx:1.13.6-alpine
    container_name: nginx_proxy
    ports:
      - "8443:443"
    volumes:
      - ./conf/:/etc/nginx/conf.d/:ro
      - ./conf/tls/:/etc/nginx/tls/:ro
    depends_on:
      - conjur
      - openssl
    restart: on-failure
...
This means Conjur is running behind an NGINX proxy that handles the TLS, and it does not expose a port outside the Docker network it is running on. With this setup you can reach the Conjur server at https://localhost:8443 on your local machine.
Note: you will need the TLS certificate located in ./conf/tls/. Since this is a demo environment, it is made readily available for testing like this.
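For instance, a minimal Go client that trusts that demo certificate could look like this (the file name conf/tls/nginx.crt is an assumption; use whichever certificate the quickstart generated):

package main

import (
	"crypto/tls"
	"crypto/x509"
	"fmt"
	"io"
	"log"
	"net/http"
	"os"
)

func main() {
	// Trust the quickstart's self-signed certificate instead of
	// disabling verification altogether.
	pem, err := os.ReadFile("conf/tls/nginx.crt")
	if err != nil {
		log.Fatal(err)
	}
	pool := x509.NewCertPool()
	if !pool.AppendCertsFromPEM(pem) {
		log.Fatal("could not parse certificate")
	}
	// If the certificate was not issued for localhost, also set
	// ServerName in the tls.Config to the name it was issued for.
	client := &http.Client{
		Transport: &http.Transport{
			TLSClientConfig: &tls.Config{RootCAs: pool},
		},
	}

	// Hit the server root just to verify connectivity; real calls
	// would target the Conjur REST API endpoints.
	resp, err := client.Get("https://localhost:8443/")
	if err != nil {
		log.Fatal(err)
	}
	defer resp.Body.Close()
	body, _ := io.ReadAll(resp.Body)
	fmt.Println(resp.Status, string(body))
}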
Method 2
If you do not care about security and are purely testing the REST API endpoints, you could always cut out the TLS and modify the docker-compose.yml to expose the Conjur server's port to your local machine, like this:
...
  conjur:
    image: cyberark/conjur
    container_name: conjur_server
    command: server
    environment:
      DATABASE_URL: postgres://postgres@database/postgres
      CONJUR_DATA_KEY:
      CONJUR_AUTHENTICATORS:
    ports:
      - "8080:80"
    depends_on:
      - database
    restart: on-failure
Now you'll be able to talk to the Conjur Server on your local machine through http://localhost:8080.
For more info: Networking in Docker Compose docs
I have a simple web application that uses HTML and PHP to capture information via an HTML form. It takes this information and uses an HTML POST to send it, as application XML, to a service that also runs on my local host, on port 8400. I have this application running in a LAMP stack on macOS and it works perfectly. The XML gets to the service without any issues.
When I moved this app into a containerized LAMP stack using Docker, the application runs, but when my PHP tries to post to the other service on port 8400, it cannot get there.
I am confident that this is an issue with Docker networking, but I am not sure what the problem is. Here is my docker-compose.yml file:
version: "1.0"
services:
php:
build: "./php/"
networks:
- backend
- frontend
volumes:
- ./public_html/:/var/www/html/
apache:
build: "./apache/"
depends_on:
- php
- mysql
networks:
- frontend
- backend
ports:
- "8080:80"
volumes:
- ./public_html/:/var/www/html/
mysql:
image: mysql:5.6.40
networks:
- backend
environment:
- MYSQL_ROOT_PASSWORD=rootpassword
networks:
frontend:
backend:
I think the answer probably lies in allowing the container to reach my localhost network, but being relatively new to Docker, I am unsure.
How can I configure Docker networking to allow posting to services outside the Docker network on specific ports?
I'm a beginner with Docker. I wrote a simple docker-compose.yml file to run two service containers: one for a Node app and another for Redis. The issue is that my app server is unable to connect to the Redis container. Here is my code:
version: '3'
services:
  redis:
    image: redis
    ports:
      - "6379:6379"
    networks:
      - test
  app_server:
    image: app_server
    depends_on:
      - redis
    links:
      - redis
    ports:
      - "4004:4004"
    networks:
      - test
networks:
  test:
Output:
Error: Redis connection to 127.0.0.1:6379 failed - connect ECONNREFUSED
It looks like your web app is connecting to 127.0.0.1/localhost instead of redis. So it's not a Docker issue, but a programming issue within your web app. You could add an environment variable to your web app (something like REDIS_HOST) and then set that parameter in the compose file. This of course requires your web application to read the Redis host from the environment variable.
Example environment variable assignment in compose:
  webapp:
    image: my_web_app
    environment:
      - REDIS_HOST=redis
Again, this requires that your web app actually reads the REDIS_HOST environment variable in its code.
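The app-side change is small. Here is a sketch of the pattern in Go with the go-redis client (the question's app is Node, but the idea is identical: read REDIS_HOST from the environment and default to localhost):

package main

import (
	"context"
	"fmt"
	"log"
	"os"

	"github.com/redis/go-redis/v9"
)

func main() {
	// Read the Redis host from the environment, defaulting to
	// localhost for runs outside of Compose.
	host := os.Getenv("REDIS_HOST")
	if host == "" {
		host = "localhost"
	}

	rdb := redis.NewClient(&redis.Options{Addr: host + ":6379"})
	if err := rdb.Ping(context.Background()).Err(); err != nil {
		log.Fatalf("redis connection to %s failed: %v", host, err)
	}
	fmt.Println("connected to redis at", host)
}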
127.0.0.1:6379 connects to the current container's localhost, not to the Redis container.
With your docker-compose file you can now connect to Redis via the Redis container's name, because docker-compose automatically creates a Docker bridge network which allows you to reach another container via its name...
Run docker inspect to see the Redis container's name; for example, if the current Redis container's name is redis_abc, you can connect to Redis via redis_abc:6379. Or, more simply, add container_name: redis_server to the docker-compose file to pin the container name.
https://docs.docker.com/network/bridge/
Unable to connect to containers running on separate docker hosts
I've got two Docker Tomcat containers running on two different Ubuntu VMs. System-A has a web service running and System-B has a DB. I haven't been able to figure out how to connect the application running on System-A to the DB running on System-B. When I run the database on System-A, the application (which is also running on System-A) can connect to it. I'm using docker-compose to set up the network, which works fine when both containers are running on the same VM. I've exec'd into the application container on System-A and looked at its /etc/hosts file, and I think what's missing is the IP address of System-B.
services:
  db:
    image: mydb
    hostname: mydbName
    ports:
      - "8012:8012"
    networks:
      data:
        aliases:
          - mydbName
  api:
    image: myApi
    hostname: myApiName
    ports:
      - "8810:8810"
    networks:
      data:
networks:
  data:
You would configure this exactly the same way you would if Docker weren't involved: configure the Tomcat instance with the DNS name or IP address of the other server. You do need to make sure each service is published outside of Docker space using a ports: directive.
On server-a.example.com you could run this docker-compose.yml file:
version: '3'
services:
  api:
    image: myApi
    ports:
      - "8810:8810"
    environment:
      DATABASE_URL: "http://server-b.example.com:8012"
And on server-b.example.com:
version: '3'
services:
  db:
    image: mydb
    ports:
      - "8012:8012"
In principle it would be possible to set up an overlay network connecting the two hosts, but this is a significantly more complicated setup.
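If you did want to try that route, a rough sketch under these assumptions: a Swarm initialized with docker swarm init on server-a, server-b joined to it, and an attachable overlay network created with docker network create --driver overlay --attachable data. Each host's compose file would then reference that network as external:

version: '3'
services:
  db:
    image: mydb
    networks:
      - data
networks:
  data:
    external: true

With both services attached to the same overlay network, the api container could reach the database as db:8012 by name, without publishing the port on the host.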
(You definitely don't want to use docker exec to modify /etc/hosts in a container: you'll have to repeat this step every time you delete and recreate the container, and manually maintaining hosts files is tedious and error-prone, particularly if you're moving containers between hosts. Consul could work as a service-discovery system that provides a DNS service.)