I created a new Docker stack where I need several InfluxDB instances, which at the moment I can't connect to my Grafana container. Here is part of my docker-compose.yml:
services:
grafana:
image: grafana/grafana
container_name: grafana
restart: always
ports:
- 3000:3000
networks:
- monitoring
volumes:
- grafana-volume:/var/lib/grafana
influxdb:
image: influxdb
container_name: influxdb
restart: always
ports:
- 8086:8086
networks:
- monitoring
volumes:
- influxdb-volume:/var/lib/influxdb
influxdb-2:
image: influxdb
container_name: influxdb-2
restart: always
ports:
- 12380:12380
networks:
- monitoring
volumes:
- influxdb-volume-2:/var/lib/influxdb
When I try to create a new InfluxDB datasource in Grafana pointing at influxdb-2, I get a Network Error: Bad Gateway (502), and the log file shows:
2782ca98a4d7_grafana | 2019/10/05 13:18:50 http: proxy error: dial tcp 172.20.0.4:12380: connect: connection refused
Any ideas?
Thanks
The comment from #hmm provides the answer.
When you create services within Docker Compose, you:
are able to access containers by the service name. Grafana will reference influxdb-2 by that name.
are not able to change the ports a container exposes. Per #hmm, influxdb-2 must still be referenced on port 8086 because that's the port the container exposes; you can't change it unless you change the image.
may (but don't need to) publish the containers' ports to the host, using ports: [[HOST-PORT]]:[[CONTAINER-PORT]].
The long and short of it is that the InfluxDB service in influxdb-2 should be referenced as influxdb-2:8086. If you want to expose this service to the host (!), you could do ports: - 12380:8086. You may change the value of 12380 to something available on your host, but you cannot change the value of the container port (8086).
The main reason to include a ports: mapping on influxdb-2 is for debugging from the host. The grafana service does not require it; it will access the influxdb-2 service through the network provisioned by Docker Compose on port 8086.
You do want to expose the grafana service on the host because otherwise it would be inaccessible to you from the host. Think of it as public vs. private: grafana is host-public, while the influxdb* services can stay host-private because they are generally only needed by the grafana service.
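For concreteness, a corrected influxdb-2 service could look like this (a sketch: the host port 12380 is arbitrary, and the Grafana datasource URL would then be http://influxdb-2:8086):
influxdb-2:
  image: influxdb
  container_name: influxdb-2
  restart: always
  ports:
    - 12380:8086 # HOST:CONTAINER; publish only if you want host access for debugging
  networks:
    - monitoring
  volumes:
    - influxdb-volume-2:/var/lib/influxdb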
HTH!
Related
How can we run docker commands inside a container with docker-compose?
Simply put, I want to get the IP of another container on the same network.
I am running three containers: va-server, db, and api-server. All the containers are on the same Docker network.
Here is my docker-compose file:
version: "2.3"
services:
va-server:
container_name: va_server
image: nitinroxx/facesense:amd64_2022.11.28 #facesense:alpha
runtime: nvidia
restart: always
mem_limit: 4G
networks:
- perimeter-network
db:
container_name: mongodb
image: mongo:latest
ports:
- "27017:27017"
restart: always
volumes:
- ./facesense_db:/data/db
command: [--auth]
networks:
- perimeter-network
api-server:
container_name: api_server
image: nitinroxx/facesense:api_amd64_2022.11.28
ports:
- "80:80"
- "465:465"
restart: always
networks:
- perimeter-network
networks:
perimeter-network:
driver: bridge
ipam:
config:
- gateway: 10.16.239.1
subnet: 10.16.239.0/24
I have installed Docker inside the container, which gives me the permission error below:
docker.errors.DockerException: Error while fetching server API version: ('Connection aborted.', PermissionError(13, 'Permission denied'))
...inside [a] container [...] I want to get IP of some other network container....
Docker provides an internal DNS service that can resolve container names to their Docker-internal IP addresses. From one of the containers you show, you could look up a host name like db to get that container's IP address; but in practice this is a totally normal DNS name, and all but the lowest-level networking interfaces can use those names directly.
This does require that all of the containers involved be on the same Docker network. Normally Compose sets this up automatically for you; in the file you show I might delete the networks: blocks and container_name: overrides in the name of simplicity. Also see Networking in Compose in the Docker documentation.
In short:
You can probably use the Compose service names va-server, db, and api-server as host names without specifically knowing their IP addresses.
This probably means you never need to know the container IP addresses at all (they're usually unusable from outside Docker).
If you do need an IP address from inside a container, a DNS lookup can find it (see the sketch after this list).
You can't usually run docker commands from inside containers, and you can't do so safely without making it possible for the container to take over the whole host. There are usually better patterns that don't tie you to the Docker stack specifically.
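As a quick sketch of that DNS lookup (assuming the images ship getent or nslookup; the container names come from the compose file above):
# from the host, run a lookup inside the api_server container
docker exec -it api_server getent hosts db # glibc-based images
# or, on busybox/alpine-based images
docker exec -it api_server nslookup db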
I built a website using Strapi and Gatsby. Everything works well when I connect to a remote database, but I'm trying to create a DB inside a container and so far no luck.
Essentially, what I did is create the following docker-compose file:
version: '3'
services:
backend:
container_name: myapp_backend
build: ./backend/
ports:
- '3002:3002'
volumes:
- ./backend:/usr/src/myapp/backend
- /usr/src/myapp/backend/node_modules
environment:
- APP_NAME=myapp_backend
- DATABASE_CLIENT=mysql
- DATABASE_HOST=db
- DATABASE_PORT=3307
- DATABASE_NAME=myapp_db
- DATABASE_USERNAME=johnny
- DATABASE_PASSWORD=stecchino
- DATABASE_SSL=false
- DATABASE_AUTHENTICATION_DATABASE=myapp_db
- HOST=localhost
depends_on:
- db
restart: always
db:
container_name: myapp_mysql
image: mysql:5.7
volumes:
- ./db.sql:/docker-entrypoint-initdb.d/db.sql
restart: always
ports:
- 3307:3307
environment:
MYSQL_ROOT_PASSWORD: 5!JF6!FgAkvt
MYSQL_DATABASE: myapp_db
MYSQL_USER: johnny
MYSQL_PASSWORD: stecchino
command: mysqld --character-set-server=utf8 --collation-server=utf8_general_ci --init-connect='SET NAMES UTF8;' --innodb-flush-log-at-trx-commit=0
phpmyadmin:
image: phpmyadmin/phpmyadmin
container_name: 'myapp_phpmyadmin'
links:
- db
environment:
PMA_HOST: db
PMA_PORT: 3307
ports:
- '8081:80'
volumes:
- /sessions
depends_on:
- db
frontend:
container_name: myapp_frontend
build: ./frontend/
ports:
- '3001:3001'
depends_on:
- backend
volumes:
- ./frontend:/usr/src/myapp/frontend
The backend service contains the Strapi application; the db service contains the MySQL instance, which runs on port 3307 because 3306 is already in use.
I have also installed phpMyAdmin, and last but not least there is the Gatsby site. When I run docker-compose up --build and try to access phpMyAdmin using:
http://localhost:8081/index.php
with the following credentials:
user: johnny
pwd: stecchino
I get:
MySQL mysqli::real_connect():(HY000/2002): Connection refused
Now, what I did to fix the situation is pass port 3306 instead of 3307 to the backend and phpmyadmin services. And magically, everything works. But why? I mapped both container and host to 3307...
There are two things happening here.
MySQL is running on port 3306.
This is because you never told the MySQL container to run on port 3307; the default configuration listens on 3306.
phpMyAdmin can connect to MySQL on port 3306.
Of course it can. When you define multiple services within the same docker-compose file, they start on the same network. This means they can see and connect to each other's internal ports without any external port binding like 3306:3306.
I would suggest keeping port bindings only for services that you want to access from outside the Docker environment (like the UI); for internal components, just expose the port like this:
expose:
- 3306
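Applied to this compose file, that means pointing both consumers at MySQL's container-internal port; a sketch showing only the keys that change:
backend:
  environment:
    - DATABASE_HOST=db
    - DATABASE_PORT=3306 # the container port, not the published 3307
phpmyadmin:
  environment:
    PMA_HOST: db
    PMA_PORT: 3306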
Both answers are useful; I am particularly fond of Manish's answer.
I wanted to add some additional wording:
There are internal Docker networks that nothing from the outside can access. From inside any given service (or container), you can reach every other service (or container) via:
<service-name>:<port>/path/of/resources
<container-name>:<port>/path/of/resources
In order to access resources inside the docker network from outside of docker, whether that is from your host environment, or farther upstream on the internet, the docker daemon needs to bind to host ports, and then forward information received on those ports to a docker service (and ultimately a docker container).
In your docker-compose.yml, when you write 3307:3307 you are telling the Docker daemon to listen on host port 3307 and forward to your db service internally on its port 3307.
However, from what we can all see, MySQL is still internally (that is, inside the container) listening for traffic on port 3306. Any containers or services on the same Docker networks as your db service (the running MySQL container(s)) would be able to access MySQL via something like:
<driver>:mysql://db:3306/<dbname>
If you wanted all host traffic and docker network traffic to access mysql on port 3307, you would also need to configure mysql to listen on port 3307 instead of 3306. That tidbit of information does not appear to be in your question at the time of writing.
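If you did want that, here is a sketch of the change (mysqld accepts a --port flag; note the mapping then becomes 3307:3307):
db:
  image: mysql:5.7
  command: mysqld --port=3307 # make mysqld itself listen on 3307
  ports:
    - 3307:3307 # HOST:CONTAINER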
I hope the additional information helps! It's a topic I often chat about when talking Docker with folks.
Because 3306 is the port exposed by the official Dockerfile.
What you can do is map the port MySQL is running on to another port on your host: 3307:3306, for instance (always HOST:CONTAINER).
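In compose terms, that mapping is (a sketch):
db:
  image: mysql:5.7
  ports:
    - 3307:3306 # host 3307 forwards to MySQL's default 3306 inside the container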
I'm facing a relatively simple problem here but I'm starting to wonder why it doesn't work.
I want to start two Docker containers with Docker Compose: InfluxDB and Chronograf.
Unfortunately, Chronograf does not reach InfluxDB under the given hostname: "Unable to connect to InfluxDB Influx 1: Error contacting source"
What could be the reason for this?
Here is my docker-compose.yml:
version: "3.8"
services:
influxdb:
image: influxdb
restart: unless-stopped
ports:
- 8086:8086
volumes:
- influxdb-volume:/var/lib/influxdb
networks:
- test
chronograf:
image: chronograf
restart: unless-stopped
ports:
- 8888:8888
volumes:
- chronograf-volume:/var/lib/chronograf
depends_on:
- influxdb
networks:
- test
volumes:
influxdb-volume:
chronograf-volume:
networks:
test:
driver: bridge
I have also tried starting a shell inside the two containers and then pinging one container from the other, or using wget to fetch the HTTP API of the other container. Even this communication between the containers does not work; both the wget and ping attempts time out.
I should mention that I am using a Banana Pi BPI-M1 here. Is it possible that container-to-container communication fails because of the Linux build on this board?
If not configured, Chronograf will try to access InfluxDB on localhost:8086. To reach the correct InfluxDB instance, you need to specify the URL accordingly, using either the --influxdb-url command line flag or (my personal preference) the INFLUXDB_URL environment variable. Either should be set to http://influxdb:8086, where influxdb is the Docker DNS name derived from the service name in your compose file (the keys one level below services:).
This should do the trick (snippet):
chronograf:
image: chronograf
restart: unless-stopped
ports:
- 8888:8888
volumes:
- chronograf-volume:/var/lib/chronograf
environment:
- INFLUXDB_URL=http://influxdb:8086
depends_on:
- influxdb
networks:
- test
Please check the chronograf readme (section "Using the container with InfluxDB") for details on configuring the image, and the Docker Compose networking docs for more info about networks and DNS naming.
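If you prefer the command-line flag over the environment variable, the equivalent should be roughly this (a sketch; the official image's entrypoint passes the arguments through to chronograf):
chronograf:
  image: chronograf
  command: ["chronograf", "--influxdb-url=http://influxdb:8086"]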
The Docker service creates some iptables entries in the tables filter and nat. My OpenVPN Gateway script executed the following commands at startup:
iptables --flush -t filter
iptables --flush -t nat
This deletes Docker's entries, and communication between the containers (and with the Internet) is no longer possible.
I have rewritten the script and now everything works again.
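For anyone who has already flushed the rules: restarting the Docker daemon recreates its iptables entries (assuming a systemd-based host):
sudo systemctl restart docker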
When everything runs standalone outside of Docker, it works with no problem when core attempts a GET from cerner. However, doing the same when everything is dockerized as below, I get:
Get http://cerner:8602/api/v1/patient/search: dial tcp 192.168.240.4:8602: connect: connection refused. The .4 is the IP of the cerner container, and .2 is the IP of the core container.
cerner is the name of the container being called from core. If I change the name to the IP address of the host server and use the published ports, it also works fine. It just does not allow container-to-container calls using the container's DNS name or IP. I have tried with and without the private network and get the same result.
The containers are all scratch-based Go images.
version: '3.7'
services:
caConnector:
image: vertisoft/ca_connector:latest
ports:
- "8601:7001"
env_file:
- .env.ca_connector
networks:
- core-net
fhir:
image: vertisoft/fhir_connector:latest
container_name: cerner
ports:
- "8602:7002"
env_file:
- .env.fhir_connector
networks:
- core-net
core:
image: vertisoft/core:latest
ports:
- "8600:7000"
env_file:
- .env.core
networks:
- core-net
networks:
core-net:
driver: bridge
You should call the container service on its container port, not its host port, for service-to-service communication. In your case, that is port 7000, 7001, or 7002 (depending on the service) when any container connects using a container name.
Get http://cerner:8602/api/v1/patient/search: dial tcp 192.168.240.4:8602: connect: connection refused.
As the error shows, the connection attempt is made using the published port.
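With the compose file above, core should therefore call the fhir service on its container port, something like this (7002 being the container side of the 8602:7002 mapping):
http://cerner:7002/api/v1/patient/search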
For example:
version: "3"
services:
web:
build: .
ports:
- "8000:8000"
db:
image: postgres
ports:
- "8001:5432"
When you run docker-compose up, the following happens:
A network called myapp_default is created.
A container is created using web’s configuration. It joins the network myapp_default under the name web.
A container is created using db’s configuration. It joins the network myapp_default under the name db.
In v2.1+, overlay networks are always attachable.
Each container can now look up the hostname web or db and get back the appropriate container’s IP address. For example, web’s application code could connect to the URL postgres://db:5432 and start using the Postgres database.
It is important to note the distinction between HOST_PORT and CONTAINER_PORT. In the above example, for db, the HOST_PORT is 8001 and the CONTAINER_PORT is 5432 (the postgres default). Networked service-to-service communication uses the CONTAINER_PORT. When HOST_PORT is defined, the service is accessible outside the swarm as well.
Within the web container, your connection string to db would look like postgres://db:5432, and from the host machine, the connection string would look like postgres://{DOCKER_IP}:8001.
See Networking in Compose in the Docker documentation.
This seems to be a common question, albeit in different contexts, but I'm having a problem connecting to my localhost DB when using Docker.
If I inspect the mysql container using docker inspect, find the IP address, and use this as the DB host in the CMS, it runs fine... the only issue is that the mysql container's IP address changes (upon each docker-compose up, and if I change Wi-Fi networks), so ideally I'd like to use 'localhost' or '127.0.0.1', but for some reason this results in a SQLSTATE[HY000] [2002] Connection refused error.
How can I use 'localhost' or '127.0.0.1' as the DB hostname in CMS applications so I don't have to keep changing it as the container IP address changes?
This is my docker-compose.yml file:
version: "3"
services:
webserver:
build:
context: ./bin/webserver
restart: 'always'
ports:
- "80:80"
- "443:443"
links:
- mysql
volumes:
- ${DOCUMENT_ROOT-./www}:/var/www/html
- ${PHP_INI-./config/php/php.ini}:/usr/local/etc/php/php.ini
- ${VHOSTS_DIR-./config/vhosts}:/etc/apache2/sites-enabled
- ${LOG_DIR-./logs/apache2}:/var/log/apache2
networks:
mynet:
aliases:
- john.dev
mysql:
image: 'mysql:5.7'
restart: 'always'
ports:
- "3306:3306"
volumes:
- ${MYSQL_DATA_DIR-./data/mysql}:/var/lib/mysql
- ${MYSQL_LOG_DIR-./logs/mysql}:/var/log/mysql
environment:
MYSQL_ROOT_PASSWORD: example
networks:
- mynet
phpmyadmin:
image: phpmyadmin/phpmyadmin
links:
- mysql
environment:
PMA_HOST: mysql
PMA_PORT: 3306
ports:
- '8080:80'
volumes:
- /sessions
networks:
- mynet
networks:
mynet:
Try using mysql instead of localhost.
You are defining a link between the webserver container and the mysql container, so the webserver container is able to resolve the IP of mysql.
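In practice, that means setting the CMS's database host to the compose service name instead of localhost; a sketch with hypothetical variable names (use whatever config keys your CMS actually reads):
webserver:
  environment:
    - DB_HOST=mysql # hypothetical name; the point is host = service name
    - DB_PORT=3306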
According to the Docker documentation:
Docker Cloud gives your containers two ways to find other services:
Using service and container names directly as hostnames
Using service links, which are based on Docker Compose links
Service and Container Hostnames update automatically when a service scales up or down or redeploys. As a user, you can configure service names, and Docker Cloud uses these names to find the IP of the services and containers for you. You can use hostnames in your code to provide abstraction that allows you to easily swap service containers or components.
Service links create environment variables which allow containers to communicate with each other within a stack, or with other services outside of a stack. You can specify service links explicitly when you create a new service or edit an existing one, or specify them in the stackfile for a service stack.
From the Docker Compose documentation:
Containers for the linked service are reachable at a hostname identical to the alias, or the service name if no alias was specified.