InfluxDB and cAdvisor integration issue - Docker

I want to access the data gathered by cAdvisor through InfluxDB.
Here are my Docker configurations:
// for cAdvisor
docker run \
  --volume=/:/rootfs:ro \
  --volume=/var/run:/var/run:rw \
  --volume=/sys:/sys:ro \
  --volume=/var/lib/docker/:/var/lib/docker:ro \
  --publish=8080:8080 \
  --detach=true \
  --name=cadvisorDB \
  google/cadvisor:latest \
  -storage_driver=influxdb \
  -storage_driver_host=127.0.0.1:8086 \
  -storage_driver_db=databaseName
// for InfluxDB
docker run \
  -d \
  -p 8083:8083 \
  -p 8086:8086 \
  --expose 8090 \
  --expose 8099 \
  tutum/influxdb
// and I created the database manually through the web UI on localhost:8083, with the name databaseName
So once I start the two containers, I go to InfluxDB to explore the data (by making a query). An error says that there is no data.

Everything in the configuration looks fine. The problem is probably in this line:
-storage_driver_host=127.0.0.1:8086
because 127.0.0.1 refers to the cAdvisor container's own localhost, not your host. Try the Docker NAT (bridge) IP instead (usually 172.17.42.1).
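If that address doesn't match your setup, you can look the bridge IP up on the host; a minimal sketch, assuming the default bridge network and the standard docker0 interface name:

# Ask Docker for the bridge gateway (the host address as seen from containers)
docker network inspect bridge --format '{{(index .IPAM.Config 0).Gateway}}'

# Or read it straight from the docker0 interface
ip -4 addr show docker0

Then pass that address in the flag, i.e. -storage_driver_host=<bridge-ip>:8086.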

This is what I use in my "docker-compose" YAML file. It should be very easy to translate to the usual "docker run" syntax (a rough translation is sketched after the note below). In my case I'm linking the InfluxDB container into cAdvisor, so cAdvisor can resolve the hostname "influxdb" regardless of the internal Docker IP assigned to the container.
influxdb:
  image: tutum/influxdb
  hostname: influxdb
  volumes:
    - ./influxdb:/data
  environment:
    - PRE_CREATE_DB=cadvisor
  ports:
    - "8083:8083"
    - "8086:8086"
  expose:
    - "8090"
    - "8099"

cadvisor:
  image: google/cadvisor
  hostname: cadvisor
  volumes:
    - /:/rootfs:ro
    - /var/run:/var/run:rw
    - /sys:/sys:ro
    - /var/lib/docker:/var/lib/docker:ro
  ports:
    - "8089:8080"
  links:
    - influxdb
  command: -storage_driver=influxdb -storage_driver_db=cadvisor -storage_driver_host=influxdb:8086
NOTE: InfluxDB can create your DB automatically if you set the PRE_CREATE_DB environment variable.
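For reference, a rough translation of that compose file into plain docker run commands; a sketch, assuming you name the containers influxdb and cadvisor so the link can resolve the hostname:

docker run -d \
  --name influxdb \
  --hostname influxdb \
  -e PRE_CREATE_DB=cadvisor \
  -p 8083:8083 -p 8086:8086 \
  --expose 8090 --expose 8099 \
  -v "$PWD/influxdb:/data" \
  tutum/influxdb

docker run -d \
  --name cadvisor \
  --link influxdb:influxdb \
  --volume=/:/rootfs:ro \
  --volume=/var/run:/var/run:rw \
  --volume=/sys:/sys:ro \
  --volume=/var/lib/docker:/var/lib/docker:ro \
  --publish=8089:8080 \
  google/cadvisor:latest \
  -storage_driver=influxdb \
  -storage_driver_db=cadvisor \
  -storage_driver_host=influxdb:8086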

Related

docker-compose Nextcloud Instance behind reverse proxy Bad gateway 502

I want to switch from using the docker run command to a docker-compose file for my Nextcloud instance, which runs behind a reverse proxy (jwilder/nginx-proxy).
This is the run command I used to use:
sudo docker run -d -p 8080:80 --expose 80 --expose 443 -e VIRTUAL_HOST=nextcloud.example.com -v nextcloud:/var/www/html --restart=always --name=nextcloud nextcloud:24.0.8
I installed MariaDB inside the container later so that I didn't have to struggle with networking. I also use port 8080 only in my internal network for fast uploading and downloading.
This worked quite well, but now I want to create a similar environment with docker-compose:
version: '3.8'

volumes:
  nextcloud:
  db:

services:
  db:
    image: mariadb:10.5
    restart: always
    command: --transaction-isolation=READ-COMMITTED --binlog-format=ROW
    volumes:
      - db:/var/lib/mysql
    environment:
      - MYSQL_ROOT_PASSWORD=my-super-strong-password
      - MYSQL_PASSWORD=my-other-super-strong-password
      - MYSQL_DATABASE=nextcloud
      - MYSQL_USER=nextcloud
  app:
    image: nextcloud:24.0.8
    restart: always
    ports:
      - 8080:80
    expose:
      - 80
      - 443
    links:
      - db
    volumes:
      - nextcloud:/var/www/html
    environment:
      - MYSQL_PASSWORD=my-other-super-strong-password
      - MYSQL_DATABASE=nextcloud
      - MYSQL_USER=nextcloud
      - MYSQL_HOST=db
      - PHP_MEMORY_LIMIT=1G
      - PHP_UPLOAD_LIMIT=128M
      - VIRTUAL_HOST=nextcloud.example.com
The containers start successfully and I can use Nextcloud in my internal network, but I cannot reach it from my domain. Instead I get a 502 Bad Gateway. The VIRTUAL_HOST redirection itself seems to work, since otherwise I'd get a 503 Service Temporarily Unavailable instead.
I think exposing the ports 80 and 443 doesn't work.
I've tried to add a proxy network:
networks:
  proxy:
    driver: bridge
    external: true
and added
networks:
  - default
to the db service and
networks:
  - default
  - proxy
to the app service.
That didn't fix the problem. Does any of you have an idea what I can try next?
I've tried different ways to expose the ports and tried to create different networks.
Never mind, I found the problem.
Instead of simply creating a network named proxy, I had to create a new jwilder reverse-proxy service via docker compose with a name, for example myreverseproxy. In each service I want to make public, I needed to reference this proxy network:
networks:
  - default
  - myreverseproxy
I also had to use the name in the top-level networks section:
networks:
  myreverseproxy:
    external: true
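For completeness, a minimal sketch of what that reverse-proxy service could look like; the service name myreverseproxy is the example name from above, and the rest follows the standard jwilder/nginx-proxy setup (the external network must be created beforehand with docker network create myreverseproxy):

services:
  myreverseproxy:
    image: jwilder/nginx-proxy
    restart: always
    ports:
      - 80:80
      - 443:443
    volumes:
      # nginx-proxy watches the Docker API to generate its routing config
      - /var/run/docker.sock:/tmp/docker.sock:ro
    networks:
      - myreverseproxy

networks:
  myreverseproxy:
    external: true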

Docker inter-container communication

I'm facing a relatively simple problem here but I'm starting to wonder why it doesn't work.
I want to start two Docker containers with Docker Compose: InfluxDB and Chronograf.
Unfortunately, Chronograf does not reach InfluxDB under the given hostname: "Unable to connect to InfluxDB Influx 1: Error contacting source"
What could be the reason for this?
Here is my docker-compose.yml:
version: "3.8"
services:
influxdb:
image: influxdb
restart: unless-stopped
ports:
- 8086:8086
volumes:
- influxdb-volume:/var/lib/influxdb
networks:
- test
chronograf:
image: chronograf
restart: unless-stopped
ports:
- 8888:8888
volumes:
- chronograf-volume:/var/lib/chronograf
depends_on:
- influxdb
networks:
- test
volumes:
influxdb-volume:
chronograf-volume:
networks:
test:
driver: bridge
I have also tried starting a shell inside the two containers and then pinging one container from the other, or using wget to reach the HTTP API of the other container. Even this communication between the containers does not work; both the wget and ping attempts time out.
It must be said that I am using a Banana Pi BPI-M1 here. Is it possible that container-to-container communication fails because of something in its Linux setup?
If not configured, Chronograf will try to access InfluxDB on localhost:8086. To be able to reach the correct InfluxDB instance, you need to specify the URL accordingly, using either the --influxdb-url command line flag or (personal preference) an environment variable INFLUXDB_URL. Either should be set to the value http://influxdb:8086, which is the Docker DNS name derived from the service name in your compose file (the keys one level below services).
This should do the trick (snippet):
chronograf:
  image: chronograf
  restart: unless-stopped
  ports:
    - 8888:8888
  volumes:
    - chronograf-volume:/var/lib/chronograf
  environment:
    - INFLUXDB_URL=http://influxdb:8086
  depends_on:
    - influxdb
  networks:
    - test
Please check the Chronograf README (section "Using the container with InfluxDB") for details on configuring the image, and the Docker Compose networking docs for more about networks and DNS naming.
The Docker service creates iptables entries in the filter and nat tables. My OpenVPN gateway script executed the following commands at startup:
iptables --flush -t filter
iptables --flush -t nat
These delete Docker's entries, and communication between the containers and with the Internet is no longer possible.
I have rewritten the script and now everything works again.
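As a side note: if the rules have already been flushed, restarting the Docker daemon recreates its iptables chains; a minimal sketch, assuming a systemd-based host:

# Docker rebuilds its DOCKER/DOCKER-USER chains on startup
sudo systemctl restart docker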

How can I verify that Cassandra is working

I have installed 3 Docker containers with the docker-compose.yml below
version: '3'
services:
  nginx:
    image: nginx:alpine
    volumes:
      - ./app:/app
      - ./nginx-config/:/etc/nginx/conf.d/
    ports:
      - 80:80
    depends_on:
      - php
  php:
    image: php:7.1-fpm-alpine
    volumes:
      - ./app:/app
  cassandra:
    image: 'docker.io/bitnami/cassandra:3-debian-10'
    ports:
      - '7000:7000'
      - '9042:9042'
    volumes:
      - ./app:/app
    environment:
      - CASSANDRA_SEEDS=cassandra
      - CASSANDRA_PASSWORD_SEEDER=yes
      - CASSANDRA_PASSWORD=cassandra
My question is: when I browse to localhost:7000 or even localhost:9042, nothing works.
All containers show as running when I run docker ps.
Neither of the ports you have tried in the browser is an HTTP port.
- '7000:7000'
- '9042:9042'
By default, Cassandra uses 7000 for cluster communication (7001 if SSL is enabled), 9042 for native protocol clients, and 7199 for JMX. The internode communication and native protocol ports are configurable in the Cassandra Configuration File. The JMX port is configurable in cassandra-env.sh (through JVM options). All ports are TCP.
Cassandra Ports
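For reference, a sketch of how those defaults appear in cassandra.yaml (the JMX port is set via JVM options in cassandra-env.sh instead):

# cassandra.yaml (default values)
storage_port: 7000             # inter-node cluster communication
ssl_storage_port: 7001         # inter-node communication when SSL is enabled
native_transport_port: 9042    # CQL native protocol clients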
You can verify Cassandra's status and connectivity from inside the container, or you need to install the client on the host to check connectivity.
Run docker ps, copy the Cassandra container name, then run the command below.
docker exec -it container_name bash -c "cqlsh -u cassandra -p cassandra"
You can expect output like
[cqlsh 5.0.1 | Cassandra 3.11.6 | CQL spec 3.4.4 | Native protocol v4]
Use HELP for help.
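From there, a quick sanity check is to query the node's own metadata, or ask nodetool for the cluster status; a sketch reusing the container name from above:

# One-off CQL query; a healthy node reports its cluster name and version
docker exec -it container_name bash -c \
  "cqlsh -u cassandra -p cassandra -e 'SELECT cluster_name, release_version FROM system.local;'"

# Cluster-level view: each node should show UN (Up/Normal)
docker exec -it container_name nodetool status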

Port exposed with docker run but not docker-compose up

I am trying to run RabbitMQ along with the InfluxDB TICK stack with docker-compose. When I run RabbitMQ with this command: docker run -d --rm -p 5672:5672 -p 15672:15672 rabbitmq:3-management, both ports are open and I am able to access them from a remote machine. However, when I run RabbitMQ as part of a docker-compose file, it is not accessible from a remote machine. Here is my docker-compose.yml file:
version: "3.7"
services:
influxdb:
image: influxdb
volumes:
- ./influxdb/influxdb/data/:/var/lib/influxdb/
- ./influxdb/influxdb/config/:/etc/influxdb/
ports:
- "8086:8086"
rabbitmq:
image: rabbitmq:3-management
volumes:
- ./rabbitmq/data:/var/lib/rabbitmq
ports:
- "15672:15672"
- "5672:5627"
telegraf:
image: telegraf
volumes:
- ./influxdb/telegraf/config/:/etc/telegraf/
- /proc:/host/proc:ro
depends_on:
- "influxdb"
- "rabbitmq"
chronograf:
image: chronograf
volumes:
- ./influxdb/chronograf/data/:/var/lib/chronograf/
ports:
- "8888:8888"
depends_on:
- "telegraf"
More information: when I run this with docker-compose up -d, ports 8086 and 8888 are accessible from a remote machine (I confirmed this using the nmap command). Also, either way I am able to access the RabbitMQ management console at http://localhost:15672.
How can I set this up so I can access rabbitmq from a remote machine using docker-compose?
Thank you.
Looks like just a typo in the port mapping in docker-compose.yml: 5672:5627 should actually be 5672:5672.
Otherwise the docker-compose configuration looks just fine.
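That is, the rabbitmq service's mapping should read:

ports:
  - "15672:15672"
  - "5672:5672"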

Expose container DNS to another container?

Using Docker Compose and Traefik, I am trying to have the application container and the Solr container communicate with each other in a local environment.
Currently, I can access both the application and the Solr URL in the browser just fine, but they cannot 'see' or talk to one another internally.
I am new to Docker. Here is a section of my docker-compose file with the relevant containers:
php:
  image: wodby/drupal-php:$PHP_TAG
  container_name: "${PROJECT_NAME}_php"
  environment:
    PHP_SENDMAIL_PATH: /usr/sbin/sendmail -t -i -S mailhog:1025
    DB_HOST: $DB_HOST
    DB_USER: $DB_USER
    DB_PASSWORD: $DB_PASSWORD
    DB_NAME: $DB_NAME
    DB_DRIVER: $DB_DRIVER
    PHP_FPM_USER: wodby
    PHP_FPM_GROUP: wodby
    COLUMNS: 80
  volumes:
    - ./:/var/www/html:cached

solr:
  image: wodby/solr:$SOLR_TAG
  container_name: "${PROJECT_NAME}_solr"
  environment:
    SOLR_DEFAULT_CONFIG_SET: $SOLR_CONFIG_SET
    SOLR_HEAP: 1024m
  labels:
    - 'traefik.backend=${PROJECT_NAME}_solr'
    - 'traefik.port=8983'
    - 'traefik.frontend.rule=Host:solr.${PROJECT_BASE_URL}'

traefik:
  image: traefik
  container_name: "${PROJECT_NAME}_traefik"
  command: -c /dev/null --web --docker --logLevel=INFO
  ports:
    - '80:80'
    - '8983:8983'
  volumes:
    - /var/run/docker.sock:/var/run/docker.sock
I can access Solr at the given URL, but the application cannot see it at the same URL. I need the application to be able to talk to Solr so it can crawl/index content.
Is there a way to expose them so they can see each other by hostname?
docker-compose has container DNS resolution built in. You can expose a specific port on your container to enable access within the Docker network, or define ports (as you have done for your traefik container) to expose it both within the Docker network and externally. In either case, you will be able to access another container by its name (e.g. php, solr, or traefik in this case) on the exposed port from another container.
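As a quick check, you can exercise that DNS resolution from inside the php container; a sketch, assuming curl is available in the image and Solr listens on its default port 8983 (as the traefik.port label suggests):

# Resolve the solr service by name from inside the php container
docker-compose exec php ping -c 1 solr

# Hit Solr's HTTP interface over the internal network
docker-compose exec php curl -s http://solr:8983/solr/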
