Install InfluxDB in Docker with a non-standard port - docker

I am new to Docker and I need some help, please.
I am trying to install the TICK stack in Docker. InfluxDB, Kapacitor and Chronograf will run in containers, but Telegraf will be installed on each machine where it is needed.
Port 8086 on my host is in use, so I will use 8087 for InfluxDB. Is it possible to configure the InfluxDB container with -p 8087:8086? If so, which port should I configure in the conf files?
Docker compose file will be:
version: '3'
networks:
  TICK_network:
services:
  influxdb:
    image: influxdb
    container_name: influxdb
    networks:
      TICK_network:
    ports:
      - "8087:8086"
      - "8083:8083"
    expose:
      - "8087"
      - "8083"
    hostname: influxdb
    volumes:
      - /var/lib/influxdb:/var/lib/influxdb
      - /etc/influxdb/influxdb.conf:/etc/influxdb/influxdb.conf:ro
    restart: unless-stopped
  kapacitor:
    image: kapacitor
    container_name: kapacitor
    networks:
      TICK_network:
    links:
      - influxdb
    ports:
      - "9092:9092"
    expose:
      - "9092"
    hostname: kapacitor
    volumes:
      - /var/lib/kapacitor:/var/lib/kapacitor
      - /etc/kapacitor/kapacitor.conf:/etc/kapacitor/kapacitor.conf:ro
    restart: unless-stopped
  chronograf:
    image: chronograf
    container_name: chronograf
    networks:
      TICK_network:
    links:
      - influxdb
      - kapacitor
    ports:
      - "8888:8888"
    expose:
      - "8888"
    hostname: chronograf
    volumes:
      - /var/lib/chronograf:/var/lib/chronograf
    restart: unless-stopped
influxdb.conf is edited to point to port 8087:
[http]
enabled = true
bind-address = ":8087"
auth-enabled = true
kapacitor.conf and telegraf.conf also point to port 8087.
But I am receiving following errors:
Telegraf log:
W! [outputs.influxdb] when writing to [http://localhost:8087]: database "telegraf" creation failed: Post http://localhost:8087/query: EOF
E! [outputs.influxdb] when writing to [http://localhost:8087]: Post http://localhost:8087/write?db=tick: EOF
E! [agent] Error writing to outputs.influxdb: could not write any address
kapacitor log:
lvl=error msg="encountered error" service=run err="open server: open service *influxdb.Service: failed to link subscription on startup: authorization failed"
run: open server: open service *influxdb.Service: failed to link subscription on startup: authorization failed

What you did is correct if you want to access those services from outside the Docker network, that is, from the host: localhost:8087, for example.
However, this is not what you need here. As you are using docker-compose, all the services are on the same network, and therefore you need to target the port InfluxDB is listening on inside the Docker network (the right-hand port of the mapping), that is, 8086.
But even if you do so, it will still not work. Why? Because you are trying to access localhost from the Telegraf container. You need to configure the access to InfluxDB as influxdb:8086, not as localhost:8087. influxdb here is the name of the container; if, for example, you had named it ailb90, it would be ailb90:8086.
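For illustration, a minimal kapacitor.conf sketch under that assumption (service name influxdb, default in-container port 8086; the section layout follows the stock kapacitor.conf, adjust names to your compose file):
# kapacitor.conf: reach InfluxDB via its compose service name, on the
# port it listens on inside the container (8086 by default)
[[influxdb]]
  enabled = true
  default = true
  urls = ["http://influxdb:8086"]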

Thanks for your answer. But Telegraf is not installed in a container; this is why I access the database using urls = ["http://localhost:8087"].
On the other hand, Kapacitor is installed in a Docker container. The connection to InfluxDB is made using the string urls=["https://influxdb:8087"]. If I configure Kapacitor with port 8086 it gives a connection error (I think because influxdb.conf is pointing to port 8087):
lvl=error msg="failed to connect to InfluxDB, retrying..." service=influxdb cluster=default err="Get http://influxdb:8086/ping: dial tcp 172.0.0.2:8086: connect: connection refused"
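One consistent arrangement (a sketch, not from the thread): leave InfluxDB on its default 8086 inside the container and let the -p mapping do the renumbering, so only host-side clients ever see 8087:
# influxdb.conf inside the container: back to the default port
[http]
  enabled = true
  bind-address = ":8086"
  auth-enabled = true
# The compose mapping stays "8087:8086": host-side Telegraf keeps
#   urls = ["http://localhost:8087"]
# while Kapacitor, inside the compose network, uses
#   urls = ["http://influxdb:8086"]
# (the remaining "authorization failed" in the Kapacitor log suggests
# credentials for auth-enabled = true still need to be supplied)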

Related

Docker & Hive - Port 50070 on Windows: permission denied

I want to set up a local Hive server and found this repo:
https://github.com/big-data-europe/docker-hive
This is the yaml file I use:
version: "3"
services:
namenode:
image: bde2020/hadoop-namenode:2.0.0-hadoop2.7.4-java8
volumes:
- namenode:/hadoop/dfs/name
environment:
- CLUSTER_NAME=test
env_file:
- ./hadoop-hive.env
ports:
- "50070:50070"
datanode:
image: bde2020/hadoop-datanode:2.0.0-hadoop2.7.4-java8
volumes:
- datanode:/hadoop/dfs/data
env_file:
- ./hadoop-hive.env
environment:
SERVICE_PRECONDITION: "namenode:50070"
ports:
- "50075:50075"
hive-server:
image: bde2020/hive:2.3.2-postgresql-metastore
env_file:
- ./hadoop-hive.env
environment:
HIVE_CORE_CONF_javax_jdo_option_ConnectionURL: "jdbc:postgresql://hive-metastore/metastore"
SERVICE_PRECONDITION: "hive-metastore:9083"
ports:
- "10000:10000"
hive-metastore:
image: bde2020/hive:2.3.2-postgresql-metastore
env_file:
- ./hadoop-hive.env
command: /opt/hive/bin/hive --service metastore
environment:
SERVICE_PRECONDITION: "namenode:50070 datanode:50075 hive-metastore-postgresql:5432"
ports:
- "9083:9083"
hive-metastore-postgresql:
image: bde2020/hive-metastore-postgresql:2.3.0
presto-coordinator:
image: shawnzhu/prestodb:0.181
ports:
- "8080:8080"
volumes:
namenode:
datanode:
Error:
Error starting userland proxy: Bind for 0.0.0.0:50075: unexpected error Permission denied
The ports >50000 are blocked on Windows, and I don't have admin rights on my company PC, so I tried to map the ports like this:
ports:
  - "40070:50070"
environment:
  SERVICE_PRECONDITION: "namenode:40070 datanode:40075 hive-metastore-postgresql:5432"
This lets the containers start, but they do not seem to be able to communicate:
hive-metastore_1 | [1/100] check for namenode:40070...
hive-metastore_1 | [1/100] namenode:40070 is not available yet
hive-metastore_1 | [1/100] try in 5s once again ...
956a5237dbe2_docker-hive_datanode_1 | [4/100] check for namenode:40070...
956a5237dbe2_docker-hive_datanode_1 | [4/100] namenode:40070 is not available yet
I tried to change both ports:
ports:
  - "40070:40070"
This does not work either, because some ports seem to be hardcoded:
ded7410db1b9_docker-hive_namenode_1 | 21/10/08 12:39:05 INFO hdfs.DFSUtil: Starting Web-server for hdfs at: http://0.0.0.0:50070
ded7410db1b9_docker-hive_namenode_1 | 21/10/08 12:39:05 INFO http.HttpServer2: Jetty bound to port 50070
Does anyone know how to get this running?
With the following:
ports:
  - "40070:50070"
all you are doing is directing traffic from host port 40070 to container port 50070.
So to access "namenode" from the host machine for example:
localhost:40070
And to access "namenode" inside the compose network:
namenode:50070
The service precondition in the BDE images repeatedly probes each listed container and port to see whether the service is running before starting its own services, ensuring dependencies are ready first. You have not changed the port running on the container, so your containers should still communicate via port 50070.
You have incorrectly changed the precondition to scan for your host port 40070, whereas it should look for the internal network container port 50070 regardless of the host port.
Change it to the following:
ports:
  - "40070:50070"
environment:
  SERVICE_PRECONDITION: "namenode:50070 datanode:50075 hive-metastore-postgresql:5432"
You can change the ports Hive etc. operate on with the environment variable file provided, but you shouldn't need to. Mapping host port 40070 to container port 50070 has no impact on the operation of the Docker services.
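To make the two vantage points concrete, a quick check (example commands, not from the original answer, using the mapping above):
# from the Windows host: use the published (left-hand) port
curl http://localhost:40070
# from another container on the compose network: use the container port
curl http://namenode:50070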

Multiple isolated Elasticsearch clusters with a single docker-compose file

I want to create 2 Elasticsearch clusters in a single docker-compose file, so that I can test a few changes only on the new ES cluster.
My docker-compose file looks like this:
version: "2.2"
services:
elasticsearch-master:
image: elasticsearch:6.6.0
volumes:
- esdata1:/usr/share/elasticsearch/data
ports:
- "9200:9200"
mem_limit: '2048M'
new-elasticsearch-master:
image: elasticsearch:6.6.0
volumes:
- esdata2:/usr/share/elasticsearch/data
ports:
- "9400:9200"
mem_limit: '2048M'
search:
image: search:latest
entrypoint: java -Delasticsearch.host=elasticsearch-master -DnewElasticsearch.host=new-elasticsearch-master -DnewElasticsearch.port=9400 -jar app.jar
ports:
- "8083:8083"
depends_on:
- elasticsearch-master
- new-elasticsearch-master
mem_limit: '500M'
volumes:
esdata1:
esdata2:
I have one Java service where I add both hosts with different environment variables:
-Delasticsearch.host=elasticsearch-master
-DnewElasticsearch.host=new-elasticsearch-master
But when I run the following code from the Java search service:
new RestTemplate().getForEntity("http://elasticsearch-master:9200/_cat/indices?v",String.class)
This gives me the correct response.
But when I try to connect to the other host on 9400:
new RestTemplate().getForEntity("http://new-elasticsearch-master:9400/_cat/indices?v",String.class)
I get a Connection Refused error.
When I try the same host with 9200, it gives me a 200 response:
new RestTemplate().getForEntity("http://new-elasticsearch-master:9200/_cat/indices?v",String.class)
Can someone please tell me how I can make two separate connections on different ports, as below?
http://elasticsearch-master:9200
http://new-elasticsearch-master:9400
Thanks
You got the expected behavior. The ports field in docker-compose maps the ports to your localhost, which means that the "old" Elasticsearch will be available via localhost:9200 and the "new" Elasticsearch via localhost:9400.
On the other hand, docker-compose services communicate over an internal network, where the service name is the hostname and the port is the original listening port.
Thus, you are able to access (internally) your old one via http://elasticsearch-master:9200 and the new one via http://new-elasticsearch-master:9200.
If you wish to use the new Elasticsearch on 9400, you need to change its settings: http.port. You can do that like this:
new-elasticsearch-master:
  image: elasticsearch:6.6.0
  volumes:
    - esdata2:/usr/share/elasticsearch/data
  environment:
    - http.port=9400
  ports:
    - "9400:9400"
  mem_limit: '2048M'
Note that you have to change the port mapping as well (because it now maps your new port, 9400, to localhost's 9400).
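With that change, the original call from the search service should work unmodified (same URL as in the question):
new RestTemplate().getForEntity("http://new-elasticsearch-master:9400/_cat/indices?v", String.class)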

database "telegraf" creation failed: Post "http://influxdb:8086/query": dial tcp 172.31.0.2:8086: connect: connection refused

I installed four containers on my EC2 instance and every container is running fine. One of the containers is Telegraf and another is InfluxDB. I am trying to write data from Telegraf to InfluxDB; the data reaching Telegraf comes from AWS Kinesis. With everything up and running, data from Kinesis arrives in Telegraf, but nothing gets from Telegraf to InfluxDB.
Here is what I changed in the telegraf.conf file to write to InfluxDB:
[[outputs.influxdb]]
  urls = ["influxdb:8086"]
  #urls = ["http://localhost:8086"]
  database = "telegraf"
This is a snippet from my docker-compose.yml file.
influxdb:
  image: influxdb:1.8.2
  container_name: influxdb
  restart: always
  ports:
    - 8086:8086
  networks:
    - analytics
  volumes:
    - /mnt/db/:/mnt/db/
    - /mnt/influx/:/mnt/influx/
    - ./etc/influxdb/influxdb.conf:/etc/influxdb/influxdb.conf
telegraf:
  image: telegraf
  container_name: telegraf
  restart: always
  depends_on:
    - influxdb
  networks:
    - analytics
  volumes:
    - telegraf-storage:/var/lib/telegraf
    - ./etc/telegraf/telegraf.conf:/etc/telegraf/telegraf.conf
  environment:
    INFLUXDB_URL: http://influxdb:8086
    # - username=admin
    # - password=admin
  links:
    - influxdb
This is the error I am getting. I checked that the port is listening, and data is coming into Telegraf from Kinesis:
[outputs.influxdb] When writing to [http://influxdb:8086]: database "telegraf" creation failed: Post "http://influxdb:8086/query": dial tcp 172.31.0.2:8086: connect: connection refused

Minio / Keycloak integration: connection refused

I am trying to connect MinIO with KeyCloak and I follow the instructions provided in this documentation:
https://github.com/minio/minio/blob/master/docs/sts/keycloak.md
What I have done so far is deploy a Docker container for the MinIO server, another one for the MinIO Client, and a third one for the KeyCloak server.
As you can see in the following snippet, the configuration of the MinIO Client container is done correctly, since I can list the buckets available on the MinIO server:
mc ls myminio
[2020-05-14 11:54:59 UTC] 0B bucket1/
[2020-05-06 12:23:01 UTC] 0B bucket2/
I hit an issue when I try to configure MinIO as described in step 3 (Configure MinIO) of the documentation. In more detail, the command I run is this one:
mc admin config set myminio identity_openid config_url="http://localhost:8080/auth/realms/demo/.well-known/openid-configuration" client_id="account"
And the error I get is this one:
mc: <ERROR> Cannot set 'identity_openid config_url=http://localhost:8080/auth/realms/demo/.well-known/openid-configuration client_id=account' to server. Get http://localhost:8080/auth/realms/demo/.well-known/openid-configuration: dial tcp 127.0.0.1:8080: connect: connection refused.
When I curl this address http://localhost:8080/auth/realms/demo/.well-known/openid-configuration from the MinIO Client container though, I retrieve the JSON file.
Turns out, all I had to do was change localhost in the config_url to the IP of the KeyCloak container (172.17.0.3).
This is just a temporary solution that works for now, but I will keep searching for something more robust than hardcoding the IP.
When I figure out the solution, this answer will be updated.
Update
I had to create a docker-compose.yml file like the one below in order to overcome the issue without having to manually look up the IP of the KeyCloak container.
version: '2'
services:
  miniod:
    image: minio/minio
    restart: always
    container_name: miniod
    ports:
      - 9000:9000
    volumes:
      - "C:/data:/data"
    environment:
      - "MINIO_ACCESS_KEY=access_key"
      - "MINIO_SECRET_KEY=secret_key"
    command: ["server", "/data"]
    networks:
      - minionw
  mcd:
    image: minio/mc
    container_name: mcd
    networks:
      - minionw
  kcd:
    image: quay.io/keycloak/keycloak:10.0.1
    container_name: kcd
    restart: always
    ports:
      - 8080:8080
    environment:
      - "KEYCLOAK_USER=admin"
      - "KEYCLOAK_PASSWORD=pass"
    networks:
      - minionw
networks:
  minionw:
    driver: "bridge"
Connection refused occurs when a port is not accessible on the hostname or IP you specified.
Try exposing the port using the --expose flag, along with the port number you wish to expose, when using the docker CLI. Once it is exposed, you can access it on localhost.

Docker Grafana with two InfluxDBs: Connection refused

I created a new Docker stack where I need several InfluxDB instances, and at the moment I can't connect them to my Grafana container. Here is part of my docker-compose.yml:
services:
  grafana:
    image: grafana/grafana
    container_name: grafana
    restart: always
    ports:
      - 3000:3000
    networks:
      - monitoring
    volumes:
      - grafana-volume:/var/lib/grafana
  influxdb:
    image: influxdb
    container_name: influxdb
    restart: always
    ports:
      - 8086:8086
    networks:
      - monitoring
    volumes:
      - influxdb-volume:/var/lib/influxdb
  influxdb-2:
    image: influxdb
    container_name: influxdb-2
    restart: always
    ports:
      - 12380:12380
    networks:
      - monitoring
    volumes:
      - influxdb-volume-2:/var/lib/influxdb
When I try to create a new InfluxDB datasource in Grafana with influxdb-2, I get a Network Error: Bad Gateway (502), and the log file shows:
2782ca98a4d7_grafana | 2019/10/05 13:18:50 http: proxy error: dial tcp 172.20.0.4:12380: connect: connection refused
Any ideas?
Thanks
#hmm provides the answer.
When you create services within Docker Compose, you:
- are able to access containers by the service name. Grafana will reference influxdb-2 by that name.
- are not able to change the ports a container exposes. Per #hmm, influxdb-2 must still be referenced on port 8086 because that's the port the container exposes; you can't change it unless you change the image.
- may (but don't need to) publish the containers' ports to the host (using ports: [HOST-PORT]:[CONTAINER-PORT]).
The long and the short of it is that the InfluxDB service in influxdb-2 should be referenced as influxdb-2:8086. If you want to expose this service to the host (!), you could do ports: - 12380:8086. You may change the value 12380 to anything available on your host, but you cannot change the container port (8086).
The main reason to include the ports: mapping on influxdb-2 is for debugging from the host. The grafana service does not require it: it accesses the influxdb-2 service through the network provisioned by Docker Compose on port 8086.
You do want to expose the grafana service on the host because otherwise it would be inaccessible to you (from the host). It's akin to public vs. private: grafana is host-public, but the influxdb* services may be host-private because they are generally only needed by the grafana service.
HTH!
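For reference, a sketch of the influxdb-2 service with the mapping described above (the host-side 12380 is arbitrary; the Grafana datasource URL would then be http://influxdb-2:8086):
influxdb-2:
  image: influxdb
  container_name: influxdb-2
  restart: always
  ports:
    - 12380:8086   # host 12380 -> container 8086, the port the image exposes
  networks:
    - monitoring
  volumes:
    - influxdb-volume-2:/var/lib/influxdb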
