ioredis Cannot connect to Redis cluster running in Docker - docker

I have a Nest.js application that connects to a Redis cluster using the provided host/port like this:
const port = this.configService.get('REDIS_PORT')
const host = this.configService.get('REDIS_HOST').replace('rediss://', '')
this.logger.log(`Initializing Redis client at ${host}:${port}`)

const dev = true
const clusterConfig: ClusterOptions = {
  scaleReads: 'all',
  enableAutoPipelining: true,
  redisOptions: {
    password: this.configService.get('REDIS_PASSWORD'),
    db: 0,
    showFriendlyErrorStack: true,
    tls: dev
      ? undefined
      : {
          checkServerIdentity: (_host, _cert) => {
            // skip certificate hostname validation
            return undefined
          },
        },
  },
}

this.redis = new Cluster(
  [{ port: 6379, host: 'localhost' }],
  clusterConfig,
)
On the stage environment this works fine. If I run the app inside Docker locally (where the Redis cluster is set up using docker compose), it also works fine.
BUT, if I run the Redis cluster in docker compose and the application on the host machine, and try to connect to the cluster through the ports Docker exposes to the host (localhost:6379), it cannot connect and writes the following logs:
ioredis:cluster:subscriber selected a subscriber 173.17.0.4:7002 +0ms
ioredis:redis status[173.17.0.4:7002 (ioredis-cluster(subscriber))]: wait -> wait +0ms
ioredis:cluster cluster slots result count: 3 +0ms
ioredis:cluster cluster slots result [0]: slots 5461~10922 served by [ '173.17.0.3:7001' ] +0ms
ioredis:cluster cluster slots result [1]: slots 0~5460 served by [ '173.17.0.2:7000' ] +0ms
ioredis:cluster cluster slots result [2]: slots 10923~16383 served by [ '173.17.0.4:7002' ] +0ms
ioredis:cluster:connectionPool Reset with [
ioredis:cluster:connectionPool { host: '173.17.0.3', port: 7001, readOnly: false },
ioredis:cluster:connectionPool { host: '173.17.0.2', port: 7000, readOnly: false },
ioredis:cluster:connectionPool { host: '173.17.0.4', port: 7002, readOnly: false }
ioredis:cluster:connectionPool ] +2ms
ioredis:cluster:connectionPool Disconnect ::1:7000 because the node does not hold any slot +0ms
ioredis:redis status[::1:7000]: wait -> close +2ms
ioredis:connection skip reconnecting since the connection is manually closed. +2ms
ioredis:redis status[::1:7000]: close -> end +0ms
ioredis:cluster:connectionPool Remove ::1:7000 from the pool +0ms
ioredis:cluster:connectionPool Connecting to 173.17.0.3:7001 as master +0ms
ioredis:redis status[173.17.0.3:7001]: wait -> wait +0ms
ioredis:cluster:connectionPool Connecting to 173.17.0.2:7000 as master +0ms
ioredis:redis status[173.17.0.2:7000]: wait -> wait +0ms
ioredis:cluster:connectionPool Connecting to 173.17.0.4:7002 as master +0ms
ioredis:redis status[173.17.0.4:7002]: wait -> wait +0ms
ioredis:cluster status: connecting -> connect +2ms
ioredis:redis status[173.17.0.2:7000]: wait -> connecting +0ms
ioredis:redis queue command[173.17.0.2:7000]: 0 -> cluster([ 'INFO' ]) +0ms
ioredis:cluster:subscriber subscriber has left, selecting a new one... +2ms
ioredis:redis status[::1:7000 (ioredis-cluster(subscriber))]: wait -> close +0ms
ioredis:connection skip reconnecting since the connection is manually closed. +0ms
ioredis:redis status[::1:7000 (ioredis-cluster(subscriber))]: close -> end +0ms
ioredis:cluster:subscriber selected a subscriber 173.17.0.2:7000 +0ms
ioredis:redis status[173.17.0.2:7000 (ioredis-cluster(subscriber))]: wait -> wait +0ms
ioredis:redis status[::1:7000 (ioredis-cluster(refresher))]: ready -> close +1ms
ioredis:connection skip reconnecting since the connection is manually closed. +1ms
ioredis:redis status[::1:7000 (ioredis-cluster(refresher))]: close -> end +0ms
ioredis:redis status[::1:7000 (ioredis-cluster(refresher))]: ready -> close +1ms
ioredis:connection skip reconnecting since the connection is manually closed. +1ms
ioredis:redis status[::1:7000 (ioredis-cluster(refresher))]: close -> end +0ms
ioredis:redis status[::1:7000 (ioredis-cluster(refresher))]: ready -> close +0ms
ioredis:connection skip reconnecting since the connection is manually closed. +0ms
ioredis:redis status[::1:7000 (ioredis-cluster(refresher))]: close -> end +0ms
I don't understand this part. First it says:
ioredis:cluster cluster slots result [0]: slots 5461~10922 served by [ '173.17.0.3:7001' ] +0ms
ioredis:cluster cluster slots result [1]: slots 0~5460 served by [ '173.17.0.2:7000' ] +0ms
ioredis:cluster cluster slots result [2]: slots 10923~16383 served by [ '173.17.0.4:7002' ] +1ms
and then:
Disconnect ::1:7000 because the node does not hold any slot +0ms
so does it hold any slot or not?
I can easily connect to this Redis cluster from a RedisInsight client.
I need the app running on the host machine for debugging. How do I fix this, and why does it happen?
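What the log is really saying: the node does hold slots, but it announces itself as 173.17.0.2:7000 in the CLUSTER SLOTS response, while the client reached it as ::1:7000. ioredis keys pool nodes by the announced host:port string, so the seed connection matches no announced node and is dropped. A rough sketch of that comparison (a hypothetical helper, not the actual ioredis internals):

```typescript
// Hypothetical model of the connection-pool reset: keep only nodes
// whose "host:port" key appears in the CLUSTER SLOTS response.
type Node = { host: string; port: number }

function resetPool(current: string[], announced: Node[]): { kept: string[]; dropped: string[] } {
  const announcedKeys = new Set(announced.map((n) => `${n.host}:${n.port}`))
  return {
    kept: current.filter((key) => announcedKeys.has(key)),
    dropped: current.filter((key) => !announcedKeys.has(key)),
  }
}

// The seed connection was made to ::1:7000, but the cluster announces
// only its internal Docker IPs, so the seed is removed from the pool.
const result = resetPool(
  ['::1:7000'],
  [
    { host: '173.17.0.3', port: 7001 },
    { host: '173.17.0.2', port: 7000 },
    { host: '173.17.0.4', port: 7002 },
  ],
)
// result.dropped -> ['::1:7000']
```

So the "does not hold any slot" message is about the ::1:7000 address, not about the node itself.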
This is the docker-compose config:
version: '2'
services:
  api:
    build:
      context: .
      dockerfile: Dockerfile
      args:
        NPM_TOKEN: ${NPM_TOKEN}
    volumes:
      - ./src:/usr/src/app/src
    command: ['yarn', 'start:dev']
    restart: always
    ports:
      - '3399:3399'
    depends_on:
      - redis-cluster
    networks:
      - inner_net
    env_file:
      - .env
    environment:
      - REDIS_HOST=redis1
      - REDIS_PORT=7000
  redis1:
    image: redis:3
    ports:
      - "6379:7000"
    volumes:
      - ./redis/redis-cluster1.tmpl:/usr/local/etc/redis/redis.conf
    command: redis-server /usr/local/etc/redis/redis.conf
    networks:
      inner_net:
        ipv4_address: 173.17.0.2
  redis2:
    image: redis:3
    ports:
      - "7001:7001"
    volumes:
      - ./redis/redis-cluster2.tmpl:/usr/local/etc/redis/redis.conf
    command: redis-server /usr/local/etc/redis/redis.conf
    networks:
      inner_net:
        ipv4_address: 173.17.0.3
  redis3:
    image: redis:3
    ports:
      - "7002:7002"
    volumes:
      - ./redis/redis-cluster3.tmpl:/usr/local/etc/redis/redis.conf
    command: redis-server /usr/local/etc/redis/redis.conf
    networks:
      inner_net:
        ipv4_address: 173.17.0.4
  redis-cluster:
    tty: true
    build:
      context: redis
      args:
        redis_version: '3.2.9'
    hostname: server
    depends_on:
      - redis1
      - redis2
      - redis3
    networks:
      inner_net:
        ipv4_address: 173.17.0.5
networks:
  inner_net:
    name: inner-service-net
    driver: bridge
    ipam:
      driver: default
      config:
        - subnet: 173.17.0.0/16

I have found an answer to this. For it to work, you have to provide ioredis with the natMap option. This is what worked for me:
const devNatMap = {
  '173.17.0.2:7000': { host: '127.0.0.1', port: 7000 },
  '173.17.0.3:7001': { host: '127.0.0.1', port: 7001 },
  '173.17.0.4:7002': { host: '127.0.0.1', port: 7002 },
}

const clusterConfig: ClusterOptions = {
  scaleReads: 'all',
  enableAutoPipelining: true,
  natMap: dev ? devNatMap : undefined,
  redisOptions: {
    password: this.configService.get('REDIS_PASSWORD'),
    db: 0,
    showFriendlyErrorStack: true,
    tls: dev
      ? undefined
      : {
          checkServerIdentity: (_host, _cert) => {
            // skip certificate hostname validation
            return undefined
          },
        },
  },
}

this.redis = new Cluster([{ port, host }], clusterConfig)
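Why this fixes it: with natMap, every address the cluster announces is translated to its externally reachable counterpart before ioredis connects, so the internal Docker IPs never leak into the connection pool. Conceptually something like this (a hypothetical helper, not the real ioredis code):

```typescript
type Address = { host: string; port: number }
type NatMap = Record<string, Address>

// Hypothetical model of what natMap does: rewrite an announced
// cluster address to the externally reachable one, if mapped.
function translate(announced: Address, natMap: NatMap): Address {
  return natMap[`${announced.host}:${announced.port}`] ?? announced
}

const devNatMap: NatMap = {
  '173.17.0.2:7000': { host: '127.0.0.1', port: 7000 },
}

// The internal Docker IP from CLUSTER SLOTS becomes a host-reachable address.
const mapped = translate({ host: '173.17.0.2', port: 7000 }, devNatMap)
// mapped -> { host: '127.0.0.1', port: 7000 }
```

Addresses without an entry pass through unchanged, which is why natMap is safe to leave undefined outside dev.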

Related

ElasticSearch Logstash not connecting "Connection refused" - Docker

I need help! (who would have thought, right? lol)
I have a job interview in a few days and it would mean the world to me to be well prepared for it and have some working examples.
I am trying to set up an ELK pipeline to stream data from Kafka through Logstash into Elasticsearch, and finally read it from Kibana. The usual.
I am making use of containers, but the Logstash - Elasticsearch duo is giving me an aneurysm.
Everything else works perfectly fine: I've checked the Kafka logs and it is working just fine, and Kibana is connected to Elasticsearch as well. But Logstash and ES really don't want to talk to each other.
Here is the setup:
docker-compose.yml
version: '3.6'
services:
  elasticsearch:
    image: elasticsearch:8.6.0
    container_name: elasticsearch
    #restart: always
    volumes:
      - elastic_data:/usr/share/elasticsearch/data/
    environment:
      cluster.name: elf-kafka-cluster
      ES_JAVA_OPTS: "-Xmx256m -Xms256m"
      discovery.type: single-node
      xpack.security.enabled: false
    ports:
      - '9200:9200'
      - '9300:9300'
    networks:
      - elk
  kibana:
    image: kibana:8.6.0
    container_name: kibana
    #restart: always
    ports:
      - '5601:5601'
    environment:
      - ELASTICSEARCH_HOSTS=http://elasticsearch:9200
    depends_on:
      - elasticsearch
    networks:
      - elk
  logstash:
    image: logstash:8.6.0
    container_name: logstash
    #restart: always
    volumes:
      - type: bind
        source: ./logstash_pipeline/
        target: /usr/share/logstash/pipeline
        read_only: true
    command: logstash -f /home/ettore/Documenti/Portfolio/ELK/logstash/logstash.conf
    depends_on:
      - elasticsearch
    ports:
      - '9600:9600'
    environment:
      xpack.monitoring.enabled: true
      # LS_JAVA_OPTS: "-Xmx256m -Xms256m"
    links:
      - elasticsearch
    networks:
      - elk
volumes:
  elastic_data: {}
networks:
  elk:
    driver: bridge
logstash.conf
input {
  kafka {
    bootstrap_servers => "localhost:9092"
    topics => ["topic"]
  }
}
output {
  elasitcsearch {
    hosts => ["http://localhost:9200"]
    index => "topic"
    workers => 1
  }
}
These are logstash error logs when I compose up:
logstash | [2023-01-17T13:59:02,680][WARN ][deprecation.logstash.monitoringextension.pipelineregisterhook] Internal collectors option for Logstash monitoring is deprecated and targeted for removal in the next major version.
logstash | Please configure Metricbeat to monitor Logstash. Documentation can be found at:
logstash | https://www.elastic.co/guide/en/logstash/current/monitoring-with-metricbeat.html
logstash | [2023-01-17T13:59:04,711][INFO ][logstash.licensechecker.licensereader] Elasticsearch pool URLs updated {:changes=>{:removed=>[], :added=>[http://elasticsearch:9200/]}}
logstash | [2023-01-17T13:59:05,373][INFO ][logstash.licensechecker.licensereader] Failed to perform request {:message=>"Connect to elasticsearch:9200 [elasticsearch/172.20.0.2] failed: Connection refused", :exception=>Manticore::SocketException, :cause=>#<Java::OrgApacheHttpConn::HttpHostConnectException: Connect to elasticsearch:9200 [elasticsearch/172.20.0.2] failed: Connection refused>}
logstash | [2023-01-17T13:59:05,379][WARN ][logstash.licensechecker.licensereader] Attempted to resurrect connection to dead ES instance, but got an error {:url=>"http://elasticsearch:9200/", :exception=>LogStash::Outputs::ElasticSearch::HttpClient::Pool::HostUnreachableError, :message=>"Elasticsearch Unreachable: [http://elasticsearch:9200/][Manticore::SocketException] Connect to elasticsearch:9200 [elasticsearch/172.20.0.2] failed: Connection refused"}
logstash | [2023-01-17T13:59:05,436][INFO ][logstash.licensechecker.licensereader] Failed to perform request {:message=>"Connect to elasticsearch:9200 [elasticsearch/172.20.0.2] failed: Connection refused", :exception=>Manticore::SocketException, :cause=>#<Java::OrgApacheHttpConn::HttpHostConnectException: Connect to elasticsearch:9200 [elasticsearch/172.20.0.2] failed: Connection refused>}
logstash | [2023-01-17T13:59:05,444][WARN ][logstash.licensechecker.licensereader] Marking url as dead. Last error: [LogStash::Outputs::ElasticSearch::HttpClient::Pool::HostUnreachableError] Elasticsearch Unreachable: [http://elasticsearch:9200/_xpack][Manticore::SocketException] Connect to elasticsearch:9200 [elasticsearch/172.20.0.2] failed: Connection refused {:url=>http://elasticsearch:9200/, :error_message=>"Elasticsearch Unreachable: [http://elasticsearch:9200/_xpack][Manticore::SocketException] Connect to elasticsearch:9200 [elasticsearch/172.20.0.2] failed: Connection refused", :error_class=>"LogStash::Outputs::ElasticSearch::HttpClient::Pool::HostUnreachableError"}
logstash | [2023-01-17T13:59:05,449][WARN ][logstash.licensechecker.licensereader] Attempt to validate Elasticsearch license failed. Sleeping for 0.02 {:fail_count=>1, :exception=>"Elasticsearch Unreachable: [http://elasticsearch:9200/_xpack][Manticore::SocketException] Connect to elasticsearch:9200 [elasticsearch/172.20.0.2] failed: Connection refused"}
logstash | [2023-01-17T13:59:05,477][ERROR][logstash.licensechecker.licensereader] Unable to retrieve license information from license server {:message=>"No Available connections"}
logstash | [2023-01-17T13:59:05,567][ERROR][logstash.monitoring.internalpipelinesource] Failed to fetch X-Pack information from Elasticsearch. This is likely due to failure to reach a live Elasticsearch cluster.
logstash | [2023-01-17T13:59:05,661][INFO ][logstash.config.source.local.configpathloader] No config files found in path {:path=>"/home/ettore/Documenti/Portfolio/ELK/logstash/logstash.conf"}
logstash | [2023-01-17T13:59:05,664][ERROR][logstash.config.sourceloader] No configuration found in the configured sources.
logstash | [2023-01-17T13:59:06,333][INFO ][logstash.agent ] Successfully started Logstash API endpoint {:port=>9600, :ssl_enabled=>false}
logstash | [2023-01-17T13:59:06,411][INFO ][logstash.runner ] Logstash shut down.
logstash | [2023-01-17T13:59:06,419][FATAL][org.logstash.Logstash ] Logstash stopped processing because of an error: (SystemExit) exit
logstash | org.jruby.exceptions.SystemExit: (SystemExit) exit
logstash | at org.jruby.RubyKernel.exit(org/jruby/RubyKernel.java:790) ~[jruby.jar:?]
logstash | at org.jruby.RubyKernel.exit(org/jruby/RubyKernel.java:753) ~[jruby.jar:?]
logstash | at usr.share.logstash.lib.bootstrap.environment.<main>(/usr/share/logstash/lib/bootstrap/environment.rb:91) ~[?:?]
And this is to prove that everything is working as intended with ES (or so it seems):
netstat -an | grep 9200
tcp 0 0 0.0.0.0:9200 0.0.0.0:* LISTEN
tcp6 0 0 :::9200 :::* LISTEN
unix 3 [ ] STREAM CONNECTED 49200
I've looked through everything and this is 100% not a duplicate because I have tried it all. I really can't figure it out. I hope someone can help.
Thank you for your time.
You should set logstash.yml.
Create a logstash.yml with the values below (note that inside the Logstash container, Elasticsearch is reachable at its service name, not localhost):
http.host: "0.0.0.0"
xpack.monitoring.elasticsearch.hosts: [ "http://elasticsearch:9200" ]
In your docker-compose.yml, add another volume to the Logstash container as shown below:
./logstash.yml:/usr/share/logstash/config/logstash.yml
Additionally, it's good to run with a restart condition.

Micronaut with kafka fails to retrieve metadata on first attempts

I'm trying to send messages from a Micronaut 3.6.3 application to Kafka deployed with docker-compose. On the first attempt I receive a warning like this:
[Producer clientId=producer-1] Error while fetching metadata with correlation id 1 : {accountRegistered=LEADER_NOT_AVAILABLE}
For the following messages the problem disappears, but my requirement is not to lose any message about account registration.
My docker compose configuration:
services:
  kafka:
    image: 'bitnami/kafka:3.2'
    hostname: 'kafka'
    environment:
      ALLOW_PLAINTEXT_LISTENER: 'yes'
      KAFKA_BROKER_ID: 1
      KAFKA_CFG_ADVERTISED_LISTENERS: 'INSIDE://kafka:29092, OUTSIDE://localhost:9092'
      KAFKA_CFG_INTER_BROKER_LISTENER_NAME: 'INSIDE'
      KAFKA_CFG_LISTENERS: 'INSIDE://:29092, OUTSIDE://:9092'
      KAFKA_CFG_LISTENER_SECURITY_PROTOCOL_MAP: 'INSIDE:PLAINTEXT, OUTSIDE:PLAINTEXT'
      KAFKA_CFG_ZOOKEEPER_CONNECT: 'zookeeper:2181'
    ports:
      - '9092:9092'
    depends_on:
      - 'zookeeper'
  #TODO: Can be removed with the future versions of Kafka (using KRaft)
  zookeeper:
    image: 'bitnami/zookeeper:3.8'
    hostname: 'zookeeper'
    environment:
      ALLOW_ANONYMOUS_LOGIN: 'yes'
    ports:
      - '2181:2181'
From the application I use 'localhost:9092' to connect.
My consumer code:
@KafkaListener(offsetReset = OffsetReset.EARLIEST)
class AccountReferenceUpdaterEventConsumer {

    @Inject
    AccountReferenceEntityRepository accountReferenceEntityRepository

    @Topic('accountRegistered')
    void receive(@MessageBody AccountRegisteredEvent event) {
        def account = event.source
        accountReferenceEntityRepository.findById(account.id)
            .ifPresentOrElse(
                accountReference -> log.warn('Account {} already registered', account.id),
                () -> {
                    def accountReference = new AccountReferenceEntity(
                        accountId: account.id,
                        username: account.username
                    )
                    accountReferenceEntityRepository.save(accountReference)
                }
            )
    }
}
application.yml:
kafka:
  bootstrap:
    servers: 'localhost:9092'
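The LEADER_NOT_AVAILABLE warning on the very first send is usually transient: the topic is auto-created on that first metadata request, and its leader hasn't been elected yet. The producer's built-in retries normally cover this, but if you want a belt-and-braces guarantee at the application level, a generic retry wrapper looks roughly like the sketch below (plain TypeScript, not the Micronaut/Kafka API):

```typescript
// Generic retry-with-backoff sketch for a send operation that may fail
// transiently (e.g. while topic creation / leader election settles).
async function sendWithRetry<T>(
  op: () => Promise<T>,
  attempts = 5,
  backoffMs = 100,
): Promise<T> {
  let lastError: unknown
  for (let i = 0; i < attempts; i++) {
    try {
      return await op()
    } catch (err) {
      lastError = err
      // wait a bit before retrying; double the delay on each attempt
      await new Promise((r) => setTimeout(r, backoffMs * 2 ** i))
    }
  }
  throw lastError
}
```

Alternatively, pre-creating the topic before the first send (for example with kafka-topics.sh in a startup script) avoids the race entirely.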

Promtail: Loki Server returned HTTP status 429 Too Many Requests

I'm running Loki for test purposes in Docker and have recently been getting the following error from the Promtail and Loki containers:
level=warn ts=2022-02-18T09:41:39.186511145Z caller=client.go:349 component=client host=loki:3100 msg="error sending batch, will retry" status=429 error="server returned HTTP status 429 Too Many Requests (429): Maximum active stream limit exceeded, reduce the number of active streams (reduce labels or reduce label values), or contact your Loki administrator to see if the limit can be increased"
I have tried increasing the limit settings (ingestion_rate_mb and ingestion_burst_size_mb) in my Loki config.
I set up two Promtail jobs - one ingesting MS Exchange logs from a local directory (currently 8TB and growing), the other getting logs spooled from syslog-ng.
I've read that reducing labels helps, but I'm only using two labels.
Configuration
Below are my config files (docker-compose, Loki, Promtail):
docker-compose.yaml
version: "3"
networks:
  loki:
services:
  loki:
    image: grafana/loki:2.4.2
    container_name: loki
    restart: always
    user: "10001:10001"
    ports:
      - "3100:3100"
    command: -config.file=/etc/loki/local-config.yaml
    volumes:
      - ${DATADIR}/loki/etc:/etc/loki:rw
      - ${DATADIR}/loki/chunks:/loki/chunks
    networks:
      - loki
  promtail:
    image: grafana/promtail:2.4.2
    container_name: promtail
    restart: always
    volumes:
      - /var/log/loki:/var/log/loki
      - ${DATADIR}/promtail/etc:/etc/promtail
    ports:
      - "1514:1514" # for syslog-ng
      - "9080:9080" # for http web interface
    command: -config.file=/etc/promtail/config.yml
    networks:
      - loki
  grafana:
    image: grafana/grafana:8.3.4
    container_name: grafana
    restart: always
    user: "476:0"
    volumes:
      - ${DATADIR}/grafana/var:/var/lib/grafana
    ports:
      - "3000:3000"
    networks:
      - loki
Loki Config
auth_enabled: false
server:
  http_listen_port: 3100
common:
  path_prefix: /loki
  storage:
    filesystem:
      chunks_directory: /loki/chunks
      rules_directory: /loki/rules
  replication_factor: 1
  ring:
    instance_addr: 127.0.0.1
    kvstore:
      store: inmemory
schema_config:
  configs:
    - from: 2020-10-24
      store: boltdb-shipper
      object_store: filesystem
      schema: v11
      index:
        prefix: index_
        period: 24h
ruler:
  alertmanager_url: http://localhost:9093
# https://grafana.com/docs/loki/latest/configuration/#limits_config
limits_config:
  reject_old_samples: true
  reject_old_samples_max_age: 168h
  ingestion_rate_mb: 12
  ingestion_burst_size_mb: 24
  per_stream_rate_limit: 24MB
chunk_store_config:
  max_look_back_period: 336h
table_manager:
  retention_deletes_enabled: true
  retention_period: 2190h
ingester:
  lifecycler:
    address: 127.0.0.1
    ring:
      kvstore:
        store: inmemory
      replication_factor: 1
    final_sleep: 0s
  chunk_encoding: snappy
Promtail Config
server:
  http_listen_port: 9080
  grpc_listen_port: 0
positions:
  filename: /tmp/positions.yaml
clients:
  - url: http://loki:3100/loki/api/v1/push
scrape_configs:
  - job_name: exchange
    static_configs:
      - targets:
          - localhost
        labels:
          job: exchange
          __path__: /var/log/loki/exchange/*/*/*log
  - job_name: syslog-ng
    syslog:
      listen_address: 0.0.0.0:1514
      idle_timeout: 60s
      label_structured_data: yes
      labels:
        job: "syslog-ng"
    relabel_configs:
      - source_labels: ['__syslog_message_hostname']
        target_label: 'host'
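Worth noting: the 429 above is the active stream limit, not the ingestion rate, so ingestion_rate_mb / ingestion_burst_size_mb won't address it. Loki creates one stream per unique label set, and with label_structured_data: yes every structured-data field of every syslog message becomes a label, so two static labels can still fan out into thousands of streams. A toy model of how streams multiply (plain TypeScript, just to illustrate the counting):

```typescript
// Toy model: Loki keeps one stream per unique combination of labels.
type Labels = Record<string, string>

function countStreams(entries: Labels[]): number {
  const streams = new Set(
    entries.map((labels) => JSON.stringify(Object.entries(labels).sort())),
  )
  return streams.size
}

// One static label, plus a per-message field promoted to a label:
const entries: Labels[] = [
  { job: 'syslog-ng', host: 'mx1' },
  { job: 'syslog-ng', host: 'mx2' },
  { job: 'syslog-ng', host: 'mx3' },
]
// 3 entries -> 3 distinct streams, even though "job" never changes.
```

So the relevant knobs are the stream limit itself (max_global_streams_per_user in limits_config) and reducing label cardinality, e.g. turning label_structured_data off.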

Two rabbitmq instances on one server with docker compose - how to change the default port

I would like to run two instances of RabbitMQ on one server, all created with docker-compose. The thing is, how can I change the default node and management ports? I have tried setting them via ports but it didn't help. When I faced the same scenario with mongo, I used command: mongod --port CUSTOM_PORT. What would be the analogous command here for RabbitMQ?
Here is my config for the second instance of rabbitmq.
version: '2'
services:
  rabbitmq:
    image: rabbitmq:3-management-alpine
    container_name: 'rabbitmq_test'
    ports:
      - 5673:5673
      - 15673:15673
    volumes:
      - ./rabbitmq/data/:/var/lib/rabbitmq/
      - ./rabbitmq/log/:/var/log/rabbitmq
    networks:
      - rabbitmq_go_net_test
    environment:
      RABBITMQ_DEFAULT_USER: 'test'
      RABBITMQ_DEFAULT_PASS: 'test'
      HOST_PORT_RABBIT: 5673
      HOST_PORT_RABBIT_MGMT: 15673
networks:
  rabbitmq_go_net_test:
    driver: bridge
And the outcome is below:
Management plugin: HTTP (non-TLS) listener started on port 15672
rabbitmq_test | 2021-03-18 11:32:42.553 [info] <0.738.0> Ready to start client connection listeners
rabbitmq_test | 2021-03-18 11:32:42.553 [info] <0.44.0> Application rabbitmq_prometheus started on node rabbit#fb24038613f3
rabbitmq_test | 2021-03-18 11:32:42.557 [info] <0.1035.0> started TCP listener on [::]:5672
We can see that there are still ports 5672 and 15672 exposed instead of 5673 and 15673.
EDIT
ports:
  - 5673:5672
  - 15673:15672
I have also tried the configuration above, yet with no success:
rabbitmq_test | 2021-03-18 14:08:56.167 [info] <0.797.0> Management plugin: HTTP (non-TLS) listener started on port 15672
rabbitmq_test | 2021-03-18 14:08:56.167 [info] <0.903.0> Statistics database started.
rabbitmq_test | 2021-03-18 14:08:56.167 [info] <0.902.0> Starting worker pool 'management_worker_pool' with 3 processes in it
rabbitmq_test | 2021-03-18 14:08:56.168 [info] <0.44.0> Application rabbitmq_management started on node rabbit#9358e6f4d2a5
rabbitmq_test | 2021-03-18 14:08:56.208 [info] <0.44.0> Application prometheus started on node rabbit#9358e6f4d2a5
rabbitmq_test | 2021-03-18 14:08:56.213 [info] <0.916.0> Prometheus metrics: HTTP (non-TLS) listener started on port 15692
rabbitmq_test | 2021-03-18 14:08:56.213 [info] <0.44.0> Application rabbitmq_prometheus started on node rabbit#9358e6f4d2a5
rabbitmq_test | 2021-03-18 14:08:56.213 [info] <0.738.0> Ready to start client connection listeners
rabbitmq_test | 2021-03-18 14:08:56.216 [info] <0.1035.0> started TCP listener on [::]:5672
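One thing these logs tend to obscure: a ports: mapping only forwards host traffic into the container; it never changes which port the process binds, and the log always reports the container-side port. So "started TCP listener on [::]:5672" is expected even when the 5673:5672 mapping is working from the host. A toy model of the mapping (hypothetical helper, just to illustrate):

```typescript
// Toy model of a docker-compose "HOST:CONTAINER" port mapping.
function reachableFromHost(mapping: string, listenerPort: number, hostPort: number): boolean {
  const [host, container] = mapping.split(':').map(Number)
  // Traffic to host:hostPort is forwarded to container:container;
  // it only reaches the service if that's where it actually listens.
  return hostPort === host && container === listenerPort
}

// RabbitMQ listens on 5672 inside the container:
reachableFromHost('5673:5673', 5672, 5673) // false - forwards to 5673, nothing listens there
reachableFromHost('5673:5672', 5672, 5673) // true  - this mapping reaches the broker
```

That's why the first attempt (5673:5673) could never work without also moving the listener, which is exactly what the accepted config file below does.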
I have found the solution: provide a configuration file to the RabbitMQ container.
loopback_users.guest = false
listeners.tcp.default = 5673
default_pass = test
default_user = test
management.tcp.port = 15673
And a working docker-compose file
version: '2'
services:
  rabbitmq:
    image: rabbitmq:3-management-alpine
    container_name: 'rabbitmq_test'
    ports:
      - 5673:5673
      - 15673:15673
    volumes:
      - ./rabbitmq/data/:/var/lib/rabbitmq/
      - ./rabbitmq/log/:/var/log/rabbitmq
      - ./conf/myrabbit.conf:/etc/rabbitmq/rabbitmq.conf
    networks:
      - rabbitmq_go_net_test
networks:
  rabbitmq_go_net_test:
    driver: bridge
A working example with rabbitmq:3.9.13-management-alpine
docker/rabbitmq/rabbitmq.conf:
loopback_users.guest = false
listeners.tcp.default = 5673
default_pass = guest
default_user = guest
default_vhost = /
docker/rabbitmq/Dockerfile:
FROM rabbitmq:3.9.13-management-alpine
COPY --chown=rabbitmq:rabbitmq rabbitmq.conf /etc/rabbitmq/rabbitmq.conf
EXPOSE 4369 5671 5672 5673 15691 15692 25672 25673
docker-compose.yml:
...
  rabbitmq:
    #image: "rabbitmq:3-management-alpine"
    build: './docker/rabbitmq/'
    container_name: my-rabbitmq
    environment:
      RABBITMQ_DEFAULT_VHOST: /
    ports:
      - 5673:5672
      - 15673:15672
    networks:
      - default
...

Connect to docker container using hostname on local environment in kubernetes

I have a Kubernetes docker-compose setup that contains:
frontend - a web app running on port 80
backend - a node server for the API, running on port 80
database - mongodb
I would ideally like to access the frontend via a hostname such as http://frontend:80, and for the browser to be able to reach the backend via a hostname such as http://backend:80, which the web app requires on the client side.
How can I make my containers accessible via those hostnames in my localhost environment (Windows)?
docker-compose.yml
version: "3.8"
services:
  frontend:
    build: frontend
    hostname: framework
    ports:
      - "80:80"
      - "443:443"
      - "33440:33440"
  backend:
    build: backend
    hostname: backend
  database:
    image: 'mongo'
    environment:
      - MONGO_INITDB_DATABASE=framework-database
    volumes:
      - ./mongo/mongo-volume:/data/database
      - ./mongo/init-mongo.js:/docker-entrypoint-initdb.d/init-mongo.js:ro
    ports:
      - '27017-27019:27017-27019'
I was able to figure it out. Using docker-compose aliases and networks, I connected every container to the same development network.
There were 3 main components:
container-mapping node DNS server - grabs the aliases via docker ps and creates a DNS server that redirects those requests to 127.0.0.1 (localhost)
nginx reverse proxy container - maps the hosts to the containers via their aliases in the virtual network
projects - each project is a docker-compose.yml that may have an unlimited number of containers running on port 80
docker-compose.yml for clientA
version: "3.8"
services:
  frontend:
    build: frontend
    container_name: clienta-frontend
    networks:
      default:
        aliases:
          - clienta.test
  backend:
    build: backend
    container_name: clienta-backend
    networks:
      default:
        aliases:
          - api.clienta.test
networks:
  default:
    external: true # connect to external network (see below for more)
    name: 'development' # name of external network
nginx proxy docker-compose.yml
version: '3'
services:
  parent:
    image: nginx:alpine
    volumes:
      - ./nginx.conf:/etc/nginx/nginx.conf
    ports:
      - "80:80" # map port 80 to localhost
    networks:
      - development
networks:
  development: # create network called development
    name: 'development'
    driver: bridge
DNS Server
import dns from 'native-dns'
import { exec } from 'child_process'

const { createServer, Request } = dns
const authority = { address: '8.8.8.8', port: 53, type: 'udp' }
const hosts = {}

let server = createServer()

function command (cmd) {
  return new Promise((resolve, reject) => {
    exec(cmd, (err, stdout, stderr) => stdout ? resolve(stdout) : reject(stderr ?? err))
  })
}

async function getDockerHostnames () {
  let containersText = await command('docker ps --format "{{.ID}}"')
  let containers = containersText.split('\n')
  containers.pop()
  await Promise.all(containers.map(async containerID => {
    let json = JSON.parse(await command(`docker inspect ${containerID}`))?.[0]
    let aliases = json?.NetworkSettings?.Networks?.development?.Aliases || []
    aliases.map(alias => hosts[alias] = {
      domain: `^${alias}*`,
      records: [
        { type: 'A', address: '127.0.0.1', ttl: 100 }
      ]
    })
  }))
}

await getDockerHostnames()
setInterval(getDockerHostnames, 8000)

function proxy (question, response, cb) {
  var request = Request({
    question: question, // forwarding the question
    server: authority, // this is the DNS server we are asking
    timeout: 1000
  })
  // when we get answers, append them to the response
  request.on('message', (err, msg) => {
    msg.answer.map(a => response.answer.push(a))
  });
  request.on('end', cb)
  request.send()
}

server.on('close', () => console.log('server closed', server.address()))
server.on('error', (err, buff, req, res) => console.error(err.stack))
server.on('socketError', (err, socket) => console.error(err))

server.on('request', async function handleRequest (request, response) {
  await Promise.all(request.question.map(question => {
    console.log(question.name)
    let entry = Object.values(hosts).find(r => new RegExp(r.domain, 'i').test(question.name))
    if (entry) {
      entry.records.map(record => {
        record.name = question.name;
        record.ttl = record.ttl ?? 600;
        return response.answer.push(dns[record.type](record));
      })
    } else {
      return new Promise(resolve => proxy(question, response, resolve))
    }
  }))
  response.send()
});

server.serve(53, '127.0.0.1');
Don't forget to update your computer's network settings to use 127.0.0.1 as the DNS server.
Git repository for dns server + nginx proxy in case you want to see the implementation: https://github.com/framework-tools/dockerdnsproxy
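Stripped of the Docker and forwarding plumbing, the resolver above boils down to: if the queried name matches a registered container alias, answer 127.0.0.1; otherwise forward upstream. A minimal sketch of that lookup (hypothetical helper, slightly simplified to exact/suffix matching rather than the regex in the original):

```typescript
// Minimal model of the alias lookup performed by the DNS server above:
// a queried name matching a registered alias resolves to localhost.
function resolveAlias(name: string, aliases: string[]): string | null {
  const hit = aliases.find((alias) =>
    name === alias || name.endsWith(`.${alias}`),
  )
  return hit ? '127.0.0.1' : null // null -> forward to the upstream resolver
}

resolveAlias('clienta.test', ['clienta.test', 'api.clienta.test']) // '127.0.0.1'
resolveAlias('example.com', ['clienta.test']) // null -> proxied to 8.8.8.8
```

Everything else in the server is bookkeeping: refreshing the alias list from docker ps and relaying unmatched queries to the authority.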
