At first everything was working great with the default configuration, until I hit a memory limit; then I needed to add a Redis configuration file (latest, 7.0). In this file bind is set to 127.0.0.1 with the default port, so I tried that. I also changed it to bind 0.0.0.0, but I got the same error.
For my environment variables I'm putting ~redis//redis:6379~ or redis:6379, so here is my configuration (docker-compose.yml):
version: '3.7'
services:
  classified-ads:
    container_name: classified-ads
    depends_on:
      - redis-service
    ports:
      - 3000:3000
    build:
      context: ./
    # restart: unless-stopped
    environment:
      - REDIS_URI=redis:6379
      - INSIDE_DOCKER=wahoo
  # our custom image
  redis:
    container_name: redis-service
    build:
      context: ./docker/redis/
    privileged: true
    command: sh -c "./init.sh"
    ports:
      - '6379:6379'
    volumes:
      - ./host-db/redis-data:/data/redis:rw
The error I'm getting is bloated and comes from my client, which is ioredis (wrapped with Fastify/redis), so it is a failed promise that is very verbose and not clearly indicative, but it is 100% a connection error.
I checked the Redis logs piped to Docker and it is running fine.
Edit: ping redis//redis:6379 does not work from my app image, while ping redis:6379 works, so I changed that.
I found the solution. While the default redis:alpine image uses a configuration with protected-mode yes, I was using a new configuration with protected-mode no. I also removed the bind 'address' altogether.
Then I reconnected normally from other services with redis:port as usual.
All thanks to https://stackoverflow.com/a/57541475/1951298
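For reference, the change amounts to something like this in the custom redis.conf (a minimal sketch; with protected-mode no and no password, make sure the port is only reachable inside the compose network):

# redis.conf (sketch of the relevant lines)
# no "bind" line, so Redis listens on all container interfaces
protected-mode no
port 6379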
I have the following compose file, where I share some generated HTML data from a Jenkins container to the host drive, and this data is read from the host drive by an Nginx container. I'm using Ubuntu Server 18.04 on AWS.
The problem is that I can read the contents of jenkins/workspace/allure-report only once. After the HTML data is updated it becomes inaccessible to Nginx, which throws a 403 status code.
I tried all the possible solutions but nothing works. The only ugly solution is to restart the Nginx container after every HTML data update. I don't like this approach and am looking for some built-in Docker feature to resolve this.
What didn't help: sharing the volume directly between containers without using the Docker host drive, using the rslave option, using a separate Docker volume as a buffer between the two containers... I believe it should be much easier!
version: '2'
services:
  jenkins:
    container_name: jenkins
    image: "jenkins/jenkins"
    ports:
      - "8088:8080"
      - "50000:50000"
    env_file:
      - variables.env
    volumes:
      - ./jenkins:/var/jenkins_home
  selenoid:
    container_name: selenoid
    network_mode: bridge
    image: "aerokube/selenoid"
    # default directory for browsers.json is /etc/selenoid/
    command: -listen :4444 -conf /etc/selenoid/browsers.json -video-output-dir /opt/selenoid/video/ -timeout 3m
    ports:
      - "4444:4444"
    env_file:
      - variables.env
    volumes:
      - $PWD:/etc/selenoid/ # assumed current dir contains browsers.json
      - /var/run/docker.sock:/var/run/docker.sock
  selenoid-ui:
    container_name: selenoid-ui
    network_mode: bridge
    image: "aerokube/selenoid-ui"
    links:
      - selenoid
    ports:
      - "8080:8080"
    env_file:
      - variables.env
    command: ["--selenoid-uri", "http://selenoid:4444"]
  nginx:
    container_name: nginx
    image: "nginx"
    ports:
      - "80:80"
    volumes:
      - ./jenkins/workspace/allure-report:/usr/share/nginx/html:ro,rslave
Found the solution: the easiest way to get access to the dynamic data is to use volumes_from in the container you want to read from.
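For example, a minimal sketch of the idea (note that volumes_from exists in the version 2 compose format but was removed in version 3, and Nginx then sees the volumes at their original paths, so its config has to point at the Jenkins workspace):

version: '2'
services:
  jenkins:
    image: "jenkins/jenkins"
    volumes:
      - ./jenkins:/var/jenkins_home
  nginx:
    image: "nginx"
    ports:
      - "80:80"
    volumes_from:
      - jenkins:ro  # mounts /var/jenkins_home read-only at the same path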
When I configured my compose file like that I faced another issue: the 403 status was gone, but the data was static. That was my fault, though; I didn't use the "cp -r " command correctly, so my data had been copied only once.
I am trying to integrate my own ABCI application with the localnet. The docker-compose file looks like this:
version: '3'
services:
  node0:
    container_name: node0
    image: "tendermint/localnode"
    ports:
      - "26656-26657:26656-26657"
    environment:
      - ID=0
      - LOG=${LOG:-tendermint.log}
    volumes:
      - ./build:/tendermint:Z
    command: node --proxy_app=tcp://abci0:26658
    networks:
      localnet:
        ipv4_address: 192.167.10.2
  abci0:
    container_name: abci0
    image: "abci-image"
    volumes:
      - $GOPATH/src/samplePOC:/go/src/samplePOC
    ports:
      - "26658:26658"
    build:
      context: .
      dockerfile: $GOPATH/src/samplePOC/Dockerfile
    command: /go/src/samplePOC/samplePOC
    networks:
      localnet:
        ipv4_address: 192.167.10.6
Both the node and the abci containers are built successfully. The ABCI server is started successfully and the nodes are trying to make connections. However, the main problem is that the two are not able to communicate with each other.
I get the following error:
node0 | E[2019-10-29|15:14:28.525] abci.socketClient failed to connect to tcp://abci0:26658. Retrying... module=abci-client connection=query err="dial tcp 192.167.10.6:26658: connect: connection refused"
Can someone please help me here?
My first thought is that you may need to add a depends_on: ["abci0"] to node0, as the ABCI application must be listening before Tendermint will try to connect.
Of course, TM should continue to retry so this may not be the issue.
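In your compose file that would be something along these lines (just the added lines, under the node0 service):

  node0:
    depends_on:
      - abci0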
Another thing you can try is to run Tendermint on your host machine and attempt to connect to the exposed ABCI port on abci0 (26658), to isolate the problem to the Docker configuration.
If you're not able to run tendermint node --proxy_app=tcp://localhost:26658, the problem likely lies in your ABCI application.
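A sketch of that isolation test, assuming the tendermint binary is installed on the host (nc is just a quick way to check that the published port accepts connections at all):

# with the compose stack running, from the host:
nc -vz localhost 26658                              # does abci0's published port accept connections?
tendermint init                                     # set up a fresh home directory if you don't have one
tendermint node --proxy_app=tcp://localhost:26658   # talk to the ABCI app through the published port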
I assume you've initialized a directory in the volume you mount into node0?
I got this working with the kvstore example from Tendermint.
version: "3.4"
services:
kvstore-app:
image: alpine
expose:
- "26658"
volumes:
- ./kvstore-example:/home/dev/kvstore-example
command: "/home/dev/kvstore-example --socket-addr tcp://kvstore-app:26658"
tendermint-node:
image: tendermint/tendermint
depends_on:
- kvstore-app
ports:
- "26657:26657"
environment:
- TMHOME=/tmp/tendermint
volumes:
- ./tmp/tendermint:/tmp/tendermint
command: node --proxy_app=tcp://kvstore-app:26658
I'm not exactly sure why your docker-compose.yml isn't working, but it's likely that you are not binding the socket of your ABCI application in a way that is accessible to the node. I'm explicitly telling the ABCI application to do so with the argument --socket-addr tcp://kvstore-app:26658. Additionally, I'm just exposing the port of the ABCI application on the Docker network, but I think mapping the port should do this implicitly.
Also, I would get rid of all the network stuff. Personally, I use explicit network configuration only if I have some very specific network goals in mind.
What is the use of container_name in a docker-compose.yml file? Can I use it as a hostname, which is otherwise just the service name in the docker-compose.yml file?
Also, when I explicitly write hostname under services, does it override the hostname represented by the service name?
hostname: just sets what the container believes its own hostname is. In the unusual event you got a shell inside the container, it might show up in the prompt. It has no effect on anything outside, and there’s usually no point in setting it. (It has basically the same effect as hostname(1): that command doesn’t cause anything outside your host to know the name you set.)
container_name: sets the actual name of the container when it runs, rather than letting Docker Compose generate it. If this name is different from the name of the block in services:, both names will be usable as DNS names for inter-container communication. Unless you need to use docker to manage a container that Compose started, you usually don’t need to set this either.
If you omit both of these settings, one container can reach another (provided they’re in the same Docker Compose file and have compatible networks: settings) using the name of the services: block and the port the service inside the container is listening on.
version: '3'
services:
  redis:
    image: redis
  db:
    image: mysql
    ports: ['6033:3306']
  app:
    build: .
    ports: ['12345:8990']
    environment:
      REDIS_HOST: redis
      REDIS_PORT: 6379
      MYSQL_HOST: db
      MYSQL_PORT: 3306
The easiest answer is the following:
container_name: this is the container name that you see from the host machine when listing the running containers with the docker container ls command.
hostname: the hostname of the container. Actually, the name that you define here goes into the container's /etc/hosts file:
$ docker exec -it myserver /bin/bash
bash-4.2# cat /etc/hosts
127.0.0.1    localhost
172.18.0.2   myserver
That means you can ping machines by those names within a Docker network.
I highly suggest setting these two parameters to the same value to avoid confusion.
An example docker-compose.yml file:
version: '3'
services:
  database-server:
    image: ...
    container_name: database-server
    hostname: database-server
    ports:
      - "xxxx:yyyy"
  web-server:
    image: ...
    container_name: web-server
    hostname: web-server
    ports:
      - "xxxx:xxxx"
      - "5101:4001" # debug port
You can customize the image name to build and the container name to use during docker-compose up. To do this, declare them as below in the docker-compose.yml file; it will create an image and a container with custom names.
version: '3'
services:
  frontend_dev:
    stdin_open: true
    environment:
      - CHOKIDAR_USEPOLLING=true
    build:
      context: .
      dockerfile: Dockerfile.dev
    image: "mycustomname/sample:v1"
    container_name: mycustomname_sample_v1
    ports:
      - '3000:3000'
    volumes:
      - /app/node_modules
      - .:/app
Tried looking around but couldn't find anything close to what I need.
I have a docker-compose file with one container (web) that uses another container's address (api) in an environment variable, relying on hostname resolution:
version: '3'
services:
  web:
    build: ../client/
    ports:
      - "5000:5000"
      - "3000:3000"
    environment:
      REACT_APP_API_DEV: http://api:8000/server/graphql
  api:
    build: ../server/
    env_file:
      - server_variables.env
    ports:
      - "8000:8000"
  redis:
    image: "redis:alpine"
My issue is that web doesn't resolve this variable when it's running. I can ping api just fine inside the web container but http://api:8000 doesn't resolve properly. I also tried making HOST=api the variable and building the URI manually but that doesn't work either.
EDIT: I added the complete docker-compose.yml file for reference. I can curl the api just fine from inside the web container, but my app can't seem to resolve it properly. I'm using NodeJS and React.
Alright, I found the issue. Apparently my web app was fetching from api with the http://api:8000 URI, but that request comes from my browser, and my browser doesn't know what api is (only the containers do).
I followed the suggestions in here to resolve the hostname on my machine, and it worked out.
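In short, because the api service publishes port 8000 to the host, pointing the name at the loopback address on the host machine is enough for the browser (an illustrative hosts entry, not something Compose manages for you):

# /etc/hosts on the host machine, not inside a container
127.0.0.1   api

After that, http://api:8000 resolves in the browser and reaches the published port of the api container.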
You have to link them using a network:
version: '3'
services:
  web:
    ...
    environment:
      - HOST=http://api:8000
    networks:
      - my-network
    ...
  api:
    networks:
      - my-network
    ...
networks:
  my-network:
I'm using Docker Compose for a web application that I'm creating with ASP.NET Core, Postgres and Redis. I have everything set up in Compose to connect to Postgres using the service name I've specified in the docker-compose.yml file. When trying to do the same with Redis, I get an exception. After doing research it turns out this exception is a known issue, and the workaround is to use the IP address of the machine instead of a hostname. However, I cannot figure out how to get the IP address of the redis service from the compose file. Is there a way to do that?
Edit
Here is the compose file:
version: "3"
services:
postgres:
image: 'postgres:9.5'
env_file:
- '.env'
volumes:
- 'postgres:/var/lib/postgresql/data'
ports:
- '5433:5432'
redis:
image: 'redis:3.0-alpine'
command: redis-server --requirepass devpassword
volumes:
- 'redis:/var/lib/redis/data'
ports:
- '6378:6379'
web:
build: .
env_file:
- '.env'
ports:
- "8000:80"
volumes:
- './src/edb/Controllers:/app/Controllers'
- './src/edb/Views:/app/Views'
- './src/edb/wwwroot:/app/wwwroot'
- './src/edb/Lib:/app/Lib'
volumes:
postgres:
redis:
OK, I found the answer. It was something I had already been trying, but I didn't realize the address may change every time you restart the containers.
Run docker ps to get a list of running containers, then copy the ID of your container and run docker inspect {container_id}; the output includes the IP address that you can use to reach it from within the other running containers.
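For example (the -f format string just narrows the output; a plain docker inspect prints the same address under NetworkSettings, and the exact value shown here is illustrative):

$ docker ps --format '{{.ID}}  {{.Names}}'
$ docker inspect -f '{{range .NetworkSettings.Networks}}{{.IPAddress}}{{end}}' <container_id>
172.18.0.3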
The reason I was confused is that the address may change when the containers are started, so I had to guess what the IP address was going to be before I started the containers. Luckily, after 5 tries I guessed correctly.