I have a situation with a Cassandra container. I have two docker-compose.yaml files in different folders.
docker-compose.yaml in folder 1
version: "3"
services:
cassandra-cluster-node-1:
image: cassandra:3.0
container_name: cassandra-cluster-node-1
hostname: cassandra-cluster-node-1
ports:
- '9142:9042'
- '7199:7199'
- '9160:9160'
docker-compose.yaml in folder 2
version: "3"
services:
cassandra-cluster-node-2:
image: cassandra:3.0
container_name: cassandra-cluster-node-2
hostname: cassandra-cluster-node-2
ports:
- '9242:9042'
- '7299:7199'
- '9260:9160'
I brought Cassandra up in folder 1 and the system worked well. After that I brought Cassandra up in folder 2, but at that point the Cassandra service from folder 1 was killed automatically. I don't understand what happened; could someone with Docker experience please explain this situation?
The error from node 1 after I bring up node 2:
cassandra-cluster-node-1 exited with code 137
Thank you, I'd appreciate your help.
Exit code 137 is an out-of-memory kill (128 + SIGKILL, typically from the kernel OOM killer). Cassandra uses a lot of memory when started with default settings: by default it takes 1/4 of the system memory, for each instance. You can restrict the memory usage using environment variables (see my example further down).
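To confirm it was actually an OOM kill, you can ask Docker directly; these are standard CLI commands, using the container name from your compose file:

docker inspect -f '{{.State.OOMKilled}}' cassandra-cluster-node-1   # prints true if Docker recorded an OOM kill
docker stats                                                        # live memory usage per running container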
Docker Compose creates a network for each directory it runs under, so with your setup the two nodes will never be able to find each other. This is the output from my test, with your files put into two directories, cass1 and cass2:
$ docker network ls
NETWORK ID NAME DRIVER SCOPE
dbe9cafe0af3 bridge bridge local
70cf3d77a7fc cass1_default bridge local
41af3e02e247 cass2_default bridge local
21ac366b7a31 host host local
0787afb9aeeb none null local
You can see the two networks, cass1_default and cass2_default, so the two nodes will not find each other.
If you want them to find each other, you have to give the first one as a seed to the second one, and they have to be on the same network (same docker-compose file):
version: "3"
services:
cassandra-cluster-node-1:
image: cassandra:3.0
container_name: cassandra-cluster-node-1
hostname: cassandra-cluster-node-1
environment:
- "MAX_HEAP_SIZE=1G"
- "HEAP_NEWSIZE=256M"
ports:
- '9142:9042'
- '7199:7199'
- '9160:9160'
cassandra-cluster-node-2:
image: cassandra:3.0
container_name: cassandra-cluster-node-2
hostname: cassandra-cluster-node-2
environment:
- "MAX_HEAP_SIZE=1G"
- "HEAP_NEWSIZE=256M"
- "CASSANDRA_SEEDS=cassandra-cluster-node-1"
ports:
- '9242:9042'
- '7299:7199'
- '9260:9160'
depends_on:
- cassandra-cluster-node-1
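As a rough way to verify the cluster forms, bring everything up and check node status; nodetool ships inside the official Cassandra image:

docker-compose up -d
# wait a minute or two for both nodes to bootstrap, then:
docker exec cassandra-cluster-node-1 nodetool status
# both nodes should eventually be listed with status UN (Up/Normal)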
I have two separate projects in two separate folders.
When I run one of them, the second one cannot run because of a port conflict.
The problem is with the Elasticsearch image.
Following are the two docker-compose files:
# /home/folder_1/
version: '3'
services:
  elasticsearch_ci:
    image: elasticsearch:7.14.2
    restart: always
    expose:
      - 9200
    environment:
      - discovery.type=single-node
      - xpack.security.enabled=false
    env_file:
      - ./envs/ci.env
    container_name: elasticsearch_ci_pipeline
Second one:
# /home/folder_2/
version: '3'
services:
  elasticsearch:
    image: elasticsearch:7.14.2
    expose:
      - 9200
    volumes:
      - elastic_search_data_staging:/var/lib/elastic_search/data/
    environment:
      - discovery.type=single-node
      - xpack.security.enabled=false
When I run docker ps, I see that the second Elasticsearch container has been created, but it doesn't show its ports.
How can I solve the problem?
Update:
The problem is that in this situation my web application (Django-based) cannot connect to the second Elasticsearch instance.
Also, when I change the port number for ES in the second docker-compose file (for example, adding 9500 under expose), the ES ports are still the defaults (9200, 9300) plus my new port (9500), and my web application cannot connect to any of them.
Finally I found what the problem was.
My server has only 4 GB of RAM, and when one Elasticsearch instance is running, other instances cannot start because the first one consumes most of the RAM.
If you want to run two separate instances of Elasticsearch, you should have at least 6 GB of RAM per instance.
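If adding RAM is not an option, a common workaround is to cap each instance's JVM heap; the official Elasticsearch image reads ES_JAVA_OPTS, so something like this in each service's environment section may help (heap sizes here are illustrative, tune to your workload):

environment:
  - discovery.type=single-node
  - xpack.security.enabled=false
  # cap the heap so two instances can coexist on a small host
  - "ES_JAVA_OPTS=-Xms512m -Xmx512m"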
I am trying to understand how to access containers from each other through their container names, specifically when using a pgadmin container that connects to a postgresql container through DNS.
In docker-compose v3 I cannot link them, nor does networks: seem to be available.
The main reason I need this is that the containers don't get static IP addresses when they spin up, so in pgadmin I can't connect to the postgresql DB using the same IP every time; a DNS name (i.e., the container name) would work better.
Can we do this with docker-compose, or at least set a static IP address for a specific container?
I have tried creating a user defined network:
networks:
  backed:
and then using it in the service:
app:
  networks:
    - backend
This causes a docker-compose error about "networks" being an invalid option for app.
docker-compose.yml
version: "0.1"
services:
devapi:
container_name: devapi
restart: always
build: .
ports:
- "3000:3000"
api-postgres-pgadmin:
container_name: api-postgres-pgadmin
image: dpage/pgadmin4:latest
ports:
- "5050:80"
environment:
- PGADMIN_DEFAULT_EMAIL=stuff#stuff.com
- PGADMIN_DEFAULT_PASSWORD=12345
api-postgres:
container_name: api-postgres
image: postgres:10
volumes:
- ./data:/data/db
ports:
- "15432:5432"
environment:
- POSTGRES_PASSWORD=12345
Actually, I spot one immediate problem:
version: "0.1"
Why are you doing this? The current version of the compose file format is 3.x. E.g:
version: "3"
See e.g. the Compose file version 3 reference.
The version determines which features are available. It's entirely possible that by setting version: "0.1" you are explicitly disabling support for the networks parameter. You'll note that the reference shows examples using the networks attribute.
As an aside, unless there is a particular reason you need it, I would drop the use of container_name in your compose file, since it makes it impossible to run multiple instances of the same compose file on your host.
networks is available from compose file format version 3, but you are using version: "0.1" in your docker-compose file.
Change version: "0.1" to version: "3" in docker-compose.yml.
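For illustration, here is a minimal sketch of what the file could look like after the version bump, with a user-defined network added (the network name backend is just an example):

version: "3"
services:
  api-postgres-pgadmin:
    image: dpage/pgadmin4:latest
    ports:
      - "5050:80"
    networks:
      - backend
  api-postgres:
    image: postgres:10
    ports:
      - "15432:5432"
    networks:
      - backend
networks:
  backend:

In pgadmin you would then connect using host api-postgres and port 5432 (the container-side port, not the published 15432).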
I am using backtrader as a client with IBpy2 to access my IBC-controlled IBGateway running in Docker.
I'm facing the issue that my system starts and just hangs there, with no errors or printed debug info.
I debugged my way as far as this line:
self.m_serverVersion = self.m_reader.readInt()
which is waiting to receive the server version through the connection, and it never arrives.
I get this only when the IBGateway runs through Docker; I don't understand how it's possible that IBpy can establish a connection but cannot exchange data.
I could not pinpoint where the problem might be. The fact that it happens only when IBC runs under Docker Compose suggests that it depends on Docker Compose. Here's my docker-compose.yml file:
--- updated: ---
version: '3.7'
services:
  trader:
    build: ./
    image: mytrader
    container_name: mytrader
    networks:
      - trading
    depends_on:
      - tws
  tws:
    build: ./ib-docker
    image: ibconnect
    container_name: ibconnect
    ports:
      # - "4001:4001"
      - "4003:4003"
      - "5901:5901"
    volumes:
      - ./ib-docker/config.ini:/root/ibc/config.ini
      # - ./ib-docker/twsstart.sh:/opt/ibc/twsstart.sh
      - ./ib-docker/gatewaystart.sh:/opt/ibc/gatewaystart.sh
    environment:
      - TZ=UTC
      # Variables pulled from /root/IBController/IBControllerGatewayStart.sh
      - VNC_PASSWORD=password
      - IBC_PATH=/opt/ibc
      - LOG_PATH=/root/ibc/logs
    env_file:
      - tws_credentials.env
    networks:
      - trading
networks:
  trading:
    driver: bridge
and the list of networks
% docker network ls
NETWORK ID NAME DRIVER SCOPE
4ad25f1cf0f4 bridge bridge local
9ca6f0e3f509 giuliotrader_default bridge local
3afbca83e020 giuliotrader_trading bridge local
73c2590a3a11 host host local
34e58c19f5e3 none null local
Happy to post any additional files or info as needed.
Thanks,
Good afternoon. Maybe you should use a link from trader to tws:
services:
  trader:
    links:
      - tws
    build: ./
    image: mytrader
    container_name: mytrader
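Note that links is largely a legacy option; since both services already share the trading network, the tws service name should resolve from the trader container on its own. A quick sanity check, assuming the mytrader image contains a shell and ping:

docker exec -it mytrader ping -c 1 tws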
I have 2 applications that are separate codebases, and they each have their own database on the same db server instance.
I am trying to replicate this in docker, locally on my laptop. I want to be able to have both apps use the same database instance.
I would like
both apps to start in docker at the same time
both apps to be able to access the database on localhost
the database data is persisted
be able to view the data in the database using an IDE on localhost
So each of my apps has its own Dockerfile and docker-compose file.
On app1, I start the docker instance of the app which is tied to the database. It all starts fine.
When I try to start app2, I get the following error:
ERROR: for app2_mssql_1 Cannot start service mssql: driver failed programming external connectivity on endpoint app2_mssql_1 (12d550c8f032ccdbe67e02445a0b87bff2b2306d03da1d14ad5369472a200620): Bind for 0.0.0.0:1433 failed: port is already allocated
How can I have them both running at the same time? Both apps need to be able to access each other's database tables.
Here are the docker-compose.yml files.
app1:
version: "3"
services:
web:
build:
context: .
args:
volumes:
- .:/app
ports:
- "3000:3000"
depends_on:
- mssql
mssql:
image: 'microsoft/mssql-server-linux'
ports:
- '1433:1433'
environment:
- ACCEPT_EULA=Y
- SA_PASSWORD=P455w0rd!
volumes:
- app1_db:/var/lib/mssql/data
volumes:
app1_db:
and here is app2:
version: "3"
services:
web:
build:
context: .
args:
volumes:
- .:/app
ports:
- "3000:3000"
depends_on:
- mssql
mssql:
image: 'microsoft/mssql-server-linux'
ports:
- '1433:1433'
environment:
- ACCEPT_EULA=Y
- SA_PASSWORD=P455w0rd!
volumes:
- app2_db:/var/lib/mssql/data
volumes:
app2_db:
Should I be using the same volume in each docker-compose file?
I guess the problem is that in each app I am spinning up a different db instance, when in reality I just want one that is used by all my apps?
The ports section in a docker-compose file binds a container port to a host port, which causes the port conflict in your case.
You need to remove the ports section from at least one of the compose files. That way docker-compose can bring both up, and you can access both apps at the same time. But remember that the two apps will be placed on separate network bridges.
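A minimal sketch of the change in app2's file: drop only the host-facing mapping, and the web service keeps reaching the database as mssql:1433 over the compose network:

mssql:
  image: 'microsoft/mssql-server-linux'
  # no "ports:" section - 1433 is still reachable inside the compose
  # network as mssql:1433, it just isn't bound on the host anymore
  environment:
    - ACCEPT_EULA=Y
    - SA_PASSWORD=P455w0rd!
  volumes:
    - app2_db:/var/lib/mssql/data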
How docker-compose up works:
Suppose your app is in a directory called myapp, and your docker-compose.yml defines two services, web and db.
When you run docker-compose up, the following happens:
A network called myapp_default is created.
A container is created using web’s configuration. It joins the network myapp_default under the name web.
A container is created using db’s configuration. It joins the network myapp_default under the name db.
If you run the second docker-compose.yml in a different folder, myapp2, then the network will be myapp2_default.
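You can see this from the host after bringing both projects up:

cd myapp && docker-compose up -d      # creates network myapp_default
cd ../myapp2 && docker-compose up -d  # creates network myapp2_default
docker network ls                     # both *_default networks are listed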
Your current configuration creates two volumes, two database containers, and two apps. If you make them run on the same network and run the database as a single container, it will work.
I don't think you are expecting two database containers with two volumes.
Approach 1:
A single docker-compose.yml for both apps and the database:
version: "3"
services:
app1:
build:
context: .
args:
volumes:
- .:/app # give the path depending up on the docker file of app1.
ports:
- "3030:3000"
depends_on:
- mssql
app2:
build:
context: .
args:
volumes:
- .:/app # give the path depending up on the docker file of app2.
ports:
- "3032:3000"
depends_on:
- mssql
mssql:
image: 'microsoft/mssql-server-linux'
ports:
- '1433:1433'
environment:
- ACCEPT_EULA=Y
- SA_PASSWORD=SqlServer1234!
volumes:
- app_docker_db:/var/lib/mssql/data
volumes:
app_docker_db:
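With this single file, both apps reach the database at the service name mssql on port 1433 instead of localhost. For example, a SQL Server connection string would look roughly like this (exact format depends on your driver; the database name is illustrative):

Server=mssql,1433;Database=app1;User Id=sa;Password=SqlServer1234!;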
Approach 2:
To isolate things further while still running them as separate compose files, create three compose files sharing one network.
docker-compose.yml for the database, with the network:
version: "3"
services:
  mssql:
    image: 'microsoft/mssql-server-linux'
    ports:
      - '1433:1433'
    environment:
      - ACCEPT_EULA=Y
      - SA_PASSWORD=SqlServer1234!
    volumes:
      - app_docker_db:/var/lib/mssql/data
    networks:
      - test_network
volumes:
  app_docker_db:
networks:
  test_network:
docker-compose.yml for app1:
Remove the database container and add the lines below to your compose file.
version: "3"
services:
app1:
build:
context: .
args:
volumes:
- .:/app # give the path depending up on the docker file of app1.
ports:
- "3030:3000"
networks:
default:
external:
name: my-pre-existing-network
Do the same for the other app by adjusting its docker-compose file in the same way.
There are many other ways to set this up in docker-compose: configure the default network, or use a pre-existing network.
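End to end, and assuming all three files point at the same external network (the db file above declares its own test_network; replace that with the external network for a fully shared setup), startup would look something like this (directory names are illustrative):

# create the shared network once, up front
docker network create my-pre-existing-network
# then start each compose project against it
docker-compose -f db/docker-compose.yml up -d
docker-compose -f app1/docker-compose.yml up -d
docker-compose -f app2/docker-compose.yml up -d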
You're exposing the same port (1433) twice to the host machine (this is what ports: does). That is not possible, as it would bind the same host port twice; that's what the message says.
I think the most common approach in these cases is to link your DBs to your apps (see https://docs.docker.com/compose/compose-file/#links). By doing this your applications can still access the databases on their usual port (1433), but the databases are no longer accessible from the host (only from the containers linked to them).
Another error I see in your docker-compose files is that both applications are exposed on the same port. This is likewise not possible, for the same reason. I would suggest changing one of them to "3001:3000", so you can access that application on host port 3001.
I have a couple of app containers that I want to connect to a MongoDB container. I tried external_links but I cannot connect to MongoDB.
I get:
MongoError: failed to connect to server [mongodb:27017] on first connect
Do I have to add the containers to the same network to get external_links working?
MongoDB:
version: '2'
services:
  mongodb:
    image: mongo:3.4
    restart: always
    ports:
      - "27017:27017"
    volumes:
      - data:/data/db
volumes:
  data:
App:
version: '2'
services:
  app-dev:
    restart: always
    build: repository/
    ports:
      - "3000:80"
    env_file:
      - ./environment.env
    external_links:
      - mongodb_mongodb_1:mongodb
Networks:
# sudo docker network ls
NETWORK ID NAME DRIVER SCOPE
29f8bae3e136 bridge bridge local
67d5519cb2e6 dev_default bridge local
9e7097c844cf host host local
481ee4301f7c mongodb_default bridge local
4275508449f6 none null local
873a46298cd9 prod_default bridge local
Documentation at https://docs.docker.com/compose/compose-file/#/externallinks says
If you’re using the version 2 file format, the externally-created containers must be connected to at least one of the same networks as the service which is linking to them.
Ex:
Create a new docker network
docker network create -d bridge custom
docker-compose-1.yml
version: '2'
services:
  postgres:
    image: postgres:latest
    ports:
      - 5432:5432
    networks:
      - custom
networks:
  custom:
    external: true
docker-compose-2.yml
version: '2'
services:
  app:
    image: training/webapp
    networks:
      - custom
    external_links:
      - postgres:postgres
networks:
  custom:
    external: true
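To try this out, create the network as shown above, then bring each file up in order; the app service can then reach the database at postgres:5432 by service name:

docker-compose -f docker-compose-1.yml up -d
docker-compose -f docker-compose-2.yml up -d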
Yuva's answer above for version 2 holds good for version 3 as well.
The documentation for external_links isn't clear enough.
For more clarity, here is the version 3 variation with annotations:
version: '3'
services:
  app:
    image: training/webapp
    networks:
      - <<network created by other compose file>>
    external_links:
      - postgres:postgres
networks:
  <<network created by other compose file>>:
    external: true
Recently I faced a name resolution failure trying to link two containers handled by docker-compose v3, a gRPC server and client in my case, and failed even with external_links.
I'll probably duplicate some of the info posted here, but will try to summarize, as all of it helped me solve the issue.
From the external_links docs (as mentioned in an earlier answer):
If you’re using the version 2 or above file format, the externally-created containers must be connected to at least one of the same networks as the service that is linking to them.
The following configuration solved the issue.
project-grpc-server/docker-compose.yml
version: '3'
services:
  app:
    networks:
      - some-network
networks:
  some-network:
The server container is configured as expected.
project-grpc-client/docker-compose.yml
version: '3'
services:
  app:
    external_links:
      # Assigning an easy alias to the target container
      - project-grpc-server_app_1:server
    networks:
      # Joining this container to the target network
      - project-grpc-server_some-network
networks:
  # Declaring the target network (where the server resides)
  project-grpc-server_some-network:
    # Telling compose the declared network already exists
    # (it should not be created, just used)
    external: true
When using the defaults (no container_name configured), the trick in configuring the client container is the prefixes. In my case the network name got the prefix project-grpc-server_ when created by docker-compose, followed by the name itself, some-network, giving project-grpc-server_some-network. So fully qualified network names must be used when dealing with separate builds.
While the container name is obvious, since it appears on screen from time to time, the full network name is not an easy-to-guess candidate when first facing this corner of Docker, unless you run docker network ls.
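In practice, the quickest way to find the fully qualified names to put into the client's compose file is to ask Docker directly:

docker network ls                                        # compose networks are prefixed with the project name
docker network inspect project-grpc-server_some-network  # shows which containers are attached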
I'm not a Docker expert, so please don't judge too strictly if all this is obvious and essential in the Docker world.