How to prevent conflicts between two separate docker-compose projects

I have two separate projects in two separate folders.
When I run one of them, the second one cannot run because of a port conflict.
The problem is with the Elasticsearch image.
The following are the two docker-compose files:
# /home/folder_1/
version: '3'
services:
  elasticsearch_ci:
    image: elasticsearch:7.14.2
    restart: always
    expose:
      - 9200
    environment:
      - discovery.type=single-node
      - xpack.security.enabled=false
    env_file:
      - ./envs/ci.env
    container_name: elasticsearch_ci_pipeline
The second one:
# /home/folder_2/
version: '3'
services:
  elasticsearch:
    image: elasticsearch:7.14.2
    expose:
      - 9200
    volumes:
      - elastic_search_data_staging:/var/lib/elastic_search/data/
    environment:
      - discovery.type=single-node
      - xpack.security.enabled=false
volumes:
  elastic_search_data_staging:  # top-level declaration required for the named volume above
When I run docker ps, I see that the second Elasticsearch container has been created, but it doesn't show its ports.
How can I solve the problem?
Update:
The problem is that in this situation my web application (Django-based) cannot connect to the second Elasticsearch instance.
Also, when I change the port number for ES in the second docker-compose file (for example, adding 9500 under expose), the container still lists the default ports (9200, 9300) plus my new port (9500), and my web application cannot connect to any of them.
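(For reference: expose on its own never publishes a port to the host; the 9200/9300 entries come from the EXPOSE directives baked into the elasticsearch image. Publishing to the host takes a ports mapping instead. A sketch, with 9500 as a purely illustrative host port:)
services:
  elasticsearch:
    image: elasticsearch:7.14.2
    ports:
      - "9500:9200"  # host port 9500 -> container port 9200 (illustrative)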

Finally I found what the problem is.
My server has only 4 GB of RAM, and when one Elasticsearch instance is running, other instances cannot start because the first one consumes most of the RAM.
If you want to run two separate instances of Elasticsearch, you should plan on at least 6 GB of RAM per instance.
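If both instances really must share a small host, one common workaround (a sketch, not part of the original answer) is to cap each container's JVM heap via ES_JAVA_OPTS, for example:
services:
  elasticsearch_ci:
    image: elasticsearch:7.14.2
    environment:
      - discovery.type=single-node
      - xpack.security.enabled=false
      - "ES_JAVA_OPTS=-Xms512m -Xmx512m"  # hypothetical heap cap; tune to your host
With both containers capped at, say, 512 MB of heap, they can usually coexist on a 4 GB host, at the cost of performance.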

Related

Setting up Fluree on Docker

I have a blockchain web application set up on Docker so that I can have multiple servers running together, but all the images are on one computer. I would like to have at least one image running on a different computer (on the same network). I've tried to edit my YAML file to include the IP addresses of the computers, but while I don't get any errors, the ledgers don't get created on the other computer. Here is my YAML. In this example, server 4 should be on a different computer (but even though I gave it a different IP address, the two computers do not seem to be synced). I am not sure if my issue is with Docker or Fluree (I am new to both). Thanks!
version: '3'
services:
  ledger1:
    image: fluree/ledger
    ports:
      - 8090:8090
      - 9791:9791
    environment:
      fdb_group_servers: server1@192.168.1.15:9791,server2@192.168.1.15:9792,server3@192.168.1.15:9793,server4@192.168.1.25:9794
      fdb_group_this_server: server1
  ledger2:
    image: fluree/ledger
    ports:
      - 8091:8090
      - 9792:9792
    environment:
      fdb_group_servers: server1@192.168.1.15:9791,server2@192.168.1.15:9792,server3@192.168.1.15:9793,server4@192.168.1.25:9794
      fdb_group_this_server: server2
  ledger3:
    image: fluree/ledger
    ports:
      - 8092:8090
      - 9793:9793
    environment:
      fdb_group_servers: server1@192.168.1.15:9791,server2@192.168.1.15:9792,server3@192.168.1.15:9793,server4@192.168.1.25:9794
      fdb_group_this_server: server3
  ledger4:
    image: fluree/ledger
    ports:
      - 8093:8090
      - 9794:9794
    environment:
      fdb_group_servers: server1@192.168.1.15:9791,server2@192.168.1.15:9792,server3@192.168.1.15:9793,server4@192.168.1.25:9794
      fdb_group_this_server: server4

How to stop docker-compose from doing unwanted replication

I've got a very simple single-host docker compose setup:
version: "3"
services:
bukofka:
image: picoglavar
restart: always
environment:
- PORT=8000
- MODEL=/models/large
volumes:
- glavar:/models
chlenix:
image: picoglavar
restart: always
environment:
- PORT=8000
- MODEL=/models/small
volumes:
- glavar:/models
# ... other containers ...
As you can see, it's just two services based on a single image, so nothing special really. When I run docker ps I can see these two services churning. But when I open htop, I see each Python application running at least four times; this is very surprising, because I haven't set up any kind of in-container replication and I'm not running in any kind of swarm mode.
Why does this happen?
I'm a complete idiot. And colour blind too, apparently.
The lines in green are threads, not processes: https://superuser.com/a/1496571/173193
per @nick-odell
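A quick way to see the difference on a Linux host (a sketch, not from the original answer): ps -ef prints one line per process, while ps -eLf prints one line per thread.
$ ps -ef | grep [p]ython    # one line per process
$ ps -eLf | grep [p]ython   # one line per thread (LWP column holds the thread ID)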

How do I connect containers using container name with docker-compose?

I am trying to understand how to reach one container from another through its container name, specifically a pgadmin container connecting to a postgresql container through DNS.
In docker-compose v3 I cannot link them, nor does networks: seem to be available.
The main reason for needing this is that the containers don't get a static IP address when they spin up, so in pgadmin I can't connect to the postgresql DB using the same IP every time; a DNS name (i.e. the container name) would work better.
Can we do this with docker-compose, or at least set a static IP address for a specific container?
I have tried creating a user-defined network:
networks:
  backed:
and then using it in the service:
app:
  networks:
    - backend
This causes a docker-compose error about "networks" being an invalid option for the app.
docker-compose.yml:
version: "0.1"
services:
  devapi:
    container_name: devapi
    restart: always
    build: .
    ports:
      - "3000:3000"
  api-postgres-pgadmin:
    container_name: api-postgres-pgadmin
    image: dpage/pgadmin4:latest
    ports:
      - "5050:80"
    environment:
      - PGADMIN_DEFAULT_EMAIL=stuff@stuff.com
      - PGADMIN_DEFAULT_PASSWORD=12345
  api-postgres:
    container_name: api-postgres
    image: postgres:10
    volumes:
      - ./data:/data/db
    ports:
      - "15432:5432"
    environment:
      - POSTGRES_PASSWORD=12345
Actually, I spot one immediate problem:
version: "0.1"
Why are you doing this? The current version of the compose file format is 3.x, e.g.:
version: "3"
See e.g. the Compose file version 3 reference.
The version determines which features are available. It's entirely possible that by setting version: "0.1" you are explicitly disabling support for the networks parameter. You'll note that the reference shows examples using the networks attribute.
As an aside, unless there is a particular reason you need it, I would drop the use of container_name in your compose file, since it makes it impossible to run multiple instances of the same compose file on your host.
networks are available from compose file version 3, but you are using version: "0.1" in your docker-compose file.
Change version: "0.1" to version: "3" in docker-compose.yml.
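For illustration, a minimal sketch of the corrected file (assuming an explicit backend network; note that with version 3, services on the same compose network can already resolve each other by service name):
version: "3"
services:
  api-postgres-pgadmin:
    image: dpage/pgadmin4:latest
    ports:
      - "5050:80"
    environment:
      - PGADMIN_DEFAULT_EMAIL=stuff@stuff.com
      - PGADMIN_DEFAULT_PASSWORD=12345
    networks:
      - backend
  api-postgres:
    image: postgres:10
    ports:
      - "15432:5432"
    environment:
      - POSTGRES_PASSWORD=12345
    networks:
      - backend
networks:
  backend:
In pgadmin you would then register the server with host name api-postgres and port 5432 (the container port, not the published 15432).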

Cannot start 2 Cassandra containers on Mac

I have a situation with Cassandra containers.
I have two docker-compose.yaml files in different folders:
docker-compose.yaml in folder 1:
version: "3"
services:
  cassandra-cluster-node-1:
    image: cassandra:3.0
    container_name: cassandra-cluster-node-1
    hostname: cassandra-cluster-node-1
    ports:
      - '9142:9042'
      - '7199:7199'
      - '9160:9160'
docker-compose.yaml in folder 2:
version: "3"
services:
  cassandra-cluster-node-2:
    image: cassandra:3.0
    container_name: cassandra-cluster-node-2
    hostname: cassandra-cluster-node-2
    ports:
      - '9242:9042'
      - '7299:7199'
      - '9260:9160'
I brought Cassandra up in folder 1 and the system worked well; after that I brought Cassandra up in folder 2. At that point the Cassandra service from folder 1 was killed automatically. I don't understand this behavior; could anyone with Docker experience help explain it?
The error from cassandra_1 after I run cassandra_2:
cassandra-cluster-node-1 exited with code 137
Thank you, I would appreciate your help.
Exit code 137 means the container was killed (128 + SIGKILL), which in practice is almost always an out-of-memory kill. Cassandra uses a lot of memory if started with default settings: by default it takes 1/4 of the system memory, for each instance. You can restrict the memory usage using environment variables (see my example further down).
Docker Compose also creates a network for each directory it runs under, so with your setup the two nodes will never be able to find each other. This is the output from my test; your files are put into two directories, cass1 and cass2:
$ docker network ls
NETWORK ID     NAME            DRIVER    SCOPE
dbe9cafe0af3   bridge          bridge    local
70cf3d77a7fc   cass1_default   bridge    local
41af3e02e247   cass2_default   bridge    local
21ac366b7a31   host            host      local
0787afb9aeeb   none            null      local
You can see the two networks cass1_default and cass2_default, so the two nodes will not find each other.
If you want them to find each other, you have to give the first one as a seed to the second one, and they have to be on the same network (same docker-compose file):
version: "3"
services:
cassandra-cluster-node-1:
image: cassandra:3.0
container_name: cassandra-cluster-node-1
hostname: cassandra-cluster-node-1
environment:
- "MAX_HEAP_SIZE=1G"
- "HEAP_NEWSIZE=256M"
ports:
- '9142:9042'
- '7199:7199'
- '9160:9160'
cassandra-cluster-node-2:
image: cassandra:3.0
container_name: cassandra-cluster-node-2
hostname: cassandra-cluster-node-2
environment:
- "MAX_HEAP_SIZE=1G"
- "HEAP_NEWSIZE=256M"
- "CASSANDRA_SEEDS=cassandra-cluster-node-1"
ports:
- '9242:9042'
- '7299:7199'
- '9260:9160'
depends_on:
- cassandra-cluster-node-1
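To verify the two nodes actually formed a ring (a sketch; nodetool ships inside the official cassandra image):
$ docker-compose up -d
$ docker exec cassandra-cluster-node-1 nodetool status   # both nodes should report UN (Up/Normal)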

docker - multiple databases locally

I have 2 applications that are separate codebases, and they each have their own database on the same db server instance.
I am trying to replicate this in docker, locally on my laptop, and I want both apps to use the same database instance.
I would like:
- both apps to start in docker at the same time
- both apps to be able to access the database on localhost
- the database data to be persisted
- to be able to view the data in the database using an IDE on localhost
So each of my apps has its own Dockerfile and docker-compose file.
For app1, I start the docker instance of the app, which is tied to the database. It all starts fine.
When I try to start app2, I get the following error:
ERROR: for app2_mssql_1 Cannot start service mssql: driver failed programming external connectivity on endpoint app2_mssql_1 (12d550c8f032ccdbe67e02445a0b87bff2b2306d03da1d14ad5369472a200620): Bind for 0.0.0.0:1433 failed: port is already allocated
How can I have them both running at the same time? BOTH apps need to be able to access each other's database tables!
Here are the docker-compose.yml files.
app1:
version: "3"
services:
  web:
    build:
      context: .
      args:
    volumes:
      - .:/app
    ports:
      - "3000:3000"
    depends_on:
      - mssql
  mssql:
    image: 'microsoft/mssql-server-linux'
    ports:
      - '1433:1433'
    environment:
      - ACCEPT_EULA=Y
      - SA_PASSWORD=P455w0rd!
    volumes:
      - app1_db:/var/lib/mssql/data
volumes:
  app1_db:
and here is app2:
version: "3"
services:
  web:
    build:
      context: .
      args:
    volumes:
      - .:/app
    ports:
      - "3000:3000"
    depends_on:
      - mssql
  mssql:
    image: 'microsoft/mssql-server-linux'
    ports:
      - '1433:1433'
    environment:
      - ACCEPT_EULA=Y
      - SA_PASSWORD=P455w0rd!
    volumes:
      - app2_db:/var/lib/mssql/data
volumes:
  app2_db:
Should I be using the same volume in each docker-compose file?
I guess the problem is that each app spins up its own db instance, when in reality I just want one instance that is used by all my apps?
The ports section in a docker-compose file binds a container port to a host port, which is what causes the port conflict in your case.
You need to remove the ports section from at least one of the compose files. That way docker-compose can bring both up, and you can access both apps at the same time. But remember that the two apps will be placed on separate network bridges.
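As a sketch of that change (assuming app2 keeps its own database container but stops publishing it to the host), app2's mssql service would become:
mssql:
  image: 'microsoft/mssql-server-linux'
  environment:
    - ACCEPT_EULA=Y
    - SA_PASSWORD=P455w0rd!
  volumes:
    - app2_db:/var/lib/mssql/data
app2's web service still reaches it as mssql:1433 over the compose network; only the host binding goes away.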
How docker-compose up works: suppose your app is in a directory called myapp, with a docker-compose.yml defining services web and db. When you run docker-compose up, the following happens:
- A network called myapp_default is created.
- A container is created using web's configuration. It joins the network myapp_default under the name web.
- A container is created using db's configuration. It joins the network myapp_default under the name db.
If you run the second docker-compose.yml in a different folder, myapp2, then the network will be myapp2_default.
Your current configuration creates two volumes, two database containers and two apps. If you make the apps run on the same network and run the database as a single container, it will work. I don't think you really want two database containers and two volumes.
Approach 1:
docker-compose.yml as a single compose file.
version: "3"
services:
  app1:
    build:
      context: .
      args:
    volumes:
      - .:/app # adjust the path to match app1's Dockerfile
    ports:
      - "3030:3000"
    depends_on:
      - mssql
  app2:
    build:
      context: .
      args:
    volumes:
      - .:/app # adjust the path to match app2's Dockerfile
    ports:
      - "3032:3000"
    depends_on:
      - mssql
  mssql:
    image: 'microsoft/mssql-server-linux'
    ports:
      - '1433:1433'
    environment:
      - ACCEPT_EULA=Y
      - SA_PASSWORD=SqlServer1234!
    volumes:
      - app_docker_db:/var/lib/mssql/data
volumes:
  app_docker_db:
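With this layout, both apps reach the database at host mssql, port 1433 (the service name resolves on the shared compose network), while an IDE on the host connects to localhost:1433 through the published port, which covers all four requirements listed in the question.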
Approach 2:
To isolate things further and still run them as separate compose files, create three compose files with a shared network.
docker-compose.yml for the database, with a network:
version: "3"
services:
  mssql:
    image: 'microsoft/mssql-server-linux'
    ports:
      - '1433:1433'
    environment:
      - ACCEPT_EULA=Y
      - SA_PASSWORD=SqlServer1234!
    volumes:
      - app_docker_db:/var/lib/mssql/data
    networks:
      - test_network
volumes:
  app_docker_db:
networks:
  test_network:
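Note that Compose prefixes the network name with the project (folder) name: if this file lives in a folder called db (an arbitrary example), the network will actually be named db_test_network, and that is the value to substitute for my-pre-existing-network below.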
docker-compose.yml for app1: remove the database container and add the lines below to your compose file.
version: "3"
services:
  app1:
    build:
      context: .
      args:
    volumes:
      - .:/app # adjust the path to match app1's Dockerfile
    ports:
      - "3030:3000"
networks:
  default:
    external:
      name: my-pre-existing-network
Do the same for the other app, swapping in its own docker-compose file.
There are many other ways to set up docker-compose files; see "Configure the default network" and "Use a pre-existing network" in the Compose networking docs.
You're exposing the same port (1433) to the host machine twice (this is what ports: does). That is not possible, since it would bind the same host port twice; that's what the error message says.
I think the most common approach in these cases is to link your DBs to your apps (see https://docs.docker.com/compose/compose-file/#links). By doing this your applications can still access the databases on their usual port (1433), but the databases are no longer accessible from the host (only from the containers linked to them).
Another error I see in your compose files is that both applications are published on the same host port. This is not possible for the same reason. I would suggest changing one of them to "3001:3000", so you can access that application on host port 3001.
