Setting up Fluree on Docker

I have a blockchain web application set up on Docker so that I can have multiple servers running together, but all the images are on one computer. I would like to have at least one image running on a different computer (on the same network). I've tried to edit my yaml file to include the IP addresses of the computers; while I do not get any errors, the ledgers don't get created on the other computer. Here is my yaml. In this example, Server 4 should be on a different computer (but even though I gave it a different IP address, the two computers do not seem to be synced). I am not sure if my issue is with Docker or Fluree (I am new to both). Thanks!
version: '3'
services:
  ledger1:
    image: fluree/ledger
    ports:
      - 8090:8090
      - 9791:9791
    environment:
      fdb_group_servers: server1#192.168.1.15:9791,server2#192.168.1.15:9792,server3#192.168.1.15:9793,server4#192.168.1.25:9794
      fdb_group_this_server: server1
  ledger2:
    image: fluree/ledger
    ports:
      - 8091:8090
      - 9792:9792
    environment:
      fdb_group_servers: server1#192.168.1.15:9791,server2#192.168.1.15:9792,server3#192.168.1.15:9793,server4#192.168.1.25:9794
      fdb_group_this_server: server2
  ledger3:
    image: fluree/ledger
    ports:
      - 8092:8090
      - 9793:9793
    environment:
      fdb_group_servers: server1#192.168.1.15:9791,server2#192.168.1.15:9792,server3#192.168.1.15:9793,server4#192.168.1.25:9794
      fdb_group_this_server: server3
  ledger4:
    image: fluree/ledger
    ports:
      - 8093:8090
      - 9794:9794
    environment:
      fdb_group_servers: server1#192.168.1.15:9791,server2#192.168.1.15:9792,server3#192.168.1.15:9793,server4#192.168.1.25:9794
      fdb_group_this_server: server4
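For what it's worth, my understanding (which may be wrong; I'm new to this) is that a single compose file only starts containers on the machine where it runs, so the second computer would need its own compose file containing just ledger4, something like this sketch:

version: '3'
services:
  ledger4:
    image: fluree/ledger
    ports:
      - 8093:8090
      - 9794:9794
    environment:
      fdb_group_servers: server1#192.168.1.15:9791,server2#192.168.1.15:9792,server3#192.168.1.15:9793,server4#192.168.1.25:9794
      fdb_group_this_server: server4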

Related

How to prevent conflict between two separate docker-compose

I have two separate projects in two separate folders.
When I run one of them, the second one cannot run because of a conflict between ports.
The problem is with the Elasticsearch image.
Here are the two docker-compose files:
# /home/foder_1/
version: '3'
services:
  elasticsearch_ci:
    image: elasticsearch:7.14.2
    restart: always
    expose:
      - 9200
    environment:
      - discovery.type=single-node
      - xpack.security.enabled=false
    env_file:
      - ./envs/ci.env
    container_name: elasticsearch_ci_pipeline
Second one:
# /home/folder_2/
version: '3'
services:
  elasticsearch:
    image: elasticsearch:7.14.2
    expose:
      - 9200
    volumes:
      - elastic_search_data_staging:/var/lib/elastic_search/data/
    environment:
      - discovery.type=single-node
      - xpack.security.enabled=false
When I run docker ps, I see the second Elasticsearch container created, but it doesn't show its ports.
How can I solve the problem?
Update:
The problem is that in this situation, my web application (Django-based) cannot connect to the second Elasticsearch instance.
Also, when I change the port number for ES in the second docker-compose file (for example, exposing 9500 as well), the container still exposes the default ports (9200, 9300) plus my new port (9500), and my web application cannot connect to any of them.
Finally, I found the problem.
My server has only 4 GB of RAM, and when one Elasticsearch instance is running, other instances cannot start because the first one consumes most of the RAM.
If you want to run two separate instances of Elasticsearch, you should have at least 6 GB of RAM per instance.
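If adding RAM isn't an option, a common workaround (a sketch of mine, not from the original answer) is to cap each instance's JVM heap with the ES_JAVA_OPTS environment variable so that two small instances fit side by side:

version: '3'
services:
  elasticsearch:
    image: elasticsearch:7.14.2
    expose:
      - 9200
    environment:
      - discovery.type=single-node
      - xpack.security.enabled=false
      # Cap the JVM heap at 512 MB so a second instance can still start.
      - "ES_JAVA_OPTS=-Xms512m -Xmx512m"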

Docker Swarm app deployed with stack is not accessible externally

I am new to Docker swarm and therefore unsure what to do.
I am trying to create a drupal site and a DB using a stack in Swarm.
My configuration consists of 3 VMs (using Virtual Box) which are connected to the bridged network adapter and are on the same subnet as the host (10.0.0.X).
Once I had deployed the app there were no errors; however, I was not able to access the site from the host.
I have also verified connectivity between all nodes on the following ports
7946/tcp, 7946/udp, and 4789/udp
What am I missing?
This is the compose file I am using:
version: '3.1'
services:
  drupal:
    container_name: drupal
    image: drupal:8.2
    ports:
      - "8080:80"
    networks:
      - drupal_net
    volumes:
      - drupal-modules:/var/www/html/modules
      - drupal-profiles:/var/www/html/profiles
      - drupal-sites:/var/www/html/sites
      - drupal-themes:/var/www/html/themes
  postgres:
    networks:
      - drupal_net
    container_name: postgres
    image: postgres:9.6
    secrets:
      - psql-password
    environment:
      - POSTGRES_PASSWORD_FILE=/run/secrets/psql-password
    volumes:
      - drupal-data:/var/lib/postgresql/data
networks:
  drupal_net:
    driver: overlay
volumes:
  drupal-data:
  drupal-modules:
  drupal-profiles:
  drupal-sites:
  drupal-themes:
secrets:
  psql-password:
    external:
      name: psql-pw
Notes:
I have tried to add the host machine to the Swarm and was able to access the app only via localhost. If I entered one of the node IP addresses, I still received no response.
From another device on the same network, the app was still inaccessible (I was able to ping all nodes, including the host).
I have tried to create a single-node Swarm and there was still no access to the app.
I have tested all of the scenarios mentioned with a simple nginx app as well; see the diagnostic commands sketched below.
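For reference, a diagnostic sketch (standard docker/linux commands; <stack_name> is a placeholder for whatever was passed to docker stack deploy) to check that the service is running and that the ingress mesh is listening on the published port:

docker service ls                    # drupal should show REPLICAS 1/1
docker stack ps <stack_name>         # tasks should be Running, not Pending
sudo ss -lnt | grep 8080             # on any node: swarm's ingress mesh should listen on 8080
curl -v http://10.0.0.X:8080         # any node IP should answer, not only the node running the task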

Can not start 2 cassandra containers on mac

I have a situation with Cassandra containers.
I have 2 docker-compose.yaml files in different folders.
docker-compose.yaml in folder 1:
version: "3"
services:
cassandra-cluster-node-1:
image: cassandra:3.0
container_name: cassandra-cluster-node-1
hostname: cassandra-cluster-node-1
ports:
- '9142:9042'
- '7199:7199'
- '9160:9160'
docker-compose.yaml in folder 2
version: "3"
services:
cassandra-cluster-node-2:
image: cassandra:3.0
container_name: cassandra-cluster-node-2
hostname: cassandra-cluster-node-2
ports:
- '9242:9042'
- '7299:7199'
- '9260:9160'
I brought up Cassandra in folder 1 and the system worked well. After that, I brought up Cassandra in folder 2, but at that point the Cassandra service in folder 1 was killed automatically. I don't understand this behavior; could anyone who has experience with Docker help me explain this situation?
The error from cassandra_1 after I run cassandra_2:
cassandra-cluster-node-1 exited with code 137
Thank you, I'd appreciate your help.
Exit code 137 is an out-of-memory error. Cassandra uses a lot of memory if started with default settings: by default it takes 1/4 of the system memory, for each instance. You can restrict the memory usage using environment variables (see my example further down).
Docker Compose creates a network for each directory it runs under, so with your setup the two nodes will never be able to find each other. This is the output from my test; your files are put into two directories, cass1 and cass2:
$ docker network ls
NETWORK ID          NAME                DRIVER              SCOPE
dbe9cafe0af3        bridge              bridge              local
70cf3d77a7fc        cass1_default       bridge              local
41af3e02e247        cass2_default       bridge              local
21ac366b7a31        host                host                local
0787afb9aeeb        none                null                local
You can see the two networks cass1_default and cass2_default, so the two nodes will not find each other.
If you want them to find each other, you have to give the first one as a seed to the second, and they have to be on the same network (in the same docker-compose file):
version: "3"
services:
cassandra-cluster-node-1:
image: cassandra:3.0
container_name: cassandra-cluster-node-1
hostname: cassandra-cluster-node-1
environment:
- "MAX_HEAP_SIZE=1G"
- "HEAP_NEWSIZE=256M"
ports:
- '9142:9042'
- '7199:7199'
- '9160:9160'
cassandra-cluster-node-2:
image: cassandra:3.0
container_name: cassandra-cluster-node-2
hostname: cassandra-cluster-node-2
environment:
- "MAX_HEAP_SIZE=1G"
- "HEAP_NEWSIZE=256M"
- "CASSANDRA_SEEDS=cassandra-cluster-node-1"
ports:
- '9242:9042'
- '7299:7199'
- '9260:9160'
depends_on:
- cassandra-cluster-node-1
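Once both containers are up, a quick sanity check (my addition, not shown in the original answer) that the two nodes actually formed a cluster:

$ docker exec cassandra-cluster-node-1 nodetool status
# Both nodes should be listed with status UN (Up/Normal).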

traefik hostname works for web apps but not for mongodb

I'm running a mongo instance with docker-compose and traefik.
myapp-mongo:
  build: ../images/myapp-mongo
  restart: always
  ports:
    - "27017:27017"
  labels:
    - "traefik.ports=27017,27018"
    - "traefik.backend=myapp-mongo"
    - "traefik.frontend.rule=Host:myapp-mongo.docker.localhost"
  networks:
    - development
  environment:
    - MONGO_USER=${MONGO_USER}
    - MONGO_PASSWD=${MONGO_PASSWD}
    - MONGO_AUTHDB=${MONGO_AUTHDB}
Mongo is running fine and I can connect using 127.0.0.1 from my Mac.
The problem is that I can't connect using hostname myapp-mongo.docker.localhost. It only works using IP 127.0.0.1.
Trying to ping the IP 127.0.0.1 responds ok, but trying to ping the hostname doesn't work.
I've already added 127.0.0.1 proxy.docker.localhost into /etc/hosts to get traefik working.
All other web apps have hostnames that work fine, e.g. myapp.docker.localhost. This problem is only happening with this mongodb container.
Probably because Træfik is an HTTP proxy and so will only support HTTP/HTTPS connections.
I believe @bpatel is right (see the comment I left on his answer with a link to the GitHub conversation): Traefik, at the time of writing, only supports HTTP/HTTPS.
Solution using native docker networks
However, you can get around this issue! Since you are using docker, you can work around it by using the container name in your code (assuming mongo and your mongo-accessing code are both running in containers on a shared docker network; this will be the case if the containers are spun up with docker-compose). Run the following to see if your containers are linked up correctly:
run docker ps to get the names of your running containers (under the NAMES column)
run docker network ls to see your network names
run docker network inspect <target_network_name> to verify your containers from step 1 are on the same network.
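A trimmed, illustrative sketch of what the relevant part of step 3's output looks like when both containers share the network (the IDs are shortened placeholders):

$ docker network inspect proxy_webgateway
...
"Containers": {
    "2f0a...": { "Name": "database", ... },
    "8d4f...": { "Name": "topicOntologyBuilder", ... }
}
...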
I run docker-compose from three separate compose files, so you should be able to cover most cases with the following (apologies for any syntax errors; these are stripped-down code examples):
Entire docker-compose file that starts up traefik (under the directory name 'proxy'):
version: '2'
services:
  traefik:
    image: traefik
    command: --web --docker --docker.domain=docker.localhost --logLevel=DEBUG
    networks:
      - webgateway
    ports:
      - "80:80"
      - "8080:8080"
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock
      - /dev/null:/traefik.toml
networks:
  webgateway:
    driver: bridge
Snippet from my docker-compose file that spins up mongo:
version: '2'
services:
  database:
    image: mongo
    ports:
      - "27017:27017"
    networks:
      - web
networks:
  web:
    external:
      name: proxy_webgateway
Snippet from the docker-compose file that has the mongo-accessing code:
version: '2'
services:
  topicOntologyBuilder:
    image: topic-ontology-builder
    labels:
      - "traefik.backend=topicOntologyBuilder"
      - "traefik.port=80"
      - "traefik.frontend.rule=Host:topic-ontology.docker.localhost"
    networks:
      - web
    volumes:
      - ./:/home
networks:
  web:
    external:
      name: proxy_webgateway
Connection in Code
Not certain what language you're using; this is what the JS code looked like for me to connect to mongo (inside that topicOntologyBuilder container) while using traefik as the proxy (again, this works because we're making the most of docker networks):
var MongoClient = require('mongodb').MongoClient;

// <MONGO_CONTAINER_NAME> is the mongo service/container name on the shared
// docker network; docker's embedded DNS resolves it to the container's IP.
MongoClient.connect('mongodb://<MONGO_CONTAINER_NAME>/<DB_NAME>', function(err, db) {
  // insert code here to interact with mongo
});
Why this works
This works because docker does some clever DNS within the containers: each container can find the IP of the other containers by looking up their container names in docker's DNS.
Extra intel
If your containers are on separate computers/VMs, you'll probably want to play around with a service discovery tool (Consul plays well with Traefik) or do something fancy with a docker overlay network, which is specifically for containers in a cluster; see the sketch below.
If using raw docker networks, you can assign container aliases (this doesn't work with Traefik though, or at least it didn't a couple of months back).
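A rough sketch of that overlay option (the commands are standard docker; the network and container names are mine): join the hosts into a swarm, then create an attachable overlay network so standalone containers on different machines can resolve each other by name:

# On the first machine (the swarm manager):
docker swarm init
docker network create --driver overlay --attachable shared-net
# On each other machine, run the "docker swarm join ..." command printed by init, then:
docker run -d --network shared-net --name mongo mongo
docker run --network shared-net my-app   # my-app can now reach the database at the hostname "mongo"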

How to share localhost between two different Docker containers?

I have two different Docker containers and each has a different image. Each app in the containers uses non-conflicting ports. See the docker-compose.yml:
version: "2"
services:
service_a:
container_name: service_a.dev
image: service_a.dev
ports:
- "6473:6473"
- "6474:6474"
- "1812:1812"
depends_on:
- postgres
volumes:
- ../configs/service_a/var/conf:/opt/services/service_a/var/conf
postgres:
container_name: postgres.dev
hostname: postgres.dev
image: postgres:9.6
ports:
- "5432:5432"
volumes:
- ../configs/postgres/scripts:/docker-entrypoint-initdb.d/
I can curl each container successfully from the host machine (Mac OS), e.g. curl -k https://localhost:6473/service_a/api/version works. What I'd like to do is to be able to refer to the postgres container from the service_a container via localhost, as if these two containers were one and shared the same localhost. I know that it's possible if I use the hostname postgres.dev from inside the service_a container, but I'd like to be able to use localhost. Is this possible? Please note that I am not very well versed in networking or Docker.
Mac version: 10.12.4
Docker version: Docker version 17.03.0-ce, build 60ccb22
I have done quite some prior research, but couldn't find a solution.
Relevant: https://forums.docker.com/t/localhost-and-docker-compose-networking-issue/23100/2
The right way: don't use localhost. Instead, use docker's built-in DNS networking and reference the containers by their service name. You shouldn't even be setting the container name, since that breaks scaling.
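As a sketch of what that looks like (the variable name and database URL below are made up for illustration), the app in service_a just uses the service name postgres as its database host:

version: "2"
services:
  service_a:
    image: service_a.dev
    environment:
      # Hypothetical setting: point the app at the "postgres" service name;
      # docker's embedded DNS resolves it on the compose network.
      - DATABASE_URL=postgres://postgres:5432/mydb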
The bad way: if you don't want to use the docker networking feature, then you can switch to host networking, but that turns off a very key feature, and other docker capabilities, like the option to connect containers together in their own isolated networks, will no longer work. With that disclaimer, the result would look like:
version: "2"
services:
service_a:
container_name: service_a.dev
image: service_a.dev
network_mode: "host"
depends_on:
- postgres
volumes:
- ../configs/service_a/var/conf:/opt/services/service_a/var/conf
postgres:
container_name: postgres.dev
image: postgres:9.6
network_mode: "host"
volumes:
- ../configs/postgres/scripts:/docker-entrypoint-initdb.d/
Note that I removed the port publishing from the containers to the host, since you're no longer in a container network. And I removed the hostname setting, since you shouldn't change the hostname of the host itself from a docker container.
The forum post you linked shows how, when docker runs inside a VM, the host cannot communicate with the containers as localhost. This is an expected limitation, but the containers themselves will be able to talk to each other as localhost. If you use a VirtualBox-based install with docker-toolbox, you should be able to talk to the containers by the VirtualBox IP.
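For that docker-toolbox case, a quick way to find the VirtualBox IP (the machine name is typically "default", but yours may differ):

docker-machine ip default    # often prints something like 192.168.99.100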
The really wrong way: abuse the container network mode. This mode is available for debugging container networking issues and for specialized use cases, and really shouldn't be used to avoid reconfiguring an application to use DNS. And when you stop the database, you'll break your other container, since it will lose its network namespace.
For this, you'll likely need to run two separate docker-compose.yml files, because docker-compose will check for the existence of the network before taking any action. Start with the postgres container:
version: "2"
services:
postgres:
container_name: postgres.dev
image: postgres:9.6
ports:
- "5432:5432"
volumes:
- ../configs/postgres/scripts:/docker-entrypoint-initdb.d/
Then you can make a second service in that same network namespace:
version: "2"
services:
service_a:
container_name: service_a.dev
image: service_a.dev
network_mode: "container:postgres.dev"
ports:
- "6473:6473"
- "6474:6474"
- "1812:1812"
volumes:
- ../configs/service_a/var/conf:/opt/services/service_a/var/conf
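With this setup (my reading of the container network mode, not stated explicitly in the answer), service_a shares the network namespace of postgres.dev, so the app really can reach the database at localhost:5432; that is the one thing this mode buys you.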
Specifically for Mac, during local testing, I managed to get multiple containers working using the docker.for.mac.localhost approach. I documented it at http://nileshgule.blogspot.sg/2017/12/docker-tip-workaround-for-accessing.html
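For illustration (a sketch of mine; the port is hypothetical): docker.for.mac.localhost is a special DNS name that Docker for Mac resolves to the host, so one container can reach a port that another container has published on the host. Newer Docker Desktop releases use host.docker.internal for the same purpose:

# From inside a container on Docker for Mac:
curl http://docker.for.mac.localhost:8080/   # use host.docker.internal on current Docker Desktop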
