CouchDB not running in Docker image

I am trying to learn server-side Swift and am having success deploying to Heroku as a Docker container, but I am struggling to get my database working when using CouchDB with it. The database runs fine locally, but I can't seem to get it to run in the Docker container.
My current Dockerfile is as follows:
FROM ibmcom/swift-ubuntu:5.0.2
WORKDIR /ServerSideSwift
COPY . .
RUN swift build -c release
CMD .build/release/ServerSideSwift
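For reference, you can build and run this image on its own to confirm the app starts before adding the database (the image tag here is just illustrative):
docker build -t serversideswift .
docker run --rm -p 8080:8080 serversideswift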
So to add CouchDB to this, I created a docker-compose.yml that looks like this:
version: "3.7"
services:
  web:
    build:
      context: .
      dockerfile: Dockerfile
    ports:
      - "8080:8080"
    links:
      - db
  db:
    image: couchdb
    ports:
      - "5984:5984"
Building the image works fine and running works well too, but when the Swift code tries to create a new database, I get the errors I put in the Swift code, which show that CouchDB isn't running and therefore no new databases can be created.
Can anyone see where I am going wrong?
Update 3: my current docker-compose.yml:
version: "3.7"
networks:
  app-net:
    driver: bridge
services:
  app:
    build: .
    ports:
      - "8080:8080"
    networks:
      - app-net
  db:
    image: couchdb
    ports:
      - "5984:5984"
    environment:
      COUCHDB_USER: Test
      COUCHDB_PASSWORD: test
    networks:
      - app-net

First, change your connection string from "localhost" to "db" so that the app resolves the database through Docker's embedded DNS (Compose service names are hostnames on the shared network). Then configure the connection not to use encryption, since the container serves plain HTTP on port 5984.
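A quick way to confirm that the service name resolves is to curl CouchDB from inside the app container. This is a sketch: it assumes curl is available in the app image and uses the credentials from the compose file above.
# Run a one-off command in the running app container
docker-compose exec app curl http://Test:test@db:5984/
# A JSON welcome document means Docker DNS and the credentials both work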

CouchDB listens on localhost by default, and inside the container "localhost" means the container itself, since you are using Docker.
You can verify this by exec-ing into the CouchDB container and running curl localhost:5984; it should work there.
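For example (assuming curl is present in the CouchDB image):
# Shell into the CouchDB container and query it on its own loopback interface
docker-compose exec db bash
curl http://localhost:5984/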
If you want to allow specific IPs to connect to your CouchDB server, you should use the bind_address setting (see the CouchDB configuration docs).
To allow all IPs, use bind_address = 0.0.0.0 in local.ini.
bind_address
Defines the IP address by which CouchDB will be accessible.
[httpd]
bind_address = 127.0.0.1
To let CouchDB listen on any available IP address, just set the value to 0.0.0.0:
[httpd]
bind_address = 0.0.0.0
Add this config to a custom local.ini file and mount it inside the couchdb container at /opt/couchdb/etc/local.ini:
version: "3.7"
networks:
  app-net:
    driver: bridge
services:
  app:
    build: .
    ports:
      - "8080:8080"
    networks:
      - app-net
  db:
    image: couchdb
    ports:
      - "5984:5984"
    environment:
      COUCHDB_USER: Test
      COUCHDB_PASSWORD: test
    volumes:
      - path_to_local.ini:/opt/couchdb/etc/local.ini
    networks:
      - app-net
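After recreating the db service, you can verify the setting from the host through the published port, using the placeholder credentials from the compose file above:
docker-compose up -d --force-recreate db
curl http://Test:test@localhost:5984/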

Related

How can I connect the webserver and the database over a network in docker-compose?

I have a networking problem in Docker. The docker-compose.yml includes the two services below:
webserver (frontend + backend)
database
I tried both a bridge network and the default network, but neither works: the backend cannot connect to the database and shows a "connection refused" error. I then ran docker exec -t .. into the webserver and pinged the database, which timed out.
I cannot connect to the database using its IP address (which I got via docker exec and then hostname -i), but I can connect successfully using "localhost".
This is my docker-compose.yml:
version: '3.8'
services:
  postgres_server:
    container_name: postgres14-4_container
    image: postgres:14.4
    command: postgres -c 'max_connections=200'
    restart: always
    environment:
      - POSTGRES_USER=postgres
      - POSTGRES_PASSWORD=postgres
    ports:
      - '5222:5432'
    volumes:
      - db:/var/lib/postgresql14/data
    networks:
      - web_network
  webserver:
    container_name: frontend_backend_container
    image: webserver
    ports:
      - '9090:80'
      - '8081:8081'
    env_file:
      - backend_env
    depends_on:
      - postgres_server
    restart: always
    networks:
      - web_network
volumes:
  db:
    driver: local
networks:
  web_network:
    driver: bridge
To configure remote connections to postgres, you have to adjust pg_hba.conf. For example add:
# Remote access
host all all 0.0.0.0/0 trust
Where is your backend_env file? I assume it contains the host and port used to connect to the database.
You don't need to define anything special (like the bridge).
The webserver container should be able to reach the database at postgres_server:5432 (not localhost, and not port 5222, which is only the published host port).
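A quick reachability check from inside the webserver container, a sketch that assumes a shell and nc are available in the webserver image:
docker exec -it frontend_backend_container sh -c 'nc -zv postgres_server 5432'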

Setting up networking between multiple Docker containers in different projects using docker-compose

Hello, I have multiple projects that have their own Dockerfiles and docker-compose.yml files. I am not too familiar with how to set up the networking between these projects so that they could share the same databases and talk to one another. Does anyone have suggestions?
Right now, in one of the projects, I am just pulling all the Dockerfiles into one docker-compose.yml and setting up all the services I need from all the other projects in this yml file. I do not think this is ideal, and there is a high level of coupling between the services.
version: "3"
services:
  db:
    image: mysql/mysql-server
    ports:
      - 3306:3306
  mongo:
    image: mongo
    restart: always
  rails_app:
    build:
      context: ${RAILS_APP_PATH}
      dockerfile: Dockerfile
    volumes:
      - ${RAILS_APP_PATH}:/application
    ports:
      - 4000:4000
    depends_on:
      - db
      - mongo
    links:
      - db
      - mongo
  frontend:
    build:
      context: ${FRONTEND_PATH}
    ports:
      - ${EXPOSED_PORT}:${EXPOSED_PORT}
    depends_on:
      - go_services
    links:
      - go_services
  go_services:
    build:
      context: .
      dockerfile: Dockerfile
    ports:
      - "8080:8080"
    depends_on:
      - db
      - mongo
      - rails_app
    links:
      - db
      - mongo
      - rails_app
The trick is to use an external Docker network.
Set up the network, and the containers can talk to each other by their service names.
Set up the network on the host:
docker network create my-net
First compose file
version: '3.9'
services:
  mymongo:
    image: mongo:latest
    restart: unless-stopped
    container_name: mongo
    environment:
      MONGO_INITDB_DATABASE: mymongo
      MONGO_INITDB_ROOT_USERNAME: root
      MONGO_INITDB_ROOT_PASSWORD: password
    volumes:
      - ./database:/data/db
    ports:
      - "27017:27017"
networks:
  default:
    external: true
    name: my-net
Second compose file
version: '3.9'
services:
  ui:
    build:
      context: ./build
      dockerfile: Dockerfile_ui
    image: ui
    restart: "no"
    container_name: ui
    ports:
      - "8005:3000"
    command: ["npm", "start"]
networks:
  default:
    external: true
    name: my-net
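With the external network in place, each stack starts independently and the containers resolve each other by name across the two files. The file paths and the connection string below are illustrative, built from the credentials above:
docker network create my-net
docker-compose -f first/docker-compose.yml up -d
docker-compose -f second/docker-compose.yml up -d
# From the ui container, the database is reachable by container name:
#   mongodb://root:password@mongo:27017/mymongo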
You can do this without any special Compose setup, if:
each project is self-contained (they do not share databases)
the service locations are configurable via environment variables
you don't mind communicating via the host
If you're thinking about scaling up this project at all, this approach can look attractive. It will work even if you're running each Compose file on a different host, and it translates well into clustered environments like Kubernetes.
Go ahead and break up your Compose file into several independent ones:
# rails/docker-compose.yml
version: '3.8'
services:
  db:
    image: mysql/mysql-server
  app:
    build: .
    ports: ['4000:4000']
    depends_on: [db]

# go/docker-compose.yml
services:
  mongo:
    image: mongo
  service:
    build: .
    ports: ['8080:8080']
    depends_on: [mongo]
    environment:
      - RAILS_APP_URL
The very last line here passes the RAILS_APP_URL environment variable from the host environment into the container.
You can start the Rails application independently:
docker-compose -f ./rails/docker-compose.yml up -d
You need to find some hostname where the container can call back to the host. On macOS and Windows hosts, Docker provides a special hostname host.docker.internal for this. You can then connect the client container to the published port of its server:
export RAILS_APP_URL=http://host.docker.internal:4000
docker-compose -f ./go/docker-compose.yml up
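On Linux, host.docker.internal is not defined by default; from Docker 20.10 onward you can map it to the host gateway yourself. Shown with docker run below; in Compose the equivalent is an extra_hosts entry of "host.docker.internal:host-gateway" on the service:
docker run --rm --add-host=host.docker.internal:host-gateway alpine ping -c 1 host.docker.internal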
If you're doing development, you can run the service you're working on locally, with its dependencies in containers, and point the environment variable at the container:
go build -o ./server ./cmd/server
export RAILS_APP_URL=http://localhost:4000
./server
If you want to run this setup on multiple hosts but without using a dedicated cluster manager like Docker Swarm or Kubernetes, set the environment variable to point at the DNS name of the host running the service. If you did want to translate this to Kubernetes, a Helm "chart" would be analogous, containing the Deployment, Service, etc. and dependencies for a single component, and you could configure the other service's URL through Helm values.

Local Communication Between Services

I have 2 services inside my Docker cluster. frontend runs on port 8090, and backend runs on port 8000. How can I make frontend call backend via a local DNS name, like fetch('https://backend.local/')? If I use the Docker hostname, I need to specify the port to call the backend. Do I need to run a local DNS server inside Docker?
You have to create a user-defined network in Docker; all containers running on that network can then communicate with each other using their container names, or you can define an alias for each and use that. A simple docker-compose file for a backend microservice and a MySQL database can be created using the config below.
version: '3.2'
networks:
  testNetwork:
services:
  mysql-dev:
    image: mysql:latest
    container_name: mysql-dev
    environment:
      - MYSQL_ROOT_PASSWORD=root
      - MYSQL_DATABASE=root
    ports:
      - "3306:3306"
    networks:
      - testNetwork
  backend:
    image: backend:1.0
    container_name: backend
    environment:
      - DB_USER=root
      - DB_PASS=root
      - DB_NAME=root
      - DB_HOST=mysql-dev
      - DB_DIALECT=mysql
    ports:
      - "4000:4000"
    working_dir: /backend
    command: npm start
    networks:
      - testNetwork
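Once both containers are up, the backend reaches the database by its service name. A manual check, assuming a shell and nc exist in the backend image:
docker-compose up -d
docker exec -it backend sh -c 'nc -zv mysql-dev 3306'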

Connection between Docker containers: why do I need to use the gateway IP instead of the container name?

I was testing things on my own and had a problem: trying to connect a node/express container (app) to a mongo container (database), I can connect to mongo from MongoDB Compass via localhost:27017, but I can't connect from the node/express container when mongoose is configured with the connection URL 'mongodb://localhost:27017/dbtest'.
So I looked up some solutions on SO (like this one), and the answers said that instead of 'mongodb://localhost:27017/dbtest' I should use the name of my container, 'mongodb://mymongo:27017/dbtest', but for me this didn't work; I only receive an ECONNREFUSED error.
The containers are on the same network; here are my Dockerfile and docker-compose file.
Dockerfile
#node 8.16.2
FROM node:8.16.2
COPY . /app
WORKDIR /app
RUN npm install
EXPOSE 3000
CMD ["npm","start"]
docker-compose.yaml
version: "3.7"
services:
  db:
    image: mongo
    ports:
      - 27017:27017
    networks:
      - testing
  app:
    build:
      context: .
      dockerfile: Dockerfile
    networks:
      - testing
networks:
  testing:
I solved this problem with mongodb://172.17.0.1:27017/dbtest, where 172.17.0.1 is the gateway of the network the containers are on.
Can someone explain this behavior and whether it is correct?
Platform: Linux
Where did you get the name mymongo from? You have defined the name of the mongodb service as db in your compose file, so use the connection string 'mongodb://db:27017/dbtest'. The gateway address only worked because you publish port 27017: 172.17.0.1 is the host's address on the Docker bridge, so the connection goes out to the host and comes back in through the published port. Connecting by service name stays on the Compose network and works even if the port is not published.
version: "3.7"
services:
  db:        # <-- this is the name of your mongo service
    image: mongo
    ports:
      - 27017:27017
    networks:
      - testing
  app:
    build:
      context: .
      dockerfile: Dockerfile
    networks:
      - testing
networks:
  testing:
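To confirm resolution from the app container (a sketch; it assumes getent and nc are available in the node image):
docker-compose exec app sh -c 'getent hosts db && nc -zv db 27017'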

How to access docker container using localhost address

I am trying to access a docker container from another container using localhost address.
The compose file is pretty simple. Both containers ports are exposed.
There are no problems when building.
In my host machine I can successfully execute curl http://localhost:8124/ and get a response.
But inside django_container, the same command fails with a Connection refused error.
I tried adding them to the same network, but the result didn't change.
However, if I use the container's internal IP, like curl 'http://172.27.0.2:8123/', I do get a response.
Is this the default behavior? How can I reach clickhouse_container using localhost?
version: '3'
services:
  django:
    container_name: django_container
    build: ./django
    ports:
      - "8007:8000"
    links:
      - clickhouse:clickhouse
    volumes:
      - ./django:/usr/src/run
    command: bash /usr/src/run/run.sh
  clickhouse:
    container_name: clickhouse_container
    build: ./clickhouse
    ports:
      - "9001:9000"
      - "8124:8123"
      - "9010:9009"
With the line - "8124:8123" you're mapping port 8123 of the clickhouse container to port 8124 on localhost, which is what allows you to access clickhouse from the host at port 8124.
If you want to reach the clickhouse container from within the Docker network, you have to use the container's hostname. This is what I like to do:
version: '3'
services:
  django:
    hostname: django
    container_name: django
    build: ./django
    ports:
      - "8007:8000"
    links:
      - clickhouse:clickhouse
    volumes:
      - ./django:/usr/src/run
    command: bash /usr/src/run/run.sh
  clickhouse:
    hostname: clickhouse
    container_name: clickhouse
    build: ./clickhouse
    ports:
      - "9001:9000"
      - "8124:8123"
      - "9010:9009"
If you make the changes as I have above, you should be able to access clickhouse from within the django container with curl http://clickhouse:8123.
As in #Billy Ferguson's answer, you can use localhost from the host machine only because you define a port mapping that routes localhost:8124 to clickhouse:8123. From another container (django), you can't. If you insist, there is an ugly workaround: share the host's network namespace with network_mode, but then the django container shares the host's entire network stack.
services:
  django:
    hostname: django
    container_name: django
    build: ./django
    volumes:
      - ./django:/usr/src/run
    command: bash /usr/src/run/run.sh
    network_mode: "host"   # ports and links are omitted: they conflict with host networking
It depends on the config.xml settings. If config.xml contains <listen_host>0.0.0.0</listen_host>, you can use clickhouse-client -h your_ip --port 9001.
