I have a static Gatsby app that needs the URI of another container for a Hasura GraphQL connection.
The Problem
The Gatsby container finishes its Docker build before Hasura's does, so the URI in Gatsby is set to undefined.
How can I make the URI dynamic, so that it resolves to Hasura's actual container address once that container has finished building?
What I tried
Adding a depends_on entry in docker-compose.yml to force Gatsby to wait until the Hasura container is ready, so it has the IP by the time the Gatsby container starts its build. But according to [0], depends_on doesn't guarantee that Gatsby will wait for Hasura to finish before starting itself.
That page suggests using a custom bash script to force the Gatsby container to wait. If I were to use wait-for-it.sh, what should the subcommand be (the command to run after the wait finishes)?
[0] https://docs.docker.com/compose/startup-order/
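For reference, wait-for-it.sh takes the target as host:port, then --, then the command to exec once the target accepts connections; the subcommand is simply whatever the container would normally run. A minimal sketch as a compose override, assuming the script has been copied into the image and the image normally starts with npm run start (adjust to your actual CMD):

web:
  # wait until Hasura accepts TCP connections, then start the app as usual
  command: ["./wait-for-it.sh", "hasura:8080", "--", "npm", "run", "start"]

Note that this only delays the container's start command at runtime; it does not delay anything that happens during docker build.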
docker-compose.yml
version: '3.6'
services:
  database:
    image: postgres:12.2
    container_name: 'postgres-db'
    env_file:
      - database.env
    volumes:
      - ./schema.sql:/docker-entrypoint-initdb.d/1-schema.sql
      - ./seed.sql:/docker-entrypoint-initdb.d/2-seed.sql
  hasura:
    image: hasura/graphql-engine:v1.2.1
    restart: on-failure
    container_name: 'hasura'
    depends_on:
      - database
    ports:
      - '8180:8080'
    env_file:
      - hasura.env
  web:
    build: '.'
    image: 'webserver'
    container_name: 'nginx-webserver'
    restart: on-failure
    depends_on:
      - hasura
    ports:
      - '8080:80'
    volumes:
      - /app/node_modules
      - .:/app
    env_file:
      - webserver.env
webserver.env file
NODE_ENV=production
GATSBY_WEBPACK_PUBLICPATH=/
HASURA_ENDPOINT=http://hasura:8080/v1/graphql
GraphQL Apollo client that needs the Hasura URI:
export const client = new ApolloClient({
  uri: process.env.HASURA_ENDPOINT,
  fetch,
});
Found the solution.
I was thinking about the container network relationship incorrectly.
Because the app is a static Gatsby build, the Apollo client runs in the visitor's browser, so it connects via the HOST's IP address, not the container's.
Explanation
The Hasura container is published to the host via localhost:8180. If you look at the docker-compose file, the port mapping 8180:8080 means "make Hasura's container port 8080 reachable on the host's port 8180". Other containers on the Docker network can reach Hasura at hasura:8080, but a browser on the host cannot resolve that name.
The Gatsby app (nginx-webserver) should therefore point to localhost:8180, not hasura:8080.
My final docker-compose.yml:
version: '3.6'
services:
  database:
    image: postgres:12.2
    container_name: 'postgres-db'
    env_file:
      - database.env
    volumes:
      - ./schema.sql:/docker-entrypoint-initdb.d/1-schema.sql
      - ./seed.sql:/docker-entrypoint-initdb.d/2-seed.sql
  hasura:
    image: hasura/graphql-engine:v1.2.1
    restart: on-failure
    container_name: 'hasura'
    depends_on:
      - database
    ports:
      - '8180:8080'
    env_file:
      - hasura.env
  web:
    build: '.'
    image: 'nginx-webserver'
    container_name: 'web'
    restart: on-failure
    ports:
      - '8080:80'
    volumes:
      - .:/app
      - /app/node_modules   # anonymous volume (note the leading slash) so the bind mount doesn't hide node_modules
    env_file:
      - webserver.env
ApolloClient setup:
import ApolloClient from 'apollo-boost';
import fetch from 'isomorphic-fetch';

export const HASURA_ENDPOINT_URI =
  process.env.NODE_ENV === 'development'
    ? 'http://localhost:8090/v1/graphql'
    : 'http://localhost:8180/v1/graphql';

export const client = new ApolloClient({
  uri: HASURA_ENDPOINT_URI,
  fetch
});
Related
I am a beginner with Docker and cannot get a response from my project running in Docker. I have a Go project with 4 services. When I run it locally on my PC, everything works and there are no problems. But when it runs in Docker and I send a request via Postman, I get no response and 'socket hang up' appears.
I have 4 services for this:
1- REST API service, whose Dockerfile is:
FROM golang:latest as GolangBase
...
...
EXPOSE 8082
CMD ["/go/bin/ecg", "server"]
2- Page service, whose Dockerfile is:
FROM golang:latest as GolangBase
...
...
EXPOSE 8080
CMD ["/go/bin/ecg", "page"]
3- Redis
4- Postgres
The docker-compose.yml in the project root:
version: "2.3"
services:
server:
build:
context: .
dockerfile: docker/app/Dockerfile
container_name: ecg-go
ports:
- "127.0.0.1:8082:8082"
depends_on:
- postgres
- redis
networks:
- ecg-service_default
restart: always
page:
build:
context: .
dockerfile: docker/page/Dockerfile
container_name: ecg-page
ports:
- "127.0.0.1:8080:8080"
depends_on:
- postgres
networks:
- ecg-service_default
restart: always
redis:
image: redis:6
container_name: ecg-redis
volumes:
- redis_data:/data
networks:
- ecg-service_default
postgres:
image: postgres:alpine
container_name: ecg-postgres
environment:
POSTGRES_PASSWORD: docker
POSTGRES_DB: ecg
POSTGRES_USER: ecg
volumes:
- pg_data:/var/lib/postgresql/data
networks:
- ecg-service_default
volumes:
pg_data:
redis_data:
networks:
ecg-service_default:
I build the images and run the containers with the docker-compose up -d command, and all the services are created and running.
But when I send a request to http://localhost:8082/.. it returns 'Could not get response, socket hang up'.
What's the problem?
I have created a Docker network consisting of a Neo4j container and, in another container, a Python script that writes to the Neo4j database. The Python script uses the Neo4j service name to build the Neo4j URI bolt://db:7687 (stored in the .env file), which is accessible from within the Docker network.
My question is: how do I translate the following docker-compose.yml to Kubernetes? My attempt to translate it with Kompose failed because Service "daily_pe_update" won't be created because 'ports' is not specified. (A hand-written sketch follows the compose file below.)
version: "3.8"
services:
db:
container_name: db
image: neo4j:4.4.3-community
ports:
- 7888:7474
- 7999:7687
restart: unless-stopped
volumes:
- .db/data:/data
- .db/conf:/conf
- .db/logs:/logs
- .db/plugins:/plugins
networks:
- data_service
env_file:
- ./.env
daily_pe_update:
depends_on:
- "db"
container_name: daily_pe_update
build: ./pe_update
command: "python daily_pe_update.py"
restart: on-failure:5
volumes:
- ./pe_update/:/usr/app/src/
env_file:
- ./.env
networks:
- data_service
networks:
data_service:
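Kompose skips services that publish no ports, but a worker like daily_pe_update doesn't need a Kubernetes Service at all; a Deployment is enough. Here is a minimal hand-written sketch, assuming the ./pe_update image has been built and pushed as pe-update:latest and the .env file has been turned into a ConfigMap named pe-update-env (both names are placeholders):

apiVersion: apps/v1
kind: Deployment
metadata:
  name: daily-pe-update
spec:
  replicas: 1
  selector:
    matchLabels:
      app: daily-pe-update
  template:
    metadata:
      labels:
        app: daily-pe-update
    spec:
      containers:
        - name: daily-pe-update
          image: pe-update:latest          # placeholder: build and push the ./pe_update image yourself
          command: ["python", "daily_pe_update.py"]
          envFrom:
            - configMapRef:
                name: pe-update-env        # e.g. kubectl create configmap pe-update-env --from-env-file=.env

The bolt://db:7687 URI keeps working as long as the Neo4j pod is fronted by a Kubernetes Service named db exposing port 7687. Also, since the script exits when done (the compose file uses restart: on-failure:5), a Job with backoffLimit: 5, or a CronJob for a daily run, may fit better than a Deployment, whose restartPolicy: Always would rerun the script indefinitely.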
I have built a CRUD application using Spring Boot and MySQL. MySQL runs in Docker, I can connect to it from my local machine, and my application works. But when I deploy the Spring Boot application in Docker too, it is not able to connect to the Docker MySQL.
## Spring application.properties
server.port=8001
# MySQL Props
spring.jpa.properties.hibernate.dialect = org.hibernate.dialect.MySQL5InnoDBDialect
spring.jpa.hibernate.ddl-auto = create
spring.datasource.url=jdbc:mysql://${MYSQL_HOST:localhost}:${MYSQL_PORT:9001}/${MYSQL_DATABASE:test-db}
spring.datasource.username=${MYSQL_USER:admin}
spring.datasource.password=${MYSQL_PASSWORD:nimda}
## Dockerfile
FROM openjdk:11
RUN apt-get update
ADD target/mysql-crud-*.jar mysql-crud.jar
ENTRYPOINT ["java", "-jar", "mysql-crud.jar"]
## docker-compose.yml
version: '3.9'
services:
  dockersql:
    image: mysql:latest
    restart: always
    container_name: dockersql
    ports:
      - "3306:3306"
    env_file: .env
    environment:
      - MYSQL_DATABASE=$MYSQL_DATABASE
      - MYSQL_USER=$MYSQL_USER
      - MYSQL_PASSWORD=$MYSQL_PASSWORD
      - MYSQL_ROOT_PASSWORD=$MYSQL_ROOT_PASSWORD
    networks:
      - crud-network
  mycrud:
    depends_on:
      - dockersql
    restart: always
    container_name: mycrud
    env_file: .env
    environment:
      - MYSQL_HOST=dockersql:3306
      - MYSQL_DATABASE=$MYSQL_DATABASE
      - MYSQL_USER=$MYSQL_USER
      - MYSQL_PASSWORD=$MYSQL_PASSWORD
      - MYSQL_ROOT_PASSWORD=$MYSQL_ROOT_PASSWORD
    build: .
    networks:
      - crud-network

networks:
  crud-network:
    driver: bridge
# .env file
MYSQL_DATABASE=test-db
MYSQL_USER=admin
MYSQL_PASSWORD=nimda
MYSQL_ROOT_PASSWORD=nimda
Can anyone help me?
Even better, add a health check for MySQL and make it a condition for Spring Boot to start:
dockersql:
  healthcheck:
    test: [ "CMD-SHELL", 'mysql --user=${MYSQL_USER} --database=${MYSQL_DATABASE} --password=${MYSQL_PASSWORD} --execute="SELECT count(table_name) > 0 FROM information_schema.tables;"' ]
mycrud:
  depends_on:
    dockersql:
      condition: service_healthy
The --execute query can also be modified into an application-specific health check, for example checking that a specific table exists, as shown below.
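For instance, a check that only passes once a given table has been created; here users is a placeholder table name and the timing values are illustrative. Selecting from a missing table makes mysql exit non-zero, so the container stays unhealthy until the table appears:

dockersql:
  healthcheck:
    test: [ "CMD-SHELL", 'mysql --user=${MYSQL_USER} --database=${MYSQL_DATABASE} --password=${MYSQL_PASSWORD} --execute="SELECT 1 FROM users LIMIT 1;"' ]
    interval: 5s
    timeout: 5s
    retries: 10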
I found out that my Spring Boot app tries to connect to MySQL before MySQL is completely up and running, and that is what was causing the error.
After adding
mycrud:
  depends_on:
    - dockersql
  container_name: mycrud
  restart: on-failure
it resolved my issue. (depends_on only controls start order; the restart: on-failure policy is what keeps retrying the app until MySQL is ready.)
I am developing a workflow service as a training project. Abstracting from the details, everything you need to know for this question is in the docker-compose file below. For deployment I rented a server and ran docker-compose on it. Everything works well, but what I'm worried about is that ports 8000 and 5432 are open to the outside.
The first question is: is it worth worrying about? And if so, how do I get rid of it?
Docker-compose file content below
version: "3"
services:
db:
container_name: 'emkk-db'
image: postgres
volumes:
- ./backend/data:/var/lib/postgresql/data
env_file:
- ./backend/db.env
ports:
- "5432:5432"
backend:
container_name: 'emkk-backend'
image: emkk_backend
build: ./backend
volumes:
- ./backend:/emkk/backend
env_file:
- ./backend/.env
ports:
- "8000:8000"
depends_on:
- db
frontend:
container_name: 'emkk-frontend'
image: emkk_frontend
build: ./frontend
command: npm run start
env_file:
- ./frontend/.env
volumes:
- /emkk/frontend/node_modules
- ./frontend:/emkk/frontend
ports:
- "80:80"
depends_on:
- backend
I also want to configure HTTPS. I tried installing nginx on the host and putting a certificate on it using certbot, then proxying requests to the containers. I sat with this for several hours and still did not manage to achieve anything better than HTTPS for the nginx start page.
Maybe I'm doing completely the wrong things, but I'm new to this and haven't had to deal with deployments before. I would be grateful for answers that contain an idea or an example of how to do this.
If you don't need connections to 8000 (presumably the application server) or 5432 (the database) from outside the server, you can change docker-compose.yml as follows:
Expose only the ports that external clients actually need.
When the web frontend connects to the backend, use the service name, like backend:8000.
When the backend connects to the db, use the service name, like db:5432.
version: "3"
services:
db:
container_name: 'emkk-db'
image: postgres
volumes:
- ./backend/data:/var/lib/postgresql/data
env_file:
- ./backend/db.env
backend:
container_name: 'emkk-backend'
image: emkk_backend
build: ./backend
volumes:
- ./backend:/emkk/backend
env_file:
- ./backend/.env
depends_on:
- db
frontend:
container_name: 'emkk-frontend'
image: emkk_frontend
build: ./frontend
command: npm run start
env_file:
- ./frontend/.env
volumes:
- /emkk/frontend/node_modules
- ./frontend:/emkk/frontend
ports:
- "80:80"
depends_on:
- backend
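For example, the backend would then reach the database purely by its service name; the connection string (normally set in ./backend/.env) might look like this, with placeholder credentials:

backend:
  environment:
    - DATABASE_URL=postgres://emkk:secret@db:5432/emkk   # 'db' is the compose service name; user/password/db are placeholders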
And you can use Nginx Proxy Manager to serve everything over HTTPS with a certificate from certbot, as sketched below.
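A minimal sketch of adding it as one more compose service (the host paths are illustrative; certificates are then requested from its admin UI on port 81):

proxy:
  image: 'jc21/nginx-proxy-manager:latest'
  restart: unless-stopped
  ports:
    - '80:80'     # HTTP
    - '443:443'   # HTTPS
    - '81:81'     # admin UI
  volumes:
    - ./proxy/data:/data
    - ./proxy/letsencrypt:/etc/letsencrypt

If the proxy takes port 80, drop the frontend's 80:80 mapping and proxy to it by service name instead.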
I have the following Docker Compose file where I declare 3 services: db, api, sdk-py-test
The service sdk-py-test creates a container craft-sdk-py-test, which sends a bunch of HTTP requests to the GraphQL API container craft-api created by the service named api.
The requests succeed when craft-sdk-py-test sends them to the external URL: http://172.30.0.3:5433/graphql
But when I try to send them via the Docker network's internal URL http://api:5433/graphql, I immediately get an error message:
gql.transport.exceptions.TransportServerError: 502 Server Error: notresolvable for url: http://api:5433/graphql
How can I use the internal service name api to route the requests instead of the IP address?
Docker Compose File
version: "3.5"
services:
db:
container_name: craft-db
restart: always
image: craft-db
env_file:
- ./.env
ports:
- 5432:5432
api:
container_name: craft-api
restart: always
image: craft-api
env_file:
- ./.env
depends_on:
- db
ports:
- 5433:5433
sdk-py-test:
container_name: craft-sdk-py-test
image: craft-sdk-py-test
build:
context: ./
dockerfile: Dockerfile.test
env_file:
- ./.env
depends_on:
- api
volumes:
- ./tests:/tests/tests
- ./craft:/tests/craft
command: ["nose2", "-v"]