I am trying to deploy my backend to Google Cloud Run. I'm using Docker Compose with two components: a Golang server and a Postgres DB.
When I run Docker Compose locally, everything works great! But when I upload to Google Cloud with
gcloud builds submit . --tag gcr.io/BACKEND_NAME
gcloud run deploy --image gcr.io/BACKEND_NAME --platform managed
Google Cloud's health check fails: it gets stuck on "Deploying... Revision deployment finished. Waiting for health check to begin." and then throws "Cloud Run error: Container failed to start. Failed to start and then listen on the port defined by the PORT environment variable. Logs for this revision might contain more information."
I understand that Google Cloud Run provides a PORT env variable, which I tried to account for in my docker-compose.yml, but the command still fails. I'm out of ideas. What could be wrong here?
Here is my docker-compose.yml
version: '3'
services:
  db:
    image: postgres:latest # use latest official postgres version
    container_name: db
    restart: "always"
    environment:
      - POSTGRES_USER=user
      - POSTGRES_PASSWORD=pass
      - POSTGRES_DB=db
    volumes:
      - database-data:/var/lib/postgresql/data/ # persist data even if container shuts down
  api:
    container_name: api
    depends_on:
      - db
    restart: on-failure
    build: .
    ports:
      # Bind GCR provided incoming PORT to port 8000 of our api
      - "${PORT}:8000"
    environment:
      - POSTGRES_USER=user
      - POSTGRES_PASSWORD=pass
      - POSTGRES_DB=db
volumes:
  database-data: # named volumes can be managed easier using docker-compose
The api container is a Golang binary which waits for a connection to the Postgres DB before calling http.ListenAndServe(":8000", handler).
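For reference (not from the original post): a minimal sketch of a Go entrypoint that honors Cloud Run's PORT variable and falls back to 8000 for local Compose runs. The handler and the database wait are placeholders here.

package main

import (
	"log"
	"net/http"
	"os"
)

func main() {
	// Cloud Run injects the listening port via the PORT env variable;
	// fall back to 8000 so the same binary still works with the local compose setup.
	port := os.Getenv("PORT")
	if port == "" {
		port = "8000"
	}

	// Placeholder handler; in the real app this would be set up after the DB connection succeeds.
	handler := http.NewServeMux()
	handler.HandleFunc("/", func(w http.ResponseWriter, r *http.Request) {
		w.Write([]byte("ok"))
	})

	log.Fatal(http.ListenAndServe(":"+port, handler))
}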
Related
I'm trying to set up a simple MLflow tracking server with Docker that uses a MySQL backend store and an S3 bucket for artifact storage. I'm using a simple docker-compose file to set this up on a server and supplying all of the credentials through a .env file.

When I try to run the sklearn_elasticnet_wine example from the mlflow repo here: https://github.com/mlflow/mlflow/tree/master/examples/sklearn_elasticnet_wine using TRACKING_URI = "http://localhost:5005" from the machine hosting my tracking server, the run fails with the following error: botocore.exceptions.NoCredentialsError: Unable to locate credentials.

I've verified that my environment variables are correct and available in my mlflow_server container. The runs show up in my backend store, so the run only seems to be failing at the artifact logging step. I'm not sure why this isn't working.

I've seen a few examples of how to set up a tracking server online, including: https://towardsdatascience.com/deploy-mlflow-with-docker-compose-8059f16b6039. Some also use minio, but others just specify their S3 location as I have. I'm not sure what I'm doing wrong at this point. Do I need to explicitly set the ARTIFACT_URI as well? Should I be using Minio?

Eventually, I'll be logging runs to the server from another machine, hence the nginx container. I'm pretty new to all of this, so I'm hoping it's something really obvious and easy to fix, but so far Google has failed me. TIA.
version: '3'
services:
  app:
    restart: always
    build: ./mlflow
    image: mlflow_server
    container_name: mlflow_server
    expose:
      - 5001
    ports:
      - "5001:5001"
    networks:
      - internal
    environment:
      - AWS_ACCESS_KEY_ID=${AWS_ACCESS_KEY_ID}
      - AWS_SECRET_ACCESS_KEY=${AWS_SECRET_ACCESS_KEY}
      - AWS_DEFAULT_REGION=${AWS_DEFAULT_REGION}
      - AWS_S3_BUCKET=${AWS_S3_BUCKET}
      - DB_USER=${DB_USER}
      - DB_PASSWORD=${DB_PASSWORD}
      - DB_PORT=${DB_PORT}
      - DB_NAME=${DB_NAME}
    command: >
      mlflow server
      --backend-store-uri mysql+pymysql://${DB_USER}:${DB_PASSWORD}@${DB_HOST}:${DB_PORT}/${DB_NAME}
      --default-artifact-root s3://${AWS_S3_BUCKET}/mlruns/
      --host 0.0.0.0
      --port 5001
  nginx:
    restart: always
    build: ./nginx
    image: mlflow_nginx
    container_name: mlflow_nginx
    ports:
      - "5005:80"
    networks:
      - internal
    depends_on:
      - app
networks:
  internal:
    driver: bridge
Finally figured this out. I didn't realize that the client also needed access to the AWS credentials for S3 storage.
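For anyone hitting the same error, a rough sketch of what that means on the machine launching mlflow run; the key values and the alpha parameter are placeholders, and the tracking URI matches the nginx port published above. Because the MLflow client uploads artifacts to S3 directly, the credentials have to be present wherever the run is launched, not just inside the mlflow_server container.

export AWS_ACCESS_KEY_ID=<key id>
export AWS_SECRET_ACCESS_KEY=<secret key>
export MLFLOW_TRACKING_URI=http://localhost:5005
mlflow run examples/sklearn_elasticnet_wine -P alpha=0.5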
I am currently setting up a Buildkite build pipeline which runs an API in one Docker container and an application in a second Docker container alongside it, while running Cypress tests (which also run inside that second container).
I use the following docker compose file:
version: '3'
services:
  testing-image:
    build:
      context: ../
      dockerfile: ./deploy/Dockerfile-cypress
    image: cypress-testing
  cypress:
    image: cypress-testing
    ipc: host
    depends_on:
      - api
  db:
    image: postgres:latest
    ports:
      - "54320:5432"
  redis:
    image: redis:latest
    ports:
      - "63790:6379"
  api:
    build:
      context: ../api/
      dockerfile: ./Dockerfile
    image: api
    command: /env/development/command.sh
    links:
      - db
      - redis
    depends_on:
      - db
      - redis
The application runs in the cypress container when started by Buildkite. It then starts the Cypress tests, and some of them pass. However, any test that requires communication with the API fails, because the cypress container is unable to see localhost within the API container. I am able to enter the API container using a terminal and have verified that it is working perfectly internally using cURL.
I have tried various URLs within the cypress container to try to reach the API, which is available on port 8080 within the API container, including api://api:8080 and http://api:8080, but none of them have been able to reach the API.
Does anybody know what could be going on here?
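(Not from the original post, but a quick diagnostic that follows from the setup above: run a plain HTTP request from inside the cypress container against the compose service name, assuming the container is running and curl or wget exists in that image.)

docker-compose exec cypress curl -v http://api:8080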
I am very new to K8s, so I have never actually used it, but I have familiarized myself with the concepts of nodes and pods. I know that minikube is the local K8s engine for debugging etc. and that I should interact with any K8s engine via the kubectl tool. Now my questions are:
Does launching the same configuration on my local minikube instance and on a production AWS/etc. instance guarantee that the result will be identical?
How do I set up continuous deployment for my project? I have configured CI that pushes images of tested code to Docker Hub with the :latest tag, but I want them to be automatically deployed in rolling-update mode without interrupting uptime.
It would be great to get the correct configurations along with the steps I should perform to make it work on any cluster. I don't want to keep the docker-compose notation and use kompose; I want to do it properly in the K8s context.
My current docker-compose.yml is (django and react services are available from dockerhub now):
version: "3.5"
services:
nginx:
build:
context: .
dockerfile: Dockerfile.nginx
restart: always
command: bash -c "service nginx start && tail -f /dev/null"
ports:
- 80:80
- 443:443
volumes:
- /mnt/wts_new_data_volume/static:/data/django/static
- /mnt/wts_new_data_volume/media:/data/django/media
- ./certs:/etc/letsencrypt/
- ./misc/ssl/server.crt:/etc/ssl/certs/server.crt
- ./misc/ssl/server.key:/etc/ssl/private/server.key
- ./misc/conf/nginx.conf:/etc/nginx/nginx.conf:ro
- ./misc/conf/passports.htaccess:/etc/passports.htaccess:ro
depends_on:
- react
redis:
restart: always
image: redis:latest
privileged: true
command: redis-server
celery:
build:
context: backend
command: bash -c "celery -A project worker -B -l info"
env_file:
- ./misc/.env
depends_on:
- redis
django:
build:
context: backend
command: bash -c "/code/manage.py collectstatic --no-input && echo donecollectstatic && /code/manage.py migrate && bash /code/run/daphne.sh"
volumes:
- /mnt/wts_new_data_volume/static:/data/django/static
- /mnt/wts_new_data_volume/media:/data/django/media
env_file:
- ./misc/.env
depends_on:
- redis
react:
build:
context: frontend
depends_on:
- django
The short answer is yes, you can replicate what you have in docker-compose with K8s.
It depends on your infrastructure, though. For example, if you have an external LoadBalancer in your AWS deployment, it won't be the same locally.
You can do rolling updates (this typically works with stateless services). You can also take advantage of a GitOps type of approach.
The docker-compose notation is different from K8s, so yes, you'll have to translate it to Kubernetes objects: Pods, Deployments, Secrets, ConfigMaps, Volumes, etc. For the most part the basic objects will work on any cluster, but there will always be some objects tied to the physical characteristics of your cluster (i.e. storage volumes, load balancers, etc.). The Kubernetes docs are very comprehensive and super helpful.
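To make that concrete, here is a minimal, hedged sketch of what one of the compose services (say django) might translate to; the image name, port, replica count and secret name are assumptions, not taken from the post:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: django
spec:
  replicas: 2
  strategy:
    type: RollingUpdate          # rolling updates without downtime
  selector:
    matchLabels:
      app: django
  template:
    metadata:
      labels:
        app: django
    spec:
      containers:
        - name: django
          image: youruser/django:latest   # placeholder Docker Hub image
          ports:
            - containerPort: 8000         # assumed Daphne port
          envFrom:
            - secretRef:
                name: django-env          # plays the role of ./misc/.env
---
apiVersion: v1
kind: Service
metadata:
  name: django
spec:
  selector:
    app: django
  ports:
    - port: 8000
      targetPort: 8000

The other services (redis, celery, react, nginx) follow the same pattern, with the nginx piece usually becoming an Ingress or a Service of type LoadBalancer depending on the cluster.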
We are using docker-compose to set up the services for our app:
version: "3"
services:
db:
container_name: db
image: postgres:11.1
environment:
POSTGRES_USER: xxx
POSTGRES_PASSWORD: xxx
POSTGRES_DB: xxx
PGPASSWORD: xxx
volumes:
- pgdata:/var/lib/postgresql/data
- ./data/dbdump:/dbdump
networks:
- zenet
ports:
- "5432:5432"
# The React web application
web:
container_name: web
build:
context: .
dockerfile: devenv/web/Dockerfile
volumes:
- ./src/client-app:/usr/local/abc
- /usr/local/abc/node_modules
networks:
- zenet
ports:
- "3000:3000"
command: npm run startindocker
# The Django Rest Framework API
api:
container_name: api
build:
context: .
dockerfile: devenv/api/Dockerfile
environment:
DJANGO_SETTINGS_MODULE: abc.settings.dev
PYTHONSTARTUP: /root/pythonstartup.sh
PYTHONIOENCODING: UTF-8
volumes:
- .:/usr/local/borrow-a-boat
- ./devenv/api/pythonrc.py:/root/pythonstartup.sh
networks:
- zenet
depends_on:
- "db"
ports:
- "9000:9000"
command:
python3 /usr/local/borrow-a-boat/src/django/abc/manage.py runserver 0.0.0.0:9000
tty: true
volumes:
pgdata:
customboatdata:
networks:
zenet:
(sensitive info has been replaced)
My colleagues have the setup running fine. I set up the app, and the volumes and containers are up and running. I can hit the api service at port 9000 from the browser and confirm that the db is populated. However, my web service is unable to get data from the api. How can I confirm that this assertion is correct, i.e. that web really cannot communicate with the api service?
And how can I fix this and get web to receive data from the api? Apologies for the newbie question.
EDIT:
When I run ping api from within the web container (using docker exec -it [containerID] /bin/sh), I am receiving responses of the form:
64 bytes from 172.18.0.4: seq=139 ttl=64 time=0.084 ms
So clearly my assertion is incorrect. Why, then, is the web service unable to get a response from the api service? When I load the web app in the browser, I do not see any log output in the api terminal showing that it is being hit.
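(A further check, not from the original post: ping only proves network-level reachability, so it can help to test the HTTP layer directly from inside the web container, assuming wget or curl is available in that image.)

docker exec -it web sh -c "wget -qO- http://api:9000/"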
EDIT-2:
As per @runwuf's question and my response, the web service clearly is able to communicate with the api service. So something else is wrong. Here are the steps we follow to set up the stack on our systems. I use Linux Mint 19.2, while the team uses Macs. The commands are:
docker kill $(docker_container_names)
docker rm -v $(docker_container_names)
docker volume rm abc_pgdata
docker image rm abc_api
docker image rm abc_web
docker-compose build
docker-compose up -d db api web
ssh abc@abc.com 'pg_dump abc | gzip' | gunzip | docker-compose run --rm db psql --host db --username abc
docker-compose run --rm db psql --host db --username abc -c "update core_photo set image_base = 'sample.jpg'"
docker-compose run --rm db psql --host db --username abc -c "update core_experienceimage set image_base = 'sample.jpg'"
In the end, it was a case of an env variable not being accessible within the web service. All it took was reading the console logs in the browser, which showed the undefined variable.
The lesson for me: when it comes to problem solving, no matter how new the technology, don't forget to use the tools you are already familiar with.
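(The post doesn't name the variable, but as a purely hypothetical illustration, the fix amounts to making sure the web service actually receives the value its code expects, e.g. an API base URL; note that code running in the browser reaches the API through the host-published port, not the compose hostname.)

  web:
    environment:
      # hypothetical variable name; use whatever the client code actually reads
      REACT_APP_API_URL: http://localhost:9000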
I have created a docker-compose file with two services, Go and MySQL. It creates containers for Go and MySQL. Now I am running code which tries to connect to the MySQL database running as a Docker container, but I get an error.
docker-compose.yml
version: "2"
services:
app:
container_name: golang
restart: always
build: .
ports:
- "49160:8800"
links:
- "mysql"
depends_on:
- "mysql"
mysql:
image: mysql
container_name: mysql
volumes:
- dbdata:/var/lib/mysql
restart: always
environment:
- MYSQL_ROOT_PASSWORD=root
- MYSQL_DATABASE=testDB
- MYSQL_USER=root
- MYSQL_PASSWORD=root
ports:
- "3307:3306"
volumes:
dbdata:
Error while connecting to mysql database
golang | 2019/02/28 11:33:05 dial tcp 127.0.0.1:3306: connect: connection refused
golang | 2019/02/28 11:33:05 http: panic serving 172.24.0.1:49066: dial tcp 127.0.0.1:3306: connect: connection refused
golang | goroutine 19 [running]:
Connection with MySql Database
func DB() *gorm.DB {
	// The DSN uses the compose service name "mysql" as the host, not localhost.
	db, err := gorm.Open("mysql", "root:root@tcp(mysql:3306)/testDB?charset=utf8&parseTime=True&loc=Local")
	if err != nil {
		log.Panic(err)
	}
	log.Println("Connection Established")
	return db
}
EDIT: Updated Dockerfile
FROM golang:latest
RUN go get -u github.com/gorilla/mux
RUN go get -u github.com/jinzhu/gorm
RUN go get -u github.com/go-sql-driver/mysql
COPY ./wait-for-it.sh .
RUN chmod +x /wait-for-it.sh
WORKDIR /go/src/app
ADD . src
EXPOSE 8800
CMD ["go", "run", "src/main.go"]
I am using the gorm package, which lets me connect to the database.
depends_on does not verify that MySQL is actually ready to receive connections. It starts the second container once the database container is running, regardless of whether the database is ready for connections, which can lead to exactly this kind of issue: your application expects the database to be ready when it might not be.
Quoted from the documentation:
depends_on does not wait for db and redis to be “ready” before starting web - only until they have been started.
There are many tools/scripts that can be used to solve this issue, like wait-for, which is sh-compatible in case your image is based on Alpine, for example (you can use wait-for-it if you have bash in your image).
All you have to do is add the script to your image through the Dockerfile, then use a command like the one below in docker-compose.yml for the service that should wait for the database.
What comes after -- is the command you would normally use to start your application:
version: "2"
services:
app:
container_name: golang
...
command: ["./wait-for", "mysql:3306", "--", "go", "run", "myapplication"]
links:
- "mysql"
depends_on:
- "mysql"
mysql:
image: mysql
...
I have removed some parts from the docker-compose for easier readability.
Replace the go run myapplication part with the CMD of your golang image.
See Controlling startup order for more on this problem and strategies for solving it.
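On the image side, a minimal sketch of the Dockerfile step this refers to; the paths and the final CMD here are assumptions, the point is only that the script ends up inside the image, executable, at the path the compose command uses:

FROM golang:latest
WORKDIR /go/src/app
# Make the wait-for script available inside the image and executable.
COPY wait-for ./wait-for
RUN chmod +x ./wait-for
COPY . .
CMD ["go", "run", "main.go"]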
Another issue will arise after you solve the connection issue:
Setting MYSQL_USER to root will cause a failure in MySQL with this error message:
ERROR 1396 (HY000) at line 1: Operation CREATE USER failed for 'root'@'%'
This is because that user already exists in the database and MySQL tries to create it again. If you need the root user itself, use only the MYSQL_ROOT_PASSWORD variable, or change the value of MYSQL_USER so you can securely use it in your application instead of the root user.
Update: In case you are getting not found even though the path is correct, you might need to write the command as below:
command: sh -c "./wait-for mysql:3306 -- go run myapplication"
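A related strategy, not from the answer above but a common alternative: let the Go code itself retry the connection for a while instead of (or in addition to) a wrapper script. A rough sketch using the same gorm v1 API as the question; names and timings are illustrative only.

package main

import (
	"log"
	"time"

	"github.com/jinzhu/gorm"
	_ "github.com/jinzhu/gorm/dialects/mysql"
)

// connectWithRetry keeps trying to open the MySQL connection until it succeeds
// or the attempts run out.
func connectWithRetry(dsn string, attempts int) (*gorm.DB, error) {
	var db *gorm.DB
	var err error
	for i := 0; i < attempts; i++ {
		db, err = gorm.Open("mysql", dsn)
		if err == nil {
			return db, nil
		}
		log.Printf("database not ready yet (%v), retrying...", err)
		time.Sleep(2 * time.Second)
	}
	return nil, err
}

func main() {
	// Same DSN shape as the question: service name "mysql", internal port 3306.
	db, err := connectWithRetry("root:root@tcp(mysql:3306)/testDB?charset=utf8&parseTime=True&loc=Local", 30)
	if err != nil {
		log.Fatal(err)
	}
	defer db.Close()
	log.Println("Connection Established")
}

The restart: always policy in the compose file gives a similar effect by restarting the crashing container, but an in-process retry avoids the crash loop.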
First, if you are using a recent version of Docker Compose, you don't need the links argument in your app service. Quoting the Docker Compose documentation: "Warning: The --link flag is a legacy feature of Docker. It may eventually be removed. Unless you absolutely need to continue using it..." See https://docs.docker.com/compose/compose-file/#links
I think the solution is to use the networks argument. This creates a Docker network and adds each service to it.
Try this:
version: "2"
services:
app:
container_name: golang
restart: always
build: .
ports:
- "49160:8800"
networks:
- my_network
depends_on:
- "mysql"
mysql:
image: mysql
container_name: mysql
volumes:
- dbdata:/var/lib/mysql
restart: always
networks:
- my_network
environment:
- MYSQL_ROOT_PASSWORD=root
- MYSQL_DATABASE=testDB
- MYSQL_USER=root
- MYSQL_PASSWORD=root
ports:
- "3307:3306"
volumes:
dbdata:
networks:
my_network:
driver: bridge
By the way, if you only connect to MySQL from your app service, you don't need to publish the MySQL port. If the containers run in the same network, they can reach all ports inside that network.
If my example doesn't work, try this:
Run docker-compose up and then go into the app container using
docker container exec -it CONTAINER_NAME bash
Install ping in order to test the connection, and then run ping mysql.
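For example, assuming the app image is Debian-based (the official golang image is), so apt-get is available inside the container:

docker container exec -it golang bash
apt-get update && apt-get install -y iputils-ping
ping mysql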