I'm quite new to GCP and have been using mostly AWS. I am currently trying to play around with GCP and want to deploy a container using docker-compose.
I set up a very basic docker-compose.yml file as follows:
# docker-compose.yml
version: '3.3'
services:
  git:
    image: alpine/git
    volumes:
      - ${PWD}:/git
    command: "clone https://github.com/PHP-DI/demo.git"
  composer:
    image: composer
    volumes:
      - ${PWD}/demo:/app
    command: "composer install"
    depends_on:
      - git
  web:
    image: php:7.4-apache
    ports:
      - "8080:${PORT:-80}"
      - "8000:${PORT:-8000}"
    volumes:
      - ${PWD}/demo:/var/www/html
    command: php -S 0.0.0.0:8000 -t /var/www/html
    depends_on:
      - composer
So the containers will get the code from git, install the dependencies using composer, and finally the app will be available on port 8000.
On my machine, running docker-compose up does everything. However, how can I push this docker-compose setup to Google Cloud?
I have tried building a container using the docker/compose image and a Dockerfile as follows:
FROM docker/compose
WORKDIR /opt
COPY docker-compose.yml .
WORKDIR /app
CMD docker-compose -f /opt/docker-compose.yml up web
Then I pushed the container to the registry, and from there I tried deploying to:
Cloud Run - did not work, as I could not find a way to specify a mounted volume for /var/run/docker.sock
Kubernetes - I mounted docker.sock, but I keep getting an error in the logs that /app from the git service is read-only
Compute Engine - same error as above
I don't want to make a container by copying all the local files into it and then uploading it, as the dependencies could be really big, making a heavy container to push.
I have a working docker-compose and just want to use it on GCP. What's the easiest way?
This can be done by creating a cloudbuild.yaml file in your project root directory.
Add the following step to cloudbuild.yaml:
steps:
  # running docker-compose
  - name: 'docker/compose:1.26.2'
    args: ['up', '-d']
In Google Cloud Platform > Cloud Build, configure the file type of your build configuration as "Cloud Build configuration file (yaml or json)" and enter the file location: cloudbuild.yaml
If the repository event that invokes the trigger is set to "push to a branch", then Cloud Build will run docker-compose.yml to build and start your containers.
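If you'd rather test the build without setting up a trigger, you can also submit it manually from your project root. A sketch using the standard gcloud CLI (project and region come from your gcloud config):
gcloud builds submit --config cloudbuild.yaml .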
Take a look at Kompose. It can help you convert the docker-compose instructions into Kubernetes-specific deployment and service definitions. You can then apply the Kubernetes files against your GKE cluster. Note that you will have to build the containers and store them in Container Registry first, and update the image tags in the service definitions accordingly.
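A minimal sketch of that flow, assuming kompose and kubectl are installed and your kubeconfig points at the GKE cluster (the k8s/ output directory is just a convention):
# convert the compose file into Kubernetes manifests (written to ./k8s)
kompose convert -f docker-compose.yml -o k8s/
# apply the generated deployments/services to the cluster
kubectl apply -f k8s/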
If you are trying to set up the same as an on-premise VM on GCE, you can install Docker and docker-compose on the instance and run the stack there. Ref: https://dev.to/calvinqc/the-easiest-docker-docker-compose-setup-on-compute-engine-1op1
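As a rough sketch, on a Debian/Ubuntu-based GCE instance that could look like this (the package names are the distro's, not GCP-specific):
# install the Docker engine and docker-compose from the distro repositories
sudo apt-get update
sudo apt-get install -y docker.io docker-compose
# then, from the directory containing docker-compose.yml
sudo docker-compose up -d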
I am trying to determine why the CloudFormation build of my application fails when trying to create resources for BackgroundjobsService (Create failed in CloudFormation). The only main differences from other services I have built are that it has no exposed ports and I am using Ubuntu instead of the php-apache image.
Dockerfile (I made it super simple; it basically does nothing):
# Pulling Ubuntu image
FROM ubuntu:20.04
docker-compose.yml
services:
  background_jobs:
    image: 000.dkr.ecr.us-east-1.amazonaws.com/company/job-scheduler
    restart: always
    env_file: ../.env.${ENV}
    build:
      context: "."
How I deploy (I verified the env files exist in the parent directory of job-scheduler):
cd job-scheduler
ENV=dev docker --context default compose build
docker push 000.dkr.ecr.us-east-1.amazonaws.com/company/job-scheduler:latest
ENV=dev docker --context tcetra-dev compose up
I don't know how to find any sort of error logs, but the task definition gets created and all my env vars are in there.
I'm trying to send logs from fluentd (installed using docker) to opensearch.
In the configuration file, there's #type opensearch, which uses the plugin fluent-plugin-opensearch that I installed locally as a Ruby gem.
I get the following error:
2022-04-22 15:47:10 +0000 [error]: config error file="/fluentd/etc/fluentd.conf" error_class=Fluent::NotFoundPluginError error="Unknown output plugin 'opensearch'. Run 'gem search -rd fluent-plugin' to find plugins"
As a solution, I found out that I need to add the plugin to the fluentd docker container, but I couldn't find a way to do that.
Any way to add the plugin to docker or an alternative to this solution would be appreciated.
The comments already gave a hint: you will need to build your own Docker image. Depending on the infrastructure you have available, you can either build the image, store it in some registry and then use it in your compose file, or build it on the machine on which you use Docker.
The Dockerfile
Common to both approaches is that you'll need a Dockerfile. I am using Calyptia's Docker image as a base, but you can use whatever fluentd image you like. My Dockerfile looks as follows:
FROM ghcr.io/calyptia/fluentd:v1.14.6-debian-1.0
USER root
RUN gem install fluent-plugin-opensearch
RUN fluent-gem install fluent-plugin-rewrite-tag-filter fluent-plugin-multi-format-parser
USER fluent
ENTRYPOINT ["tini", "--", "/bin/entrypoint.sh"]
CMD ["fluentd"]
As you can see it installs a few more plugins, but the first RUN line is the important one for you.
Option 1
If you have a container registry available, you can build the image and push it there, either using a CI/CD pipeline or simply locally. Then you can reference this custom image instead of whatever other fluentd image you're using today as such:
fluentd:
  image: registry.your-domain.xyz/public-projects/fluentd-opensearch:<tag|latest>
  container_name: fluentd
  ports:
    - ...
  restart: unless-stopped
  volumes:
    - ...
Adjust the config to your needs.
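The build-and-push step itself could look like this (a sketch; the registry path matches the placeholder above and is an assumption):
# build the custom image from the Dockerfile in ./fluentd and push it
docker build -t registry.your-domain.xyz/public-projects/fluentd-opensearch:latest ./fluentd
docker push registry.your-domain.xyz/public-projects/fluentd-opensearch:latest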
Option 2
You can also have docker-compose build the container locally for you. For this, create a directory fluentd in the same folder where you store your docker-compose.yml and place the Dockerfile there.
fluentd:
  build: ./fluentd
  container_name: fluentd
  ports:
    - ...
  restart: unless-stopped
  volumes:
    - ...
Instead of referencing the image from some registry, you can reference a local build directory. This should get you started.
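With this layout, docker-compose builds the image on first start; after changing the Dockerfile, force a rebuild:
# rebuild the locally defined image and restart the service
docker-compose up -d --build fluentd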
So I'm trying to deploy my app to Heroku.
Here is my docker-compose.yml
version: '3'
# Define services
services:
  # Back-end Spring Boot application
  entaurais:
    # The Dockerfile in entauraIS builds the jar and provides the Docker image with the following name.
    build: ./entauraIS
    container_name: backend
    # Environment variables for the Spring Boot application.
    ports:
      - "8080:8080" # Forward the exposed port 8080 on the container to port 8080 on the host machine
    depends_on:
      - postgresql
  postgresql:
    image: postgres:13
    environment:
      - POSTGRES_PASSWORD=root
      - POSTGRES_USER=postgres
      - POSTGRES_DB=entauracars
    ports:
      - "5433:5433"
    expose:
      - "5433"
  entaura-front:
    build: ./entaura-front
    container_name: frontend
    ports:
      - "4200:4200"
    volumes:
      - /usr/src/app/node_modules
My frontend Dockerfile:
FROM node:14.15.0
RUN mkdir /usr/src/app
WORKDIR /usr/src/app
COPY package*.json /usr/src/app/
RUN npm install
COPY . /usr/src/app
EXPOSE 4200
CMD [ "npm", "start" ]
My backend Dockerfile:
FROM maven:3.6.0-jdk-11-slim AS build
COPY src /usr/src/app/src
COPY pom.xml /usr/src/app
RUN mvn -f /usr/src/app/pom.xml clean package
FROM openjdk:11-jre-slim
COPY --from=build /usr/src/app/target/entauraIS.jar /usr/app/entauraIS.jar
ENTRYPOINT ["java","-jar","/usr/app/entauraIS.jar"]
As far as I'm aware Heroku needs its own heroku.yml file, but with the examples I've seen I have no idea how to adapt it to my situation. Any help is appreciated; I am completely lost with Heroku.
One of the examples of heroku.yml that I looked at:
build:
  docker:
    web: Dockerfile
run:
  web: npm run start
release:
  image: web
  command:
    - npm run migrate up
docker-compose.yml to heroku.yml
docker-compose has some fields similar to heroku.yml's. You could create one manually.
It would be awesome if someone created an npm module to convert a docker-compose.yml to a heroku.yml. You would just need to read the docker-compose.yml and pick some values to create the heroku.yml. Check this to learn how to read and write yml files.
docker is not required in heroku
If you are looking for a platform to deploy your apps and avoid infrastructure nightmares, heroku is an option for you.
Even more, if your applications are standard (Java & Node.js), don't need crazy configurations to build, and are self-contained (no private libraries), you don't need Docker :D
If your Node.js package.json has the standard scripts start and build, it will run on Heroku: just perform a git push to Heroku without a Dockerfile. Heroku will detect Node.js and its version, and your app will start.
If your Java app has the standard Spring Boot configuration, it's the same: just push your code to Heroku. In this case, before the push, add the Postgres add-on manually and use environment variables in your application.properties JDBC URL.
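Adding the add-on manually is a one-liner with the Heroku CLI (the app name here is a placeholder):
heroku addons:create heroku-postgresql -a your-api-app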
one process per app in heroku
If you have an API + frontend, you will need two apps on Heroku. Also, your API will need the Postgres add-on.
Heroku does not work like docker-compose, by which I mean: one host with all of your apps (front + api + db).
Docker
If you want to use Docker, just add the Dockerfile and git push. Heroku will detect that Docker is required and will perform the standard commands (docker build ..., docker run ...), so no extra configuration is required.
heroku.yml
If Docker is mandatory for your apps, and the standard docker build ... and docker run ... are not enough for them, you will need a heroku.yml.
You will need one heroku.yml per app on Heroku.
One advantage of this could be that manually adding the Postgres add-on is not required, because it can be defined in heroku.yml.
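For instance, a minimal sketch of a heroku.yml for the backend above that declares the add-on (the plan and the run command are assumptions; the jar path matches the backend Dockerfile):
setup:
  addons:
    - plan: heroku-postgresql
build:
  docker:
    web: Dockerfile
run:
  web: java -jar /usr/app/entauraIS.jar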
I am new to docker-compose. I have built a simple web application using Flask and Redis, and it works fine on my localhost. My question is how to push this web app, including the Python and Redis images, to Docker Hub and pull them from a different machine.
I usually do docker-compose build,
docker push
version: '3'
services:
  web:
    build: .
    image: "alhaffar/flask_redis_app:2.0"
    ports:
      - "8088:5000"
    depends_on:
      - redis
  redis:
    image: "redis:alpine"
the Dockerfile
FROM python:3.7
# CHANGE WORKING DIR AND COPY FILES
WORKDIR /code
COPY . /code
# INSTALL REQUIRED PACKAGES
RUN pip install --upgrade pip
RUN pip install -r requirements.txt
# RUN THE APP
CMD ["python", "./main.py"]
When I try to pull the image onto a different machine and issue docker run, it runs only the Python image, without the Redis image.
How can I run all the images?
Docker Hub and other Docker registries work with images. docker-compose is just an abstraction that helps to set up a bunch of images that can work together, using one configuration file: docker-compose.yml. There is nothing like a docker-compose registry. If you have your docker-compose file on the other machine, you just run docker-compose up and the images will be pulled, assuming they are published to some registry (public or private). The image with your app has to be published by you; Redis will be taken from the Docker Hub registry if you are using the official Redis image.
docker-compose is helpful when you are doing some local development and want to set up your working environment quickly. If you want to set up this environment on another machine, you have to share the docker-compose file with it and have Docker and docker-compose installed on that machine.
If your docker-compose is configured to build some image on start, you can still push this image using the docker-compose push command.
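Putting it together, the round trip could look like this (a sketch using the image name from your compose file):
# on the build machine
docker-compose build   # builds alhaffar/flask_redis_app:2.0
docker-compose push    # pushes the built image to Docker Hub
# on the other machine, with the same docker-compose.yml present
docker-compose up      # pulls alhaffar/flask_redis_app:2.0 and redis:alpine, then starts both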
With your Docker Compose script you do two things:
Build your Flask app -> Image 1
Pull and run Redis -> Image 2
If you push Image 1 to Docker Hub and pull it on another machine, you are missing the second image.
What you want to do is run the Docker Compose script on the second machine without the build line, as sketched below.
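That second-machine compose file would be your original minus the build line, e.g.:
version: '3'
services:
  web:
    image: "alhaffar/flask_redis_app:2.0"
    ports:
      - "8088:5000"
    depends_on:
      - redis
  redis:
    image: "redis:alpine"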
I have a Golang script that interacts with Postgres. I created a service in docker-compose.yml for both Golang and Postgres. When I run it locally with "docker-compose up" it works perfectly, but now I want to create one single image to push to my Docker Hub so it can be pulled and run with just "docker run". What is the correct way of doing it?
The image created by "docker-compose up --build" launches with no error with "docker run", but immediately stops.
docker-compose.yml:
version: '3.6'
services:
  go:
    container_name: backend
    build: ./
    volumes:
      - # some paths
    command: go run ./src/main.go
    working_dir: $GOPATH/src/workflow/project
    environment: # some env variables
    ports:
      - "80:80"
  db:
    image: postgres
    environment: # some env variables
    volumes:
      - # some paths
    ports:
      - "5432:5432"
Dockerfile:
FROM golang:latest
WORKDIR $GOPATH/src/workflow/project
CMD ["/bin/bash"]
I am a newbie with Docker, so any comments on how to do things idiomatically are appreciated.
docker-compose does not combine Docker images into one; it runs (with up) or builds and then runs (with up --build) Docker containers based on the images defined in the yml file.
More info is in the official docs:
Compose is a tool for defining and running multi-container Docker applications.
So, in your example, docker-compose will run two containers:
1 - based on the go configuration
2 - based on the db configuration
to see what containers are actually running, use the command:
docker ps -a
for more info see docker docs
It is always recommended to run each service in a separate container, but if you insist on making an image that has both Golang and Postgres, you can take a Postgres base image and install Golang on it, or the other way around: take a Golang base image and install Postgres on it.
The installation steps can be done inside the Dockerfile, please refer to:
- postgres official Dockerfile
- golang official Dockerfile
combine them to get both.
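As a rough illustration of the second direction only, here is a sketch of a Golang base image with Postgres installed via apt (the package name and startup line are assumptions and this is not a production setup):
FROM golang:latest
# install Postgres from the base image's Debian repositories
RUN apt-get update && apt-get install -y postgresql && rm -rf /var/lib/apt/lists/*
WORKDIR /app
COPY . .
# start the database in the background, then run the app (sketch only)
CMD service postgresql start && go run ./src/main.go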
Edit: (digital ocean deployment)
Well, if you copy everything (the Docker images and the yml file) to your droplet, it should bring the application up and running, similar to what happens when you do the same on your local machine.
An example can be found here: How To Deploy a Go Web Application with Docker and Nginx on Ubuntu 18.04
In production, usually for large scale/traffic applications, more advanced solutions are used such as:
- docker swarm
- kubernetes
For more info on Kubernetes on digital ocean, please refer to the official docs
hope this helps you find your way.