How to set an environment variable in Docker

High level: I have a front-end web application which runs in one Docker container, and I made a second container for a MySQL database.
I picked an environment variable, mysqldb, and I need to set that variable to the IP address of the MySQL container. Part two: the web application has to know which address the MySQL container is running on (whatever it turns out to be, since the container's IP will change), so it has to read that environment variable. So my question is: how do I set the variable so that when I run the program, the MySQL container is up and the database I set up is reachable?
Dockerfile
FROM golang:1.19-bullseye AS build
WORKDIR /app
COPY ./ ./
RUN go build -o main ./
FROM debian:bullseye
COPY --from=build /app/main /usr/local/bin/main
#CMD[apt-get install mysql-clientmy]
CMD ["/usr/local/bin/main"]
makefile
build:
	go build -o bin/main main.go
run:
	go run main.go
runcontainer:
	docker run -d -p 9008:8080 tiny
compile:
	echo "Compiling for every OS and Platform"
	GOOS=linux GOARCH=arm go build -o bin/main-linux-arm main.go
	GOOS=linux GOARCH=arm64 go build -o bin/main-linux-arm64 main.go
	GOOS=freebsd GOARCH=386 go build -o bin/main-freebsd-386 main.go
part of my go program
func main() {
	linkList = map[string]string{}
	http.HandleFunc("/link", addLink)
	http.HandleFunc("/hpe/", getLink)
	http.HandleFunc("/", Home)
	ip := flag.String("i", "0.0.0.0", "")
	port := flag.String("p", "8080", "")
	flag.Parse()
	fmt.Printf("Listening on %s \n", net.JoinHostPort(*ip, *port))
	log.Fatal(http.ListenAndServe(net.JoinHostPort(*ip, *port), nil))
}

Yes, you can achieve this by using an environment variable in the Dockerfile or in a Docker Compose file. By the way, don't use the IP of the db container; always use the hostname. The hostname is static, but the IP changes every time the container is recreated.

You can do it the following way in the Dockerfile itself (the example is from the Docker documentation):
FROM busybox
ENV FOO=/bar
WORKDIR ${FOO} # WORKDIR /bar
ADD . $FOO # ADD . /bar
COPY \$FOO /quux # COPY $FOO /quux
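Applied to your setup, a minimal sketch might bake a default into the final stage of your Dockerfile. The variable name MYSQL_HOST is just an example here, not something your program reads yet, and the value is the Compose service name used as a hostname:

FROM debian:bullseye
COPY --from=build /app/main /usr/local/bin/main
# default DB hostname; can be overridden at run time with `docker run -e MYSQL_HOST=...`
ENV MYSQL_HOST=mysqldb
CMD ["/usr/local/bin/main"]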
In case of a Docker-compose file, you can try to do the following:
version: '3'
services:
  mysqldb:
    container_name: mydb
    restart: always
    env_file:
      - db.env
You should change it according to your requirements.
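For instance, a fuller sketch with both of your services could look like the following. The service and variable names are assumptions; the important point is that the web container can reach the database at the hostname mysqldb (the service name), never at a hard-coded IP:

version: '3'
services:
  mysqldb:
    image: mysql:8.0
    container_name: mydb
    restart: always
    env_file:
      - db.env
  web:
    build: .
    ports:
      - "9008:8080"
    environment:
      - MYSQL_HOST=mysqldb   # hostname of the db service, not an IP
    depends_on:
      - mysqldb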
Addendum:
As far as I can understand what you're trying to achieve, you can solve this problem by using an env file in your Go program as well as in your docker-compose file.
Try to do the following:
- Create a .env file in your Go project structure,
- add HOST=172.0.0.1,
- read the variable in your Go program, either with a third-party package like viper or simply with os.Getenv("HOST"),
- create a docker-compose file and add both of the services you need.
You can look at the example that I provided earlier and create the services accordingly, specifying the same env file under the env_file key in docker-compose.
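A minimal sketch of the Go side, assuming a variable name like MYSQL_HOST (adapt it to whatever you put in your .env / compose file):

package main

import (
	"fmt"
	"os"
)

func main() {
	// read the DB host from the environment; fall back to a default for local runs
	dbHost := os.Getenv("MYSQL_HOST")
	if dbHost == "" {
		dbHost = "127.0.0.1"
	}
	// example DSN; credentials and database name are placeholders
	dsn := fmt.Sprintf("user:password@tcp(%s:3306)/mydb", dbHost)
	fmt.Println("would connect with DSN:", dsn)
}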

Related

How to copy a subproject to the container in a multi container Docker app with Docker Compose?

I want to build a multi container docker app with docker compose. My project structure looks like this:
docker-compose.yml
...
webapp/
    ...
    Dockerfile
api/
    ...
    Dockerfile
Currently, I am just trying to build and run the webapp via docker compose up with the correct build context. When building the webapp container directly via docker build, everything runs smoothly.
However, with my current specifications in the docker-compose.yml the line COPY . /webapp/ in webapp/Dockerfile (see below) copies the whole parent project to the container, i.e. the directory which contains the docker-compose.yml, and not just the webapp/ sub directory.
For some reason the line COPY requirements.txt /webapp/ works as expected.
What is the correct way of specifying the build context in docker compose? Why is the . in the Dockerfile interpreted as relative to the docker-compose.yml, while requirements.txt is relative to the Dockerfile as expected? What am I missing?
Here are the contents of the docker-compose.yml:
version: "3.8"
services:
  frontend:
    container_name: "pc-frontend"
    volumes:
      - .:/webapp
    env_file:
      - ./webapp/.env
    build:
      context: ./webapp
    ports:
      - 5000:5000
and webapp/Dockerfile:
FROM python:3.9-slim
# set environment variables
ENV PYTHONDONTWRITEBYTECODE 1
ENV PYTHONUNBUFFERED 1
# set working directory
WORKDIR /webapp
# copy dependencies
COPY requirements.txt /webapp/
# install dependencies
RUN pip install -r requirements.txt
# copy project (this line does not work as intended)
COPY . /webapp/
# add entrypoint to app
# ENTRYPOINT ["start-gunicorn.sh"]
# for debugging
CMD [ "ls", "-la" ]
# expose port
EXPOSE 5000
The COPY directive is (probably) working the way you expect, but you have a volumes: block that is overwriting the image content with something else. Delete the volumes: block.
The image build sequence is working exactly the way you expect. build: { context: ./webapp } uses the webapp subdirectory as the build context and sends it to the Docker daemon. When the Dockerfile says, for example, COPY requirements.txt ., the file comes out of this directory. If you, for example, docker-compose run frontend pip freeze, you should see the installed Python packages.
After the image is built, Compose starts a container, and at that point volumes: take effect. When you say volumes: ['.:/webapp'], here the . before the colon refers to the directory containing the docker-compose.yml file (and not the webapp subdirectory), and then it hides everything in the /webapp directory in the container. So you're replacing the image's /webapp (which had been built from the webapp subdirectory) with the current directory on the host (one directory higher).
You should usually be able to successfully combine an ordinary host-based development environment and a Docker deployment setup. Use a non-Docker Python virtual environment to build the application and run its unit tests, then use docker-compose up --build to run integration tests and the complete application. With a setup like this, you don't need to deal with the inconveniences of the Python runtime being "somewhere else" as you're developing, and you can safely remove the volumes: block.
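In other words, a sketch of the same service with the volumes: block removed (everything else unchanged) would be:

version: "3.8"
services:
  frontend:
    container_name: "pc-frontend"
    env_file:
      - ./webapp/.env
    build:
      context: ./webapp
    ports:
      - 5000:5000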

Deploy and customize a VueJs app using Docker and GitlabCI

I made a VueJS app, basically a website administration app that allows you to display / edit data using different APIs.
This app is customizable using env variables (VUE_APP_XXX): the URLs of the APIs, the title, the color theme, etc. About 30 variables, and I will certainly add others in the future.
For now I deploy my app using GitLab CI; I have this Dockerfile (I removed most of the env variables for clarity):
# build
FROM node:lts-alpine as build-stage
WORKDIR /app
ENV PATH /app/node_modules/.bin:$PATH
COPY package*.json ./
RUN npm install
COPY . .
ARG VUE_APP_API_URL
ARG VUE_APP_DATA_URL
ENV VUE_APP_API_URL $VUE_APP_API_URL
ENV VUE_APP_DATA_URL $VUE_APP_DATA_URL
RUN npm run build
# production
FROM nginx:stable-alpine as production-stage
COPY --from=build-stage /app/dist /usr/share/nginx/html
RUN rm /etc/nginx/conf.d/default.conf
COPY nginx/nginx.conf /etc/nginx/conf.d
EXPOSE 80
CMD ["nginx", "-g", "daemon off;"]
and the gitlab-ci.yml:
docker-build:
  # Official docker image.
  image: docker:latest
  stage: build
  services:
    - docker:dind
  before_script:
    - docker login -u "$CI_REGISTRY_USER" -p "$CI_REGISTRY_PASSWORD" $CI_REGISTRY
  script:
    - |
      docker build --pull \
        --build-arg VUE_APP_API_URL="$CI_TEST_API_URL" \
        --build-arg VUE_APP_DATA_URL="$CI_TEST_DATA_URL" \
        -t "$CI_REGISTRY_IMAGE:$CI_COMMIT_REF_SLUG" .
    - docker push "$CI_REGISTRY_IMAGE:$CI_COMMIT_REF_SLUG"
  only:
    - dev
  when: manual
Then on my server I just do a docker run... to launch my app. Everything works fine, except that:
- I have to specify all the variables manually in the Dockerfile and gitlab-ci.yml,
- the resulting Docker image could contain "sensitive" data such as logins and passwords,
- I have to create one image per instance of my app.
For legal reasons I need to create one repository per app, and one repository per website (because each of these could have a different owner).
So my question is: what would be the best approach to deploy many instances of this app, for many websites? Knowing that each website needs its own admin app, which can be hosted on its own server.
I'm quite a newbie to Docker; I was thinking of:
- creating a Docker image for a generic app, with default env variables (maybe just with the code, without the npm run build step?),
- using this Docker image to create a container for each instance, with its own env variables (is it possible to build the app with docker compose?).
I'm quite confused about where/when to build the VueJS app, and what tools to use for that.
Any help appreciated, thanks!
Update:
Following taleodor's suggestion, I found this medium post that seems to do the job. I found it a little 'tricky' but it works :)
When you have a Docker image and then deploy it, every tool has a way to override environment variables baked into the image itself.
So, for your example, with plain docker you could do
docker run -e VUE_APP_API_URL=http://my.url ...
And this would essentially override the VUE_APP_API_URL variable previously set on the container. Again, any orchestration platform (Docker Compose, Swarm, Kubernetes, etc.) has the capability to override environment variables, as this is a very common scenario.
So the usual approach I use is to pre-fill environment variables inside the image with the most generic values and then override them on actual deployments; that way you don't need many images (many images for the same thing would be an anti-pattern).
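With Compose, for example, that kind of per-deployment override might look like the following sketch. The service name, image reference, and URLs are placeholders; whether the running app actually picks them up at run time is what the update below addresses for compiled static files:

services:
  admin-app:
    image: registry.example.com/my-vue-admin:latest
    environment:
      - VUE_APP_API_URL=https://api.customer-a.example.com
      - VUE_APP_DATA_URL=https://data.customer-a.example.com
    ports:
      - "80:80"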
If you have more questions - feel free to join our community discord and you can ask me there - https://discord.gg/UTxjBf9juQ
Update (following the comment below): For static images with compiled UI code, the approach is to run an entrypoint script that does the substitution over the compiled files. A workable sample pattern is the following (you might need tweaks to get it exactly right, but this gives the idea):
Assuming you're using nginx as the base image, create an entrypoint.sh file that looks something like this (see this SO question for some references: Can envsubst not do in-place substitution?):
#!/bin/sh
find /usr/share/nginx/html/*.js -type f -exec sh -c "cat {} | envsubst | tee {}.tmp && mv {}.tmp {}" \;
nginx -g 'daemon off;'
Copy this entrypoint.sh file into your Docker image and use it with the ENTRYPOINT instruction.
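For example, the production stage of the Dockerfile above might then end with something like this (a sketch; the exact path inside the image is an assumption):

COPY entrypoint.sh /entrypoint.sh
RUN chmod +x /entrypoint.sh
ENTRYPOINT ["/entrypoint.sh"]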

How does the command key in docker-compose file work

I am trying to understand the Docker sample application 'example-voting-app'. I am trying to build the app with docker-compose. I am confused about the behaviour of the 'command' key in the docker compose file versus the CMD instruction in the Dockerfile. The application consists of a service called 'vote'. The configuration for the vote service in the docker-compose.yml file is:
services:          # we list all our application services under this 'services' section
  vote:
    build: ./vote  # specifies docker to build the image from the ./vote directory
    command: python app.py
    volumes:
      - ./vote:/app
    ports:
      - "5000:80"
    networks:
      - front-tier
      - back-tier
The configuration of the Dockerfile provided in ./vote directory is as below:
# Using official python runtime base image
FROM python:2.7-alpine
# Set the application directory
WORKDIR /app
# Install our requirements.txt
ADD requirements.txt /app/requirements.txt
RUN pip install -r requirements.txt
# Copy our code from the current folder to /app inside the container
ADD . /app
# Make port 80 available for links and/or publish
EXPOSE 80
# Define our command to be run when launching the container
CMD ["gunicorn", "app:app", "-b", "0.0.0.0:80", "--log-file", "-", "--access-logfile", "-", "--workers", "4", "--keep-alive", "0"]
My doubt here is which command ('python app.py' or 'gunicorn app:app -b ...') will be executed when I bring the application up using docker-compose up.
The Docker Compose command:, or everything in a docker run invocation after the image name, overrides the Dockerfile CMD.
If the image also has an ENTRYPOINT, the command you provide here is passed as arguments to the entrypoint in the same way the Dockerfile CMD does.
For a typical Compose setup you shouldn't need to specify a command:. In a Python/Flask context, the most obvious place it's useful is if you're also using a queueing system like Celery with the same shared code base: you can use command: to run a Celery worker off of the image you build, instead of a Flask application.
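As an illustration only, a hypothetical sketch (the worker service is not part of the voting app) showing the same image run with two different commands:

services:
  vote:
    build: ./vote
    command: python app.py            # overrides the gunicorn CMD from the Dockerfile
  worker:
    build: ./vote
    command: celery -A tasks worker   # hypothetical: same image, different command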

How to make a docker compose file for an existing image?

What I am trying to do is use a Docker image I found online timwiconsulting/ionic-v1.3, and run my ionic project within Docker. I want to mount my ionic project in Docker and forward my ports so I can run the emulator in a browser.
I want to ask how do I create a docker-compose.yml file for an existing container?
I found a Docker image timwiconsulting/ionic-v1.3 that I want to run, which has the correct version of the tools that I want.
Now I want to create a compose file to forward the ports to my computer, and mount the project files. I create this docker-compose.yml file:
version: '3'
services:
  web:
    build: .
    ports:
      - "8100:8100"
      - "35729:35729"
    volumes:
      - /Users/leetcat/project/:/project
But every time I try to do docker-compose up I get the error:
~/user: docker-compose up
Building web
Step 1/6 : FROM timwiconsulting:ionic-v1.3
ERROR: Service 'web' failed to build: pull access denied for timwiconsulting, repository does not exist or may require 'docker login
I am doing something wrong. I think I want to be creating a docker-compose.yml file for the container timwiconsulting/ionic-v1.3. Feel free to tell me I am totally off the mark with what docker is.
Here is my Dockerfile:
# Use an official Python runtime as a parent image
FROM timwiconsulting:ionic-v1.3
# Set the working directory to /app
WORKDIR /project
# Copy the current directory contents into the container at /app
ADD . /project
# Install any needed packages specified in requirements.txt
# RUN pip install --trusted-host pypi.python.org -r requirements.txt
# Make port 80 available to the world outside this container
EXPOSE 8100
EXPOSE 35729
# Define environment variable
ENV NAME World
# Run app.py when the container launches
# CMD ["python", "app.py"]
# docker exec -it <container_hash> /bin/bash/
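Note that the error message comes from the FROM timwiconsulting:ionic-v1.3 line (a colon where the image name actually uses a slash, timwiconsulting/ionic-v1.3). An existing image can also be referenced from docker-compose.yml directly with image: instead of build:, roughly like this sketch (working_dir is an assumption):

version: '3'
services:
  web:
    image: timwiconsulting/ionic-v1.3   # use the published image as-is, no local build
    ports:
      - "8100:8100"
      - "35729:35729"
    volumes:
      - /Users/leetcat/project/:/project
    working_dir: /project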

Use Docker to run a build process

I'm using docker and docker-compose to set up a build pipeline. I've got a front-end that's written in JavaScript and needs to be built before being used. The backend is written in Go.
To make this component integrate with the rest of our docker-compose setup, I want to do the building in a docker image as well.
This is the flow I'm going for:
during build, do:
- build the frontend stuff and put it in /output (which is bound to the output volume)
- build the backend server
when running, do:
- run the server; it has access to the build files in /output
I'm quite new to docker and docker-compose so I'm not sure if this is possible, or even the right thing to do.
For reference, here's my docker-compose.yml:
version: '2'
volumes:
  output:
    driver: local
services:
  frontend:
    build: .
    volumes:
      - output:/output
  backend:
    build: ./backend
    depends_on:
      - frontend
    volumes:
      - output:/output
and Dockerfile:
FROM node
# create working dir
RUN mkdir -p /usr/src/app
WORKDIR /usr/src/app
ADD package.json /usr/src/app/package.json
# install packages
RUN npm install
COPY . /usr/src/app
# build frontend files and place results in /output
RUN npm build
RUN cp /usr/src/app/build/* /output
And backend/Dockerfile:
FROM go
# copy and build server
COPY . /usr/src/backend
WORKDIR /usr/src/backend
RUN go build
# run the server
ENTRYPOINT ["/usr/src/backend/main"]
Something is wrong here, but I do not know what. It seems as though the output of the build step is not persisted in the output volume. What can I do to fix this?
You cannot attach a volume during docker build.
The reason for this is that the goal of the docker build command is to build an image, and nothing else; it doesn't need volumes, since the Dockerfile has ADD / COPY for getting files into the image.
To produce your output, you should create a script which essentially does the npm install ; npm build ; cp /usr/src/app/build/* /output from your current Dockerfile, and use this script as the entrypoint / cmd in your Dockerfile.
I'm not sure Compose can run this on its own, but in any case I find it clearer wrapped in a shell script that first executes the frontend builder container, then executes the backend container with the output directory as a volume.
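A minimal sketch of that idea (the script name and the CMD wiring are assumptions, not from the original post):

#!/bin/sh
# build-frontend.sh: runs when the container starts, i.e. when the output volume is mounted
set -e
npm install
npm run build
cp -r /usr/src/app/build/. /output/

In the frontend Dockerfile you would then COPY build-frontend.sh into the image and end with CMD ["sh", "/usr/src/app/build-frontend.sh"], so the copy into /output happens at container run time, when the volume actually exists.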
