Dockerfile - possible command line argument? - docker

I have a value in a Dockerfile called ${APP_NAME}. What is it? If this were bash scripting, I would assume it to be some sort of variable but it hasn't been assigned a value anywhere. Is it a command line argument? If so, how would I pass it in when I wanted to call docker-compose with it?
For reference, the Docker file looks like this:
version: '2'
services:
  nginx:
    container_name: ${APP_NAME}_nginx
    hostname: nginx
    build:
      context: ./containers/nginx
      dockerfile: Dockerfile
    ports:
      - "80:80"
      - "443:443"
    volumes:
      - .:/app
    links:
      - phpfpm
    networks:
      - backend
  phpfpm:
    container_name: ${APP_NAME}_phpfpm
    hostname: phpfpm
    expose:
      - "9000"
    build:
      context: ./containers/php-fpm
      dockerfile: Dockerfile
    working_dir: /app
    volumes:
      - .:/app
    links:
      - mysql
    networks:
      - backend
  mysql:
    container_name: ${APP_NAME}_mysql
    hostname: mysql
    build:
      context: ./containers/mysql
      dockerfile: Dockerfile
    volumes:
      - ./storage/mysql:/var/lib/mysql
      - ${MYSQL_ENTRYPOINT_INITDB}:/docker-entrypoint-initdb.d
    environment:
      - MYSQL_DATABASE=${DB_DATABASE}
      - MYSQL_ROOT_PASSWORD=${DB_PASSWORD}
    ports:
      - "33061:3306"
    expose:
      - "3306"
    networks:
      - backend
networks:
  backend:
    driver: "bridge"
And actually, I'm probably going to have a lot of questions about Docker because I've never really used it before, so a reference to the Dockerfile syntax would be helpful.

This means there is probably a .env file somewhere in your project which contains the variables docker-compose needs. You can find more about it in the official Docker Compose docs: you can set default values for environment variables using a .env file, which Compose automatically looks for, and values set in the shell environment override those set in the .env file. See https://docs.docker.com/compose/compose-file/#variable-substitution
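Incidentally, the file shown is a docker-compose.yml rather than a Dockerfile; the ${APP_NAME} syntax is Compose variable substitution. As a minimal sketch, a .env file sitting next to the compose file might look like this (the values below are made up for illustration):
APP_NAME=myproject
DB_DATABASE=mydb
DB_PASSWORD=secret
MYSQL_ENTRYPOINT_INITDB=./containers/mysql/initdb
Compose fills these in wherever ${...} appears when it parses the file, and a value exported in your shell takes precedence, e.g. APP_NAME=otherproject docker-compose up.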

Related

How to extend Docker images via docker-compose

I'm trying to extend my base images from Webdevops.
I want to add the base-app to my container that already exists.
This is my docker-compose:
version: "3"
services:
base-app:
image: "webdevops/base-app"
restart: always
apache:
image: "webdevops/php-apache-dev:7.2"
container_name: apache
restart: always
ports:
- '80:80'
- '443:443'
depends_on:
- mysql
- base-app
volumes:
- "./:/app"
environment:
- XDEBUG_MODE=develop,debug
- XDEBUG_CLIENT_HOST=host.docker.internal
- XDEBUG_CLIENT_PORT=9003 # 9000=xdebug v2, 9003=v3
- XDEBUG_REMOTE_CONNECT_BACK=0
- XDEBUG_REMOTE_AUTOSTART=1
- XDEBUG_IDE_KEY=docker
- XDEBUG_START_WITH_REQUEST=trigger
extra_hosts:
- "host.docker.internal:host-gateway"
mysql:
image: "mysql:latest"
restart: always
container_name: mysql
ports:
- '3306:3306'
volumes:
- './mysql:/var/lib/mysql'
depends_on:
- base-app
environment:
MYSQL_ROOT_PASSWORD: 'xxxxx'
How can I extend my images with base-app?
You can extend any image using a Dockerfile. For example, you might write a Dockerfile like:
FROM webdevops/php-apache-dev:7.2
COPY . /app
You can add other steps as required, like RUNning an installation command or setting an alternate CMD that you need this image to run; see the sketch below. You should generally avoid putting host names or credentials of any sort into the Docker image; leave those as environment: settings in the Compose file.
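For instance, a slightly fuller sketch might look like this; the composer install step is purely an assumption about your project (it presumes a composer.json exists and that composer is available in the base image), not something taken from your setup:
FROM webdevops/php-apache-dev:7.2
# copy the application code into the image (replaces the ./:/app bind mount)
COPY . /app
# hypothetical build step: install PHP dependencies at build time
RUN composer install --working-dir=/app --no-dev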
In your docker-compose.yml file, use a build: block to indicate this image should be built. Do not include an image: line unless you're specifically planning to push this extended image to a registry, and if you do, it must have a different name from the base image.
services:
  apache:
    build: .   # _instead of_ image:
    restart: always
    depends_on: [...]
    ports: [...]
    environment: [...]
    # no volumes: since the code is already in the image
    # container_name: is usually unnecessary

Docker Postgres database not running or accessible

Below is my Dockerfile:
FROM node:14
WORKDIR /workspace
COPY . .
COPY /prisma ./prisma/
RUN npm install
EXPOSE 3333
EXPOSE 9229
CMD [ "npm", "run", "start" ]
And my docker-compose.yml
version: '3.8'
services:
  todoapp-api:
    container_name: todoapp-api
    build:
      context: .
      dockerfile: Dockerfile
    ports:
      - 3333:3333
  postgres:
    image: postgres:13.5
    container_name: postgres
    restart: always
    environment:
      - POSTGRES_USER=myuser
      - POSTGRES_PASSWORD=mypassword
    volumes:
      - postgres:/var/lib/postgresql/data
    ports:
      - '5432:5432'
volumes:
  postgres:
networks:
  nestjs-crud:
And my .env:
DATABASE_URL="postgresql://myuser:mypassword#192.168.1.1/mydb?schema=public"
After struggling with making the database run and be accessible, I found out that one possible solution was to change the DATABASE_URL. As you can see, I am writing my IP Address there to get it to run and this works for me. However, when I replace 192.168.1.1 with the name of the service: postgres, it stops working and I get the error:
Can't reach database server at postgres:5432
Writing the IP address is not ideal of course. However, if I don't write the IP address then the database server just doesn't work.
I think you need to attach the networks in the container specs. You already defined which networks exist in the YAML, but they also need to be listed in each container's spec, like:
  todoapp-api:
    container_name: todoapp-api
    networks:
      - nestjs-crud
    build:
      context: .
      dockerfile: Dockerfile
    ports:
      - 3333:3333
networks:
  nestjs-crud:
    internal: true
My recommendation is to create one network for the db and another for the API, then assign the db network to the db and both networks to the API; that way the API can access the db network. Then you can access the db at the host nestjs-crud.postgres.
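A minimal sketch of that layout, using hypothetical network names db-net and api-net (these names are not from your file):
services:
  todoapp-api:
    networks:
      - api-net
      - db-net
  postgres:
    networks:
      - db-net
networks:
  db-net:
  api-net: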
To come back to the point made in the comment above: the two services are not on the same network, which is why you have this problem. To solve it, put the services on the same network by adding:
networks:
  - nestjs-crud
to both the todoapp-api and postgres services, plus depends_on in todoapp-api. The file then becomes:
version: '3.8'
services:
  todoapp-api:
    container_name: todoapp-api
    build:
      context: .
      dockerfile: Dockerfile
    ports:
      - 3333:3333
    networks:
      - nestjs-crud
    depends_on:
      - postgres
  postgres:
    image: postgres:13.5
    container_name: postgres
    restart: always
    environment:
      - POSTGRES_USER=myuser
      - POSTGRES_PASSWORD=mypassword
    volumes:
      - postgres:/var/lib/postgresql/data
    ports:
      - '5432:5432'
    networks:
      - nestjs-crud
volumes:
  postgres:
networks:
  nestjs-crud:
And in .env, use the database service name as the host.
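For example, the DATABASE_URL would then point at the postgres service name instead of an IP address; a sketch, reusing the credentials and database name from your files:
DATABASE_URL="postgresql://myuser:mypassword@postgres:5432/mydb?schema=public"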

AWS code pipeline not getting environment variables

I'm deploying my dockerized Django app using AWS CodePipeline. I was storing my env variables inside an env file for local development, but for the pipeline I set them all as environment variables; the variables are still coming back as None.
docker-compose.yml
version: "3.8"
services:
db:
container_name: db
image: "postgres"
restart: always
volumes:
- postgres-data:/var/lib/postgresql/data/
app:
container_name: app
build:
context: .
restart: always
volumes:
- static-data:/vol/web
depends_on:
- db
proxy:
container_name: proxy
build:
context: ./proxy
restart: always
depends_on:
- app
ports:
- 80:8000
volumes:
- static-data:/vol/static
volumes:
postgres-data:
static-data:
I'm getting the env variable in Django like:
os.environ.get('FRONTEND_URL')
In my case, I put all the env variables into my AWS CodeBuild environment variables and passed them through to my Docker environment as below.
version: "3.8"
services:
db:
container_name: db
image: "postgres"
restart: always
volumes:
- postgres-data:/var/lib/postgresql/data/
environment:
- VARIABLE_NAME: ${VARIABLE_NAME}
You can specify the env file in the docker-compose file itself, with a relative path.
....
....
  app:
    container_name: app
    build:
      context: .
    restart: always
    env_file:
      - <web-variables1.env>
    volumes:
      - static-data:/vol/web
    depends_on:
      - db
....
....
Refer to the documentation for more details.
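For completeness, such an env file is just plain KEY=value lines; a hypothetical web-variables1.env could look like this (only FRONTEND_URL comes from your question, the rest are examples):
FRONTEND_URL=https://example.com
SECRET_KEY=changeme
Note that env_file only injects variables into the running container; they are not available while the image is being built unless you also pass them in as build args.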

docker-compose: force services to create separate containers even when images are the same

I have 2 services which use the same image:. What can I do to force docker-compose to generate 2 separate containers?
Thanks!
EDIT:
Full docker-compose:
version: "3.5"
services:
database:
container_name: proj-database
env_file: ../orm/.env.${PROJ_ENV}
image: postgres
restart: always
ports:
- 5432:5432
networks:
- proj
api:
image: golang:1.17
container_name: proj-api
env_file: ../cryptoModuleAPI/.env.${PROJ_ENV}
restart: always
build: ../cryptoModuleAPI/
links:
- database:database
ports:
- 8080:8080
volumes:
- ../cryptoModuleAPI:/proj/api
- ../orm:/proj/orm
networks:
- proj
admin:
image: golang:1.17
container_name: proj-admin
env_file: ../admin/.env.${PROJ_ENV}
restart: always
build: ../admin/
links:
- database:database
ports:
- 8081:8081
volumes:
- ../admin:/proj/admin
- ../orm:/proj/orm
networks:
- proj
networks:
proj:
external:
name: proj
I just run it with docker-compose up.
You misunderstand how the build and image directives work when used together.
Paraphrasing the docs (https://docs.docker.com/compose/compose-file/compose-file-v3/#build): if you specify image as well as build, then Compose names the built image with the value of the image directive.
Compose is going to build two images, both named the same thing. Only one will survive. I'm surprised your app spins up at all!
Provide a different name for the image directive of each service, or leave it out entirely.
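For example, one way to fix your file is to give each built service its own image name, or drop image: where you don't need it; a sketch (the image names proj-api and proj-admin are just examples):
  api:
    build: ../cryptoModuleAPI/
    image: proj-api    # example name; or omit image: entirely
    # ...rest of the service unchanged
  admin:
    build: ../admin/
    image: proj-admin    # example name; or omit image: entirely
    # ...rest of the service unchanged
With distinct names (or no image: at all), Compose builds two independent images and runs two separate containers.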

How to share environment variables in docker

I'm new to Docker and I'm having lots of trouble getting it to start. I'm making an ASP.NET Core 1.0.1 application using the Docker container tools for Visual Studio 2017. I have the following env.file in the same root as the compose file, with these values:
REDIS_PORT=6379
and this docker compose yml:
version: '2'
services:
  haproxy:
    image: eeacms/haproxy
    links:
      - webapplication3
    ports:
      - "80:80"
  webapplication3:
    image: webapplication3
    enviroment:
      - REDIS_PORT=${REDIS_PORT}
    build:
      context: .
      dockerfile: Dockerfile
    links:
      - redis
    ports:
      - "80"
  redis:
    image: redis
    ports:
      - ${REDIS_PORT}
I want to know the Redis port I have to connect to from my ASP.NET Core app. As far as I know, the only way to do it is using env variables, and since I don't want to copy-paste the port everywhere, I'd like to use the .env file style. Anyway, this is not working; it says:
Unsupported config option for services.webapplication3:'enviroment'
Any ideas what the problem could be?
You missed the letter "n" in the word "environment".
You need to pass the env_file option:
version: '2'
services:
  haproxy:
    image: eeacms/haproxy
    links:
      - webapplication3
    ports:
      - "80:80"
  webapplication3:
    image: webapplication3
    env_file:
      - env.file
    environment:
      - REDIS_PORT=${REDIS_PORT}
    build:
      context: .
      dockerfile: Dockerfile
    links:
      - redis
    ports:
      - "80"
  redis:
    image: redis
    env_file:
      - env.file
    ports:
      - ${REDIS_PORT}
Take a look at https://docs.docker.com/compose/environment-variables/ for more info.
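A quick way to verify what Compose actually ends up with is to render the effective configuration from the directory containing the compose file:
docker-compose config
If the substitution worked, you'll see the literal 6379 in the output instead of ${REDIS_PORT}. One caveat: ${...} substitution in the compose file itself is filled from your shell or from a file named exactly .env, while env_file: only sets variables inside the containers, so you may want to name the file .env (or export REDIS_PORT in your shell) rather than env.file.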
