Setting up docker compose for development

I am trying to create a docker-compose.yml which will allow me to start up a few services, where some of those services will have their own Dockerfile. For example:
- my-project
  - docker-compose.yml
  - web
    - Dockerfile
    - src/
  - worker
    - Dockerfile
    - src/
I'd like a developer to be able to check out the project and just run docker-compose up --build to get going.
Also, I'm trying to mount the source for a service inside the Docker container, so that a developer can edit the files on the host machine and have those changes reflected inside the container immediately (say, if it is a Rails app, it will get recompiled on file change).
I have tried to get just the web service going, but I just cannot mount the web directory inside the container: https://github.com/zoran119/haskell-webservice
And here is docker-compose.yml:
version: "2"
services:
  web:
    build: web
    image: web
    volumes:
      - ./web:/app
Can anyone spot a problem here?

The problem is that the host ./web folder shadows the container's internal /app folder: anything inside /app is hidden by your host folder. So you can follow an approach like the one below.
Additional bash scripts for setup:
./scripts/deploy_app.sh

#!/bin/bash
set -ex

# By default, check out the master branch if none is specified
BRANCH=${BRANCH:-master}

cd /usr/src/app
git clone https://github.com/tarunlalwani/docker-nodejs-sample-app .
git checkout $BRANCH

# Install app dependencies
npm install
./scripts/run_app.sh

#!/bin/bash
set -ex

cd /usr/src/app
exec npm start
Dockerfile
FROM node:boron
# Create app directory
RUN mkdir -p /usr/src/app
WORKDIR /usr/src/app
COPY ./scripts /scripts
EXPOSE 8080
CMD ["bash", "-c", "/scripts/deploy_app.sh && /scripts/run_app.sh"]
Now, in your docker-compose.yml, you can use the below:

version: '3'
services:
  app:
    build:
      context: .
    volumes:
      - ./app:/usr/src/app
Now when you run docker-compose up, it will run /scripts/deploy_app.sh and deploy the app to /usr/src/app inside the container. The host folder ./app will have the source for developers to edit.
You can enhance the script so it doesn't download the source code if the folder already has data. The branch of the code can be controlled using the BRANCH environment variable. If you want, you can even run the script in the Dockerfile as well, to build images that contain the source by default.
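For example, a minimal sketch of such a guard in deploy_app.sh (same repository and paths as above; the emptiness check is one common convention):

#!/bin/bash
set -ex

cd /usr/src/app

# Clone only when the mounted folder is empty; otherwise keep the existing checkout
if [ -z "$(ls -A /usr/src/app)" ]; then
  git clone https://github.com/tarunlalwani/docker-nodejs-sample-app .
  git checkout "${BRANCH:-master}"
fi

# Install app dependencies either way
npm install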
See a detailed article I wrote about static and dynamic code deployment:
http://tarunlalwani.com/post/deploying-code-inside-docker-images-statically-dynamically/

So it seems your issue turns out to be something different.
When I try docker-compose up I get the error below:
Attaching to hask_web_1
web_1 | Version 1.3.2, Git revision 3f675146590da4f3edf768b89355f798229da2a5 (4395 commits) x86_64 hpack-0.15.0
web_1 | 2017-09-10 09:58:04.741696: [debug] Checking for project config at: /app/stack.yaml
web_1 | #(Stack/Config.hs:863:9)
web_1 | 2017-09-10 09:58:04.741873: [debug] Loading project config file stack.yaml
web_1 | #(Stack/Config.hs:881:13)
web_1 | <stdin>: hGetLine: end of file
hask_web_1 exited with code 1
But when I use docker-compose run web I get the following output:
$ docker-compose run web
Version 1.3.2, Git revision 3f675146590da4f3edf768b89355f798229da2a5 (4395 commits) x86_64 hpack-0.15.0
2017-09-10 09:58:37.859351: [debug] Checking for project config at: /app/stack.yaml
#(Stack/Config.hs:863:9)
2017-09-10 09:58:37.859580: [debug] Loading project config file stack.yaml
#(Stack/Config.hs:881:13)
2017-09-10 09:58:37.862281: [debug] Trying to decode /root/.stack/build-plan-cache/x86_64-linux/lts-9.3.cache
So that made me realize your issue: docker-compose up and docker-compose run have one main difference, which is the TTY. run allocates a TTY while up doesn't, which is why stack's stdin hits end of file only under up. So you need to change the compose file to:
version: "2"
services:
  web:
    build: web
    image: web
    volumes:
      - ./web:/app
    tty: true

Related

Docker ENV vars updated in container but not in application

I've updated an environment variable in my Dockerfile and restarted with docker compose up -d.
A shell file run on container start with the line echo $MY_VAR echoes the appropriate value; however, when I open the browser console within my application and type env, it only prints out my previous env.
I've tried clearing my cache, force rebuilding the image via the -d flag on docker compose up, deleting the old images, literally anything and everything, yet somehow the old env remains.
My Dockerfile:
FROM node:17.4.0-alpine3.14
WORKDIR /code
CMD ["bin/run"]
ENV \
  MY_VAR='abcdef' \
  VERSION='development'
COPY package*.json ./
RUN npm install
COPY src src
COPY cogs.js ./
COPY bin bin
RUN bin/build
My Docker Compose:

version: "3.9"
services:
  balancer:
    image: nginx:1.19.7-alpine
    ports:
      - 80:80
    volumes:
      - ./src/nginx.conf:/etc/nginx/nginx.conf
    networks:
      default:
        aliases:
          - www.dev.mydomain.com
  app: &app
    build:
      context: "../app"
    volumes:
      - ../app/bin:/code/bin
      - ../app/package-lock.json:/code/package-lock.json
      - ../app/package.json:/code/package.json
      - ../app/src:/code/src
      - app-dist:/code/dist
    environment:
      MY_VAR: abcdef
      VERSION: 'development'
  app-watch:
    <<: *app
    command: ["bin/watch"]
volumes:
  app-dist:
Where I use it in my app, config.js:

const { env } = globalThis;

export default {
  myVar: env.MY_VAR,
  version: env.VERSION
};
[Screenshot of the updated Docker vars; STRIPE_PUBLIC_KEY in the screenshot corresponds to MY_VAR here]
I'm honestly completely confused as to how the variables can be updated when I echo $MY_VAR in my bin/run script, yet logging the env in the browser returns an outdated version of it.
I think you should not put the variable in both the Dockerfile and docker-compose.yml (unless you explicitly need it that way to build the app), but either in docker-compose.yml or in a .env file.
Start with docker compose build if the images depend on the env vars during the build stage.
Docker detects the changes when running docker compose up, but if you want to force recreation, use the --force-recreate flag (-d is used to detach the containers from the session, not to rebuild).
docker compose restart is not suitable at that point, because:
"If you make changes to your docker-compose.yml configuration, these changes are not reflected after running the docker compose restart command."
Also make sure to do a hard refresh on the page where you are checking the results, using Ctrl+Shift+R (in most browsers).
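For example, a minimal sketch of defining the variables in one place only, via a .env file next to docker-compose.yml (standard Compose variable substitution; the service and variable names follow the question):

# .env
MY_VAR=abcdef
VERSION=development

# docker-compose.yml (only the environment section shown)
services:
  app:
    environment:
      MY_VAR: ${MY_VAR}
      VERSION: ${VERSION}

Then rebuild and recreate so both the image and the running containers pick up the new values:

docker compose build
docker compose up -d --force-recreate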

Is it even possible to convert my docker-compose.yml to heroku.yml?

So I'm trying to deploy my app to Heroku.
Here is my docker-compose.yml:

version: '3'
# Define services
services:
  # Back-end Spring Boot application
  entaurais:
    # The Dockerfile in scrum-app builds the jar and provides the Docker image with the following name.
    build: ./entauraIS
    container_name: backend
    # Environment variables for the Spring Boot application.
    ports:
      - 8080:8080 # Forward the exposed port 8080 on the container to port 8080 on the host machine
    depends_on:
      - postgresql
  postgresql:
    image: postgres:13
    environment:
      - POSTGRES_PASSWORD=root
      - POSTGRES_USER=postgres
      - POSTGRES_DB=entauracars
    ports:
      - "5433:5433"
    expose:
      - "5433"
  entaura-front:
    build: ./entaura-front
    container_name: frontend
    ports:
      - "4200:4200"
    volumes:
      - /usr/src/app/node_modules
My frontend Dockerfile:
FROM node:14.15.0
RUN mkdir /usr/src/app
WORKDIR /usr/src/app
COPY package*.json /usr/src/app/
RUN npm install
COPY . /usr/src/app
EXPOSE 4200
CMD [ "npm", "start" ]
My backend Dockerfile:
FROM maven:3.6.0-jdk-11-slim AS build
COPY src /usr/src/app/src
COPY pom.xml /usr/src/app
RUN mvn -f /usr/src/app/pom.xml clean package
FROM openjdk:11-jre-slim
COPY --from=build /usr/src/app/target/entauraIS.jar /usr/app/entauraIS.jar
ENTRYPOINT ["java","-jar","/usr/app/entauraIS.jar"]
As far as I'm aware, Heroku needs its own heroku.yml file, but from the examples I've seen I have no idea how to adapt it to my situation. Any help is appreciated; I am completely lost with Heroku.
One of the examples of heroku.yml that I looked at:

build:
  docker:
    web: Dockerfile
run:
  web: npm run start
release:
  image: web
  command:
    - npm run migrate up
docker-compose.yml to heroku.yml
docker-compose.yml has some fields similar to heroku.yml's, so you could create one manually.
It would be great if someone created an npm module to convert a docker-compose.yml into a heroku.yml: you would just need to read the docker-compose.yml, pick some values, and write out a heroku.yml. Check this to learn how to read and write yml files.
Docker is not required on Heroku
If you are looking for a platform to deploy your apps and avoid infrastructure nightmares, Heroku is an option for you.
Even better, if your applications are standard (Java & Node.js), don't need exotic build configurations, and are self-contained (no private libraries), you don't need Docker :D
If your Node.js package.json has the standard start and build scripts, it will run on Heroku: just perform a git push to Heroku, without a Dockerfile. Heroku will detect Node.js and its version, and your app will start.
If your Java app has the standard Spring Boot configuration, it's the same: just push your code to Heroku. In this case, before the push, add the Postgres add-on manually and use environment variables in the JDBC URL in your application.properties.
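As an illustration, deploying both apps without Docker could look roughly like this (a sketch, assuming each app lives in its own repository; the app names are hypothetical):

# Front end: Heroku detects Node.js from package.json
heroku create entaura-front
git push heroku main

# Back end: add the Postgres add-on, then push
heroku create entaura-api
heroku addons:create heroku-postgresql --app entaura-api
git push heroku main

For the Spring Boot app, Heroku exposes a JDBC_DATABASE_URL environment variable that can be referenced in application.properties:

spring.datasource.url=${JDBC_DATABASE_URL}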
One process per app on Heroku
If you have an API + front end, you will need two apps on Heroku. Your API will also need the Postgres add-on.
Heroku does not work like docker-compose, i.e. one host with all of your apps: front end + API + DB.
Docker
If you want to use Docker, just add the Dockerfile and git push. Heroku will detect that Docker is required and will perform the standard commands (docker build ..., docker run ...), so no extra configuration is required.
heroku.yml
If Docker is mandatory for your apps, and the standard docker build ... and docker run ... are not enough for them, you will need a heroku.yml.
You will need one heroku.yml for each app on Heroku.
One advantage of this could be that manually adding the Postgres add-on is no longer required, because it is defined in heroku.yml.
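For example, a minimal heroku.yml for the back-end app could look like this (a sketch; the setup section is where heroku.yml declares add-ons, and the app itself is started by the Dockerfile's ENTRYPOINT):

setup:
  addons:
    - plan: heroku-postgresql
build:
  docker:
    web: Dockerfile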

Issues in docker-compose when running up: cannot find localhost, and services start in the wrong order

I'm having a couple of issues running docker-compose.
docker-compose up already works for starting the web service (stuffapi), and I can hit the endpoint at http://localhost:8080/stuff.
I have a small Go app that I would like to run with docker-compose, using a local Dockerfile. The Dockerfile, when built locally, cannot call the stuffapi service on localhost. I have tried using the service name, i.e. http://stuffapi:8080, however this gives the error lookup stuffapi on 192.168.65.1:53: no such host.
I'm guessing this has something to do with the default network setup?
After the stuffapi service has started, I would like my service (stuffsdk in the Dockerfile) to be built and then to execute a command that runs the Go app, which calls the stuff (web) service. docker-compose tries to build the local Dockerfile first, but when it runs its last command, RUN ./main, it fails because stuffapi hasn't been started yet. In my service I have a depends_on for the stuffapi service, so I thought that would start first?
docker-compose.yaml

version: '3'
services:
  stuffapi:
    image: XXX
    ports:
      - 8080:8080
  stuffsdk:
    depends_on:
      - stuffapi
    build: .
Dockerfile

FROM golang:1.15
RUN mkdir /stuffsdk
RUN mkdir /main
ADD ./stuffsdk /stuffsdk
ADD ./main /main
ENV BASE_URL=http://stuffapi:8080
WORKDIR /main
RUN go build
RUN ./main
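A likely fix, sketched below: RUN executes while the image is being built, before any container is running and before the Compose network exists, which explains both the "no such host" error and the apparent ordering problem. Starting the binary should happen at container start instead, via CMD:

FROM golang:1.15
ADD ./stuffsdk /stuffsdk
ADD ./main /main
ENV BASE_URL=http://stuffapi:8080
WORKDIR /main
RUN go build      # compile at image-build time
CMD ["./main"]    # run at container start, when stuffapi is on the Compose network

Note that depends_on only orders container startup; it does not affect image builds, and it does not wait for stuffapi to be ready to accept connections, only for its container to start.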

How to nuke everything inside my Docker containers and start anew?

(I clearly haven't fully mastered Docker's concepts yet, so please do correct me when I'm using terms incorrectly or inaccurately.)
I was running out of storage space, so I ran docker system prune to clean up my system a bit. However, shortly (perhaps immediately) after that, I started running into segmentation faults after starting the Webpack dev server in my container. My guess at this point would be that some npm package has to be rebuilt but isn't, due to some old artefacts still lingering around. I don't run into the segmentation faults if I run the Webpack dev server outside of the container:
web_1 | [2] Project is running at http://0.0.0.0:8000/
web_1 | [2] webpack output is served from /
web_1 | [2] 404s will fallback to /index.html
web_1 | [2] Segmentation fault (core dumped)
web_1 | [2] error Command failed with exit code 139.
Thus, I'm wondering whether docker system prune really removes everything related to the Docker images I've run before, or whether there's some additional cleanup I can do.
My Dockerfile is as follows, where ./stacks/frontend is the directory from which the Webpack dev server is run (through yarn start):
FROM node:6-alpine
LABEL Name="Flockademic dev environment" \
Version="0.0.0"
ENV NODE_ENV=development
WORKDIR /usr/src/app
# Needed for one of the npm dependencies (fibers, when compiling node-gyp):
RUN apk add --no-cache python make g++
COPY ["package.json", "yarn.lock", "package-lock.json*", "./"]
# Unfortunately it seems like Docker can't properly glob this at this time:
# https://stackoverflow.com/questions/35670907/docker-copy-with-file-globbing
COPY ["stacks/frontend/package.json", "stacks/frontend/yarn.lock", "stacks/frontend/package-lock*.json", "./stacks/frontend/"]
COPY ["stacks/accounts/package.json", "stacks/accounts/yarn.lock", "stacks/accounts/package-lock*.json", "./stacks/accounts/"]
COPY ["stacks/periodicals/package.json", "stacks/periodicals/yarn.lock", "stacks/periodicals/package-lock*.json", "./stacks/periodicals/"]
RUN yarn install # Also runs `yarn install` in the subdirectories
EXPOSE 3000 8000
CMD yarn start
And this is its section in docker-compose.yml:
version: '2'
services:
  web:
    image: flockademic
    build:
      context: .
      dockerfile: docker/web/Dockerfile
    ports:
      - 3000:3000
      - 8000:8000
    volumes:
      - .:/usr/src/app/:rw
      # Prevent locally installed node_modules from being mounted inside the container.
      # Unfortunately, this does not appear to be possible for every stack without manually enumerating them:
      - /usr/src/app/node_modules
      - /usr/src/app/stacks/frontend/node_modules
      - /usr/src/app/stacks/accounts/node_modules
      - /usr/src/app/stacks/periodicals/node_modules
    links:
      - database
    environment:
      # Some environment variables I use
I'm getting somewhat frustrated with not having a clear picture of what's going on :) Any suggestions on how to completely start over (and which concepts I'm getting wrong) would be appreciated.
So apparently docker system prune has some additional options, and the proper way to nuke everything was docker system prune --all --volumes. The key for me was probably --volumes, as those volumes would have held the cached packages that had to be rebuilt.
The segmentation fault is gone now \o/
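For reference, docker system df shows how much space images, containers, local volumes, and the build cache are using, which makes it easier to confirm what a prune actually removed:

docker system df                      # disk usage per category
docker system prune --all --volumes  # remove unused images, containers, networks, and volumes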

Docker live server does not render pages in my project directory

I am new to Docker. I'm creating a live-reload server for a project folder that has static HTML files, based on this NPM package. This is what my Dockerfile looks like:
FROM node:8.9.3-alpine
LABEL Name=app-static Version=1.0.0
# Configure container image
ENV NODE_ENV development
WORKDIR /app
VOLUME [ "app" ]
RUN npm install -g live-server@1.2.0
EXPOSE 1764
CMD [ "live-server", "--port=1764", "--entry-file=index.html" ]
This is what my docker-compose.yml file looks like:
version: '2.1'
services:
app-static:
image: app-static
build: .
ports:
- 1764:1764
When I run docker-compose up, my container spins up and I get the following message in my terminal:
Serving "/app" at http://127.0.0.1:1764
When I navigate to http://127.0.0.1:1764/index.html though, I see the following message in my browser window:
Cannot GET /index.html
Why is this happening? It looks like my project files aren't available to my container. I'd appreciate your help. Thanks.
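A likely cause, with a sketch of a fix: nothing mounts the project folder into the container, and VOLUME [ "app" ] only declares an anonymous, initially empty volume, so live-server finds no index.html under /app. Bind-mounting the project directory in docker-compose.yml should make the files visible:

version: '2.1'
services:
  app-static:
    image: app-static
    build: .
    ports:
      - 1764:1764
    volumes:
      - .:/app    # mount the project folder so live-server can serve index.html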
