Docker Compose not starting mongo service even though the main service depends on it

I am trying to construct a CI process to build, test and publish my .NET Core app using Docker Compose and bash scripts.
I have UnitTests, IntegrationTests and XApi projects in a folder
and have created a Dockerfile and docker-compose.yml like below.
IntegrationTests depends on mongointegration, so I added links and depends_on attributes to the testandpublish service in docker-compose.yml.
When I try docker-compose up or docker-compose up testandpublish,
it fails to connect to mongo (Dockerfile, step 10): the mongo service has not been started yet (I don't understand why).
In step 10, if I change RUN to CMD, it can connect to mongo and docker-compose works fine. But then I cannot detect in my sh script whether the tests failed or succeeded, because docker-compose up no longer breaks.
My question is: why does Docker Compose not start the mongointegration service? And if that is impossible, how can I tell that the testandpublish service failed? Thanks.
Structure:
XProject
-src
-Tests
-UnitTests
-IntegrationTests
-Dockerfile
-docker-compose.yml
-XApi
My Dockerfile content is (I have added line numbers to explain problem here):
1.FROM microsoft/dotnet:1.1.0-sdk-projectjson
2.COPY . /app
3.WORKDIR /app/src/Tests/UnitTests
4.RUN ["dotnet", "restore"]
5.RUN ["dotnet", "build"]
6.RUN ["dotnet", "test"]
7.WORKDIR /app/src/Tests/IntegrationTests
8.RUN ["dotnet", "restore"]
9.RUN ["dotnet", "build"]
10.RUN ["dotnet", "test"]
11.WORKDIR /app/src/XApi
12.RUN ["dotnet", "restore"]
13.RUN ["dotnet", "build"]
14.CMD ["dotnet", "publish", "-c", "Release", "-o", "publish"]
and my docker-compose.yml
version: "3"
services:
testandpublish:
build: .
links:
- mongointegration
depends_on:
- mongointegration
mongointegration:
image: mongo
ports:
- "27017:27017"

The image build phase and the container run phase are two very separate steps in docker-compose.
Build and Run Differences
The build phase creates each of the image layers from the steps in the Dockerfile. Each step runs in a standalone container. None of your service config, apart from the build: stanza specific to a service's build, is available during the build.
Once the image is built, it can be run as a container with the rest of your docker-compose service config.
Instead of running tests in your Dockerfile, you could create a script to use as the CMD that runs all your test steps in the container.
#!/bin/sh
set -uex
cd /app/src/Tests/UnitTests
dotnet restore
dotnet build
dotnet test
cd /app/src/Tests/IntegrationTests
dotnet restore
dotnet build
dotnet test
cd /app/src/XApi
dotnet restore
dotnet build
dotnet publish -c Release -o publish
If the microsoft/dotnet:1.1.0-sdk-projectjson image is Windows based you might need to convert this to the equivalent CMD or PS commands.
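On the second half of the question (detecting that testandpublish failed): docker-compose up supports --abort-on-container-exit and --exit-code-from, which propagate a service's exit status to the calling shell, so a CI script can branch on it. A minimal sketch (run_and_check is a hypothetical helper name):

```shell
#!/bin/sh
# run_and_check COMMAND...: run the command and report PASSED/FAILED,
# propagating the exit status. In CI you would invoke it as:
#   run_and_check docker-compose up --build --exit-code-from testandpublish
run_and_check() {
  "$@"
  status=$?
  if [ "$status" -ne 0 ]; then
    echo "FAILED with exit code $status"
    return "$status"
  fi
  echo "PASSED"
}
```

Because set -uex in the test script above makes any failing dotnet command abort with a non-zero status, that status reaches the CI shell through --exit-code-from.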
Container Dependencies
depends_on doesn't work quite as well as most people assume it will. In its simple form, depends_on only waits for the container to launch before moving on to starting the dependent container. It's not smart enough to wait for the process inside the container to be ready. Proper dependencies can be done with a healthcheck and a condition.
services:
testandpublish:
build: .
links:
- mongointegration
depends_on:
mongointegration:
condition: service_healthy
mongointegration:
image: mongo
ports:
- "27017:27017"
healthcheck:
test: ["CMD", "docker-healthcheck"]
interval: 30s
timeout: 10s
retries: 3
Using the Docker health check script, after it has been copied into the container via a Dockerfile:
#!/bin/bash
set -eo pipefail
host="$(hostname --ip-address || echo '127.0.0.1')"
if mongo --quiet "$host/test" --eval 'quit(db.runCommand({ ping: 1 }).ok ? 0 : 1)'; then
exit 0
fi
exit 1
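For completeness, the script has to exist inside the mongo image before the healthcheck can call it. A minimal sketch of such a Dockerfile (the file name and destination path are illustrative):

```dockerfile
# Build a mongo image that contains the health check script,
# so the compose healthcheck can invoke "docker-healthcheck".
FROM mongo
COPY docker-healthcheck /usr/local/bin/docker-healthcheck
RUN chmod +x /usr/local/bin/docker-healthcheck
```

The mongointegration service would then use build: instead of image: mongo so the script is baked in.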

RUN steps are executed when Docker builds the image, and no containers are available yet. The CMD step, in contrast, is executed at run time, when Docker Compose has already started the mongointegration container it depends on.

Related

Docker Multi Stage Build Dependencies

I want to create a Dockerfile which contains 2 stages.
The first stage is to set up a MySQL server and the second stage is to start a backend service that accesses the server.
The problem is that the backend service stops when no MySQL server is available. Is there a way to make the second stage depend on the first stage being started?
What is a little strange: when I create the Dockerfile with the database at the top, the log of the backend is displayed. If the backend is on top, the log of MySQL is displayed on startup.
Actual Dockerfile:
FROM mysql:latest AS BackendDatabase
RUN chown -R mysql:root /var/lib/mysql/
ARG MYSQL_DATABASE="DienstplanverwaltungDatabase"
ARG MYSQL_USER="user"
ARG MYSQL_PASSWORD="password"
ARG MYSQL_ROOT_PASSWORD="password"
ENV MYSQL_DATABASE=$MYSQL_DATABASE
ENV MYSQL_USER=$MYSQL_USER
ENV MYSQL_PASSWORD=$MYSQL_PASSWORD
ENV MYSQL_ROOT_PASSWORD=$MYSQL_ROOT_PASSWORD
EXPOSE 3306
FROM openjdk:10-jre-slim AS Backend
LABEL description="Backend Dienstplanverwaltung"
LABEL maintainer="Martin"
COPY ./SpringDienstplanverwaltung/build/libs/dienstplanverwaltung-0.0.1-SNAPSHOT.jar /usr/local/app.jar
EXPOSE 8080
ENTRYPOINT java -jar /usr/local/app.jar
Actually, you need Docker Compose with two containers: one for MySQL, one for the Java app.
Multi-stage builds are mostly for cases like: #1 build something, for example Java or Go; #2 create a second image and copy the results of the build into it. The general idea is to keep the second stage clean. We do not need build tools in the second stage, only the results.
Please see this example:
FROM golang:1.16
WORKDIR /go/src/github.com/alexellis/href-counter/
RUN go get -d -v golang.org/x/net/html
COPY app.go ./
RUN CGO_ENABLED=0 go build -a -installsuffix cgo -o app .
FROM alpine:latest
RUN apk --no-cache add ca-certificates
WORKDIR /root/
COPY --from=0 /go/src/github.com/alexellis/href-counter/app ./
CMD ["./app"]
Okay, you seem to be a little confused with various things here. First of all, multi-stage builds are for building an application that needs some kind of build/compile process, then copying that build into another container with fewer dependencies and just the executable. In this context, trying to run a database in a multi-stage build makes no sense at all, because building the container does not run it.
Now, you want a multi-stage build to build the Java app, copy that build into another container, and run it. When running that container you also need a MySQL database, and docker-compose is a good tool for that, as in this example:
version: '3.8'
services:
db:
image: mysql:8.0
cap_add:
- SYS_NICE
restart: always
environment:
- MYSQL_DATABASE=mydatabase
- MYSQL_ROOT_PASSWORD=mypassword
ports:
- '3306:3306'
volumes:
- db:/var/lib/mysql
# - ./db/init.sql:/docker-entrypoint-initdb.d/init.sql
api:
container_name: your-backend
build:
context: .
image: your-backend
depends_on:
- db
ports:
- 8080:8080
environment:
ENV_VAR_EXAMPLE: example
links:
- db
volumes:
db:
driver: local
Also, an example multi-stage Dockerfile for java applications:
# First stage: complete build environment
FROM maven:3.5.0-jdk-8-alpine AS builder
# add pom.xml and source code
ADD ./pom.xml pom.xml
ADD ./src src/
# package jar
RUN mvn clean package
# Second stage: minimal runtime environment
FROM openjdk:8-jre-alpine
# copy jar from the first stage
COPY --from=builder target/my-app-1.0-SNAPSHOT.jar my-app-1.0-SNAPSHOT.jar
EXPOSE 8080
CMD ["java", "-jar", "my-app-1.0-SNAPSHOT.jar"]
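Since depends_on only waits for the db container to start, not for MySQL to accept connections, the backend often also needs a small wait loop before launching. A sketch of such a loop, assuming bash is available in the backend image:

```shell
#!/bin/bash
# wait_for HOST PORT TRIES: poll once per second until a TCP connection
# to HOST:PORT succeeds (uses bash's built-in /dev/tcp, no netcat needed).
wait_for() {
  host=$1; port=$2; tries=${3:-30}
  i=0
  while [ "$i" -lt "$tries" ]; do
    if (exec 3<>"/dev/tcp/$host/$port") 2>/dev/null; then
      echo "up"
      return 0
    fi
    i=$((i + 1))
    sleep 1
  done
  echo "timeout"
  return 1
}
```

The ENTRYPOINT could then become something like: wait_for db 3306 30 && exec java -jar /usr/local/app.jar, where db is the compose service name from the example above.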

Container exited with code 0, and my app is served from the host OS

I want to dockerize a Next.js project.
I am using Ubuntu 20.04
I first created a Next.js app in my /home/user/project/ folder using npx create-next-app
So I have the project source code in my host machine.
But I want to dockerize it, so I created a docker-compose.yaml:
next:
build:
context: ./next
dockerfile: Dockerfile
container_name: next
volumes:
- ./next:/var/www/html
ports:
- "3000:3000"
networks:
- nginx
And this is the Dockerfile:
#Creates a layer from node:alpine image.
FROM node:alpine
#Creates directories
RUN mkdir -p /usr/src/app
#Sets an environment variable
ENV PORT 3000
#Sets the working directory for any RUN, CMD, ENTRYPOINT, COPY, and ADD commands
WORKDIR /usr/src/app
#Copy new files or directories into the filesystem of the container
COPY package.json /usr/src/app
COPY package-lock.json /usr/src/app
#Execute commands in a new layer on top of the current image and commit the results
RUN npm install
##Copy new files or directories into the filesystem of the container
COPY . /usr/src/app
#Execute commands in a new layer on top of the current image and commit the results
RUN npm run build
#Informs container runtime that the container listens on the specified network ports at runtime
EXPOSE 3000
#Allows you to configure a container that will run as an executable
ENTRYPOINT ["npm", "run"]
Then I build my container using docker-compose build && docker-compose up.
The container is built, but it's not running and is displaying EXITED (0)
and the LOGS has the following message:
Lifecycle scripts included in next-frontend@0.1.0:
start
next start
available via `npm run-script`:
dev
next dev
build
next build
lint
next lint
But of course, if I run npm run dev on the host, it will run the app from the host and not from the container (it runs, but that's not what I want).
I feel like there is some very fundamental mistake in my deployment, but I just started with Docker so I can't find out what
Also, I copied the Dockerfile from a tutorial so it might not fit the way I created the project
ENTRYPOINT ["npm", "run"]... What?
From npm run documentation,
This runs an arbitrary command from a package's "scripts" object. If no "command" is provided, it will list the available scripts.
In the docker-compose.yml, you need to override the CMD instruction (that is empty in your case) with the npm script you want to run. Something like this:
next:
build:
context: ./next
dockerfile: Dockerfile
container_name: next
command: ["start"]
volumes:
- ./next:/var/www/html
ports:
- "3000:3000"
networks:
- nginx
Since you are using the Compose Spec, this is the reference for the command instruction.
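An alternative to overriding command in compose is to set the default in the Dockerfile itself: Docker concatenates ENTRYPOINT and CMD at run time, so with CMD ["start"] the container executes npm run start. A sketch based on the Dockerfile above (build steps omitted):

```dockerfile
FROM node:alpine
WORKDIR /usr/src/app
# ENTRYPOINT and CMD are concatenated at run time, so the default
# invocation becomes: npm run start. CMD remains overridable from
# docker run or from a compose "command:" entry.
ENTRYPOINT ["npm", "run"]
CMD ["start"]
```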

How do I automatically start a .NET Core 3.1 API when the docker container starts?

When I deploy my .NET Core WebAPI to a Docker container, it fails to "run" by default. (It's dotnet that doesn't run, the actual container runs as expected)
Running docker ps -a shows the container with a Status of "UP":
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
eb3fa8be5101 firstapi:dev "tail -f /dev/null" 4 minutes ago Up 4 minutes 0.0.0.0:32790->80/tcp MyFirstApi
Attempting to hit "http://localhost:32790/api/WeatherForecast" shows an error in the browser (ERR_EMPTY_RESPONSE) and "Error: socket hang up" in Postman.
When I debug this project in Visual Studio (the debugging project is set to "Docker Compose") I can successfully hit the API.
As soon as I stop debugging, I can't reach the endpoints again.
When I shell into the container and manually launch the API, it works from my browser again:
PS C:\WINDOWS\system32> docker exec -it eb3 /bin/bash
root@eb3fa8be5101:/app# cd /app/bin/Debug/netcoreapp3.1/
root@eb3fa8be5101:/app/bin/Debug/netcoreapp3.1# dotnet FirstApi.dll
[18:43:16 INF] Starting up
[18:43:16 INF] Now listening on: http://[::]:80
[18:43:16 INF] Application started. Press Ctrl+C to shut down.
[18:43:16 INF] Hosting environment: Development
[18:43:16 INF] Content root path: /app/bin/Debug/netcoreapp3.1
My dockerfile looks pretty standard, untouched from when VS generated it:
#See https://aka.ms/containerfastmode to understand how Visual Studio uses this Dockerfile to build your images for faster debugging.
FROM mcr.microsoft.com/dotnet/core/aspnet:3.1-buster-slim AS base
WORKDIR /app
EXPOSE 80
FROM mcr.microsoft.com/dotnet/core/sdk:3.1-buster AS build
WORKDIR /src
COPY ["/FirstApi.csproj", "FirstApi/"]
RUN dotnet restore "FirstApi/FirstApi.csproj"
COPY . .
WORKDIR "/src/FirstApi"
RUN dotnet build "FirstApi.csproj" -c Release -o /app/build
FROM build AS publish
RUN dotnet publish "FirstApi.csproj" -c Release -o /app/publish
FROM base AS final
WORKDIR /app
COPY --from=publish /app/publish .
ENTRYPOINT ["dotnet", "FirstApi.dll"]
And my launchSettings.json file is also pretty standard:
{
"$schema": "http://json.schemastore.org/launchsettings.json",
"profiles": {
"Docker": {
"commandName": "Docker",
"launchBrowser": true,
"launchUrl": "{Scheme}://{ServiceHost}:{ServicePort}",
"publishAllPorts": true,
"useSSL": false
}
}
}
The docker-compose.yml file:
version: '3.4'
services:
firstapi:
image: ${DOCKER_REGISTRY-}firstapi
build:
context: .
dockerfile: FirstApi/Dockerfile
And the docker-compose.override.yml:
version: '3.4'
services:
firstapi:
environment:
- ASPNETCORE_ENVIRONMENT=Development
ports:
- "80"
volumes:
- ${APPDATA}/Microsoft/UserSecrets:/root/.microsoft/usersecrets:ro
Running docker-compose up --build from outside Visual Studio will rebuild and run the container as expected.
It is possible that port 80 is causing the issue. Port 80 may already be taken on the host (for example by IIS) and cause a conflict when you run the container. Map it to some other host port.
firstapi:
environment:
- ASPNETCORE_ENVIRONMENT=Development
ports:
- "8000:80"
volumes:
- ${APPDATA}/Microsoft/UserSecrets:/root/.microsoft/usersecrets:ro
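To check whether a host port is actually free before picking a mapping, a quick probe can help. A sketch using bash's built-in /dev/tcp (no extra tools assumed):

```shell
#!/bin/bash
# port_in_use PORT: print "in use" if something is listening on the local
# TCP port, "free" otherwise (probes via bash's built-in /dev/tcp).
port_in_use() {
  if (exec 3<>"/dev/tcp/127.0.0.1/$1") 2>/dev/null; then
    echo "in use"
  else
    echo "free"
  fi
}
```

If port_in_use 80 reports in use, a mapping like "8000:80" avoids the clash.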

RUN command cannot access volumes

It appears RUN in a Dockerfile can't see my volume directory, whereas ENTRYPOINT can.
Here is an example with a dockerfile and docker-compose.yml that is having the issue:
FROM microsoft/dotnet:2.0-sdk
EXPOSE 5000
ENV ASPNETCORE_ENVIRONMENT=Development
WORKDIR /src/testing
RUN dotnet restore
ENTRYPOINT ["dotnet", "run", "--urls=http://0.0.0.0:5000"]
docker-compose.yml:
version: "3.4"
services:
doctnetcore-api-project:
build: ./api/
container_name: doctnetcore-api-project
image: doctnetcore-api-project:development
restart: 'always'
networks:
- mynetwork
volumes:
- /api/src:/src
networks:
mywebmc:
external:
name: mynetwork
When I run docker-compose up I get an error shown below:
MSBUILD : error MSB1003: Specify a project or solution file. The current working directory does not contain a project or solution file.
ERROR: Service 'doctnetcore-api-project' failed to build: The command '/bin/sh -c dotnet restore' returned a non-zero code: 1
If I comment out RUN dotnet restore and run dotnet restore manually before running docker-compose, it works fine.
So for whatever reason, it appears RUN can't see my volume directory while ENTRYPOINT can.
The statements in a Dockerfile are executed at build-time (docker build) and at this point there are no volumes present.
In contrast, the ENTRYPOINT is executed when you run a container (docker run) which has access to potentially mapped volumes.
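Concretely, one way to make dotnet restore succeed at build time is to copy the sources into the image rather than relying on the run-time volume. A sketch based on the question's Dockerfile (the build context is assumed to contain the project):

```dockerfile
FROM microsoft/dotnet:2.0-sdk
EXPOSE 5000
ENV ASPNETCORE_ENVIRONMENT=Development
WORKDIR /src/testing
# Copy the sources into the image so they exist at build time,
# instead of relying on a volume that is only mounted at run time.
COPY . .
RUN dotnet restore
ENTRYPOINT ["dotnet", "run", "--urls=http://0.0.0.0:5000"]
```

With this, the volumes: entry in docker-compose.yml is only needed if you want live-editing of the sources.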

Use Docker to run a build process

I'm using docker and docker-compose to set up a build pipeline. I've got a front-end that's written in javascript and needs to be built before being used. The backend is written in go.
To make this component integrate with the rest of our docker-compose setup, I want to do the building in a docker image as well.
This is the flow I'm going for:
during build do:
build the frontend stuff and put it in /output (which is bound to the output volume)
build the backend server
when running do:
run the server, it has access to the build files in /output
I'm quite new to docker and docker-compose so I'm not sure if this is possible, or even the right thing to do.
For reference, here's my docker-compose.yml:
version: '2'
volumes:
output:
driver: local
services:
frontend:
build: .
volumes:
- output:/output
backend:
build: ./backend
depends_on:
- frontend
volumes:
- output:/output
and Dockerfile:
FROM node
# create working dir
RUN mkdir -p /usr/src/app
WORKDIR /usr/src/app
ADD package.json /usr/src/app/package.json
# install packages
RUN npm install
COPY . /usr/src/app
# build frontend files and place results in /output
RUN npm build
RUN cp /usr/src/app/build/* /output
And backend/Dockerfile:
FROM go
# copy and build server
COPY . /usr/src/backend
WORKDIR /usr/src/backend
RUN go build
# run the server
ENTRYPOINT ["/usr/src/backend/main"]
Something is wrong here, but I do not know what. It seems as though the output of the build step are not persisted in the output volume. What can I do to fix this?
You cannot attach a volume during docker build.
The reason for this is that the goal of the docker build command is to build an image, and nothing else; it doesn't need volumes, since a Dockerfile has ADD / COPY to bring files into the image.
To produce your output, you should create a script which does the npm install; npm build; cp /usr/src/app/build/* /output steps from your current Dockerfile, and use this script as the ENTRYPOINT / CMD in your Dockerfile.
I'm not sure compose can run this, but in any case, I find it more clear wrapped in a shell script that first executes the frontend builder container, then executing the backend container with the output directory as a volume.
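Applied to the frontend Dockerfile above, that means the build steps move out of RUN and into the command that runs when the container starts and the output volume is mounted. A sketch (I've used npm run build, assuming a build script exists in package.json):

```dockerfile
FROM node
WORKDIR /usr/src/app
COPY package.json ./
# Dependencies can still be installed at build time; only steps that
# need the volume have to wait until run time.
RUN npm install
COPY . .
# Defer the build to run time, when the "output" volume is mounted,
# so the results land in the shared volume instead of an image layer.
CMD ["sh", "-c", "npm run build && cp -r build/. /output/"]
```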
