I'm currently on a relatively new journey into microservices architecture, and so far I have been developing using a docker-compose file in my .NET 6 solution that builds my containers locally. Whilst this has been very useful, as the solution grows it is becoming very painful to run locally, because having all of the containers running on my system makes it very slow and cumbersome.
The production version of this solution will eventually target Kubernetes, so it seems like a good idea to create a development Kubernetes cluster, use something like the Bridge to Kubernetes add-in for Visual Studio, and abandon docker-compose.
The journey towards this, though, has brought up a number of problems along the way that I'm hoping I can get some advice on. I have used Kompose to convert my docker-compose file into deployment and service files, and my containers are built and live on my local system. When I deploy development containers, which have the :dev tag, they do not run and I get the CrashLoopBackOff issue, with no logs and therefore no real way to work out why this happens. However, if I build the containers under a release profile and use the :latest tag, the containers deploy and run without crashing.
My question is: how do I deploy my containers to Kubernetes in the development profile without them crashing, so that I can get up and running with Bridge to Kubernetes? I'm wondering whether it is related to the fact that the built containers only exist locally, in the image store provided by Docker Desktop, or do I need to provide anything specific for containers that are built using a development profile?
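For reference, the conversion mentioned above was done with Kompose; a typical invocation looks something like this (the compose file name and output folder here are just what I use, so adjust as needed):
# generate Kubernetes deployment and service manifests from the compose file
kompose convert -f docker-compose.yml -o k8s/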
Dockerfile
FROM mcr.microsoft.com/dotnet/aspnet:6.0 AS base
WORKDIR /app
EXPOSE 80
FROM mcr.microsoft.com/dotnet/sdk:6.0 AS build
WORKDIR /src
COPY ["Microservices/PatientService/PatientService/PatientService.csproj", "Microservices/PatientService/PatientService/"]
COPY ["Microservices/PatientService/PatientService.Data/PatientService.Data.csproj", "Microservices/PatientService/PatientService.Data/"]
COPY ["Microservices/PatientService/PatientService.Domain/PatientService.Domain.csproj", "Microservices/PatientService/PatientService.Domain/"]
RUN dotnet restore "Microservices/PatientService/PatientService/PatientService.csproj"
COPY . .
WORKDIR "/src/Microservices/PatientService/PatientService"
RUN dotnet build "PatientService.csproj" -c Release -o /app/build
FROM build AS publish
RUN dotnet publish "PatientService.csproj" -c Release -o /app/publish
FROM base AS final
WORKDIR /app
COPY --from=publish /app/publish .
ENTRYPOINT ["dotnet", "PatientService.dll"]
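One thing worth noting about this Dockerfile: both the build and publish stages hard-code -c Release, so even an image tagged :dev that is built from it contains a Release publish. If you want the tag to reflect the configuration, a build argument is one option. This is only a sketch, and the BUILD_CONFIGURATION argument name is my own rather than anything Visual Studio generates:
# build stage: take the configuration as a build argument instead of hard-coding Release
FROM mcr.microsoft.com/dotnet/sdk:6.0 AS build
ARG BUILD_CONFIGURATION=Release
# ... same COPY and dotnet restore steps as above ...
RUN dotnet build "PatientService.csproj" -c $BUILD_CONFIGURATION -o /app/build

FROM build AS publish
# ARG values do not flow across stages automatically, so re-declare it here
ARG BUILD_CONFIGURATION=Release
RUN dotnet publish "PatientService.csproj" -c $BUILD_CONFIGURATION -o /app/publish
An image could then be built from the solution root with docker build --build-arg BUILD_CONFIGURATION=Debug -t patientservice:dev -f Microservices/PatientService/PatientService/Dockerfile . (the Dockerfile path assumes the usual Visual Studio layout, with the Dockerfile next to the .csproj).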
Kubernetes deployment
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    io.kompose.service: patientservice
  name: patientservice
spec:
  replicas: 1
  selector:
    matchLabels:
      io.kompose.service: patientservice
  strategy: {}
  template:
    metadata:
      labels:
        io.kompose.service: patientservice
    spec:
      containers:
        - image: patientservice:latest
          imagePullPolicy: IfNotPresent
          name: patientservice
          startupProbe:
            httpGet:
              path: /health/startup
              port: 80
            failureThreshold: 30
            periodSeconds: 10
          livenessProbe:
            httpGet:
              path: /healthz
              port: 80
            initialDelaySeconds: 0
            periodSeconds: 10
            timeoutSeconds: 1
            failureThreshold: 3
          readinessProbe:
            httpGet:
              path: /ready
              port: 80
            successThreshold: 3
          ports:
            - containerPort: 80
          resources: {}
      restartPolicy: Always
status: {}
Kubernetes service manifest
apiVersion: v1
kind: Service
metadata:
  labels:
    io.kompose.service: patientservice
  name: patientservice
spec:
  ports:
    - name: "5052"
      port: 5052
      targetPort: 80
  selector:
    io.kompose.service: patientservice
status:
  loadBalancer: {}
When this is deployed, I get CrashLoopBackOff (unless I build as release). I have followed a number of articles that have not helped me. I see a number of suggestions to exec into the container, but I'm not sure what or where I need to look inside the container to help resolve my issue.
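For reference, these are the kinds of commands I have been pointed at so far; they only assume the deployment name and labels from the manifests above:
# list the pods created by the deployment and their restart counts
kubectl get pods -l io.kompose.service=patientservice
# the Events section at the bottom usually explains probe failures, image pull errors, OOM kills, etc.
kubectl describe pod <pod-name>
# logs from the previous, crashed container instance (if it wrote any before exiting)
kubectl logs <pod-name> --previous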
Update
It seems that this issue only occurs if I use docker-compose.yaml, either on the command line or in Visual Studio; it also happens if I build an individual Dockerfile within Visual Studio. If I build an individual Dockerfile on the command line and add a tag that includes my Docker Hub username, i.e. username/microservicename:latest, it runs on Kubernetes without crashing. Does this mean I need to build each container manually on the command line, or is there a tool that I've missed to do this for me?
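To make the working case concrete, this is the shape of the manual build I mean, run from the solution root (the Dockerfile path assumes the Visual Studio convention of keeping it next to the .csproj, and username is a placeholder for my Docker Hub account):
docker build -f Microservices/PatientService/PatientService/Dockerfile -t username/patientservice:latest .
The deployment's image field then has to reference the same username/patientservice:latest tag instead of patientservice:latest.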
Related
I've been struggling for two weeks now. So, any help is welcome.
I created an AKS cluster and I am trying to host my apps there.
What is done:
All the containers are tested locally and working perfectly.
All CI/CD is done properly
The main services exposed to the internet are the frontend (Angular 13) and an ASP.NET Core 6 Web API
The frontend is working properly using TLS (without a certificate for now). I can access it with no problem using a host URL. It is exposed using an ingress on the app gateway.
Now the problems start :) The Web API pod is running. I can access it using the load balancer + static IP (Kubernetes service of type LoadBalancer). But I'm unable to expose it using an ingress on the app gateway.
No errors on my pod when I run a kubectl logs podname
The Web API ingress is a copy/paste of the frontend one (which may be the mistake)
Final thing: when I change the API service to the LoadBalancer type, I can access the API from outside using the load balancer.
I believe something's wrong in my ingress definition
Here are the two ingresses:
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: api-ingress
  namespace: test
  annotations:
    kubernetes.io/ingress.class: azure/application-gateway
spec:
  rules:
    - host: "testapi.url.ca"
      http:
        paths:
          - pathType: Prefix
            path: /
            backend:
              service:
                name: api-svc
                port:
                  number: 80
---
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: test-ingress
  namespace: test
  annotations:
    kubernetes.io/ingress.class: azure/application-gateway
    appgw.ingress.kubernetes.io/ssl-redirect: "true"
spec:
  tls:
    - secretName: url-tls
      hosts:
        - test.url.ca
        - test.url.com
  rules:
    - host: "test.url.ca"
      http:
        paths:
          - pathType: Prefix
            path: /
            backend:
              service:
                name: web-svc
                port:
                  number: 80
My api service:
apiVersion: v1
kind: Service
metadata:
  name: api-svc
  namespace: test
spec:
  selector:
    app: api
  ports:
    - name: regular
      port: 80
      targetPort: 80
      protocol: TCP
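When one ingress works and the other doesn't on the same app gateway, comparing what each one actually resolved to is usually the quickest check; a few commands that only assume the names and namespace from the manifests above:
# both ingresses should show an ADDRESS once the application gateway controller has picked them up
kubectl get ingress -n test
# the Events section reports ingress controller errors (bad backend, missing service, and so on)
kubectl describe ingress api-ingress -n test
# the service must resolve to ready pod endpoints, otherwise the gateway backend pool stays empty
kubectl get endpoints api-svc -n test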
Finally, the Dockerfile used to create the image:
FROM mcr.microsoft.com/dotnet/aspnet:6.0 AS base
WORKDIR /app
EXPOSE 80/tcp
EXPOSE 443/tcp
FROM mcr.microsoft.com/dotnet/sdk:6.0 AS build
WORKDIR /src
COPY ./Lib/Contract ./Lib/Contract
COPY ./Lib/Framework ./Lib/Framework
COPY ./Output/Api ./Output/Api
WORKDIR /src/Output/Api
RUN dotnet restore Api.csproj --disable-parallel
RUN dotnet build Api.csproj --no-restore -c Release
FROM build AS publish
RUN dotnet publish Api.csproj --no-restore --no-build -c Release -o /app
FROM base AS final
WORKDIR /app
COPY --from=publish /app/SensitiveData/localhost1.pfx /https/localhost1.pfx
COPY --from=publish /app .
RUN ls -l /app
ENTRYPOINT ["dotnet", "Api.dll"]
Thanks for your help.
I tried different Microsoft tutorials with no success.
I also tried to use the Kubernetes ingress, but I faced other problems and decided to come back to this approach.
I have tried many times to run Skaffold from my project directory. It keeps returning the same error: 1/1 deployment(s) failed.
Skaffold.yaml file:
apiVersion: skaffold/v2alpha3
kind: Config
deploy:
  kubectl:
    manifests:
      - ./infra/k8s/*
build:
  local:
    push: false
  artifacts:
    - image: ankan00/auth
      context: auth
      docker:
        dockerfile: Dockerfile
      sync:
        manual:
          - src: 'src/**/*.ts'
            dest: .
I created the docker image ankan00/auth with docker build -t ankan00/auth .
It ran successfully when I was previously working on this project. But I had to uninstall Docker for some reason, and when I reinstalled Docker and built the image again (after deleting the previous instance of the image in Docker Desktop), Skaffold stopped working. I tried to delete the skaffold folder and reinstall Skaffold, but the problem remains the same. Every time it ends up cleaning up and throwing 1/1 deployment(s) failed.
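A generic error like 1/1 deployment(s) failed usually just means the deployment never stabilised, so the underlying reason has to come from more verbose Skaffold output or from the deployment itself; for example (names taken from the manifests below):
# run with debug-level logging to see the build, tag, and deploy steps Skaffold performs
skaffold dev -v debug
# then look at why the auth deployment is not becoming ready
kubectl describe deployment auth-depl
kubectl get pods -l app=auth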
My Dockerfile:
FROM node:alpine
WORKDIR /app
COPY package.json .
RUN npm install
COPY . .
CMD ["npm", "start"]
My auth-depl.yaml file, which is in the infra\k8s directory:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: auth-depl
spec:
  replicas: 1
  selector:
    matchLabels:
      app: auth
  template:
    metadata:
      labels:
        app: auth
    spec:
      containers:
        - name: auth
          image: ankan00/auth
---
apiVersion: v1
kind: Service
metadata:
  name: auth-srv
spec:
  selector:
    app: auth
  ports:
    - name: auth
      protocol: TCP
      port: 3000
      targetPort: 3000
Okay! I resolved the issue by re-installing Docker Desktop and not enabling Kubernetes in it. I installed Minikube, and then when I ran skaffold dev it no longer gave an error at the waiting for deployments to stabilize... stage. Looks like Docker Desktop's Kubernetes is the culprit? I am not sure though, because I ran it successfully before.
New update! I went back to working with Docker Desktop's Kubernetes. I deleted Minikube because Minikube uses the same port that the ingress-nginx server uses to run the project. So I decided to put Docker Desktop's Kubernetes back, along with Google Cloud's Kubernetes Engine, and Skaffold works perfectly this time.
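For anyone switching between local clusters like this: Skaffold builds against and deploys to whatever the current kubectl context is, so it is worth checking which cluster you are actually targeting. A quick sketch:
# show the available contexts and which one is active
kubectl config get-contexts
# switch between Docker Desktop's built-in cluster and Minikube
kubectl config use-context docker-desktop
kubectl config use-context minikube
# skaffold dev then builds and deploys against the active context
skaffold dev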
Hello, I'm new to Minikube and I can't connect to an exposed service. I created a .NET Core API, built the image, got it into my private registry, and then created a deployment with a YAML file, which works.
But I can't expose that deployment as a service. Every time after I expose it everything looks fine, but I can't connect to it via the port and the Minikube IP address.
If I try to connect to ipaddress:port I get connection refused.
Deployment yml file:
kind: Deployment
apiVersion: apps/v1
metadata:
  name: testapp1-deployment
  labels:
    app: testapp1
spec:
  replicas: 2
  selector:
    matchLabels:
      app: testapp1
  template:
    metadata:
      labels:
        app: testapp1
        version: v0.1
    spec:
      containers:
        - name: testapp1-deployment
          image: localhost:5000/testapp1
          imagePullPolicy: Never
          resources:
            requests:
              cpu: 120m
          ports:
            - containerPort: 80
Service yml file:
apiVersion: v1
kind: Service
metadata:
  name: testapp1-service
spec:
  type: NodePort
  selector:
    app: testapp1
  ports:
    - protocol: TCP
      port: 80
      targetPort: 80
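For completeness, this is how the service can be reached from the host once it is created; only the service name from the manifest above is assumed:
# show the assigned NodePort (port 80 gets mapped to a port in the 30000-32767 range on the node)
kubectl get service testapp1-service
# Minikube can print the reachable URL for a NodePort service directly
minikube service testapp1-service --url
# then test it from the host, substituting the NodePort reported above
curl http://$(minikube ip):<nodePort>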
The problem was my Dockerfile, and that I hadn't enabled Docker support in my ASP.NET Core app. I enabled Docker support and changed the Dockerfile a bit, then I rebuilt it and it worked for me.
FROM mcr.microsoft.com/dotnet/core/sdk:3.1 AS build-env
WORKDIR /app
COPY *.csproj ./
RUN dotnet restore
COPY . ./
RUN dotnet publish -c Release -o out
FROM mcr.microsoft.com/dotnet/core/aspnet:3.1
WORKDIR /app
COPY --from=build-env /app/out .
ENTRYPOINT ["dotnet", "aspnetapp.dll"]
That's the Dockerfile I'm using for my app at the moment, so if someone else faces the same problem as me, try using this Dockerfile. If it still doesn't work, look through the previous comments.
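As a sanity check before going back to Kubernetes, the image built from this Dockerfile can be run directly; a sketch, where the image name and host port are placeholders and aspnetapp.dll has to match your project's assembly name:
docker build -t testapp1 .
# the ASP.NET Core 3.1 runtime image listens on port 80 inside the container by default
docker run --rm -p 8080:80 testapp1
# the API should then answer on http://localhost:8080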
I've been experimenting with Skaffold with a local Minikube installation. It's nice to be able to develop your project on something that is as close as possible to production.
If I use the getting-started example provided in the Skaffold GitHub repo, everything works just fine: my IDE (IntelliJ IDEA) stops on the breakpoints, and when I modify my code, the changes are reflected instantly.
Now, on my personal project, which is a bit more complicated than a simple main.go file, things don't work as expected. The IDE stops on the breakpoint, but hot code reload is not happening, even though I see in the console that Skaffold detected the changes made to that particular file; unfortunately the changes are not reflected/applied.
A Dockerfile is used to build the image; it is the following:
FROM golang:1.14 AS builder
WORKDIR /app
COPY go.mod go.sum ./
RUN go mod download
COPY . .
RUN CGO_ENABLED=0 go build -o /app.o ./cmd/shortener/shortener.go
FROM alpine:3.12
COPY --from=builder /app.o ./
COPY --from=builder /app ./
EXPOSE 3000
ENV GOTRACEBACK=all
CMD ["./app.o"]
On the Kubernetes side, I'm creating a deployment and a service as follows:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: url-shortener-deployment
spec:
  selector:
    matchLabels:
      app: url-shortener
  template:
    metadata:
      labels:
        app: url-shortener
    spec:
      containers:
        - name: url-shortener
          image: url_shortener
          ports:
            - containerPort: 3000
---
apiVersion: v1
kind: Service
metadata:
  name: url-shortener-service
spec:
  selector:
    app: url-shortener
  ports:
    - port: 3000
      nodePort: 30000
  type: NodePort
As for skaffold, here's the skaffold.yaml file:
apiVersion: skaffold/v2beta5
kind: Config
metadata:
  name: url-shortener
build:
  artifacts:
    - image: url_shortener
      context: shortener
      docker:
        dockerfile: build/docker/Dockerfile.dev
        noCache: false
deploy:
  kubectl:
    manifests:
      - stack/mongo/mongo.yaml
      - shortener/deployments/kubernetes/shortener.yaml
I've enabled verbose logging and I notice this in the output whenever I save (CTRL+S) a source code file.
time="2020-07-05T22:51:08+02:00" level=debug msg="Found dependencies for dockerfile: [{go.mod /app true} {go.sum /app true} {. /app true}]"
time="2020-07-05T22:51:08+02:00" level=info msg="files modified: [shortener/internal/handler/rest/rest.go]"
I'm assuming that this means that the change has been detected.
Breakpoints work correctly in the IDE, but the code swap in Kubernetes doesn't seem to be happening.
The debug functionality deliberately disables Skaffold's file-watching, which rebuilds and redeploys containers on file change. The redeploy causes existing containers to be terminated, which tears down any ongoing debug sessions. It's really disorienting and aggravating to have your carefully-constructed debug session be torn down because you accidentally saved a change to a comment! 😫
But we're looking at how to better support this more iterative debugging within Cloud Code.
If you're using Skaffold directly, we recently added the ability to re-enable file-watching via skaffold debug --auto-build --auto-deploy (present in v1.12).
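In other words, if losing the debug session on a rebuild is acceptable for your workflow, the invocation looks like this:
# requires Skaffold v1.12+: re-enable watching, rebuilding, and redeploying while debugging
skaffold debug --auto-build --auto-deploy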
I have this repo, and docker-compose up will launch the project, create 2 containers (a DB and API), and everything works.
Now I want to build and deploy to Kubernetes. I try docker-compose build but it complains there's no Dockerfile. So I start writing a Dockerfile and then discover that docker/Dockerfiles don't support loading ENV vars from an env_file or .env file. What gives? How am I expected to build this image? Could somebody please enlighten me?
What is the intended workflow for building a docker image with the appropriate environment variables?
Those environment variables shouldn't be set at the docker build step, but when running the application on Kubernetes or with docker-compose.
So:
Write a Dockerfile and place it in the root folder. Something like this:
FROM node
WORKDIR /app
COPY package.json .
RUN npm install
COPY . .
ENTRYPOINT ["npm", "start"]
Modify docker-compose.yaml. In the image field you must specify the name for the image to be built. It should be something like this:
image: YOUR-DOCKERHUB-USERNAME/node-rest-auth-arangodb
There is no need to set user and working_dir
Build the image with docker-compose build (you can also do this with docker build)
Now you can use docker-compose up to run your app locally, with the .env file
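The same runtime injection works without Compose too; a sketch, assuming the .env file sits next to the Dockerfile and the API listens on port 8080 as in the manifest further below:
docker build -t YOUR-DOCKERHUB-USERNAME/node-rest-auth-arangodb .
# --env-file injects the variables at run time, which is why they are not needed at build time
docker run --rm --env-file .env -p 8080:8080 YOUR-DOCKERHUB-USERNAME/node-rest-auth-arangodb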
To deploy it on Kubernetes you need to publish your image in dockerhub (unless you run Kubernetes locally):
docker push YOUR-DOCKERHUB-USERNAME/node-rest-auth-arangodb
Finally, create a Kubernetes manifest. Sadly Kubernetes doesn't support env files the way docker-compose does; you'll need to manually set these variables in the manifest:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: platform-api
  labels:
    app: platform-api
spec:
  replicas: 1
  selector:
    matchLabels:
      app: platform-api
  template:
    metadata:
      labels:
        app: platform-api
    spec:
      containers:
        - name: platform-api
          image: YOUR-DOCKERHUB-USERNAME/node-rest-auth-arangodb
          ports:
            - containerPort: 8080
          env:
            - name: NODE_ENV
              value: develop
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: platform-db
  labels:
    app: platform-db
spec:
  replicas: 1
  selector:
    matchLabels:
      app: platform-db
  template:
    metadata:
      labels:
        app: platform-db
    spec:
      containers:
        - name: arangodb
          image: arangodb # the database pod runs the official ArangoDB image, not the API image
          ports:
            - containerPort: 8529
          env:
            - name: ARANGO_ROOT_PASSWORD
              value: localhost
Deploy it with kubectl create
Please note that this code is just indicative; I don't know your exact use case. Find more information in the docker-compose and Kubernetes docs and tutorials. Good luck!
I've updated the project on github, it now all works, and the readme documents how to run it.
I realized that env vars are considered runtime vars, which is why --env-file is an option for docker run and not docker build. This must also (I assume) be why docker-compose.yml has the env_file option, which I assume just passes the file through to docker run. And in Kubernetes, I think these are passed in from a ConfigMap. This is done so the image remains more portable; the same project can be run with different vars passed in, no rebuild required.
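For the ConfigMap route, a minimal sketch (the ConfigMap name is my own; envFrom then exposes every key in it as an environment variable in the container, and truly sensitive values would normally go in a Secret instead):
# create a ConfigMap directly from the existing .env file
kubectl create configmap platform-api-env --from-env-file=.env
and then, inside the Deployment's container spec:
          envFrom:
            - configMapRef:
                name: platform-api-env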
Thanks ignacio-millán for the input.