My Dockerfile does not copy a file from the GitLab repository directory. Below is the Dockerfile:
FROM docker.elastic.co/logstash/logstash:7.16.3
USER root
RUN yum install -y curl dos2unix
COPY scripts/somefile.sh /src/app/
RUN dos2unix /src/app/somefile.sh
WORKDIR /src/app/
ENTRYPOINT ["/src/app/somefile.sh"]
The project tree looks like the following:
C:.
├───.idea
│   └───codeStyles
├───deployment
│   ├───dev
└───scripts
    └───somefile.sh
Below is the deployment.yaml:
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    app: demo-app
  name: demo-app
  namespace: --
spec:
  replicas: 1
  selector:
    matchLabels:
      app: demo-app
  template:
    metadata:
      labels:
        app: demo-app
    spec:
      containers:
        - image: docker-image-described
          name: demo-app
          resources: ...
          volumeMounts:
            - name: some-secret
              mountPath: /src/abc/secret
              readOnly: true
          securityContext:
            privileged: true
      restartPolicy: Always
      volumes:
        - name: very-secret
          secret:
            secretName: some-secret
Further, I am building the image using a gitlab-ci file and then creating a deployment from this image. Please help me understand what I am doing wrong: when I exec into the running pod, I can't see the file at the defined destination location.
I don't get any errors, which I usually do when the source location is wrong. I also went through some similar questions, so I have already checked that the line endings (EOL) are Unix.
Also, I would like to add an observation: I cannot create the directory either, with RUN mkdir -p /src/app/scripts/. This also runs without any error; it is just not reflected inside the pod.
This is probably due to using an old (wrong) Docker image, so changes or updates are not reflected.
Kubernetes does not pull a new image version if the image is already present on the node and the image tag is not latest. Verify that the image is built correctly by pulling it and running it locally with Docker. Use a different tag to force Kubernetes to pull the new image.
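For example, one way to do this from GitLab CI is to tag each build uniquely (a sketch; the registry path and the use of the CI_COMMIT_SHORT_SHA variable are assumptions, adjust to your pipeline):
# build and push with a per-commit tag instead of a fixed one
docker build -t registry.example.com/demo-app:$CI_COMMIT_SHORT_SHA .
docker push registry.example.com/demo-app:$CI_COMMIT_SHORT_SHA
# point the Deployment at the new tag so Kubernetes pulls it and restarts the pod
kubectl set image deployment/demo-app demo-app=registry.example.com/demo-app:$CI_COMMIT_SHORT_SHA
Alternatively, setting imagePullPolicy: Always on the container forces a pull on every pod start, which also helps while iterating.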
I have tried many times to run skaffold from my project directory. It keeps returning the same error: 1/1 deployment(s) failed.
Skaffold.yaml file:
apiVersion: skaffold/v2alpha3
kind: Config
deploy:
  kubectl:
    manifests:
      - ./infra/k8s/*
build:
  local:
    push: false
  artifacts:
    - image: ankan00/auth
      context: auth
      docker:
        dockerfile: Dockerfile
      sync:
        manual:
          - src: 'src/**/*.ts'
            dest: .
I created the Docker image ankan00/auth with docker build -t ankan00/auth .
It ran successfully when I was working on this project before. But I had to uninstall Docker for some reason, and when I reinstalled Docker and built the image again (after deleting the previous instance of the image in Docker Desktop), skaffold stopped working. I tried to delete the skaffold folder and reinstall skaffold, but the problem remains the same. Every time it ends up cleaning up and throwing 1/1 deployment(s) failed.
My Dockerfile:
FROM node:alpine
WORKDIR /app
COPY package.json .
RUN npm install
COPY . .
CMD ["npm", "start"]
My auth-depl.yaml file, which is in the infra\k8s directory:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: auth-depl
spec:
  replicas: 1
  selector:
    matchLabels:
      app: auth
  template:
    metadata:
      labels:
        app: auth
    spec:
      containers:
        - name: auth
          image: ankan00/auth
---
apiVersion: v1
kind: Service
metadata:
  name: auth-srv
spec:
  selector:
    app: auth
  ports:
    - name: auth
      protocol: TCP
      port: 3000
      targetPort: 3000
Okay! I resolved the issue by re-installing Docker Desktop and not enabling Kubernetes in it. I installed Minikube and then ran skaffold dev, and this time it did not give an error at the deployments to stabilize... stage. It looks like Docker Desktop's Kubernetes was the culprit? I am not sure, though, because I had run it successfully before.
New update! I went back to Docker Desktop's Kubernetes. I deleted Minikube because Minikube uses the same port that the ingress-nginx server uses to run the project. So I decided to put back Docker Desktop's Kubernetes, as well as Google Cloud Kubernetes Engine, and skaffold works perfectly this time.
I have a container that I need to configure for a k8s YAML manifest. The workflow with docker run in the terminal looks like this:
docker run -v $(pwd):/projects \
-w /projects \
gcr.io/base-project/myoh:v1 init myproject
This command creates a directory called myproject. To complete the workflow, I need to cd into this myproject folder and run:
docker run -v $(pwd):/project \
-w /project \
-p 8081:8081 \
gcr.io/base-project/myoh:v1
Any idea how to convert this to either a docker-compose or a k8s pod/deployment YAML? I have tried everything that came to mind, with no success.
The bind mount of the current directory can't be translated to Kubernetes at all. There's no way to connect a pod's filesystem back to your local workstation. A standard Kubernetes setup has a multi-node installation, and if it's possible to directly connect to a node (it may not be) you can't predict which node a pod will run on, and copying code to every node is cumbersome and hard to maintain. If you're using a hosted Kubernetes installation like GKE, it's even possible that the cluster autoscaler will create and delete nodes automatically, and you won't have an opportunity to manually copy things in.
You need to build your application code into a custom image. That image can set the desired WORKDIR, COPY the code in, and RUN any setup commands that are required. Then you need to push it to an image registry, like GCR.
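A minimal sketch of such a Dockerfile might look like the following (it assumes you have already run the init step once locally, so the generated myproject directory is in your build context, and that the base image's entrypoint can be reused unchanged):
# reuse the tool image so its runtime and entrypoint stay available
FROM gcr.io/base-project/myoh:v1
# bake the generated project into the image instead of bind-mounting it
WORKDIR /project
COPY myproject/ /project/
EXPOSE 8081
With an image like that in place, build and push it: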
docker build -t gcr.io/base-project/my-project:v1 .
docker push gcr.io/base-project/my-project:v1
Once you have that, you can create a minimal Kubernetes Deployment to run it. Set the GCR name of the image you built and pushed as its image:. You will also need a Service to make it accessible, even from other Pods in the same cluster (a sketch of one follows the Deployment below).
Try this (untested yaml, but you will get the idea):
apiVersion: apps/v1
kind: Deployment
metadata:
  name: myoh-deployment
  labels:
    app: myoh
spec:
  replicas: 1
  selector:
    matchLabels:
      app: myoh
  template:
    metadata:
      labels:
        app: myoh
    spec:
      initContainers:
        - name: init-myoh
          image: gcr.io/base-project/myoh:v1
          command: ['sh', '-c', "mkdir -p myproject"]
      containers:
        - name: myoh
          image: gcr.io/base-project/myoh:v1
          ports:
            - containerPort: 8081
          volumeMounts:
            - mountPath: /projects
              name: project-volume
      volumes:
        - name: project-volume  # must match the volumeMounts name above
          hostPath:
            # directory location on host
            path: /data
            # this field is optional
            type: Directory
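And a matching Service sketch (equally untested; the name is illustrative, the selector and port follow the Deployment above):
apiVersion: v1
kind: Service
metadata:
  name: myoh-service
spec:
  selector:
    app: myoh
  ports:
    - port: 8081
      targetPort: 8081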
I've been experimenting with skaffold and a local minikube installation. It's nice to be able to develop your project on something that is as close as possible to production.
If I use the getting-started example provided in the skaffold GitHub repo, everything works just fine: my IDE (IntelliJ IDEA) stops on the breakpoints, and when I modify my code, the changes are reflected instantly.
Now, on my personal project, which is a bit more complicated than a simple main.go file, things don't work as expected. The IDE stops on the breakpoint, but hot code reload is not happening: I can see in the console that skaffold detected the changes made to that particular file, but unfortunately the changes are not reflected/applied.
A Dockerfile is used to build the image; it is the following:
FROM golang:1.14 AS builder
WORKDIR /app
COPY go.mod go.sum ./
RUN go mod download
COPY . .
RUN CGO_ENABLED=0 go build -o /app.o ./cmd/shortener/shortener.go
FROM alpine:3.12
COPY --from=builder /app.o ./
COPY --from=builder /app ./
EXPOSE 3000
ENV GOTRACEBACK=all
CMD ["./app.o"]
On the Kubernetes side, I'm creating a deployment and a service as follows:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: url-shortener-deployment
spec:
  selector:
    matchLabels:
      app: url-shortener
  template:
    metadata:
      labels:
        app: url-shortener
    spec:
      containers:
        - name: url-shortener
          image: url_shortener
          ports:
            - containerPort: 3000
---
apiVersion: v1
kind: Service
metadata:
  name: url-shortener-service
spec:
  selector:
    app: url-shortener
  ports:
    - port: 3000
      nodePort: 30000
  type: NodePort
As for skaffold, here's the skaffold.yaml file:
apiVersion: skaffold/v2beta5
kind: Config
metadata:
  name: url-shortener
build:
  artifacts:
    - image: url_shortener
      context: shortener
      docker:
        dockerfile: build/docker/Dockerfile.dev
        noCache: false
deploy:
  kubectl:
    manifests:
      - stack/mongo/mongo.yaml
      - shortener/deployments/kubernetes/shortener.yaml
I've enabled verbose logging and I notice this in the output whenever I save (CTRL+S) a source code file.
time="2020-07-05T22:51:08+02:00" level=debug msg="Found dependencies for dockerfile: [{go.mod /app true} {go.sum /app true} {. /app true}]"
time="2020-07-05T22:51:08+02:00" level=info msg="files modified: [shortener/internal/handler/rest/rest.go]"
I'm assuming that this means that the change has been detected.
Breakpoints work correctly in the IDE, but code swaps in Kubernetes don't seem to be happening.
The debug functionality deliberately disables Skaffold's file-watching, which rebuilds and redeploys containers on file change. The redeploy causes existing containers to be terminated, which tears down any ongoing debug sessions. It's really disorienting and aggravating to have your carefully-constructed debug session be torn down because you accidentally saved a change to a comment! 😫
But we're looking at how to better support this more iterative debugging within Cloud Code.
If you're using Skaffold directly, we recently added the ability to re-enable file-watching via skaffold debug --auto-build --auto-deploy (present in v1.12).
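For example (flags as described above, available since Skaffold v1.12):
# re-enable rebuild and redeploy on file change while debugging
skaffold debug --auto-build --auto-deploy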
I'm using Kubernetes + Helm, and I want to ask whether it is possible to get the Docker image version (tag) as specified in the spec's containers. For example, I have the deployment below:
apiVersion: apps/v1beta1
kind: Deployment
metadata:
  name: test
spec:
  replicas: 1
  selector:
    matchLabels:
      app: test
  template:
    metadata:
      labels:
        app: test
    spec:
      containers:
        - name: test
          image: myrepo.com/animage:0.0.3
          imagePullPolicy: Always
          command: ["/bin/sh", "-c"]
          args: ["work"]
          envFrom:
            - configMapRef:
                name: test
I then have another deployment where I would like to get that image version number (0.0.3) and set it as an env var.
Any ideas appreciated.
Thanks.
Short answer: No. At least not directly. Although there are two workarounds I can see that you might find viable.
First, apart from providing the image with your version tag, set a label/annotation on the pod indicating its version, and use the Downward API to pass that data down to your container:
https://kubernetes.io/docs/tasks/inject-data-application/environment-variable-expose-pod-information/
https://kubernetes.io/docs/tasks/inject-data-application/downward-api-volume-expose-pod-information/
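For illustration, a sketch of that approach (the version label key is an assumption, and keeping it in sync with the image tag is left to whatever renders the manifest, e.g. Helm):
template:
  metadata:
    labels:
      app: test
      version: "0.0.3"  # illustrative; must be kept in sync with the image tag
  spec:
    containers:
      - name: test
        image: myrepo.com/animage:0.0.3
        env:
          - name: IMAGE_VERSION
            valueFrom:
              fieldRef:
                fieldPath: "metadata.labels['version']"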
Second, if you actually own the process that builds the image, you can easily bake the version in during docker build with something like this:
Dockerfile:
FROM ubuntu
ARG VERSION=undefined
ENV VERSION=$VERSION
with a build command like:
docker build --build-arg VERSION=0.0.3 -t myrepo.com/animage:0.0.3 .
which will give you an image with a baked-in env var containing your version value.
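As a quick sanity check (a sketch; it assumes the example image above, which does not override the entrypoint):
docker run --rm myrepo.com/animage:0.0.3 /bin/sh -c 'echo $VERSION'
This should print 0.0.3.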
I have this repo, and docker-compose up will launch the project, create 2 containers (a DB and API), and everything works.
Now I want to build and deploy to Kubernetes. I try docker-compose build but it complains there's no Dockerfile. So I start writing a Dockerfile and then discover that docker/Dockerfiles don't support loading ENV vars from an env_file or .env file. What gives? How am I expected to build this image? Could somebody please enlighten me?
What is the intended workflow for building a docker image with the appropriate environment variables?
Those environment variables shouldn't be set at the docker build step but when running the application on Kubernetes or with docker-compose.
So:
Write a Dockerfile and place it in the root folder. Something like this:
FROM node
COPY package.json .
RUN npm install
COPY . .
ENTRYPOINT ["npm", "start"]
Modify docker-compose.yaml. In the image field you must specify the name for the image to be built. It should be something like this:
image: YOUR-DOCKERHUB-USERNAME/node-rest-auth-arangodb
There is no need to set user and working_dir.
Build the image with docker-compose build (you can also do this with docker build).
Now you can use docker-compose up to run your app locally, with the .env file.
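For reference, the relevant service entry in docker-compose.yaml might then look roughly like this (the service name api is an assumption; keep your existing env_file and database service as they are):
services:
  api:
    build: .
    image: YOUR-DOCKERHUB-USERNAME/node-rest-auth-arangodb
    env_file: .env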
To deploy it on Kubernetes you need to publish your image to Docker Hub (unless you run Kubernetes locally):
docker push YOUR-DOCKERHUB-USERNAME/node-rest-auth-arangodb
Finally, create a Kubernetes manifest. Sadly, Kubernetes doesn't support env files the way docker-compose does; you'll need to set these variables in the manifest manually (or load them from a ConfigMap; see the note at the end of this answer):
apiVersion: apps/v1
kind: Deployment
metadata:
  name: platform-api
  labels:
    app: platform-api
spec:
  replicas: 1
  selector:
    matchLabels:
      app: platform-api
  template:
    metadata:
      labels:
        app: platform-api
    spec:
      containers:
        - name: platform-api
          image: YOUR-DOCKERHUB-USERNAME/node-rest-auth-arangodb
          ports:
            - containerPort: 8080
          env:
            - name: NODE_ENV
              value: develop
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: platform-db
  labels:
    app: platform-db
spec:
  replicas: 1
  selector:
    matchLabels:
      app: platform-db
  template:
    metadata:
      labels:
        app: platform-db
    spec:
      containers:
        - name: arangodb
          image: arangodb  # the database container should run the ArangoDB image, not the API image
          ports:
            - containerPort: 8529
          env:
            - name: ARANGO_ROOT_PASSWORD
              value: localhost
Deploy it with kubectl create -f (or kubectl apply -f) pointing at the manifest file.
Please note that this code is just indicative; I don't know your exact use case. Find more information in the docker-compose and Kubernetes docs and tutorials. Good luck!
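As a side note (an assumption about your setup, not part of the original answer): if you want to keep using the .env file with Kubernetes, you can load it into a ConfigMap and reference it from the Deployment instead of listing each variable:
# create a ConfigMap from the existing .env file (the name is illustrative)
kubectl create configmap platform-api-env --from-env-file=.env
Then, in the container spec, replace the env: list with envFrom: referencing configMapRef: name: platform-api-env.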
I've updated the project on GitHub; it all works now, and the README documents how to run it.
I realized that env vars are considered runtime vars, which is why --env-file is an option for docker run and not docker build. This must also (I assume) be why docker-compose.yml has the env_file option, which I assume just passes the file to docker run. And in Kubernetes, I think these are passed in from a ConfigMap. This is done so the image remains more portable: the same project can be run with different vars passed in, with no rebuild required.
Thanks ignacio-millán for the input.