package.json file not found when using Kubernetes - Docker

I'm trying to set up Kubernetes on my local environment using Docker. I've built the necessary Docker image with this Dockerfile:
FROM node:9.11.1
WORKDIR /usr/src/app
COPY package.json /usr/src/app/
RUN npm install
COPY . /usr/src/app/
EXPOSE 3002
CMD [ "npm", "start" ]
I then pushed this image to my private Docker repo on the Google Cloud registry. I can confirm that I can push and pull the image from the cloud repo, so I then built a docker-compose file using that repo as the image source:
version: '3'
services:
  redis:
    image: redis
    ports:
      - 6379:6379
    networks:
      - my-network
  mongodb:
    image: mongo
    ports:
      - 27017:27017
    volumes:
      - ./db:/data/db
    networks:
      - my-network
  my-test-app:
    tty: true
    image: gcr.io/my-test-app
    ports:
      - 3002:3002
    depends_on:
      - redis
      - mongodb
    networks:
      - my-network
    volumes:
      - .:/usr/src/app
    environment:
      - REDIS_PORT=6379
      - REDIS_HOST=redis
      - DB_URI=mongodb://mongodb:27017/
    command: bash -c "ls && npm install"
networks:
  my-network:
    driver: bridge
volumes:
  mongodb:
Then finally, building off of that, I used Kompose to generate my deployment file, which looks like this:
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  annotations:
    kompose.cmd: kompose convert
    kompose.version: 1.12.0 ()
  creationTimestamp: null
  labels:
    io.kompose.service: my-test-app
  name: my-test-app
spec:
  replicas: 1
  strategy:
    type: Recreate
  template:
    metadata:
      creationTimestamp: null
      labels:
        io.kompose.service: my-test-app
    spec:
      imagePullSecrets:
        - gcr-json-key
      containers:
        - args:
            - bash
            - -c
            - ls && npm install
          env:
            - name: DB_URI
              value: mongodb://mongodb:27017/
            - name: REDIS_HOST
              value: redis
            - name: REDIS_PORT
              value: "6379"
          image: gcr.io/my-test-app
          name: my-test-app
          ports:
            - containerPort: 3002
          resources: {}
          tty: true
          volumeMounts:
            - mountPath: /usr/src/app
              name: my-test-app-claim0
      restartPolicy: Always
      volumes:
        - name: my-test-app-claim0
          persistentVolumeClaim:
            claimName: my-test-app-claim0
status: {}
As you can see in the args section of my YAML, I am listing all the files in the directory /usr/src/app. However, in the logs the only file that appears is a single package-lock.json file, which causes the following npm install command to fail. This error does not occur when I use docker-compose to launch my app, so for some reason only my Kubernetes deployment has this problem. I can also confirm that my image does contain a package.json file by running an interactive shell. I'm unsure how to proceed, so any help would be appreciated!

You are mounting something else (the my-test-app-claim0 persistent volume claim) over /usr/src/app, where package.json is located. That mount hides everything the image put there. Remove the volumes and volumeMounts sections.
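A hedged sketch of the corrected manifest, i.e. the kompose output above minus the PVC mount (untested, trimmed to the relevant fields):
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: my-test-app
spec:
  replicas: 1
  template:
    metadata:
      labels:
        io.kompose.service: my-test-app
    spec:
      imagePullSecrets:
        - name: gcr-json-key
      containers:
        - name: my-test-app
          image: gcr.io/my-test-app
          ports:
            - containerPort: 3002
          # no volumeMounts: the files baked into the image at
          # /usr/src/app (including package.json) stay visible
      restartPolicy: Always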

Related

Trouble converting docker-compose containing volume to Kubernetes manifest

I am learning Kubernetes and I am starting to convert an existing docker-compose.yml to Kubernetes manifests one component at a time. Here is the component I am currently trying to convert:
version: '3'
services:
  mongodb:
    container_name: mongodb
    restart: always
    hostname: mongo-db
    image: mongo:latest
    networks:
      - mynetwork
    volumes:
      - c:/MongoDb:c:/data/db
    ports:
      - 27017:27017
networks:
  mynetwork:
    name: mynetwork
I am able to log into the Mongo instance when the container is running in Docker, but I cannot get this working in Kubernetes. Here is the Kubernetes manifest I have tried so far:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: mongodb
spec:
  selector:
    matchLabels:
      app: mongodb
  template:
    metadata:
      labels:
        app: mongodb
    spec:
      containers:
        - name: mongodb
          image: mongo:latest
          ports:
            - containerPort: 27017
          volumeMounts:
            - name: mongodb-data
              mountPath: /data/db
          ports:
            - containerPort: 27017
              hostPort: 27017
      volumes:
        - name: mongodb-data
          hostPath:
            path: "C:\\MongoDb"
With this manifest I see the error "Error: Error response from daemon: invalid mode: /data/db" when I run kubectl describe pods.
I understand the mapping of volumes from Docker to Kubernetes isn't 1-to-1, so is this reasonable to do in the Kubernetes space?
Thank you and apologies if this feels like a silly question.
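One commonly used alternative to hostPath on minikube (a sketch, under the assumption that the data does not need to live on the Windows host) is to let the cluster provision the storage via a PersistentVolumeClaim:
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: mongodb-data
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi
The Deployment then replaces the hostPath volume with:
volumes:
  - name: mongodb-data
    persistentVolumeClaim:
      claimName: mongodb-data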

Create Kubernetes YAML file from docker-compose.yml file

I have this docker-compose.yml file which I am using to run three microservices and one API gateway:
version: '3'
services:
  serviceone:
    container_name: serviceone
    restart: always
    build: serviceone/
    ports:
      - '3000:3000'
  servicetwo:
    container_name: servicetwo
    restart: always
    build: servicetwo/
    ports:
      - '3001:3001'
  servicethree:
    container_name: servicethree
    restart: always
    build: servicethree/
    ports:
      - '3002:3003'
  apigateway:
    container_name: timezoneapigateway
    restart: always
    build: timezone/
    ports:
      - '8080:8080'
    links:
      - serviceone
      - servicetwo
      - servicethree
Now I want to deploy these Docker images in one pod in Kubernetes so that the API gateway can connect with all three microservices. The current version of the API gateway works, but I am not getting even the slightest hint of how to do this in Kubernetes. I am really new to Kubernetes; can anyone tell me how to design a Kubernetes YAML file to achieve this?
You don't have to run all your services in the same pod. The standard in Kubernetes is to have separate deployments and services for all apps. Here is a deployment manifest for serviceone, but you can easily modify it for servicetwo, servicethree and apigateway:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: serviceone
  labels:
    app: serviceone
spec:
  replicas: 1
  selector:
    matchLabels:
      app: serviceone
  template:
    metadata:
      labels:
        app: serviceone
    spec:
      containers:
        - name: serviceone
          image: serviceone:latest
          ports:
            - containerPort: 3000
And the same goes for the service manifest
apiVersion: v1
kind: Service
metadata:
  name: serviceone
spec:
  selector:
    app: serviceone
  ports:
    - protocol: TCP
      port: 3000
      targetPort: 3000
Your services will be accessible within the cluster like this (using the container ports from your compose file):
serviceone:3000
servicetwo:3001
servicethree:3003
timezoneapigateway:8080
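If the gateway also needs to be reachable from outside the cluster, one option is a NodePort Service (a sketch; the nodePort value 30080 and the app: apigateway label are assumptions):
apiVersion: v1
kind: Service
metadata:
  name: timezoneapigateway
spec:
  type: NodePort
  selector:
    app: apigateway
  ports:
    - protocol: TCP
      port: 8080
      targetPort: 8080
      nodePort: 30080   # reachable on <node-ip>:30080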

Why can't I access my application inside the Kubernetes cluster through the Tomcat server?

I'm running a Kubernetes cluster with minikube on Ubuntu Server 20.04.1 LTS. When I launch the deployments I can access the Tomcat server from the cluster, but I can't open my application; I get the error message:
HTTP Status 404 – Not Found
Here is my docker-compose file:
version: "3"
# NETWORK
networks:
  network1:
    driver: bridge
volumes:
  dir-site:
    driver_opts:
      device: /smartclass_docker/site/
      type: bind
      o: bind
# Declare services
services:
  # service name
  server:
    container_name: server
    # Container image name
    image: repo/myimage
    ## Image used to build Dockerfile
    build:
      #dockerfile: Dockerfile
      context: ./smart/
    # Container ports and host matching
    ports:
      - "3306:3306"
      - "8080:8080"
    ## Startup policy
    restart: always
    # connect to network1
    networks:
      - network1
    volumes:
      - /data/dockerStore/data/db_server_1:/var/lib/mysql/
Here is my Kubernetes deployment file:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: myapp
spec:
  replicas: 3
  selector:
    matchLabels:
      app: myapp
  # type: LoadBalancer
  template:
    metadata:
      labels:
        app: myapp
    spec:
      containers:
        - name: myapp
          image: jksun12/vdsaipro
          command: ["/run.sh"]
          ports:
            - containerPort: 80
            - containerPort: 3306
            - containerPort: 9001
            - containerPort: 9002
      #     volumeMounts:
      #       - name: myapp-pv-claim
      #         mountPath: /var/lib/mysql
      # volumes:
      #   - name: myapp-pv-claim
      #     persistentVolumeClaim:
      #       claimName: myapp-pv-claim
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: myapp-pv-claim
  labels:
    app: myapp
spec:
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 4Gi
My application files are in the /smart directory, which is a subdirectory of the directory where I actually run docker-compose and build my main Dockerfile.
How can I solve it?
Thanks
I checked the logs inside the containers and found that the errors came from the extraction of the .war file inside the container.
There was also an error in the version of the Tomcat server used for the application, so I changed to another Tomcat version.
It's solved now. Thanks
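For reference, a sketch of the kind of commands used to inspect this (assumes the standard Tomcat image layout under /usr/local/tomcat):
kubectl logs deployment/myapp
kubectl exec -it deploy/myapp -- ls /usr/local/tomcat/webapps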

Kubernetes deployment.yaml for django+gunicorn+nginx

I have created docker images using docker-compose.yml as below
version: '2'
services:
  djangoapp:
    build: .
    volumes:
      - .:/sig_app
      - static_volume:/sig_app
    networks:
      - nginx_network
  nginx:
    image: nginx:1.13
    ports:
      - "80:80"
    volumes:
      - ./config/nginx/conf.d:/etc/nginx/conf.d
      - static_volume:/sig_app
    depends_on:
      - djangoapp
    networks:
      - nginx_network
networks:
  nginx_network:
    driver: bridge
volumes:
  static_volume:
I have used docker-compose build and docker-compose up. The three images are created as below
kubernetes_djangoapp
docker.io/python
docker.io/nginx
I want to deploy the application into Kubernetes using a YAML file.
I am new to Kubernetes.
The Django application runs on port 8000 and Nginx on port 80.
This should work:
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: my-deploy
spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: my-app
    spec:
      volumes:
        - name: django-nginx
          emptyDir: {}
        - name: nginx-host
          hostPath:
            path: /config/nginx/conf.d
      containers:
        - name: djangoapp
          image: kubernetes_djangoapp
          volumeMounts:
            - name: django-nginx
              mountPath: /sig_app
        - name: nginx
          image: nginx:1.13
          ports:
            - containerPort: 80
          volumeMounts:
            - name: django-nginx
              mountPath: /sig_app
            - name: nginx-host
              mountPath: /etc/nginx/conf.d
Note that you will have to modify some things to make it your own. I am missing where the image is; you should upload it to Docker Hub, or any registry of your choice.
About the volumes: both containers share a non-persistent volume (django-nginx), which maps the /sig_app directory in each container to the other's. The second volume shares the nginx container's /etc/nginx/conf.d with /config/nginx/conf.d on your host, to pass in the config file. A better way would be to use a ConfigMap; see the sketch below.
So, yeah, set the image for Django and let me know if it doesn't work, and we will see what's failing.
Cheers
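A minimal sketch of the ConfigMap approach mentioned above (the key nginx.conf and its content are placeholders):
apiVersion: v1
kind: ConfigMap
metadata:
  name: nginx-conf
data:
  nginx.conf: |
    # your nginx server/upstream config goes here
In the Deployment, the nginx-host hostPath volume is then replaced with:
volumes:
  - name: nginx-host
    configMap:
      name: nginx-conf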
Take a look at Kompose. It will allow you to simply run the command
kompose up
to immediately deploy your docker-compose configuration to your cluster.
If you want to create a .yaml from your docker-compose file first to inspect and edit, you can run
kompose convert
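A typical invocation looks like this (a sketch; Kompose writes one *-deployment.yaml and *-service.yaml per compose service):
kompose convert -f docker-compose.yml
kubectl apply -f .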

How to configure Jaeger with Elasticsearch?

I have tried executing this docker command to set up the Jaeger agent and collector with Elasticsearch:
sudo docker run \
-p 5775:5775/udp \
-p 6831:6831/udp \
-p 6832:6832/udp \
-p 5778:5778 \
-p 16686:16686 \
-p 14268:14268 \
-e SPAN_STORAGE_TYPE=elasticsearch \
--name=jaeger \
jaegertracing/all-in-one:latest
but this command gives the below error. How do I configure Jaeger with Elasticsearch?
"msg":"Failed to init storage factory","error":"health check timeout: no Elasticsearch node available","errorVerbose":"no Elasticsearch node available\
After searching for a solution for some time, I found a docker-compose.yml file which had the Jaeger query, agent, collector and Elasticsearch configurations.
docker-compose.yml
version: "3"
services:
elasticsearch:
image: docker.elastic.co/elasticsearch/elasticsearch:6.3.1
networks:
- elastic-jaeger
ports:
- "127.0.0.1:9200:9200"
- "127.0.0.1:9300:9300"
restart: on-failure
environment:
- cluster.name=jaeger-cluster
- discovery.type=single-node
- http.host=0.0.0.0
- transport.host=127.0.0.1
- ES_JAVA_OPTS=-Xms512m -Xmx512m
- xpack.security.enabled=false
volumes:
- esdata:/usr/share/elasticsearch/data
jaeger-collector:
image: jaegertracing/jaeger-collector
ports:
- "14269:14269"
- "14268:14268"
- "14267:14267"
- "9411:9411"
networks:
- elastic-jaeger
restart: on-failure
environment:
- SPAN_STORAGE_TYPE=elasticsearch
command: [
"--es.server-urls=http://elasticsearch:9200",
"--es.num-shards=1",
"--es.num-replicas=0",
"--log-level=error"
]
depends_on:
- elasticsearch
jaeger-agent:
image: jaegertracing/jaeger-agent
hostname: jaeger-agent
command: ["--collector.host-port=jaeger-collector:14267"]
ports:
- "5775:5775/udp"
- "6831:6831/udp"
- "6832:6832/udp"
- "5778:5778"
networks:
- elastic-jaeger
restart: on-failure
environment:
- SPAN_STORAGE_TYPE=elasticsearch
depends_on:
- jaeger-collector
jaeger-query:
image: jaegertracing/jaeger-query
environment:
- SPAN_STORAGE_TYPE=elasticsearch
- no_proxy=localhost
ports:
- "16686:16686"
- "16687:16687"
networks:
- elastic-jaeger
restart: on-failure
command: [
"--es.server-urls=http://elasticsearch:9200",
"--span-storage.type=elasticsearch",
"--log-level=debug"
]
depends_on:
- jaeger-agent
volumes:
esdata:
driver: local
networks:
elastic-jaeger:
driver: bridge
The docker-compose.yml file installs Elasticsearch and the Jaeger collector, query and agent.
Install Docker and Docker Compose first:
https://docs.docker.com/compose/install/#install-compose
Then execute these commands in order:
1. sudo docker-compose up -d elasticsearch
2. sudo docker-compose up -d
3. sudo docker ps -a
If any of the containers (Jaeger agent, collector, query or Elasticsearch) are not running, start them with
sudo docker start container-id
and then access http://localhost:16686/
As I mentioned in my comment on the OP's first answer above, I was getting an error when running the docker-compose exactly as given:
Error: unknown flag: --collector.host-port
I think this CLI flag has been deprecated by the Jaeger folks since that answer was written. So I poked around in the jaeger-agent documentation a bit:
https://www.jaegertracing.io/docs/1.20/deployment/#discovery-system-integration
https://www.jaegertracing.io/docs/1.20/cli/#jaeger-agent
And I got this to work with a couple of small modifications:
I added port mapping "14250:14250" to the jaeger-collector ports
I updated the jaeger-agent command with: command: ["--reporter.grpc.host-port=jaeger-collector:14250"]
Finally, I updated the Elasticsearch version in the image tag to the latest version available at this time (though I doubt this was required).
The updated docker-compose.yaml:
version: "3"
services:
elasticsearch:
image: docker.elastic.co/elasticsearch/elasticsearch:7.9.3
networks:
- elastic-jaeger
ports:
- "127.0.0.1:9200:9200"
- "127.0.0.1:9300:9300"
restart: on-failure
environment:
- cluster.name=jaeger-cluster
- discovery.type=single-node
- http.host=0.0.0.0
- transport.host=127.0.0.1
- ES_JAVA_OPTS=-Xms512m -Xmx512m
- xpack.security.enabled=false
volumes:
- esdata:/usr/share/elasticsearch/data
jaeger-collector:
image: jaegertracing/jaeger-collector
ports:
- "14269:14269"
- "14268:14268"
- "14267:14267"
- "14250:14250"
- "9411:9411"
networks:
- elastic-jaeger
restart: on-failure
environment:
- SPAN_STORAGE_TYPE=elasticsearch
command: [
"--es.server-urls=http://elasticsearch:9200",
"--es.num-shards=1",
"--es.num-replicas=0",
"--log-level=error"
]
depends_on:
- elasticsearch
jaeger-agent:
image: jaegertracing/jaeger-agent
hostname: jaeger-agent
command: ["--reporter.grpc.host-port=jaeger-collector:14250"]
ports:
- "5775:5775/udp"
- "6831:6831/udp"
- "6832:6832/udp"
- "5778:5778"
networks:
- elastic-jaeger
restart: on-failure
environment:
- SPAN_STORAGE_TYPE=elasticsearch
depends_on:
- jaeger-collector
jaeger-query:
image: jaegertracing/jaeger-query
environment:
- SPAN_STORAGE_TYPE=elasticsearch
- no_proxy=localhost
ports:
- "16686:16686"
- "16687:16687"
networks:
- elastic-jaeger
restart: on-failure
command: [
"--es.server-urls=http://elasticsearch:9200",
"--span-storage.type=elasticsearch",
"--log-level=debug"
]
depends_on:
- jaeger-agent
volumes:
esdata:
driver: local
networks:
elastic-jaeger:
driver: bridge
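Once everything is up, a quick sanity check (a sketch; jaeger-span-* and jaeger-service-* are the index names Jaeger creates by default) is to ask Elasticsearch which indices exist:
curl -s http://localhost:9200/_cat/indices | grep jaeger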
If you would like to deploy Jaeger with Elasticsearch and Kibana to quickly validate and check the stack, e.g. in kind or Minikube, the following snippet may help you.
#######################
## Add jaegertracing helm repo
#######################
helm repo add jaegertracing https://jaegertracing.github.io/helm-charts
#######################
## Create a target namespace
#######################
kubectl create namespace observability
#######################
## Check and use the jaegertracing helm chart
#######################
helm search repo jaegertracing
helm install -n observability jaeger-operator jaegertracing/jaeger-operator
#######################
## Use the elasticsearch all-in-one operator
#######################
kubectl apply -f https://download.elastic.co/downloads/eck/1.1.2/all-in-one.yaml
#######################
## Create an elasticsearch deployment
#######################
cat <<EOF | kubectl apply -n observability -f -
apiVersion: elasticsearch.k8s.elastic.co/v1
kind: Elasticsearch
metadata:
  name: quickstart
spec:
  version: 7.7.0
  nodeSets:
    - name: default
      count: 1
      config:
        node.master: true
        node.data: true
        node.ingest: true
        node.store.allow_mmap: false
EOF
PASSWORD=$(kubectl get secret -n observability quickstart-es-elastic-user -o=jsonpath='{.data.elastic}' | base64 --decode)
kubectl create secret -n observability generic jaeger-secret --from-literal=ES_PASSWORD=${PASSWORD} --from-literal=ES_USERNAME=elastic
#######################
## Kibana to visualize the trace data
#######################
cat <<EOF | kubectl apply -n observability -f -
apiVersion: kibana.k8s.elastic.co/v1
kind: Kibana
metadata:
  name: quickstart
spec:
  version: 7.7.0
  count: 1
  elasticsearchRef:
    name: quickstart
EOF
kubectl port-forward -n observability service/quickstart-kb-http 5601
## To get the password
kubectl get secret -n observability quickstart-es-elastic-user -o=jsonpath='{.data.elastic}' | base64 --decode; echo
login:
https://localhost:5601
username: elastic
password: <see the output of the command above>
#######################
## Deploy a jaeger tracing application
#######################
cat <<EOF | kubectl apply -n observability -f -
apiVersion: jaegertracing.io/v1
kind: Jaeger
metadata:
  name: simple-prod
spec:
  strategy: production
  agent:
    strategy: DaemonSet
  storage:
    type: elasticsearch
    options:
      es:
        server-urls: https://quickstart-es-http:9200
        tls:
          ca: /es/certificates/ca.crt
        num-shards: 1
        num-replicas: 0
    secretName: jaeger-secret
  volumeMounts:
    - name: certificates
      mountPath: /es/certificates/
      readOnly: true
  volumes:
    - name: certificates
      secret:
        secretName: quickstart-es-http-certs-public
EOF
## to visualize it
kubectl --namespace observability port-forward simple-prod-query-<POD ID> 16686:16686
#######################
## To test the setup
## Of course, if you set it up in another namespace it will still work; the only thing that matters is the collector URL and PORT
#######################
cat <<EOF | kubectl apply -n observability -f -
apiVersion: v1
kind: List
items:
  - apiVersion: apps/v1
    kind: Deployment
    metadata:
      name: jaeger-k8s-example
      labels:
        app: jaeger-k8s-example
    spec:
      replicas: 1
      selector:
        matchLabels:
          app: jaeger-k8s-example
      strategy:
        type: Recreate
      template:
        metadata:
          labels:
            app: jaeger-k8s-example
        spec:
          containers:
            - name: jaeger-k8s-example
              env:
                - name: JAEGER_COLLECTOR_URL
                  value: "simple-prod-collector.observability.svc.cluster.local"
                - name: JAEGER_COLLECTOR_PORT
                  value: "14268"
              image: norbertfenk/jaeger-k8s-example:latest
              imagePullPolicy: IfNotPresent
EOF
If Jaeger needs to be set up in a Kubernetes cluster as a Helm chart, one can use this: https://github.com/jaegertracing/helm-charts/tree/master/charts/jaeger
It can deploy either Elasticsearch or Cassandra as a storage backend, which is just a matter of the right value being passed to the chart:
storage:
  type: elasticsearch
This section shows the helm command as an example:
https://github.com/jaegertracing/helm-charts/tree/master/charts/jaeger#installing-the-chart-using-a-new-elasticsearch-cluster
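For example, a hedged sketch of installing that chart with Elasticsearch as the backend (the value names follow the chart's README at the link above):
helm install jaeger jaegertracing/jaeger \
  --set provisionDataStore.cassandra=false \
  --set provisionDataStore.elasticsearch=true \
  --set storage.type=elasticsearch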
For people who are using OpenTelemetry, Jaeger, and Elasticsearch, here is a way.
Note the images being used are jaegertracing/jaeger-opentelemetry-collector and jaegertracing/jaeger-opentelemetry-agent.
version: '3.8'
services:
  collector:
    image: otel/opentelemetry-collector:latest
    command: ["--config=/conf/opentelemetry-collector.config.yaml", "--log-level=DEBUG"]
    volumes:
      - ./opentelemetry-collector.config.yaml:/conf/opentelemetry-collector.config.yaml
    ports:
      - "9464:9464"
      - "55680:55680"
      - "55681:55681"
    depends_on:
      - jaeger-collector
  jaeger-collector:
    image: jaegertracing/jaeger-opentelemetry-collector
    command: ["--es.num-shards=1", "--es.num-replicas=0", "--es.server-urls=http://elasticsearch:9200", "--collector.zipkin.host-port=:9411"]
    ports:
      - "14250"
      - "14268"
      - "9411"
    environment:
      - SPAN_STORAGE_TYPE=elasticsearch
      - LOG_LEVEL=debug
    restart: on-failure
    depends_on:
      - elasticsearch
  jaeger-agent:
    image: jaegertracing/jaeger-opentelemetry-agent
    command: ["--config=/config/otel-agent-config.yml", "--reporter.grpc.host-port=jaeger-collector:14250"]
    volumes:
      - ./:/config/:ro
    ports:
      - "6831/udp"
      - "6832/udp"
      - "5778"
    restart: on-failure
    depends_on:
      - jaeger-collector
  jaeger-query:
    image: jaegertracing/jaeger-query
    command: ["--es.num-shards=1", "--es.num-replicas=0", "--es.server-urls=http://elasticsearch:9200"]
    ports:
      - "16686:16686"
      - "16687"
    environment:
      - SPAN_STORAGE_TYPE=elasticsearch
      - LOG_LEVEL=debug
    restart: on-failure
    depends_on:
      - elasticsearch
  elasticsearch:
    image: docker.elastic.co/elasticsearch/elasticsearch-oss:7.9.0
    environment:
      - discovery.type=single-node
    ports:
      - "9200/tcp"
Then you just need to run
docker-compose up -d
Reference: https://github.com/jaegertracing/jaeger/blob/master/crossdock/jaeger-opentelemetry-docker-compose.yml
