Below is my docker run command to start the container:
docker run -it -d --name=aaa --net=host -v /opt/headedness/phantomjs:/data/phantomjs/bin/phantomjs -v /opt/ctcrawler/log:/data/log XXX/app/aaa:latest -id aaa -endpoint http://localhost:8080/c2/ -selenium http://localhost:4444/wd/hub
How can I change it to a YAML file? I have tried many ways, but it still doesn't work.
Below is my .yaml file (please help):
apiVersion: v1
kind: Pod
metadata:
  name: aaa
spec:
  containers:
  - name: aaa
    image: xxx/app/aaa:latest
    net: "host"
    args:
    - -id: aaa
    - -phantomjs: /data/phantomjs/bin/phantomjs
    - -capturedPath: /data/log
    - -endpoint: http://wwww/c2/
    - -selenium: http://localhost:4444/wd/hub
    - -proxy: n/a
    imagePullPolicy: Always
  imagePullSecrets:
  - name: myregistrykey
Your spec is invalid.
For host networking, set spec.hostNetwork: true.
Use hostPath volumes to mount directories from the host.
If it is configuration data, you can use a gitRepo volume or, starting from v1.2, a ConfigMap.
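A corrected manifest along those lines might look like the following. This is a sketch, not a verified spec: the args must be plain strings (not maps), the host paths and endpoint are taken from the docker run command above, and spec.hostNetwork / hostPath are the current field names for what the original YAML attempted:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: aaa
spec:
  # replaces the invalid `net: "host"` field
  hostNetwork: true
  containers:
  - name: aaa
    image: xxx/app/aaa:latest
    imagePullPolicy: Always
    # args are positional strings, mirroring the docker run flags
    args: ["-id", "aaa",
           "-phantomjs", "/data/phantomjs/bin/phantomjs",
           "-capturedPath", "/data/log",
           "-endpoint", "http://localhost:8080/c2/",
           "-selenium", "http://localhost:4444/wd/hub"]
    volumeMounts:
    - name: phantomjs
      mountPath: /data/phantomjs/bin/phantomjs
    - name: log
      mountPath: /data/log
  volumes:
  # hostPath volumes replace the docker -v bind mounts
  - name: phantomjs
    hostPath:
      path: /opt/headedness/phantomjs
  - name: log
    hostPath:
      path: /opt/ctcrawler/log
  imagePullSecrets:
  - name: myregistrykey
```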
I want to mount a directory from the SourceContainer into the ServerContainer.
ServerContainer:
FROM php:7.2-apache
RUN a2enmod rewrite
# /var/www/html is apache document root.
SourceContainer:
FROM alpine:3.7
# Copy local code to the container image.
COPY ./my_src /var/www/html/my_src
VOLUME /var/www/html/my_src
And the YAML is below.
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: test
spec:
  ...snip...
    spec:
      containers:
      - name: server-container
        image: "Server Container image"
        ports:
        ...snip...
        volumeMounts:
        - name: src-volume
          mountPath: /var/www/html/my_src
      - name: src-container
        image: "Source Container Image"
      volumes:
      - name: src-volume
        hostPath:
          path: /var/www/html/my_src
But the Source Container goes into "CrashLoopBackOff", and no logs are output.
This is not a feature of Kubernetes. There is an old FlexVolume plugin that implements the same behavior as Docker, but it isn’t recommended. You can use an initContainer to copy from the data container into a volume like an emptyDir.
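The initContainer approach might be sketched as follows. This is an assumption about the image layout (the source image from the Dockerfile above ships its files in /var/www/html/my_src), and it copies them into a shared emptyDir that the server then mounts at its document root:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: test
spec:
  initContainers:
  # runs to completion before the main container starts,
  # so it never crash-loops the way a command-less container does
  - name: src-container
    image: "Source Container Image"
    command: ["cp", "-r", "/var/www/html/my_src/.", "/work"]
    volumeMounts:
    - name: src-volume
      mountPath: /work
  containers:
  - name: server-container
    image: "Server Container image"
    volumeMounts:
    - name: src-volume
      mountPath: /var/www/html/my_src
  volumes:
  # shared scratch volume that outlives the init container
  - name: src-volume
    emptyDir: {}
```

The init container exits after the copy, and the server container then sees the copied files under /var/www/html/my_src.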
I have a question about sharing a volume between containers in one pod.
Here is my YAML, pod-volume.yaml:
apiVersion: v1
kind: Pod
metadata:
  name: volume-pod
spec:
  containers:
  - name: tomcat
    image: tomcat
    imagePullPolicy: Never
    ports:
    - containerPort: 8080
    volumeMounts:
    - name: app-logs
      mountPath: /usr/local/tomcat/logs
  - name: busybox
    image: busybox
    command: ["sh", "-c", "tail -f /logs/catalina.out*.log"]
    volumeMounts:
    - name: app-logs
      mountPath: /logs
  volumes:
  - name: app-logs
    emptyDir: {}
Create the pod:
kubectl create -f pod-volume.yaml
Watch the pod status:
watch kubectl get pod -n default
Finally, I got this:
NAME         READY   STATUS             RESTARTS   AGE
redis-php    2/2     Running            0          15h
volume-pod   1/2     CrashLoopBackOff   5          6m49s
Then I checked the logs of the busybox container:
kubectl logs pod/volume-pod -c busybox
tail: can't open '/logs/catalina.out*.log': No such file or directory
tail: no files
I don't know where it went wrong. Is it about the start order of containers in the pod? Please help me, thanks.
For this case:
The Catalina log file is catalina.$(date '+%Y-%m-%d').log, not catalina.out*.log.
Also, you should not put a * glob into the sh -c string: when the glob matches no file, it is passed to tail literally, which is exactly the "can't open" error you see.
So please try:
command: ["sh", "-c", "tail -f /logs/catalina.$(date '+%Y-%m-%d').log"]
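Even with the right file name, the busybox container may start before Tomcat has created the log file. A defensive variant (my addition, not part of the original answer) waits for the file first:

```yaml
# poll until the dated log file exists, then follow it
command: ["sh", "-c",
  "f=/logs/catalina.$(date '+%Y-%m-%d').log; until [ -f \"$f\" ]; do sleep 1; done; tail -f \"$f\""]
```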
I have a command to run Docker:
docker run --name pre-core -itdp 8086:80 -v /opt/docker/datalook-pre-core:/usr/application app
In the above command, /opt/docker/datalook-pre-core is a host directory and /usr/application is a container directory. The purpose is to map the container directory to the host directory, so that if the container crashes, the host directory acts as storage and the data in it is preserved.
When I use Kubernetes to create a pod for this container, how do I write the pod.yaml file?
I guess it is something like the following:
apiVersion: v1
kind: Pod
metadata:
  name: app-ykt
  labels:
    app: app-ykt
    purpose: ykt_production
spec:
  containers:
  - name: app-ykt
    image: app
    imagePullPolicy: IfNotPresent
    ports:
    - containerPort: 80
    volumeMounts:
    - name: volumn-app-ykt
      mountPath: /usr/application
  volumes:
  - name: volumn-app-ykt
    ????
I do not know which exact properties I should write in the YAML in my case.
This would be a hostPath volume: https://kubernetes.io/docs/concepts/storage/volumes/
volumes:
- name: volumn-app-ykt
  hostPath:
    # directory location on host
    path: /opt/docker/datalook-pre-core
    # this field is optional
    type: Directory
However, remember that while a container crash won't move things, other events can cause a pod to move to a different host, so you need to be prepared both to deal with cold caches and to clean up orphaned caches.
I have a Docker image that takes a properties file as an option, like:
CMD java -jar /opt/test/test-service.war
--spring.config.location=file:/conf/application.properties
I use the -v volume mount in my docker run command as follows.
-v /usr/xyz/props/application.properties:/conf/application.properties
I am not sure how to achieve the same thing in Kubernetes.
I use minikube to run Kubernetes on my local Mac.
That should be a hostPath volume, illustrated with this example pod:
apiVersion: v1
kind: Pod
metadata:
  name: test-pd
spec:
  containers:
  - image: k8s.gcr.io/test-webserver
    name: test-container
    volumeMounts:
    - mountPath: /test-pd
      name: test-volume
  volumes:
  - name: test-volume
    hostPath:
      # directory location on host
      path: /data
      # this field is optional
      type: Directory
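Adapted to the question above, a single file can be mounted the same way with type: File. This is a sketch (the container name and image are placeholders), and with minikube the host path must exist inside the minikube VM, e.g. made visible via minikube mount:

```yaml
  containers:
  - name: test-service        # placeholder name
    image: test-service       # placeholder image
    volumeMounts:
    # mounts a single file over /conf/application.properties,
    # like the docker -v file bind mount in the question
    - mountPath: /conf/application.properties
      name: props-volume
  volumes:
  - name: props-volume
    hostPath:
      path: /usr/xyz/props/application.properties
      type: File
```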
I am trying to run an image on Kubernetes, built from the Dockerfile below:
FROM centos:6.9
COPY rpms/* /tmp/
RUN yum -y localinstall /tmp/*
ENTRYPOINT service test start && /bin/bash
Now I try to deploy this image using the pod.yml shown below:
apiVersion: v1
kind: Pod
metadata:
  labels:
    app: testpod
  name: testpod
spec:
  containers:
  - image: test:v0.2
    name: test
    imagePullPolicy: Always
    volumeMounts:
    - mountPath: /data
      name: testpod
  volumes:
  - name: testpod
    persistentVolumeClaim:
      claimName: testpod
Now when I try to create the pod, the container goes into a CrashLoopBackOff. How can I make the image wait in /bin/bash on Kubernetes? When I use docker run -d test:v0.2 it works fine and keeps running.
You need to attach a terminal to the running container. When starting a pod using kubectl run ... you can use -i --tty to do that. In the pod YAML file, you can add the following to the container spec to attach a tty:
stdin: true
tty: true
Alternatively, you can use a command like tail -f /dev/null to keep your container running; this can be done in your Dockerfile or in your Kubernetes YAML file.
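Both options might be sketched in the container spec like this (stdin and tty are real Pod API fields; the image tag matches the question, and the command override is an alternative, not something to combine blindly with the image's ENTRYPOINT):

```yaml
spec:
  containers:
  - name: test
    image: test:v0.2
    # Option 1: keep stdin open and allocate a tty, like docker run -it
    stdin: true
    tty: true
    # Option 2 (alternative): override the command so the shell never exits
    # command: ["sh", "-c", "service test start && tail -f /dev/null"]
```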