I'm trying to deploy my application, which needs its configuration from the /root/properties folder.
With Docker
docker build -t config .
docker run -p8080:8080 -v /root/properties:/root/properties --name config -d config
It runs OK.
Now, with a Kubernetes cluster, I'm not able to attach -v as I did with docker run.
kubectl create deployment deploy-config --image=localhost:5000/config --port=8080 -v /root/properties
With -v the pod is not created. How do I provide the properties folder path?
Thanks in advance.
You can do it with a ConfigMap,
or possibly a PVC.
It depends on the details, which have not really been shared.
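For the /root/properties case from the question, a ConfigMap-based setup could look roughly like the sketch below. Treat it as a sketch only: the ConfigMap name app-properties and the label are assumptions, and it only fits if the properties are ordinary config files small enough for a ConfigMap.

kubectl create configmap app-properties --from-file=/root/properties

Then mount it into the deployment with a manifest along these lines:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: deploy-config
spec:
  replicas: 1
  selector:
    matchLabels:
      app: deploy-config
  template:
    metadata:
      labels:
        app: deploy-config
    spec:
      containers:
      - name: config
        image: localhost:5000/config
        ports:
        - containerPort: 8080
        volumeMounts:
        - name: properties
          mountPath: /root/properties   # where the application expects its config
      volumes:
      - name: properties
        configMap:
          name: app-properties          # the ConfigMap created above

kubectl apply -f on that manifest then takes the place of the kubectl create deployment call. If the files really have to come from the node's filesystem instead, a hostPath volume or a PVC would be the alternative.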
I am trying a basic Docker test on a GCP compute instance. I pulled a Tomcat image from the official repo, then ran a command to start the container. The command is:
docker run -te --rm -d -p 80:8080 tomcat
It created a container for me with the ID below.
3f8ce49393c708f4be4d3d5c956436e000eee6ba7ba08cba48ddf37786104a37
If I do docker ps, I get the following:
docker run -te --rm -d -p 80:8080 tomcat
3f8ce49393c708f4be4d3d5c956436e000eee6ba7ba08cba48ddf37786104a37
However, the Tomcat admin console does not open. The reason is that the Tomcat image is trying to create the config files under /usr/local, but that is a read-only file system, so the config files are not created.
Is there a way to ask Docker to create the files in a different location? Or, is there any other way to handle it?
Thanks in advance.
Let's say I have a Docker image created using a Dockerfile. While writing the Dockerfile I had to test it repeatedly to figure out what I had done wrong. To debug a Docker image I can simply run a test container and look at its stdout/stderr to see what's wrong with the image.
IMAGE_NAME=authoritative-dns-bind
IMAGE_OPTIONS="
-v $(pwd)/config.yaml:/config.yaml:ro
-p 127.0.0.1:53:53
-p 127.0.0.1:53:53/udp"
docker run -t -i $IMAGE_OPTIONS $IMAGE_NAME
Learning the above was good enough to iteratively create and debug a minimal working Docker container. Now I'm looking for a way to do the same for OpenShift.
I'm pretty much aware of the fact that the container is not ready for OpenShift. My plan is to run it and watch its stdout/stderr like I did with Docker. One of the people I asked for help came up with a command that looked exactly like what I need.
oc run -i -t --image $IMAGE_NAME --command test-pod -- bash
The above command worked for me with the fedora:24 and fedora:latest images from the Docker registry, and I got a working shell. But the same wouldn't happen for my derived image with a containerized service. My explanation is that it probably does something entirely different: instead of starting the command interactively, it starts it non-interactively and then tries to run bash inside the failed container.
So what I'm looking for is a reasonable way to debug a container image in OpenShift. I expected that I would be able to at least capture and view stdin/stdout of OpenShift containers.
Any ideas?
Update
According to the comment by Graham, oc run should indeed work like docker run, but that doesn't seem to be the case. With the original Fedora images, bash always appears, at least after hitting Enter.
# oc run -i -t --image authoritative-dns-bind --command test-auth13 -- bash
Waiting for pod myproject/test-auth13-1-lyng3 to be running, status is Pending, pod ready: false
Waiting for pod myproject/test-auth13-1-lyng3 to be running, status is Pending, pod ready: false
Waiting for pod myproject/test-auth13-1-lyng3 to be running, status is Pending, pod ready: false
...
Waiting for pod myproject/test-auth13-1-lyng3 to be running, status is Pending, pod ready: false
^C
#
I wasn't able to try out the suggested oc debug yet, as it seems to require more configuration than just a simple image. There's another problem with oc run: that command keeps creating new containers that I don't really need. I hope there is a way to start the debugging easily and get the container automatically destroyed afterwards.
There are three main commands to debug pods:
oc describe pod $pod-name -- detailed info about the pod
oc logs $pod-name -- stdout and stderr of the pod
oc exec -ti $pod-name -- bash -- get a shell in running pod
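For the failing pod from the question, that would look something like this (pod name and namespace taken from the output above):

oc describe pod test-auth13-1-lyng3 -n myproject
oc logs test-auth13-1-lyng3 -n myproject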
To your specific problem: the default pull policy of oc run is set to Always. This means that OpenShift will try to pull the image until it succeeds and will refuse to use the local one.
Once this Kubernetes patch lands in OpenShift Origin, the pull policy will be easily configurable.
Please do not consider this a final answer to the question; feel free to supersede it with your own better answers...
I'm now using a pod configuration file like the following...
apiVersion: v1
kind: Pod
metadata:
  name: "authoritative-dns-server"    # pod name, your reference from command line
  namespace: "myproject"              # default namespace in `oc cluster up`
spec:
  containers:
  - command:
    - "bash"
    image: "authoritative-dns-bind"             # use your image!
    name: "authoritative-dns-bind-container"    # required
    imagePullPolicy: "Never"                    # important! you want openshift to use your local image
    stdin: true
    tty: true
  restartPolicy: "Never"
Note that the command is explicitly set to bash. You can then create the pod, attach to the container, and run the container's original command yourself.
oc create -f pod.yaml
oc attach -t -i authoritative-dns-server
/files/run-bind.py
This looks far from ideal and it doesn't really help you debug an ordinary OpenShift container with a standard pod configuration, but at least debugging is possible now. Looking forward to better answers.
I am trying to integrate Docker into my CI platform. After getting this working with a Docker-in-Docker solution, I came across a blog post by one of the Docker maintainers, where he says that instead of using Docker-in-Docker for my CI, I should simply mount /var/run/docker.sock into my CI container.
https://jpetazzo.github.io/2015/09/03/do-not-use-docker-in-docker-for-ci/
Simply put, when you start your CI container (Jenkins or other), instead of hacking something together with Docker-in-Docker, start it with:
docker run -v /var/run/docker.sock:/var/run/docker.sock ...
So I tried this. I ran the following command:
docker run -p 8080:8080 -p 50000:50000 -v /var/run/docker.sock:/var/run/docker.sock jenkins
I am using the jenkins image as my CI container.
When running the above command, Jenkins starts up properly, and I can jump into the container and see that the docker.sock file is located under /var/run/.
However, when I run the docker command inside the container, the shell returns the following message:
bash: docker: command not found
Does anyone know what I am missing in order to make this work per the author's instructions?
I am using Docker v. 1.11.1, on a fresh CentOS 7 box.
Thanks in advance
Figured this out today. The above command will work as long as the docker binary and its dependencies are available inside the container (the daemon itself keeps running on the host and is reached through the mounted socket). In my case, I ended up writing a simple Dockerfile, which also included the line:
RUN curl -sSL https://get.docker.com/ | sh
This installed Docker on the container, and when I ran docker images from within the container, I could see all of the images from my host machine. I am now able to use all of the docker commands from within the container.
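A sketch of what such a Dockerfile could look like; only the RUN line comes from the answer above, the base image and the user switching are assumptions:

FROM jenkins

USER root
# install the docker packages inside the image; the daemon itself
# still runs on the host and is reached through the mounted socket
RUN curl -sSL https://get.docker.com/ | sh

USER jenkins

Depending on the permissions of /var/run/docker.sock on the host, the jenkins user inside the container may also need to be added to a group with access to the socket.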
I have the following questions:
I am logged into a Kubernetes pod using the following command:
./cluster/kubectl.sh exec my-nginx-0onux -c my-nginx -it bash
The 'ip addr show' command shows it is assigned the IP of the pod. Since a pod is a logical concept, I am assuming I am logged into a Docker container and not a pod, in which case the pod IP is the same as the Docker container IP. Is that understanding correct?
From a Kubernetes node, I do sudo docker ps and then run the following:
sudo docker exec 71721cb14283 -it '/bin/bash'
This doesn't work. Does someone know what I am doing wrong?
I want to access the nginx service I created from within the pod using curl. How can I install curl within this pod or container to access the service from inside? I want to do this to understand the network connectivity.
Here is how you get a curl command line within a Kubernetes network to test and explore your internal REST endpoints.
To get a prompt of a busybox running inside the network, execute the following command. (A tip is to use one unique container per developer.)
kubectl run curl-<YOUR NAME> --image=radial/busyboxplus:curl -i --tty --rm
You may omit the --rm and keep the instance running for later re-use. To reuse it later, type:
kubectl attach <POD ID> -c curl-<YOUR NAME> -i -t
Using the command kubectl get pods you can see all running pods. The <POD ID> is something similar to curl-yourname-944940652-fvj28.
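For example, if the nginx service from the question is exposed as a Service called my-nginx on port 80 (both of these are assumptions here), you can check it from that prompt with:

curl http://my-nginx:80/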
EDIT: Note that you need to log in to Google Cloud from your terminal (once) before you can do this! Here is an example; make sure to put in your own zone, cluster and project:
gcloud container clusters get-credentials example-cluster --zone europe-west1-c --project example-148812
The idea of Kubernetes is that pods are scheduled onto a host, but nothing about that placement is guaranteed or permanent, so you should NOT try to look up the IP of a container or pod from your container; rather, use what Kubernetes calls a Service.
A Kubernetes Service is a stable path to the pods matching a defined set of selectors; through kube-proxy, requests are load balanced across all pods carrying those labels.
In short:
create a Pod with a label, for example name=mypod
create a Service called myService with the selector name=mypod, and assign it port 9000, for example
then you can curl from a pod to the pods served by this Service using
curl http://myService:9000
This is assuming you have the DNS pod running of course.
If you ask for a Service of type LoadBalancer when creating it and run on AWS or GKE, the Service will also be reachable from outside your cluster. For an internal-only Service, keep the default ClusterIP type and it will not be exposed externally (clusterIP: None additionally makes it a headless Service with no load balancing at all).
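A minimal sketch of such a pair, using the label and port from the example above. The image is a placeholder, and note that the Service object's name has to be lowercase in the manifest, so myService becomes myservice here:

apiVersion: v1
kind: Pod
metadata:
  name: mypod
  labels:
    name: mypod              # the label the Service selects on
spec:
  containers:
  - name: web
    image: nginx             # placeholder image
    ports:
    - containerPort: 80
---
apiVersion: v1
kind: Service
metadata:
  name: myservice
spec:
  selector:
    name: mypod              # matches the Pod label above
  ports:
  - port: 9000               # the port you curl
    targetPort: 80           # the port the container listens on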
see reference here:
https://kubernetes.io/docs/concepts/services-networking/service/
https://kubernetes.io/docs/tutorials/services/
Kubernetes uses the IP-per-pod model. All containers in the same pod share the same IP address as if they are running on the same host.
The command should follow docker exec [OPTIONS] CONTAINER COMMAND [ARG...]. In your case, sudo docker exec -it 71721cb14283 '/bin/bash' should work. If not, you should provide the output of your command.
It depends on what image you use. There is nothing special about installing software in a container. For the nginx image, try apt-get update && apt-get install -y curl
There's an official curl team image these days:
https://hub.docker.com/r/curlimages/curl
Run it with:
kubectl run -it --rm --image=curlimages/curl curly -- sh
In Docker I have installed Jenkins successfully. When I create a new job and want to execute a sh file from my workspace, what is the best way to add a file to my workspace with Docker? I started my container with this:
docker run --name myjenkins -p 8080:8080 -p 50000:50000 -v /var/jenkins_home jenkins
You could copy a file from your file system to the container with a simple command from your terminal.
docker cp [OPTIONS] LOCALPATH|- CONTAINER:PATH
https://docs.docker.com/engine/reference/commandline/cp/
example:
docker cp /yourpath/yourfile <containerId>:/var/jenkins_home
It depends a bit on what the planned lifecycle of your Jenkins container is. If it is just used temporarily and it does no harm if the data is gone, docker cp as NickGnd suggested will do the trick.
But the working data of Jenkins, like job configs, system configs and workspaces, lives only inside the container, so all of it will be gone once the container is removed. If you plan to run a longer-lived Jenkins environment, you might want to persist the data outside of the container so it survives recreating the container, launching new container versions and so on. This can be done with the option --volume /path/on/host:/path/in/container, or its short form -v, on docker run.
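For the container from the question, that could look roughly like this (the host path is a placeholder, and the directory has to be writable by the non-root user the jenkins image runs as):

docker run --name myjenkins -p 8080:8080 -p 50000:50000 -v /your/host/path/jenkins_home:/var/jenkins_home jenkins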
There is also the option --volumes-from, which you can use to keep the data in one "data container" and mount it into your Jenkins container.
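A sketch of that data-container variant (container names are placeholders):

# container whose only purpose is to hold the /var/jenkins_home volume
docker create --name jenkins-data -v /var/jenkins_home jenkins

# Jenkins container reusing the volume from the data container
docker run --name myjenkins -p 8080:8080 -p 50000:50000 --volumes-from jenkins-data jenkins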
For further information on this, please have a look at the Docker volumes documentation.