Based on podman-add-ports-to-expose-to-running-pod, quoting Dominic P:
once the pod is created these attributes are assigned to the “infra”
container and cannot be changed. For example, if you create a pod and
then later decide you want to add a container that binds new ports,
Podman will not be able to do this. You would need to recreate the pod
with the additional port bindings before adding the new container.
I know it's not supported to add a new port to a running pod.
So, what is your suggestion for recreating the pod: back up the existing containers in the pod, create a new pod with the additional port, and restore the containers?
You could generate a Kubernetes pod manifest from your running pod using podman generate kube <podname>, edit the resulting file, and then re-create the pod with podman kube play <manifest>.yaml.
For example:
I create a pod and spin up a webserver...
podman pod create --name example-pod
podman run -d --name web --pod example-pod alpinelinux/darkhttpd
...only to realize that I forgot to publish port 8080 to the host. So I save the configuration and delete the pod:
podman generate kube example-pod > example-pod.yaml
podman pod rm -f example-pod
Edit the manifest to add the port configuration:
...
spec:
  containers:
  - image: docker.io/alpinelinux/darkhttpd:latest
    name: web
    ports:
    - containerPort: 8080
      hostPort: 8080
...
And then re-create the pod:
podman kube play example-pod.yaml
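If the pod comes back up as expected, the new port mapping should now appear on the pod's infra container; one way to check (assuming the pod name above) is:
podman pod inspect example-pod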
Related
I am trying to create a pod based on a container image from my local machine, not from a public registry. I am getting the pod status ImagePullBackOff.
Dockerfile
FROM tensorflow/tensorflow:latest-py3
RUN pip install -q keras==2.3.1
RUN pip install pillow
RUN mkdir -p /app/src
WORKDIR /app/src
COPY . ./
EXPOSE 31700
CMD ["python", "test.py"]
To build the docker image
docker build -t tensor-keras .
To create a pod without using a YAML file
kubectl run server --image=tensor-keras:latest
YAML file
apiVersion: v1
kind: Pod
metadata:
  name: server
  labels:
    app: server
spec:
  containers:
  - name: tensor-keras
    image: tensor-keras:latest
    ports:
    - containerPort: 31700
I am retrieving the status of the pod as
NAME READY STATUS RESTARTS AGE
server 0/1 ImagePullBackOff 0 27m
Help is highly appreciated, thanks.
By default, Kubernetes will try to pull your image from a remote container registry. In your case, your image name is not prefixed by a container registry URL, so it uses the default one, which most of the time is Docker Hub.
What is the value of the imagePullPolicy field? For your use case it should be set to Never to use the local image.
Which tool are you using to run your Kubernetes instance?
For example, with minikube, the procedure to use a local image is described here: https://stackoverflow.com/a/42564211/2784039
With kind, you should use the command kind load docker-image tensor-keras:latest to load the image inside your cluster
With k3s, using a local image should work out of the box, if imagePullPolicy is set to Never
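As an illustration, here is a minimal sketch of the pod manifest above with imagePullPolicy set to Never (assuming the image already exists on the node that will run the pod):
apiVersion: v1
kind: Pod
metadata:
  name: server
  labels:
    app: server
spec:
  containers:
  - name: tensor-keras
    image: tensor-keras:latest
    # never contact a remote registry; fail instead if the image is not present locally
    imagePullPolicy: Never
    ports:
    - containerPort: 31700
The same behavior can be requested when creating the pod without a YAML file, e.g. kubectl run server --image=tensor-keras:latest --image-pull-policy=Never.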
I have a docker image that uses a volume to write files:
docker run --rm -v /home/dir:/out/ image:cli args
when I try to run this inside a pod, the container exits normally but no file is written.
I don't get it.
The container throws errors if it does not find the volume; for example, if I run it without the -v option it throws:
Unhandled Exception: System.IO.DirectoryNotFoundException: Could not find a part of the path '/out/file.txt'.
But I don't have any error from the container.
It finishes like it wrote files, but files do not exist.
I'm quite new to Kubernetes but this is getting me crazy.
Does kubernetes prevent files from being written? or am I missing something obvious?
The whole Kubernetes context is managed by GCP composer-airflow, if it helps...
docker -v: Docker version 17.03.2-ce, build f5ec1e2
If you want to have that behavior in Kubernetes you can use a hostPath volume.
Essentially, you specify it in your pod spec, the volume is mounted on the node where your pod runs, and the files should still be there on the node after the pod exits.
apiVersion: v1
kind: Pod
metadata:
  name: test-pd
spec:
  containers:
  - image: image:cli
    name: test-container
    volumeMounts:
    - mountPath: /out        # the path the container writes to, as in the -v flag
      name: test-volume
  volumes:
  - name: test-volume
    hostPath:
      path: /home/dir        # the directory on the node that receives the files
      type: Directory
when I try to run this inside a pod, the container exits normally but no file is written
First of all, there is no need to run the docker run command inside the pod :). A spec file (YAML) should be written for the pod, and Kubernetes will run the container in the pod using Docker for you. Ideally, you don't need to run docker commands when using Kubernetes (unless you are debugging Docker-related issues).
This link has useful kubectl commands for docker users.
If you are used to docker-compose, refer to Kompose to go from docker-compose to Kubernetes:
https://github.com/kubernetes/kompose
http://kompose.io
Some options to mount a directory on the host as a volume inside the container in Kubernetes (an emptyDir sketch follows this list):
hostPath
emptyDir
configMap
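For comparison with the hostPath example above, here is a minimal emptyDir sketch (names are illustrative); unlike hostPath, the directory is created per pod on the node and removed when the pod goes away, so it is useful for scratch space shared between containers rather than for getting files back onto the host:
apiVersion: v1
kind: Pod
metadata:
  name: test-emptydir
spec:
  containers:
  - image: image:cli
    name: test-container
    volumeMounts:
    - mountPath: /out      # same path the container writes to
      name: scratch
  volumes:
  - name: scratch
    emptyDir: {}           # ephemeral storage allocated on the node for this pod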
Hi, I am running a Kubernetes cluster where I run a Logstash container.
But I need to run it with my own docker run parameter. If I ran it in Docker directly, I would use the command:
docker run --log-driver=gelf logstash -f /config-dir/logstash.conf
But I need to run it via a Kubernetes pod. My pod looks like:
spec:
  containers:
  - name: logstash-logging
    image: "logstash:latest"
    command: ["logstash", "-f", "/config-dir/logstash.conf"]
    volumeMounts:
    - name: configs
      mountPath: /config-dir/logstash.conf
How can I run the Docker container with the parameter --log-driver=gelf via Kubernetes? Thanks.
Kubernetes does not expose docker-specific options such as --log-driver. A higher abstraction of logging behavior might be added in the future, but it is not in the current API yet. This issue was discussed in https://github.com/kubernetes/kubernetes/issues/15478, and the suggestion was to change the default logging driver for docker daemon in the per-node configuration/salt template.
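As a sketch of that per-node approach (assuming a GELF endpoint is reachable at the address below; adjust it for your environment), the Docker daemon's default logging driver can be set in /etc/docker/daemon.json on each node and the daemon restarted:
{
  "log-driver": "gelf",
  "log-opts": {
    "gelf-address": "udp://graylog.example.com:12201"
  }
}
Note that this changes the logging driver for every container on that node, not just the Logstash pod.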
I want to use Kubernetes as my default development environment. For that, I set up the cluster locally with Docker as explained in the official doc. I pushed my example to a GitHub repository.
My set up steps after having a kubernetes cluster running were:
* cd cluster_config/app && docker build --tag=k8s_php_dev . && cd ../..
* kubectl -s http://127.0.0.1:8080 create -f cluster_config/app/app.rc.yml
* kubectl -s http://127.0.0.1:8080 create -f cluster_config/app/app.services.yml
My issue comes from wanting to map a local directory as a volume inside my app pod, so I can dynamically share the files between my local host and the pod: develop, change the files, and have the service update dynamically.
I use a volume with a hostPath. The pod, replication controller and service are created successfully, but the pod does not share the directory and does not even have the files at the supposed mountPath.
What am I doing wrong?
Thanks
The issue was in the volume definition: the hostPath.path property should hold the absolute path of the directory to mount.
Example:
hostPath:
  path: /home/bitgandtter/Documents/development/php/k8s_devel_env
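For context, here is a minimal sketch of where that block sits in a pod spec (the pod name, container name, and mount path are illustrative, not taken from the repository above):
apiVersion: v1
kind: Pod
metadata:
  name: k8s-php-dev
spec:
  containers:
  - name: app
    image: k8s_php_dev          # the image built in the steps above
    volumeMounts:
    - name: src
      mountPath: /var/www/html  # illustrative path inside the container
  volumes:
  - name: src
    hostPath:
      # absolute path on the host, as noted above
      path: /home/bitgandtter/Documents/development/php/k8s_devel_env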
How do I run a docker image that I built locally on Google Container Engine?
You can push your image to Google Container Registry and reference it from your pod manifest.
Detailed instructions
Assuming you have DOCKER_HOST properly set up, a GKE cluster running the latest version of Kubernetes, and the Google Cloud SDK installed.
Set up your gcloud configuration and cluster credentials
gcloud components update kubectl
gcloud config set project <your-project>
gcloud config set compute/zone <your-cluster-zone>
gcloud config set container/cluster <your-cluster-name>
gcloud container clusters get-credentials <your-cluster-name>
Tag your image
docker tag <your-image> gcr.io/<your-project>/<your-image>
Push your image
gcloud docker push gcr.io/<your-project>/<your-image>
Create a pod manifest for your container: my-pod.yaml
apiVersion: v1
kind: Pod
metadata:
  name: my-pod
spec:
  containers:
  - name: <container-name>
    image: gcr.io/<your-project>/<your-image>
...
Schedule this pod
kubectl create -f my-pod.yaml
Repeat from step (4) for each pod you want to run. You can have multiple definitions in a single file using a line with --- as delimiter.
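For example, two pod definitions in a single file, separated by --- (the names and images are placeholders):
apiVersion: v1
kind: Pod
metadata:
  name: pod-a
spec:
  containers:
  - name: app-a
    image: gcr.io/<your-project>/<your-image>
---
apiVersion: v1
kind: Pod
metadata:
  name: pod-b
spec:
  containers:
  - name: app-b
    image: gcr.io/<your-project>/<another-image>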
The setup I use is to deploy my own Docker registry combined with SSH port forwarding. For that purpose I set up an SSH server in the cluster and use ~/.ssh/config to configure a port forward to the registry.
Also I use Jenkins to build the images right in the cloud.
Step 1: Get credentials for the cluster you want to work with
gcloud container clusters get-credentials [$cluster_name]
Step 2: Tag the docker image you want to run
docker tag nginx gcr.io/first-project/nginx
Step 3: Push image
gcloud docker push gcr.io/first-project/nginx
Step 4: Create a YAML file (test.yaml)
apiVersion: v1
kind: Pod
metadata:
  name: nginx1
spec:
  containers:
  - name: nginx1
    image: gcr.io/first-project/nginx
Step 5: Create the pod
kubectl create -f test.yaml
You could copy the registry authentication key of your private docker registry to the .dockercfg file in the root directory of the minions right before starting the pods.
Or run docker login on minions before starting.
docker login --username=<> --password=<> --email=<> <DockerServer>
Referring to the private docker image in the pod configuration should then work as expected.
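For example, a pod spec referring to such a private image might look like this (a minimal sketch; the registry host, project, and image name are placeholders):
apiVersion: v1
kind: Pod
metadata:
  name: private-image-pod
spec:
  containers:
  - name: app
    # pulled from the private registry the node has already logged in to
    image: <DockerServer>/<project>/<image>:latest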