Kubernetes (1.10) mountPropagation: Bidirectional not working - docker

I'm creating a pod with a volumeMount that sets mountPropagation: Bidirectional. When the pod is created, the container mounts the volume with "Propagation": "rprivate".
From the k8s docs I would expect mountPropagation: Bidirectional to result in a volume mount propagation of rshared.
If I start the container directly with docker, this works.
Some info:
Deployment Yaml
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: test
spec:
  selector:
    matchLabels:
      app: test
  strategy:
    type: RollingUpdate
  template:
    metadata:
      labels:
        app: test
    spec:
      containers:
      - image: gcr.io/google_containers/busybox:1.24
        command:
        - sleep
        - "36000"
        name: test
        volumeMounts:
        - mountPath: /tmp/test
          mountPropagation: Bidirectional
          name: test-vol
      volumes:
      - name: test-vol
        hostPath:
          path: /tmp/test
Resulting mount section from docker inspect
"Mounts": [
{
"Type": "bind",
"Source": "/tmp/test",
"Destination": "/tmp/test",
"Mode": "",
"RW": true,
"Propagation": "rprivate"
}…..
Equivalent Docker run
docker run --restart=always --name test -d --net=host --privileged=true -v /tmp/test:/tmp/test:shared gcr.io/google_containers/busybox:1.24
Resulting Mounts section from docker inspect when created with docker run
"Mounts": [
{
"Type": "bind",
"Source": "/tmp/test",
"Destination": "/tmp/test",
"Mode": "shared",
"RW": true,
"Propagation": "shared"
}...
Output of kubectl version
Client Version: version.Info{Major:"1", Minor:"10", GitVersion:"v1.10.1", GitCommit:"d4ab47518836c750f9949b9e0d387f20fb92260b", GitTreeState:"clean", BuildDate:"2018-04-13T22:29:03Z", GoVersion:"go1.9.5", Compiler:"gc", Platform:"darwin/amd64"}
Server Version: version.Info{Major:"1", Minor:"10", GitVersion:"v1.10.1", GitCommit:"d4ab47518836c750f9949b9e0d387f20fb92260b", GitTreeState:"clean", BuildDate:"2018-04-12T14:14:26Z", GoVersion:"go1.9.3", Compiler:"gc", Platform:"linux/amd64"}
Using rke version v0.1.6

This was a regression that was fixed in 1.10.3 by https://github.com/kubernetes/kubernetes/pull/62633.
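As a quick check after upgrading, you could confirm the propagation mode either from the node or from inside the pod. This is just a sketch; the container ID and pod name below are placeholders you would have to fill in:
# On the node, print destination and propagation for each mount of the container:
docker inspect --format '{{range .Mounts}}{{.Destination}}={{.Propagation}} {{end}}' <container-id>
# Or from inside the pod (busybox has no findmnt, but /proc/self/mountinfo shows "shared:" tags):
kubectl exec <test-pod-name> -- grep /tmp/test /proc/self/mountinfo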

Related

container running on docker swarm not accessible from outside

I am running my containers on Docker Swarm. asset-frontend is my frontend service; it runs Nginx inside the container and exposes port 80. Now, if I do
curl http://10.255.8.21:80
or
curl http://127.0.0.1:80
from the host where I am running these containers, I can see my asset-frontend application, but it is not accessible outside of the host: I am not able to access it from another machine. My host machine's operating system is CentOS 8.
This is my docker-compose file:
version: "3.3"
networks:
basic:
services:
asset-backend:
image: asset/asset-management-backend
env_file: .env
deploy:
replicas: 1
depends_on:
- asset-mongodb
- asset-postgres
networks:
- basic
asset-mongodb:
image: mongo
restart: always
env_file: .env
ports:
- "27017:27017"
volumes:
- $HOME/asset/mongodb:/data/db
networks:
- basic
asset-postgres:
image: asset/postgresql
restart: always
env_file: .env
ports:
- "5432:5432"
environment:
- POSTGRES_USER=postgres
- POSTGRES_PASSWORD=password
- POSTGRES_DB=asset-management
volumes:
- $HOME/asset/postgres:/var/lib/postgresql/data
networks:
- basic
asset-frontend:
image: asset/asset-management-frontend
restart: always
ports:
- "80:80"
environment:
- ENV=dev
depends_on:
- asset-backend
deploy:
replicas: 1
networks:
- basic
asset-autodiscovery-cron:
image: asset/auto-discovery-cron
restart: always
env_file: .env
deploy:
replicas: 1
depends_on:
- asset-mongodb
- asset-postgres
networks:
- basic
This is my docker service ls output:
ID NAME MODE REPLICAS IMAGE PORTS
auz640zl60bx asset_asset-autodiscovery-cron replicated 1/1 asset/auto-discovery-cron:latest
g6poofhvmoal asset_asset-backend replicated 1/1 asset/asset-management-backend:latest
brhq4g4mz7cf asset_asset-frontend replicated 1/1 asset/asset-management-frontend:latest *:80->80/tcp
rmkncnsm2pjn asset_asset-mongodb replicated 1/1 mongo:latest *:27017->27017/tcp
rmlmdpa5fz69 asset_asset-postgres replicated 1/1 asset/postgresql:latest *:5432->5432/tcp
Port 80 is open in the firewall.
The following is the output of firewall-cmd --list-all:
public (active)
  target: default
  icmp-block-inversion: no
  interfaces: eth0
  sources:
  services: cockpit dhcpv6-client ssh
  ports: 22/tcp 2376/tcp 2377/tcp 7946/tcp 7946/udp 4789/udp 80/tcp
  protocols:
  masquerade: no
  forward-ports:
  source-ports:
  icmp-blocks:
  rich rules:
If I inspect the created network, the output is the following:
[
{
"Name": "asset_basic",
"Id": "zw73vr9xigfx7hy16u1myw5gc",
"Created": "2019-11-26T02:36:38.241352385-05:00",
"Scope": "swarm",
"Driver": "overlay",
"EnableIPv6": false,
"IPAM": {
"Driver": "default",
"Options": null,
"Config": [
{
"Subnet": "10.0.3.0/24",
"Gateway": "10.0.3.1"
}
]
},
"Internal": false,
"Attachable": false,
"Ingress": false,
"ConfigFrom": {
"Network": ""
},
"ConfigOnly": false,
"Containers": {
"9348f4fc6bfc1b14b84570e205c88a67aba46f295a5e61bda301fdb3e55f3576": {
"Name": "asset_asset-frontend.1.zew1obp21ozmg8r1tzmi5h8g8",
"EndpointID": "27624fe2a7b282cef1762c4328ce0239dc70ebccba8e00d7a61595a7a1da2066",
"MacAddress": "02:42:0a:00:03:08",
"IPv4Address": "10.0.3.8/24",
"IPv6Address": ""
},
"943895f12de86d85fd03d0ce77567ef88555cf4766fa50b2a8088e220fe1eafe": {
"Name": "asset_asset-mongodb.1.ygswft1l34o5vfaxbzmnf0hrr",
"EndpointID": "98fd1ce6e16ade2b165b11c8f2875a0bdd3bc326c807ba6a1eb3c92f4417feed",
"MacAddress": "02:42:0a:00:03:04",
"IPv4Address": "10.0.3.4/24",
"IPv6Address": ""
},
"afab468aefab0689aa3488ee7f85dbc2cebe0202669ab4a58d570c12ee2bde21": {
"Name": "asset_asset-autodiscovery-cron.1.5k23u87w7224mpuasiyakgbdx",
"EndpointID": "d3d4c303e1bc665969ad9e4c9672e65a625fb71ed76e2423dca444a89779e4ee",
"MacAddress": "02:42:0a:00:03:0a",
"IPv4Address": "10.0.3.10/24",
"IPv6Address": ""
},
"f0a768e5cb2f1f700ee39d94e380aeb4bab5fe477bd136fd0abfa776917e90c1": {
"Name": "asset_asset-backend.1.8ql9t3qqt512etekjuntkft4q",
"EndpointID": "41587022c339023f15c57a5efc5e5adf6e57dc173286753216f90a976741d292",
"MacAddress": "02:42:0a:00:03:0c",
"IPv4Address": "10.0.3.12/24",
"IPv6Address": ""
},
"f577c539bbc3c06a501612d747f0d28d8a7994b843c6a37e18eeccb77717539e": {
"Name": "asset_asset-postgres.1.ynrqbzvba9kvfdkek3hurs7hl",
"EndpointID": "272d642a9e20e45f661ba01e8731f5256cef87898de7976f19577e16082c5854",
"MacAddress": "02:42:0a:00:03:06",
"IPv4Address": "10.0.3.6/24",
"IPv6Address": ""
},
"lb-asset_basic": {
"Name": "asset_basic-endpoint",
"EndpointID": "142373fd9c0d56d5a633b640d1ec9e4248bac22fa383ba2f754c1ff567a3502e",
"MacAddress": "02:42:0a:00:03:02",
"IPv4Address": "10.0.3.2/24",
"IPv6Address": ""
}
},
"Options": {
"com.docker.network.driver.overlay.vxlanid_list": "4100"
},
"Labels": {
"com.docker.stack.namespace": "asset"
},
"Peers": [
{
"Name": "8170c4487a4b",
"IP": "10.255.8.21"
}
]
}
]
I ran into this same issue, and it turned out to be a clash between my local network's subnet and the subnet of the automatically created ingress network. This can be verified using docker network inspect ingress and checking whether the IPAM.Config.Subnet value overlaps with your local network.
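For example (a sketch, not specific to this setup), the ingress subnet can be printed directly and compared against the host's addresses:
docker network inspect --format '{{json .IPAM.Config}}' ingress
ip -4 addr show   # compare the ingress subnet against your LAN / host subnet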
To fix it, you can update the configuration of the ingress network as specified in "Customize the default ingress network"; in summary:
Remove services that publish ports
Remove existing network: docker network rm ingress
Recreate it using a non-conflicting subnet (172.16.0.0/16 below is just an example; use whatever other subnet you want):
docker network create \
  --driver overlay \
  --ingress \
  --subnet 172.16.0.0/16 \
  --gateway 172.16.0.1 \
  ingress
Restart services
You can avoid a clash to begin with by specifying the default subnet pool when initializing the swarm using the --default-addr-pool option.
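A rough sketch of what that might look like when initializing a new swarm (the pool below is just an illustrative choice):
docker swarm init \
  --default-addr-pool 10.20.0.0/16 \
  --default-addr-pool-mask-length 24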
You can publish ports by updating the service:
docker service update your-service --publish-add 80:80
Can you try this URL instead of the IP address: host.docker.internal? So something like http://host.docker.internal:80.
I suggest you verify the "right" behavior using docker-compose first. Then, try to use docker swarm without a network specification just to verify there are no network interface problems.
Also, you could use the below command to verify your LISTEN ports:
netstat -tulpn
EDIT: I faced this same issue, but I was able to access my services through 127.0.0.1.
When running docker, provide a port mapping, like
docker run -p 8081:8081 your-docker-image
or provide the port mapping in Docker Desktop when starting the container.
I ran into this same issue. It turned out that my iptables filter was causing external connections to fail.
In docker swarm mode, docker creates a virtual network bridge device, docker_gwbridge, to access the overlay network. My iptables had the following line, which drops forwarded packets:
:FORWARD DROP
That means network packets coming from the physical NIC can't reach the docker ingress network, so my docker service only worked on localhost.
Change the iptables rule to
:FORWARD ACCEPT
and the problem is solved without touching docker at all.
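In practice that could be applied roughly like this (a sketch; how you persist the policy across reboots depends on your distribution):
sudo iptables -P FORWARD ACCEPT   # set the default policy for the FORWARD chain
sudo iptables -S FORWARD          # verify the first line now reads "-P FORWARD ACCEPT"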

Minikube mounted host folders are not working

I am using Ubuntu 18 with minikube and VirtualBox, and I am trying to mount the host's directory in order to get the input data my pod needs.
I found that minikube has issues with mounting host directories, but that, depending on your OS and VM driver, there are directories that are mounted by default.
I can't find those in my pods. They are simply not there.
I tried to create a persistent volume. It works and I can see it on my dashboard, but I can't mount it into the pod. I used this definition to create the volume:
{
    "kind": "PersistentVolume",
    "apiVersion": "v1",
    "metadata": {
        "name": "pv0003",
        "selfLink": "/api/v1/persistentvolumes/pv0001",
        "uid": "28038976-9ee4-414d-8478-b312a24a6b94",
        "resourceVersion": "2030",
        "creationTimestamp": "2019-08-08T10:48:23Z",
        "annotations": {
            "kubectl.kubernetes.io/last-applied-configuration": "{\"apiVersion\":\"v1\",\"kind\":\"PersistentVolume\",\"metadata\":{\"annotations\":{},\"name\":\"pv0001\"},\"spec\":{\"accessModes\":[\"ReadWriteOnce\"],\"capacity\":{\"storage\":\"5Gi\"},\"hostPath\":{\"path\":\"/data/pv0001/\"}}}\n"
        },
        "finalizers": [
            "kubernetes.io/pv-protection"
        ]
    },
    "spec": {
        "capacity": {
            "storage": "6Gi"
        },
        "hostPath": {
            "path": "/user/data",
            "type": ""
        },
        "accessModes": [
            "ReadWriteOnce"
        ],
        "persistentVolumeReclaimPolicy": "Retain",
        "volumeMode": "Filesystem"
    },
    "status": {
        "phase": "Available"
    }
}
And this YAML to create the job:
apiVersion: batch/v1
kind: Job
metadata:
  name: pi31
spec:
  template:
    spec:
      containers:
      - name: pi
        image: perl
        command: ["sleep"]
        args: ["300"]
        volumeMounts:
        - mountPath: /data
          name: pv0003
      volumes:
      - name: pv0003
        hostPath:
          path: /user/data
      restartPolicy: Never
  backoffLimit: 1
I also tried to create the volumes according to the so-called default mount paths, but with no success.
I tried to add the volume claim to the job creation YAML; still nothing.
When I mount the drives and declare them in the job creation YAML files, the jobs are able to see the data that other jobs create, but it's invisible to the host, and the host's data is invisible to them.
I am running minikube from my main user, and I checked the logs in the dashboard; I am not getting any permissions error.
Is there any way to get data into this minikube cluster without setting up NFS? I am trying to use it for an MVP, and the entire idea is for it to be simple...
It's not that easy, because minikube runs inside a VM created in VirtualBox; that's why with hostPath you see that VM's file system instead of your PC's.
I would really recommend using the minikube mount command - you can find its description in the documentation.
From the docs:
minikube mount /path/to/dir/to/mount:/vm-mount-path is the recommended
way to mount directories into minikube so that they can be used in
your local Kubernetes cluster.
After that you can share your host's files inside minikube's Kubernetes.
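Applied to the paths from the question, that would be something along these lines (a sketch; /user/data is the asker's host path, and reusing the same path inside the VM is just a convenient choice):
minikube mount /user/data:/user/data
# keep this process running, then point the Job's hostPath at /user/data, which now refers to the mounted directory inside the VM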
Edit:
Here is a step-by-step log of how to test it:
➜ ~ minikube start
* minikube v1.3.0 on Ubuntu 19.04
* Tip: Use 'minikube start -p <name>' to create a new cluster, or 'minikube delete' to delete this one.
* Starting existing virtualbox VM for "minikube" ...
* Waiting for the host to be provisioned ...
* Preparing Kubernetes v1.15.2 on Docker 18.09.6 ...
* Relaunching Kubernetes using kubeadm ...
* Waiting for: apiserver proxy etcd scheduler controller dns
* Done! kubectl is now configured to use "minikube"
➜ ~ mkdir -p /tmp/test-dir
➜ ~ echo "test-string" > /tmp/test-dir/test-file
➜ ~ minikube mount /tmp/test-dir:/test-dir
* Mounting host path /tmp/test-dir into VM as /test-dir ...
- Mount type: <no value>
- User ID: docker
- Group ID: docker
- Version: 9p2000.L
- Message Size: 262144
- Permissions: 755 (-rwxr-xr-x)
- Options: map[]
* Userspace file server: ufs starting
* Successfully mounted /tmp/test-dir to /test-dir
* NOTE: This process must stay alive for the mount to be accessible ...
Now open another console:
➜ ~ minikube ssh
_ _
_ _ ( ) ( )
___ ___ (_) ___ (_)| |/') _ _ | |_ __
/' _ ` _ `\| |/' _ `\| || , < ( ) ( )| '_`\ /'__`\
| ( ) ( ) || || ( ) || || |\`\ | (_) || |_) )( ___/
(_) (_) (_)(_)(_) (_)(_)(_) (_)`\___/'(_,__/'`\____)
$ cat /test-dir/test-file
test-string
Edit 2:
example job.yml
apiVersion: batch/v1
kind: Job
metadata:
  name: test
spec:
  template:
    spec:
      containers:
      - name: test
        image: ubuntu
        command: ["cat", "/testing/test-file"]
        volumeMounts:
        - name: test-volume
          mountPath: /testing
      volumes:
      - name: test-volume
        hostPath:
          path: /test-dir
      restartPolicy: Never
  backoffLimit: 4
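To run it and confirm the mounted file is visible, something like the following should work (a sketch using standard kubectl commands, assuming the manifest above is saved as job.yml):
kubectl apply -f job.yml
kubectl wait --for=condition=complete job/test --timeout=60s
kubectl logs job/test   # should print "test-string" from the mounted file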

kubectl create from inside pod

I have a running Jenkins pod and I am trying to execute the following command:
sudo kubectl --kubeconfig /opt/jenkins_home/admin.conf apply -f /opt/jenkins_home/ab-kubernetes/ab-back.yml
It is giving the following error:
Error from server (NotFound): the server could not find the requested resource
What could be going wrong here?
ab-back.yml file
---
apiVersion: v1
kind: Service
metadata:
  name: dg-back-svc
spec:
  selector:
    app: dg-core-backend-d
  type: NodePort
  ports:
  - name: http
    protocol: TCP
    port: 8081
    nodePort: 30003
---
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: dg-core-backend-d
spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: dg-core-backend-d
    spec:
      containers:
      - name: dg-core-java
        image: ab/dg-springboot-java:1.0
        imagePullPolicy: IfNotPresent
        command: ["sh"]
        args: ["-c", "/root/post-deployment.sh"]
        ports:
        - containerPort: 8081
#       livenessProbe:
#         httpGet:
#           path: /
#           port: 8080
        env:
        - name: SPRING_PROFILES_ACTIVE
          value: xxx
UPDATE:
kubectl version is as follows:
Client Version: version.Info{Major:"1", Minor:"11", GitVersion:"v1.11.0", GitCommit:"91e7b4fd31fcd3d5f436da26c980becec37ceefe", GitTreeState:"clean", BuildDate:"2018-06-27T20:17:28Z", GoVersion:"go1.10.2", Compiler:"gc", Platform:"linux/amd64"}
Server Version: version.Info{Major:"1", Minor:"6", GitVersion:"v1.6.0", GitCommit:"fff5156092b56e6bd60fff75aad4dc9de6b6ef37", GitTreeState:"clean", BuildDate:"2017-03-28T16:24:30Z", GoVersion:"go1.7.5", Compiler:"gc", Platform:"linux/amd64"}
With --v=4 logging, kubectl apply works and gives the following logs:
I0702 11:40:17.721604 1601 merged_client_builder.go:159] Using in-cluster namespace
I0702 11:40:17.734648 1601 decoder.go:224] decoding stream as YAML
service/dg-back-svc created
deployment.extensions/dg-core-backend-d created
but kubectl create gives this error:
I0702 11:41:12.265490 1631 helpers.go:201] server response object: [{
"metadata": {},
"status": "Failure",
"message": "the server could not find the requested resource",
"reason": "NotFound",
"details": {
"causes": [
{
"reason": "UnexpectedServerResponse",
"message": "unknown"
}
]
},
"code": 404
}]
Also, running kubectl get pods --v=10 gives this log:
Response Body: {
"metadata": {},
"status": "Failure",
"message": "only the following media types are accepted: application/json, application/yaml, application/vnd.kubernetes.protobuf",
"reason": "NotAcceptable",
"code": 406
}
I0702 12:34:27.542564 2514 request.go:1099] body was not decodable (unable to check for Status): Object 'Kind' is missing in '{
"metadata": {},
"status": "Failure",
"message": "only the following media types are accepted: application/json, application/yaml, application/vnd.kubernetes.protobuf",
"reason": "NotAcceptable",
"code": 406
}'
No resources found.
I0702 12:34:27.542813 2514 helpers.go:201] server response object: [{
"metadata": {},
"status": "Failure",
"message": "unknown (get pods)",
"reason": "NotAcceptable",
"details": {
"kind": "pods",
"causes": [
{
"reason": "UnexpectedServerResponse",
"message": "unknown"
}
]
},
"code": 406
}]
The problem is the versions: use an older version of the client or upgrade the server. kubectl supports only one minor version of forward and backward skew:
From the documentation:
a client should be skewed no more than one minor version from the
master, but may lead the master by up to one minor version. For
example, a v1.3 master should work with v1.1, v1.2, and v1.3 nodes,
and should work with v1.2, v1.3, and v1.4 clients.
The Kubernetes server doesn't have the extensions/v1beta1 resources; that's the reason why you cannot create dg-core-backend-d.
You can check this by typing kubectl api-versions.
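A quick way to look at both suspects at once might be the following (a sketch; the output will vary with your cluster):
kubectl version --short                   # compare client and server minor versions for skew
kubectl api-versions | grep extensions    # check whether extensions/v1beta1 is served by the API server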

Can't connect to node app on kubernetes

I've just finished Google's tutorial on how to implement continuous integration for a Go app on Kubernetes using Jenkins, and it works great. I'm now trying to do the same thing with a Node app that is served on port 3001, but I keep getting this error:
{
    "kind": "Status",
    "apiVersion": "v1",
    "metadata": {},
    "status": "Failure",
    "message": "services \"gceme-frontend\" not found",
    "reason": "NotFound",
    "details": {
        "name": "gceme-frontend",
        "kind": "services"
    },
    "code": 404
}
The only thing I've changed on the routing side is having the load balancer point to 3001 instead of 80, since that's where the Node app is listening. I have a very strong feeling that the error is somewhere in the .yaml files.
My node server (relevant part):
const PORT = process.env.PORT || 3001;
frontend-dev.yaml: (this is applied to the dev environment)
kind: Deployment
apiVersion: extensions/v1beta1
metadata:
  name: gceme-frontend-dev
spec:
  replicas:
  template:
    metadata:
      name: frontend
      labels:
        app: gceme
        role: frontend
        env: dev
    spec:
      containers:
      - name: frontend
        image: gcr.io/cloud-solutions-images/gceme:1.0.0
        resources:
          limits:
            memory: "500Mi"
            cpu: "100m"
        imagePullPolicy: Always
        ports:
        - containerPort: 3001
          protocol: TCP
services/frontend.yaml:
kind: Service
apiVersion: v1
metadata:
  name: gceme-frontend
spec:
  type: LoadBalancer
  ports:
  - name: http
    # THIS PORT ACTUALLY GOES IN THE URL, i.e. gceme-frontend:****
    # when it says "no endpoints available for service", that doesn't mean this port is wrong;
    # it means the target port is not working or doesn't exist
    port: 80
    # matches port and -port in frontend-*.yaml
    targetPort: 3001
    protocol: TCP
  selector:
    app: gceme
    role: frontend
Jenkinsfile (for dev branches, which is what I'm trying to get working)
sh("kubectl get ns ${env.BRANCH_NAME} || kubectl create ns ${env.BRANCH_NAME}")
// Don't use public load balancing for development branches
sh("sed -i.bak 's#LoadBalancer#ClusterIP#' ./k8s/services/frontend.yaml")
sh("sed -i.bak 's#gcr.io/cloud-solutions-images/gceme:1.0.0#${imageTag}#' ./k8s/dev/*.yaml")
sh("kubectl --namespace=${env.BRANCH_NAME} apply -f k8s/services/")
sh("kubectl --namespace=${env.BRANCH_NAME} apply -f k8s/dev/")
echo 'To access your environment run `kubectl proxy`'
echo "Then access your service via http://localhost:8001/api/v1/proxy/namespaces/${env.BRANCH_NAME}/services/${feSvcName}:80/"
Are you creating Service or Ingress resources to expose your application to the outside world?
See tutorials:
https://cloud.google.com/kubernetes-engine/docs/tutorials/hello-app
https://cloud.google.com/kubernetes-engine/docs/tutorials/http-balancer
which have working examples you can copy and modify.
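It may also help to confirm that the Service actually exists in the branch namespace the Jenkinsfile deploys into, and that it has endpoints backing it. A sketch, with the namespace as a placeholder you would replace with your branch name:
kubectl get svc gceme-frontend -n <branch-namespace>
kubectl get endpoints gceme-frontend -n <branch-namespace>
kubectl describe svc gceme-frontend -n <branch-namespace>   # check the selector and that targetPort is 3001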

How to write a kubernetes pod configuration to start two containers

I would like to create a kubernetes pod that contains 2 containers, both with different images, so I can start both containers together.
Currently I have tried the following configuration:
{
    "id": "podId",
    "desiredState": {
        "manifest": {
            "version": "v1beta1",
            "id": "podId",
            "containers": [{
                "name": "type1",
                "image": "local/image"
            },
            {
                "name": "type2",
                "image": "local/secondary"
            }]
        }
    },
    "labels": {
        "name": "imageTest"
    }
}
However when I execute kubecfg -c app.json create /pods I get the following error:
F0909 08:40:13.028433 01141 kubecfg.go:283] Got request error: request [&http.Request{Method:"POST", URL:(*url.URL)(0xc20800ee00), Proto:"HTTP/1.1", ProtoMajor:1, ProtoMinor:1, Header:http.Header{}, B
ody:ioutil.nopCloser{Reader:(*bytes.Buffer)(0xc20800ed20)}, ContentLength:396, TransferEncoding:[]string(nil), Close:false, Host:"127.0.0.1:8080", Form:url.Values(nil), PostForm:url.Values(nil), Multi
partForm:(*multipart.Form)(nil), Trailer:http.Header(nil), RemoteAddr:"", RequestURI:"", TLS:(*tls.ConnectionState)(nil)}] failed (500) 500 Internal Server Error: {"kind":"Status","creationTimestamp":
null,"apiVersion":"v1beta1","status":"failure","message":"failed to find fit for api.Pod{JSONBase:api.JSONBase{Kind:\"\", ID:\"SSH podId\", CreationTimestamp:util.Time{Time:time.Time{sec:63545848813, nsec
:0x14114e1, loc:(*time.Location)(0xb9a720)}}, SelfLink:\"\", ResourceVersion:0x0, APIVersion:\"\"}, Labels:map[string]string{\"name\":\"imageTest\"}, DesiredState:api.PodState{Manifest:api.ContainerMa
nifest{Version:\"v1beta1\", ID:\"podId\", Volumes:[]api.Volume(nil), Containers:[]api.Container{api.Container{Name:\"type1\", Image:\"local/image\", Command:[]string(nil), WorkingDir:\"\", Ports:[]ap
i.Port(nil), Env:[]api.EnvVar(nil), Memory:0, CPU:0, VolumeMounts:[]api.VolumeMount(nil), LivenessProbe:(*api.LivenessProbe)(nil)}, api.Container{Name:\"type2\", Image:\"local/secondary\", Command:[]string(n
il), WorkingDir:\"\", Ports:[]api.Port(nil), Env:[]api.EnvVar(nil), Memory:0, CPU:0, VolumeMounts:[]api.VolumeMount(nil), LivenessProbe:(*api.LivenessProbe)(nil)}}}, Status:\"\", Host:\"\", HostIP:\"\
", PodIP:\"\", Info:api.PodInfo(nil), RestartPolicy:api.RestartPolicy{Type:\"RestartAlways\"}}, CurrentState:api.PodState{Manifest:api.ContainerManifest{Version:\"\", ID:\"\", Volumes:[]api.Volume(nil
), Containers:[]api.Container(nil)}, Status:\"\", Host:\"\", HostIP:\"\", PodIP:\"\", Info:api.PodInfo(nil), RestartPolicy:api.RestartPolicy{Type:\"\"}}}","code":500}
How can I modify the configuration accordingly?
I am running kubernetes on a Vagrant VM (yungsang/coreos).
The error in question here is "failed to find fit". This generally happens when you have a port conflict (e.g. you try to use the same hostPort too many times), or perhaps you don't have any worker nodes/minions.
I'd suggest you use the Vagrantfile that is in the Kubernetes git repo (see http://kubernetes.io), as we have been trying to make sure that it keeps working while Kubernetes is under very active development. If you want to make it work with the CoreOS single-machine setup, I suggest you hop on IRC (#google-containers on freenode) and try to get in touch with Kelsey Hightower.
Your pod spec file looks invalid.
According to http://kubernetes.io/v1.0/docs/user-guide/walkthrough/README.html#multiple-containers, a valid multiple-container pod spec should look like this:
apiVersion: v1
kind: Pod
metadata:
  name: www
spec:
  containers:
  - name: nginx
    image: nginx
    volumeMounts:
    - mountPath: /srv/www
      name: www-data
      readOnly: true
  - name: git-monitor
    image: kubernetes/git-monitor
    env:
    - name: GIT_REPO
      value: http://github.com/some/repo.git
    volumeMounts:
    - mountPath: /data
      name: www-data
  volumes:
  - name: www-data
    emptyDir: {}
The latest doc is at http://kubernetes.io/docs/user-guide/walkthrough/#multiple-containers:
apiVersion: v1
kind: Pod
metadata:
  name: test
spec:
  containers:
  - name: wp
    image: wordpress
    resources:
      requests:
        memory: "64Mi"
        cpu: "250m"
      limits:
        memory: "128Mi"
        cpu: "500m"
  - name: ng
    image: nginx
    imagePullPolicy: IfNotPresent
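Once the pod is created with a current client, you can confirm that both containers started. A sketch, assuming the manifest above is saved as pod.yaml:
kubectl apply -f pod.yaml
kubectl get pod test -o jsonpath='{.spec.containers[*].name}'   # should print: wp ng
kubectl get pod test                                            # READY column should show 2/2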
