Run multiple podman containers, like docker-compose

I found a library that can replace docker-compose for Podman, but it is still under development. So my question is: how can I run multiple containers together? Currently I am using a bash script to run them all, but that is only good for the first run and does not handle updating the containers.
I'd prefer a way to do this with Podman itself rather than using some other tool.
library (under development) --> https://github.com/muayyad-alsadi/podman-compose

I think the Kubernetes Pod concept is what you're looking for, or at least it allows you to run multiple containers together by following a well-established standard.
My first approach was, like yours, to do everything from the command line just to see it working, something like:
# Create a pod, publishing host port 8080/TCP to internal 80/TCP and 8113/TCP to 113/TCP
$ podman pod create \
--name my-pod \
--publish 8080:80/TCP \
--publish 8113:113/TCP
# Create a first container inside the pod
$ podman run --detach \
--pod my-pod \
--name cont1-name \
--env MY_VAR="my val" \
nginxdemos/hello
# Create a second container inside the pod
$ podman run --detach \
--pod my-pod \
--name cont2-name \
--env MY_VAR="my val" \
greboid/nullidentd
# Check the result
$ podman container ls; podman pod ls
Now that you have a pod, you can export it as a Pod manifest by using podman generate kube my-pod > my-pod.yaml.
As soon as you try your own examples, you will see that not everything is exported as you might expect (networks or volumes, for instance), but at least it gives you a base to keep working from.
For the same example, the resulting Pod manifest my-pod.yaml looks like this:
# Created with podman-2.2.1
apiVersion: v1
kind: Pod
metadata:
  labels:
    app: my-pod
  name: my-pod
spec:
  containers:
  # Create the first container: Dummy identd server on 113/TCP
  - name: cont2-name
    image: docker.io/greboid/nullidentd:latest
    command: [ "/usr/sbin/inetd", "-i" ]
    env:
    - name: MY_VAR
      value: my val
    # Ensure not to overlap other 'containerPort' values within this pod
    ports:
    - containerPort: 113
      hostPort: 8113
      protocol: TCP
    workingDir: /
  # Create a second container.
  - name: cont1-name
    image: docker.io/nginxdemos/hello:latest
    command: [ "nginx", "-g", "daemon off;" ]
    env:
    - name: MY_VAR
      value: my val
    # Ensure not to overlap other 'containerPort' values within this pod
    ports:
    - containerPort: 80
      hostPort: 8080
      protocol: TCP
    workingDir: /
  restartPolicy: Never
status: {}
You then use this file like this:
# Use a Kubernetes-compatible Pod manifest to create and run a pod
$ podman play kube my-pod.yaml
# Check
$ podman container ls; podman pod ls
# Output
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
1a53a5c0f076 docker.io/nginxdemos/hello:latest nginx -g daemon o... 8 seconds ago Up 6 seconds ago 0.0.0.0:8080->80/tcp, 0.0.0.0:8113->113/tcp my-pod-cont1-name
351065b66b55 docker.io/greboid/nullidentd:latest /usr/sbin/inetd -... 10 seconds ago Up 6 seconds ago 0.0.0.0:8080->80/tcp, 0.0.0.0:8113->113/tcp my-pod-cont2-name
e61c68752e35 k8s.gcr.io/pause:3.2 14 seconds ago Up 7 seconds ago 0.0.0.0:8080->80/tcp, 0.0.0.0:8113->113/tcp b586ca581129-infra
POD ID NAME STATUS CREATED INFRA ID # OF CONTAINERS
b586ca581129 my-pod Running 14 seconds ago e61c68752e35 3
You will be able to access the 'Hello World' served by nginx at 8080, and the dummy identd server at 8113.
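When you want to remove the pod (or recreate it from an updated manifest), you can tear it down again; a minimal sketch:
# Stop and remove the pod together with its containers
$ podman pod stop my-pod
$ podman pod rm my-pod
# Newer Podman releases can also tear down everything created from a manifest
$ podman play kube --down my-pod.yaml
The --down flag only exists in newer Podman versions, so check podman play kube --help before relying on it.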

Related

How to access kind control plane port from another docker container?

I'm creating a kind cluster with kind create cluster --name kind and I want to access it from another Docker container, but when I try to apply a Kubernetes manifest from a container (kubectl apply -f deployment.yml) I get this error:
The connection to the server 127.0.0.1:6445 was refused - did you specify the right host or port?
Indeed when I try to curl kind control-plane from a container, it's unreachable.
> docker run --entrypoint curl curlimages/curl:latest 127.0.0.1:6445
curl: (7) Failed to connect to 127.0.0.1 port 6445 after 0 ms: Connection refused
However, the kind control plane does publish the right port, but only on localhost.
> docker ps --format "table {{.Image}}\t{{.Ports}}"
IMAGE PORTS
kindest/node:v1.23.4 127.0.0.1:6445->6443/tcp
Currently the only solution I found is to set the host network mode.
> docker run --network host --entrypoint curl curlimages/curl:latest 127.0.0.1:6445
Client sent an HTTP request to an HTTPS server.
This solution doesn't look like the most secure one. Is there another way, like connecting my container to the kind network or something like that, that I missed?
Don't have enough rep to comment on the other answer, but wanted to comment on what ultimately worked for me.
Takeaways
Kind cluster running in its own bridge network kind
Service with kubernetes client running in another container with a mounted kube config volume
As described above, the containers need to be in the same network unless you want your service to run in the host network.
The server address for the kubeconfig is the container name plus the internal port, e.g. kind-control-plane:6443. The port is NOT the exposed host port; in the example below it is 6443, NOT 38669.
CONTAINER ID IMAGE PORTS
7f2ee0c1bd9a kindest/node:v1.25.3 127.0.0.1:38669->6443/tcp
Kube config for the container
# path/to/some/kube/config
apiVersion: v1
clusters:
- cluster:
    insecure-skip-tls-verify: true # Don't use in prod; equivalent of --insecure on the CLI
    server: https://<kind-control-plane container name>:6443 # NOTE: port is the internal container port
  name: kind-kind # or whatever
contexts:
- context:
    cluster: kind-kind
    user: <some-service-account>
  name: kind-kind # or whatever
current-context: kind-kind
kind: Config
preferences: {}
users:
- name: <some-service-account>
  user:
    token: <TOKEN>
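The <some-service-account> and <TOKEN> placeholders are whatever account you decide to use. As a minimal sketch (assuming a hypothetical service account named ci-bot and a recent kubectl that has kubectl create token, i.e. 1.24+), you could create them like this:
# Hypothetical account; scope the RBAC to what your service actually needs instead of cluster-admin
kubectl create serviceaccount ci-bot
kubectl create clusterrolebinding ci-bot-admin --clusterrole=cluster-admin --serviceaccount=default:ci-bot
# Prints a short-lived token to paste into the kubeconfig
kubectl create token ci-bot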
Docker container stuff
If using docker-compose you can add the kind network to the container such as:
# docker-compose.yml
services:
  foobar:
    build:
      context: ./.config
    networks:
      - kind # add this container to the kind network
    volumes:
      - path/to/some/kube/config:/somewhere/in/the/container
networks:
  kind: # define the kind network
    external: true # specifies that the network already exists in docker
If running a new container:
docker run --network kind -v path/to/some/kube/config:/somewhere/in/the/container <image>
Container already running?
docker network connect kind <container name>
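Either way, you can check that the container really reaches the API server through the kind network; a minimal sketch, assuming the default kind-control-plane container name and that curl is available inside your container:
# Container names resolve via Docker's embedded DNS on the kind network;
# /version is served to unauthenticated clients by default
docker exec <container name> curl -k https://kind-control-plane:6443/version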
I don't know exactly why you want to do this, but no problem, I think this could help you.
First, let's pull your Docker image:
❯ docker pull curlimages/curl
In my kind cluster I have 3 control plane nodes and 3 worker nodes. Here are the containers of my kind cluster:
❯ docker ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
39dbbb8ca320 kindest/node:v1.23.5 "/usr/local/bin/entr…" 7 days ago Up 7 days 127.0.0.1:35327->6443/tcp so-cluster-1-control-plane
62b5538275e9 kindest/haproxy:v20220207-ca68f7d4 "haproxy -sf 7 -W -d…" 7 days ago Up 7 days 127.0.0.1:35625->6443/tcp so-cluster-1-external-load-balancer
9f189a1b6c52 kindest/node:v1.23.5 "/usr/local/bin/entr…" 7 days ago Up 7 days 127.0.0.1:40845->6443/tcp so-cluster-1-control-plane3
4c53f745a6ce kindest/node:v1.23.5 "/usr/local/bin/entr…" 7 days ago Up 7 days 127.0.0.1:36153->6443/tcp so-cluster-1-control-plane2
97e5613d2080 kindest/node:v1.23.5 "/usr/local/bin/entr…" 7 days ago Up 7 days 0.0.0.0:30081->30080/tcp so-cluster-1-worker2
0ca64a907707 kindest/node:v1.23.5 "/usr/local/bin/entr…" 7 days ago Up 7 days 0.0.0.0:30080->30080/tcp so-cluster-1-worker
9c5d26caee86 kindest/node:v1.23.5 "/usr/local/bin/entr…" 7 days ago Up 7 days 0.0.0.0:30082->30080/tcp so-cluster-1-worker3
The container that is interesting for us here is the haproxy one (kindest/haproxy:v20220207-ca68f7d4), which has the role of load-balancing the incoming traffic to the nodes (and, in our example, especially the control plane nodes). We can see that port 35625 of our host machine is mapped to port 6443 of the haproxy container (127.0.0.1:35625->6443/tcp).
So our cluster endpoint is https://127.0.0.1:35625; we can confirm this in our kubeconfig file (~/.kube/config):
❯ cat .kube/config
apiVersion: v1
kind: Config
preferences: {}
users:
- name: kind-so-cluster-1
  user:
    client-certificate-data: <base64data>
    client-key-data: <base64data>
clusters:
- cluster:
    certificate-authority-data: <certificate-authority-dataBase64data>
    server: https://127.0.0.1:35625
  name: kind-so-cluster-1
contexts:
- context:
    cluster: kind-so-cluster-1
    user: kind-so-cluster-1
    namespace: so-tests
  name: kind-so-cluster-1
current-context: kind-so-cluster-1
Let's run the curl container in the background:
❯ docker run -d --network host curlimages/curl sleep 3600
ba183fe2bb8d715ed1e503a9fe8096dba377f7482635eb12ce1322776b7e2366
As expected, we can't make an HTTP request to an endpoint that listens on an HTTPS port:
❯ docker exec -it ba curl 127.0.0.1:35625
Client sent an HTTP request to an HTTPS server.
We can try to use the certificate from the "certificate-authority-data" field of our kubeconfig to check if that changes something (it should).
Let's create a file named my-ca.crt that contains the decoded certificate data:
base64 -d <<< <certificate-authority-dataBase64dataFromKubeConfig> > my-ca.crt
Since the working directory of the curl Docker image is "/", let's copy our cert to this location in the container and verify that it is actually there:
docker cp my-ca.crt ba183fe:/
❯ docker exec -it ba sh
/ $ ls my-ca.crt
my-ca.crt
Let's try our curl request again, but with the certificate:
❯ docker exec -it ba curl --cacert my-ca.crt https://127.0.0.1:35625
{
  "kind": "Status",
  "apiVersion": "v1",
  "metadata": {},
  "status": "Failure",
  "message": "forbidden: User \"system:anonymous\" cannot get path \"/\"",
  "reason": "Forbidden",
  "details": {},
  "code": 403
}
You can get the same result by adding the "--insecure" flag to your curl request:
❯ docker exec -it ba curl https://127.0.0.1:35625 --insecure
{
  "kind": "Status",
  "apiVersion": "v1",
  "metadata": {},
  "status": "Failure",
  "message": "forbidden: User \"system:anonymous\" cannot get path \"/\"",
  "reason": "Forbidden",
  "details": {},
  "code": 403
}
However, we can't access our cluster as an anonymous user! So let's get a token from Kubernetes (cf. https://kubernetes.io/docs/tasks/administer-cluster/access-cluster-api/):
# Create a secret to hold a token for the default service account
kubectl apply -f - <<EOF
apiVersion: v1
kind: Secret
metadata:
  name: default-token
  annotations:
    kubernetes.io/service-account.name: default
type: kubernetes.io/service-account-token
EOF
Once the token controller has populated the secret with a token:
# Get the token value
❯ kubectl get secret default-token -o jsonpath='{.data.token}' | base64 --decode
eyJhbGciOiJSUzI1NiIsImtpZCI6InFSTThZZ05lWHFXMWExQlVSb1hTcHNxQ3F6Z2Z2aWpUaUYwd2F2TGdVZ0EifQ.eyJpc3MiOiJrdWJlcm5ldGVzL3NlcnZpY2VhY2NvdW50Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9uYW1lc3BhY2UiOiJzby10ZXN0cyIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VjcmV0Lm5hbWUiOiJkZWZhdWx0LXRva2VuIiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9zZXJ2aWNlLWFjY291bnQubmFtZSI6ImRlZmF1bHQiLCJrdWJlcm5ldGVzLmlvL3NlcnZpY2VhY2NvdW50L3NlcnZpY2UtYWNjb3VudC51aWQiOiIzYzY0OTg1OS0xNzkyLTQzYTQtOGJjOC0zMDEzZDgxNjRmY2IiLCJzdWIiOiJzeXN0ZW06c2VydmljZWFjY291bnQ6c28tdGVzdHM6ZGVmYXVsdCJ9.VLfjuym0fohYTT_uoLPwM0A6u7dUt2ciWZF2K9LM_YvQ0UZT4VgkM8UBVOQpWjTmf9s2B5ZxaOkPu4cz_B4xyDLiiCgqiHCbUbjxE9mphtXGKQwAeKLvBlhbjYnHb9fCTRW19mL7VhqRgfz5qC_Tae7ysD3uf91FvqjjxsCyzqSKlsq0T7zXnzQ_YQYoUplGa79-LS_xDwG-2YFXe0RfS9hkpCILpGDqhLXci_gwP9DW0a6FM-L1R732OdGnb9eCPI6ReuTXQz7naQ4RQxZSIiNd_S7Vt0AYEg-HGvSkWDl0_DYIyHShMeFHu1CtfTZS5xExoY4-_LJD8mi
Now let's execute the curl command directly with the token!
❯ docker exec -it ba curl -X GET https://127.0.0.1:35625/api --header "Authorization: Bearer eyJhbGciOiJSUzI1NiIsImtpZCI6InFSTThZZ05lWHFXMWExQlVSb1hTcHNxQ3F6Z2Z2aWpUaUYwd2F2TGdVZ0EifQ.eyJpc3MiOiJrdWJlcm5ldGVzL3NlcnZpY2VhY2NvdW50Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9uYW1lc3BhY2UiOiJzby10ZXN0cyIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VjcmV0Lm5hbWUiOiJkZWZhdWx0LXRva2VuIiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9zZXJ2aWNlLWFjY291bnQubmFtZSI6ImRlZmF1bHQiLCJrdWJlcm5ldGVzLmlvL3NlcnZpY2VhY2NvdW50L3NlcnZpY2UtYWNjb3VudC51aWQiOiIzYzY0OTg1OS0xNzkyLTQzYTQtOGJjOC0zMDEzZDgxNjRmY2IiLCJzdWIiOiJzeXN0ZW06c2VydmljZWFjY291bnQ6c28tdGVzdHM6ZGVmYXVsdCJ9.VLfjuym0fohYTT_uoLPwM0A6u7dUt2ciWZF2K9LM_YvQ0UZT4VgkM8UBVOQpWjTmf9s2B5ZxaOkPu4cz_B4xyDLiiCgqiHCbUbjxE9mphtXGKQwAeKLvBlhbjYnHb9fCTRW19mL7VhqRgfz5qC_Tae7ysD3uf91FvqjjxsCyzqSKlsq0T7zXnzQ_YQYoUplGa79-LS_xDwG-2YFXe0RfS9hkpCILpGDqhLXci_gwP9DW0a6FM-L1R732OdGnb9eCPI6ReuTXQz7naQ4RQxZSIiNd_S7Vt0AYEg-HGvSkWDl0_DYIyHShMeFHu1CtfTZS5xExoY4-_LJD8mi" --insecure
{
  "kind": "APIVersions",
  "versions": [
    "v1"
  ],
  "serverAddressByClientCIDRs": [
    {
      "clientCIDR": "0.0.0.0/0",
      "serverAddress": "172.18.0.5:6443"
    }
  ]
}
It works!
I still don't know why you want to do this but I hope that this helped you.
Since this is not exactly what you wanted (here I used the host network), you can instead use this: How to communicate between Docker containers via "hostname", as proposed by @SergioSantiago. Thanks for your comment!
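For completeness, a minimal sketch of that approach, assuming the default kind bridge network and the container names from the docker ps output above:
# Run the curl container on the kind network instead of the host network
❯ docker run -d --network kind curlimages/curl sleep 3600
# Container names resolve on that network, so we can target the haproxy load balancer directly
❯ docker exec <new container id> curl --insecure https://so-cluster-1-external-load-balancer:6443
This should return the same "system:anonymous" 403 response as before, which you can then authenticate with the token.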
bguess

How to Make Kubectl run a container after pod is created

The intention is to execute Gatling perf tests from the command line. The equivalent docker command is:
docker run --rm -w /opt/gatling-fundamentals/ \
  tarunkumard/tarungatlingscript:v1.0 \
  ./gradlew gatlingRun-simulations.RuntimeParameters -DUSERS=500 -DRAMP_DURATION=5 -DDURATION=30
Now, to map the above docker run to Kubernetes using kubectl, I have created a pod whose gradlewcommand.yaml file is below:
apiVersion: v1
kind: Pod
metadata:
  name: gradlecommandfromcommandline
  labels:
    purpose: gradlecommandfromcommandline
spec:
  containers:
  - name: gradlecommandfromcommandline
    image: tarunkumard/tarungatlingscript:v1.0
    workingDir: /opt/gatling-fundamentals/
    command: ["./gradlew"]
    args: ["gatlingRun-simulations.RuntimeParameters", "-DUSERS=500", "-DRAMP_DURATION=5", "-DDURATION=30"]
  restartPolicy: OnFailure
Now the pod is created using the command below:
kubectl apply -f gradlewcommand.yaml
Now comes my actual requirement/question: how do I run or trigger a kubectl command so as to run the container inside the pod created above? Mind you, the pod name is gradlecommandfromcommandline.
Here is the command which solves the problem:
kubectl exec gradlecommandfromcommandline -- \
./gradlew gatlingRun-simulations.RuntimeParameters \
-DUSERS=500 -DRAMP_DURATION=5 -DDURATION=30
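If the pod was only just created, it may not be running yet when you exec into it; as a minimal sketch, you can wait for it first:
# Block until the pod reports Ready (or time out after two minutes)
kubectl wait --for=condition=Ready pod/gradlecommandfromcommandline --timeout=120s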

127.0.0.1:5000: getsockopt: connection refused in Minikube

Using minikube and docker on my local Ubuntu workstation I get the following error in the Minikube web UI:
Failed to pull image "localhost:5000/samples/myserver:snapshot-180717-213718-0199": rpc error: code = Unknown desc = Error response from daemon: Get http://localhost:5000/v2/: dial tcp 127.0.0.1:5000: getsockopt: connection refused
after I have created the below deployment config with:
kubectl apply -f hello-world-deployment.yaml
hello-world-deployment.yaml
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: hello-world
spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: hello-world
        tier: backend
    spec:
      containers:
      - name: hello-world
        image: localhost:5000/samples/myserver:snapshot-180717-213718-0199
        resources:
          requests:
            cpu: 100m
            memory: 100Mi
        env:
        - name: GET_HOSTS_FROM
          value: dns
        ports:
        - containerPort: 8080
And output from docker images:
REPOSITORY TAG IMAGE ID CREATED SIZE
samples/myserver latest aa0a1388cd88 About an hour ago 435MB
samples/myserver snapshot-180717-213718-0199 aa0a1388cd88 About an hour ago 435MB
k8s.gcr.io/kube-proxy-amd64 v1.10.0 bfc21aadc7d3 3 months ago 97MB
Based on this guide:
How to use local docker images with Minikube?
I have also run:
eval $(minikube docker-env)
and based on this:
https://github.com/docker/for-win/issues/624
I have added:
"InsecureRegistry": [
"localhost:5000",
"127.0.0.1:5000"
],
to /etc/docker/daemon.json
Any suggestion on what I am missing to get the image pull to work in Minikube?
I have followed the steps in the below answer but when I get to this step:
$ kubectl port-forward --namespace kube-system $(kubectl get po -n kube-system | grep kube-registry-v0 | awk '{print $1;}') 5000:5000
it just hangs like this:
$ kubectl port-forward --namespace kube-system $(kubectl get po -n kube-system | grep kube-registry-v0 | awk '{print $1;}') 5000:5000
Forwarding from 127.0.0.1:5000 -> 5000
Forwarding from [::1]:5000 -> 5000
and I get the same error in minikube dashboard after I create my deploymentconfig.
Based on the answer from BMitch I have now tried to create a local Docker registry and push an image to it with:
$ docker run -d -p 5000:5000 --restart always --name registry registry:2
$ docker pull ubuntu
$ docker tag ubuntu localhost:5000/ubuntu:v1
$ docker push localhost:5000/ubuntu:v1
Next when I do docker images I get:
$ docker images
REPOSITORY TAG IMAGE ID CREATED SIZE
ubuntu latest 74f8760a2a8b 4 days ago 82.4MB
localhost:5000/ubuntu v1 74f8760a2a8b 4 days ago 82.4MB
I have then updated my deploymentconfig hello-world-deployment.yaml to:
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: hello-world
spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: hello-world
        tier: backend
    spec:
      containers:
      - name: hello-world
        image: localhost:5000/ubuntu:v1
        resources:
          requests:
            cpu: 100m
            memory: 100Mi
        env:
        - name: GET_HOSTS_FROM
          value: dns
        ports:
        - containerPort: 8080
and
kubectl create -f hello-world-deployment.yaml
But in Minikube I still get a similar error:
Failed to pull image "localhost:5000/ubuntu:v1": rpc error: code = Unknown desc = Error response from daemon: Get http://localhost:5000/v2/: dial tcp 127.0.0.1:5000: getsockopt: connection refused
So it seems Minikube is not able to see the local registry I just created?
It looks like you're facing a problem with localhost on your computer versus localhost used within the context of the minikube VM.
To get the registry working, you have to set up additional port forwarding.
If your minikube installation is currently broken due to a lot of attempts to fix the registry problem, I would suggest restarting the minikube environment:
minikube stop && minikube delete && rm -fr $HOME/.minikube && minikube start
Next, get kube registry yaml file:
curl -O https://gist.githubusercontent.com/coco98/b750b3debc6d517308596c248daf3bb1/raw/6efc11eb8c2dce167ba0a5e557833cc4ff38fa7c/kube-registry.yaml
Then, apply it on minikube:
kubectl create -f kube-registry.yaml
Test if the registry inside the minikube VM works (note that the curl has to run inside the VM):
minikube ssh
curl localhost:5000
On Ubuntu, forward the port so the registry is reachable on your host at port 5000:
kubectl port-forward --namespace kube-system $(kubectl get po -n kube-system | grep kube-registry-v0 | awk '{print $1;}') 5000:5000
If you would like to share your machine's private registry with minikube, you may be interested in the "sharing a local registry with minikube" blog entry.
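With the port-forward from the previous step running, pushes to localhost:5000 on your host land in the in-cluster registry; a minimal sketch using the image name from the question:
docker tag samples/myserver:snapshot-180717-213718-0199 localhost:5000/samples/myserver:snapshot-180717-213718-0199
docker push localhost:5000/samples/myserver:snapshot-180717-213718-0199
After that, the image reference localhost:5000/samples/myserver:snapshot-180717-213718-0199 in the deployment should resolve from inside the cluster.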
If you're specifying the image source as the local registry server, you'll need to run a registry server there and push your images to it.
You can self-host a registry server with one of several 3rd-party options, or run the one that is packaged inside a Docker container: https://hub.docker.com/_/registry/
This only works in a single-node environment unless you set up TLS keys, trust the CA, or tell all the other nodes about the additional insecure registry.
You can also specify the imagePullPolicy as Never.
Both of these solutions were already in your linked question and I'm not seeing any evidence of you trying either in this question. Without showing how you tried those steps and experienced a different problem, this question should probably be closed as a duplicate.
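For reference, the imagePullPolicy route is just one extra field on the container spec; a minimal sketch, assuming the image was built against minikube's Docker daemon (eval $(minikube docker-env) followed by a rebuild):
    spec:
      containers:
      - name: hello-world
        image: samples/myserver:snapshot-180717-213718-0199
        imagePullPolicy: Never   # never contact a registry; use the node-local image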
It is unclear from your question how many nodes you have.
If you have more than one, your problem is in your deployment with replicas: 1.
If not, please ignore this answer.
You don't know where that replica will be scheduled. So if you don't have the local Docker registry on all of your nodes, and you get unlucky and Kubernetes schedules the pod onto a node without the registry, you will end up with that error.
The same thing happened to me: the same connection refused error, because the deployment went to a node without a local Docker registry.
As I am typing this, I think this can be resolved with an ingress.
You run the registry as a Deployment, add a Service, add a volume for the images, and expose it through an Ingress.
A little more work, but at least all your nodes (all of your pods, sorry) will be in sync.
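A minimal sketch of that idea (names are illustrative; a real setup also needs persistent storage and either TLS or an insecure-registry exception on every node):
apiVersion: apps/v1
kind: Deployment
metadata:
  name: registry
spec:
  replicas: 1
  selector:
    matchLabels:
      app: registry
  template:
    metadata:
      labels:
        app: registry
    spec:
      containers:
      - name: registry
        image: registry:2
        ports:
        - containerPort: 5000
---
apiVersion: v1
kind: Service
metadata:
  name: registry
spec:
  selector:
    app: registry
  ports:
  - port: 5000
    targetPort: 5000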

Private registry with Kubernetes

I'm trying (for test purposes) to deploy to Kubernetes a very simple HTTP pong image:
FROM golang:onbuild
EXPOSE 8000
I built the docker image:
docker build -t pong .
I started a private registry (with certificates):
docker run -d --restart=always --name registry -v `pwd`/certs:/certs -e REGISTRY_HTTP_ADDR=0.0.0.0:443 -e REGISTRY_HTTP_TLS_CERTIFICATE=/certs/domain.crt -e REGISTRY_HTTP_TLS_KEY=/certs/domain.key -p 443:443 registry:2.6.2
I created a secret:
kubectl create secret docker-registry regsecret --docker-server=localhost --docker-username=johndoe --docker-password=johndoe --docker-email=johndoe@yopmail.com
I uploaded the image:
docker tag 9c0bb659fea1 localhost/pong
docker push localhost/pong
I have an insecure registry configuration in place:
{
  "storage-driver" : "aufs",
  "insecure-registries" : [
    "localhost"
  ],
  "debug" : true,
  "experimental" : true
}
So I tried to create my kubernetes pods with:
apiVersion: v1
kind: Pod
metadata:
  name: pong
spec:
  containers:
  - name: pong
    image: localhost/pong:latest
    imagePullPolicy: Always
  imagePullSecrets:
  - name: regsecret
I'm on MacOS with docker Version 17.12.0-ce-mac49 (21995).
If I use image: localhost/pong:latest I get:
waiting:
  message: 'rpc error: code = Unknown desc = Error response from daemon: error
    parsing HTTP 404 response body: invalid character ''d'' looking for beginning
    of value: "default backend - 404"'
  reason: ErrImagePull
I've been stuck on this since the beginning of the week, without success.
It was not a problem with the registry configuration.
I forgot to mention that I was using minikube.
For the flags to be taken into account, I had to delete the minikube configuration and recreate it:
minikube delete
minikube start --insecure-registry="10.0.4.0/24"
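To check that the flag was actually picked up, one option (a sketch, not the only way) is to inspect the Docker daemon inside the minikube VM:
minikube ssh
docker info | grep -A 3 "Insecure Registries"
The registry range passed with --insecure-registry should show up in that section.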
Hey, try to browse your registry using this nice front-end app: https://hub.docker.com/r/konradkleine/docker-registry-frontend/
Perhaps this will give you some hints; it looks like the registry has some configuration issue...
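A minimal sketch of running it against your registry (the ENV_DOCKER_REGISTRY_* variable names come from that image's documentation; double-check them there and adjust host/port to your setup):
docker run -d -p 8080:80 \
  -e ENV_DOCKER_REGISTRY_HOST=<your registry host> \
  -e ENV_DOCKER_REGISTRY_PORT=443 \
  konradkleine/docker-registry-frontend:v2
Then open http://localhost:8080 and check whether your pushed repositories are listed.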
Instead of deleting the cluster first (minikube delete), the configuration JSON may be edited at ~/.minikube/config/config.json to add this section accordingly:
{
  ...
  "HostOptions": {
    ...
    "InsecureRegistry": [
      "private.docker.registry:5000"
    ],
    ...
  },
  ...
}
This only works on already-started clusters, as the configuration file won't be populated otherwise. The answer above using minikube start --insecure-registry="..." is fine as well.

Docker compose yml inheritance

There are two tasks: run the app container, and run an almost identical deploy-app container. One of the differences, for example, is that the deploy container does not publish any ports.
So, I made configs for these tasks...
./dockerfiles/base.yml:
app:
  net: docker_internal_net
  environment:
    APPLICATION_SERVER: "docker"
./dockerfiles/base.run.yml:
app:
  container_name: project-app
  # set the build context to the project root
  build: ..
  volumes:
    - /var/log/project/nginx:/var/log/nginx
    - /var/log/project/php-fpm:/var/log/php5-fpm
    - ..:/var/www/project
./dockerfiles/dev/run.yml:
app:
  dockerfile: ./dockerfiles/dev/run-app/Dockerfile
  ports:
    - "80:80"
    - "22:22"
  environment:
    DEV_SSH_PUBKEY: "$SSH_PUBLIC_KEY"
    APPLICATION_PLATFORM: "dev"
./dockerfiles/dev/build.yml:
app:
  container_name: project-app-deploy
  # set the build context to the project root
  build: ../..
  dockerfile: ./dockerfiles/dev/build-app/Dockerfile
  environment:
    APPLICATION_PLATFORM: "dev"
  volumes:
    - ../..:/var/www/project
So I can run the app container like this:
$ docker-compose -f ./dockerfiles/base.yml -f ./dockerfiles/base.run.yml -f ./dockerfiles/dev/run.yml up -d app
Creating project-app
$ docker ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
dae45f3f2c42 dockerfiles_app "/sbin/my_init" 2 seconds ago Up 1 seconds 0.0.0.0:2223->22/tcp, 0.0.0.0:8081->80/tcp project-app
Everything is okay. But if I then try to run the deploy-app container, I receive this message:
$ docker-compose -f ./dockerfiles/base.yml -f ./dockerfiles/dev/build.yml up -d app
Recreating project-app
WARNING: Service "app" is using volume "/var/www/project" from the previous container. Host mapping ".." has no effect. Remove the existing containers (with `docker-compose rm app`) to use the host volume mapping.
$ docker ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
53059702c09b dockerfiles_app "/sbin/my_init" 6 seconds ago Up 4 seconds 22/tcp, 80/tcp project-app-deploy
Is this because both of them share one local directory? But why can I run the deploy-app container manually, without docker-compose?
$ docker run -d --net docker_internal_net -e APPLICATION_SERVER=docker -e APPLICATION_PLATFORM=dev --name project-app-deploy -v ..:/var/www/project mybaseimage
86439874b8df561f529fde0d1e31824d70dc7e2a2377cd529331a2d7fcb00467
$ docker ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
86439874b8df mybaseimage "/sbin/my_init" 4 seconds ago Up 3 seconds 22/tcp, 80/tcp project-app-deploy
40641f02a09b dockerfiles_app "/sbin/my_init" 2 minutes ago Up 2 minutes 0.0.0.0:2223->22/tcp, 0.0.0.0:8081->80/tcp project-app
I've solved my problem with the extends keyword in the following way:
1) Making these changes to my ./dockerfiles/dev/build.yml file:
deploy-app:
  extends:
    file: ../base.yml
    service: app
  container_name: project-app-deploy
  # set the build context to the project root
  build: ../..
  dockerfile: ./dockerfiles/dev/build-app/Dockerfile
  environment:
    APPLICATION_PLATFORM: "dev"
  volumes:
    - ../..:/var/www/project
2) Running my deploy-app container like this:
$ docker-compose -f ./dockerfiles/dev/build.yml up -d deploy-app
Building deploy-app
...
Successfully built 74750fe274c6
Creating lovetime-app-deploy
$ docker ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
9bb2af79ffaa dev_deploy-app "/sbin/my_init" 5 seconds ago Up 4 seconds 22/tcp, 80/tcp project-app-deploy
812b8824f1f0 dockerfiles_app "/sbin/my_init" 3 minutes ago Up 3 minutes 0.0.0.0:2223->22/tcp, 0.0.0.0:8081->80/tcp project-app
$ docker inspect -f '{{ .Mounts }}' project-app-deploy
[{ ...... /var/www/project rw true}]
Update:
According to the documentation, this keyword is not supported in newer Compose file versions:
The extends keyword is supported in earlier Compose file formats up to Compose file version 2.1 (see extends in v1 and extends in v2), but is not supported in Compose version 3.x.
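On Compose file format 3.x, the usual replacement is the same multiple -f override mechanism used earlier in this question: keep the shared keys in a base file and the deploy-specific keys in an override file, then pass both. A minimal sketch (file names and values are illustrative, not my actual project layout):
# docker-compose.yml (shared base)
version: "3"
services:
  app:
    build:
      context: ..
      dockerfile: ./dockerfiles/dev/build-app/Dockerfile
    environment:
      APPLICATION_SERVER: "docker"
# docker-compose.deploy.yml (override: no published ports)
version: "3"
services:
  app:
    container_name: project-app-deploy
    environment:
      APPLICATION_PLATFORM: "dev"
    volumes:
      - ..:/var/www/project
# Run with both files; later files override/extend the earlier ones:
# docker-compose -f docker-compose.yml -f docker-compose.deploy.yml up -d app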
