I am planning to use Rancher to manage my containers. On my dev box, we plan to bring up several containers, each serving a REST API.
I am able to automate building my containers using Jenkins and want to run the containers through Rancher to take advantage of random host port mapping. I can do this through the Rancher UI but cannot find a way to automate it using the CLI.
Example:
Jenkins builds Container_A, which exposes 8080 -> Jenkins then executes the Rancher CLI to run the container, mapping 8080 to a random host port. The same applies to Container_B, which also exposes 8080.
Hope my question makes sense.
Thanks
Vijay
You should just be able to do this in the service definition in the Docker compose yaml file:
...
ports:
  - "8080"
...
If you generate something in the UI and look at the configuration of the stack, you'll see the corresponding compose yml.
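For reference, a minimal sketch of what that compose fragment could look like; the service and image names are assumed here, and listing only the container port lets a random host port be assigned:
version: '2'
services:
  container_a:
    image: my-registry/container_a:latest   # assumed image name
    ports:
      - "8080"   # container port only, so the host port is chosen at random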
Alternatively, you can use:
rancher run --publish 8080 nginx
then get the randomly assigned port:
rancher inspect <stackname>/<service_name> | jq '.publicEndpoints[].port'
I have two dockerized applications that are part of a Docker network and that both listen on port 8080. I need them both to be reachable on the host machine, which is why I map them to host ports 8080 and 8081 respectively.
app-1:
  ports:
    - "8080:8080"
app-2:
  ports:
    - "8081:8080"
I don't have control over these applications (I cannot change their ports); they are only part of an end-to-end test suite that needs to be run to execute the tests.
Problem: Depending on whether I execute the tests in a Docker container (a third application in the same docker-compose file) or locally, I have to use different ports (8080 or 8081), because the requests go either within the Docker network or over the host machine. It is inconvenient.
Question: Is there a way to remap ports in the compose file so that the port is the same inside and outside the Docker network? For instance, it would be great if I could refer to app-2 using port 8081 inside the Docker network as well.
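To make the mismatch concrete, here is a sketch of the compose file described above (the image names and the test service name are assumptions):
version: '2'
services:
  app-1:
    image: app-1-image        # assumed
    ports:
      - "8080:8080"           # app-1:8080 inside the network, localhost:8080 on the host
  app-2:
    image: app-2-image        # assumed
    ports:
      - "8081:8080"           # app-2:8080 inside the network, but localhost:8081 on the host
  e2e-tests:
    image: e2e-tests-image    # assumed; must target app-2:8080 inside but localhost:8081 outside
    depends_on:
      - app-1
      - app-2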
I would appreciate any tips.
I faced a similar problem and resolved it with the following method. It was a Node.js Express application.
1. Ran a container from the image on the defined port and connected to the container's CLI. Found the environment file in which the port was defined.
2. Copied that file to my local machine using docker cp.
3. Modified the file and updated the port.
4. Stopped the container.
5. Replaced the environment file inside the container with the updated file, again using docker cp.
6. Committed that container as an image using docker commit.
7. Ran the container on the updated port, using the newly committed image.
I have a Docker image that contains a Python file which reads its input from standard input via sys.stdin. I can run the image using the following command:
cat file.csv | docker run -i -t my_image
It pipes the contents of file.csv into the container, and I get the output as expected.
Now I want to deploy this image to Kubernetes. I can run the image on the server using Docker without any problems. But when I curl it, it should send a response back, and I am not getting one because I do not have a web server listening on any port. I went ahead and built a deployment using the following command:
kubectl run -i my_deployment --image=gcr.io/${PROJECT_ID}/my_image:v1 --port 8080
It built the deployment and I can see the pods running. Then I exposed it:
kubectl expose deployment my_deployment --type=LoadBalancer --port 80 --target-port 8080
But if I try to access it using the assigned IP with curl,
curl http://allocated_ip
I get a "connection refused" response.
How can I deploy this Docker image as a service on Kubernetes and send the contents of a file as input to the service? Do I need a web server for that?
Kubernetes generally assumes the containers it deploys are long-lived and autonomous. If you're deploying something in a Pod, particularly via a Deployment, it should be able to run on its own without any particular inputs. If it immediately exits, Kubernetes will restart it, and you'll quickly wind up in the dreaded CrashLoopBackOff state.
In short: you need to redesign your container so that it does not use stdin and stdout as its primary interface.
Your instinct to add a network endpoint into the service is probably correct, but Kubernetes won't do that on its own. If you rebuild your application to have, say, a Flask server and listen on a port, that's something you can readily deploy to Kubernetes. If the application expects data to come in on stdin and its results to go to stdout, adding the Kubernetes networking metadata won't help anything: in your example if nothing is listening inside the container on port 8080 then a network connection will never go anywhere.
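As a rough sketch of what that could look like once the image is rebuilt to serve HTTP on port 8080 (all names and the image tag below are assumptions, not your actual setup):
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-deployment
spec:
  replicas: 1
  selector:
    matchLabels:
      app: my-app
  template:
    metadata:
      labels:
        app: my-app
    spec:
      containers:
        - name: my-app
          image: gcr.io/my-project/my_image:v2   # assumed tag of the HTTP-serving rebuild
          ports:
            - containerPort: 8080
---
apiVersion: v1
kind: Service
metadata:
  name: my-service
spec:
  type: LoadBalancer
  selector:
    app: my-app
  ports:
    - port: 80
      targetPort: 8080
With something listening on 8080 inside the container, curl http://allocated_ip would then actually reach it.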
I am assuming Kubernetes is running on premises. I would do the following.
1. Create an nginx or Apache ingress controller deployment. Using Helm, it is pretty easy with:
   helm install stable/nginx-ingress
2. Create a deployment exposing port 8080, or whatever port you would expose when running it with Docker. The actual deployment would have an API to which I could send content via a POST request.
3. Create a service with port 8080 and targetPort 8080. It should be of type ClusterIP.
4. Create an ingress with the hostname and a servicePort of 8080 (see the sketch after this list).
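A rough sketch of steps 3 and 4, with the service name and hostname assumed (shown in the older extensions/v1beta1 Ingress format):
apiVersion: v1
kind: Service
metadata:
  name: my-api
spec:
  type: ClusterIP
  selector:
    app: my-api
  ports:
    - port: 8080
      targetPort: 8080
---
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: my-api
spec:
  rules:
    - host: my-api.example.com    # assumed hostname
      http:
        paths:
          - path: /
            backend:
              serviceName: my-api
              servicePort: 8080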
Since you are piping the file in when running the command, I assume that once the content is inside the container you do not need to update the CSV.
The simplest way to read that file would be to ADD it in your Dockerfile and then open it with Python's open function.
You would have a line like
ADD file.csv /home/file.csv
And in your Python code, something like:
file_in = open('/home/file.csv', 'r')
Note that if you want to change the file, you would need to update the Dockerfile, build again, push to the registry and re-deploy to GKE. If you do not want to follow this process, you can use a ConfigMap.
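If you go the ConfigMap route, a minimal sketch could look like the following; the names, the mount path, and the CSV content are placeholders, and the Python code would then open /data/file.csv instead:
apiVersion: v1
kind: ConfigMap
metadata:
  name: csv-data
data:
  file.csv: |
    col1,col2
    value1,value2
---
# fragment of the pod template in the deployment
spec:
  containers:
    - name: my-app
      image: gcr.io/my-project/my_image:v1
      volumeMounts:
        - name: csv
          mountPath: /data
  volumes:
    - name: csv
      configMap:
        name: csv-data
Updating the ConfigMap then only needs a kubectl apply and a pod restart rather than a full image rebuild and push.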
Also, if this answers your question, make sure to link it from the same question you posted on Server Fault.
How do I use the confluent/cp-kafka image in Docker Compose so that it is exposed both on localhost and under its Docker network container name, kafka?
Please do not mark this as a duplicate of:
Connect to docker kafka container from localhost and another docker container
Cannot produce message to kafka from service running in docker
These do not solve my issue because the methods they use are deprecated by confluent/cp-kafka, and I want to connect both on localhost and on the Docker network.
In the configure script of confluent/cp-kafka they do this annoying thing:
# By default, LISTENERS is derived from ADVERTISED_LISTENERS by replacing
# hosts with 0.0.0.0. This is good default as it ensures that the broker
# process listens on all ports.
if [[ -z "${KAFKA_LISTENERS-}" ]]
then
export KAFKA_LISTENERS
KAFKA_LISTENERS=$(cub listeners "$KAFKA_ADVERTISED_LISTENERS")
fi
It always rewrites the hosts in whatever I give KAFKA_ADVERTISED_LISTENERS to 0.0.0.0! Using the Docker network, with
KAFKA_ADVERTISED_LISTENERS: PLAINTEXT://localhost:9093,PLAINTEXT://kafka:9093
I expect the listeners to be either localhost:9092 or 0.0.0.0:9092, plus some Docker IP such as PLAINTEXT://172.17.0.1:9093 (whatever kafka resolves to on the Docker network).
Currently I can only get one or the other to work. Using localhost, it only works on the host system; no Docker containers can access it. Using kafka, it only works inside the Docker network; no host applications can access it. I want it to work with both. I am using Docker Compose so that zookeeper, kafka, redis, and my application all start up together. I have other applications that will start up outside Docker.
Update
So when I set PLAINTEXT://localhost:9092, I can access Kafka running in Docker from outside Docker.
When I set PLAINTEXT://kafka:9092, I cannot access Kafka running in Docker from outside Docker.
This is expected. However, with PLAINTEXT://localhost:9092,PLAINTEXT://kafka:9093 I would expect to be able to access Kafka running in Docker both inside and outside Docker. Instead, the confluent/cp-kafka image wipes out localhost and kafka, sets them both to 0.0.0.0, and then throws an error that I set two different ports for the same IP...
Maybe I'm just clashing into some opinionated docker image and should look for a different image...
"Maybe I'm just clashing into some opinionated docker image and should look for a different image..."
The image is fine. You might want to read this explanation of the listeners.
tl;dr - you don't want to (and shouldn't?) use the same listener "protocol" in different networks.
Use advertised.listeners; there is no need to edit listeners:
KAFKA_ADVERTISED_LISTENERS: PLAINTEXT://broker:9092,PLAINTEXT_HOST://localhost:29092
When PLAINTEXT://localhost:9093 is what is loaded inside the container, you need to add a port mapping for 9093, which should be self-explanatory; you then connect to localhost:9093 and it should work.
Then, if you also had PLAINTEXT://kafka:9092, that will only work within the Docker Compose network, not externally, because that hostname only resolves inside Docker's own network; that's how Docker networking works. You should be able to run other applications as part of that Docker network with the --network flag, or link containers using Docker Compose.
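Putting that together, a hedged compose fragment for the broker, modeled on Confluent's published examples (the image tag and the zookeeper service name are assumptions; the service is named kafka to match the question):
  kafka:
    image: confluentinc/cp-kafka:5.0.0       # tag assumed
    depends_on:
      - zookeeper
    ports:
      - "29092:29092"                        # host applications connect to localhost:29092
    environment:
      KAFKA_BROKER_ID: 1
      KAFKA_ZOOKEEPER_CONNECT: zookeeper:2181
      KAFKA_LISTENER_SECURITY_PROTOCOL_MAP: PLAINTEXT:PLAINTEXT,PLAINTEXT_HOST:PLAINTEXT
      KAFKA_ADVERTISED_LISTENERS: PLAINTEXT://kafka:9092,PLAINTEXT_HOST://localhost:29092
      KAFKA_INTER_BROKER_LISTENER_NAME: PLAINTEXT
      KAFKA_OFFSETS_TOPIC_REPLICATION_FACTOR: 1
Containers on the compose network use kafka:9092, while applications on the host use localhost:29092.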
Keep in mind that if you're running on Mac, the recommended way (as per the Confluent docs) is to run these containers in Docker Machine, in a VM, where you can manage the external port mappings correctly using the --net=host flag of Docker. However, using the blog above, it all works fine on a Mac outside a VM.
I have GitLab CI and GitLab running in containers. A project is registered with a GitLab Runner using the Docker executor. Everything is OK, and I have set privileged mode to true. There are flags for docker run options such as volume sharing, privileged mode, image, services, links, etc., but I could not find a flag in the [runners.docker] section for exposing ports. My aim is to run a pipeline whose containers can communicate over their ports.
Is it possible to implement this with GitLab CI and the runner?
Normally that's what services are for. You take a container whose ports you want to expose and define it as a service. That way, there are no exposed ports, but there is a service link which you can use for inter-container communication. That's valid for the Docker executor; with the Kubernetes executor, all services are part of the pod and therefore available directly on localhost.
In other words: if, for example, you need PostgreSQL for your build job on its default port of 5432, you just start postgres:latest as a service for your job. You can then reference it via postgres:5432 with the Docker executor and via localhost:5432 with the Kubernetes executor.
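A hedged .gitlab-ci.yml sketch of that pattern (the job name, build image, credentials, and test command are all assumptions):
test:
  image: python:3.6                  # assumed build image
  services:
    - postgres:latest
  variables:
    POSTGRES_DB: test_db
    POSTGRES_USER: runner
    POSTGRES_PASSWORD: runner_password
  script:
    - ./run-tests.sh                 # reaches the database at postgres:5432 (Docker executor) or localhost:5432 (Kubernetes executor)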
If services do not fit your use case, you might want to expand your question to explain where they fall short; there might be an alternative answer.
I've got a swarm set up with two nodes, one manager and one worker. I'd like to have a port published in the swarm so I can access my applications, and I wonder how to achieve this.
version: '2'
services:
  server:
    build: .
    image: my-hub.company.com/application/server:latest
    ports:
      - "80:80"
This publishes port 80 when I run docker-compose up and it works just fine. However, when I run a bundled deploy
docker deploy my-service
this won't publish the port, so docker ps just shows 80/tcp instead of a host port mapping. Maybe this is because I need to attach a load balancer, run some fancy command, or add another layer of configuration to actually expose this port in a multi-host swarm.
Can someone help me understand what I need to configure or do to make this publish a port?
My best-case scenario would be that port 80 is published, and accessing it from different hostnames sends me to different applications.
Update:
It seems to work if I run the following commands after deploying the application
docker service update -p 80:80 my-service_server
docker kill <my-service_server id>
I found this repository for running an HAProxy; it seems great and is supported by Docker themselves. However, I cannot seem to apply it separately to my services using the new swarm mode.
https://github.com/docker/dockercloud-haproxy
There's a nice description at the bottom of how the network should look:
Internet -> HAProxy -> Service_A -> Container A
However, I cannot find a way to link services through the docker service create command. Ideally, I am looking for a way to set up a network such that, when I attach it to a service, HAProxy picks that service up.
-- Marcus
As far as I understand, for the moment you can only publish ports by updating the service after its creation, like this:
docker service update my-service --publish-add 80:80
Swarm mode publishes ports in a different way. The mapping won't show up in docker ps because the port is not published on a single host; it is published on all nodes so that swarm can load-balance between service replicas.
You should see the port from docker service inspect my-service.
Any other service should be able to connect to my-service:80.
docker service ls will display the port mappings.