Microservice - Docker - Kubernetes

I have created a docker container running on: http://localhost:8080/swagger/index.html
I have a kubernetes pod running on: http://localhost:32729/swagger/index.html.
As of now I can access the container directly using http://localhost:8080/swagger/index.html, but I want to restrict that: it should only be accessible through the kubernetes pod at http://localhost:32729/swagger/index.html.
Thanks.

There is an EXPOSE directive in the Dockerfile - EXPOSE only documents the port, and the container is only reachable from the host if you also publish the port (e.g. with -p 8080:8080). If you stop publishing it, the app stays reachable only over the cluster network, e.g. through a Kubernetes Service.
Kubernetes + nginx: https://kubernetes.io/docs/tutorials/services/connect-applications-service/
A similar question on Stack Overflow, without a server involved: what is the relationship between EXPOSE in the Dockerfile and targetPort in the Service YAML and the actual running port in the Pod?
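As a concrete sketch of that setup (the names and labels here are assumptions, not from the question): a Service of type NodePort fronts the pod, and the container port is never published on the host with -p, so the app is only reachable through the Service's node port.

```yaml
# Hypothetical Service; the app is assumed to listen on 8080 inside the pod
# and to be selected by the label app: swagger-app.
apiVersion: v1
kind: Service
metadata:
  name: swagger-app
spec:
  type: NodePort          # exposes the service on a node port (e.g. 32729)
  selector:
    app: swagger-app
  ports:
    - port: 80            # service port inside the cluster
      targetPort: 8080    # containerPort the app listens on
```

The key point is to run the container without docker run -p 8080:8080; port 8080 then exists only inside the pod's network namespace, and traffic has to come in through the Service.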

Related

Docker Compose: Remap a container's port to be the same inside the network and on the host

I have two dockerized applications that are part of a docker network and which both listen on port 8080. I need them both to be exposed on the host machine, so I publish them on host ports 8080 and 8081 respectively.
app-1:
  ports:
    - "8080:8080"
app-2:
  ports:
    - "8081:8080"
I don't have control over these applications (I cannot change their ports), they are only a part of an end-to-end test suite that needs to be run in order to execute tests.
Problem: Depending on whether I execute the tests in a docker container (a third application in the same docker-compose file) or locally, I have to use different ports (8080 or 8081), because the requests go either within the docker network or over the host machine. It is inconvenient.
Question: Is there a way to remap ports in the compose file so that the port is the same inside and outside the docker network? For instance, it would be great if I could refer to app-2 using port 8081 inside the docker network as well.
I would appreciate any tips.
I faced a similar problem and resolved it using the following method. It was a NodeJS/Express application.
1. Ran a container on the defined port and connected to the container's CLI. Found the environment file in which the port was defined.
2. Copied that file to my local machine using docker cp.
3. Modified the file and updated the port.
4. Stopped the container.
5. Replaced the environment file inside the container with the updated file, again using docker cp.
6. Committed that container as an image using docker commit.
7. Ran a container from the newly committed image on the updated port.
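The steps above can be sketched as a command sequence. The container name, env-file path, and port variable here are assumptions for illustration, not from the original answer:

```shell
# Inspect the running container to find the env file (path is hypothetical)
docker exec -it my-app sh

# Copy the file out, change the port locally, stop the container, copy it back
docker cp my-app:/app/.env ./.env
sed -i 's/PORT=8080/PORT=8081/' ./.env
docker stop my-app
docker cp ./.env my-app:/app/.env

# Persist the change as a new image and run it on the new port
docker commit my-app my-app:port-8081
docker run -d -p 8081:8081 my-app:port-8081
```

Note that docker commit bakes the current container filesystem into a new image, so this survives container recreation, unlike editing the file in a running container alone.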

Link between docker container and Minikube

Is it possible to link a docker container with a service running in minikube? I have a mysql container which I want to access using a PMA pod in minikube. I have tried adding PMA_HOST in the yaml file while creating the pod, but I get an error on the PMA GUI page:
mysqli_real_connect(): (HY000/2002): Connection refused
If I understand you correctly, you want to access a service (mysql) running outside the kube cluster (minikube) from inside that cluster.
You have two ways to achieve this:
1. Make sure your networking is configured in a way that allows traffic to pass both ways correctly. Then you should be able to access that mysql service directly by its address, or by creating an external service inside the kube cluster (create a Service with no selector and manually configure external Endpoints).
2. Use something like telepresence.io to expose a locally developed service inside the remote kubernetes cluster.
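The first option (a Service with no selector plus manually configured Endpoints) can look like this; the names and the IP address are assumptions for illustration:

```yaml
# Headless entry point for the external mysql; no selector, so no
# Endpoints are generated automatically.
apiVersion: v1
kind: Service
metadata:
  name: external-mysql
spec:
  ports:
    - port: 3306
---
apiVersion: v1
kind: Endpoints
metadata:
  name: external-mysql     # must match the Service name exactly
subsets:
  - addresses:
      - ip: 192.168.99.1   # host IP reachable from inside minikube (assumption)
    ports:
      - port: 3306
```

Inside the cluster, PMA_HOST could then be set to external-mysql, and kube-dns resolves it to the manually configured endpoint.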

Rancher CLI random host port mapping

I am planning to use rancher for managing my containers. On my dev box, we plan to bring up several containers each serving a REST api.
I am able to automate building my containers using Jenkins and want to run the containers with rancher to take advantage of random host port mapping. I can do this via the rancher UI but have been unable to find a way to automate it using the cli.
ex:
Jenkins builds Container_A, which exposes 8080 -> Jenkins then executes the rancher cli to run the container, mapping 8080 to a random host port. The same goes for Container_B exposing 8080.
Hope my question makes sense.
Thanks
Vijay
You should just be able to do this in the service definition in the Docker compose yaml file:
...
publish:
  - 8080
...
If you generate something in the UI and look at the configuration of the stack, you'll see the corresponding compose yml.
Alternatively, you can use:
rancher run --publish 8080 nginx
then get the randomly assigned port:
rancher inspect <stackname>/<service_name> | jq .publicEndpoints[].port

docker deploy won't publish port in swarm

I've got a swarm set up with two nodes, one manager and one worker. I'd like to have a port published in the swarm so I can access my applications, and I wonder how to achieve this.
version: '2'
services:
  server:
    build: .
    image: my-hub.company.com/application/server:latest
    ports:
      - "80:80"
This exposes port 80 when I run docker-compose up and it works just fine, however when I run a bundled deploy
docker deploy my-service
This won't publish the port, so it just says 80/tcp in docker ps instead of pointing at a port. Maybe this is because I need to attach a load balancer, run some fancy command, or add another layer of config to actually expose this port in a multi-host swarm.
Can someone help me understand what I need to configure/do to make this expose a port?
My best case scenario would be that port 80 is exposed, and if I access it from different hostnames it will send me to different applications.
Update:
It seems to work if I run the following commands after deploying the application
docker service update -p 80:80 my-service_server
docker kill <my-service_server id>
I found this repository for running an HA proxy; it seems great and is supported by docker themselves, however I cannot seem to apply this separately to my services using the new swarm mode.
https://github.com/docker/dockercloud-haproxy
There's a nice description in the bottom describing how the network should look:
Internet -> HAProxy -> Service_A -> Container A
However I cannot find a way to link services through the docker service create command. What looks optimal now would be a way to set up a network, so that when I apply that network to a service, HAProxy picks it up.
-- Marcus
As far as I understand, for the moment you can only publish ports by updating the service after its creation, like this:
docker service update my-service --publish-add 80:80
Swarm mode publishes ports in a different way. It won't show up in docker ps because it's not publishing the port on a single host; it publishes the port on all nodes so that it can load balance between the service replicas.
You should see the port from docker service inspect my-service.
Any other service should be able to connect to my-service:80
docker service ls will display the port mappings.
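For completeness, a newly created service can also publish the port directly at creation time, which avoids the update-then-kill workaround. The service and image names below are placeholders matching the question:

```shell
# Create the service with the port published on the swarm routing mesh
docker service create --name my-service \
  --publish 80:80 \
  my-hub.company.com/application/server:latest

# Verify the published port and the service state
docker service inspect my-service --format '{{json .Endpoint.Ports}}'
docker service ls
```

With the routing mesh, port 80 is then reachable on every node in the swarm, regardless of which node the task actually runs on.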

I can not access my Container Docker Image by HTTP

I created an image with apache2 running locally in a docker container via a Dockerfile exposing port 80, then pushed it to my DockerHub repository.
I created a new Container Engine instance in my project on Google Cloud. Within it I have two nodes, the master and node1.
Then created a Pod specifying the name of my image in DockerHUB and configuring Ports "containerPort" and "hostPort" for 6379 and 80 respectively.
I accessed node1 via SSH and ran $ sudo docker ps -l on the command line; there I found that my docker container is running.
I created a service for the instance, configuring the ports as in the pod: "containerPort" and "hostPort" as 6379 and 80 respectively.
I checked that the firewall allows access to port 80. Even though I did not deem it necessary, I also created a rule to allow access through port 6379.
But when I enter http://IP_ADDRESS:PORT it is not available.
Any idea about what it's wrong?
If you are using a service to access your pod, you should configure the service to use an external load balancer (similarly to what is done in the guestbook example's frontend service definition) and you should not need to specify a host port in your pod definition.
Once you have an external load balancer created, then you should open a firewall rule to allow external access to the load balancer which will allow packets to reach the service (and pods backing it) running in your cluster.
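A minimal sketch of such a service definition, assuming the pods carry the label app: apache and listen on port 80 (both are assumptions, not from the question):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: apache-frontend
spec:
  type: LoadBalancer   # asks the cloud provider to provision an external LB
  selector:
    app: apache        # must match the pod labels
  ports:
    - port: 80         # external port on the load balancer
      targetPort: 80   # containerPort the pod listens on
```

Once the load balancer is provisioned, kubectl get service apache-frontend shows the external IP to open the firewall rule for; no hostPort is needed in the pod spec.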
