Is there any way to do this:
run one service (container) with the main application - a server (a Flask application);
the server allows other services to be run, which are also Flask applications;
but I want to run each new service in a separate container?
For example, the server has an endpoint /services/{id}/run, where each id is the id of some service. The Docker image is the same for all services, and each service runs on a separate port.
I would like something like this:
a request to the server - <host>/services/<id>/run -> the application on the server runs some magic command / sends a message somewhere -> the service with that id starts in a new container.
I know that, at least locally, I can use Docker-in-Docker or simply mount the Docker socket into a container and work with Docker from inside that container. But I would like to find a way that works across multiple machines (each service can run on a different machine).
For Kubernetes: I know how to create and run pods and deployments, but I can't find out how to start a new container on command from another container. Can I somehow communicate with Kubernetes from inside a container to start a new container?
Generally:
can I start a new container from another container without Docker-in-Docker and without mounting the Docker socket;
can I do it with/without Kubernetes?
Thanks in advance.
I've compiled all of the links that were in the comments under the question. I would advise taking a look at them:
Docker:
Stack Overflow: control Docker from another container.
The link explaining the security considerations is not working, but I've managed to retrieve it via the Web Archive: Don't expose the Docker socket (not even to a container)
Exposing dockerd API
Docker Engine Security
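If you do go the route of exposing the dockerd API over TCP (only behind TLS client-certificate authentication, as the Docker Engine Security link stresses), a container can then start sibling containers on another machine by calling the Engine API directly. A rough sketch, where the host name, port, image and certificate paths are placeholders:
# create a container for service <id> on a remote Docker host that exposes its API over TLS
curl -s -X POST "https://docker-host:2376/v1.40/containers/create?name=service-123" \
  --cert client-cert.pem --key client-key.pem --cacert ca.pem \
  -H "Content-Type: application/json" \
  -d '{"Image": "my-service-image", "ExposedPorts": {"5000/tcp": {}}}'
# then start it, using the Id returned by the call above
curl -s -X POST "https://docker-host:2376/v1.40/containers/<container_id>/start" \
  --cert client-cert.pem --key client-key.pem --cacert ca.pem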
Kubernetes:
Access Clusters Using the Kubernetes API
Kubeflow, in the scope of machine learning deployments
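Building on the "Access Clusters Using the Kubernetes API" link: a pod can talk to the API server with its service account token and ask for a new pod to be created, which is essentially the "start a container on command from another container" scenario. A minimal sketch (the pod name, image and namespace are placeholders, and the pod's service account needs RBAC permission to create pods):
# run from inside an existing pod
TOKEN=$(cat /var/run/secrets/kubernetes.io/serviceaccount/token)
curl -s --cacert /var/run/secrets/kubernetes.io/serviceaccount/ca.crt \
  -H "Authorization: Bearer $TOKEN" \
  -H "Content-Type: application/json" \
  -X POST https://kubernetes.default.svc/api/v1/namespaces/default/pods \
  -d '{"apiVersion": "v1", "kind": "Pod",
       "metadata": {"name": "service-123"},
       "spec": {"containers": [{"name": "service", "image": "my-service-image",
                                "ports": [{"containerPort": 5000}]}]}}'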
I am using the default bridge network for docker (and yes, I am relatively new to docker). I have two docker containers.
The first container provides a service on port 12345. When creating this container, I did not specify the --publish option because I did not want to expose this port to the outside world.
The second container needs to use the service from the first container. However, the application running in this second container was hardcoded to access the service at 127.0.0.1:12345. Clearly, the second container's localhost is not the same as the first container's. Is there a way to coax Docker networking into treating localhost in the second container as if it were connected to the port in the first container, without exposing anything to the outside world?
Option N: (this works but may not be the best solution)
One way you can force this to behave the way you need is by injecting an additional service into the application container that binds to the port locally and redirects traffic outward, for example with socat:
socat TCP-LISTEN:12345,fork TCP:172.18.0.2:12345
In a quick test here, I was able to confirm that 127.0.0.1:12345 is treated as the remote port 12345.
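As a rough sketch of the idea (assuming the first container's bridge IP really is 172.18.0.2, and that the second container is named app and is Debian-based - adjust for your images):
# install socat in the second container and have it relay localhost:12345 to the first container
docker exec app sh -c "apt-get update && apt-get install -y socat"
docker exec -d app socat TCP-LISTEN:12345,fork TCP:172.18.0.2:12345
# inside "app", 127.0.0.1:12345 now forwards to the first container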
Things to consider:
The two containers need to be able to reach each other.
It breaks the recommendation of one service per container.
Getting the tool into the Docker container (yum / apt-get install socat, or a source build?).
Getting it to run automatically on container start/restart.
I can create a Docker container with the command
docker run <<image_name>>
I can create a service with the command
docker service create <<image_name>>
What is the difference between these two in behaviour?
When would I need to create a service over container?
The docker service command in a Docker swarm replaces docker run. docker run was built for single-host solutions; its whole idea is to focus on local containers on the system it is talking to. In a cluster, though, the individual containers are irrelevant: we simply use swarm services to manage the multiple containers in the cluster, and Swarm orchestrates the containers of those services for us.
docker service create is mainly meant for Docker swarm mode. docker run has no concept of scaling up/down. With docker service create you can specify the number of replicas to create using the --replicas flag. This will create and manage multiple replicas of a container across many different nodes. There are several such options for managing multiple containers using docker service create and the other commands under docker service ...
One more note: docker services are for container orchestration systems (swarm). They have a built-in facility for failure recovery, i.e. a failed container is recreated; docker run would never recreate a container if it fails. When the docker service commands are used, we are not directly asking to perform an action like "create a single container"; rather, we are telling the orchestration system to "put this job in your queue and, when you can get to it, perform that action on the swarm". This means it has rollback facilities, failure mitigation and a lot of intelligence built in.
Consider using docker service create when in swarm mode and docker run when not in swarm mode. You can read up on Docker swarms to understand Docker services.
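As a quick illustration of the difference (this assumes a swarm has already been initialised, and the image name is just an example):
docker run -d -p 8080:80 nginx                                    # one container, on this host only
docker service create --name web --replicas 3 -p 8080:80 nginx    # 3 replicas, scheduled across the swarm
docker service scale web=5                                        # scale up; swarm creates the extra replicas
docker service ps web                                             # see which nodes the replicas are running on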
There is no real difference. In the official documentation you can read "Services are really just containers in production".
Services can be declared in "docker-compose.yml" and can be started from it. Once started, they will run as containers.
It is just a common way to name parts of your stack.
I'm building an application with a microservices architecture.
So basically, my app looks like this:
API GATEWAY(port 3000) => USERS-SERVICE(port 9090), AUTH-SERVICE(port 8080), SEND-SMS-SERVICE(port 7070).
Everything has worked fine until now.
Now I'm trying to implement Docker in my project. I built an image for each service
and run a container instance for each on my local machine.
Now I want to develop a new service, Customer-Service, and this service runs on http://localhost:3030.
Question:
1) How can I request http://localhost:3030 from the API gateway if, in development, I run the api-gateway in a container?
You must understand the networking concept: when you start independent Docker containers and you don't define a network, they will be unreachable from each other.
Another thing: you CAN'T call one microservice hosted in a Docker container from other microservices hosted in other containers using localhost; localhost is 127.0.0.1, which is a call to the local machine. The concept of Docker is like "different machines running on the same machine" - it's like a virtual machine, but Docker shares the host machine's kernel.
You can reach another Docker container in 2 ways.
Configure it on the host network, which I do not recommend.
Create a network, add every container instance to this network, and call the other microservices using the container name. I.e. you can use http://my-service-1:3400/api/v1/post
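For example, the second option with plain docker commands could look like this (names, images and ports are just placeholders):
docker network create my-app-net
docker run -d --name my-service-1 --network my-app-net my-service-1-image
docker run -d --name api-gateway --network my-app-net -p 3000:3000 api-gateway-image
# inside api-gateway, the other service is now reachable as http://my-service-1:3400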
I recommend you use docker-compose.
This is one of my repositories; I created it with the purpose of sharing a Node app using JWT, and this project uses Docker and docker-compose:
https://github.com/camiloperezv/jwt-template
As you can see, I define a network attribute in the docker-compose.yml and use this network in all of my services.
In the services section you will put all your microservices, and in the code you will make the HTTP requests using the container name instead of localhost or an IP address.
In my services I use build: .; this is for development purposes. In production you should use a pre-built Docker image instead of building it on the production server.
Feel free to use my github code.
Regards
As far as I understand from the question, the new service Customer-Service runs on http://localhost:3030 on the host machine.
If so, the api-gateway Docker container should be started in the host network:
docker run --network host -d <api-gateway_image_name>
After this, Customer-Service will be reachable at localhost:3030 from the api-gateway container.
Let's say I want to edit a config file for an NGINX Docker service that is replicated across 3 nodes.
Currently I list the services using docker service ls.
Then I get the details to find a node running a container for that service using docker service ps servicename.
Then ssh to a node where one of the containers is running.
Finally, docker exec -it containername bash. Then I edit the config file.
Two questions:
Is there a better way to do this than SSHing to a node running a container? Maybe there is a swarm or service command to do so?
If I were to edit that config file on one container would that change be replicated to the other 2 containers in the swarm?
The purpose of this exercise would be to edit configuration without shutting down a service.
You should not be exec'ing into containers to change their configuration, and so docker has not created an easy way to do this within Swarm Mode. You could use classic swarm to avoid the need to ssh into the other host, but I still don't recommend this.
The correct way to do this is to migrate your configuration file into a docker config entry. Version your config name. Then when you want to update it, you create a new version with the desired changes, and do a rolling update of your service to use that new configuration.
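A rough outline of that workflow (the config and service names are just examples):
# first deployment
docker config create nginx_conf_v1 ./nginx.conf
docker service create --name web \
  --config source=nginx_conf_v1,target=/etc/nginx/nginx.conf \
  -p 80:80 nginx

# later: edit nginx.conf locally, then roll it out without stopping the service
docker config create nginx_conf_v2 ./nginx.conf
docker service update \
  --config-rm nginx_conf_v1 \
  --config-add source=nginx_conf_v2,target=/etc/nginx/nginx.conf \
  web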
Unless the config is mounted from an external source like NFS, changes to the config in one container will not apply to containers running on other nodes. If that config is stored locally inside your container as part of its internal copy-on-write filesystem, then no changes from one container will be visible in any other container.
I have created a Docker container which is running on a particular VM in Azure (or consider any cloud). That container runs a Java/Node.js/C# application which needs to access a Jenkins server running on a company network.
So will I be able to access Jenkins from that Docker container? If not, please provide a solution for how to access it.
You can use the --network=host option to let your container run in the same network context as the server you're trying to connect to, provided that server is accessible from the container host.
Of course, you should specify a specific network or routes if possible.
https://docs.docker.com/engine/reference/run/#network-settings
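For example (the image name is a placeholder, and the Jenkins URL must already be resolvable and reachable from the VM itself):
docker run -d --network=host my-app-image
# the app inside the container can now reach Jenkins exactly as the VM can,
# e.g. http://jenkins.company.local:8080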