I need to connect to a remote Docker container and send a couple of commands to start PredictionIO services from the outside, either via the Docker API or via something of my own. I am new to this, so I have looked in many places, but I have not been able to find anything that helps me with this. Thank you very much.
I have been investigating the Docker 1.40 API, but without much success.
I am actually new to Docker. I have taken basic tutorials and know the Docker commands for images and containers.
Right now, all my application servers run on Tomcat 9 or nginx, and services like Redis, ScyllaDB, and ActiveMQ run on Ubuntu servers; installation and everything else I do manually.
I am confused about how to start introducing Docker in my company.
For commercial use, what are the prerequisites? Is a Docker Hub account necessary, or can we just run docker pull image_name directly?
I have searched many blogs but could not find a clear path to implementation.
Install Docker on your computer/server first.
Use your cmd/bash/terminal to interact with Docker. To make sure Docker is installed on your computer, type docker ps at the command line.
If you are using Docker Desktop, you can use it to check as well.
Search hub.docker.com for the image you need. Follow its instructions and run docker pull <image> to pull the image first.
Use docker run to run your image. If your image needs a port, make sure that port isn't already used by another process.
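The steps above can be sketched as a short session. The nginx image and the host port 8080 are arbitrary examples, not part of the original answer:

```shell
# Verify the Docker daemon is reachable
docker ps

# Pull an image from Docker Hub (nginx is just an example)
docker pull nginx

# Run it detached, mapping host port 8080 to the container's port 80
# (pick a host port that no other process is using)
docker run -d --name my-web -p 8080:80 nginx

# Confirm the container is up
docker ps
```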
I am running a CentOS server and will be installing Docker Engine on top of it where, needless to say, I will be setting up my containers. Initially I'll set up two containers: (1) one to serve my web pages, (2) one to run my database.
My thought process was that I would install FirewallD on the CentOS host. My questions are the following:
Do I need to install some sort of firewall within the containers themselves? If so, can someone tell me at a high level how this is done and what firewall I would be installing at the container level?
Do I need to open some ports within FirewallD running on CentOS to access the Docker Engine / containers?
As you can tell, this will be my first time developing with containers. Do I need to create the containers first on the server, or do I build them on my development machine and then push them to the server?
I would appreciate it if I could get some guidance here as I'm tasked to do this, but not sure of the correct path.
Thanks again.
I really have not tried much yet, as I'm not sure where to begin. So far I have just been researching my use case.
Q) Do I need to install some sort of firewall within the containers itself?
A) No, not really. Containers can only communicate via the ports their configuration specifies should be open.
Q) Do I need to open some ports within FirewallD running on CENTOS to access the Docker Engine / Containers?
A) TCP port 2376 (the conventional TLS port for the Docker daemon) if you want to access the daemon remotely via the REST API. Otherwise, and probably more secure, leave remote access off: SSH into the machine and interact with the daemon locally.
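If you do end up exposing a port for your web container, the FirewallD side is a one-liner per port. The port number 80/tcp below is an example for the web container, not a suggestion to expose the daemon:

```shell
# Permanently open a port for the web container (80/tcp as an example)
firewall-cmd --permanent --add-port=80/tcp

# Reload so the permanent rule takes effect now
firewall-cmd --reload

# List the currently opened ports to verify
firewall-cmd --list-ports
```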
Q) ...do I need to create the containers first on the server, or do I build them on my development machine and push them to the server?
A) Build the images on your development machine and push them to a registry (Docker Hub is one, AWS ECR is another; you can also host your own). Then access the server and pull the images from the registry onto it.
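As a sketch, the dev-machine-to-server flow looks like this. The image name myapp, the account myaccount, and the ports are placeholders:

```shell
# On the development machine: build and tag the image
docker build -t myaccount/myapp:1.0 .

# Push it to a registry (Docker Hub here; run `docker login` first)
docker push myaccount/myapp:1.0

# On the server: pull the image and run a container from it
docker pull myaccount/myapp:1.0
docker run -d --name myapp -p 80:8080 myaccount/myapp:1.0
```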
As for where to begin: at the beginning :D. But really, https://docs.docker.com/get-started/ has a 'getting started' guide to start you off. Linux Academy, A Cloud Guru, Lynda, Udemy, and other similar learning resources are all solid starting points.
Hope this helps you on your journey.
I am setting up a series of Linux command line challenges (for internal use/training), similar to those at OverTheWire.org's Bandit. From some reading I have done about their infrastructure, they set things up as follows:
All ssh-based games on OverTheWire run in Docker containers. When you
login with SSH to one of the games, a fresh Docker container is
created just for you. No one else is logged into your container, nor
are there any files from other players lying around. We opted for this
setup to provide each player with a clean environment to experiment
and learn in, which is automatically cleaned up when you log out.
This seems like an ideal solution, since everyone who logs in gets a completely clean environment (destroyed on logout) so that simultaneous players do not interfere with each other.
I am very new to Docker and understand it in principle, but I am unsure how to set up a similar system, in particular how to spawn a new Docker container on SSH login to a server and then destroy it on logout/disconnection.
I'd appreciate any advice on how to design/implement this kind of setup.
It seems to me there are two main goals here: first, understand what Docker really does and how it works; second, build the system that orchestrates everything.
Let me make a brief introduction. I won't go into details, but essentially Docker is a platform that provides something like system virtualization: it lets you isolate a process, an operating system, or a whole application without any kind of hypervisor. The container shares the kernel of the host system, and everything it contains is isolated from the host and from the rest of the containers.
So the basic principle you are looking for is a system that orchestrates containers, each running an SSH server with port 22 open. Although there are many ways to reach this goal, one way is with this Docker sshd server image:
docker run -itd --rm rastasheep/ubuntu-sshd bash
Docker needs a process to keep the container alive. By using -it you create an interactive session with the bash interpreter. This keeps the container alive and lets you start a bash terminal inside an isolated virtual Ubuntu server.
--rm: removes the container once you exit from it.
rastasheep/ubuntu-sshd: the Docker image name.
As you can see, there is still a missing piece: something that connects your application to the Docker platform. One approach would be the Python library that drives the Docker client programmatically. As a piece of advice, I would recommend you install Docker on your computer, try to create a couple of Ubuntu containers running an SSH server, and connect to them from your host. It will help you see whether an sshd server is really necessary and, if so, what network requirements you will need to route all the clients into the containers. Read the official Docker networking documentation.
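For example, to try the sshd approach locally: publish the container's port 22 on a free host port and connect to it with a normal SSH client. Host port 2222 and the container name ssh-test are arbitrary choices; the login credentials are documented on the image's Docker Hub page:

```shell
# Start the sshd image, mapping host port 2222 to container port 22
docker run -d --rm --name ssh-test -p 2222:22 rastasheep/ubuntu-sshd

# Connect from the host like any SSH server
ssh root@localhost -p 2222

# When finished, stop the container; --rm removes it automatically
docker stop ssh-test
```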
With the example I described, a fresh terminal is started and there is no need to connect to the container via SSH. This way you won't need to route traffic, identify free host ports to connect clients to the containers, or check for and shut down containers once a connection has finished; otherwise a container would stay alive.
There are many ways your system could be built, and I would strongly recommend you start by creating some containers with the Docker tool to understand how it works.
We want to scale Docker containers horizontally based on user demand. Is there a Docker API for on-demand container creation/deletion? What would be the best approach for the use case given below?
Use case:
We have a service running inside a Docker container which is directly accessible to a user.
Each user is given a separate container, so we need to create a Docker container whenever a user requests the service, and we also need to delete containers when they have been idle for a specific period of time.
I believe you are looking for the Docker Engine API, which you can find here.
There are SDKs for Go and Python, and a RESTful API for everything else.
In the docs you'll find examples of how to start and stop containers programmatically.
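For instance, with no SDK at all you can drive a local daemon's REST API over its Unix socket with curl. The API version v1.40 and the image name myservice are examples; adjust both to your setup:

```shell
# List running containers (the equivalent of `docker ps`)
curl --unix-socket /var/run/docker.sock http://localhost/v1.40/containers/json

# Create a container from an image (myservice is a placeholder image name);
# the response body contains the new container's Id
curl --unix-socket /var/run/docker.sock \
     -H "Content-Type: application/json" \
     -d '{"Image": "myservice"}' \
     http://localhost/v1.40/containers/create

# Start it, and later force-remove it, using the Id returned above
curl --unix-socket /var/run/docker.sock -X POST \
     http://localhost/v1.40/containers/<id>/start
curl --unix-socket /var/run/docker.sock -X DELETE \
     "http://localhost/v1.40/containers/<id>?force=true"
```

An on-demand scaler would call the create/start endpoints on user request and the delete endpoint from an idle-timeout job.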
I'm interested in building microservices, but I'm getting a bit stuck on how service discovery should work when I've got multiple instances of a single microservice.
Suppose I've got an "OCR" app that reads text from an image.
Deploying that as 1 instance is easy, however, what if I want 50 instances of those?
I can run Docker Swarm to spin up those 50 instances, but how do I send a request to any one of them? That is, I don't want to have to know the exact container name of a specific instance; I don't care which one I get, as long as it's healthy: just send my request to any of the "OCR" containers.
How do I achieve this?
I've been looking into Consul and it seems very promising.
I especially like the HTTP API. (Although I'm a little unsure how I would retrieve the URL for the service I'm interested in. Would I need to do it before every request to make sure I'm pointing at a healthy instance?)
If I wanted to use Consul, what would the steps be in relation to Docker Swarm? Do I just need to register the service in Consul when the container starts up, and it will automatically get de-registered if it fails, right?
After that, all of my containers just need to be aware of where Consul is. (And I guess I could stick a load balancer in front of it, in case I ever want to scale out Consul itself to a bunch of instances?)
Please let me know if I'm going completely in the wrong direction.
If anyone could also suggest any articles or books on this topic I'd appreciate it.
Thanks.
When you're using Docker Swarm Mode, you get service discovery with load balancing for free.
DNS round robin (DNSRR) is the key concept: https://docs.docker.com/engine/swarm/key-concepts/#load-balancing
Say you deploy OCR-app.
docker service create --network dev --name OCR-app --replicas 5 OCR-app:latest
The Docker manager will deploy OCR-app, in this case five times, across the nodes of your swarm. Every other service that is part of the same Docker network dev can reach OCR-app by its name, e.g. GET http://OCR-app:4000/do/something.
Internally, Docker Swarm uses round robin to forward each request automatically to one of the five replicas.
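Scaling up or down later is likewise a single command, using the service name from the example above:

```shell
# Change the number of OCR-app replicas at runtime
docker service scale OCR-app=10

# See how the replica tasks are distributed across nodes
docker service ps OCR-app

# Inspect the service's endpoint configuration (virtual IP, mode)
docker service inspect --format '{{json .Endpoint}}' OCR-app
```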