Need help regarding Docker implementation

I am new to Docker. I have taken basic tutorials on Docker and know the commands for working with images and containers.
Currently, all my application servers (running on Tomcat 9 or Nginx) and services like Redis, ScyllaDB and ActiveMQ run on Ubuntu servers, and I do the installation and everything else manually.
I am confused about how to start implementing Docker in my company.
For commercial use, what are the prerequisites? Is a Docker Hub account necessary, or can we directly run docker pull image_name?
I have searched many blogs, but could not find a clear way to implement this.

Install Docker on your computer/server first.
Use your cmd/bash/terminal to interact with Docker. To make sure Docker is installed on your computer, type docker ps in cmd.
If you are using Docker Desktop, you can check there as well.
Search on Docker Hub for the image you need, follow its instructions, and run docker pull <image> to pull the image first.
Use docker run to run your image. If your image needs a port, make sure that port isn't already used by another process; see the sketch below.
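For example, a minimal first experiment (assuming you want to try Redis, one of the services mentioned in the question; the container name my-redis is just a placeholder) could look like this:
# pulling a public image does not require a Docker Hub account
docker pull redis
# run it in the background and publish the container's port 6379 on the host
docker run -d --name my-redis -p 6379:6379 redis
# verify the container is running
docker ps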

Related

Use docker image in another docker image

I have two docker images:
CLI tool
Webserver
The CLI tool is built from a very heavy Dockerfile that takes hours to compile. I am trying to call the CLI tool from the webserver, but I am not sure how to proceed. Is there a way to make the command built in image 1 available in image 2?
So far I have tried working with volumes, but no luck. Thanks!
The design of Docker sort-of assumes that containers communicate through a network, not through the command line. So the cleanest solution is to create a simple microservice that wraps the CLI tool and can be called through HTTP.
As a quick and dirty hack, you could also use sshd as such a microservice without writing any code.
An alternative that doesn't involve the network is to make the socket of the Docker daemon available in the webserver container using a bind mount:
docker run -v /var/run/docker.sock:/var/run/docker.sock ...
Then you should be able to communicate with the host daemon from within the container, provided that you have installed the docker command line tool in the image. However, note that this makes your application strongly dependent on Docker, which might not be ideal. Also note that it essentially gives the container root access to the host system!
(Note that this is different from Docker-in-Docker, which is running a second Docker daemon inside a container and is generally not recommended except for specialized use cases.)
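As a rough sketch of the socket approach (the image names my-webserver and my-cli-tool and the command my-cli-command are placeholders, not from the question):
# in the webserver's Dockerfile: install the Docker CLI so it can talk to the host daemon
# (assumes a Debian-based image; adjust the package manager for your distribution)
RUN apt-get update && apt-get install -y docker.io

# start the webserver with the host's Docker socket mounted
docker run -d -v /var/run/docker.sock:/var/run/docker.sock my-webserver

# from inside the webserver container, launch the CLI tool as a sibling container on the host
docker run --rm my-cli-tool my-cli-command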

Update Docker Images via dockerized Jenkins Job

I run some docker containers on my Synology NAS. Now I also run Jenkins via Docker on the NAS and want to create a job that does the following steps:
Stop all Docker Containers
Delete all unnecessary stuff (-> docker system prune)
Rebuild all Docker images
Run the new Docker image
But I don't know how to access the host system from dockerized Jenkins. SSH to the host doesn't seem like a good idea.
Do you have any tips?
The whole point of your Docker images is to run in an isolated sandbox, so it's by design that your image doesn't have access to the native system. SSH is one approach, but risky, as you point out.
A better approach is to set the DOCKER_HOST environment variable to point to the IP of the NAS (which might need to be the virtual network NAS address). You will probably need to experiment a bit with getting the correct address and making sure the hosted docker command has permissions to drive the host's Docker service.
This post in the Synology Forums may get you on the right track.
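A rough sketch of that approach (the address 192.168.1.10:2375 is just an example; the NAS's Docker daemon must also be configured to listen on TCP, exposing it unencrypted is a security risk, and the docker CLI has to be available inside the Jenkins container):
# run Jenkins with DOCKER_HOST pointing at the NAS's Docker daemon
docker run -d --name jenkins -e DOCKER_HOST=tcp://192.168.1.10:2375 jenkins/jenkins:lts

# inside a Jenkins job, docker commands now talk to the host daemon, e.g.:
docker ps -q | xargs -r docker stop
docker system prune -f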

Is there a way to find the equivalent command line calls Kitematic uses when interacting with docker?

I’m new to docker and am using docker for windows. I’m also using Kitematic to pull repository images, configure the container, start, restart, set ports, etc.
Is there a way to find out exactly what commands Kitematic is sending to docker so I can learn what it is doing?

Is it possible to run a command inside a Docker container from another container?

Here's my scenario.
I have 2 Docker containers:
C1: a container with Ruby (but it could be anything else) that prepares data files on which a calculation must be performed in the Julia language
C2: a container with Julia (or R, or Octave...), used to perform the calculation, so as to avoid installing Julia on the same system or container that runs the Ruby code
From the host, obviously, I have no problem doing the processing.
Usually when two containers are linked (or belong to the same network) they communicate with each other via the network, exposing some port. In this case Julia does not expose any port.
Can I run a command on C2 from C1 similar to what is done between host and C2?
If so, how?
Thanks!
Technically yes, but that's probably not what you want to do.
The Docker CLI is just an interface to the Docker service, which listens at /var/run/docker.sock on the host. Anything that can be done via the CLI can be done by directly communicating with this server. You can mount this socket into a running container (C1) as a volume to allow that container to speak to its host's Docker service.
Docker has a few permissions that need to be set to allow this; older versions allow containers to run in "privileged" mode, in which case they're allowed to (amongst other things) speak to /var/run/docker.sock with the authority of the host. I believe newer versions of Docker split this permission system up a bit more, but you'd have to look into this. Making this work in swarm mode might be a little different as well.
Using this API at a code level without installing the full Docker CLI within the container is certainly possible (using a library or coding up your own interaction). A working example of doing this is JupyterHub+DockerSpawner, which has one privileged Hub server that instantiates new Notebook containers for each logged-in user.
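For instance, once the socket is mounted, C1 can talk to the daemon's HTTP API directly, even without the full Docker CLI installed (a minimal sketch; my-ruby-image is a placeholder):
# start C1 with the host's Docker socket mounted
docker run -d --name C1 -v /var/run/docker.sock:/var/run/docker.sock my-ruby-image

# from inside C1, list the host's running containers via the Docker Engine API
curl --unix-socket /var/run/docker.sock http://localhost/containers/json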
I just saw that you explicitly state that the Julia container exposes no port/interface. Could you wrap that code in a larger container that gives it a server interface while managing the serverless Julia program as a "local" process within the same container?
I needed to solve the same problem. In my case, it all started when I needed to run some scripts located in another container via cron, I tried the following scenarios with no luck:
Forgetting about the two-container scenario and placing all the logic in one container, so inter-container execution is no longer needed: this turns out to be a bad idea, since the whole Docker concept is to execute a single task in each container. In any case, creating a Dockerfile to build an image with both my main service (PHP in my case) and a cron daemon proved to be quite messy.
Communicating between containers via SSH: I then decided to try building an image that would take care of running the cron daemon (that would be the "Docker" approach to my problem), but the bad idea was to execute the commands from each cronjob by opening an SSH connection to the other container (in your case, C1 connecting via SSH to C2). It turns out it's quite clumsy to implement inter-container SSH logins, and I kept running into problems with permissions, passwordless logins and port routing. It worked in the end, but I'm sure it would add potential security issues, and I didn't feel it was a clean solution.
Implementing some sort of API that I could call via HTTP requests from one container to another, using something like curl or wget: this felt like a great solution, but it ultimately meant adding a secondary service to my container (an Nginx to handle HTTP connections), and dealing with HTTP requests and timeouts just to execute a shell script felt like too much of a hassle.
Finally, my solution was to run "docker exec" from within the container. The idea, as described by scnerd, is to make sure the docker client interacts with the docker service running on your host:
To do so, you must install docker into the container you want to execute your commands from (in your case, C1), by adding a line like this to your Dockerfile (for Debian):
RUN apt-get update && apt-get -y install docker.io
To let the docker client inside your container interact with the docker service on your host, you need to add /var/run/docker.sock as a volume to your container (C1). With Docker compose this is done by adding this to your docker service "volumes" section:
- /var/run/docker.sock:/var/run/docker.sock
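In context, a minimal docker-compose sketch for that container could look like this (the service and image names are placeholders):
version: "3"
services:
  cron:
    image: my-cron-image
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock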
Now when you build and run your Docker image, you'll be able to execute "docker exec" from within the container with a command like this, and you'll be talking to the Docker service on the host:
docker exec -u root C2 /path/your_shell_script
This worked well for me. Since, in my case, I wanted the Cron container to launch scripts in other containers, it was as simple as adding "docker exec" commands to the crontab.
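For reference, a cronjob line in that setup could look something like this (same placeholder container name and script path as above):
# run the script inside container C2 every night at 02:00
0 2 * * * docker exec -u root C2 /path/your_shell_script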
This solution, as also presented by scnerd, might not be optimal and I agree with his comments about your structure: Considering your specific needs, this might not be what you need, but it should work.
I would love to hear any comments from someone with more experience with Docker than me!

How to monitor a Docker container using an OMD user

OMD User
# omd create docker-user
# su - docker-user
How can I monitor a Docker container?
How can I see the memory usage of microservices inside the Docker container?
How can I configure a Docker container as a check_mk agent?
I am using Check_MK to monitor my servers and now want to monitor Docker as well.
Here are two options:
when you deploy your container, add the check_mk_agent at/during provisioning, and then use the Check_MK Web-API to add your host, run discovery, etc.
you can use the following plugin to monitor docker containers.
Alternatively if you are using the enterprise version you can use the current innovation release (1.5.x) which has native Docker support.
This is a late answer, but since this came up on top of my Google search results, I will take some time to add to Marius Pana's answer. As of now, the raw version of Check_MK also natively supports Docker. However, if you want dedicated checks inside your container, you will need to actually install a Check_MK agent inside the container. To do that, you need to start some sort of shell (generally sh or bash) inside the container with docker exec -it <id> sh. You can get your container ID with docker ps.
Now that's the easy part. The hard part is to figure out which package manager you are dealing with inside the container (if any) and how to install inetd/xinetd, or your preferred way of communication for the agent (unless it's already installed). If it's an Ubuntu-based image, you will generally need to start with apt update and apt-get install xinetd, and then you can install your packaged Check_MK agent or install it manually if you prefer. If it's a CentOS-based image, you will instead use yum. If the image is based on Arch Linux, you will probably want to use pacman.
Once you have managed to install everything in your container, you can test by adding your container's IP to Check_MK as a host. Please note that if your container is using the host's IP, you will need to forward port 6556 from your container to another port on your host, since I assume you're already monitoring the host through port 6556.
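For example, publishing the agent's port on an alternative host port might look like this (the image name and the host port 6557 are placeholders):
# expose the agent's port 6556 inside the container on host port 6557
docker run -d -p 6557:6556 my-monitored-image
Then configure Check_MK to query that host on port 6557 instead of the default 6556.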
After you've checked everything is working, two more things. If you stop there, a simple restart of your container will cancel every change you've made, so you need to do a docker commit to save your changes to your container image. And lastly, you will want to plan container updates ahead: you can reinstall the agent every time a new version of the container is pulled (you could even script this), or you could add instructions to your cont-init.d, which would be executed every time you launch your container.
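A sketch of the commit step (the container ID and image tag are placeholders):
# find the running container's ID
docker ps
# save the container's current state, with the agent installed, as a new image
docker commit <container_id> my-image:with-checkmk-agent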
