Is it possible to run a docker container without using docker?

After we create a Docker image, we can run it anywhere by pulling it from a global registry.
I am wondering if we can run it directly on a server that doesn't have Docker installed?
I'm new to Docker, so sorry if I have made any stupid mistake.
Thank you guys.

You don't necessarily need to use Docker to run Docker containers - the Docker image format is an open specification.
You will need a platform which can understand this specification - and the one provided by Docker is the reference implementation - but there are alternatives such as Rocket.
Ultimately you will need something that can understand and run Docker containers, so unless your servers already have this capability you will need to install new software on them for this purpose.
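As one hedged illustration (podman is not mentioned in the original answer, it is just an example of such an alternative), an OCI-compatible runtime can pull and run the same image without the Docker daemon; the image name below is only a placeholder:
# pull and run a Docker-format image with podman instead of the Docker daemon
podman pull docker.io/library/nginx:latest
podman run -d -p 8080:80 docker.io/library/nginx:latest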

Related

MLflow run within a docker container - Running with "docker_env" in MLflow project file

We are trying to develop an MLflow pipeline. We have our development environment in a series of Docker containers (no local Python environment whatsoever). This means that we have set up a Docker container with MLflow and all requirements necessary to run pipelines. The issue we have is that when we write our MLflow project file we need to use "docker_env" to specify the environment. This figure illustrates what we want to achieve:
[Figure: MLflow run with Docker-in-Docker (dind)]
MLflow inside the container needs to access the Docker daemon/service so that it can either use the "docker-image" specified in the MLflow project file or pull it from Docker Hub. We are aware of the possibility of using "conda_env" in the MLflow project file but wish to avoid this.
Our questions are:
Do we need to set some sort of "docker in docker" solution to achieve our goal?
Is it possible to set up the docker container in which MLflow is running so that it can access the "host machine" docker daemon?
I have been all over Google and MLflow's documentation but I can't seem to find anything that can guide us. Thanks a lot in advance for any help or pointers!
I managed to create my pipeline using Docker and docker_env in MLflow. It is not necessary to run Docker-in-Docker; the "sibling approach" is sufficient. This approach is described here:
https://jpetazzo.github.io/2015/09/03/do-not-use-docker-in-docker-for-ci/
and it is the preferred method to avoid d-in-d.
One needs to be very careful when mounting volumes within the primary and secondary Docker environments: all volume mounts are resolved on the host machine.
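For reference, a minimal sketch of that sibling approach (the image name my-mlflow-image and the /workspace paths are illustrative assumptions): the MLflow container gets the host's Docker socket mounted into it, so any containers MLflow launches become siblings on the host rather than nested containers.
# run MLflow from a container that talks to the host's Docker daemon
docker run --rm \
  -v /var/run/docker.sock:/var/run/docker.sock \
  -v "$(pwd)":/workspace -w /workspace \
  my-mlflow-image \
  mlflow run .
# note: any volume paths passed to the launched containers are resolved on the host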
In this case, I would like to suggest a simple alternative:
Install all dependencies in the Docker image.
Run the MLflow project with a conda env.
MLflow will then reuse everything cached in the container's base environment each time the project runs.

Create a Dockerfile from NiFi Docker Container

I'm pretty new to using Docker. I need to deploy a NiFi instance through my employer, but the internal service we need to use requires a Dockerfile, not an image.
The service we're using requires the Dockerfile because each time the repository we're using is updated, the service is pointed to the Dockerfile and initiates the build process from it, then runs/operates the container.
I've already set up the NiFi flow the way it needs to operate; I'm just unsure of how to get a Dockerfile from an already existing container (or if that is even possible).
I was looking into this myself; apparently there is no real way to do it. However, you can inspect the Docker image and recover pretty much all the commands used to create it. The main thing missing is the base OS, which is easy to find: spawn a bash shell inside the container and run something like sudo uname -a, then use that to build your own Docker image. Usually you can find the original Dockerfile on GitHub, though.
docker inspect <image>
or you can do it through the Docker Desktop UI.
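As a concrete example of that reconstruction approach, docker history lists the commands recorded in each image layer and docker inspect dumps the remaining metadata; checking the base OS can be done from a throwaway container:
# list the commands recorded in each layer of the image (newest first)
docker history --no-trunc <image>
# dump entrypoint, environment, exposed ports and other metadata
docker inspect <image>
# check the base OS from inside a throwaway container
docker run --rm -it <image> uname -a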
You can use the Dockerfile that is in NiFi source code, see in this directory: https://github.com/apache/nifi/tree/main/nifi-docker/dockerhub

Deploy a .war to a product owner's server

I'm trying to deploy a .war file using Docker,
and I'm very new to Docker,
and there's something a little bit confusing when I try it.
I'm confused about the approach to take when creating the Dockerfile.
I don't know whether the product owner must install Tomcat and the Java JDK on his server manually, or whether I should handle that automatically in my Docker image.
What is common, and what is the best practice for that?
No, the product owner doesn't need to install anything; that's the beauty of the container approach. The approach is there to solve the problem of "it runs on my machine but not on others". So, once you have built an image, all the product owner needs to do is install Docker on his machine, and then it is done. The container image bundles everything required to run the project, like a lightweight virtual machine. So, short answer: no, the product owner doesn't need anything except Docker itself.
Glad that you opted to use Docker for this, though there are a few things to take note of:
You will need to create a Dockerfile. Refer to https://stackoverflow.com/a/45870319/2519351 (a minimal sketch is shown after this list).
Build a Docker image using the Dockerfile: docker build -t <image_name>:<tag>
Install the Docker service on your product owner's server.
Deploying the Docker image to your product owner is a bit tricky, as it requires transferring the image built on your machine to the product owner's server.
One option is to push the Docker image to Docker Hub. Don't opt for this if you don't want to make your app public.
Another option is to set up a private registry, though this may be overkill if your deployment is small. It is, however, the cleaner approach.
Another, cruder option is to take remote control of the Docker daemon running on your product owner's server. This way you can start a Docker container on the remote server from your local machine. Refer to https://success.docker.com/article/how-do-i-enable-the-remote-api-for-dockerd
Finally, run the Docker container: docker -H <remote_server>:<port> run -d <image>:<tag>
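As mentioned in step 1, here is a minimal sketch of what such a Dockerfile could look like, assuming the official Tomcat base image (the image tag and myapp.war are illustrative; adjust them to your project):
# write a minimal Dockerfile: the base image already contains Tomcat and a JDK
cat > Dockerfile <<'EOF'
FROM tomcat:9-jdk11
# copy the application into Tomcat's auto-deploy directory
COPY myapp.war /usr/local/tomcat/webapps/
EOF
# build the image as in step 2 above
docker build -t myapp:1.0 .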

New to Docker - how to essentially make a cloneable setup?

My goal is to use Docker to create a mail setup running postfix + dovecot, fully configured and ready to go (on Ubuntu 14.04), so I could easily deploy on several servers. As far as I understand Docker, the process to do this is:
Spin up a new container (docker run -it ubuntu bash).
Install and configure postfix and dovecot.
If I need to shut down and take a break, I can exit the shell and return to the container via docker start <id> followed by docker attach <id>.
(here's where things get fuzzy for me)
At this point, is it better to export the image to a file, import on another server, and run it? How do I make sure the container will automatically start postfix, dovecot, and other services upon running it? I also don't quite understand the difference between using a Dockerfile to automate installations vs just installing it manually and exporting the image.
Configure multiple docker images using Dockerfiles
Each Docker container should run only one service. So one container for postfix, one for another service, etc. You can have your running containers communicate with each other.
Build those images
Push those images to a registry so that you can easily pull them on different servers and have the same setup.
Pull those images on your different servers.
You can pass ENV variables when you start a container to configure it.
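A hedged sketch of those steps (the registry host, image name, and environment variable are all illustrative):
# build the image from its Dockerfile and push it to a registry
docker build -t registry.example.com/mail/postfix:1.0 .
docker push registry.example.com/mail/postfix:1.0
# on each server: pull the same image and configure the container via ENV variables
docker pull registry.example.com/mail/postfix:1.0
docker run -d --name postfix -e MAIL_DOMAIN=example.com registry.example.com/mail/postfix:1.0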
You should not install something directly inside a running container.
This defeats the purpose of having a reproducible setup with Docker.
Your step #2 should be a RUN entry inside a Dockerfile, that is then used to run docker build to create an image.
This image could then be used to start and stop running containers as needed.
See the Dockerfile RUN entry documentation. This is usually used with apt-get install to install needed components.
The ENTRYPOINT in the Dockerfile should be set to start your services.
In general it is recommended to have just one process in each image.
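As a rough sketch of such a Dockerfile for the postfix container (the package name and foreground command are assumptions and depend on your postfix version; dovecot would get its own, similar image):
# the installation step becomes RUN entries, and ENTRYPOINT starts the one service
cat > Dockerfile <<'EOF'
FROM ubuntu:14.04
RUN apt-get update && apt-get install -y postfix
# keep the single service in the foreground so the container stays alive
ENTRYPOINT ["/usr/sbin/postfix", "start-fg"]
EOF
docker build -t my-postfix .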

Is it correct to run a private Docker registry as a container?

I am trying to install a private Docker registry, but I'm not sure how. I have installed it following this tutorial: http://www.jaas.co/2014/10/23/how-to-use-a-local-persistent-docker-registry-on-centos-6-5/
but the registry is running as a container. Is that correct, or are there other ways to do it?
Well, the instructions tell you to run it as a container, and there's a Docker image specifically for running a registry as a container, so I'm going to guess that it's OK. If you don't want to run it in a container, you can download the source code and run it directly on your local machine instead.
I have done both: built it from source and run it as a container. I highly recommend you run it as a container. It is easier to manage, and it is well documented.
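For reference, a minimal sketch of running it as a container with the official registry image (the port and image names are illustrative):
# start a local registry listening on port 5000
docker run -d -p 5000:5000 --name registry registry:2
# tag an existing local image for that registry and push it
docker tag myimage localhost:5000/myimage
docker push localhost:5000/myimage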
