How to set up git and git-sync in a Docker container?

I want to set up git and git-sync in my new Docker container, but I am not sure how to do that, or whether that is even the right way to do it. If there is an easier way, I'd like to hear it; for example, I also use Kubernetes, and I am trying to see what Kubernetes can do as far as git-sync is concerned. Any ideas?

Don't treat a Docker container as a VM. You usually shouldn't go into the container to run commands or change settings. Use docker build to build everything you need (jar file, JVM server, ...) from a Dockerfile, and use environment variables to handle any settings (or a volume with a settings file). Your container image's entrypoint (cmd) can be a script of your own (bootstrap.sh), which can also handle some startup activities. Generally: your container should be stateless. For versioning, use tags. Take your time and read the docs and some real Docker app examples; you will see there what the best practice is.
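In Kubernetes, git-sync is typically run as a sidecar container that pulls the repository into a volume shared with your app container, so you don't need to install git in your own image at all. A rough sketch of the same pattern with plain Docker (the image tag, repo URL, and names below are assumptions, and git-sync's flag names changed between v3 and v4, so check the release you pull):
docker volume create repo-data
docker run -d --name git-sync \
  -v repo-data:/tmp/git \
  registry.k8s.io/git-sync/git-sync:v4.2.3 \
  --repo=https://github.com/example/app.git \
  --root=/tmp/git \
  --period=30s
# the app container mounts the same volume read-only
docker run -d --name app -v repo-data:/srv/app:ro my-app-image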

Related

Create a Dockerfile from NiFi Docker Container

I'm pretty new to using Docker. I need to deploy a NiFi instance through my employer, but the internal service we need to use requires a Dockerfile, not an image.
The service we're using requires the Dockerfile because each time the repository we're using is updated, the service is pointed to the Dockerfile and initiates the build process from it, then runs/operates the container.
I've already set up the NiFi flow to how it needs to operate, I'm just unsure of how to get a Dockerfile from an already existing container (or if that is even possible?)
I was looking into this myself; apparently there is no real way to do it. However, you can inspect the container and recover pretty much all the commands used to create it, except for the base OS. That is easy to find: open a shell in the container and run something like cat /etc/os-release (note that uname -a reports the host kernel, not the image's OS). With that you can assemble your own Docker image. Usually you can also just find the original on GitHub, though.
docker inspect <image>
or you can do it through the Docker Desktop UI
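If you go the inspect route, dumping just the image configuration (entrypoint, env, exposed ports) is often enough to reconstruct most of a Dockerfile (jq is optional and assumed to be installed):
docker inspect --format '{{json .Config}}' <image> | jq .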
You can use the Dockerfile that is in the NiFi source code; see this directory: https://github.com/apache/nifi/tree/main/nifi-docker/dockerhub

Should I create a docker container or docker start a stopped container?

From the point of view of the Docker philosophy, which is more advisable:
creating a container every time we need a certain environment, and removing it after use (docker run <image> every time); or
creating a container for a specific environment (docker run <image>), stopping it when it is not needed, and starting it again (docker start <container>) whenever it is needed?
If you docker rm the old container and docker run a new one, you will always get a clean filesystem that starts from exactly what's in the original image (plus any volume mounts). You will also fairly routinely need to delete and recreate a container to change basic options: if you need to change a port mapping or an environment variable, or if you need to update the image to have a newer version of the software, you'll be forced to delete the container.
This is reason enough for me to make deleting and recreating the container my standard process.
# docker build -t the-image . # can be done first if needed
docker stop the-container # so it can cleanly shut down and be removed
docker rm the-container
docker run --name the-container ... the-image
Other orchestrators like Docker Compose and Kubernetes are also set up to automatically delete and recreate the container (or Kubernetes pod) if there's a change; their standard workflows do not generally involve restarting containers in-place.
I almost never use docker start. In a Compose-based workflow I generally use only docker-compose up -d, letting it recreate things if needed, and docker-compose down only when I need the CPU/memory resources the container stack is using; that's not part of routine work.
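As a concrete illustration of that workflow, a minimal docker-compose.yml might look like this (the service and image names are hypothetical); after editing the image tag, port, or environment, docker-compose up -d deletes and recreates the affected container for you:
version: "3.8"
services:
  web:
    image: my-app:1.2.0
    ports:
      - "8080:80"
    environment:
      - APP_ENV=production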
I'm speaking from my experience in the industry, so take my answer with a grain of salt, because there may be no hard evidence or theoretical reference behind it.
Here's the answer:
TL;DR:
In short, you never need docker stop and docker start, because that approach is unreliable: you might lose the container, and all the data inside it, if no proper precaution is taken beforehand.
Long answer:
You should only work with images, not containers. Whenever you need some specific data, or you need an image with some customization, it is better to commit the container and docker save the resulting image for future use.
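A rough sketch of that commit-and-save flow (the container and image names are hypothetical):
docker commit my-container my-image:snapshot   # freeze the container's filesystem into an image
docker save -o my-image.tar my-image:snapshot  # write the image to a tarball
docker load -i my-image.tar                    # reload it later, e.g. on another host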
If you're just testing on your local machine, or in your dev virtual machine on a remote host, you're free to use whichever you like. I personally take each of the approaches in different scenarios.
But if you're talking about a production environment, you'd better use some orchestration tool; it could be something as simple and easy to work with as docker-compose or docker swarm, or Kubernetes for more complex environments.
You'd better not take the second approach (docker run, docker stop & docker start) in those environments, because at any moment you might lose that container, and if you are solely dependent on that specific container or its data, you're going to have a bad weekend.

How to get "build history" of a docker container when inside a container?

When you are inside a Docker container, is there any way to obtain the "build history" (i.e. the original Dockerfile, or rather the list of commands in the original Dockerfile that was used to build the image the container runs from)?
The reason is that for tracking and version control purposes, it might be useful to indicate what/how the environment was configured when the process was run.
Thanks.
You can do it with the
docker history
command, but not from inside the container. The container itself does not have Docker installed, and a container does not hold its own build history; to run that command you need to be on the host, not in the container.
The docker history documentation
has a great explanation of how to use that command.
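Run from the host, the untruncated output shows the full command recorded for each layer (the image name here is hypothetical):
docker history --no-trunc my-image:latest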
Docker labels are a good way to add additional metadata to your Docker images.
Check this for more info.
You can get this data using docker inspect, but these commands can only be run from outside the container; to run them from the inside you need to make use of the Docker remote API, as explained in this answer.
You can also retrieve the details of a Docker image via docker history through the same remote API.
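For instance, if the Docker socket is mounted into the container (an assumption; it is not there by default), the Engine API's history endpoint can be queried directly; the API version in the path may need adjusting for your daemon:
curl --unix-socket /var/run/docker.sock \
  "http://localhost/v1.43/images/my-image:latest/history"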
If you want just a few details about the image, like its version, then set those values as environment variables while building the image, so that you can refer to them later inside your running container.
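A minimal sketch of that approach (the ARG/ENV names are hypothetical):
# Dockerfile: accept the value at build time and bake it in as an env var
ARG GIT_SHA=unknown
ENV BUILD_GIT_SHA=$GIT_SHA
# build with:  docker build --build-arg GIT_SHA=$(git rev-parse HEAD) -t my-image .
# then inside the running container:  echo $BUILD_GIT_SHA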

Docker in Docker: building Docker agents in a Dockerized Jenkins server

I am currently running Jenkins in Docker. When trying to build Docker apps, I am unsure whether I should use Docker in Docker (DinD) by binding the /var/run/docker.sock file, or by installing another instance of Docker inside my Jenkins container. I have actually seen that, previously, using anything other than docker.sock was discouraged.
I don't actually understand why we should use anything other than the Docker daemon from the host, apart from not polluting it.
Source: https://itnext.io/docker-in-docker-521958d34efd
The best solution for the "Jenkins in a Docker container needs Docker" case is to add your host as a node (agent) in Jenkins. This makes every build step (literally everything) run on your host machine. It took me a month to find the right setup.
Mount the Docker socket in the Jenkins container: you will lose context. The files you want to COPY into the image are located inside the workspace in the Jenkins container, while your Docker daemon is running on the host. COPY fails for sure.
Install a Docker client in the Jenkins container: you have to alter the official Jenkins image, which adds complexity. And you will lose context too.
Add your host as a Jenkins node: perfect. You have the context, and there is no altering of the official image.
Without completely understanding why you would need to use Docker in Docker - I suspect you need to meet some special requirements for the environment in which you build the actual image - may I suggest multi-stage builds of Docker images? You might find them useful, as they let you first build the build environment and then build the actual image (hence the name 'multi-stage build'). Check it out here: https://docs.docker.com/develop/develop-images/multistage-build/
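A minimal multi-stage sketch (a hypothetical Go app; the same shape works for most compiled projects - the first stage carries the toolchain, the final image only the artifact):
FROM golang:1.22 AS build
WORKDIR /src
COPY . .
RUN go build -o /out/app .

FROM gcr.io/distroless/base-debian12
COPY --from=build /out/app /app
ENTRYPOINT ["/app"]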

Docker backup container with startup parameters

I've been facing the same problem for months now and I don't have an adequate solution.
I'm running several containers based on different images. Some of them were started using Portainer with some arguments and volumes. Some of them were started using the CLI and docker run with some arguments and parameters.
Now all these settings are stored somewhere, because if I stop and restart such a container, everything works well again. But if I do a commit, back it up with tar, load it on a different system, and do a docker run, it has lost all of its settings.
The procedure as described here: https://linuxconfig.org/docker-container-backup-and-recovery does not work in my case.
Now I'm thinking about writing my own web application which would create docker-compose files based on my settings, rather than just doing a docker start with the correct params. This web application should also take care of the volumes (just folders) and do an incremental backup of them with borg to a remote server.
But at the moment this is only an idea. Is there a way to "extract" a docker-compose file from a running container? So that I can redeploy a container 1:1 to another server and just have to run docker run mycontainer and it will have the same settings?
Or do I have to write my web app? Or have I missed some page on Google, and there is already such a solution?
Thank you!
To see the current configuration of a container, you can use:
docker container inspect $container_id
You can then use those configurations to run your container on another machine. There is no easy import/export of these settings to start another container that I'm aware of.
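For example, to pull out just the pieces you would need to re-create the container elsewhere (the template fields below are standard inspect fields; the container name is hypothetical):
docker container inspect my-container --format '{{json .Config.Env}}'                # environment variables
docker container inspect my-container --format '{{json .HostConfig.Binds}}'          # volume/bind mounts
docker container inspect my-container --format '{{json .HostConfig.PortBindings}}'   # port mappings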
Most people use a docker-compose.yml to define how they want a container run. They also build images with a Dockerfile and transfer them with a registry server rather than a save/load.
The docker-compose.yml can be used with docker-compose or docker stack deploy and allows the configuration of the container to be documented as a configuration file that is tracked in version control, rather than error prone user entered settings. Running containers by hand or starting them with a GUI is useful for a quick test or debugging, but not for reproducibility.
You would like to back up the instance, but the commands you're providing back up the image. I'd suggest updating your Dockerfile to solve the issue. If you really want to go down the road of saving the instance's current state, you should use the docker export and docker import commands.
Reference:
https://docs.docker.com/engine/reference/commandline/import/
https://docs.docker.com/engine/reference/commandline/export/
NOTE: docker export does not export the content of volumes; for those, I suggest you refer to https://docs.docker.com/engine/admin/volumes/volumes/
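A sketch of that export/import flow (the names are hypothetical). Note that docker import produces a bare filesystem image: run options, and even the CMD/ENTRYPOINT, have to be re-specified, which is exactly the limitation discussed above:
docker export my-container | gzip > my-container.tar.gz
# on the target host:
gunzip -c my-container.tar.gz | docker import - my-container-snapshot:latest
docker run --name my-container -p 8080:80 my-container-snapshot:latest /path/to/start-command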
