I'm new to Docker.
I have written:
docker pull *docker from dockerhub*
docker run *image*
sudo apt-get install nano
And when I restart this image, nano is not installed.
Is it possible to turn off resetting data in a Docker container?
The container filesystem is intrinsically temporary. If you docker run -d image twice, the two copies will each start from a fresh copy of the container filesystem and not share anything. There is no option to disable this.
Correspondingly, it is usually a mistake to install software in an interactive shell in a container, since that installation will be lost as soon as the container exits. It's usually unnecessary to install interactive editors like nano or vim, again since they can't make permanent changes. It's better to install your application and only the specific supporting programs it needs in a Dockerfile.
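For example, a minimal sketch of a Dockerfile that bakes nano into the image (the base image, tag, and image name here are only illustrative; adjust to whatever Debian/Ubuntu-based image you actually use):

FROM ubuntu:22.04
# install the editor at build time so it becomes part of the image itself
RUN apt-get update \
 && apt-get install -y --no-install-recommends nano \
 && rm -rf /var/lib/apt/lists/*

Then build and run it:

docker build -t my-image-with-nano .
docker run -it my-image-with-nano bash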
(There is a Docker command that can create a new image from a container, but this is pretty much never considered a best practice. It's hard to specify things like the command the resulting image should run, a chain of images made this way will grow over time, and it's all but impossible to take security updates from the original image. You may also run afoul of licensing or corporate source-tracking requirements with this approach.)
I have built a Docker image, copied a script into the image, and automatically execute it when I run the image, thanks to this Dockerfile command:
ENTRYPOINT ["/path/to/script/my_script.sh"]
(I had to make it executable with chmod in a RUN command to actually get it to run.)
Now, I'm quite new to Docker, so I'm not sure if what I want to do is even good practice:
My basic idea is that I would rather not always have to create a new container whenever I want to run this script, but to instead find a way to re-execute this script whenever I (re)start the same container.
So, instead of having to type docker run my_image, I would accomplish the same via docker (re)start container_from_image.
Is there an easy way to do this, and does it even make sense from a resource parsimony perspective?
docker run is fairly cheap, and the typical Docker model is generally that you always start from a "clean slate" and set things up from there. A Docker container doesn't have the same set of pre-start/post-start/... hooks that, for instance, a systemd job does; there is only the ENTRYPOINT/CMD mechanism. The way you have things now is normal.
Also remember that you need to delete and recreate containers for a variety of routine changes, with the most important long-term being that you have to delete a container to change the underlying image (because the installed software or the base Linux distribution has a critical bug you need a fix for). I feel like a workflow built around docker build/run/stop/rm is the "most Dockery" and fits well with the immutable-infrastructure pattern. Repeated docker stop/start as a workflow feels like you're trying to keep this specific container alive, and in most cases that shouldn't matter.
From a technical point of view you can think of the container environment and its filesystem, and the main process inside the container. docker run is actually docker create plus docker start. I've never noticed the "create" half of this taking substantial time, but if you're doing something like starting a JVM or loading a large dataset on startup, the "start" half will be slow whether or not it's coupled with creating a new container.
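As a rough illustration of that split (image and container names here are made up):

# one step:
docker run --name web my_image
# is approximately the same as two steps:
docker create --name web my_image
docker start -a web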
For the chmod issue, you can do something like this:
COPY my_script.sh /path/to/script/my_script.sh
RUN chmod +x /path/to/script/my_script.sh
For the rerun-script issue:
The ENTRYPOINT specifies a command that will always be executed when the container starts.
It can be either
docker run my_image
or
docker start container_from_image
So whenever your container starts, your ENTRYPOINT command will be executed.
You can refer to this for more detail.
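A minimal sketch of that restart flow (my_image and mycontainer are placeholder names):

docker run --name mycontainer my_image    # ENTRYPOINT runs on the first start
docker stop mycontainer
docker start -a mycontainer               # ENTRYPOINT runs again on restart (-a attaches the output)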
I use Docker to run a website I build.
When a release has to be delivered, I have to build a new Docker image and start a new container from it.
The problem is that images and containers accumulate and take up a huge amount of space.
As part of the delivery, I need to stop the running container and delete it, along with the source image.
I don't need Docker command lines, just a checklist or a process so I don't forget anything.
For instance:
- Stop running container
- Delete stopped container
- Delete old image
- Build new image
- Start new container
Am I missing something?
I'm not used to Docker; maybe there are best practices for this pretty classic use case?
The local workflow that works for me is:
Do core development locally, without Docker. Things like interactive debuggers and live reloading work just fine in a non-Docker environment without weird hacks or root access, and installing the tools I need usually involves a single brew or apt-get step. Make all of my pytest/junit/rspec/jest/... tests pass.
docker build a new image.
docker stop && docker rm the old container.
docker run a new container.
When the number of old images starts to bother me, docker system prune.
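A rough shell sketch of that loop, assuming an image and container both named myapp and a web app on port 8080 (all names and ports are illustrative):

docker build -t myapp .                          # rebuild the image
docker stop myapp && docker rm myapp             # drop the old container
docker run -d --name myapp -p 8080:8080 myapp    # start a fresh one
docker system prune                              # occasionally, to reclaim space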
If you're using Docker Compose, you might be able to replace the middle set of steps with docker-compose up --build.
In a production environment, the sequence is slightly different:
When your CI system sees a new commit, after running the repository's local tests, it docker build && docker push a new image. The image has a unique tag, which could be a timestamp or source control commit ID or version tag.
Your deployment system (could be the CI system or a separate CD system) tells whatever cluster manager you're using (Kubernetes, a Compose file with Docker Swarm, Nomad, an Ansible playbook, ...) about the new version tag. The deployment system takes care of stopping, starting, and removing containers.
If your cluster manager doesn't handle this already, run a cron job to docker system prune.
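As a sketch, such a cron entry could look like this (the schedule is up to you; -f skips the confirmation prompt):

# /etc/cron.d/docker-prune (illustrative): clean up unused Docker data nightly
0 3 * * * root docker system prune -f > /dev/null 2>&1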
You should use:
docker system df
to investigate the space used by docker.
After that you can use
docker system prune -a --volumes
to remove unused components. You should stop containers yourself before doing this, but this way you are sure to cover everything.
Background
I have a large Python service that runs on a desktop PC, and I need to have it run as part of a K8S deployment. I expect that I will have to make several small changes to make the service run in a deployment/pod before it will work.
Problem
So far, if I encounter an issue in the Python code, it takes a while to update the code, and get it deployed for another round of testing. For example, I have to:
Modify my Python code.
Rebuild the Docker container (which includes my Python service).
scp the saved Docker image over to the Docker registry server.
docker load the image, update tags, and push it to the Registry back-end DB.
Manually kill off currently-running pods so the deployment restarts all pods with the new Docker image.
This involves a lot of lead time each time I need to debug a minor issue. Ideally, I'd prefer being able to just modify the copy of my Python code already running on a pod, but I can't kill it (since the Python service is the default app that is launched, with PID 1), and K8s doesn't support restarting a pod (to my knowledge). Alternately, if I kill/start another pod, it won't have my local changes from the pod I was previously working on (which is by design, of course, but doesn't help with my debugging efforts).
Question
Is there a better/faster way to rapidly deploy (experimental/debug) changes to the container I'm testing, without having to spend several minutes recreating container images, re-deploying/tagging/pushing them, etc? If I could find and mount (read-write) the Docker image, that might help, as I could edit the data within it directly (i.e. new Python changes), and just kill pods so the deployment re-creates them.
There are two main options: one is to use a tool that reduces or automates that flow, the other is to develop locally with something like Minikube.
For the first, there are a million and a half tools but Skaffold is probably the most common one.
For the second, you do something like ( eval $(minikube docker-env) && docker build -t myimagename . ) which will build the image directly in the Minikube docker environment so you skip steps 3 and 4 in your list entirely. You can combine this with a tool which detects the image change and either restarts your pods or updates the deployment (which restarts the pods).
Also, FWIW, using scp and docker load is quite non-standard; generally those two steps would be combined into a single docker push.
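For reference, a rough sketch of the more standard flow (the registry address, image name, tag, and deployment/container names are all made up):

docker build -t registry.example.com/myservice:v123 .    # build locally or in CI
docker push registry.example.com/myservice:v123          # push straight to the registry
kubectl set image deployment/myservice myservice=registry.example.com/myservice:v123   # roll the deployment onto the new tag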
I think your pain point is that the container image depends on the Python code. You can find a way to exclude the source code from the Docker image build phase.
In my experience, I create a Docker image that only includes the Python package dependencies, and I use a volume to map the source code directory to a path in the container, so you don't need to rebuild the image unless dependencies are added or removed.
Example
I don't have much experience with k8s, but I believe it must be more or less the same as docker run.
Dockerfile
FROM python:3.7-stretch
COPY ./python/requirements.txt /tmp/requirements.txt
RUN pip install --no-cache-dir -r /tmp/requirements.txt
ENTRYPOINT ["bash"]
Docker container
scp your code to the server, and map your host source path to the container source path like this:
docker run -it -d -v /path/to/your/python/source:/path/to/your/server/source --name python-service your-image-name
With volume mapping, your container no longer depends on the source code, so you can easily change your source code without rebuilding your image.
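With that setup, an edit-test cycle could look roughly like this (container name taken from the example above):

# edit the code on the host, then restart the existing container
docker restart python-service
# or open a shell inside it to run things manually
docker exec -it python-service bash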
I built an image based on ubuntu:16.04. While building the image I did run some commands including apt-get -y update.
Then I browsed the image with docker run -it myimage bash and found that some logrotate files are missing compared to a normal Ubuntu 16.04 OS.
For instance, /etc/logrotate.conf, /usr/sbin/logrotate, /var/lib/logrotate/status.
I couldn't find them anywhere, even by running find / -name 'logrotate*'. The find command only shows /etc/logrotate.d (/etc/cron.daily and /etc/cron.weekly also exist).
Seeing traces of logrotate such as logrotate.d, I assumed logrotate must exist in there.
However, why does the image lack not only those files but even the executable itself?
I want them because I want to try it (How can I monitor what logrotate is doing?).
How could I make this work with the ubuntu:16.04 image?
Each container should have only one concern (i.e., no extra services, no daemons, no additional tools). With that in mind, it makes sense for the logrotate configuration files to be omitted, since the logrotate daemon is not there.
Alternatives for managing logs in containers could be using shared volumes (logrotate could then run on the host; an example of that) or delegating to Docker logging drivers.
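As a sketch of the logging-driver route, the json-file driver can rotate logs for you (the image name is illustrative):

# keep at most 3 log files of 10 MB each for this container
docker run -d --log-driver=json-file --log-opt max-size=10m --log-opt max-file=3 myimage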
I'm new to Docker, and I think I have understood that Docker is a software virtualization tool (as opposed to OS virtualization). I understand, from this image, that Docker provides a very blank environment with a given file structure and executes on the host kernel. What we need to do is add our application and its dependencies (with no OS) to get a very light, portable container for our app.
But it seems there is a dark side to Docker: each Dockerfile begins with a "FROM <image>".
I saw this and this, but I'm not sure I understand. It sounds like Docker is close to a kind of simplified OS virtualizer.
I was interested in the advantage of small image size. But if we have to install an OS in each image, my "portable" application will quickly become quite heavy.
Is there really no way to use a "blank image" ?
You can start with FROM scratch which is an empty filesystem.
Please see the section on Creating a Base Image if you'd like to spin up your own minimal root file system.
You might be surprised how many dependencies your application actually has on the root file system, and in the end, it is usually more efficient to use one of the standard root file systems in your FROM statement, as Charles Duffy commented above.
empty/Dockerfile
FROM scratch
WORKDIR /
build and check size
docker build empty/ -t empty
docker images | grep empty
This may be a bit too late, but I just had a use case where I needed to create a bare-bones container that I could launch as part of a multi-container docker-compose setup and get into afterwards via /bin/bash. Keep in mind that a Docker container must run some command and stays in existence only for as long as that command is running. So I created this container with just Python in it and copied in a 2-line Python script that just makes it sleep. Here's what I did.
1. Create the python script wait_service.py with the following code:
import time
time.sleep(1000)
2. Create the Dockerfile with just the following lines:
FROM python:2.7
RUN mkdir -p /test
WORKDIR /test
COPY wait_service.py /test/
CMD python wait_service.py
3. Build and run the container. Using the container id, I could then get inside it. Please adjust the sleep time based on how long you want to keep this container.
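Once it's running, getting a shell inside is just (use whatever ID or name docker ps shows):

docker exec -it <container-id> /bin/bash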
Your application has to have some underlying OS; without one, there is no way for it to start.
I think the most basic one in the docker index is busybox, so a FROM busybox will give you a very minimal setup.
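For instance, a minimal sketch (the busybox image is only a few megabytes):

docker run --rm busybox echo "hello from a very small image"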
Docker also uses a lot of caching for each of its layers. So every Docker image that uses FROM centos:centos7 at the top will share one single copy of the minimal centos7 base image.
The base images are very minimalistic, so there is nothing to worry about.