I recently started migrating my self-hosted services to docker. To simplify maintenance, I'm using docker-compose. Some of the containers depend on each other, others are independent.
With more than 20 containers and almost 500 lines of YAML, maintainability has suffered.
Are there good alternatives to keeping one huge docker-compose file?
That's a big docker-compose.yml! Break it up into more than one docker-compose file.
You can pass multiple docker-compose files into one docker-compose command. If the containers group together logically, one way of making them easier to work with is to break apart the docker-compose.yml by container grouping or logical use case; if you really wanted to, you could even keep one docker-compose file per service (but then you'd have more than 20 files).
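For illustration, a hypothetical split (the service and image names here are made up, not from the question) might keep shared infrastructure in the base file and one logical group per extra file:
# docker-compose.yml: base file with the shared pieces
version: "3.8"
services:
  db:
    image: postgres:13
    volumes:
      - db-data:/var/lib/postgresql/data
volumes:
  db-data:
# docker-compose-serviceA.yml: one logical group of services
version: "3.8"
services:
  service-a:
    image: example/service-a:latest
    depends_on:
      - db
With such a split, docker-compose -f docker-compose.yml -f docker-compose-serviceA.yml up -d starts the base services plus that group.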
You can then use a bash alias or a helper script to run docker-compose.
# as an alias (bash appends whatever you type after the alias, so no $# is needed)
alias foo='docker-compose -f docker-compose.yml -f docker-compose-serviceA.yml -f docker-compose-serviceB.yml'
Then:
# simple docker-compose up with all docker-compose files
$ foo up -d
Using a bash helper file would be very similar, but you'd be able to keep the helper script updated as part of your codebase in a more straightforward way:
#!/bin/bash
# Pass any arguments (e.g. up -d, logs -f) through to docker-compose
docker-compose -f docker-compose.yml \
-f docker-compose-serviceA.yml \
-f docker-compose-serviceB.yml \
-f docker-compose-serviceC.yml \
"$@"
Note: The order of the -f <file> flags does matter, because later files override values from earlier ones; if you aren't overriding any services it may not bite you, but it's something to keep in mind.
You could also look at Kubernetes.
If you don't want to go all in, you can use minikube,
or the Kubernetes support baked into Docker on the edge channel for Windows or Mac, but that is in beta, so perhaps not for a production system.
According to here, in order to run multiple commands with docker run, I need
docker run image_name /bin/bash -c "cd /path/to/somewhere; python a.py"
However, I can specify the default shell in the Dockerfile with the SHELL instruction. So given that I can already specify the default shell of a container, why is it necessary to specify again with /bin/bash -c when running the docker container? What was the rationale behind not automatically using the shell specified with the SHELL instruction?
The SHELL instruction in the Dockerfile is only used for the Dockerfile's RUN instructions, which happen during image creation.
docker run is a completely different command: it creates a container and runs a command in it. The command you specify after the image name depends on the way the image was built. Some images let you run an arbitrary command, including /bin/bash (if it is installed).
The default command is specified in your Dockerfile with the CMD instruction; it defaults to empty.
You can also specify an ENTRYPOINT instruction, which the CMD is passed to as arguments; when either is written in shell form, Docker wraps it in /bin/sh -c.
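As a minimal sketch (the base image and the COPY step are assumptions; the paths come from the question), SHELL only changes how RUN lines execute at build time, while whatever you pass to docker run replaces CMD and is not wrapped in that shell:
# The base image is an assumption; the paths come from the question.
FROM python:3.11-slim
# SHELL affects only the RUN lines below, and only at build time.
SHELL ["/bin/bash", "-c"]
RUN echo "build step runs under bash: $BASH_VERSION"
COPY a.py /path/to/somewhere/a.py
# Default command for `docker run image_name`; a command given on the
# docker run command line replaces this and is NOT wrapped in SHELL.
CMD ["python", "/path/to/somewhere/a.py"]
Running docker run image_name /bin/bash -c "cd /path/to/somewhere; python a.py" simply replaces that CMD; Docker does not re-apply SHELL at run time, which is why the shell has to be spelled out.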
I'm a bit confused as to how to go ahead with docker.
I can build an image with the following Dockerfile:
FROM condaforge/mambaforge:4.10.1-0
# Use bash as shell
SHELL ["/bin/bash", "-c"]
# Set working directory
WORKDIR /work_dir
# Install vim
RUN ["apt-get", "update"]
RUN ["apt-get", "install", "-y", "vim"]
# Start Bash shell by default
CMD /bin/bash
I build it with docker build --rm . -t some_docker, but then I'd like to enter the container and install things interactively, so that later on I can export the whole image with all the additional installations. So I start it interactively with docker run -it some_docker, do my things, and would then like to export it.
So here are my specific questions:
Is there an easier way to build (and keep) the image available so that I can come back to it at another point? When I run docker ps -a I see so many images that I don't know what they do, since many of them don't have any tag.
After building I get the warning Use 'docker scan' to run Snyk tests against images to find vulnerabilities and learn how to fix them. Is this a problem and if so, how to solve it?
How can I specify in my Dockerfile (or docker build?) that the ports for RStudio should be open? I saw that docker-compose allows you to specify ports: 8787:8787; how do I do that here?
With docker ps -a, what you're seeing is containers rather than images. To list images, use docker image ls instead. Whether you should delete images depends on what containers you're going to run in the future. Docker uses a layered architecture with a copy-on-write strategy, so, for example, if you later build another image FROM condaforge/mambaforge:4.10.1-0, Docker won't have to download and install it again. Your example is fairly simple, but with more complicated apps it may take a lot of time to build images and run containers from scratch (the longest I have experienced is about 30 minutes). However, if storage is your concern, just go ahead and delete images that you don't use very often. Read more
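As a quick cheat sheet for telling the two apart and cleaning up the untagged leftovers (standard Docker CLI commands, nothing specific to this image):
docker ps -a          # lists containers (running and stopped)
docker image ls       # lists images, including untagged <none> ones
docker image prune    # removes dangling (untagged) images after a confirmation prompt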
Yes, it can be a problem, but it depends on the details that docker scan reports. To see more detail, you can run docker scan --file PATH_TO_DOCKERFILE DOCKER_IMAGE. Read more
A Dockerfile is for building images, while a docker-compose file is for orchestrating containers; that's why you cannot publish ports in a Dockerfile. Publishing at build time would also create problems such as security issues or port conflicts. All you can do in the Dockerfile is EXPOSE container ports, and then run docker run -d -P --name app_name app_image_name to publish all ports exposed by the container.
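Concretely, a sketch for the RStudio case from the question (assuming it listens on 8787, as the ports: 8787:8787 line suggests):
# In the Dockerfile: only documents/exposes the container port
EXPOSE 8787
# When starting the container: actually publish it on the host
docker run -d -p 8787:8787 some_docker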
When multiple containers are running, stopping and deleting them one by one wastes time.
docker container stop $(docker container ls -aq) && docker system prune -af --volumes
The above command tells Docker to stop the containers listed in the parentheses.
Inside the parentheses, docker container ls -aq generates a list of all container IDs, which is passed to docker container stop so that all of them are stopped.
The && operator then runs docker system prune, which removes the stopped containers and other unused data.
-a removes all unused images (not just dangling ones), -f skips the confirmation prompt, and --volumes prunes unused volumes as well.
Docker CLI command:
docker rm -f $(docker ps -qa)
or
docker system prune
Create an alias so you can do this every time:
vi ~/.bash_profile
alias dockererase='docker rm -f $(docker ps -qa)'
source ~/.bash_profile
I've been using Docker for quite a while for some development. Now, I am trying to learn some more advanced stuff using Kubernetes.
In some course I found information that I should run
eval $(minikube docker-env)
That would set a few environment variables: DOCKER_TLS_VERIFY, DOCKER_HOST, DOCKER_CERT_PATH and DOCKER_API_VERSION. What would it do? Wouldn't that break my day-to-day work with the default values on my host?
Also, is it possible to switch context/config for my local Docker somehow similar to kubectl config use-context?
That command points Docker's environment variables to use a Docker daemon hosted inside the Minikube VM, instead of one running locally on your host or in a Docker Desktop-managed VM. This means that you won't be able to see or run any images or Docker-local volumes you had before you switched (it's a separate VM). In the same way that you can $(minikube docker-env) to "switch to" the Minikube VM's Docker, you can $(minikube docker-env -u) to "switch back".
Using this mostly only makes sense if you're on a non-Linux host and get Docker via a VM; it lets you share the one Minikube VM rather than launching two separate VMs, one for Docker and one for Minikube.
If you’re going to use Minikube, you should use it the same way you’d use a real, remote Kubernetes cluster: set up a Docker registry, docker build && docker push images there, and reference it in your Deployment specs. The convolutions to get things like live code reloading in Kubernetes are tricky, don’t work on any other Kubernetes setup, and aren’t what you’d run in production.
The command only manipulates the current shell. Opening a new one lets you continue with your normal workflow, since the docker CLI will, by default, connect to the daemon socket at /var/run/docker.sock.
I don't know of a tool that lets you switch those settings with a single command based on a context name, the way kubectl does. You could, however, write an alias. For bash you could, for example, just execute:
$ echo 'alias docker-context-a="eval \$(minikube docker-env)"' >> ~/.bashrc
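Following the same pattern, a companion alias (the name is made up) could switch back to the host daemon using the -u flag mentioned above:
$ echo 'alias docker-context-default="eval \$(minikube docker-env -u)"' >> ~/.bashrc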
I'm new to using docker and am configuring a container.
I am unable to edit /etc/hosts (but need to for some software I'm developing). Automated edits (via sudo or root) of the file fail because it is on a read-only file system. A manual (vim) edit of the file says it's read-only, and I'm unable to save changes as root (the file permissions are rw for the owner, root).
I can however modify other files and add files in /etc.
Is there some reason for this?
Can I change the Docker configuration to allow edit of /etc/hosts?
thanks
UPDATE 2014-09
See Thomas' answer below:
/etc/hosts is now writable as of Docker 1.2.
Original answer
In the meantime, you can use this hack:
https://unix.stackexchange.com/questions/57459/how-can-i-override-the-etc-hosts-file-at-user-level
In your Dockerfile:
ADD your_hosts_file /tmp/hosts
RUN mkdir -p -- /lib-override && cp /lib/x86_64-linux-gnu/libnss_files.so.2 /lib-override
RUN perl -pi -e 's:/etc/hosts:/tmp/hosts:g' /lib-override/libnss_files.so.2
ENV LD_LIBRARY_PATH /lib-override
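To check whether the override is being picked up, a rough sketch (the image tag and hostname are placeholders) could be:
docker build -t hosts-hack .
docker run --rm hosts-hack getent hosts some-host-from-your-hosts-file
getent resolves names through NSS, so it should return the entry from /tmp/hosts if the patched library is the one being loaded.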
/etc/hosts is now writable as of Docker 1.2.
From Docker's blog:
Note, however, that changes to these files are not saved during a docker build and so will not be preserved in the resulting image. The changes will only "stick" in a running container.
This is currently a technical limitation of Docker, and is discussed further at https://github.com/dotcloud/docker/issues/2267.
It will eventually be lifted.
For now, you need to work around it, e.g. by using a custom dnsmasq server.
I recently stumbled upon a need to add an entry to the /etc/hosts file as well (in order to make sendmail work).
I ended up making it part of the Dockerfile's CMD declaration like this:
CMD echo "127.0.0.1 noreply.example.com $(hostname)" >> /etc/hosts \
&& sendmailconfig \
&& cron -f
So it effectively is not a part of the image, but it is always available after creating a container from the image.
You can do that without changing your /etc/hosts file. Just add extra_hosts to your docker-compose.yml, as in the example below:
myapp:
  image: docker.io/bitnami/laravel:9
  ports:
    - 80:8080
  extra_hosts:
    - "myapp.test:0.0.0.0"
    - "subdomain.myapp.test:0.0.0.0"
References:
How can I add hostnames to a container on the same docker network?