I'm experimenting with Docker, and I set up a Node app.
The app is in a Git repo in my Gogs container.
I want to keep all the code inside my container, so the Dockerfile is at the app root.
I want to create a shell script that automatically rebuilds my container and reruns it.
This script will later be called by a "webhook container" on a Git push.
The Docker CLI only has a build and a run command, but both fail if an image or a container with that name already exists.
What is the best practice to handle this?
Remark: I don't want to keep my app sources on the host and update only the source and restart the container!
I like the idea that my entire app is a container.
You can remove Docker containers and images before running the build or run commands.
To remove all containers:
docker rm $(docker ps -a -q)
To remove all images:
docker rmi $(docker images -q)
To remove a specific container:
docker rm -f containerName
After executing the relevant commands above, run your script. Your script will typically build, run, or pull as required.
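For the rebuild/rerun script itself, a minimal sketch could look like the following. The image name, container name, and checkout path are placeholders, and it assumes the webhook has already put the app sources (with the Dockerfile at the root) at that path:
#!/bin/sh
docker rm -f my-node-app 2>/dev/null || true        # remove the old container if it exists
docker rmi my-node-app-image 2>/dev/null || true    # drop the old image so the tag can be reused
docker build -t my-node-app-image /path/to/app      # rebuild from the Dockerfile at the app root
docker run -d --name my-node-app my-node-app-image  # start the freshly built container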
Related
I'm working on 2 projects that both use Docker, in separate directories.
In the 2nd project, for a new local build, the first command given (of a series of commands) is the following:
docker container stop $(docker container ls -a -q) && docker system prune -a -f --volumes
However, as a side effect, this kills the containers in the 1st project and destroys the databases associated with it.
This is annoying because I have to constantly rebuild and re-seed the database in the 1st project.
How can I edit the first command such that it only affects the project in the current directory?
Note that this project is also using docker-compose, which I know is good at noting the current directory, so maybe we could make use of docker-compose instead.
The full list of commands given for a new local build are:
docker container stop $(docker container ls -a -q) && docker system prune -a -f --volumes
docker stack rm up
docker-compose -f docker-compose.local.yml build
docker stack deploy up --compose-file docker-compose.local.yml
Thank you very much in advance for any help.
-Michael
I pulled a docker image from https://github.com/snowflakedb/SnowAlert.
But after pulling the image, I don't see any containers running. I used the docker container ls command and it returned no containers.
Now, for a particular use case, I want to modify a file inside this Docker image.
What is the correct way to do it?
Thanks
Here is what resolved my issue.
After pulling the image from the GitHub repository, I ran the commands below.
sudo docker images
This will display the existing images. Copy the name of the image you need to modify.
sudo docker run -it <copied_image_name> bash
This will open a bash shell in which all the files reside. Make your modifications wherever required, then type exit.
Copy the container ID from the command below:
sudo docker ps -a
Now commit the changes into a new image using the command below:
sudo docker commit <container_id> <new_image_name>
That's all.
Now run sudo docker images; this will display the newly created image with the modified content.
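Collected into a single sequence (same placeholders as above), the whole flow is roughly:
sudo docker images                                   # find the name of the pulled image
sudo docker run -it <copied_image_name> bash         # open a shell in a new container
# ... edit the files inside the container, then type exit ...
sudo docker ps -a                                    # note the ID of the container you just exited
sudo docker commit <container_id> <new_image_name>   # save the changes as a new image
sudo docker images                                   # the new image should now be listed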
Thanks
Don't confuse an image with a container: https://phoenixnap.com/kb/docker-image-vs-container.
You need to run the image to create a container from it: https://docs.docker.com/get-started/part2/#run-your-image-as-a-container
If you want to inspect or edit any running container, use this command (it is like ssh-ing into the container's server):
docker exec -it CONTAINER_ID bash
To list all Docker containers and find their IDs, use:
docker ps -a
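A minimal sketch of that flow, with a made-up container name (use whatever image name docker images shows for what you pulled):
docker run -d --name snowalert-dev <pulled_image_name>   # create and start a container from the image
docker ps -a                                             # check that it is running and note its ID
docker exec -it snowalert-dev bash                       # open a shell inside the running container
Note that if the image has no long-running default command, the container may exit immediately, in which case docker exec has nothing to attach to.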
I would like to add a scheduled job (fortnightly) to a machine using Puppet to remove all containers on that machine.
Currently I need to do sudo docker rm -f $(sudo docker ps -a -q) manually after sshing to that machine, which I want to automate.
Preferably using module: https://forge.puppet.com/puppetlabs/docker.
I can't see any option to kill and remove containers (I'm also new to Puppet). Even driving docker-compose through Puppet would be fine.
Any ideas? Thanks.
The docs you linked say:
To remove a running container, add the following code to the manifest file. This also removes the systemd service file associated with the container.
docker::run { 'helloworld':
  ensure => absent,
}
Regarding the docker command sudo docker rm -f $(sudo docker ps -a -q) that you run over ssh to remove containers, there is a better one:
$ docker container prune --help
Usage: docker container prune [OPTIONS]
Remove all stopped containers
Options:
--filter filter Provide filter values (e.g. 'until=<timestamp>')
-f, --force Do not prompt for confirmation
So the equivalent would be:
docker container prune --force
And you can automate this command via Puppet; there is no need to manually ssh into the machine. Check their docs on running shell commands without installing an agent, or use a Bolt command if you already have an agent installed on the remote host.
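If you'd rather not go through Puppet at all, a plain cron entry on the machine is another way to get the fortnightly cleanup; the schedule below (03:00 on the 1st and 15th of each month) is just an example:
# m h dom mon dow  command
0 3 1,15 * * docker container prune --force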
I have a Jenkins job that replaces a docker container with the latest image overnight. Usually this works but occasionally this fails with the error:
docker: Error response from daemon: Conflict. The container name "/demo-api" is already in use by container
The Jenkins job uses the following:
docker stop demo-api
./api_container.sh
api_container.sh does a docker pull and docker run --name demo-api -t -d --rm.
However, when I ssh in the morning after a failure and run docker ps, the container is no longer running, so it looks like it does stop eventually, just not in time for the docker run command that tries to start it with the new image.
Questions
Does the docker stop command not block until the container has actually stopped?
Should I handle this differently in my Jenkins job script?
I've seen there's also a docker wait command. Should I be using that too in my script?
Pretty sure you have a race condition here. Stop will return before the --rm takes effect. So it's a race between the --rm handled by the engine and the api_container.sh script.
I'd use an explicit docker rm to avoid the race. Note the docker rm may fail depending on where --rm is in its processing, and I'd handle that with a short sleep just to be sure it's done.
docker stop demo-api
docker rm demo-api || sleep 5
./api_container.sh
Or you can switch to a docker rm -f which will kill and delete the container in one step. Probably what you really want, and less error prone, but can leave volumes in a bad state if the app dies ungracefully.
docker rm -f demo-api
./api_container.sh
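Put together, the Jenkins step could look roughly like this, combining the graceful stop with the forced cleanup (it assumes api_container.sh still does the pull and the docker run --name demo-api ... --rm):
docker stop demo-api || true   # stop the old container if it is running
docker rm -f demo-api || true  # free the name; the error is ignored if it is already gone
./api_container.sh             # pull the latest image and start the new container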
docker stop will stop your container.
All stopped containers can be listed with this command:
docker ps --filter "status=exited"
The error you got says The container name "/demo-api" is already in use by container
It means there is already a container using the name demo-api, which is expected, since stopping a Docker container does not remove it, so the container name stays taken.
All you have to do is
either
Run the docker run command without the --name option (which is what names your container demo-api), so that each time your script pulls and runs the container it gets a new random container name.
OR
If you want to keep the same container name demo-api, then rather than stopping the container with docker stop, just remove it altogether: docker rm -f demo-api
UPDATE
I just saw you updated your question.
Stopping a container which was run using the --rm option should remove that
container altogether.
The error you got only appears when the name is already in use by another container.
For now, you can try running your script in a while loop and check whether the error occurs during the run.
Here is the script I used (I didn't hit the issue); try it on the particular machine where the issue occurs.
#!/bin/bash
i=20
while [ $i -gt 0 ]
do
    docker stop demo-api
    docker pull alpine
    docker run --name demo-api -t -d --rm alpine sh
    i=$((i - 1))
done
I'm trying to build a few Docker containers, and I found the iteration process of editing the Dockerfile, and the scripts run within it, clunky. I'm looking for best practices and to find out how others go about it.
My initial process was:
docker build -t mycontainer mycontainer
docker run mycontainer
docker exec -i -t < container id > "/bin/bash" # get into container to debug
docker rm -v < container id >
docker rmi mycontainer
Repeat
This felt expensive for each iteration, especially if it was a typo.
This alternate process required a little bit less iteration:
Install vim in the Dockerfile
docker run mycontainer
docker exec -i -t < container id > "/bin/bash" # get into container to edit scripts
docker cp to copy edited files out when done.
If I need to run any command, I carefully remember and update the Dockerfile outside the container.
Rebuild image without vim
This requires fewer iterations, but is not painless since everything's very manual and I have to remember which files changed and got updated.
I've been working with Docker in production since 0.7 and I've definitely felt your pain.
Dockerfile Development Workflow
Note: I always install vim in the container when I'm in active development. I just take it out of the Dockerfile when I release.
Setup tmux/gnu screen/iTerm/your favorite vertical split console utility.
On the right console I run:
$ vim Dockerfile
On the left console I run:
$ docker build -t username/imagename:latest . && docker run -it --name dev-1 username/imagename:latest
Now split the left console horizontally, so that the run STDOUT is above and a shell is below. Here you will run:
docker exec -it dev-1 bash
and make your edits inside the container, or run tests with:
docker exec -it dev-1 <my command>
Every time you are satisfied with your work on the Dockerfile, save it (:wq!) and then, in the left console, rerun the build-and-run command above. Test the behavior. If you are not happy, run:
docker rm dev-1
and then edit again and repeat the build-and-run step.
Periodically, when I've built up too many images or containers I do the following:
Remove all containers: docker rm $(docker ps -qa)
Remove all images: docker rmi $(docker images -q)
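On newer Docker versions a single command covers most of that periodic cleanup; be aware it removes stopped containers, unused networks, unused images, and the build cache:
docker system prune -a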
I assume the files you're editing in your Alternate process are files that make up part of the application you're deploying? Such as a Bash or Python script?
That being the case, you could mount them as a volume during your debugging process, rather than baking them into the image, so that when you edit them the changes are immediately visible both inside the container and on the host.
So for example, if your code is at /home/dragonx/codefiles, do
docker run -v /home/dragonx/codefiles:/opt/codefiles mycontainer
Then when you edit those files, either from the host or within the container, the changes are available in the container and you don't need to copy them out before removing the container.
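For example (test.py is just a made-up file name here), an edit made on the host shows up immediately inside the container:
echo "print('hello')" > /home/dragonx/codefiles/test.py   # edit on the host
docker exec <container id> cat /opt/codefiles/test.py     # the change is already visible inside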
Here is the simplest way to "build a few docker containers":
docker run -it --name=my_cont1 --hostname=my_host1 ubuntu:15.10
docker run -it --name=my_cont2 --hostname=my_host2 ubuntu:15.10
...
...
docker run -it --name=my_contn --hostname=my_hostn ubuntu:15.10
That would create n containers.
After the very first "docker run ..." command, you will be put in a Bash shell. You can do your things there, exit and run the next "docker run ..." command.
Exiting from the Bash shell does not remove the containers. They are all still there in the "Exited" status. You can list them with the docker ps -a command. And you can always get back on to them by:
docker start -ia my_cont1
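If the names follow a pattern, the same thing can be sketched as a small loop; each container's shell opens in turn, and exiting one starts the next:
for i in 1 2 3; do
    docker run -it --name=my_cont$i --hostname=my_host$i ubuntu:15.10
done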