Do I understand Docker correctly?
docker run -it --rm --name verdaccio -p 4873:4873 -d verdaccio/verdaccio
pulls the verdaccio image if it does not yet exist on my server and runs it on a specific port. -d detaches it so I can leave the terminal and keep it running, right?
docker exec -it --user root verdaccio /bin/sh
opens a shell in the running container (not literally ssh, but to the same effect). However, whatever apk package I add would be lost if I rm the container and then run the image again, as would any edited file. So what's the use of this? Can I keep the changes in the image?
As I need to edit the config.yaml that is present at /verdaccio/conf/config.yaml (in the container), is my only option to keep these changes to detach the data from the running instance? Is there another way?
V_PATH=/path/on/my/server/verdaccio; docker run -it --rm --name verdaccio \
    -p 4873:4873 \
    -v $V_PATH/conf:/verdaccio/conf \
    -v $V_PATH/storage:/verdaccio/storage \
    -v $V_PATH/plugins:/verdaccio/plugins \
    verdaccio/verdaccio
However, this command throws:
fatal--- cannot open config file /verdaccio/conf/config.yaml: ENOENT: no such file or directory, open '/verdaccio/conf/config.yaml'
You can use docker commit to build a new image based on the container.
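For example, while the container from above is still running (a sketch; my-verdaccio is a made-up image name, and note that with --rm the container is deleted as soon as it stops, so commit before that):

docker commit verdaccio my-verdaccio
docker run -it --rm --name verdaccio -p 4873:4873 -d my-verdaccio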
A better approach however is to use a Dockerfile that builds an image based on verdaccio/verdaccio with the necessary changes in it. This makes the process easily repeatable (for example if a new version of the base image comes out).
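A minimal sketch of such a Dockerfile, assuming you keep your edited config.yaml next to it (depending on the image version you may need COPY --chown to match its non-root user):

FROM verdaccio/verdaccio
# replace the default config baked into the image with your edited copy
COPY config.yaml /verdaccio/conf/config.yaml

Then build it with docker build -t my-verdaccio . and run that image instead of the base one.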
A further option is the use of volumes as you already mentioned.
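Be aware that mounting an empty host directory over /verdaccio/conf is exactly what produces the ENOENT from the question, because the mount hides the image's default config. One way around that is to seed the host directory from the image first (a sketch):

V_PATH=/path/on/my/server/verdaccio
mkdir -p $V_PATH/conf $V_PATH/storage $V_PATH/plugins
# copy the default config out of a throwaway container before mounting over it
id=$(docker create verdaccio/verdaccio)
docker cp $id:/verdaccio/conf/config.yaml $V_PATH/conf/config.yaml
docker rm $id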
I'm new to Docker, so this may be an obvious question that I'm just not using the right search terms to find an answer to, so my apologies if that is the case.
I'm trying to stand up a new CI/CD pipeline using a purpose-built container. So far, I've been using someone else's container, but I need more control over the available dependencies, so I need my own. To that end, I've built an Ubuntu-based container image, and I have a local (host) directory for the dependencies and another for the project I'm building. Both are connected to the container using Docker volumes (the -v option), like this:
docker run --name buildbox \
-v /projectpath:/home/project/ \
-v /dependencies:/home/libs \
buildImage buildScript.sh
Since this is eventually going to live in a Docker repo and be accessed by a GitLab CI/CD pipeline, I want to store the dependencies directory in as small a container as possible that I can push up to the Docker repo alongside my Ubuntu build container. That way the pipeline can pull both containers, map the dependencies container to the build container (--volumes-from), and map the project to be built using the -v option; e.g.:
docker run --name buildbox \
-v /projectpath:/home/project/ \
--volumes-from depend_vol \
buildImage buildScript.sh
Thus, I pull buildImage and depend_vol from the Docker repo, run buildImage while attaching the dependencies container and project directory as volumes, then run the build script (and extract the build artifact when it's done). The reason I want them separate is in case I want to create different build containers that use common libraries, or create version-specific dependency containers without storing a full OS (I have plans for this).
Now, I could just start a lightweight generic container (like busybox) and copy everything into it, but I was wondering if there was simply a way to attach the volume and then store the contents in the image when the container shuts down. Everything I've seen about making a portable data store / volume starts with all the data already copied into the container.
But I want to take my local host dependencies directory and store it in a container. Is there a straightforward way to do this? Am I missing something obvious?
So this works, if not quite what I was hoping for, since I'm still doing a lot of file copying (just with tarballs).
# Create a tarball of the files on the host to store, don't store the full path
tar -cvf /home/projectFiles.tar -C /home/projectFiles/ .
# Start a lightweight docker container (busybox) with a volume connection to the host (/home:/backup), then extract the tarball into the container
# cd to the drive root and untar the tarball
docker run --name libraryVolume \
-v /home:/backup \
busybox \
/bin/sh -c \
"cd / && mkdir /projectLibs && tar -xvf /backup/projectFiles.tar -C /projectLibs"
# Don't forget to commit the container as an image (naming it lets you tag and push it,
# and matches the image name used below)
docker commit libraryVolume libraryVolume
That's it. Then push to the repo.
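The push itself needs the image tagged with your registry's name first (registry.example.com is just a placeholder here):

docker tag libraryVolume registry.example.com/libraryVolume:latest
docker push registry.example.com/libraryVolume:latest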
To use it, pull the repo, then start the data volume:
docker run --name projLib \
-v /projectLibs \
--entrypoint "/bin/sh" \
libraryVolume
Then start the container (projBuild) that is going to reference the data volume (projLib).
docker run -it --name projBuild \
--volumes-from=projLib \
-v /home/mySourceCode:/buildProject \
--entrypoint /buildProject/buildScript.sh \
builderImage
Seems to work.
I have to install a lot of missing node-red nodes into the container. Keeping the (named) container and running it with docker start works fine.
Now I want to keep the installed nodes in a separate external directory. If I mount /data to an external directory it basically works, but doesn't help, since the nodes are installed in ~/node_modules. If I try to mount ~/node_modules to an external directory, node-red can't start.
So what can I do to keep the nodes I installed independent of the executed container?
EDIT:
In the meantime I ran the image as follows:
#!/bin/bash
sudo -E docker run -it --rm -p 1893:1880 -p 11893:11880 \
    -e TZ=Europe/Berlin -e NPM_CONFIG_PREFIX=/data/node_modules/ \
    -e NODE_PATH=/usr/src/node-red/node_modules:/data/node_modules:/data/node_modules/lib/node_modules \
    --log-driver none --mount type=bind,source="$(pwd)"/data,target=/data \
    --name myNodeRed nodered/node-red
but the additionally installed nodes, which are in the directory /data/node_modules/lib/node_modules, are still not visible.
EDIT 2:
In the meantime I tried keeping the container. It became obvious that nodes installed using npm install -g are completely ignored.
The default user for the Node-RED instance inside the container is not root (as is usual), so you need to make sure any volume you mount onto the /data location is writable by that user. You can do this by passing the user id into the container so that it matches the external user that has write permission on the mount point:
docker run -it --rm -v $(pwd)/data:/data -u $(id -u) -e TZ=Europe/Berlin \
    -p 1893:1880 -p 11893:11880 --log-driver none \
    --name myNodeRed nodered/node-red
Node-RED nodes should not be installed with the -g option; you should use the built-in Palette Manager or, if you really need to use the command line, run npm install <node-name> in the /data directory inside the container. (You will need to restart the container for the newly installed nodes to be picked up, which is again why you should use the Palette Manager.)
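If you do go the command-line route, something along these lines should work (a sketch; node-red-dashboard is just an example node):

docker exec -it myNodeRed sh -c 'cd /data && npm install node-red-dashboard'
docker restart myNodeRed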
It's CentOS 7; I've already installed vi and vim in my CentOS and I can use them. I run Docker on CentOS, and when I execute the line below:
docker exec -it mysolr /bin/bash
I cannot use vi/vim in the solr container:
bash: vim: command not found
Why is that and how do I fix it so I can use vi/vim to edit file in docker container?
A typical Docker image contains a minimal set of libraries and utilities to run one specific program. Additionally, Docker container filesystems are not long-lived: it is extremely routine to delete and recreate a container, for instance to use a newer version of a base image.
The upshot of this is that you never want to directly edit files in a Docker container, and most images aren't set up with "rich" editing tools. (BusyBox contains a minimal vi and so most Alpine-based images will too.) If you make some change, it will be lost as soon as you delete the container. (Similarly, you usually can install vim or emacs or whatever, but it will get lost as soon as the container is deleted: installing software in a running container isn't usually a best practice.)
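That said, for a quick look around a throwaway install is fine, with the caveat above that it vanishes with the container (assuming a Debian-based image such as the official solr image; an Alpine-based image would use apk instead):

docker exec -it --user root mysolr bash -c 'apt-get update && apt-get install -y vim'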
There are two good ways to deal with this, depending on what kind of file it is.
If the file is part of the application, like a source file, edit, debug, and test it outside of Docker space. Once you're convinced it's right (by running unit tests and by running the program locally), docker build a new image with it, and docker run a new container with the new image.
ed config.py
pytest
docker build -t imagename .
docker run -d -p ... --name containername imagename
...
ed config.py
pytest
docker build -t imagename .
docker stop containername
docker run -d -p ... --name containername imagename
If the file is configuration that needs to be injected when the application starts, the docker run -v option is a good way to push it in. You can directly edit the config file on your host, but you'll probably need to restart (or delete and recreate) the container for it to notice.
ed config.txt
docker run \
-v $PWD/config.txt:/etc/whatever/config.txt \
--name containername -p ... \
imagename
...
ed config.txt
docker stop containername
docker rm containername
docker run ... imagename
I'm following this guide to install Docker for my GitLab server running on Ubuntu 16.04.
First, I execute the following command:
docker run -d --name gitlab-runner --restart always \
-v /srv/gitlab-runner/config:/etc/gitlab-runner \
-v /var/run/docker.sock:/var/run/docker.sock \
gitlab/gitlab-runner:latest
So far so good. However, when I run the next command to register the runner from this guide:
docker run --rm -t -i -v /srv/gitlab-runner/config:/etc/gitlab-runner --name gitlab-runner gitlab/gitlab-runner register
I keep getting the message:
docker: Error response from daemon: Conflict. The container name "/gitlab-runner" is already in use by container "b055ded012f9d0ed085fe84756604464afbb11871b432a21300064333e34cb1d". You have to remove (or rename) that container to be able to reuse that name.
However, when I run docker container list to see the list of containers, it's empty.
Anyone know how I can fix this error?
Just to add my 2-cents as I've also recently been through those GitLab documents to get the Docker GitLab runner working.
Following the Docker image installation and configuration guide, it tells you to start that container first; however, I believe that is a mistake, and you want to do that after registering the runner.
If you did run:
docker run -d --name gitlab-runner --restart always \
-v /srv/gitlab-runner/config:/etc/gitlab-runner \
-v /var/run/docker.sock:/var/run/docker.sock \
gitlab/gitlab-runner:latest
Just remove the docker container with docker rm -f gitlab-runner, and move on to registering the runner.
docker run --rm -t -i -v /srv/gitlab-runner/config:/etc/gitlab-runner --name gitlab-runner gitlab/gitlab-runner register
This would register the runner, and also place the configuration in /srv/gitlab-runner/config/config.toml on the local machine.
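You can sanity-check that registration actually wrote the configuration before moving on:

cat /srv/gitlab-runner/config/config.toml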
You can then run the original docker run:
docker run -d --name gitlab-runner --restart always \
-v /srv/gitlab-runner/config:/etc/gitlab-runner \
-v /var/run/docker.sock:/var/run/docker.sock \
gitlab/gitlab-runner:latest
(NB: if this doesn't work because the name is in use again, just run the docker rm -f gitlab-runner command again; you won't lose the gitlab-runner configuration.)
And that would stand up the Docker gitlab-runner with the configuration set from the register command.
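If you want to double-check, gitlab-runner has a verify subcommand you can run inside the container:

docker exec gitlab-runner gitlab-runner verify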
Hope this helps!
You're trying to run two containers with the same name? Where did these instructions come from? Then in your response you say you get the error 'No such container: gitlab-runner-config', but that's not the name of any of the containers you're trying to run.
It seems that your first container is meant to be called gitlab-runner-config, based on everything else I see in there, including your --volumes-from. That's probably why gitlab-runner doesn't show up in docker ps: you're trying to get volumes from a container that doesn't exist. Try clearing everything, and then run the following:
$ docker run -d --name gitlab-runner-config --restart always \
-v /srv/gitlab-runner/config:/etc/gitlab-runner \
-v /var/run/docker.sock:/var/run/docker.sock \
gitlab/gitlab-runner:latest
...
$ docker run -d --name gitlab-runner --restart always \
-v /var/run/docker.sock:/var/run/docker.sock \
--volumes-from gitlab-runner-config \
gitlab/gitlab-runner:latest
EDIT: OK, so I read the guide; you're following the instructions wrong. Step 2 says to either do the one command, or the two afterwards: either a combined config-and-run container (called gitlab-runner), or a config container (called gitlab-runner-config) followed by a runner container (called gitlab-runner). You're doing multiple steps with the same container names but mixing them up.
Run docker ps -a and you will see all your containers (even the ones that aren't running). If you use the --rm option on docker run, your container will be removed when it stops, if that is the behaviour you are after.
You could also just skip the --name option entirely if you want to create more than one container from the same image and don't care about the names.
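Without --name, Docker generates a unique name for each container, so several containers from the same image can coexist:

docker run -d -v /srv/gitlab-runner/config:/etc/gitlab-runner \
    -v /var/run/docker.sock:/var/run/docker.sock \
    gitlab/gitlab-runner:latest
docker ps --format '{{.Names}}'   # prints the generated names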
I also came across this, and opened an issue against the GitLab documentation. Here's my comment in there:
Actually, I think the issue might be something different:
On step 3, clicking on the link takes you to https://docs.gitlab.com/runner/register/index.html#docker.
In doing this, you land on the right section, near the end of the page. But this also means that you miss one important bit of information at the top of the page:
Before registering a Runner, you need to first:
Install it on a server separate than where GitLab is installed on
Obtain a token for a shared or specific Runner via GitLab's interface
That is, the documentation recommends and assumes that the GitLab runner container runs on a machine separate from the GitLab server, so the instructions are not expected to work for containers on the same one.
My suggestion would be to add a note after the register step to check the registration requirements at the top of the page first.
Other than that, @johnharris85's answer works for registering the runner on the same machine. The only extra thing you need to do is add the --network="host" option to the registration command. That is:
sudo docker run --rm -t -i \
-v /srv/gitlab-runner/config:/etc/gitlab-runner \
--network="host" --name gitlab-runner-register \
gitlab/gitlab-runner register
There are various articles, like this, this and this, and many more, that explain how to use X11 forwarding to run GUI apps on Docker. I am using a CentOS Docker container.
However, all of these approaches use
docker run
with all the appropriate options in order to visualize the result. Any use of docker run creates a new container and performs the operation on top of that.
A way to work in the same container is to use docker start followed by docker attach, and then executing the commands at the prompt of the container. Additionally, the script (let's say xyz.sh) that I intend to run in the Docker container resides inside a folder MyFiles in the root directory of the container and accepts a parameter as well.
So is there a way to run the script using docker start and/or docker attach while also X11-forwarding it?
This is what I have tried, although I would like to avoid docker run and instead use docker start and docker attach:
sudo docker run -it \
--env="DISPLAY" \
--volume="/tmp/.X11-unix:/tmp/.X11-unix:rw" \
centos \
cd MyFiles \
./xyz.sh param1
export containerId=$(docker ps -l -q)
This in turn throws up an error as below -
/usr/bin/cd: line 2: cd: MyFiles/: No such file or directory
How can I run the script xyz.sh under MyFiles on the Docker container using docker start and docker attach?
Also, since the location and the name of the script may vary, I would like to know if it is mandatory to include each of these paths in the system path variable on the Docker container, or can this be done at runtime as well?
It looks to me like your problem is not with X11 forwarding but with general Docker syntax.
You could do it rather simply:
sudo docker run -it \
--env="DISPLAY" \
--volume="/tmp/.X11-unix:/tmp/.X11-unix:rw" \
-w /MyFiles \
--rm \
centos \
bash -c './xyz.sh param1'
I added:
--rm to avoid stacking old dead containers.
-w workdir, obvious meaning
bash -c to make sure your script is interpreted by bash.
How to do it without docker run:
run is actually like create followed by start. You can split it into two steps if you prefer.
If you want to attach to a container, it must be running first. And for it to be running, there must be a process currently running inside.
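So a rough equivalent of the command above, split up so that the container can be reused, might look like this (a sketch, using docker exec instead of attach to run the script from the question):

# create + start instead of run, keeping the container around afterwards
containerId=$(sudo docker create -it \
    --env="DISPLAY" \
    --volume="/tmp/.X11-unix:/tmp/.X11-unix:rw" \
    centos bash)
sudo docker start $containerId
# run the script inside the already-running container
sudo docker exec -it -w /MyFiles $containerId ./xyz.sh param1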