Run bitcoind with bitcoind.conf in docker

I know Docker, but not much about bitcoind.
Now I want to use this docker image to start my own test environment:
The description tells me:
docker volume create --name=bitcoind-data
docker run -v bitcoind-data:/bitcoin --name=bitcoind-node -d \
-p 8333:8333 \
-p 127.0.0.1:8332:8332 \
kylemanna/bitcoind
Now I want to know how to add my own bitcoind.conf.
This isn't described anywhere. Can I supply it at container startup, or do I have to use docker exec?

The repository contains a documentation file dedicated to your issue: https://github.com/kylemanna/docker-bitcoind/blob/master/docs/config.md
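In practice, that document describes mounting or editing a bitcoin.conf inside the data volume. A minimal sketch of the bind-mount approach, assuming your bitcoind.conf sits in the current directory and the image reads its configuration from /bitcoin/.bitcoin/bitcoin.conf (check config.md for the exact path your image version uses):
# Same run command as above, plus a read-only bind mount of your own config
docker run -v bitcoind-data:/bitcoin --name=bitcoind-node -d \
-p 8333:8333 \
-p 127.0.0.1:8332:8332 \
-v "$PWD/bitcoind.conf:/bitcoin/.bitcoin/bitcoin.conf:ro" \
kylemanna/bitcoind
Alternatively, since the data directory lives in the bitcoind-data volume, you can also docker cp a config into the running container (or edit the file inside the volume) and restart it.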

docker: Error response from daemon: invalid volume specification

I'm currently following this tutorial to run a model on Docker that was built using the Google Cloud AutoML Vision:
https://cloud.google.com/vision/automl/docs/containers-gcs-tutorial
I'm having trouble running the container, specifically running this command:
sudo docker run --rm --name ${CONTAINER_NAME} -p ${PORT}:8501 -v ${YOUR_MODEL_PATH}:/tmp/mounted_model/0001 -t ${CPU_DOCKER_GCR_PATH}
I have my environment variables set up right (did an echo $<env_var>). I do not have a /tmp/mounted_model/0001 directory on my local system. My model path is configured to be the model location on the cloud storage.
${YOUR_MODEL_PATH} must be a directory on the host on which you're running the container.
Your question suggests that you're using the Cloud Storage bucket path but you cannot do this.
Reviewing the tutorial, I think the instructions are confusing.
You are told to:
gsutil cp \
${YOUR_MODEL_PATH} \
${YOUR_LOCAL_MODEL_PATH}/saved_model.pb
So, your command should probably be:
sudo docker run \
--rm \
--interactive --tty \
--name=${CONTAINER_NAME} \
--publish=${PORT}:8501 \
--volume=${YOUR_LOCAL_MODEL_PATH}:/tmp/mounted_model/0001 \
${CPU_DOCKER_GCR_PATH}
NB I added --interactive --tty to make debugging easier; it's optional
NB ${YOUR_LOCAL_MODEL_PATH} not ${YOUR_MODEL_PATH}
NB The image reference should not be prefixed with -t; omit the -t before ${CPU_DOCKER_GCR_PATH}
I've not run through this tutorial.
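For completeness, a rough sketch of the full sequence, assuming a local directory such as $HOME/automl_model (the path and ordering here are assumptions, not taken verbatim from the tutorial):
# 1. Copy the exported model from Cloud Storage to a local directory
export YOUR_LOCAL_MODEL_PATH=$HOME/automl_model
mkdir -p ${YOUR_LOCAL_MODEL_PATH}
gsutil cp ${YOUR_MODEL_PATH} ${YOUR_LOCAL_MODEL_PATH}/saved_model.pb
# 2. Run the serving container against the local copy
sudo docker run --rm --name=${CONTAINER_NAME} \
--publish=${PORT}:8501 \
--volume=${YOUR_LOCAL_MODEL_PATH}:/tmp/mounted_model/0001 \
${CPU_DOCKER_GCR_PATH}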

Why do I keep seeing the nginx index.html on localhost when I run my Docker image?

I installed and ran nginx on my Linux machine to understand the configuration, etc. After a while I decided to remove it safely, by following this thread, in order to use it in Docker instead.
Following this documentation I ran this command:
sudo docker run --name ngix -d -p 8080:80 pillalexakis/myrestapi:01
And I saw nginx's homepage at localhost.
Then I deleted all nginx images & stopped all containers, and I also ran this command:
sudo docker system prune -a
But then I restarted my service with this command:
sudo docker run -p 192.168.2.9:7777:8085 phillalexakis/myfirstapi:01
and I keep seeing the nginx index.html at localhost.
How can I totally remove it?
Note: I'm new to Docker and I might have missed a lot of things. Let me know what extra Docker commands I should run in order to provide better information.
Assuming your host has been prepared as below:
your files (index.html, js, etc.) are under the folder /myhost/nginx/html
your nginx configuration is at /myhost/nginx/nginx.conf
Solution
Map your files (as a volume) on the fly from outside the Docker image, via the docker CLI.
This is the command:
docker run -it --rm -d -p 8080:80 --name web \
-v /myhost/nginx/html:/usr/share/nginx/html \
-v /myhost/nginx/nginx.conf:/etc/nginx/nginx.conf \
nginx
Copy your files into the Docker image by building your own image via a Dockerfile.
This is your Dockerfile under /myhost/nginx:
FROM nginx:latest
COPY ./html/index.html /usr/share/nginx/html/index.html
This is the command to build your docker image
cd /myhost/nginx
docker build -t pillalexakis/nginx .
This is the command to run your docker image
docker run -it --rm -d -p 8080:80 --name web \
pillalexakis/nginx
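Whichever variant you use, you can then verify what is actually answering on the port, which should also show whether a stale container or the old native nginx install is still serving the default page. These are standard Docker and Linux commands, not specific to this answer:
# list running containers and their port mappings
docker ps
# see which process is listening on the relevant host ports
sudo ss -tlnp | grep -E ':80|:8080'
# fetch the page being served to confirm it is your content
curl -s http://localhost:8080 | head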

How can an app in a Docker container access a DB on Windows?

OS: Windows server 2016
I have an app written in Go and put in a Docker container. The app has to access "D:\test.db". How can I do that?
Use Docker volumes, via the -v or --mount flag when you start your container.
A modified example from the Docker docs:
$ docker run -d \
--mount source=myvol2,target=/app \
nginx:latest
You just need to replace nginx:latest with your image name and adapt source and target as you need.
Another example (also from the docs) using -v and mounting in read-only mode:
$ docker run -d \
-v nginx-vol:/usr/share/nginx/html:ro \
nginx:latest
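For the concrete case of "D:\test.db", a bind mount of the host path is usually what you want. A minimal sketch, assuming Docker is running Linux containers on that Windows host and that your image is called my-go-app (both are assumptions, not from the question); the exact host-path syntax can vary between Docker versions on Windows, and forward slashes are usually the least error-prone:
# Mount the D: drive (or just the folder containing test.db) into the container as /data;
# inside the container the Go app then opens /data/test.db instead of D:\test.db
docker run -d --name my-go-app --mount type=bind,source=D:/,target=/data my-go-app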

Conflict. The container name "/gitlab-runner" is already in use by container

I'm following this guide to install Docker for my GitLab server running on Ubuntu 16.04.
When I execute the following command:
docker run -d --name gitlab-runner --restart always \
-v /srv/gitlab-runner/config:/etc/gitlab-runner \
-v /var/run/docker.sock:/var/run/docker.sock \
gitlab/gitlab-runner:latest
So far so good. However, when I run the next command to register the runner from this guide:
docker run --rm -t -i -v /srv/gitlab-runner/config:/etc/gitlab-runner --name gitlab-runner gitlab/gitlab-runner register
I keep getting the message:
docker: Error response from daemon: Conflict. The container name "/gitlab-runner" is already in use by container "b055ded012f9d0ed085fe84756604464afbb11871b432a21300064333e34cb1d". You have to remove (or rename) that container to be able to reuse that name.
However, when I run docker container list to see the list of containers, it's empty.
Anyone know how I can fix this error?
Just to add my 2-cents as I've also recently been through those GitLab documents to get the Docker GitLab runner working.
The Docker image installation and configuration guide tells you to start that container first; however, I believe that is a mistake, and you want to do that after registering the Runner.
If you did run:
docker run -d --name gitlab-runner --restart always \
-v /srv/gitlab-runner/config:/etc/gitlab-runner \
-v /var/run/docker.sock:/var/run/docker.sock \
gitlab/gitlab-runner:latest
Just remove the docker container with docker rm -f gitlab-runner, and move on to registering the runner.
docker run --rm -t -i -v /srv/gitlab-runner/config:/etc/gitlab-runner --name gitlab-runner gitlab/gitlab-runner register
This would register the runner, and also place the configuration in /srv/gitlab-runner/config/config.toml on the local machine.
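If you want to confirm the registration actually landed before starting the runner again, a quick check (a sketch, reusing the same host volume):
# the register step writes the runner token and settings here on the host
cat /srv/gitlab-runner/config/config.toml
# or ask the runner image itself to list the registered runners
docker run --rm -v /srv/gitlab-runner/config:/etc/gitlab-runner gitlab/gitlab-runner list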
You can then run the original docker run:
docker run -d --name gitlab-runner --restart always \
-v /srv/gitlab-runner/config:/etc/gitlab-runner \
-v /var/run/docker.sock:/var/run/docker.sock \
gitlab/gitlab-runner:latest
(NB, if this doesn't work because of the name being in use again - just run the docker rm -f gitlab-runner command again - you won't lose the gitlab-runner configuration).
And that would stand up the Docker gitlab-runner with the configuration set from the register command.
Hope this helps!
You're trying to run two containers with the same name? Where did these instructions come from? Also, in your response you say you get the error 'No such container: gitlab-runner-config', but that's not the name of any of the containers you're trying to run.
Seems that your first container is meant to be called gitlab-runner-config based on everything else I see in there, including your volumes-from. Probably that's why gitlab-runner doesn't show up in docker ps, because you're trying to get volumes from a container that doesn't exist. Try clearing everything, and then run the following:
$ docker run -d --name gitlab-runner-config --restart always \
-v /srv/gitlab-runner/config:/etc/gitlab-runner \
-v /var/run/docker.sock:/var/run/docker.sock \
gitlab/gitlab-runner:latest
...
$ docker run -d --name gitlab-runner --restart always \
-v /var/run/docker.sock:/var/run/docker.sock \
--volumes-from gitlab-runner-config \
gitlab/gitlab-runner:latest
EDIT: OK, so I read the guide; you're following the instructions incorrectly. Step 2 says to run either the single command, or the two that follow it. Either do a combined config-and-run container (which is called gitlab-runner), or do a config container (called gitlab-runner-config) and then a runner container (called gitlab-runner). You're doing multiple steps with the same container name but mixing them up.
Run docker ps -a and you will see all your containers (even the ones that aren't running). If you use the --rm option on docker run, your container will be removed when it stops, if that is the behaviour you are after.
You could always just skip the --name option entirely if you want to create more than one container from the same image and don't care about the name.
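For example, to find and clear the stopped container that still owns the name (standard Docker commands, just spelled out):
# show every container, including exited ones, so the name conflict is visible
docker ps -a
# remove the stale container so the name "gitlab-runner" can be reused
docker rm gitlab-runner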
I also came across this, and opened an issue against the GitLab documentation. Here's my comment in there:
Actually, I think the issue might be something different:
On step 3, clicking on the link takes you to https://docs.gitlab.com/runner/register/index.html#docker.
In doing this, you land on the right section, near the end of the page. But this also means that you miss one important bit of information at the top of the page:
Before registering a Runner, you need to first:
Install it on a server separate from the one where GitLab is installed
Obtain a token for a shared or specific Runner via GitLab's interface
That is, the documentation instructions recommend and assume that the gitlab runner container is on another machine. Thus they are not expected to work for containers on the same one.
My suggestion would be to add a note after the register step to check the registration requirements at the top of the page first.
Other than that, #johnharris85's answer would work for registering the runner on the same machine. The only extra thing you'd need to do is to add the --network="host" option to the command to do the registration. That is:
sudo docker run --rm -t -i \
-v /srv/gitlab-runner/config:/etc/gitlab-runner \
--network="host" --name gitlab-runner-register \
gitlab/gitlab-runner register

Docker: how to pass a relative path as an argument

I would like to run this command:
docker run docker-mup deploy --config .deploy/mup.js
where docker-mup is the name of the image, and deploy, --config, .deploy/mup.js are arguments
My question: how to mount a volume such that .deploy/mup.js is understood as the relative path on the host from where the docker run command is run?
I tried different things with VOLUME but it seems that VOLUME does the contrary: it exposes a container directory to the host.
I can't use -v because this container will be used as a build step in a CI/CD pipeline and as I understand it, it is just run as is.
"I can't use -v because this container will be used as a build step in a CI/CD pipeline and as I understand it, it is just run as is."
Using -v to expose your current directory is the only way to make that .deploy/mup.js file available inside your container, unless you bake it into the image itself using a COPY directive in your Dockerfile.
Using the -v option to map a host directory might look something like this:
docker run \
-v $PWD/.deploy:/data/.deploy \
-w /data \
docker-mup deploy --config .deploy/mup.js
This would map (using -v ...) the $PWD/.deploy directory onto /data/.deploy in your container, set the current working directory to /data (using -w ...), and then run deploy --config .deploy/mup.js.
Windows - PowerShell
If you're inside the directory you want to bind mount, use ${pwd}:
docker run -it --rm -d -p 8080:80 --name web -v ${pwd}:/usr/share/nginx/html nginx
or $pwd/. (forward slash dot):
docker run -it --rm -d -p 8080:80 --name web -v $pwd/.:/usr/share/nginx/html nginx
Just $pwd will cause an error:
docker run -it --rm -d -p 8080:80 --name web -v $pwd:/usr/share/nginx/html nginx
Variable reference is not valid. ':' was not followed by a valid variable name character. Consider using ${} to delimit the name
Mounting a subdirectory underneath your current location, e.g. "site-content", with $pwd/ plus the subdirectory name, is fine:
docker run -it --rm -d -p 8080:80 --name web -v $pwd/site-content:/usr/share/nginx/html nginx
In my case there was no need for $pwd, and using the standard current folder notation . was enough. For reference, I used docker-compose.yml and ran docker-compose up.
Here is a relevant part of docker-compose.yml.
volumes:
- '.\logs\:/data'
