I'm trying to link a child mongo container to a parent node container using the Docker remote API v1.7.
I see the Links property in HostConfig which I'm guessing is passed to the
POST /containers/<id>/start request like
{
"Links": ["<container-name>:<alias>", ...]
}
I don't see how to name the mongo container to use when starting the node container. Is there an API analogy to the CLI -name flag for docker run?
Do I need to make a separate GET /containers/<id>/json request and live with the auto-generated name?
In the current (1.8) API, the -name flag is passed as a query-string parameter to POST /v1.8/containers/create, i.e. like this:
POST /v1.8/containers/create?name=redis_ambassador
(POST body left out for brevity)
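For example, creating a named mongo container and then linking it at start time could look like the following sketch. Everything here is illustrative: it assumes the daemon is listening on TCP port 4243, and the container names, alias, and image are placeholders.
# create the child container with an explicit name (name and image are illustrative)
curl -s -X POST "http://localhost:4243/v1.8/containers/create?name=mongo_db" \
  -H 'Content-Type: application/json' \
  -d '{"Image": "mongo"}'
# start the parent container, passing the Links from the question's HostConfig
curl -s -X POST "http://localhost:4243/v1.8/containers/node_app/start" \
  -H 'Content-Type: application/json' \
  -d '{"Links": ["mongo_db:mongo"]}'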
I figured this out by following Geoffrey Bachelet's excellent suggestion of using socat as a proxy for all of my docker CLI commands, via the following:
# on one terminal
sudo socat -t100 -v UNIX-LISTEN:/tmp/proxysocket.sock,mode=777,reuseaddr,fork UNIX-CONNECT:/var/run/docker.sock
# on a second terminal
export DOCKER_HOST="unix:///tmp/proxysocket.sock"
Subsequent docker CLI commands will be proxied through socat, and the underlying API requests will be displayed on the first terminal.
Related
I'm creating an application that will allow users to upload video files that will then be put through some processing.
I have two containers:
1. An Nginx container that serves the website where users can upload their video files.
2. A video processing container that has FFmpeg and some other processing tools installed.
What I want to achieve: I need container 1 to be able to run a bash script on container 2.
One possibility, as far as I can see, is to make them communicate over HTTP via an API. But then I would need to install a web server in container 2 and write an API, which seems like overkill.
I just want to execute a bash script.
Any suggestions?
You have a few options, but the first two that come to mind are:
In container 1, install the Docker CLI and bind mount /var/run/docker.sock (you need to specify the bind mount from the host when you start the container). Then, inside the container, you should be able to use docker commands against the bind-mounted socket as if you were executing them from the host (you might also need to chmod the socket inside the container to allow a non-root user to do this). A minimal sketch of this appears after this list.
You could install SSHD on container 2, and then ssh in from container 1 and run your script. The advantage here is that you don't need to make any changes inside the containers to account for the fact that they are running in Docker and not on bare metal. The downside is that you will need to add the SSHD setup to your Dockerfile or startup scripts.
Most of the other ideas I can think of are just variants of option (2), with SSHD replaced by some other tool.
Also be aware that Docker networking is a little strange (at least on Mac hosts), so you need to make sure that the containers are on the same Docker network and are able to communicate over it.
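For instance, a minimal sketch of option 1, assuming the Docker CLI is already installed in container 1 (image and container names are illustrative):
# on the host: start container 1 with the Docker socket bind mounted
docker run -d --name web -v /var/run/docker.sock:/var/run/docker.sock my-nginx-image
# inside container 1: the mounted socket lets you drive the host's daemon,
# e.g. to run the script in container 2
docker exec video-processor bash /scripts/process.sh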
Warning:
To be completely clear, do not use option 1 outside of a lab or a very controlled dev environment. It takes a secure socket that has full authority over the Docker runtime on the host and grants unchecked access to it from a container. Doing that makes it trivially easy to break out of the Docker sandbox and compromise the host system. About the only place I would consider it acceptable is as part of a full-stack integration test setup that will only be run ad hoc by a developer. It's a hack that can be a useful shortcut in some very specific situations, but the drawbacks cannot be overstated.
I wrote a python package especially for this use-case.
Flask-Shell2HTTP is a Flask extension that converts a command-line tool into a RESTful API with just five lines of code.
Example Code:
from flask import Flask
from flask_executor import Executor
from flask_shell2http import Shell2HTTP

app = Flask(__name__)
executor = Executor(app)
shell2http = Shell2HTTP(app=app, executor=executor, base_url_prefix="/commands/")

# map HTTP endpoints to shell commands
shell2http.register_command(endpoint="saythis", command_name="echo")
shell2http.register_command(endpoint="run", command_name="./myscript")

if __name__ == "__main__":
    app.run(port=4000)  # matches the port used in the curl example below
The endpoints can then be called easily, like this:
$ curl -X POST -H 'Content-Type: application/json' -d '{"args": ["Hello", "World!"]}' http://localhost:4000/commands/saythis
You can use this to create RESTful microservices that execute pre-defined shell commands/scripts with dynamic arguments asynchronously and fetch the results.
It supports file upload, callback functions, reactive programming, and more. I recommend checking out the Examples.
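For completeness, fetching the result of an asynchronous run looks roughly like this. Treat the exact response fields as an assumption on my part: as I recall from the library's README, the initial POST returns JSON containing a key for the queued job.
# poll the same endpoint with the key from the POST response to get the report
curl http://localhost:4000/commands/saythis?key=<key-from-post-response>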
Running a docker command from a container is not straightforward and not really a good idea (in my opinion), because:
You'll need to install Docker in the container (and deal with Docker-in-Docker issues)
You'll need to share the Unix socket, which is not a good thing if you have no idea what you're doing.
So, this leaves us with two solutions:
Install ssh on your container and execute the command through ssh
Share a volume and have a process that watches for something to trigger your batch; a minimal sketch of this follows below
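A rough sketch of that second approach, with illustrative paths and script names: container 1 drops a trigger file into a volume shared by both containers, and a small loop in container 2 picks it up and runs the batch.
# container 2: watch the shared volume for trigger files
while true; do
  for f in /shared/jobs/*.job; do
    [ -e "$f" ] || continue       # skip when the glob matched nothing
    ./process-video.sh "$f"       # run the batch for this job
    rm "$f"                       # mark the job as done
  done
  sleep 2
done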
It was mentioned here before, but a reasonable, semi-hacky option is to install SSH in both containers and then use ssh to execute commands on the other container:
# install SSH, if you don't have it already
sudo apt install openssh-server
# start the ssh service
sudo service ssh start
# start the daemon
sudo /usr/sbin/sshd -D &
Assuming you don't want to always be root, you can add a default user (in this case, 'foobob'):
useradd -m --no-log-init --system --uid 1000 foobob -s /bin/bash -g sudo -G root
#change password
echo 'foobob:foobob' | chpasswd
Do this on both the source and target containers. Now you can execute a command from container_1 to container_2.
# obtain container-id of target container using 'docker ps'
ssh foobob@<container-id> << "EOL"
echo 'hello bob from container 1' > message.txt
EOL
You can automate the password with ssh-agent, or you can use something a bit more hacky with sshpass (install it first using sudo apt install sshpass):
sshpass -p 'foobob' ssh foobob@<container-id>
I believe
docker exec -it <container_name> <command>
should work, even inside the container.
You could also try mounting docker.sock into the container you want to execute the command from:
docker run -v /var/run/docker.sock:/var/run/docker.sock ...
I run several docker commands in .gitlab-ci.yml.
Some of them require current machine IP address to be passed to them, like this:
docker build --pull -t my_image . --add-host=<my service>:<current ip>
$CI_SERVER_HOSTNAME is not the one; its value is gitlab.com. I need the actual IP address of the CI machine, as ifconfig would see it, from the .gitlab-ci.yml file.
I am not finding any $CI_... variable for that. I know extracting it from ifconfig is possible, but that won't work when the docker commands are executed one by one on a Mac.
Note: I know it's usually something like 172.0.0.x, but I need the exact one, plus I wonder if a variable for it exists.
In order to get the IP address of the machine on which the runner executes, we will use the GitLab API: https://docs.gitlab.com/ee/api/runners.html#get-runners-details
GET /runners/:id
This API call returns the details of the runner with the specified :id. When a job executes, this id is available in the CI_RUNNER_ID predefined variable.
By combining all this and utilizing jq and sed, we get the following one-liner that returns the IP address of the runner executing the current job:
curl -s --header "PRIVATE-TOKEN: <your access token>" https://gitlab.com/api/v4/runners/${CI_RUNNER_ID} | jq '.ip_address' | sed 's/^"\(.*\)"$/\1/'
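Putting it together in a job script might look like the sketch below; API_TOKEN and the my-service hostname are placeholders. (jq -r would also strip the quotes without needing sed.)
# fetch the runner's IP via the GitLab API
RUNNER_IP=$(curl -s --header "PRIVATE-TOKEN: ${API_TOKEN}" \
  "https://gitlab.com/api/v4/runners/${CI_RUNNER_ID}" | jq -r '.ip_address')
# pass it to the docker build from the question
docker build --pull -t my_image . --add-host=my-service:${RUNNER_IP}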
Hello, I have an Ubuntu VM (using a bridged adapter) in which I'm running a Docker container in which I'm starting Rundeck with a pre-built WAR file in a mounted volume. When I run the WAR the first time, it creates its files and this config file:
#loglevel.default is the default log level for jobs: ERROR,WARN,INFO,VERBOSE,DEBUG
loglevel.default=INFO
rdeck.base=/home/rundeck/rundeck
#rss.enabled if set to true enables RSS feeds that are public (non-authenticated)
rss.enabled=false
server.address=7d142a279564
grails.serverURL=http://7d142a279564:4440
dataSource.dbCreate = update
dataSource.url = jdbc:h2:file:/home/rundeck/rundeck/server/data/grailsdb;MVCC=true
# Pre Auth mode settings
rundeck.security.authorization.preauthenticated.enabled=false
rundeck.security.authorization.preauthenticated.attributeName=REMOTE_USER_GROUPS
rundeck.security.authorization.preauthenticated.delimiter=,
# Header from which to obtain user name
rundeck.security.authorization.preauthenticated.userNameHeader=X-Forwarded-Uuid
# Header from which to obtain list of roles
rundeck.security.authorization.preauthenticated.userRolesHeader=X-Forwarded-Roles
# Redirect to upstream logout url
rundeck.security.authorization.preauthenticated.redirectLogout=false
rundeck.security.authorization.preauthenticated.redirectUrl=/oauth2/sign_in
rundeck.log4j.config.file=/home/rundeck/rundeck/server/config/log4j.properties
As you can see, "server.address" and "grails.serverURL" default to the container ID as the hostname.
I can't access the container using this URL, but I can access it using localhost:4440. However, after logging in, Rundeck redirects me to "grails.serverURL", which gives "Server Not Found" as stated before.
This is how I'm starting the container:
sudo docker run -it -v /path/to/source:/path/to/dest -p 4440:4440 <imageID>
When I change "server.address" and "grails.serverURL" to localhost or 127.0.0.1, I can't access the container at all.
Sorry if the question was answered before; I'm new to Docker and have been at this for several days now and couldn't find a solution. Thanks!
I'm no expert in Rundeck, but looking at the documentation, the Rundeck image has two environment variables for setting the URL and address: RUNDECK_GRAILS_URL and RUNDECK_SERVER_ADDRESS.
docker run -d -e RUNDECK_GRAILS_URL=http://127.0.0.1:4440 -e RUNDECK_SERVER_ADDRESS=0.0.0.0 -p 4440:4440 rundeck/rundeck
Now you can access your application at http://localhost:4440
If you're running your Docker container on a remote server, update RUNDECK_GRAILS_URL to RUNDECK_GRAILS_URL=http://<remote_server_ip>:4440.
Now you can access your app at http://remote_server_ip:4440
I'm trying to test an ASP.NET Core 2 dockerized application in VSTS. It is set up inside the Docker container via docker-compose. The tests make requests to addresses stored in config (or taken from environment variables, if set).
Right now, the build is set up like this:
Run a compose command to restore and publish the app.
Run compose to create and run docker containers.
Run a bash script (explained below).
Run tests.
First of all, I found out that I can't use http://localhost:port inside VSTS. It works fine on my local machine, but it does not work on the server.
I've found this article that points out the need to use the container's real IP to access it. I've tried two of the methods described in the referenced question, but neither of them worked.
When using docker inspect -f '{{range .NetworkSettings.Networks}}{{.IPAddress}}{{end}}' container_name_or_id, I get Template parsing error: template: :1:24: executing "" at <.NetworkSettings.Net...>: map has no entry for key "NetworkSettings" (the problem is with the command itself)
And when using docker inspect $(sudo docker ps | grep wiremocktest_microservice.gateway | head -c 12) | grep -e \"IPAddress\"\:[[:space:]]\"[0-2] | grep -o '[0-9]\{1,3\}\.[0-9]\{1,3\}\.[0-9]\{1,3\}\.[0-9]\{1,3\}', I actually get the IP and can pass it to the tests, but then something strange happens: they start to time out. I tried to replicate this locally, and the same thing happens: every request that I make to this IP times out (easily checked in a browser).
What address do I need to use to access the containers in VSTS, and why can't I use localhost?
I've run into a similar problem with an Azure Storage service running in a container for unit tests (a Gradle & Kotlin project). Locally everything works, and it's possible to connect to the container by using localhost:10000 (the port is published to the host machine in the run command). But this doesn't work on the VSTS build pipeline, and neither does trying to connect with the IP of the container.
I've found a solution that works at least in this case: I created a custom container network and connected my Azure Storage container and the VSTS agent container to that network. After that it's possible to connect to my custom container from the tests by using the container name and internal port number e.g. my-storage-container:10000.
So I created a script that creates the container network, starts my container in that network, and then also connects the VSTS agent by grepping the container ID from the process list. It's something like this:
docker network create my-custom-network
docker run --net=my-custom-network -d --name azure-storage-container -t -p 10000:10000 -v ${SCRIPT_DIR}/azurite:/opt/azurite/folder arafato/azurite
CONTAINER_ID=`docker ps -a | awk '{ print $1,$2 }' | grep microsoft/vsts-agent | awk '{print $1 }'`
docker network connect my-custom-network ${CONTAINER_ID}
After that my tests can connect to the Azure storage container with http://azure-storage-container:10000 with no problems.
Hello, I am very new to Docker and I want to do some initial configuration in Couchbase, like this:
Create a bucket
Set admin password
I want to automate these two processes, because when I first run couchbase-db on Docker (docker-compose up -d couchbase-db), I go to localhost:8091 and set the admin password. If I don't do this on the first run, Couchbase doesn't run properly.
What are the ways to do this? Are there any images for doing this? Can I change the Dockerfile for the initial configuration?
Thanks.
Running Couchbase inside a Docker container is quite trivial. You just need to run the command below and you are done. And yes, as you mentioned, once the command has run, just launch the URL and configure Couchbase through the web console.
$sudo docker run -d --name cb1 couchbase
The command above runs Couchbase in detached mode (-d) with the name cb1.
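Note that to reach the web console from the host you will typically also need to publish the console port when starting the container, e.g.:
$sudo docker run -d --name cb1 -p 8091:8091 couchbase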
I have provided more details about it on my blog, here.
I have been searching myself and have struck gold; I'd like to share it here for the record.
There is indeed an image for what the OP seeks to do: pre-configure the server when the container is created.
This method uses the Couchbase API in a custom image that runs this shell script to set up the server and any worker nodes, if needed.
Here's an example to set credentials:
curl -v http://127.0.0.1:8091/settings/web -d port=8091 -d username=Administrator -d password=password
https://github.com/arun-gupta/docker-images/blob/master/couchbase/configure-node.sh
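A bucket can be created the same way through the REST API. The bucket name and RAM quota below are illustrative; see the bucket-create documentation linked in the sources at the end.
curl -v -u Administrator:password http://127.0.0.1:8091/pools/default/buckets \
  -d name=mybucket -d ramQuotaMB=128 -d authType=sasl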
Here's how to use the image with docker-compose:
https://github.com/arun-gupta/docker-images/tree/master/couchbase
After the image is pulled, modify configure-node.sh to fit your needs. Then run it in single or swarm mode.
Sources:
https://blog.couchbase.com/couchbase-using-docker-compose/
https://docs.couchbase.com/server/4.0/rest-api/rest-bucket-create.html