My teammates and I are working on a backend that provides a ROS Gazebo simulation hosted online. Because it is very hard to run several instances of ROS on the same machine, we have decided to run one container per ROS Gazebo instance. However, is there any tool to orchestrate the containers (starting, stopping)? It's not a Kubernetes style of managing containers; rather, I need to spin up a container per API call. Thanks in advance!
You can use curl to consume the Docker API. Reference documentation: https://docs.docker.com/engine/api/sdk/examples/#run-a-container
curl --unix-socket /var/run/docker.sock -H "Content-Type: application/json" \
-d '{"Image": "alpine", "Cmd": ["echo", "hello world"]}' \
-X POST http://localhost/v1.41/containers/create
curl --unix-socket /var/run/docker.sock -X POST http://localhost/v1.41/containers/1c6594faf5/start
curl --unix-socket /var/run/docker.sock -X POST http://localhost/v1.41/containers/1c6594faf5/wait
curl --unix-socket /var/run/docker.sock "http://localhost/v1.41/containers/1c6594faf5/logs?stdout=1"
Here v1.41 is the Docker API version. To check which version you are running, just execute docker version.
With this approach you have to make these calls from the host running the Docker daemon, though. If you need to make remote calls to the host, the only option is to write a small service that exposes a web interface and calls the Docker API internally (https://docs.rs/docker-api/latest/docker_api/).
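For the "one container per API call" part, those create/start calls can be wrapped in a small script on the host. A minimal sketch, assuming jq is available to parse the response and using a placeholder name for your ROS Gazebo image:
#!/bin/sh
# Spin up one ROS Gazebo container per invocation via the Docker Engine API.
# "your-ros-gazebo-image" is a placeholder for whatever image you build.
SOCK=/var/run/docker.sock
API=http://localhost/v1.41
# Create the container and capture its ID from the JSON response.
ID=$(curl -s --unix-socket $SOCK -H "Content-Type: application/json" \
  -d '{"Image": "your-ros-gazebo-image"}' \
  -X POST $API/containers/create | jq -r '.Id')
# Start it and print the ID so the caller can stop/remove it later.
curl -s --unix-socket $SOCK -X POST $API/containers/$ID/start
echo "$ID"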
I'm following the Datadog guide here: https://docs.datadoghq.com/database_monitoring/setup_postgres/aurora/?tab=docker
which says to run this docker command:
docker run -e "DD_API_KEY=${DD_API_KEY}" \
-v /var/run/docker.sock:/var/run/docker.sock:ro \
-l com.datadoghq.ad.check_names='["postgres"]' \
-l com.datadoghq.ad.init_configs='[{}]' \
-l com.datadoghq.ad.instances='[{
"dbm": true,
"host": "<AWS_INSTANCE_ENDPOINT>",
"port": 5432,
"username": "datadog",
"password": "<UNIQUEPASSWORD>"
}]' \
gcr.io/datadoghq/agent:${DD_AGENT_VERSION}
That's all well and good; the labels are easy to configure. What's not clear to me is how to set up the volume in the ECS task definition (ideally in the console).
I'm not sure how to translate -v /var/run/docker.sock:/var/run/docker.sock:ro into the task definition's volume and mount point inputs.
I currently have this in my Dockerfile (but I think that's only part of the solution, and potentially incorrect):
VOLUME ["/var/run/docker.sock:/var/run/docker.sock:ro"]
That mapping is known as mounting the Docker socket, which means you are giving your container access to the Docker daemon, which in turn means it's a big deal. It works, and some specific scenarios require it (e.g. a Jenkins container may need it to be able to launch new worker containers on the host). I don't know enough about Datadog to say what they use this for.
You don't map it inside the Dockerfile, but you are on the right path in terms of where you'd want to map it in the ECS console (see here).
Please note that this setup is not supported on Fargate (only EC2).
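For reference, here is a minimal sketch of what that socket mount looks like in a task definition registered via the AWS CLI. The family, container name, memory value, and image tag are placeholders; in the console, the same thing corresponds to adding a volume with that source path plus a mount point on the container.
# Register a task definition whose container mounts the Docker socket read-only (sketch).
aws ecs register-task-definition --cli-input-json '{
  "family": "datadog-agent",
  "volumes": [
    { "name": "docker-sock", "host": { "sourcePath": "/var/run/docker.sock" } }
  ],
  "containerDefinitions": [
    {
      "name": "datadog-agent",
      "image": "gcr.io/datadoghq/agent:latest",
      "memory": 512,
      "mountPoints": [
        { "sourceVolume": "docker-sock", "containerPath": "/var/run/docker.sock", "readOnly": true }
      ]
    }
  ]
}'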
The image runs successfully, but I cannot get a response.
Here is the run command. The boot-1.0-SNAPSHOT.jar is just a simple Spring Boot project.
docker run -d -p 8888:8888 -v /usr/makoto/boot-1.0-SNAPSHOT.jar:/usr/makoto/boot-1.0-SNAPSHOT.jar --name makoto java:8u111 java -jar /usr/makoto/boot-1.0-SNAPSHOT.jar
Here is the curl command:
curl -GET 127.0.0.1:8888/ttt
The problem has been solved. When using CentOS in VMware, Docker runs successfully.
Anyway, I am grateful for your help. The cause of this problem is still unknown. Feel free to write down your conjectures; I will test them one by one.
It's simply because you are trying to curl 0.0.0.0:8888. You should curl localhost:8888 instead:
curl -X GET localhost:8888/ttt
If you want to call a container from the host, you can use localhost or 127.0.0.1 together with the port exposed by the container.
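To double-check the mapping from the host side, something like this should work (makoto and /ttt come from the question above):
# Show which host port the container's 8888 is published on.
docker port makoto
# Then hit the published port via localhost.
curl http://localhost:8888/ttt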
I've started ArangoDB from a Docker container with -e ARANGO_NO_AUTH=1 and mapped the volumes /var/lib/arangodb3 and /var/lib/arangodb3-apps to my local drive. Next I wanted to create a new app, but when I click Services in the web interface I get the following error:
GET http://127.0.0.1:8529/_db/_system/_admin/aardvark/foxxes 400 (Bad Request)
Do I need to be authenticated to do that, or is this a Docker problem? There are no errors in the log.
I'm using the latest version from Docker Hub, in this case version 2.8.9.
Docker command:
docker run -e ARANGO_NO_AUTH=1 -p 8529:8529 --name arangodb-i -v /home/me/projects/dbs/arango/db:/var/lib/arangodb3 -v /home/me/projects/dbs/arango/apps:/var/lib/arangodb3-apps arangodb/arangodb
ArangoDB Info:
INFO ArangoDB 3.0.0 [linux] 64bit, using VPack 0.1.30, ICU 54.1, V8 5.0.71.39, OpenSSL 1.0.1k 8 Jan 2015
The error message appears in both Chrome and Firefox but not in curl.
This error does not occur when I install ArangoDB on Ubuntu, only when I run it with Docker.
It seems like the Docker image needs the Authorization header to be set, but because of ARANGO_NO_AUTH it doesn't seem to matter what it's set to:
docker run --rm -e ARANGO_NO_AUTH=1 -p 8529:8529 arangodb/arangodb:3.0.0
curl -H "Authorization: foo bar" http://127.0.0.1:8529/_db/_system/_admin/aardvark/foxxes
[{"mountId":"81","mount":"/_api/gharial","name":"gharial","description":"ArangoDB Graph Module","author":"ArangoDB GmbH","system":true,"development":false,"contributors":[{"name":"Michael Hackstein","email":"m.hackstein#arangodb.com"}],"license":"Apache License, Version 2.0","version":"3.0.0","path":"/usr/share/arangodb3/js/apps/system/_api/gharial/APP","config":{},"deps":{},"scripts":{}},{"mountId":"75","mount":"/_admin/aardvark","name":"aardvark","description":"ArangoDB Admin Web Interface","author":"ArangoDB GmbH","system":true,"development":false,"contributors":[{"name":"Heiko Kernbach","email":"heiko#arangodb.com"},{"name":"Michael Hackstein","email":"m.hackstein#arangodb.com"},{"name":"Lucas Dohmen","email":"lucas#arangodb.com"}],"license":"Apache License, Version 2.0","version":"3.0.0","path":"/usr/share/arangodb3/js/apps/system/_admin/aardvark/APP","config":{},"deps":{},"scripts":{}}]
This also works:
curl --user foo:bar http://127.0.0.1:8529/_db/_system/_admin/aardvark/foxxes
The 2.8.9 image does not have this issue.
I am deploying a simple hello world nginx container with Marathon, and everything seems to work well, except that I have 6 containers that will not deregister from Consul. docker ps shows none of the containers are running.
I tried using the /v1/catalog/deregister endpoint to deregister the services, but they keep coming back. I then killed the registrator container, and tried deregistering again. They came back.
I am running registrator with
docker run -d --name agent-registrator -v /var/run/docker.sock:/tmp/docker.sock --net=host gliderlabs/registrator consul://127.0.0.1:8500 -deregister-on-success -cleanup
There is 1 consul agent running.
Restarting the machine (this is a single node installation on a local vm) does not make the services go away.
How do I make these containers go away?
Using the HTTP API to remove services is another, much nicer solution. I only figured out how to manually remove services before I figured out how to use the HTTP API.
To delete a service with the HTTP API, use the following command:
curl -v -X PUT http://<consul_ip_address>:8500/v1/agent/service/deregister/<ServiceID>
Note that your ServiceID is a combination of three things: the IP address of the host machine the container is running on, the name of the container, and the inner port of the container (i.e. 80 for Apache, 3000 for Node.js, 8000 for Django, etc.), all separated by colons (:).
Here's an example of what that would actually look like:
curl -v -X PUT http://1.2.3.4:8500/v1/agent/service/deregister/192.168.1.1:sharp_apple:80
If you want an easy way to get the ServiceID, just curl the service that contains a zombie:
curl -s http://<consul_ip_address>:8500/v1/catalog/service/<your_services_name>
Here's a real example for a service called someapp; it will return all the services under it:
curl -s http://1.2.3.4:8500/v1/catalog/service/someapp
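If you have jq available, you can pull just the ServiceID values out of that response and feed them to the deregister call above; a small sketch:
# List the ServiceID of every instance registered under someapp.
curl -s http://1.2.3.4:8500/v1/catalog/service/someapp | jq -r '.[].ServiceID'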
Don't use the catalog endpoint; use the agent endpoint instead. The reason is that the catalog is maintained by the agents, so the entry will be re-synced by an agent even if you remove it from the catalog. Here is a shell script to remove zombie services:
# Find the cluster leader and strip the port and quotes from the response.
leader="$(curl http://ONE-OF-YOUR-CLUSTER:8500/v1/status/leader | sed 's/:8300//' | sed 's/"//g')"
# Walk the list of critical health checks and deregister each service on its node.
while :
do
  serviceID="$(curl http://$leader:8500/v1/health/state/critical | ./jq '.[0].ServiceID' | sed 's/"//g')"
  node="$(curl http://$leader:8500/v1/health/state/critical | ./jq '.[0].Node' | sed 's/"//g')"
  echo "serviceID=$serviceID, node=$node"
  size=${#serviceID}
  echo "size=$size"
  if [ $size -ge 7 ]; then
    curl --request PUT http://$node:8500/v1/agent/service/deregister/$serviceID
  else
    break
  fi
done
curl http://$leader:8500/v1/health/state/critical
The JSON parser jq is used to retrieve the fields.
Here is how you can absolutely delete all the zombie services: go into your Consul server, find the location of the JSON files containing the zombies, and delete them.
For example I am running consul in a container:
docker run --restart=unless-stopped -d -h consul0 --name consul0 -v /mnt:/data \
-p $(hostname -i):8300:8300 \
-p $(hostname -i):8301:8301 \
-p $(hostname -i):8301:8301/udp \
-p $(hostname -i):8302:8302 \
-p $(hostname -i):8302:8302/udp \
-p $(hostname -i):8400:8400 \
-p $(hostname -i):8500:8500 \
-p $(ifconfig docker0 | awk '/\<inet\>/ { print $2}' | cut -d: -f2):53:53/udp \
progrium/consul -server -advertise $(hostname -i) -bootstrap-expect 3
Notice the flag -v /mnt:/data: this is where all the data Consul stores is located. For me it was located in /mnt. Under this directory you will find several other directories:
config raft serf services tmp
Go into services and you will see the files that contain the JSON info of your services. Find any that contain the info of zombies and delete them, then restart Consul. Repeat for each server in your cluster that has zombies on it.
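Roughly, with the volume mapping above (-v /mnt:/data), that comes down to something like this on each affected host; the service name and the file name are placeholders:
# Find the service files under the host-mapped data dir that mention the zombie service.
grep -l someapp /mnt/services/*
# Remove the matching file(s), then restart Consul so it reloads its state.
rm /mnt/services/<matching-file>
docker restart consul0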
In a Consul cluster the agents are considered authoritative. If you use the HTTP API /v1/catalog/deregister endpoint to deregister services, they will keep coming back as long as other agents know about them. That's the way the gossip protocol works.
If you want services to go away immediately, you need to deregister the host agent properly by issuing a consul leave before killing the service on the node.
This is one of the problems with Consul and registrator: if the service doesn't have a check associated with it, the service will stick around until it's deregistered and stay "active". So it's good practice to have services register a health check as well. That way they will at least go critical if registrator messes up and forgets to deregister the service (which I see happen a lot). Alex's answer, erasing the files in Consul's data/services directory (then consul reload), definitely works to erase the service, but registrator will re-add them if the containers are still around and running. Apparently the newer registrator versions are better at cleanup, but I've had mixed success. Now I don't use registrator at all, since it doesn't add health checks. I use Nomad to run my containers (also from HashiCorp); it creates the service AND the health check, and does a great job of cleaning up after itself.
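If I remember registrator's Consul backend correctly, you can attach such a health check by setting SERVICE_CHECK_* metadata on the container, roughly like this (the service name, path, and interval are just examples):
# Register an nginx container with an HTTP health check via registrator metadata (sketch).
docker run -d -p 80:80 \
  -e "SERVICE_NAME=hello-nginx" \
  -e "SERVICE_CHECK_HTTP=/" \
  -e "SERVICE_CHECK_INTERVAL=15s" \
  nginx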
Try switching to v5:
docker run -d --name agent-registrator -v /var/run/docker.sock:/tmp/docker.sock gliderlabs/registrator:v5 -internal consul://172.16.0.4:8500
I'd like to be able to easily clean up containers after they exit. Is this possible with the remote API? (Other than detecting the exit myself and removing the container with the DELETE /containers endpoint.)
larsks' answer is now outdated. Docker Remote API 1.25 shifted the --rm functionality from the client to the server. There is an AutoRemove flag under HostConfig when creating a container that does exactly this.
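In terms of the raw API calls shown earlier in this thread, that looks roughly like this (the API version and image are just the same examples as above):
# Create a container that the daemon removes automatically once it exits (API >= 1.25).
curl --unix-socket /var/run/docker.sock -H "Content-Type: application/json" \
  -d '{"Image": "alpine", "Cmd": ["echo", "hello world"], "HostConfig": {"AutoRemove": true}}' \
  -X POST http://localhost/v1.41/containers/create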
The --rm option in the Docker client is entirely a client-side option. This is, for example, why you can't combine -d with --rm: the client can only remove the container on exit if it stays attached to it.
You could write a cleanup script that periodically runs docker ps -f status=exited -q and removes the result.
You could also achieve something more automated by monitoring the Docker API's /events endpoint and responding immediately to container exits, I guess.
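A minimal sketch of that event-driven approach, assuming the docker CLI is available on the host:
# Watch for container exits and remove each container as soon as it dies.
docker events --filter 'event=die' --format '{{.ID}}' |
while read -r id; do
  docker rm "$id"
done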