I started my node container with these flags:
daemon=1
printtoconsole=1
testnet=1
rpcport=9332
rpcallowip=0.0.0.0/0
rpcuser=user
rpcpassword=password
rpcbind=0.0.0.0
server=1
I opened the port in my docker-compose:
node:
  image: bitcoin-sv
  container_name: 'node'
  restart: always
  ports:
    - '9332:9332'
I can call methods with bitcoin-cli inside my container:
docker exec -it node bash
root@9196d074e4d8:/opt/bitcoin-sv# ./bitcoin-cli getinfo
But I cannot call it from curl:
curl --user user --data-binary '{"jsonrpc": "1.0", "id": "curltest", "method": "getinfo", "params": ["", 0.1, "donation", "seans outpost"]}' -H 'content-type: text/plain;' http://127.0.0.1:9332
Enter host password for user 'user':
curl: (52) Empty reply from server
How can I call it from curl? Maybe I have to go through the CLI?
Not sure what your issue might be, but the first step would be to run the curl inside the container in order to verify that the HTTP interface is working properly. So you should try this:
docker exec -it node bash
root@9196d074e4d8:/opt/bitcoin-sv# curl --user user --data-binary '{"jsonrpc": "1.0", "id": "curltest", "method": "getinfo", "params": ["", 0.1, "donation", "seans outpost"]}' -H 'content-type: text/plain;' localhost:9332
Once you are sure the interface is working inside the container, you can move forward and try it from the host.
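If it does work inside the container, a host-side call through the published 9332 port from the compose file is the next check. A minimal sketch based on the flags above (user/password as configured; getinfo normally takes no arguments, so an empty params array is enough):
curl --user user:password --data-binary '{"jsonrpc": "1.0", "id": "curltest", "method": "getinfo", "params": []}' -H 'content-type: text/plain;' http://127.0.0.1:9332/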
I've googled the f out of this.
Here is the log driver config for my Docker container in the compose file:
driver: gelf
options:
  gelf-address: "http://graylog:12201"
I created a GELF HTTP input in the admin console.
I know that graylog is accessible at 12201, because if I ssh into a container and run
curl -X POST -H 'Content-Type: application/json' -d '{ "version": "1.1", "host": "example.org", "short_message": "A short message", "level": 5, "_some_info": "test" }' 'http://graylog:12201/gelf'
Then I can see the log message.
The problem is, it seems like I have to add the /gelf to the address, but Docker complains if I try to do that. No other curl commands work without it, and I can't seem to get it to work with TCP or UDP at all. So... what am I doing wrong?
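For comparison, the Docker gelf log driver accepts udp:// (and, on newer engines, tcp://) addresses rather than http://, and the address is resolved by the Docker daemon on the host, not from inside the Compose network. A minimal sketch of the logging section under those assumptions, keeping the graylog host name from above (it would need to resolve from the Docker host) and assuming a GELF UDP input exists in Graylog:
logging:
  driver: gelf
  options:
    gelf-address: "udp://graylog:12201"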
Inside my virtual machine, I have the following docker-compose.yml file:
services:
  nginx:
    image: "nginx:1.23.1-alpine"
    container_name: parse-nginx
    ports:
      - "80:80"
  mongo-0:
    image: "mongo:5.0.6"
    container_name: parse-mongo-0
    volumes:
      - ./mongo-0/data:/data/db
      - ./mongo-0/config:/data/config
  server-0:
    image: "parseplatform/parse-server:5.2.4"
    container_name: parse-server-0
    ports:
      - "1337:1337"
    volumes:
      - ./server-0/config-vol/configuration.json:/parse-server/config/configuration.json
    command: "/parse-server/config/configuration.json"
The configuration.json file specified for server-0 is as follows:
{
  "appId": "APPLICATION_ID_00",
  "masterKey": "MASTER_KEY_00",
  "readOnlyMasterKey": "only",
  "databaseURI": "mongodb://mongo-0/test"
}
After using docker compose up, I execute the following command from the VM:
curl -X POST -H "X-Parse-Application-Id: APPLICATION_ID_00" -H "Content-Type: application/json" -d '{"score":1000,"playerName":"Sean Plott","cheatMode":false}' http://localhost:1337/parse/classes/GameScore
The output is:
{"objectId":"yeHHiu01IV","createdAt":"2022-08-25T02:36:06.054Z"}
I use the following command to get inside the nginx container:
docker exec -it parse-nginx sh
Pinging parse-server-0 shows that it does resolve to a proper IP address. I then run a modified version of the curl command above, replacing localhost with that host name:
curl -X POST -H "X-Parse-Application-Id: APPLICATION_ID_00" -H "Content-Type: application/json" -d '{"score":1000,"playerName":"Sean Plott","cheatMode":false}' http://parse-server-0:1337/parse/classes/GameScore
It gives me a 504 error like this:
...
<title>504 DNS look up failed</title>
</head>
<body><div class="message-container">
<div class="logo"></div>
<h1>504 DNS look up failed</h1>
<p>The webserver reported that an error occurred while trying to access the website. Please return to the previous page.</p>
...
However if I use no_proxy as follows, it works:
no_proxy="parse-server-0" curl -X POST -H "X-Parse-Application-Id: APPLICATION_ID_00" -H "X-Parse-Master-Key: MASTER_KEY_00" -H "Content-Type: application/json" -d '{"score":1000,"playerName":"Sean Plott","cheatMode":false}' http://parse-server-0:1337/parse/classes/GameScore
The output is again something like this:
{"objectId":"ICTZrQQ305","createdAt":"2022-08-25T02:18:11.565Z"}
I am very perplexed by this. Clearly, parse-server-0 is reachable with ping. How can it then throw a 504 error without no_proxy? The parse-nginx container is using the default settings and configuration. I did not set up any proxy. I am using it to test the curl command from another container to parse-mongo-0. Any help would be greatly appreciated.
The contents of /etc/resolv.conf is:
nameserver 127.0.0.11
options edns0 trust-ad ndots:0
Running echo $HTTP_PROXY inside parse-nginx returns:
http://10.10.10.10:8080
This value is null inside the VM.
Your proxy server doesn't appear to be running in this docker network. So when the request goes to that proxy server, it will not query the docker DNS on this network to resolve the other container names.
If your application isn't making requests outside of the docker network, you can remove the proxy settings. Otherwise, you'll want to set no_proxy for the other docker containers you will be accessing.
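If you go the no_proxy route, a sketch of how that could look in the compose file (the service and container names simply mirror the ones above; the variable is given in both cases because tools differ on which spelling they read):
  nginx:
    image: "nginx:1.23.1-alpine"
    container_name: parse-nginx
    environment:
      - NO_PROXY=parse-server-0,parse-mongo-0
      - no_proxy=parse-server-0,parse-mongo-0
    ports:
      - "80:80"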
Please check the value of echo $http_proxy. Note the lowercase here. If this value is set, it means curl is configured to use the proxy. You're getting a 504 during DNS resolution most probably because your parse-nginx container isn't able to reach the IP 10.10.10.10. Specifying no_proxy tells curl to ignore the http_proxy env var (overriding it) and make the request without any proxy.
Inside my VM, this is the contents of the ~/.docker/config.json file:
{
  "proxies": {
    "default": {
      "httpProxy": "http://10.10.10.10:8080",
      "httpsProxy": "http://10.10.10.10:8080"
    }
  }
}
This was implemented a while back as an ad hoc fix for some network issues. A security certificate was later implemented, and I completely forgot about the fix. Clearing the ~/.docker/config.json file and redoing docker compose up fixes the issue. I no longer need no_proxy to make curl work. Everything is as it should be now. Thank you so much for all the help.
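For anyone who lands on the same problem, the fix boils down to something like this (a sketch; the exact contents of config.json will differ per setup):
# edit ~/.docker/config.json and remove the "proxies" section
docker compose down
docker compose up -d
The proxy values from ~/.docker/config.json are injected into containers as environment variables when they are created, so the containers have to be recreated for the change to take effect.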
I am deploying the Vault Docker image on Ubuntu 16.04. I can successfully initialize it from inside the container itself, but I can't get any REST responses, and even curl does not work.
I am doing the following:
Create the config file local.json:
{
  "listener": [{
    "tcp": {
      "address": "127.0.0.1:8200",
      "tls_disable": 1
    }
  }],
  "storage": {
    "file": {
      "path": "/vault/data"
    }
  },
  "max_lease_ttl": "10h",
  "default_lease_ttl": "10h"
}
under the /vault/config directory.
Run the command to start the image:
docker run -d -p 8200:8200 -v /home/vault:/vault --cap-add=IPC_LOCK vault server
Enter a shell in the container:
docker exec -it containerId /bin/sh
Run the following inside:
export VAULT_ADDR='http://127.0.0.1:8200' and then vault init
This works fine, but when I try to send a REST request to check whether Vault is initialized:
GET request to the following URL: http://Ip-of-the-docker-host:8200/v1/sys/init
I get no response.
Even the curl command fails:
curl http://127.0.0.1:8200/v1/sys/init
curl: (56) Recv failure: Connection reset by peer
I couldn't find a proper explanation anywhere online of what the problem is, or whether I am doing something wrong.
Any ideas?
If a server running in a Docker container binds to 127.0.0.1, it's unreachable from anything outside that specific container (and since containers usually only run a single process, that means it's unreachable by anyone). Change the listener address to 0.0.0.0:8200; if you need to restrict access to the Vault server, bind it to a specific host address in the docker run -p option.
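With that suggestion applied, the listener block in local.json would look something like this (a sketch; only the address changes from the config above):
"listener": [{
  "tcp": {
    "address": "0.0.0.0:8200",
    "tls_disable": 1
  }
}]
And if Vault should only be reachable from the Docker host itself, the publish option can be narrowed instead, e.g.:
docker run -d -p 127.0.0.1:8200:8200 -v /home/vault:/vault --cap-add=IPC_LOCK vault server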
Normally, I get this registration command from the master host's dashboard:
$ sudo docker run --rm --privileged -v /var/run/docker.sock:/var/run/docker.sock -v /var/lib/rancher:/var/lib/rancher rancher/agent:v1.2.2 http://192.168.0.100:8080/v1/scripts/5D8B3FD489C00C7F361A:2483142400000:WvMClyNFLXQnT9pLuii3D0sYA
If I want to deploy multiple nodes to other hosts automatically, I need to get this token from the master:
5D8B3FD489C00C7F361A:2483142400000:WvMClyNFLXQnT9pLuii3D0sYA
Then every node just runs the agent with this token. Is that right?
But how can I get it from the master via the CLI?
Rancher has an API, which enables you to interact with it remotely. What you require are called registrationTokens. Now, how to access them:
First, set up API tokens in your Rancher. Go to API -> Keys -> Add Account API Key and create the keys. If you can't find the buttons, your URL would be 192.168.0.100:8080/env/1a5/api/keys.
Now that you know the keys, from a remote host you can do something like this:
curl -u "${RANCHER_ACCESS_KEY}:${RANCHER_SECRET_KEY}" \
-X GET \
'http://192.168.0.100:8080/v2-beta/projects/1a5/registrationtokens'
Your result will be a JSON with required data:
{
  ...
  "data": [
    {
      "id": "1c3",
      "type": "registrationToken",
      "links": {
        ...
      },
      "actions": {
        ...
      },
      ...
      "command": "sudo docker run --rm --privileged -v /var/run/docker.sock:/var/run/docker.sock -v /var/lib/rancher:/var/lib/rancher rancher/agent:v1.2.2 http://192.168.0.100:8080/v1/scripts/AAAAAAAAAAAAAAAAAAAA:0000000000000:ZZZZZZZZZZZZZZZZZZZZZZZZZZ",
      ...
    }
  ],
  ...
}
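If you only want the ready-to-run agent command, you can pull it straight out of that response on the shell. A sketch assuming jq is installed on the host (the field names match the JSON above):
curl -s -u "${RANCHER_ACCESS_KEY}:${RANCHER_SECRET_KEY}" \
  'http://192.168.0.100:8080/v2-beta/projects/1a5/registrationtokens' \
  | jq -r '.data[0].command'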
I'm new to Docker. I have read the tutorial on the Docker remote API. Regarding creating a container, it shows me too many parameters to fill in. I want to know what is equivalent to this command:
docker run -d -p 5000:5000 --restart=always --name registry registry:2
I have no idea about it. Can anyone tell me? Thanks!
Original answer (July 2015):
That would be (not tested directly), as in this tutorial (provided the remote API is enabled):
First create the container:
curl -v -X POST -H "Content-Type: application/json" -d '{"Image": "registry:2"}' http://localhost:2376/containers/create?name=registry
Then start it:
curl -v -X POST -H "Content-Type: application/json" -d '{"PortBindings": { "5000/tcp": [{ "HostPort": "5000" }] }, "RestartPolicy": { "Name": "always" }}' http://localhost:2376/containers/registry/start?name=registry
Update February 2017, for docker 1.13+ see rocksteady's answer, using a similar idea but with the current engine/api/v1.26.
More or less just copying VonC's answer in order to update it to today's version of Docker (1.13) and the Docker remote API (v1.26).
What is different:
All the configuration needs to be done when the container is created, otherwise the following error message is returned when starting the container the way VonC did.
{"message":"starting container with non-empty request body was deprecated since v1.10 and removed in v1.12"}
First create the container: (including all the configuration)
curl -v -X POST -H "Content-Type: application/json" -d @docker.conf http://localhost:2376/containers/create?name=registry
The file docker.conf looks like this:
{
  "Image": "registry:2",
  "ExposedPorts": {
    "5000/tcp": {}
  },
  "HostConfig": {
    "PortBindings": {
      "5000/tcp": [
        {
          "HostPort": "5000"
        }
      ]
    },
    "RestartPolicy": {
      "Name": "always"
    },
    "AutoRemove": true
  }
}
Then start it (the name parameter is not necessary; the container is already named registry):
curl -v -X POST -H "Content-Type: application/json" http://localhost:2376/containers/registry/start
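To confirm the result through the same remote API, you can inspect the container afterwards (a sketch; GET /containers/<name>/json returns the container details, including its state):
curl -s http://localhost:2376/containers/registry/json
The "State" object in the response should report "Running": true once the start call has succeeded.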
Create a Docker container in Docker Engine v1.24
Execute the POST request:
curl -X POST -H "Content-Type: application/json" http://DOCKER_SERVER_HOST:DOCKER_PORT/v1.24/containers/create?name=containername
In the request body, you can specify JSON parameters like:
{
  "Hostname": "172.x.x.x",
  "Image": "docker-image-name",
  "Volumes": "",
  "Entrypoint": "",
  "Tty": true
}
This creates your Docker container.
Start the container
Execute the POST request:
curl -X POST http://DOCKER_SERVER_HOST:DOCKER_PORT/v1.24/containers/containername/start
Reference link - https://docs.docker.com/engine/api/v1.24/