Gelf logging with docker-compose - docker

I've googled the f out of this.
here is the log driver config for my docker container in the compose file
driver: gelf
options:
  gelf-address: "http://graylog:12201"
I created a GELF HTTP input in the admin console.
I know that graylog is accessible at 12201, because if I ssh into a container and run
curl -X POST -H 'Content-Type: application/json' -d '{ "version": "1.1", "host": "example.org", "short_message": "A short message", "level": 5, "_some_info": "test" }' 'http://graylog:12201/gelf'
Then I can see the log message.
The problem is that I seemingly have to add /gelf to the address, but docker complains if I try. No curl command works without it, and I can't get TCP or UDP to work at all either. So... what am I doing wrong?
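For what it's worth, Docker's gelf log driver speaks the GELF wire protocol over UDP (and, on newer engines, TCP), not GELF HTTP, which is why there is no way to express the /gelf path in gelf-address. A sketch of the setup that usually works, assuming a "GELF UDP" input is created in Graylog on port 12201 (the logging: key and udp:// scheme here are my assumptions, not from the question):

```yaml
logging:
  driver: gelf
  options:
    # Note: this address is resolved by the Docker daemon on the host,
    # not inside the compose network, so the hostname must be reachable
    # from the host (e.g. localhost:12201 with a published port).
    gelf-address: "udp://graylog:12201"
```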


Why no_proxy must be specified for CURL to work in this scenario?

Inside my virtual machine, I have the following docker-compose.yml file:
services:
  nginx:
    image: "nginx:1.23.1-alpine"
    container_name: parse-nginx
    ports:
      - "80:80"
  mongo-0:
    image: "mongo:5.0.6"
    container_name: parse-mongo-0
    volumes:
      - ./mongo-0/data:/data/db
      - ./mongo-0/config:/data/config
  server-0:
    image: "parseplatform/parse-server:5.2.4"
    container_name: parse-server-0
    ports:
      - "1337:1337"
    volumes:
      - ./server-0/config-vol/configuration.json:/parse-server/config/configuration.json
    command: "/parse-server/config/configuration.json"
The configuration.json file specified for server-0 is as follows:
{
  "appId": "APPLICATION_ID_00",
  "masterKey": "MASTER_KEY_00",
  "readOnlyMasterKey": "only",
  "databaseURI": "mongodb://mongo-0/test"
}
After using docker compose up, I execute the following command from the VM:
curl -X POST -H "X-Parse-Application-Id: APPLICATION_ID_00" -H "Content-Type: application/json" -d '{"score":1000,"playerName":"Sean Plott","cheatMode":false}' http://localhost:1337/parse/classes/GameScore
The output is:
{"objectId":"yeHHiu01IV","createdAt":"2022-08-25T02:36:06.054Z"}
I use the following command to get inside the nginx container:
docker exec -it parse-nginx sh
Pinging parse-server-0 shows that it does resolve to a proper IP address. I then run a modified version of the curl command above, replacing localhost with that host name:
curl -X POST -H "X-Parse-Application-Id: APPLICATION_ID_00" -H "Content-Type: application/json" -d '{"score":1000,"playerName":"Sean Plott","cheatMode":false}' http://parse-server-0:1337/parse/classes/GameScore
It gives me a 504 error like this:
...
<title>504 DNS look up failed</title>
</head>
<body><div class="message-container">
<div class="logo"></div>
<h1>504 DNS look up failed</h1>
<p>The webserver reported that an error occurred while trying to access the website. Please return to the previous page.</p>
...
However if I use no_proxy as follows, it works:
no_proxy="parse-server-0" curl -X POST -H "X-Parse-Application-Id: APPLICATION_ID_00" -H "X-Parse-Master-Key: MASTER_KEY_00" -H "Content-Type: application/json" -d '{"score":1000,"playerName":"Sean Plott","cheatMode":false}' http://parse-server-0:1337/parse/classes/GameScore
The output is again something like this:
{"objectId":"ICTZrQQ305","createdAt":"2022-08-25T02:18:11.565Z"}
I am very perplexed by this. Clearly, parse-server-0 is reachable with ping. How can it then throw a 504 error without no_proxy? The parse-nginx container is using default settings and configuration. I did not set up any proxy. I am using it to test the curl command from another container to parse-mongo-0. Any help would be greatly appreciated.
The contents of /etc/resolv.conf is:
nameserver 127.0.0.11
options edns0 trust-ad ndots:0
Running echo $HTTP_PROXY inside parse-nginx returns:
http://10.10.10.10:8080
This value is null inside the VM.
Your proxy server doesn't appear to be running in this docker network. So when the request goes to that proxy server, it will not query the docker DNS on this network to resolve the other container names.
If your application isn't making requests outside of the docker network, you can remove the proxy settings. Otherwise, you'll want to set no_proxy for the other docker containers you will be accessing.
Please check the value of echo $http_proxy. Note the lowercase here. If this value is set, curl is configured to use the proxy. You're getting a 504 during DNS resolution most probably because your parse-nginx container isn't able to reach the IP 10.10.10.10. Specifying no_proxy tells curl to ignore the http_proxy env var (overriding it) and make the request without any proxy.
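To illustrate why prefixing the command with no_proxy bypasses the proxy: no_proxy is a comma-separated list of host suffixes that curl exempts from proxying. The helper below is purely hypothetical (it is not part of curl, and real matching has more rules), just a rough sketch of the suffix check:

```shell
# Hypothetical sketch of curl's no_proxy suffix matching.
no_proxy="parse-server-0,localhost"

uses_proxy() {
  # prints "no" if $1 matches a no_proxy entry, "yes" otherwise
  host="$1"
  old_ifs="$IFS"; IFS=','
  for pat in $no_proxy; do
    case "$host" in
      *"$pat") IFS="$old_ifs"; echo "no"; return ;;
    esac
  done
  IFS="$old_ifs"
  echo "yes"
}

uses_proxy parse-server-0   # → no
uses_proxy example.com      # → yes
```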
Inside my VM, this is the contents of the ~/.docker/config.json file:
{
  "proxies": {
    "default": {
      "httpProxy": "http://10.10.10.10:8080",
      "httpsProxy": "http://10.10.10.10:8080"
    }
  }
}
This was implemented a while back as an ad hoc fix for some network issues. A security certificate was later implemented, and I completely forgot about the fix. Clearing the ~/.docker/config.json file and redoing docker compose up fixes the issue. I no longer need no_proxy to make curl work. Everything is as it should be now. Thank you so much for all the help.
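For anyone hitting the same thing, the fix boils down to removing (or emptying) the client config and recreating the containers. A sketch, demonstrated on a local copy so nothing real is touched; point CONFIG at ~/.docker/config.json for the actual file:

```shell
# Demo on a local copy; use CONFIG="$HOME/.docker/config.json" for real.
CONFIG=./config.json
printf '%s\n' '{"proxies":{"default":{"httpProxy":"http://10.10.10.10:8080"}}}' > "$CONFIG"

# Keep a backup and drop the config; Docker treats a missing file as empty.
mv "$CONFIG" "$CONFIG.bak"

# Then recreate the containers so the injected proxy env vars disappear:
# docker compose up -d --force-recreate
```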

call bitcoind rpc api over docker

I started my node container with this flags:
daemon=1
printtoconsole=1
testnet=1
rpcport=9332
rpcallowip=0.0.0.0/0
rpcuser=user
rpcpassword=password
rpcbind=0.0.0.0
server=1
I opened port in my docker-compose :
node:
  image: bitcoin-sv
  container_name: 'node'
  restart: always
  ports:
    - '9332:9332'
I can call methods from bitcoin-cli in my container
docker exec -it node bash
root@9196d074e4d8:/opt/bitcoin-sv# ./bitcoin-cli getinfo
But I cannot call it from curl
curl --user user --data-binary '{"jsonrpc": "1.0", "id":"curltest", "method": "getinfo, "params": ["", 0.1, "donation", "seans outpost"] }' -H 'content-type: text/plain;' http://127.0.0.1:9332
Enter host password for user 'user':
curl: (52) Empty reply from server
How can I call it from curl? Maybe I have to call the CLI?
Not sure what your issue is, but the first step would be to run the curl inside the container in order to verify that the HTTP interface is working properly. So you should try this:
docker exec -it node bash
root@9196d074e4d8:/opt/bitcoin-sv# curl --user user --data-binary '{"jsonrpc": "1.0", "id": "curltest", "method": "getinfo", "params": ["", 0.1, "donation", "seans outpost"] }' -H 'content-type: text/plain;' localhost:9332
Once you are sure the interface is working inside the container, you can move forward and try it from the host.
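One thing worth checking before blaming the network: the payload in the question is not valid JSON (the closing quote after getinfo is missing), and bitcoind will reject it. A sketch that validates the body locally before sending (python3 is assumed to be available for the check; the curl line is commented out since it needs a reachable node):

```shell
# A well-formed getinfo call -- note the closing quote and the empty
# params list (getinfo takes no arguments).
payload='{"jsonrpc": "1.0", "id": "curltest", "method": "getinfo", "params": []}'

# Sanity-check that the body parses before sending it:
echo "$payload" | python3 -m json.tool > /dev/null && echo "payload ok"

# Then, with the node reachable:
# curl --user user:password --data-binary "$payload" \
#   -H 'content-type: text/plain;' http://127.0.0.1:9332/
```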

FORBIDDEN/12/index read-only / allow delete (api) problem

When importing items into my Rails app I keep getting the above error being raised by SearchKick on behalf of Elasticsearch.
I'm running Elasticsearch in Docker. I start my app by running docker-compose up. I've tried running the command recommended above but I just get "No such file or directory" returned. Any ideas?
I do have port 9200 exposed to outside but nothing seems to help. Any ideas?
Indeed, running curl -XPUT -H "Content-Type: application/json" http://localhost:9200/_all/_settings -d '{"index.blocks.read_only_allow_delete": null}' as suggested by @Nishant Saini resolves the very similar issue I just ran into.
I had hit the disk watermark limits on my machine.
Use the following command on Linux:
curl -s -H 'Content-Type: application/json' -XPUT 'http://localhost:9200/_all/_settings?pretty' -d '{
  "index": {
    "blocks": {"read_only_allow_delete": "false"}
  }
}'
The same command in Kibana's Dev Tools format:
PUT _all/_settings
{
  "index": {
    "blocks": {"read_only_allow_delete": "false"}
  }
}
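Since the read-only block is applied when the flood-stage disk watermark is exceeded, it will come straight back unless you also free disk space or raise the thresholds. A sketch of a cluster-settings body (the percentages are example values, adjust to taste) that can be PUT to http://localhost:9200/_cluster/settings:

```json
{
  "transient": {
    "cluster.routing.allocation.disk.watermark.low": "90%",
    "cluster.routing.allocation.disk.watermark.high": "95%",
    "cluster.routing.allocation.disk.watermark.flood_stage": "97%"
  }
}
```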

Logstash with fluent input codec not working

I have been using logstash with gelf already and wanted to check out fluent input (mainly due to TCP based docker log-driver for fluent as opposed to UDP only gelf). My configuration for testing is this:
input {
  gelf {
    port => 12345
  }
  tcp {
    codec => fluent
    port => 23456
  }
}
filter {
}
output {
  stdout { codec => rubydebug { metadata => true } }
}
I can send gelf logs using:
docker run -it \
--log-driver gelf \
--log-opt gelf-address=udp://localhost:12345 \
--log-opt tag=gelf-test \
ubuntu:16.04 /bin/bash -c 'echo $(date -u +"%Y-%m-%dT%H:%M:%SZ") Hello gelf'
However the fluent version does not work:
docker run -it \
--log-driver fluentd \
--log-opt fluentd-address=localhost:23456 \
--log-opt tag=fluent-test \
ubuntu:16.04 /bin/bash -c 'echo $(date -u +"%Y-%m-%dT%H:%M:%SZ") Hello fluent'
I can verify that logstash is receiving input:
echo 'Hello TCP' | nc localhost 23456
An error occurred. Closing connection {:client=>"172.17.0.1:42012", :exception=>#, :backtrace=>["org/jruby/RubyTime.java:1073:in `at'", "/opt/logstash/vendor/bundle/jruby/1.9/gems/logstash-core-event-2.4.0-java/lib/logstash/timestamp.rb:32:in `at'", "/opt/logstash/vendor/bundle/jruby/1.9/gems/logstash-codec-fluent-2.0.4-java/lib/logstash/codecs/fluent.rb:41:in `decode'", "org/msgpack/jruby/MessagePackLibrary.java:195:in `each'", "/opt/logstash/vendor/bundle/jruby/1.9/gems/logstash-codec-fluent-2.0.4-java/lib/logstash/codecs/fluent.rb:40:in `decode'", "/opt/logstash/vendor/bundle/jruby/1.9/gems/logstash-input-tcp-3.0.6/lib/logstash/inputs/tcp.rb:153:in `handle_socket'", "/opt/logstash/vendor/bundle/jruby/1.9/gems/logstash-input-tcp-3.0.6/lib/logstash/inputs/tcp.rb:143:in `server_connection_thread'"], :level=>:error}
I also disabled the fluent codec and sent fluent logs; logstash then warns as expected and parses the fluent msgpack into the message field of a regular TCP event:
Received an event that has a different character encoding than you
configured.
{:text=>"\x94\xABfluent-test\xD2X¢鄣log\xD9\\"2017-03-10T12:58:17Z
Hello
fluent\r\xACcontainer_id\xD9#9cbd13eb83a02a1a4d4f83ff063d4e40b4419b7dcbcef960e4689495caa5c132\xAEcontainer_name\xAF/ecstatic_kilby\xA6source\xA6stdout\xC0",
:expected_charset=>"UTF-8", :level=>:warn}
{
       "message" => "\\x94\\xABfluent-test\\xD2X¢鄣log\\xD9\\\"2017-03-10T12:58:17Z Hello fluent\\r\\xACcontainer_id\\xD9#9cbd13eb83a02a1a4d4f83ff063d4e40b4419b7dcbcef960e4689495caa5c132\\xAEcontainer_name\\xAF/ecstatic_kilby\\xA6source\\xA6stdout\\xC0",
      "@version" => "1",
    "@timestamp" => "2017-03-10T12:58:18.069Z",
          "host" => "172.17.0.1",
          "port" => 42016
}
I have no other ideas, has anybody run into this issue or have any ideas on how to debug further?
Would you please try a Fluentd instance? That way it would be easier to determine where the issue is. At a quick glance, it looks like the Logstash fluent codec is not working properly.
Unfortunately you can't send messages from fluentd directly to logstash using the existing plugins (it's a shame really).
If you wish to do so, use this open-source plugin, which sends the data directly to a logstash tcp input (no need for the fluent codec) and also supports sending data via the secured SSL/TLS protocol.
Seen on this thread.

How to use docker remote api to create container?

I'm new to docker. I have read the tutorial on the docker remote API. Regarding creating containers, it shows me too many params to fill in. I want to know what is equivalent to this command:
docker run -d -p 5000:5000 --restart=always --name registry registry:2
I have no idea about it. Can anyone tell me? Thanks!
Original answer (July 2015):
That would be (not tested directly), as in this tutorial (provided the remote API is enabled):
First create the container:
curl -v -X POST -H "Content-Type: application/json" -d '{"Image": "registry:2"}' http://localhost:2376/containers/create?name=registry
Then start it:
curl -v -X POST -H "Content-Type: application/json" -d '{"PortBindings": {"5000/tcp": [{"HostPort": "5000"}]}, "RestartPolicy": {"Name": "always"}}' http://localhost:2376/containers/registry/start?name=registry
Update February 2017, for docker 1.13+ see rocksteady's answer, using a similar idea but with the current engine/api/v1.26.
More or less just copying VonC's answer in order to update it to today's version of docker (1.13) and the docker remote API (v1.26).
What is different:
All the configuration needs to be done when the container is created; otherwise, the following error message is returned when starting the container the way VonC did:
{"message":"starting container with non-empty request body was deprecated since v1.10 and removed in v1.12"}
First create the container: (including all the configuration)
curl -v -X POST -H "Content-Type: application/json" -d @docker.conf http://localhost:2376/containers/create?name=registry
The file docker.conf looks like this:
{
  "Image": "registry:2",
  "ExposedPorts": {
    "5000/tcp": {}
  },
  "HostConfig": {
    "PortBindings": {
      "5000/tcp": [
        {
          "HostPort": "5000"
        }
      ]
    },
    "RestartPolicy": {
      "Name": "always"
    },
    "AutoRemove": true
  }
}
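Since a stray quote or missing comma in docker.conf produces an unhelpful API error, it's worth running the file through a JSON parser before POSTing it. A sketch (python3 is assumed to be available) that recreates the file and checks it parses:

```shell
# Recreate docker.conf and check that it parses before POSTing it.
cat > docker.conf <<'EOF'
{
  "Image": "registry:2",
  "ExposedPorts": { "5000/tcp": {} },
  "HostConfig": {
    "PortBindings": { "5000/tcp": [{ "HostPort": "5000" }] },
    "RestartPolicy": { "Name": "always" },
    "AutoRemove": true
  }
}
EOF
python3 -m json.tool docker.conf > /dev/null && echo "docker.conf ok"
```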
Then start it (the name parameter is not necessary here; the container is already named registry):
curl -v -X POST -H "Content-Type: application/json" http://localhost:2376/containers/registry/start
Create docker container in Docker Engine v1.24
Execute the post request -
curl -X POST -H "Content-Type: application/json" http://DOCKER_SERVER_HOST:DOCKER_PORT/v1.24/containers/create?name=containername
In the request body, you can specify the JSON parameters like
{
"Hostname": "172.x.x.x",
"Image": "docker-image-name",
"Volumes": "",
"Entrypoint": "",
"Tty": true
}
It creates your docker container
Start the container
Execute the POST request
curl -X POST http://DOCKER_SERVER_HOST:DOCKER_PORT/v1.24/containers/containername/start
Reference link - https://docs.docker.com/engine/api/v1.24/
