When importing items into my Rails app, I keep getting the above error raised by SearchKick on behalf of Elasticsearch.
I'm running Elasticsearch in a Docker container and start my app with docker-compose up. I've tried running the command recommended above, but I just get "No such file or directory" back. I do have port 9200 exposed to the outside, but nothing seems to help. Any ideas?
Indeed, running curl -XPUT -H "Content-Type: application/json" http://localhost:9200/_all/_settings -d '{"index.blocks.read_only_allow_delete": null}' as suggested by @Nishant Saini resolves the very similar issue I just ran into: I had hit the disk watermark limits on my machine.
Use the following command on Linux:
curl -s -H 'Content-Type: application/json' -XPUT 'http://localhost:9200/_all/_settings?pretty' -d '{
  "index": {
    "blocks": { "read_only_allow_delete": "false" }
  }
}'
The same command in Kibana's Dev Tools format:
PUT _all/_settings
{
  "index": {
    "blocks": { "read_only_allow_delete": "false" }
  }
}
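Since the underlying cause is usually the flood-stage disk watermark being exceeded, you can also raise the thresholds themselves. A hedged sketch (the flood_stage setting assumes Elasticsearch 6.x or later; the percentages are illustrative, not recommendations):

# transient cluster settings: raise the disk watermarks until the next restart
curl -s -H 'Content-Type: application/json' -XPUT 'http://localhost:9200/_cluster/settings' -d '{
  "transient": {
    "cluster.routing.allocation.disk.watermark.low": "90%",
    "cluster.routing.allocation.disk.watermark.high": "95%",
    "cluster.routing.allocation.disk.watermark.flood_stage": "97%"
  }
}'

Freeing disk space is the real fix, though; once usage drops back below the flood stage, clearing the read_only_allow_delete block as above lets writes resume.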
Inside my virtual machine, I have the following docker-compose.yml file:
services:
  nginx:
    image: "nginx:1.23.1-alpine"
    container_name: parse-nginx
    ports:
      - "80:80"
  mongo-0:
    image: "mongo:5.0.6"
    container_name: parse-mongo-0
    volumes:
      - ./mongo-0/data:/data/db
      - ./mongo-0/config:/data/config
  server-0:
    image: "parseplatform/parse-server:5.2.4"
    container_name: parse-server-0
    ports:
      - "1337:1337"
    volumes:
      - ./server-0/config-vol/configuration.json:/parse-server/config/configuration.json
    command: "/parse-server/config/configuration.json"
The configuration.json file specified for server-0 is as follows:
{
  "appId": "APPLICATION_ID_00",
  "masterKey": "MASTER_KEY_00",
  "readOnlyMasterKey": "only",
  "databaseURI": "mongodb://mongo-0/test"
}
After using docker compose up, I execute the following command from the VM:
curl -X POST -H "X-Parse-Application-Id: APPLICATION_ID_00" -H "Content-Type: application/json" -d '{"score":1000,"playerName":"Sean Plott","cheatMode":false}' http://localhost:1337/parse/classes/GameScore
The output is:
{"objectId":"yeHHiu01IV","createdAt":"2022-08-25T02:36:06.054Z"}
I use the following command to get inside the nginx container:
docker exec -it parse-nginx sh
Pinging parse-server-0 from inside the container shows that it resolves to a proper IP address. I then run a modified version of the curl command above, replacing localhost with that hostname:
curl -X POST -H "X-Parse-Application-Id: APPLICATION_ID_00" -H "Content-Type: application/json" -d '{"score":1000,"playerName":"Sean Plott","cheatMode":false}' http://parse-server-0:1337/parse/classes/GameScore
It gives me a 504 error like this:
...
<title>504 DNS look up failed</title>
</head>
<body><div class="message-container">
<div class="logo"></div>
<h1>504 DNS look up failed</h1>
<p>The webserver reported that an error occurred while trying to access the website. Please return to the previous page.</p>
...
However, if I use no_proxy as follows, it works:
no_proxy="parse-server-0" curl -X POST -H "X-Parse-Application-Id: APPLICATION_ID_00" -H "X-Parse-Master-Key: MASTER_KEY_00" -H "Content-Type: application/json" -d '{"score":1000,"playerName":"Sean Plott","cheatMode":false}' http://parse-server-0:1337/parse/classes/GameScore
The output is again something like this:
{"objectId":"ICTZrQQ305","createdAt":"2022-08-25T02:18:11.565Z"}
I am very perplexed by this. Clearly, parse-server-0 is reachable with ping. How can it then throw a 504 error unless no_proxy is set? The parse-nginx container is using default settings and configuration; I have not set up any proxy. I am only using it to test the curl command from another container to parse-mongo-0. Any help would be greatly appreciated.
The contents of /etc/resolv.conf is:
nameserver 127.0.0.11
options edns0 trust-ad ndots:0
Running echo $HTTP_PROXY inside parse-nginx returns:
http://10.10.10.10:8080
This variable is empty inside the VM itself.
Your proxy server doesn't appear to be running in this docker network. So when the request goes to that proxy server, it will not query the docker DNS on this network to resolve the other container names.
If your application isn't making requests outside of the docker network, you can remove the proxy settings. Otherwise, you'll want to set no_proxy for the other docker containers you will be accessing.
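If the proxy is genuinely needed for traffic leaving the VM, one option is to bake no_proxy into the container environment so the in-network hostnames bypass it. A sketch extending the nginx service from the compose file above (the exact hostname list is an assumption):

services:
  nginx:
    image: "nginx:1.23.1-alpine"
    container_name: parse-nginx
    environment:
      # hostnames that should bypass the proxy; docker's embedded DNS resolves them
      - no_proxy=parse-server-0,parse-mongo-0,localhost,127.0.0.1
    ports:
      - "80:80"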
Please check the value of echo $http_proxy. Note the lowercase here: curl reads the lowercase variable, so if it is set, curl is configured to use the proxy. You're getting a 504 during DNS resolution most probably because your parse-nginx container isn't able to reach the proxy at 10.10.10.10. Specifying no_proxy tells curl to ignore the http_proxy env var (overriding it) and make the request without any proxy.
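A quick way to see every proxy-related variable the container actually has (a sketch using the container name from the question):

# list all proxy-related environment variables inside the nginx container
docker exec parse-nginx sh -c 'env | grep -i proxy'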
Inside my VM, this is the contents of the ~/.docker/config.json file:
{
  "proxies": {
    "default": {
      "httpProxy": "http://10.10.10.10:8080",
      "httpsProxy": "http://10.10.10.10:8080"
    }
  }
}
This was implemented a while back as an ad hoc fix for some network issues; a security certificate was later put in place, and I completely forgot about the fix. Clearing the ~/.docker/config.json file and redoing docker compose up fixes the issue. I no longer need no_proxy to make curl work. Everything is as it should be now. Thank you so much for all the help.
I can't manage to push an image to a private registry using the Docker API. I have read everything I could find and tried everything, with no luck.
I tried:
curl -X POST -H "X-Registry-Auth:XXXXXXXXXXXXXXX" http://dockerapiurl:2375/images/registryurl/python/push?tag=6
OR
curl -X POST -H 'X-Registry-Auth:{"username": "xxxxxx","password": "xxxxx", "serveraddress": "xxxx.url.net", "auth": ""}' http://dockerapiurl:2375/images/registryurl/python/push?tag=6
I always get the same error:
{"errorDetail":{"message":"errors:\ndenied: requested access to the resource is denied\nunauthorized: authentication required\n"},"error":"errors:\ndenied: requested access to the resource is denied\nunauthorized: authentication required\n"}
If I use docker push in CLI mode everything works, what am I doing wrong?
Thanks!!
It needs to be encoded in Base64; try this:
XRA=`echo "{ \"username\": \"xxxxxx\", \"password\": \"xxxxxx\", \"email\": \"youmail@example.org\", \"serveraddress\": \"xxxxxx\" }" | base64 --wrap=0`
curl -X POST -d "" -H "X-Registry-Auth: $XRA" http://dockerapiurl:2375/images/registryurl/python/push?tag=6
The end result should look like this:
curl -X POST -d "" -H "X-Registry-Auth: eyAidXNlcm5hbWUiOiAieHh4eHh4IiwgInBhc3N3b3JkIjogInh4eHh4eCIsICJlbWFpbCI6ICJ5b3VtYWlsQGV4YW1wbGUub3JnIiB9Cg==" http://dockerapiurl:2375/images/registryurl/python/push?tag=6
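Note that echo appends a trailing newline, which ends up inside the encoded blob (the Cg== at the end above decodes to a newline); the daemon tolerates it, but a printf variant avoids it. An equivalent sketch with the same placeholder credentials:

# encode the auth config without echo's trailing newline
XRA=$(printf '%s' '{"username": "xxxxxx", "password": "xxxxxx", "email": "youmail@example.org", "serveraddress": "xxxxxx"}' | base64 --wrap=0)
curl -X POST -d "" -H "X-Registry-Auth: $XRA" http://dockerapiurl:2375/images/registryurl/python/push?tag=6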
I am trying to run a simple Apache Beam pipeline with the DirectRunner that reads from a Pub/Sub subscription and writes the messages to disk.
The pipeline works fine when I run it against GCP, however when I try to run it against my local Pub/Sub emulator, it doesn't seem to be doing anything.
I am using a custom Options class that extends the org.apache.beam.sdk.io.gcp.pubsub.PubsubOptions class.
public interface Options extends PubsubOptions {

    @Description("Pub/Sub subscription to read the input from")
    @Required
    ValueProvider<String> getInputSubscription();

    void setInputSubscription(ValueProvider<String> valueProvider);
}
The pipeline is quite simple:
pipeline
    .apply("Read Pub/Sub Messages", PubsubIO.readMessagesWithAttributes()
        .fromSubscription(options.getInputSubscription()))
    .apply("Add a fixed window", Window.into(FixedWindows.of(Duration.standardSeconds(WINDOW_SIZE))))
    .apply("Convert Pub/Sub To String", new PubSubMessageToString())
    .apply("Write Pub/Sub messages to local disk", new WriteOneFilePerWindow());
The pipeline is executed with the following options:
mvn compile exec:java \
-Dexec.mainClass=DefaultPipeline \
-Dexec.cleanupDaemonThreads=false \
-Dexec.args=" \
--project=my-project \
--inputSubscription=projects/my-project/subscriptions/my-subscription \
--pubsubRootUrl=http://127.0.0.1:8681 \
--runner=DirectRunner"
I am using this Pub/Sub emulator docker image and executing it with the following command:
docker run --rm -ti -p 8681:8681 -e PUBSUB_PROJECT1=my-project,topic:my-subscription marcelcorso/gcloud-pubsub-emulator:latest
Is there more configuration required to make this work?
It turns out that an Apache Beam pipeline is unable to read from a local Pub/Sub emulator if the GOOGLE_APPLICATION_CREDENTIALS environment variable is set.
Once I removed this environment variable which was pointing to a GCP service account, the pipeline worked seamlessly with the local Pub/Sub emulator.
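If you still need the variable for other work, you can strip it from just the pipeline invocation instead of unsetting it globally. A sketch reusing the mvn command from the question (env -u assumes GNU coreutils):

# run the pipeline with GOOGLE_APPLICATION_CREDENTIALS removed from its environment
env -u GOOGLE_APPLICATION_CREDENTIALS mvn compile exec:java \
  -Dexec.mainClass=DefaultPipeline \
  -Dexec.cleanupDaemonThreads=false \
  -Dexec.args=" \
  --project=my-project \
  --inputSubscription=projects/my-project/subscriptions/my-subscription \
  --pubsubRootUrl=http://127.0.0.1:8681 \
  --runner=DirectRunner"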
You can troubleshoot the local emulator by issuing manual HTTP requests to it (via curl), like so:
$ curl -d '{"messages": [{"data": "c3Vwc3VwCg=="}]}' -H "Content-Type: application/json" -X POST localhost:8681/v1/projects/my-project/topics/topic:publish
{
"messageIds": ["5"]
}
$
$ curl -d '{"returnImmediately":true, "maxMessages":1}' -H "Content-Type: application/json" -X POST localhost:8681/v1/projects/my-project/subscriptions/my-subscription:pull
{
"receivedMessages": [{
"ackId": "projects/my-project/subscriptions/my-subscription:9",
"message": {
"data": "c3Vwc3VwCg==",
"messageId": "5",
"publishTime": "2019-04-30T17:26:09Z"
}
}]
}
$
Or by pointing the gcloud command-line tool at it:
$ CLOUDSDK_API_ENDPOINT_OVERRIDES_PUBSUB=localhost:8681 gcloud pubsub topics list
Also, note that when the emulator comes up, it creates the topic and subscription from scratch, so there are no messages on them. If your pipeline expects to immediately pull messages on the subscription, that would explain why it seems “stuck”. Note that when you run the pipeline at GCP, the topic and subscription you use there may already have messages on them.
I am using Icinga Version 2.4.2 to monitor services on several hosts. I would like to be able to place certain hosts in maintenance mode for a set amount of time using a cli tool or rest API instead of the Web UI.
Is this possible, and if so, what tool or API should I use?
If I cannot do this through a remote tool or API, what command should I use on the server or client to place clients in maintenance mode?
Update: it seems the REST API has a solution. This set of permissions works:
object ApiUser "root" {
  password = "foobar"
  permissions = [ "console", "objects/query/Host", "objects/query/Service", "actions/schedule-downtime", "actions/remove-downtime" ]
}
Then the following allows me to make and remove downtimes:
curl -k -s -u root:foobar -H 'Accept: application/json' -X POST "https://localhost:5665/v1/actions/schedule-downtime?filter=host.name==%22${TARGET}%22&type=Host" -d '{ "start_time": "1528239116", "end_time": "1528325561", "duration": 1000, "author": "root", "comment": "downtime on $TARGET" }' | jq .
curl -k -s -u root:foobar -H 'Accept: application/json' -X POST "https://localhost:5665/v1/actions/remove-downtime?filter=host.name==%22${TARGET}%22&type=Host" | jq .
Right now the only issue I am having is how to pass variables in for the start and end times. Attempting this keeps resulting in the following error:
{
  "status": "Invalid request body: Error: lexical error: invalid char in json text.\n { \"start_time\": $current_time,\n (right here) ------^\n\n",
  "error": 400
}
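That lexical error means the shell never expanded $current_time: inside single quotes, variables are passed through literally, so the JSON body reaches Icinga containing the raw text $current_time. A hedged sketch of the fix, double-quoting the body and escaping the inner quotes (TARGET and the timestamps are placeholder values; date -d assumes GNU date):

# build the timestamps as shell variables
current_time=$(date +%s)
end_time=$(date -d '+2 hours' +%s)
TARGET="myhost"

# double quotes let the shell expand the variables inside the JSON body
curl -k -s -u root:foobar -H 'Accept: application/json' -X POST \
  "https://localhost:5665/v1/actions/schedule-downtime?filter=host.name==%22${TARGET}%22&type=Host" \
  -d "{ \"start_time\": \"${current_time}\", \"end_time\": \"${end_time}\", \"duration\": 1000, \"author\": \"root\", \"comment\": \"downtime on ${TARGET}\" }" | jq .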
I am trying to call my F5 Big IP REST API to update some VIP configurations, for example I want to update the VIP description using this command:
curl -s -k --tlsv1.2 -u admin:password -H "Content-Type: application/json" -X PUT https://ManagmentIP/mgmt/tm/ltm/virtual/~MyPool~MyVIP_887 {"description":"THIS IS JUST A TEST"}
I am getting this error:
{"code":400,"message":"0107028c:3: The source (::%10) and destination (10.62.185.3%10) addresses for virtual server (/MyPool/MyVIP_887) must be be the same type (IPv4 or IPv6).","errorStack":[],"apiError":3}
My F5 Big IP version: BIG-IP 12.1.3 Build 0.0.378 Final
Am I missing something?
The answer is taken from F5 DevCentral:
You have to use -d 'data', where data is the JSON to send. Note that you need to quote the entire JSON blob, and each "name":"value" pair must be quoted. When you have nested quotes, make sure you escape them with a backslash (\).
Refer to the cookbook if it helps.
So something like:
curl -sku admin:password -H "Content-Type: application/json" -X PATCH https://ManagmentIP/mgmt/tm/ltm/virtual/~MyPool~MyVIP_887 -d '{"description": "Hello World!"}'