I've installed Elasticsearch on my Ubuntu Docker container and the installation succeeded, but when I try to run this:
curl -X GET "localhost:9200/entries?pretty"
I get a 404 error:
{
  "error" : {
    "root_cause" : [
      {
        "type" : "index_not_found_exception",
        "reason" : "no such index [entries]",
        "resource.type" : "index_or_alias",
        "resource.id" : "entries",
        "index_uuid" : "_na_",
        "index" : "entries"
      }
    ],
    "type" : "index_not_found_exception",
    "reason" : "no such index [entries]",
    "resource.type" : "index_or_alias",
    "resource.id" : "entries",
    "index_uuid" : "_na_",
    "index" : "entries"
  },
  "status" : 404
}
I've also restarted the Elasticsearch service successfully, but it still doesn't work.
Feedbin needs Elasticsearch to run; it tried to request "http://localhost:9200/entries" but failed. I don't know what's wrong with my configuration...
curl -X GET "localhost:9200/entries?v&pretty" will list the mappings for the index named entries. Do you have that index in your cluster?
First, check that your cluster is up and running using curl -X GET "localhost:9200?pretty". If this returns a response with a non-null cluster_uuid, your ES cluster is up and running.
Next, check the health of your ES cluster using curl -X GET "localhost:9200/_cluster/health?pretty". Does it show either green or yellow in the status field?
Next, list all the indices using curl -X GET "localhost:9200/_cat/indices?pretty". This will show you whether the entries index exists.
Once you have that index, you can use curl -X GET "localhost:9200/entries/_search?pretty" to search documents. It will return the top 10 documents.
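If the entries index turns out to be missing, here is a minimal sketch of creating it by hand and verifying that it shows up; note that Feedbin normally creates and maps its own entries index, so treat this as a diagnostic step rather than the real fix:

# create an empty "entries" index with default settings (no explicit mappings)
curl -X PUT "localhost:9200/entries?pretty"

# confirm it now appears in the index list
curl -X GET "localhost:9200/_cat/indices?v&pretty"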
Here are some curl commands you can use to validate the Elasticsearch installation, cluster health, index list, and index info.
To check if Elasticsearch is installed successfully: curl -XGET localhost:9200/
To get a list of indices and statistics about each index: curl http://localhost:9200/_stats
To get a list of all indices in your cluster: curl http://localhost:9200/_aliases?pretty=true
To check Elasticsearch cluster health: curl http://localhost:9200/_cat/health
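For instance, the first command should come back with something like the response below; the values here are placeholders, and what matters is that cluster_uuid is non-null and the familiar tagline appears:

$ curl -XGET localhost:9200/
{
  "name" : "your-node-name",
  "cluster_name" : "elasticsearch",
  "cluster_uuid" : "some-non-null-uuid",
  "version" : { ... },
  "tagline" : "You Know, for Search"
}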
Related
I have defined a file named play.rego:
package play

default hello = false

hello {
    m := input.message
    m == "world"
}
I also have a file called input.json:
{ "message": "world"}
I now want to use the OPA server to evaluate the policy against input data:
opa run --server
I then registered the policy using the command below:
curl -X PUT http://localhost:8181/v1/policies/play --data-binary @play.rego
and then I run the command below to evaluate the policy on the query:
curl -X POST http://localhost:8181/v1/policies/v1/data/play --data-binary '{"message": "world"}'
But the server always responds with nothing.
Can anyone help me fix this?
The URL of the second request is wrong (should not contain v1/policies), and the v1 API requires you to wrap the input document inside an input attribute. Try:
curl -X POST http://localhost:8181/v1/data/play --data-binary '{"input":{"message": "world"}}'
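For reference, a quick end-to-end sketch; note the @ when registering the policy (it tells curl to read the file) and the Content-Type header, which is good practice for the data API:

# register the policy from the file
curl -X PUT http://localhost:8181/v1/policies/play --data-binary @play.rego

# evaluate the play package with the input document wrapped in "input"
curl -X POST http://localhost:8181/v1/data/play \
  -H 'Content-Type: application/json' \
  --data-binary '{"input": {"message": "world"}}'

# expected response, roughly:
# {"result": {"hello": true}}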
I've deployed the OPA Docker plugin as per the instructions, and everything was fine until I tried to create custom Docker API permissions for docker exec.
I've added the following section to the authz.rego file:
allow {
    user_id := input.Headers["Authz-User"]
    users[user_id].readOnly
    input.path[0] == "/v1.41/containers/busybox/exec"
    input.Method == "POST"
}
But it still gives me an error when I try to run the following bash command as the Bob test user (as per the instructions): docker exec -it busybox sh
journalctl -u docker.service shows the following error:
level=error msg="AuthZRequest for POST /v1.41/containers/busybox/exec returned error: authorization denied by plugin openpolicyagent/opa-docker-authz-v2:0.4: request rejected by administrative policy"
The funny thing is that when I comment out the input.path line it works as a full RW user, so the rule itself works, but the strict match on the API path does not. Maybe I'm specifying it the wrong way?
I've tried different variations like:
input.path == ["/v1.41/containers/busybox/exec"]
input.path = ["/v1.41/containers/busybox/exec"]
input.path = ["/v1.41*"]
input.path = ["/v1.41/*"]
input.path = ["/v1.41%"]
input.path = ["/v1.41/%"]
I would also appreciate advice on how to allow exec operations for any container, not only the specified one.
Thanks in advance!
Looking at the input map provided to OPA, you'll find input.Path, input.PathPlain and input.PathArr:
input := map[string]interface{}{
    "Headers":    r.RequestHeaders,
    "Path":       r.RequestURI,
    "PathPlain":  u.Path,
    "PathArr":    strings.Split(u.Path, "/"),
    "Query":      u.Query(),
    "Method":     r.RequestMethod,
    "Body":       body,
    "User":       r.User,
    "AuthMethod": r.UserAuthNMethod,
}
There's no lowercase input.path available, but using any of the other alternatives should work.
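As a rough sketch (keeping the users check from your rule and assuming Docker API v1.41 as in your logs), you could match on input.PathArr, which for "/v1.41/containers/busybox/exec" splits into ["", "v1.41", "containers", "busybox", "exec"]. Leaving the container-name segment unconstrained also answers your second question:

allow {
    user_id := input.Headers["Authz-User"]
    users[user_id].readOnly
    # segment 2 is "containers" and segment 4 is "exec"; segment 3 (the
    # container name) is deliberately not checked, so any container is allowed
    input.PathArr[2] == "containers"
    input.PathArr[4] == "exec"
    input.Method == "POST"
}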
I am new to Icinga2 (using version 2.4.0) and I am trying to execute some API calls, but I ran into a problem when I tried to create a service manually.
This is the command I execute to create a service called api_dummy_service_1 for the api_dummy_host_1 host:
curl -u $ICINGA2_API_USER:$ICINGA2_API_PASSWORD \
-H 'Accept: application/json' -X PUT \
-k "https://$ICINGA2_HOST:$ICINGA2_API_PORT/v1/objects/services/api_dummy_host_1!api_dummy_service_1" \
-d '{ "templates": [ "generic-service" ], "attrs": { "display_name": "api_dummy_service_1", "check_command" : "dns", "vars.dns_lookup": "google-public-dns-a.google.com.", "vars.dns_expected_answer": "8.8.8.8", "host_name": "api_dummy_host_1" } }' | python -m json.tool
When I execute it, the following error message appears:
-bash: !api_dummy_service_1: event not found
I have examined the Icinga logs, activated debug mode on Icinga, and tried to search for information about this on the internet, with no results.
Can anyone help me please? Thanks in advance!
Issue fixed! After testing in more detail, we found that the problem was related to the URL we use to connect to the Icinga2 API: the ! character must be escaped.
I changed ! to %21 and the command works.
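For example, the same request with the separator percent-encoded and everything else unchanged (in an interactive shell, set +H would also disable history expansion for the ! character):

curl -u $ICINGA2_API_USER:$ICINGA2_API_PASSWORD \
     -H 'Accept: application/json' -X PUT \
     -k "https://$ICINGA2_HOST:$ICINGA2_API_PORT/v1/objects/services/api_dummy_host_1%21api_dummy_service_1" \
     -d '{ "templates": [ "generic-service" ], "attrs": { "display_name": "api_dummy_service_1", "check_command" : "dns", "vars.dns_lookup": "google-public-dns-a.google.com.", "vars.dns_expected_answer": "8.8.8.8", "host_name": "api_dummy_host_1" } }' | python -m json.tool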
I'm trying to add documents to a specific core that I created earlier with this command:
./solr create -c kba_oct_2011
Then I add the schema using the curl utility:
curl http://localhost:8983/solr/kba_oct_2011/schema -X POST -H 'Content-type:application/json' --data-binary '{
  "add-field": {
    "name": "title",
    "type": "text_general",
    "indexed": true,
    "stored": true
  },
  "add-field": {
    "name": "description",
    "type": "text_general",
    "indexed": true,
    "stored": true
  },
  "add-field": {
    "name": "date",
    "type": "date",
    "indexed": true,
    "stored": true
  }
}'
Finally, I try to add the documents using the post command:
./post -c kba_oct_2011 /home/IPA/IPA_ws/Dataset_news_KBA/2011/10_October/*.xml
In the directory /home/IPA/IPA_ws/Dataset_news_KBA/2011/10_October/ there are 72,420 files, and when I execute the command I get this error:
-bash: ./post: Argument list too long
Can someone help me?
Thanks!
This occurs because your expanded argument list exceeds the command-line length limit.
Use xargs to provide the argument list to your post command.
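A minimal sketch, assuming the standard bin/post script and the paths from the question: find streams the file names, and xargs invokes post in batches small enough to stay under the argument-length limit.

# feed the 72,420 file names to post in batches instead of one huge command line
find /home/IPA/IPA_ws/Dataset_news_KBA/2011/10_October/ -name '*.xml' -print0 \
  | xargs -0 ./post -c kba_oct_2011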
Is there a Heroku CLI command that will allow us to purge Elasticsearch indices?
This is the error that's returned which is causing our application to be non-functional:
reached maximum index count for current plan - status:500
Upgrading our current plan is not a feasible option for us at this stage.
Please advise on how to resolve this.
You should be able to simply delete the indices you want via a direct curl call. First retrieve the URL of your ES cluster using the heroku config:get command with the appropriate variable:
If you've installed the SearchBox addon:
SB_URL=$(heroku config:get SEARCHBOX_URL)
curl -XDELETE "$SB_URL/your_index"
If you've installed the Found addon:
FOUND_URL=$(heroku config:get FOUNDELASTICSEARCH_URL)
curl -XDELETE "$FOUND_URL/your_index"
Last but not least, if you've installed the Bonsai addon:
BONSAI_URL=$(heroku config:get BONSAI_URL)
curl -XDELETE "$BONSAI_URL/your_index"
To test this out:
# create one index => OK
> curl -XPOST http://<***_URL>/your_index_1
{"acknowledged":true}
# create second index => OK
> curl -XPOST http://<***_URL>/your_index_2
{"acknowledged":true}
# create one more index than your plan allows => NOT OK
> curl -XPOST http://<***_URL>/your_index_n
{"error":"ElasticsearchGenerationException[reached maximum index count for current plan]","status":500}
# delete first index => OK
> curl -XDELETE http://<***_URL>/your_index_1
{"acknowledged":true}
# create that index again => OK
> curl -XPOST http://<***_URL>/your_index_n
{"acknowledged":true}
Shortly after posting this, I found a non-CLI solution:
Log into Heroku
Select the "Resources" tab
Select "SearchBox Elasticsearch" (this is what I have installed; yours may be different)
Select Dashboard > Indices
Select the trash icon next to the index you want to delete