Elasticsearch: reached maximum index count for current plan - status:500 - ruby-on-rails

Is there a Heroku cli command that will allow us to purge Elasticsearch indices?
This is the error that's returned which is causing our application to be non-functional:
reached maximum index count for current plan - status:500
Upgrading our current plan is not a feasible option for us at this stage.
Please advise on how to resolve this.

You should be able to delete the indices you want with a direct curl call. First retrieve the URL of your ES cluster by running the heroku config:get command on the appropriate variable:
If you've installed the SearchBox addon:
SB_URL=$(heroku config:get SEARCHBOX_URL)
curl -XDELETE "$SB_URL/your_index"
If you've installed the Found addon:
FOUND_URL=$(heroku config:get FOUNDELASTICSEARCH_URL)
curl -XDELETE "$FOUND_URL/your_index"
Last but not least, if you've installed the Bonsai addon:
BONSAI_URL=$(heroku config:get BONSAI_URL)
curl -XDELETE "$BONSAI_URL/your_index"
To test this out:
# create one index => OK
> curl -XPOST http://<***_URL>/your_index_1
{"acknowledged":true}
# create second index => OK
> curl -XPOST http://<***_URL>/your_index_2
{"acknowledged":true}
# create one more index than your plan allows => NOT OK
> curl -XPOST http://<***_URL>/your_index_n
{"error":"ElasticsearchGenerationException[reached maximum index count for current plan]","status":500}
# delete first index => OK
> curl -XDELETE http://<***_URL>/your_index_1
{"acknowledged":true}
# create the n'th index again, now that there is room => OK
> curl -XPOST http://<***_URL>/your_index_n
{"acknowledged":true}

Shortly after posting this, I found a non-cli solution.
Log in to Heroku
Select the "Resources" tab
Select "SearchBox Elasticsearch" (this is what I have installed; yours may be different)
Select Dashboard > Indices
Select the trash icon next to the index you want to delete

Related

Elastic Search got 404 when requesting `entries`

I've installed Elasticsearch on my Ubuntu Docker container successfully, but when I try to run this:
curl -X GET "localhost:9200/entries?pretty"
I get a 404 error:
{
  "error" : {
    "root_cause" : [
      {
        "type" : "index_not_found_exception",
        "reason" : "no such index [entries]",
        "resource.type" : "index_or_alias",
        "resource.id" : "entries",
        "index_uuid" : "_na_",
        "index" : "entries"
      }
    ],
    "type" : "index_not_found_exception",
    "reason" : "no such index [entries]",
    "resource.type" : "index_or_alias",
    "resource.id" : "entries",
    "index_uuid" : "_na_",
    "index" : "entries"
  },
  "status" : 404
}
I've also restarted the elasticsearch service successfully, but it still doesn't work.
Feedbin needs Elasticsearch to run; it tried to request "http://localhost:9200/entries" but failed. I don't know what's wrong with my configuration...
curl -X GET "localhost:9200/entries?v&pretty" will list the mappings for the index named entries. Do you have that index in your cluster?
First check if your cluster is up and running using curl -X GET "localhost:9200?pretty". If this returns a response with a non-null cluster_uuid, it means your ES Cluster is up and running.
Next, check for the health of your ES cluster using curl -X GET "localhost:9200/_cluster/health?pretty". Does it show either green or yellow in status field?
Next, list all the indices using curl -X GET "localhost:9200/_cat/indices?pretty". This will show you if entries index exists or not.
Once you have that index, you can use curl -X GET "localhost:9200/entries/_search?pretty" to search documents. It will return the top 10 hits by default.
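Putting those checks together, a minimal diagnostic run might look like this (assuming Elasticsearch is listening on localhost:9200):
# 1. Is the node reachable at all?
curl -s -X GET "localhost:9200?pretty"
# 2. Is the cluster healthy (status green or yellow)?
curl -s -X GET "localhost:9200/_cluster/health?pretty"
# 3. Which indices exist? Look for 'entries' in this listing.
curl -s -X GET "localhost:9200/_cat/indices?v"
If entries does not show up in the last listing, the 404 simply means the index has not been created yet; it has to be created by the application (or manually) before the search request can succeed.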
Here are curl commands you can use to validate the Elasticsearch installation, cluster health, index list, and index info.
To check that Elasticsearch is installed and responding: curl -XGET localhost:9200/
To get information and stats about each index: curl http://localhost:9200/_stats
To get a list of all indices in your cluster: curl http://localhost:9200/_aliases?pretty=true
To check Elasticsearch cluster health: curl http://localhost:9200/_cat/health

Curl a URL list from a file and make it faster with parallel

Right now I'm using the following code:
while read num; do
  M=$(curl "myurl/$num")
  echo "$M"
done < s.txt
where s.txt contains a list (one per line) of parts of the URL.
Is it correct to assume that curl is running sequentially?
Or is it running in thread/jobs/multiple conn at a time?
I've found this online:
parallel -k curl -s "http://example.com/locations/city?limit=100\&offset={}" ::: $(seq 100 100 30000) > out.txt
The problem is that my sequence comes from a file or from a variable (one element per line) and I can't adapt that example to my needs.
I've not fully understood how to pass the list to parallel.
Should I save all the curl commands in a file and run it with parallel -a?
Regards,
parallel -j100 -k curl myurl/{} < s.txt
Consider spending an hour walking through man parallel_tutorial. Your command line will love you for it.
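For completeness, here are a couple of equivalent ways to feed the file to GNU parallel while keeping the output in input order (assuming s.txt holds one URL fragment per line and myurl/ is the base path, as in the question):
# read the arguments from the file explicitly with -a
parallel -a s.txt -j100 -k curl -s "myurl/{}" > out.txt
# or with ::::, which also takes its arguments from a file
parallel -j100 -k curl -s "myurl/{}" :::: s.txt > out.txt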

MALFORMED_REQUEST on PayPal execute payment

I've lost almost two whole days trying to solve PayPal related issues.
I'm trying to execute a payment after the user's approval, but I'm getting MALFORMED_REQUEST every time I send the curl request.
def execute
  # Get the access_token
  token = get_paypal_token(false)
  payer_id = params[:PayerID]
  payment_id = params[:paymentId]
  # Creating the data object
  stringJson = {:payer_id => "#{payer_id}"}
  # You can see that I hard-coded the payer_id to ensure the JSON is correct
  curlString = `curl -v -H "Authorization: Bearer #{token}" -H "Content-Type: application/json" -d '{"payer_id" : "QULQSFESGMCX2"}' https://api.sandbox.paypal.com/v1/payments/payment/#{payment_id}/execute/`
end
And the response is:
"{\"name\":\"MALFORMED_REQUEST\",\"message\":\"The request JSON is not well formed.\",\"information_link\":\"https://developer.paypal.com/webapps/developer/docs/api/#MALFORMED_REQUEST\",\"debug_id\":\"6431589715660\"}"
Looks like you are running it from Windows. I tried your command from my Windows machine and my test server received the JSON as '{payer_id. It is happening because of the single quotes.
I changed the single quotes into double quotes and it works fine for me:
-d "{\"payer_id\" : \"QULQSFESGMCX2\"}"
You can run the curl command from the console to see what happens, and once it works, move it into your Ruby code.
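If you prefer to keep shelling out to curl, another way to sidestep the quoting problem entirely is to put the JSON in a file and let curl read it with -d @file, so no shell escaping is involved at all. A rough sketch, where payload.json, <ACCESS_TOKEN> and <PAYMENT_ID> are placeholders you fill in yourself:
# payload.json contains exactly: {"payer_id":"QULQSFESGMCX2"}
curl -v -X POST \
  -H "Authorization: Bearer <ACCESS_TOKEN>" \
  -H "Content-Type: application/json" \
  -d @payload.json \
  "https://api.sandbox.paypal.com/v1/payments/payment/<PAYMENT_ID>/execute/"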

Jenkins REST API: get a build number of a job that just started

I'm starting the job by issuing this request:
POST /jenkins/job/jobName/build
but it returns nothing. I want to get the build number of the build that has just started.
Although late, posting an answer so it can benefit other seekers:
We can get the latest/current build number using the following API:
<JENKINS_URI>/job/<JOB_NAME>/lastBuild/buildNumber
This returns the currently executing build, or the last completed build if nothing is running.
Example:
curl -X GET <JENKINS_URI>/job/<JOB_NAME>/lastBuild/buildNumber --user <user>:<key>
Output: 20
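If you need more than the number (for example the URL or the result), the same build has a JSON endpoint; a small sketch assuming jq is installed:
# fetch details of the last build and pull out the build number
curl -s --user <user>:<key> "<JENKINS_URI>/job/<JOB_NAME>/lastBuild/api/json" | jq '.number'
Keep in mind that lastBuild can be racy if other builds may start in between, which is why the queue-based approach below is more reliable for identifying the build you just triggered.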
This is how I do it: first trigger the job with the REST API call. From that call's response headers, get the queue location. Then use the queue location to fetch the build number. The build number is not assigned immediately, which is why there is a while loop.
JENKINS_EMAIL=<Email>
JENKINS_TOKEN=<API Key>
JENKINS_URL=<Jenkins Server URL>
JENKINS_JOB=<JOB>
# Trigger Job and get queue location
location=$(curl -X POST -s -I -u $JENKINS_EMAIL:$JENKINS_TOKEN "${JENKINS_URL}${JENKINS_JOB}/buildWithParameters?pass=ok" | grep location | awk '{ print $NF }')
location2=${location//[$'\t\r\n']}
# Wait till build number is generated
while true; do
  buildnumber=$(curl -X GET -s -u $JENKINS_EMAIL:$JENKINS_TOKEN "${location2}api/json" | jq '.executable.number')
  if [[ $buildnumber != "null" ]]; then
    echo "Build started. Build number is: $buildnumber"
    break
  else
    echo "Still in queue"
    sleep 1
  fi
done
Jenkins - How to access BUILD_NUMBER environment variable
Use ${BUILD_NUMBER} to get the current build number.
Extensive answers are already available on Stack Overflow; kindly search for them before asking.
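For example, in an "Execute shell" build step (a trivial sketch):
# prints the number of the build that is currently running this step
echo "Current build number is ${BUILD_NUMBER}"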

Jenkins - Posting results to an external monitoring job is adding garbage to the build job log

I have an external monitoring job to which I'm pushing the result of another job with curl, based on this link:
Monitoring external jobs
After I create the job, I just need to run a curl command with the body encoded in hex against the specified URL; a build is then created and the output is added to it. What I get instead is part of my output in clear text and the rest in weird characters, like so:
Started
Asking akamai to purge this urls:
http://xxx/sites/all/modules/custom/uk.png http://aaaaaasites/all/modules/custom/flags/jp.png
<html><head><title>401 Unauthorized</title> </h�VC��&�G����CV�WF��&��VC�������R&R��BWF��&��VBF�66W72F�B&W6�W&6S�����&�G�����F����F�RW&�F �6�V6�7FGW2�bF�R&WVW7B�2��F�RF��RF�v�B�2��6�Ɩ�r&6�w&�V�B��"F�6�V6�7FGW2�bF�RF�6�W#�v�F��rf�"���F�W&vRF��6O request please keep in mind this is an estimated time
Waiting for another 60 seconds
Asking akamai to purge this urls:
...
..
..
This is how I'm doing it :
export output=`cat msg.out|xxd -c 256 -ps`
curl -k -X POST -d "<run><log encoding=\"hexBinary\">$output</log><result>0</result> <duration>2000</duration></run>" https://$jenkinsuser:$jenkinspass@127.0.0.1/jenkins/job/akamai_purge_results/postBuildResult -H '.crumb:c775f3aa15464563456346e'
If I cat that file it all looks fine, and even if I edit it with vi I can't see any problem with it.
Do you guys have any idea how to fix this?
Could it be a problem with the hex encoding? (I tried hex encode/decode pages with the result of xxd and they look fine.)
Thanks.
I had the same issue, and stumbled across this: http://blog.markfeeney.com/2010/01/hexbinary-encoding.html
From that page, you can get the encoding you need via this command:
echo "Hello world" | hexdump -v -e '1/1 "%02x"'
48656c6c6f20776f726c640a
An excerpt from the explanation:
So what the hell is that? -v means don't suppress any duplicate data
in the output, and -e is the format string. hexdump's very particular
about the formatting of the -e argument; so careful with the quotes.
The 1/1 means for every 1 byte encountered in the input, apply the
following formatting pattern 1 time. Despite this sounding like the
default behaviour in the man page, the 1/1 is not optional. /1 also
works, but the 1/1 is very very slightly more readable, IMO. The
"%02x" is just a standard-issue printf-style format code.
So in your case you would do this (assign the variable on its own line first, then run curl; with a prefix assignment on the same line, $OUTPUT would be expanded before the assignment takes effect and the log would end up empty):
OUTPUT=$(cat msg.out | hexdump -v -e '1/1 "%02x"')
curl -k -X POST -d "<run><log encoding=\"hexBinary\">$OUTPUT</log><result>0</result> <duration>2000</duration></run>" https://$jenkinsuser:$jenkinspass@127.0.0.1/jenkins/job/akamai_purge_results/postBuildResult -H '.crumb:c775f3aa15464563456346e'
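To double-check that the encoding round-trips cleanly before posting it to Jenkins, you can decode the hex back and compare it with the original (a quick sanity check, assuming xxd is available):
# encode, decode again, and compare with the original file
hexdump -v -e '1/1 "%02x"' msg.out | xxd -r -p | diff - msg.out && echo "encoding round-trips cleanly"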
