How to log all queries made to elasticsearch container? - docker

I'm trying to debug my app. When I hit the production elasticsearch host through my python app, the results are returned. When I change it to localhost, it works when I hit it manually through the browser, but not through the app.
I'd like to log all queries that hit my elasticsearch container. I've tried env variables such as "DEBUG=TRUE" and "DEBUG=*", but no requests are logged (even when I hit it manually and results are returned).
Any idea how I'd do this?
Thanks

You can use the slow queries log with a really low threshold. See https://www.elastic.co/guide/en/elasticsearch/reference/current/index-modules-slowlog.html for more details on this feature. For example:
index.search.slowlog.threshold.query.debug: 0s
Using the cluster or index settings API, you can change these settings while the cluster is running.
curl -XPUT "http://localhost:9200/_all/_settings" -H "content-type: application/json" -d'
{
"index.search.slowlog.threshold.query.debug": "0s"
}'
There are even more settings you can use to log and monitor index, fetch or search duration.
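For example, here is a sketch that also enables the fetch and indexing slow logs at debug level (setting names are from the docs page linked above; 0s logs every request, so only use this while debugging, and raise the thresholds again - or set them to -1 to disable - afterwards):
curl -XPUT "http://localhost:9200/_all/_settings" -H "content-type: application/json" -d'
{
"index.search.slowlog.threshold.query.debug": "0s",
"index.search.slowlog.threshold.fetch.debug": "0s",
"index.indexing.slowlog.threshold.index.debug": "0s"
}'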

Related

Docker login to Gitea registry fails even though curl succeeds

I'm using Gitea (on Kubernetes, behind an Ingress) as a Docker image registry. On my network I have gitea.avril aliased to the IP where it's running. I recently found that my Kubernetes cluster was failing to pull images:
Failed to pull image "gitea.avril/scubbo/<image_name>:<tag>": rpc error: code = Unknown desc = failed to pull and unpack image "gitea.avril/scubbo/<image_name>:<tag>": failed to resolve reference "gitea.avril/scubbo/<image_name>:<tag>": failed to authorize: failed to fetch anonymous token: unexpected status: 530
While trying to debug this, I found that I am unable to login to the registry, even though curling with the same credentials succeeds:
$ curl -k -u "scubbo:$(cat /tmp/gitea-password)" https://gitea.avril/v2/_catalog
{"repositories":[...populated list...]}
# Tell docker login to treat `gitea.avril` as insecure, since certificate is provided by Kubernetes
$ cat /etc/docker/daemon.json
{
"insecure-registries": ["gitea.avril"]
}
$ docker login -u scubbo -p $(cat /tmp/gitea-password) https://gitea.avril
WARNING! Using --password via the CLI is insecure. Use --password-stdin.
Error response from daemon: Get "https://gitea.avril/v2/": received unexpected HTTP status: 530
The first request shows up as a 200 OK in the Gitea logs, the second as a 401 Unauthorized.
I get a similar error when I kubectl exec onto the Gitea container itself, install Docker, and try to docker login localhost:3000 - after an error indicating that server gave HTTP response to HTTPS client, it falls back to the http protocol and similarly reports a 530.
I've tried restarting Gitea with GITEA__log__LEVEL=Debug, but that didn't result in any extra logging. I've also tried creating a fresh user (in case I had some weirdness cached somewhere) and using that - same behaviour.
EDIT: after increasing the log level to Trace, I noticed that successful attempts to curl result in the following lines:
...rvices/auth/basic.go:67:Verify() [T] [638d16c4] Basic Authorization: Attempting login for: scubbo
...rvices/auth/basic.go:112:Verify() [T] [638d16c4] Basic Authorization: Attempting SignIn for scubbo
...rvices/auth/basic.go:125:Verify() [T] [638d16c4] Basic Authorization: Logged in user 1:scubbo
whereas attempts to docker login result in:
...es/container/auth.go:27:Verify() [T] [638d16d4] ParseAuthorizationToken: no token
This is the case even when doing docker login localhost:3000 from the Gitea container itself (that is - this is not due to some authentication getting dropped by the Kubernetes Ingress).
I'm not sure what could be causing this - I'll start up a fresh Gitea registry to compare.
EDIT: in this GitHub issue, the Gitea team pointed out that standard docker authentication includes creating a Bearer token which references the ROOT_URL, which explains this issue.
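A quick way to observe this from outside, assuming the standard Docker registry token flow (the header value below is illustrative, not captured from this setup): the WWW-Authenticate challenge the registry returns on /v2/ carries a realm URL derived from ROOT_URL, and docker login must be able to reach that realm.
curl -sk -I https://gitea.avril/v2/ | grep -i www-authenticate
# Illustrative output - the realm points at ROOT_URL, not necessarily the host you contacted:
# www-authenticate: Bearer realm="https://gitea.avril/v2/token",service="container_registry"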
Text below preserved for posterity:
...Huh. I have a fix, and I think it indicates some incorrect (or, at least, unexpected) behaviour; but in fairness it only comes about because I'm doing some pretty unexpected things as well...
TL;DR: attempting to docker login to Gitea via an alternative domain name can fail if the primary domain name is unavailable, apparently because, while doing so, Gitea itself makes a call to ROOT_URL rather than to localhost.
Background
Gitea has a configuration variable called ROOT_URL. This is, among other things, used to generate the copiable "HTTPS" links from repo pages. This is presumed to be the "main" URL on which users will access Gitea.
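For context, ROOT_URL is set in the [server] section of app.ini; going by the warning banner quoted in footnote [0] below, this instance's configuration would look something like:
[server]
ROOT_URL = https://gitea.avril/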
I use Cloudflared Tunnels to make some of my Kubernetes services (including Gitea) available externally (on <foo>.scubbo.org addresses) without opening ports to the outside world. Since Cloudflared tunnels do not automatically update DNS records when a new service is added, I have written a small tool[0] which can be run as an initContainer "before" restarting the Cloudflared tunnel, to refresh DNS[1].
Cold-start problem
However, now there is a cold-start problem:
(Unless I temporarily disable this initContainer) I can't start Cloudflared tunnels if Gitea is unavailable (because it's the source for the initContainer's image)
Gitea('s public address) will be unavailable until Cloudflared tunnels start up.
To get around this cold-start problem, in the Cloudflared initContainers definition, I reference the image by a Kubernetes Ingress name (which is DNS-aliased by my router) gitea.avril rather than by the public (Cloudflared tunnel) name gitea.scubbo.org. The cold-start startup sequence then becomes:
Cloudflared tries to start up, fails to find a registry at gitea.avril, and keeps retrying
Gitea (Pod and Ingress) start up
Cloudflared detects that gitea.avril is now responding, pulls the Cloudflared initContainer image, and successfully deploys
gitea.scubbo.org is now available (via Cloudflared)
So far, so good. Except that testing now indicates[2] that trying to docker login (or docker pull, or, presumably, many other docker commands) against a Gitea instance results in a call to the ROOT_URL domain - which, if Cloudflared isn't up yet, will result in an error[3].
So what?
My particular usage of this is clearly an edge case, and I could easily get around this in a number of ways (including moving my "Cloudflared tunnel startup" to a separately-initialized, only-privately-available registry). However, what this reduces to is that "docker API calls to a Gitea instance will fail if the ROOT_URL for the instance is unavailable", which seems like unexpected behaviour to me - if the API call can get through to the Gitea service in the first place, it should be able to succeed in calling itself?
However, I totally recognize that the complexity of fixing this (going through and replacing $ROOT_URL with localhost:$PORT throughout Gitea) might not be worth the value. I'll open an issue with the Gitea team, but I'd be perfectly content with a WILLNOTFIX.
Footnotes
[0]: Note - depending on when you follow that link, you might see a red warning banner indicating "Your ROOT_URL in app.ini is https://gitea.avril/ but you are visiting https://gitea.scubbo.org/scubbo/cloudflaredtunneldns". That's because of this very issue!
[1]: Note from the linked issue that the Cloudflared team indicate that this is unexpected usage - "We don't expect the origins to be dynamically added or removed services behind cloudflared".
[2]: I think this is new behaviour, as I'm reasonably certain that I've done a successful "cold start" before. However, I wouldn't swear to it.
[3]: After I've , the error is instead error parsing HTTP 404 response body: unexpected end of JSON input: "" rather than the 530-related errors I got before. This is probably a quirk of Cloudflared's caching or DNS behaviour. I'm working on a minimal reproducing example that circumvents Cloudflared.

How to use an ssh Api Key in Jenkins

I wanted to know if you can help me with this problem. I am currently using Newman to run a collection of test cases that I made in Postman, as a job in Jenkins. When I run the command:
newman run collection.json -e sendoment.json
It works - the collection runs in Jenkins - but it shows me this error:
connect ECONNREFUSED 127.0.0.1:8095
I know that I have to pass the ssh key to it so that it recognizes the port and gives me access, but I don't know what commands to use to send it.
I currently have an id_rsa.pub that I use to perform this action locally.
My question is: how can I send this file, or how can I make sure the ports will be there? I do not have privileges to enter the Management section, so I cannot add the variable as a plugin; I have seen this suggested in other blogs, both here and in Postman.
It is just a collection in json format with various calls to various EndPoints, where the states they have are validated.
To solve this, I had to create a method that logs in to the virtual machine that I have in Jenkins, since the port is blocked until a session is established. Two cookies are sent to perform the log-in action.
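If anyone else hits the same ECONNREFUSED: one approach is to open an SSH tunnel from the Jenkins agent to the machine that actually serves port 8095 before invoking Newman. This is only a sketch - user@target-host is a placeholder, and it assumes your id_rsa key is authorized on that machine:
# Forward local port 8095 to the target machine, backgrounded (-f), with no remote command (-N)
ssh -i ~/.ssh/id_rsa -f -N -L 8095:localhost:8095 user@target-host
newman run collection.json -e sendoment.json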

Test AWS SQS End Point

I have an AWS SQS queue, and for some reason my connections to the endpoint are being refused.
If I try:
curl -Is https://sqs.[REDACTED].amazonaws.com/[REDACTED]/[REDACTED] | head -1
I get an "HTTP/1.1 404 Not Found" error. I suspect this is the root cause of the issue I am having with accessing the queue from an external application. However, I have the conditions set really broadly:
How do I know the issue I am experiencing is not a permissions issue? Is there a way for me to test the queue is actually responsive?
If so, is there a way in CURL for me to get any or all data from a message to verify I am receiving content, and then build my application around my response?
You can use the following curl request:
curl -i https://sqs.{region}.amazonaws.com/{account}/{name}/\?Action=ReceiveMessage
If you want to use long-polling:
while true; do
curl -i https://sqs.{region}.amazonaws.com/{account}/{name}/\?Action=ReceiveMessage\&WaitTimeSeconds\=20
done
See the ReceiveMessage SQS API docs for all supported parameters.
Another alternative is to use the AWS CLI instead, which also works with authentication:
aws sqs receive-message --queue-url https://sqs.{region}.amazonaws.com/{account}/{name}
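If you want long polling via the CLI as well, the same WaitTimeSeconds behaviour is exposed as a flag:
aws sqs receive-message --queue-url https://sqs.{region}.amazonaws.com/{account}/{name} --wait-time-seconds 20 --max-number-of-messages 10
Unlike the raw curl request, the CLI signs the request with your configured credentials, so an error here points at the queue URL or permissions rather than at missing authentication.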

How to use Tika in server mode

On Tika's website it says (concerning tika-app-1.2.jar) it can be used in server mode. Does anyone know how to send documents and receive parsed text from this server once it is running?
Tika supports two "server" modes. The simpler and older one is the --server flag of Tika-App. The more functional, but also more recent, one is the JAX-RS JSR-311 server component, which is an additional jar.
The Tika-App Network Server is very simple to use. Simply start Tika-App with the --server flag and a --port ### flag telling it what port to listen on. Then connect to that port and send it a single file; you'll get back the HTML version. NetCat works well for this: something like java -jar tika-app.jar --server --port 12345 followed by nc 127.0.0.1 12345 < MyFileToExtract will get you back the HTML.
The JAX-RS JSR-311 server component supports a few different URLs for things like metadata, plain text, etc. You start the server with java -jar tika-server.jar, then make HTTP PUT calls to the appropriate URL with your input document, and you'll get the resource back. There are loads of details and examples (including using curl for testing) on the wiki page.
The Tika-App Network Server is fairly simple, only supports one mode (extract to HTML), and is generally used for testing, demos, prototyping, etc. The Tika JAXRS Server is a fully RESTful service which talks HTTP and exposes a wide range of Tika's modes. It's the generally recommended way these days to interface with Tika over the network and/or from non-Java stacks.
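For example, with the JAX-RS server running on its default port 9998, plain-text extraction is a single HTTP PUT to the /tika endpoint (the file name here is just a placeholder):
java -jar tika-server.jar
curl -X PUT --data-binary @MyFileToExtract http://localhost:9998/tika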
Just adding to Gagravarr's great answer.
When talking about Tika in server mode, it is important to differentiate between two versions which can otherwise cause confusion:
tika-app.jar has the --server --port 9998 options to start a simple server
tika-server.jar is a separate component using JAX-RS
The first option only provides text extraction and returns the content as HTML. Most likely, what you really want is the second option, which is a RESTful service exposing many more of Tika's features.
You can simply download the tika-server.jar from the Tika project site. Start the server using
java -jar tika-server-x.x.jar -h 0.0.0.0
The -h 0.0.0.0 (host) option makes the server listen for any incoming requests; without it, it would only listen for requests from localhost. You can also add the -p option to change the port, which otherwise defaults to 9998.
Then, once the server has started you can simply access it using your browser. It will list all available endpoints.
Finally, to extract metadata from a file, you can use cURL like this:
curl -T testWORD.doc http://example.com:9998/meta
This returns the metadata as key/value pairs, one per line. You can also have Tika return the results as JSON by adding the proper Accept header:
curl -H "Accept: application/json" -T testWORD.doc http://example.com:9998/meta
[Update 2015-01-19] Previously the comment said that tika-server.jar is not available as a download. Fixed that, since it actually does exist as a binary download.
To enhance Gagravarr's great answer:
If your document is fetched from a web server:
curl "http://myserver-domain/*path-to-doc*/doc-name.extension" | nc 127.0.0.1 12345
And if the document is protected by a password:
curl -u login:*password* "http://myserver-domain/*path-to-doc*/doc-name.extension" | nc 127.0.0.1 12345

Neo4j Multiple server instances Error 401

I am new to neo4j; I just followed the neo4j official manual to install two instances on one machine. My environment is Ubuntu 11.10. I successfully started the neo4j service and opened the website http://localhost:7474/webadmin/. But when I tried to run the "DELETE /db/data/cleandb/secret-key" command in its HTTP console, it returned error 401. Any idea about this?
Which version of neo4j are you using?
You have to configure two different ports for the two servers. I think you did this.
The clean-db-addon doesn't come out of the box; you have to download it, copy it into the plugins directory, and adjust the neo4j-server.properties config file:
org.neo4j.server.thirdparty_jaxrs_classes=org.neo4j.server.extension.test.delete=/cleandb
org.neo4j.server.thirdparty.delete.key=<please change secret-key>
Then you can call it for each of your servers with:
curl -X DELETE http://localhost:<port>/cleandb/secret-key
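As a sanity check, you can first verify that each instance is reachable on its own port; 7474 and 7475 here are just typical ports for a two-instance setup, so substitute your own:
curl http://localhost:7474/db/data/
curl http://localhost:7475/db/data/
If these respond but the DELETE still returns 401, the most likely cause is that the secret key in neo4j-server.properties doesn't match the one in your URL.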
