Nginx reverse proxy to a Node Docker container not working

I set up a Docker container running a Node.js server that receives arguments from the URL path. I can communicate with the Docker instance directly when I run a curl command (I get an error message, but this is expected).
I'm trying to reach the Docker instance from the outside through my Nginx server, but I'm having some problems.
Indeed, when I go to the access URL, I get a 404 error. My configuration file looks like this.
My concern is with the "location" part of my configuration, but I don't really see how to solve the problem. If you have any leads, I'll take them :)
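For reference, a minimal sketch of what such a location block usually looks like for a Node container (the path and port are placeholders, since the original config isn't shown; this assumes the container publishes port 3000 on the host):
location /app/ {
    proxy_pass http://127.0.0.1:3000/;
    proxy_set_header Host $host;
    proxy_set_header X-Real-IP $remote_addr;
    proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
}
One thing worth checking: the trailing slash on proxy_pass matters. With it, nginx strips the /app/ prefix before forwarding; without it, the full /app/... path is passed through to the Node server, and a route mismatch there is a classic cause of 404s in this setup.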

Related

Docker login to Gitea registry fails even though curl succeeds

I'm using Gitea (on Kubernetes, behind an Ingress) as a Docker image registry. On my network I have gitea.avril aliased to the IP where it's running. I recently found that my Kubernetes cluster was failing to pull images:
Failed to pull image "gitea.avril/scubbo/<image_name>:<tag>": rpc error: code = Unknown desc = failed to pull and unpack image "gitea.avril/scubbo/<image_name>:<tag>": failed to resolve reference "gitea.avril/scubbo/<image_name>:<tag>": failed to authorize: failed to fetch anonymous token: unexpected status: 530
While trying to debug this, I found that I am unable to login to the registry, even though curling with the same credentials succeeds:
$ curl -k -u "scubbo:$(cat /tmp/gitea-password)" https://gitea.avril/v2/_catalog
{"repositories":[...populated list...]}
# Tell docker login to treat `gitea.avril` as insecure, since certificate is provided by Kubernetes
$ cat /etc/docker/daemon.json
{
  "insecure-registries": ["gitea.avril"]
}
$ docker login -u scubbo -p $(cat /tmp/gitea-password) https://gitea.avril
WARNING! Using --password via the CLI is insecure. Use --password-stdin.
Error response from daemon: Get "https://gitea.avril/v2/": received unexpected HTTP status: 530
The first request shows up as a 200 OK in the Gitea logs, the second as a 401 Unauthorized.
I get a similar error when I kubectl exec onto the Gitea container itself, install Docker, and try to docker login localhost:3000 - after an error indicating that the server gave an HTTP response to an HTTPS client, it falls back to the http protocol and similarly reports a 530.
I've tried restarting Gitea with GITEA__log__LEVEL=Debug, but that didn't result in any extra logging. I've also tried creating a fresh user (in case I had some weirdness cached somewhere) and using that - same behaviour.
EDIT: after increasing log level to Trace, I noticed that successful attempts to curl result in the following lines:
...rvices/auth/basic.go:67:Verify() [T] [638d16c4] Basic Authorization: Attempting login for: scubbo
...rvices/auth/basic.go:112:Verify() [T] [638d16c4] Basic Authorization: Attempting SignIn for scubbo
...rvices/auth/basic.go:125:Verify() [T] [638d16c4] Basic Authorization: Logged in user 1:scubbo
whereas attempts to docker login result in:
...es/container/auth.go:27:Verify() [T] [638d16d4] ParseAuthorizationToken: no token
This is the case even when doing docker login localhost:3000 from the Gitea container itself (that is - this is not due to some authentication getting dropped by the Kubernetes Ingress).
I'm not sure what could be causing this - I'll start up a fresh Gitea registry to compare.
EDIT: in this GitHub issue, the Gitea team pointed out that standard docker authentication includes creating a Bearer token which references the ROOT_URL, which explains this issue.
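A hedged illustration of that flow (the header values are examples, not taken from an actual Gitea response):
$ curl -sik https://gitea.avril/v2/ | grep -i www-authenticate
Www-Authenticate: Bearer realm="https://<ROOT_URL>/v2/token",service="...",scope="..."
docker login then fetches its Bearer token from the realm URL, which is built from ROOT_URL - so the login fails whenever that domain is unreachable, even though the /v2/ endpoint itself responded.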
Text below preserved for posterity:
...Huh. I have a fix, and I think it indicates some incorrect (or, at least, unexpected) behaviour; but in fairness it only comes about because I'm doing some pretty unexpected things as well...
TL;DR: attempting to docker login to Gitea from an alternative domain name can result in an error if the primary domain name is unavailable; apparently because, while doing so, Gitea itself makes a call to ROOT_URL rather than localhost.
Background
Gitea has a configuration variable called ROOT_URL. This is, among other things, used to generate the copiable "HTTPS" links from repo pages. This is presumed to be the "main" URL on which users will access Gitea.
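For reference, ROOT_URL lives in Gitea's app.ini - a minimal sketch, with a placeholder domain:
[server]
ROOT_URL = https://gitea.example.com/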
I use Cloudflared Tunnels to make some of my Kubernetes services (including Gitea) available externally (on <foo>.scubbo.org addresses) without opening ports to the outside world. Since Cloudflared tunnels do not automatically update DNS records when a new service is added, I have written a small tool[0] which can be run as an initContainer "before" restarting the Cloudflared tunnel, to refresh DNS[1].
Cold-start problem
However, now there is a cold-start problem:
(Unless I temporarily disable this initContainer) I can't start Cloudflared tunnels if Gitea is unavailable (because it's the source for the initContainer's image)
Gitea('s public address) will be unavailable until Cloudflared tunnels start up.
To get around this cold-start problem, in the Cloudflared initContainers definition I reference the image by a Kubernetes Ingress name, gitea.avril (which is DNS-aliased by my router), rather than by the public (Cloudflared tunnel) name gitea.scubbo.org - see the sketch after the list below. The cold-start startup sequence then becomes:
Cloudflared tries to start up, fails to find a registry at gitea.avril, continues to attempt
Gitea (Pod and Ingress) start up
Cloudflared detects that gitea.avril is now responding, pulls the Cloudflared initContainer image, and successfully deploys
gitea.scubbo.org is now available (via Cloudflared)
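A hypothetical sketch of that image reference (the container name and tag are placeholders, not taken from the actual manifest):
initContainers:
  - name: refresh-tunnel-dns
    image: gitea.avril/scubbo/cloudflaredtunneldns:latest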
So far, so good. Except that testing now indicates[2] that trying to docker login (or docker pull, or presumably many other docker commands) against a Gitea instance results in a call to the ROOT_URL domain - which, if Cloudflared isn't up yet, will result in an error[3].
So what?
My particular usage of this is clearly an edge case, and I could easily get around this in a number of ways (including moving my "Cloudflared tunnel startup" to a separately-initialized, only-privately-available registry). However, what this reduces to is that "docker API calls to a Gitea instance will fail if the ROOT_URL for the instance is unavailable", which seems like unexpected behaviour to me - if the API call can get through to the Gitea service in the first place, it should be able to succeed in calling itself?
However, I totally recognize that the complexity of fixing this (going through and replacing $ROOT_URL with localhost:$PORT throughout Gitea) might not be worth the value. I'll open an issue with the Gitea team, but I'd be perfectly content with a WILLNOTFIX.
Footnotes
[0]: Note - depending on when you follow that link, you might see a red warning banner indicating "Your ROOT_URL in app.ini is https://gitea.avril/ but you are visiting https://gitea.scubbo.org/scubbo/cloudflaredtunneldns". That's because of this very issue!
[1]: Note from the linked issue that the Cloudflared team indicate that this is unexpected usage - "We don't expect the origins to be dynamically added or removed services behind cloudflared".
[2]: I think this is new behaviour, as I'm reasonably certain that I've done a successful "cold start" before. However, I wouldn't swear to it.
[3]: After I've , the error is instead error parsing HTTP 404 response body: unexpected end of JSON input: "" rather than the 530-related errors I got before. This is probably a quirk of Cloudflared's caching or DNS behaviour. I'm working on a minimal reproducing example that circumvents Cloudflared.

Can't send HTTP Request from inside a docker container

I have a Java application packaged as a JAR. It runs fine on my machine, meaning it can send and receive HTTP requests to and from an API endpoint (let's call this endpoint example.com/api/).
I then built a Docker image of this JAR application and tried to run the image as a container from Docker Desktop. But then I got this error:
(screenshot of the error)
It seems like my application can't reach the URL from inside the Docker container. I tried to set the proxy in Settings -> Resources -> Proxies -> Manual proxies configuration, and entered my company proxy, since I'm inside my company network. But it still doesn't work.
I tried to google this problem but almost nothing shows up (and what does show up has little correlation with my problem). Does anyone know what the problem might be? What should I do?
First, check whether your container is able to communicate with the endpoint: ping or curl it from the container shell. If you are behind a proxy, set the environment variables in the container:
export http_proxy=http://server-ip:port
export https_proxy=https://server-ip:port
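Alternatively, set them when starting the container instead of exporting them inside a shell (a sketch; the proxy address and image name are placeholders):
docker run -e http_proxy=http://proxy.example.com:8080 \
           -e https_proxy=http://proxy.example.com:8080 \
           my-java-app
Note that a Java application does not automatically honor these environment variables; you may also need JVM flags such as -Dhttp.proxyHost=proxy.example.com -Dhttp.proxyPort=8080 (and the https.proxyHost/https.proxyPort equivalents) when launching the JAR.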

How to redirect apache public url to docker container?

I have a web API project located inside a Docker container. A self-signed certificate is set up and attached to Apache. When I visit https://server_ip/ I get the Apache test page saying that everything works correctly. Docker without Apache also works correctly: when I request http://server_ip/controller_name I get the correct response. How can I make user requests to https://server_ip/controller_name get forwarded to the Docker container?
Edit: Help me please :(
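A minimal sketch of the usual mod_proxy approach, assuming the container's web API is published on host port 8080 (ports, paths and certificate locations are placeholders):
<VirtualHost *:443>
    SSLEngine on
    SSLCertificateFile    /etc/ssl/certs/selfsigned.crt
    SSLCertificateKeyFile /etc/ssl/private/selfsigned.key

    ProxyPreserveHost On
    ProxyPass        /controller_name http://127.0.0.1:8080/controller_name
    ProxyPassReverse /controller_name http://127.0.0.1:8080/controller_name
</VirtualHost>
This requires mod_proxy, mod_proxy_http and mod_ssl to be enabled (a2enmod proxy proxy_http ssl on Debian/Ubuntu).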

Getting HTTP ERROR 404 with Jenkins

I am getting the below error when trying to access a Jenkins pipeline URL. I tried clearing the browser cache, tried different browsers, etc., but no luck. The same pipeline URL works fine for other users, but not for me. Any ideas why it is throwing a 404 error for me? Many thanks!
HTTP ERROR 404
Problem accessing /job/jenkins/job/test/. Reason:
Not Found
Powered by Jetty:// 9.4.z-SNAPSHOT
After logging in as an administrator, use the URL http://localhost:8080.
It initially takes you to a URL with jenkins in its name, which will not work; the URL you want to access is just http://localhost:8080.
Also, if you have a different port bound, you can try calling the URL as http://[ip]:[port]/jenkins
If you get an error like the one mentioned above, you should access Jenkins through the URL "http://localhost:8081/jenkins/", not just "http://localhost:8081".
By the way, my port is 8081 because my 8080 port is already in use.
Have a good day!
It is probably one of these reasons:
You do not have access to the job.
You do have access to the job but you are not logged in. Try logging in to Jenkins in another window, check "remember me on this computer", then open that URL.
You are trying to access it from another server which is not whitelisted by the Jenkins master server, i.e. it is not allowed access.
These are the best guesses I could come up with. If these do not work, then someone needs to manually check the URL you are entering and other environment-related issues.
There is a common mistake that many people make (when running jenkins.war from CMD): please ensure that your Tomcat server is up and running locally.
Try restarting your Jenkins service with: $ sudo service jenkins restart
I faced the same issue and identified that JIRA and Jenkins were installed on the same port, 8080. The Jenkins service was starting first, so JIRA was not working. I edited the jenkins.xml file to use port 8081 and restarted the services; it then worked fine.
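For reference, on a Windows service install the port is set in the <arguments> element of jenkins.xml - a sketch; your exact arguments may differ:
<arguments>-Xrs -Xmx256m -jar "%BASE%\jenkins.war" --httpPort=8081</arguments>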
When I ran jenkins.war from CMD, I faced the same issue. When you run jenkins.war from CMD, Jenkins is available at localhost:8080. But if you run the startup.bat file, then the path you have set - say, localhost:8080/jenkins - will work.
If you are using a hook, this error can occur; it is a known issue with Git showing a 404 error. The way around the problem is to use the NIC IP address instead of "localhost".
I used a Docker container to start Jenkins locally for test purposes.
Here is the command, per the official documentation (https://hub.docker.com/_/jenkins?tab=description): docker run -p 8080:8080 -p 50000:50000 jenkins
After starting the container, I browsed to http://localhost:8080 and got:
HTTP ERROR 404
Problem accessing /job/jenkins/job/test/. Reason:
Not Found
Powered by Jetty:// 9.4.z-SNAPSHOT
I just removed the exposing of the JNLP port (50000), so the command to start Docker was: docker run -p 8080:8080 jenkins
Now I was able to browse the application at http://localhost:8080 without the Not Found error.
Thanks

nginx inside a docker container doesn't add access-control-allow-origin header as per the conf file

tl;dr - how do I explicitly add a specific header to an nginx response when running nginx inside a Docker container?
I have deployed the ELK stack inside a Docker container on RHEL 7.1 using the sebp/elk:latest image. Apart from the Kibana graphs, I also want to render my own scatterplots that I have developed. I am rendering those pages through a separate nginx webserver that I install and run in the same Docker image. This is because Kibana 4 (in the sebp image) doesn't give you the freedom to choose another web server the way Kibana 3 did, and as far as I can tell I can't edit the URLs/pages rendered by Kibana 4, since it uses its own built-in non-nginx webserver. Now, the issue is: when I deploy my scatterplots to the nginx root location and retrieve them from the browser, I get the error below.
XMLHttpRequest cannot load http:///_search?size=500&. No 'Access-Control-Allow-Origin' header is present on the requested resource. Origin 'http://IP-of-my-server' is therefore not allowed access.
I had faced this issue while running ELK without Docker, and this link helped me - http://enable-cors.org/server_nginx.html
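For reference, the approach from that link boils down to add_header directives in the nginx config - a minimal sketch, with a wildcard origin you would want to tighten:
location / {
    add_header 'Access-Control-Allow-Origin' '*';
    add_header 'Access-Control-Allow-Methods' 'GET, POST, OPTIONS';
    add_header 'Access-Control-Allow-Headers' 'Origin, X-Requested-With, Content-Type, Accept';
}
One caveat with nginx 1.4: plain add_header is only applied to 2xx and 3xx responses; the always parameter that forces headers onto error responses was only added in nginx 1.7.5.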
Now it doesn't seem to work. I am pushing the conf from my host to the container while building the Docker image; when a container is spun up from the image, I can log in and see that nginx is running and my nginx.conf is being used. But when I analyse the actual response, no such header is added, even though it should be, since I added it in the nginx conf.
Nginx 1.4 is being used. There is no port-mapping issue, and I am not running any nginx on the host - in case some of you suspect that those pages are really being rendered by an nginx on the host rather than the one in the container.
Please help if you have faced this issue and resolved it. Does the header get added to the response if you are running the webserver from inside the container, or is there a bug in Docker, or is add_header not supported in my nginx version?
When I open a session of Chrome with web security disabled, I get my scatterplots in Chrome perfectly.
chrome.exe --user-data-dir="C:/Chrome dev session" --disable-web-security
So the scatterplot code definitely doesn't have any issue. It's only the header being absent from the response, even though I explicitly try to add it through the conf.
Thanks in advance; sorry for the somewhat long post.
Just realized the issue was with the Elasticsearch response, not the nginx conf. If you look carefully, the header is not present in the response from :9200. There is a module named "http", and you can edit its properties in the elasticsearch.yml file.
The link below helped; I had to allow that header through its settings.
https://www.elastic.co/guide/en/elasticsearch/reference/current/modules-http.html
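A minimal sketch of the relevant elasticsearch.yml settings (values are examples; lock down allow-origin for anything beyond testing):
http.cors.enabled: true
http.cors.allow-origin: "*"
http.cors.allow-headers: "X-Requested-With, Content-Type, Content-Length"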
