JupyterHub with DockerSpawner does not run - error 502 - docker

I'm trying to set up a multi-user Jupyter installation. For this I've set up a JupyterHub with RemoteCSVAuthenticator and DockerSpawner.
Authentication seems to work fine, and when I log in a Docker container is started. But after logging in I only get a 502 error message:
502 : Bad Gateway
The error was:
Failed to check authorization (upstream problem)
The JupyterHub logfile shows no errors. The Docker container is the plain
jupyterhub/singleuser image.
Can anyone tell me where to start?
After trying to dig deeper into the problem I found that if I try to access the Jupyter process inside the Docker container (e.g. http://172.17.0.36:8888/) it always throws error 404 - page not found. I don't think this is normal. Maybe this is what causes the configurable-http-proxy to throw the "Bad Gateway" error.
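(Side note: if I understand the base-URL handling right, a 404 at the bare container root can actually be expected - when spawned by JupyterHub, the single-user server serves under a base URL of /user/<name>/, so only that path answers. A quick check, with the container IP from above and <name> as a placeholder for the hub username:
$ curl -i http://172.17.0.36:8888/user/<name>/
)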

Finally I found the problem. Since in our company we need to use a proxy, I had set $http_proxy and $https_proxy inside the Docker container. This made the jupyterhub-singleuser server running inside Docker unable to open its connection back to the host. My solution was to set up a local proxy on my host that forwards local connections directly to the host; everything else goes through the company's proxy.
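A simpler alternative might be to exempt hub traffic from the proxy via no_proxy in the spawner's environment. A minimal sketch in jupyterhub_config.py, assuming the hub is reachable on the default Docker bridge gateway 172.17.0.1; the corporate proxy address is a placeholder:
# jupyterhub_config.py -- proxy host and bridge IP are illustrative
c.DockerSpawner.environment = {
    'http_proxy':  'http://proxy.example.com:3128',
    'https_proxy': 'http://proxy.example.com:3128',
    # exempt the hub itself so the single-user server can call back to it
    'no_proxy':    'localhost,127.0.0.1,172.17.0.1',
}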

Related

Docker login to Gitea registry fails even though curl succeeds

I'm using Gitea (on Kubernetes, behind an Ingress) as a Docker image registry. On my network I have gitea.avril aliased to the IP where it's running. I recently found that my Kubernetes cluster was failing to pull images:
Failed to pull image "gitea.avril/scubbo/<image_name>:<tag>": rpc error: code = Unknown desc = failed to pull and unpack image "gitea.avril/scubbo/<image_name>:<tag>": failed to resolve reference "gitea.avril/scubbo/<image_name>:<tag>": failed to authorize: failed to fetch anonymous token: unexpected status: 530
While trying to debug this, I found that I am unable to login to the registry, even though curling with the same credentials succeeds:
$ curl -k -u "scubbo:$(cat /tmp/gitea-password)" https://gitea.avril/v2/_catalog
{"repositories":[...populated list...]}
# Tell docker login to treat `gitea.avril` as insecure, since certificate is provided by Kubernetes
$ cat /etc/docker/daemon.json
{
"insecure-registries": ["gitea.avril"]
}
$ docker login -u scubbo -p $(cat /tmp/gitea-password) https://gitea.avril
WARNING! Using --password via the CLI is insecure. Use --password-stdin.
Error response from daemon: Get "https://gitea.avril/v2/": received unexpected HTTP status: 530
The first request shows up as a 200 OK in the Gitea logs, the second as a 401 Unauthorized.
I get a similar error when I kubectl exec onto the Gitea container itself, install Docker, and try to docker login localhost:3000 - after an error indicating that the server gave an HTTP response to an HTTPS client, it falls back to the http protocol and similarly reports a 530.
I've tried restarting Gitea with GITEA__log__LEVEL=Debug, but that didn't result in any extra logging. I've also tried creating a fresh user (in case I have some weirdness cached somewhere) and using that - same behaviour.
EDIT: after increasing log level to Trace, I noticed that successful attempts to curl result in the following lines:
...rvices/auth/basic.go:67:Verify() [T] [638d16c4] Basic Authorization: Attempting login for: scubbo
...rvices/auth/basic.go:112:Verify() [T] [638d16c4] Basic Authorization: Attempting SignIn for scubbo
...rvices/auth/basic.go:125:Verify() [T] [638d16c4] Basic Authorization: Logged in user 1:scubbo
whereas attempts to docker login result in:
...es/container/auth.go:27:Verify() [T] [638d16d4] ParseAuthorizationToken: no token
This is the case even when doing docker login localhost:3000 from the Gitea container itself (that is - this is not due to some authentication getting dropped by the Kubernetes Ingress).
I'm not sure what could be causing this - I'll start up a fresh Gitea registry to compare.
EDIT: in this GitHub issue, the Gitea team pointed out that standard Docker registry authentication involves fetching a Bearer token from an endpoint that references the ROOT_URL, which explains this issue.
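For context, the handshake looks roughly like this (a sketch - the realm and service values below are illustrative, not captured from my instance): the registry's /v2/ endpoint answers 401 with a Www-Authenticate header pointing at a token endpoint built from ROOT_URL:
$ curl -skI https://gitea.avril/v2/ | grep -i www-authenticate
Www-Authenticate: Bearer realm="https://gitea.scubbo.org/v2/token",service="container_registry"
docker login then fetches its token from that realm URL - the public ROOT_URL - so if that address is unreachable, authentication fails even though the registry itself answered.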
Text below preserved for posterity:
...Huh. I have a fix, and I think it indicates some incorrect (or, at least, unexpected) behaviour; but in fairness it only comes about because I'm doing some pretty unexpected things as well...
TL;DR attempting to docker login to Gitea from an alternative domain name can result in an error if the primary domain name is unavailable; apparently because, while doing so, Gitea itself makes a call to ROOT_URL rather than localhost
Background
Gitea has a configuration variable called ROOT_URL. This is, among other things, used to generate the copiable "HTTPS" links from repo pages. This is presumed to be the "main" URL on which users will access Gitea.
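For reference, this setting lives in app.ini (value illustrative):
; app.ini
[server]
ROOT_URL = https://gitea.scubbo.org/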
I use Cloudflared Tunnels to make some of my Kubernetes services (including Gitea) available externally (on <foo>.scubbo.org addresses) without opening ports to the outside world. Since Cloudflared tunnels do not automatically update DNS records when a new service is added, I have written a small tool[0] which can be run as an initContainer "before" restarting the Cloudflared tunnel, to refresh DNS[1].
Cold-start problem
However, now there is a cold-start problem:
(Unless I temporarily disable this initContainer) I can't start Cloudflared tunnels if Gitea is unavailable (because it's the source for the initContainer's image)
Gitea('s public address) will be unavailable until Cloudflared tunnels start up.
To get around this cold-start problem, in the Cloudflared initContainers definition, I reference the image by a Kubernetes Ingress name (which is DNS-aliased by my router) gitea.avril rather than by the public (Cloudflared tunnel) name gitea.scubbo.org. The cold-start startup sequence then becomes:
Cloudflared tries to start up, fails to find a registry at gitea.avril, continues to attempt
Gitea (Pod and Ingress) start up
Cloudflared detects that gitea.avril is now responding, pulls the Cloudflared initContainer image, and successfully deploys
gitea.scubbo.org is now available (via Cloudflared)
So far, so good. Except that testing now indicates[2] that trying to docker login (or docker pull, or presumably many other docker commands) against a Gitea instance results in a call to the ROOT_URL domain - which, if Cloudflared isn't up yet, results in an error[3].
So what?
My particular usage of this is clearly an edge case, and I could easily get around this in a number of ways (including moving my "Cloudflared tunnel startup" to a separately-initialized, only-privately-available registry). However, what this reduces to is that "docker API calls to a Gitea instance will fail if the ROOT_URL for the instance is unavailable", which seems like unexpected behaviour to me - if the API call can get through to the Gitea service in the first place, it should be able to succeed in calling itself?
However, I totally recognize that the complexity of fixing this (going through and replacing $ROOT_URL with localhost:$PORT throughout Gitea) might not be worth the value. I'll open an issue with the Gitea team, but I'd be perfectly content with a WILLNOTFIX.
Footnotes
[0]: Note - depending on when you follow that link, you might see a red warning banner indicating "Your ROOT_URL in app.ini is https://gitea.avril/ but you are visiting https://gitea.scubbo.org/scubbo/cloudflaredtunneldns". That's because of this very issue!
[1]: Note from the linked issue that the Cloudflared team indicate that this is unexpected usage - "We don't expect the origins to be dynamically added or removed services behind cloudflared".
[2]: I think this is new behaviour, as I'm reasonably certain that I've done a successful "cold start" before. However, I wouldn't swear to it.
[3]: After I've [...], the error is instead error parsing HTTP 404 response body: unexpected end of JSON input: "" rather than the 530-related errors I got before. This is probably a quirk of Cloudflared's caching or DNS behaviour. I'm working on a minimal reproducing example that circumvents Cloudflared.

Docker "The proxy server received an invalid response from an upstream server." after system reboot

I'm running a Docker MERN stack on CentOS 7 with WHM, cPanel and Apache. Everything works fine until I reboot the server, after which I get the following error on the webpage:
The proxy server received an invalid response from an upstream server.
The proxy server could not handle the request
Reason: Error reading from remote server
Additionally, a 502 Bad Gateway error was encountered while trying to use an ErrorDocument to handle the request.
After searching around Stack Overflow I found that running this command solves my problem:
iptables -t filter -F
My question is: what's causing my problem? How do I configure my server so I don't need to run this command every time it reboots? Do I make a script that runs this command on every restart? Do I configure iptables?
Just to answer my own question: I was dumb and forgot to open the proper outbound ports - I had only opened the inbound ones.
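For anyone in the same spot, a minimal sketch of opening outbound traffic and persisting it across reboots, assuming plain iptables with the iptables-services package on CentOS 7 (ports are examples):
# allow outbound HTTP/HTTPS and replies to established connections
iptables -A OUTPUT -p tcp --dport 80 -j ACCEPT
iptables -A OUTPUT -p tcp --dport 443 -j ACCEPT
iptables -A INPUT -m state --state ESTABLISHED,RELATED -j ACCEPT
# persist the rules so a reboot doesn't wipe them
service iptables save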

Nginx reverse proxy with node docker seems not working

I set up a Docker container with a Node.js server which receives arguments from a path. I can communicate well with my Docker instance when I run a curl command (I get an error message, but this is normal).
I'm trying to communicate with my docker instance from the outside with my Nginx server, but I'm having some problems.
Indeed, when I go to the access URL, I get a 404 error. My configuration file looks like this: [...]
My concern is with the "location" part of my configuration, but I don't really see how to solve the problem. If you have any leads, I'll take them :)
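For reference, a typical location block for proxying to a Node container has this shape (illustrative values only, not the original config, which wasn't included; this assumes the container listens on 127.0.0.1:3000):
location /app/ {
    # the trailing slash on proxy_pass strips the /app/ prefix before forwarding;
    # a mismatch here is a classic cause of upstream 404s
    proxy_pass http://127.0.0.1:3000/;
    proxy_set_header Host $host;
    proxy_set_header X-Real-IP $remote_addr;
}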

Nextcloud in docker behind traefik on unraid

I'm running traefik as a reverse proxy on my unraid (6.6.6).
Apps like sonarr/radarr, nzbget and organizr all work fine. But that's mostly due to the fact that these are super easy to set up: you only need a few traefik-specific labels and that's it.
traefik.enable=true
traefik.backend=radarr
traefik.frontend.rule=PathPrefix: /radarr
traefik.port=7878
traefik.frontend.auth.basic.users=username:password
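Applied at container creation, those labels look like this (a sketch; image name is illustrative):
docker run -d --name radarr \
  --label traefik.enable=true \
  --label traefik.backend=radarr \
  --label 'traefik.frontend.rule=PathPrefix: /radarr' \
  --label traefik.port=7878 \
  linuxserver/radarr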
So far so good, everything is using ssl and working great. 
But as soon as I have to configure some extra stuff for the containers to work behind a reverse proxy, I get lost. I've read dozens of guides regarding Nextcloud, but I can't get it to work.
Currently I'm using the linuxserver/nextcloud docker and from my internal network it's working great. I got everything set up, added users and smb shares and everybody can connect fine. But I can't get it to work behind traefik using a subdirectory. It's probably just some traefik labels I need to add to the nextcloud container, but I'm simply too much of a newb to know which ones I need. 
My first issue was that nextcloud forces https, which traefik doesn't like unless you configure some stuff. So for now I'm just using the traefik.frontend.auth.forward.tls.insecureSkipVerify=true label to work around this. I know it's potentially a security issue, but if I'm not mistaken it only opens up the possibility of a man in the middle attack. Which shouldn't be too much of an issue since both traefik and nextcloud are running on the same machine (and besides everything else is going over http). 
So now that I got that working, I get an Error 500 message when I try to open mydomain.tld/nextcloud.
The traefik log says "Error calling . Cause: Get : unsupported protocol scheme \"\""
I tried adding some labels I found in a guide (https://www.smarthomebeginner.com/traefik-reverse-proxy-tutorial-for-docker/#NextCloud_Your_Own_Cloud_Storage)
"traefik.frontend.headers.SSLRedirect=true"
"traefik.frontend.headers.STSSeconds=315360000"
"traefik.frontend.headers.browserXSSFilter=true"
"traefik.frontend.headers.contentTypeNosniff=true"
"traefik.frontend.headers.forceSTSHeader=true"
"traefik.frontend.headers.SSLHost=mydomain.tld"
"traefik.frontend.headers.STSPreload=true"
"traefik.frontend.headers.frameDeny=true"
I just thought I'd try it; maybe I'd get lucky.
Sadly I didn't. Still Error 500.
Enable debug logging in your traefik configuration using:
loglevel = "DEBUG"
More info here: https://docs.traefik.io/configuration/logs/
After doing this I realized that my docker label was not correctly applying the InsecureSkipVerify = true line in my config. The error I was able to see in the logs was:
'500 Internal Server Error' caused by: x509: cannot validate certificate for 172.17.0.x because it doesn't contain any IP SANs
To work around this I had to add InsecureSkipVerify = true directly to the traefik.toml file for this to work correctly.
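For reference, a minimal sketch of the relevant traefik.toml lines (Traefik v1 syntax, to match the label style above):
# traefik.toml -- global options, illustrative excerpt
logLevel = "DEBUG"          # verbose logging while debugging
insecureSkipVerify = true   # skip TLS verification of backends (e.g. Nextcloud's forced https)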

Getting HTTP ERROR 404 with Jenkins

I am getting the below error when trying to access a Jenkins pipeline URL. I tried clearing the browser cache, tried different browsers, etc., but no luck. The same pipeline URL works fine for other users, but not for me. Any ideas why it's throwing a 404 error for me? Many thanks!
HTTP ERROR 404
Problem accessing /job/jenkins/job/test/. Reason:
Not Found
Powered by Jetty:// 9.4.z-SNAPSHOT
After logging in as an administrator, use the URL http://localhost:8080.
It initially takes you to a URL that has jenkins in its name, which will not work. The URL you want to access is http://localhost:8080.
Also, if you have a different port bound, you can try calling the URL as http://[ip]:[port]/jenkins
If you get an error like the one mentioned above, you should access Jenkins through the URL "http://localhost:8081/jenkins/", not just "http://localhost:8081".
By the way, my port is 8081 because my 8080 port is already in use.
Have a good day!
There can probably be one of these reasons:
You do not have access to the job.
You do have access to the job but you are not logged in. Try logging in to Jenkins in another window, check "remember me on this computer", then open that URL.
You are trying to access it from another server which is not whitelisted by the Jenkins master server, i.e. it is not allowed access.
These are the best guesses I could come up with. If these do not work, then someone needs to manually check the URL you are entering and other environment-related issues themselves.
There is a common mistake that most people make (while running jenkins.war from CMD).
Please ensure that your Tomcat server is up and running locally.
Follow these steps.
Try restarting your Jenkins service with: sudo service jenkins restart
I faced the same issue and identified that JIRA and Jenkins were installed on the same port, 8080. The Jenkins service was starting first, which is why JIRA was not working. I then edited the jenkins.xml file to use port 8081 and restarted the services, and it worked fine.
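For reference, the port lives in the --httpPort argument inside jenkins.xml (the Windows service wrapper). A sketch of the relevant element; the memory flag is illustrative:
<arguments>-Xrs -Xmx256m -jar "%BASE%\jenkins.war" --httpPort=8081</arguments>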
When I ran jenkins.war from CMD, I faced the same issue. Practically, when you run jenkins.war from CMD, Jenkins is available at localhost:8080. But if you run the startup.bat file, then the path you have set, say localhost:8080/jenkins, will work.
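In other words (a sketch; port and context path are the defaults, adjust as needed):
# running the war directly serves Jenkins at the root context,
# i.e. http://localhost:8080/
java -jar jenkins.war --httpPort=8080
# deploying jenkins.war into Tomcat's webapps/ directory instead serves it
# under the /jenkins context, i.e. http://localhost:8080/jenkins/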
If you are using a hook, this error can occur; this is a known issue with Git showing a 404 error. The workaround for the above problem is to use the machine's NIC IP address instead of "localhost".
I used a Docker container to start Jenkins locally for test purposes.
Here is the command: docker run -p 8080:8080 -p 50000:50000 jenkins, as per the official documentation: https://hub.docker.com/_/jenkins?tab=description.
After starting the container, I browsed to http://localhost:8080 and got
HTTP ERROR 404
Problem accessing /job/jenkins/job/test/. Reason:
Not Found
Powered by Jetty:// 9.4.z-SNAPSHOT
I just removed the exposed JNLP port (50000).
The command to start the container then became: docker run -p 8080:8080 jenkins
And now I was able to browse the application at http://localhost:8080 without the Not Found error.
Thanks
