When I access Jenkins I get this error:
Proxy Error
The proxy server received an invalid response from an upstream server.
The proxy server could not handle the request GET /.
Reason: Error reading from remote server
I tried the init script to restart Jenkins, but it fails, saying port 8080 is already in use. I changed the Jenkins default port, but I still get the above error. Any pointers on how to solve this?
Easiest thing:
Stop the web server on which you are running Jenkins.
Run netstat -a.
Is port 8080 in use?
If it is, you will need to change the web server's port to something other than 8080 (9090 is easy to remember).
If 8080 is in use, you should have seen an error when you tried to start the web server. Check the web server's log.
Incidentally, did you really mean you changed the Jenkins default port? Or is this the default port of the web server on which Jenkins is running?
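If the port really is taken and you want to move Jenkins instead, the port is typically set in the service's defaults file on package installs. A sketch, assuming a Debian/Ubuntu package install (RPM-based systems use /etc/sysconfig/jenkins instead):

```
# /etc/default/jenkins  (Debian/Ubuntu package install)
HTTP_PORT=9090
```

After changing it, restart with the init script (e.g. service jenkins restart) and confirm with netstat -a that Jenkins now listens on the new port.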
I have setup a docker registry using docker-compose, largely following the recipe published by Docker here: https://docs.docker.com/registry/recipes/nginx/
Nginx and my registry start, and I am able to issue docker login from a different machine:
docker login https://myhost.mydomain.net
Once logged in I can push and pull images as expected.
Now I need a way to manage content in the remote registry. To that end, I defined a context:
docker context create myregistry-prod --docker "host=https://myhost.mydomain.net"
The command results in this message, which appears to arise during basic authentication:
error during connect: Post "http://myhost.mydomain.net/v1.24/auth": dial tcp 192.168.176.71:80: connectex: No connection could be made because the target machine actively refused it.
I assumed that a context using https would operate inside a TLS connection, so I'm surprised to see the client attempting to open port 80. By design, I have no program listening on port 80, hence the connection is refused.
Note that I am able to fetch the catalog using this URL in a browser: https://myhost.mydomain.net/v2/_catalog. The browser prompts for basic credentials; I supply them and get back the expected result. It appears that the Docker API is working as expected, passing through the Nginx container and being serviced by the registry container.
So, the question is, how do I go about diagnosing the issue? Did I make an error defining the context?
I'm quite sure I have a misunderstanding somewhere. This is my first attempt at Docker Compose and my first attempt at using nginx in front of Docker Registry. I will redact and post nginx.conf and docker-compose.yml if you need them, but I am guessing it's a client-side problem. Any help you might offer will be greatly appreciated.
I am trying to run an nginx forward proxy in Kubernetes, but I'm facing some issues.
My configuration:
nginx configured as a forward proxy with the HTTP CONNECT module, running on Docker: Dockerfile - listens on 8888
K8s with Istio 1.4; Deployment, Service, Gateway and VirtualService configuration, host exposed on 36088
Firefox for testing
My steps:
For local testing, I'm configuring the connection settings in Firefox to point to the instance of nginx running in Docker on localhost:8888. With this configuration, the proxy is behaving as expected, denying and allowing traffic as per the nginx.conf.
For testing my pod in K8S, I can run kubectl port-forward name-of-my-pod 8888:8080 and configure Firefox to use the proxy forwarded on localhost:8080. As per point 1, the proxy works as expected and I can see the traffic hitting my pod in the logs.
Finally, to test my Istio/AKS configuration, I can hit https://proxy.mydomain.net:36088 (defined in the gateway) with a web browser. The URL responds just fine and I can see the pod outputting some logs.
Sadly, though, when I configure Firefox to use proxy.mydomain.net:36088, I get a connection timeout; I can see that the traffic is not actually hitting my pod and I am not getting any logs.
In other words, the proxy doesn't seem to work when I use it as a proxy, but it responds fine when I access its URL as a normal website.
Since the traffic doesn't seem to hit my pod, I guess I need to configure something else in Istio/AKS so that my service/pod works as a proxy, but I don't know if my guess is right. Is there anything obvious that I am missing?
After a lot of digging, we managed to make this work.
This is what our templates look like: deployment, gateway, service and virtualservice
The crucial points were:
Adding the .Values.istio.port.number to the istio-ingressgateway
Configuring .Values.istio.port.protocol as TCP
Configuring the gateway as PASSTHROUGH
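As a rough sketch, assuming the Helm values above render into a Gateway/VirtualService pair along these lines (resource names and the backend service name are illustrative, not taken from the original templates):

```yaml
apiVersion: networking.istio.io/v1alpha3
kind: Gateway
metadata:
  name: forward-proxy-gateway
spec:
  selector:
    istio: ingressgateway
  servers:
  - port:
      number: 36088      # .Values.istio.port.number, also added to the ingressgateway
      name: tcp-proxy
      protocol: TCP      # .Values.istio.port.protocol
    hosts:
    - proxy.mydomain.net
---
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: forward-proxy
spec:
  hosts:
  - proxy.mydomain.net
  gateways:
  - forward-proxy-gateway
  tcp:
  - match:
    - port: 36088
    route:
    - destination:
        host: nginx-proxy-service   # the Service in front of the nginx pod
        port:
          number: 8888
```

The key idea is that with protocol TCP (or TLS passthrough) Istio forwards raw bytes instead of parsing the traffic as HTTP, which is what CONNECT-style proxy traffic needs.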
I've configured NGINX in a cloud instance as a reverse proxy to a Docker container. The app sends emails using nodemailer with Gmail SMTP, but it isn't working inside the Docker container.
My guess:
Missing ports configurations
A mail proxy or something is needed...
I tried exposing ports 587 and 465 in the Dockerfile with no success (not sure if that's correct or if I need something else).
Other considerations:
The container runs its own server using Koa.
The cloud instance will host more containers that may send mail too, each with its own domain and reverse proxy configuration.
Your help is really appreciated!
UPDATE
Running the app in the container, Gmail returns a 534 response code (invalid login error).
The app still works fine when run outside the container.
Gmail authentication was giving login errors when the app ran in the container.
The correct approach is to configure it through OAuth2, and it works flawlessly.
Here's the tutorial I found that helped me out: https://alexb72.medium.com/how-to-send-emails-using-a-nodemailer-gmail-and-oauth2-fe19d66451f9
Thanks timsmelik for your help.
I have a docker container called A that runs on Server B. Server B is running rsyslog server.
I have a script running on A that generates logs, and these logs are sent via a python facility that forwards these logs to either a SIEM or a syslog server.
I would like to send this data to port 514 on Server B so that rsyslog server can receive it.
I can do this if I specify the server name in the Python script as serverB.fqdn; however, it doesn't work when I try to use localhost or 127.0.0.1.
I assume this is expected behaviour, because container A resolves localhost and 127.0.0.1 to itself, hence it fails to send. Is there a way for me to send logs to the Server B it sits on without having to go over the network (which I assume it does when it connects to the FQDN), so the network overhead can be reduced?
Thanks J
You could use a Unix socket for this.
Here's an article on how to Use Unix sockets with Docker
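As a minimal sketch of the idea: bind-mount the host's syslog socket into the container (e.g. docker run -v /dev/log:/dev/log ...) and point Python's SysLogHandler at the socket path instead of serverB.fqdn:514. The snippet below simulates the receiving end with a local Unix datagram socket so it runs anywhere; the socket path and logger name are illustrative.

```python
import logging
import logging.handlers
import os
import socket
import tempfile

# Stand-in for rsyslog's Unix socket (in the real setup this would be
# /dev/log on Server B, bind-mounted into container A).
sock_path = os.path.join(tempfile.mkdtemp(), "log.sock")
receiver = socket.socket(socket.AF_UNIX, socket.SOCK_DGRAM)
receiver.bind(sock_path)

# Point the Python syslog facility at the socket path instead of a host:port.
handler = logging.handlers.SysLogHandler(address=sock_path)
logger = logging.getLogger("container-a")
logger.addHandler(handler)
logger.propagate = False
logger.warning("hello rsyslog")

# The message arrives over the socket without touching the network stack.
data, _ = receiver.recvfrom(4096)
print(b"hello rsyslog" in data)  # → True
```

On the rsyslog side, the imuxsock module (loaded by default on most distributions) is what reads /dev/log, so no extra listener configuration is usually needed.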
I want to run Grails over HTTPS on localhost. I have already configured HTTPS and can see the Apache page when localhost:443 is hit. Currently Grails runs on 8080. When I try running Grails with grails -Dserver.port.https=443 run-app -https, I get Permission denied. I know that binding to ports below 1024 requires root access. But when I try sudo grails run-app, I get command not found.
Any possible solutions?
Generally, it's a bad idea to run your web app as root; it makes your app far more exploitable. Any security flaw in your setup will suddenly give the attacker full root access to the server.
This is why it's more common to do one of the following:
Run a proxy such as Apache, nginx, or HAProxy on port 443 with HTTPS, and Grails on port 8080 without HTTPS. Set up the proxy to forward all requests to your Grails app at 8080. Make sure the Grails app only listens on localhost, so you can't go directly to yoursite.com:8080.
Run Grails on 8080, with HTTPS, listening only on localhost, and set up a netfilter/iptables rule to forward traffic on 443 to localhost 8080.
The two setups are essentially the same. The main difference is whether you use a user-level setup or rely on an OS-level service such as netfilter.
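For the first option, a minimal nginx server block might look like this (a sketch; the server name and certificate paths are illustrative):

```nginx
server {
    listen 443 ssl;
    server_name yoursite.com;

    # Illustrative certificate paths
    ssl_certificate     /etc/ssl/certs/yoursite.crt;
    ssl_certificate_key /etc/ssl/private/yoursite.key;

    location / {
        # Grails listening on localhost:8080 only
        proxy_pass http://127.0.0.1:8080;
        proxy_set_header Host $host;
        proxy_set_header X-Forwarded-Proto https;
    }
}
```

For the second option, a single rule along these lines does the redirect: iptables -t nat -A PREROUTING -p tcp --dport 443 -j REDIRECT --to-port 8080.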