Mitmproxy downgrade from HTTPS to HTTP - Jenkins

We have Jenkins and a gateway (gtw) application that accepts HTTP requests and forwards the data to a Bitbucket server. The flow looks like this:
Jenkins -> Gtw (HTTP) -> Bitbucket URL (HTTP and HTTPS).
From Jenkins the requests are sent via HTTPS.
We would like to know whether mitmproxy can be used as a man in the middle that downgrades the HTTPS to HTTP, or whether there is a way to do that in the Jenkins container.

You can do that with mitmproxy as a reverse proxy (see https://docs.mitmproxy.org/stable/concepts-modes/#reverse-proxy). If this is a production setup, I'd recommend using nginx instead, which has better performance characteristics.
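A minimal sketch of the reverse-proxy approach (the hostnames, ports, and certificate path below are assumptions, not from the question): mitmproxy terminates the HTTPS connection coming from Jenkins and forwards the request upstream as plain HTTP:

    mitmdump --mode reverse:http://gtw.internal:8080 --listen-port 8443 --certs bitbucket.example.com=/etc/mitmproxy/bitbucket.pem

Jenkins is then pointed at https://proxy-host:8443 instead of the original URL. mitmproxy detects the TLS handshake on the listening side and terminates it, so the downgrade to HTTP happens only on the upstream leg.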

Related

HTTP website behind HTTPS Let's Encrypt NGINX Route

I can't seem to figure out whether this is common practice or not. I want to create a website (running in a container) and have traffic forwarded to it from a wildcard subdomain on my domain, and I want to secure it using Nginx Proxy Manager and Let's Encrypt to manage the certificate.
Do I keep the website running on my internal server as plain HTTP on port 80 and redirect traffic to it via Nginx? My current site is just a server-side Blazor web app.
I've seen other people do this, but it makes me wonder whether it is actually secure: at some point between Nginx and the internal server the traffic is not encrypted. Is my understanding correct?
I imagine it looks something like this:
Client connects securely to Nginx Proxy Manager (HTTPS)
Nginx Proxy Manager then decrypts and forwards to the Internal Website (HTTP)
Is my understanding correct?
Is this common practice, or is there a better way to achieve what I want?
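For reference, the flow described above corresponds to a TLS-termination setup like the following nginx server block (Nginx Proxy Manager generates something equivalent; the hostname, backend address, and paths are placeholders):

    server {
        listen 443 ssl;
        server_name app.example.com;

        # certificate managed by Let's Encrypt
        ssl_certificate     /etc/letsencrypt/live/app.example.com/fullchain.pem;
        ssl_certificate_key /etc/letsencrypt/live/app.example.com/privkey.pem;

        location / {
            # the hop to the internal container is plain HTTP
            proxy_pass http://10.0.0.5:80;
            proxy_set_header Host $host;
            proxy_set_header X-Forwarded-Proto https;
            # Blazor Server uses WebSockets (SignalR), so allow the upgrade
            proxy_http_version 1.1;
            proxy_set_header Upgrade $http_upgrade;
            proxy_set_header Connection "upgrade";
        }
    }

Whether the unencrypted hop is acceptable depends on how much you trust the network between the proxy and the container; when both run on the same host or on an isolated internal network, this is common practice.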

How to set up HAProxy to add an access token to client requests

I have a client that can only make requests without authentication information.
I would like to use HAProxy or a similar proxy solution to add OAuth authentication to these client requests.
I have already succeeded in adding a Bearer token to the client requests. See below for the haproxy.cfg with some placeholders.
frontend front
    mode http
    bind *:8080
    http-request add-header Authorization "Bearer {{ .Env.ACCESS_TOKEN}}"
    default_backend servers

backend servers
    mode http
    server server1 myserver.com:443 ssl
The problem is that the access tokens have a TTL of 24 hours. So I need to refresh them or get a new token periodically.
Does HAProxy support this already?
I could write a script that gets a new access token periodically, updates the config, and restarts HAProxy. Is that a good approach when running HAProxy in Docker? Are there better solutions?
You could try implementing this in Lua, which is supported in recent HAProxy versions; check How Lua runs in HAProxy.
An example of the same idea, but using Nginx + Lua, can be found in this project: https://github.com/jirutka/ngx-oauth
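A rough sketch of that idea in HAProxy's embedded Lua (assuming HAProxy 2.5+ for the built-in HTTP client; the token endpoint, request body, map path, and refresh interval are all placeholders): a background task periodically writes a fresh token into a runtime map, and the frontend reads the header value from that map, so no restart is needed.

    -- token_refresh.lua
    local function refresh_token()
        while true do
            -- fetch a new token from the auth server (placeholder URL and body)
            local httpclient = core.httpclient()
            local res = httpclient:post{
                url  = "https://auth.example.com/oauth/token",
                body = "grant_type=client_credentials"
            }
            -- naive JSON extraction; a real setup should use a proper JSON library
            local token = res and res.body and res.body:match('"access_token"%s*:%s*"([^"]+)"')
            if token then
                -- update the runtime map entry that the frontend reads
                core.set_map("/etc/haproxy/token.map", "token", token)
            end
            core.sleep(3600) -- refresh well inside the 24-hour TTL
        end
    end
    core.register_task(refresh_token)

And in haproxy.cfg, load the script and replace the static header with a map lookup (token.map must exist at startup, even if it only holds a placeholder entry):

    global
        lua-load /etc/haproxy/token_refresh.lua

    frontend front
        mode http
        bind *:8080
        http-request set-header Authorization "Bearer %[str(token),map(/etc/haproxy/token.map)]"
        default_backend servers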

How to get HTTPS URL logs using SQUID

I need URL logs on my network using Squid and MikroTik. I am able to get HTTP traffic, but I am not getting HTTPS traffic. How can I get HTTPS traffic using Squid and MikroTik? Another way is also fine.
I run a DNS server, and with that I log the DNS requests on the MikroTik:
username || DNS/URL (website.com)
A quick way to test:
Download and install Pi-hole on a Raspberry Pi, make that the DNS server of your MikroTik, and then in Pi-hole you will see the DNS queries for each client on the MikroTik. You can then use the actual files on the Raspberry Pi to build an API for yourself, or use the built-in APIs in Pi-hole.
Not sure how to do this via Squid or a web proxy.
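On the Squid side, one option worth noting (a sketch, assuming Squid 3.5+ built with SSL support; paths are placeholders): peek-and-splice records the TLS SNI hostname without decrypting the traffic, which gives per-client hostname logs, though not full HTTPS URLs:

    # in squid.conf
    https_port 3129 intercept ssl-bump cert=/etc/squid/dummy.pem
    acl step1 at_step SslBump1
    ssl_bump peek step1      # read the SNI from the TLS ClientHello
    ssl_bump splice all      # then pass the connection through untouched
    logformat sniformat %ts.%03tu %>a %ssl::>sni
    access_log /var/log/squid/sni.log sniformat

Full URLs of HTTPS requests are only visible if you fully bump (decrypt) the traffic, which requires installing a Squid CA certificate on every client.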

How to set a cookie sent from a server to a client on a different port

I have a backend server (powered by Rails) whose APIs are used by an HTML5 frontend that runs on a simple Node development server.
Both are on the same host: my machine.
When I log in from the frontend to the backend, Rails sends me the session cookie. I can see it in the response headers; the problem is that browsers do not save it.
The policies are right: if I serve the same frontend directly from the Rails app, cookies are set correctly.
The only difference I can see is that when the frontend runs on the Node server, it runs on port 8080 and Rails is on port 3000. I know that cookies are not supposed to be port specific, so I am missing what is happening here.
Any thoughts? Solutions?
(I need to keep the setup this way, with the frontend served from Node and the backend on Rails, on different ports.)
You're correct that cookies are port agnostic and that the browser will send the same cookies to myapp.local:3000 as to myapp.local:8080, except not through XMLHttpRequest (XHR, a.k.a. AJAX) when making a cross-origin (CORS) request.
Solution: The request can be told to include cookies and auth headers by setting withCredentials to true on any XMLHttpRequest object. See: https://developer.mozilla.org/en-US/docs/Web/API/XMLHttpRequest/withCredentials
Or if using the Fetch API, set the option credentials: 'include'. See: https://developer.mozilla.org/en-US/docs/Web/API/Fetch_API/Using_Fetch
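For example (a sketch; the URL and payload are placeholders):

    // Fetch API: include cookies on the cross-origin request
    fetch('http://myapp.local:3000/login', {
      method: 'POST',
      credentials: 'include', // send and accept cookies across origins
      headers: { 'Content-Type': 'application/json' },
      body: JSON.stringify({ user: 'demo', password: 'secret' })
    });

    // XMLHttpRequest equivalent
    const xhr = new XMLHttpRequest();
    xhr.open('POST', 'http://myapp.local:3000/login');
    xhr.withCredentials = true; // same effect as credentials: 'include'
    xhr.send();

Note that for the browser to accept the cookie, the Rails side must also respond with Access-Control-Allow-Credentials: true and an explicit, non-wildcard Access-Control-Allow-Origin.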
Alternative: since you tagged webpack-dev-server in your question, you might be interested in proxying requests to your Rails API through the webpack-dev-server to avoid any CORS issues in the first place. This is done in your webpack.config.js:
proxy: {
  '/some/path': {
    target: 'https://other-server.example.com',
    secure: false
  }
}
See: https://webpack.js.org/configuration/dev-server/#devserverproxy

Spring Security, OpenID, and mod_proxy

I have an application using Spring Security's OpenID implementation. The app server sits behind a proxy: Apache httpd with mod_proxy. If the proxy connects to the app server via HTTP, the application tells the OpenID authenticator to redirect back via HTTP rather than via HTTPS, which is what I would prefer. It seems to pull the protocol dynamically and only sees HTTP. If I configure the proxy to use HTTPS, I run into this problem. So is there a way to operate Spring Security behind a proxy that uses HTTP?
A little extra mod_proxy and Glassfish configuration solved this problem for me:
https://serverfault.com/questions/496888/ssl-issue-with-mod-proxy
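The usual shape of that fix (a sketch; host names and ports are placeholders) is to keep the proxy-to-app hop on HTTP but forward the original scheme, then tell the app server to trust that header:

    # Apache httpd virtual host; TLS terminates here (requires mod_headers and mod_proxy)
    <VirtualHost *:443>
        ServerName app.example.com
        SSLEngine on
        # tell the backend the original request was HTTPS
        RequestHeader set X-Forwarded-Proto "https"
        ProxyPass        / http://localhost:8080/
        ProxyPassReverse / http://localhost:8080/
    </VirtualHost>

The app server then has to be configured to honor X-Forwarded-Proto (for example Tomcat's RemoteIpValve, or the equivalent Glassfish listener setting described in the link above), so that Spring Security sees the request as HTTPS and builds the OpenID return URL with the right scheme.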
