How to get HTTPS URL logs using Squid

I need URL logs on my network using Squid and Mikrotik. I am able to get HTTP traffic, but I am not getting HTTPS traffic. How can I get HTTPS traffic logs using Squid and Mikrotik? Another approach is also fine.

I run a DNS server, and with that I log the DNS requests from the Mikrotik in the form:
username || DNS/URL (website.com)
A quick way to test:
Download and install Pi-hole on a Raspberry Pi, make that the DNS server of your Mikrotik, and then in Pi-hole you will see the DNS queries for each client on the Mikrotik. You can then use the actual files on the Raspberry Pi to build an API for yourself, or use the built-in APIs in Pi-hole.
I'm not sure how to do this via Squid or a web proxy.
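If you go the Pi-hole route, the per-client query log can also be read straight from its database on the Pi. A minimal sketch, assuming the default FTL database path and schema (both assumptions on my side; reading the file usually requires root):

```python
# Minimal sketch, assuming pi-hole's long-term query database at the default
# path /etc/pihole/pihole-FTL.db with its "queries" table
# (timestamp, client, domain columns). Run it on the Raspberry Pi itself.
import sqlite3
from datetime import datetime

DB_PATH = "/etc/pihole/pihole-FTL.db"  # default FTL database location

con = sqlite3.connect(DB_PATH)
rows = con.execute(
    "SELECT timestamp, client, domain FROM queries "
    "ORDER BY timestamp DESC LIMIT 20"
)
for ts, client, domain in rows:
    # Print in a "client || domain" style, similar to the log format above.
    print(f"{datetime.fromtimestamp(ts)}  {client} || {domain}")
con.close()
```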

Related

How to expose a service from minikube to be able to access it from another device in the same network?

I've created a service inside minikube (an Express.js API) running on my local machine, so when I launch the service using minikube service wedeliverapi --url I can access it from my browser at localhost:port/api.
But I also want to access that service from another device so I can use my API from a Flutter mobile application. How can I achieve this goal?
Due to the small amount of information, and to clarify everything, I am posting a general Community wiki answer.
The solution to this problem was to use a reverse proxy server. This documentation explains what exactly a reverse proxy server is:
A proxy server is a go‑between or intermediary server that forwards requests for content from multiple clients to different servers across the Internet. A reverse proxy server is a type of proxy server that typically sits behind the firewall in a private network and directs client requests to the appropriate backend server. A reverse proxy provides an additional level of abstraction and control to ensure the smooth flow of network traffic between clients and servers
Common uses for a reverse proxy server include:
Load balancing
Web acceleration
Security and anonymity
This is the guide where one can find the basic configuration of a proxy server.
See also this article.
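Before (or instead of) standing up a full reverse proxy, the idea can be tried with a few lines of Python: listen on all interfaces of the host and forward each request to the URL that minikube service wedeliverapi --url printed. This is only a rough GET-only sketch; the backend address below is a placeholder for whatever minikube gives you.

```python
# Rough GET-only reverse-proxy sketch. BACKEND is hypothetical: paste in the
# URL printed by `minikube service wedeliverapi --url`.
import http.server
import urllib.request

BACKEND = "http://192.168.49.2:31234"  # placeholder minikube service URL

class ProxyHandler(http.server.BaseHTTPRequestHandler):
    def do_GET(self):
        # Forward the incoming path to the backend service and relay the reply.
        with urllib.request.urlopen(BACKEND + self.path) as resp:
            body = resp.read()
            self.send_response(resp.status)
            for name, value in resp.getheaders():
                # Skip hop-by-hop headers; the body is already fully read.
                if name.lower() not in ("transfer-encoding", "connection"):
                    self.send_header(name, value)
            self.end_headers()
            self.wfile.write(body)

if __name__ == "__main__":
    # Listen on all interfaces so other devices on the LAN can reach port 8080.
    http.server.ThreadingHTTPServer(("", 8080), ProxyHandler).serve_forever()
```

With this running, another device on the same network can call http://your-machine-IP:8080/api, while a production setup would use nginx or another proper reverse proxy as described above.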

Serving dockerized microservices over HTTPS

I'm currently struggling with docker and SSL. Let me give you an overview of what I'm trying to do.
I built a microservice-based architecture composed of a React web application and some "backend" services written in Python and exposed with gunicorn in docker containers. I need to serve it over SSL because Auth0 requires HTTPS communication. So I built the server, bought a domain, and got an SSL certificate for the domain with Let's Encrypt.
Now, here is the trouble: my services communicate with each other over a docker network, say services-network. For this reason they refer to each other with URLs like `service:port/example`.
At the moment I'm able to successfully connect to my web app over HTTPS, but whenever it tries to contact the "backend" services the connection is refused because it comes from a non-secure resource (I used http://service:port/endpoint).
I tried to use the Let's Encrypt certificate generated for the web app, but the communication is blocked with the message: requests.exceptions.SSLError: HTTPSConnectionPool(host='service', port=8081): Max retries exceeded with url: /endpoint (Caused by SSLError(CertificateError("hostname 'service' doesn't match 'domain.com'",),))
I understand that a possible workaround for this error is to make the services communicate with each other over the external network instead of the docker network. However, I don't think that is good practice; communication among containers should go through the docker network.
Finally, my question is: what is the best way to make the containers communicate over HTTPS through the docker network?
I personally like to use nginx as a reverse proxy. You would configure it normally and set it to proxy_pass <dockerIp:port>.
Many people like to use traefik.io which has many features including Let's Encrypt integration.
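To make the split concrete: one common pattern with such a reverse proxy is to terminate TLS at nginx (or traefik) for the public domain and keep container-to-container traffic on plain HTTP inside the docker network. A rough sketch using the hypothetical names from the question (domain.com, service, port 8081, /endpoint):

```python
# Sketch under a TLS-at-the-edge setup; all names come from the question and
# should be replaced with your own domain, service names, and paths.
import requests

# External callers (the React app in the browser, mobile clients, ...) go
# through the TLS-terminating proxy, whose certificate matches domain.com:
public = requests.get("https://domain.com/endpoint", timeout=5)

# Container-to-container calls stay on the internal docker network and plain
# HTTP, so no certificate for the internal "service" hostname is needed:
internal = requests.get("http://service:8081/endpoint", timeout=5)
```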

Connect to rails server remotely from raspberry pi

I have SSH'd into my Raspberry Pi and built a Rails application.
Now how do I load the rails app from another machine?
I have tried IP:port in a web browser, but this fails.
Can I use ssh from a web browser to load the rails server process?
Are there gems I need to install to do this?
Is there any good documentation that I have missed?
SOLUTION
Use ngrok to tunnel: https://medium.com/@karimbutt/using-ngrok-to-create-a-publicly-accessible-web-facing-raspberry-pi-server-35deef8c816a#.sraso7zar
Maybe the problem is with the IP address you're trying to use. Servers don't necessarily forward their public IP traffic to localhost automatically.
Perhaps you could configure the IP address somehow, I don't know (others might?). Alternatively, you can use a "local tunnel" service like ngrok or localtunnel. What these do is create a public URL for your localhost (i.e. your "loopback" address), so anyone can access it.
I spoke with an ngrok author via email. He assured me that I shouldn't need to expect any downtime from the service or have to restart it manually. Keep in mind, though, that if you're on the free plan, whenever you restart ngrok you're going to get a different URL. He also described it as kind of like a "souped up SSH -R".
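If you would rather start the tunnel from code than from the ngrok CLI, the third-party pyngrok wrapper can do it. A minimal sketch, assuming a recent pyngrok is installed and a Rails server is already listening on its default port 3000 (both assumptions on my side):

```python
# Sketch, assuming `pip install pyngrok` and a Rails server already running
# locally on port 3000 (the Rails default).
from pyngrok import ngrok

# Open an HTTP tunnel to localhost:3000; ngrok hands back a public URL that
# any device can load. On the free plan the URL changes on every restart.
tunnel = ngrok.connect(3000)
print("Public URL:", tunnel.public_url)

# Keep the process alive so the tunnel stays open until interrupted.
try:
    ngrok.get_ngrok_process().proc.wait()
except KeyboardInterrupt:
    ngrok.kill()
```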

Jenkins Server - Issues with setting URL

I am trying to set up an internal Jenkins server for our QA team and am facing some issues with the server URL. This is inside a corporate network, and all sorts of firewall and proxy settings are in place; however, we need to access the server only within our internal network. The server runs on a Mac Mini. I was able to install and access the server without any issues using localhost:8080.
I tried to set a custom URL (something like testjenkins.local:8080) under the Manage Jenkins option and was never able to access the server. The only option that worked for me is the IP address (IP:8080). I was able to access the server from other machines in the network using this URL.
The real problem with the above setup is that the machine's IP changes (I am not able to make it static), and hence I won't be able to get an always-working URL.
I would highly appreciate it if anyone could guide me in the right direction.
Given that you have a dynamic IP on your server, a good alternative would be using ngrok. ngrok can expose port 8080 of that server to the internet via secure tunnels, and you access it via a URL, so changes in the IP won't affect it.
However, ngrok exposes the server to the whole Internet. To make it accessible only to your team, you can add authentication to both the ngrok tunnel and the Jenkins server (would that work for you?).

JIRA Usage on AWS

I just set up JIRA on my EC2 instance after installing it via the .bin installer file. But when I hit the EC2 URL:
ec2-xxxxx.xxxxx.amazonaws.com
It hits the test success page for Apache2, which I installed after the JIRA installation.
How do I determine the correct URL for JIRA and reach the JIRA app?
Thanks
JIRA's default HTTP port is 8080, so you need to access it via:
ec2-xxxxx.xxxxx.amazonaws.com:8080
If you are not using the default settings, check which port is set; see the document Changing JIRA's TCP Ports.
You may also need to open port 8080 in the firewall and add it to the same security group in which you opened port 22. Otherwise, you can't directly access that port.
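If you prefer to script that security-group change rather than click through the console, here is a hedged boto3 sketch; the group ID and source CIDR below are placeholders:

```python
# Sketch: open TCP 8080 on the instance's security group so the JIRA/Tomcat
# port is reachable. The group ID and source CIDR are placeholders.
import boto3

ec2 = boto3.client("ec2")
ec2.authorize_security_group_ingress(
    GroupId="sg-0123456789abcdef0",  # the security group that already allows port 22
    IpPermissions=[{
        "IpProtocol": "tcp",
        "FromPort": 8080,
        "ToPort": 8080,
        "IpRanges": [{
            "CidrIp": "203.0.113.0/24",  # restrict to your internal/office range
            "Description": "allow JIRA access on 8080",
        }],
    }],
)
```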
Apart from the previous answer you may wish to ensure the following:
Your AWS EC2 instance security group has the port opened
Your AWS VPC ACL allows TCP traffic on this port
Your VPC has an internet gateway
Your VPC has the routes configured
Your Apache proxy is configured to point to the Tomcat port
Your Tomcat is configured
You have enabled port allocation using the setcap utility
Your local machine's firewall allows the connection (on Red Hat, the firewall is enabled by default and blocks connections)
As you can see it may be tricky to install Jira on AWS. It may be a good idea to use a deployment service like Deploy4Me to do this quickly.
