Setting up PwnMachine self-hosted (Docker)

Trying to set up [PwnMachine v2](https://github.com/yeswehack/pwn-machine) properly.
PwnMachine is a self-hosted solution based on Docker, aiming to provide an easy-to-use pwning station for bug hunters.
The basic install includes a web interface, a DNS server and a reverse proxy.
Installation
Using Docker
Clone the repository locally on your machine
git clone https://github.com/yeswehack/pwn-machine.git
Enter the repository you just cloned
cd pwn-machine/
Configure the .env <-- Having trouble on this step
If you try to build the project directly, you will be faced with an error:
${LETS_ENCRYPT_EMAIL?Please provide an email for let's encrypt}" # Replace with your email address or create a .env file
We highly recommend creating a .env file in the PwnMachine directory and configuring an email address in it. The email is used by Let's Encrypt to issue an SSL certificate.
LETS_ENCRYPT_EMAIL="your_email@domain.com"
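For example, from the pwn-machine directory (the address below is a placeholder, use your own):
# create the .env file that docker-compose reads at build time
echo 'LETS_ENCRYPT_EMAIL="you@yourdomain.com"' > .env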
Building
Build the project (the optional -d flag starts the project in the background). Building can take several minutes, depending on your computer and network connection.
docker-compose up --build -d
Once everything is done on the Docker side, you should be able to access the PwnMachine at http://your_ip_address
Starting pm_powerdns-db_1 ... done
Starting pm_redis_1 ... done
Starting pm_powerdns_1 ... done
Starting pm_filebeat_1 ... done
Recreating traefik ... done
Recreating pm_manager_1 ... done
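To confirm everything came up, you can check the container states; a sketch, assuming the service names match the containers listed above:
# list the PwnMachine containers and their state
docker-compose ps
# follow the logs of one service, e.g. the manager
docker-compose logs -f manager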
First run & configuration
Password and 2FA configuration
When you start the PwnMachine for the first time, you are asked to set a new password and configure 2FA. This is mandatory to continue. You can use Google Authenticator, Authy, KeePass... anything that allows you to set up 2FA.
After this, you are ready to use the PwnMachine!
How to setup DNS
Create a new DNS zone
First, we need to create a new DNS zone. Go to DNS > ZONES
Name: example.com
Nameserver: ns.example.com.
Postmaster: noreply.example.com.
Click on the button to save the configuration and create this new DNS zone.
Create a new DNS rule
Zone: example.com.
Name: *.example.com.
Type: A
Add a new record
your_ip_address
Click on the button +
Click on the button to save the configuration
Now you need to update the nameservers for your domain at your registrar to point to the server you just configured.
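Once the nameserver change has started to propagate, you can verify that the zone answers correctly; a sketch with hypothetical names, substitute your own:
# query your PwnMachine's DNS server directly for the wildcard record
dig A test.example.com @your_ip_address +short
# query a public resolver to check propagation
dig A test.example.com @8.8.8.8 +short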
How to expose a docker container on a subdomain and use HTTPS
For this example, we will create a new subdomain, manager.example.com, and expose the PwnMachine interface on it over HTTPS.
Go to DOCKER > CONTAINERS
Right click on pm_manager
Click on Expose via traefik
A new window should open:
Name: pm_manager-router
Rule: Host(`manager.example.com`) && PathPrefix(`/`)
Entrypoint: https
Select "Middlewares"
Service: pm_manager-service
---- TLS ----
Cert Resolver: Let's Encrypt staging - DNS
Domain: *.example.com
Now wait for DNS propagation; after a few minutes you should be able to connect to manager.example.com.
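Once propagation is done, a quick way to check the routing and the certificate (a sketch, assuming the hostnames above):
# -k is needed because the Let's Encrypt staging resolver issues untrusted certificates
curl -vk https://manager.example.com/
Switch the router to the production Let's Encrypt resolver once everything works, so browsers trust the certificate.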
I was able to get it started and access it at http://127.0.0.1/, but got a bit confused when setting up the records.
I'm trying to set it up so I can access it over the web, i.e. c25.tech/payload.dtd.
c25.tech is my domain; I have it through Hostinger.
I hope that someone can help me out, thanks.

Related

DDEV - create SFTP user

I have created two containers (ddev-website-web and ddev-api-web) via DDEV.
Now I want to access the website container from the api container via SFTP.
How can I create an SFTP user in DDEV for the website container? Is this possible at all?
The containers are already connected via a router.
I think installing sshd using this technique from ddev-contrib will work for you, or at least will get you started with having an SSH server.
Add vsftpd and openssh-server via webimage_extra_packages in your .ddev/config.yaml: webimage_extra_packages: [vsftpd, openssh-server]
From there, you may have some extra config to do based on https://linuxopsys.com/topics/install-vsftpd-ftp-server-on-debian
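A sketch of that change, assuming webimage_extra_packages is not already set in the file (vsftpd still needs its own configuration afterwards):
# append the extra Debian packages to the web image build
cat >> .ddev/config.yaml <<'EOF'
webimage_extra_packages: [vsftpd, openssh-server]
EOF
# rebuild the web container so the packages are installed
ddev restart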

How to connect via http instead of default https on nifi docker container

I am currently running the latest versions of NiFi and PostgreSQL via Docker Compose.
As of the 1.14 version update of NiFi, when you access the UI on the web it connects via HTTPS, thus asking you for an ID and password every time you log in. It's too cumbersome to go to the nifi-app.log file and look for credentials every time I access the UI. I know there is a setting that controls whether HTTPS is the default method, but I am not sure how to change it in a Docker container. Can anyone help me with this?
You could use an environment variable like AUTH, as described in the documentation.
You can find the full explanation here.
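For the apache/nifi image specifically, a commonly used knob is the NIFI_WEB_HTTP_PORT environment variable, which switches the UI to plain HTTP; a sketch, worth verifying against the image documentation for your version:
# run NiFi with the UI served over plain HTTP on port 8080
docker run -d --name nifi -p 8080:8080 -e NIFI_WEB_HTTP_PORT=8080 apache/nifi:latest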

Programmatically check if Cloud Run domain mapping is done

I'm developing a service which will have a subdomain for each customer. So far I've set a DNS rule on Google Domains as
* | CNAME | 3600 | ghs.googlehosted.com.
and then I add the mapping for each subdomain in the Cloud Run console. I want to do all this programmatically every time a new user registers.
The DNS rule will handle automatically any new subdomain, and to map it to the service I'll use the gcloud command:
gcloud beta run domain-mappings create --service frontend --domain sub.domain.com
Now, how can I check when the Cloud Run provisioning is done so that I can notify the customer that the platform is ready to use? I could cron the command gcloud beta run domain-mappings describe --domain sub.domain.com every minute, parse the JSON output and check whether the status is done. It's expensive, but it should work.
The problem is that even if the gcloud cli or the web console mark the provisioning as done, the platform isn't reachable for another 5-10 minutes, resulting in a ERR_CONNECTION_REFUSED error. The service logs show that a request to the subdomain is being made, but somehow it won't serve it.
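A minimal sketch of that polling idea, assuming jq is available and that the mapping exposes a Knative-style Ready condition in its status (worth confirming against your actual JSON output):
# poll once a minute until the domain mapping reports Ready
while true; do
  ready=$(gcloud beta run domain-mappings describe \
    --domain sub.domain.com --platform managed --format=json \
    | jq -r '.status.conditions[] | select(.type=="Ready") | .status')
  [ "$ready" = "True" ] && break
  sleep 60
done
As noted above, even a Ready status may precede actual reachability by several minutes.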
I ended up using a load balancer as suggested. I followed the doc "Setting up a load balancer with Cloud Run, App Engine, or Cloud Functions"; the only different thing is that I provided my own wildcard certificate (thanks to Let's Encrypt and certbot).
Now I can just use the Google Domains' API to instantly create a subdomain.

Setting Grafana domain in Docker container

I'm running Grafana from the docker image on docker hub here (v6.7.4). I would like to add a notification to Microsoft Teams and have the links direct back to the domain I am hosting Grafana on.
I have added the MSTeams webhook to Grafana, and it successfully sends notifications. Now, when I click on "view rule" in the notification, it opens localhost:3000 since that is the default domain for Grafana.
In trying to configure this to point to grafana.my.domain, I have followed this configuration of the Grafana Docker image as well as looked at the configuration file settings, specifically the domain and root_url settings.
Based on the Docker configuration, I have tried passing GF_SERVER_DOMAIN=grafana.my.domain, as well as GF_SERVER_SERVE_FROM_SUB_PATH and GF_SERVER_ROOT_URL, and most combinations of those. I have also attempted to alter a sample.ini file that is shipped with the Docker container to include the block:
[server]
domain = grafana.my.domain
I then mounted the .ini file as /grafana/config.ini:/etc/grafana/grafana.ini (based on this) in my docker-compose file, but it was not picked up.
Still, when the notification is clicked on within Teams, I get directed to localhost:3000. Am I missing something with the configuration here? It seems passing the environment variable is all that should be needed based on the documentation.
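For reference, Grafana maps any .ini setting to an environment variable as GF_<SECTION>_<KEY>; a docker run sketch of the equivalent of that [server] block (the domain is a placeholder):
# root_url is what Grafana uses to build absolute links, e.g. in alert notifications
docker run -d -p 3000:3000 \
  -e GF_SERVER_DOMAIN=grafana.my.domain \
  -e GF_SERVER_ROOT_URL=https://grafana.my.domain/ \
  grafana/grafana:6.7.4
Notification links are built from root_url, so setting only GF_SERVER_DOMAIN may not be enough.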

Requests through another machine

Is it possible to make requests, for example with Savon, through something like an SSH tunnel? I can run this stuff from my stage server, whose IP is whitelisted in the service I'm sending requests to. But of course I want to do the development on my computer :P so is there any option to do that? I've already tried Savon's proxy: option in many combinations such as
proxy: "http://name:password@my_stage_server.com"
etc. I'm using Ruby on Rails.
SSH tunnels are the way to go. They are easy to set up; use this in one terminal session:
ssh -L 8080:servicehost:80 myuser@stagingserver
Once established, leave it open. It'll open port 8080 on your localhost as a tunnel to the TCP service at servicehost:80. Point Savon to http://localhost:8080/some/url/to/service to access the service running on http://servicehost/some/url/to/service.
If you need this frequently, it's convenient to add it to your ssh config file, which is located at ~/.ssh/config. It's a plain text file, the example above would look like this:
Host staging
HostName hostname.domain
LocalForward 8080 servicehost:80
User myuser
With this configuration you can open the tunnel by simply issuing ssh staging. There are more options you could set; please refer to the man page for details.
Hostname resolution
Keep in mind that the hostname servicehost must be resolvable from your staging server, not your development machine. You can use IP addresses, too.
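A quick sanity check of the tunnel, assuming the config above:
# open the tunnel without starting a remote shell
ssh -N staging &
# this request should now reach servicehost through the staging server
curl -v http://localhost:8080/some/url/to/service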
