Run proxy server on iOS

For an enterprise application I want to run a proxy server continuously and locally on iOS. Steps I have taken so far:
1. Use NEPacketTunnelProvider to create a tunnel
2. Tunnel the traffic to 127.0.0.1:8080
3. Start the proxy server from the network extension (this works!)
Step 3 works; however, the proxy seems to stop working shortly after starting up. I could imagine this has something to do with not being able to run such a process continuously. Does anyone have an idea or a pointer?
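For what it's worth, a minimal sketch of the pattern, assuming the proxy is embedded in the extension process; ProxyTunnelProvider and startLocalProxy() are hypothetical names here:

import NetworkExtension

class ProxyTunnelProvider: NEPacketTunnelProvider {
    override func startTunnel(options: [String: NSObject]?,
                              completionHandler: @escaping (Error?) -> Void) {
        let settings = NEPacketTunnelNetworkSettings(tunnelRemoteAddress: "127.0.0.1")

        // Route HTTP/HTTPS traffic to the proxy listening on 127.0.0.1:8080.
        let proxy = NEProxySettings()
        proxy.httpEnabled = true
        proxy.httpServer = NEProxyServer(address: "127.0.0.1", port: 8080)
        proxy.httpsEnabled = true
        proxy.httpsServer = NEProxyServer(address: "127.0.0.1", port: 8080)
        proxy.matchDomains = [""]   // empty string = apply to all domains
        settings.proxySettings = proxy

        setTunnelNetworkSettings(settings) { error in
            if error == nil {
                self.startLocalProxy()
            }
            completionHandler(error)
        }
    }

    private func startLocalProxy() {
        // Hypothetical: start the embedded proxy here and keep a strong
        // reference to it (e.g. in a stored property) so it isn't
        // deallocated once startTunnel returns.
    }
}

One possible cause of the "stops after startup" symptom is the proxy object being owned only by a local variable in startTunnel, so it is released as soon as the method returns; note too that packet tunnel extensions run under tight memory limits, so the system may terminate a heavy in-process proxy.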

Related

How to debug my requests from a Docker image?

I run my application, which grabs data from an external API, in a Docker container (Alpine). I use Docker Desktop 4.1.1 on macOS Monterey 12.5.
Every now and then my app needs to refresh its auth token, and usually everything works well.
But sometimes I get timeouts on the request to refresh the token (let's say it goes to auth.example.com).
I think auth.example.com might be rate limiting those calls but:
- The same request works with no problem from my host (outside Docker) at the same time it is timing out in a container
- After I restart Docker, it works right away from inside a container
- The issue disappears after some (random?) amount of time - sometimes 30 minutes, sometimes hours
- I tested it from different containers built from different clean images (Debian, Alpine, Ubuntu) - calls to auth.example.com time out from all of them
- I tried telnet auth.example.com 443: it times out inside Docker and works fine from my host
- At the same time, telnet google.com 443 works fine from inside my containers
- I tried running hundreds of those requests from my host in a loop to see if it gets blocked, but it doesn't (and my app inside a container makes that request maybe once an hour)
Could Docker be adding something to the requests that lets auth.example.com filter them?
But I tried sending requests from inside my container and from my host to RequestBin, and all the headers look the same.
I tried using mitmproxy and Proxyman to inspect the requests, but auth.example.com uses SSL pinning and I was not able to configure that properly.
I don't know how to debug that further. Any ideas?
(I am using Spotify's API, with the Spotipy library, and the calls that time out are made to accounts.spotify.com.)
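One way to narrow it down (using auth.example.com as a stand-in for accounts.spotify.com): while the timeouts are happening, compare name resolution and the TLS connection from the host and from a clean container. If the container resolves a different IP, or resolves the same IP but hangs on connect, the problem is in Docker's network/DNS layer rather than in the requests themselves. A sketch:

# on the macOS host
nslookup auth.example.com
curl -sv -o /dev/null https://auth.example.com
# from a throwaway container, at the same time
docker run --rm alpine nslookup auth.example.com
docker run --rm alpine wget -S -T 5 -O /dev/null https://auth.example.com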

How to get SSH access over the internet without SSH access?

OK, that sounds weird, I know. My Raspberry Pi server was connected to Tailscale and I was able to do everything remotely. However, I installed and then removed Pi-hole, and when I removed something called "iproute2" I lost my connection to Tailscale. I can still access things such as Portainer and any Docker app through Cloudflare. Is there any way to get SSH access back? Is there any Docker app that lets me send commands or the like? All I need is to run either "sudo ngrok tcp 22" or "tailscale up". Thanks!
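Since Portainer still works, one possible workaround is to start a privileged container from it that enters the host's namespaces and runs commands as root on the Pi itself. This is a sketch, assuming Portainer on that host can create privileged containers and that the alpine image's nsenter applet is available; since removing iproute2 is probably what broke Tailscale, you may need to reinstall it first (assuming Raspberry Pi OS with apt):

# reinstall the package whose removal broke Tailscale (host uses apt)
docker run --rm --privileged --pid=host alpine \
  nsenter -t 1 -m -u -n -i apt-get install -y iproute2
# bring the Tailscale link back up on the host
docker run --rm --privileged --pid=host alpine \
  nsenter -t 1 -m -u -n -i tailscale up

nsenter -t 1 joins PID 1's mount, UTS, network, and IPC namespaces, so the commands run against the host system rather than inside the container.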

Is it possible to run SSL offline?

I have a web app deployed in the cloud with SSL (using freeencrypt with nginx).
The app is dockerized.
Is it possible for me to run it on localhost just by copying it and running docker-compose up?
Sure, that's entirely possible. There's nothing particularly different about running it locally vs running it remotely: in both cases, you're still interacting with your web app with a browser over a network connection.
The only tricky bit may be ensuring that you can continue to use the appropriate hostname so that your SSL certificate validates correctly. The easiest way to do this is probably to modify your /etc/hosts file to map the hostname to the IP address of your webapp container; this overrides DNS. Just remember to remove the modification when you're done testing, otherwise you won't be able to reach the remote site!
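For example, if the certificate was issued for app.example.com (a hypothetical name here) and the container publishes port 443 locally, an entry like this makes your browser resolve that name to the local container:

# /etc/hosts -- point the certificate's hostname at the local container
127.0.0.1    app.example.com

The certificate itself generally validates fine offline as long as it hasn't expired, since validation is a local signature check against the browser's trust store.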

Ruby/Rails gRPC server restart without disconnecting clients

I am using the following code snippet to start a gRPC server, and it works fine. But whenever I need to deploy new code to the server, what is the right way to restart it? Should I just kill the server process and let the client handle the error? Or is there a way to enable a master/worker mode like Unicorn does?
s = GRPC::RpcServer.new
s.add_http2_port('0.0.0.0:50051', :this_port_is_insecure)
s.handle(MyServiceImpl.new) # service class registered in the real app
s.run_till_terminated
There is no built-in support for rolling out new deployments in ruby-gRPC.
However, applications with multiple server instances should be able to do rolling restarts. Note that if gRPC connects to a server, starts making RPCs to it, and that server gets shut down, gRPC will internally notice that the connection went bad and will try to make its next RPC on a new connection (the default behavior is to perform the next RPC on the next resolved address that can be successfully connected to, which might mean reconnecting to the same address for which the connection just broke). Note too that gRPC servers use SO_REUSEPORT by default, so one could potentially run multiple servers on the same port.
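There is no rolling-restart primitive to show, but ruby-gRPC can at least drain in-flight RPCs before the process exits, which makes a kill-old/start-new deploy much less disruptive. A minimal sketch, reusing the snippet from the question (MyServiceImpl is a hypothetical handler class):

require 'grpc'

s = GRPC::RpcServer.new
s.add_http2_port('0.0.0.0:50051', :this_port_is_insecure)
s.handle(MyServiceImpl.new)  # hypothetical service implementation
# Finish in-flight RPCs, then exit, when a deploy signal arrives.
s.run_till_terminated_or_interrupted(['SIGTERM', 'SIGINT'])

With two or more instances behind the same port (SO_REUSEPORT) or address list, restarting them one at a time this way leaves clients a healthy server to reconnect to.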

Docker services stop communicating after some time

I have six containers running together in Docker swarm: Kafka+Zookeeper, MongoDB, A, B, C, and Interface. Interface is the main access point from the public - only this container publishes its port, 5683. The Interface container connects to A, B, and C during startup. I am using a docker-compose file + docker stack deploy, and each service has a name which Interface uses as the host. Everything starts successfully and works fine. After some time (20 mins, 1 h, ...) I am no longer able to make requests to Interface. Interface receives my requests, but the application has lost its connection to service A, B, C, or all of them. If I restart Interface, it is able to reconnect to A, B, and C.
I first thought it was an application problem, so I exposed two new ports on each service (Interface, A, B, C) and connected a profiler and debugger to them. The applications are running properly: no leaks, no blocked threads, just working normally and waiting for connections. The debugger shows me that when I make a request to Interface and Interface tries to call service A, a "Connection reset by peer" exception is thrown.
During this debugging I found out something interesting. I attached the debugger to Interface when the services started, and the debugger also got disconnected after some time - and I was not able to reconnect it until I made a request to the container/application. Problem: handshake failed.
Another interesting thing I found was that I could no longer reach Interface itself either. So I used Wireshark to see what was going on: SYN-ACK was fine, then the application posts some data and Interface responds with FIN, ACK. I assume the same thing happens when Interface tries to call service A and the connection gets FINed. The codebase of Interface, A, B, and C is the same as far as the Netty server is concerned.
Finally, I don't think it's an application issue. Why? I tried to deploy the containers not as services: I ran each container separately, published the ports of each one, and set the service endpoints to localhost (no overlay network). And it works - the containers run without problems. I should also have said at the beginning that the Java applications (Interface, A, B, C) run without problems as standalone applications, outside Docker.
Could you please help me figure out what the issue could be? Why does Docker close sockets when an overlay network is used?
I am using the newest Docker; I also tried older versions.
Finally, I was able to solve the problem.
What was happening, one more time: Interface opens a permanent TCP connection to A, B, and C. When you run these services A, B, and C as standalone Java applications, everything works. When we dockerize them and run them in swarm, it works for only a few minutes. The strange part was that the connection between Interface and another service was interrupted at the moment a client made a request to Interface.
After many, many unsuccessful tests and much debugging of each container, I tried to run each Docker container separately, with mapped ports, and specified localhost as the endpoint (each container exposed its ports and Interface connected to localhost). A funny thing happened: it worked. When you run containers like this, a different network driver is used - the bridge driver. If you run them in swarm, the overlay network driver is used.
So it had to be something with the Docker network, not with the application itself. The next step was a tcpdump from each container after a couple of minutes, when it should have stopped working. It was very interesting:
Client -> Interface (OK, request accepted)
Interface -> A (forwarding the request because it belongs to A)
Interface -> A [POST]
A -> Interface [RESET]
A was resetting the open TCP connection after a couple of minutes without communication. Why?
Docker uses IP Virtual Server and IPVS maintains its own connection table. The default timeout for CLOSE_WAIT connections in IPVS table is 60 seconds. Hence when the server sends something after 60 seconds, the IPVS connection is no longer available and the packet looks invalid for a new TCP session and gets RST. On the client side, the connection remains forever in FIN_WAIT2 state because the app still has the socket open; kernel's fin_wait timer kicks in only for orphaned TCP sockets.
This is what I read about it and how I understand it. I am not sure my explanation of the problem is correct, but based on these assumptions I implemented a ping-pong between Interface and the A, B, C services so that no connection stays idle for 60 seconds or more. And it's working.
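The same idea can also be pushed down to the TCP layer instead of the application layer: let the kernel send keepalive probes before IPVS's idle window closes, so the connection-table entry never expires. A sketch of a stack-file fragment, assuming a recent engine (per-service sysctls in swarm require Docker 19.03+); service and image names are hypothetical:

services:
  a:
    image: my-service-a   # hypothetical image
    sysctls:
      # Probe idle connections well before IPVS's timeout fires.
      net.ipv4.tcp_keepalive_time: 30
      net.ipv4.tcp_keepalive_intvl: 30
      net.ipv4.tcp_keepalive_probes: 3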
Got the same issue. I specified
endpoint_mode: dnsrr
on the service that plays the "server" role, and it works just fine.
https://forums.docker.com/t/tcp-timeout-that-occurs-only-in-docker-swarm-not-simple-docker-run/58179
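For reference, a sketch of where that setting lives in a v3 stack file (service name hypothetical). dnsrr makes clients resolve task IPs directly instead of going through the IPVS virtual IP whose connection table was dropping idle connections; note it cannot be combined with ingress-mode published ports:

services:
  a:
    image: my-service-a   # hypothetical image
    deploy:
      endpoint_mode: dnsrr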
