Blazor running in Docker: how to get the client IP? - docker

string loginip = Request.Headers["X-Forwarded-For"].FirstOrDefault(); // empty, the header is never set
string loginip = HttpContext.Connection.RemoteIpAddress?.ToString(); // only returns the Docker network IP, not the client
Is there any other way?

You're on the right track using X-Forwarded-For.
It's the responsibility of the process that forwards the HTTP request to the container to add the value(s) to that header.
This normally means putting a reverse proxy such as nginx in front of the container.
https://www.thepolyglotdeveloper.com/2017/03/nginx-reverse-proxy-containerized-docker-applications/
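A minimal sketch of the nginx side, assuming the Blazor container is published on localhost:5000 and served under example.com (both placeholders, adjust to your setup); the proxy has to set the header explicitly when it passes the request on:

server {
    listen 80;
    server_name example.com;                 # assumed public host name

    location / {
        proxy_pass       http://localhost:5000;   # assumed published port of the Blazor container
        # forward the real client address to the container
        proxy_set_header X-Forwarded-For   $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto $scheme;
        proxy_set_header Host              $host;
    }
}

On the ASP.NET Core side, enabling the forwarded-headers middleware (app.UseForwardedHeaders with ForwardedHeaders.XForwardedFor) additionally makes HttpContext.Connection.RemoteIpAddress report the forwarded client address rather than the proxy's.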

Related

How to use cookies between two docker containers (docker compose)

[Image: graph of the network structure]
I am following the flow from these Ben Awad videos: https://www.youtube.com/watch?v=iD49_NIQ-R4 https://www.youtube.com/watch?v=25GS0MLT8JU.
The general pattern is the access token in memory and the refresh token in an HttpOnly cookie. This seems pretty secure and dev friendly.
However, since both my Node frontend and my API backend are dockerized, during SSR I want to use the local connection to the backend, not go through DNS. By default this is a bridge network, which comes with a problem: because the internal URI of the backend is http://backend, not http://localhost:8000 (or the DNS name in production), the cookie does not apply to that domain, even though it really is the same app that set the cookie.
So: what is the best solution, and how do I implement it?
Ideas for solutions:
Don't use the local connection and let the frontend container use the host network
"Rename" the local connection from http://backend to http://localhost
Somehow set two cookies, one for http://backend and one for localhost
Store the refresh token somewhere that's not a cookie
You can use Nginx to solve this problem. Since requests from your frontend to your backend include the cookie for verification, you can bind each server to a different port on the host machine, then add a CNAME record in your domain control panel so that all requests sent to (let's say) api.mydomain.com are directed to the same host as mydomain.com. Then, in your Nginx config, you can do something like this:
Nginx Config
server {
    listen 80;
    server_name mydomain.com;

    location / {
        proxy_pass http://localhost:8000/;
    }
}

server {
    listen 80;
    server_name api.mydomain.com;

    location / {
        proxy_pass http://localhost:7000/;
    }
}
Then you can use SvelteKit's externalFetch hook to change the path on the server side: when the request hits the server during SSR, instead of fetching the public URL you override it with the localhost URL, like this:
src/hooks.ts
export async function externalFetch(request) {
    if (request.url.includes('api.mydomain.com')) {
        // during SSR, rewrite the public API host to the local backend container
        const localPath = new URL(new URL(request.url).pathname, 'http://localhost:7000');
        request = new Request(localPath.href, request);
    }
    return fetch(request);
}

Do a http request from lua before haproxy routing a request

I have a Lua proxy that needs to route requests. Each request's destination is determined by the response to another HTTP request, which carries a header taken from the initial request. My understanding is that HAProxy is event-driven software, so blocking system calls are absolutely forbidden, and my code blocks because it performs an HTTP request.
I read about yielding after the request, but I don't think it will help, since the HTTP request has already started. The library used for the request is https://github.com/JakobGreen/lua-requests#simple-requests
local requests = require('requests')

core.register_fetches('http_backend', function(txn)
    local dest = txn.sf:req_fhdr('X-dest')
    local url = "http://127.0.0.1:8080/service"
    local response = requests.get(url .. "/" .. dest)
    local json = response.json()
    return json.field
end)
How do I convert my code to be non-blocking?
You should consider using HAProxy's SPOE (Stream Processing Offload Engine), which was created exactly for these blocking scenarios.
I managed to do it using Lua. My mistake was using require('requests'), which is blocking. Ideally you should never use an external Lua library with HAProxy. You have to deal with plain sockets and build the HTTP request yourself, and it is very important to use HAProxy's core.tcp() method instead of Lua sockets.

gRPC endpoint with non-root path

Maybe (hopefully) I'm missing something very simple, but I can't seem to figure this out.
I have a set of gRPC services that I would like to put behind an nghttpx proxy. For this I need to be able to configure my client with a channel on a non-root URL, e.g.
channel = grpc.insecure_channel('localhost:50051/myapp')
stub = MyAppStub(channel)
This wasn't working through the proxy (it just hung), so I tested starting a server directly on the sub-path.
server = grpc.server(executor)
service_pb2.add_MyAppServicer_to_server(
    MyAppService(), server)
server.add_insecure_port('{}:{}/myapp'.format(hostname, port))
server.start()
I get the following
E1103 21:00:13.880474000 140735277326336 server_chttp2.c:159]
{"created":"#1478203213.880457000","description":"OS Error",
"errno":8,"file":"src/core/lib/iomgr/resolve_address_posix.c",
"file_line":115,"os_error":"nodename nor servname provided, or not known",
"syscall":"getaddrinfo","target_address":"[::]:50051/myapp"}
So the question is - is it possible to create gRPC channels on non-root urls?
As confirmed here, this is not possible. I will route traffic via subdomains in nghttpx.
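For reference, the per-subdomain routing idea looks roughly like this expressed as nginx config (a sketch only, not nghttpx syntax; the host names and ports are placeholders, and nginx needs 1.13.10+ for grpc_pass):

# One subdomain per gRPC service instead of a path prefix
server {
    listen 80 http2;                       # gRPC requires HTTP/2
    server_name myapp.mydomain.com;        # assumed host name

    location / {
        grpc_pass grpc://localhost:50051;  # assumed backend port
    }
}

server {
    listen 80 http2;
    server_name otherapp.mydomain.com;

    location / {
        grpc_pass grpc://localhost:50052;
    }
}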

Getting the IP address of the current server

I used symfony 1.4 to create my application.
I'd like to get the IP address of the current server to put it in a SOAP request.
So, how can I get the IP address of the current server?
For most situations, $_SERVER['SERVER_ADDR'] will work. If that doesn't, you can try $ip = gethostbyname(gethostname());
If you have access to the $request object and it is a sfWebRequest (typical request from a browser) you can use:
$request->getPathInfoArray()['SERVER_ADDR']
Premise of the following method: your domain name resolves to a single IP address.
Using PHP:
gethostbyname($_SERVER['SERVER_NAME'])
$_SERVER['SERVER_NAME'] will generally return your domain name (the server_name / ServerName configured in Nginx / Apache), which you then pass to gethostbyname().
As for $_SERVER['SERVER_ADDR'], it often returns a LAN IP address (in my case a single cloud server with one domain name and no reverse proxy).
As for gethostname(): in my tests it returns the server's host name (not the domain name you use), and passing it to gethostbyname() returns a LAN IP.
You can also use https://checkip.amazonaws.com/ to get the current public IP.

Disable web access via direct IP address on AWS OpsWorks Nginx/Unicorn server

I have a Rails app running on an AWS OpsWorks Nginx/Unicorn Rails Layer. I want my app to only process requests to api.mydomain.com and have my web server directly return a 404 if any request is made using the server's IP address.
I've implemented a custom cookbook that overrides unicorn/templates/default/nginx_unicorn_web_app.erb (from the opsworks-cookbooks repo: https://github.com/aws/opsworks-cookbooks). I copied the template file that exists in this repository and added a new server block at the top of the template:
server {
    listen 80;
    server_name <%= @instance[:ip] %>;
    return 404;
}
I stopped and started my server to ensure that the customized template file gets used, but when I issue a request using the server's IP address it still gets routed to my Rails app.
Is <%= @instance[:ip] %> not the correct value? Is there a way to log from within this template file so that I can more easily debug what is going wrong? I tried Chef::Log.info, but my message didn't seem to get logged.
Thanks!
Edit: For anyone else having this issue: the answer below about setting up a default server block fixed one of my issues. My other issue was that my cookbook updates were not making their way to my instance at all, and I needed to manually refresh the cookbook cache: http://docs.aws.amazon.com/opsworks/latest/userguide/workingcookbook-installingcustom-enable-update.html
EC2 instances have a private (typically RFC 1918) IP address; the Internet Gateway translates traffic from the public address to that private address. If that private address is what <%= @instance[:ip] %> returns, then obviously this configuration isn't going to do what you want.
Even if not, this isn't the correct approach.
Instead, you should define the default behavior of Nginx -- which is the first server block -- to throw the error, and later in the config, declare a server block with the api DNS hostname and the behavior you want for normal operation.
See Why is nginx responding to any domain name?.
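A minimal sketch of that approach (the api host name and the existing proxy setup are placeholders):

# Catch-all: any request that doesn't match a named server gets a 404
server {
    listen 80 default_server;
    server_name _;
    return 404;
}

# Normal operation for the API host name
server {
    listen 80;
    server_name api.mydomain.com;
    # ... existing proxy/upstream configuration for the Rails app ...
}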
Try adding a location block around the return statement; "location /" refers to the root path:
server {
    listen 80;
    server_name <%= @instance[:ip] %>;

    location / {
        return 404;
    }
}
