How can I configure uWSGI to pass in the request path unmodified as PATH_INFO? I.e. if there is a request https://example.com/foo/../%5Fbar?x=y, I want PATH_INFO to be literally /foo/../%5Fbar, and not /_bar.
The uWSGI documentation says uWSGI is able to rewrite request variables in a lot of advanced ways, but I am unable to find any way to set individual request variables, at least not without modifying the uWSGI source code.
The reason I want this is that I have a frontend application which takes user input and then sends a request to http://backend.app/get/USER_INPUT. Trouble is, there is a uWSGI in between, and when the user input is ../admin/delete-everything, the request goes to http://backend.app/admin/delete-everything!
(This uWSGI change will not be the only fix; the frontend app should certainly validate user input, and the backend app should not offer /admin to the frontend app in the first place. But as a measure of defense in depth, I'd like my requests to pass through uWSGI unmodified.)
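To make the traversal concrete, here is a minimal illustration (Python's posixpath is used purely for demonstration; backend.app and the /get/ prefix are the placeholder names from above) of how dot-segment normalization collapses the path:

# Illustration only: joining unchecked user input onto the /get/ prefix and then
# normalizing dot-segments lets "../" escape the prefix entirely.
import posixpath

user_input = "../admin/delete-everything"
print(posixpath.normpath("/get/" + user_input))  # -> /admin/delete-everything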
I am running bare uWSGI without nginx, i.e. uwsgi --http 0.0.0.0:8000 --wsgi-file myapp/wsgi.py --master --processes 8 --threads 2.
For what it's worth, the backend app that looks into PATH_INFO is Django.
My previous answer holds true for clients which do URL normalization at the source. This answer is applicable when you can actually get the correct request through to the server.
The wsgi.py is run by uWSGI, and the application object is invoked as the WSGI callable. In Django's case this is WSGIHandler, which has the code below:
class WSGIHandler(base.BaseHandler):
    request_class = WSGIRequest

    def __init__(self, *args, **kwargs):
        super().__init__(*args, **kwargs)
        self.load_middleware()

    def __call__(self, environ, start_response):
        set_script_prefix(get_script_name(environ))
        signals.request_started.send(sender=self.__class__, environ=environ)
        print(environ)
        request = self.request_class(environ)
        response = self.get_response(request)

        response._handler_class = self.__class__

        status = '%d %s' % (response.status_code, response.reason_phrase)
        response_headers = [
            *response.items(),
            *(('Set-Cookie', c.output(header='')) for c in response.cookies.values()),
        ]
        start_response(status, response_headers)
        if getattr(response, 'file_to_stream', None) is not None and environ.get('wsgi.file_wrapper'):
            response = environ['wsgi.file_wrapper'](response.file_to_stream)
        return response
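(As a side note, one way to see exactly what uWSGI hands to the application before Django touches it is a bare WSGI callable; the sketch below assumes REQUEST_URI is present in the environ, which uWSGI provides, and can be run with the same --wsgi-file option from the question.)

# Debugging sketch (not part of the final solution): dump the relevant WSGI
# environ keys so you can compare the decoded PATH_INFO with the raw REQUEST_URI.
def application(environ, start_response):
    body = "PATH_INFO=%r\nREQUEST_URI=%r\n" % (
        environ.get('PATH_INFO'),    # the decoded path the framework will route on
        environ.get('REQUEST_URI'),  # the raw request target, as provided by uWSGI
    )
    start_response('200 OK', [('Content-Type', 'text/plain')])
    return [body.encode('utf-8')]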
I created a couple of sample views to test this:
from django.http import HttpResponse


def index(request, **kwargs):
    return HttpResponse("Hello, world. You're at the polls index. " + request.environ['PATH_INFO'])


def index2(request, **kwargs):
    return HttpResponse("Hello, world. You're at the polls index2. " + request.environ['PATH_INFO'])
and registered them with the URL patterns below:
from django.urls import include, path
from polls.views import index2, index

urlpatterns = [
    path('polls2/', index2, name='index2'),
    path('polls2/<path:resource>', index2, name='index2'),
    path('polls/', index, name='index'),
    path('polls/<path:resource>', index, name='index'),
]
So what you need is to override this class. Below is an example:
import django
from django.core.handlers.wsgi import WSGIHandler


class MyWSGIHandler(WSGIHandler):
    def get_response(self, request):
        request.environ['ORIGINAL_PATH_INFO'] = request.environ['PATH_INFO']
        request.environ['PATH_INFO'] = request.environ['REQUEST_URI']
        return super(MyWSGIHandler, self).get_response(request)


def get_wsgi_application():
    """
    The public interface to Django's WSGI support. Should return a WSGI
    callable.

    Allows us to avoid making django.core.handlers.WSGIHandler public API, in
    case the internal WSGI implementation changes or moves in the future.
    """
    django.setup()
    return MyWSGIHandler()


application = get_wsgi_application()
After this, you can see the results below:
$ curl --path-as-is "http://127.0.0.1:8000/polls/"
Hello, world. You're at the polls index. /polls/
$ curl --path-as-is "http://127.0.0.1:8000/polls2/"
Hello, world. You're at the polls index2. /polls2/
$ curl "http://127.0.0.1:8000/polls2/../polls/"
Hello, world. You're at the polls index. /polls/
$ curl --path-as-is "http://127.0.0.1:8000/polls2/../polls/"
Hello, world. You're at the polls index. /polls2/../polls/%
As you can see, the change to PATH_INFO doesn't change which view is picked: /polls2/../polls/ still resolves to the index function.
After digging a bit more, I realised the request object also has path and path_info attributes, and the view is resolved using path_info.
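One way to confirm which of these values URL resolution actually follows is a small function-based middleware that logs all three side by side (a debugging sketch, not part of the final solution; register its dotted path in MIDDLEWARE):

# Debugging sketch: print the raw environ value next to the request attributes
# for every request, so you can see which one routing agreed with.
def path_debug_middleware(get_response):
    def middleware(request):
        print("environ PATH_INFO :", request.environ.get('PATH_INFO'))
        print("request.path      :", request.path)
        print("request.path_info :", request.path_info)
        return get_response(request)
    return middleware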
So we update our handler like below:
class MyWSGIHandler(WSGIHandler):
    def get_response(self, request):
        request.environ['ORIGINAL_PATH_INFO'] = request.environ['PATH_INFO']
        request.environ['PATH_INFO'] = request.environ.get('REQUEST_URI', request.environ['ORIGINAL_PATH_INFO'])
        request.path = request.environ['PATH_INFO']
        request.path_info = request.environ.get('REQUEST_URI', request.environ['PATH_INFO'])
        return super(MyWSGIHandler, self).get_response(request)
After this change, we get the desired results
$ curl --path-as-is "http://127.0.0.1:8000/polls2/../polls/"
Hello, world. You're at the polls index2. /polls2/../polls/
So your problem mostly has nothing to do with uWSGI or Django as such. To demonstrate the issue, I created a simple Flask app with a catch-all handler:
from flask import Flask

app = Flask(__name__)


@app.route('/', defaults={'path': ''})
@app.route('/<path:path>')
def catch_all(path):
    return 'You want path: %s' % path


if __name__ == '__main__':
    app.run()
Now when you run this and make a curl request
$ curl -v http://127.0.0.1:5000/tarun/../lalwani
* Rebuilt URL to: http://127.0.0.1:5000/lalwani
* Trying 127.0.0.1...
* TCP_NODELAY set
* Connected to 127.0.0.1 (127.0.0.1) port 5000 (#0)
> GET /lalwani HTTP/1.1
> Host: 127.0.0.1:5000
> User-Agent: curl/7.54.0
> Accept: */*
>
* HTTP 1.0, assume close after body
< HTTP/1.0 200 OK
< Content-Type: text/html; charset=utf-8
< Content-Length: 22
< Server: Werkzeug/0.15.2 Python/3.7.3
< Date: Fri, 26 Jul 2019 07:45:16 GMT
<
* Closing connection 0
You want path: lalwani%
As you can see, the server never had a chance to even know what we requested. Now let's do it again and ask curl not to tamper with the URL:
$ curl -v --path-as-is http://127.0.0.1:5000/tarun/../lalwani
* Trying 127.0.0.1...
* TCP_NODELAY set
* Connected to 127.0.0.1 (127.0.0.1) port 5000 (#0)
> GET /tarun/../lalwani HTTP/1.1
> Host: 127.0.0.1:5000
> User-Agent: curl/7.54.0
> Accept: */*
>
* HTTP 1.0, assume close after body
< HTTP/1.0 200 OK
< Content-Type: text/html; charset=utf-8
< Content-Length: 31
< Server: Werkzeug/0.15.2 Python/3.7.3
< Date: Fri, 26 Jul 2019 07:48:17 GMT
<
* Closing connection 0
You want path: tarun/../lalwani%
Now you can see that my app did receive the actual path. Next, let's see the same case in a browser, with the app not even running.
As you can see, even though my service is not even running, the browser itself rewrote the call to /lalwani instead of /tarun/../lalwani. So there is nothing that could have been done at your end to correct the issue, unless you use a client which supports disabling URL normalization at the source.
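(If your client happens to be Python rather than curl, the low-level http.client module sends the request target verbatim, much like curl --path-as-is; a minimal sketch against the Flask dev server above:)

# Sends the path exactly as written, without client-side normalization,
# so the server sees /tarun/../lalwani just like with curl --path-as-is.
import http.client

conn = http.client.HTTPConnection("127.0.0.1", 5000)
conn.request("GET", "/tarun/../lalwani")
resp = conn.getresponse()
print(resp.status, resp.read().decode())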
Related
My Jenkins 2.263.1 (LTS) is deployed through Tomcat, and I have installed the Prometheus metrics plugin 2.0.8 and restarted the service.
My Jenkins base URL is http://jenkins-server:8080/jenkins
But my Prometheus endpoint http://jenkins-server:8080/jenkins/prometheus is not showing any metrics data.
I have added the following to my prometheus.yml:
- job_name: 'jenkins'
  metrics_path: '/jenkins/prometheus'
  scheme: http
  static_configs:
    - targets: ['jenkins-server:8080']
Currently LDAP authentication and Project-based Matrix Authorization are configured. I have also tried a domain credential password and a token in my prometheus.yml, but it still doesn't show the plugin-generated data at my endpoint; it just shows a blank page in my browsers (IE and Chrome).
basic_auth:
  username: domain-user-id
  password: 98qw37asdkdsjfeiq1dedsewe
Curl response
$ curl -v jenkins-server:8080/jenkins/prometheus
* Trying 206.25.26.27...
* TCP_NODELAY set
* Connected to jenkins-server (206.25.26.27) port 8080 (#0)
> GET /jenkins/prometheus HTTP/1.1
> Host: jenkins-server:8080
> User-Agent: curl/7.58.0
> Accept: */*
>
< HTTP/1.1 302
< Cache-Control: private
< Expires: Thu, 01 Jan 1970 00:00:00 GMT
< X-Content-Type-Options: nosniff
< Location: /jenkins/prometheus/
< Content-Length: 0
< Date: Wed, 17 Feb 2021 11:42:00 GMT
<
* Connection #0 to host jenkins-server left intact
$ curl -X GET jenkins-server:8080/jenkins/prometheus/
$ curl -X GET http://jenkins-server:8080/jenkins/prometheus/
Both commands above return an empty response. Please share some pointers to resolve this issue. Thanks in advance.
@poshak, I generated an access key and tried it in my browser with https://jenkins_ipaddres:portnumber/jenkins/metrics/accesskey, and I can now view the metrics output.
Is this data enough for Prometheus?
Try generating Access Keys in the Metrics section and accessing the URL https://jenkins_ipaddres:portnumber/metrics/accesskey; you should then be able to view the metrics.
Path to generate the Access Keys:
Jenkins > Manage Jenkins > Configure System > Metrics >> Add >> Generate >> Save
Thanks
It was a Jenkins Prometheus plugin issue. After upgrading it to 2.0.9 the issue was solved.
I am using the authentication code mode of Huawei Account Kit to log users in to my app. To check the app server to account server behaviour, I use the cURL command shown below to obtain the access token from the authorization code. But the following command returns an error.
curl -v -H "Content-Type:application/x-www-form-urlencoded" -d @body.txt -X POST https://oauth-login.cloud.huawei.com/oauth2/v3/token
The body.txt file contains the required information for the request:
grant_type=authorization_code&
code=DQB6e3x9zFqHIfkHR2ctp7htDs5tG5p6jXTkTCeoAAULtuS69PntuuD9pwqHrdXyvrlezuRc/aq+zuDU7OnQdRpImnvZcEX+RIOijYMXYu1j+zxpQ+W/J50Z7pY1qhyxZtavqkELY+6o2jSifaiIxC/MJc7KgqKV3jGn9kUIEZovSnM&
client_id=my_id&
client_secret=my_secrete&
redirect_uri=hms://redirect_uri
The command returns:
> POST /oauth2/v3/token HTTP/1.1
> Host: oauth-login.cloud.huawei.com
> User-Agent: curl/7.64.0
> Accept: */*
> Content-Type:application/x-www-form-urlencoded
> Content-Length: 430
>
* upload completely sent off: 430 out of 430 bytes
< HTTP/1.1 400 Bad Request
< Date: Mon, 23 Nov 2020 03:38:21 GMT
< Content-Type: application/json
< Content-Length: 67
< Connection: keep-alive
< Cache-Control: no-store
< Pragma: no-cache
< Server: elb
<
* Connection #0 to host oauth-login.cloud.huawei.com left intact
{"sub_error":20152,"error_description":"invalid code","error":1101}
What should I do to get this API call working using cURL as expected?
The authentication code must be URL-encoded before it is sent. The command in the question used the code without URL-encoding its non-letter characters. Please use the same command with the encoded authorization code as the value of the "code" parameter to perform the request that acquires the access token.
Encoding can also be done inline, if desired:
curl --data-urlencode "para1=value1"
Please refer to: Link, or use an online tool such as: Link
Using other tools to acquire the access token is possible as long as the parameters are properly percent-encoded (%xx format).
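If you are scripting the token request rather than using curl, a library that form-encodes the body for you avoids the problem entirely. A minimal Python sketch (client_id, client_secret and the truncated code are the placeholders from the question, not real values):

# requests form-encodes each value, so "/" and "+" in the authorization code
# are sent as %2F and %2B automatically.
import requests

resp = requests.post(
    "https://oauth-login.cloud.huawei.com/oauth2/v3/token",
    data={
        "grant_type": "authorization_code",
        "code": "DQB6e3x9...IEZovSnM",   # raw authorization code (truncated placeholder)
        "client_id": "my_id",
        "client_secret": "my_secrete",
        "redirect_uri": "hms://redirect_uri",
    },
)
print(resp.status_code, resp.json())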
According to the error information {"sub_error":20152,"error_description":"invalid code","error":1101}, the problem is caused by an incorrect code parameter.
It is recommended that you check whether the value of code in the request is the same as the authorization code obtained by the mobile app.
For details, see the docs.
I am constantly getting Failed to open TCP connection to :80 (Cannot assign requested address - connect(2) for nil port 80) (Errno::EADDRNOTAVAIL) while using the Ruby Faraday gem. I don't have a lot of experience with Ruby on Rails.
I have a Dockerized Ruby on Rails service running on Elastic Beanstalk that uses Puma with SSL: CMD ["bundle", "exec", "puma", "-C", "config/puma.rb"]. I have a network load balancer configured with it to forward 443 -> 8443 (I've experimented with both self-signed certs and real wildcard certs).
ssl_bind '0.0.0.0', '8443', {
  key: '/var/app/ssl/something.key',
  cert: '/var/app/ssl/something.crt'
}
This configuration works as expected
Puma starting in single mode...
* Version 3.12.0 (ruby 2.5.1-p57), codename: Llamas in Pajamas
* Min threads: 5, max threads: 5
* Environment: staging
* Listening on tcp://0.0.0.0:3000
* Listening on ssl://0.0.0.0:8443?cert=/var/app/ssl/something.crt&key=/var/app/ssl/something.key&verify_mode=none
and I can get the healthz status page with both types of certs, using httpie (and --verify=no for the self-signed one).
$ http https://backend.something.com/healthz
HTTP/1.1 200 OK
Cache-Control: max-age=0, private, must-revalidate
Content-Type: application/json; charset=utf-8
{
    "name": "my-backend-service",
    "version": "0.0.1"
}
I have another Ruby on Rails backend service that makes API requests to this service using Faraday. I've removed some of the request/response code from my actual code.
def connection(baseUrl, options = {})
  conn = Faraday.new(url: baseUrl) do |c|
    # dont really know if this is needed or not
    # http.use_ssl? is always false
    c.adapter :net_http do |http|
      http.verify_mode = OpenSSL::SSL::VERIFY_NONE # if http.use_ssl?
    end
  end
I don't know if this http.verify_mode is actually working. I can't really find that method anywhere on ruby-doc.org.
If you try to make a request it will just end up failing.
conn = connection('https://backend.something.com')
response = conn.post '/foo', params[:foo].to_json
The logs show everything from the start of the request through the parameters, and then the http.rb:939 error. I realize the parameters aren't valid here, but they aren't my problem.
Started POST "/foo"
Processing by FooController#create as */*
Parameters: {"paramter"=>"something", "paramter"=>"something", "paramter"=>"something"}
ERROR -- : /usr/local/lib/ruby/2.5.0/net/http.rb:939:in `rescue in block in connect': Failed to open TCP connection to :80 (Cannot assign requested address - connect(2) for nil port 80) (Errno::EADDRNOTAVAIL)
If I make the same request from httpie or curl to this service I get the expected results over both http/https.
$ http POST https://backend.something.com parameter="something" parameter="something" parameter="something"
HTTP/1.1 201 Created
Cache-Control: max-age=0, private, must-revalidate
Content-Type: application/json; charset=utf-8
If you inspect the conn object, it still seems to be sitting on the default value of @url_prefix = http:/. Found in the docs above, but I don't know if I'm looking at the correct thing in the correct Ruby way. I imagined that Faraday.new(url: baseUrl) would parse the correct scheme, which is https.
#<Faraday::Connection:0x0000565351701430 @parallel_manager=nil, @headers={"User-Agent"=>"Faraday v0.15.4"}, @params={}, @options=#<Faraday::RequestOptions (empty)>, @ssl=#<Faraday::SSLOptions (empty)>, @default_parallel_manager=nil, @builder=#<Faraday::RackBuilder:0x0000565351700f58 @handlers=[FaradayMiddleware::EncodeJson, FaradayMiddleware::ParseJson, Faraday::Adapter::NetHttp]>, @url_prefix=#<URI::HTTP http:/>, @manual_proxy=false, @proxy=nil, @temp_proxy=nil>
It seems that the address you are trying to connect to is invalid. From what I can see in the error:
ERROR -- : /usr/local/lib/ruby/2.5.0/net/http.rb:939:in `rescue in block in connect': Failed to open TCP connection to :80 (Cannot assign requested address - connect(2) for nil port 80) (Errno::EADDRNOTAVAIL)
it is nil. Ensure that the host is properly set.
I have built some login token auth APIs using Ruby on Rails. It works well locally; I have a user in both the local and Heroku databases, and if I do this:
curl -v -H "Content-Type:application/json" -X POST -d '{"session":{"password":"12345678","email":"example@zapserver.com"}}' http://api.zapserver.dev/sessions/
I can get the correct JSON response from the server.
But, when I do the same call to Heroku, which would be something like this:
curl -k -v -H "Content-Type:application/json" -X POST -d '{"session":{"password":"12345678","email":"example@zapserver.com"}}' https://api.appname.herokuapp.com/sessions/
I got a 404 Not Found error.
I have run rails db:migrate and I still get the same error.
Any ideas?
EDIT
I get literally nothing from the Heroku logs; I used the heroku logs --tail command and nothing happened.
Here's the error message I got from curl:
* Trying 50.19.245.201...
* TCP_NODELAY set
* Connected to api.zapserver.heroku.com (50.19.245.201) port 443 (#0)
* TLS 1.2 connection using TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256
* Server certificate: *.herokuapp.com
* Server certificate: DigiCert SHA2 High Assurance Server CA
* Server certificate: DigiCert High Assurance EV Root CA
> POST /sessions/ HTTP/1.1
> Host: api.zapserver.heroku.com
> User-Agent: curl/7.51.0
> Accept: */*
> Content-Type:application/json
> Content-Length: 67
>
* upload completely sent off: 67 out of 67 bytes
< HTTP/1.1 404 Not Found
< Connection: keep-alive
< Server: Cowboy
< Date: Sun, 12 Feb 2017 14:30:09 GMT
< Content-Length: 494
< Content-Type: text/html; charset=utf-8
< Cache-Control: no-cache, no-store
<
<!DOCTYPE html>
<html>
<head>
<meta name="viewport" content="width=device-width, initial-scale=1">
<meta charset="utf-8">
<title>No such app</title>
<style media="screen">
html,body,iframe {
margin: 0;
padding: 0;
}
html,body {
height: 100%;
overflow: hidden;
}
iframe {
width: 100%;
height: 100%;
border: 0;
}
</style>
</head>
<body>
<iframe src="//www.herokucdn.com/error-pages/no-such-app.html"></iframe>
</body>
* Curl_http_done: called premature == 0
* Connection #0 to host api.zapserver.heroku.com left intact
So the problem is that Heroku wants you to pay for the domain if you want to use something like api.example.com, so I changed the routing a bit from api.example.com to xxx.com/api in routes.rb using:
namespace :api, defaults: { format: :json }, path: '/api' do
instead of the code in my old routes.rb:
namespace :api, defaults: { format: :json }, constraints: { subdomain: 'api' }, path: '/' do
And now if I do the Curl like this:
curl -k -v -H "Content-Type:application/json" -X POST -d '{"session":{"password":"12345678","email":"example@zapserver.com"}}' https://example.com/api/sessions/
I can get the correct JSON back from the server.
It's the way domain routing works. When you own example.com, all requests to any address that ends with this domain (e.g. mail.example.com, www.example.com, etc.) are routed to your configured DNS server. This server converts the address to an IP address based on the configuration of the DNS records.
The root of your domain, in this case herokuapp.com, is owned by Heroku, so when you create an app, they can create a DNS entry for that app and point it internally to your instance. There would be no mechanism by which application code running in a Heroku app could create additional subdomains without being given some kind of API access to their internal networking configuration, which no company I can think of would ever allow. So essentially, this could never be a valid address: https://api.appname.herokuapp.com
If you create a custom domain, you can certainly use any address you like as long as it ends with your domain, but you'll still have to configure those names on your new DNS host as well.
So I would say that it has nothing to do with Heroku wanting to charge more, it's simply the way DNS works.
A 404 error means that the page is not found, and hence a routing error? Can you post your Heroku logs for more detail? Thanks.
Also, as mentioned below, rails db:migrate does not affect the Heroku DB; you would want to run heroku run rake db:migrate, and other rake commands the same way.
Your code will not be able to decode the user properly. I have a solid solution for this.
Go to application_controller.rb
and in your authenticate method replace the code with this:
def authenticate_api_request
  header = request.headers['Authorization']
  header = header.split(' ').last if header
  begin
    # @decoded = JsonWebToken.decode(header)
    key = Rails.application.secrets.secret_key_base.to_s
    decoded = JWT.decode(header, key)[0]
    HashWithIndifferentAccess.new decoded
    @current_user = User.find(decoded.values[0])
  rescue ActiveRecord::RecordNotFound => e
    render json: { errors: e.message }, status: :unauthorized
  rescue JWT::DecodeError => e
    render json: { errors: e.message }, status: :unauthorized
  end
end
When trying to purge neo4j (1.8.2) with the cleandb extension (for neo4j 1.8), it fails:
[path] ? curl -v -X DELETE 'http://localhost:7475/db/cleandb/12sE$lkj3%'
* About to connect() to localhost port 7475 (#0)
* Trying 127.0.0.1...
* Connected to localhost (127.0.0.1) port 7475 (#0)
> DELETE /db/cleandb/12sE$lkj3% HTTP/1.1
> User-Agent: curl/7.29.0
> Host: localhost:7475
> Accept: */*
>
< HTTP/1.1 500 Internal Server Error
< Content-Length: 0
< Server: Jetty(6.1.25)
<
* Connection #0 to host localhost left intact
Obviously, not only do I get a 500 error code, the db is also not purged.
Of course, the access URL and "secret key" of the plugin are set up as used in the curl request:
org.neo4j.server.thirdparty_jaxrs_classes=org.neo4j.server.extension.test.delete=/db/cleandb
org.neo4j.server.thirdparty.delete.key=12sE$lkj3%
I would conveniently add the cleandb tag, but I lack the 1500 reputation.
Any ideas? Thanks in advance!
EDIT
(The reason I use cleandb is to set up unittests in neo4django).
/EDIT
Hm, I have the cleandb extension working locally against 1.8.2 and 1.9. For example, you can run
from neo4django.db import connection
from pdb import set_trace; set_trace()
connection.cleandb()
and trace the cleandb Python call, and it gets a 200 and accompanying response body,
{\n "node-indexes" : [ ],\n "nodes" : 4,\n "relationship-indexes" : [ ],\n "relationships" : 0,\n "maxNodesToDelete" : 1000\n}
I'm not sure what the difference between curl and the Python call might be; any chance you could try the above in a module, run it, and see what happens?
EDIT:
The cleandb extension is unmanaged, so you can't (IIRC?) set the URL to '/db/cleandb'; it needs to be on its own root. I use '/cleandb'. Let me know if that helps!
EDIT:
Aw, disregard that, '/db/' URLs seem to work fine. Maybe you could use the 'install_local_neo4j.bash' script (https://github.com/scholrly/neo4django/blob/master/install_local_neo4j.bash) to install a copy of Neo4j and set it up the same way, if that works for you? And if so, maybe we can see how the setups differ...
It only works with Neo4j versions up to 1.7, I think.
I didn't update it any more because you can do that cleanup with Cypher now; see: http://neo4j.org/resources/cypher
start n=node(*)
match n-[r?]->()
where id(n) <> 0
delete n,r
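If you want to run that cleanup from a script (e.g. in the neo4django unit-test setup mentioned in the question), here is a hedged sketch that posts it to the legacy REST Cypher endpoint of a 1.8/1.9-era server; the /db/data/cypher path, host and port 7475 are assumptions based on the question's setup, not something the answer above specifies:

# Posts the Cypher cleanup query to the legacy /db/data/cypher REST endpoint.
# Adjust host/port (the question's server listens on 7475) and add auth if needed.
import requests

query = """
start n=node(*)
match n-[r?]->()
where id(n) <> 0
delete n, r
"""

resp = requests.post(
    "http://localhost:7475/db/data/cypher",
    json={"query": query, "params": {}},
)
print(resp.status_code, resp.text)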