Django graphene GraphiQL page not loading when running from Uvicorn - graphene-django

Not sure what I've set wrong, but I am not getting the GraphiQL interface when running under Uvicorn with uvicorn mysite.asgi:application:
INFO: Started server process [14872]
INFO: Waiting for application startup.
INFO: ASGI 'lifespan' protocol appears unsupported.
INFO: Application startup complete.
INFO: Uvicorn running on http://127.0.0.1:8000 (Press CTRL+C to quit)
INFO: 127.0.0.1:52463 - "GET /graphql/ HTTP/1.1" 200
Not Found: /static/graphene_django/graphiql.js
WARNING: Not Found: /static/graphene_django/graphiql.js
INFO: 127.0.0.1:52463 - "GET /static/graphene_django/graphiql.js HTTP/1.1" 404
Not Found: /static/graphene_django/graphiql.js
WARNING: Not Found: /static/graphene_django/graphiql.js
INFO: 127.0.0.1:52463 - "GET /static/graphene_django/graphiql.js HTTP/1.1" 404
but it loads fine when I run python manage.py runserver.
Here is what I have installed:
Python 3.8.2
Django==3.0.5
uvicorn==0.11.3
graphene==2.1.8
graphene-django==2.9.0
graphql-core==2.3.1
In settings.py I have:
STATIC_URL = '/static/'
STATIC_ROOT = os.path.join(BASE_DIR, 'staticfiles')
STATICFILES_DIRS = [os.path.join(BASE_DIR, "static"),]
# Graphene
GRAPHENE = {
    'SCHEMA': 'mysite.schema.schema'
}

Just set DEBUG = True in your settings.py file.
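Worth noting: DEBUG = True is enough under runserver because runserver serves static files itself, but plain Uvicorn will not. A development-only sketch of mysite/asgi.py that wraps the app so Django serves static files under ASGI, assuming a Django version that ships ASGIStaticFilesHandler (3.1+, if I recall correctly; the module path "mysite.settings" is this question's project layout):

```python
# mysite/asgi.py -- development-only convenience: serve static files from
# the ASGI app itself. In production, serve them with nginx or similar.
import os

os.environ.setdefault('DJANGO_SETTINGS_MODULE', 'mysite.settings')

from django.core.asgi import get_asgi_application
from django.contrib.staticfiles.handlers import ASGIStaticFilesHandler

# Wrap the ASGI application so requests under STATIC_URL are answered
# by the staticfiles finders instead of falling through to Django's
# URL resolver (which is what produces the 404s above).
application = ASGIStaticFilesHandler(get_asgi_application())
```

You would then run it with uvicorn mysite.asgi:application exactly as before.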

Django does not serve static files by itself outside of the development server; you have to handle that yourself in some way. You can either set up a reverse proxy such as nginx and instruct it to serve all the static files from your staticfiles directory, or, for development only, you can instruct Django to serve them by adding this to your urls.py:
from django.conf import settings
from django.conf.urls import url
from django.views.static import serve
# Put the line below into your `urlpatterns` list.
url(r'^(?P<path>.*)$', serve, {'document_root': settings.STATIC_ROOT})
Make sure to use the latter method only for development, as it can have a serious performance impact.
For both methods, you also have to collect your static files with python manage.py collectstatic; outside of the development server, Django needs all the static files gathered in one place, your STATIC_ROOT.
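For the development-only route, Django also ships a documented static() helper that builds the same catch-all pattern and returns an empty list when DEBUG is off, so it is inert in production. A minimal urls.py sketch, assuming the graphene-django view from its docs (the route names are illustrative):

```python
# urls.py -- development-only static serving via Django's static() helper.
# static() returns [] when settings.DEBUG is False.
from django.conf import settings
from django.conf.urls.static import static
from django.urls import path
from graphene_django.views import GraphQLView

urlpatterns = [
    # graphiql=True renders the GraphiQL page that needs the static assets
    path('graphql/', GraphQLView.as_view(graphiql=True)),
]

urlpatterns += static(settings.STATIC_URL, document_root=settings.STATIC_ROOT)
```

Remember this still requires the files to exist on disk, so run collectstatic first (or point document_root at wherever your copy of graphene_django's assets lives).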

Related

Docker: Go server does not respond

I was learning how to dockerize Go apps, so I created a simple REST API:
package main

import "github.com/gin-gonic/gin"

func main() {
    server := gin.Default()
    server.GET("/", func(ctx *gin.Context) {
        ctx.JSON(200, "HELLO SERVER")
    })
    server.Run("127.0.0.1:3000")
}
and here is the Dockerfile
FROM golang:1.18.3-alpine3.16
RUN mkdir /app
COPY . /app
WORKDIR /app
RUN go mod download
RUN go build -o main .
EXPOSE 3000
CMD "/app/main"
The thing is, when I build and run the container, the app appears to start normally:
[GIN-debug] [WARNING] Creating an Engine instance with the Logger and Recovery middleware already attached.
[GIN-debug] [WARNING] Running in "debug" mode. Switch to "release" mode in production.
- using env: export GIN_MODE=release
- using code: gin.SetMode(gin.ReleaseMode)
[GIN-debug] GET / --> main.main.func1 (3 handlers)
[GIN-debug] [WARNING] You trusted all proxies, this is NOT safe. We recommend you to set a value.
Please check https://pkg.go.dev/github.com/gin-gonic/gin#readme-don-t-trust-all-proxies for details.
[GIN-debug] Listening and serving HTTP on 127.0.0.1:3000
But here is the issue: when I use Postman and make a GET request to 127.0.0.1:3000/, it shows it could not get any response (i.e., the same response you get when there is no server to connect to).
When I run the app using 'go run main.go' it works fine.
It would be really great if you could help me out with this
Maybe you should listen on 0.0.0.0:3000. Inside the container, 127.0.0.1 is the container's own loopback interface, so connections forwarded from the host never reach a server bound only to it:
func main() {
    server := gin.Default()
    server.GET("/", func(ctx *gin.Context) {
        ctx.JSON(200, "HELLO SERVER")
    })
    // server.Run("127.0.0.1:3000")
    server.Run("0.0.0.0:3000")
}
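The same binding rule holds in any language, not just Go: a socket bound to 0.0.0.0 accepts connections arriving on every interface, while one bound to 127.0.0.1 only accepts loopback traffic. A small Python illustration (echo_once is a hypothetical helper written for this sketch, not part of any library):

```python
import socket
import threading

def echo_once(bind_host):
    """Serve exactly one connection on bind_host, then close the listener."""
    srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    srv.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    srv.bind((bind_host, 0))      # port 0: let the OS pick a free port
    srv.listen(1)
    port = srv.getsockname()[1]

    def handler():
        conn, _ = srv.accept()
        conn.sendall(b"HELLO SERVER")
        conn.close()
        srv.close()

    threading.Thread(target=handler, daemon=True).start()
    return port

# Bound to 0.0.0.0, the server is reachable via loopback *and* every other
# interface -- the latter is what Docker's published-port forwarding uses.
port = echo_once("0.0.0.0")
with socket.create_connection(("127.0.0.1", port)) as cli:
    print(cli.recv(64))
```

Docker maps the published host port to the container's non-loopback interface, which is exactly the path a 127.0.0.1-bound server refuses.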

gunicorn occasionally freezes until worker timeout in trivial docker web app with flask

I built a trivial docker web app with flask and gunicorn. Everything works, but occasionally when making a request, the response hangs. I see nothing in logging until the worker times out. So it seems like the worker is busy with something, times out, then a new worker picks up the request and immediately responds.
I set it up with only 1 worker. I know I can just add another worker, but there are no requests other than my manual poking; nothing else is happening. So I'm super curious what else this one worker or the master gunicorn process could be doing in the container (a heartbeat? error handling that expensive?).
My Dockerfile:
FROM python:3.6-slim
WORKDIR /app
COPY . .
RUN pip install -r requirements.txt --no-cache-dir
EXPOSE 5000
CMD ["gunicorn", "-w", "1", "-t", "30", "--worker-tmp-dir", "/dev/shm", "-b", "0.0.0.0:5000", "app:app"]
My trivial app:
import logging

import model
from flask import Flask, request, jsonify

app = Flask(__name__)

@app.errorhandler(Exception)
def handle_error(e):
    code = 500
    app.log_exception(e)
    return jsonify(message=str(e)), code

@app.route("/predict", methods=["POST", "GET"])
def predict():
    result = model.predict(None)
    return jsonify(result)

model = model.mcascorer()

if __name__ == '__main__':
    app.run(debug=True, host='0.0.0.0')
else:
    gunicorn_logger = logging.getLogger('gunicorn.error')
    app.logger.handlers = gunicorn_logger.handlers
    app.logger.setLevel(gunicorn_logger.level)
My super tiny little "model" I call predict on:
class mcascorer:
    def predict_proba(self, features):
        return 'hello'

    def predict(self, features):
        return 'hello'
It usually responds immediately, but during the timeouts the log looks like this:
[2020-05-21 18:09:28 +0000] [9] [DEBUG] Closing connection.
[2020-05-21 18:09:58 +0000] [1] [CRITICAL] WORKER TIMEOUT (pid:9)
[2020-05-21 18:09:58 +0000] [9] [INFO] Worker exiting (pid: 9)
[2020-05-21 18:09:58 +0000] [11] [INFO] Booting worker with pid: 11
[2020-05-21 18:09:58 +0000] [11] [DEBUG] GET /predict
It appears the worker is blocked on something else - but I have no idea what that would be. A heartbeat query coming in at the same time shouldn't take that long, but it hangs for many seconds - entire duration of timeout I set for the worker actually. The only other thing happening is error logging, but not sure why that would block or take so long. Even if it were writing to disk, this seems odd.
The closest issue I could find is here: Docker: Running a Flask app via Gunicorn - Worker timeouts? Poor performance?
which links to this article:
https://pythonspeed.com/articles/gunicorn-in-docker/
I followed their guide and updated the Dockerfile to put the worker heartbeat files in memory: "--worker-tmp-dir", "/dev/shm".
I did not add more workers. I know I can but I would really like to know what's going on rather than blindly throwing resources at it. Any ideas are much appreciated.

Odoo project agile jira importer addon installation error

I'm trying to add this app to my Odoo 11 addons. I'm running it in a docker container.
It has been successfully added and updated in my app list, but when I try to install it, I get this error in the Odoo GUI: Unable to install module "project_agile_jira" because an external dependency is not met: No module named Jira.
And this log in the terminal:
2020-04-16 11:10:41,747 1 INFO Nilecode odoo.addons.base.module.module: ALLOW access to module.button_immediate_install on ['project_agile_jira'] to user m.mahdi@nilecode.com #1 via 172.17.0.1
2020-04-16 11:10:41,747 1 INFO Nilecode odoo.addons.base.module.module: User #1 triggered module installation
2020-04-16 11:10:41,748 1 INFO Nilecode odoo.addons.base.module.module: ALLOW access to module.button_install on ['project_agile_jira'] to user m.mahdi@nilecode.com #1 via 172.17.0.1
2020-04-16 11:10:42,103 1 INFO Nilecode werkzeug: 172.17.0.1 - - [16/Apr/2020 11:10:42] "POST /web/dataset/call_button HTTP/1.1" 200 -
Your help would be highly appreciated.

Why is the main.dart configuration getting ignored in my Dart Aqueduct server

My main.dart file for my Aqueduct server is
import 'package:dart_server/dart_server.dart';

Future main() async {
  final app = Application<DartServerChannel>()
    ..options.configurationFilePath = "config.yaml"
    ..options.port = 3000; // changed from 8888

  final count = Platform.numberOfProcessors ~/ 2;
  await app.start(numberOfInstances: 1); // changed from count > 0 ? count : 1

  print("Application started on port: ${app.options.port}.");
  print("Use Ctrl-C (SIGINT) to stop running the application.");
}
I changed the port number and the number of instances, but when I start the server with
aqueduct serve
I still get port 8888 and two instances:
-- Aqueduct CLI Version: 3.1.0+1
-- Aqueduct project version: 3.1.0+1
-- Preparing...
-- Starting application 'dart_server/dart_server'
Channel: DartServerChannel
Config: /Users/jonathan/Documents/Programming/Tutorials/Flutter/backend/backend_app/dart_server/config.yaml
Port: 8888
[INFO] aqueduct: Server aqueduct/1 started.
[INFO] aqueduct: Server aqueduct/2 started.
Only if I explicitly start the server like this
aqueduct serve --port 3000 --isolates 1
do I get port 3000 and one instance:
-- Aqueduct CLI Version: 3.1.0+1
-- Aqueduct project version: 3.1.0+1
-- Preparing...
-- Starting application 'dart_server/dart_server'
Channel: DartServerChannel
Config: /Users/jonathan/Documents/Programming/Tutorials/Flutter/backend/backend_app/dart_server/config.yaml
Port: 3000
[INFO] aqueduct: Server aqueduct/1 started.
Why didn't changing main.dart affect it? (I saved the file after making changes.) Is there somewhere else that I need to make the update?
I can't find it in any documentation, but it seems that when you run the aqueduct serve command, the bin/main.dart file isn't executed.
The aqueduct serve command uses its own configuration from the command line; you need to specify the port with the --port option.
If you want your main.dart file to be used, you can instead run the server directly with
dart bin/main.dart
in your project folder.

Passing remote_user to lua file

I am following this tutorial, section "LDAP Authentication". The nginx configuration file and the Lua script are here and here. After running the commands
sbin/nginx -p $PWD -c conf/nginx-ldap-auth.conf
python backend-sample-app.py
python nginx-ldap-auth-daemon.py
according to the log of nginx-ldap-auth-daemon.py the login succeeds, i.e. 200 OK auth user admin. But I get a 500 Internal Server Error, and in lua.log I get:
2016/09/29 23:35:27 [error] 23987#0: *10 lua entry thread aborted: runtime error: /usr/local/openresty/nginx/authorize_es_ldap.lua:50: attempt to concatenate global 'role' (a nil value)
/usr/local/openresty/nginx/authorize_es_ldap.lua: in function </usr/local/openresty/nginx/authorize_es_ldap.lua:1> while sending to client, client: 127.0.0.1, server: , request: "GET / HTTP/1.1", host: "localhost:8881", referrer: "http://localhost:8881/"
I think the problem is that there is a gap in the tutorial: how to pass the remote_user variable to the Lua script. I am trying to add self.send_header('LDAPUser', ctx['user']) around line 204, after self.send_response(200) and before end_headers.
Could you help me please?
