Rasa X deployment on shared server - docker

I'm trying to deploy Rasa X on my shared server. I followed the Docker Compose Installation documentation, and tried both the install script and manual deployment, but it's not working.
Since it is a shared server, ports 80 and 443 are already in use, so I changed the rasa/nginx container ports to 8080 and 8443 in the docker-compose.yml file.
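For reference, the port change looks roughly like this in docker-compose.yml (a sketch; the service name and internal ports follow the Rasa X compose file, but details may differ between versions):

```yaml
# Sketch of the nginx service port remapping in docker-compose.yml
# (service name and internal ports assumed from the Rasa X compose file)
services:
  nginx:
    ports:
      - "8080:8080"   # host port 8080 -> container HTTP port
      - "8443:8443"   # host port 8443 -> container HTTPS port
```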
When I open http://<server_ip>:8080, it gets redirected to http://<server_ip>/api/health and finally shows "unable to connect".
And when I open http://<server_ip>:8080/conversations, it shows a blank page with the title "Rasa X".
Edit:
Still not able to figure out the issue, but now http://<server_ip>:8080/ returns 502 Bad Gateway.
From docker-compose logs:
[error] 17#17: *40 connect() failed (111: Connection refused) while connecting to upstream, client: 43.239.112.255, server: , request: "GET / HTTP/1.1", upstream: "http://192.168.64.6:5002/", host: "http://<server_ip>:8080"
Any idea what's causing it?

It seems that Rasa X 0.35.0 is not compatible with Rasa Open Source 2.2.4 on the server.
When I changed versions, from
RASA_X_VERSION=0.35.0
RASA_VERSION=2.2.4
RASA_X_DEMO_VERSION=0.35.0
to
RASA_X_VERSION=0.34.0
RASA_VERSION=2.1.2
RASA_X_DEMO_VERSION=0.34.0
then it works.

Can you also define the ports for the Duckling server in the config.yml file?


Puma stuck, can not connect from nginx

I'm working with nginx and a single Puma instance. It was working normally, but suddenly nginx could not connect to Puma, and I got errors from nginx like:
*11193882 epoll_wait() reported that client prematurely closed connection, so upstream connection is closed too while sending request to upstream, client: xxx.xx.xx.xx, server: _, request: "GET /ping HTTP/1.1", upstream: "http://unix:////tmp/sockets/puma.sock:/ping", host: "host_name"
I don't know why Puma gets stuck and nginx can't connect to it, with no errors in puma_error.log:
Checked CPU: no problem
Checked memory: no problem
No errors in puma_error.log
Does anyone have an idea about this problem? How do I find out where the problem is coming from?

GitLab can't reach PlantUml in docker container

So I have a GitLab EE server (Omnibus) installed and set up on Ubuntu 20.04.
Next, following the official documentation on GitLab PlantUML integration, I started PlantUML in a Docker container with the following command:
docker run -d --name plantuml -p 8084:8080 plantuml/plantuml-server:tomcat
Next, I also configured the /etc/gitlab/gitlab.rb file and added the following line for the redirect, as my GitLab server uses SSL:
nginx['custom_gitlab_server_config'] = "location /-/plantuml/ { \n proxy_cache off; \n proxy_pass http://plantuml:8080/; \n}\n"
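For readability, that one-line value corresponds to the following (same setting, written with a heredoc; gitlab.rb is Ruby, so this is equivalent):

```ruby
# /etc/gitlab/gitlab.rb - the same custom nginx config, written out
# for readability with a Ruby squiggly heredoc
nginx['custom_gitlab_server_config'] = <<~NGINX
  location /-/plantuml/ {
    proxy_cache off;
    proxy_pass http://plantuml:8080/;
  }
NGINX
```

Note that the hostname `plantuml` in proxy_pass only resolves if GitLab's nginx can reach the container by that name.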
In the GitLab admin panel, under Settings -> General, when I expand PlantUML, I set the value of PlantUML URL in two ways:
1st approach:
https://HOSTNAME:8084/-/plantuml
Then, when trying to reach it in the browser at https://HOSTNAME:8084/-/plantuml, I get:
This site can’t provide a secure connection.
HOSTNAME sent an invalid response.
ERR_SSL_PROTOCOL_ERROR
2nd approach:
I also tried a different value in Settings -> General -> PlantUML -> PlantUML URL:
https://HOSTNAME/-/plantuml
Then, when trying to reach it in the browser at https://HOSTNAME/-/plantuml, I get:
502
Whoops, GitLab is taking too much time to respond
In both cases, when I tail the logs with gitlab-ctl tail, I get the same errors:
[crit] *901 SSL_do_handshake() failed (SSL: error:141CF06C:SSL routines:tls_parse_ctos_key_share:bad key share) while SSL handshaking, client: CLIENT_IP, server: 0.0.0.0:443
[error] 1123593#0: *4 connect() failed (113: No route to host) while connecting to upstream
My question is: which of the two approaches above is correct for accessing PlantUML with this configuration, and is there any configuration I am missing?
I believe the issue is that you are running PlantUML in a Docker container and then trying to reach it from GitLab (on localhost) by container name.
To check whether that is the issue, change
proxy_pass http://plantuml:8080/
to
proxy_pass http://localhost:8080/
and try the first approach again.
Your second approach seems to be missing the port in the URL.

Deploy Java jar on App Engine Flexible is failing because of health checks

During deployment, the GAE health checks fail with a connection refused error. The container exposes the same port GAE expects, 8080. After connecting to the container with SSH and running curl 127.0.0.1/liveness_check, it works; however, querying manually from the GAE instance itself results in a connection refused error.
Disabling the health checks allows the deployment to finish, but when accessing the service URL we get an nginx 502 Bad Gateway error.
It looks like nginx cannot reach the container port, or something else is wrong. I did try deploying the image on GCE, and it works there.
app.yaml is pretty standard; it uses a custom VPC.
From the GAE service logs:
[error] 33#33: *407 connect() failed (111: Connection refused) while connecting to upstream, client: 172.217.20.180, server: , request: "GET / HTTP/1.1", upstream: "http://172.17.0.1:8080/", host: "XXXXXXXXX"
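For comparison, a minimal health-check section for the flexible environment might look like this (a sketch; the paths and thresholds here are assumptions for illustration, not taken from the original app.yaml):

```yaml
# Sketch of app.yaml health checks (App Engine flexible environment);
# paths and timing values are placeholders, not from the original file
runtime: custom
env: flex

liveness_check:
  path: "/liveness_check"
  check_interval_sec: 30
  timeout_sec: 4

readiness_check:
  path: "/readiness_check"
  app_start_timeout_sec: 300
```

The app must answer these paths on port 8080 on all interfaces (0.0.0.0), not just 127.0.0.1, since the checks come from outside the container.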

NGINX and SPRINGBOOT in DOCKER container GOT 502 Bad Gateway

I deployed my Spring Boot project in a Docker container exposing port 8080, as well as an nginx server exposing port 80.
When I use
curl http://localhost:8080/heya/index
it returns normally
But when I use
curl http://localhost/heya/index
hoping to reach it through the nginx proxy, it failed. I checked the log, and it says:
*24#24: 11 connect() failed (111: Connection refused) while connecting to upstream, client: 172.17.0.1, server: , request: "GET /heya/index HTTP/1.1", upstream: "http://127.0.0.1:8080/heya/index", host: "localhost"
Here is my nginx.conf
I cannot figure it out and need help.
I finally got the answer!
I ran the nginx container and the webapp container in host network mode, and it worked.
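The fix can be sketched as follows (the container and image names here are placeholders, not from the original post):

```shell
# Run both containers on the host network so nginx can reach the app
# at 127.0.0.1:8080; image and container names are assumptions
docker run -d --name webapp --network host my-springboot-app
docker run -d --name nginx --network host \
  -v "$PWD/nginx.conf:/etc/nginx/nginx.conf:ro" nginx
```

Note that host network mode behaves as expected only on Linux hosts; the more portable alternative is a user-defined Docker network where containers reach each other by name.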
111: Connection refused) while connecting to upstream
means that nginx can't connect to the upstream server.
Your
proxy_pass http://heya;
is telling Nginx that the upstream is talking the HTTP protocol [on the default port 80] on the hostname heya. Unless you're running multiple containers in the same Compose network, it's unlikely that the hostname would be heya.
If the Java application is running on port 8080 inside the same container, talking the HTTP protocol, the correct proxy_pass would be
proxy_pass http://localhost:8080;
(since localhost in the container's view is the container itself).
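Putting that together, a minimal nginx.conf for this setup might look like the following (a sketch; the /heya path comes from the question, everything else is assumed):

```nginx
# Minimal sketch: proxy /heya/* to the Spring Boot app on port 8080;
# only valid when nginx and the app share a network namespace
# (same container, or host network mode)
events {}

http {
    server {
        listen 80;

        location /heya/ {
            proxy_pass http://localhost:8080;
            proxy_set_header Host $host;
        }
    }
}
```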

docker nginx-proxy "Bad Gateway"

I have what I think is exactly the setup prescribed in the documentation. Easy peasy, development-only, no SSL... but I'm getting "Bad Gateway".
docker exec ... cat /etc/nginx/conf.d/default.conf
... seems to correctly identify the internal IP address of the other container of interest, which means that scanning for the VIRTUAL_HOST environment variable obviously worked:
upstream my_site.local {
[...]
server 172.16.238.5:80; # CORRECT!
}
When I do docker logs app_server I see ... silence. The server isn't being contacted.
When I do docker logs nginx_proxy I see this:
failed (111: connection refused) while connecting to upstream, client 172.16.238.1 [...] upstream: "172.16.238.5:80/"
The other container specifies EXPOSE 80, so why is the connection being refused, and who is refusing it?
Well, as I said above, I realized the error of my ways and did this:
VIRTUAL_PROTO=fastcgi
VIRTUAL_ROOT=/var/www
... and within the Dockerfile of the app container I apparently did need to EXPOSE 9000. (This being the default port used by php-fpm for FastCGI purposes.)
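In docker-compose terms, the working setup can be sketched as follows (service names, image names, and paths are placeholders, not from the original post):

```yaml
# Sketch: app container advertising FastCGI to jwilder/nginx-proxy;
# service and image names are assumptions for illustration
services:
  nginx_proxy:
    image: jwilder/nginx-proxy
    ports:
      - "80:80"
    volumes:
      - /var/run/docker.sock:/tmp/docker.sock:ro
  app_server:
    image: my-php-app        # its Dockerfile should EXPOSE 9000 (php-fpm)
    environment:
      - VIRTUAL_HOST=my_site.local
      - VIRTUAL_PROTO=fastcgi
      - VIRTUAL_ROOT=/var/www
```

With VIRTUAL_PROTO=fastcgi, nginx-proxy generates a fastcgi_pass to the container instead of an HTTP proxy_pass, which is why port 9000 rather than 80 must be reachable.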
