So I have a GitLab EE server (Omnibus) installed and set up on Ubuntu 20.04.
Next, following the official documentation on GitLab PlantUML integration, I started PlantUML in a Docker container with the following command:
docker run -d --name plantuml -p 8084:8080 plantuml/plantuml-server:tomcat
Next, I also configured the /etc/gitlab/gitlab.rb file and added the following line for the redirection, since my GitLab server is using SSL:
nginx['custom_gitlab_server_config'] = "location /-/plantuml/ { \n proxy_cache off; \n proxy_pass http://plantuml:8080/; \n}\n"
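Expanded for readability, the location block embedded in that string is:
location /-/plantuml/ {
    proxy_cache off;
    proxy_pass http://plantuml:8080/;
}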
In the GitLab GUI, in the admin panel under Settings -> General, when I expand PlantUML, I set the value of PlantUML URL in two ways:
1st approach:
https://HOSTNAME:8084/-/plantuml
Then, when trying to reach it through the browser at this address (https://HOSTNAME:8084/-/plantuml), I get:
This site can’t provide a secure connection.
HOSTNAME sent an invalid response.
ERR_SSL_PROTOCOL_ERROR
2nd approach:
I also tried a different value in Settings -> General -> PlantUML -> PlantUML URL:
https://HOSTNAME/-/plantuml
Then, when trying to reach it through the browser at this address (https://HOSTNAME/-/plantuml), I get:
502
Whoops, GitLab is taking too much time to respond
In both cases, when I tail the logs with gitlab-ctl tail, I get the same errors:
[crit] *901 SSL_do_handshake() failed (SSL: error:141CF06C:SSL routines:tls_parse_ctos_key_share:bad key share) while SSL handshaking, client: CLIENT_IP, server: 0.0.0.0:443
[error] 1123593#0: *4 connect() failed (113: No route to host) while connecting to upstream
My question is: which of the above two ways is the correct one to access PlantUML with the above configuration, and is there any configuration I am missing?
I believe the issue is that you are running PlantUML in a Docker container and then trying to reach it from GitLab (on localhost) by container name.
In order to check whether that is the issue, please change
proxy_pass http://plantuml:8080/
to
proxy_pass http://localhost:8084/
(8084 being the host port published by -p 8084:8080) and try the first approach again.
Your second approach seems to be missing the container port in the url.
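For reference, a minimal sketch of the corrected gitlab.rb line under that assumption (the host port 8084 comes from the -p 8084:8080 mapping in the question; adjust if yours differs):
nginx['custom_gitlab_server_config'] = "location /-/plantuml/ { \n proxy_cache off; \n proxy_pass http://localhost:8084/; \n}\n"
After editing, run gitlab-ctl reconfigure so the bundled nginx picks up the change.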
I'm using WSL(v2) and have a web server running inside it.
$ kubectl get nodes
NAME STATUS ROLES AGE VERSION
kind-control-plane Ready control-plane 3d21h v1.25.0
And I can ping my "website" from within WSL [ping mysite.com] just fine
On my Windows side, I have Docker Desktop running.
Under Containers:
NAME IMAGE STATUS PORTS
kind-control-plane kindest/node:<none> Running 443,42725,80
I have also enabled all [WSL] related options under Docker Desktop -> Settings
I want to access this website in Chrome and tried:
localhost:443 = err 400
localhost:42725 = Client sent an HTTP request to an HTTPS server. With https, err 403
localhost:80 = err 404
I also tried adding the port 42725 to my firewall allowed list but nothing seems to change.
I searched and found multiple Q&As pointing me here:
https://learn.microsoft.com/en-us/windows/wsl/networking
I tried these:
netsh interface portproxy add v4tov4 listenport=80 listenaddress=0.0.0.0 connectport=80 connectaddress={hostname -I}
netsh interface portproxy add v4tov4 listenport=443 listenaddress=0.0.0.0 connectport=443 connectaddress={hostname -I}
but nothing changes.
However, when I do this
netsh interface portproxy add v4tov4 listenport=42725 listenaddress=0.0.0.0 connectport=42725 connectaddress={hostname -I}
And then try to restart:
$ docker start kind-control-plane
Error response from daemon: Ports are not available: exposing port TCP 127.0.0.1:42725 -> 0.0.0.0:0: listen tcp 127.0.0.1:42725: bind: An attempt was made to access a socket in a way forbidden by its access permissions.
Error: failed to start containers: kind-control-plane
I also tried a completely different [listenport] but it didn't work either.
I'm actually not sure what ports to put inside the listenport/connectport options. Also, should I change listenaddress=0.0.0.0 to something else?
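My current understanding (which may be wrong): listenport/listenaddress are the Windows-facing side, and connectport/connectaddress are the WSL side. So forwarding Windows port 8080 to the web server's port 80 inside WSL would look like this (172.20.0.5 is a made-up example; it should be whatever wsl hostname -I prints):
netsh interface portproxy add v4tov4 listenport=8080 listenaddress=0.0.0.0 connectport=80 connectaddress=172.20.0.5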
Any suggestions? Thanks!
Hello, I am having trouble with my Vagrant setup.
I am trying to ping a serverless API which runs on http://localhost:3000/ (it lives outside the Vagrant project).
Right now my Vagrant project runs on https://localhost:4443/.
Overall, I am trying to make a cURL request from the Vagrant project to the serverless project.
I tried to use http://localhost:3000/ in the cURL request, but I get Failed to connect to localhost port 3000: Connection refused.
I tried to use the VM IP address 10.0.2.15, with the same result.
I tried to do port forwarding in the Vagrantfile (config.vm.network :forwarded_port, guest: 3000, host: 3000) and to use the machine IP address 192.168.0.16, but I get an empty response from the server. When I try telnet 192.168.0.16 3000, I get:
Trying 192.168.0.16...
Connected to 192.168.0.16.
Escape character is '^]'.
Connection closed by foreign host.
Any idea what to try?
I had to use the VM IP address, something like:
curl -X GET http://10.0.2.2:3000
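For context, 10.0.2.2 is the address VirtualBox's default NAT network gives the guest for reaching the host, so this hits the API listening on the host's port 3000. A quick check from inside the VM (a sketch; it assumes the API is up on the host):
vagrant ssh -c "curl -v http://10.0.2.2:3000/"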
These errors may be caused by the following reasons; ensure the steps below are followed to connect the localhost with the local virtual machine (host). Here, I am connecting http://localhost:3001/ to http://abc.test. Steps to be followed:
1. We have to allow CORS; placing Access-Control-Allow-Origin: in the header of the request may not work. Install a browser extension which enables CORS requests.
2. Make sure the credentials you provide in the request are valid.
3. Make sure the Vagrant box has been provisioned. Try vagrant up --provision; this makes the localhost connect to the database of the Homestead box.
4. Try changing the content type of the header: header: { 'Content-Type': 'application/x-www-form-urlencoded; charset=UTF-8' }. This point is very important.
I'm trying to deploy Rasa on my shared server. I followed the Docker Compose Installation documentation to deploy Rasa, and tried both the scripted and the manual deployment, but it's not working.
As it is a shared server, ports 80 and 443 are already in use, so I changed the rasa/nginx container ports to 8080 and 8443 in the docker-compose.yml file (roughly as sketched below).
When I hit http://<server_ip>:8080, it gets redirected to http://<server_ip>/api/health and finally shows "unable to connect".
And when I hit the url http://<server_ip>:8080/conversations, it shows a blank page with the title "Rasa X".
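A sketch of that change (this assumes the stock Rasa X docker-compose.yml maps the nginx service's container ports 8080/8443 to host ports 80/443):
nginx:
  ports:
    - "8080:8080"
    - "8443:8443"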
Edit:
Still not able to figure out what the issue was, but now the url http://<server_ip>:8080/ returns 502 Bad Gateway.
From the docker-compose logs:
[error] 17#17: *40 connect() failed (111: Connection refused) while connecting to upstream, client: 43.239.112.255, server: , request: "GET / HTTP/1.1", upstream: "http://192.168.64.6:5002/", host: "http://<server_ip>:8080"
Any idea what causing it?
It seems that Rasa X 0.35.0 is not compatible with Rasa Open Source 2.2.4 on the server.
When I changed versions, from
RASA_X_VERSION=0.35.0
RASA_VERSION=2.2.4
RASA_X_DEMO_VERSION=0.35.0
to
RASA_X_VERSION=0.34.0
RASA_VERSION=2.1.2
RASA_X_DEMO_VERSION=0.34.0
Then it works.
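If it helps anyone: with the Docker Compose install, these versions live in the .env file next to docker-compose.yml. A sketch of applying the change, assuming the default /etc/rasa install directory:
cd /etc/rasa
sudo docker-compose down
# edit .env with the new versions, then:
sudo docker-compose up -d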
You can also define the port for the duckling server in the config.yml file, as shown below.
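A sketch, assuming a Rasa 2.x pipeline and that the duckling container is reachable as duckling on port 8000 inside the compose network:
pipeline:
  - name: DucklingEntityExtractor
    url: "http://duckling:8000"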
So I've been facing a weird problem, and I'm not sure where the fault is. I'm running a container using docker-compose, and the following nginx configuration works great:
server {
location / {
proxy_pass http://container_name1:1337;
}
}
Where container_name1 is the name of the service I gave in the docker-compose.yml file. It resolves to the IP perfectly and it works. However, the moment I change the above file to this:
upstream backend {
least_conn;
server container_name1:1337;
server container_name2:1337;
}
server {
location / {
proxy_pass http://backend;
}
}
It stops working completely, and in the error logs I get the following:
2020/03/17 13:16:03 [error] 8#8: *11 no live upstreams while connecting to upstream, client: xxxxxx, server: codedamn.com, request: "GET / HTTP/1.1", upstream: "http://backend/", host: "xxxxx"
Why is that? Is nginx not able to resolve DNS when inside upstream blocks? Could anyone help with this problem?
NOTE: This happens only on production (Ubuntu 16.04), on local (macOS Catalina), the same configuration works fine. I'm totally confused after discovering this.
Update 1: The following works:
upstream backend {
least_conn;
server container_name1:1337;
}
But not with more than one server. Why?!
Alright, figured it out. This is because docker-compose starts containers in an arbitrary order, and nginx quickly marks the containers as down (I was deploying this to production while there was some traffic). The app containers weren't ready, but nginx was, so it marked them as down and stopped forwarding any traffic.
For now, instead of syncing up the docker-compose container creation order (which was a bit hacky, as I discovered), I disabled nginx's failed-attempt counting, which automatically marks a service as down, by writing:
server app_inst1:1337 max_fails=0;
which lets nginx keep forwarding traffic to a particular service (and my Docker is configured to restart the container in case it crashes), which is fine.
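For completeness, the resulting upstream block would look like this (container names as in the question; a sketch, not copied verbatim from my config):
upstream backend {
    least_conn;
    server container_name1:1337 max_fails=0;
    server container_name2:1337 max_fails=0;
}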
I am reopening a question I somewhat asked before, but now the problem seems to be closely linked to SSH.
I have installed GitLab in /home/myuser/gitlab.
I created a repository test.
Following the instructions, I added a remote git@localhost:root/testing.git (GitLab's server runs on port 3000).
Now, when I try to push, I get this error message:
$ git push -u origin master
ssh: Could not resolve hostname mylocalhost: nodename nor servname provided, or not known
fatal: Could not read from remote repository.
Please make sure you have the correct access rights
and the repository exists.
Now, I found that there was a problem with my SSH connection. Here's my /home/myhome/.ssh/config file:
Host mylocalhost
Hostname localhost
PORT 3000
User git
IdentityFile /home/myuser/.ssh/id_rsa.pub
When I run ssh mylocalhost I get this message
ssh_exchange_identification: Connection closed by remote host
In verbose mode, it seems that the connection is established on the right port but the process fails here: debug1: ssh_exchange_identification: HTTP/1.1 400 Bad Request.
I tried to update my /System/Library/LaunchDaemons/ssh.plist (I am using OSX) to make it listen on port 3000, but then the GitLab WEBrick Rails server wouldn't run anymore. I also tried to change the remote with git remote set-url origin mylocalhost:testing.git.
GitLab's HTTP interface is probably running on port 3000, but SSH isn't running there, so you can't push the repository to it.
The message ssh: connect to host localhost port 22: Connection refused means that the SSH client was unable to connect to an SSH server at localhost on port 22. I'd ensure you've installed GitLab correctly and that it is running correctly, and also that the SSH server is running and able to accept connections on port 22.
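A quick way to check (a sketch; the first command is generic OpenSSH, the second is the OSX way to see whether the SSH daemon is loaded):
ssh -v git@localhost
sudo launchctl list | grep ssh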
You cannot hope to use port 3000 if you are using a fully specified url:
git@localhost:root/testing.git
As I have explained to you before:
The idea of the ssh config file is to define an entry "foobar" which will set the right server name (Hostname), ssh private key (IdentityFile), and the user under which the ssh session is opened.
That would allow you to do ssh foobar (without having to put git@xxx, and with non-standard public/private key files).
You can define as many entries as you want, allowing you to use different users and keys.
So you cannot define origin with git@xxx. You must type:
git remote add origin mylocalhost:testing.git
Or, if origin is already defined, and you need to change its url:
git remote set-url origin mylocalhost:testing.git
(no 'root/': you don't specify any path in front of a GitLab repo; GitLab will deduce the full path of the repo)
But you need to be sure your sshd starts on port 3000, and that gitlab.yml contains that port number.
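If memory serves, that is the gitlab_shell section of gitlab.yml (a sketch; check it against your GitLab version):
gitlab_shell:
  ssh_port: 3000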
ssh: Could not resolve hostname mylocalhost: nodename nor servname provided, or not known
That means ssh cannot find a ~/.ssh/config file with a mylocalhost entry in it.
Make sure that file exists.
Your previous question put that configuration in ~/.git/config, which has nothing to do with ssh.
At last, I figured out what the problem was. I had to set up the ~/.ssh/config file properly:
Host mylocalhost
Hostname localhost
# I deleted the line specifying Port 3000
User git
# previously set to /home/myuser/.ssh/id_rsa.pub
IdentityFile /home/myuser/.ssh/id_rsa
Then I reinstalled a key, but I still had some problems. Finally, I set the permissions of the file /home/myuser/.ssh/id_rsa to 644 and it worked just fine. For information, I found searching the web that some setups work with a 700 or a 600 chmod permission, but for me 644 did the trick.
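For what it's worth, the usual OpenSSH recommendation is to keep the private key unreadable by other users, e.g.:
chmod 600 ~/.ssh/id_rsa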