I have tried to install GitLab on Arch Linux following https://wiki.archlinux.org/index.php/gitlab
As 8080 is a commonly used port, I have switched to 8033.
When I try to connect to the website, it shows a 502 error.
If I look at nginx/gitlab_errors.log, I see:
2015/03/23 21:16:00 [error] 29748#0: *1081 connect() failed (111: Connection refused) while connecting to upstream, client: 5.51.59.153, server: gitlab.floth.fr, request: "GET / HTTP/1.1", upstream: "http://127.0.0.1:8033/", host: "gitlab.floth.fr"
If I open /var/lib/gitlab/gitlab-shell.log, I get:
# Logfile created on 2015-03-23 21:09:06 +0100 by logger.rb/47272
W, [2015-03-23T21:09:06.321779 #30833] WARN -- : Failed to connect to internal API <GET http://localhost:8033/api/v3/internal/check>: #<Errno::ECONNREFUSED: Connection refused - connect(2) for "localhost" port 8033>
W, [2015-03-23T21:17:48.059769 #31230] WARN -- : Failed to connect to internal API <GET http://localhost:8033//api/v3/internal/check>: #<Errno::ECONNREFUSED: Connection refused - connect(2) for "localhost" port 8033>
W, [2015-03-23T21:22:01.846281 #31548] WARN -- : Failed to connect to internal API <GET http://localhost:8033//api/v3/internal/check>: #<Errno::ECONNREFUSED: Connection refused - connect(2) for "localhost" port 8033>
And if I run sudo -u gitlab bundle exec rake gitlab:check RAILS_ENV=production, I get:
hooks directories in repos are links: ... can't check, you have no projects
Running /usr/share/webapps/gitlab-shell/bin/check
Check GitLab API access: FAILED: Failed to connect to internal API
gitlab-shell self-check failed
Try fixing it:
Make sure GitLab is running;
Check the gitlab-shell configuration file:
sudo -u gitlab -H editor /usr/share/webapps/gitlab-shell/config.yml
Please fix the error above and rerun the checks.
Checking GitLab Shell ... Finished
If I run netstat -a | grep 8033, nothing is listening on that port...
Does anyone have an idea where to look? Which service is not running, either because it was never started or because it failed?
Thank you for your help.
Edit
Content of gitlab-shell/config.yml
user: gitlab
gitlab_url: "http://localhost:8033/"
repos_path: "/srv/git/gitlab"
auth_file: "/var/lib/gitlab/.ssh/authorized_keys"
redis:
  bin: /usr/bin/redis-cli
  host: 127.0.0.1
  port: 6379
  database: 0
  namespace: resque:gitlab
log_file: "/var/log/gitlab/gitlab-shell.log"
log_level: INFO
audit_usernames: false
git_annex_enabled: false
TADA!
I found where my configuration was wrong.
It all comes from the fact that I chose a port other than 8080.
In that case, it is important to modify not only the gitlab-shell configuration (which is only the client side) but also the server side, in gitlab/config/unicorn.rb:
# Listen on both a Unix domain socket and a TCP port.
# If you are load-balancing multiple Unicorn masters, lower the backlog
# setting to e.g. 64 for faster failover.
listen "/run/gitlab/gitlab.socket", :backlog => 1024
listen "127.0.0.1:8033", :tcp_nopush => true
Related
I have added the following to my conf file (ref: https://docs.fluentd.org/input/monitor_agent):
<source>
  @type monitor_agent
  bind 0.0.0.0
  port 24220
</source>
When I run Fluentd in a Docker container, the following log line is also reported:
2022-09-21 07:57:22 +0000 [debug]: #0 [monitor_agent_stats] listening monitoring http server on http://0.0.0.0:24220/api/plugins for worker0
As per the documentation,
This configuration launches HTTP server with 24220 port
But when I run the following command in another terminal to list the plugins:
curl http://localhost:24220/api/plugins.json
I am getting:
curl: (7) Failed to connect to localhost port 24220 after 9 ms: Connection refused
When running Fluentd in a container, you need to publish the port to the host in order to access its API:
docker run -p 24220:24220 ...
Then from the host you can run:
curl http://localhost:24220/api/plugins.json
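If the container is started via docker-compose instead of docker run, the equivalent mapping would look roughly like this (a sketch; the service name is an assumption):
services:
  fluentd:
    image: fluent/fluentd   # or whatever image/tag you already use
    ports:
      - "24220:24220"       # publish the monitor_agent HTTP API to the host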
During the deployment, GAE health checks are failing because of a connection refused error. The container exposes the same port GAE expects, 8080. After connecting to the container with SSH and running curl 127.0.0.1/liveness_check, it works; however, querying manually from the GAE instance itself results in a connection refused error.
Disabling health checks allows the deployment to finish, but when accessing the service URL we receive an nginx 502 Bad Gateway error.
It looks like nginx cannot reach the container port, or something else is wrong; I did try to deploy the image on GCE and it works there.
app.yaml is pretty standard; it's using a custom VPC.
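For context, a minimal sketch of what such an app.yaml might look like (the runtime, network name, and check paths here are illustrative assumptions, not the actual file):
runtime: custom
env: flex
network:
  name: my-custom-vpc        # custom VPC network (placeholder name)
liveness_check:
  path: /liveness_check      # the path that works when curled inside the container
readiness_check:
  path: /readiness_check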
From GAE service logs:
[error] 33#33: *407 connect() failed (111: Connection refused) while connecting to upstream, client: 172.217.20.180, server: , request: "GET / HTTP/1.1", upstream: "http://172.17.0.1:8080/", host: "XXXXXXXXX"
I'm trying to deploy Rasa on my shared server. I have followed the Docker Compose installation documentation, and tried both the scripted and the manual deployment, but it's not working.
As it is a shared server, ports 80 and 443 are already in use, so I changed the rasa/nginx container ports to 8080 and 8443 in the docker-compose.yml file.
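For reference, the change looks roughly like this (a sketch, assuming the stock Rasa X compose file where the nginx service publishes 80:8080 and 443:8443; your file may differ):
services:
  nginx:
    ports:
      - "8080:8080"   # was "80:8080"
      - "8443:8443"   # was "443:8443"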
When I hit http://<server_ip>:8080 it gets redirected to http://<server_ip>/api/health and finally shows "unable to connect".
And when I hit the URL http://<server_ip>:8080/conversations, it shows a blank page with the title "Rasa X".
Edit:
Still not able to figure out what the issue was. But now the URL http://<server_ip>:8080/ returns 502 Bad Gateway.
From docker-compose logs:
[error] 17#17: *40 connect() failed (111: Connection refused) while connecting to upstream, client: 43.239.112.255, server: , request: "GET / HTTP/1.1", upstream: "http://192.168.64.6:5002/", host: "http://<server_ip>:8080"
Any idea what is causing it?
It seems that Rasa X 0.35.0 is not compatible with Rasa Open Source 2.2.4 on the server.
When I changed the versions from
RASA_X_VERSION=0.35.0
RASA_VERSION=2.2.4
RASA_X_DEMO_VERSION=0.35.0
to
RASA_X_VERSION=0.34.0
RASA_VERSION=2.1.2
RASA_X_DEMO_VERSION=0.34.0
then it works.
Can you also define the ports in the config.yml file for the duckling server, as shown below?
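A sketch of what that might look like, assuming Rasa's DucklingEntityExtractor component and the default duckling port of 8000 (adjust the host and port to your deployment):
pipeline:
  - name: DucklingEntityExtractor
    # point at the duckling container and the port it publishes
    url: "http://duckling:8000"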
I have what I think is exactly the setup prescribed in the documentation. Easy peasy, development-only, no SSL ... But I'm getting "Bad Gateway."
docker exec ... cat /etc/nginx/conf.d/default.conf
... seems to correctly identify the internal IP address of the other container of interest ... which means that scanning for ENV VIRTUAL_HOST obviously worked:
upstream my_site.local {
[...]
server 172.16.238.5:80; # CORRECT!
}
When I do docker logs app_server I see ... silence. The server isn't being contacted.
When I do docker logs nginx_proxy I see this:
failed (111: connection refused) while connecting to upstream, client 172.16.238.1 [...] upstream: "172.16.238.5:80/"
The other container specifies EXPOSE 80 ... so, why is the connection being refused and who is refusing it?
Well, as I said above, I realized the error of my ways and did this:
VIRTUAL_PROTO=fastcgi
VIRTUAL_ROOT=/var/www
... and within the Dockerfile of the app container I apparently did need to EXPOSE 9000. (This being the default port used by php-fpm for FastCGI purposes.)
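Put together, the relevant bits for the app container end up looking something like this (a sketch in docker-compose syntax, following the jwilder/nginx-proxy conventions; the service and host names are placeholders):
services:
  app_server:
    build: .
    expose:
      - "9000"                  # php-fpm's FastCGI port
    environment:
      VIRTUAL_HOST: my_site.local
      VIRTUAL_PROTO: fastcgi    # tell nginx-proxy to speak FastCGI, not HTTP
      VIRTUAL_ROOT: /var/www    # document root inside the app container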
On OS X I started the Kafka Docker image successfully, but it seems that I can't access it on localhost.
➜ ~ docker ps -a
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
1f931da3d661 wurstmeister/zookeeper:3.4.6 "/bin/sh -c '/usr/..." About an hour ago Up About an hour 22/tcp, 2888/tcp, 3888/tcp, 0.0.0.0:2181->2181/tcp docker_zookeeper_1
8bc36bcf8fdf wurstmeister/kafka:0.10.1.1 "start-kafka.sh" About an hour ago Up About an hour 0.0.0.0:9092->9092/tcp docker_kafka_1
➜ ~ telnet 0.0.0.0:2181
0.0.0.0:2181: nodename nor servname provided, or not known
➜ ~ telnet 0.0.0.0 2181
Trying 0.0.0.0...
telnet: connect to address 0.0.0.0: Connection refused
telnet: Unable to connect to remote host
➜ ~ telnet 192.168.43.193 2181
Trying 192.168.43.193...
telnet: connect to address 192.168.43.193: Connection refused
telnet: Unable to connect to remote host
➜ ~ telnet 127.0.0.1 2181
Trying 127.0.0.1...
telnet: connect to address 127.0.0.1: Connection refused
telnet: Unable to connect to remote host
My compose file is kafka.yml, and I use this command to bring it up:
docker-compose -f src/main/docker/kafka.yml up -d
When I use
./mvnw
the console shows:
2017-09-15 17:05:46.433 WARN 15871 --- [localhost:2181)] org.apache.zookeeper.ClientCnxn : Session 0x0 for server null, unexpected error, closing socket connection and attempting reconnect
java.net.ConnectException: Connection refused
at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method)
at sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:717)
at org.apache.zookeeper.ClientCnxnSocketNIO.doTransport(ClientCnxnSocketNIO.java:361)
at org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:1141)
How can I access port 2181?
EDIT
docker logs 8bc36bcf8fdf
[2017-09-15 08:14:13,386] FATAL Fatal error during KafkaServerStartable startup. Prepare to shutdown (kafka.server.KafkaServerStartable)
java.lang.RuntimeException: A broker is already registered on the path /brokers/ids/1001. This probably indicates that you either have configured a brokerid that is already in use, or else you have shutdown this broker and restarted it faster than the zookeeper timeout so it appears to be re-registering.
at kafka.utils.ZkUtils.registerBrokerInZk(ZkUtils.scala:393)
at kafka.utils.ZkUtils.registerBrokerInZk(ZkUtils.scala:379)
at kafka.server.KafkaHealthcheck.register(KafkaHealthcheck.scala:70)
at kafka.server.KafkaHealthcheck.startup(KafkaHealthcheck.scala:51)
at kafka.server.KafkaServer.startup(KafkaServer.scala:270)
at kafka.server.KafkaServerStartable.startup(KafkaServerStartable.scala:39)
at kafka.Kafka$.main(Kafka.scala:67)
at kafka.Kafka.main(Kafka.scala)
[2017-09-15 08:14:13,393] INFO [Kafka Server 1001], shutting down (kafka.server.KafkaServer)
docker logs 1f931da3d661
2017-09-14 08:53:05,878 [myid:] - WARN [NIOServerCxn.Factory:0.0.0.0/0.0.0.0:2181:NIOServerCnxn#357] - caught end of stream exception
EndOfStreamException: Unable to read additional data from client sessionid 0x15e7ea74c8e0000, likely client has closed socket
at org.apache.zookeeper.server.NIOServerCnxn.doIO(NIOServerCnxn.java:228)
at org.apache.zookeeper.server.NIOServerCnxnFactory.run(NIOServerCnxnFactory.java:208)
at java.lang.Thread.run(Thread.java:745)
2017-09-14 08:53:05,887 [myid:] - INFO [NIOServerCxn.Factory:0.0.0.0/0.0.0.0:2181:NIOServerCnxn#1007] - Closed socket connection for client /172.18.0.2:54222 which had sessionid 0x15e7ea74c8e0000
Have you tried using host networking as in this example? https://docs.confluent.io/current/cp-docker-images/docs/quickstart.html#zookeeper
That looks like it would simplify and solve this. I'd also recommend checking out those images instead of the custom ones you appear to be using, since they are run in production by many people and are known to work well.
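For reference, the host-networking approach from that quickstart looks roughly like this (a sketch; the image tags and settings shown here are illustrative, not taken from your kafka.yml):
# ZooKeeper on the host network, listening on 2181
docker run -d --net=host --name=zookeeper \
  -e ZOOKEEPER_CLIENT_PORT=2181 \
  confluentinc/cp-zookeeper:4.0.0
# Kafka on the host network, advertising localhost:9092 so host clients can connect
docker run -d --net=host --name=kafka \
  -e KAFKA_ZOOKEEPER_CONNECT=localhost:2181 \
  -e KAFKA_ADVERTISED_LISTENERS=PLAINTEXT://localhost:9092 \
  -e KAFKA_OFFSETS_TOPIC_REPLICATION_FACTOR=1 \
  confluentinc/cp-kafka:4.0.0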