My server at http://127.0.0.1:5438/api provides the API.
The following nginx configuration works fine when I'm not using Docker.
server {
    listen 80;
    server_name 127.0.0.1;
    location / {
        root /usr/share/nginx/html;
        index index.html index.htm;
        try_files $uri $uri/ /index.html;
    }
    location ^~ /api/ { proxy_pass http://127.0.0.1:5438/api/; }
}
When nginx runs in Docker, however, the same configuration does not work:
sudo docker run \
-d -p 80:80 \
-v /usr/share/nginx/html:/usr/share/nginx/html \
-v /etc/nginx/nginx.conf:/etc/nginx/nginx.conf:ro \
-v /usr/share/nginx/html/nginx.conf:/etc/nginx/sites-enabled/nginx.conf \
nginx
So how can nginx inside Docker reach port 5438 on the host?
If you don't care about the network topology, try adding --net=host. The limitation is that no other service on the host may already be using port 80.
sudo docker run \
--net=host \
-d \
-v /usr/share/nginx/html:/usr/share/nginx/html \
-v /etc/nginx/nginx.conf:/etc/nginx/nginx.conf:ro \
-v /usr/share/nginx/html/nginx.conf:/etc/nginx/sites-enabled/nginx.conf \
nginx
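With --net=host, the container shares the host's network namespace, so the proxy_pass target http://127.0.0.1:5438/api/ reaches the host service directly. As a quick sanity check from the host (a sketch, assuming the container started cleanly and the API answers GET requests):
curl -i http://127.0.0.1/api/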
Modify "-p 80:80 " to "-p 5438:80" in docker run command, This connects 80 port of docker to 5438 port of Host Machine.
sudo docker run \
-d -p 5438:80 \
-v /usr/share/nginx/html:/usr/share/nginx/html \
-v /etc/nginx/nginx.conf:/etc/nginx/nginx.conf:ro \
-v /usr/share/nginx/html/nginx.conf:/etc/nginx/sites-enabled/nginx.conf \
nginx
Hope it works!
SUMMARY
I am running a Zabbix Server container, but I am not able to communicate with it on its listening port, not even locally.
OS / ENVIRONMENT / Used docker-compose files
This is the script I am currently using to run it:
docker run -d --name zabbix-server \
--restart always \
--link zabbix-snmptraper:zabbix-snmptraps --volumes-from zabbix-snmptraper \
-p 192.168.1.248:10052:10051 \
-e MYSQL_DATABASE="zabbix" \
-e MYSQL_USER="zabbix" \
-e MYSQL_PASSWORD="aro#123" \
-e ZBX_LISTENPORT=10052 \
-e ZBX_HOUSEKEEPINGFREQUENCY=12 \
-e ZBX_LOGSLOWQUERIES=1000 \
-e ZBX_STARTPOLLERSUNREACHABLE=1 \
-e ZBX_STARTPINGERS=5 \
-e ZBX_STARTTRAPPERS=1 \
-e ZBX_STARTDBSYNCERS=3 \
-e ZBX_STARTDISCOVERERS=4 \
-e ZBX_STARTPOLLERS=10 \
-e ZBX_TIMEOUT=30 \
-e ZBX_VALUECACHESIZE=32M \
-e ZBX_CACHESIZE=48M \
-e ZBX_MAXHOUSEKEEPERDELETE=432000 \
-e ZBX_ENABLE_SNMP_TRAPS=true \
-e MYSQL_ROOT_PASSWORD="my_root_pass_of_mysql..." \
-e DB_SERVER_HOST="mysql-server" \
-e DB_SERVER_PORT="3306" \
-v /etc/localtime:/etc/localtime:ro \
-v /mnt/dados/zabbix/external_scripts:/usr/lib/zabbix/externalscripts \
--network=zabbix-net \
zabbix/zabbix-server-mysql:5.4-ubuntu-latest
CONFIGURATION
The commands above are run on Debian 11.
STEPS TO REPRODUCE
Basically, the container is up and running.
Passive queries all work: I can gather data from Zabbix agents, SNMP, etc.
The problem happens when I try an active query from outside against the Zabbix Server itself.
My deduction was that the Docker container did not create the necessary routes for this, so I must need to specify something, or some configuration is missing.
EXPECTED RESULTS
When doing a telnet to port 10052 on my Zabbix Server, the expected result is a successful connection.
ACTUAL RESULTS
Locally, on my own Zabbix Server, when I did:
sudo telnet 192.168.1.248 10052
I got telnet: Unable to connect to remote host: Connection refused
The crazy thing is that when I do the same against the IP address of the Docker network (obtained via docker inspect zabbix-server, "IPAddress": "172.18.0.4"):
sudo telnet 172.18.0.4 10052
Trying 172.18.0.4...
Connected to 172.18.0.4.
it worked. So there is a routing problem with this container.
Most containers create the necessary rules when they start, or at least show in their logs or docs how to do it, but I could not find anything like that here...
Can you please help me?
I have been at this for more than two weeks and do not know what to do anymore.
If this is in the wrong section or flow, please direct me to the correct place for it.
I really appreciate the help.
Edit 1
Here is the output TCPDUMP gave me:
16:28:12.373378 IP 192.168.17.24.55114 > 192.168.1.248.10052: Flags [S], seq 2008667124, win 64240, options [mss 1460,nop,wscale 8,nop,nop,sackOK], length 0
As you can see, packets are coming through and arriving at the Docker host.
I tried adding the following rule to iptables to see if it would solve the problem:
sudo iptables -t nat -A PREROUTING -p tcp --dport 10052 -j DNAT --to-destination 172.18.0.4:10052 -m comment --comment "Redirect requests from IP 248 to the container IP"
But it did not work; or perhaps I created it wrongly.
To list the rules I used the command:
sudo iptables -t nat -v -L PREROUTING -n --line-number
The rule was created fine.
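For comparison, Docker writes its own DNAT rules for published ports into the DOCKER chain of the nat table; listing that chain shows what a working port mapping looks like:
sudo iptables -t nat -v -L DOCKER -n --line-numbers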
While you configured Zabbix to listen on port 10052 (-e ZBX_LISTENPORT=10052), you map host port 10052 to the container's port 10051 instead (-p 192.168.1.248:10052:10051).
Use -p 192.168.1.248:10052:10052 to make it work:
docker run -d --name zabbix-server \
--restart always \
--link zabbix-snmptraper:zabbix-snmptraps --volumes-from zabbix-snmptraper \
-p 192.168.1.248:10052:10052 \
-e MYSQL_DATABASE="zabbix" \
-e MYSQL_USER="zabbix" \
-e MYSQL_PASSWORD="aro#123" \
-e ZBX_LISTENPORT=10052 \
-e ZBX_HOUSEKEEPINGFREQUENCY=12 \
-e ZBX_LOGSLOWQUERIES=1000 \
-e ZBX_STARTPOLLERSUNREACHABLE=1 \
-e ZBX_STARTPINGERS=5 \
-e ZBX_STARTTRAPPERS=1 \
-e ZBX_STARTDBSYNCERS=3 \
-e ZBX_STARTDISCOVERERS=4 \
-e ZBX_STARTPOLLERS=10 \
-e ZBX_TIMEOUT=30 \
-e ZBX_VALUECACHESIZE=32M \
-e ZBX_CACHESIZE=48M \
-e ZBX_MAXHOUSEKEEPERDELETE=432000 \
-e ZBX_ENABLE_SNMP_TRAPS=true \
-e MYSQL_ROOT_PASSWORD="my_root_pass_of_mysql..." \
-e DB_SERVER_HOST="mysql-server" \
-e DB_SERVER_PORT="3306" \
-v /etc/localtime:/etc/localtime:ro \
-v /mnt/dados/zabbix/external_scripts:/usr/lib/zabbix/externalscripts \
--network=zabbix-net \
zabbix/zabbix-server-mysql:5.4-ubuntu-latest
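After recreating the container with this mapping, the earlier local test should succeed:
sudo telnet 192.168.1.248 10052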
I am trying to start GitLab on Ubuntu 20.04.1 LTS. I already have an Apache server running.
sudo docker run --detach \
--hostname hostname.de \
--publish 443:443 --publish 80:80 --publish 22:22 \
--name gitlab \
--restart always \
--volume $GITLAB_HOME/config:/etc/gitlab \
--volume $GITLAB_HOME/logs:/var/log/gitlab \
--volume $GITLAB_HOME/data:/var/opt/gitlab \
gitlab/gitlab-ee:latest
When I try to run that Docker image I get the following error:
Error starting userland proxy: listen tcp4 0.0.0.0:443: bind: address already in use.
I already run a few websites on my Apache web server, so ports 80 and 443 are taken.
How can I run the GitLab Docker image beside my Apache server?
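A sketch of one common approach (assumed, not from the original thread): publish GitLab on host ports that Apache does not use, for example 8443, 8081, and 2222, and optionally reverse-proxy to them from Apache:
sudo docker run --detach \
--hostname hostname.de \
--publish 8443:443 --publish 8081:80 --publish 2222:22 \
--name gitlab \
--restart always \
--volume $GITLAB_HOME/config:/etc/gitlab \
--volume $GITLAB_HOME/logs:/var/log/gitlab \
--volume $GITLAB_HOME/data:/var/opt/gitlab \
gitlab/gitlab-ee:latest
Git over SSH then uses port 2222, and external_url in gitlab.rb may need the new HTTP port (see the listen-port discussion further down).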
I'm installing Jenkins via Docker by following the official guide.
After running the command:
docker run \
-u root \
--rm \
-d \
-p 8080:8080 \
-p 50000:50000 \
-v jenkins-data:/var/jenkins_home \
-v /var/run/docker.sock:/var/run/docker.sock \
jenkinsci/blueocean
I'm expecting Jenkins to be installed at /var/jenkins_home, but instead the data ends up under /var/lib/docker/volumes/jenkins-data.
Also, there is no such folder as /var/jenkins_home.
Am I missing something? Please advise.
Thank you
/var/jenkins_home is a path inside the container. You are using a named volume, which is why the data is located under /var/lib/docker/volumes/jenkins-data.
Instead, you can use a host bind mount as below to ensure the data lands in /var/jenkins_home on the host machine:
docker run \
-u root \
--rm \
-d \
-p 8080:8080 \
-p 50000:50000 \
-v /var/jenkins_home:/var/jenkins_home \
-v /var/run/docker.sock:/var/run/docker.sock \
jenkinsci/blueocean
The volume path for a host bind mount has to be absolute; otherwise Docker treats it as a named volume.
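To see where a named volume actually lives on the host, inspect it; the Mountpoint field points into /var/lib/docker/volumes:
docker volume inspect jenkins-data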
I am running the progrium/consul container with the gliderlabs/registrator container. I am trying to create health checks to monitor whether my Docker containers are up or down. However, I noticed some very strange behavior with the health check I was able to make. Here is the command I used to create it:
curl -v -X PUT http://$CONSUL_IP_ADDR:8500/v1/agent/check/register -d @/home/myUserName/health.json
Here is my health.json file:
{
  "id": "docker_stuff",
  "name": "echo test",
  "docker_container_id": "4fc5b1296c99",
  "shell": "/bin/bash",
  "script": "echo hello",
  "interval": "2s"
}
First, I noticed that this check automatically deletes the service whenever the container is stopped properly, but does nothing when the container is stopped improperly (i.e. during a node failure).
Second, I noticed that the docker_container_id does not matter at all; the health check attaches itself to every container running on the Consul node it was registered with.
I would just like a working TCP or HTTP health check for every Docker container running on a Consul node (yes, I know the JSON file above runs a script; I created it following the documentation example). I just want Consul to be able to tell whether a container is stopped or running, and I don't want my services to delete themselves when a health check fails. How would I do this?
Note: I find the Consul documentation on agent health checks very lacking, vague, and inaccurate, so please don't just link to it and tell me to go read it. I am looking for a full explanation of exactly how to set up Docker health checks the right way.
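For reference, a minimal sketch of a plain TCP check registered through the same HTTP API (the id, name, and port here are illustrative, not taken from the setup above):
{
  "id": "mycontainer-tcp",
  "name": "mycontainer tcp check",
  "tcp": "localhost:8080",
  "interval": "10s"
}
Saved as tcp-check.json, it registers with curl -X PUT http://$CONSUL_IP_ADDR:8500/v1/agent/check/register -d @tcp-check.json.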
Update: here is how to start Consul servers with the current version of the official Consul container (right now these are dev-mode versions; I'll update with production versions soon):
#bootstrap server
docker run -d \
-p 8300:8300 \
-p 8301:8301 \
-p 8301:8301/udp \
-p 8302:8302 \
-p 8302:8302/udp \
-p 8400:8400 \
-p 8500:8500 \
-p 53:53/udp \
--name=dev-consul0 consul agent -dev -ui -client 0.0.0.0
# its IP address on the Docker bridge will then be used for joining
# let's say it's 172.17.0.2
#start the other two consul servers, without web ui
docker run -d --name=dev-consul1 \
-p 8300:8300 \
-p 8301:8301 \
-p 8301:8301/udp \
-p 8302:8302 \
-p 8302:8302/udp \
-p 8400:8400 \
-p 8500:8500 \
-p 53:53/udp \
consul agent -dev -join=172.17.0.2
docker run -d --name=dev-consul2 \
-p 8300:8300 \
-p 8301:8301 \
-p 8301:8301/udp \
-p 8302:8302 \
-p 8302:8302/udp \
-p 8400:8400 \
-p 8500:8500 \
-p 53:53/udp \
consul agent -dev -join=172.17.0.2
# then here are your clients
docker run -d --net=host --name=client0 \
-e 'CONSUL_LOCAL_CONFIG={"leave_on_terminate": true}' \
consul agent -bind=$(hostname -i) -retry-join=172.17.0.2
The progrium/consul image ships an old version of Consul (https://hub.docker.com/r/progrium/consul/tags/) and currently seems to be unmaintained.
Please use the official image with a current Consul version instead: https://hub.docker.com/r/library/consul/tags/
You can also use registrator to register checks in Consul together with your service, e.g.:
SERVICE_[port_]CHECK_SCRIPT=nc $SERVICE_IP $SERVICE_PORT | grep OK
More examples: http://gliderlabs.com/registrator/latest/user/backends/#consul
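For instance, a simple TCP reachability check could be attached when starting a service container. This is only a sketch: the redis image is illustrative, and the single quotes keep $SERVICE_IP and $SERVICE_PORT literal so registrator, not the shell, substitutes them:
docker run -d --name redis -p 6379:6379 \
-e 'SERVICE_CHECK_SCRIPT=nc -z $SERVICE_IP $SERVICE_PORT' \
redis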
A solution that works around using any version of the Consul containers is to install Consul directly on the host machine, following these steps from https://sonnguyen.ws/install-consul-and-consul-template-in-ubuntu-14-04/:
sudo apt-get update -y
sudo apt-get install -y unzip curl
sudo wget https://releases.hashicorp.com/consul/0.6.4/consul_0.6.4_linux_amd64.zip
sudo unzip consul_0.6.4_linux_amd64.zip
sudo rm consul_0.6.4_linux_amd64.zip
sudo chmod +x consul
sudo mv consul /usr/bin/consul
sudo mkdir -p /opt/consul
cd /opt/consul
sudo wget https://releases.hashicorp.com/consul/0.6.4/consul_0.6.4_web_ui.zip
sudo unzip consul_0.6.4_web_ui.zip
sudo rm consul_0.6.4_web_ui.zip
sudo mkdir -p /etc/consul.d/
sudo wget https://releases.hashicorp.com/consul-template/0.14.0/consul-template_0.14.0_linux_amd64.zip
sudo unzip consul-template_0.14.0_linux_amd64.zip
sudo rm consul-template_0.14.0_linux_amd64.zip
sudo chmod a+x consul-template
sudo mv consul-template /usr/bin/consul-template
sudo nohup consul agent -server -bootstrap-expect 1 \
-data-dir /tmp/consul -node=agent-one \
-bind=$(hostname -i) \
-client=0.0.0.0 \
-config-dir /etc/consul.d \
-ui-dir /opt/consul/ &
echo 'Done with consul install!!!'
Then create your Consul health check JSON files; info on how to do that can be found here. Once your JSON files are ready, put them in the /etc/consul.d directory and reload Consul with consul reload. If after the reload Consul does not add your new health checks, there is something wrong with the syntax of your JSON files; go back, edit them, and try again.
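As an illustration, a config-dir check file could look like this (the file name /etc/consul.d/web-check.json, the port, and the check names are made up for the example):
{
  "check": {
    "id": "web-http",
    "name": "web http check",
    "http": "http://localhost:8080/",
    "interval": "10s"
  }
}
After dropping it into /etc/consul.d/, run consul reload and the check should show up.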
An example of the commands being run:
docker run \
--detach \
--hostname gitlab.docker \
--publish 8443:443 \
--publish 8081:80 \
--publish 2222:22 \
--name gitlab \
--restart always -v /var/run/docker.sock:/var/run/docker.sock \
--volume /tmp/gitlab/config:/etc/gitlab \
--volume /tmp/gitlab/logs:/var/log/gitlab \
--volume /tmp/gitlab/data:/var/opt/gitlab \
--env GITLAB_OMNIBUS_CONFIG="external_url 'http://gitlab.docker:8081/'; gitlab_rails['lfs_enabled'] = true;" \
gitlab/gitlab-ce:latest
gitlab.rb:
external_url "http://gitlab.docker:8081"
Access URL:
http://gitlab.docker:8081
Perhaps I'm missing something, but when I remove the port from the external URL I can access the interface on 8081; with the port there, it becomes inaccessible.
Any insights?
You need to set the nginx listen port (nginx['listen_port']) to make the nginx inside the container listen on port 80 instead of the port 8081 taken from external_url.
See:
https://gitlab.com/gitlab-org/omnibus-gitlab/blob/master/doc/settings/nginx.md#setting-the-nginx-listen-port
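In gitlab.rb, that combination would look roughly like this (hostname taken from the question above):
external_url 'http://gitlab.docker:8081'
nginx['listen_port'] = 80
Then run gitlab-ctl reconfigure so the change takes effect.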
I figured it out. When you run:
gitlab-ctl reconfigure
the port in the external URL gets parsed and placed into the nginx config, so the Docker port you were forwarding is no longer what nginx listens on.