Trouble starting docker container with Nginx

I created a Docker CentOS image exposing ports 80, 8000, and 443. When the build finished, I ran the container using:
docker run -it -p 8080:8000 -u root <image_id>
Nginx is installed on the image. There are some issues using the service command to start nginx, so I simply started it by running /usr/bin/nginx directly. I can see that nginx is running with ps aux | grep nginx:
bash-4.2# ps aux | grep nginx
root 8 0.0 0.1 122892 2172 ? Ss 18:57 0:00 nginx: master process nginx
nginx 9 0.0 0.3 123356 6972 ? S 18:57 0:00 nginx: worker process
nginx 10 0.0 0.3 123356 6972 ? S 18:57 0:00 nginx: worker process
nginx 11 0.0 0.3 123356 6972 ? S 18:57 0:00 nginx: worker process
nginx 12 0.0 0.3 123356 6972 ? S 18:57 0:00 nginx: worker process
I get the container's IP address from /etc/hosts, 172.17.0.11, but when I go to that address in a web browser it just loads for a long time and eventually times out. I am admittedly pretty new to Nginx and nginx configurations, so I'm not sure if there's something I'm missing in the default configuration file. I checked the access and error logs for nginx, and both are empty.
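As a quick sanity check (a sketch, not part of the original post), one could first confirm from inside the container that nginx really answers on port 80; this assumes curl is available in the CentOS image:
# from the host, open a shell in the running container
docker exec -it <container_id> bash
# then, inside the container:
curl -I http://127.0.0.1:80/   # should return an HTTP status line if nginx is serving
ss -tlnp | grep nginx          # confirm nginx is listening on 0.0.0.0:80 (ss/netstat may need installing)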
/etc/hosts
127.0.0.1 localhost
::1 localhost ip6-localhost ip6-loopback
fe00::0 ip6-localnet
ff00::0 ip6-mcastprefix
ff02::1 ip6-allnodes
ff02::2 ip6-allrouters
172.17.0.12 85a91e447fca
/etc/nginx/nginx.conf
# For more information on configuration, see:
# * Official English Documentation: http://nginx.org/en/docs/
# * Official Russian Documentation: http://nginx.org/ru/docs/
user nginx;
worker_processes auto;
error_log /var/log/nginx/error.log;
pid /run/nginx.pid;
# Load dynamic modules. See /usr/share/nginx/README.dynamic.
include /usr/share/nginx/modules/*.conf;
events {
worker_connections 1024;
}
http {
log_format main '$remote_addr - $remote_user [$time_local] "$request" '
'$status $body_bytes_sent "$http_referer" '
'"$http_user_agent" "$http_x_forwarded_for"';
access_log /var/log/nginx/access.log main;
sendfile on;
tcp_nopush on;
tcp_nodelay on;
keepalive_timeout 65;
types_hash_max_size 2048;
include /etc/nginx/mime.types;
default_type application/octet-stream;
# Load modular configuration files from the /etc/nginx/conf.d directory.
# See http://nginx.org/en/docs/ngx_core_module.html#include
# for more information.
include /etc/nginx/conf.d/*.conf;
server {
listen 80 default_server;
listen [::]:80 default_server;
server_name _;
root /usr/share/nginx/html;
# Load configuration files for the default server block.
include /etc/nginx/default.d/*.conf;
location / {
}
error_page 404 /404.html;
location = /40x.html {
}
error_page 500 502 503 504 /50x.html;
location = /50x.html {
}
}
# Settings for a TLS enabled server.
#
# server {
# listen 443 ssl http2 default_server;
# listen [::]:443 ssl http2 default_server;
# server_name _;
# root /usr/share/nginx/html;
#
# ssl_certificate "/etc/pki/nginx/server.crt";
# ssl_certificate_key "/etc/pki/nginx/private/server.key";
# ssl_session_cache shared:SSL:1m;
# ssl_session_timeout 10m;
# ssl_ciphers HIGH:!aNULL:!MD5;
# ssl_prefer_server_ciphers on;
#
# # Load configuration files for the default server block.
# include /etc/nginx/default.d/*.conf;
#
# location / {
# }
#
# error_page 404 /404.html;
# location = /40x.html {
# }
#
# error_page 500 502 503 504 /50x.html;
# location = /50x.html {
# }
# }
}

The IP you looked up is the IP of the container from its own point of view. You must access the Docker container from the outside, as your browser lives on your local machine and not in the container.
The local machine and the container are completely separate worlds and can only reach each other through the ports you exposed. Think of it (as an analogy) like the IP of your internal network at home versus the IP you get from your internet provider.
If you are using docker-machine, you can find the IP you need with the following command:
docker-machine ip default
If you are using Docker natively, you can go to http://localhost:8080,
but since your nginx listens on ports 80 and 443, you need to start the container with a command like:
docker run -it -p 8080:80 -p 8443:443 imagename
Note that ports don't just exist because you want them to. If nginx is running on ports 80 and 443 in the container, it will not suddenly start running on port 8000 just because you add that to the docker run command; within the container it will still run on ports 80 and 443. So if you want to map the outside world (read: your own computer) to the container, you have to map a host port (e.g. 8000) to the actual port 80 in the container. Nothing else will work; mapping to any container port other than 80 or 443 will in this case fail.
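For example (a minimal sketch; the image id is a placeholder), the mapping and a quick check from the host could look like this:
# map host 8080 -> container 80 and host 8443 -> container 443
docker run -it -p 8080:80 -p 8443:443 -u root <image_id>
# then, from the host (or substitute the docker-machine IP for localhost):
curl -I http://localhost:8080/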

Your docker run command should map host ports to container ports 80 and 443. It looks like you only map 8000.
Also, you should use the host IP (assuming you map all required container ports to host ports), such as http://{host ipv4}:8080.
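A hedged sketch of how to find both pieces of information (the container id is a placeholder):
docker port <container_id>   # shows which host ports are mapped to which container ports
hostname -I                  # one way to list the host's IPv4 addresses on Linux
# then browse to http://<host ipv4>:8080/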

Related

Can't connect from one docker container to another by its public domain name

I have an application composed of containerized web services deployed with docker-compose (it's a test env). One of the containers is nginx, which operates as a reverse proxy for the services and also serves static files. A public domain name points to the host machine, and nginx has a server section that uses it.
The problem I am facing is that I can't talk to nginx by that public domain name from containers launched on this same machine; the connection always times out. (For example, I tried doing curl https://<mypublicdomain>.com.)
Referring to the container by its name (using Docker's hostnames) works just fine. Requests to the same domain name from other machines also work fine.
I understand this has to do with how Docker does networking, but I have failed to find any docs that outline what exactly goes wrong here. Could anyone explain the root of the issue to me, or maybe just point me in the right direction?
(For extra context: originally I was going to use this to set up monitoring with Prometheus and Blackbox Exporter, to make it see the server the same way anyone from the outside would and to automatically check that SSL is working. For now I have fallen back to pointing the prober at nginx by its Docker hostname.)
Nginx image
FROM nginx:stable
COPY ./nginx.conf /etc/nginx/nginx.conf.template
COPY ./docker-entrypoint.sh /docker-entrypoint.sh
COPY ./dhparam/dhparam-2048.pem /dhparam-2048.pem
COPY ./index.html /var/www/index.html
ENTRYPOINT ["/docker-entrypoint.sh"]
CMD ["nginx", "-g", "daemon off;"]
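The entrypoint itself is not shown, but since nginx.conf is copied to nginx.conf.template, it presumably substitutes environment variables into the template before starting nginx. A hypothetical sketch of such a docker-entrypoint.sh (the variable name is made up):
#!/bin/sh
set -e
# substitute only the expected variables so nginx's own $variables are left untouched
# (envsubst comes from gettext; install it in the image if it is missing)
envsubst '${SERVER_NAME}' < /etc/nginx/nginx.conf.template > /etc/nginx/nginx.conf
# hand off to the CMD ("nginx -g 'daemon off;'")
exec "$@"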
docker-compose.yaml
version: "3"
networks:
  mainnet:
    driver: bridge
services:
  my-gateway:
    container_name: my-gateway
    image: aturok/manuwor_gateway:latest
    restart: always
    networks:
      - mainnet
    ports:
      - 80:80
      - 443:443
    expose:
      - "443"
    volumes:
      - /var/stuff:/var/www
      - /var/certs:/certsdir
    command: "/bin/sh -c 'while :; do sleep 6h & wait $${!}; nginx -s reload; done & nginx -g \"daemon off;\"'"
(I show only the nginx service, as the others are irrelevant. I would, for example, spin up a nettools container without connecting it to the mainnet network and still expect the requests to reach nginx, since I am using the public domain name. The problem also happens with containers connected to the same network.)
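For reference, the kind of test meant above could look like this (a sketch; the image, network name, and domain are placeholders):
# from a throwaway container that is NOT attached to the compose network, the public name times out:
docker run --rm curlimages/curl -sv --max-time 10 https://<mypublicdomain>.com
# while addressing nginx by its container name on the compose network works
# (-k because the certificate is for the public domain, not the container name):
docker run --rm --network <project>_mainnet curlimages/curl -skv https://my-gateway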
nginx.conf (normally it contains a bunch of env vars; they have been substituted here, and the irrelevant backend removed)
user nginx;
worker_processes 1;
error_log /var/log/nginx/error.log warn;
pid /var/run/nginx.pid;
events {
worker_connections 1024;
}
http {
include /etc/nginx/mime.types;
default_type application/octet-stream;
log_format main '$remote_addr - $remote_user [$time_local] "$request" '
'$status $body_bytes_sent "$http_referer" '
'"$http_user_agent" "$http_x_forwarded_for"';
access_log /var/log/nginx/access.log main;
sendfile on;
#tcp_nopush on;
keepalive_timeout 65;
#gzip on;
#include /etc/nginx/conf.d/*.conf;
server {
listen 80;
server_name mydomain.com;
location /.well-known/acme-challenge/ {
root /var/www/certbot;
}
location / {
return 301 https://$host$request_uri;
}
}
server {
listen 443 ssl;
server_name mydomain.com;
ssl_certificate /certsdir/fullchain.pem;
ssl_certificate_key /certsdir/privkey.pem;
server_tokens off;
ssl_buffer_size 8k;
ssl_dhparam /dhparam-2048.pem;
ssl_protocols TLSv1.2 TLSv1.1 TLSv1;
ssl_prefer_server_ciphers on;
ssl_ciphers ECDH+AESGCM:ECDH+AES256:ECDH+AES128:DH+3DES:!ADH:!AECDH:!MD5;
ssl_ecdh_curve secp384r1;
ssl_session_tickets off;
ssl_stapling on;
ssl_stapling_verify on;
resolver 8.8.8.8;
root /var/www/;
index index.html;
location / {
root /var/www;
try_files $uri /index.html;
}
error_page 500 502 503 504 /50x.html;
location = /50x.html {
root html;
}
}
}
Note: certificates are ok when I access the server from elsewhere

How to record http requests and time cost in containers?

Here's the situation: I'm using Docker to build some projects in containers, and I want to record the request URLs of these containers in order to optimize these jobs.
So I found a way to run an Nginx container as a forward proxy, called proxy, and run the other build jobs in containers with http_proxy set.
proxy:
docker run -d -p 8090:8090 proxy
jobs:
docker run --env http_proxy="http://127.0.0.1:8090" --network host jobs
But I can't find the correct Nginx config to do this trick.
➜ cat nginx.conf
worker_processes 1;
events {
worker_connections 1024;
}
http {
include mime.types;
default_type application/octet-stream;
sendfile on;
keepalive_timeout 65;
gzip on;
server {
listen 80;
listen 443;
server_name _;
# forward proxy for CONNECT request
proxy_connect;
proxy_connect_allow 443 563;
proxy_connect_connect_timeout 10s;
proxy_connect_read_timeout 10s;
proxy_connect_send_timeout 10s;
location / {
resolver 8.8.8.8;
proxy_pass $scheme://$host$request_uri;
}
}
}
I also tried to use Envoy to proxy the containers, and I read the Front Proxy doc, but it seems that is not a forward proxy. So what's the recommended way to record the HTTP requests and time cost in containers?
Any help would be greatly appreciated.
I solved this issue by using Nginx. It's actually easy to use Nginx as a transparent forward proxy for this trick: Nginx needs the ngx_http_proxy_connect_module to proxy HTTPS requests, and the author also contributed this module to Tengine, so I used Tengine.
worker_processes 1;
events {
worker_connections 65536;
}
http {
include mime.types;
default_type application/octet-stream;
sendfile on;
keepalive_timeout 65;
gzip on;
server {
listen *:80; # fix 99 address not available
listen *:443;# fix 99 address not available
server_name localhost;
resolver 10.10.10.10 ipv6=off;
resolver_timeout 30s;
# forward proxy for CONNECT request
proxy_connect;
proxy_connect_allow 443 563;
proxy_connect_connect_timeout 30s;
proxy_connect_read_timeout 30s;
proxy_connect_send_timeout 30s;
location / {
proxy_pass $scheme://$host$request_uri;
}
access_log /tmp/access.log;
error_log /tmp/error.log;
}
}
The above is my nginx.conf. To avoid connection errors while connecting to upstream, I disabled the ipv6 option on the resolver. It works.
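To check that the recording works, a job (or any shell) can be pointed at the proxy and the access log tailed; a rough sketch, assuming the proxy is reachable on 127.0.0.1:8090 as in the question:
# send a request through the forward proxy (HTTPS goes through a CONNECT tunnel)
http_proxy=http://127.0.0.1:8090 https_proxy=http://127.0.0.1:8090 curl -sI https://example.com
# then look at the recorded requests inside the proxy container
docker exec <proxy_container> tail /tmp/access.log
For the time cost, nginx's $request_time variable can be added to a custom log_format so every logged request also carries how long it took.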

nginx index directive works fine locally but gives 404 on ec2

I have a web project that I want to deploy using docker-compose and nginx.
Locally, I:
docker-compose build
docker-compose push
If I docker-compose up, I can access localhost/ and get redirected to my index.html.
Now on my ec2 instance (a regular ec2 instance where I installed docker and docker-compose) I docker-compose pull, then docker-compose up.
All the containers launch correctly and I can exec sh into my nginx container and see there's a /facebook/index.html file.
If I go to [instance_ip]/index.html, everything works as expected.
If I go to [instance_ip]/, I get a 404 response.
nginx receives the request (I see it in the access logs) but does not redirect to index.html.
Why is the index directive not able to redirect to my index.html file?
I tried to:
Reproduce locally by removing all local images and pulling from my registry.
Kill my ec2 instance and launch a new one.
But I got the same result.
I'm using docker-compose 1.11.1 and docker 17.05.0. On the ec2 instance it's docker 17.03.1, and I tried both docker-compose 1.11.1 and 1.14.1 (a sign that I'm a bit desperate ;)).
An extract from my docker-compose file:
nginx:
image: [image from registry]
build:
context: ./
dockerfile: deploy/nginx.dockerfile
ports:
- "80:80"
depends_on:
- web
My nginx image starts from alpine, installs nginx, adds the index.html file, and copies my conf file to /etc/nginx/nginx.conf.
Here's my nginx config. I checked that it is present on the running containers (both locally and on ec2).
# prevent from exiting when using `run` to launch container
daemon off;
worker_processes auto;
#
pid /run/nginx.pid;
events {
worker_connections 768;
# multi_accept on;
}
http {
##
# Basic Settings
sendfile off;
types_hash_max_size 2048;
include /etc/nginx/mime.types;
default_type application/octet-stream;
##
# SSL Settings
##
ssl_protocols TLSv1 TLSv1.1 TLSv1.2; # Dropping SSLv3, ref: POODLE
ssl_prefer_server_ciphers on;
##
# Logging Settings
##
access_log /var/log/nginx/access.log;
error_log /var/log/nginx/error.log;
##
# Gzip Settings
##
gzip on;
gzip_disable "msie6";
server {
error_log /var/log/nginx/file.log debug;
listen 80 default_server;
# root /home/domain.com;
# Bad developers use underscore in headers.
underscores_in_headers on;
# root should be out of location block
root /facebook;
location / {
index index.html;
# autoindex on;
try_files $uri @app;
}
location @app {
include uwsgi_params;
# Using docker-compose linking, the nginx docker-compose service depends on a 'web' service.
uwsgi_pass web:3033;
}
}
}
I have no idea why the container is behaving differently on the ec2 instance.
Any pointers appreciated!
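One way to narrow this down (a sketch, not from the original post) would be to compare what nginx itself sees inside the EC2 container with what you see locally:
# open a shell inside the nginx container running on the EC2 instance
docker exec -it <nginx_container> sh
nginx -T | grep -A 3 'location /'   # dump the configuration nginx actually loaded
ls -l /facebook/index.html          # confirm the file exists and is readable
wget -O - http://127.0.0.1/         # hit nginx from inside (use curl -v instead if it is installed)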

Rails 5 app with Puma and Nginx - 111: Connection refused while connecting to upstream, client

I am getting this error:
2016/09/29 01:05:39 [error] 7169#0: *3 connect() to unix:/home/deploy/tasks/shared/tmp/sockets/puma.sock failed (111: Connection refused) while connecting to upstream, client: 99.254.197.158, server: localhost, request: "GET / HTTP/1.1", upstream: "http://unix:/home/deploy/tasks/shared/tmp/sockets/puma.sock:/", host: "ec2-54-88-181-57.compute-1.amazonaws.com"
When trying to use this URL for my app:
http://ec2-54-88-181-57.compute-1.amazonaws.com/
The browser also presents this message:
We're sorry, but something went wrong.
If you are the application owner check the logs for more information.
However, I am able to access my app when using Puma directly on port 3000 via this URL:
http://ec2-54-88-181-57.compute-1.amazonaws.com:3000/
And I am able to navigate through all pages of the app this way.
Here are some of my configuration files:
$ ls -l /etc/nginx/sites-enabled
total 0
lrwxrwxrwx 1 root root 34 Sep 28 22:46 default -> /etc/nginx/sites-available/default
$ sudo cat /etc/nginx/nginx.conf
[sudo] password for deploy:
user root; #www-data;
worker_processes 4;
pid /run/nginx.pid;
events {
worker_connections 768;
# multi_accept on;
}
http {
##
# Basic Settings
##
sendfile on;
tcp_nopush on;
tcp_nodelay on;
keepalive_timeout 65;
types_hash_max_size 2048;
# server_tokens off;
# server_names_hash_bucket_size 64;
# server_name_in_redirect off;
include /etc/nginx/mime.types;
default_type application/octet-stream;
##
# Logging Settings
##
access_log /var/log/nginx/access.log;
error_log /var/log/nginx/error.log;
##
# Gzip Settings
##
gzip on;
gzip_disable "msie6";
# gzip_vary on;
# gzip_proxied any;
# gzip_comp_level 6;
# gzip_buffers 16 8k;
# gzip_http_version 1.1;
# gzip_types text/plain text/css application/json application/x-javascript text/xml application/xml application/xml+rss text/javascript;
##
# nginx-naxsi config
##
# Uncomment it if you installed nginx-naxsi
##
#include /etc/nginx/naxsi_core.rules;
##
# nginx-passenger config
##
# Uncomment it if you installed nginx-passenger
##
#passenger_root /usr;
#passenger_ruby /usr/bin/ruby;
##
# Virtual Host Configs
##
include /etc/nginx/conf.d/*.conf;
include /etc/nginx/sites-enabled/*;
}
$ sudo cat /etc/nginx/sites-available/default
upstream app {
# Path to Puma SOCK file, as defined previously
server unix:/home/deploy/tasks/shared/tmp/sockets/puma.sock fail_timeout=0;
}
server {
listen 80;
server_name localhost;
root /home/deploy/tasks/current/public;
try_files $uri/index.html $uri @app;
location @app {
proxy_pass http://app;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_set_header Host $http_host;
proxy_redirect off;
}
error_page 500 502 503 504 /500.html;
client_max_body_size 4G;
keepalive_timeout 10;
}
The server is listening on ports 80, 22 and 3000 (for Puma)
$ netstat -an
Active Internet connections (servers and established)
Proto Recv-Q Send-Q Local Address Foreign Address State
tcp 0 0 0.0.0.0:80 0.0.0.0:* LISTEN
tcp 0 0 0.0.0.0:22 0.0.0.0:* LISTEN
tcp 0 0 0.0.0.0:3000 0.0.0.0:* LISTEN
tcp 0 280 172.31.51.143:22 99.254.197.158:60843 ESTABLISHED
tcp 0 0 172.31.51.143:22 99.254.197.158:60842 ESTABLISHED
tcp 0 0 172.31.51.143:59545 172.31.47.0:5432 ESTABLISHED
tcp 0 0 172.31.51.143:59544 172.31.47.0:5432 ESTABLISHED
tcp6 0 0 :::22 :::* LISTEN
udp 0 0 0.0.0.0:55159 0.0.0.0:*
udp 0 0 0.0.0.0:68 0.0.0.0:*
udp6 0 0 :::12784 :::*
Nginx and Puma are running.
$ ps -ef | grep nginx
root 1644 1586 0 01:21 pts/0 00:00:00 sudo tail -f /var/log/nginx/error.log
root 1645 1644 0 01:21 pts/0 00:00:00 tail -f /var/log/nginx/error.log
root 1698 1 0 01:39 ? 00:00:00 nginx: master process /usr/sbin/nginx
root 1701 1698 0 01:39 ? 00:00:00 nginx: worker process
root 1702 1698 0 01:39 ? 00:00:00 nginx: worker process
root 1703 1698 0 01:39 ? 00:00:00 nginx: worker process
root 1704 1698 0 01:39 ? 00:00:00 nginx: worker process
deploy 1736 1309 0 02:13 pts/1 00:00:00 grep nginx
$ ps -ef | grep puma
deploy 1564 1 0 01:20 ? 00:00:00 puma 3.6.0 (tcp://0.0.0.0:3000) [20160928212850]
deploy 1571 1564 0 01:20 ? 00:00:01 puma: cluster worker 0: 1564 [20160928212850]
I am deploying with Capistrano to an AWS EC2 Ubuntu 14.04 server.
There are no errors related to deployments.
I went through all the blogs and posts I could find, but none of the solutions there worked for me so far.
What should I try next in order to get the Nginx server working?
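Based only on the output above, one thing worth checking (a hedged suggestion, not a confirmed fix) is whether the Puma socket that the nginx upstream points at actually exists, since ps shows Puma bound to tcp://0.0.0.0:3000 rather than to a unix socket:
# does the socket referenced by the nginx upstream exist?
ls -l /home/deploy/tasks/shared/tmp/sockets/
# if it does not, Puma would need to bind there, e.g. in config/puma.rb (hypothetical line):
#   bind "unix:///home/deploy/tasks/shared/tmp/sockets/puma.sock"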
I scrapped the AWS EC2 instance and recreated it using an Ubuntu 14.04 image, which I upgraded to 16.04. I strictly followed the guidance found here:
http://codepany.com/blog/rails-5-puma-capistrano-nginx-jungle-upstart/
and related links from the same blog.
Now Nginx and Puma are working properly together and my app runs perfectly here:
http://ec2-54-159-156-217.compute-1.amazonaws.com/
The only difference from the guidelines is that I kept the AWS RDS instance for the database. I used RVM on the production server although I am using rbenv on my Mac. I used the ubuntu user (like root) for deployment, since I suspect all the troubles I had were related to permissions and I did not know how to fix them.
Many of the errors I encountered earlier, while trying to start Puma properly on the socket and make it work with Nginx, especially Puma not restarting after
cap production deploy
were related to generating the secret and putting this value in the appropriate file. For me, writing it to the /etc/environment file worked best.
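For reference, a minimal sketch of that step (paths taken from the configs above; the generated value is a placeholder):
cd /home/deploy/tasks/current
RAILS_ENV=production bundle exec rake secret                  # prints a new secret key
echo 'SECRET_KEY_BASE=<generated value>' | sudo tee -a /etc/environment
# log out and back in (or restart Puma) so the new environment is picked up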
I also made changes to /etc/ssh/sshd_config in order to get root or ubuntu access through ssh. In this matter, this link
https://forums.aws.amazon.com/thread.jspa?threadID=86876
was very useful.

Passenger + Rails app only works on port 3000

Passenger will start up on port 80, but only the home page (which is 100% HTML) shows up. No other page will resolve. And even stranger, all traffic that fails to resolve is forwarded to HTTPS (which, of course, also fails to resolve).
This works:
rvmsudo passenger start --daemonize
This does not work:
rvmsudo passenger start --daemonize --port 80
My config.ru is pretty standard, too:
require ::File.expand_path('../config/environment', __FILE__)
run Rails.application
I am using Rails 4.2.0 and Ruby 2.2.2 with Passenger 5.0.7
Anyone have any ideas?
nginx conf:
user www-data;
worker_processes 4;
pid /run/nginx.pid;
events {
worker_connections 768;
# multi_accept on;
}
http {
sendfile on;
tcp_nopush on;
tcp_nodelay on;
keepalive_timeout 65;
types_hash_max_size 2048;
# server_tokens off;
include /etc/nginx/mime.types;
default_type application/octet-stream;
##
# Logging Settings
##
access_log /var/log/nginx/access.log;
error_log /var/log/nginx/error.log;
##
# Gzip Settings
##
gzip on;
gzip_disable "msie6";
##
# nginx-naxsi config
##
# Uncomment it if you installed nginx-naxsi
##
#include /etc/nginx/naxsi_core.rules;
##
# nginx-passenger config
##
# Uncomment it if you installed nginx-passenger
##
passenger_root /usr/lib/ruby/vendor_ruby/phusion_passenger/locations.ini;
#passenger_ruby /home/ubuntu/.rvm/gems/ruby-2.2.2
app specific conf:
map $http_upgrade $connection_upgrade {
default upgrade;
'' close;
}
server {
listen 80;
server_name www.mydomain.com;
# Tells Nginx to serve static assets from this directory.
root /var/www/mydomain/public;
location / {
# Tells Nginx to forward all requests for www.foo.com
# to the Passenger Standalone instance listening on port 4000.
proxy_pass http://127.0.0.1:4000;
# These are "magic" Nginx configuration options that
# should be present in order to make the reverse proxying
# work properly. Also contains some options that make WebSockets
# work properly with Passenger Standalone. Please learn more at
# http://nginx.org/en/docs/http/ngx_http_proxy_module.html
proxy_http_version 1.1;
proxy_set_header Host $http_host;
proxy_set_header Upgrade $http_upgrade;
proxy_set_header Connection $connection_upgrade;
proxy_buffering off;
}
}
I think you need a different setup/configuration if you want to run Passenger behind Nginx.
Nginx should listen on port 80 (that is what is specified in the server section of your nginx configuration) and forward traffic to your app running under Passenger's hood (which the configuration expects on port 4000, but which you started by hand on port 80), if I do not misread.
Nginx is probably telling you that it's unhappy in its error log, /var/log/nginx/error.log. You can confirm what is sitting on port 80 by executing netstat -tlpn.
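A rough sketch of that setup, matching the proxy_pass http://127.0.0.1:4000 in the app-specific conf above (commands assume the standalone instance on port 80 is still running):
sudo netstat -tlpn | grep -E ':80|:4000'    # see what currently owns ports 80 and 4000
rvmsudo passenger stop --port 80            # stop the standalone instance that grabbed port 80
rvmsudo passenger start --daemonize --address 127.0.0.1 --port 4000
sudo service nginx restart                  # nginx on :80 now proxies to Passenger on :4000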
