I'm attempting to serve a simple static page with Nginx on Cloud Run, but the container fails to start serving properly.
The container starts, as shown by the debug lines echoed from docker-entrypoint.sh:
2019-05-26T22:19:02.340289Z testing config
2019-05-26T22:19:02.433935Z nginx: the configuration file /etc/nginx/nginx.conf syntax is ok
2019-05-26T22:19:02.434903Z nginx: configuration file /etc/nginx/nginx.conf test is successful
2019-05-26T22:19:02.436605Z starting on 8080
2019-05-26T22:19:02.487188Z 2019/05/26 22:19:02 [alert] 6#6: prctl(PR_SET_DUMPABLE) failed (22: Invalid argument)
and eventually terminates:
2019-05-26T22:20:00.153060259Z Container terminated by the container manager on signal 9.
To conform with the Cloud Run service contract, specifically listening on $PORT, docker-entrypoint.sh performs $PORT substitution in conf.d/*.conf.
FROM nginx:1.15-alpine
COPY nginx-default.conf.template /etc/nginx/conf.d/default.conf.template
COPY docker-entrypoint.sh /
ENTRYPOINT ["/docker-entrypoint.sh"]
CMD ["nginx", "-g", "daemon off;"]
I'm pretty confident the issue lies within docker-entrypoint.sh, because once $PORT is hardcoded as 8080 and the image looks like this:
FROM nginx:1.15-alpine
COPY nginx-default.conf /etc/nginx/conf.d/default.conf
Cloud Run "runs" fine.
The code performing the substitution:
export NGINX_PORT=${PORT:-8080}
for f in $(find /etc/nginx/conf.d/ -type f -name '*.conf'); do
    envsubst '$NGINX_PORT' < $f > $f
done
NOTE: reading < $f and writing > $f to the same file seemed to work when I tested by running the container locally.
Expected
nginx configuration gets the $PORT placeholder substituted with the actual value
container runs and listens on $PORT on Cloud Run
Actual
container fails to run on Cloud Run
container runs and listens on $PORT locally
I have published a blog post showing how to run nginx in a Cloud Run container (alongside another process).
You can read the article here: https://ahmet.im/blog/cloud-run-multiple-processes-easy-way/ or take a look at the code repository at https://github.com/ahmetb/multi-process-container-lazy-solution
Basically, the nginx.conf file should be something like:
events {}

http {
    server {
        listen 8080; # Cloud Run PORT env variable

        access_log /dev/stdout;
        error_log /dev/stdout;

        # if you need to serve static assets, specify an absolute path like below
        location /static/ {
            alias /src/static/;
        }

        # anything else is routed to your app, which you would start on port 8081
        location / {
            proxy_pass http://localhost:8081;
        }
    }
}

daemon off;
pid /run/nginx.pid;
You can sort of safely hard-code port 8080 in your nginx.conf, as it's very unlikely to change in the foreseeable future on Cloud Run.
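For completeness, a minimal Dockerfile sketch that would package the config above for Cloud Run (the file and directory names here are illustrative, not from the original answer):

# Sketch: serve the nginx.conf shown above on Cloud Run.
# "nginx.conf" and "static/" are illustrative names.
FROM nginx:alpine
COPY nginx.conf /etc/nginx/nginx.conf
COPY static/ /src/static/
# "daemon off;" is already set in the config, so nginx stays in the foreground
CMD ["nginx", "-c", "/etc/nginx/nginx.conf"]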
Fixed by replacing:
for f in $(find /etc/nginx/conf.d/ -type f -name '*.conf'); do
    envsubst '$NGINX_PORT' < $f > $f
done
with:
sed -i "s/\${NGINX_PORT}/${NGINX_PORT}/g" /etc/nginx/conf.d/*.conf
and changing $NGINX_PORT -> ${NGINX_PORT} in the *.conf files to avoid substitution ambiguity. (The original loop silently emptied the files: with < $f > $f the shell truncates $f for writing before envsubst ever reads it, whereas sed -i writes through a temporary file.)
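Putting it together, a minimal sketch of the corrected docker-entrypoint.sh (assuming the template is first rendered to default.conf, a step not shown in the question):

#!/bin/sh
set -e

# Default to 8080 when $PORT is not set (e.g. when running locally).
export NGINX_PORT=${PORT:-8080}

# Render the template, then substitute the placeholder in place.
# Unlike `< $f > $f`, where the shell truncates the file before
# envsubst ever reads it, sed -i writes through a temporary file.
cp /etc/nginx/conf.d/default.conf.template /etc/nginx/conf.d/default.conf
sed -i "s/\${NGINX_PORT}/${NGINX_PORT}/g" /etc/nginx/conf.d/*.conf

echo "starting on ${NGINX_PORT}"
exec "$@"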
Related
I'm trying to spin up a dockerized website (React JS) hosted on my Vultr server. I used the one-click install feature that Vultr provides to install Docker (Ubuntu 20.04).
I can get my website started with HTTP and port 8080, but what I'm looking to accomplish is the following:
1. How to add SSL to my website with Docker (since my website is dockerized).
PS: I already have a domain name.
2. How to get rid of the port number, as it doesn't look very professional.
For number 2, I did try adding a reverse proxy, but I'm not sure if I did it correctly.
Also, I'm not sure if this was the right approach, but I did install letsencrypt on my host Vultr machine (nginx was needed too for some reason). I navigated to my domain name and, sure enough, I see my website secured (https), but with the "Welcome to Nginx" landing page. But again, is this the correct way? If so, how do I display my secured React website instead of the default "Welcome to Nginx" landing page?
Or should I not have installed nginx or letsencrypt on the host machine at all?
As you can see, I'm an absolute beginner in docker!
This is my Dockerfile-prod
FROM node as build
WORKDIR /usr/src/app
COPY package*.json ./
RUN yarn cache clean && yarn --update-checksums
COPY . ./
RUN yarn && yarn build
# Stage - Production
FROM nginx
COPY --from=build /usr/src/app/build /usr/share/nginx/html
EXPOSE 80 443
# Make sites-available directory
RUN mkdir "etc/nginx/sites-available"
# Make sites-enabled directory
RUN mkdir "etc/nginx/sites-enabled"
# add nginx live config
ADD config/*****.com /etc/nginx/sites-available/*****.com
# create symlinks
RUN ln -s /etc/nginx/sites-available/*****.com /etc/nginx/sites-enabled/*****
# make certs dir as volume
VOLUME ["/etc/letsencrypt"]
CMD ["nginx", "-g", "daemon off;"]
I have two configuration files. PS: I was trying to follow this guy's repo.
I feel like everything is there in front of me to get it to work, but I just can't figure it out. If you guys can help me out, I'd really appreciate it! Thanks in advance!
config 1 *****-live.com
server {
    listen 80;
    listen [::]:80;
    server_name *****.com www.*****.com;

    location ~ /.well-known/acme-challenge {
        allow all;
        root /var/www/html;
    }
}

server {
    listen 443 ssl;
    server_name *****.com www.*****.com;

    ssl_certificate /etc/letsencrypt/live/*****.com/fullchain.pem;
    ssl_certificate_key /etc/letsencrypt/live/*****.com/privkey.pem;

    location / {
        proxy_pass http://172.17.0.2:8080;
    }
}
config 2 *****-staging.com
server {
    listen 80;
    listen [::]:80;
    server_name *****.com www.*****.com;

    location ~ /.well-known/acme-challenge {
        allow all;
        root /var/www/html;
    }
}
This is my host machine's letsencrypt/live directory.
I'm setting up an nginx server using Docker. If I add a new file or directory to all-html, can that content be loaded by nginx dynamically (without reloading nginx)?
I'm only able to load the new contents by rebuilding the image again (without cache). Is there any way to set up the nginx configuration to dynamically load the content without rebuilding the Docker image?
Dockerfile
FROM ubuntu:latest
RUN apt-get update
RUN apt-get install -y nginx
RUN rm /etc/nginx/nginx.conf
ADD nginx.conf /etc/nginx/
ADD web /usr/share/nginx/html/
ADD web /var/www/html/
RUN echo "daemon off;" >> /etc/nginx/nginx.conf
EXPOSE 90
CMD service nginx start
nginx.conf
worker_processes 1;

events { worker_connections 1024; }

http {
    include mime.types;
    sendfile on;

    server {
        root /usr/share/nginx/html/;
        index index.html;
        server_name localhost;
        listen 90;

        location /all-html {
            autoindex on;
        }
    }
}
ls web/
all-html icons index.html mime.types
ls web/all-html/
1.html ntf.zip 2.html
Use a volume to mount the web directory, so that the files stay in sync and nginx serves the new content without you rebuilding the Docker image.
You can declare the volume in the Dockerfile, or mount it while starting the container, like below. Note that the nginx.conf above serves from /usr/share/nginx/html/, so that is the path to mount over:
-v "$(pwd)/web:/usr/share/nginx/html/"
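For example, the full command might look like this ("my-nginx-image" is an illustrative image name):

# Bind-mount the local web directory over nginx's web root so edits on
# the host appear immediately, without rebuilding the image. The image
# above listens on port 90.
docker run -d -p 90:90 -v "$(pwd)/web:/usr/share/nginx/html/" my-nginx-image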
You can mount a host directory as a volume inside the container, make changes in the host directory (they will propagate inside the container), and then run docker exec ... nginx -s reload or send SIGHUP to the nginx master process; is that the bash snippet you mentioned? Alternatively, you can run another process inside the container that periodically checks for changes and reloads the nginx process.
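For example (the container name is illustrative):

# After editing files in the mounted host directory, reload nginx
# inside the running container:
docker exec my-nginx-container nginx -s reload

# Or send SIGHUP to the nginx master process directly (this assumes
# nginx runs as PID 1 in the container):
docker exec my-nginx-container kill -s HUP 1

Note that for plain static files with autoindex on, nginx reads the directory on every request, so a reload is usually only needed for configuration changes.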
I'm starting to do some small tests using Docker in order to set up my own server, and I'm running into a few issues.
I use the nginx-php-fpm image, which has most of the services I need to set up my server.
This is the Dockerfile I wrote to set up a basic server. It built without any issues, and I named the image nginx-custom-server.
Dockerfile
FROM "richarvey/nginx-php-fpm"
ADD /conf/simple-project.conf /etc/nginx/sites-available/simple-project.conf
RUN mkdir /srv/www/
RUN mkdir /LOGS/
RUN ln -s /etc/nginx/sites-available/simple-project.conf /etc/nginx/sites-enabled/simple-project.conf
RUN rm /etc/nginx/sites-enabled/default.conf
CMD ["/start.sh"]
I ran it using the following command via terminal.
docker run --name=server-stack -v /home/ismael/Documentos/docker-nginx/code:/srv/www -v /home/ismael/Documentos/docker-nginx/logs:/LOGS -p 80:80 -d nginx-custom-server:stack
In the /srv/www folder I have a simple hello-world PHP file. I want to make changes to my code on my local machine and sync them with the Docker container using the shared code folder.
The nginx logs are empty, so I don't know what is wrong. I set up logs in my conf, but nginx didn't create them, so I think there is a problem with the general nginx conf.
Here is the conf I'm using for my hello world. I also mapped this server name in the hosts file of the host machine.
simple-project.conf
server {
    listen 0.0.0.0:80;
    server_name simple-project.olive.com;
    root /srv/www/simple-project/;

    location / {
        try_files $uri /index.php$is_args$args;
    }

    location ~ ^/index\.php(/|$) {
        # the ubuntu default
        fastcgi_pass /var/run/php-fpm.sock:9000;
        fastcgi_split_path_info ^(.+\.php)(/.*)$;
        include fastcgi_params;
        fastcgi_param APPLICATION_ENV int;
        fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
        fastcgi_param HTTPS off;
    }

    location ~ \.php$ {
        return 404;
    }

    error_log /LOGS/custom_error.log;
    access_log /LOGS/custom_access.log;
}
EDIT: the error when I try to access the server from inside the docker container.
bash-4.4# wget localhost:80 > /tmp/output.html
--2019-03-27 12:33:11--  http://localhost/
Resolving localhost... 127.0.0.1, ::1
Connecting to localhost|127.0.0.1|:80... failed: Connection refused.
Connecting to localhost|::1|:80... failed: Address not available.
Retrying.
From what I can tell, there are two reasons why you can't access the server.
The first is that you don't forward any ports from the container to the host. You should include the -p 80:80 argument to your docker run command.
The second is that you're attempting to listen on what I assume to be the IP of the container itself, which is not static (by default). In the nginx config, you should replace listen 172.17.0.2:80; with listen 0.0.0.0:80;.
With these two modifications in place, you should be able to access your server.
A different approach (though not recommended) would be to start the container with the --network=host parameter. This way, the host's network is actually visible from within the container. In this scenario, you would only need to set the nginx config to listen on a valid address.
However, if the problem persists, a good approach would be to run docker exec -it {$container_id} bash while the container is running and see if you can access the server from within the container itself. If that works, it means the server is running correctly but that, for some other reason, the port is not being correctly forwarded to the host.
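Concretely, the checks described above might look like this, reusing the names from the question:

# 1. Run the container with port 80 published to the host.
docker run --name=server-stack -p 80:80 -d nginx-custom-server:stack

# 2. Open a shell inside the running container.
docker exec -it server-stack bash

# 3. From inside the container, check whether nginx answers locally.
wget -O /tmp/output.html http://localhost:80/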
I need some help configuring the PhpStorm debugger with a particular development setup.
On my PC (192.168.1.23) I have the source code of a PHP project, a DBMS, and an instance of nginx as a reverse proxy. Nginx is configured to send all the traffic to a Docker container:
server {
    listen 80;
    listen [::]:80;
    server_name www.mysite.local;
    root /usr/share/nginx/html/;

    # pass PHP scripts to FastCGI server
    location ~ \.php$ {
        include snippets/fastcgi-php.conf;
        fastcgi_pass unix:/var/run/php/php7.2-fpm.sock;
    }

    location / {
        index index.html index.php;
    }
}

upstream backend {
    # configuration in order to use apache inside the docker container
    server 172.17.0.2:80;
}
The docker container (172.17.0.2) has been created with:
docker run -dP --add-host=db.local:172.17.0.1 \
-e remote_connect_back_xdbg=1 \
-e remote_host_xdbg='192.168.1.23' \
-v /opt/live:/opt/live \
--name local_php_apache_container local_php_apache_image
So Docker mounts my project (located at /opt/live) into /opt/live of the container. The container is a Debian 9 with PHP 5.3 + Apache2.
It is started automatically at boot of the PC, and I get a shell inside it with this command:
docker exec -it local_php_apache_container /bin/bash
Inside the docker container, the xdebug configuration in php.ini is:
[xdebug]
xdebug.remote_connect_back=${remote_connect_back_xdbg}
xdebug.remote_enable=1
xdebug.remote_port=10123
xdebug.remote_handler=dbgp
xdebug.remote_log=/stackdriver/log/xdebug.log
xdebug.remote_mode=req
xdebug.remote_autostart=1
xdebug.remote_host=${remote_host_xdbg}
xdebug.idekey="netbeans-xdebug"
The idekey is netbeans-xdebug because the debugger works properly with NetBeans (https://www.mysite.local/index.php?XDEBUG_SESSION_START=netbeans-xdebug, https with a local untrusted certificate).
But I have a lot of problems with PhpStorm and the location of the PHP interpreter, both with the PHP Built-in Web Server and with the PHP Remote Debug configuration...
Any suggestions?
I'm running nginx and unicorn in a docker container managed by supervisord.
So, supervisord is the docker command. It in turn spawns nginx and unicorn. Unicorn talks to nginx over a socket. nginx listens on port 80.
I also have a logspout docker container running, basically piping all the docker logs to papertrail by listening to docker.sock.
The problem is nginx is constantly spewing thousands of mundane entries to the logs:
172.31.45.231 - - [25/May/2016:05:53:33 +0000] "GET / HTTP/1.0" 200 12961 0.0090
I'm trying to disable it.
So far I've:
set access_log /dev/null; in nginx.conf and the vhost files
tried to tell supervisord to stop logging the requests:
[program:nginx]
command=bash -c "/usr/sbin/nginx -c /etc/nginx/nginx.conf"
stdout_logfile=/dev/null
stderr_logfile=/dev/null
tried to tell supervisord to send the logs to syslog, not stdout:
[program:nginx]
command=bash -c "/usr/sbin/nginx -c /etc/nginx/nginx.conf"
stdout_logfile=syslog
stderr_logfile=syslog
set the log level in Unicorn to warn in the Rails code, and via an env var
Full supervisord conf:
[supervisord]
nodaemon=true
loglevel=warn
[program:unicorn]
command=bundle exec unicorn -c /railsapp/config/unicorn.rb
process_name=%(program_name)s_%(process_num)02d
numprocs=1
stdout_logfile=/dev/stdout
stdout_logfile_maxbytes=0
redirect_stderr=true
[program:nginx]
command=bash -c "/usr/sbin/nginx -c /etc/nginx/nginx.conf"
stdout_logfile=/dev/null
stderr_logfile=/dev/null
You could try disabling the nginx access log with:
access_log off;
The goal remains to modify the nginx.conf used by the nginx image.
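One way to do that without rebuilding the image is to bind-mount a modified config over the one inside the container; a minimal sketch (the paths and image name are illustrative):

# Put `access_log off;` inside the http block of your real nginx.conf,
# then start the container with the modified file mounted read-only
# over the image's default config:
docker run -d -p 80:80 \
  -v "$(pwd)/nginx.conf:/etc/nginx/nginx.conf:ro" \
  nginx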