I want to run nginx in a container and have nginx read the container's environment variables. I've searched and found that Lua modules make this possible, but for some reason I can't load the Lua modules in nginx itself. Please help; my Dockerfile and nginx.conf are below.
Dockerfile
FROM nginx:1.15-alpine

RUN mkdir -p /run/nginx && \
    apk add nginx-mod-http-lua

WORKDIR /usr/src/app
COPY build /usr/src/app/build
COPY mime.types /usr/src/app
COPY nginx.conf /usr/src/app

EXPOSE 8080
CMD [ "nginx", "-c", "/usr/src/app/nginx.conf", "-g", "daemon off;" ]
nginx.conf
load_module /usr/lib/nginx/modules/ndk_http_module.so;
load_module /usr/lib/nginx/modules/ngx_http_lua_module.so;
pcre_jit on;

events {
}

http {
    server {
        listen 8080;

        set_by_lua $db_api 'return os.getenv("DB_API")';

        location /db/ {
            proxy_pass $db_api;
        }

        location / {
            root /usr/src/app/build;
            index index.html;
        }
    }
}
and these are the errors I get:
2020/09/24 17:06:49 [alert] 1#1: detected a LuaJIT version which is not OpenResty's; many optimizations will be disabled and performance will be compromised (see https://github.com/openresty/luajit2 for OpenResty's LuaJIT or, even better, consider using the OpenResty releases from https://openresty.org/en/download.html)
nginx: [alert] detected a LuaJIT version which is not OpenResty's; many optimizations will be disabled and performance will be compromised (see https://github.com/openresty/luajit2 for OpenResty's LuaJIT or, even better, consider using the OpenResty releases from https://openresty.org/en/download.html)
2020/09/24 17:06:49 [error] 1#1: lua_load_resty_core failed to load the resty.core module from https://github.com/openresty/lua-resty-core; ensure you are using an OpenResty release from https://openresty.org/en/download.html (rc: 2, reason: module 'resty.core' not found:
no field package.preload['resty.core']
no file './resty/core.lua'
no file '/usr/share/luajit-2.1.0-beta3/resty/core.lua'
no file '/usr/local/share/lua/5.1/resty/core.lua'
no file '/usr/local/share/lua/5.1/resty/core/init.lua'
no file '/usr/share/lua/5.1/resty/core.lua'
no file '/usr/share/lua/5.1/resty/core/init.lua'
no file '/usr/share/lua/common/resty/core.lua'
no file '/usr/share/lua/common/resty/core/init.lua'
no file './resty/core.so'
no file '/usr/local/lib/lua/5.1/resty/core.so'
no file '/usr/lib/lua/5.1/resty/core.so'
no file '/usr/local/lib/lua/5.1/loadall.so'
no file './resty.so'
no file '/usr/local/lib/lua/5.1/resty.so'
no file '/usr/lib/lua/5.1/resty.so'
no file '/usr/local/lib/lua/5.1/loadall.so')
and this is the docker build and run commands:
docker build -t client:1.0.0 --no-cache .
docker run -p 80:8080 -it -e DB_API=DB_API_URL client:1.0.0
Adding the env directive at the top of nginx.conf makes it work, as noted in the comments on the question.
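For reference, env is a core nginx directive for the main context: without it, nginx scrubs inherited environment variables before worker processes (and thus Lua) can see them. A minimal sketch of the top of the nginx.conf above with that change:

```nginx
# main context, alongside load_module
env DB_API;  # preserve the container's DB_API so os.getenv("DB_API") returns it

load_module /usr/lib/nginx/modules/ndk_http_module.so;
load_module /usr/lib/nginx/modules/ngx_http_lua_module.so;
```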
I created a website and a corresponding admin backend with Angular, and I want to serve them using Nginx via Docker. The website should be available via 'mydomain.com/' whereas the backend should be available via 'mydomain.com/backend/'. Getting the website running works perfectly fine, but the path resolution for the backend just does not work.
This is my Dockerfile:
# Webpage
FROM node:16 as build-webpage
WORKDIR /usr/local/app
COPY ./webpage /usr/local/app/
RUN npm ci
RUN npm run build # ng build
# Backend
FROM node:16 as build-backend
WORKDIR /usr/local/app
COPY ./backend /usr/local/app/
RUN npm ci
RUN npm run build
# Nginx
FROM nginx:latest
COPY --from=build-webpage /usr/local/app/dist/webpage /usr/share/nginx/html/webpage
COPY --from=build-backend /usr/local/app/dist/backend /usr/share/nginx/html/backend
COPY nginx.conf /etc/nginx/nginx.conf
EXPOSE 80
The nginx.conf contains these location blocks:
location /backend/ {
    root /usr/share/nginx/html/; # backend should get appended (due to root)
    index index.html index.htm;
    include /etc/nginx/mime.types;
}

location / {
    root /usr/share/nginx/html/webpage/; # only / is appended, so webpage is appended manually
    index index.html index.htm;
    include /etc/nginx/mime.types;
}
When I then try to open 'mydomain.com/backend' or 'mydomain.com/backend/index.html', the following log is generated by Nginx:
open() /usr/share/nginx/html/webpage/backend/index.html failed …
As it seems to me, Nginx matches the "/-Location" instead of the "/backend/-Location", since the "webpage"-folder is part of the directory Nginx resolves.
According to various blog posts, Nginx chooses the "best-fitting" location block when resolving a URL, so "/backend/" should be chosen. I also tried replacing location /backend/ with location ^~ \/backend\/, hoping the ^~ modifier would enforce that block since it appears before the "/" location in the config file. Unfortunately, that made no difference either.
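As an aside on that last attempt: the ^~ modifier takes a literal prefix, not a regex, so escaping the slashes changes what is matched. A sketch of that variant without the escaping (same root/index lines as above) would be:

```nginx
# ^~ means "prefix match, and skip regex locations if this wins"
location ^~ /backend/ {
    root /usr/share/nginx/html/;  # /backend/ from the URI is appended to root
    index index.html index.htm;
}
```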
So I'd be grateful if someone could tell me what I'm doing wrong.
Thank you!
I have a problem adding authentication (due to new requirements) while using Apache NiFi without SSL, running it in a container.
The image version is apache/nifi:1.13.0
It's said that SSL is unconditionally required to add authentication, and it's recommended to use the tls-toolkit bundled in the NiFi image to set up SSL. I worked through the following process:
I left out the environment variable nifi.web.http.port used for HTTP communication, and started the container in standalone mode with nifi.web.https.port=9443:
docker-compose up
I attached to the container and ran the tls-toolkit script from the nifi-toolkit:
cd /opt/nifi/nifi-toolkit-1.13.0/bin &&\
sh tls-toolkit.sh standalone \
-n 'localhost' \
-C 'CN=yangeok,OU=nifi' \
-O -o $NIFI_HOME/conf
Attempt 1
I organized the files in $NIFI_HOME/conf. Three files (keystore.jks, truststore.jks, and nifi.properties) were created in the folder localhost, named after the value passed to the -n option of the tls-toolkit script:
cd $NIFI_HOME/conf &&
cp localhost/*.jks .
The generated $NIFI_HOME/conf/localhost/nifi.properties was not copied over wholesale; only the following properties were imported into $NIFI_HOME/conf/nifi.properties:
nifi.web.http.host=
nifi.web.http.port=
nifi.web.https.host=localhost
nifi.web.https.port=9443
Restarted container
docker-compose restart
The container died with below error log:
Only one of the HTTP and HTTPS connectors can be configured at one time
Attempt 2
After executing the tls-toolkit script, all files were overwritten, including nifi.properties:
cd $NIFI_HOME/conf &&
cp localhost/* .
Restarted container
docker-compose restart
The container died with the same error log.
Hint
The dead container's volume was still accessible, so I copied out nifi.properties and checked it; after docker-compose up or restart, it changed as follows.
The part I overwrote or modified:
nifi.web.http.host=
nifi.web.http.port=
nifi.web.http.network.interface.default=
#############################################
nifi.web.https.host=localhost
nifi.web.https.port=9443
The changed part after re-running the container:
nifi.web.http.host=a8e283ab9421
nifi.web.http.port=9443
nifi.web.http.network.interface.default=
#############################################
nifi.web.https.host=a8e283ab9421
nifi.web.https.port=9443
I'd like to know how to run the container with nifi.web.http.host and nifi.web.http.port left empty. My docker-compose.yml file is as follows:
version: '3'

services:
  nifi:
    build:
      context: .
      args:
        NIFI_VERSION: ${NIFI_VERSION}
    container_name: nifi
    user: root
    restart: unless-stopped
    network_mode: bridge
    ports:
      - ${NIFI_HTTP_PORT}:8080/tcp
      - ${NIFI_HTTPS_PORT}:9443/tcp
    volumes:
      - ./drivers:/opt/nifi/nifi-current/drivers
      - ./templates:/opt/nifi/nifi-current/templates
      - ./data:/opt/nifi/nifi-current/data
    environment:
      TZ: 'Asia/Seoul'
      ########## JVM ##########
      NIFI_JVM_HEAP_INIT: ${NIFI_HEAP_INIT} # The initial JVM heap size.
      NIFI_JVM_HEAP_MAX: ${NIFI_HEAP_MAX} # The maximum JVM heap size.
      ########## Web ##########
      # NIFI_WEB_HTTP_HOST: ${NIFI_HTTP_HOST} # nifi.web.http.host
      # NIFI_WEB_HTTP_PORT: ${NIFI_HTTP_PORT} # nifi.web.http.port
      NIFI_WEB_HTTPS_HOST: ${NIFI_HTTPS_HOST} # nifi.web.https.host
      NIFI_WEB_HTTP_PORT: ${NIFI_HTTPS_PORT} # nifi.web.https.port
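One thing worth noting in the compose file itself: the last entry under Web sets NIFI_WEB_HTTP_PORT even though its comment says nifi.web.https.port, which may be exactly what re-populates the HTTP values on restart. A sketch of the environment block with only the HTTPS variables set (variable names per the standard NiFi image; this is a guess at the intent, not a confirmed fix) would be:

```yaml
    environment:
      # leave NIFI_WEB_HTTP_HOST / NIFI_WEB_HTTP_PORT entirely unset
      NIFI_WEB_HTTPS_HOST: ${NIFI_HTTPS_HOST}   # nifi.web.https.host
      NIFI_WEB_HTTPS_PORT: ${NIFI_HTTPS_PORT}   # nifi.web.https.port
```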
Thank you
I am running Nextcloud in a Docker container on my Raspberry Pi. I have set it up with self-signed certificates as described here. These are the files I use to run docker-compose:
Dockerfile:
FROM nextcloud:apache
COPY setssl.sh /usr/local/bin/
RUN /usr/local/bin/setssl.sh mail@mail.com 172.30.0.2
docker-compose:
version: '2'

services:
  nextcloud:
    image: nextcloud_ssl
    build: .
    container_name: nextcloud
    restart: always
    user: 1000:1000
    ports:
      - 8443:443
    volumes:
      - /home/pi/nextcloud/ncdata:/var/www/html
      - /home/pi/nextcloud/ssl:/etc/ssl/nextcloud
      - /home/pi/pictures:/var/www/html/data/files/pics
      - ./php.ini:/usr/local/etc/php/conf.d/zzz-custom.ini
    environment:
      - MYSQL_PASSWORD=xxx
      - MYSQL_DATABASE=xxx
      - MYSQL_USER=xxx
      - MYSQL_HOST=xxx
      - MYSQL_PORT=xxx

networks:
  default:
    external:
      name: mariabridge
setssl.sh
# setssl.sh
# USAGE: setssl.sh <email> <domain>
echo 'SSLCipherSuite EECDH+AESGCM:EDH+AESGCM:AES256+EECDH:AES256+EDH
SSLProtocol All -SSLv2 -SSLv3
SSLHonorCipherOrder On
Header always set Strict-Transport-Security "max-age=63072000; includeSubdomains"
Header always set X-Frame-Options DENY
Header always set X-Content-Type-Options nosniff
SSLCompression off
SSLSessionTickets Off' > /etc/apache2/conf-available/ssl-params.conf
echo "<IfModule mod_ssl.c>
<VirtualHost _default_:443>
ServerAdmin $2
ServerName $1
" > /etc/apache2/sites-available/default-ssl.conf
echo '
DocumentRoot /var/www/html
ErrorLog ${APACHE_LOG_DIR}/error.log
CustomLog ${APACHE_LOG_DIR}/access.log combined
SSLEngine on
SSLCertificateFile /etc/ssl/nextcloud/cert.pem
SSLCertificateKeyFile /etc/ssl/nextcloud/key.pem
<FilesMatch "\.(cgi|shtml|phtml|php)$">
SSLOptions +StdEnvVars
</FilesMatch>
<Directory /usr/lib/cgi-bin>
SSLOptions +StdEnvVars
</Directory>
</VirtualHost>
</IfModule>' >> /etc/apache2/sites-available/default-ssl.conf
a2enmod ssl >/dev/null
a2ensite default-ssl >/dev/null
a2enconf ssl-params >/dev/null
According to the linked thread above, I have to run these commands to update my container:
docker-compose build --pull
docker-compose up -d
I have used this successfully before. Now, I'm getting the following errors:
docker-compose build --pull
Building with native build. Learn about native build in Compose here: https://docs.docker.com/go/compose-native-build/
Building nextcloud
Sending build context to Docker daemon 45.65GB
Step 1/3 : FROM nextcloud:apache
apache: Pulling from library/nextcloud
Digest: sha256:99d94124b2024c9f7f38dc12144a92bc0d68d110bcfd374169ebb7e8df0adf8e
Status: Image is up to date for nextcloud:apache
---> 0dd24a9c32e9
Step 2/3 : COPY setssl.sh /usr/local/bin/
---> Using cache
---> 360b5260b30a
Step 3/3 : RUN /usr/local/bin/setssl.sh mail@mail.com 172.30.0.2
---> Running in 4e77f23a45f2
touch: setting times of '/var/lib/apache2/module/enabled_by_admin/socache_shmcb': Operation not permitted
ERROR: Failed to create marker '/var/lib/apache2/module/enabled_by_admin/socache_shmcb'!
ERROR: Could not enable dependency socache_shmcb for ssl, aborting
touch: setting times of '/var/lib/apache2/site/enabled_by_admin/default-ssl': Operation not permitted
ERROR: Failed to create marker '/var/lib/apache2/site/enabled_by_admin/default-ssl'!
touch: setting times of '/var/lib/apache2/conf/enabled_by_admin/ssl-params': Operation not permitted
ERROR: Failed to create marker '/var/lib/apache2/conf/enabled_by_admin/ssl-params'!
The command '/bin/sh -c /usr/local/bin/setssl.sh mail@mail.com 172.30.0.2' returned a non-zero code: 1
ERROR: Service 'nextcloud' failed to build
I tried running this as sudo as well (I usually run it as pi), but it didn't solve the issue. Not sure what to do here. The Nextcloud forums are not really active enough to ask for help.
Step 3/3 : RUN /usr/local/bin/setssl.sh mail@mail.com 172.30.0.2
---> Running in 4e77f23a45f2
touch: setting times of '/var/lib/apache2/module/enabled_by_admin/socache_shmcb': Operation not permitted
ERROR: Failed to create marker '/var/lib/apache2/module/enabled_by_admin/socache_shmcb'!
ERROR: Could not enable dependency socache_shmcb for ssl, aborting
touch: setting times of '/var/lib/apache2/site/enabled_by_admin/default-ssl': Operation not permitted
ERROR: Failed to create marker '/var/lib/apache2/site/enabled_by_admin/default-ssl'!
touch: setting times of '/var/lib/apache2/conf/enabled_by_admin/ssl-params': Operation not permitted
ERROR: Failed to create marker '/var/lib/apache2/conf/enabled_by_admin/ssl-params'!
The command '/bin/sh -c /usr/local/bin/setssl.sh mail@mail.com 172.30.0.2' returned a non-zero code: 1
If the RUN step that's failing now used to work, it's because the upstream nextcloud:apache image added a USER line to its Dockerfile, and the USER it added does not have access to /var/lib/apache2.
The solution is simple, if inelegant.
Figure out which USER is set in the nextcloud:apache upstream image.
Add a USER command of your own, before the RUN that executes setssl.sh, to switch to a user that has access to /var/lib/apache2.
Add another USER command to switch back to the user from nextcloud:apache.
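A sketch of what those steps look like in the Dockerfile (the user names here are assumptions; check the upstream image with docker inspect or its published Dockerfile):

```dockerfile
FROM nextcloud:apache
COPY setssl.sh /usr/local/bin/

USER root          # assumption: root can write the markers under /var/lib/apache2
RUN /usr/local/bin/setssl.sh mail@mail.com 172.30.0.2
USER www-data      # assumption: the user the upstream nextcloud:apache image runs as
```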
I tried running this as sudo as well (running it as pi usually), but it didn't solve the issue.
No, sudo wouldn't. This all happens in docker land, so running docker build / docker compose with sudo doesn't change any of that.
Let me share one more thought with you. I assume you authored setssl.sh. One of many idiosyncrasies of sh is that a failed command does not, by default, end the script, whereas most non-shell programming languages stop running after a statement raises an error. You should run the shell with -e in the shebang line, call set -e, or use the equivalent in non-Bourne-like shells, so that the script fails as soon as one of its commands fails, which is appropriate here. (If you're curious, setssl.sh returns the exit code of the last touch command that ran, but it should have failed after the first command that failed!) Otherwise your docker build will succeed but the image won't have the files you need.
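A minimal illustration of the difference -e makes:

```shell
# Without -e, sh keeps going after a failure, and the script's exit
# status reflects only the last command:
sh -c  'false; echo continued'   # prints "continued", exits 0

# With -e, the shell stops at the first failing command:
sh -ec 'false; echo continued'   # prints nothing, exits 1
```

In the Dockerfile above this matters because docker build only aborts a RUN step when the script's overall exit status is non-zero.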
I am using the GoLand IDE on macOS and I'm trying to debug an application running in a container. I'm attempting remote debugging, except that the container is on my local machine.
When I run the debugger in my IDE, it does stop on the breakpoint, but what it is debugging is the application on my local machine, not the one in the container.
For background, my application is supposed to listen on port 8000 and return "Hello, visitor!".
If I compile and run this file through a docker container, map my port 8000 and make a request through browser or through .http file, I do receive this response.
However, when I run it through Delve in the container, it does not respond in the browser.
Also, once the container is up, when I start the debugger in my IDE it does not debug the application in the container; it complains:
2020/08/05 17:57:39 main.go:16: listen tcp :8000: bind: address already in use
I've tried following these 2 tutorials, both of which are mostly same, except for the version of their docker images that they use.
Tutorial1
Tutorial2
I have gone through all the comments on these 2 posts as well but haven't found anything that would solve my problem.
Here is my main.go
package main
import (
"fmt"
"log"
"net/http"
)
func main() {
// Set the flags for the logging package to give us the filename in the logs
log.SetFlags(log.LstdFlags | log.Lshortfile)
log.Println("starting server...")
http.HandleFunc("/", func(w http.ResponseWriter, r *http.Request) {
w.WriteHeader(http.StatusOK)
_, _ = fmt.Fprintln(w, `Hello, visitor!`)
})
log.Fatal(http.ListenAndServe(":8000", nil))
}
Here is my Dockerfile:
# Compile stage
FROM golang AS build-env
# Build Delve
RUN git config --global http.sslVerify "false"
RUN git config --global http.proxy http://mycompanysproxy.com:80
RUN go get github.com/go-delve/delve/cmd/dlv
ADD . /dockerdev
WORKDIR /dockerdev
RUN go build -gcflags="all=-N -l" -o /server
# Final stage
FROM debian:buster
EXPOSE 8000 40000
WORKDIR /
COPY --from=build-env /go/bin/dlv /
COPY --from=build-env /server /
CMD ["/dlv", "--listen=:40000", "--headless=true", "--api-version=2", "--accept-multiclient", "exec", "/server"]
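As an aside, EXPOSE alone doesn't publish ports; a hypothetical build-and-run pair for this setup (the image name and the seccomp flag are assumptions; Delve needs ptrace, which the default seccomp profile can block) might look like:

```shell
docker build -t go-debug .
# publish both the app port and the Delve API port
docker run -p 8000:8000 -p 40000:40000 \
    --security-opt seccomp=unconfined \
    go-debug
```

Note also that a headless dlv exec halts the target at startup until a client attaches and resumes it, so the server may not answer on :8000 before the debugger connects.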
The container comes up successfully and the attached console's log says:
API server listening at: [::]:40000
However, it does not seem to be listening.
If I run
GET http://localhost:8000/
Accept: application/json
I expect it to stop on the breakpoint but it doesn't. Rather it complains:
org.apache.http.NoHttpResponseException: localhost:8000 failed to respond
Am I missing something?
Is this the way to invoke debugger on a containerized app?
Some more information:
I figured out that I was using the wrong debug configuration: the debug button needs to be pressed with the remote-debug configuration (top right) selected.
I am trying to use the nginx-mod-http-headers-more module for nginx so that I can fully hide the server name/version from the response headers.
A bit of background: I am running nginx 1.16.1 inside a Docker container, built from a Dockerfile based on nginx:1.16.1-alpine. In order to hide the Server response header I need to use the nginx-mod-http-headers-more module.
I added the following commands into my dockerfile to get the module installed in my docker container:
RUN apk update && \
    apk upgrade && \
    apk add nginx-mod-http-headers-more
Inside my nginx.conf file, I added the following lines:
load_module modules/ngx_http_headers_more_filter_module.so;
...
http {
    server {
        more_clear_headers "Server: ";
        ...
    }
}
The load_module statement and the more_clear_headers are the two pieces of code needed to make this module work.
However, when the docker container is created and run, it generates this error inside the container:
nginx: [emerg] module "/etc/nginx/modules/ngx_http_headers_more_filter_module.so" is not binary compatible in /etc/nginx/nginx.conf:1
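For context, "not binary compatible" usually means the module was built against a different nginx binary than the one running; the official nginx:1.16.1-alpine image compiles nginx itself, while apk's module package is built against Alpine's own nginx. A sketch of how to compare the two inside the container:

```shell
nginx -V                               # version and configure flags of the running nginx
apk info nginx                         # the Alpine-packaged nginx version, if installed
apk info nginx-mod-http-headers-more   # the module package, built against Alpine's nginx
```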
I need help to figure out what to do from here! Thanks!