I have a project with the following elements:
frontend - Angular
backend - Laravel
DB - AWS RDS
I want to dockerize this project locally, and I have several questions:
1. Is it possible to have only one Docker NGINX service that serves both the frontend and the backend, without using volumes? (A rough sketch of what I mean follows this list.)
2. If I put the backend route into the Angular environment file, e.g. http://backend/api/login, should it work?
3. When I open localhost:8050, I can see the frontend, but requests cannot reach the backend container. Please advise.
4. What is the best practice here for cloud solutions: a shared drive with two folders (frontend and backend) mounted into each container, or something else?
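To illustrate the first question, here is a rough sketch of a single server block that could serve the compiled Angular app and forward API calls to the backend over the compose network (assuming the app calls the API under /api and the backend service is named backend; both are assumptions, not taken from my current config):

server {
    listen 80;

    # Compiled Angular app, copied into the image at build time
    root /var/www/html;
    index index.html;

    # Client-side routes fall back to index.html
    location / {
        try_files $uri $uri/ /index.html;
    }

    # The browser calls http://localhost:8050/api/...; nginx forwards
    # the request to the backend container, so the browser never has
    # to resolve the "backend" hostname itself.
    location /api/ {
        proxy_pass http://backend:80;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
    }
}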
Docker-compose
version: '3'
services:
  nginx-frontend:
    restart: always
    build:
      dockerfile: dockerfile
      context: ./nginx
    ports:
      - '8050:80'
  backend:
    build:
      dockerfile: dockerfile
      context: ./backend
    ports:
      - '1000:80'
Nginx configuration
server {
    listen 80;

    # Log files for debugging
    error_log /var/log/nginx/error.log;
    access_log /var/log/nginx/access.log;

    # Laravel web root directory
    root /var/www/html/public;
    index index.php index.html;

    location / {
        try_files $uri $uri/ /index.php?$query_string;
        gzip_static on;
    }

    # Pass requests to PHP-FPM
    location ~ \.php$ {
        try_files $uri =404;
        fastcgi_split_path_info ^(.+\.php)(/.+)$;
        fastcgi_pass backend:9000;
        fastcgi_index index.php;
        include fastcgi_params;
        fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
        fastcgi_param PATH_INFO $fastcgi_path_info;
    }
}
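Note that fastcgi_pass backend:9000 requires a PHP-FPM process to be listening on TCP port 9000 inside the backend container; the php:7.4-fpm base image used below does listen there by default, but the final stage of the backend dockerfile is plain nginx, which does not. If the backend container ends up running nginx on port 80 instead, the frontend would have to forward plain HTTP rather than speak FastCGI; a hypothetical variant:

    # Variant sketch: if the backend container runs nginx on port 80,
    # proxy HTTP to it instead of speaking FastCGI to port 9000.
    location /api/ {
        proxy_pass http://backend:80;
    }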
Nginx-frontend dockerfile
FROM node:14.17.6 as build
RUN mkdir -p /usr/src/app
WORKDIR /usr/src/app
COPY ./frontend/code/package.json ./frontend/code/package-lock.json ./
RUN apt-get update || : && apt-get install python -y
RUN npm install node-sass -y
RUN npm rebuild node-sass
RUN npm install
COPY ./frontend/code .
RUN npm run build
FROM nginx
COPY ./default.conf /etc/nginx/conf.d/default.conf
COPY --from=build /usr/src/app/dist /var/www/html
Backend dockerfile
FROM php:7.4-fpm as php-build
RUN apt-get -y update
RUN apt-get -y install curl
RUN apt-get -y install zip
RUN apt-get -y install libzip-dev
RUN apt-get -y install libpng-dev
RUN docker-php-ext-install zip
COPY --from=composer:latest /usr/bin/composer /usr/bin/composer
RUN docker-php-ext-configure gd && docker-php-ext-install gd
COPY ./code /var/www/html
WORKDIR /var/www/html
RUN composer require "ext-gd:*" --ignore-platform-reqs
RUN composer require phpoffice/phpspreadsheet --with-all-dependencies
RUN composer install
RUN chmod -R 777 /var/www/html/storage/
RUN chmod -R 777 /var/www/html/bootstrap/cache
FROM nginx
COPY ./default.conf /etc/nginx/conf.d/default.conf
COPY --from=php-build /var/www/html /var/www/html
The configuration files are provided above.
I tried to learn an (admittedly somewhat outdated) form of linking containers. I created an NGINX container and a PHP container, which should be linked. Everything runs on my local machine.
Dockerfile NGINX
FROM ubuntu:16.04
MAINTAINER Sebastian Scharf
# Install NGINX
RUN apt-get update && apt-get install -y nginx \
# Clean after apt-get
&& apt-get clean \
## remove content from apt/lists and var/tmp
&& rm -rf /var/lib/apt/lists/* /tmp/* /var/tmp/* \
## set daemon off
&& echo "daemon off;" >> /etc/nginx/nginx.conf
ADD default /etc/nginx/sites-available/default
# forward request and error logs to docker log collector
RUN ln -sf /dev/stdout /var/log/nginx/access.log \
&& ln -sf /dev/stderr /var/log/nginx/error.log
CMD ["nginx"]
Dockerfile PHP
FROM ubuntu:16.04
MAINTAINER Sebastian Scharf
RUN apt-get update \
&& apt-get install -y locales \
&& locale-gen en_US.UTF-8
ENV LANG en_US.UTF-8
ENV LANGUAGE en_US:en
ENV LC_ALL en_US.UTF-8
RUN apt-get update \
&& apt-get install -y curl zip unzip git software-properties-common \
&& add-apt-repository -y ppa:ondrej/php \
&& apt-get update \
&& apt-get install -y php7.0-fpm php7.0-cli php7.0-mcrypt php7.0-gd php7.0-mysql \
php7.0-pgsql php7.0-imap php-memcached php7.0-mbstring php7.0-xml php7.0-curl \
&& php -r "readfile('http://getcomposer.org/installer');" | php -- --install-dir=/usr/bin/ --filename=composer \
&& mkdir /run/php \
&& apt-get remove -y --purge software-properties-common \
&& apt-get clean \
&& rm -rf /var/lib/apt/lists/* /tmp/* /var/tmp/*
ADD php-fpm.conf /etc/php/7.0/fpm/php-fpm.conf
ADD www.conf /etc/php/7.0/fpm/pool.d/www.conf
EXPOSE 9000
CMD ["php-fpm7.0"]
NGINX CONFIG
server {
    listen 8080 default_server;

    root /var/www/html/public;
    index index.html index.htm index.php;

    server_name _;
    charset utf-8;

    location = /favicon.ico { log_not_found off; access_log off; }
    location = /robots.txt { log_not_found off; access_log off; }

    location / {
        try_files $uri $uri/ /index.php$is_args$args;
    }

    location ~ \.php$ {
        include snippets/fastcgi-php.conf;
        fastcgi_pass php:9000;
    }

    error_page 404 /index.php;

    location ~ /\.ht {
        deny all;
    }
}
When I call localhost:8080, I get the error: This page isn't working (localhost didn't send any data). I was expecting to see a test file.
This is how I started the containers and linked them:
docker run -d --name=myphp -v $(pwd)/application:/var/www/html retronexus/php:0.1.0
docker run -d --link=myphp:php -p 8080:80 -v $(pwd)/application:/var/www/html retronexus/nginx:0.2.0
Nginx in your container listens on port 8080, but you are binding port 80 from the container to port 8080 on the host. Switch to binding port 8080 from the container:
docker run -d \
--link=myphp:php \
-p 8080:8080 \
-v $(pwd)/application:/var/www/html \
retronexus/nginx:0.2.0
See here for more details: https://docs.docker.com/config/containers/container-networking/#published-ports
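Alternatively (a sketch of the other direction), keep -p 8080:80 and change the server block so that nginx listens on the container's port 80:

server {
    listen 80 default_server;
    # ... rest of the config unchanged
}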
Good day.
I have been working with Docker recently and ran into the following problem:
the administrative part of the site works fine, but the public part returns a 502 Bad Gateway error.
Here are my Docker settings:
docker-compose.yml (root folder):
version: '3'
services:
  nginx:
    image: nginx:1.14
    ports:
      - "${NGINX_HOST}:${NGINX_PORT}:80"
    volumes:
      - ./:/var/www/html:cached
      - ./docker/nginx/nginx.conf:/etc/nginx/nginx.conf
    depends_on: [php]
    env_file: .env
    restart: always
  php:
    build:
      context: docker/php
      args:
        TIMEZONE: ${TIMEZONE}
    volumes:
      - ./:/var/www/html:cached
      - ./docker/php/php.ini:/usr/local/etc/php/php.ini:cached
      - ./docker/php/php-fpm.conf:/usr/local/etc/php-fpm.d/99-custom.conf:cached
    user: "${DOCKER_UID}"
    env_file: .env
    restart: always
docker-compose.override.yml (root folder):
version: '3'
services:
  markup:
    build: docker/markup
    user: "${DOCKER_UID}"
    ports:
      - "${MARKUP_PORT}:4000"
    volumes:
      - ./:/app:cached
    command: bash -c "npm install --no-save && bower install && gulp external"
    environment:
      NODE_ENV: "${MARKUP_ENV}"
.env file (root folder):
COMPOSE_PROJECT_NAME=nrd
# "docker-compose.yml:docker-compose.prod.yml" for prod
# "docker-compose.yml:docker-compose.stage.yml" for stage
COMPOSE_FILE=
DOCKER_UID=1000
DOCKER_ADDRESS=172.17.0.1
# "develop" or "production"
MARKUP_ENV=develop
MARKUP_PORT=4000
# 0.0.0.0 for external access
NGINX_HOST=127.0.0.1
NGINX_PORT=80
TIMEZONE=Europe/Moscow
nginx.conf (folder docker/nginx/):
user nginx;
worker_processes auto;

error_log /var/log/nginx/error.log warn;
pid /var/run/nginx.pid;

events {
    worker_connections 1024;
}

http {
    include /etc/nginx/mime.types;
    default_type application/octet-stream;

    log_format main '$remote_addr - $remote_user [$time_local] "$request" '
                    '$status $body_bytes_sent "$http_referer" '
                    '"$http_user_agent" "$http_x_forwarded_for"';

    access_log off;
    error_log /dev/null crit;

    client_max_body_size 1024M;
    client_header_buffer_size 4k;
    large_client_header_buffers 4 8k;

    sendfile on;
    keepalive_timeout 65;

    map $http_x_forwarded_proto $fastcgi_https {
        default off;
        https on;
    }

    map $http_upgrade $connection_upgrade {
        default upgrade;
        '' close;
    }

    server {
        root /var/www/html;
        listen 80;
        server_name _;

        access_log off;
        error_log /dev/null crit;

        charset utf-8;

        location ~ ^/(\.git|log|docker|\.env) {
            return 404;
        }

        location ~ \.php$ {
            fastcgi_pass php:9000;
            fastcgi_read_timeout 7200;
            fastcgi_send_timeout 72000;
            fastcgi_buffer_size 32k;
            fastcgi_buffers 16 32k;
            fastcgi_index index.php;
            fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
            fastcgi_param HTTPS $fastcgi_https if_not_empty;
            fastcgi_param REMOTE_ADDR $http_x_forwarded_for;
            include fastcgi_params;
            if (!-f $request_filename) {
                rewrite ^(.*)/index.php$ $1/ redirect;
            }
        }

        location / {
            index index.php index.html index.htm;
            if (!-e $request_filename) {
                rewrite ^(.*)$ /bitrix/urlrewrite.php last;
            }
        }

        location ~ /\.ht {
            deny all;
        }

        location ~* ^.+\.(xml|html|jpg|jpeg|gif|ttf|eot|swf|svg|png|ico|mp3|css|woff2?|zip|tgz|gz|rar|bz2|doc|xls|exe|pdf|dat|avi|ppt|txt|tar|mid|midi|wav|bmp|rtf|wmv|mpeg|mpg|mp4|m4a|spx|ogx|ogv|oga|webm|weba|ogg|tbz|js)$ {
            expires 30d;
            access_log off;
        }
    }
}
.Dockerfile (folder docker/php):
# See https://github.com/docker-library/php/blob/master/7.1/fpm/Dockerfile
FROM php:7.1-fpm
ARG TIMEZONE
RUN apt-get update && apt-get install -y \
openssl \
git \
zlibc \
zlib1g \
zlib1g-dev \
libfreetype6-dev \
libssl-dev \
libjpeg62-turbo-dev \
libmemcached-dev \
libmagickwand-dev \
libmcrypt-dev \
libpng-dev \
libicu-dev \
unzip
# Install Composer
RUN curl -sS https://getcomposer.org/installer | php -- --install-dir=/usr/local/bin --filename=composer
RUN mkdir /.composer/ && chmod 777 /.composer/
# Set timezone
RUN ln -snf /usr/share/zoneinfo/${TIMEZONE} /etc/localtime && echo ${TIMEZONE} > /etc/timezone
RUN printf '[PHP]\ndate.timezone = "%s"\n' ${TIMEZONE} > /usr/local/etc/php/conf.d/tzone.ini
RUN "date"
# Type docker-php-ext-install to see available extensions
RUN docker-php-ext-configure gd --with-freetype-dir=/usr/include/ --with-jpeg-dir=/usr/include/
RUN docker-php-ext-install pdo_mysql gd
# Install memcached extension
RUN apt-get update \
&& apt-get install -y libmemcached11 libmemcachedutil2 build-essential libmemcached-dev libz-dev \
&& pecl install memcached \
&& echo extension=memcached.so >> /usr/local/etc/php/conf.d/memcached.ini \
&& apt-get remove -y build-essential libmemcached-dev libz-dev \
&& apt-get autoremove -y \
&& apt-get clean \
&& rm -rf /tmp/pear
RUN docker-php-ext-install intl
RUN docker-php-ext-install opcache
RUN docker-php-ext-install soap
RUN apt-get install -y --no-install-recommends default-libmysqlclient-dev
RUN apt-get install -y \
libzip-dev \
zip \
&& docker-php-ext-configure zip --with-libzip \
&& docker-php-ext-install zip
RUN docker-php-ext-install mysqli
WORKDIR /var/www/html
CMD ["php-fpm"]
php.ini (folder docker/php):
[PHP]
short_open_tag = On
upload_max_filesize = 200M
post_max_size = 250M
display_errors = Off
memory_limit = 1024M
max_execution_time = 60
[mbstring]
mbstring.internal_encoding = UTF-8
mbstring.func_overload = 2
php-fpm.conf (folder docker/php):
[www]
pm = dynamic
pm.max_children = 50
pm.start_servers = 2
pm.min_spare_servers = 2
pm.max_spare_servers = 10
access.format = "%{REMOTE_ADDR}e - %u %t \"%m %{REQUEST_URI}e%Q%q\" %s %{miliseconds}dms %{megabytes}MM %C%%"
;slowlog = /proc/self/fd/2
;request_slowlog_timeout = 2s
php_admin_value[error_log] = /proc/self/fd/2
php_admin_flag[log_errors] = on
.Dockerfile (folder docker/markup):
FROM node:10.13
RUN npm install -g gulp bower
RUN echo '{ "allow_root": true }' > /root/.bowerrc
RUN mkdir /.npm && mkdir /.config && mkdir /.cache && mkdir /.local && chmod 777 /.npm && chmod 777 /.config && chmod 777 /.cache && chmod 777 /.local
EXPOSE 4000
WORKDIR /app/markup
Please help, I do not understand what the problem is. Everything worked fine earlier today; I did not change any settings or passwords. The public part of my local project simply started returning 502 errors.
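A first debugging step (a sketch, not part of the original setup) would be to re-enable the logging that the nginx.conf above deliberately discards, so nginx can report why talking to the php:9000 upstream fails:

http {
    # Temporarily restore logging; the posted config sends errors to
    # /dev/null (both here and again inside the server block), which
    # hides the cause of the 502 from the php:9000 upstream.
    access_log /var/log/nginx/access.log main;
    error_log /var/log/nginx/error.log warn;
    # ...
}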
Using Docker for Windows.
I am getting an error when making CORS calls to the backend:
GET http://backend/api/countries net::ERR_NAME_NOT_RESOLVED
I am using docker-compose for this:
docker-compose.yml
version: '2'
networks:
  minn_net:
services:
  backend:
    build: backend-symfony
    container_name: backend
    networks:
      - minn_net
    ports:
      - 81:80
    volumes:
      - ./backend-symfony/backend/var/logs-nginx:/var/log/nginx
      - ./backend-symfony/backend/:/var/www/html
      - ./backend-symfony/errors/:/var/www/errors
  db:
    image: mysql:5.7.19
    container_name: db
    networks:
      - minn_net
    ports:
      - 3306
    volumes:
      - "./.data/db:/var/lib/mysql"
    environment:
      MYSQL_ROOT_PASSWORD: ${MYSQL_ROOT_PASSWORD}
      MYSQL_DATABASE: ${MYSQL_DATABASE}
      MYSQL_USER: ${MYSQL_USER}
      MYSQL_PASSWORD: ${MYSQL_PASSWORD}
  phpmyadmin:
    image: phpmyadmin/phpmyadmin:edge-4.7
    container_name: phpmyadmin
    networks:
      - minn_net
    ports:
      - 8080:80
    links:
      - db
  frontend:
    build: frontend-angular
    container_name: frontend
    networks:
      - minn_net
    links:
      - backend # I added links as an extra step, hoping the frontend would recognise the backend
    ports:
      - 88:80
    volumes:
      - ./frontend-angular/frontend2/dist:/var/www/frontend
      - ./frontend-angular/conf/docker/default.conf:/etc/nginx/conf.d/default.conf
      - ./frontend-angular/logs/nginx/:/var/log/nginx
config of nginx for the backend
server {
    listen 80; ## listen for ipv4; this line is default and implied
    listen [::]:80 default ipv6only=on; ## listen for ipv6

    root /var/www/html/public;
    index index.php;

    # Make site accessible from http://localhost/
    server_name _;

    # Disable sendfile as per https://docs.vagrantup.com/v2/synced-folders/virtualbox.html
    sendfile off;

    # Add stdout logging
    error_log /dev/stdout info;
    access_log /dev/stdout;

    # Add option for x-forward-for (real ip when behind elb)
    #real_ip_header X-Forwarded-For;
    #set_real_ip_from 172.16.0.0/12;

    location / {
        # Match host using a hostname if you like
        #if ($http_origin ~* (https?://.*\.tarunlalwani\.com(:[0-9]+)?$)) {
        #    set $cors "1";
        #}
        set $cors "1";

        # OPTIONS indicates a CORS pre-flight request
        if ($request_method = 'OPTIONS') {
            set $cors "${cors}o";
        }

        # OPTIONS (pre-flight) request from allowed
        # CORS domain. return response directly
        if ($cors = "1o") {
            add_header 'Access-Control-Allow-Origin' '$http_origin' always;
            add_header 'Access-Control-Allow-Methods' 'GET, POST, OPTIONS, PUT, DELETE, PATCH' always;
            add_header 'Access-Control-Allow-Credentials' 'true' always;
            add_header 'Access-Control-Allow-Headers' 'Origin, Content-Type, Accept, Lang, Authorization' always;
            add_header Content-Length 0;
            add_header Content-Type text/plain;
            return 204;
        }

        add_header 'Access-Control-Allow-Headers' 'Content-Type,Authorization,Lang';
        # add_header 'Access-Control-Allow-Headers' '*';
        add_header 'Access-Control-Allow-Methods' 'POST,GET,PUT,DELETE,OPTIONS';
        add_header 'Access-Control-Allow-Origin' '*';

        try_files $uri /index.php$is_args$args;
    }

    location ~* \.(jpg|jpeg|gif|css|png|js|ico|html|eof|woff|ttf)$ {
        add_header 'Access-Control-Allow-Headers' 'Content-Type,Authorization,Lang';
        #add_header 'Access-Control-Allow-Headers' '*';
        add_header 'Access-Control-Allow-Methods' 'POST,GET,PUT,DELETE,OPTIONS';
        add_header 'Access-Control-Allow-Origin' '*';
        if (-f $request_filename) {
            expires 30d;
            access_log off;
        }
    }

    location ~ \.php$ {
        add_header 'Access-Control-Allow-Headers' 'Content-Type,Authorization,Lang';
        #add_header 'Access-Control-Allow-Headers' '*';
        add_header 'Access-Control-Allow-Methods' 'POST,GET,PUT,DELETE,OPTIONS';
        add_header 'Access-Control-Allow-Origin' '*';
        fastcgi_pass unix:/var/run/php-fpm.sock;
        fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
        include fastcgi_params;
    }
}
Angular service that calls the backend
import { Injectable } from '@angular/core';
import { HttpClient, HttpErrorResponse, HttpHeaders } from '@angular/common/http';
// import { HttpClientModule } from '@angular/common/http';
// Grab everything with import 'rxjs/Rx';
import { Observable } from 'rxjs/Observable';
import 'rxjs/add/observable/throw';
import { Observer } from 'rxjs/Observer';
import 'rxjs/add/operator/do';
import 'rxjs/add/operator/map';
import 'rxjs/add/operator/catch';
import * as _ from 'lodash';

import { ICountry } from '@app/shared/interfaces';

@Injectable()
export class DataService {
  baseUrl = 'http://backend/api'; // it does not work
  // baseUrl = 'http://172.18.0.4/api'; // it works perfectly

  constructor(private http: HttpClient) { }

  // private httpheadersGet = new HttpHeaders().set("Access-Control-Allow-Origin", "http://localhost:4200");

  public getCountries(): Observable<ICountry[]> {
    return (
      this.http
        .get<ICountry[]>(this.baseUrl + '/countries'/* , {"headers": this.httpheadersGet} */)
        .do(console.log)
        .map(data => _.values(data["hydra:member"]))
        .catch(this.handleError)
    );
  }
}
Please note: if I set the backend IP address manually in the frontend (as can be seen in the Angular code), the CORS calls work perfectly. So, am I missing something in the docker-compose.yml configuration?
PS: The network and the container names were specified explicitly to rule out an incorrect container name in the Angular code.
Nginx Dockerfile for the frontend container
FROM debian:stretch-slim
LABEL maintainer="NGINX Docker Maintainers <docker-maint@nginx.com>"
ENV NGINX_VERSION 1.13.9-1~stretch
ENV NJS_VERSION 1.13.9.0.1.15-1~stretch
RUN set -x \
&& apt-get update \
&& apt-get install --no-install-recommends --no-install-suggests -y gnupg1 apt-transport-https ca-certificates \
&& \
NGINX_GPGKEY=573BFD6B3D8FBC641079A6ABABF5BD827BD9BF62; \
found=''; \
for server in \
ha.pool.sks-keyservers.net \
hkp://keyserver.ubuntu.com:80 \
hkp://p80.pool.sks-keyservers.net:80 \
pgp.mit.edu \
; do \
echo "Fetching GPG key $NGINX_GPGKEY from $server"; \
apt-key adv --keyserver "$server" --keyserver-options timeout=10 --recv-keys "$NGINX_GPGKEY" && found=yes && break; \
done; \
test -z "$found" && echo >&2 "error: failed to fetch GPG key $NGINX_GPGKEY" && exit 1; \
apt-get remove --purge --auto-remove -y gnupg1 && rm -rf /var/lib/apt/lists/* \
&& dpkgArch="$(dpkg --print-architecture)" \
&& nginxPackages=" \
nginx=${NGINX_VERSION} \
nginx-module-xslt=${NGINX_VERSION} \
nginx-module-geoip=${NGINX_VERSION} \
nginx-module-image-filter=${NGINX_VERSION} \
nginx-module-njs=${NJS_VERSION} \
" \
&& case "$dpkgArch" in \
amd64|i386) \
# arches officially built by upstream
echo "deb https://nginx.org/packages/mainline/debian/ stretch nginx" >> /etc/apt/sources.list.d/nginx.list \
&& apt-get update \
;; \
*) \
# we're on an architecture upstream doesn't officially build for
# let's build binaries from the published source packages
echo "deb-src https://nginx.org/packages/mainline/debian/ stretch nginx" >> /etc/apt/sources.list.d/nginx.list \
\
# new directory for storing sources and .deb files
&& tempDir="$(mktemp -d)" \
&& chmod 777 "$tempDir" \
# (777 to ensure APT's "_apt" user can access it too)
\
# save list of currently-installed packages so build dependencies can be cleanly removed later
&& savedAptMark="$(apt-mark showmanual)" \
\
# build .deb files from upstream's source packages (which are verified by apt-get)
&& apt-get update \
&& apt-get build-dep -y $nginxPackages \
&& ( \
cd "$tempDir" \
&& DEB_BUILD_OPTIONS="nocheck parallel=$(nproc)" \
apt-get source --compile $nginxPackages \
) \
# we don't remove APT lists here because they get re-downloaded and removed later
\
# reset apt-mark's "manual" list so that "purge --auto-remove" will remove all build dependencies
# (which is done after we install the built packages so we don't have to redownload any overlapping dependencies)
&& apt-mark showmanual | xargs apt-mark auto > /dev/null \
&& { [ -z "$savedAptMark" ] || apt-mark manual $savedAptMark; } \
\
# create a temporary local APT repo to install from (so that dependency resolution can be handled by APT, as it should be)
&& ls -lAFh "$tempDir" \
&& ( cd "$tempDir" && dpkg-scanpackages . > Packages ) \
&& grep '^Package: ' "$tempDir/Packages" \
&& echo "deb [ trusted=yes ] file://$tempDir ./" > /etc/apt/sources.list.d/temp.list \
# work around the following APT issue by using "Acquire::GzipIndexes=false" (overriding "/etc/apt/apt.conf.d/docker-gzip-indexes")
# Could not open file /var/lib/apt/lists/partial/_tmp_tmp.ODWljpQfkE_._Packages - open (13: Permission denied)
# ...
# E: Failed to fetch store:/var/lib/apt/lists/partial/_tmp_tmp.ODWljpQfkE_._Packages Could not open file /var/lib/apt/lists/partial/_tmp_tmp.ODWljpQfkE_._Packages - open (13: Permission denied)
&& apt-get -o Acquire::GzipIndexes=false update \
;; \
esac \
\
&& apt-get install --no-install-recommends --no-install-suggests -y \
$nginxPackages \
gettext-base \
&& apt-get remove --purge --auto-remove -y apt-transport-https ca-certificates && rm -rf /var/lib/apt/lists/* /etc/apt/sources.list.d/nginx.list \
\
# if we have leftovers from building, let's purge them (including extra, unnecessary build deps)
&& if [ -n "$tempDir" ]; then \
apt-get purge -y --auto-remove \
&& rm -rf "$tempDir" /etc/apt/sources.list.d/temp.list; \
fi
# forward request and error logs to docker log collector
RUN ln -sf /dev/stdout /var/log/nginx/access.log \
&& ln -sf /dev/stderr /var/log/nginx/error.log
EXPOSE 80
STOPSIGNAL SIGTERM
CMD ["nginx", "-g", "daemon off;"]
After installing dnsutils:
root@123e38093010:/# nslookup backend
Server:         127.0.0.11
Address:        127.0.0.11#53

Non-authoritative answer:
Name:    backend
Address: 172.18.0.3
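This lookup succeeds because 127.0.0.11 is Docker's embedded DNS server, which answers only inside the compose network; the browser on the host machine never sees it, so http://backend/api fails there with ERR_NAME_NOT_RESOLVED even though the container-internal lookup works. A common workaround (a sketch, assuming the Angular baseUrl is changed to a relative path such as /api) is to let the frontend container's nginx proxy API calls to the backend:

server {
    listen 80;

    root /var/www/frontend;
    index index.html;

    location / {
        try_files $uri $uri/ /index.html;
    }

    # The browser calls http://localhost:88/api/...; this nginx can
    # resolve "backend" through Docker's DNS and forwards the request.
    location /api/ {
        proxy_pass http://backend:80;
    }
}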
I am trying to get SMW running in a Docker container. I get the main page up, but it will not let me log in. It says:
Login error
Knowledgebase uses cookies to log in users. You have cookies disabled.
Please enable them and try again.
My browser does have cookies enabled.
Does anyone here run SMW in Docker and/or have a clue how I can fix this issue?
Dockerfile:
FROM centos:centos7
ENV HOME /opt/smw
ADD . $HOME
RUN chmod 777 $HOME
# Add the nginx and PHP repository
ADD nginx.repo /etc/yum.repos.d/nginx.repo
# Installing packages
RUN yum -y install nginx
# Installing PHP
RUN yum -y --enablerepo=remi,remi-php56 install nginx php-fpm php-common php-mysql php-xml
# Installing MySQL
RUN yum -y install wget && \
wget http://repo.mysql.com/mysql-community-release-el7-5.noarch.rpm && \
rpm -ivh mysql-community-release-el7-5.noarch.rpm && \
yum -y update && \
yum -y install mysql-server
# Installing supervisor
RUN yum install -y python-setuptools
RUN easy_install pip
RUN pip install supervisor
# Adding the configuration file of the nginx
ADD nginx.conf /etc/nginx/nginx.conf
ADD default.conf /etc/nginx/conf.d/default.conf
# Adding the configuration file of the Supervisor
ADD supervisord.conf /etc/
# Config MySQL
RUN chmod 755 $HOME/config_mysql.sh
RUN $HOME/config_mysql.sh
VOLUME ["/opt/smw"]
VOLUME ["/var/lib/mysql"]
EXPOSE 80
CMD ["/opt/smw/run.sh"]
supervisord.conf:
[supervisord]
;logfile=/var/log/supervisor/supervisord-nobody.log ; (main log file;default $CWD/supervisord.log)
;logfile_maxbytes=50MB ; (max main logfile bytes b4 rotation;default 50MB)
;logfile_backups=10 ; (num of main logfile rotation backups;default 10)
;loglevel=info ; (log level;default info; others: debug,warn,trace)
;pidfile=/var/run/supervisord.pid ; (supervisord pidfile;default supervisord.pid)
nodaemon=true ; (start in foreground if true;default false)
;user=nobody
[rpcinterface:supervisor]
supervisor.rpcinterface_factory = supervisor.rpcinterface:make_main_rpcinterface
[program:php5-fpm]
command=/usr/sbin/php-fpm -c /etc/php-fpm.d
numprocs=1
autostart=true
autorestart=true
[program:php5-fpm-log]
command=tail -f /var/log/php5-fpm.log
stdout_events_enabled=true
stderr_events_enabled=true
[program:nginx]
command=/usr/sbin/nginx
numprocs=1
autostart=true
autorestart=true
nginx config:
server {
    listen 80;

    root /opt/smw;
    index index.html index.htm index.php;

    # Make site accessible from http://set-ip-address.xip.io
    server_name localhost;

    access_log /var/log/nginx/localhost.com-access.log;
    error_log /var/log/nginx/localhost.com-error.log error;

    charset utf-8;

    location / {
        try_files $uri $uri/ /index.html /index.php?$query_string;
    }

    location = /favicon.ico { log_not_found off; access_log off; }
    location = /robots.txt { access_log off; log_not_found off; }

    error_page 404 /index.php;

    # pass the PHP scripts to FastCGI server listening on 127.0.0.1:9000
    location ~ \.php$ {
        fastcgi_pass 127.0.0.1:9000;
        fastcgi_index index.php;
        fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
        include fastcgi_params;
    }

    # Deny .htaccess file access
    location ~ /\.ht {
        deny all;
    }
}
Instead of installing remi-php56, I installed just plain php, and then I did not have the issue.