Dockerfile entrypoint unable to switch user - docker

I am unable to switch to a non-root user from the entrypoint script. The USER directive to change the user in the Dockerfile works, but then I am not able to change permissions using chmod. To work around this I created an entrypoint.sh script that changes the folder permissions, but when I try to switch user with the su command it apparently doesn't work: the container is still running as root.
The Dockerfile
FROM php:7.2-fpm
# Installing dependencies
RUN apt-get update && apt-get install -y \
    build-essential \
    mysql-client \
    libpng-dev \
    libjpeg62-turbo-dev \
    libfreetype6-dev \
    locales \
    zip \
    jpegoptim optipng pngquant gifsicle \
    vim \
    unzip \
    git \
    curl
# Installing composer
RUN curl -sS https://getcomposer.org/installer | php -- --install-dir=/usr/local/bin --filename=composer
ENV USER_ID=1000
ENV GROUP_ID=1000
ENV USER_NAME=www
ENV GROUP_NAME=www
RUN groupadd -g $GROUP_ID $GROUP_NAME
RUN useradd -u $USER_ID -ms /bin/bash -g $GROUP_NAME $USER_NAME
RUN mkdir /app
WORKDIR /app
EXPOSE 9000
COPY ./entrypoint.sh /
RUN ["chmod", "+x", "/entrypoint.sh"]
ENTRYPOINT ["/entrypoint.sh"]
Entrypoint.sh file
#!/bin/bash
if [ -n "$USER_ID" -a -n "$GROUP_ID" ]; then
    chown -R $USER_NAME:$GROUP_NAME .
    su $USER_NAME
fi
php-fpm
exec "$@"
Whatever I do, I am not able to switch user from the entrypoint.sh script.
My goal is to run the container as a non-root user.

I think that your su command should be something like
su $USER_NAME --command "/doit.sh"
because your entrypoint script is switching user, doing nothing, and then switching back to root.
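For reference, a minimal corrected entrypoint sketch along those lines (assuming the USER_NAME/USER_ID variables from the Dockerfile above, and php-fpm as the final process):
#!/bin/bash
# Fix ownership while still root, then hand the container over to
# php-fpm as the unprivileged user. Without -c/--command, su opens
# an interactive shell that exits as soon as the script ends.
if [ -n "$USER_ID" ] && [ -n "$GROUP_ID" ]; then
    chown -R "$USER_NAME:$GROUP_NAME" .
fi
exec su -s /bin/bash "$USER_NAME" --command "php-fpm"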

To solve this you need to change your Dockerfile and add:
RUN echo "root ALL = NOPASSWD: /bin/su ALL" >> /etc/sudoers
Or use gosu, which is better:
# install gosu
# see also:
# https://docs.docker.com/develop/develop-images/dockerfile_best-practices/
# https://github.com/tianon/gosu/blob/master/INSTALL.md
# https://github.com/tianon/gosu
RUN set -eux; \
    apt-get update; \
    apt-get install -y gosu; \
    rm -rf /var/lib/apt/lists/*; \
    # verify that the binary works
    gosu nobody true
Then inside entrypoint.sh:
gosu root yourservice &
#ie: gosu root /usr/sbin/sshd -D &
exec gosu no-root-user yourservice2
# ie: exec gosu no-root-user tail -f /dev/null
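Applied to the original question, the entrypoint might then look something like this (a sketch, assuming gosu is installed as above and php-fpm is the process to run unprivileged):
#!/bin/bash
# Do the root-only work first, then drop privileges for the
# long-running process; exec keeps it as PID 1 so it receives signals.
chown -R "$USER_NAME:$GROUP_NAME" /app
exec gosu "$USER_NAME" php-fpm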

Related

Starting supervisor with Docker and seeing its logs in docker logs, but not finding the service with service supervisor status in the container

I want to run supervisor to have multiple processes in the same container, as I can't use docker-compose in our current hosting environment. Things seem to work when I look at the docker logs, but I can't see the supervisor service inside the Linux system when I attach my terminal to the container.
When I check the logs for the container I get:
Starting supervisord.... (entrypoint.sh)
2021-12-22 08:38:50,871 CRIT Supervisor is running as root. Privileges were not dropped because no user is specified in the config file. If you intend to run as root, you can set user=root in the config file to avoid this message.
2021-12-22 08:38:50,877 INFO RPC interface 'supervisor' initialized
2021-12-22 08:38:50,877 CRIT Server 'inet_http_server' running without any HTTP authentication checking
2021-12-22 08:38:50,878 INFO supervisord started with pid 1
However, if I attach my shell to the container and run "service supervisor status" I get:
supervisord is not running.
And I don't get why the system doesn't seem to recognise that the service is running. Can anyone help me figure this out? If I can't access the service from the terminal, I can't really manage it in any way.
This is my Dockerfile
FROM python:3.8
ENV PYTHONUNBUFFERED 1
RUN apt-get update
RUN apt-get install -y pgbouncer
RUN apt-get update && apt-get install -y supervisor
# install nginx
ENV NGINX_VERSION 1.15.12-1~stretch
ENV NJS_VERSION 1.15.12.0.3.1-1~stretch
RUN set -x \
    && \
    NGINX_GPGKEY=573BFD6B3D8FBC641079A6ABABF5BD827BD9BF62; \
    found=''; \
    for server in \
        hkp://keyserver.ubuntu.com:80 \
        hkp://p80.pool.sks-keyservers.net:80 \
        pgp.mit.edu \
    ; do \
        echo "Fetching GPG key $NGINX_GPGKEY from $server"; \
        apt-key adv --keyserver "$server" --keyserver-options timeout=10 --recv-keys "$NGINX_GPGKEY" && found=yes && break; \
    done; \
    test -z "$found" && echo >&2 "error: failed to fetch GPG key $NGINX_GPGKEY" && exit 1; \
    apt-get remove --purge --auto-remove -y gnupg1 && rm -rf /var/lib/apt/lists/* \
    && dpkgArch="$(dpkg --print-architecture)" \
    && nginxPackages=" \
        nginx=${NGINX_VERSION} \
        nginx-module-xslt=${NGINX_VERSION} \
        nginx-module-geoip=${NGINX_VERSION} \
        nginx-module-image-filter=${NGINX_VERSION} \
        nginx-module-njs=${NJS_VERSION} \
    " \
    && echo "deb https://nginx.org/packages/mainline/debian/ stretch nginx" >> /etc/apt/sources.list.d/nginx.list \
    && apt-get update \
    && apt-get install --no-install-recommends --no-install-suggests -y \
        $nginxPackages \
        gettext-base \
    && rm -rf /var/lib/apt/lists/* /etc/apt/sources.list.d/nginx.list
# install app
RUN mkdir /var/app && chown www-data:www-data /var/app
WORKDIR /var/app
COPY ./requirements.txt /var/app/
RUN pip install -r requirements.txt
COPY . /var/app/
COPY ./conf/nginx/staging.conf /etc/nginx/conf.d/default.conf
COPY ./conf/pgbouncer/pgbouncer.ini /etc/pgbouncer/pgbouncer.ini
COPY ./supervisord.conf /etc/supervisor/conf.d/supervisord.conf
VOLUME /var/logs
# Expose ports (Added from previous dockerfile)
EXPOSE 80 2222
# Added for setting right permissions to entrypoint script
RUN ["chmod", "+x", "./entrypoint.sh"]
RUN ["chmod", "+x", "/var/app/bin/staging/django-q.sh"]
ENTRYPOINT ["./entrypoint.sh"]
This is my entrypoint.sh - I first set up some settings for PgBouncer, and then start supervisor
#!/bin/bash
set -e
# SET UP PG BOUNCER
PG_CONFIG_DIR=/etc/pgbouncer
invoke_main(){
    check_variables
    create_config
}
check_variables(){
    ...
}
error(){
    ...
}
create_databases_config(){
    ...
}
create_config(){
    cat <<EOF > "$PG_CONFIG_DIR/pgbouncer.ini"
[databases]
$(create_databases_config)
[pgbouncer]
...
EOF
}
invoke_main
# INVOKE SUPERVISORD
echo " Starting supervisord.... (entrypoint.sh)"
exec supervisord -n -c /etc/supervisor/conf.d/supervisord.conf
#exec supervisord -n -c /etc/supervisor/conf.d/supervisord.conf
This is my supervisord.conf
[supervisord]
logfile=/var/logs/supervisord.log ; main log file; default $CWD/supervisord.log
logfile_maxbytes=50MB ; max main logfile bytes b4 rotation; default 50MB
logfile_backups=10 ; # of main logfile backups; 0 means none, default 10
loglevel=info ; log level; default info; others: debug,warn,trace
pidfile=/var/logs/supervisord.pid
nodaemon=true ; Run interactively instead of daemonizing
# user=www-data
[rpcinterface:supervisor]
supervisor.rpcinterface_factory = supervisor.rpcinterface:make_main_rpcinterface
[inet_http_server]
port = 127.0.0.1:9001
[supervisorctl]
serverurl = http://127.0.0.1:9001
You are starting supervisord manually from your entrypoint, not through the init system, so the service command won't report its status correctly.
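To actually check on it, ask supervisord itself through its control interface; a minimal sketch, assuming the [inet_http_server] settings from the supervisord.conf above:
# Query supervisord via the HTTP control port it exposes:
supervisorctl -s http://127.0.0.1:9001 status
# Or simply confirm the process is alive inside the container:
ps aux | grep [s]upervisord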

Docker Alpine, Celery (worker and beat) fail with PermissionError when using non-root user

I'm trying to run a Flask app with Celery (worker + beat) on Docker Alpine using docker-compose.
I want it to run with a non-root user celery in my Docker container.
The Flask app builds OK and works, but my celery containers are failing with this error:
File "/usr/lib/python3.6/site-packages/celery/platforms.py", line 543, in maybe_drop_privileges
_setuid(uid, gid)
File "/usr/lib/python3.6/site-packages/celery/platforms.py", line 564, in _setuid
initgroups(uid, gid)
File "/usr/lib/python3.6/site-packages/celery/platforms.py", line 507, in initgroups
return os.initgroups(username, gid)
PermissionError: [Errno 1] Operation not permitted
My Dockerfile:
I tried adding RUN chown celery:celery /etc/group, thinking that was the issue, but it's still failing.
FROM alpine:3.8
RUN apk update && \
    apk add build-base python3 python3-dev libffi-dev libressl-dev && \
    cd /usr/bin && \
    ln -sf python3 python && \
    ln -sf pip3 pip && \
    pip install --upgrade pip
COPY requirements.txt .
RUN pip install -r requirements.txt
RUN addgroup celery
RUN adduser celery -G celery -s /bin/sh -D
RUN mkdir -p /var/log/celery/ && chown celery:celery /var/log/celery/
RUN mkdir -p /var/run/celery/ && chown celery:celery /var/run/celery/
RUN chown celery:celery /etc/group # added to try fixing the issue
USER celery
ENV FLASK_APP=flask_app
WORKDIR app/
COPY flask_app flask_app
My docker-compose:
(...)
celeryworker:
  build: .
  command: celery -A flask_app.tasks worker --loglevel=INFO --uid=celery --pidfile=/tmp/celeryworker-shhh.pid
celerybeat:
  build: .
  command: celery -A flask_app.tasks beat --loglevel=INFO --uid=celery --pidfile=/tmp/celerybeat-shhh.pid
You should do it like this:
RUN mkdir -p /var/log/celery/ /var/run/celery/
RUN useradd -G root celery && \
    chgrp -Rf root /var/log/celery/ /var/run/celery/ && \
    chmod -Rf g+w /var/log/celery/ /var/run/celery/ && \
    chmod g+w /etc/passwd
...
RUN chmod a+x /start.sh
USER celery
ENTRYPOINT ["/start.sh"]
You should create the celery user first, then add it to the root group. After that you need to set write permission on the folders where the logs go, and on /etc/passwd.
You also need a start script that adds your user to /etc/passwd at runtime:
#!/bin/bash
#
if [ `id -u` -ge 10000 ]; then
    echo "celery:x:`id -u`:`id -g`:,,,:/home/web:/bin/bash" >> /etc/passwd
fi
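That id check only fires when the container is started with an arbitrary high UID (as OpenShift-style platforms do); a quick way to exercise it, assuming the image is tagged myimage:
# Start the container as a random high UID in the root group; the start
# script can then append a matching passwd entry because /etc/passwd is group-writable.
docker run --user 10001:0 myimage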
Both answers from @Shashank V and @Kine were really relevant and helpful, but I still had some issues afterwards.
After doing some research, I finally made it work with the following configuration.
Dockerfile
FROM alpine:3.11.0
RUN apk update && \
    apk add build-base python3 python3-dev libffi-dev libressl-dev && \
    ln -sf /usr/bin/python3 /usr/bin/python && \
    ln -sf /usr/bin/pip3 /usr/bin/pip && \
    pip install --upgrade pip
RUN mkdir -p /var/log/celery/ /var/run/celery/
RUN addgroup app && \
    adduser --disabled-password --gecos "" --ingroup app --no-create-home app && \
    chown app:app /var/run/celery/ && \
    chown app:app /var/log/celery/
USER app
ENV PATH="/home/app/.local/bin:${PATH}"
WORKDIR app/
COPY requirements.txt .
RUN pip install --user -r requirements.txt
COPY flask_app flask_app
ENV FLASK_APP=flask_app
docker-compose
(...)
celeryworker:
  build: .
  command: >
    celery -A shhh.tasks worker
    --loglevel=INFO
    --logfile=/var/log/celery/celeryworker-shhh.log
    --pidfile=/var/run/celery/celeryworker-shhh.pid
celerybeat:
  build: .
  command: >
    celery -A shhh.tasks beat
    --loglevel=INFO
    --logfile=/var/log/celery/celerybeat-shhh.log
    --pidfile=/var/run/celery/celerybeat-shhh.pid
    --schedule=/var/run/celery/celerybeat-schedule # specify schedule db in a location where the app has read/write access
You have to be root if you want to use the --uid or --gid arguments. Try removing them; the container already runs as the celery user because of the USER directive, so the privileges are already dropped.
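Concretely, the compose commands could be trimmed to something like this (a sketch based on the compose file above; no privilege switch is needed since the image runs as celery already):
(...)
celeryworker:
  build: .
  command: celery -A flask_app.tasks worker --loglevel=INFO --pidfile=/tmp/celeryworker-shhh.pid
celerybeat:
  build: .
  command: celery -A flask_app.tasks beat --loglevel=INFO --pidfile=/tmp/celerybeat-shhh.pid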

docker run claims a command needs admin access

bash-3.2$ docker run -it -e DISPLAY=$IP:0 -v /tmp/.X11-unix:/tmp/.X11-unix -v `pwd`:`pwd` josh:latest
To run a command as administrator (user "root"), use "sudo <command>".
See "man sudo_root" for details.
bash: /home/ros/catkin_ws/devel/setup.bash: No such file or directory
And my Dockerfile is:
FROM ros:kinetic-robot-xenial
MAINTAINER Joshua Schraven
RUN apt-get update && apt-get install --assume-yes \
    vim-nox \
    sudo \
    python-pip \
    ros-kinetic-desktop-full \
    ros-kinetic-turtlebot3 \
    ros-kinetic-turtlebot3-bringup \
    ros-kinetic-turtlebot3-description \
    ros-kinetic-turtlebot3-fake \
    ros-kinetic-turtlebot3-gazebo \
    ros-kinetic-turtlebot3-msgs \
    ros-kinetic-turtlebot3-navigation \
    ros-kinetic-turtlebot3-simulations \
    ros-kinetic-turtlebot3-slam \
    ros-kinetic-turtlebot3-teleop
# create non-root user
ENV USERNAME ros
RUN adduser --ingroup sudo --disabled-password --gecos "" --shell /bin/bash --home /home/$USERNAME $USERNAME
RUN bash -c "echo ros:ros | chpasswd"
ENV HOME /home/$USERNAME
USER $USERNAME
# create catkin_ws
RUN mkdir /home/$USERNAME/catkin_ws
WORKDIR /home/$USERNAME/catkin_ws
# add catkin env
RUN echo 'source /opt/ros/kinetic/setup.bash' >> /home/$USERNAME/.bashrc
RUN echo 'source /home/$USERNAME/catkin_ws/devel/setup.bash' >> /home/$USERNAME/.bashrc
I don't know what command is causing the problem, nor how I would troubleshoot that myself.

How to Add user when creating docker image from alpine base image

I want to add the user www-data as my default user when I run my container in bash mode.
Currently, when I type whoami it shows root as the user, but I need www-data as the user. How do I do this in Docker?
This is my docker file:
FROM php:7.2-fpm-alpine
LABEL maintainer="y.ghorecha@abc.de" \
    muz.customer="xxx" \
    muz.product="WIDC" \
    container.mode="production"
#https://pkgs.alpinelinux.org/packages
RUN apk add --no-cache --virtual .deps autoconf tzdata build-base libzip-dev mysql-dev gmp-dev \
    libxml2-dev libpng-dev zlib-dev freetype-dev jpeg-dev icu-dev openldap-dev libxslt-dev &&\
    docker-php-ext-install zip xml mbstring json intl gd pdo pdo_mysql iconv soap \
    dom gmp fileinfo sockets bcmath mysqli ldap xsl &&\
    echo 'date.timezone="Europe/Berlin"' >> "${PHP_INI_DIR}"/php.ini &&\
    cp /usr/share/zoneinfo/Europe/Berlin /etc/localtime &&\
    echo 'Europe/Berlin' > /etc/timezone &&\
    apk del .deps &&\
    apk add --no-cache libzip mysql libxml2 libpng zlib freetype jpeg icu gmp git subversion libxslt openldap \
    apache2 apache2-ldap apache2-proxy libreoffice openjdk11-jre ghostscript msttcorefonts-installer \
    terminus-font ghostscript-fonts &&\
    ln -s /usr/lib/apache2 /usr/lib/apache2/modules &&\
    ln -s /usr/sbin/httpd /etc/init.d/httpd &&\
    update-ms-fonts
# imap setup
RUN apk --update --virtual build-deps add imap-dev
RUN apk add imap
RUN docker-php-ext-install imap
# copy all codebase
COPY ./ /var/www
# SSH setup
RUN apk update && \
    apk add --no-cache \
    openssh-keygen \
    openssh
# copy Azure specific files
COPY backend/build/azure/backend/ /var/www/backend/
# User owner setup
RUN chown -R www-data:www-data /var/www/
# Work directory setup
WORKDIR /var/www
# copy apache httpd.conf file
COPY httpd.conf /etc/apache2/httpd.conf
# copy crontabs for root user
COPY backend/data/CRONTAB/production/crontab.txt /etc/crontabs/www-data
# SSH Key setup
RUN mkdir -p /home/www-data/.ssh
RUN chown -R www-data:www-data /home/www-data/
#https://github.com/docker-library/httpd/blob/3ebff8dadf1e38dbe694ea0b8f379f6b8bcd993e/2.4/alpine/httpd-foreground
#https://github.com/docker-library/php/blob/master/7.2/alpine3.10/fpm/Dockerfile
CMD ["/bin/sh", "-c", "rm -f /usr/local/apache2/logs/httpd.pid && /usr/sbin/crond start && httpd -DBACKGROUND && php-fpm"]
Please advise on the above.
I'd start by first checking whether the www-data user even exists in the image. Execute something like this in the running container:
sudo cat /etc/passwd | grep www-data
If the user does exist, then add the USER www-data directive to the Dockerfile after all the commands that do installs, create directories, change permissions, etc. If the base image doesn't run as root, you would also need to add USER 0 at the beginning to switch to the root user for those commands. Looking at the Dockerfile, I'd suggest adding USER www-data just before the CMD directive.
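As a minimal sketch of that placement (illustrative only; the RUN contents are placeholders based on the Dockerfile above):
FROM php:7.2-fpm-alpine
# Installs, directory setup, and chown run as root (the default for this base image).
RUN mkdir -p /home/www-data/.ssh && chown -R www-data:www-data /var/www /home/www-data
# Switch just before CMD so the container's main process runs as www-data.
USER www-data
CMD ["php-fpm"]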
If the www-data user doesn't exist then it has to be added first. The commands for Alpine Linux are addgroup and adduser. Something like these if the user id for www-data is to be 33 and the group it belongs to is also named www-data and has id of 33:
RUN addgroup -S -g 33 www-data \
    && adduser -S -D -u 33 -s /sbin/nologin -h /var/www -G www-data www-data
Add the above just before RUN chown -R www-data:www-data /var/www/, or make it a single RUN directive:
RUN addgroup -S -g 33 www-data \
    && adduser -S -D -u 33 -s /sbin/nologin -h /var/www -G www-data www-data \
    && chown -R www-data:www-data /var/www/
You can add this line to your Dockerfile in order to become the www-data user:
USER www-data
This can be added either
at the end of your file, if you want to become that user when the build finishes (so the container runs as it), or
at the beginning, if you want to perform the actions within the Dockerfile as this user.

How to access remote debugging page for dockerized Chromium launch by Puppeteer?

When Chromium succeeds in launching, its debugging WebSocket URL should look like ws://127.0.0.1:9222/devtools/browser/ec261e61-0e52-4016-a5d7-d541e82ecb0a.
127.0.0.1:9222 should then be browsable in Chrome to inspect the headless Chromium. However, I cannot access the remote debugger URL from Chrome after I dockerize my application.
launchOption for launching Chromium with Puppeteer:
{
    "args": [
        "--remote-debugging-port=9222",
        "--window-size=1920,1080",
        "--mute-audio",
        "--disable-notifications",
        "--force-device-scale-factor=0.8",
        "--no-sandbox",
        "--disable-setuid-sandbox"
    ],
    "defaultViewport": {
        "height": 1080,
        "width": 1920
    },
    "headless": true
}
Dockerfile:
FROM node:10.16.3-slim
RUN wget -q -O - https://dl-ssl.google.com/linux/linux_signing_key.pub | apt-key add - \
    && sh -c 'echo "deb [arch=amd64] http://dl.google.com/linux/chrome/deb/ stable main" >> /etc/apt/sources.list.d/google.list' \
    && apt-get update \
    && apt-get install -y google-chrome-unstable fonts-ipafont-gothic fonts-wqy-zenhei fonts-thai-tlwg fonts-kacst fonts-freefont-ttf \
        --no-install-recommends \
    && rm -rf /var/lib/apt/lists/* \
    && wget --quiet https://raw.githubusercontent.com/vishnubob/wait-for-it/master/wait-for-it.sh -O /usr/sbin/wait-for-it.sh \
    && chmod +x /usr/sbin/wait-for-it.sh
WORKDIR /usr/app
COPY ./ ./
VOLUME ["......." ]
RUN groupadd -r pptruser && useradd -r -g pptruser -G audio,video pptruser \
    && mkdir -p /home/pptruser/Downloads \
    && chown -R pptruser:pptruser /home/pptruser \
    && chown -R pptruser:pptruser /usr/app \
    && npm install
USER pptruser
CMD npm run start
EXPOSE 3000 9222
Run the new container with:
docker run \
    -p 3000:3000 \
    -p 9222:9222 \
    pptr
Port 9222 should be accessible on my host machine, but Chrome shows ERR_EMPTY_RESPONSE when I browse 127.0.0.1:9222, and DOCKER-INTERNAL-IP:9222 times out.
I managed to make this work with puppeteer using the following Dockerfile, docker run and puppeteer config:
FROM ubuntu:18.04
RUN apt update \
    && apt install -y \
        curl \
        wget \
        gnupg \
        gcc \
        g++ \
        make \
    && curl -sL https://deb.nodesource.com/setup_10.x | bash - \
    && apt install -y nodejs \
    && rm -rf /var/lib/apt/lists/*
RUN wget -q -O - https://dl-ssl.google.com/linux/linux_signing_key.pub | apt-key add - \
    && sh -c 'echo "deb [arch=amd64] http://dl.google.com/linux/chrome/deb/ stable main" >> /etc/apt/sources.list.d/google.list' \
    && apt-get update \
    && apt-get install -y google-chrome-unstable fonts-ipafont-gothic fonts-wqy-zenhei fonts-thai-tlwg fonts-kacst fonts-freefont-ttf \
        --no-install-recommends \
    && rm -rf /var/lib/apt/lists/*
RUN groupadd -r pptruser && useradd -r -g pptruser -G audio,video pptruser \
    && mkdir -p /home/pptruser/Downloads \
    && chown -R pptruser:pptruser /home/pptruser
ADD . /app
WORKDIR /app
RUN chown -R pptruser:pptruser /app
RUN rm -rf node_modules
RUN rm -rf build/*
USER pptruser
RUN npm install --dev
RUN chmod +x /app/entrypoint.sh
ENTRYPOINT /app/entrypoint.sh
Docker run:
docker run -p 9223:9222 -it myimage
Puppeteer launch:
this.browser = await puppeteer.launch(
    {
        headless: true,
        args: [
            '--remote-debugging-port=9222',
            '--remote-debugging-address=0.0.0.0',
            '--no-sandbox'
        ]
    }
);
The entrypoint just launches the platform like: node build/main.js
After that I just had to connect to localhost:9223 in Chrome to see the browser. Hope it helps!
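As a quick sanity check from the host (assuming the 9223:9222 mapping above), the DevTools HTTP endpoint can be queried before pointing Chrome at it:
# Should return JSON with the browser version and webSocketDebuggerUrl
curl http://localhost:9223/json/version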
I know there is already an accepted answer, but let me add onto this in hopes of greatly reducing your image size. One shouldn't add too many extras to the Dockerfile if one can help it. But ultimately, adding --remote-debugging-port=9222 and --remote-debugging-address=0.0.0.0 will allow you to access it.
Dockerfile
FROM ubuntu:latest
LABEL Full Name <email@email.com> https://yourwebsite.com
WORKDIR /home/
COPY wrapper-script.sh wrapper-script.sh
# install chromium-browser and cleanup.
RUN apt update && apt install chromium-browser --no-install-recommends -y && apt autoremove && apt clean && apt autoclean && rm -rf /var/lib/apt/lists/*
# Run your commands and add environment variables from your compose file.
CMD ["sh", "wrapper-script.sh"]
I use a wrapper script so that I can include environment variables here. You can see URL and USERNAME being set so that I can configure them from the compose file. Of course, I'm sure there is a better way to do this, but I do it so that I can scale my containers horizontally with ease.
wrapper-script.sh
#!/bin/bash
# Start the process
chromium-browser --headless --disable-gpu --no-sandbox --remote-debugging-port=9222 --remote-debugging-address=0.0.0.0 ${URL}${USERNAME}
status=$?
if [ $status -ne 0 ]; then
    echo "Failed to start chromium-browser: $status"
    exit $status
fi
# Naive check runs checks once a minute to see if either of the processes exited.
# This illustrates part of the heavy lifting you need to do if you want to run
# more than one service in a container. The container exits with an error
# if it detects that either of the processes has exited.
# Otherwise it loops forever, waking up every 60 seconds
while sleep 60; do
    ps aux | grep chromium-browser | grep -q -v grep
    PROCESS_1_STATUS=$?
    # If the greps above find anything, they exit with 0 status
    # If they are not both 0, then something is wrong
    if [ $PROCESS_1_STATUS -ne 0 ]; then
        echo "One of the processes has already exited."
        exit 1
    fi
done
Lastly, I have the docker-compose file. This is where I define all my settings so that I can configure my wrapper-script.sh with what I need and scale horizontally. Notice the environment section of the docker-compose file. USERNAME and URL are environment variables, and they can be called from the wrapper script.
docker-compose.yml
version: '3.7'
services:
  chrome:
    command: [ 'sh', 'wrapper-script.sh' ]
    image: headless-chrome
    build:
      context: .
      dockerfile: Dockerfile
    environment:
      - USERNAME=eaglejs
      - URL=https://teamtreehouse.com/
    ports:
      - 9222:9222
If you are wondering what my folder structure looks like: all three files are at the root of the folder. For example:
My_Docker_Repo:
  Dockerfile
  docker-compose.yml
  wrapper-script.sh
After that is all said and done, I simply run docker-compose up and I have one container running. Right now, using the ports section, you'll have to do something to scale that as well: if you were to run docker-compose up --scale chrome=5, your ports will clash. Let me know if you want to try that and I'll see what I can do for scaling, but other than that, if it is for testing, this should work well the way it is. :) Happy coding!
