Gunicorn, Docker and /dev/shm

These command lines are from the entrypoint.sh I'm using while containerizing a simple Django application with Docker. As you can see, I'm trying to serve Django with Gunicorn, which is a WSGI server. It may be a simple question, but I still don't understand: what exactly do /dev/shm and --worker-tmp-dir do here, and how do they help?
#!/bin/bash
APP_PORT=${PORT:-8000}
cd /app/
/opt/venv/bin/gunicorn --worker-tmp-dir /dev/shm app.wsgi:application --bind "0.0.0.0:${APP_PORT}"
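(A side note, not part of the original question: Gunicorn writes small worker heartbeat files to --worker-tmp-dir, and /dev/shm is an in-memory tmpfs, so pointing the heartbeat files there avoids workers being killed for timeouts when the default temp directory sits on slow or blocking disk, which can happen inside containers. You can confirm that /dev/shm is memory-backed from inside the container:)
df -h /dev/shm    # should show a tmpfs filesystem, typically 64M by default in Docker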

Related

How to run nginx and gunicorn in same docker container

I am trying to deploy a Python Flask application with Gunicorn and Nginx. I am trying to run both Gunicorn (WSGI) and Nginx in the same container, but Nginx does not start. If I log into the container I am able to start Nginx manually.
Below is my Dockerfile:
RUN apt-get clean && apt-get -y update
RUN apt-get -y install \
    nginx \
    python3-dev \
    curl \
    vim \
    build-essential \
    procps
WORKDIR /app
COPY requirements.txt /app/requirements.txt
COPY nginx-conf /etc/nginx/sites-available/default
RUN pip install -r requirements.txt --src /usr/local/src
COPY . .
EXPOSE 8000
EXPOSE 80
CMD ["bash" , "server.sh"]
server.sh file looks like
# turn on bash's job control
set -m
gunicorn --bind :8000 --workers 3 wsgi:app
service nginx start or /etc/init.d/nginx
Gunicorn is started by server.sh, but Nginx is not.
My aim is to later run these containers in Kubernetes. Should I i) run Nginx and Gunicorn in separate Pods, ii) run them in the same Pod as separate containers, or iii) run them in the same container in the same Pod?
My aim is to later run these containers in Kubernetes. Should I i) run both Nginx and Gunicorn in separate Pods
Yes, this. It is very straightforward to set up (if you consider YAML files with dozens of lines "straightforward"): write a Deployment and a matching (ClusterIP-type) Service for the Gunicorn backend, and then write a separate Deployment and matching (NodePort- or LoadBalancer-type) Service for the Nginx proxy. In the Nginx configuration, use a proxy_pass directive pointing at the name of the Gunicorn Service as the backend host name.
There are a couple of advantages to doing this. If the Python service fails for whatever reason, you don't have to restart the Nginx proxy as well. If you're handling enough load that you need to scale up the application, you can run a minimum number of lightweight Nginx proxies (maybe 3 for redundancy) with a larger number of backends depending on the load. If you update the application, Kubernetes will delete and recreate the Deployment-managed Pods for you, and again, using separate Deployments for the proxies and backends means you won't have to restart the proxies if only the application code changes.
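A minimal sketch of that layout might look like the following (the flask-backend name, the image tag, and the port numbers are assumptions, not taken from the question):
# backend.yaml (sketch): Deployment plus ClusterIP Service for the Gunicorn backend
apiVersion: apps/v1
kind: Deployment
metadata:
  name: flask-backend
spec:
  replicas: 3
  selector:
    matchLabels: {app: flask-backend}
  template:
    metadata:
      labels: {app: flask-backend}
    spec:
      containers:
        - name: gunicorn
          image: registry.example.com/flask-app:latest   # hypothetical image
          ports:
            - containerPort: 8000
---
apiVersion: v1
kind: Service
metadata:
  name: flask-backend          # the Nginx proxy uses this name as its upstream host
spec:
  type: ClusterIP
  selector: {app: flask-backend}
  ports:
    - port: 8000
      targetPort: 8000
The Nginx Deployment and its NodePort/LoadBalancer Service would be written the same way, with the Nginx configuration containing proxy_pass http://flask-backend:8000;.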
So, to address the first part of the question:
I am trying to deploy a python flask application with gunicorn and nginx.
In plain Docker, for similar reasons, you can run two separate containers. You could manage this in Docker Compose, which has a much simpler YAML file layout; it would look something like
version: '3.8'
services:
  backend:
    build: .   # Dockerfile just installs Gunicorn, CMD starts it
  proxy:
    image: nginx
    volumes:
      - ./nginx-conf:/etc/nginx/conf.d   # could build a custom image too
      # configuration specifies `proxy_pass http://backend:8000`
    ports:
      - '8888:80'
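For reference, the mounted ./nginx-conf directory above could contain a single default.conf along these lines (a sketch consistent with the compose file; the proxy headers are just common defaults, not taken from the question):
# ./nginx-conf/default.conf (sketch)
server {
    listen 80;
    location / {
        proxy_pass http://backend:8000;                 # "backend" resolves to the compose service
        proxy_set_header Host $host;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    }
}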
This sidesteps all of the trouble of trying to get multiple processes running in the same container. You can simplify the Dockerfile you show:
# Dockerfile
FROM python:3.9
RUN apt-get update \
 && DEBIAN_FRONTEND=noninteractive \
    apt-get install --no-install-recommends --assume-yes \
      python3-dev \
      build-essential
# (don't install irrelevant packages like vim or procps)
WORKDIR /app
COPY requirements.txt .
RUN pip install -r requirements.txt
COPY . .
EXPOSE 8000
# (don't need a shell script wrapper)
CMD gunicorn --bind :8000 --workers 3 wsgi:app
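With that in place, one command builds and starts both services (assuming the compose file above is saved as docker-compose.yml):
docker compose up --build
# then browse to http://localhost:8888/ and Nginx proxies the request to Gunicorn on port 8000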
As for choosing how to split containers between Pods, that really depends on the use case. If they talk to each other but perform separate tasks, I would go with two containers in one Pod.
Also, about your server.sh file: the reason Gunicorn starts but Nginx doesn't is that Gunicorn does not run in daemon mode by default, so the script blocks on the gunicorn line and never reaches the Nginx command. If you run gunicorn --help you see this:
-D, --daemon Daemonize the Gunicorn process. [False]
I still think it's better to separate the containers but if you want it to just work, change it to this:
# turn on bash's job control
set -m
gunicorn --bind :8000 --workers 3 wsgi:app -D
service nginx start   # or: /etc/init.d/nginx start
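If you do keep both processes in one container, another common sketch (an alternative, not part of the original answer) is to start Nginx first and leave Gunicorn in the foreground as the main process, so the container lives and dies with Gunicorn:
#!/bin/bash
set -e
service nginx start                                 # Nginx backgrounds itself
exec gunicorn --bind :8000 --workers 3 wsgi:app     # Gunicorn stays in the foreground as PID 1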
To answer your question regarding Kubernetes:
It depends on what you want to do.
Containers within the same Pod share the same network namespace, meaning that two containers in the same Pod can communicate with each other by contacting localhost. This means your packets never get sent across the network, and communication between them is always possible.
If you split them up into separate Pods, you will want to create a Service object and let them communicate via that Service. Having them in two Pods allows you to scale them up and down individually and overall gives you more options to configure them individually, for example by applying different kinds of security mechanisms.
Which option you choose depends on your architecture and what you want to accomplish.
Having two containers in the same Pod is usually only done when it follows a "Sidecar" pattern, which basically means that there is a "main" container doing the work and the others in the Pod simply assist the "main" container and have no reason whatsoever to exist on their own.
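As an illustration of that sidecar layout (a sketch with assumed names, not something from the question), a single Pod spec would simply list both containers:
# sidecar-pod.yaml (sketch)
apiVersion: v1
kind: Pod
metadata:
  name: flask-with-proxy
spec:
  containers:
    - name: gunicorn
      image: registry.example.com/flask-app:latest   # hypothetical image
      ports:
        - containerPort: 8000
    - name: nginx-sidecar
      image: nginx
      ports:
        - containerPort: 80
      # its configuration would proxy_pass to http://localhost:8000, since the Pod shares one network namespace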

Redis connection within docker container is extremely slow

I have a web app in a Docker container that connects to a Redis server running in the same container. I see around 150 ms for GET and SET commands when running this environment in Docker (locally and when deployed). When I run the same web app outside of Docker, connecting to a Redis server installed on my local machine, I get around 5-10 ms for most Redis operations, as I should. What on earth could be wrong with my Docker container that causes timings of over 150 ms when connecting within the same container?
Dockerfile
FROM crystallang/crystal:0.34.0
...
RUN wget http://download.redis.io/redis-stable.tar.gz
RUN tar xvzf redis-stable.tar.gz
WORKDIR /redis-stable
RUN make -j4
RUN cp src/redis-server /usr/local/bin/
RUN cp src/redis-cli /usr/local/bin/
RUN cp utils/redis_init_script /etc/init.d/redis_6379
RUN mkdir -p /var/redis/6379
COPY /redis.conf /etc/redis/6379.conf
RUN update-rc.d redis_6379 defaults
...
EXPOSE 3000
HEALTHCHECK CMD ["/app", "-c", "http://localhost:3000/"]
ENTRYPOINT ["./prod_init.sh"]
prod_init.sh
#!/bin/sh
/etc/init.d/redis_6379 start &
exec [web app startup procedure here]
redis.conf
https://gist.github.com/sam0x17/5af8ca142e9eec692d30057160b45d6b
docker stats
CONTAINER ID   NAME    CPU %   MEM USAGE / LIMIT    MEM %   NET I/O          BLOCK I/O     PIDS
06908bd70a00   myapp   0.14%   231MiB / 31.28GiB    0.72%   127kB / 75.9kB   1.11MB / 0B   38
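A first diagnostic step (a suggestion, not part of the original post) is to measure latency from inside the container itself, which separates Redis-side latency from client-library or DNS overhead:
docker exec -it myapp redis-cli -h 127.0.0.1 -p 6379 --latency      # min/avg/max round-trip time in ms
docker exec -it myapp redis-cli --intrinsic-latency 5               # scheduling latency of the container itself
If those numbers are low, the slowdown is more likely in how the application connects (for example resolving a hostname on every request) than in Redis itself.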

Google Cloud Run fails to listen even after changing port to 8080

I am having some issues deploying to Cloud Run lately. When I try to deploy the Dockerfile below to Cloud Run, it fails with the error "Failed to start and then listen on the port defined by the PORT environment variable":
FROM phpmyadmin/phpmyadmin:latest
EXPOSE 8080
RUN sed -i 's/80/${PORT}/g' /etc/apache2/sites-available/000-default.conf /etc/apache2/ports.conf
ENTRYPOINT [ "/docker-entrypoint.sh" ]
CMD [ "apache2-foreground" ]
The ENTRYPOINT and CMD were added explicitly, even though phpmyadmin/phpmyadmin:latest already uses this same ENTRYPOINT and CMD, to see if that would solve it; they are not strictly required. The same Docker image, when started with docker run, runs properly and listens on port 8080. Is there something I am doing wrong?
This is the command I use to deploy:
gcloud run deploy phpmyadmin --memory=1Gi --platform=managed \
--allow-unauthenticated --add-cloudsql-instances project_id:us-central1:db-name \
--region=us-central1 --image gcr.io/project_id/phpmyadmin:1.3 \
--update-env-vars PMA_HOST=localhost,PMA_SOCKET="/cloudsql/project_id:us-central1:db-name",PMA_ABSOLUTE_URI=phpmyadmin.domain.com
This is all I can find in the logs (some data has been redacted):
https://gist.github.com/shanukk27/9dd4b3076c55307bd6e853a76e7a34e0
The Cloud Run runtime environment seems to be slightly different from plain docker run. You can't use ENTRYPOINT and CMD at the same time:
ENTRYPOINT [ "/docker-entrypoint.sh" ]
CMD [ "apache2-foreground" ]
It works with Docker Run (Why? Docker issue? Docker feature?) and not on Cloud Run (missing feature? bug?).
Use only one of them, for example:
ENTRYPOINT /docker-entrypoint.sh && apache2-foreground
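Before redeploying, it can also help to reproduce Cloud Run's contract locally by injecting PORT yourself and checking that the container really listens on it (using the image tag from the deploy command above):
docker run --rm -e PORT=8080 -p 8080:8080 gcr.io/project_id/phpmyadmin:1.3
curl -I http://localhost:8080/        # should return an HTTP status line if Apache bound to $PORT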
EDIT
A strange remark shared by Shanu is that the same two-command setup works with a WordPress deployment, but doesn't work here.
FROM wordpress:5.3.2-php7.3-apache
EXPOSE 8080
# Copy custom entrypoint from repo
COPY cloud-run-entrypoint.sh /usr/local/bin/
# Change apache listening port and set permission for docker entrypoint
RUN sed -i 's/80/${PORT}/g' /etc/apache2/sites-available/000-default.conf /etc/apache2/ports.conf && \
chmod +x /usr/local/bin/cloud-run-entrypoint.sh
# Wordpress conf
COPY wordpress/. /var/www/html/
# Custom entrypoint
ENTRYPOINT ["cloud-run-entrypoint.sh","docker-entrypoint.sh"]
# Start apache when docker container starts
CMD ["apache2-foreground"]
The problem is solved here, but the reason is not clear.
Note to Googler (Steren? Ahmet?): Can you share more details on this behavior?

folder permissions docker-osx-dev

I'm using docker on macOS with docker-osx-dev (https://github.com/brikis98/docker-osx-dev)
Everything is OK, and it solves the problem with slow volumes for me. But every time I bring up my docker-compose setup I have a problem with permissions and am forced to set permissions through docker exec and chmod ... I spent a lot of time trying to find a solution. I tried using usermod with uid 501 and 1000, but nothing helped. Do you have any idea how to fix it?
My project settings: https://bitbucket.org/SmileSergey/symfonydocker/src
Thanks!
You can use http://docker-sync.io with the https://github.com/EugenMayer/docker-sync/blob/master/example/docker-sync.yml#L47 option to map the macOS user to the uid you have in the container. This removes the whole issue, although it replaces docker-osx-dev completely.
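A minimal docker-sync.yml along those lines might look like this (a sketch based on the linked example; the sync name, path, and uid are assumptions for a typical www-data setup):
# docker-sync.yml (sketch)
version: "2"
syncs:
  symfony-app-sync:            # reference this named volume from docker-compose.yml
    src: './app'
    sync_userid: '33'          # uid of www-data inside the container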
As a quick workaround, you could try adding a command to the nginx part of your docker-compose.yml as follows:
nginx:
  image: nginx:latest
  ...
  volumes:
    - "./config/nginx.conf:/etc/nginx/conf.d/default.conf"
    - "./app:/var/www/symfony"
  command: /bin/bash -c "chmod -R 777 /var/www/symfony/ && nginx -g 'daemon off;'"
Background:
the official nginx Dockerfile specifies following default command:
CMD ["nginx", "-g", "daemon off;"]
which is executing
nginx -g 'daemon off;'
You have already found a quick-and-dirty workaround: running chmod -R 777 /var/www/symfony/ within the container.
With the command in the docker-compose file above, we execute that chmod before running the nginx -g 'daemon off;' default command.
Note that a CMD specified in the Dockerfile is replaced by the command defined in a docker-compose file (or by any command specified on docker run ... myimage mycommand).
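A quick way to see that override behaviour (an illustrative command, not from the original answer):
docker run --rm nginx:latest nginx -v    # the trailing "nginx -v" replaces the image's default CMD, so no server starts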
For dev on macOS, just use Vagrant + NFS shared folders. It solves all of these problems and doesn't require adding anything special to docker-compose for dev.

Docker - Change the owner of a file for a container in Windows

I have the following simple Docker file:
FROM php:5.6-apache
RUN chown -R www-data:www-data /var/www/html
RUN chmod -R 777 /var/www/html
VOLUME /var/www/html
CMD ["apache2-foreground"]
I run it using docker-compose with the following docker-compose.yml file:
version: '2'
services:
  foo:
    image: image_name
    volumes:
      - "./:/var/www/html"
    ports:
      - "8080:80"
    restart: always
I then connected to the container like so:
docker exec -it container_name_foo_1 bash
When I run ls -l on the /var/www/html directory I get something like this:
total 0
drwxrwxrwx 1 1000 staff 0 Aug 6 20:38 hi
However when I try to change the owner of this file like so:
chown www-data:www-data hi
and then run ls -l again, the owner has not changed!
I believe this may be a Windows only problem.
I mentioned a similar issue here: Change the owner of a file in a running Docker container with an attached volume in Windows, which is still unanswered. (This question is slightly different as this question deals with DockerFile rather than an already running Docker container).
From reading other Stack Overflow answers, I was told to change the permissions before mounting the volume in the Dockerfile (which I did), but it did not work.
To use POSIX permissions on an NTFS-mounted filesystem you need to provide a user mapping file. Basically, you need a way to convert www-data to a valid NT SID. See this answer on Ask Ubuntu, and this ntfs-3g article for a more thorough discussion.
Have you tried this?
docker exec -u root -it container_name_foo_1 chown other:user -R /path
