bind(): No such file or directory [core/socket.c line 230] - uwsgi

I'm trying to create a Unix socket for an application running under uWSGI, but it won't let me create the socket. Please check the following settings:
[uwsgi]
chdir = /home/deploy/webapps/domain/virtualenv/app
module = app.wsgi
home = /home/deploy/webapps/domain/virtualenv
master = true
processes = 10
uwsgi-socket = /var/run/uwsgi/app/%n/socket # if I use /tmp/name.socket instead, it works!
vacuum = true
# Error codes:
The -s/--socket option is missing and stdin is not a socket.
bind(): No such file or directory [core/socket.c line 230]
I have given permissions to this directory, and it is created, but it still does not work:
mkdir -p /var/run/uwsgi/app
sudo chown -R deploy:root /var/run/uwsgi/app
sudo chmod 777 /var/run/uwsgi/app
What would be the solution for this? Thanks.

You need to do two things:
mkdir /var/run/app-uwsgi
and
sudo chown -R www-data:www-data /var/run/app-uwsgi
On Ubuntu this directory is lost after a reboot (because /var/run is a tmpfs) and needs to be recreated.
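One way to avoid recreating the directory by hand is a tmpfiles.d entry; this is a sketch assuming the /var/run/app-uwsgi path and www-data owner from above, and systemd-based Ubuntu:

```shell
# Hypothetical tmpfiles.d entry: recreate the socket directory at every boot.
# Entry format: type path mode user group age
echo 'd /var/run/app-uwsgi 0755 www-data www-data -' |
    sudo tee /etc/tmpfiles.d/app-uwsgi.conf

# Apply it immediately without rebooting
sudo systemd-tmpfiles --create /etc/tmpfiles.d/app-uwsgi.conf
```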

I had the same error trying to run uwsgi inside a Docker container: the directory for the socket did not exist.
I had to add the following command to the end of my Dockerfile:
RUN mkdir -p /var/www/todobackend
The settings of the server in my case are part of the docker-compose.yml file:
- uwsgi
- "--socket /var/www/todobackend/todobackend.sock"
- "--chmod-socket=666"
- "--module todobackend.wsgi"
- "--master"
- "--die-on-term"

Related

How to give permissions write and Read to a mount folder in docker container

I want to bind-mount a volume between JupyterLab and the VM. The only problem is permissions: the folder that I create (/home/jovyan/work) always has root permissions, so I cannot create anything inside it. I have tried many solutions. In a Dockerfile I tried this:
FROM my_image
RUN if [[ -d "$HOME/toto" ]] ; then echo OK ; else mkdir "$HOME/toto" ; fi
RUN chmod 777 $HOME/toto
==> I still had no permissions on the mounted folder
ARG NB_DIR=/tmp/lab_share
RUN mkdir "$NB_DIR"
RUN chown :100 "$NB_DIR"
RUN chmod g+rws "$NB_DIR"
RUN apt-get install acl
RUN setfacl -d -m g::rwx "$NB_DIR"
USER root
==> The problem here is that setfacl is not recognized in the container; I tried to install it just beforehand, but it is still not accepted.
I tried adding GRANT_SUDO to the JupyterHub service:
extraEnv:
  GRANT_SUDO: "yes"
==> The problem here is that extraEnv is not recognized.
I tried to create a hook in the jupyterhub_config file, just after the binding code:
notebook_dir = os.environ.get('DOCKER_NOTEBOOK_DIR') or '/home/jovyan/work'
c.DockerSpawner.notebook_dir = notebook_dir
c.DockerSpawner.volumes = {"/tmp/{username}": notebook_dir}
c.DockerSpawner.remove_containers = True

##### CREATE HOOK
def create_dir_hook(spawner):
    username = spawner.user.name  # get the username
    logger.info(f"### USERNAME {username}")
    volume_path = os.path.join('/tmp', username)
    logger.info(f"### VOLUME PATH {volume_path}")
    if not os.path.exists(volume_path):
        # create a directory with umask 0755
        # hub and container user must have the same UID to be writable
        # still readable by other users on the system
        logger.info(f"FOLDER FOR USER {username} not existing")
        logger.info(f"CREATING A FOLDER IN {volume_path}")
        os.mkdir(volume_path, 0o777)
    # now do whatever you think your user needs
    # ...
    logger.info("coucou")

# attach the hook function to the spawner
c.Spawner.pre_spawn_hook = create_dir_hook
With these solutions, the shell doesn't evaluate the whole if block, not even the else branch. I found in the Docker docs that this is a well-known issue, but I didn't find its solution. I really need your help; any solution would be appreciated. Thanks a lot.
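One possible cause of the skipped if block, and this is an assumption since the base image isn't shown: Docker executes RUN lines with /bin/sh, and `[[ ... ]]` is a bash construct that plain sh (dash on Debian/Ubuntu images) does not support. A POSIX-sh version of the same check avoids that:

```shell
# POSIX sh version of the directory check from the Dockerfile above;
# this runs under /bin/sh (dash) as well as bash.
if [ -d "$HOME/toto" ]; then
    echo OK
else
    mkdir -p "$HOME/toto"
fi
chmod 777 "$HOME/toto"
```

Alternatively, `SHELL ["/bin/bash", "-c"]` before the RUN line would make bash syntax available.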

How to solve file permission issues when developing with Apache HTTP server in Docker?

My Dockerfile extends from php:8.1-apache. The following happens while developing:
The application creates log files (as www-data, 33:33)
I create files (as the image's default user root, 0:0) within the container
These files are mounted on my host where I'm acting as user (1000:1000). Of course I'm running into file permission issues now. I'd like to update/delete files created in the container on my host and vice versa.
My current solution is to set the image's user to www-data. In that way, all created files belong to it. Then, I change its user and group id from 33 to 1000. That solves my file permission issues.
However, this leads to another problem:
I'm prepending sudo -E to the entrypoint and command. I'm doing that because they're normally running as root and my custom entrypoint requires root permissions. But in that way the stop signal stops working and the container has to be killed when I want it to stop:
~$ time docker-compose down
Stopping test_app ... done
Removing test_app ... done
Removing network test_default
real 0m10,645s
user 0m0,167s
sys 0m0,004s
Here's my Dockerfile:
FROM php:8.1-apache AS base
FROM base AS dev
COPY entrypoint.dev.sh /usr/local/bin/custom-entrypoint.sh
ARG user_id=1000
ARG group_id=1000
RUN set -xe \
# Create a home directory for www-data
&& mkdir --parents /home/www-data \
&& chown --recursive www-data:www-data /home/www-data \
# Make www-data's user and group id match my host user's ones (1000 and 1000)
&& usermod --home /home/www-data --uid $user_id www-data \
&& groupmod --gid $group_id www-data \
# Add sudo and let www-data execute it without asking for a password
&& apt-get update \
&& apt-get install --yes --no-install-recommends sudo \
&& rm --recursive --force /var/lib/apt/lists/* \
&& echo "www-data ALL=(ALL) NOPASSWD: ALL" > /etc/sudoers.d/www-data
USER www-data
# Run entrypoint and command as sudo, as my entrypoint does some config substitution and both normally run as root
ENTRYPOINT [ "sudo", "-E", "custom-entrypoint.sh" ]
CMD [ "sudo", "-E", "apache2-foreground" ]
Here's my custom-entrypoint.sh
#!/bin/sh
set -e
sed --in-place 's#^RemoteIPTrustedProxy.*#RemoteIPTrustedProxy '"$REMOTEIP_TRUSTED_PROXY"'#' $APACHE_CONFDIR/conf-available/remoteip.conf
exec docker-php-entrypoint "$@"
What do I need to do to make the container catch the stop signal (it is SIGWINCH for the Apache server) again? Or is there a better way to handle the file permission issues, so I don't need to run the entrypoint and command with sudo -E?
What do I need to do to make the container catch the stop signal (it is SIGWINCH for the Apache server) again?
First, get rid of sudo; if you need to be root in your container, run it as root with USER root in your Dockerfile. There's little added value in sudo inside a container, since it should be an environment to run one app and not a multi-user, general-purpose Linux host.
Or is there a better way to handle the file permission issues, so I don't need to run the entrypoint and command with sudo -E?
The pattern I go with is to have developers launch the container as root, and have the entrypoint detect the uid/gid of the mounted volume, and adjust the uid/gid of the user in the container to match that id before running gosu to drop permissions and run as that user. I've included a lot of this logic in my base image example (note the fix-perms script that tweaks the uid/gid). Another example of that pattern is in my jenkins-docker image.
You'll still need to either configure root's login shell to automatically run gosu inside the container, or remember to always pass -u www-data when you exec into your image, but now that uid/gid will match your host.
This is primarily for development. In production, you probably don't want host volumes, use named volumes instead, or at least hardcode the uid/gid of the user in the image to match the desired id on the production hosts. That means the Dockerfile would still have USER www-data but the docker-compose.yml for developers would have user: root that doesn't exist in the compose file in production. You can find a bit more on this in my DockerCon 2019 talk (video here).
You can use user namespaces to map a different user/group in your container to your user on the host.
For example, the group www-data/33 in the container could be the group docker-www-data/100033 on the host; you just have to be in that group to access the log files.
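A sketch of enabling that mapping, assuming the default dockremap user and a subordinate id range starting at 100000 (all values here are examples, not a prescription):

```shell
# Give the remap user a subordinate uid/gid range on the host
echo 'dockremap:100000:65536' | sudo tee -a /etc/subuid /etc/subgid

# Tell the docker daemon to remap container users into that range
echo '{ "userns-remap": "default" }' | sudo tee /etc/docker/daemon.json
sudo systemctl restart docker

# Container uid 33 (www-data) now appears on the host as 100000 + 33 = 100033
```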

Volumes and permissions between host and docker container

I'm trying to dockerize all of the services on my host machine. But I'm running into the following problems with Docker and volume-permissions between host and docker.
I have a host machine with the following folder-structure:
- /data/mysql (user: docker-mysql)
- /data/gitlab (user: docker-gitlab)
- /data/backup (user: share-backup)
The /data/mysql folder is mounted into the mysql Docker container as /mnt/mysql. The container creates backups every 24 hours, but because it runs as the root user, these backups are created as user root and group root in the /data/mysql folder.
My goal is for files in the /data/mysql folder to be created as the docker-mysql user, not as root.
I tried to change the container to another user by adding RUN groupadd -r docker-mysql && useradd -r -g docker-mysql docker-mysql and USER docker-mysql, but then the mysql container won't even start anymore, because the docker-mysql user doesn't have root permissions. I also tried this with the Gitlab-CE Docker image, but ran into the same issue: running Gitlab-CE as a different user throws permission errors.
Any idea how to write files to, for example, /data/mysql on the host with the correct user?
Regardless of your mysql image, what you need is this: every process inside your container must be executed as a non-root user. There are some workarounds for this, but I suggest you first dive into your mysql base image and see what happens under the hood. One method is redirecting every process to a non-root user. This can be achieved as follows:
On your Dockerfile
RUN groupadd -r docker-mysql && useradd -r -g docker-mysql docker-mysql
ENTRYPOINT ["docker-entrypoint.sh"]
CMD ["mysqld"] ##CHECK YOUR IMAGE FIRST
Those two lines translate to:
docker-entrypoint.sh mysqld
Now in your docker-entrypoint.sh, which needs to be modified if the image is not the same as in this example (it was from a mongodb image which uses Ubuntu as its base):
if [[ "$originalArgOne" == mysql* ]] && [ "$(id -u)" = '0' ]; then
    if [ "$originalArgOne" = 'mysqld' ]; then
        chown -R docker-mysql <MYSQL FOLDERS in CONTAINER>
    fi
    # make sure we can write to stdout and stderr as "docker-mysql"
    # (for our "initdb" code later; see "--logpath" below)
    chown --dereference docker-mysql "/proc/$$/fd/1" "/proc/$$/fd/2" || :
    exec gosu docker-mysql "$BASH_SOURCE" "$@"
fi
This answer was adapted from here (it's a good read).

Can not run metricbeat in docker

I am trying to run metricbeat with Docker on a Windows machine, and I have changed metricbeat.yml as per my requirements.
docker run -v /c/Users/someuser/docker/metricbeat.yml:/usr/share/metricbeat/metricbeat.yml docker.elastic.co/beats/metricbeat:5.6.0
but getting these error
metricbeat2017/09/17 10:13:19.285547 beat.go:346: CRIT Exiting: error loading config file: config file ("metricbeat.yml") can only be writable by the owner but the permissions are "-rwxrwxrwx" (to fix the permissions use: 'chmod go-w /usr/share/metricbeat/metricbeat.yml')
Exiting: error loading config file: config file ("metricbeat.yml") can only be writable by the owner but the permissions are "-rwxrwxrwx" (to fix the permissions use: 'chmod go-w /usr/share/metricbeat/metricbeat.yml')
Why am I getting this?
And what is the right way to make a permanent change to file content in a Docker container (as I don't want to change the configuration file each time the container starts)?
Edit:
A container is not meant to be edited/changed. If necessary, Docker volume management is available to externalize all configuration-related work. Thanks.
There are two options here, I think.
The first is to ensure the file has the proper permissions:
chmod 644 metricbeat.yml
Or you can run your docker command with -strict.perms=false, which tells metricbeat not to care about the permissions on the metricbeat.yml file:
docker run \
  --volume="/c/Users/someuser/docker/metricbeat.yml:/usr/share/metricbeat/metricbeat.yml" \
  docker.elastic.co/beats/metricbeat:5.6.0 \
  -strict.perms=false
You can see more documentation about that flag in the link below:
https://www.elastic.co/guide/en/beats/metricbeat/current/command-line-options.html#global-flags

Elastic Beanstalk: what's the best way to create folders & set permissions after deploy?

I'm running a Rails 4.2 app on Elastic Beanstalk, and need to set log permissions and create the /tmp/uploads folder (plus permissions) after deploy.
I was running two ebextensions scripts to do this, but on some occasions they would fail because the folder /var/app/current/ didn't yet exist.
I'm presuming this is because the permissions and/or folders should be created in /app/ondeck/ first, so that EB can copy the contents over to /var/app/current/, but I'd like to know if there's a recommended, more foolproof approach to doing this.
For reference, my two ebextension scripts were:
commands:
  01_set_log_permissions:
    command: "chmod 755 /var/app/current/log/*"
and
commands:
  01_create_uploads_folder:
    command: "mkdir -p /var/app/current/tmp/uploads/"
  02_set_folder_permission:
    command: "chmod 755 /var/app/current/tmp/uploads/"
Thanks,
Dan
You should probably use the files key rather than commands:
commands:
  create_post_dir:
    command: "mkdir /opt/elasticbeanstalk/hooks/appdeploy/post"
    ignoreErrors: true
files:
  "/opt/elasticbeanstalk/hooks/appdeploy/post/99_make_changes.sh":
    mode: "000777"
    content: |
      #!/bin/bash
      mkdir -p /var/app/current/tmp/uploads/
      chmod 755 /var/app/current/tmp/uploads/
It will be triggered after the app deploy has finished.
I've used the steps below:
Create a folder .ebextensions
Create a file .config
Move .config into .ebextensions
Edit the file .config; it must have the syntax below:
commands:
  command1:
    command: mkdir /opt/jenkins
  command2:
    command: chmod 644 /opt/jenkins
https://docs.aws.amazon.com/elasticbeanstalk/latest/dg/customize-containers-ec2.html#linux-commands
* Pay Attention *
You cannot run command1 ("mkdir /opt/jenkins") a second time; you will get an error, so you must do a test first.
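A guarded version of that command, sketched with the same /opt/jenkins path, so a re-run doesn't fail:

```shell
# Only create the directory if it doesn't already exist; safe to run twice.
# (mkdir -p /opt/jenkins would achieve the same thing in one flag.)
test -d /opt/jenkins || mkdir /opt/jenkins
```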
What about using Container Commands?
http://docs.aws.amazon.com/ko_kr/elasticbeanstalk/latest/dg/customize-containers-ec2.html#linux-container-commands
You can use the container_commands key to execute commands for your container. The commands in container_commands are processed in alphabetical order by name. They run after the application and web server have been set up and the application version file has been extracted, but before the application version is deployed. They also have access to environment variables such as your AWS security credentials.
Container commands are run from the staging directory, where your source code is extracted prior to being deployed to the application server. Any changes you make to your source code in the staging directory with a container command will be included when the source is deployed to its final location.
container_commands:
  01_set_log_permissions:
    command: "chmod 755 log/*"
and
container_commands:
  01_create_uploads_folder:
    command: "mkdir -p tmp/uploads/"
  02_set_folder_permission:
    command: "chmod 755 tmp/uploads/"