I'm using a containerized Vespa.ai DB, and I want to execute the following commands from the host:
vespa-stop-services
vespa-remove-index
vespa-start-services
If I execute vespa-stop-services && vespa-remove-index && vespa-start-services from my shell after attaching to the container, it works fine. But when I use docker exec, it fails.
I tried the following commands:
docker exec bash -c 'vespa-stop-services && vespa-remove-index && vespa-start-services'
docker exec bash -l 'vespa-stop-services && vespa-remove-index && vespa-start-services'
The only way I have successfully managed to execute those commands is sequentially, which I would like to avoid:
docker exec bash -l 'vespa-stop-services'
docker exec bash -l 'vespa-remove-index'
docker exec bash -l 'vespa-start-services'
What am I doing wrong?
Thanks in advance!
You need to specify the location of these commands when running them from the parent host system.
The following works (or should work):
docker exec vespa bash -c "/opt/vespa/bin/vespa-stop-services && /opt/vespa/bin/vespa-remove-index -force && /opt/vespa/bin/vespa-start-services"
Notice the -force flag, which skips the confirmation prompt before deleting the data. Also note that the indexes are not the only persistent data; configuration state is still retained.
Example run of a docker container called 'vespa':
docker exec vespa bash -c "/opt/vespa/bin/vespa-stop-services && /opt/vespa/bin/vespa-remove-index -force && /opt/vespa/bin/vespa-start-services"
Executing /opt/vespa/libexec/vespa/stop-vespa-base.sh
config-sentinel was running with pid 7788, sending SIGTERM
Waiting for exit (up to 15 minutes)
.. DONE
configproxy was running with pid 7666, sending SIGTERM
Waiting for exit (up to 15 minutes)
. DONE
[info] You have 23088 kilobytes of data for cluster msmarco
[info] For cluster msmarco distribution key 0 you have:
[info] 23084 kilobytes of data in var/db/vespa/search/cluster.msmarco/n0
[info] removing data: rm -rf var/db/vespa/search/cluster.msmarco/n0
[info] removed.
Running /opt/vespa/libexec/vespa/start-vespa-base.sh
Starting config proxy using tcp/localhost:19070 as config source(s)
runserver(configproxy) running with pid: 10553
Waiting for config proxy to start
config proxy started after 1s (runserver pid 10553)
runserver(config-sentinel) running with pid: 10679
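If the image's login profile already puts the Vespa binaries on PATH (which appears to be the case here, since the plain command names resolve when you attach interactively), a login shell combined with -c should also work without the absolute paths. A sketch, under that assumption:
# Rely on the login shell's PATH instead of spelling out /opt/vespa/bin
docker exec vespa bash -lc 'vespa-stop-services && vespa-remove-index -force && vespa-start-services'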
Related
I need some help configuring my Dockerfile so that my queue works as expected. I already tried running this manually inside the container, but I need it to run automatically after each deploy in GitLab. Here's what I did manually inside the container:
sudo service supervisor enable
sudo service supervisor start
ps aux | grep artisan
It works perfectly, but I need the Dockerfile to run those commands. Here's an excerpt of my Dockerfile:
COPY gal-worker /etc/supervisor/conf.d/gal-worker.conf
COPY gal-schedule /etc/supervisor/conf.d/gal-schedule.conf
RUN chown -R root:root /etc/supervisor/conf.d/*.conf
# Make sure Supervisor comes up after a reboot.
RUN sudo service supervisor enable
# Bring Supervisor up right now.
RUN sudo service supervisor start
But my pipeline can't succeed due to errors:
Step 33/34 : RUN sudo service supervisor enable
---> Running in c04c3ab807d2
Usage: /etc/init.d/supervisord {start|stop|restart|force-reload|status}
The command '/bin/sh -c sudo service supervisor enable' returned a non-zero code: 1
Cleaning up file based variables
ERROR: Job failed: exit code 1
Any ideas?
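For reference, a minimal sketch of the usual pattern: RUN executes at build time in a throwaway container, so a service started there is gone by the time the image runs; Supervisor is normally launched by the container's CMD or ENTRYPOINT instead. The config filenames below are the ones from the excerpt above, and /usr/bin/supervisord assumes the distribution package install:
COPY gal-worker /etc/supervisor/conf.d/gal-worker.conf
COPY gal-schedule /etc/supervisor/conf.d/gal-schedule.conf
RUN chown -R root:root /etc/supervisor/conf.d/*.conf
# Start Supervisor in the foreground when the container starts,
# instead of trying to 'service ... start' during the build.
CMD ["/usr/bin/supervisord", "-n"]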
I have a logrotate config:
/opt/docker_folders/logs/nginx/*.log {
dateext
daily
rotate 31
nocreate
missingok
notifempty
nocompress
postrotate
/usr/bin/docker exec -it nginx-container-name nginx -s reopen > /dev/null 2>/dev/null
endscript
su docker_nginx root
}
folder permissions:
drwxrwxr-x. 2 docker_nginx root 4096 Oct 13 10:22 nginx
nginx is a folder on the host mounted into the Docker container.
docker_nginx is a host user that has the same uid as the nginx user inside the container (uid: 101).
If I run these commands (as root):
# /sbin/logrotate -v /etc/logrotate.d/nginx_logrotate_config
# /sbin/logrotate -d -v /etc/logrotate.d/nginx_logrotate_config
# /sbin/logrotate -d -f -v /etc/logrotate.d/nginx_logrotate_config
everything works like a charm.
Problem:
But when logs are rotated automatically by cron, I get the error
logrotate: ALERT exited abnormally with [1]
in /var/log/messages
As a result, the logs rotate as usual, but nginx doesn't create new files (access.log, etc.).
It looks like the postrotate nginx -s reopen script is failing.
The Linux version is CentOS 7.
SELinux is disabled.
Question:
At the very least, how can I find out what happens when logrotate runs from cron?
And what might the problem be?
PS: I know that I can also use docker restart, but I don't want to do this because of the short service interruption.
PS2: I also know there is a nocreate parameter in the config. That is there because I want the new log files to be created by nginx (to avoid wrong permissions on the new files). In any case, if nginx -s reopen really is failing, there is a possibility that nginx will not reopen newly created files.
EDIT1:
I edited the /etc/cron.daily/logrotate script and captured its output. There is only one line about the problem:
error: error running non-shared postrotate script for /opt/docker_folders/logs/nginx/access.log of '/opt/docker_folders/logs/nginx/*.log '
So I still don't understand what causes this problem... When I run this script manually, everything works fine.
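For reference, the kind of edit that captures this output is a sketch like the following, which redirects logrotate's verbose diagnostics to a file from the daily cron script (the debug log path is illustrative):
#!/bin/sh
# /etc/cron.daily/logrotate, with verbose output captured so that postrotate
# failures from the nightly run are visible afterwards.
/usr/sbin/logrotate -v /etc/logrotate.conf > /var/log/logrotate-debug.log 2>&1
EXITVALUE=$?
if [ $EXITVALUE != 0 ]; then
    /usr/bin/logger -t logrotate "ALERT exited abnormally with [$EXITVALUE]"
fi
exit $EXITVALUE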
Okay, answering my own question.
The -it parameters can't be used with cron tasks (and logrotate is also a cron task), because cron doesn't have an interactive session (TTY).
I figured it out by running /usr/bin/docker exec -it nginx-container-name nginx -s reopen > /dev/null 2>/dev/null as a cron task. I got the error message "The input device is not a TTY".
So my new logrotate config looks like this:
/opt/docker_folders/logs/nginx/*.log {
dateext
daily
rotate 31
nocreate
missingok
notifempty
nocompress
postrotate
/usr/bin/docker exec nginx-container-name /bin/sh -c '/usr/sbin/nginx -s reopen > /dev/null 2>/dev/null'
endscript
su docker_nginx root
}
And it finally works.
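A quick way to confirm the diagnosis from an interactive shell is to detach stdin, which reproduces what cron sees; a sketch:
# With -t but nothing TTY-like on stdin, docker exec fails the same way it does under cron:
/usr/bin/docker exec -it nginx-container-name nginx -s reopen < /dev/null
# -> "the input device is not a TTY"
# Without -it, the same command works from cron and from a terminal alike:
/usr/bin/docker exec nginx-container-name nginx -s reopen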
I have to understand a parameter before using it.
I have created a base image with supervisord installed.
Summary of steps:
FROM ubuntu:20.04
Then I installed some base utilities (time zone data, nano, sudo, zip, etc.).
FROM current_timezone/base-utils:1.04
Then I created a base supervisord image, including a user with sudo privileges and a password:
RUN apt-get update \
&& groupadd ${DOCKER_CONTAINER_WEBGROUP} -f \
&& useradd -m -s $(which bash) -G sudo ${DOCKER_CONTAINER_USERNAME} \
&& echo "${DOCKER_CONTAINER_USERNAME}:${DOCKER_CONTAINER_PASSWORD}" | chpasswd \
&& usermod -aG www-data ${DOCKER_CONTAINER_USERNAME}
So in any Docker image deriving from this, I can run supervisord:
USER ${DOCKER_CONTAINER_USERNAME}
CMD ["/usr/bin/supervisord"]`
So I have Dockerfile entries for my images deriving from this image:
Apache
Nginx
Varnish
etc
Most of the applications can launch with supervisord like this:
[program:apache2]
command=/bin/bash -c "source /etc/apache2/envvars && exec /usr/sbin/apache2 -DFOREGROUND"
autorestart=false
startretries=0
But Nginx doesn't launch; the error is:
the "user" directive makes sense only if the master process runs with super-user privileges, ignored in /etc/nginx/nginx.conf:1
So I created the following, thinking I would get an input prompt once the container starts (the objective was to receive a prompt at container startup so that the password can be passed to sudo -S to start Nginx):
[program:nginx]
command=sudo -K && read -s -p "Nginx requires a super-privileges (sudo user) to start - Please enter password for your sudo user: " TMP_PW && echo $TMP_PW | sudo -S service nginx start && unset TMP_PW
user=userdefinedinstagesupwards
Running the command above on the command line once I am already inside the container (docker exec -ti container_nginx bash) works, and I can enter the password there.
The Issues
Nginx does not start automatically, and I have to enter the container to start it manually.
NOTE: I have seen the Docker nginx image (docker run -d -v $PWD/nginx.conf:/etc/nginx/nginx.conf nginx), but it only contains Nginx. I have some tools I would like to reuse (as explained above, I created an image that has them installed), which means I would have to recreate the steps backwards just for Nginx.
Additional information
As requested by users below, the reason I am using supervisord like this is that I run multiple helper scripts (debug info/dynamic paths/secrets) alongside the main application (e.g. Apache/Nginx/Varnish).
A simple example: an Apache web server with two config files (shown below; I have tried to keep it brief).
When supervisord initializes (CMD ["/usr/bin/supervisord"]), the main application starts along with the helper scripts (in this example they echo some environment variables built up in parent images). I can then access all output in /var/log/supervisor/app-stdout* (or -stderr*) as required.
For instance, I then have ${INSTALLED_BASE_APPS_TEXT} available, which tells me which of my base utilities are installed. If I ever need to add another tool, say htop, I can update the parent image and rebuild this child stage later. Some tools I would always like to be available regardless of which container is running: nano, zip, etc. are things I use permanently.
supervisor/conf.d/config-webserver.conf
[supervisord]
nodaemon=true
[program:apache2]
command=/bin/bash -c "source /etc/apache2/envvars && exec /usr/sbin/apache2 -DFOREGROUND"
autorestart=false
startretries=0
supervisor/conf.d/config-information.conf
[program:echo]
command=/bin/bash -c "echo Loaded Supervisord program 'echo' - Stage 5 operation \(Custom Nginx supervisord config\)"
autorestart=false
startretries=1
[program:echo_base_utils]
command=/bin/bash -c "echo ${INSTALLED_BASE_APPS_TEXT}"
autorestart=false
startretries=0
[program:echo_test_item]
command=/bin/bash -c "echo ${ENV_TEST_ITEM}"
autorestart=false
startretries=0
QUESTION
Is there any way supervisord commands can be made to prompt for input as soon as the container starts? I would like to keep using the images described above.
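For reference, the common way to run Nginx under supervisord is in the foreground, with the master process started as root so that the user directive in nginx.conf applies and the worker processes drop privileges themselves, rather than prompting for a sudo password (supervisord-managed programs have no interactive terminal to read one from). A sketch, assuming supervisord itself runs as root in the Nginx image:
[program:nginx]
; Run the master in the foreground as root; the workers drop to the user named in nginx.conf.
command=/usr/sbin/nginx -g "daemon off;"
autorestart=false
startretries=0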
I'm moving a Rails app from Heroku to a Linux server and deploying it using CapRover. The app is very dependent on background jobs, which I run with Sidekiq.
I've managed to make it work by running both the Rails server (bundle exec rails server -b 0.0.0.0 -p80 &) and Sidekiq (bundle exec sidekiq &) from a script that the Dockerfile's CMD launches.
But I guess it would be much better (separation of concerns) if the Rails server were in one Docker container and Sidekiq in another. But I can't figure out how to connect them. How do I tell my Rails app that Sidekiq lives in another container?
Because I use CapRover, I'm limited to Dockerfiles to deploy my images, so I can't use docker-compose.
Is there a way to tell Rails that it should use a Sidekiq instance found in a certain Docker container? CapRover uses Docker Swarm, if that is of any help.
Am I thinking about this the wrong way?
Currently, my setup is as follows:
1 Docker container with rails server + sidekiq
1 Docker container with the postgres DB
1 Docker container with the Redis DB
My desired setup would be:
1 Docker container with rails server
1 Docker container with sidekiq
1 Docker container with postgres DB
1 Docker container with Redis DB
Is that even possible with my current limitations?
My rails + sidekiq Dockerfile is as follows:
FROM ruby:2.6.4-alpine
#
RUN apk update && apk add nodejs yarn postgresql-client postgresql-dev tzdata build-base ffmpeg
RUN apk add --no-cache --upgrade bash
RUN mkdir /myapp
WORKDIR /myapp
COPY Gemfile /myapp/Gemfile
COPY Gemfile.lock /myapp/Gemfile.lock
RUN bundle install --deployment --without development test
COPY . /myapp
RUN yarn
RUN bundle exec rake yarn:install
# Set production environment
ENV RAILS_ENV production
ENV RAILS_SERVE_STATIC_FILES true
# Assets, to fix missing secret key issue during building
RUN SECRET_KEY_BASE=dumb bundle exec rails assets:precompile
# Add a script to be executed every time the container starts.
COPY entrypoint.sh /usr/bin/
RUN chmod +x /usr/bin/entrypoint.sh
ENTRYPOINT ["entrypoint.sh"]
EXPOSE 80
COPY start_rails_and_sidekiq.sh /myapp/start_rails_and_sidekiq.sh
RUN chmod +x /myapp/start_rails_and_sidekiq.sh
# Start the main process.
WORKDIR /myapp
CMD ./start_rails_and_sidekiq.sh
The start_rails_and_sidekiq.sh script looks like this:
#!/bin/bash
# Start the first process
bundle exec rails server -b 0.0.0.0 -p80 &
status=$?
if [ $status -ne 0 ]; then
echo "Failed to start Rails server: $status"
exit $status
fi
# Start the second process
bundle exec sidekiq &
status=$?
if [ $status -ne 0 ]; then
echo "Failed to start Sidekiq: $status"
exit $status
fi
# Naive check runs checks once a minute to see if either of the processes exited.
# This illustrates part of the heavy lifting you need to do if you want to run
# more than one service in a container. The container exits with an error
# if it detects that either of the processes has exited.
# Otherwise it loops forever, waking up every 60 seconds
while sleep 60; do
ps aux |grep puma |grep -q -v grep
PROCESS_1_STATUS=$?
ps aux |grep sidekiq |grep -q -v grep
PROCESS_2_STATUS=$?
# If the greps above find anything, they exit with 0 status
# If they are not both 0, then something is wrong
if [ $PROCESS_1_STATUS -ne 0 -o $PROCESS_2_STATUS -ne 0 ]; then
echo "One of the processes has already exited."
exit 1
fi
done
I'm totally lost!
Thanks in advance!
Method 1
According to CapRover docs, it seems that it is possible to run Docker Compose on CapRover, but I haven't tried it myself (yet).
Method 2
Although this CapRover example is for a different web app, the Internal Access principle is the same:
You can simply add a srv-captain-- prefix to the name of the container if you want to access it from another container.
However, isn't this method how you told your Rails web app where to find the PostgreSQL DB container? Or are you accessing it through an external subdomain name?
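For example, a minimal sketch of how the split could look with that naming scheme (the app names and a Redis app called "redis" are assumptions; Sidekiq reads the Redis location from the REDIS_URL environment variable by default):
# Dockerfile for the Sidekiq app: reuse the Rails image and override only the CMD.
# (The image name below is hypothetical; with CapRover you could also keep one
# Dockerfile per app and change just the CMD line for the Sidekiq one.)
FROM my-registry/myapp-rails:latest
CMD ["bundle", "exec", "sidekiq"]

# In both CapRover apps, point at the Redis container via its srv-captain-- name,
# e.g. by setting the app's environment variable:
#   REDIS_URL=redis://srv-captain--redis:6379/0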
I want to run cron in a CentOS 7 OS running in Docker. When I try to start crond I get:
Failed to get D-Bus connection: Operation not permitted
Googling shows that that is because systemd is not running. But when I try and start that I get:
bash-4.2# /usr/lib/systemd/systemd --system --unit=basic.target
systemd 219 running in system mode. (+PAM +AUDIT +SELINUX +IMA -APPARMOR +SMACK +SYSVINIT +UTMP +LIBCRYPTSETUP +GCRYPT +GNUTLS +ACL +XZ -LZ4 -SECCOMP +BLKID +ELFUTILS +KMOD +IDN)
Detected architecture x86-64.
Set hostname to <7232ef24bdc8>.
Initializing machine ID from random generator.
Failed to install release agent, ignoring: No such file or directory
Failed to create root cgroup hierarchy: Read-only file system
Failed to allocate manager object: Read-only file system
Anyone know how I can run crond here?
I did a quick check of whether that could work with the docker-systemctl-replacement script. What that script does is read *.service files (without the help of a systemd daemon) so that it knows how to start and stop a service.
After "yum install -y cronie" I was able to run "systemctl.py start crond", after which I can see a running process "/usr/sbin/crond -n". It is possible to install systemctl.py as the default CMD so that it also works when simply starting and stopping the container from a saved image.
You can run the cron service inside Docker in this manner:
Map an /etc/cron.d/crontab file inside the container. The crontab file should contain your cron jobs; see the examples below:
@reboot your-commands-here >> /var/log/cron.log 2>&1
@reboot sleep 02 && your-commands-here >> /var/log/cron.log 2>&1
0 * * * * your-commands-here >> /var/log/cron.log 2>&1
In your Dockerfile:
RUN chmod -R 0664 /etc/cron.d/*
# Create the log file to be able to run tail and initiate my crontab file
RUN touch /var/log/cron.log && crontab /etc/cron.d/crontab
# Run the command on container startup
CMD /etc/init.d/cron restart && tail -f /var/log/cron.log
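To try it out, the image can be built and started like any other; the tail -f in the CMD keeps the container in the foreground and streams the cron log (the image/container names below are illustrative):
docker build -t cron-example .
docker run -d --name cron-example cron-example
# The jobs' output accumulates in /var/log/cron.log inside the container:
docker logs -f cron-example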
# Start supervisor
ENTRYPOINT ["service", "supervisor", "start"]
To run cron under supervisor, use this config:
[program:cron]
command= cron -f
startsecs = 3
stopwaitsecs = 3
autostart = true
autorestart = true
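That snippet would typically live in a conf.d file that the Dockerfile copies in next to the ENTRYPOINT above; a sketch (the filename is illustrative, and the destination follows the Debian/Ubuntu supervisor layout used earlier on this page):
COPY cron-supervisor.conf /etc/supervisor/conf.d/cron-supervisor.conf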