Logrotate fails with error "unknown option 'I'" on Docker container - ruby-on-rails

I have a Ruby on Rails application running on Docker and would like to rotate my production logs every day. In the spirit of keeping everything self-contained, I would like to keep the log rotation on Docker itself as well. Here's the logrotate configuration on my Docker container:
/etc/logrotate.conf
# rotate log files weekly
weekly
# keep 1 week worth of backlogs
rotate 1
# create new (empty) log files after rotating old ones
create
# use date as a suffix of the rotated file
dateext
# uncomment this if you want your log files compressed
#compress
# packages drop log rotation information into this directory
include /etc/logrotate.d
# system-specific logs may also be configured here.
# Rotate Rails logs
/myapp/log/*.log {
daily
missingok
rotate 7
compress
delaycompress
notifempty
copytruncate
}
Permissions for /etc/logrotate.conf:
-rw-r--r-- 1 root root 656 Jul 13 02:06 /etc/logrotate.conf
After a day, my log file has not been rotated. I know this isn't an issue with my container being recreated because it's been running for 6 days.
When I go to test it, I get an error message on every line:
> logrotate -d /myapp/log/production.log
...
> error: production.log:56257 unknown option 'I' -- ignoring line
> error: production.log:56258 unknown option 'I' -- ignoring line
> error: production.log:56259 unknown option 'I' -- ignoring line
Here are the permissions on my production.log:
-rw-r--r-- 1 root root 10868754 Jul 14 12:42 /myapp/log/production.log
And when I check what's in the log file, the contents are indeed there:
I, [2022-07-13T02:48:23.666904 #1] INFO -- : Raven 3.1.0 ready to catch errors
I, [2022-07-13T03:00:18.483790 #8] INFO -- : Raven 3.1.0 ready to catch errors
I, [2022-07-13T03:00:34.021416 #22] INFO -- : Started GET "/healthcheck" for xxx.xx.xx.xx at 2022-07-13 03:00:34 +0000
I, [2022-07-13T03:00:34.026047 #22] INFO -- : Processing by HealthcheckController#show as HTML
I, [2022-07-13T03:00:34.027170 #29] INFO -- : Started GET "/healthcheck" for xxx.xx.xx.xx at 2022-07-13 03:00:34 +0000
I, [2022-07-13T03:00:34.031331 #29] INFO -- : Processing by HealthcheckController#show as HTML
I, [2022-07-13T03:00:34.038132 #20] INFO -- : Started GET "/healthcheck" for xxx.xx.xx.xx at 2022-07-13 03:00:34 +0000
I, [2022-07-13T03:00:34.042771 #20] INFO -- : Processing by HealthcheckController#show as HTML
I, [2022-07-13T03:00:34.040177 #20] INFO -- : Started GET "/healthcheck" for xxx.xx.xx.xx at 2022-07-13 03:00:34 +0000
I, [2022-07-13T03:00:34.221546 #20] INFO -- : Processing by HealthcheckController#show as HTML
What can I do to get logrotate to rotate my logs?

Thanks to @β.εηοιτ.βε, I found out that I needed to specify the config file, not the log file, in the logrotate arguments when debugging it, like this:
logrotate -d /etc/logrotate.conf
That revealed there was nothing wrong with my configuration. Then I forced a log rotation with:
logrotate -f /etc/logrotate.conf
And it worked! As for why my log file was not rotated after a day, this SO post suggested that cron may not be running in the container, which I found out to be true in my case:
> service cron status
[FAIL] cron is not running ... failed!
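That makes sense: logrotate is not a daemon, it only rotates when something invokes it, and on Debian/Ubuntu-based images that is normally a daily cron job roughly like the sketch below (exact contents vary by distribution):
/etc/cron.daily/logrotate (simplified sketch of the stock Debian job)
#!/bin/sh
# if the cron daemon is not running inside the container, this never fires and nothing rotates
/usr/sbin/logrotate /etc/logrotate.conf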
I updated my Dockerfile's entrypoint to start cron. So it looks like this now:
Dockerfile
...
CMD /bin/bash
ENTRYPOINT [ "./docker/web-entrypoint.sh" ]
docker/web-entrypoint.sh
#!/bin/sh
set -e
if [ -f tmp/pids/server.pid ]; then
rm tmp/pids/server.pid
fi
service cron start # New line added: make sure cron is running so logrotate gets triggered
bundle exec rails s -b 0.0.0.0 -p 3000
Now it's all working!
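To double-check from the host, something like the following should show cron running and produce a rotated file (the container name myapp_web is just a placeholder):
docker exec myapp_web service cron status
docker exec myapp_web logrotate -f /etc/logrotate.conf
docker exec myapp_web ls /myapp/log/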

Related

ElasticBeanstalk Ruby PostDeploy Script Mission Impossible

We have recently updated our ruby/elasticbeanstalk platform to AWS Linux 2 / Ruby (Ruby 2.7 running on 64bit Amazon Linux 2/3.2.0).
Part of our Ruby deployment is a delayed_job (daemon gem).
After many attempts to get a bash script in the .platform/hooks/postdeploy/ folder to run, I have officially declared I am stuck. Here is the error from eb-engine.log:
2020/12/08 04:18:44.162454 [INFO] Running platform hook: .platform/hooks/postdeploy/restart_delayed_job.sh
2020/12/08 04:18:44.191301 [ERROR] An error occurred during execution of command [app-deploy] - [RunAppDeployPostDeployHooks]. Stop running the command. Error: Command .platform/hooks/postdeploy/restart_delayed_job.sh failed with error exit status 127
2020/12/08 04:18:44.191327 [INFO] Executing cleanup logic
2020/12/08 04:18:44.191448 [INFO] CommandService Response: {"status":"FAILURE","api_version":"1.0","results":[{"status":"FAILURE","msg":"Engine execution has encountered an error.","returncode":1,"events":[{"msg":"Instance deployment failed. For details, see 'eb-engine.log'.","timestamp":1607401124,"severity":"ERROR"}]}]}
Here is one of many scripts I have attempted:
#!/bin/bash
#Using similar syntax as the appdeploy pre hooks that is managed by AWS
exec 3>&1 4>&2
trap 'exec 2>&4 1>&3' 0 1 2 3
exec 1>delayed_job_err.out 2>&1
# Loading environment data
# source /etc/profile.d/sh.local #created from other .ebextension file
EB_APP_USER=$(/opt/elasticbeanstalk/bin/get-config platformconfig -k AppUser)
EB_APP_CURRENT_DIR=$(/opt/elasticbeanstalk/bin/get-config platformconfig -k AppDeployDir)
#EB_APP_PIDS_DIR=/home/webapp/pids
/opt/elasticbeanstalk/bin/get-config environment | jq -r 'to_entries | .[] | "export \(.key)=\"\(.value)\""' > /tmp/envvars
source /tmp/envvars
cd /var/app
cd $EB_APP_CURRENT_DIR
su -s /bin/bash -c "bin/delayed_job restart" $EB_APP_USER
Here is the delayed_job file:
#!/usr/bin/env ruby
require File.expand_path(File.join(File.dirname(__FILE__), '..', 'config', 'environment'))
require 'delayed/command'
Delayed::Command.new(ARGV).daemonize
As you can see I'm doing my best to load up the env variables. The delayed_job seems to run just fine as root from within the EB Linux 2 host with the env vars loaded.
total 12
-rwxrwxr-x 1 webapp webapp 179 Dec 8 04:15 001_load_envs.sh
-rw-r--r-- 1 root root 251 Dec 8 04:46 delayed_job_err.out
-rwxrwxr-x 1 webapp webapp 1144 Dec 8 04:15 restart_delayed_job.sh
[root@ip-172-16-100-178 postdeploy]# cat delayed_job_err.out
/var/app/current/vendor/bundle/ruby/2.7.0/gems/json-1.8.6/lib/json/common.rb:155: warning: Using the last argument as keyword parameters is deprecated
delayed_job: warning: no instances running. Starting...
delayed_job: process with pid 5292 started.
Any help would be appreciated.
I am also using Elastic Beanstalk on Amazon Linux 2.
I am using resque, which needs a restart post-deploy. The following is my postdeploy hook, which restarts the resque workers:
.platform/hooks/postdeploy/0020_restart_resque_workers.sh
#!/usr/bin/env bash
. /opt/elasticbeanstalk/deployment/env
cd /var/app/current/
su -c "RAILS_ENV=production bundle exec rake resque:restart_workers" webapp ||
echo "resque workers restarted."
true
Notice the environment variable setup. It simply sources /opt/elasticbeanstalk/deployment/env, which gives you the environment variables.
Hopefully you can reuse the script above by simply replacing the command so it restarts delayed_job instead of the resque workers, as sketched below.
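For example, a delayed_job variant of the same hook might look roughly like this (an untested sketch: it reuses the env sourcing from the resque hook above, and the webapp user and bin/delayed_job path from the question):
.platform/hooks/postdeploy/restart_delayed_job.sh
#!/usr/bin/env bash
# load the Elastic Beanstalk environment variables, as in the resque hook above
. /opt/elasticbeanstalk/deployment/env
cd /var/app/current/
# run the restart as the webapp user so delayed_job sees the app's environment
su -s /bin/bash -c "RAILS_ENV=production bin/delayed_job restart" webapp ||
  echo "delayed_job restart failed."
true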

How can you fix a 'port already in use' error while trying to run a Rails + React app locally on a Mac?

I am working on a sample react/rails app based on this.
It was working fine for a few days before this issue arose, and I can't figure out what caused it or how to fix it.
I get this behavior for any port I try to run the web server on.
This kind of thing lists no processes to kill: lsof -nP -iTCP:3000| grep LISTEN
This also shows no results: lsof -i tcp:3000
It seems like react starts fine (on 3000), then rails starts (on 3001) then there is a collision of some kind causing it to shut down.
Here is the Procfile.dev I am using:
web: PORT=3000 yarn --cwd client run start
api: PORT=3001 bundle exec rails s
The react app lives in the /client directory within the rails app.
Here is the proxy line from the react app's package.json: "proxy": "http://localhost:3001/",
Here is the terminal output:
$ bin/rake start
Running via Spring preloader in process 41869
[OKAY] Loaded ENV .env File as KEY=VALUE Format
12:38:45 PM web.1 | yarn run v1.22.4
12:38:45 PM web.1 | $ react-scripts start
12:38:46 PM api.1 | => Booting Puma
12:38:46 PM api.1 | => Rails 6.0.3.1 application starting in development
12:38:46 PM api.1 | => Run `rails server --help` for more startup options
12:38:46 PM web.1 | Something is already running on port 3000.
12:38:46 PM web.1 | Done in 1.06s.
[DONE] Killing all processes with signal SIGINT
12:38:46 PM web.1 Exited Successfully
12:38:46 PM api.1 | Exiting
12:38:46 PM api.1 Exited Successfully
The rake task (lib/tasks/start.rake) is:
namespace :start do
task :development do
exec 'heroku local -f Procfile.dev'
end
end
desc 'Start development server'
task :start => 'start:development'
Thanks for taking a look!
Run export PORT=portNumber (replacing portNumber with a free port; no quotes and no spaces around the =) just before you run npm start.
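If you would rather hunt down whatever is holding the port first, a rough sketch is below; the Spring and Puma guesses are assumptions about what typically lingers in a Rails setup like this, not a guaranteed fix:
spring stop 2>/dev/null     # stop Rails' Spring preloader if it is still alive
pkill -f puma 2>/dev/null   # kill any orphaned Puma server from a previous run
export PORT=3002            # or just move to a port you know is free
npm start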
I ended up just wiping the repo and cloning the older version. Now everything works fine; I just lost about a day of work. I think that is good enough for me.

opensipsctl start gives an error: opensips.pid does not exist

When I run the opensipsctl start command to start OpenSIPS, I get this error:
ERROR: PID file /var/run/opensips.pid does not exist -- OpenSIPS start failed
So please help me solve it.
Open up opensipsctl; it includes the file opensipsctlrc, which defines $PID_FILE as /var/run/opensips.pid.
Then in opensipsctl, when you run start, one of the checks is:
if [ ! -s $PID_FILE ] ; then
echo
merr "PID file $PID_FILE does not exist -- OpenSIPS start failed"
exit 1
fi
Which is saying: if the check "/var/run/opensips.pid exists and is bigger than 0 bytes" fails, then echo out the above error.
This means the file isn't being created.
If you look just above that line, it does:
if [ $SYSLOG = 1 ] ; then
$OSIPSBIN -P $PID_FILE $STARTOPTIONS 1>/dev/null 2>/dev/null
else
$OSIPSBIN -P $PID_FILE -E $STARTOPTIONS
fi
Which is where opensips actually starts. I would suggest adding the following to your opensips.cfg if you haven't already:
# Logging
debug=6
log_stderror=no
log_facility=LOG_LOCAL0
Now everything will be logged to /var/log/syslog on boot.
Try booting again, then look at that log for information about what happened.
Another thing to check is whether the user you're running opensips as has permission to write to the directory it's trying to create the pid file in; see the sketch below.
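A quick way to test that is something like the following; the opensips user name is an assumption, so substitute whatever user the daemon actually runs as:
ls -ld /var/run
sudo -u opensips touch /var/run/opensips.pid && echo "writable" || echo "not writable"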
I had the same error & it was driving me mad as well. I managed to trace it down to one of two things - I had both!
1/ A misconfiguration in the OpenSIPS config file. journalctl -xe should be able to tell you what the error is
2/ Something else is listening on the port that you are trying to listen on
For 2, if you are on Ubuntu, you can try the command below to see if anything is already listening on that port:
lsof -i :5060
I was able to see the logs and fix the issue with the steps below:
Set log_level=4 in opensips.cfg to view debug logs in /var/log/syslog
The debug option is deprecated in version 2.4 and higher.
You can refer here for the different log levels.
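For reference, the relevant opensips.cfg lines on 2.4+ would look roughly like this (log_level replaces the older debug option; the facility line follows the earlier answer above):
log_level=4
log_stderror=no
log_facility=LOG_LOCAL0
Then restart opensips and watch /var/log/syslog while it boots.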

Supervisord as Windows Service on Cygwin

I am attempting to run Celery as a Windows service using Supervisord. I followed the configuration laid out on the Celery site and here. I have set up a virtual environment to run supervisord through Cygwin. I have highlighted the lines I think are most important (with **). It appears supervisord and RabbitMQ are working. The problem is with Celery.
I setup the service with the commands:
$ cygrunsrv --install supervisord --path /usr/bin/python --args "/usr/bin/supervisord -n -c /usr/etc/supervisord.conf"
$ supervisord
UPDATED: I now have the following in my supervisord.log file:
2014-08-07 12:46:40,676 INFO exited: celery (exit status 1; not expected)
2014-08-07 12:47:07,187 INFO Increased RLIMIT_NOFILE limit to 1024
2014-08-07 12:47:07,238 INFO RPC interface 'supervisor' initialized
2014-08-07 12:47:07,251 INFO daemonizing the supervisord process
2014-08-07 12:47:07,253 INFO supervisord started with pid 7508
2014-08-07 12:47:08,272 INFO spawned: 'celery' with pid 8056
**2014-08-07 12:47:08,833 INFO success: celery entered RUNNING state, process has stayed up for > than 0 seconds (startsecs)**
The config file is:
[inet_http_server] ; inet (TCP) server disabled by default
port=127.0.0.1:8072 ; (ip_address:port specifier, *:port for all iface)
username = user
password = 123
[supervisord]
logfile= /home/HBA/venv/logFiles/supervisord.log ; (main log file;default $CWD/supervisord.log)
logfile_maxbytes=50MB ; (max main logfile bytes b4 rotation;default 50MB)
logfile_backups=10 ; (num of main logfile rotation backups;default 10)
loglevel=info ; (log level;default info; others: debug,warn,trace)
pidfile=/tmp/supervisord.pid ; (supervisord pidfile;default supervisord.pid)
nodaemon=false ; (start in foreground if true;default false)
minfds=1024 ; (min. avail startup file descriptors;default 1024)
minprocs=200 ; (min. avail process descriptors;default 200)
;user=HBA ; (default is current user, required if root)
childlogdir=/tmp ; ('AUTO' child log dir, default $TEMP)
[rpcinterface:supervisor]
supervisor.rpcinterface_factory = supervisor.rpcinterface:make_main_rpcinterface
[supervisorctl]
serverurl=http://127.0.0.1:8072 ; use an http:// url to specify an inet socket
[program:celery]
command= celery worker -A runLogProject --loglevel=INFO ; the program (relative uses PATH, can take args)
directory= /home/HBA/venv/runLogProject
environment=PATH="/home/HBA/venv/;/home/HBA/venv/Scripts/"
numprocs=1
stdout_logfile= /home/HBA/venv/logFiles/%(program_name)s/worker.log ; stdout log path, NONE for none; default AUTO
stderr_logfile= /home/HBA/venv/logFiles/%(program_name)s/worker.log ; stderr log path, NONE for none; default AUTO
autostart=true ; start at supervisord start (default: true)
autorestart=true ; whether/when to restart (default: unexpected)
startsecs=0
stopwaitsecs=1000
killasgroup=true
My celery log file gives me:
**[2014-08-07 19:46:40,584: ERROR/MainProcess] Process 'Worker-4' pid:12284 exited with 'signal -1'
[2014-08-07 19:46:40,584: ERROR/MainProcess] Process 'Worker-3' pid:4432 exited with 'signal -1'
[2014-08-07 19:46:40,584: ERROR/MainProcess] Process 'Worker-2' pid:9120 exited with 'signal -1'
[2014-08-07 19:46:40,584: ERROR/MainProcess] Process 'Worker-1' pid:6280 exited with 'signal -1'**
C:\Python27\lib\site-packages\celery\apps\worker.py:161: CDeprecationWarning:
Starting from version 3.2 Celery will refuse to accept pickle by default.
The pickle serializer is a security concern as it may give attackers
the ability to execute any command. It's important to secure
your broker from unauthorized access when using pickle, so we think
that enabling pickle should require a deliberate action and not be
the default choice.
If you depend on pickle then you should set a setting to disable this
warning and to be sure that everything will continue working
when you upgrade to Celery 3.2::
CELERY_ACCEPT_CONTENT = ['pickle', 'json', 'msgpack', 'yaml']
You must only enable the serializers that you will actually use.
warnings.warn(CDeprecationWarning(W_PICKLE_DEPRECATED))
[2014-08-07 19:47:08,822: WARNING/MainProcess] C:\Python27\lib\site-packages\celery\apps\worker.py:161: CDeprecationWarning:
Starting from version 3.2 Celery will refuse to accept pickle by default.
The pickle serializer is a security concern as it may give attackers
the ability to execute any command. It's important to secure
your broker from unauthorized access when using pickle, so we think
that enabling pickle should require a deliberate action and not be
the default choice.
If you depend on pickle then you should set a setting to disable this
warning and to be sure that everything will continue working
when you upgrade to Celery 3.2::
CELERY_ACCEPT_CONTENT = ['pickle', 'json', 'msgpack', 'yaml']
You must only enable the serializers that you will actually use.
warnings.warn(CDeprecationWarning(W_PICKLE_DEPRECATED))
**[2014-08-07 19:47:08,944: INFO/MainProcess] Connected to amqp://guest:**@127.0.0.1:5672//
[2014-08-07 19:47:08,954: INFO/MainProcess] mingle: searching for neighbors
[2014-08-07 19:47:09,963: INFO/MainProcess] mingle: all alone**
C:\Python27\lib\site-packages\celery\fixups\django.py:236: UserWarning: Using settings.DEBUG leads to a memory leak, never use this setting in production environments!
warnings.warn('Using settings.DEBUG leads to a memory leak, never '
[2014-08-07 19:47:09,982: WARNING/MainProcess] C:\Python27\lib\site-packages\celery\fixups\django.py:236: UserWarning: Using settings.DEBUG leads to a memory leak, never use this setting in production environments!
warnings.warn('Using settings.DEBUG leads to a memory leak, never '
[2014-08-07 19:47:09,982: WARNING/MainProcess] celery@CORONADO ready.
I solved my issue using the following command: /home/HBA/venv/Scripts/celery worker -A runLogProject --loglevel=INFO
My biggest issue was an unfamiliarity with virtual environments. I needed to make sure the files were in the correct folders within the venv.
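In supervisord terms, that probably just means pointing command at the venv's celery binary instead of relying on PATH, e.g. (a sketch reusing the paths from the config and command above):
[program:celery]
command=/home/HBA/venv/Scripts/celery worker -A runLogProject --loglevel=INFO
directory=/home/HBA/venv/runLogProject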

PhantomJS failed to load URL on Jetty running on Jenkins

I want to use the Siesta testing framework with PhantomJS on a local server, and locally there is no problem. I followed http://www.bryntum.com/forum/viewtopic.php?f=20&t=3068 and on my machine there was nothing to complain about, so I want to combine it with Jenkins.
But when using Jenkins, an error 403 appears.
What I do:
- Copy the files of my project (incl. the framework) into the webapps folder of Jetty
- Start the Jetty server (so far no problems)
- Use the PhantomJS of the framework on my localhost:port/project/index.html
And there my problem starts:
Failed to load URL: localhost:port/project/index.html (Status 403)
I searched for some results but didn't find anything that solves this problem.
Every hint is welcome. Thanks.
To see what I've done:
My Jenkins Shell Script
JETTY="jetty-distribution-9.2.0.v20140526"
JETTYWEB="$JETTY/webapps"
DIR="$WORKSPACE/$JETTYWEB/myProject/src/test"
PHANTOM="$DIR/Siesta_Framework/bin"
rm -r "$JETTYWEB/myProject/"
mkdir "$JETTYWEB/myProject/"
cp -pr "src/" "$JETTYWEB/myProject/"
chmod u+x -R $JETTYWEB/
cd $WORKSPACE/$JETTY
# Start the server
java -DSTOP.PORT=11183 -jar start.jar -DSTOP.KEY=tadam &
sleep 5
#jenkins "$DIR/browse-autmation.html?phantom=true&enableCodeCoverage=false&hasPreviousReport=false&page=0
cd $PHANTOM
#curl http://localhost:11182/myProject/src/test/browse-automation.html
./phantomjs "http://127.0.0.1:11182/myProject/src/test/browse-automation.html"
#"http://.../ci/job/test-phatomJS/ws/src/test/browse-automation.html?phantom=true&enableCodeCoverage=false&hasPreviousReport=false&page=0"
#curl http://127.0.0.1:11182/myProject/src/test/Siesta_Framework/bin/phantomjs
sleep 15
# Stop the server -DSTOP.KEY=tadam
cd $WORKSPACE/$JETTY
java -DSTOP.PORT=11183 -DSTOP.KEY=tadam -jar start.jar --stop
And the result was:
[EnvInject] - Loading node environment variables.
Building remotely on ja_lin01 in workspace /var/opt/coinop/data/workspace/test-phatomJS
Fetching changes from the remote Git repository
Fetching upstream changes from gitlab@moso-ci-srv.novalocal:b.rohn/myProject.git
Checking out Revision a056b4ac6a7b47a4e77f3f80c5b7cbc51167cefc (origin/master)
[test-phatomJS] $ /bin/bash -xe /tmp/hudson8419984949815797813.sh
+ JETTY=jetty-distribution-9.2.0.v20140526
+ JETTYWEB=jetty-distribution-9.2.0.v20140526/webapps
+ DIR=/var/opt/coinop/data/workspace/test-phatomJS/jetty-distribution-9.2.0.v20140526/webapps/myProject/src/test
+ PHANTOM=/var/opt/coinop/data/workspace/test-phatomJS/jetty-distribution-9.2.0.v20140526/webapps/myProject/src/test/Siesta_Framework/bin
+ rm -r jetty-distribution-9.2.0.v20140526/webapps/myProject/
+ mkdir jetty-distribution-9.2.0.v20140526/webapps/myProject/
+ cp -pr src/ jetty-distribution-9.2.0.v20140526/webapps/myProject/
+ chmod u+x -R jetty-distribution-9.2.0.v20140526/webapps/
+ cd /var/opt/coinop/data/workspace/test-phatomJS/jetty-distribution-9.2.0.v20140526
+ sleep 5
+ java -DSTOP.PORT=11183 -jar start.jar -DSTOP.KEY=tadam
WARNING: System properties and/or JVM args set. Consider using --dry-run or --exec
2014-07-01 15:37:10.895:INFO::main: Logging initialized #1014ms
2014-07-01 15:37:12.451:INFO:oejs.Server:main: jetty-9.2.0.v20140526
2014-07-01 15:37:12.480:INFO:oejdp.ScanningAppProvider:main: Deployment monitor [file:/data/coinop/data/workspace/test-phatomJS/jetty-distribution-9.2.0.v20140526/webapps/] at interval 1
2014-07-01 15:37:13.232:INFO:oejsh.ContextHandler:main: Started o.e.j.w.WebAppContext@57cd102a{/myProject,file:/data/coinop/data/workspace/test-phatomJS/jetty-distribution-9.2.0.v20140526/webapps/myProject/,AVAILABLE}{/myProject}
2014-07-01 15:37:13.255:INFO:oejs.ServerConnector:main: Started ServerConnector@6d622548{HTTP/1.1}{0.0.0.0:11182}
2014-07-01 15:37:13.255:INFO:oejs.Server:main: Started #3388ms
+ cd /var/opt/coinop/data/workspace/test-phatomJS/jetty-distribution-9.2.0.v20140526/webapps/myProject/src/test/Siesta_Framework/bin
+ ./phantomjs http://127.0.0.1:11182/myProject/src/test/browse-automation.html
/var/opt/coinop/data/workspace/test-phatomJS/jetty-distribution-9.2.0.v20140526/webapps/myProject/src/test/Siesta_Framework/bin
Launching PhantomJS 1.6.0 at http://127.0.0.1:11182/myProject/src/test/browse-automation.html
Failed to load URL: http://127.0.0.1:11182/myProject/src/test/browse-automation.html?phantom=true&enableCodeCoverage=false&hasPreviousReport=false&page=0(status: 403)
Process leaked file descriptors. See http://wiki.jenkins-ci.org/display/JENKINS/Spawning+processes+from+build for more information
Build step 'Execute shell' marked build as failure
2014-07-01 15:37:24.931:INFO:oejs.ServerConnector:Thread-0: Stopped ServerConnector@6d622548{HTTP/1.1}{0.0.0.0:11182}
Finished: FAILURE
After long searching, I noticed that the phantomjs call doesn't have all the information it needs: it needs the directory itself. So my resolution was to install phantomjs on the Linux server and use that phantomjs, passing it the directory and the phantom launcher script of the framework. Now it works.
My actual call is:
./phantomjs "$DIR/phantomjs-launcher.js" $DIR http://127.0.0.1:11182/myProject/browse-automation.html
Situation: I cd into my phantomjs directory on the Linux machine and give it the "DIR" of my framework/bin. A sketch of the adjusted Jenkins step is below.
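Applied to the Jenkins script above, the relevant step becomes roughly the following. This is a sketch only: it assumes phantomjs is installed on the build node and on PATH, and that DIR points at the framework's bin directory (where phantomjs-launcher.js lives), following my call above.
# use the node's own phantomjs, but keep Siesta's launcher script from the workspace
DIR="$WORKSPACE/$JETTYWEB/myProject/src/test/Siesta_Framework/bin"
phantomjs "$DIR/phantomjs-launcher.js" "$DIR" "http://127.0.0.1:11182/myProject/browse-automation.html"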
