Setting up a cron job to launch Docker containers on EC2

I am trying to set up a regular task on an Amazon EC2 instance that will launch a few Docker containers.
I've created a startup_service.sh script in my home directory:
cd ~
docker-compose pull && docker-compose up
In that very same home directory, I have a docker-compose.yml file that defines my containers and image.
I have tested this script with sh startup_service.sh and it works as expected.
I've added execute permission to startup_service.sh and created a cron job with crontab -e:
50 11 * * * /usr/bin/sh /home/ec2-user/startup_service.sh
However, it is not working (running docker ps doesn't show any containers being created).
Checking the cron logs with sudo grep -C 3 "startup" /var/log/cron, it seems the task is actually executed:
Feb 27 11:42:01 ip-172-31-27-90 crond[3188]: (ec2-user) RELOAD (/var/spool/cron/ec2-user)
Feb 27 11:44:00 ip-172-31-27-90 crontab[18011]: (ec2-user) LIST (ec2-user)
Feb 27 11:48:54 ip-172-31-27-90 crontab[18064]: (ec2-user) LIST (ec2-user)
Feb 27 11:50:01 ip-172-31-27-90 CROND[18153]: (ec2-user) CMD (/usr/bin/sh /home/ec2-user/startup_service.sh)
Feb 27 11:50:02 ip-172-31-27-90 CROND[18156]: (root) CMD (/usr/lib64/sa/sa1 1 1)
Feb 27 11:53:05 ip-172-31-27-90 crontab[18232]: (ec2-user) LIST (ec2-user)
Feb 27 12:00:01 ip-172-31-27-90 CROND[18343]: (root) CMD (/usr/lib64/sa/sa1 1 1)
How can I correctly set up this cron job?
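One thing I have not tried yet (a sketch only; the /usr/local/bin/docker-compose location and the log path are assumptions): cron runs with a minimal PATH and no terminal, and docker-compose up without -d stays attached in the foreground, so capturing the job's output and running compose detached with absolute paths might reveal the failure:
# Sketch: absolute paths, detached compose, and all output captured to a log file.
50 11 * * * cd /home/ec2-user && /usr/local/bin/docker-compose pull && /usr/local/bin/docker-compose up -d >> /home/ec2-user/cron-compose.log 2>&1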

Related

Airflow DockerOperator gives file not found error / permission denied error

I have a locally built Docker image (containing a Python script) that I can run locally as follows:
docker run --rm -v "$PWD"/output:/app_base/output eod_price:latest \
    python eod_price.py ACN -b 7 -f out.csv
This app saves its output to the file /Users/me/Documents/Python/GitHub/docker-learn/docker007/output/out.csv
$ ls -l
-rw-r--r--@ 1 me staff 647 23 Jul 22:47 Dockerfile
-rw-r--r-- 1 me staff 2628 24 Jul 06:37 Readme.md
-rw-r--r--@ 1 me staff 10222 12 Jul 15:54 airflow-docker-compose.yaml
drwxr-xr-x 3 me staff 96 23 Jul 22:56 app
drwxr-xr-x@ 6 me staff 192 23 Jul 22:26 dags
drwxr-xr-x@ 10 me staff 320 23 Jul 13:39 logs
-rw-r--r-- 1 me staff 646 3 Jul 09:59 myapp-docker-compose.yaml
drwxr-xr-x 3 me staff 96 23 Jul 22:58 output
drwxr-xr-x@ 2 me staff 64 11 Jul 22:41 plugins
-rw-r--r--@ 1 me staff 119 23 Jul 21:26 requirements.txt
In the airflow-docker-compose file, I have added additional volumes for the input and output directories:
volumes:
  - ./dags:/opt/airflow/dags
  - ./logs:/opt/airflow/logs
  - ./plugins:/opt/airflow/plugins
  - ./input:/opt/airflow/input
  - ./output:/opt/airflow/output
In the dags directory, I am trying to run this task in Apache Airflow:
from airflow.providers.docker.operators.docker import DockerOperator
from docker.types import Mount

eod_price = DockerOperator(
    task_id='get_eod_price',
    image='eod_price',
    api_version='auto',
    command='python eod_price.py ACN -b 7 -f out.csv',
    container_name='eod_price-container',
    auto_remove=True,
    mounts=[
        Mount(source='/opt/airflow/output',
              target='/app_base/output',
              type='bind'),
    ],
    mount_tmp_dir=False,
    docker_url='unix://var/run/docker.sock',
    network_mode='bridge'
)
However, I get a FileNotFoundError: [Errno 2] No such file or directory error.
Other things I tried in the DockerOperator are:
adding and removing host_tmp_dir
removing mount_tmp_dir
I followed another similar issue (link) and added an additional volume under x-airflow-common in the airflow docker-compose file:
- /var/run/docker.sock:/var/run/docker.sock
However, that changed the error to PermissionError: [Errno 13] Permission denied.
I am on macOS.
Any tips on how to resolve this?
What is the easiest way to debug Airflow problems?
What I do not get is whether this error is due to wrong mounting of the volume in the DockerOperator or to some other issue. I cannot tell whether Airflow failed to even find the image (eod_price) or whether the error occurred while executing the image.
Edit:
I followed yet another solution at link. That solution, chmod 777 /var/run/docker.sock, also did not work. I also tried the TCP wrapper solution, but that produced a new error, so I reverted.
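Something I could still check (the airflow-worker service name is an assumption): whether the Docker socket is even visible and usable from inside the Airflow container.
# Is the socket mounted into the service at all?
docker compose -f airflow-docker-compose.yaml exec airflow-worker ls -l /var/run/docker.sock
# Can the Airflow user actually talk to the Docker API through it?
docker compose -f airflow-docker-compose.yaml exec airflow-worker curl --unix-socket /var/run/docker.sock http://localhost/version
If the first command fails, the socket was not mounted into that service; if only the second fails, it is a permissions problem on the socket.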

Permission denied error when starting Elasticsearch as Singularity container

I am trying to run a single-node Elasticsearch instance on an HPC cluster. To do this, I am converting the Elasticsearch Docker image into a Singularity container. When I launch the container itself I get the following error:
$ singularity exec --overlay overlay.img elastic.sif /usr/share/elasticsearch/bin/elasticsearch
Could not create auto-configuration directory
Exception in thread "main" java.lang.RuntimeException: starting java failed with [1]
output:
[0.000s][error][logging] Error opening log file 'logs/gc.log': Permission denied
[0.000s][error][logging] Initialization of output 'file=logs/gc.log' using options 'filecount=32,filesize=64m' failed.
error:
Invalid -Xlog option '-Xlog:gc*,gc+age=trace,safepoint:file=logs/gc.log:utctime,pid,tags:filecount=32,filesize=64m', see error log for details.
Error: Could not create the Java Virtual Machine.
Error: A fatal exception has occurred. Program will exit.
at org.elasticsearch.server.cli.JvmOption.flagsFinal(JvmOption.java:113)
at org.elasticsearch.server.cli.JvmOption.findFinalOptions(JvmOption.java:80)
at org.elasticsearch.server.cli.MachineDependentHeap.determineHeapSettings(MachineDependentHeap.java:59)
at org.elasticsearch.server.cli.JvmOptionsParser.jvmOptions(JvmOptionsParser.java:132)
at org.elasticsearch.server.cli.JvmOptionsParser.determineJvmOptions(JvmOptionsParser.java:90)
at org.elasticsearch.server.cli.ServerProcess.createProcess(ServerProcess.java:211)
at org.elasticsearch.server.cli.ServerProcess.start(ServerProcess.java:106)
at org.elasticsearch.server.cli.ServerProcess.start(ServerProcess.java:89)
at org.elasticsearch.server.cli.ServerCli.startServer(ServerCli.java:213)
at org.elasticsearch.server.cli.ServerCli.execute(ServerCli.java:90)
at org.elasticsearch.common.cli.EnvironmentAwareCommand.execute(EnvironmentAwareCommand.java:54)
at org.elasticsearch.cli.Command.mainWithoutErrorHandling(Command.java:85)
at org.elasticsearch.cli.Command.main(Command.java:50)
at org.elasticsearch.launcher.CliToolLauncher.main(CliToolLauncher.java:64)
If I understand correctly, Elasticsearch is trying to create a log file in /var/log/elasticsearch but does not have the correct permissions. So I created the following recipe, which creates the folders and sets the permissions such that any process can write to the log directory:
Bootstrap: docker
From: elasticsearch:8.3.1

%files
    elasticsearch.yml /usr/share/elasticsearch/config/

%post
    mkdir -p /var/log/elasticsearch
    chown -R elasticsearch:elasticsearch /var/log/elasticsearch
    chmod -R 777 /var/log/elasticsearch
    mkdir -p /var/data/elasticsearch
    chown -R elasticsearch:elasticsearch /var/data/elasticsearch
    chmod -R 777 /var/data/elasticsearch
The elasticsearch.yml file has the following content:
cluster.name: "docker-cluster"
network.host: 0.0.0.0
discovery.type: single-node
ingest.geoip.downloader.enabled: false
After building this recipe, the directory /var/log/elasticsearch seems to be created correctly:
$ singularity exec elastic.sif ls -alh /var/log/
total 569K
drwxr-xr-x 4 root root 162 Jul 8 14:43 .
drwxr-xr-x 12 root root 172 Jul 8 14:43 ..
-rw-r--r-- 1 root root 7.7K Jun 29 17:29 alternatives.log
drwxr-xr-x 2 root root 69 Jun 29 17:29 apt
-rw-r--r-- 1 root root 58K May 31 11:43 bootstrap.log
-rw-rw---- 1 root utmp 0 May 31 11:43 btmp
-rw-r--r-- 1 root root 187K Jun 29 17:30 dpkg.log
drwxrwxrwx 2 elasticsearch elasticsearch 3 Jul 8 14:43 elasticsearch
-rw-r--r-- 1 root root 32K Jun 29 17:30 faillog
-rw-rw-r-- 1 root utmp 286K Jun 29 17:30 lastlog
-rw-rw-r-- 1 root utmp 0 May 31 11:43 wtmp
But when I launch the container I get the permission denied error listed above.
What is missing here? What permissions is Elasticsearch expecting?
The following workaround seems to be working for me now:
When launching the Singularity container, the elasticsearch process is executed inside the container with the same UID as my own (the user launching the container with singularity exec). The Elasticsearch image is configured to run Elasticsearch as a separate user, elasticsearch, that exists inside the container. The issue is that Singularity (unlike Docker) runs every process inside the container with my own UID rather than the elasticsearch UID, resulting in the error above.
To work around this, I created a base Ubuntu Singularity image and then installed Elasticsearch into the container following these installation instructions (https://www.elastic.co/guide/en/elasticsearch/reference/current/targz.html). Because the installation was performed with my system user and UID, the entire Elasticsearch installation belongs to my system user rather than to a separate elasticsearch user. I can then launch the Elasticsearch service inside the container.
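A minimal sketch of that workaround (the Ubuntu tag, sandbox name, and install path are assumptions): build a writable Ubuntu sandbox owned by my own user, unpack the Elasticsearch tarball into it, and start it from there.
# Download the tarball on the host (cwd is bound into the container by default).
curl -LO https://artifacts.elastic.co/downloads/elasticsearch/elasticsearch-8.3.1-linux-x86_64.tar.gz
# Build a sandbox image; every file in it is owned by my UID.
singularity build --sandbox es_sandbox/ docker://ubuntu:22.04
# Unpack Elasticsearch into the sandbox.
singularity exec --writable es_sandbox/ tar -C /opt -xzf elasticsearch-8.3.1-linux-x86_64.tar.gz
# Run it; logs and data are writable because everything belongs to my UID.
singularity exec es_sandbox/ /opt/elasticsearch-8.3.1/bin/elasticsearch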

How to run Tomcat 8 and MySQL in single Docker container

I have a Tomcat 8 / MySQL application I want to run in a Docker container. I run Ubuntu 16.04 today in test and production, and I wanted to use the Ubuntu 16.04 "latest" image as the base FROM in my Dockerfile and add Tomcat 8 and MySQL from there.
I know I can get a Tomcat 8 image as my base from https://hub.docker.com/_/tomcat/ but I did not see an Ubuntu base OS for those, and I wanted to stay consistent with Ubuntu. Also, it seemed odd to add MySQL to a Tomcat container.
I worked through this issue and am posting my findings in case it helps others with similar issues.
Short answer: Running multiple services (Tomcat / MySQL) in a single container is not recommended. Yes, there is supervisord, etc., but this is discouraged. There is also baseimage-docker if you are committed to multiple services in one container.
The remainder of this answer shows how I got it working, if you really are determined...
The Tomcat 8 distro version on Ubuntu 16.04 is unfortunately only configured to run as a service (described in detail below). Issues with running a service in a Docker container are well documented in many posts across Stack Exchange (it is discouraged). I was able to get Tomcat 8 working as a service by adding a "tail -f /var/log/tomcat8/catalina.out" to the end of the "service tomcat8 start" command and starting the container with the "--cap-add SYS_PTRACE" option.
CMD service tomcat8 start && tail -f /var/log/tomcat8/catalina.out
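For reference, a matching docker run invocation might look like this (the image name my-tomcat8 is an assumption):
# --cap-add SYS_PTRACE is needed for the service-based startup described above.
docker run --cap-add SYS_PTRACE -p 8080:8080 my-tomcat8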
The recommended way to start tomcat8 is to use the commands in /usr/share/tomcat8/bin. However, the distro version's soft links are incorrect and the server fails to start.
Using either ./catalina.sh run or ./startup.sh produces an error such as this:
SEVERE: Cannot find specified temporary folder at /usr/share/tomcat8/temp
WARNING: Unable to load server configuration from [/usr/share/tomcat8/conf/server.xml]
SEVERE: Cannot start server. Server instance is not configured.
The distro splits tomcat8 across /usr/share/tomcat8 and /var/lib/tomcat8, which separates the bin files (catalina.sh and startup.sh) from the conf and logs soft links in /var/lib/tomcat8. This makes these commands fail.
Files in /usr/share/tomcat8:
root@85d5fe47b66a:/usr/share/tomcat8# ls -la
total 32
drwxr-xr-x 4 root root 4096 Mar 9 22:18 .
drwxr-xr-x 117 root root 4096 Mar 9 23:29 ..
drwxr-xr-x 2 root root 4096 Mar 9 22:18 bin
-rw-r--r-- 1 root root 39 Mar 31 2017 defaults.md5sum
-rw-r--r-- 1 root root 1929 Apr 10 2017 defaults.template
drwxr-xr-x 2 root root 4096 Mar 9 22:18 lib
-rw-r--r-- 1 root root 53 Mar 31 2017 logrotate.md5sum
-rw-r--r-- 1 root root 118 Apr 10 2017 logrotate.template
Files in /var/lib/tomcat8:
root@85d5fe47b66a:/var/lib/tomcat8# ls -la
total 16
drwxr-xr-x 4 root root 4096 Mar 9 22:18 .
drwxr-xr-x 41 root root 4096 Mar 9 23:29 ..
lrwxrwxrwx 1 root root 12 Sep 28 14:43 conf -> /etc/tomcat8
drwxr-xr-x 2 tomcat8 tomcat8 4096 Sep 28 14:42 lib
lrwxrwxrwx 1 root root 17 Sep 28 14:43 logs -> ../../log/tomcat8
drwxrwxr-x 3 tomcat8 tomcat8 4096 Mar 9 22:18 webapps
lrwxrwxrwx 1 root root 19 Sep 28 14:43 work -> ../../cache/tomcat8
Running ./version.sh reveals that both CATALINA_BASE and CATALINA_HOME are set to /usr/share/tomcat8:
Using CATALINA_BASE: /usr/share/tomcat8
Using CATALINA_HOME: /usr/share/tomcat8
Using CATALINA_TMPDIR: /usr/share/tomcat8/temp
Using JRE_HOME: /usr
Using CLASSPATH: /usr/share/tomcat8/bin/bootstrap.jar:/usr/share/tomcat8/bin/tomcat-juli.jar
Server version: Apache Tomcat/8.0.32 (Ubuntu)
Server built: Sep 27 2017 21:23:18 UTC
Server number: 8.0.32.0
OS Name: Linux
OS Version: 4.4.0-116-generic
Architecture: amd64
JVM Version: 1.8.0_161-b12
JVM Vendor: Oracle Corporation
Setting CATALINA_BASE explicitly to /var/lib/tomcat8 inside catalina.sh solved the problem of using ./catalina.sh run to start Tomcat. In the past, I have alternatively added the soft links to conf, logs and work under the /usr/share/tomcat8 directory so it could find those files and start up properly with the catalina.sh run command.
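A sketch of that fix (assuming, rather than editing the script itself, that the distro's catalina.sh honors an externally set CATALINA_BASE, as the stock script does):
# Point CATALINA_BASE at the distro's split location, then run in the foreground.
export CATALINA_BASE=/var/lib/tomcat8
/usr/share/tomcat8/bin/catalina.sh run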
BTW, even though the JRE_HOME is clearly wrong in the version.sh dump above, the service does start correctly (when I append the tail -f command as described earlier). It also starts using catalina.sh run when I manually add the correct CATALINA_BASE variable to catalina.sh. So I spent no time looking into why that listed out incorrectly.
In the end, I realized three things:
Running multiple services (Tomcat / MySQL) in a single container is not recommended. Yes, there is supervisord, etc., but this is discouraged. There is also baseimage-docker if you are committed to multiple services in one container.
Even running a single service in a container is not recommended, but there are documented ways to make it work (which I did for tomcat8 by adding the && tail -f ... to the end of the CMD).
In Ubuntu 16.04 (I did not test other distros), to make tomcat8 run as a command (not a service) you need to either:
a) grab the tar file for Tomcat 8 and install that, since it puts all of the files under one directory and therefore there is no soft link issue; or b) if you insist on using the distro tomcat8 from apt-get, either b.1) modify a version of catalina.sh by adding the CATALINA_BASE and copy it to the proper installation directory, or b.2) add the soft links, as sketched below.
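A sketch of option b.2 (paths taken from the listings above; untested as written):
# Recreate the links so catalina.sh can resolve conf, logs and work via CATALINA_HOME.
ln -s /var/lib/tomcat8/conf /usr/share/tomcat8/conf
ln -s /var/lib/tomcat8/logs /usr/share/tomcat8/logs
ln -s /var/lib/tomcat8/work /usr/share/tomcat8/work
# The temp directory from the SEVERE error above must also exist.
mkdir -p /usr/share/tomcat8/temp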

Scheduling cron inside docker container

I was trying to schedule a cron job inside a Docker-based Logstash application.
The cron job is as follows:
30 10 * * * root logrotate -f /etc/logrotate.d/logstash
The cron job is not executed inside the container, but when I run the command manually it works fine:
# logrotate -f /etc/logrotate.d/logstash
# ls -l /usr/share/logstash/logs/
total 36
-rw-r--r-- 1 logstash logstash 17 Jan 2 10:16 logstash.log
-rw-r--r-- 1 logstash logstash 10701 Jan 2 10:16 logstash.log.1
This might be a duplicate of Cronjobs in Docker container how get them running?
It basically says that you need to make sure that
/etc/init.d/cron start
has been run, i.e. that the cron daemon itself is running inside the container.
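A minimal entrypoint sketch along those lines (the Logstash binary path is an assumption; on Alpine or CentOS base images the daemon is typically crond instead):
#!/bin/sh
# Start the cron daemon first, then hand PID 1 to the main process.
/etc/init.d/cron start
exec /usr/share/logstash/bin/logstash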

Two Users in Docker

I am trying to use two users with Docker for my Spring Boot application running in an OpenJDK/Alpine base image.
Here is the scenario that I am trying to support. Maybe there is a better way.
I need to provide production support, so I want to attach with exec as devuser:appgroup, but I am not allowed by our security department to see credentials or secrets, so the application should run as appuser:appgroup, which would also own all the application files.
I can build the image with the correct(?) permissions.
/opt/app $ ls -l
total 24552
-r-sr-xr-x 1 appuser appgroup 10632 Jun 27 12:59 app
-r-------- 1 appuser appgroup 25101769 Jun 27 12:59 app.jar
-r-xr-xr-- 1 appuser appgroup 327 Jun 27 12:59 app.sh
-r-------- 1 appuser appgroup 316 Jun 27 12:59 application.yml
-r-sr-xr-x 1 root root 10632 Jun 27 12:59 setup
-r-xr-xr-- 1 root root 152 Jun 27 12:59 setup.sh
The application runs well when I specify USER appuser, but when I connect to the running container using exec I am appuser and I can see the configuration.
The application does not run when I specify USER devuser, but when I connect to the running container using exec I am blocked from viewing the configuration, as I should be.
As you can see from the file permissions, I am trying SUID, but writing a C program to run a shell script seems somewhat of a hack, and it is not working for me. (The last part is probably lack of experience on my part.)
I would appreciate any help,
Thanks.
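One thing I have been looking at (a sketch only; the container name my-app-container is an assumption, and I am not sure our security model allows it): the user for an interactive shell can be chosen at exec time, so the image could keep USER appuser while support staff attach as devuser:
# Attach as the support user without baking a second runtime user into the image.
docker exec -it --user devuser:appgroup my-app-container sh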
