Running a Go script with cron in Docker

I've been trying to run a Go script with cron under an Ubuntu 16.04 Docker image. Here are the files I'm using.
Dockerfile
FROM couchbase
RUN apt-get update
RUN apt-get install gcc make -y
RUN apt-get install golang-1.10 git -y
ADD src/crontab.txt /crontab.txt
ADD src/backup.sh /backup.sh
ADD src/backup.go /backup.go
ADD src/file.txt /file.txt
COPY entry.sh /entry.sh
RUN chmod 755 /backup.sh /entry.sh
RUN /usr/bin/crontab /crontab.txt
RUN apt-get install vim -y
CMD ["/entry.sh"]
entry.sh
#!/bin/sh
/usr/sbin/cron -f -l 8
src/crontab.txt
* * * * * /backup.sh >> /var/log/backup.log
src/backup.sh
#!/bin/sh
chmod 666 /var/log/backup.log
/usr/lib/go-1.10/bin/go run backup.go
backup.go
package main
import (
    "log"
    "os"
)

func init() {
    file, err := os.OpenFile("/var/log/backup.log", os.O_APPEND|os.O_CREATE|os.O_WRONLY, 0666)
    if err != nil {
        log.Fatal(err)
    }
    log.SetOutput(file)
}

func main() {
    log.Println("Writing log")
}
I checked, and the cron task is running every minute. The Go installation is there in the folder and works when I exec into the container, but the backup.go script is not logging anything. When I trigger the script manually, it works though. The image I'm using is based on Ubuntu 16.04, and I want it because I don't have to do a Couchbase installation myself.

You can do this more simply by using a multi-stage build. First use a Go image to compile a standalone executable from your src/backup.go. Then switch to a couchbase image and copy the executable over from the first stage.
Dockerfile:
# use a first-stage image to build the go code;
# we switch to the production image later
FROM golang:1.10 AS build
# for now we only need the go code
COPY src/backup.go backup.go
# build a standalone executable
RUN go build -o /backup backup.go
# switch to a second-stage production image
FROM couchbase
# setup cronjob
COPY src/crontab.txt /crontab.txt
RUN /usr/bin/crontab /crontab.txt
# copy the executable from the first stage
# into the production image
COPY --from=build /backup /backup
CMD ["/usr/sbin/cron", "-f", "-l", "8"]
src/crontab.txt:
* * * * * /backup >> /var/log/backup.log
Build and run like this:
docker build . -t backup
# start in the background
docker run --name backup -d backup
# check if it works
docker exec backup tail -f /var/log/backup.log
On the next minute:
2021/04/09 19:05:01 Writing log

Related

How to use "colcon build" in a Dockerfile?

I am trying to create a Docker image based on ubuntu:20.04 where I want to install ROS2, Ignition Gazebo and the ROS2-ign-bridge with a Dockerfile.
The installation of ROS2 and Ignition works without any issue, but during the bridge installation I need to use colcon. Here's that part of the Dockerfile:
## install ROS2 ignition gazebo bridge
RUN export IGNITION_VERSION=edifice
RUN mkdir -p ros_ign_bridge_ws/src
RUN git clone https://github.com/osrf/ros_ign.git -b foxy ros_ign_bridge_ws/src
WORKDIR ros_ign_bridge_ws
RUN rosdep install -r --from-paths src -i -y --rosdistro foxy
RUN colcon build
RUN source ros_ign_bridge_ws/install/setup.bash
RUN echo "source ros_ign_bridge_ws/install/setup.bash" >> ~/.bashrc
It fails during the colcon build step when I use
docker build -f Dockerfiles/companion_base.Dockerfile -t companion_base .
but when I run the image created up to that step
docker run -it c125a17c2f68 /bin/bash
and then execute colcon build inside the container it works without any issue.
So what is the difference between RUN colcon build and running colcon build inside the container?
The issue was that when you source something in one Docker build step, it isn't available in the next step, because each RUN instruction runs in its own shell. So what I needed to do was the sourcing and the building in the same step:
RUN /bin/bash -c "source /opt/ros/foxy/setup.bash; colcon build"
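Applied to the Dockerfile from the question, the failing part might be rewritten roughly like this (a sketch; it assumes ROS 2 Foxy is installed under /opt/ros/foxy, the same path sourced above):
# resolve dependencies as before
RUN rosdep install -r --from-paths src -i -y --rosdistro foxy
# source the ROS environment and build in the same RUN step, because
# environment changes made by `source` do not survive into the next step
RUN /bin/bash -c "source /opt/ros/foxy/setup.bash && colcon build"
The same applies to the later RUN source ros_ign_bridge_ws/install/setup.bash line: it won't persist into later build steps either, which is why the echo into ~/.bashrc is the part that matters for interactive shells.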

Dockerfile wants to copy shell script to /usr/bin but I'm running Windows

I'm using Docker with Windows 10. The Dockerfile for my app includes the following lines:
# Add a script to be executed every time the container starts.
COPY docker/entrypoint.sh /usr/bin/
RUN chmod +x /usr/bin/entrypoint.sh
The problem is that because the OS is Win 10, there is no /usr/bin/ path--the equivalent I guess would be C:\Program Files. So when I run docker-compose up (in VS Code's Bash terminal), I get the following error:
my_app_name | exec /usr/bin/entrypoint.sh: no such file or directory
my_app_name exited with code 1
Changing the path in the Dockerfile doesn't seem like a good idea, because then Linux users will have the same problem. What is the right way to handle this for compatibility with both Windows and Linux?
EDIT: the entrypoint.sh script is as follows:
#!/bin/bash
set -e
# Remove a potentially pre-existing server.pid for Rails.
rm -f /docker-rails/tmp/pids/server.pid
# Then exec the container's main process (what's set as CMD in the Dockerfile).
exec "$#"
and the entire Dockerfile is:
FROM ruby:2.6.2
RUN apt-get update -qq && apt-get install -y nodejs postgresql-client cron
RUN mkdir /docker-rails
WORKDIR /docker-rails
COPY Gemfile /docker-rails/Gemfile
COPY Gemfile.lock /docker-rails/Gemfile.lock
WORKDIR /docker-rails
RUN bundle install
COPY . /docker-rails
# Add a script to be executed every time the container starts.
COPY docker/entrypoint.sh /usr/bin/
RUN chmod +x /usr/bin/entrypoint.sh
ENTRYPOINT ["entrypoint.sh"]
EXPOSE 3000
# Start the main process.
CMD ["rails", "server", "-b", "0.0.0.0"]

How to extend nginx docker image without getting error systemctl: command not found?

I want to build my own custom Docker image from the nginx image.
I override the ENTRYPOINT of nginx with my own ENTRYPOINT file.
Which brings me to two questions:
I think I lose some commands from nginx by doing so (like exposing the port). Am I right?
If I want to restart nginx I run these commands: nginx -t && systemctl reload nginx, but the output is:
nginx: configuration file /etc/nginx/nginx.conf test is successful
/entrypoint.sh: line 5: systemctl: command not found
How to fix that?
FROM nginx:latest
WORKDIR /
RUN echo "deb http://ftp.debian.org/debian stretch-backports main" >> /etc/apt/sources.list
RUN apt-get -y update && \
apt-get -y install apt-utils && \
apt-get -y upgrade && \
apt-get -y clean
# I ALSO WANT TO INSTALL CERTBOT FOR LATER USE (in my entrypoint file)
RUN apt-get -y install python-certbot-nginx -t stretch-backports
# COPY ./something ./tothisimage
# COPY ./something ./tothisimage
# COPY ./something ./tothisimage
# COPY ./something ./tothisimage
COPY entrypoint.sh /entrypoint.sh
ENTRYPOINT ["bash", "/entrypoint.sh"]
entrypoint.sh
echo "in entrypoint"
# I want to run some commands here...
# After I want to run nginx normally....
nginx -t && systemctl reload nginx
echo "after reload"
This will work using the service command instead:
echo "in entrypoint"
# I want to run some commands here...
# After I want to run nginx normally....
nginx -t && service nginx reload
echo "after reload"
output:
in entrypoint
nginx: the configuration file /etc/nginx/nginx.conf syntax is ok
nginx: configuration file /etc/nginx/nginx.conf test is successful
Restarting nginx: nginx.
after reload
Commands like service and systemctl mostly just don't work in Docker, and you should totally ignore them.
At the point where your entrypoint script is running, it is literally the only thing that is running. That means you don't need to restart nginx, because it hasn't started the first time yet. The standard pattern here is to use the entrypoint script to do some first-time setup; it will be passed the actual command to run as arguments, so you need to tell it to run them.
#!/bin/sh
echo "in entrypoint"
# ... do first-time setup ...
# ...then run the command, nginx or otherwise
exec "$#"
(Try running docker run --rm -it myimage /bin/sh. You will get an interactive shell in a new container, but after this first-time setup has happened.)
The one thing you do lose in your Dockerfile is the default CMD from the base image (setting an ENTRYPOINT resets that). You need to add back that CMD:
COPY entrypoint.sh /entrypoint.sh
RUN chmod +x /entrypoint.sh
ENTRYPOINT ["/entrypoint.sh"]
CMD ["nginx", "-g", "daemon off;"]
You should keep the other settings from the base image, like ENV definitions and EXPOSEd ports.
The "systemctl" command is specific to some SystemD based operating system. But you do not have such a SystemD daemon running on PID 1 - so even if you install those packages it wont work.
You can only check in the nginx.service file which command the "reload" would execute for real. Or have something like the docker-systemctl-replacement script do it for you.
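For reference, nginx can reload its own configuration without systemd; a minimal sketch, assuming nginx is already running as the container's main process:
# test the configuration, then signal the running master process to reload it
nginx -t && nginx -s reload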

Docker build and run with Miniconda environments on Ubuntu host

I am in the process of creating a Docker container which has a Miniconda environment set up with some packages (pip and conda). Dockerfile:
# Use an official Miniconda runtime as a parent image
FROM continuumio/miniconda3
# Create the conda environment.
# RUN conda create -n dev_env Python=3.6
RUN conda update conda -y \
&& conda create -y -n dev_env Python=3.6 pip
ENV PATH /opt/conda/envs/dev_env/bin:$PATH
RUN /bin/bash -c "source activate dev_env" \
&& pip install azure-cli \
&& conda install -y nb_conda
The behavior I want is that when the container is launched, it should automatically switch to the "dev_env" conda environment, but I haven't been able to get this to work. Logs:
dparkar@mymachine:~/src/dev/setupsdk$ docker build .
Sending build context to Docker daemon 2.56kB
Step 1/4 : FROM continuumio/miniconda3
---> 1284db959d5d
Step 2/4 : RUN conda update conda -y && conda create -y -n dev_env Python=3.6 pip
---> Using cache
---> cb2313f4d8a8
Step 3/4 : ENV PATH /opt/conda/envs/dev_env/bin:$PATH
---> Using cache
---> 320d4fd2b964
Step 4/4 : RUN /bin/bash -c "source activate dev_env" && pip install azure-cli && conda install -y nb_conda
---> Using cache
---> 3c0299dfbe57
Successfully built 3c0299dfbe57
dparkar@mymachine:~/src/dev/setupsdk$ docker run -it 3c0299dfbe57
(base) root@3db861098892:/# source activate dev_env
(dev_env) root@3db861098892:/# exit
exit
dparkar@mymachine:~/src/dev/setupsdk$ docker run -it 3c0299dfbe57 source activate dev_env
[FATAL tini (7)] exec source failed: No such file or directory
dparkar@mymachine:~/src/dev/setupsdk$ docker run -it 3c0299dfbe57 /bin/bash source activate dev_env
/bin/bash: source: No such file or directory
dparkar@mymachine:~/src/dev/setupsdk$ docker run -it 3c0299dfbe57 /bin/bash "source activate dev_env"
/bin/bash: source activate dev_env: No such file or directory
dparkar@mymachine:~/src/dev/setupsdk$ docker run -it 3c0299dfbe57 /bin/bash -c "source activate dev_env"
dparkar@mymachine:~/src/dev/setupsdk$
As you can see above, when I am within the container, I can successfully run "source activate dev_env" and the environment switches over. But I want this to happen automatically when the container is launched.
The source activate command also runs in the Dockerfile during build time. Again, I am not sure whether that has any effect either.
You should use the command CMD for anything related to runtime.
Anything typed after RUN will only be run at image creation time, not when you actually run the container.
The shell used to run such commands is closed at the end of the image creation process, making the environment activation non-persistent in that case.
As such, your additional line might look like this:
CMD ["conda activate <your-env-name> && <other commands>"]
where <other commands> are other commands you might need at runtime after the environment activation.
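For the dev_env environment from the question, that might look something like this (python my_script.py is only a placeholder for whatever should run at startup):
CMD ["/bin/bash", "-c", "source activate dev_env && python my_script.py"]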
This Dockerfile worked for me.
# start with miniconda image
FROM continuumio/miniconda3
# setting the working directory
WORKDIR /usr/src/app
# Copy the file from your host to your current location in container
COPY . /usr/src/app
# Create an environment from the requirements.yml file; its name is set in that file, in this case "myenv"
RUN conda env create --file requirements.yml
# Run subsequent build commands inside the "myenv" environment
SHELL ["conda", "run", "-n", "myenv", "/bin/bash", "-c"]
# Make sure the environment is activated by testing if you can import flask or any other package you have in your requirements.yml file
RUN echo "Make sure flask is installed:"
RUN python -c "import flask"
# exposing port 8050 for interaction with local host
EXPOSE 8050
#Run your application in the new "myenv" environment
CMD ["conda", "run", "-n", "myenv", "python", "app.py"]

Running bower install inside a docker volume

Context
So I'm trying to build a Polymer project inside a Docker container and expose it as a volume (to access it I'm using docker run (...) --volume="/var/www/html:/var/www/html" --volumes-from="my-polymer-image-name" my-nginx-image).
I first tried the following Dockerfile with the volume declared last, but the volume was empty when I tried to access it from "my-nginx-container" (docker exec -ti my-nginx-image-name /bin/sh).
So I thought I had to declare the volume before using it.
Problem
But when I tried to install my bower components, I noticed that no bower_components directory was being created.
########################################################
# Dockerfile to build Polymer project and move to server
# Based on official node Dockerfile
########################################################
FROM node:6
VOLUME /var/www/html
# Install polymer and bower
RUN npm install -g \
polymer-cli \
bower
# Add project to a temp folder to build it
RUN mkdir -p /var/www/html/temp
COPY . /var/www/html/temp
WORKDIR /var/www/html/temp
RUN ls -la
RUN bower install --allow-root # here is where I try to build my project
RUN polymer build
# Move to release folder
WORKDIR /var/www/html
RUN mv /var/www/html/temp/build/unbundled/* /var/www/html
RUN bower install --allow-root
# Remove temporary content
RUN rm -rf /var/www/html/temp
The volume is only mounted when the container runs, not while the image is being built.
Add this as the last line in your Dockerfile:
ENTRYPOINT ["/bin/bash", "/etc/entrypoint.sh"]
and use an entrypoint script like this:
#!/bin/bash
set -e # if any command fails, the script exits and the container stops
cd /var/www/html/
bower install --allow-root
polymer build
mv /var/www/html/temp/build/unbundled/* /var/www/html
rm -rf /var/www/html/temp
