Docker Jboss/wildfly: How to add datasources and MySQL connector - docker

I am learning Docker, which is completely new to me. I was already able to create a jboss/wildfly image, and then to start JBoss with my application using this Dockerfile:
FROM jboss/wildfly
CMD ["/opt/jboss/wildfly/bin/standalone.sh", "-c", "standalone-full.xml", "-b", "0.0.0.0"]
ADD mywebapp-web/target/mywebapp-1.0.war /opt/jboss/wildfly/standalone/deployments/mywebapp-1.0.war
Now I would like to add support for a MySQL database by adding a datasource to the standalone configuration, along with the MySQL connector. For that I am following this example:
https://github.com/arun-gupta/docker-images/tree/master/wildfly-mysql-javaee7
Below are my Dockerfile and my execute.sh script.
Dockerfile:
FROM jboss/wildfly:latest
ADD customization /opt/jboss/wildfly/customization/
CMD ["/opt/jboss/wildfly/customization/execute.sh"]
execute.sh:
#!/bin/bash
# Usage: execute.sh [WildFly mode] [configuration file]
#
# The default mode is 'standalone' and default configuration is based on the
# mode. It can be 'standalone.xml' or 'domain.xml'.
echo "=> Executing Customization script"
JBOSS_HOME=/opt/jboss/wildfly
JBOSS_CLI=$JBOSS_HOME/bin/jboss-cli.sh
JBOSS_MODE=${1:-"standalone"}
JBOSS_CONFIG=${2:-"$JBOSS_MODE.xml"}
function wait_for_server() {
until `$JBOSS_CLI -c ":read-attribute(name=server-state)" 2> /dev/null | grep -q running`; do
sleep 1
done
}
echo "=> Starting WildFly server"
echo "JBOSS_HOME : " $JBOSS_HOME
echo "JBOSS_CLI : " $JBOSS_CLI
echo "JBOSS_MODE : " $JBOSS_MODE
echo "JBOSS_CONFIG: " $JBOSS_CONFIG
echo $JBOSS_HOME/bin/$JBOSS_MODE.sh -b 0.0.0.0 -c $JBOSS_CONFIG &
$JBOSS_HOME/bin/$JBOSS_MODE.sh -b 0.0.0.0 -c $JBOSS_CONFIG &
echo "=> Waiting for the server to boot"
wait_for_server
echo "=> Executing the commands"
$JBOSS_CLI -c --file=`dirname "$0"`/commands.cli
# Add MySQL module
module add --name=com.mysql --resources=/opt/jboss/wildfly/customization/mysql-connector-java-5.1.39-bin.jar --dependencies=javax.api,javax.transaction.api
# Add MySQL driver
/subsystem=datasources/jdbc-driver=mysql:add(driver-name=mysql,driver-module-name=com.mysql,driver-xa-datasource-class-name=com.mysql.jdbc.jdbc2.optional.MysqlXADataSource)
# Deploy the WAR
#cp /opt/jboss/wildfly/customization/leadservice-1.0.war $JBOSS_HOME/$JBOSS_MODE/deployments/leadservice-1.0.war
echo "=> Shutting down WildFly"
if [ "$JBOSS_MODE" = "standalone" ]; then
$JBOSS_CLI -c ":shutdown"
else
$JBOSS_CLI -c "/host=*:shutdown"
fi
echo "=> Restarting WildFly"
$JBOSS_HOME/bin/$JBOSS_MODE.sh -b 0.0.0.0 -c $JBOSS_CONFIG
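For reference, the commands.cli in the linked example also creates the datasource itself after registering the driver. A sketch of such a CLI command follows; the JNDI name, connection URL, and credentials below are placeholders for illustration, not values from this question:

```
data-source add \
    --name=mysqlDS \
    --driver-name=mysql \
    --jndi-name=java:jboss/datasources/mysqlDS \
    --connection-url=jdbc:mysql://mysqldb:3306/sample \
    --user-name=mysql \
    --password=mysql \
    --enabled=true
```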
But I get an error when I run the image, complaining that a file or directory is not found:
Building Image
$ docker build -t mpssantos/leadservice:latest .
Sending build context to Docker daemon 19.37 MB
Step 1 : FROM jboss/wildfly:latest
---> b8279b641e82
Step 2 : ADD customization /opt/jboss/wildfly/customization/
---> aea03d4f2819
Removing intermediate container 0920e2cd97fd
Step 3 : CMD /opt/jboss/wildfly/customization/execute.sh
---> Running in 8a0dbcb01855
---> 10335320b89d
Removing intermediate container 8a0dbcb01855
Successfully built 10335320b89d
SECURITY WARNING: You are building a Docker image from Windows against a non-Windows Docker host. All files and directories added to build context will have '-rwxr-xr-x' permissions. It is recommended to double check and reset permissions for sensitive files and directories.
Running image
$ docker run mpssantos/leadservice
no such file or directory
Error response from daemon: Cannot start container 5d3357ba17afa36e81d8794f2b0cd45cc00dde955b2b2054282c4ef17dd4f265: [8] System error: no such file or directory
Can someone let me know how I can access the filesystem so I can check which file or directory it is complaining about? Is there a better way to debug this?
I believe it is something related to the bash referenced on the first line of the script, because the echo that follows is never printed.
Thank you so much
I managed to get a shell in the container to check what's inside.
1) SSH to the docker machine: docker-machine ssh default
2) checked the container id with the command: docker ps -a
3) opened a shell in the container with the command: sudo docker exec -i -t 665b4a1e17b6 /bin/bash
4) verified that the "/opt/jboss/wildfly/customization/" directory exists with the expected files
The customization dir has the following permissions and is listed like this:
drwxr-xr-x 2 root root 4096 Jun 12 23:44 customization
drwxr-xr-x 10 jboss jboss 4096 Jun 14 00:15 standalone
and the files inside the customization dir:
drwxr-xr-x 2 root root 4096 Jun 12 23:44 .
drwxr-xr-x 12 jboss jboss 4096 Jun 14 00:15 ..
-rwxr-xr-x 1 root root 1755 Jun 12 20:06 execute.sh
-rwxr-xr-x 1 root root 989497 May 4 11:11 mysql-connector-java-5.1.39-bin.jar
If I try to execute the file I get this error:
[jboss@d68190e4f0d8 customization]$ ./execute.sh
bash: ./execute.sh: /bin/bash^M: bad interpreter: No such file or directory
Does this shed light on anything?
Thank you so much again

I found the issue. The execute.sh file had Windows line endings (CRLF). I converted it to Unix line endings (LF) and it started to work.
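For anyone hitting the same `^M: bad interpreter` error, the conversion can be done with sed (or dos2unix). A minimal reproduction and fix, using a throwaway script rather than the real execute.sh:

```shell
tmpdir=$(mktemp -d)

# Write a script with Windows (CRLF) line endings, as a Windows editor would:
printf '#!/bin/bash\r\necho hello\r\n' > "$tmpdir/execute.sh"
chmod +x "$tmpdir/execute.sh"

# Running it fails with "bad interpreter" because the shebang line ends in \r:
"$tmpdir/execute.sh" 2>/dev/null || echo "fails before conversion"

# Strip the carriage returns (the same thing dos2unix does):
sed -i 's/\r$//' "$tmpdir/execute.sh"

# Now the script runs and prints "hello":
"$tmpdir/execute.sh"
```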

I believe the execute.sh is not found. You can verify this by running the following and finding that the result is an empty directory:
docker run mpssantos/leadservice ls -al /opt/jboss/wildfly/customization/
The reason for this is that you are doing your build on a different (virtual) machine than your local system, so it's pulling the "customization" folder from that VM. I'd run the build within the VM and place the files you want to import on that VM, where the build can find them.


Verify crontab works on container using slim-buster docker image?

Problem Description
I am unable to see any output from the cron job when I run docker-compose logs -f cron after running docker-compose up.
When I attached to the container using VSCode, I navigated to /var/log/cron.log, ran cat on it, and saw no output. Curiously, when I run crontab -l I see * * * * * /bin/sh get_date.sh as the output.
Description of Attempted Solution
Here is how I organized the project (it is over-engineered at the moment, for reasons of later extensibility):
├── config
│   └── crontab
├── docker-compose.yml
├── Dockerfile
├── README.md
└── scripts
    └── get_date.sh
Here are the details on the above; the contents are simple. It is also my attempt to use a lean python:3.8-slim-buster Docker image so that I can run bash or Python scripts (not yet attempted):
crontab
* * * * * /bin/sh get_date.sh
get_date.sh
#!/bin/sh
echo "Current date and time is " "$(date +%D-%H:%M)"
docker-compose.yml
version: '3.8'
services:
  cron:
    build:
      context: .
      dockerfile: ./Dockerfile
Dockerfile
FROM python:3.8-slim-buster
# Install cron
RUN apt-get update \
    && apt-get install -y cron
# Copying script file into the container.
COPY scripts/get_date.sh .
# Giving executable permission to the script file.
RUN chmod +x get_date.sh
# Adding crontab to the appropriate location
ADD config/crontab /etc/cron.d/my-cron-file
# Giving executable permission to crontab file
RUN chmod 0644 /etc/cron.d/my-cron-file
# Running crontab
RUN crontab /etc/cron.d/my-cron-file
# Create the log file to be able to run tail
RUN touch /var/log/cron.log
# Creating entry point for cron
CMD ["cron", "tail", "-f", "/var/log/cron.log"]
Things Attempted
I am new to getting cron working in a container environment. I am not getting any error messages, so I am not sure how to debug this issue other than by describing the behavior.
I have changed the content of crontab from * * * * * root bash get_date.sh to the above. I also checked Stack Overflow and found a similar issue here, but no clear solution was proposed as far as I could tell.
Thanks kindly in advance.
References
Stackoverflow discussion on running cron inside of container
How to run cron inside of containers
You have several issues that are preventing this from working:
Your attempt to run tail is a no-op: with your CMD as written you're simply running the command cron tail -f /var/log/cron.log. In other words, you're running cron and providing tail -f /var/log/cron.log as arguments. If you want to run cron followed by the tail command, you would need to write it like this:
CMD ["sh", "-c", "cron && tail -f /var/log/cron.log"]
While the above will both start cron and run the tail command, you still won't see any log output...because Debian cron doesn't log to a file; it logs to syslog. You won't see any output in /var/log/cron.log unless you have a syslog daemon installed, configured, and running.
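The difference between the two forms is ordinary shell semantics, nothing Docker-specific: in the exec form every array element after the first becomes an argument to the first command, while sh -c runs a real command line. Outside Docker it looks like this (echo stands in for cron here):

```shell
# Exec form CMD ["cron", "tail", "-f", "/var/log/cron.log"] runs ONE program
# with three arguments; here echo receives them and just prints them back:
echo tail -f /var/log/cron.log

# The sh -c form chains two actual commands:
sh -c 'echo first && echo second'
```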
I would suggest this as an alternative:
Fix your syntax in config/crontab; for files installed in /etc/cron.d, you need to provide the username:
* * * * * root /bin/sh /usr/local/bin/get_date.sh
I'm also being explicit about the path here, rather than assuming our cron job and the COPY command have the same working directory.
There's another problem here: this script outputs to stdout, but that won't go anywhere useful (cron generally takes output from your cron jobs and then emails it to root). We can explicitly send the output to syslog instead:
* * * * * root /bin/sh /usr/local/bin/get_date.sh | logger
We don't need to make get_date.sh executable, since we're explicitly running it with the sh command.
We'll use busybox for a syslog daemon that logs to stdout.
That all gets us:
FROM python:3.8-slim-buster
# Install cron and busybox
RUN apt-get update \
    && apt-get install -y \
       cron \
       busybox
# Copying script file into the container.
COPY scripts/get_date.sh /usr/local/bin/get_date.sh
# Adding crontab to the appropriate location
COPY config/crontab /etc/cron.d/get_date
# Creating entry point for cron
CMD sh -c 'cron && busybox syslogd -n -O-'
If we build an image from this, start a container, and leave it running for a while, we see as output:
Sep 22 00:17:52 701eb0bd249f syslog.info syslogd started: BusyBox v1.30.1
Sep 22 00:18:01 701eb0bd249f authpriv.err CRON[7]: pam_env(cron:session): Unable to open env file: /etc/default/locale: No such file or directory
Sep 22 00:18:01 701eb0bd249f authpriv.info CRON[7]: pam_unix(cron:session): session opened for user root by (uid=0)
Sep 22 00:18:01 701eb0bd249f cron.info CRON[8]: (root) CMD (/bin/sh /usr/local/bin/get_date.sh | logger)
Sep 22 00:18:01 701eb0bd249f user.notice root: Current date and time is 09/22/22-00:18
Sep 22 00:18:01 701eb0bd249f authpriv.info CRON[7]: pam_unix(cron:session): session closed for user root
Sep 22 00:19:01 701eb0bd249f authpriv.err CRON[12]: pam_env(cron:session): Unable to open env file: /etc/default/locale: No such file or directory
Sep 22 00:19:01 701eb0bd249f authpriv.info CRON[12]: pam_unix(cron:session): session opened for user root by (uid=0)
Sep 22 00:19:01 701eb0bd249f cron.info CRON[13]: (root) CMD (/bin/sh /usr/local/bin/get_date.sh | logger)
Sep 22 00:19:01 701eb0bd249f user.notice root: Current date and time is 09/22/22-00:19
Sep 22 00:19:01 701eb0bd249f authpriv.info CRON[12]: pam_unix(cron:session): session closed for user root

could not parse ssh: [default]: stat /tmp/ssh-qpL02JZP5k7x/agent.28198: no such file or directory

AFAIK the error means that there is no file named agent.28198 in the mentioned directory, but upon listing its contents, the file (a local socket file) is clearly there. What could be the reason for Docker's inability to access the socket?
Here is the full command scenario:
$ eval $(ssh-agent -s)
Agent pid 28199
$ ssh-add
Enter passphrase for /home/ubuntu/.ssh/id_rsa:
Identity added: /home/ubuntu/.ssh/id_rsa (/home/ubuntu/.ssh/id_rsa)
$ DOCKER_BUILDKIT=1 docker build --ssh default -t my_image .
could not parse ssh: [default]: stat /tmp/ssh-qpL02JZP5k7x/agent.28198: no such file or directory
$ ls -l /tmp/ssh-qpL02JZP5k7x/
total 0
srw------- 1 ubuntu ubuntu 0 Sep 9 08:50 agent.28198
Docker was installed from snap, and that's the culprit: because of snap confinement it does not have access to the /tmp folder. To remediate, remove the snap package with sudo snap remove docker and install Docker via dpkg (link).
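The remediation steps might look like the following sketch (docker.io is the Debian/Ubuntu archive package name; the linked instructions may install docker-ce from Docker's own repository instead):

```shell
# Remove the snap-confined docker
sudo snap remove docker

# Install the distribution package (assumed name; follow the linked
# guide if you want docker-ce from Docker's apt repository instead)
sudo apt-get update && sudo apt-get install -y docker.io
```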

Add a new entrypoint to a docker image

Recently, we decided to move one of our services to a Docker container. The service is a product of another company, and they have provided us with the Docker image. However, we need to do some extra configuration steps in the container entrypoint.
The first thing I tried, was to create a DockerFile from the base image and then add commands to do the extra steps, like this:
FROM baseimage:tag
RUN chmod a+w /path/to/entrypoint_creates_this_file
But it failed, because these extra steps must run after the base image's entrypoint has run.
Is there any way to extend the entrypoint of a base image? If not, what is the correct way to do this?
Thanks
I finally ended up calling the original entrypoint bash script from my new entrypoint bash script, and then doing the other extra configuration steps.
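That wrapper pattern can be sketched with plain shell, no Docker needed. Here a throwaway vendor-entrypoint.sh stands in for the base image's real entrypoint (its name and the state.txt file are made up for illustration), and the extra step is a chmod like the one from the question:

```shell
work=$(mktemp -d)

# Stand-in for the base image's entrypoint: it creates a file we must fix up.
cat > "$work/vendor-entrypoint.sh" <<'EOF'
#!/bin/sh
echo "vendor init" > "$1/state.txt"
chmod a-w "$1/state.txt"
EOF
chmod +x "$work/vendor-entrypoint.sh"

# Our new entrypoint: run the original first, then the extra steps.
cat > "$work/new-entrypoint.sh" <<'EOF'
#!/bin/sh
set -e
"$(dirname "$0")/vendor-entrypoint.sh" "$1"  # original entrypoint first
chmod a+w "$1/state.txt"                     # extra configuration afterwards
echo "extended entrypoint done"
EOF
chmod +x "$work/new-entrypoint.sh"

"$work/new-entrypoint.sh" "$work"
```

In the real image you would COPY the new entrypoint in, point ENTRYPOINT at it, and have it call the vendor's script by its actual path.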
You do not even need to create a new Dockerfile. To override the entrypoint, you can just run the image with a command such as the one below:
docker run --entrypoint new-entry-point-cmd baseimage:tag <optional-args-to-entrypoint>
Create your custom entrypoint file, add it to the image, and specify it as your ENTRYPOINT:
FROM image:base
COPY /path/to/my-entry-point.sh /my-entry-point.sh
# do something here
ENTRYPOINT ["/my-entry-point.sh"]
Let me take an example with certbot. Using the excellent answer from Anoop, we can get an interactive shell (-ti) into a temporary container (--rm) with this image like so:
$ docker run --rm -ti --entrypoint /bin/sh certbot/certbot:latest
But what if we want to run a command after the original entrypoint, as the OP requested? We could run a shell and join the commands, as in the following example:
$ docker run --rm --entrypoint /bin/sh certbot/certbot:latest \
-c "certbot --version && touch i-can-do-nice-things-here && ls -lah"
certbot 1.30.0
total 28K
drwxr-xr-x 1 root root 4.0K Oct 5 15:10 .
drwxr-xr-x 1 root root 4.0K Sep 7 18:10 ..
-rw-r--r-- 1 root root 0 Oct 5 15:10 i-can-do-nice-things-here
drwxr-xr-x 1 root root 4.0K Sep 7 18:10 src
drwxr-xr-x 1 root root 4.0K Sep 7 18:10 tools
Background
If I run it with the original entrypoint I will get this:
$ docker run --rm certbot/certbot:latest
Saving debug log to /var/log/letsencrypt/letsencrypt.log
Certbot doesn't know how to automatically configure the web server on this system. However, it can still get a certificate for you. Please run "certbot certonly" to do so. You'll need to manually configure your web server to use the resulting certificate.
Or:
$ docker run --rm certbot/certbot:latest --version
certbot 1.30.0
I can see the entrypoint with docker inspect:
$ docker inspect certbot/certbot:latest | grep -i entry -C 2
},
"WorkingDir": "/opt/certbot",
"Entrypoint": null,
"OnBuild": null,
"Labels": null
--
},
"WorkingDir": "/opt/certbot",
"Entrypoint": [
"certbot"
],
If /bin/sh doesn't work in your container, try /bin/bash.

Unable to compile cpp via docker in Travis-CI: /usr/bin/ld: cannot open output file a.out: Permission denied

I am using a very simple .travis.yml to compile a C++ program via Docker in Travis CI. (My motivation is to experiment with running Docker in Travis CI.)
sudo: required
services:
  - docker
before_install:
  - docker pull glot/clang
script:
  - sudo docker run --rm -v "$(pwd)":/app -w /app glot/clang g++ main.cpp
But the build is failing with the following error:
/usr/bin/ld: cannot open output file a.out: Permission denied
This happens regardless of whether I use sudo or not. Can someone help me figure out the root cause and fix this? Thanks.
I would suggest setting the mount path explicitly rather than using $(pwd). Then you need to check the permissions from inside the container. Try something like this:
sudo docker run --rm -v "$(pwd)":/app -w /app glot/clang stat /app
This will show the folder permissions. Probably no one is able to write to this folder.
Also, you should avoid building your software with root permissions; it's not secure. Create a non-privileged user and use it when running the compiler.
UPD:
I cannot reproduce this issue with Docker 1.6.0; it's probably caused by some filesystem settings persisted by the Travis-CI virtual machine. This is what I have on my localhost:
➜ /tmp mkdir /tmp/code
➜ /tmp echo "int main(){}" > /tmp/code/main.cpp
➜ /tmp echo "g++ main.cpp && ls -l" > /tmp/code/build.sh
➜ /tmp docker run --rm -v /tmp/code:/app -w /app glot/clang bash /app/build.sh
total 20
-rwxr-xr-x 1 glot glot 8462 Dec 30 10:19 a.out
-rwxrwxr-x 1 glot glot 22 Dec 30 10:17 build.sh
-rw-rw-r-- 1 glot glot 13 Dec 30 10:10 main.cpp
As you can see, the resulting binary appears in the /app folder.

Multi command with docker in a script

With Docker I would like to offer each client a container to compile and execute a C program consisting of a single file.
For that, I share a folder between the host and the container thanks to a Dockerfile and the ADD command.
My folder is like that:
folder/id_user/script.sh
folder/id_user/code.c
In script.sh:
gcc ./compil/code.c -o ./compil/code && ./compil/code
My problem is that in the docs we can read this about ADD:
All new files and directories are created with mode 0755, uid and gid 0.
But when I run ls on the folder I get:
ls -l compil/8f41dacd-8775-483e-8093-09a8712e82b1/
total 8
-rw-r--r-- 1 1000 1000 51 Feb 11 10:52 code.c
-rw-r--r-- 1 1000 1000 54 Feb 11 10:52 script.sh
So I can't execute script.sh. Do you know why?
Maybe you are wondering why I proceed like that. It's because if I do:
sudo docker run ubuntu/C pwd && pwd
result:
/
/srv/website
So we can see that the first command runs in the container but the second does not. I understand this might be normal for Docker.
If you have any suggestions, I'm happy to hear them.
Thanks!
You can set the correct mode with a RUN command using chmod:
# Dockerfile
...
ADD script.sh /root/script.sh
RUN chmod +x /root/script.sh
...
For the second question, you should use the CMD command; the && approach does work in a Dockerfile. Try putting this line at the end of your Dockerfile:
CMD pwd && pwd
then docker build . and you will see:
root@test:/home/test# docker run <image>
/
/
Alternatively, you can do:
RUN /bin/sh /root/script.sh
to achieve the same result.
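The reason running the script through sh works without chmod is that the interpreter reads the file as data, so the kernel never consults the execute bit. A quick local demonstration with a throwaway file:

```shell
work=$(mktemp -d)
printf 'echo ran without +x\n' > "$work/script.sh"
chmod 644 "$work/script.sh"        # -rw-r--r--, the same mode as in the question

# Direct execution is refused by the kernel...
"$work/script.sh" 2>/dev/null || echo "direct exec denied"

# ...but handing the file to an interpreter works fine:
sh "$work/script.sh"
```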
