How to delete an empty repository from a Docker registry

I have a local Docker registry (registry:2.6.2), and my web UI constantly logs an error:
time="2019-08-13T13:58:43Z" level=error msg="Failed to retrieve an updated list of tags for http://172.20.0.20:5000" Error="Get http://172.20.0.20:5000/v2/myrepo/tags/list: http: non-successful response (status=404 body=\"{\\\"errors\\\":[{\\\"code\\\":\\\"NAME_UNKNOWN\\\",\\\"message\\\":\\\"repository name not known to registry\\\",\\\"detail\\\":{\\\"name\\\":\\\"myrepo\\\"}}]}\\n\")" Repository Name=myrepo file=allregistries.go line=71 source=ap
It happens because of the empty repository "myrepo" which exists in my registry:
curl -X GET http://172.20.0.20:5000/v2/_catalog
{"repositories":["myrepo","myrepo2","myrepo3"]}
curl -X GET http://172.20.0.20:5000/v2/myrepo/tags/list
{"errors":[{"code":"NAME_UNKNOWN","message":"repository name not known to registry","detail":{"name":"myrepo"}}]}
The question is: how do I delete this empty repository?

NOTE: All example code below is executed with root privileges on Ubuntu 20.04 LTS.
I use the following command to delete empty repositories:
docker exec -it registry sh -c '
for t in $(find /var/lib/registry/docker/registry/v2/repositories -name tags)
do
    TAGS=$(ls $t | wc -l)                                        # number of tags
    REPO=${t%%/_manifests/tags}                                  # repository directory
    LINKS=$(find $REPO/_manifests/revisions -name link | wc -l)  # number of manifest links
    if [ "$TAGS" -eq 0 -a "$LINKS" -eq 0 ]; then
        echo "REMOVE empty repo: $REPO"
        rm -rf $REPO
    fi
done'
docker restart registry
Here, registry is the name of the registry container running on the machine.
You can check the name with docker ps:
# docker ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
d3c40930b56b konradkleine/docker-registry-frontend:v2 "/bin/sh -c $START_S…" About an hour ago Up 42 minutes 80/tcp, 0.0.0.0:30443->443/tcp, :::30443->443/tcp reg-ui
787dd0c13058 registry:2.7.1 "/entrypoint.sh /etc…" 5 days ago Up 34 minutes 0.0.0.0:443->443/tcp, :::443->443/tcp, 5000/tcp registry
#
Make sure that both the number of tags and the number of links are zero before a repository directory is removed.
The registry container must be restarted if any directories are removed.
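To verify those counts before deleting anything, you can run a read-only variant that only prints each repository with its tag and link counts (a sketch that assumes the same default registry data path as above):
docker exec registry sh -c '
for t in $(find /var/lib/registry/docker/registry/v2/repositories -name tags)
do
    REPO=${t%%/_manifests/tags}
    TAGS=$(ls $t | wc -l)
    LINKS=$(find $REPO/_manifests/revisions -name link | wc -l)
    echo "$REPO: tags=$TAGS links=$LINKS"
done'
Repositories reporting tags=0 links=0 are exactly the ones the command above removes.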
If the command works fine, you can create a script:
cat <<EOM | tee /usr/local/bin/docker-registry-remove-empty-repo.sh
#!/bin/bash
# No -i/-t on docker exec here: cron provides no TTY, so -it would fail.
RM_LOG=\$(docker exec registry sh -c '
for t in \$(find /var/lib/registry/docker/registry/v2/repositories -name tags)
do
    TAGS=\$(ls \$t | wc -l)
    REPO=\${t%%/_manifests/tags}
    LINKS=\$(find \$REPO/_manifests/revisions -name link | wc -l)
    if [ "\$TAGS" -eq 0 -a "\$LINKS" -eq 0 ]; then
        echo "REMOVE empty repo: \$REPO"
        rm -rf \$REPO
    fi
done')
if [ -n "\$RM_LOG" ]; then
    echo "\$RM_LOG"
    docker restart registry
fi
EOM
chmod +x /usr/local/bin/docker-registry-remove-empty-repo.sh
You can even run it every minute by registering the script in cron:
sudo sed -i '/docker-registry-remove-empty-repo.sh/d' /var/spool/cron/crontabs/$USER
cat <<EOM | tee -a /var/spool/cron/crontabs/$USER
* * * * * /usr/local/bin/docker-registry-remove-empty-repo.sh
EOM
systemctl restart cron
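If you prefer not to edit the cron spool file directly, the same registration can be done with the crontab command (an equivalent sketch; no cron restart needed):
( crontab -l 2>/dev/null | grep -v docker-registry-remove-empty-repo.sh ; echo "* * * * * /usr/local/bin/docker-registry-remove-empty-repo.sh" ) | crontab -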
After that, everything is OK if you see entries like the following in the system log:
# tail -f /var/log/syslog
Jun 7 02:46:01 localhost CRON[38764]: (root) CMD (/usr/local/bin/docker-registry-remove-empty-repo.sh)
Jun 7 02:46:01 localhost CRON[38763]: (CRON) info (No MTA installed, discarding output)
Jun 7 02:47:01 localhost CRON[38774]: (root) CMD (/usr/local/bin/docker-registry-remove-empty-repo.sh)
Jun 7 02:47:01 localhost CRON[38773]: (CRON) info (No MTA installed, discarding output)
Jun 7 02:48:01 localhost CRON[38795]: (root) CMD (/usr/local/bin/docker-registry-remove-empty-repo.sh)
Jun 7 02:48:01 localhost CRON[38794]: (CRON) info (No MTA installed, discarding output)
Jun 7 02:49:01 localhost CRON[38804]: (root) CMD (/usr/local/bin/docker-registry-remove-empty-repo.sh)
Jun 7 02:49:02 localhost CRON[38803]: (CRON) info (No MTA installed, discarding output)
Jun 7 02:50:01 localhost CRON[38814]: (root) CMD (/usr/local/bin/docker-registry-remove-empty-repo.sh)
Jun 7 02:50:01 localhost CRON[38813]: (CRON) info (No MTA installed, discarding output)
^C
#

Related

How can I run docker commands outside the container, with cron on the host?

I need to automate the verification of active containers with docker ps, and send updates of containers with docker pull. So I created this script file:
if docker ps | grep "fairplay";then
echo "doker fairplay ok" >> /home/ubuntu/at2.log
else
echo "doker fairplay caido" >> /home/ubuntu/at2.log
errdock=1
fi
The script works without a problem when I run it manually in the terminal, but when I try it with cron it just doesn't work.
Crontab:
* * * * * root sh /home/ubuntu/at2.sh
The log when I run manually:
Thu Mar 25 13:33:43 -03 2021
doker fairplay ok
doker widevine ok
Thu Mar 25 13:33:44 -03 2021
The log when I run with cron:
Thu Mar 25 13:34:01 -03 2021
doker fairplay caido
doker widevine caido
Thu Mar 25 13:34:01 -03 2021
I don't want to run anything inside the container; I need to run the command with cron on the host, so the following questions don't help: question 1, question 2.
Your crontab syntax is not correct. I've tried your exact .sh file and I got no errors.
This is the correct one:
* * * * * sh /home/ubuntu/at2.sh
I'm not sure why you've added the root user field to the crontab syntax; a user field only exists in the system crontab (/etc/crontab), not in a per-user crontab, so if you run root sh /home/ubuntu/at2.sh, you'll get Command 'root' not found.
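For comparison, the six-field format with a user name is only valid in the system crontab, not in a per-user one:
# /etc/crontab (system crontab): the sixth field is the user to run as
* * * * * root sh /home/ubuntu/at2.sh
# crontab -e (per-user crontab): no user field
* * * * * sh /home/ubuntu/at2.sh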
I can also recommend adding the date so that you know the time it was up or down:
if docker ps | grep "fairplay"; then
    echo "`date +"%D %H:%M:%S"` docker fairplay ok" >> /home/ubuntu/at2.log
else
    echo "`date +"%D %H:%M:%S"` docker fairplay caido" >> /home/ubuntu/at2.log
    errdock=1
fi

Run a container with containerd's ctr, using uidmap to map to a non-root user on the host

To better understand how to use --uidmap with ctr, I've created a test container using the following steps. The containerd version is 1.4.3.
Build and Run Container:
Build Dockerfile
$ cat Dockerfile
FROM alpine
ENTRYPOINT ["/bin/sh"]
with
$ docker build -t test .
Sending build context to Docker daemon 143.1MB
Step 1/2 : FROM alpine
---> d6e46aa2470d
Step 2/2 : ENTRYPOINT ["/bin/sh"]
---> Running in 560b09f9b287
Removing intermediate container 560b09f9b287
---> 8506bfeab109
Successfully built 8506bfeab109
Successfully tagged test:latest
Save the image as a tarball
$ docker save test > test.tar
Import it with containerd's ctr
$ sudo ctr i import test.tar
unpacking docker.io/library/test:latest (sha256:9f7dabf0e4feadbca9bdc180422a3f2cdd7b545445180a3c23de8129dc95f29b)...done
Create and run the container
$ sudo ctr run --uidmap 0:5000:4999 docker.io/library/test:latest test
The UID map should map the container-internal UID 0 (root) to host UID 5000, according to ctr's manpage:
--uidmap="": run inside a user namespace with the specified UID mapping range; specified with the format container-uid:host-uid:length
Check UID in container and on host:
Within the container:
ps -eo ruser,rgroup,comm
RUSER RGROUP COMMAND
root root sh
root root ps
On the host:
$ ps -eo uid,gid,cmd | grep /bin/sh
126 128 /bin/sh /usr/lib/lightdm/lightdm-greeter-session /usr/sbin/unity-greeter
0 0 /bin/sh
Issue
It seems not to work: /bin/sh runs as root (uid=0) within the container as well as on the host.
I was searching for a while until I checked containerd's code and found this within cmd/ctr/commands/run/run_unix.go:
if uidmap, gidmap := context.String("uidmap"), context.String("gidmap"); uidmap != "" && gidmap != "" {
    uidMap, err := parseIDMapping(uidmap)
    if err != nil {
        return nil, err
    }
    gidMap, err := parseIDMapping(gidmap)
    if err != nil {
        return nil, err
    }
which basically means:
You have to provide both the uidmap AND the gidmap; otherwise it won't work.
Running the above container again with
$ sudo ctr run --uidmap 0:5000:4999 --gidmap 0:5000:4999 docker.io/library/test:latest test
did the trick.
Within the container:
ps -eo ruser,rgroup,comm
RUSER RGROUP COMMAND
root root sh
root root ps
On the host:
$ ps -eo uid,gid,cmd | grep /bin/sh
126 128 /bin/sh /usr/lib/lightdm/lightdm-greeter-session /usr/sbin/unity-greeter
5000 5000 /bin/sh
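As an extra check (my addition, not part of the original test; the container id test-map is arbitrary), you can print the user-namespace mapping from inside the container. /proc/self/uid_map lists container UID, host UID, and range:
$ sudo ctr run --rm --uidmap 0:5000:4999 --gidmap 0:5000:4999 docker.io/library/test:latest test-map cat /proc/self/uid_map
         0       5000       4999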

Container not running

Could you help me?
I'm trying to run a container from a Dockerfile, but it shows this warning and my container does not start.
compose.parallel.parallel_execute_iter: Finished processing: <Container: remote-host>
Starting remote-host ... done
compose.parallel.feed_queue: Pending: set()
compose.parallel.parallel_execute_iter: Finished processing: <Service: remote_host>
compose.parallel.feed_queue: Pending: set()
Attaching to jenkinks, remote-host
compose.cli.verbose_proxy.proxy_callable: docker logs <- ('f2e305942e57ce1fe90c2ca94d3d9bbc004155a136594157e41b7a916d1ca7de', stdout=True, stderr=True, stream=True, follow=True)
remote-host | Unable to load host key: /etc/ssh/ssh_host_rsa_key
remote-host | Unable to load host key: /etc/ssh/ssh_host_ecdsa_key
remote-host | Unable to load host key: /etc/ssh/ssh_host_ed25519_key
remote-host | sshd: no hostkeys available -- exiting.
compose.cli.verbose_proxy.proxy_callable: docker events <- (filters={'label': ['com.docker.compose.project=jenkins', 'com.docker.compose.oneoff=False']}, decode=True)
My Dockerfile is this:
FROM centos
RUN yum -y install openssh-server
RUN yum install -y passwd
RUN useradd remote_user && \
    echo "1234" | passwd remote_user --stdin && \
    mkdir /home/remote_user/.ssh && \
    chmod 700 /home/remote_user/.ssh
COPY remote_user.pub /home/remote_user/.ssh/authorized_keys
RUN chown remote_user:remote_user -R /home/remote_user && \
    chmod 400 /home/remote_user/.ssh/authorized_keys
CMD /usr/sbin/sshd -D
Start with an empty dir and put the following in that dir as a file called Dockerfile:
FROM centos
RUN yum -y install openssh-server
RUN yum install -y passwd
RUN useradd remote_user
RUN echo "1234" | passwd remote_user --stdin
RUN mkdir /home/remote_user/.ssh
RUN chmod 700 /home/remote_user/.ssh
COPY remote_user.pub /home/remote_user/.ssh/authorized_keys
RUN chown remote_user:remote_user -R /home/remote_user
RUN chmod 400 /home/remote_user/.ssh/authorized_keys
CMD /usr/sbin/sshd -D
# CMD ["/bin/bash"]
# ... save this file as Dockerfile then in same dir issue following
#
# docker build --tag stens_centos . # creates image stens_centos
#
# docker run -d stens_centos sleep infinity # launches container and just sleeps only purpose here is to keep container running
#
# docker ps # show running containers
#
#
# ... find CONTAINER ID from above and put into something like this
#
# docker exec -ti $( docker ps | grep stens_centos | cut -d' ' -f1 ) bash # login to running container
#
then in that same dir put your ssh key files, as per:
eve@milan ~/Dropbox/Documents/code/docker/centos $ ls -la
total 28
drwxrwxr-x 2 eve eve 4096 Nov 2 15:20 .
drwx------ 77 eve eve 12288 Nov 2 15:14 ..
-rw-rw-r-- 1 eve eve 875 Nov 2 15:20 Dockerfile
-rwx------ 1 eve eve 3243 Nov 2 15:18 remote_user
-rwx------ 1 eve eve 743 Nov 2 15:18 remote_user.pub
then cat out the Dockerfile and copy and paste the commands it explains at the bottom of the Dockerfile file ... for me all of them just worked OK
after I copied and pasted those commands listed at the bottom of the Dockerfile, the container got built and executed
docker ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
0a06ebd2752a stens_centos "sleep infinity" 7 minutes ago Up 7 minutes pedantic_brahmagupta
Keep in mind you must define the bottom CMD (or similar) in your Dockerfile to be just what you want executed as the container runs, which typically is a server that by definition runs forever. Alternatively, this CMD can simply be something which runs and then finishes, like a batch job, in which case the container will exit when that job finishes. With this knowledge, I suggest you confirm whether sshd -D will hold as a server or will immediately terminate upon launch of the container.
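Side note on the error log in the question: sshd: no hostkeys available -- exiting means sshd refuses to start because the image contains no host keys. One likely fix (my suggestion, not confirmed by the question) is to generate them at build time, before the CMD line:
RUN ssh-keygen -A    # creates any missing /etc/ssh/ssh_host_* keys so sshd -D can start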
I've just replied to this GitHub issue, but here's what I experienced and how I fixed it
I just had this issue for my Jekyll blog site, which I normally bring up using docker-compose with a mapped volume so it rebuilds when I create a new post. It was hanging, so I ran docker-compose up with the --verbose switch and saw the same compose.parallel.feed_queue: Pending: set().
I tried it on my Macbook and it was working fine
I didn't have any experimental features turned on, but I needed to go into (on Windows) Settings -> Resources -> File Sharing and add the folder I was mapping in my docker-compose file (the root of my blog site).
I re-ran docker-compose and it's now up and running.

File ownership after docker cp

How can I control which user owns the files I copy in and out of a container?
The docker cp command says this about file ownership:
The cp command behaves like the Unix cp -a command in that directories are copied recursively with permissions preserved if possible. Ownership is set to the user and primary group at the destination. For example, files copied to a container are created with UID:GID of the root user. Files copied to the local machine are created with the UID:GID of the user which invoked the docker cp command. However, if you specify the -a option, docker cp sets the ownership to the user and primary group at the source.
It says that files copied to a container are created as the root user, but that's not what I see. I create two files owned by user IDs 1005 and 1006, and those owners are translated into the container's user namespace. The -a option seems to make no difference when I copy the file into a container.
$ sudo chown 1005:1005 test.txt
$ ls -l test.txt
-rw-r--r-- 1 1005 1005 29 Oct 6 12:43 test.txt
$ docker volume create sandbox1
sandbox1
$ docker run --name run1 -vsandbox1:/data alpine echo OK
OK
$ docker cp test.txt run1:/data/test1005.txt
$ docker cp -a test.txt run1:/data/test1005a.txt
$ sudo chown 1006:1006 test.txt
$ docker cp test.txt run1:/data/test1006.txt
$ docker cp -a test.txt run1:/data/test1006a.txt
$ docker run --rm -vsandbox1:/data alpine ls -l /data
total 16
-rw-r--r-- 1 1005 1005 29 Oct 6 19:43 test1005.txt
-rw-r--r-- 1 1005 1005 29 Oct 6 19:43 test1005a.txt
-rw-r--r-- 1 1006 1006 29 Oct 6 19:43 test1006.txt
-rw-r--r-- 1 1006 1006 29 Oct 6 19:43 test1006a.txt
When I copy files out of the container, they are always owned by me. Again, the -a option seems to do nothing.
$ docker run --rm -vsandbox1:/data alpine cp /data/test1006.txt /data/test1007.txt
$ docker run --rm -vsandbox1:/data alpine chown 1007:1007 /data/test1007.txt
$ docker cp run1:/data/test1006.txt .
$ docker cp run1:/data/test1007.txt .
$ docker cp -a run1:/data/test1006.txt test1006a.txt
$ docker cp -a run1:/data/test1007.txt test1007a.txt
$ ls -l test*.txt
-rw-r--r-- 1 don don 29 Oct 6 12:43 test1006a.txt
-rw-r--r-- 1 don don 29 Oct 6 12:43 test1006.txt
-rw-r--r-- 1 don don 29 Oct 6 12:47 test1007a.txt
-rw-r--r-- 1 don don 29 Oct 6 12:47 test1007.txt
-rw-r--r-- 1 1006 1006 29 Oct 6 12:43 test.txt
$
You can also change the ownership by logging into the container as the root user:
docker exec -it --user root <container-id> /bin/bash
chown -R <username>:<groupname> <folder/file>
In addition to @Don Kirkby's answer, let me provide a similar example in bash/shell script for the case where you want to copy something into a container while applying different ownership and permissions than those of the original file.
Let's create a new container from a small image that will keep running by itself:
docker run -d --name nginx nginx:alpine
Now we'll create a new file which is owned by the current user and has default permissions:
touch foo.bar
ls -ahl foo.bar
>> -rw-rw-r-- 1 my-user my-group 0 Sep 21 16:45 foo.bar
Copying this file into the container will set the owner and group to the UID/GID of my user and preserve the permissions:
docker cp foo.bar nginx:/foo.bar
docker exec nginx sh -c 'ls -ahl /foo.bar'
>> -rw-rw-r-- 1 4098 4098 0 Sep 21 14:45 /foo.bar
Using a little tar work-around, however, I can change the ownership and permissions that are applied inside of the container.
tar -cf - foo.bar --mode u=+r,g=-rwx,o=-rwx --owner root --group root | docker cp - nginx:/
docker exec nginx sh -c 'ls -ahl /foo.bar'
>> -r-------- 1 root root 0 Sep 21 14:45 /foo.bar
tar options explained:
c creates a new archive instead of unpacking one.
f - will write to stdout instead of a file.
foo.bar is the input file to be packed.
--mode specifies the permissions for the target. Similar to chmod, they can be given in symbolic notation or as an octal number.
--owner sets the new owner of the file.
--group sets the new group of the file.
docker cp - reads the file that is to be copied into the container from stdin.
This approach is useful when a file needs to be copied into a created container before it starts, such that docker exec is not an option (which can only operate on running containers).
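The same trick works in the other direction, since docker cp can also write a tar stream to stdout, and tar can drop the archive's recorded owner and mode while extracting. A sketch, assuming GNU tar on the host:
docker cp nginx:/foo.bar - | tar -xf - --no-same-owner --no-same-permissions
The extracted foo.bar ends up owned by whoever runs tar, with permissions filtered through the current umask.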
Just a one-liner (similar to @ramu's answer), using root to make the call:
docker exec -u 0 -it <container-id> chown node:node /home/node/myfile
In order to get complete control of file ownership, I used the tar stream feature of docker cp:
If - is specified for either the SRC_PATH or DEST_PATH, you can also stream a tar archive from STDIN or to STDOUT.
I launch the docker cp process, then stream a tar file to or from the process. As the tar entries go past, I can adjust the ownership and permissions however I like.
Here's a simple example in Python that copies all the files from /outputs in the sandbox1 container to the current directory, excludes the current directory so its permissions don't get changed, and forces all the files to have read/write permissions for the user.
from subprocess import Popen, PIPE, CalledProcessError
import tarfile

def main():
    export_args = ['sudo', 'docker', 'cp', 'sandbox1:/outputs/.', '-']
    exporter = Popen(export_args, stdout=PIPE)
    tar_file = tarfile.open(fileobj=exporter.stdout, mode='r|')
    tar_file.extractall('.', members=exclude_root(tar_file))
    exporter.wait()
    if exporter.returncode:
        raise CalledProcessError(exporter.returncode, export_args)

def exclude_root(tarinfos):
    print('\nOutputs:')
    for tarinfo in tarinfos:
        if tarinfo.name != '.':
            assert tarinfo.name.startswith('./'), tarinfo.name
            print(tarinfo.name[2:])
            tarinfo.mode |= 0o600
            yield tarinfo

main()

Docker Jboss/wildfly: How to add datasources and MySQL connector

I am learning Docker, which is completely new to me. I was already able to create a jboss/wildfly image, and then I was able to start JBoss with my application using this Dockerfile:
FROM jboss/wildfly
CMD ["/opt/jboss/wildfly/bin/standalone.sh", "-c", "standalone-full.xml", "-b", "0.0.0.0"]
ADD mywebapp-web/target/mywebapp-1.0.war /opt/jboss/wildfly/standalone/deployments/mywebapp-1.0.war
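For reference, an image built from this Dockerfile can be run along these lines (the tag mywebapp is my own placeholder; 8080 is WildFly's default HTTP port):
docker build -t mywebapp .
docker run -p 8080:8080 mywebapp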
Now I would like to add support for a MySQL database by adding a datasource to the standalone configuration along with the MySQL connector. For that I am following this example:
https://github.com/arun-gupta/docker-images/tree/master/wildfly-mysql-javaee7
Following are my Dockerfile and my execute.sh script.
Dockerfile:
FROM jboss/wildfly:latest
ADD customization /opt/jboss/wildfly/customization/
CMD ["/opt/jboss/wildfly/customization/execute.sh"]
execute.sh script:
#!/bin/bash
# Usage: execute.sh [WildFly mode] [configuration file]
#
# The default mode is 'standalone' and default configuration is based on the
# mode. It can be 'standalone.xml' or 'domain.xml'.
echo "=> Executing Customization script"
JBOSS_HOME=/opt/jboss/wildfly
JBOSS_CLI=$JBOSS_HOME/bin/jboss-cli.sh
JBOSS_MODE=${1:-"standalone"}
JBOSS_CONFIG=${2:-"$JBOSS_MODE.xml"}
function wait_for_server() {
    until `$JBOSS_CLI -c ":read-attribute(name=server-state)" 2> /dev/null | grep -q running`; do
        sleep 1
    done
}
echo "=> Starting WildFly server"
echo "JBOSS_HOME : " $JBOSS_HOME
echo "JBOSS_CLI : " $JBOSS_CLI
echo "JBOSS_MODE : " $JBOSS_MODE
echo "JBOSS_CONFIG: " $JBOSS_CONFIG
echo $JBOSS_HOME/bin/$JBOSS_MODE.sh -b 0.0.0.0 -c $JBOSS_CONFIG &
$JBOSS_HOME/bin/$JBOSS_MODE.sh -b 0.0.0.0 -c $JBOSS_CONFIG &
echo "=> Waiting for the server to boot"
wait_for_server
echo "=> Executing the commands"
$JBOSS_CLI -c --file=`dirname "$0"`/commands.cli
# Add MySQL module
module add --name=com.mysql --resources=/opt/jboss/wildfly/customization/mysql-connector-java-5.1.39-bin.jar --dependencies=javax.api,javax.transaction.api
# Add MySQL driver
/subsystem=datasources/jdbc-driver=mysql:add(driver-name=mysql,driver-module-name=com.mysql,driver-xa-datasource-class-name=com.mysql.jdbc.jdbc2.optional.MysqlXADataSource)
# Deploy the WAR
#cp /opt/jboss/wildfly/customization/leadservice-1.0.war $JBOSS_HOME/$JBOSS_MODE/deployments/leadservice-1.0.war
echo "=> Shutting down WildFly"
if [ "$JBOSS_MODE" = "standalone" ]; then
$JBOSS_CLI -c ":shutdown"
else
$JBOSS_CLI -c "/host=*:shutdown"
fi
echo "=> Restarting WildFly"
$JBOSS_HOME/bin/$JBOSS_MODE.sh -b 0.0.0.0 -c $JBOSS_CONFIG
But I get an error when I run the image, complaining that a file or directory is not found:
Building Image
$ docker build -t mpssantos/leadservice:latest .
Sending build context to Docker daemon 19.37 MB
Step 1 : FROM jboss/wildfly:latest
---> b8279b641e82
Step 2 : ADD customization /opt/jboss/wildfly/customization/
---> aea03d4f2819
Removing intermediate container 0920e2cd97fd
Step 3 : CMD /opt/jboss/wildfly/customization/execute.sh
---> Running in 8a0dbcb01855
---> 10335320b89d
Removing intermediate container 8a0dbcb01855
Successfully built 10335320b89d
SECURITY WARNING: You are building a Docker image from Windows against a non-Windows Docker host. All files and directories added to build context will have '-rwxr-xr-x' permissions. It is recommended to double check and reset permissions for sensitive files and directories.
Running image
$ docker run mpssantos/leadservice
no such file or directory
Error response from daemon: Cannot start container 5d3357ba17afa36e81d8794f2b0cd45cc00dde955b2b2054282c4ef17dd4f265: [8] System error: no such file or directory
Can someone let me know how I can access the filesystem so I can check which file or directory it is complaining about? Is there a better way to debug this?
I believe it is something related to the bash referred to on the first line of the script, because the following echo is not printed.
Thank you so much
I managed to get a shell in the container to check what's inside.
1) SSH to the docker machine: docker-machine ssh default
2) Checked the container id with the command: docker ps -a
3) Opened a shell in the container with the command: sudo docker exec -i -t 665b4a1e17b6 /bin/bash
4) I can check that the "/opt/jboss/wildfly/customization/" directory exists with the expected files
The customization dir has the following permissions and is listed like this:
drwxr-xr-x 2 root root 4096 Jun 12 23:44 customization
drwxr-xr-x 10 jboss jboss 4096 Jun 14 00:15 standalone
and the files inside the customization dir
drwxr-xr-x 2 root root 4096 Jun 12 23:44 .
drwxr-xr-x 12 jboss jboss 4096 Jun 14 00:15 ..
-rwxr-xr-x 1 root root 1755 Jun 12 20:06 execute.sh
-rwxr-xr-x 1 root root 989497 May 4 11:11 mysql-connector-java-5.1.39-bin.jar
If I try to execute the file, I get this error:
[jboss#d68190e4f0d8 customization]$ ./execute.sh
bash: ./execute.sh: /bin/bash^M: bad interpreter: No such file or directory
Does this shed light on anything?
Thank you so much again
I found the issue. The execute.sh file had Windows line endings. I converted it to UNIX line endings and it started to work.
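For anyone hitting the same thing: the ^M in the bad interpreter message is the carriage return from the Windows line endings. You can confirm and fix it, for example, with:
file execute.sh                 # reports "... with CRLF line terminators" if affected
sed -i 's/\r$//' execute.sh     # strip the carriage returns (or: dos2unix execute.sh)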
I believe the execute.sh is not found. You can verify this by running the following and finding that the result is an empty directory:
docker run mpssantos/leadservice ls -al /opt/jboss/wildfly/customization/
The reason for this is that you are doing your build on a different (virtual) machine than your local system, so it's pulling the "customization" folder from that VM. I'd run the build within the VM and place the files you want to import on that VM where the build can find them.
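A sketch of that workflow with docker-machine (the VM name default and the paths here are assumptions):
docker-machine scp -r . default:/home/docker/leadservice
docker-machine ssh default "cd /home/docker/leadservice && docker build -t mpssantos/leadservice:latest ."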
