Ansible hangs using become when executed by the jenkins user

Ansible playbook via Jenkins job
For my CI/CD pipeline, I'm using Jenkins to execute an Ansible playbook via the Ansible plugin. Jenkins and Ansible run on the same GCP instance (Debian 10). Ansible runs the playbook against another GCP instance (also Debian 10) over the private network. When I run the playbook manually as my default GCP user, everything works flawlessly. However, when Jenkins runs the playbook, or when I run it manually as the jenkins user, execution hangs whenever become is set to yes.
Ansible stdout
[WindBox_main#2] $ ansible-playbook ./ansible/playbook.yml -i ./ansible/hosts
PLAY [all] *********************************************************************
TASK [Gathering Facts] *********************************************************
ok: [10.156.0.2]
TASK [docker_stack : Ping host] ************************************************
ok: [10.156.0.2]
TASK [docker_stack : Ping host with become] ************************************
I have the Ansible become password stored in a vars file (I know, it should be in the vault), and I verified that Ansible is able to access this variable. The private key Ansible uses to establish the SSH connection has the same owner and group as the vars file, and the SSH connection itself is not a problem.
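For reference, the vars file is just a small YAML mapping along these lines (ansible_become_password is the standard Ansible variable name; that it is the one in my file is an assumption, and the value is obviously a placeholder):
# /etc/ansible/secrets.yml
ansible_become_password: "not-the-real-password"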
Manually establishing an SSH connection with the jenkins user and private key works as expected, as does the sudo command when logged in as the ansible-cicd user.
Initial research pointed me towards this post, which states that the process running on the remote host might be waiting for user input. That does not seem likely, as it does not happen when running as my default user. I assume that remotely everything runs as the ansible-cicd user, as specified in the hosts file.
I assume it is some sort of permission problem, but it does not look like it's any of the files mentioned below. Is there something I am missing? Does Ansible's become require local sudo access?
Any help would be greatly appreciated.
Files
hosts
[dockermanager]
10.156.0.2 ansible_user=ansible-cicd ansible_ssh_private_key_file=/var/lib/jenkins/.ssh/id_rsa
[dockermanager:vars]
ansible_python_interpreter=/usr/bin/python3
ansible_connection=ssh
playbook.yml
---
- hosts: all
  vars:
    ansible_python_interpreter: /usr/bin/python3
    ansible_connection: local
  vars_files:
    - /etc/ansible/secrets.yml
  roles:
    - docker_stack
roles/docker_stack/main.yml (shortened for brevity; these are the first two tasks)
---
- name: Ping host
  ansible.builtin.ping:

- name: Ping host with become
  ansible.builtin.ping:
  become: yes
/var/lib/jenkins/.ssh/
total 20
drwx------ 2 jenkins jenkins 4096 Nov 23 22:12 .
drwxr-xr-x 26 jenkins jenkins 4096 Nov 25 15:27 ..
-rw------- 1 jenkins jenkins 1831 Nov 25 15:19 id_rsa
-rw-r--r-- 1 jenkins jenkins 405 Nov 25 15:19 id_rsa.pub
-rw-r--r-- 1 jenkins jenkins 666 Nov 25 15:13 known_hosts
/etc/ansible/
total 16
drwxr-xr-x 2 ansible ansible 4096 Nov 24 13:42 .
drwxr-xr-x 80 root root 4096 Nov 25 15:09 ..
-rw-r--r-- 1 jenkins jenkins 62 Nov 24 13:42 secrets.yml

As mentioned by @mdaniel in the comments, the ansible_connection variable was defined twice (in hosts and in the playbook), and ssh was overridden by local, meaning Ansible never actually connected to my remote machine.
Removing ansible_connection: local from the playbook vars solved the problem.
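For reference, the working playbook simply drops the connection override:
---
- hosts: all
  vars:
    ansible_python_interpreter: /usr/bin/python3
  vars_files:
    - /etc/ansible/secrets.yml
  roles:
    - docker_stack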


docker command not available in custom pipe for BitBucket Pipeline

I'm working on a build step that handles common deployment tasks in a Docker Swarm Mode cluster. As this is a common problem for us and for others, we've shared this build step as a BitBucket pipe: https://bitbucket.org/matchory/swarm-secret-pipe/
The pipe needs to use the docker command to work with a remote Docker installation. This doesn't work, however, because the docker executable cannot be found when the pipe runs.
The following holds true for our test repository pipeline:
The docker option is set to true:
options:
  docker: true
The docker service is enabled for the build step:
main:
  - step:
      services:
        - docker: true
Docker works fine in the repository pipeline itself, but not within the pipe.
Pipeline log shows the docker path being mounted into the pipe container:
docker container run \
--volume=/opt/atlassian/pipelines/agent/build:/opt/atlassian/pipelines/agent/build \
--volume=/opt/atlassian/pipelines/agent/ssh:/opt/atlassian/pipelines/agent/ssh:ro \
--volume=/usr/local/bin/docker:/usr/local/bin/docker:ro \
--volume=/opt/atlassian/pipelines/agent/build/.bitbucket/pipelines/generated/pipeline/pipes:/opt/atlassian/pipelines/agent/build/.bitbucket/pipelines/generated/pipeline/pipes \
--volume=/opt/atlassian/pipelines/agent/build/.bitbucket/pipelines/generated/pipeline/pipes/matchory/swarm-secret-pipe:/opt/atlassian/pipelines/agent/build/.bitbucket/pipelines/generated/pipeline/pipes/matchory/swarm-secret-pipe \
--workdir=$(pwd) \
--label=org.bitbucket.pipelines.system=true \
radiergummi/swarm-secret-pipe:1.3.7@sha256:baf05b25b38f2a59b044e07f4ad07065de90257a000137a0e1eb71cbe1a438e5
The pipe is pretty standard and uses a recent Alpine image; nothing special in that regard. The PATH is never overwritten. Now for the fun part: If I do ls /usr/local/bin/docker inside the pipe, it shows an empty directory:
ls /usr/local/bin
total 16K
drwxr-xr-x 1 root root 4.0K May 13 13:06 .
drwxr-xr-x 1 root root 4.0K Apr 4 16:06 ..
drwxr-xr-x 2 root root 4.0K Apr 29 09:30 docker
ls /usr/local/bin/docker
total 8K
drwxr-xr-x 2 root root 4.0K Apr 29 09:30 .
drwxr-xr-x 1 root root 4.0K May 13 13:06 ..
ls: /usr/local/bin/docker/docker: No such file or directory
As far as I understand pipelines and Docker, /usr/local/bin/docker should be the docker binary file. Instead, it appears to be an empty directory for some reason.
What is going on here?
I've also looked at other, official pipes. They don't do anything differently, yet seem to use the docker command just fine (e.g. the Azure pipe).
After talking to BitBucket support, I solved the issue. As it turns out, if the docker context is changed, any docker command is sent straight to the remote docker binary, which (on our servers) lives at a different path than in BitBucket Pipelines!
Because we changed the docker context before using the pipe, the docker client mounted into the pipe still had the remote context set, but the pipe looked for the docker binary at a different place, so the No such file or directory error was thrown.
TL;DR: Always restore the default docker host/context before passing control to a pipe, e.g.:
script:
  - export DEFAULT_DOCKER_HOST=$DOCKER_HOST
  - unset DOCKER_HOST
  - docker context create remote --docker "host=ssh://${DEPLOY_SSH_USER}@${DEPLOY_SSH_HOST}"
  - docker context use remote
  # do your thing
  - export DOCKER_HOST=$DEFAULT_DOCKER_HOST # <------ restore the default host
  - pipe: matchory/swarm-secret-pipe:1.3.16
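An equivalent approach (assuming the built-in default context has not been renamed) is to switch contexts back instead of juggling DOCKER_HOST:
  - docker context use default   # switch back before handing control to the pipe
  - pipe: matchory/swarm-secret-pipe:1.3.16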

Mounted Docker volume has different ownership when using Travis

This question relates to this repository with the most relevant Travis job here.
The repository is for a static site built from Jupyter notebooks. The notebooks are converted using build/build.py, which, for each post, builds a Docker image, starts a corresponding container with the post's notebook directory mounted, and uses nbconvert to convert the notebook to Markdown. One step of nbconvert's conversion involves creating a supporting file directory. This fails on Travis due to a permission issue.
In attempting to debug this problem, I found that the ownership and permissions of the repo are the same on my local machine and on Travis (with my username switched for travis) before running Docker. Despite this, inside the mounted volume of the Docker container, the ownership differs:
Local:
drwxrwxr-x 3 jovyan 1000 4096 Dec 10 19:56 .
drwsrwsr-x 1 jovyan users 4096 Dec 3 21:51 ..
-rw-rw-r-- 1 jovyan 1000 105 Dec 7 09:57 Dockerfile
drwxr-xr-x 2 jovyan 1000 4096 Dec 10 12:09 .ipynb_checkpoints
-rw-r--r-- 1 jovyan 1000 154229 Dec 10 12:28 post.ipynb
Travis:
drwxrwxr-x 2 2000 2000 4096 Dec 10 19:58 .
drwsrwsr-x 1 jovyan users 4096 Nov 8 16:37 ..
-rw-rw-r-- 1 2000 2000 101 Dec 10 19:58 Dockerfile
-rw-rw-r-- 1 2000 2000 35271 Dec 10 19:58 post.ipynb
Both my local machine and Travis are running Ubuntu 20.04, have the same version of Docker, and all other tools come from Conda so should behave the same. I am struggling to understand where this difference in ownership is coming from.
Try running docker again with this command, so that the UID outside the container is propagated inside:
docker run -u `id -u`
or alternatively, as pointed out by @anemyte:
docker run -u $(id -u)
This should cause new files created inside the container to be owned by jovyan.
If you can predict which mount points will exist, you could also pre-create them so the ownership of the files inside is correct:
docker run -v /path/on/host:/path/in/container ...
If you set the permissions of your local path (/path/on/host) to 777, they will also be propagated to the mount point: no permission error will be thrown regardless of the user Docker uses to create those files.
After that, you'll be free to restore permissions, if needed.
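Putting both suggestions together, a minimal sketch (the image name and paths are placeholders):
mkdir -p /path/on/host                 # pre-create the mount point on the host
chmod 777 /path/on/host                # optional: open permissions while debugging
docker run -u $(id -u) -v /path/on/host:/path/in/container your-image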

How to run the liquibase docker image in Jenkins containerized docker environment

I'm doing a POC with the Liquibase Docker image.
I would like to run the Liquibase image in Docker via a Jenkins Kubernetes pod template, but unfortunately I have been unable to make it work.
I have attached the Jenkinsfile and my observations below.
Jenkinsfile
def workspace_dir = "/home/jenkins/agent/workspace/${env.JOB_BASE_NAME}"
def project_name = "master-chart"
def isDeployerJob = (env.JOB_BASE_NAME).contains("deploy") ? "true" : "false"

// These variables come from the build parameters in the Jenkins job
def git_branch = git_branch
def release_version

if (isDeployerJob == "true") {
    // Extracting the release version from the branch
    def temp = git_branch.split("/")
    release_version = temp[temp.length - 1]
    switch (environment) {
        case "dev":
            hs_jdbc_url = "jdbc:postgresql://40.xx.xx.xx:5432/dbname"
            db_username = "username"
            db_password = "pwd"
            break
        default:
            break
    }
}

pipeline {
    agent {
        kubernetes {
            cloud 'eks-tools-13'
            yaml """
apiVersion: v1
kind: Pod
spec:
  containers:
  - name: azcli-kubectl-helm
    image: internal.docker.cioxhealth.com/azcli-kubectl-helm
    command:
    - cat
    tty: true
  - name: docker
    image: docker
    command:
    - cat
    tty: true
    privileged: true
    volumeMounts:
    - name: dockersock
      mountPath: /var/run/docker.sock
  volumes:
  - name: dockersock
    hostPath:
      path: /var/run/docker.sock
"""
        }
    }
    stages {
        stage('Install Database Scripts') {
            when {
                expression {
                    "${isDeployerJob}" == "true"
                }
            }
            steps {
                container('docker') {
                    sh """
                    docker run --rm --network="host" -v ${workspace_dir}/db:/liquibase/changelog liquibase/liquibase --url=${hs_jdbc_url} --changeLogFile=db.changelog-master.yaml --driver=org.postgresql.Driver --username=${db_username} --password=${db_password} --logLevel=info update
                    """
                }
            }
        }
    }
}
To verify the files, I exec'd into the running containers:
Jenkins Master Node:
ls -ltr /home/jenkins/agent/workspace/master-chart-deploy/db
total 4
drwxr-xr-x 3 1000 1000 21 Nov 6 04:35 sql
drwxr-xr-x 3 1000 1000 21 Nov 6 04:35 rollback
drwxr-xr-x 4 1000 1000 35 Nov 6 04:35 migration
-rw-r--r-- 1 1000 1000 154 Nov 6 04:35 db-master-changelog.yaml
drwxr-xr-x 2 1000 1000 38 Nov 6 04:35 changelog
Docker container on master-chart-deploy-259-qxrn5-nqq7j-hhlb8
ls -ltr /home/jenkins/agent/workspace/master-chart-deploy/db
total 4
drwxr-xr-x 3 1000 1000 21 Nov 6 04:35 sql
drwxr-xr-x 3 1000 1000 21 Nov 6 04:35 rollback
drwxr-xr-x 4 1000 1000 35 Nov 6 04:35 migration
-rw-r--r-- 1 1000 1000 154 Nov 6 04:35 db-master-changelog.yaml
drwxr-xr-x 2 1000 1000 38 Nov 6 04:35 changelog
Liquibase Container
docker run --rm '--network=host' -v /home/jenkins/agent/workspace/master-chart-deploy/db:/liquibase/changelog liquibase/liquibase -- ls -ltr /liquibase/changelog
total 0
The files are not available in the running Liquibase container, and because of this the following error occurred:
Error:
Starting Liquibase at 14:50:38 (version 4.1.1 #10 built at 2020-10-12 19:24+0000)
[2020-11-05 14:50:38] INFO [liquibase.lockservice] Successfully acquired change log lock
[2020-11-05 14:50:38] INFO [liquibase.lockservice] Successfully released change log lock
Unexpected error running Liquibase: db-master-changelog.yaml does not exist
For more information, please use the --logLevel flag
[2020-11-05 14:50:38] SEVERE [liquibase.integration] Unexpected error running Liquibase: db-master-changelog.yaml does not exist
liquibase.exception.ChangeLogParseException: db-master-changelog.yaml does not exist
at liquibase.parser.core.yaml.YamlChangeLogParser.parse(YamlChangeLogParser.java:27)
at liquibase.Liquibase.getDatabaseChangeLog(Liquibase.java:337)
at liquibase.Liquibase.lambda$update$1(Liquibase.java:229)
at liquibase.Scope.lambda$child$0(Scope.java:160)
at liquibase.Scope.child(Scope.java:169)
at liquibase.Scope.child(Scope.java:159)
at liquibase.Scope.child(Scope.java:138)
at liquibase.Liquibase.runInScope(Liquibase.java:2277)
at liquibase.Liquibase.update(Liquibase.java:215)
at liquibase.Liquibase.update(Liquibase.java:201)
at liquibase.integration.commandline.Main.doMigration(Main.java:1760)
at liquibase.integration.commandline.Main$1.lambda$run$0(Main.java:361)
at liquibase.Scope.lambda$child$0(Scope.java:160)
What did I do wrong in this case, and why are the files not available in the running Liquibase container?
Is this a problem with file permissions caused by the Docker-in-Docker setup?
Is there any other way I can achieve this?
Thank you in advance for the help.
I think you are somehow mixing up the Docker configuration. From the documentation, it looks like Liquibase expects you to mount everything inside the /liquibase/changelog directory.
And in your command you are mapping your changelogs to /app/liquibase:
docker run --rm --network="host" -v ${workspace_dir}/db:/app/liquibase liquibase/liquibase --url=${hs_jdbc_url} --changeLogFile=db.changelog-master.yaml --classpath=/app/liquibase --driver=org.postgresql.Driver --username=${db_username} --password=${db_password} --logLevel=info update
so instead of that I'd use this:
docker run --rm --network="host" -v ${workspace_dir}/db:/liquibase/changelog liquibase/liquibase --url=${hs_jdbc_url} --changeLogFile=db.changelog-master.yaml --driver=org.postgresql.Driver --username=${db_username} --password=${db_password} --logLevel=info update
Note: I've removed --classpath=/app/liquibase. If you rely on it (for example, you've added an additional driver or something else), you should probably include it again, but read about it first. I think the documentation is pretty good.
You must specify the full path when you use docker run in a Jenkins pipeline:
--changeLogFile=/app/liquibase/db.changelog-master.yaml
Define this in the Jenkins pipeline:
environment {
    HOME = '.'
}
The changelog file is the main place Liquibase looks for its configuration. If we do not define a changelog file path in Spring Boot, it uses db/changelog/db.changelog-master.yaml as the default path for the YAML format. Since we are going with the XML format, we need to set spring.liquibase.change-log=classpath:/db/changelog/changelog-master.xml as the changelog file path in the application.properties file. You can set the logging level of the Liquibase logs via the logging.level.liquibase property. The other properties in the sketch below are for the H2 database configuration.
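A minimal application.properties along those lines might look like this (the H2 datasource values are illustrative assumptions):
spring.liquibase.change-log=classpath:/db/changelog/changelog-master.xml
logging.level.liquibase=INFO
# Illustrative H2 settings - adjust to your environment
spring.datasource.url=jdbc:h2:mem:testdb
spring.datasource.driver-class-name=org.h2.Driver
spring.datasource.username=sa
spring.datasource.password=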

Push local image to local IBM Private Cloud cluster?

On ubuntu I have installed a local IBM Private Cloud cluster using this guide:
https://github.com/IBM/deploy-ibm-cloud-private/blob/master/docs/deploy-vagrant.md
Next I would like to push some local docker images I have on my host to the IBM cluster. I have found this guide:
https://www.ibm.com/support/knowledgecenter/en/SSBS6K_1.2.0/manage_images/using_docker_cli.html
where bullet 2 says:
Obtain the configure-registry-cert.sh script from your system administrator. The script is located in the /<installation_directory>/misc/configure-registry-cert.sh directory. You must obtain the IBM® Cloud private registry certificate script to pull and push images to the private image registry.
I have SSH'ed to the master container with:
vagrant ssh
but I have not been able to find /<installation_directory>/misc/configure-registry-cert.sh
in either /home/vagrant or /opt
UPDATE:
I have found this guide:
https://www.ibm.com/support/knowledgecenter/en/SSBS6K_2.1.0/manage_images/using_docker_cli.html
which says that you need to copy cert from master node to client machine (my host) with:
scp /etc/docker/certs.d/<cluster_CA_domain>\:8500/ca.crt \
root@<client_node>:/etc/docker/certs.d/<cluster_CA_domain>\:8500/
I created a password for root and copied /etc/docker/certs.d/mycluster.icp:8500/ca.crt from the master node to my local docker installation in /etc/docker/certs.d/mycluster.icp:8500/ca.crt
But when I then try to login I get the below error:
$ docker login mycluster.icp:8500
Username: admin
Password:
Error response from daemon: Get https://mycluster.icp:8500/v2/: x509: certificate signed by unknown authority
where I specified admin as the password (I use admin/admin for logging in to the web interface), since I have not found info on which credentials to use for that login.
Based on:
https://www.ibm.com/developerworks/community/blogs/fe25b4ef-ea6a-4d86-a629-6f87ccf4649e/entry/Working_with_the_local_docker_registry_from_Spectrum_Conductor_for_Containers?lang=en
it says that I first need to create a namespace and then a user for that namespace. I can create a namespace, but I don't have an option to create a new user.
Any ideas on how to login to the docker registry?
And, as requested below, I can confirm that the ca.crt is indeed in the correct location on the master node:
$ vagrant ssh
Welcome to Ubuntu 16.04.5 LTS (GNU/Linux 4.4.0-131-generic x86_64)
* Documentation: https://help.ubuntu.com
* Management: https://landscape.canonical.com
* Support: https://ubuntu.com/advantage
0 packages can be updated.
0 updates are security updates.
Last login: Thu Jul 26 19:59:18 2018 from 192.168.27.100
vagrant@master:~$ sudo passwd
Enter new UNIX password:
Retype new UNIX password:
passwd: password updated successfully
vagrant@master:~$ su
Password:
root@master:/home/vagrant# ls -la /etc/docker/certs.d/mycluster.icp\:8500/
total 12
drwxr-xr-x 2 root root 4096 Jul 26 19:54 .
drwxr-xr-x 3 root root 4096 Jul 26 19:53 ..
-rw-r--r-- 1 root root 1850 Jul 26 19:54 ca.crt
root@master:/home/vagrant#
You can try updating your Docker configuration to put the <cluster_CA_domain>:8500 registry in the insecure-registry list:
/usr/bin/docker --insecure-registry docker-reg:5000 -d
You can update the docker service by adding --insecure-registry mycluster.icp:8500 to the Docker daemon options, then:
systemctl daemon-reload
systemctl restart docker
Then you can try docker login mycluster.icp:8500.
Remember to add mycluster.icp to your /etc/hosts.
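On current Docker versions, the same daemon option can instead go in /etc/docker/daemon.json before restarting the service (a sketch using the registry name from this setup):
{
  "insecure-registries": ["mycluster.icp:8500"]
}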

How to run Tomcat 8 and MySQL in single Docker container

I have a Tomcat 8 / MySQL application I want to run in a Docker container. I run Ubuntu 16.04 today in test and production, and I wanted to use the Ubuntu 16.04 "latest" image as the base FROM in my Dockerfile and add Tomcat 8 and MySQL from there.
I know I can get a Tomcat 8 image as my base from https://hub.docker.com/_/tomcat/ but I did not see an Ubuntu base OS among those, and I wanted to stay consistent with Ubuntu. Also, it seemed odd to add MySQL to a Tomcat container.
I worked through this issue and am posting my findings in case it helps others with similar issues.
Short answer: Running multiple services (tomcat / mysql) in a single container is not recommended. Yes, there is supervisord, etc., but this is discouraged. There is also baseimage-docker if you are committed to multiple services in one container.
The remainder of this answer shows how I got it working, if you really are determined...
The Tomcat 8 distro version on Ubuntu 16.04 is unfortunately only configured to run as a service (described in detail below). The issues with running a service in a Docker container are well documented in many posts across Stack Exchange (it is discouraged). I was able to get Tomcat 8 working as a service by appending "tail -f /var/log/tomcat8/catalina.out" to the "service tomcat8 start" command and starting the container with the "--cap-add SYS_PTRACE" option.
CMD service tomcat8 start && tail -f /var/log/tomcat8/catalina.out
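The container then needs the ptrace capability at run time, roughly like this (the image name and port mapping are placeholders):
docker run --cap-add SYS_PTRACE -p 8080:8080 my-tomcat8-image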
The recommended way to start tomcat8 is to use the commands in /usr/share/tomcat8/bin. However, the distro version's soft links are incorrect and the server fails to start.
Using the commands ./catalina.sh run or ./startup.sh both produce an error such as this:
SEVERE: Cannot find specified temporary folder at /usr/share/tomcat8/temp
WARNING: Unable to load server configuration from [/usr/share/tomcat8/conf/server.xml]
SEVERE: Cannot start server. Server instance is not configured.
The distro splits tomcat8 across /usr/share/tomcat8 and /var/lib/tomcat8, which separates the bin files (catalina.sh and startup.sh) from the conf and logs soft links in /var/lib/tomcat8. This makes these commands fail.
Files in /usr/share/tomcat8:
root@85d5fe47b66a:/usr/share/tomcat8# ls -la
total 32
drwxr-xr-x 4 root root 4096 Mar 9 22:18 .
drwxr-xr-x 117 root root 4096 Mar 9 23:29 ..
drwxr-xr-x 2 root root 4096 Mar 9 22:18 bin
-rw-r--r-- 1 root root 39 Mar 31 2017 defaults.md5sum
-rw-r--r-- 1 root root 1929 Apr 10 2017 defaults.template
drwxr-xr-x 2 root root 4096 Mar 9 22:18 lib
-rw-r--r-- 1 root root 53 Mar 31 2017 logrotate.md5sum
-rw-r--r-- 1 root root 118 Apr 10 2017 logrotate.template
Files in /var/lib/tomcat8:
root@85d5fe47b66a:/var/lib/tomcat8# ls -la
total 16
drwxr-xr-x 4 root root 4096 Mar 9 22:18 .
drwxr-xr-x 41 root root 4096 Mar 9 23:29 ..
lrwxrwxrwx 1 root root 12 Sep 28 14:43 conf -> /etc/tomcat8
drwxr-xr-x 2 tomcat8 tomcat8 4096 Sep 28 14:42 lib
lrwxrwxrwx 1 root root 17 Sep 28 14:43 logs -> ../../log/tomcat8
drwxrwxr-x 3 tomcat8 tomcat8 4096 Mar 9 22:18 webapps
lrwxrwxrwx 1 root root 19 Sep 28 14:43 work -> ../../cache/tomcat8
Running ./version.sh reveals that both CATALINA_BASE and CATALINA_HOME are set to /usr/share/tomcat8
Using CATALINA_BASE: /usr/share/tomcat8
Using CATALINA_HOME: /usr/share/tomcat8
Using CATALINA_TMPDIR: /usr/share/tomcat8/temp
Using JRE_HOME: /usr
Using CLASSPATH: /usr/share/tomcat8/bin/bootstrap.jar:/usr/share/tomcat8/bin/tomcat-juli.jar
Server version: Apache Tomcat/8.0.32 (Ubuntu)
Server built: Sep 27 2017 21:23:18 UTC
Server number: 8.0.32.0
OS Name: Linux
OS Version: 4.4.0-116-generic
Architecture: amd64
JVM Version: 1.8.0_161-b12
JVM Vendor: Oracle Corporation
Setting CATALINA_BASE explicitly to /var/lib/tomcat8 inside catalina.sh solved the problem of using ./catalina.sh run to start Tomcat (see the sketch below). In the past, I have alternatively added the soft links to conf, logs and work under the /usr/share/tomcat8 directory so it could find those files and start up properly with the catalina.sh run command.
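A sketch of the same fix without editing the script, by exporting the variable before launch (assuming the distro layout shown above):
export CATALINA_BASE=/var/lib/tomcat8
/usr/share/tomcat8/bin/catalina.sh run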
BTW, even though JRE_HOME is clearly wrong in the version.sh dump above, the service does start correctly (when I append the tail -f command as described earlier). It also starts via catalina.sh run when I manually add the correct CATALINA_BASE variable to catalina.sh. So I spent no time looking into why that was listed incorrectly.
In the end, I realized three things:
Running multiple services (tomcat / mysql) in a single container is not recommended. Yes, there is supervisord, etc., but this is discouraged. There is also baseimage-docker if you are committed to multiple services in one container.
Even running a single service in a container is not recommended but there are documented ways to make it work (which I did for tomcat8 by adding the && tail -f ... to the end of the CMD).
In Ubuntu 16.04 (I did not test other distros), to make tomcat8 run as a command (not a service) you need to either:
a) grab the tar file for Tomcat 8 and install that, since it puts all of the files under one directory and therefore there is no soft-link issue (see the Dockerfile sketch below); or
b) if you insist on using the distro tomcat8 from apt-get, either b.1) modify a version of catalina.sh by adding CATALINA_BASE and copy it to the proper installation directory, or b.2) add the soft links.
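A minimal Dockerfile sketch of option a) (the Tomcat version, archive URL, and JDK package are illustrative assumptions; MySQL is deliberately left out, per the recommendation above):
FROM ubuntu:16.04

# Install a JDK and curl (package names are assumptions for Ubuntu 16.04)
RUN apt-get update && apt-get install -y openjdk-8-jdk curl && rm -rf /var/lib/apt/lists/*

# Install Tomcat from the tarball so everything lives under one directory (no soft links)
ENV TOMCAT_VERSION=8.5.99
RUN curl -fsSL "https://archive.apache.org/dist/tomcat/tomcat-8/v${TOMCAT_VERSION}/bin/apache-tomcat-${TOMCAT_VERSION}.tar.gz" \
    | tar -xz -C /opt && mv /opt/apache-tomcat-${TOMCAT_VERSION} /opt/tomcat

EXPOSE 8080
# Run in the foreground as the container's single process
CMD ["/opt/tomcat/bin/catalina.sh", "run"]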
