Jenkins hangs after "chroot . sh" during ssh - jenkins

I'm doing a Jenkins freestyle build, ssh'ing into a VM and running some existing scripts. In the build step "Execute shell script on remote host using ssh", everything works fine until I get to the command "chroot . sh". This is mounting a rootfs which we build in; if I do this step manually it brings me to an sh prompt where I can run another script to do the actual build, but Jenkins just hangs forever at this point.
From looking around, it seems this is because the command never returns a completion signal, so Jenkins waits indefinitely. I've also tried doing the same steps in PuTTY, using a text file containing the commands I need. The PuTTY "script" also fails at this point, stopping any input because of the new sh prompt.
Is there any way around this? I've tried various solutions, like:
nohup chroot . sh 1>&2 - of course this doesn't work
and running the command in the background doesn't put me into the chroot environment I need.
Kind of confused at this point.
Edit:
Code snippet:
cd /home/dev/root_env
chroot . sh
cd /home/dev
./build.sh
That's literally all I'm doing; however, it freezes forever at line 2 of that.
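(Not part of the original post; just a sketch of the usual workaround.) chroot can run a command directly instead of dropping into an interactive shell, so the interactive steps can be collapsed into a single non-interactive command that returns once the build finishes:
cd /home/dev/root_env
chroot . /bin/sh -c 'cd /home/dev && ./build.sh'
That way there is no inner sh prompt for Jenkins (or the PuTTY script) to get stuck at.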

Related

Is there a way to automatically backup your database when issuing docker down command

I tried searching the Docker documentation; however, I cannot find anything that directly relates to a backup on the down command. Additionally, I see you can add your own command or script in the yml on up, so I was hoping there might be something similar for down?
You need to make your own entrypoint script that creates an exit hook. You can see more details on the steps for building a custom image with a custom entrypoint in this SO answer.
In your case, the entrypoint will look like this:
#!/bin/bash
set -e

execute_on_finish() {
    echo "Execute on finish"
}

# run the hook whenever this entrypoint exits (e.g. on docker stop / docker-compose down)
trap execute_on_finish EXIT

echo "CALLING ENTRYPOINT WITH CMD: $@"
# start the original entrypoint in the background and wait for it,
# so signals reach this script and the EXIT trap can fire
exec /old_entrypoint.sh "$@" &
daemon_pid=$!
wait $daemon_pid
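In the context of the question, execute_on_finish is where the actual backup would go. As a rough sketch only (it assumes a Postgres database; the environment variables and the /backups path are my own, not from the thread):
execute_on_finish() {
    echo "Backing up database before shutdown..."
    pg_dump -U "$POSTGRES_USER" "$POSTGRES_DB" > /backups/backup-$(date +%F-%H%M%S).sql
}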
Note
Since the backup is a long-running operation, and Docker will kill the container if the process doesn't shut down within 10s, you will need to pass the -t option to docker stop to extend that grace period so the container isn't killed before the hook finishes. See more details here
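For example (the timeout value here is arbitrary, not from the original answer):
docker stop -t 120 my_container
or, when using Compose:
docker-compose down -t 120
This gives the exit hook up to 120 seconds to finish the backup before the container is killed.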

Unable to start pm2 via jenkins pipeline

On a Windows machine, I have set up a very simple pipeline in Jenkins that does the following:
clone a git repository,
install the packages,
run the app via a "pm2 start" command
Below is the entire pipeline script:
node {
stage('dev'){
git credentialsId: 'my-credentials', url: 'git@myurl.git'
bat 'npm install'
bat 'pm2 start src\\index.js --name myapp'
}
}
Everything works fine except running the pm2 command. The error output says:
'pm2' is not recognized as an internal or external command,
operable program or batch file.
However, I can easily run the exact same PM2 command via CMD. I have also tried putting that last command into a .bat file and asking Jenkins to execute it, and I get the same error.
Jenkins couldn't access the PM2 that was installed globally on the Windows machine. That is because Jenkins was running as the system (root) user, while pm2 had been installed under the local user. I had to include PM2 in the project's package.json and then call it from the node_modules folder:
\node_modules\.bin\pm2 start src\\index.js --name myapp
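In the pipeline that would look something like this (a sketch; it assumes pm2 has been added to the project's package.json, e.g. via npm install pm2 --save, so that npm install places it under node_modules):
bat 'npm install'
bat '.\\node_modules\\.bin\\pm2 start src\\index.js --name myapp'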

Jenkins run Shellscript via SSH is not leaving console

I'm using Jenkins to deploy my Play application. For this I've added SSH support to Jenkins; I connect via ssh to the test server and then run a shell script over ssh.
That's working fine.
What is not working is finishing the job in Jenkins.
The command in the shell script is the following:
/usr/src/activator-dist-1.3.10/bin/activator "~ run" &
which should only run the activator, then build and run the project.
But then, when the application is built and the activator is running, the Jenkins job doesn't finish ... it always hangs in the console.
When you run a script via ssh it will stay open until stdout/stderr are closed or a timeout occurs. In Jenkins it seems as if the script hangs.
So if you run a script as a background job, make sure to redirect all of its output somewhere:
nohup yourCommand < /dev/null > /dev/null 2>&1 &
or
nohup yourCommand < /dev/null >> logfile.log 2>&1 &
See SSH Frequently Asked Questions for more details.
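Applied to the activator command from the question, that would look something like this (a sketch; the log file name is arbitrary):
nohup /usr/src/activator-dist-1.3.10/bin/activator "~ run" < /dev/null >> activator.log 2>&1 &
With stdin and stdout/stderr detached from the ssh session, the remote command can return and the Jenkins job can finish while the activator keeps running in the background.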

RUN command in Dockerfile not persisting from container to container

I am running yo in a Docker container, and in my Dockerfile I have the command RUN echo no | yo doctor. When yo runs for the first time, it asks for an answer to:
====================================================================
We're constantly looking for ways to make yo better!
May we anonymously report usage statistics to improve the tool over time?
More info: https://github.com/yeoman/insight & http://yeoman.io
==================================================================== (Y/n)
Every time I create a new container, yo asks me the same question again.
Since each container is created from the same image and I am running echo no | yo doctor in my Dockerfile, shouldn't that prevent yo from asking the question again?
Whenever I see a RUN using a pipe, I try running that command in an explicit subshell (sh -c):
RUN sh -c 'echo no | yo doctor'
If it does not work, another workaround would be to include that command in a script, COPY the script and RUN it.
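That second workaround would look something like this (a sketch; the script name and path are my own, not from the answer):
#!/bin/sh
# setup-yo.sh - answer yo's usage-statistics prompt non-interactively
echo no | yo doctor
and in the Dockerfile:
COPY setup-yo.sh /tmp/setup-yo.sh
RUN sh /tmp/setup-yo.sh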

Run command in Docker Container only on the first start

I have a Docker image which uses a script (/bin/bash /init.sh) as its entrypoint. I would like to execute this script only on the first start of a container. It should be skipped when the container is restarted, or started again after a crash of the Docker daemon.
Is there any way to do this with Docker itself, or do I have to implement some kind of check in the script?
I had the same issue; here is a simple procedure (i.e. workaround) to solve it:
Step 1:
Create a "myStartupScript.sh" script that contains this code:
#!/bin/bash
# path of the marker file (replace the placeholder with a real path)
CONTAINER_ALREADY_STARTED="CONTAINER_ALREADY_STARTED_PLACEHOLDER"
if [ ! -e "$CONTAINER_ALREADY_STARTED" ]; then
    touch "$CONTAINER_ALREADY_STARTED"
    echo "-- First container startup --"
    # YOUR_JUST_ONCE_LOGIC_HERE
else
    echo "-- Not first container startup --"
fi
Step 2:
Replace the line "# YOUR_JUST_ONCE_LOGIC_HERE" with the code you want to be executed only the first time the container is started
Step 3:
Set the script as the entrypoint in your Dockerfile:
ENTRYPOINT ["/myStartupScript.sh"]
In summary, the logic is quite simple: the script checks whether a specific file is present in the filesystem; if not, it creates it and executes your just-once code. The next time you start your container the file is already there, so the code is not executed.
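A couple of details the steps above leave implicit (these are my assumptions, not part of the original answer): the script has to be copied into the image and made executable, and if the container also has a normal main process, the script should end by exec'ing whatever was passed to it:
COPY myStartupScript.sh /myStartupScript.sh
RUN chmod +x /myStartupScript.sh
ENTRYPOINT ["/myStartupScript.sh"]
CMD ["your-main-command"]
with exec "$@" appended as the last line of myStartupScript.sh, so the CMD (or any command given to docker run) still starts after the first-start check.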
The entry point for a Docker container tells the Docker daemon what to run when you want to "run" that specific container. So let's ask: what should the container run when it's started a second time? What should it run after being rebooted?
Probably, what you are doing follows the same approach as "old-school" provisioning mechanisms: your script "installs" the needed scripts, and you then run your app as a systemd/upstart service, right? If so, you should change that into a more "dockerized" definition.
The entry point for that container should be a script that actually launches your app instead of setting things up. Let's say you need Java installed to be able to run your app. So in the Dockerfile you set up the base container to install everything you need, like:
FROM alpine:edge
RUN apk --update upgrade && apk add openjdk8-jre-base
RUN mkdir -p /opt/your_app/ && adduser -HD userapp
ADD target/your_app.jar /opt/your_app/your-app.jar
ADD scripts/init.sh /opt/your_app/init.sh
USER userapp
EXPOSE 8081
CMD ["/bin/bash", "/opt/your_app/init.sh"]
At the company I work for, our containers' init.sh scripts fetch the configs from Consul before running the actual app (instead of providing a mount point and placing the configs on the host, or embedding them into the container). So the script looks something like this:
#!/bin/bash
echo "Downloading config from consul..."
confd -onetime -backend consul -node $CONSUL_URL -prefix /cfgs/$CONSUL_APP/$CONSUL_ENV_NAME
echo "Launching your-app..."
java -jar /opt/your_app/your-app.jar
One piece of advice I can give you (from my admittedly short experience working with containers) is to treat your containers as stateless once they are provisioned, i.e. after all the commands you run before the entry point.
I had to do this, and I ended up doing a docker run -d, which just created a detached container and started bash in the background, followed by a docker exec that did the necessary initialization. Here's an example:
docker run -itd --name=myContainer myImage /bin/bash
docker exec -it myContainer /bin/bash -c /init.sh
Now when I restart my container I can just do
docker start myContainer
docker attach myContainer
This may not be ideal, but it works fine for me.
I wanted to do the same in a Windows container. It can be achieved using Task Scheduler on Windows (the Linux equivalent of Task Scheduler is cron, which you could use in your case). To do this, edit the Dockerfile and add the following lines at the end:
WORKDIR /app
COPY myTask.ps1 .
RUN schtasks /Create /TN myTask /SC ONSTART /TR "c:\WINDOWS\system32\WindowsPowerShell\v1.0\powershell.exe C:\app\myTask.ps1" /ru SYSTEM
This creates a task named myTask that runs ONSTART; the task itself executes a PowerShell script placed at "C:\app\myTask.ps1".
This myTask.ps1 script does whatever initialization you need on container startup. Make sure you delete the task once it has executed successfully, or else it will run at every startup. To delete it, you can use the following command at the end of the myTask.ps1 script:
schtasks /Delete /TN myTask /F
