Problems deploying with Capistrano and Perforce after changing workspace - ruby-on-rails

We have been using Capistrano to roll out changes for some time (it was set up by a previous coder). As IT decided to remove his workspace from Perforce, I've created a new one in my name and figured it would just work, but the deploy seems to be rolling back on [deploy:update_code].
I've checked the usual username / password errors; with either of those incorrect the error is
Perforce password (P4PASSWD) invalid or unset.
or
p4client is incorrect or unset
so I'm quite confident it isn't that.
Any ideas? I'll paste the error message below, omitting any private details. Thanks a lot.
failed: "sh -c 'p4 -p p4SERVER:1666 -u P4Username -P p4Password -c P4Workspace sync -f #298781 && cp -rf p4 -p p4SERVER:1666 -u P4Username -P p4Password -c P4Workspace client -o | grep ^Root | cut -f2 /home/ubuntu/clan/releases/20110905145323 && (echo 298781 > /home/ubuntu/clan/releases/20110905145323/REVISION)'" on ipaddressofserver

I managed to resolve this problem. The issue was that when the workspace was created, the "Host" field was specified as my local machine; removing this optional param allowed it to deploy correctly.
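For anyone hitting the same thing, this is roughly where the change lives in the client spec (the field names are standard Perforce ones; the workspace name, root and view below are made up for illustration):
# open the workspace spec for editing
p4 -p p4SERVER:1666 -u P4Username client P4Workspace
# in the spec, delete (or leave blank) the optional Host: line, since the
# deploy server is not the machine the workspace was created on:
#   Client: P4Workspace
#   Owner:  P4Username
#   Host:   my-local-machine      <-- remove this line
#   Root:   /home/ubuntu/clan
#   View:   //depot/... //P4Workspace/...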

Related

Forked docker image not building

I am trying to fork this docker image so that if anything changes on the original it won't affect me.
I have forked the repo corresponding to that image to my own repo.
I have cloned the repo and am trying to build it:
docker build . -t davcal/gcc-cross-x86_64-elf
I am getting this error:
+ cd /usr/local/src
+ ./build-binutils.sh 2.31.1
/bin/sh: 1: ./build-binutils.sh: not found
The command '/bin/sh -c set -x && cd /usr/local/src && ./build-binutils.sh ${BINUTILS_VERSION} && ./build-gcc.sh ${GCC_VERSION}' returned a non-zero code: 127
What makes no sense to me is that if I use the original image, it builds successfully:
FROM randomdude/gcc-cross-x86_64-elf
...
Maybe Docker Hub stores a pre-built image?
How do I fix this?
Note: I am using Windows. This shouldn't make a difference since the error originates within the container.
Edit
I tried patching the Dockerfile to chmod the sh files executable, in case that was causing problems on Windows. Unfortunately, the exact same error occurs.
RUN set -x \
&& chmod +x /usr/local/src/build-binutils.sh \
&& chmod +x /usr/local/src/build-gcc.sh \
&& cd /usr/local/src \
&& ./build-binutils.sh ${BINUTILS_VERSION} \
&& ./build-gcc.sh ${GCC_VERSION}
Edit 2
Following this method, I inspected the container to see if the sh files actually exist. Here is the output.
I ran docker run --rm -it c53693f11514 bash, including the hash of the intermediate container of the previous successful step of the Dockerfile.
This is the output showing that the files do exist:
root@9b8a64ac2090:/# cd usr/local/src
root@9b8a64ac2090:/usr/local/src# ls
binutils-2.31.1 build-binutils.sh build-gcc.sh gcc-8.2.0
From the described symptoms (the file exists, is a shell script, and works on other machines), the "file not found" error is most likely from Windows line endings being added to the file. When the Linux kernel processes a shell script, it looks at the first line, the #!/bin/sh or similar, and then finds that interpreter to run the shell script. If that interpreter isn't found, you'll get a "file not found" error.
In this case, the file it's looking for won't be /bin/sh, but instead /bin/sh\r or /bin/sh^M, depending on how you want to represent the carriage return character. You can fix that for single files with a tool like dos2unix, but in general you'll want to fix git itself, since there are likely other files that have had their line endings corrupted. For details on adjusting the behavior of git, see this post.
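As a quick sketch of both fixes (the file names are the ones from the question; dos2unix is assumed to be installed on the machine doing the cleanup):
# fix individual files in place
dos2unix build-binutils.sh build-gcc.sh
# or stop git from converting LF to CRLF on checkout, then re-clone the repo
git config --global core.autocrlf input
# a per-repo alternative is a .gitattributes entry pinning scripts to LF:
#   *.sh text eol=lf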

How to fetch a war file from JFrog Artifactory inside a Dockerfile? Getting HTTP 401 error

I have created a declarative jenkins pipeline and one of it's stages is as follows:
stage('Docker Image'){
steps{
bat 'docker build -t HMT/demo-application:%BUILD_NUMBER% --no-cache -f Dockerfile .'
}
}
This is the docker file:
FROM tomcat:alpine
RUN wget -O /usr/local/tomcat/webapps/launchstation04.war http://localhost:8082/artifactory/demoArtifactory/com/demo/0.0.1-SNAPSHOT/demo-0.0.1-SNAPSHOT.war
EXPOSE 9100
CMD /usr/local/tomcat/bin/cataline.bat run
I am getting the error below:
/bin/sh:
01:33:28 The command '/bin/sh -c wget -O /usr/local/tomcat/webapps/launchstation04.war http://localhost:8082/artifactory/demoArtifactory/com/demo/0.0.1-SNAPSHOT/demo-0.0.1-SNAPSHOT.war' returned a non-zero code: 127
UPDATE:
I have updated the command to
RUN wget -O /usr/local/tomcat/webapps/launchstation04.war -U jenkinsuser:Learning#% http://localhost:8082/artifactory/demoArtifactory/com/demo/0.0.1-SNAPSHOT/demo-0.0.1-20200823.053346-18.war
There is no problem with my command; JFrog Artifactory was unable to authorize this action. So I added username and password details, but it still didn't work.
Error:
wget: server returned error: HTTP/1.1 401 Unauthorized
It didn't work after modifying the password policy to unsupported, but it worked when I allowed anonymous access.
How do I provide access using credentials?
Need more clarification on your question. Not sure where you are using the curl command.
The tomcat:alpine image doesn't contain the curl command unless you install it manually.
bash-4.4# type curl
bash: type: curl: not found
bash-4.4#
If your question is about the sh -c option: yes, if the script is invoked through CMD it will use sh. You can give ENTRYPOINT a try instead.
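If you do want the curl route, it can be installed into the alpine-based image first; a one-line sketch (apk is Alpine's package manager):
RUN apk add --no-cache curl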
You can provide username & password via command line:
wget --user user --password pass
Using curl :
curl -u username:password -O
But avoid using special characters:
Change your password to another once in: [a-z][A-Z][0-9]
Try an API key instead of the password; I have a feeling that the "#" may be throwing you off. Quotes can help there too, or separating the password with -p.
Also look at the request logs to see whether the entry comes in as 401 for the user, or as anonymous/unauthenticated.
Lastly, see if you can cURL from outside the image and then ADD the file in, as that will remove any external factors that may vary from the host (where I assume the command works).
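A rough sketch of that last suggestion, reusing the URL from the question (the local file name demo.war and the APIKEY placeholder are illustrative, not from the original post):
# on the Jenkins agent, before docker build, so credentials never enter the image:
curl -u jenkinsuser:APIKEY -o demo.war \
  http://localhost:8082/artifactory/demoArtifactory/com/demo/0.0.1-SNAPSHOT/demo-0.0.1-SNAPSHOT.war
# then in the Dockerfile, copy instead of downloading:
#   FROM tomcat:alpine
#   COPY demo.war /usr/local/tomcat/webapps/launchstation04.war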

`PAM: Authentication failure` when running `chpasswd` on Alpine Linux

I am running Alpine Linux like this:
$ docker run --rm -it alpine sh
Then running the following commands:
/ # apk add shadow
/ # /usr/sbin/useradd -m -u 1000 jenkins
Creating mailbox file: No such file or directory
/ # echo "jenkins:mypassword" | chpasswd
Password: chpasswd: PAM: Authentication failure
According to this, the warning Creating mailbox file: No such file or directory can be safely ignored.
My problem is that chpasswd is failing with the vague error message seen in the last line. I tried the exact same commands on CentOS and Ubuntu and it worked there.
This turned out to be a bug in Alpine 3.6+. A new pull request is supposed to have fixed this as mentioned here: https://bugs.alpinelinux.org/issues/10209
Are you sure the root account is enabled?
This might be a consequence of this change: https://github.com/alpinelinux/aports/commit/72c7a7a3caf28c06289dc5f65e1756b38cfb00ca
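If upgrading past the affected Alpine versions is not an option, one possible workaround is to stay on the busybox applets that ship with the base image and skip the shadow/PAM path entirely; a sketch (it assumes busybox's adduser and chpasswd applets are available, as they normally are in the stock alpine image):
$ docker run --rm -it alpine sh
/ # adduser -D -u 1000 jenkins             # busybox adduser; -D = don't set a password yet
/ # echo "jenkins:mypassword" | chpasswd   # busybox chpasswd, which does not go through PAM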

Travis CI -- Docker run shell to extract values

I am currently using Travis-CI, I am trying to do something like the following but I get just an empty value for the variable.
ANSIBLE_VERSION=$(docker run -d <ID/TAG> /bin/bash -c "ansible --version"|head -1 |awk '{print $2}')
I have tested the command on my local machine and it works correctly, so I am not sure what the problem might be. Is it Travis related?
Many thanks
It seems this is due to the following:
https://github.com/travis-ci/travis-ci/issues/3149
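Separately from whatever the linked issue covers, note that with -d the container runs detached and docker run prints only the container ID, so the pipe captures the ID rather than ansible's output. A foreground sketch of the same command (keeping the <ID/TAG> placeholder from the question) would be:
# --rm cleans up afterwards; no -d, so ansible's stdout flows into the pipe
ANSIBLE_VERSION=$(docker run --rm <ID/TAG> /bin/bash -c "ansible --version" | head -1 | awk '{print $2}')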

How to deal with state "Exit 0" in Docker

I have built a Docker image and afterwards run a container using Docker Compose. The following command will do the job for me:
docker-compose up -d
I have restarted the PC and now I want to start the previous container that I've created before. So I have tried the following command:
$ docker-compose start
Starting php-apache ... done
Apparently it works, but it doesn't, as per the output of the following command:
$ docker-compose ps
Name Command State Ports
---------------------------------------------------------------------------
php55devwork_php-apache_1 /bin/sh -c bash -C '/usr/l ... Exit 0
For sure something is wrong and I am trying to find out what.
How do I find why the command is failing?
Is there any place where I could see a log file or something that help me to identify and fix the error?
Here is the repository if you want to give it a try.
Update
If I remove the container (docker rm <container-id>) and recreate it by running docker-compose up -d --build, it works again.
Update #1
I am not able to see such weird characters:
This is what helped me to resolve this issue:
Under one of your services in the docker-compose yaml file, type in the following:
tty: true, so it'll look like this:
version: '3'
services:
  web:
    tty: true
Hopefully this helps someone; thumbs up if it helps you :)
I took a look into your Docker GitHub repo and setup_php_settings.
On line 27 there is source /etc/apache2/envvars && exec /usr/sbin/apache2 -DFOREGROUND,
and that runs apache2 in the foreground, so it shouldn't exit with status code 0.
But it seems to me like your setup_php_settings contains some weird characters when I run your image with compose
(screenshot: the original is the one on the right side).
I have changed them to proper newlines and it worked for me. Let us know if it helped.
If you want to debug your docker container you can run it without entrypoint like:
docker run -it yourImage bash
-- AFTER some investigation:
There were still some errors when I restarted the docker container, like in your case: a stopped container started again after a reboot. The problems were that the symbolic links already existed and apache2 gets grumpy about a pre-existing PID file, so we need to do something like in the official php docker image.
This is full setup_php_settings worked for me after container restart.
#!/bin/bash -x
set -e
PHP_ERROR_REPORTING=${PHP_ERROR_REPORTING:-"E_ALL & ~E_DEPRECATED & ~E_NOTICE"}
sed -ri 's/^display_errors\s*=\s*Off/display_errors = On/g' /etc/php5/apache2/php.ini
sed -ri 's/^display_errors\s*=\s*Off/display_errors = On/g' /etc/php5/cli/php.ini
sed -ri "s/^error_reporting\s*=.*$//g" /etc/php5/apache2/php.ini
sed -ri "s/^error_reporting\s*=.*$//g" /etc/php5/cli/php.ini
echo "error_reporting = $PHP_ERROR_REPORTING" >> /etc/php5/apache2/php.ini
echo "error_reporting = $PHP_ERROR_REPORTING" >> /etc/php5/cli/php.ini
mkdir -p /data/tmp/php/uploads
mkdir -p /data/tmp/php/sessions
mkdir -p /data/tmp/php/xdebug
chown -R www-data:www-data /data/tmp/php*
ln -sf /etc/php5/mods-available/zz-php.ini /etc/php5/apache2/conf.d/zz-php.ini
ln -sf /etc/php5/mods-available/zz-php-directories.ini /etc/php5/apache2/conf.d/zz-php-directories.ini
# Add symbolic link to get Zend out of the current install dir
ln -sf /usr/share/php/libzend-framework-php/Zend/ /usr/share/php/Zend
a2enmod rewrite
php5enmod mcrypt
# Apache gets grumpy about PID files pre-existing
: "${APACHE_PID_FILE:=${APACHE_RUN_DIR:=/var/run/apache2}/apache2.pid}"
rm -f "$APACHE_PID_FILE"
source /etc/apache2/envvars && exec /usr/sbin/apache2 -DFOREGROUND "$@"
You can check logs with docker-compose logs.
Looking through your repo, you have
ENTRYPOINT bash -C '/usr/local/bin/setup_php_settings';'bash'
which means that, without an interactive session, bash will exit immediately (with an exit code 0) after reading the end of file on stdin.
Normally getting an exit 0 should be a reason to celebrate, as it indicates that your command has ended successfully (http://www.tldp.org/LDP/abs/html/exit-status.html).
Having had a look at your Dockerfile, it looks like you're just invoking bash in your entrypoint, which then for sure will exit (as it is non-blocking). In order to serve some data, you should rather be calling php (which is a blocking operation that keeps the container up), as done in the official Docker files for php (see the CMD ["php", "-a"] at https://github.com/docker-library/php/blob/1c56325a69718a3e3cf76179e75d070b7e23da62/5.6/Dockerfile)
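As a small sketch of that idea, assuming the setup script shown above is kept (it already ends in exec /usr/sbin/apache2 -DFOREGROUND, so the container stays up on a blocking foreground process instead of an exiting bash):
# exec-form entrypoint: run the setup script directly, with no extra bash wrapper
ENTRYPOINT ["/usr/local/bin/setup_php_settings"]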
