I'm trying to deploy my web app using the FTP protocol and GitLab's continuous integration. The files all get uploaded and the site works fine, but I keep getting the following error when the GitLab runner is almost done.
My gitlab-ci.yml file:
stages:
  - build
  - test
  - deploy

build:
  stage: build
  tags:
    - shell
  script:
    - echo "Building"

test:
  stage: test
  tags:
    - shell
  script: echo "Running tests"

frontend-deploy:
  stage: deploy
  tags:
    - debian
  allow_failure: true
  environment:
    name: devallei
    url: https://devallei.azurewebsites.net/
  only:
    - master
  script:
    - echo "Deploy to staging server"
    - apt-get update -qq
    - apt-get install -y -qq lftp
    - lftp -c "set ftp:ssl-allow yes; set ssl:verify-certificate false; debug; open -u devallei\FTPAccesHoussem,Devallei2019 ftps://waws-prod-dm1-131.ftp.azurewebsites.windows.net/site/wwwroot; mirror -Rev ./frontend/dist /site/wwwroot"

backend-deploy:
  stage: deploy
  tags:
    - shell
  allow_failure: true
  only:
    - master
  script:
    - echo "Deploy spring boot application"
backend-deploy:
stage: deploy
tags:
- shell
allow_failure: true
only:
- master
script:
- echo "Deploy spring boot application"
I expect the runner to go through and pass the job, but it gives me the following error.
---- Connecting data socket to (23.99.220.117) port 10033
---- Data connection established
---> ALLO 4329977
<--- 200 ALLO command successful.
---> STOR vendor.3b66c6ecdd8766cbd8b1.js.map
<--- 125 Data connection already open; Transfer starting.
---- Closing data socket
<--- 226 Transfer complete.
---> QUIT
gnutls_record_recv: The TLS connection was non-properly terminated. Assuming EOF.
<--- 221 Goodbye.
---- Closing control socket
ERROR: Job failed: exit code 1
I don't know the reason for the "gnutls_record_recv: The TLS connection was non-properly terminated. Assuming EOF." error, but it makes your lftp command return a non-zero exit code, which makes GitLab think your job failed. The best thing would be to fix the underlying issue.
If you are confident everything works fine and just want to keep the lftp command from failing the job, append || true to the lftp command. But be aware that your job then won't fail even if a real error happens.
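For example, only the lftp line in the frontend-deploy job needs to change (a minimal sketch; everything else stays as in the job above):

frontend-deploy:
  stage: deploy
  script:
    - echo "Deploy to staging server"
    - apt-get update -qq
    - apt-get install -y -qq lftp
    # '|| true' swallows the non-zero exit code caused by the unclean TLS shutdown
    - lftp -c "set ftp:ssl-allow yes; set ssl:verify-certificate false; debug; open -u devallei\FTPAccesHoussem,Devallei2019 ftps://waws-prod-dm1-131.ftp.azurewebsites.windows.net/site/wwwroot; mirror -Rev ./frontend/dist /site/wwwroot" || true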
Related
My pipeline is not working with the following error:
* Connected to sftp.domain.com (157.90.40.40) port 5544 (#0)
< SSH-2.0-sFTP Server ready.
* server response timeout
* Closing connection 0
curl: (28) server response timeout
Between the "Server ready" and the timeout message there are more messages that create something like a wave. My pipeline yaml:
image: php:7.4

pipelines:
  branches:
    master:
      - step:
          name: Deploy to production
          deployment: production
          script:
            - apt-get update
            - apt-get -qq install git-ftp
            - git ftp init -vv --user $LiveFTPUsername --passwd $LiveFTPPassword ftp://sftp.domain.com:5544
Part of my CircleCI config is to deploy to a remote server using scp. I added an SSH private key (https://circleci.com/docs/add-ssh-key), and it looks like this (the values are masked intentionally):
And here is a snapshot of my config:
deploy-web:
  working_directory: ~/subdir/web
  docker:
    - image: cimg/node:16.16
  steps:
    - add_ssh_keys:
        fingerprints:
          - "d7:*****fa"
    - checkout:
        path: ~/subdir
    - node/install-packages:
        pkg-manager: yarn
    - run:
        name: Build
        command: yarn build
    - run:
        name: Deploy
        command: |
          SSH_DEPLOY_PATH=/apps/my-app
          scp -r dist/* "$SSH_USER@$SSH_HOST:$SSH_DEPLOY_PATH"
Everything runs fine but the ssh part outputs:
The authenticity of host '************** (**************)' can't be established.
ECDSA key fingerprint is SHA256:6pix3P******M.
Are you sure you want to continue connecting (yes/no/[fingerprint])?
Please note that I copied the fingerprint that is in the config from the web UI (as in the screenshot). Now, is there anything I am doing wrong, or how do I go about this? So far, Google has not been helpful.
I managed to resolve this, and this is the hack (I can't believe I didn't think of this sooner): I added this step just before the scp step:
- run:
    name: Add SSH host to known
    command: ssh-keyscan -H $SSH_HOST >> ~/.ssh/known_hosts
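In context, the relevant steps of the deploy-web job then look like this (a sketch based on the config above; ssh-keyscan pre-populates ~/.ssh/known_hosts so scp no longer prompts for host-key confirmation):

    - run:
        name: Add SSH host to known
        command: ssh-keyscan -H $SSH_HOST >> ~/.ssh/known_hosts
    - run:
        name: Deploy
        command: |
          SSH_DEPLOY_PATH=/apps/my-app
          scp -r dist/* "$SSH_USER@$SSH_HOST:$SSH_DEPLOY_PATH"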
I have seen many similar issues to this but none seem to resolve or describe my exact issue.
I have configured an azure devops pipeline to use a container like below:
container:
  image: ptrthomas/karate-chrome
  options: --cap-add=SYS_ADMIN
I have uploaded the contents of the example from the jobserver demo to a repository and then ran the following:
steps:
  - script: mvn clean test -DargLine='-Dkarate.env=docker' -Dtest=WebRunner
It is my understanding (and I can see from the logs) that the files are loaded into the container and the script command is being executed inside the container. So that script command is the equivalent of docker exec -it -w /src karate mvn clean test -DargLine='-Dkarate.env=docker' -Dtest=WebRunner just without having to exec into the container.
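For comparison, the local equivalent would be roughly the following (a sketch; the -d flag, the container name, and the volume mount to /src are assumptions for illustration):

# start the container locally, mounting the project at /src (mount path assumed)
docker run -d --name karate --cap-add=SYS_ADMIN -v "$PWD":/src ptrthomas/karate-chrome
# then run the same tests inside it
docker exec -it -w /src karate mvn clean test -DargLine='-Dkarate.env=docker' -Dtest=WebRunner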
When I run the example locally it executes the tests with no issues, but in Azure DevOps it fails at the point the tests actually start running, throwing this error:
14:16:37.388 [main] ERROR com.intuit.karate - karate.org.apache.http.conn.HttpHostConnectException: Connect to localhost:9222 [localhost/127.0.0.1] failed: Connection refused (Connection refused), http call failed after 2 milliseconds for url: http://localhost:9222/json
14:16:39.388 [main] DEBUG com.intuit.karate.shell.Command - attempt #4 waiting for http to be ready at: http://localhost:9222/json
14:16:39.391 [main] DEBUG com.intuit.karate - request: 5 > GET http://localhost:9222/json
5 > Host: localhost:9222
5 > Connection: Keep-Alive
5 > User-Agent: Apache-HttpClient/4.5.13 (Java/1.8.0_275)
5 > Accept-Encoding: gzip,deflate
Looking at other issues there have been suggestions to specify the driver in the feature files with this line:
* configure driver = { type: 'chrome', executable: 'chrome' }
but a) that hasn't worked for me, and b) shouldn't the karate-chrome Docker image render this configuration unnecessary, since it should be no different from the container I run locally?
Any help appreciated!
Thanks
The only thing I can think of is that the Azure config does not call the ENTRYPOINT of the image.
Maybe you should try to create a container from scratch (that does extensive logging) and see what happens. Use the Karate one as a reference.
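One way to check that hypothesis is to probe Chrome's remote-debugging endpoint (the URL from the error log) in a step before the tests run; a minimal sketch, assuming curl is available inside the container:

steps:
  # hypothetical diagnostic step: if this fails, Chrome (normally started by the
  # image's ENTRYPOINT) is not listening, which matches the "Connection refused" error
  - script: curl -s http://localhost:9222/json || echo "Chrome debug port 9222 is not listening"
  - script: mvn clean test -DargLine='-Dkarate.env=docker' -Dtest=WebRunner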
I have a CircleCI job with the following structure.
jobs:
  test:
    steps:
      - checkout
      - run  # 1
        ... <<install dependencies>>
      - run  # 2
        ... <<execute server-side test>>
      - run  # 3
        ... <<execute frontend test 1>>
      - run  # 4
        ... <<execute frontend test 2>>
I want to execute step #1 first, and then steps #2-4 in parallel.
#1, #2, #3, and #4 take around ~4 min., ~1 min., ~1 min., and ~1 min., respectively.
I tried to split the steps into different jobs and use workspaces to pass the installed artifacts from #1 to #2-4. However, because of the large size of the artifacts, it took around ~2 min. to persist and attach the workspace, so the advantage of splitting jobs was cancelled out.
Is there a smart way to run #2-4 in parallel without significant overhead?
If you want to run the commands in parallel, you need to move these commands into separate jobs; otherwise, CircleCI will follow the structure of your steps and run each command only after the previous one has finished. Let me give you an example. I created a basic configuration with 4 jobs:
1. npm install
2. test1 (runs at the same time as test2, but only after npm install finishes)
3. test2 (runs at the same time as test1, but only after npm install finishes)
4. deploy (runs only after the 2 tests are done)
Basically, you need to split the commands between jobs and set the dependencies you want.
See my config file:
version: 2.1
jobs:
  install_deps:
    docker:
      - image: circleci/node:14
    steps:
      - checkout
      - setup_remote_docker:
          docker_layer_caching: true
      - run: echo "running npm install"
      - run: npm install
      - persist_to_workspace:
          root: .
          paths:
            - '*'
  test1:
    docker:
      - image: circleci/node:14
    steps:
      - checkout
      - setup_remote_docker:
          docker_layer_caching: true
      - attach_workspace:
          at: .
      - run: echo "running the first test and also will run the test2 in parallel"
      - run: npm test
  test2:
    docker:
      - image: circleci/node:14
    steps:
      - checkout
      - setup_remote_docker:
          docker_layer_caching: true
      - attach_workspace:
          at: .
      - run: echo "running the second test in parallel with the first test1"
      - run: npm test
  deploy:
    docker:
      - image: circleci/node:14
    steps:
      - checkout
      - setup_remote_docker:
          docker_layer_caching: true
      - attach_workspace:
          at: .
      - run: echo "running the deploy job only when the test1 and test2 are finished"
      - run: npm run build

# Orchestrate our job run sequence
workflows:
  test_and_deploy:
    jobs:
      - install_deps
      - test1:
          requires:
            - install_deps
      - test2:
          requires:
            - install_deps
      - deploy:
          requires:
            - test1
            - test2
Now see the logic above: install_deps runs with no dependency, but test1 and test2 will not run until install_deps has finished.
Also, deploy will not run until both tests are finished.
I've run this config. In the first image we can see the other jobs waiting for the first one to finish; in the second image both tests are running in parallel and the deploy job is waiting for them to finish. In the third image, the deploy job is running.
I'm trying to lint Dockerfiles using hadolint in GitLab CI with this snippet from my .gitlab-ci.yml file:
lint-dockerfile:
  image: hadolint/hadolint:latest-debian
  stage: verify
  script:
    - mkdir -p reports
    - hadolint -f gitlab_codeclimate Dockerfile > reports/hadolint-$(md5sum Dockerfile | cut -d" " -f1).json
  artifacts:
    name: "$CI_JOB_NAME artifacts from $CI_PROJECT_NAME on $CI_COMMIT_REF_SLUG"
    expire_in: 1 day
    when: always
    reports:
      codequality:
        - "reports/*"
    paths:
      - "reports/*"
This used to work perfectly fine but one week ago (without any change on my part) my pipeline started to crash all the time with ERROR: Job failed: exit code 1.
Full log output from job:
Running with gitlab-runner 14.0.0-rc1 (19d2d239)
on docker-auto-scale 72989761
feature flags: FF_SKIP_DOCKER_MACHINE_PROVISION_ON_CREATION_FAILURE:true
Resolving secrets 00:00
Preparing the "docker+machine" executor 00:14
Using Docker executor with image hadolint/hadolint:latest-debian ...
Pulling docker image hadolint/hadolint:latest-debian ...
Using docker image sha256:7caf5ee484b575ecd32219eb6f2a7a114180c41f4d8671c1f8e8d579b53d9f18 for hadolint/hadolint:latest-debian with digest hadolint/hadolint@sha256:2c06786c0d389715dae465c9556582ed6b1c38e1312b9a6926e7916dc4a9c89e ...
Preparing environment 00:01
Running on runner-72989761-project-26715289-concurrent-0 via runner-72989761-srm-1624273099-5f23871c...
Getting source from Git repository 00:02
$ eval "$CI_PRE_CLONE_SCRIPT"
Fetching changes with git depth set to 50...
Initialized empty Git repository in /builds/sommerfeld.sebastian/docker-vagrant/.git/
Created fresh repository.
Checking out f664890e as master...
Skipping Git submodules setup
Executing "step_script" stage of the job script 00:01
Using docker image sha256:7caf5ee484b575ecd32219eb6f2a7a114180c41f4d8671c1f8e8d579b53d9f18 for hadolint/hadolint:latest-debian with digest hadolint/hadolint@sha256:2c06786c0d389715dae465c9556582ed6b1c38e1312b9a6926e7916dc4a9c89e ...
$ mkdir -p reports
$ hadolint -f gitlab_codeclimate Dockerfile > reports/hadolint-$(md5sum Dockerfile | cut -d" " -f1).json
Uploading artifacts for failed job 00:03
Uploading artifacts...
reports/*: found 1 matching files and directories
Uploading artifacts as "archive" to coordinator... ok id=1363188460 responseStatus=201 Created token=vNM5xQ1Z
Uploading artifacts...
reports/*: found 1 matching files and directories
Uploading artifacts as "codequality" to coordinator... ok id=1363188460 responseStatus=201 Created token=vNM5xQ1Z
Cleaning up file based variables 00:01
ERROR: Job failed: exit code 1
I have no idea why my build breaks all of a sudden. I'm using image: docker:stable as the image for my whole .gitlab-ci.yml file.
Anyone got an idea?
To conclude this question: the issue was an unexpected change in behavior, probably caused by an update of the hadolint image used here.
The job was in fact failing because the linter decided it should. For anyone wanting the job to succeed anyway, here is a little trick:
hadolint -f gitlab_codeclimate Dockerfile > reports/hadolint-$(md5sum Dockerfile | cut -d" " -f1).json || true
The || true forces the command to exit with code 0 no matter what happens, so the job always succeeds.
Another option, as @Sebastian Sommerfeld pointed out, is to use allow_failure: true, which essentially allows the script to fail and marks that failure in the pipeline overview. The only drawback to this approach is that script execution is interrupted at the point of failure, so no further commands in the job are executed.
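Applied to the job above, that option would look roughly like this (a sketch; only the allow_failure line is added, everything else is unchanged):

lint-dockerfile:
  image: hadolint/hadolint:latest-debian
  stage: verify
  allow_failure: true   # a failing lint marks this job as failed but does not fail the whole pipeline
  script:
    - mkdir -p reports
    - hadolint -f gitlab_codeclimate Dockerfile > reports/hadolint-$(md5sum Dockerfile | cut -d" " -f1).json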