Part of my CircleCI config deploys to a remote server using scp. I added an SSH private key (https://circleci.com/docs/add-ssh-key), and it looks like this (values masked intentionally):
And here is a snapshot of my config:
deploy-web:
  working_directory: ~/subdir/web
  docker:
    - image: cimg/node:16.16
  steps:
    - add_ssh_keys:
        fingerprints:
          - "d7:*****fa"
    - checkout:
        path: ~/subdir
    - node/install-packages:
        pkg-manager: yarn
    - run:
        name: Build
        command: yarn build
    - run:
        name: Deploy
        command: |
          SSH_DEPLOY_PATH=/apps/my-app
          scp -r dist/* "$SSH_USER@$SSH_HOST:$SSH_DEPLOY_PATH"
Everything runs fine, but the SSH part outputs:
The authenticity of host '************** (**************)' can't be established.
ECDSA key fingerprint is SHA256:6pix3P******M.
Are you sure you want to continue connecting (yes/no/[fingerprint])?
Please note that I copied the fingerprint in the config from the web UI (shown in the screenshot). Is there anything I am doing wrong, and how do I go about fixing it? So far, Google has not been helpful.
I managed to resolve this, and this is the hack (I can't believe I didn't think of it sooner): I added this step just before the scp step:
- run:
    name: Add SSH host to known_hosts
    command: ssh-keyscan -H $SSH_HOST >> ~/.ssh/known_hosts
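A slightly hardened variant of that step (a sketch: it quotes the variable and creates ~/.ssh first, since some base images don't ship that directory):

- run:
    name: Add SSH host to known_hosts
    command: |
      mkdir -p ~/.ssh
      ssh-keyscan -H "$SSH_HOST" >> ~/.ssh/known_hosts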
I am new to using Bitbucket Pipelines. I have an issue with deploying my dist files to an FTP server: the error "mirror: Access failed: /opt/atlassian/pipelines/agent/build/dist/*: No such file or directory" occurs when I try to deploy the project.
This is my bitbucket-pipelines.yml file:
# Template NodeJS build
# This template allows you to validate your NodeJS code.
# The workflow allows running tests and code linting on the default branch.

image: node:16

pipelines:
  branches:
    master:
      - step:
          name: Install dependencies
          caches:
            - node
          script:
            - npm install
          artifacts:
            - node_modules/** # Save modules for next steps
      - step:
          name: Build project
          caches:
            - node
          script:
            - npm run build
          artifacts:
            - dist/** # Save build for next steps
      - step:
          name: Deploy to Production
          trigger: manual
          deployment: Production
          script:
            - pipe: atlassian/ftp-deploy:0.3.7
              variables:
                USER: $FTP_USERNAME
                PASSWORD: $FTP_PASSWORD
                SERVER: $FTP_HOST
                REMOTE_PATH: '/var/www/*******/booking.crt-minds.ru/'
                LOCAL_PATH: 'dist/*'
                EXTRA_ARGS: "--exclude=.bitbucket/ --exclude=.git/ --exclude=bitbucket-pipelines.yml --exclude=.gitignore" # Ignore these
I have tried deleting LOCAL_PATH from the yml to see what happens. But first of all, I do not understand whether my pipeline has access to the FTP server at all - how can I check that? Then I need to understand how to replace the dist folder files on the FTP server. Maybe my bitbucket-pipelines.yml file is configured incorrectly?
Judging from the pipe's documentation:
LOCAL_PATH: Optional path to local directory to upload. Default ${BITBUCKET_CLONE_DIR}.
I bet it is interpreting the value you passed not as a glob pattern but literally, as a folder named dist/*.
Try dropping that /*:
- step:
    script:
      - pipe: atlassian/ftp-deploy:0.3.7
        variables:
          USER: $FTP_USERNAME
          PASSWORD: $FTP_PASSWORD
          SERVER: $FTP_HOST
          REMOTE_PATH: /var/www/site
          LOCAL_PATH: dist
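As for checking whether the pipeline can reach the FTP server at all: a quick, hedged way is a throwaway step before the deploy that lists the remote root with curl (this assumes curl is available in the node:16 image; the variables are the same repository variables used above):

- step:
    name: Check FTP connectivity
    script:
      - curl -v --list-only --user "$FTP_USERNAME:$FTP_PASSWORD" "ftp://$FTP_HOST/"

If this step fails, the problem is credentials or network reachability rather than the pipe configuration.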
I'm facing the same problem as here: I have set up a private Docker registry with TLS certificates (generated via Certbot), and I can interact with it directly via curl etc. (thus proving that the certificates are correct), but the Docker plugin in my Drone flow gives the error x509: certificate signed by unknown authority.
As per this StackOverflow answer, I believe that putting the certificate at /etc/docker/certs.d/<my_registry_address:port>/ca.crt should fix this problem, but it doesn't appear to (nor does adding the certificate to the standard /etc/ssl/certs/ca-certificates.crt location).
Demonstration that the certificates work as-expected, having already built the Docker Drone Plugin locally as per https://github.com/drone-plugins/drone-docker:
$ docker run --rm -v <path_to_directory_containing_pems>:/custom-certs -it --entrypoint /bin/sh plugins/docker
/ # ls /custom-certs
accounts archive csr keys live renewal renewal-hooks
/ # apk add curl
...
OK: 28 MiB in 56 packages
/ # curl https://docker-registry.scubbo.org:8843/v2/_catalog
curl: (60) SSL certificate problem: unable to get local issuer certificate
More details here: https://curl.se/docs/sslcerts.html
curl failed to verify the legitimacy of the server and therefore could not
establish a secure connection to it. To learn more about this situation and
how to fix it, please visit the web page mentioned above.
/ # curl https://docker-registry.scubbo.org:8843/v2/_catalog --cacert /custom-certs/live/docker-registry.scubbo.org/fullchain.pem
{"repositories":[...]}
/ # cat /custom-certs/live/docker-registry.scubbo.org/fullchain.pem >> /etc/ssl/certs/ca-certificates.crt
/ # curl https://docker-registry.scubbo.org:8843/v2/_catalog
{"repositories":[...]}
Here's my .drone.yml, for a Runner instantiated with --env=DRONE_RUNNER_VOLUMES=/var/run/docker.sock:/var/run/docker.sock,<path_to_directory_containing_pems>:/custom-certs:
kind: pipeline
name: hello-world
type: docker

platform:
  os: linux
  arch: arm64

steps:
- name: copy-cert-into-place
  image: busybox
  volumes:
    - name: docker-cert-persistence
      path: /etc/docker/certs.d/
  commands:
    # https://stackoverflow.com/a/56410355/1040915
    # Note that we need to mount the whole `custom-certs` directory into the workflow and then copy the file to `/etc/...`,
    # rather than mounting the file directly into `/etc/...`, because the original file is a symlink and it's not possible (AFAIK)
    # to instruct Docker to "mount the eventual-target-of this symlink into <location>"
    - mkdir -p /etc/docker/certs.d/docker-registry.scubbo.org:8843
    - cp -L /custom-certs/live/docker-registry.scubbo.org/fullchain.pem /etc/docker/certs.d/docker-registry.scubbo.org:8843/ca.crt
- name: check-cert-persists-between-stages
  image: alpine
  volumes:
    - name: docker-cert-persistence
      path: /etc/docker/certs.d/
  commands:
    - apk add curl
    # The command below would fail if the cert was unavailable or invalid
    - curl https://docker-registry.scubbo.org:8843/v2/_catalog --cacert /etc/docker/certs.d/docker-registry.scubbo.org:8843/ca.crt
- name: build-image
  # ...contents irrelevant to this question...
- name: push-built-image
  image: plugins/docker
  volumes:
    - name: docker-cert-persistence
      path: /etc/docker/certs.d/
  settings:
    repo: docker-registry.scubbo.org:8843/scubbo/blog_nginx
    tags: built_in_ci
    debug: true
    launch_debug: true

volumes:
- name: docker-cert-persistence
  temp: {}
giving these logs from the push-built-image step, ending in:
+ /usr/local/bin/docker tag 472d41d9c03ee60fe9c1965ad9cfd36a1cdb6cbf docker-registry.scubbo.org:8843/scubbo/blog_nginx:built_in_ci
+ /usr/local/bin/docker push docker-registry.scubbo.org:8843/scubbo/blog_nginx:built_in_ci
The push refers to repository [docker-registry.scubbo.org:8843/scubbo/blog_nginx]
Get "https://docker-registry.scubbo.org:8843/v2/": x509: certificate signed by unknown authority
exit status 1
How should I go about providing the CA Certificate to my Drone Docker Plugin step to permit it to communicate over TLS with a secure Docker registry? This answer suggests simply reverting to insecure integration, which works but is unsatisfactory.
EDIT: After re-reading this documentation, I extended the copy-cert-into-place commands to copy all 3 certificate-related files:
commands:
  - mkdir -p /etc/docker/certs.d/docker-registry.scubbo.org:8843
  - cp -L /custom-certs/live/docker-registry.scubbo.org/fullchain.pem /etc/docker/certs.d/docker-registry.scubbo.org:8843/ca.crt
  - cp -L /custom-certs/live/docker-registry.scubbo.org/privkey.pem /etc/docker/certs.d/docker-registry.scubbo.org:8843/client.key
  - cp -L /custom-certs/live/docker-registry.scubbo.org/cert.pem /etc/docker/certs.d/docker-registry.scubbo.org:8843/client.cert
but that did not resolve the problem - same x509: certificate signed by unknown authority error.
EDIT2: I directly confirmed (directly on a host, outside the context of a plugin or docker container) that adding the certificate to the path used above is sufficient to permit interaction with the registry:
$ docker pull docker-registry.scubbo.org:8843/scubbo/blog_nginx:built_in_ci
Error response from daemon: Get "https://docker-registry.scubbo.org:8843/v2/": x509: certificate signed by unknown authority
$ sudo cp -L <path_to_directory_containing_pems>/live/docker-registry.scubbo.org/chain.pem /etc/docker/certs.d/docker-registry.scubbo.org\:8843/ca.crt
$ docker pull docker-registry.scubbo.org:8843/scubbo/blog_nginx:built_in_ci
built_in_ci: Pulling from scubbo/blog_nginx
Digest: sha256:3a17f86f23050303d94443f24318b49fb1a5e2d0cc9228270678c8aa55b4d2c2
Status: Image is up to date for docker-registry.scubbo.org:8843/scubbo/blog_nginx:built_in_ci
docker-registry.scubbo.org:8843/scubbo/blog_nginx:built_in_ci
This isn't a complete answer, but I was able to get secure registry access working by switching from mounting a directory to mounting the file directly:
I changed the docker run option to --env=DRONE_RUNNER_VOLUMES=/var/run/docker.sock:/var/run/docker.sock,$(readlink -f <path_to_directory_containing_pems>/live/docker-registry.scubbo.org/chain.pem):/registry_cert.crt
I changed the commands in copy-cert-into-place to:
- mkdir -p /etc/docker/certs.d/docker-registry.scubbo.org:8843
- cp /registry_cert.crt /etc/docker/certs.d/docker-registry.scubbo.org:8843/ca.crt
I don't consider this a complete answer (and would love further input or advice!), because:
I don't know why copying the file out of the mounted directory into /etc/docker/... (as in the original question) didn't work, but mounting the file directly from the host filesystem worked. (Note that the check-cert-persists-between-stages stage confirms that the certificate is correct, so it's not a mistake of copying a wrong or empty file)
I don't know how to mount the file directly into an in-stage path that contains a colon - this answer indicates how to mount a path containing a colon directly into a container, but in this case we're passing the path to DRONE_RUNNER_VOLUMES
I am kicking off a Dataflow flex template using Cloud Build. In my Cloud Build file I am attempting to do three things:
build an image
publish it
run a flex template job using that image
This is my YAML file:
substitutions:
  _IMAGE: my_logic:latest4
  _JOB_NAME: 'pipelinerunner'
  _TEMP_LOCATION: ''
  _REGION: us-central1
  _FMPKEY: ''
  _PYTHON_VERSION: '3.8'

# checkout this link https://github.com/davidcavazos/python-docs-samples/blob/master/dataflow/gpu-workers/cloudbuild.yaml
steps:
  - name: gcr.io/cloud-builders/docker
    args:
      [ 'build'
      , '--build-arg=python_version=$_PYTHON_VERSION'
      , '--tag=gcr.io/$PROJECT_ID/$_IMAGE'
      , '.'
      ]
  # Push the image to Container Registry.
  - name: gcr.io/cloud-builders/docker2
    args: [ 'push', 'gcr.io/$PROJECT_ID/$_IMAGE' ]
  - name: gcr.io/$PROJECT_ID/$_IMAGE
    entrypoint: python
    args:
      - /dataflow/template/main.py
      - --runner=DataflowRunner
      - --project=$PROJECT_ID
      - --region=$_REGION
      - --job_name=$_JOB_NAME
      - --temp_location=$_TEMP_LOCATION
      - --sdk_container_image=gcr.io/$PROJECT_ID/$_IMAGE
      - --disk_size_gb=50
      - --year=2018
      - --quarter=QTR1
      - --fmpkey=$_FMPKEY
      - --setup_file=/dataflow/template/setup.py

options:
  logging: CLOUD_LOGGING_ONLY

# Use the Compute Engine default service account to launch the job.
serviceAccount: projects/$PROJECT_ID/serviceAccounts/$PROJECT_NUMBER-compute@developer.gserviceaccount.com
And this is the command I am launching:
gcloud beta builds submit \
--config run.yaml \
--substitutions _REGION=$REGION \
--substitutions _FMPKEY=$FMPKEY \
--no-source
The error message I am getting is this:
Logs are available at [https://console.cloud.google.com/cloud-build/builds/0f5953cc-7802-4e53-b7c4-7e79c6f0d0c7?project=111111111].
ERROR: (gcloud.beta.builds.submit) build 0f5953cc-7802-4e53-b7c4-7e79c6f0d0c7 completed with status "FAILURE"
but I cannot access the logs from the URL mentioned above.
Since I cannot see the logs, I am unable to tell what is wrong, but I strongly suspect something in my run.yaml is not quite right.
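As an aside, when the console URL is not accessible, the logs can usually be pulled from the CLI instead. A hedged sketch (note that with logging: CLOUD_LOGGING_ONLY the logs live in Cloud Logging rather than the default GCS bucket, so the second form is the one more likely to work here):

# default log storage
gcloud builds log 0f5953cc-7802-4e53-b7c4-7e79c6f0d0c7

# with logging: CLOUD_LOGGING_ONLY, query Cloud Logging directly
gcloud logging read 'resource.type="build" AND resource.labels.build_id="0f5953cc-7802-4e53-b7c4-7e79c6f0d0c7"' --project=$PROJECT_ID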
Note: before this, I was building the image myself by launching this command:
gcloud builds submit --project=$PROJECT_ID --tag $TEMPLATE_IMAGE .
and my run.yaml contained just one step, the last one, and everything worked fine.
But I am trying to see if I can do everything in the YAML file.
Could anyone advise on what might be incorrect? I don't have much experience with YAML files for Cloud Build.
Thanks and regards,
Marco
I guess the pipeline does not work because the container used in the second step, gcr.io/cloud-builders/docker2, does not exist (check https://gcr.io/cloud-builders/ - there is a docker builder, but no docker2). This second step pushes the final container to the registry, and it is a dependency of the third step, so that fails too.
You can build the container and push it to the Container Registry in just one step, since any image listed in the top-level images field is pushed when the build completes:
steps:
  - name: 'gcr.io/cloud-builders/docker'
    args: ['build', '-t', 'gcr.io/$PROJECT_ID/$IMAGE_NAME', '<path_to_docker-file>']
images: ['gcr.io/$PROJECT_ID/$IMAGE_NAME']
OK, sorted: the problem was the way I was launching the build command.
This is the original:
gcloud beta builds submit \
--config run.yaml \
--substitutions _REGION=$REGION \
--substitutions _FMPKEY=$FMPKEY \
--no-source
Apparently, when I removed --no-source, everything worked fine. In hindsight that makes sense: the first step runs docker build with '.' as its build context, so the source (including the Dockerfile) has to be uploaded along with the build. I think I copied and pasted the command without really understanding it.
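A hedged sketch of the corrected invocation; it assumes the command is run from the directory containing the Dockerfile, and folds the two substitutions into a single comma-separated flag:

gcloud beta builds submit . \
  --config run.yaml \
  --substitutions _REGION=$REGION,_FMPKEY=$FMPKEY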
regards
I use an Ansible script to load and start the https://hub.docker.com/r/rastasheep/ubuntu-sshd/ container.
It starts fine, of course:
bash-4.4$ docker ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
8bedbd3b7d88 rastasheep/ubuntu-sshd "/usr/sbin/sshd -D" 37 minutes ago Up 36 minutes 0.0.0.0:49154->22/tcp test
bash-4.4$
After the Ansible failure on SSH access to it, I tested manually from the shell.
This is also OK:
bash-4.4$ ssh root@172.17.0.2
The authenticity of host '172.17.0.2 (172.17.0.2)' can't be established.
ECDSA key fingerprint is SHA256:YtTfuoRRR5qStSVA5UuznGamA/dvf+djbIT6Y48IYD0.
ECDSA key fingerprint is MD5:43:3f:41:e9:89:45:06:6f:f6:42:c4:6a:70:37:f8:1d.
Are you sure you want to continue connecting (yes/no)? yes
Warning: Permanently added '172.17.0.2' (ECDSA) to the list of known hosts.
root@172.17.0.2's password:
root@8bedbd3b7d88:~# logout
Connection to 172.17.0.2 closed.
bash-4.4$
So the step that fails is reaching the container from the Ansible script to set up ssh-copy-id access.
The Ansible error message is:
Fatal: [172.17.0.2]: UNREACHABLE! => {"changed": false, "msg": "Failed to connect to the host via ssh: Warning: Permanently added '172.17.0.2' (ECDSA) to the list of known hosts.\r\nPermission denied (publickey,password).\r\n", "unreachable": true}
---
- hosts: 127.0.0.1
  tasks:
    - name: start docker service
      service:
        name: docker
        state: started
    - name: load and start the container we wanna use
      docker_container:
        name: test
        image: rastasheep/ubuntu-sshd
        state: started
        ports:
          - "49154:22"
    - name: Wait maximum of 300 seconds for ports to be available
      wait_for:
        host: 0.0.0.0
        port: 49154
        state: started

- hosts: 172.17.0.2
  vars:
    passwordadmin: $6$pbE6yznA$AeFIdI.....K0
    passwordroot: $6$TMrxQUxT$I8.JIzR.....TV1
    ansible_ssh_extra_args: "-o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null"
  tasks:
    - name: Build test container root user rsa ssh-key
      shell: docker exec test ssh-keygen -b 2048 -t rsa -f /root/.ssh/id_rsa -q -N ""
So I cannot even run the step needed to set up SSH. How do I proceed?
1st step (Ansible task): load the Docker container.
2nd step (Ansible task on 172.17.0.2 only): connect to it and set it up.
There will be a 3rd step to run the application on it after that.
The problem occurs only when starting the 2nd step.
OK, after many tries on a second container, the conclusion is that my procedure was bad.
What I did to solve it:
- built a directory tree separating ./, ./inventory, and ./includes
- built one YAML file per host (local, docker, labo)
- built one main YAML file in ./
- built a new hosts file in ./inventory
- forced a connection to the container with sshpass using the default password (see the hedged sketch after this list)
- changed that password
- added the host key to authorized_keys for a dedicated login
- installed Python in the container (needed for Ansible to talk to the host; otherwise it produces seemingly random module errors or refused connections depending on the current action)
- set up the SSH login user in sudoers
Then I can run the docker.yaml actions, and only after that the labo.yaml actions.
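For the sshpass bootstrap in particular, a minimal sketch, assuming the image's documented default credentials (root/root) and sshpass installed on the control host:

# push the control host's public key into the container over password auth
sshpass -p root ssh-copy-id -o StrictHostKeyChecking=no root@172.17.0.2

# alternatively, let Ansible itself log in with the password (inventory vars,
# also requires sshpass on the control host):
#   ansible_user: root
#   ansible_ssh_pass: root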
Thanks for the help.
Now I'm able to build the missing tools.
I'm trying to set up a CI server inside a corporate network with Drone (open source edition). Its author describes Drone as a very simple solution even for a programmer (as I am), though some points are not clear to me (maybe the official documentation misses them).
First, I've made a Docker image for my Rails application: rails-qna.
Next, composing the Drone images:
docker-compose.yml:
version: '2'
services:
  drone-server:
    image: drone/drone:0.5
    ports:
      - 80:8000
    volumes:
      - ./drone:/var/lib/drone/
    restart: always
    environment:
      - DRONE_OPEN=true
      - DRONE_ADMIN=khataev
      - DRONE_GITHUB_CLIENT=github-client-string
      - DRONE_GITHUB_SECRET=github-secret-string
      - DRONE_SECRET=drone-secret-string

  drone-agent:
    image: drone/drone:0.5
    command: agent
    restart: always
    depends_on: [ drone-server ]
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock
    environment:
      - DRONE_SERVER=ws://drone-server:8000/ws/broker
      - DRONE_SECRET=drone-secret-string
The application is registered on GitHub, and the secret/client strings are provided.
I placed a .drone.yml file in my project repository:
pipeline:
  build:
    image: rails-qna
    commands:
      - bundle exec rake db:drop
      - bundle exec rake db:create
      - bundle exec rake db:migrate
      - bundle exec rspec
Unclear points:
1) While registering the OAuth application on GitHub, we should specify a Homepage URL and an authorization callback URL. Where should they point? To the Drone server container? Guessing so, I specified
mycorporatedomain.com:3005
and
mycorporatedomain.com:3005/authorize
and set up port forwarding from port 3005 to port 80 of the host where the Drone containers run. Maybe I'm wrong?
2) What should I specify in the DRONE_GITHUB_URL key? https://github.com, or the full path to my project repository, i.e. https://github.com/khataev/qna?
3) What if I want to build a branch other than master? Where should I specify it? For now the Drone-ready branch (with .drone.yml) is not the master branch - will that work?
4) Why are DRONE_GITHUB_GIT_USERNAME and DRONE_GITHUB_GIT_PASSWORD optional? How is it supposed to work if I don't specify the username and password for my GitHub account?
5) When I start the Drone images with docker-compose up, I get these errors:
→ docker-compose up
Starting drone_drone-server_1
Starting drone_drone-agent_1
Attaching to drone_drone-server_1, drone_drone-agent_1
drone-server_1 | time="2017-03-04T17:00:33Z" level=fatal msg="version control system not configured"
drone-agent_1 | 1:M 04 Mar 17:00:35.208 * connecting to server ws://drone-server:8000/ws/broker
drone-agent_1 | 1:M 04 Mar 17:00:35.229 # connection failed, retry in 15s. websocket.Dial ws://drone-server:8000/ws/broker: dial tcp: lookup drone-server on 127.0.0.11:53: no such host
drone_drone-server_1 exited with code 1
drone-server_1 | time="2017-03-04T16:53:38Z" level=fatal msg="version control system not configured"
UPD
5) was solved - I forgot to specify
DRONE_GITHUB=true
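In docker-compose terms that means one extra line in the server's environment; a sketch of the relevant fragment:

services:
  drone-server:
    environment:
      - DRONE_GITHUB=true
      - DRONE_GITHUB_CLIENT=github-client-string
      - DRONE_GITHUB_SECRET=github-secret-string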
The Homepage URL is the address of the server that Drone is running on, e.g. http://155.200.100.0.
The authorization callback URL is the same address with /authorize appended, e.g. http://155.200.100.0/authorize.
You don't have to specify that; DRONE_GITHUB=true tells Drone to use the GitHub URL.
You can limit a single section to a branch, or the whole Drone build.
Single Section:
pipeline:
  build:
    image: node:latest
    commands:
      - npm install
      - npm test
    when:
      branch: master
Whole build process:
pipeline:
  build:
    image: node:latest
    commands:
      - npm install
      - npm test

branches: master
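If you need more than one branch, the 0.5-era docs also allowed list and glob forms (a hedged sketch; check against the version you run):

branches: [ master, develop ]

or with include/exclude rules:

branches:
  include: [ master, feature/* ]
  exclude: [ develop ]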
You don't need username and password when using OAuth.
Source:
http://readme.drone.io/admin/setup-github/
http://readme.drone.io/usage/skipping-builds/
http://readme.drone.io/usage/skipping-build-steps/
UPDATE:
The documentation has moved to http://docs.drone.io/ with version 0.6 of Drone.