mirror: Access failed: /opt/atlassian/pipelines/agent/build/dist/*: No such file or directory - bitbucket

I am new to using Bitbucket Pipelines. I have an issue with deploying my dist folder to an FTP server: the error "mirror: Access failed: /opt/atlassian/pipelines/agent/build/dist/*: No such file or directory" occurs when I try to deploy the project.
This is my bitbucket-pipelines.yml file:
# Template NodeJS build
# This template allows you to validate your NodeJS code.
# The workflow allows running tests and code linting on the default branch.

image: node:16

pipelines:
  branches:
    master:
      - step:
          name: Install dependencies
          caches:
            - node
          script:
            - npm install
          artifacts:
            - node_modules/** # Save modules for next steps
      - step:
          name: Build project
          caches:
            - node
          script:
            - npm run build
          artifacts:
            - dist/** # Save build for next steps
      - step:
          name: Deploy to Production
          trigger: manual
          deployment: Production
          script:
            - pipe: atlassian/ftp-deploy:0.3.7
              variables:
                USER: $FTP_USERNAME
                PASSWORD: $FTP_PASSWORD
                SERVER: $FTP_HOST
                REMOTE_PATH: '/var/www/*******/booking.crt-minds.ru/'
                LOCAL_PATH: 'dist/*'
                EXTRA_ARGS: "--exclude=.bitbucket/ --exclude=.git/ --exclude=bitbucket-pipelines.yml --exclude=.gitignore" # Ignore these
I have tried deleting LOCAL_PATH from the yml to see what happens. But first of all, I do not understand whether my pipeline even has access to the FTP server. How can I check that? And then, how do I replace the dist folder files on the FTP server? Maybe my bitbucket-pipelines.yml file is configured incorrectly?

Judging from the pipe's documentation:
LOCAL_PATH: Optional path to local directory to upload. Default ${BITBUCKET_CLONE_DIR}.
I suspect it is interpreting the value you passed not as a glob pattern but literally, as a folder named dist/*.
Try dropping that /*:
- step:
    script:
      - pipe: atlassian/ftp-deploy:0.3.7
        variables:
          USER: $FTP_USERNAME
          PASSWORD: $FTP_PASSWORD
          SERVER: $FTP_HOST
          REMOTE_PATH: /var/www/site
          LOCAL_PATH: dist
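As for checking whether the pipeline can reach the FTP server at all: the pipe uses lftp under the hood (the "mirror: Access failed" message is an lftp error), so one way to test connectivity is a throwaway debug step that just lists the remote directory. A minimal sketch, assuming lftp can be installed in the node:16 image and the same $FTP_* variables are configured:

- step:
    name: Debug FTP access
    script:
      - apt-get update && apt-get install -y lftp
      # List the remote directory; wrong credentials or a wrong path fail loudly here
      - lftp -u "$FTP_USERNAME,$FTP_PASSWORD" "$FTP_HOST" -e "ls /var/www/; bye"

If that step succeeds but the deploy still behaves oddly, the problem is in the LOCAL_PATH/REMOTE_PATH values rather than access. To make the pipe remove stale files on the server you can also pass lftp's mirror option --delete through EXTRA_ARGS, but verify that against the pipe's documentation first.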

Related

Bitbucket pipeline: run Windows bat (batch) file remotely through ssh

I'm messing around with Bitbucket Pipelines. I copy a directory to the remote server using scp, and that works perfectly:
- step:
    name: Deploy to server
    deployment: staging
    script:
      - pipe: atlassian/scp-deploy:0.3.9
        variables:
          USER: Administrator
          SERVER: 145.131.29.64
          REMOTE_PATH: C:\Websites\dev.api.lekkerparkeren.nl
          LOCAL_PATH: 'release'
          DEBUG: 'true'
But now I need to execute a bat (batch) file on the remote host that stops IIS, copies the directory, and starts IIS again. How can I do this through Bitbucket Pipelines?
I've tried to use atlassian/ssh-run:0.4.1, but it doesn't do anything:
- step:
    name: Install on server
    script:
      - pipe: atlassian/ssh-run:0.4.1
        variables:
          SSH_USER: Administrator
          SERVER: 145.131.29.64
          MODE: 'script'
          DEBUG: 'true'
          COMMAND: 'C:\Websites\mywebsite.com\release\deploy.bat'
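One thing worth checking (an observation based on the pipe's documentation, not a confirmed fix for this setup): with MODE: 'script' the ssh-run pipe treats COMMAND as the path of a script inside the cloned repository, copies it to the remote host, and runs it there, whereas MODE: 'command' runs COMMAND as a command that already exists on the remote machine. Since deploy.bat already lives on the server, a sketch using command mode would be:

- step:
    name: Install on server
    script:
      - pipe: atlassian/ssh-run:0.4.1
        variables:
          SSH_USER: Administrator
          SERVER: 145.131.29.64
          MODE: 'command'
          DEBUG: 'true'
          # Assumption: the Windows OpenSSH server hands .bat files to cmd.exe;
          # if not, wrap it as 'cmd.exe /c C:\Websites\mywebsite.com\release\deploy.bat'
          COMMAND: 'C:\Websites\mywebsite.com\release\deploy.bat'

This also assumes the SSH key pair is configured under Repository settings > Pipelines > SSH keys and the public key is authorized for Administrator on the server.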

Unable to deploy to remote ssh server in CircleCI

Part of my CircleCI config is to deploy to a remote server using scp. I added an SSH private key (https://circleci.com/docs/add-ssh-key) in the project settings (values masked intentionally in the screenshot).
And here is a snapshot of my config:
deploy-web:
  working_directory: ~/subdir/web
  docker:
    - image: cimg/node:16.16
  steps:
    - add_ssh_keys:
        fingerprints:
          - "d7:*****fa"
    - checkout:
        path: ~/subdir
    - node/install-packages:
        pkg-manager: yarn
    - run:
        name: Build
        command: yarn build
    - run:
        name: Deploy
        command: |
          SSH_DEPLOY_PATH=/apps/my-app
          scp -r dist/* "$SSH_USER@$SSH_HOST:$SSH_DEPLOY_PATH"
Everything runs fine but the ssh part outputs:
The authenticity of host '************** (**************)' can't be established.
ECDSA key fingerprint is SHA256:6pix3P******M.
Are you sure you want to continue connecting (yes/no/[fingerprint])?
Please note that I copied the fingerprint in the config from the web UI (shown in the screenshot). Is there anything I am doing wrong, or how should I go about this? So far Google has not been much help.
I managed to resolve this, and this is the hack (I can't believe I didn't think of it sooner): I added this step just before the scp step:
- run:
    name: Add SSH host to known
    command: ssh-keyscan -H $SSH_HOST >> ~/.ssh/known_hosts
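For reference, the keyscan can also live inside the same run step as the scp, which keeps the ordering explicit. A sketch, assuming the same $SSH_HOST and $SSH_USER variables:

- run:
    name: Deploy
    command: |
      # Pin the server's host key so scp does not prompt interactively
      ssh-keyscan -H "$SSH_HOST" >> ~/.ssh/known_hosts
      SSH_DEPLOY_PATH=/apps/my-app
      scp -r dist/* "$SSH_USER@$SSH_HOST:$SSH_DEPLOY_PATH"

An alternative is passing -o StrictHostKeyChecking=accept-new to scp, but pinning the key via ssh-keyscan (or a checked-in known_hosts entry) is the more explicit choice.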

Access (clone) a Bitbucket repository from another Bitbucket repo's pipeline over SSH

I have a Flutter web project in Bitbucket and I am building a pipeline for CI/CD. The problem is that the project depends on a package that lives in another Bitbucket repository, and I have not been able to find a way to configure the private SSH key in Bitbucket so that I can access that project over Git without problems during the build. It gives me the following error:
Downloading Web SDK... 2,828ms
Downloading CanvasKit... 569ms
Running "flutter pub get" in build...
Git error. Command: `git clone --mirror ssh://git@bitbucket.org/... /root/.pub-cache/git/cache/barest-playground-47e65fcf6973f19ceed46038aa27a70e7bc4d47b`
stdout:
stderr: Cloning into bare repository '/root/.pub-cache/git/cache/'...
Warning: Permanently added the RSA host key for IP address '18.205.93.0' to the list of known hosts.
git@bitbucket.org: Permission denied (publickey).
fatal: Could not read from remote repository.
Please make sure you have the correct access rights
and the repository exists.
My pipeline:
image: cirrusci/flutter

pipelines:
  branches:
    develop:
      - step:
          name: Build
          caches:
            - node
          size: 2x
          script:
            - ./run.sh dev
          artifacts:
            - build/**
      - step:
          name: Deploy to Firebase
          deployment: dev
          script:
            - pipe: atlassian/firebase-deploy:1.1.0
              variables:
                FIREBASE_TOKEN: $FIREBASE_TOKEN
                PROJECT_ID: $PROJECTID
                MESSAGE: Deploying in $PROJECTID
                EXTRA_ARGS: --only hosting
                DEBUG: 'true'
    master:
      - step:
          name: Build
          size: 2x
          script:
            - ./run.sh prod
          artifacts:
            - build/**
      - step:
          name: Deploy to Firebase
          deployment: prod
          script:
            - pipe: atlassian/firebase-deploy:1.1.0
              variables:
                FIREBASE_TOKEN: $FIREBASE_TOKEN
                PROJECT_ID: $PROJECTID
                MESSAGE: Deploying in $PROJECTID
                EXTRA_ARGS: --only hosting
                DEBUG: 'false'
First of all, thanks!
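For context, the usual mechanism for this kind of private Git dependency (a sketch of the general approach, not a verified fix for this exact project): add an SSH key pair under Repository settings > Pipelines > SSH keys, and register the public key as a read-only access key on the dependency repository. Pipelines then injects the private key into the build container, so git, and therefore flutter pub get, can clone the ssh://git@bitbucket.org/... URL. If the key is instead handled manually, the Build step could install it from a secured variable, roughly like this ($DEP_REPO_SSH_KEY is a hypothetical base64-encoded secured variable):

- step:
    name: Build
    size: 2x
    script:
      # Manual SSH key setup; skip this if the key is configured in Pipelines settings
      - mkdir -p ~/.ssh
      - echo "$DEP_REPO_SSH_KEY" | base64 -d > ~/.ssh/id_rsa
      - chmod 600 ~/.ssh/id_rsa
      - ssh-keyscan -H bitbucket.org >> ~/.ssh/known_hosts
      - ./run.sh dev
    artifacts:
      - build/**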

My cloudbuild.yaml is failing. Please review my cloudbuild.yaml

I am trying to deploy a React app to a Kubernetes cluster. All my Kubernetes files reside in the k8s/ folder, which contains a deployment.yaml and a service.yaml.
Below is my cloudbuild.yaml file, which resides in the root folder. The gcr.io/cloud-builders/kubectl part (Stage 3) is failing with the error below:
build step 2 "gcr.io/cloud-builders/kubectl" failed: step exited with non-zero status: 1
steps:
# Build the image - Stage 1
- name: 'gcr.io/cloud-builders/docker'
  args: ['build','-t','gcr.io/${_PROJECT}/${_CONTAINERNAME}:${_VERSION}','.']
  timeout: 1500s
# Push the image - Stage 2
- name: 'gcr.io/cloud-builders/docker'
  args: ['push','gcr.io/${_PROJECT}/${_CONTAINERNAME}:${_VERSION}']
# Deploy changes to kubernetes config files - Stage 3
- name: "gcr.io/cloud-builders/kubectl"
  args: ["apply", "-f", "k8s/"]
  env:
  - 'CLOUDSDK_COMPUTE_ZONE=${_ZONE}'
  - 'CLOUDSDK_CONTAINER_CLUSTER=${_GKE_CLUSTER}'

# These are variable substitutions
substitutions:
  # GCP-specific configuration. Please DON'T change anything
  _PROJECT: my-projects-121212
  _ZONE: us-central1-c
  _GKE_CLUSTER: cluster-1
  # Repository-specific configuration. DevOps can change these settings
  _DEPLOYMENTNAME: react
  _CONTAINERNAME: react
  _REPO_NAME: react-app
  # Developers ONLY change
  _VERSION: v1.0

options:
  substitution_option: 'ALLOW_LOOSE'
  machineType: 'N1_HIGHCPU_8'

timeout: 2500s
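For reference, the manifests under k8s/ need to reference the pushed image themselves, because Cloud Build substitutions are only expanded inside cloudbuild.yaml, not inside files passed to kubectl apply. A hypothetical k8s/deployment.yaml matching the names above might look like this:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: react              # matches _DEPLOYMENTNAME
spec:
  replicas: 1
  selector:
    matchLabels:
      app: react
  template:
    metadata:
      labels:
        app: react
    spec:
      containers:
        - name: react
          # Hard-coded on purpose: ${_VERSION} would not be substituted here
          image: gcr.io/my-projects-121212/react:v1.0
          ports:
            - containerPort: 80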
In step 3 there are double quotes: name: "gcr.io/cloud-builders/kubectl".
If you replace them with single quotes, the issue should be fixed.

Why can't drone find my repo name with plugins/docker?

I'm trying to build and push an image with drone.io's plugins/docker, but it seems it cannot find my repo name.
Here is the last part of the log for the build step:
---> Running in afca20280587
Removing intermediate container afca20280587
---> cb05c781a4c4
Successfully built cb05c781a4c4
Successfully tagged caa418f0605dc7a6b2bc84faebabac55a09a373b:latest
+ /usr/local/bin/docker tag caa418f0605dc7a6b2bc84faebabac55a09a373b :latest
Error parsing reference: ":latest" is not a valid repository/tag: invalid reference format
time="2019-01-02T02:05:18Z" level=fatal msg="exit status 1"
The sixth line should be
+ /usr/local/bin/docker tag caa418f0605dc7a6b2bc84faebabac55a09a373b registry.cn-beijing.aliyuncs.com/xxx/xxx_xxx:latest
but it did not pick up my repo name.
It's drone/drone:1.0.0-rc.3, and here is my .drone.yml:
kind: pipeline
name: default

steps:
- name: build
  image: python:3.6
  commands:
  - pip install -r requirements.txt
  - python -m pytest app.py
  when:
    branch: master
    event:
    - push
    - pull_request

- name: publish
  image: plugins/docker
  registry: registry.cn-beijing.aliyuncs.com
  repo: xxx/xxx_xxx
  tags: [ latest ]
  username:
  - from_secret: ali_username
  password:
  - from_secret: ali_password
Is there something wrong? Thanks for any tips!
When you define the repository you need to use the fully qualified image name:
- repo: xxx/xxx_xxx
+ repo: registry.cn-beijing.aliyuncs.com/xxx/xxx_xxx
In addition, all of the plugin settings need to be declared inside the settings block [1] like this:
- name: publish
  image: plugins/docker
  settings:
    registry: registry.cn-beijing.aliyuncs.com
    repo: registry.cn-beijing.aliyuncs.com/xxx/xxx_xxx
    username:
    - from_secret: ali_username
    password:
    - from_secret: ali_password
[1] http://plugins.drone.io/drone-plugins/drone-docker/
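Putting both fixes together, a complete publish step might look like the sketch below. Note that the Drone docs show from_secret as a mapping under username/password rather than as a list item, so that form is used here (an adjustment based on the docs, not something stated in the answer above):

- name: publish
  image: plugins/docker
  settings:
    registry: registry.cn-beijing.aliyuncs.com
    repo: registry.cn-beijing.aliyuncs.com/xxx/xxx_xxx
    tags: [ latest ]
    username:
      from_secret: ali_username
    password:
      from_secret: ali_password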
