Messing around with Bitbucket Pipelines.
I copy a directory to the remote server using scp, and that works perfectly:
- step:
    name: Deploy to server
    deployment: staging
    script:
      - pipe: atlassian/scp-deploy:0.3.9
        variables:
          USER: Administrator
          SERVER: 145.131.29.64
          REMOTE_PATH: C:\Websites\dev.api.lekkerparkeren.nl
          LOCAL_PATH: 'release'
          DEBUG: 'true'
But now I need to execute a .bat (batch) file on the remote host that stops IIS, copies the directory, and starts IIS again. How can I do this through Bitbucket Pipelines?
I've tried the atlassian/ssh-run:0.4.1 pipe, but that doesn't do anything:
- step:
    name: Install on server
    script:
      - pipe: atlassian/ssh-run:0.4.1
        variables:
          SSH_USER: Administrator
          SERVER: 145.131.29.64
          MODE: 'script'
          DEBUG: 'true'
          COMMAND: 'C:\Websites\mywebsite.com\release\deploy.bat'
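A side note on that config: per the pipe's documentation, MODE: 'script' treats COMMAND as the path to a local script that the pipe copies to the remote host and then executes, while MODE: 'command' (the default) runs COMMAND directly on the remote host. Since deploy.bat already lives on the server, something along these lines may work better; a minimal sketch, assuming the Windows host runs an SSH server whose shell (cmd.exe) can launch batch files:

- step:
    name: Install on server
    script:
      - pipe: atlassian/ssh-run:0.4.1
        variables:
          SSH_USER: Administrator
          SERVER: 145.131.29.64
          MODE: 'command'  # run a command that already exists on the remote host
          COMMAND: 'C:\Websites\mywebsite.com\release\deploy.bat'  # assumption: the remote shell is cmd.exe, which runs .bat files directly
          DEBUG: 'true'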
Related
I am new to using Bitbucket Pipelines. I have an issue with deploying my dist files to an FTP server. This is the error that occurs when I try to deploy the project:
mirror: Access failed: /opt/atlassian/pipelines/agent/build/dist/*: No such file or directory
This is my bitbucket.yml file:
# Template NodeJS build
# This template allows you to validate your NodeJS code.
# The workflow allows running tests and code linting on the default branch.
image: node:16
pipelines:
  branches:
    master:
      - step:
          name: Install dependencies
          caches:
            - node
          script:
            - npm install
          artifacts:
            - node_modules/** # Save modules for next steps
      - step:
          name: Build project
          caches:
            - node
          script:
            - npm run build
          artifacts:
            - dist/** # Save build for next steps
      - step:
          name: Deploy to Production
          trigger: manual
          deployment: Production
          script:
            - pipe: atlassian/ftp-deploy:0.3.7
              variables:
                USER: $FTP_USERNAME
                PASSWORD: $FTP_PASSWORD
                SERVER: $FTP_HOST
                REMOTE_PATH: '/var/www/*******/booking.crt-minds.ru/'
                LOCAL_PATH: 'dist/*'
                EXTRA_ARGS: "--exclude=.bitbucket/ --exclude=.git/ --exclude=bitbucket-pipelines.yml --exclude=.gitignore" # Ignore these
I have tried deleting LOCAL_PATH from the yml to see what happened. But first of all, I do not understand whether my pipeline has access to the FTP server at all. How can I check that? Then I need to understand how to replace the dist folder files on the FTP server. Maybe my bitbucket.yml file is configured incorrectly?
Judging from the pipe's documentation:
LOCAL_PATH: Optional path to local directory to upload. Default ${BITBUCKET_CLONE_DIR}.
I bet it is interpreting the value you passed not as a glob pattern but literally, as a folder named dist/*.
Try dropping that /*:
- step:
    script:
      - pipe: atlassian/ftp-deploy:0.3.7
        variables:
          USER: $FTP_USERNAME
          PASSWORD: $FTP_PASSWORD
          SERVER: $FTP_HOST
          REMOTE_PATH: /var/www/site
          LOCAL_PATH: dist
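As for checking whether the pipeline can reach the FTP server at all: like the other Atlassian pipes in this thread, ftp-deploy accepts a DEBUG variable that makes the step log show the underlying transfer activity, so connection or authentication failures become visible. A minimal sketch, assuming the variable behaves as in the pipe's README:

- pipe: atlassian/ftp-deploy:0.3.7
  variables:
    USER: $FTP_USERNAME
    PASSWORD: $FTP_PASSWORD
    SERVER: $FTP_HOST
    REMOTE_PATH: /var/www/site
    LOCAL_PATH: dist
    DEBUG: 'true'  # assumption: turns on verbose logging for the transfer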
Part of my CircleCI config deploys to a remote server using scp. I added an SSH private key (https://circleci.com/docs/add-ssh-key), and it looks like this (values masked intentionally):
And here is a snapshot of my config:
deploy-web:
  working_directory: ~/subdir/web
  docker:
    - image: cimg/node:16.16
  steps:
    - add_ssh_keys:
        fingerprints:
          - "d7:*****fa"
    - checkout:
        path: ~/subdir
    - node/install-packages:
        pkg-manager: yarn
    - run:
        name: Build
        command: yarn build
    - run:
        name: Deploy
        command: |
          SSH_DEPLOY_PATH=/apps/my-app
          scp -r dist/* "$SSH_USER@$SSH_HOST:$SSH_DEPLOY_PATH"
Everything runs fine but the ssh part outputs:
The authenticity of host '************** (**************)' can't be established.
ECDSA key fingerprint is SHA256:6pix3P******M.
Are you sure you want to continue connecting (yes/no/[fingerprint])?
Please note that I copied the fingerprint in the config from the web UI (in the screenshot). Is there anything I am doing wrong, or how do I go about this? So far, Google has not been resourceful.
I managed to resolve this, and this is the hack (I can't believe I didn't think of it sooner): I added this step just before the scp step:
- run:
    name: Add SSH host to known
    command: ssh-keyscan -H $SSH_HOST >> ~/.ssh/known_hosts
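For context: ssh-keyscan fetches the remote host's public key ahead of time (-H hashes the hostnames in the output) and appends it to known_hosts, so the subsequent scp no longer stops at the interactive authenticity prompt.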
I have a Java app that runs integration tests with Elasticsearch in GitLab.
.gitlab-ci.yml:
...
integration:
  stage: integration
  tags:
    - onprem
  services:
    - name: "docker.elastic.co/elasticsearch/elasticsearch:7.10.1"
      alias: "elasticsearch"
      command: [ "bin/elasticsearch", "-Expack.security.enabled=false", "-Ediscovery.type=single-node" ]
  script:
    - curl "http://elasticsearch:9200/_cat/health"
    - mvn -Dgroups="IntegrationTest" -DargLine="-Durl=elasticsearch" test
...
Now I want to use OpenSearch 1.1.0, because that is what we use on AWS. I tried working off the Docker Compose setup that OpenSearch suggests for developers (https://opensearch.org/docs/latest/opensearch/install/docker/#sample-docker-compose-file-for-development) and came up with this:
...
integration:
  stage: integration
  tags:
    - onprem
  services:
    - name: "opensearchproject/opensearch:1.1.0"
      alias: "elasticsearch"
      command: [
        "./opensearch-docker-entrypoint.sh",
        "-Ecluster.name=opensearch-cluster",
        "-Enode.name=opensearch-node1",
        "-Ebootstrap.memory_lock=true",
        "-Ediscovery.type=single-node",
        "OPENSEARCH_JAVA_OPTS=-Xms512m -Xmx512m",
        "DISABLE_INSTALL_DEMO_CONFIG=true",
        "DISABLE_SECURITY_PLUGIN=true"
      ]
  script:
    - curl "http://elasticsearch:9200/_cat/health"
    - mvn -Dgroups="IntegrationTest" -DargLine="-Durl=elasticsearch" test
...
The curl response:
$ curl "http://elasticsearch:9200/_cat/health"
  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed
  0     0    0     0    0     0      0      0 --:--:--  0:00:02 --:--:--     0
curl: (7) Failed to connect to elasticsearch port 9200: No route to host
One big difference seems to be that Elasticsearch disables security with an environment variable, while OpenSearch does it with an argument passed through its setup script. I tried running OpenSearch directly through the bin/ directory, but that gives all sorts of additional errors. The OpenSearch image is available on Docker Hub (https://hub.docker.com/layers/opensearchproject/opensearch/1.1.0/images/sha256-94254d215845723e73829f34cf7053ae9810db360cf73c954737a717e9cf031d?context=explore), but I have no access to the Dockerfile of the Elasticsearch image to compare.
I have numerous other failed setups: I tried moving different combinations of the arguments over as stage variables in .gitlab-ci.yml.
Am I misunderstanding what to do here, or is what I'm trying even supported at all?
The final layer in opensearchproject/opensearch:1.1.0 is
CMD ["./opensearch-docker-entrypoint.sh"]
which reconfigures OpenSearch based on environment variables such as DISABLE_SECURITY_PLUGIN and populates OpenSearch startup options such as -Eplugins.security.disabled=true.
Even though such environment variables are accepted by Docker, bash does not allow exporting variables with names like discovery.type=single-node, so the GitLab CI job fails.
The CMD was refactored into an ENTRYPOINT in later releases by this issue.
One way to trick the CMD is to set the variables before launching the script:
integration:
  stage: integration
  variables:
    OPENSEARCH_JAVA_OPTS: "-Xms512m -Xmx512m"
    DISABLE_INSTALL_DEMO_CONFIG: "true"
    DISABLE_SECURITY_PLUGIN: "true"
  services:
    - name: opensearchproject/opensearch:1.1.0
      alias: opensearch
      command: ["bash", "-c", "env 'discovery.type=single-node' 'cluster.name=opensearch' ./opensearch-docker-entrypoint.sh"]
  script:
    - curl -sS http://opensearch:9200/_cat/health
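The env trick works because env(1) places arbitrary NAME=VALUE pairs straight into the child process's environment, bypassing bash's export, which rejects names containing dots; the entrypoint script then picks up discovery.type and cluster.name just as if Docker had set them.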
I have a Flutter web project in Bitbucket, and I am building a pipeline for CI/CD. The problem I have is that the project depends on a package that lives in another repository on Bitbucket. I have not been able to find a way to configure the private SSH key in Bitbucket so that the build can access that project through Git without problems. It gives me the following error:
Downloading Web SDK... 2,828ms
Downloading CanvasKit... 569ms
Running "flutter pub get" in build...
Git error. Command: `git clone --mirror ssh://git@bitbucket.org/... /root/.pub-cache/git/cache/barest-playground-47e65fcf6973f19ceed46038aa27a70e7bc4d47b`
stdout:
stderr: Cloning into bare repository '/root/.pub-cache/git/cache/'...
Warning: Permanently added the RSA host key for IP address '18.205.93.0' to the list of known hosts.
git#bitbucket.org: Permission denied (publickey).
fatal: Could not read from remote repository.
Please make sure you have the correct access rights
and the repository exists.
My pipeline:
image: cirrusci/flutter
pipelines:
  branches:
    develop:
      - step:
          name: Build
          caches:
            - node
          size: 2x
          script:
            - ./run.sh dev
          artifacts:
            - build/**
      - step:
          name: Deploy to Firebase
          deployment: dev
          script:
            - pipe: atlassian/firebase-deploy:1.1.0
              variables:
                FIREBASE_TOKEN: $FIREBASE_TOKEN
                PROJECT_ID: $PROJECTID
                MESSAGE: Deploying in $PROJECTID
                EXTRA_ARGS: --only hosting
                DEBUG: 'true'
    master:
      - step:
          name: Build
          size: 2x
          script:
            - ./run.sh prod
          artifacts:
            - build/**
      - step:
          name: Deploy to Firebase
          deployment: prod
          script:
            - pipe: atlassian/firebase-deploy:1.1.0
              variables:
                FIREBASE_TOKEN: $FIREBASE_TOKEN
                PROJECT_ID: $PROJECTID
                MESSAGE: Deploying in $PROJECTID
                EXTRA_ARGS: --only hosting
                DEBUG: 'false'
First of all, thanks.
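In case it helps anyone else hitting the same wall: Bitbucket Pipelines has its own SSH key pair under Repository settings → SSH keys (in the Pipelines section), and the usual fix is to add that public key as a read-only access key on the dependency's repository. A minimal verification step, assuming such a key has been configured and bitbucket.org is in the build's known hosts:

- step:
    name: Verify SSH access to the dependency repo
    script:
      - ssh -T git@bitbucket.org || true  # should print the authenticated account rather than "Permission denied (publickey)"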
I need some help with the steps to run integration tests on GitHub. My project needs Neo4j, so I have declared neo4j as a service. However, while running my test project, I am seeing the following error:
Connection with the server breaks due to ExtendedSocketException:
Resource temporarily unavailable Please ensure that your database is listening on the correct
host and port and that you have compatible encryption settings both on Neo4j server and driver.
Note that the default encryption setting has changed in Neo4j 4.0.
I am not sure if the issue is with how I am specifying the actions or something to do with Neo4j.
Here's my GitHub CI File: dotnetcore.yml
name: .NET Core
on:
  push:
    branches: [ master, GithubActions ]
  pull_request:
    branches: [ master ]
jobs:
  build:
    runs-on: ubuntu-latest
    env:
      NEO4J_HOST: neo4j
    # Service containers to run with `container-job`
    services:
      # Label used to access the service container
      neo4j:
        # Docker Hub image
        image: neo4j:4.0.1
        ports:
          - 7474:7474 # used for http
          - 7687:7687 # used for bolt
        env:
          NEO4J_AUTH: neo4j/password
          NEO4J_dbms_connector_http_advertised__address: "NEO4J_HOST:7687"
          NEO4J_dbms_connector_bolt_advertised__address: "NEO4J_HOST:7687"
    steps:
      - uses: actions/checkout@v2
      - name: Setup .NET Core
        uses: actions/setup-dotnet@v1
        with:
          dotnet-version: 3.1.101
      - name: Install dependencies
        run: dotnet restore ./src/BbcCorp.Neo4j.NeoGraphManager.sln
      - name: Build
        run: dotnet build --configuration Release --no-restore ./src/BbcCorp.Neo4j.NeoGraphManager.sln
      - name: Run Integration Tests
        env:
          NEO4J_SERVER: NEO4J_HOST
        run: |
          cd ./src/BbcCorp.Neo4j.Tests
          docker ps
          dotnet test --no-restore --verbosity normal BbcCorp.Neo4j.Tests.csproj
I am trying to connect to: bolt://NEO4J_HOST:7687 but it just doesn't connect.
Error Details:
Using Neo4j Server NEO4J_HOST:7687 as neo4j/password
Error executing query. Connection with the server breaks due to ExtendedSocketException: Resource temporarily unavailable Please ensure that your database is listening on the correct host and port and that you have compatible encryption settings both on Neo4j server and driver. Note that the default encryption setting has changed in Neo4j 4.0.
[xUnit.net 00:00:00.83] NeoGraphManagerIntegrationTests.SimpleNodeTests [FAIL]
[xUnit.net 00:00:00.83] Neo4j.Driver.ServiceUnavailableException : Connection with the server breaks due to ExtendedSocketException: Resource temporarily unavailable Please ensure that your database is listening on the correct host and port and that you have compatible encryption settings both on Neo4j server and driver. Note that the default encryption setting has changed in Neo4j 4.0.
[xUnit.net 00:00:00.83] ---- System.Net.Internals.SocketExceptionFactory+ExtendedSocketException : Resource temporarily unavailable
The settings file for my C# test project looks like this
{
  "NEO4J_SERVER": "localhost",
  "NEO4J_PORT": 7687,
  "NEO4J_DB_USER": "neo4j",
  "NEO4J_DB_PWD": "password"
}
The tests run fine on my local machine but fail when we execute them on GitHub.
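A hedged note for anyone comparing notes: because this job runs directly on ubuntu-latest rather than inside a container, GitHub's service containers are reachable at localhost through the mapped ports, not through the neo4j label; and NEO4J_SERVER: NEO4J_HOST assigns the literal string NEO4J_HOST, since plain env values are not expanded against other variables. Under those assumptions, the test step would look like:

- name: Run Integration Tests
  env:
    NEO4J_SERVER: localhost  # assumption: ports 7474/7687 are published to the runner host
  run: |
    cd ./src/BbcCorp.Neo4j.Tests
    dotnet test --no-restore --verbosity normal BbcCorp.Neo4j.Tests.csproj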