Bitbucket pipeline fails (FTP) - Error: server response timeout

My pipeline fails with the following error:
* Connected to sftp.domain.com (157.90.40.40) port 5544 (#0)
< SSH-2.0-sFTP Server ready.
* server response timeout
* Closing connection 0
curl: (28) server response timeout
Between the "Server ready" line and the timeout message there are many more repeated messages that form something like a wave pattern. My pipeline YAML:
image: php:7.4

pipelines:
  branches:
    master:
      - step:
          name: Deploy to production
          deployment: production
          script:
            - apt-get update
            - apt-get -qq install git-ftp
            - git ftp init -vv --user $LiveFTPUsername --passwd $LiveFTPPassword ftp://sftp.domain.com:5544
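The banner in the log (SSH-2.0-sFTP Server ready.) suggests that port 5544 speaks SSH/SFTP, while git-ftp is being pointed at it with the plain ftp:// scheme. A hedged sketch of the script with the scheme switched to sftp:// (assuming the endpoint really is SFTP and the image's curl is built with SFTP support, which git-ftp relies on):

script:
  - apt-get update
  - apt-get -qq install git-ftp
  # sftp:// makes git-ftp speak SFTP over SSH instead of plain FTP;
  # this only works if the underlying curl was compiled with SFTP support
  - git ftp init -vv --user $LiveFTPUsername --passwd $LiveFTPPassword sftp://sftp.domain.com:5544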

GitHub Actions job running in a webserver container is unable to connect to localhost

I have a container which starts a webserver. When I run the container on my laptop, log in to its terminal, and make a curl request to 127.0.0.1, I get a result. When I try the same thing in a GitHub Actions workflow I get: "curl: (7) Failed to connect to 127.0.0.1 port 80: Connection refused". I have tried things like adding ports (which I don't think should be necessary) but it won't work. I actually think Apache is not running for some reason, but I don't understand why, as it does work locally.
See a minimal workflow file below:
name: Auto tests
on:
  push:
    branches: [ "master", "githubActions" ]
jobs:
  build:
    runs-on: ubuntu-latest
    container:
      image: php:7.4.33-apache-bullseye
    steps:
      - name: GET localhost
        run: curl 127.0.0.1
I am expecting to get something like:
# curl localhost
<!DOCTYPE HTML PUBLIC "-//IETF//DTD HTML 2.0//EN">
<html><head>
<title>403 Forbidden</title>
</head><body>
<h1>Forbidden</h1>
<p>You don't have permission to access this resource.</p>
<hr>
<address>Apache/2.4.54 (Debian) Server at localhost Port 80</address>
</body></html>
But I get "curl: (7) Failed to connect to localhost port 80: Connection refused" instead.
You are running steps inside the container, not alongside it. See Running jobs in a container. That's why there's no server running there and curl fails.
What you're looking for is jobs.<job_id>.services. See About service containers for more details.
With services, a sample workflow will be:
name: php_container
on:
  workflow_dispatch:
jobs:
  ci:
    runs-on: ubuntu-latest
    services:
      php:
        image: php:7.4.33-apache-bullseye
        ports:
          - 80:80
    steps:
      - name: Test
        run: curl localhost
Apart from that, if using php:7.4.33-apache-bullseye is not a hard requirement on your side, you can avoid the container and services altogether and use the preinstalled PHP and Apache web server. The preinstalled Apache is inactive by default, so you'll have to start it, as in the sketch below.
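A minimal sketch, assuming the ubuntu-latest runner image still ships Apache preinstalled:

name: php_on_runner
on:
  workflow_dispatch:
jobs:
  ci:
    runs-on: ubuntu-latest
    steps:
      - name: Start preinstalled Apache and test
        run: |
          # Apache ships inactive on the runner image, so start it first
          sudo systemctl start apache2
          curl localhost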

Connection timeout using a local kafka-connect cluster to connect to a remote database

I'm trying to run a local kafka-connect cluster using docker-compose.
I need to connect to a remote database, and I'm also using a remote Kafka and Schema Registry.
I have enabled access to these remote resources from my machine.
To start the cluster, from my project folder in an Ubuntu WSL2 terminal, I run:
docker build -t my-connect:1.0.0 .
docker-compose up
The application starts successfully, but when I try to create a new connector, the request returns error 500 with a timeout.
My Dockerfile
FROM confluentinc/cp-kafka-connect-base:5.5.0
RUN cat /etc/confluent/docker/log4j.properties.template
ENV CONNECT_PLUGIN_PATH="/usr/share/java,/usr/share/confluent-hub-components"
ARG JDBC_DRIVER_DIR=/usr/share/java/kafka/
RUN confluent-hub install --no-prompt confluentinc/kafka-connect-jdbc:5.5.0 \
&& confluent-hub install --no-prompt confluentinc/connect-transforms:1.3.2
ADD java/kafka-connect-jdbc /usr/share/confluent-hub-components/confluentinc-kafka-connect-jdbc/lib/
COPY java/kafka-connect-jdbc/ojdbc8.jar /usr/share/confluent-hub-components/confluentinc-kafka-connect-jdbc/lib/
ENTRYPOINT ["sh","-c","export CONNECT_REST_ADVERTISED_HOST_NAME=$(hostname -I);/etc/confluent/docker/run"]
My docker-compose.yaml
services:
  connect:
    image: my-connect:1.0.0
    ports:
      - 8083:8083
    environment:
      - CONNECT_KEY_CONVERTER_SCHEMA_REGISTRY_URL=http://schema-registry:8081
      - CONNECT_KEY_CONVERTER=io.confluent.connect.avro.AvroConverter
      - CONNECT_VALUE_CONVERTER_SCHEMA_REGISTRY_URL=http://schema-registry:8081
      - CONNECT_BOOTSTRAP_SERVERS=broker1.intranet:9092
      - CONNECT_GROUP_ID=kafka-connect
      - CONNECT_INTERNAL_KEY_CONVERTER=org.apache.kafka.connect.json.JsonConverter
      - CONNECT_VALUE_CONVERTER=io.confluent.connect.avro.AvroConverter
      - CONNECT_INTERNAL_VALUE_CONVERTER=org.apache.kafka.connect.json.JsonConverter
      - CONNECT_OFFSET_STORAGE_TOPIC=kafka-connect.offset
      - CONNECT_CONFIG_STORAGE_TOPIC=kafka-connect.config
      - CONNECT_STATUS_STORAGE_TOPIC=kafka-connect.status
      - CONNECT_CONNECTOR_CLIENT_CONFIG_OVERRIDE_POLICY=All
      - CONNECT_LOG4J_ROOT_LOGLEVEL=INFO
      - KAFKA_OFFSETS_TOPIC_REPLICATION_FACTOR=1
      - CONNECT_REST_ADVERTISED_HOST_NAME=localhost
My cluster is up:
~$ curl -X GET http://localhost:8083/
{"version":"5.5.0-ccs","commit":"606822a624024828","kafka_cluster_id":"OcXKHO7eT4m9NBHln6ACKg"}
Connector call
curl -i -X POST -H "Accept:application/json" -H "Content-Type:application/json" localhost:8083/connectors/ -d '
{
  "name": "my-connector",
  "config":
  {
    "connector.class": "io.debezium.connector.oracle.OracleConnector",
    "tasks.max": "1",
    "database.user": "user",
    "database.password": "pass",
    "database.dbname": "SID",
    "database.schema": "schema",
    "database.server.name": "dbname",
    "schema.include.list": "schema",
    "database.connection.adapter": "logminer",
    "database.hostname": "databasehost",
    "database.port": "1521"
  }
}'
Error
{"error_code": 500,"message": "IO Error trying to forward REST request: java.net.SocketTimeoutException: Connect Timeout"}
Log:
connect_1 | [2021-07-01 19:08:50,481] INFO Database Version: Oracle Database 19c Enterprise Edition Release 19.0.0.0.0 - Production
connect_1 | Version 19.4.0.0.0 (io.debezium.connector.oracle.OracleConnection)
connect_1 | [2021-07-01 19:08:50,628] INFO Connection gracefully closed (io.debezium.jdbc.JdbcConnection)
connect_1 | [2021-07-01 19:08:50,643] INFO AbstractConfig values:
connect_1 | (org.apache.kafka.common.config.AbstractConfig)
connect_1 | [2021-07-01 19:09:05,722] ERROR IO error forwarding REST request: (org.apache.kafka.connect.runtime.rest.RestClient)
connect_1 | java.util.concurrent.ExecutionException: java.net.SocketTimeoutException: Connect Timeout
Testing the connection to the database
$ telnet databasehostname 1521
Trying <ip>... Connected to databasehostname
Testing connection to kafka broker
$ telnet broker1.intranet 9092
Trying <ip>... Connected to broker1.intranet
Testing connection to remote schema-registry
$ telnet schema-registry.intranet 8081
Trying <ip>... Connected to schema-registry.intranet
What am I doing wrong? Do I need to configure something else to allow connection to this remote database?
You need to correctly set rest.advertised.host.name (or CONNECT_REST_ADVERTISED_HOST_NAME, if you're using Docker).
This is how a Connect worker communicates with other workers in the cluster.
For more details see Common mistakes made when configuring multiple Kafka Connect workers by Robin Moffatt.
In your case, try removing CONNECT_REST_ADVERTISED_HOST_NAME=localhost from the compose file, as sketched below.
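A hedged sketch of the change (only the environment section is shown; your Dockerfile's ENTRYPOINT already exports CONNECT_REST_ADVERTISED_HOST_NAME from hostname -I, so dropping the compose override lets that value take effect):

services:
  connect:
    image: my-connect:1.0.0
    ports:
      - 8083:8083
    environment:
      # ... converters and topic settings unchanged ...
      - CONNECT_LOG4J_ROOT_LOGLEVEL=INFO
      - KAFKA_OFFSETS_TOPIC_REPLICATION_FACTOR=1
      # CONNECT_REST_ADVERTISED_HOST_NAME=localhost removed: advertising
      # localhost makes workers forward REST requests to themselves and time out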

CircleCI kubectl config error on deployment to gcloud

I'm new to Docker/Kubernetes/DevOps in general. I was following a course that used Travis with GitHub; however, I use Bitbucket, so I'm trying to implement a CI deployment to GKE with CircleCI.
Most of the tasks are working just fine, but I'm hitting an error when it comes to kubectl (specifically in the deploy.sh script). Here's the error I'm getting:
unable to recognize "k8s/client-deployment.yml": Get http://localhost:8080/api?timeout=32s: dial tcp 127.0.0.1:8080: connect: connection refused
unable to recognize "k8s/database-persistent-volume-claim.yml": Get http://localhost:8080/api?timeout=32s: dial tcp 127.0.0.1:8080: connect: connection refused
unable to recognize "k8s/ingress-service.yml": Get http://localhost:8080/api?timeout=32s: dial tcp 127.0.0.1:8080: connect: connection refused
unable to recognize "k8s/postgres-cluster-ip-service.yml": Get http://localhost:8080/api?timeout=32s: dial tcp 127.0.0.1:8080: connect: connection refused
unable to recognize "k8s/postgres-deployment.yml": Get http://localhost:8080/api?timeout=32s: dial tcp 127.0.0.1:8080: connect: connection refused
unable to recognize "k8s/redis-cluster-ip-service.yml": Get http://localhost:8080/api?timeout=32s: dial tcp 127.0.0.1:8080: connect: connection refused
unable to recognize "k8s/redis-deployment.yml": Get http://localhost:8080/api?timeout=32s: dial tcp 127.0.0.1:8080: connect: connection refused
unable to recognize "k8s/server-cluster-ip-service.yml": Get http://localhost:8080/api?timeout=32s: dial tcp 127.0.0.1:8080: connect: connection refused
unable to recognize "k8s/server-deployment.yml": Get http://localhost:8080/api?timeout=32s: dial tcp 127.0.0.1:8080: connect: connection refused
unable to recognize "k8s/worker-deployment.yml": Get http://localhost:8080/api?timeout=32s: dial tcp 127.0.0.1:8080: connect: connection refused
I've managed to get this far tackling through problems, however I'm lost on this one so any help is appreciated.
Here's the config.yml for CircleCI (MY_USER stands in for my actual Docker username; it's not an env var or anything, I'm simply not disclosing it):
version: 2
jobs:
  build:
    docker:
      - image: node
    working_directory: ~/app
    steps:
      - checkout
      - setup_remote_docker
      - run:
          name: Install Docker Client
          command: |
            set -x
            VER="18.09.2"
            curl -L -o /tmp/docker-$VER.tgz https://download.docker.com/linux/static/stable/x86_64/docker-$VER.tgz
            tar -xz -C /tmp -f /tmp/docker-$VER.tgz
            mv /tmp/docker/* /usr/bin
      - run:
          name: Build Client Docker Image
          command: docker build -t MY_USER/multi-docker-react -f ./client/Dockerfile.dev ./client
      - run:
          name: Run Tests
          command: docker run -e CI=true MY_USER/multi-docker-react npm run test -- --coverage
  deploy:
    working_directory: ~/app
    # Docker environment where we're going to run our build deployment scripts
    docker:
      - image: google/cloud-sdk
    steps:
      - checkout
      - setup_remote_docker:
          docker_layer_caching: true
      # Set up env
      - run:
          name: Setup Environment Variables
          command: |
            echo 'export GIT_SHA="$CIRCLE_SHA1"' >> $BASH_ENV
            echo 'export CLOUDSDK_CORE_DISABLE_PROMPTS=1' >> $BASH_ENV
      # Log in to Docker CLI
      - run:
          name: Log in to Docker Hub
          command: |
            echo "$DOCKER_PASSWORD" | docker login -u "$DOCKER_ID" --password-stdin
      # !!! This installs gcloud !!!
      - run:
          name: Installing GCL
          working_directory: /
          command: |
            echo $GCLOUD_SERVICE_KEY | gcloud auth activate-service-account --key-file=-
            gcloud --quiet config set project ${GOOGLE_PROJECT_ID}
            gcloud --quiet config set compute/zone ${GOOGLE_COMPUTE_ZONE}
      # !!! This runs a deployment
      - run:
          name: Deploying
          command: bash ./deploy.sh
workflows:
  version: 2
  build:
    jobs:
      - deploy:
          filters:
            branches:
              only:
                - master
Here's the deploy.sh:
docker build -t MY_USER/multi-docker-client:latest -t MY_USER/multi-docker-client:$GIT_SHA -f ./client/Dockerfile ./client
docker build -t MY_USER/multi-docker-server:latest -t MY_USER/multi-docker-server:$GIT_SHA -f ./server/Dockerfile ./server
docker build -t MY_USER/multi-docker-worker:latest -t MY_USER/multi-docker-worker:$GIT_SHA -f ./worker/Dockerfile ./worker
docker push MY_USER/multi-docker-client:latest
docker push MY_USER/multi-docker-server:latest
docker push MY_USER/multi-docker-worker:latest
docker push MY_USER/multi-docker-client:$GIT_SHA
docker push MY_USER/multi-docker-server:$GIT_SHA
docker push MY_USER/multi-docker-worker:$GIT_SHA
kubectl apply -f k8s
kubectl set image deployments/client-deployment client=MY_USER/multi-docker-client:$GIT_SHA
kubectl set image deployments/server-deployment server=MY_USER/multi-docker-server:$GIT_SHA
kubectl set image deployments/worker-deployment worker=MY_USER/multi-docker-worker:$GIT_SHA
It turns out I was only missing the following command:
gcloud --quiet container clusters get-credentials multi-cluster
As part of this task:
# !!! This installs gcloud !!!
- run:
    name: Installing GCL
    working_directory: /
    command: |
      echo $GCLOUD_SERVICE_KEY | gcloud auth activate-service-account --key-file=-
      gcloud --quiet config set project ${GOOGLE_PROJECT_ID}
      gcloud --quiet config set compute/zone ${GOOGLE_COMPUTE_ZONE}
      gcloud --quiet container clusters get-credentials multi-cluster
Shout out to @DazWilkin for shedding light on this.

Ensuring Docker containers have the certificate

I have an issue where a self-signed certificate has been added to a testing environment.
This means my Selenium grid, which is hosted in Docker containers, is unable to reach that environment because of the certificate.
I get this error when executing tests:
Message: OpenQA.Selenium.WebDriverException : The HTTP request to the remote WebDriver server for URL http://xxx.xx.x.x:4444/wd/hub/session/0ee03d72bff0d5527cff926121b496bb/url timed out after 60 seconds.
----> System.Net.WebException : The request was aborted: The operation has timed out.
TearDown : OpenQA.Selenium.WebDriverException : The HTTP request to the remote WebDriver server for URL http://xxx.xx.x.x:4444/wd/hub/session/0ee03d72bff0d5527cff926121b496bb/screenshot timed out after 60 seconds.
----> System.Net.WebException : The operation has timed out
The Docker environment is set up with docker-compose and uses the chrome and hub images.
The compose file is:
version: "3"
services:
  selenium-hub:
    image: selenium/hub:latest
    container_name: selenium-hub
    ports:
      - "4444:4444"
  chrome:
    image: selenium/node-chrome:latest
    volumes:
      - /dev/shm:/dev/shm
    depends_on:
      - selenium-hub
    environment:
      - HUB_HOST=selenium-hub
      - HUB_PORT=4444
I added the certificates to the host, hoping this would be enough, but obviously it isn't, as each container is isolated.
My question is: how do I get the certificates into each Chrome node that spins up?
More information
When running curl from within the container I get the following error:
root@b94ed81b0110:/etc# curl https://xxxx.xxxx.co.uk
curl: (60) SSL certificate problem: unable to get local issuer certificate
More details here: https://curl.haxx.se/docs/sslcerts.html
curl failed to verify the legitimacy of the server and therefore could not
establish a secure connection to it. To learn more about this situation and
how to fix it, please visit the web page mentioned above.
But I have installed the required certificates in the container:
root@b94ed81b0110:/etc# update-ca-certificates
Updating certificates in /etc/ssl/certs...
2 added, 0 removed; done.
Running hooks in /etc/ca-certificates/update.d...
Adding debian:admin.pem
Adding debian:assessor.pem
done.
done.
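One possible approach, sketched under assumptions (the host path and file name below are hypothetical): mount the self-signed CA into each node via the compose file so it survives container recreation, then run update-ca-certificates inside the node (or bake both into a derived image). Note that Chrome may consult its own NSS certificate store rather than the system bundle, so the system-level install alone may not be enough.

chrome:
  image: selenium/node-chrome:latest
  volumes:
    - /dev/shm:/dev/shm
    # hypothetical host file holding the self-signed CA in PEM format
    - ./certs/testenv-ca.crt:/usr/local/share/ca-certificates/testenv-ca.crt:ro
  depends_on:
    - selenium-hub
  environment:
    - HUB_HOST=selenium-hub
    - HUB_PORT=4444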

GitLab CI lftp command exits with an error when the connection ends

I'm trying to deploy my web app using the FTP protocol and GitLab continuous integration. The files all get uploaded and the site works fine, but I keep getting the following error when the GitLab runner is almost done.
My gitlab-ci.yml file:
stages:
  - build
  - test
  - deploy

build:
  stage: build
  tags:
    - shell
  script:
    - echo "Building"

test:
  stage: test
  tags:
    - shell
  script: echo "Running tests"

frontend-deploy:
  stage: deploy
  tags:
    - debian
  allow_failure: true
  environment:
    name: devallei
    url: https://devallei.azurewebsites.net/
  only:
    - master
  script:
    - echo "Deploy to staging server"
    - apt-get update -qq
    - apt-get install -y -qq lftp
    - lftp -c "set ftp:ssl-allow yes; set ssl:verify-certificate false; debug; open -u devallei\FTPAccesHoussem,Devallei2019 ftps://waws-prod-dm1-131.ftp.azurewebsites.windows.net/site/wwwroot; mirror -Rev ./frontend/dist /site/wwwroot"

backend-deploy:
  stage: deploy
  tags:
    - shell
  allow_failure: true
  only:
    - master
  script:
    - echo "Deploy spring boot application"
I expect the runner to go through and pass the job, but it gives me the following error:
---- Connecting data socket to (23.99.220.117) port 10033
---- Data connection established
---> ALLO 4329977
<--- 200 ALLO command successful.
---> STOR vendor.3b66c6ecdd8766cbd8b1.js.map
<--- 125 Data connection already open; Transfer starting.
---- Closing data socket
<--- 226 Transfer complete.
---> QUIT
gnutls_record_recv: The TLS connection was non-properly terminated. Assuming
EOF.
<--- 221 Goodbye.
---- Closing control socket
ERROR: Job failed: exit code 1
I don't know the reason for the "gnutls_record_recv: The TLS connection was non-properly terminated. Assuming EOF." error, but it makes your lftp command return a non-zero exit code, which makes GitLab think your job failed. The best thing would be to fix the underlying issue.
If you're sure everything works fine and want to prevent the lftp command from failing the job, append || true to the lftp command, as sketched below. But be aware that your job then won't fail even when a real error happens.
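Applied to your frontend-deploy job, only the last script line changes:

script:
  - echo "Deploy to staging server"
  - apt-get update -qq
  - apt-get install -y -qq lftp
  # "|| true" masks the non-zero exit caused by the TLS shutdown warning,
  # but it also masks genuine lftp failures
  - lftp -c "set ftp:ssl-allow yes; set ssl:verify-certificate false; debug; open -u devallei\FTPAccesHoussem,Devallei2019 ftps://waws-prod-dm1-131.ftp.azurewebsites.windows.net/site/wwwroot; mirror -Rev ./frontend/dist /site/wwwroot" || true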
