How to connect Airbyte with Airflow - Docker

I have Airflow and Airbyte installed locally with Docker. I want to set up a connection in Airflow to connect to Airbyte. I read the Airbyte docs and did exactly what they say, but I am getting an error. I have configured Airflow's docker-compose YAML to install the necessary packages:
ADDITIONAL_REQUIREMENTS: ${_PIP_ADDITIONAL_REQUIREMENTS:- apache-airflow-providers-http apache-airflow-providers-airbyte apache-airflow-providers-airbyte[http]}
My Airflow executor is CeleryExecutor.
In Airflow I configured the connection exactly as the Airbyte docs say. I also tried with Conn Type: Airbyte but am still getting the error.
The error says:
HTTPConnectionPool(host='localhost', port=8001): Max retries exceeded with url: /api/v1/health (Caused by NewConnectionError('<urllib3.connection.HTTPConnection object at 0x7f30e9e4fb10>: Failed to establish a new connection: [Errno 111] Connection refused'))

Airbyte's blog covers this scenario and how to get it working: https://airbyte.com/tutorials/how-to-use-airflow-and-airbyte-together
Disclaimer: I am the author of that article.

Finally got around to testing this. For me, using the Airbyte connection type that ships with the Airbyte provider, plus the username and password (default is "airbyte"/"password"), worked with Airflow 2.5.1 and Airbyte provider 3.2.0.
On the Airbyte side I followed their getting started docs.
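One thing worth checking in a docker-compose setup: inside the Airflow containers, localhost refers to the Airflow container itself, not to the host, so the Airbyte connection host usually has to point at the Airbyte server explicitly (for example host.docker.internal, or the Airbyte server's container name if both stacks share a Docker network). Once the connection resolves, a minimal DAG sketch to trigger a sync could look like the following - the connection IDs are placeholders, not values from the question:

from datetime import datetime

from airflow import DAG
from airflow.providers.airbyte.operators.airbyte import AirbyteTriggerSyncOperator

with DAG(
    dag_id="trigger_airbyte_sync",
    start_date=datetime(2023, 1, 1),
    schedule=None,
    catchup=False,
) as dag:
    # "airbyte_conn" is the Airflow connection (Conn Type: Airbyte) described above;
    # connection_id is the sync connection's UUID copied from the Airbyte UI.
    trigger_sync = AirbyteTriggerSyncOperator(
        task_id="airbyte_sync",
        airbyte_conn_id="airbyte_conn",
        connection_id="<airbyte-connection-uuid>",
        asynchronous=False,  # block until the sync finishes
        timeout=3600,
    )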

Related

Error on etcd health check while setting up RKE cluster

I'm trying to set up an RKE cluster. The connection to the nodes goes well, but when it starts to check etcd health it returns:
failed to check etcd health: failed to get /health for host [xx.xxx.x.xxx]: Get "https://xx.xxx.x.xxx:2379/health": remote error: tls: bad certificate
If you are trying to upgrade RKE and facing this issue, it could be due to the kube_config_<file>.yml file missing from the local directory when you perform rke up.
A similar issue was reported and reproduced in this Git link. Refer to the workaround, reproduce it using the steps provided in the link, and let me know if this works.
Refer to this recent SO answer and the docs for more information.
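As a sanity check before rerunning the upgrade, make sure the files RKE generated on the first run are still sitting next to cluster.yml - both the kube_config_<file>.yml mentioned above and the cluster.rkestate file hold state (including certificates) that RKE reuses. A rough sketch, assuming the default file names and a placeholder directory:

cd /path/to/rke-config                                     # placeholder: wherever cluster.yml lives
ls cluster.yml cluster.rkestate kube_config_cluster.yml    # all three should be present
rke up --config cluster.yml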

WSO2 EI 6.4.0 Docker Container - javax.net.ssl.SSLPeerUnverifiedException: SSL peer failed hostname validation for name: null

There is an implementation where API-1 calls another API-2; both are deployed in the same WSO2 6.4.0 Docker container.
The internal API call is not working. We got the error below in the logs:
Unable to sendViaPost to url[https://integ.company.com/wso2/api/queue_service]
javax.net.ssl.SSLPeerUnverifiedException: SSL peer failed hostname validation for name: null
at org.opensaml.ws.soap.client.http.TLSProtocolSocketFactory.verifyHostname(TLSProtocolSocketFactory.java:233)
at org.opensaml.ws.soap.client.http.TLSProtocolSocketFactory.createSocket(TLSProtocolSocketFactory.java:194)
In the background, there was some SSL certificate renewal activity at the HA Proxy level; after this we started to get the above error.
Can I get some suggestions to resolve this error?
Try importing the certificate used for 'https://integ.company.com/wso2/api/queue_service' into the WSO2 server's client-truststore. If that doesn't resolve the issue, add the full stack trace of the exception.
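A rough sketch of that import, assuming the endpoint is served on 443, the renewed certificate is saved as integ-company.pem (a placeholder name), and a default EI 6.4.0 layout; restart the server afterwards:

# fetch the certificate currently served (-servername matters behind HA Proxy / SNI)
openssl s_client -connect integ.company.com:443 -servername integ.company.com </dev/null | openssl x509 > integ-company.pem
# import it into the client trust-store (default password: wso2carbon)
keytool -import -trustcacerts -alias integ.company.com \
  -file integ-company.pem \
  -keystore <EI_HOME>/repository/resources/security/client-truststore.jks \
  -storepass wso2carbon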

Connection refused trying to get account info in Solana

When I get account info with
solana account <address>
I get this error:
Error: RPC request error: cluster version query failed: error sending request for url (http://localhost:8899/): error trying to connect: tcp connect error: Connection refused (os error 111)
The error indicates that the CLI RpcClient cannot communicate with a Solana validator.
This is usually caused by not having solana-test-validator running in another terminal. Many make the mistake of thinking that localhost is running the validator all the time... it's not.
In one terminal run solana-test-validator, which will start up the local validator.
Open a second terminal and run solana account; this will return account info for the default keypair.
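If the validator is already running and the error persists, it's worth confirming which RPC endpoint the CLI is pointed at - a quick sketch:

solana config get                  # shows the current json_rpc_url
solana config set --url localhost  # point the CLI at http://localhost:8899
solana account <address>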
It's a network connection problem.
I use Ubuntu 20.04. I was using Windscribe VPN because of my location and got the error; now I'm using Psiphon VPN and it's working fine.

Is there a way to allow Cloud Build steps to access Cloud SQL in GCP?

I'm setting up a Cloud Build trigger in order to deploy a PHP/Symfony application. When the Dockerfile runs the php app/console assetic:dump command to create the assets, I get the following error:
SQLSTATE[HY000] [2002] Connection timed out
[PDOException]
SQLSTATE[HY000] [2002] Connection timed out
[Doctrine\DBAL\Driver\PDOException]
An exception occurred in driver: SQLSTATE[HY000] [2002]
Connection timed out
[Doctrine\DBAL\Exception\ConnectionException]
I have resolved to getting the Docker container to connect to the database rather than fixing the Symfony application, because I don't know enough about the framework or PHP.
Is it possible to set this up so that I can allow some kind of IP on the Cloud SQL side to allow these connections?
A solution that sets up the Cloud SQL proxy in the same build step is described in the answer here:
Run node.js database migrations on Google Cloud SQL during Google Cloud Build
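For reference, the pattern from that answer boils down to starting the Cloud SQL proxy in the background inside the same build step and then running the command that needs the database. A sketch of a cloudbuild.yaml step, where the builder image and instance connection name are placeholders (the linked answer uses Node; the same shape applies to the Symfony command):

steps:
- id: dump-assets
  name: gcr.io/my-project/php-builder   # placeholder: any image with bash, curl and php
  entrypoint: bash
  args:
  - -c
  - |
    curl -sS https://dl.google.com/cloudsql/cloud_sql_proxy.linux.amd64 -o /tmp/cloud_sql_proxy
    chmod +x /tmp/cloud_sql_proxy
    /tmp/cloud_sql_proxy -instances=my-project:us-central1:my-db=tcp:3306 &
    sleep 5                             # give the proxy a moment to open the socket
    php app/console assetic:dump        # the app now reaches the DB on 127.0.0.1:3306

Note that the Cloud Build service account also needs the Cloud SQL Client role for the proxy to authenticate.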

Hyperledger - Blockchain Peers not connecting - Docker container properties

I am creating a sample blockchain network using the tutorial https://hyperledger-fabric.readthedocs.io/en/release-1.2/build_network.html. I am facing an error while connecting the peers:
Error: failed to create deliver client: orderer client failed to connect to orderer.example.com:7050: failed to create new connection: context deadline exceeded.
I found a probable solution here which I would like to test, but I need help with the points below:
How to update the default network of the containers
How to add a property for each container
While accessing my /etc/docker directory I get the error "Server returned empty listing for directory '/etc/docker'", and it also says permission denied when I try to access it from the terminal. Any help will be appreciated.
There is no need to make any changes to the Docker containers. I faced a similar issue; you can clean up the system space, or, if you are using a VM, install a fresh network in a new VM (assuming you already have all configuration files copied).
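If you still want to experiment with the network workaround from the question: /etc/docker is root-owned, which explains the permission errors, and Docker's default bridge addressing is configured through /etc/docker/daemon.json. A sketch, assuming that is the file the linked solution edits - the address range is a placeholder:

sudo tee /etc/docker/daemon.json <<'EOF'
{
  "default-address-pools": [
    { "base": "172.30.0.0/16", "size": 24 }
  ]
}
EOF
sudo systemctl restart docker   # networks created after the restart pick up the new pool

Per-container properties for the build-your-first-network sample live in its docker-compose YAML files (base/docker-compose-base.yaml and friends in fabric-samples), so that is where settings for individual peers or the orderer would go, not /etc/docker.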
