Chainlink node can't make requests to Chainlink external adapter (on localhost) - docker

I have a Chainlink node running on port 6688. I'm running it with Docker, using the following command:
cd ~/.chainlink-rinkeby && docker run -p 6688:6688 \
-v ~/.chainlink-rinkeby:/chainlink \
-it --env-file=.env \
smartcontract/chainlink:1.4.0-root local n -p /chainlink/.password -a /chainlink/.api
And I have an external adapter running on port 8080.
If I send it { "id": 0, "data": { "columns": ["blood","heath"], "linesAmount": 500 } }, it returns a correct payload, in the format that is expected from an external adapter:
{
  "jobRunID": 0,
  "data": {
    "ipfsHash": "anIpfshashShouldBeHere",
    "providers": [
      "0x03996eF07f84fEEe9f1dc18B255A8c01A4986701"
    ],
    "result": "anIpfshashShouldBeHere"
  },
  "result": "anIpfshashShouldBeHere",
  "statusCode": 200
}
The problem is that the Chainlink node, specifically in the fetch step, gives me an error:
error making http request: Post "http://localhost:8080": dial tcp 127.0.0.1:8080: connect: connection refused
Is it related to the Docker container? I don't see why it wouldn't be able to request resources from another port on the same machine. Am I missing some configuration, maybe?
From what I've read in the docs, it's possible to run the adapter locally.

If your External Adapter (EA) is running on http://localhost:8080 and you're trying to reach that EA from a Chainlink node running inside Docker, then you can't use localhost; you need to get out of the Docker container and onto the host running the Docker engine (your Windows or Mac machine).
To do so, define your bridge to use http://host.docker.internal:8080.
Further details can be found in the Docker Docs.
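If the node is running on a plain Linux host rather than Docker Desktop, host.docker.internal is not defined by default. One way to get the same behaviour (a sketch, assuming Docker 20.10+ and the same run command as in the question) is to add the host-gateway mapping when starting the node:

# Sketch: make host.docker.internal resolve to the host's gateway IP (Linux, Docker 20.10+)
cd ~/.chainlink-rinkeby && docker run -p 6688:6688 \
--add-host=host.docker.internal:host-gateway \
-v ~/.chainlink-rinkeby:/chainlink \
-it --env-file=.env \
smartcontract/chainlink:1.4.0-root local n -p /chainlink/.password -a /chainlink/.api

With that in place, pointing the bridge URL at http://host.docker.internal:8080 should let the node reach the adapter running on the host.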

Related

Error accessing Scylladb cluster outside docker container

I'm running ScyllaDB locally in a Docker container and I want to access the cluster from outside the container. That's when I get the following error: cassandra.cluster.NoHostAvailable: ('Unable to connect to any servers')
Datacenter: datacenter1
=======================
Status=Up/Down
|/ State=Normal/Leaving/Joining/Moving
-- Address Load Tokens Owns Host ID Rack
UN 172.17.0.2 776 KB 256 ? ad698c75-a465-4deb-a92c-0b667e82a84f rack1
Note: Non-system keyspaces don't have the same replication settings, effective ownership information is meaningless
Cluster Information:
Name: Test Cluster
Snitch: org.apache.cassandra.locator.SimpleSnitch
DynamicEndPointSnitch: disabled
Partitioner: org.apache.cassandra.dht.Murmur3Partitioner
Schema versions:
443048b2-c1fe-395e-accd-5ae9b6828464: [172.17.0.2]
I have no problem accessing the cluster using cqlsh on port 9042:
Connected to at 172.17.0.2:9042.
[cqlsh 5.0.1 | Cassandra 3.0.8 | CQL spec 3.3.1 | Native protocol v4]
Now I'm trying to access the cluster from my FastAPI app, which is outside the Docker container.
from cassandra.cluster import Cluster
cluster = Cluster(['172.17.0.2'])
session = cluster.connect('Test Cluster')
And here's the Error that I'm getting:
raise NoHostAvailable("Unable to connect to any servers", errors)
cassandra.cluster.NoHostAvailable: ('Unable to connect to any servers', {'172.17.0.2:9042': OSError(51, "Tried connecting to [('172.17.0.2', 9042)]. Last error: Network is unreachable")})
With a little bit of tinkering, it's possible to connect to the Scylla instance running in a container from outside the container for local development.
I've tried this on an M1 Mac with Docker Desktop:
Run the Scylla container with a couple of new parameters [src]:
--listen-address 0.0.0.0 for simplification: since we are spawning Scylla inside the container, this allows connections to the container from any network
--broadcast-rpc-address 127.0.0.1, required if --listen-address is set to 0.0.0.0. We are going to forward port 9042 from the container to the host (local) machine, so this is the IP where it will be accessible.
The final command to spawn the container is:
$ docker run --rm -ti \
-p 127.0.0.1:9042:9042 \
scylladb/scylla \
--smp 1 \
--listen-address 0.0.0.0 \
--broadcast-rpc-address 127.0.0.1
The -p 127.0.0.1:9042:9042 makes port 9042 accessible on the host (local) machine.
Install the driver with pip3 install scylla-driver, as it supports the darwin/arm64 architecture.
Write a simple Python script:
# so74265199.py
from cassandra.cluster import Cluster
cluster = Cluster(['127.0.0.1'])
session = cluster.connect()
# Select from a table that is available without keyspace
res = session.execute('SELECT * FROM system.versions')
print(res.one())
Run your script
$ python3 so74265199.py
Row(key='local', build_id='71178cf6db7021896cd8251751b78b3d9e3afa8d', build_mode='release', version='5.0.5-0.20221009.5a97a1060')
Disclaimer: I'm not an expert in Scylla's configuration, so feel free to point out a better approach.

404 Error with "local" Step Functions State Machine calling moto_server on host

Using an AWS Step Functions State Machine (SFSM) in "local" mode, i.e. running inside Docker on my laptop
Trying to run a task pointing to a mocked service on my laptop host
I can install SFSM correctly in Docker, update it, and run it:
aws stepfunctions --endpoint http://localhost:8083 create-state-machine \
--definition file://src/my_sfn.json \
--role-arn 'arn:aws:iam::ACCTNUM:role/DummyRole' \
--name my_sfn
I can run curl http://host.docker.internal:5000 from inside Docker and connect to the moto_server
I installed the AWS CLI on the SFSM container, ran aws sns --endpoint http://host.docker.internal:5000 list-topics, and it showed the correct topics
I set the SNS_ENDPOINT URL in my Docker env file to point to the host machine
But I always get 404 Not Found errors if running the SFSM:
2021-03-04 18:59:24.472: arn:aws:states:us-east-1:ACCTNUM:execution:my_sfn:7151fcf4-7e6f-4b16-9b64-2ac913e27e4c : {"Type":"TaskSubmitFailed","PreviousEventId":4,"TaskSubmitFailedEventDetails":{"ResourceType":"sns","Resource":"publish","Error":"SNS.AmazonSNSException","Cause":"null (Service: AmazonSNS; Status Code: 404; Error Code: 404 NOT FOUND; Request ID: null; Proxy: null)"}}
2021-03-04 18:59:24.473: arn:aws:states:us-east-1:ACCTNUM:execution:my_sfn:7151fcf4-7e6f-4b16-9b64-2ac913e27e4c : {"Type":"ExecutionFailed","PreviousEventId":5,"ExecutionFailedEventDetails":{"Error":"SNS.AmazonSNSException","Cause":"null (Service: AmazonSNS; Status Code: 404; Error Code: 404 NOT FOUND; Request ID: null; Proxy: null)"}}
my_sfn.json:
{
  "StartAt": "step1",
  "States": {
    "step1": {
      "Type": "Task",
      "Resource": "arn:aws:states:::sns:publish",
      "Parameters": {
        "Message": {"Input": "howdy"},
        "TopicArn": "arn:aws:sns:us-west-2:ACCTNUM:foo"
      },
      "End": true
    }
  }
}
docker-env.txt:
SNS_ENDPOINT=http://host.docker.internal:5000
Any idea how to fix this?
Looks like I had to add more info to the environment file:
AWS_ACCOUNT_ID=123456789012
AWS_ACCESS_KEY_ID=AAAAAAAAAAAAAAAA
AWS_SECRET_ACCESS_KEY=AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AWS_DEFAULT_REGION=us-west-2
I used the same account ID and region consistently, and it worked.
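For completeness, a sketch of how the env file gets picked up, assuming the standard amazon/aws-stepfunctions-local image and that docker-env.txt now contains both SNS_ENDPOINT and the AWS_* variables above:

# Sketch: start Step Functions Local with the combined env file
docker run -p 8083:8083 --env-file docker-env.txt amazon/aws-stepfunctions-local

The endpoint override and credentials are read when the container is created, so the container has to be recreated after editing the file.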

Hashicorp Vault docker networking issue

When setting up on a brand-new EC2 server as a test, I run the following and it all works fine.
/vault/config/local.json
{
  "listener": [{
    "tcp": {
      "address": "0.0.0.0:8200",
      "tls_disable": 1
    }
  }],
  "storage": {
    "file": {
      "path": "/vault/data"
    }
  },
  "max_lease_ttl": "10h",
  "default_lease_ttl": "10h",
  "ui": true
}
docker run -d -p 8200:8200 -v /home/ec2-user/vault:/vault --cap-add=IPC_LOCK vault server
export VAULT_ADDR='http://0.0.0.0:8200'
vault operator init
I unseal and log in fine.
On one of our corporate test servers, I use 0.0.0.0 and get a "web server busy, sorry" page on the init. However, if I export 127.0.0.1, the init works fine. I cannot access the container from the server's command line with curl using either 0.0.0.0 or 127.0.0.1. I'm unsure why the behaviours are different.
I understand that 127.0.0.1 should not work, but why do I get "server busy" on 0.0.0.0 on one server and not the other, when the actual container is the same?
Thanks Mark
The listener works fine in the container with 0.0.0.0. To access the container externally, you need to set VAULT_ADDR to an address the host server understands, not one that only exists inside the container.
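For example (a sketch, assuming port 8200 is published with -p 8200:8200 as in your run command, and my-test-server is just a placeholder hostname):

# Sketch: from the Docker host itself, use the published port
export VAULT_ADDR='http://127.0.0.1:8200'
vault status

# Sketch: from another machine, use an address that routes to the Docker host
export VAULT_ADDR='http://my-test-server:8200'
vault status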

Config Vault Docker container with Consul Docker container

I am trying to deploy Vault Docker image to work with Consul Docker image as its storage.
I have the following JSON config file for the Vault container:
{
  "listener": [{
    "tcp": {
      "address": "0.0.0.0:8200",
      "tls_disable": 1
    }
  }],
  "storage": {
    "consul": {
      "address": "127.0.0.1:8500",
      "path": "vault/"
    }
  },
  "max_lease_ttl": "10h",
  "default_lease_ttl": "10h",
  "ui": true
}
Running the Consul container:
docker run -d -p 8501:8500 -it consul
and then running the Vault container:
docker run -d -p 8200:8200 -v /root/vault:/vault --cap-add=IPC_LOCK vault server
Immediately after the Vault container comes up, it stops running, and when I query the logs I get the following error:
Error detecting api address: Get http://127.0.0.1:8500/v1/agent/self: dial tcp 127.0.0.1:8500: connect: connection refused
Error initializing core: missing API address, please set in configuration or via environment
Any ideas why I am getting this error, and whether I have a configuration problem?
Since you are running Docker, the 127.0.0.1 address you are pointing to is inside your Vault container, but Consul isn't listening there; it's listening on your Docker server's localhost.
So I would recommend that you add a link (--link consul:consul) when you start the Vault container, and set "address": "consul:8500" in the config.
Or, change "address": "127.0.0.1:8500" to "address": "172.17.0.1:8501" so it goes through the port you forwarded to the Docker host (-p 8501:8500). The IP is whatever is set on your docker0 interface. That's not as nice, though, since the docker0 address isn't guaranteed and can be changed in the Docker configuration, so I recommend linking.
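A minimal sketch of the linked setup described above (the container name consul is just an example, and the config path mirrors your docker run volume mount):

# Sketch: give the Consul container a name, then link it into the Vault container
docker run -d --name consul -p 8501:8500 -it consul
docker run -d --link consul:consul -p 8200:8200 \
-v /root/vault:/vault --cap-add=IPC_LOCK vault server
# and in /root/vault/config/local.json use: "address": "consul:8500"

Inside the Vault container, the hostname consul then resolves to the Consul container's IP, so no host ports or docker0 addresses are involved.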

Vault Docker Image - Can't get REST Response

I am deploying the Vault Docker image on Ubuntu 16.04. I can successfully initialize it from inside the container itself, but I can't get any REST responses, and even curl does not work.
I am doing the following:
Create a config file local.json:
{
  "listener": [{
    "tcp": {
      "address": "127.0.0.1:8200",
      "tls_disable": 1
    }
  }],
  "storage": {
    "file": {
      "path": "/vault/data"
    }
  },
  "max_lease_ttl": "10h",
  "default_lease_ttl": "10h"
}
under the /vault/config directory.
Run the command to start the container:
docker run -d -p 8200:8200 -v /home/vault:/vault --cap-add=IPC_LOCK vault server
Enter the container's shell:
docker exec -it containerId /bin/sh
Run the following command inside:
export VAULT_ADDR='http://127.0.0.1:8200' and then vault init
It works fine, but when I try to send a REST request to check whether Vault is initialized:
GET request to the following URL: http://Ip-of-the-docker-host:8200/v1/sys/init
I get no response.
Even the curl command fails:
curl http://127.0.0.1:8200/v1/sys/init
curl: (56) Recv failure: Connection reset by peer
I didn't find a proper explanation anywhere online of what the problem is, or whether I am doing something wrong.
Any ideas?
If a server running in a Docker container binds to 127.0.0.1, it's unreachable from anything outside that specific container (and since containers usually only run a single process, that means it's unreachable by anyone). Change the listener address to 0.0.0.0:8200; if you need to restrict access to the Vault server, bind it to a specific host address in the docker run -p option.
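A quick way to verify (a sketch, assuming the same docker run command and that the only change is the listener address in /home/vault/config/local.json):

# Sketch: rebind the listener, restart the container, then hit the published port from outside
# local.json now contains: "address": "0.0.0.0:8200"
docker run -d -p 8200:8200 -v /home/vault:/vault --cap-add=IPC_LOCK vault server
curl http://Ip-of-the-docker-host:8200/v1/sys/init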
