Vault Docker Image - Can't get REST Response - docker

I am deploying the Vault Docker image on Ubuntu 16.04. I can initialize it successfully from inside the container itself, but I can't get any REST responses, and even curl does not work.
I am doing the following:
Create the config file local.json:
{
  "listener": [{
    "tcp": {
      "address": "127.0.0.1:8200",
      "tls_disable": 1
    }
  }],
  "storage": {
    "file": {
      "path": "/vault/data"
    }
  },
  "max_lease_ttl": "10h",
  "default_lease_ttl": "10h"
}
under the /vault/config directory.
Then I run the command to start the container:
docker run -d -p 8200:8200 -v /home/vault:/vault --cap-add=IPC_LOCK vault server
Then I open a shell in the container:
docker exec -it containerId /bin/sh
Inside it I run the following:
export VAULT_ADDR='http://127.0.0.1:8200' and then vault init
This works fine, but when I try to send a REST request to check whether Vault is initialized:
GET request to the following URL: http://Ip-of-the-docker-host:8200/v1/sys/init
I get no response.
Even the curl command fails:
curl http://127.0.0.1:8200/v1/sys/init
curl: (56) Recv failure: Connection reset by peer
I haven't found a proper explanation anywhere online of what the problem is, or whether I am doing something wrong.
Any ideas?

If a server running in a Docker container binds to 127.0.0.1, it's unreachable from anything outside that specific container (and since containers usually run only a single process, that means it's unreachable by anyone). Change the listener address to 0.0.0.0:8200; if you need to restrict access to the Vault server, bind the published port to a specific host address in the docker run -p option instead.
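A minimal sketch of that fix, reusing the paths and ports from the question: the listener stanza becomes

"listener": [{
  "tcp": {
    "address": "0.0.0.0:8200",
    "tls_disable": 1
  }
}]

and, if the API should only be reachable from the Docker host itself, the port can be published on the host's loopback address:

docker run -d -p 127.0.0.1:8200:8200 -v /home/vault:/vault --cap-add=IPC_LOCK vault server

The container still binds all of its own interfaces; Docker's -p mapping then controls which host addresses can reach it.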

Related

Chainlink node can't make requests to Chainlink external adapter (on localhost)

I have a Chainlink node which is running on port 6688. I'm running it with Docker, with the following command:
cd ~/.chainlink-rinkeby && docker run -p 6688:6688 \
-v ~/.chainlink-rinkeby:/chainlink \
-it --env-file=.env \
smartcontract/chainlink:1.4.0-root local n -p /chainlink/.password -a /chainlink/.api
And I have an external adapter running on port 8080.
If I send it { "id": 0, "data": { "columns": ["blood","heath"], "linesAmount": 500 } }, it returns a correct payload in the format expected from an external adapter:
{
  "jobRunID": 0,
  "data": {
    "ipfsHash": "anIpfshashShouldBeHere",
    "providers": [
      "0x03996eF07f84fEEe9f1dc18B255A8c01A4986701"
    ],
    "result": "anIpfshashShouldBeHere"
  },
  "result": "anIpfshashShouldBeHere",
  "statusCode": 200
}
The problem is that the Chainlink node, specifically in its fetch method, gives me an error:
error making http request: Post "http://localhost:8080": dial tcp 127.0.0.1:8080: connect: connection refused
Is it related to the Docker container? I don't see why it wouldn't be able to request resources from another port on the same machine. Am I missing some configuration, maybe?
From what I've read in the docs, it's possible to run the adapter locally.
If your External Adapter (EA) is running on http://localhost:8080 and you're trying to reach that EA from a Chainlink node running inside Docker, then you can't use localhost; you need to get out of the Docker container and onto the host running the Docker engine (your Windows or Mac machine).
To do so, define your bridge to use http://host.docker.internal:8080.
Further details can be found in the Docker docs.
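As a rough sketch (the bridge name here is made up for illustration), the bridge created on the Chainlink node would point at the host alias instead of localhost:

{
  "name": "my-external-adapter",
  "url": "http://host.docker.internal:8080"
}

Note that on Linux host.docker.internal is not defined by default; on Docker 20.10 or later it can be added by passing --add-host=host.docker.internal:host-gateway to the docker run command.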

How to add the docker --name parameter into a Kubernetes cluster

I am deploying the xxl-job application in Kubernetes (v1.15.2). The application deploys successfully, but the registry client service fails. If deployed in Docker, it would look like this:
docker run -e PARAMS="--spring.datasource.url=jdbc:mysql://mysql-service.example.com/xxl-job?Unicode=true&characterEncoding=UTF-8 --spring.datasource.username=root --spring.datasource.password=<mysql-password>" -p 8180:8080 -v /tmp:/data/applogs --name xxl-job-admin -d xuxueli/xxl-job-admin:2.0.2
When the application starts, the server side gives me this warning:
22:33:21.078 logback [http-nio-8080-exec-7] WARN o.s.web.servlet.PageNotFound - No mapping found for HTTP request with URI [/xxl-job-admin/api/registry] in DispatcherServlet with name 'dispatcherServlet'
Searching the project's issues, I found that the problem may be that I could not pass the project name from Docker so that it becomes part of the URL, which leads to this warning. The client side gives this error:
23:19:18.262 logback [xxl-job, executor ExecutorRegistryThread] INFO c.x.j.c.t.ExecutorRegistryThread - >>>>>>>>>>> xxl-job registry fail, registryParam:RegistryParam{registryGroup='EXECUTOR', registryKey='job-schedule-executor', registryValue='172.30.184.4:9997'}, registryResult:ReturnT [code=500, msg=xxl-rpc remoting fail, StatusCode(404) invalid. for url : http://xxl-job-service.dabai-fat.svc.cluster.local:8080/xxl-job-admin/api/registry, content=null]
So to solve the problem, I should run the application in Kubernetes with as close to the same command as in Docker. The question is: how do I pass the docker --name parameter to the Kubernetes environment? I have already tried this:
"env": [
{
"name": "name",
"value": "xxl-job-admin"
}
],
and also tried this:
"containers": [
{
"name": "xxl-job-admin",
"image": "xuxueli/xxl-job-admin:2.0.2",
}
]
Neither worked.
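For reference, a sketch of how the docker run flags would map onto a Kubernetes container spec (the pod-level volume definition and any Secret handling are omitted, and the password placeholder is kept as in the question): -e PARAMS becomes an env entry, -p becomes a containerPort exposed through a Service, and -v becomes a volumeMount. The --name flag only names the container for the Docker CLI; it is not what puts /xxl-job-admin into the application's URL, which presumably comes from the application's own context-path setting.

"containers": [
  {
    "name": "xxl-job-admin",
    "image": "xuxueli/xxl-job-admin:2.0.2",
    "env": [
      {
        "name": "PARAMS",
        "value": "--spring.datasource.url=jdbc:mysql://mysql-service.example.com/xxl-job?Unicode=true&characterEncoding=UTF-8 --spring.datasource.username=root --spring.datasource.password=<mysql-password>"
      }
    ],
    "ports": [
      { "containerPort": 8080 }
    ],
    "volumeMounts": [
      { "name": "applogs", "mountPath": "/data/applogs" }
    ]
  }
]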

Hashicorp Vault docker networking issue

When setting up on a brand-new EC2 server as a test, I run the following and it all works fine.
/vault/config/local.json
{
  "listener": [{
    "tcp": {
      "address": "0.0.0.0:8200",
      "tls_disable": 1
    }
  }],
  "storage": {
    "file": {
      "path": "/vault/data"
    }
  },
  "max_lease_ttl": "10h",
  "default_lease_ttl": "10h",
  "ui": true
}
docker run -d -p 8200:8200 -v /home/ec2-user/vault:/vault --cap-add=IPC_LOCK vault server
export VAULT_ADDR='http://0.0.0.0:8200'
vault operator init
I unseal and log in fine.
On one of our corporate test servers I use 0.0.0.0 and I get a 'web server busy' sorry page on the init. However, if I export 127.0.0.1, the init works fine. I cannot access the container from the server command line with curl using either 0.0.0.0 or 127.0.0.1. I'm unsure why the behaviours differ.
I understand that 127.0.0.1 should not work, but why do I get 'server busy' on 0.0.0.0 on one server and not the other?
Thanks, Mark
The listener works fine in the container with 0.0.0.0. To access the container externally, you need to set VAULT_ADDR to an address the server understands, not the container's; 0.0.0.0 is a bind address, not a destination you can connect to.
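A small sketch, run from the Docker host itself (the -p 8200:8200 mapping from the question makes the container's listener reachable on the host's loopback):

export VAULT_ADDR='http://127.0.0.1:8200'
vault status
vault operator init

From another machine, 127.0.0.1 would be replaced with the EC2 host's IP or DNS name, assuming the corporate firewall allows port 8200 through.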

Config Vault Docker container with Consul Docker container

I am trying to deploy the Vault Docker image to work with the Consul Docker image as its storage backend.
I have the following JSON config file for the Vault container:
{
  "listener": [{
    "tcp": {
      "address": "0.0.0.0:8200",
      "tls_disable": 1
    }
  }],
  "storage": {
    "consul": {
      "address": "127.0.0.1:8500",
      "path": "vault/"
    }
  },
  "max_lease_ttl": "10h",
  "default_lease_ttl": "10h",
  "ui": true
}
I am running the Consul container:
docker run -d -p 8501:8500 -it consul
and then running the Vault container:
docker run -d -p 8200:8200 -v /root/vault:/vault --cap-add=IPC_LOCK vault server
Immediately after the Vault container comes up, it stops running, and when I query the logs I see the following error:
Error detecting api address: Get http://127.0.0.1:8500/v1/agent/self: dial tcp 127.0.0.1:8500: connect: connection refused
Error initializing core: missing API address, please set in configuration or via environment
Any ideas why I am getting this error, and whether I have a configuration problem?
Since you are running Docker, the 127.0.0.1 address you are pointing at is inside your Vault container, but Consul isn't listening there; it's listening on your Docker server's localhost!
So I would recommend creating a link (--link consul:consul) when you start the Vault container, and setting "address": "consul:8500" in the config.
Alternatively, point "address" at your Docker server's docker0 interface to reach the forwarded port; the IP is whatever is set on that interface (typically 172.17.0.1), and since you published Consul on host port 8501, that would be "address": "172.17.0.1:8501". This is not as nice, though, since the address isn't official and can change with configuration, so I recommend linking.
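A minimal sketch of the linking approach; the container name consul is an assumption added here so that --link has a stable name to refer to:

docker run -d --name consul -p 8501:8500 consul
docker run -d -p 8200:8200 -v /root/vault:/vault --cap-add=IPC_LOCK --link consul:consul vault server

with the storage stanza in the Vault config pointed at that name:

"storage": {
  "consul": {
    "address": "consul:8500",
    "path": "vault/"
  }
}

On current Docker versions, putting both containers on a user-defined network (docker network create) gives the same name-based DNS without the legacy --link flag.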

Error checking seal status while connecting to Vault Docker

I'm trying to run the Vault Docker image in server mode as described here. This is the command I'm using to run Vault:
docker run --cap-add=IPC_LOCK -e 'VAULT_LOCAL_CONFIG={"backend": {"file": {"path": "/home/jwahba/PycharmProjects/work/vault/vault.json"}}, "default_lease_ttl": "168h", "max_lease_ttl": "720h"}' vault server
And this is the vault.json configuration file:
storage "inmem" {}
listener "tcp" {
address = "127.0.0.1:8200"
tls_disable = 1
}
disable_mlock = true
The container comes up successfully.
docker ps
CONTAINER ID   IMAGE   COMMAND                  CREATED         STATUS         PORTS      NAMES
55100205d2ab   vault   "docker-entrypoint..."   6 minutes ago   Up 6 minutes   8200/tcp   stoic_blackwell
However, when I try to execute
docker exec stoic_blackwell vault status
I get the below error:
Error checking seal status: Get https://127.0.0.1:8200/v1/sys/seal-status: dial tcp 127.0.0.1:8200: connect: connection refused
There is a similar question here but unfortunately I couldn't figure out what I misconfigured.
Any suggestions please?
The VAULT_LOCAL_CONFIG parameter specifies your Vault's configuration; with the {"backend": {"file": ... stanza you set a file storage backend, so the path you passed is treated as a data directory, not as a configuration file to load.
So, in VAULT_LOCAL_CONFIG you should directly include what you wrote in your configuration file (vault.json); the file itself is never read.
Side note: the configuration you wrote is in HCL, not JSON.
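A hedged sketch of a corrected command, assuming the official image's behaviour of writing VAULT_LOCAL_CONFIG out as a local.json config file (hence the JSON form of the HCL above):

docker run --cap-add=IPC_LOCK -p 8200:8200 \
  -e 'VAULT_LOCAL_CONFIG={"storage": {"inmem": {}}, "listener": [{"tcp": {"address": "0.0.0.0:8200", "tls_disable": 1}}], "disable_mlock": true, "default_lease_ttl": "168h", "max_lease_ttl": "720h"}' \
  vault server

With TLS disabled, the CLI also needs a matching scheme, e.g. docker exec stoic_blackwell sh -c 'VAULT_ADDR=http://127.0.0.1:8200 vault status'.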
Please try it with the command below:
vault status -tls-skip-verify
