When setting up Vault on a brand-new EC2 server as a test, I run the following and it all works fine.
/vault/config/local.json
{
"listener": [{
"tcp": {
"address": "0.0.0.0:8200",
"tls_disable": 1
}
}],
"storage": {
"file": {
"path": "/vault/data"
}
},
"max_lease_ttl": "10h",
"default_lease_ttl": "10h",
"ui": true
}
docker run -d -p 8200:8200 -v /home/ec2-user/vault:/vault --cap-add=IPC_LOCK vault server
export VAULT_ADDR='http://0.0.0.0:8200'
vault operator init
I unseal and login fine.
On one of our corporate test servers, using 0.0.0.0 gives me a "web server busy" sorry page on the init. However, if I export 127.0.0.1 instead, the init works fine. I also cannot reach the container from the server's command line with curl against either 0.0.0.0 or 127.0.0.1. I'm unsure why the behaviours differ.
I understand that 127.0.0.1 should not work, but why do I get "server busy" on 0.0.0.0 on one server and not the other, when the container itself is the same?
Thanks Mark
The listener works fine in the container with 0.0.0.0. To access the container externally, you need to set VAULT_ADDR to an address the host understands, not an address internal to the container.
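For example, from the host (or any machine that can reach it) you could point VAULT_ADDR at the host's own address. This is a sketch: 10.0.0.5 is a placeholder for your server's IP or DNS name, not a value from the question.

```shell
# Point VAULT_ADDR at an address the host understands; 10.0.0.5 is a
# placeholder for your server's IP or DNS name.
export VAULT_ADDR='http://10.0.0.5:8200'

# /v1/sys/init answers without a token, so it's a quick reachability check.
curl --connect-timeout 3 "$VAULT_ADDR/v1/sys/init" || echo "Vault not reachable"
```

If this curl succeeds where the 0.0.0.0 one did not, the container itself is fine and the difference is in how each server routes the address you exported.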
I have a chainlink Node which is running on port 6688. I'm running it with docker, with the following command:
cd ~/.chainlink-rinkeby && docker run -p 6688:6688 \
-v ~/.chainlink-rinkeby:/chainlink \
-it --env-file=.env \
smartcontract/chainlink:1.4.0-root local n -p /chainlink/.password -a /chainlink/.api
And I have an external adapter running on port 8080.
If I send it { "id": 0, "data": { "columns": ["blood","heath"], "linesAmount": 500 } }, it returns a correct payload, in the format expected from an external adapter:
{
"jobRunID": 0,
"data": {
"ipfsHash": "anIpfshashShouldBeHere",
"providers": [
"0x03996eF07f84fEEe9f1dc18B255A8c01A4986701"
],
"result": "anIpfshashShouldBeHere"
},
"result": "anIpfshashShouldBeHere",
"statusCode": 200
}
The problem is that the Chainlink node, specifically in its fetch method, gives me an error:
error making http request: Post "http://localhost:8080": dial tcp 127.0.0.1:8080: connect: connection refused
Is it related to the Docker container? I don't see why it wouldn't be able to request resources from another port on the same machine. Am I missing some configuration, maybe?
From what I've read in the docs, it's possible to run the adapter locally.
If your External Adapter (EA) is running on http://localhost:8080 and you're trying to reach that EA from a Chainlink node running inside Docker, then you can't use localhost; you need to get out of the Docker container and onto the host running the Docker engine (your Windows or Mac machine).
To do so, define your bridge to use http://host.docker.internal:8080.
Further details can be found in the Docker Docs.
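As a sketch, you can verify the route from inside the container by sending the same request the question uses, but to host.docker.internal instead of localhost. On Docker Desktop (Mac/Windows) the name resolves out of the box; on Linux it is an assumption that you start the container with --add-host=host.docker.internal:host-gateway.

```shell
# Same payload as in the question, posted to the host instead of
# the container's own localhost.
payload='{"id": 0, "data": {"columns": ["blood", "heath"], "linesAmount": 500}}'
curl --connect-timeout 3 -X POST http://host.docker.internal:8080 \
  -H 'Content-Type: application/json' \
  -d "$payload" || echo "adapter not reachable from this container"
```

If this returns the expected payload, pointing the bridge at the same URL fixes the "connection refused" in the fetch step.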
I am trying to deploy the Vault Docker image to work with the Consul Docker image as its storage backend.
I have the following JSON config file for the Vault container:
{
"listener": [{
"tcp": {
"address": "0.0.0.0:8200",
"tls_disable": 1
}
}],
"storage": {
"consul": {
"address": "127.0.0.1:8500",
"path": "vault/"
}
},
"max_lease_ttl": "10h",
"default_lease_ttl": "10h",
"ui": true
}
Running the Consul container:
docker run -d -p 8501:8500 -it consul
and then running the Vault container:
docker run -d -p 8200:8200 -v /root/vault:/vault --cap-add=IPC_LOCK vault server
Immediately after the Vault container comes up, it stops running, and when querying the logs I see the following error:
Error detecting api address: Get http://127.0.0.1:8500/v1/agent/self: dial tcp 127.0.0.1:8500: connect: connection refused
Error initializing core: missing API address, please set in configuration or via environment
Any ideas why I am getting this error, and if I have any configuration problem?
Since you are running in Docker, the 127.0.0.1 address you are pointing at is inside your Vault container, but Consul isn't listening there; it's listening on your Docker server's localhost!
So I would recommend doing a link (--link consul:consul) when you start the Vault container, and setting "address": "consul:8500" in the config.
Alternatively, change "address": "127.0.0.1:8500" to "address": "172.17.0.1:8500" to connect to the Docker host's forwarded port 8500. The IP is whatever is set on your docker0 interface. This is not as nice, though, since it isn't official and can change with the Docker configuration, so I recommend linking.
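Put together, a local.json along those lines might look like this. This is a sketch: the hostname consul only resolves because of the --link alias, and the listener/TTL values are carried over from the question's config.

```json
{
  "listener": [{
    "tcp": {
      "address": "0.0.0.0:8200",
      "tls_disable": 1
    }
  }],
  "storage": {
    "consul": {
      "address": "consul:8500",
      "path": "vault/"
    }
  },
  "max_lease_ttl": "10h",
  "default_lease_ttl": "10h",
  "ui": true
}
```

Note that with the link, Vault talks to Consul's container port 8500 directly, so the -p 8501:8500 host mapping is irrelevant to it.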
I am deploying the Vault Docker image on Ubuntu 16.04. I can successfully initialize it from inside the container itself, but I can't get any REST responses, and even curl does not work.
I am doing the following:
Create a config file local.json:
{
"listener": [{
"tcp": {
"address": "127.0.0.1:8200",
"tls_disable": 1
}
}],
"storage": {
"file": {
"path": "/vault/data"
}
},
"max_lease_ttl": "10h",
"default_lease_ttl": "10h"
}
under the /vault/config directory.
Running the command to start the container:
docker run -d -p 8200:8200 -v /home/vault:/vault --cap-add=IPC_LOCK vault server
Entering a shell in the container:
docker exec -it containerId /bin/sh
Running the following inside the container:
export VAULT_ADDR='http://127.0.0.1:8200' and then vault init
It works fine, but when I try to send a REST request to check whether Vault is initialized:
GET request to the following URL: http://Ip-of-the-docker-host:8200/v1/sys/init
I get no response.
Even the curl command fails:
curl http://127.0.0.1:8200/v1/sys/init
curl: (56) Recv failure: Connection reset by peer
I didn't find a proper explanation anywhere online of what the problem is, or whether I am doing something wrong.
Any ideas?
If a server running in a Docker container binds to 127.0.0.1, it's unreachable from anything outside that specific container (and since containers usually only run a single process, that means it's unreachable by anyone). Change the listener address to 0.0.0.0:8200; if you need to restrict access to the Vault server, bind it to a specific host address in the docker run -p option.
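For example, the listener stanza from the question's local.json would become:

```json
"listener": [{
  "tcp": {
    "address": "0.0.0.0:8200",
    "tls_disable": 1
  }
}]
```

If you then want to limit who can reach it, publish the port on a specific host interface instead, e.g. docker run -p 127.0.0.1:8200:8200 ..., which accepts connections only from the host's own loopback.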
I've been running Vault in server mode with the official example provided in the Docker Vault documentation. Though the server starts successfully, I cannot interact with it via its HTTP REST API. My docker run command is attached below.
docker run -e 'SKIP_SETCAP=1' -e 'VAULT_LOCAL_CONFIG={"backend": {"file": {"path": "/vault/file"}}, "listener": { "tcp": { "address": "0.0.0.0:8200", "tls_disable": 1 } }, "default_lease_ttl": "168h", "max_lease_ttl": "720h", "disable_mlock": "true"}' vault server
When I try to curl into the vault server to validate the vault server initialization it throws a Connection refused error.
ravindu#ravindu-Aspire-F5-573G:~$ curl http://127.0.0.1:8201/v1/sys/init
curl: (7) Failed to connect to 127.0.0.1 port 8201: Connection refused
Given below is the message displayed when the Vault Docker container is up and running:
==> Vault server configuration:
Cgo: disabled
Listener 1: tcp (addr: "0.0.0.0:8200", cluster address: "0.0.0.0:8201", tls: "disabled")
Log Level: info
Mlock: supported: true, enabled: false
Storage: file
Version: Vault v0.8.3
Version Sha: 6b29fb2b7f70ed538ee2b3c057335d706b6d4e36
==> Vault server started! Log data will stream in below:
Given below is my local.json within the Vault container:
{"backend": {"file": {"path": "/vault/file"}}, "listener": { "tcp": { "address": "0.0.0.0:8200", "tls_disable": 1 } }, "default_lease_ttl": "168h", "max_lease_ttl": "720h", "disable_mlock": "true"}
The reason you can't curl is that you haven't published the port.
You need to add -p 8200:8200 to your run command, and connect on port 8200.
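A corrected invocation might look like this: everything is unchanged except the published port. Note also that your curl targeted 8201, which is Vault's cluster port (see the startup log); the HTTP API listens on 8200.

```shell
# Publish the API port on the host: -p HOST:CONTAINER.
api_port=8200
docker run -d -p "${api_port}:8200" \
  -e 'SKIP_SETCAP=1' \
  -e 'VAULT_LOCAL_CONFIG={"backend": {"file": {"path": "/vault/file"}}, "listener": { "tcp": { "address": "0.0.0.0:8200", "tls_disable": 1 } }, "default_lease_ttl": "168h", "max_lease_ttl": "720h", "disable_mlock": "true"}' \
  vault server || echo "docker not available"

# Query the API port, not the 8201 cluster port.
curl --connect-timeout 3 "http://127.0.0.1:${api_port}/v1/sys/init" || echo "Vault not reachable"
```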
I'm trying to run a Rails project using Nginx with Docker and Vagrant. Everything is OK if I use the Vagrant box ubuntu/trusty64: I provision the VM and everything works. But I wanted to create my own box from ubuntu/trusty64, and this is where all my problems began.
So I created the box using packer and this template:
{
"variables": {
"home": "{{env `HOME`}}"
},
"provisioners": [
{
"type": "shell",
"execute_command": "echo 'vagrant'|sudo -S sh '{{.Path}}'",
"override": {
"virtualbox-ovf": {
"scripts": [
"scripts/docker.sh",
"scripts/ansible.sh",
"scripts/cleanup.sh",
"scripts/zerodisk.sh"
]
}
}
}
],
"post-processors": [
{
"type": "vagrant",
"override": {
"virtualbox": {
"output": "ubuntu-14-04-x64-virtualbox.box"
}
}
}
],
"builders": [
{
"type": "virtualbox-ovf",
"headless": "true",
"boot_wait": "10s",
"source_path": "{{user `home`}}/.vagrant.d/boxes/ubuntu-VAGRANTSLASH-trusty64/14.04/virtualbox/box.ovf",
"ssh_username": "vagrant",
"ssh_password": "vagrant",
"ssh_port": 22,
"ssh_wait_timeout": "10000s",
"shutdown_command": "echo 'shutdown -P now' > /tmp/shutdown.sh; echo 'vagrant'|sudo -S sh '/tmp/shutdown.sh'",
"vboxmanage": [
[ "modifyvm", "{{.Name}}", "--memory", "512" ],
[ "modifyvm", "{{.Name}}", "--cpus", "1" ]
]
}
]
}
Then I added the box as pedrof/base-box in Vagrant's boxes and used this Vagrantfile to start the VM:
Vagrant.configure(2) do |config|
config.vm.provider 'virtualbox' do |v|
v.memory = 2048
v.cpus = 2
end
config.vm.box = 'pedrof/base-box'
config.vm.synced_folder '.', '/vagrant', type: 'nfs', mount_options: ['nolock,vers=3,udp,noatime,actimeo=1']
config.vm.network :private_network, ip: '172.17.8.100'
config.vm.provision 'shell', path: "docker/build.sh"
config.vm.provision 'shell', path: "docker/init.sh", run: 'always'
end
It starts the VM and brings up the Docker containers using docker-compose. Everything is OK, except that I can't access http://172.17.8.100 from the browser, though ping responds fine from the host. I ran curl to hit Nginx from inside the VM and it responded with the proper index page, but I get nothing from outside the VM. The weird thing is that everything works fine if I reload the VM with vagrant reload.
Am I building the box incorrectly? Something is missing in the Vagrantfile?
I assume you start a Docker container inside the Vagrant box, the container is a web server, and you want to access that web server with your browser. Then you will need port forwarding to your host machine.
First, your container port must be mapped onto a box port. This is done with Docker's -p parameter, e.g. -p 8080:8080. The port is then available inside the box. You say that you can access it from inside the box, so I think this is already configured correctly.
Try to forward this port out of the box. Add this to your Vagrantfile:
...
config.vm.network "forwarded_port", guest: 8080, host: 8080
...
Now try to access port 8080 via http://localhost:8080.
If you want to make http://172.17.8.100 work, then you will have to map the container port onto port 80 of the box, which needs root access.
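That mapping could be sketched like this. The image name my-web-image is a placeholder for whatever your docker-compose setup builds; it is not from the question.

```shell
# Publish the container's web port on port 80 of the box so that
# http://172.17.8.100 (the box's private-network IP) serves the app
# directly. Ports below 1024 are privileged, so run this as root
# inside the box.
box_port=80
docker run -d -p "${box_port}:8080" my-web-image || echo "needs root and a real image name"
```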