I'm trying to get the Consul Connect sidecar Envoy to work, but the health checks for the sidecar keep failing.
I'm using the following versions of Consul, Nomad, and the CNI plugins:
Consul: 1.7.3
Nomad: 0.11.1
CNI Plugins: 0.8.6
My setup is as follows.
1 Consul server running Consul in a Docker container:
docker run -d --net=host --name=server -v /var/consul/:/consul/config consul:1.7 agent -server -ui -node=server-1 -bind=$internal_ip -bootstrap-expect=1 -client=0.0.0.0
internal_ip is the internal IP address of my GCP VM.
1 Nomad server with a Consul agent in client mode:
nohup nomad agent -config=/etc/nomad.d/server.hcl &
docker run -d --name=consul-client --net=host -v ${volume_path}:/consul/config/ consul:1.7 agent -node=$node_name -bind=$internal_ip -join=${server_ip} -client=0.0.0.0
internal_ip is the internal IP address of the GCP VM and server_ip is the internal IP address of the server VM.
2 Nomad clients with a Consul agent in client mode:
nohup nomad agent -config=/etc/nomad.d/client.hcl &
docker run -d --name=consul-client --net=host -v ${volume_path}:/consul/config/ consul:1.7 agent -node=$node_name -bind=$internal_ip -join=${server_ip} -client=0.0.0.0
On the Nomad clients, I also have the consul binary available in the PATH.
Now I'm trying to deploy the sample Nomad and Consul Connect job from here:
job "countdash" {
datacenters = ["dc1"]
group "api" {
network {
mode = "bridge"
}
service {
name = "count-api"
port = "9001"
connect {
sidecar_service {}
}
}
task "web" {
driver = "docker"
config {
image = "hashicorpnomad/counter-api:v1"
}
}
}
group "dashboard" {
network {
mode = "bridge"
port "http" {
static = 9002
to = 9002
}
}
service {
name = "count-dashboard"
port = "9002"
connect {
sidecar_service {
proxy {
upstreams {
destination_name = "count-api"
local_bind_port = 8080
}
}
}
}
}
task "dashboard" {
driver = "docker"
env {
COUNTING_SERVICE_URL = "http://${NOMAD_UPSTREAM_ADDR_count_api}"
}
config {
image = "hashicorpnomad/counter-dashboard:v1"
}
}
}
}
The Docker containers for the service and the sidecar get started and are registered in Consul, but I'm unable to access any of the services.
I SSHed onto the Nomad client node and can see the containers running.
The odd thing I noticed is that the port is not forwarded to the host, so I cannot reach the service via curl from the host: curl $internal_ip:9002 doesn't work.
I also checked whether Nomad created a new bridge network, since that's the mode I used in the network stanza, but there are no new networks.
Is there anything I'm missing in my setup?
Have you tried changing COUNTING_SERVICE_URL = "http://${NOMAD_UPSTREAM_ADDR_count_api}" to COUNTING_SERVICE_URL = "http://localhost:8080"? That is the local bind port the Envoy proxy will be listening on to forward traffic to count-api.
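For reference, a minimal sketch of that change in the dashboard task's env block (rest of the job unchanged):
env {
  # suggested tweak: talk to Envoy's local upstream listener instead of the interpolated upstream address
  COUNTING_SERVICE_URL = "http://localhost:8080"
}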
An example of a working connect setup can be found at https://github.com/hashicorp/video-content/tree/master/nomad-connect-integration/nomad_jobs
Related
I am having trouble establishing communication between two Docker containers via Nomad. The containers are in the same task group but are still unable to reach each other, even when using the NOMAD_ADDR_ environment variable. Can anyone help? I tried both host and bridge network modes.
My Nomad config is given below. The images are pulled and the Redis container and the application container start, but then the app container crashes with a Redis "Connection refused" error.
The second issue, as you might have guessed, is prettifying the code with proper indentation, etc., just like JavaScript, HTML, or YAML is automatically formatted in VS Code. I am unable to find a code prettifier for the HCL language.
job "app-deployment" {
datacenters = ["dc1"]
group "app" {
network {
mode = "bridge"
port "web-ui" { to = 5000 }
port "redis" { to = 6379 }
}
service {
name = "web-ui"
port = "web-ui"
// check {
// type = "http"
// path = "/health"
// interval = "2s"
// timeout = "2s"
// }
}
task "myapp" {
driver = "docker"
config {
image_pull_timeout = "10m"
image = "https://docker.com"
ports = ["web-ui"]
}
env {
REDIS_URL="redis://${NOMAD_ADDR_redis}"
// REDIS_URL="redis://$NOMAD_IP_redis:$NOMAD_PORT_redis"
NODE_ENV="production"
}
}
task "redis" {
driver = "docker"
config {
image = "redis"
ports = ["redis"]
}
}
}
}
So I was able to resolve it. Basically, when you start the Nomad agent in dev mode, it binds to the loopback interface by default, and that is why you get 127.0.0.1 as the IP and node port in the NOMAD_* environment variables. 127.0.0.1 resolves to localhost inside the container, and hence the app is unable to reach the Redis server.
To fix the issue, simply run
ip a
Identify the primary network interface (for me it was my Wi-Fi interface), then start Nomad like below.
nomad agent -dev -network-interface="en0"
# where en0 is the primary network interface
That way you will still be able to access the Nomad UI on localhost:4646, but your containers will get the host IP from your network rather than 127.0.0.1.
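If it helps, a rough way to confirm the change took effect (the alloc ID is a placeholder you'd copy from the status output):
# after restarting the agent with -network-interface, check what Nomad allocated
nomad job status app-deployment    # note an allocation ID
nomad alloc status <alloc-id>      # "Allocation Addresses" should now show the host IP instead of 127.0.0.1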
I deployed NATS to my Kubernetes cluster, and the nats-box image in my cluster (installed alongside my NATS image via Helm) can apparently connect to it, but I can't seem to get my own microservice to connect. How is nats-box successful, but my own microservice is not?
helm install my-nats nats/nats
installs NATS with a StatefulSet called "my-nats" and a headless service called "my-nats":
my-nats   ClusterIP   None   <none>   4222/TCP,6222/TCP,8222/TCP,7777/TCP,7422/TCP,7522/TCP
But my test app, which accepts a URL on stdin and tries to connect to "my-nats", fails:
public static void Main(string[] args)
{
    Console.Write("=>");
    string url = Console.ReadLine();
    Console.WriteLine($"Connecting to {url}");
    try
    {
        using (IConnection pubConnection = new ConnectionFactory().CreateConnection(url))
        {
            Console.WriteLine($"Connected to {url}!");
        }
    }
    catch (NATSNoServersException)
    {
        Console.WriteLine($"No Server found for url, '{url}'!");
        return;
    }
    ...
docker run -it testapp
=>my-nats
Connecting to my-nats
No Server found for url, 'my-nats'!
How can I get my microservice to connect to my "my-nats" cluster just like nats-box does?
Are you trying to connect from your local machine (the microservice) to NATS in your cluster (K8s running on your machine)?
If so, you might need to port-forward NATS from your cluster and then use nats://localhost:4222 as the URL for your microservice's NATS connection.
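A rough sketch of that port-forward, using the service name from your helm install above:
kubectl port-forward svc/my-nats 4222:4222
# then enter nats://localhost:4222 at the => prompt
Note that if the test app runs in its own Docker container (docker run -it testapp) rather than directly on the host, localhost refers to that container, so you would need to run it with --net=host or point it at the host's address instead.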
I am trying to set up self-hosted GitLab CI with its own registry. I am also using a self-signed certificate for TLS, signed with my own CA, which is installed as a trusted CA on my host machine.
GitLab CE 13.6.3 is installed on Ubuntu 18.04, and I have installed a snap microk8s cluster on the same host.
Questions (some very basic):
Does the GitLab registry use the Docker daemon?
How is the connectivity achieved?
Docker client --> NGINX (5050) --> Gitlab registry (5000)
I have the below configuration in the gitlab.rb file:
registry['enable'] = true
registry['registry_http_addr'] = "127.0.0.1:5000"
registry['log_directory'] = "/var/log/gitlab/registry"
registry['env'] = {
  'SSL_CERT_DIR' => "/etc/gitlab/ssl"
}
# Below you can find settings that are exclusive to "Registry NGINX"
registry_nginx['enable'] = true
registry_nginx['ssl_certificate'] = "/etc/gitlab/ssl/gitlab.local.crt"
registry_nginx['ssl_certificate_key'] = "/etc/gitlab/ssl/gitlab.local.key"
registry_nginx['proxy_set_headers'] = {
  "Host" => "$http_host",
  "X-Real-IP" => "$remote_addr",
  "X-Forwarded-For" => "$proxy_add_x_forwarded_for",
  "X-Forwarded-Proto" => "https",
  "X-Forwarded-Ssl" => "on"
}
# When the registry is automatically enabled using the same domain as `external_url`,
# it listens on this port
registry_nginx['listen_port'] = 5050
registry_nginx['listen_addresses'] = ['*', '[::]']
When I try to docker login, the following results are observed. Is this expected based on the above configuration?
- with URL: https://127.0.0.1:5000 -> Login Success
- with URL: https://127.0.0.1:5050 -> Login Success
- with URL: https://gitlab.local:5050 -> x509: certificate signed by unknown authority
I have GitLab k8s & Docker runners. Can they access the GitLab registry (NGINX) port 5050 from within the container?
[[runners]]
  name = "docker"
  token = "xxxxxxx"
  executor = "docker"
  [runners.docker]
    image = "docker:stable"
    privileged = true
    volumes = ["/cache", "/var/run/docker.sock:/var/run/docker.sock"]
Note: I have tried the suggestions from various GitLab forums/posts about certificate issues with the GitLab registry when building/pushing images, but with no success.
Thank you
Try placing the certificate where the Docker client expects it:
sudo mkdir -p /etc/docker/certs.d/gitlab.local:5050
cp /yourcerts/gitlab.local.crt /etc/docker/certs.d/gitlab.local:5050/ca.crt
sudo service docker reload
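Then re-try the login against the registry hostname to confirm the CA is picked up:
docker login gitlab.local:5050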
From a regular ECS container running with the bridge mode, or from a standard EC2 instance, I usually run
curl http://169.254.169.254/latest/meta-data/local-ipv4
to retrieve my IP.
In an ECS container running with the awsvpc network mode, that returns the IP of the underlying EC2 instance, which is not what I want. I want the address of the ENI attached to my container. How do I do that?
A new convenience environment variable is injected by the AWS container agent into every container in AWS ECS: ${ECS_CONTAINER_METADATA_URI}
This contains the URL to the metadata endpoint, so now you can do
curl ${ECS_CONTAINER_METADATA_URI}
The output looks something like
{
  "DockerId": "redact",
  "Name": "redact",
  "DockerName": "ecs-redact",
  "Image": "redact",
  "ImageID": "redact",
  "Labels": { },
  "DesiredStatus": "RUNNING",
  "KnownStatus": "RUNNING",
  "Limits": { },
  "CreatedAt": "2019-04-16T22:39:57.040286277Z",
  "StartedAt": "2019-04-16T22:39:57.29386087Z",
  "Type": "NORMAL",
  "Networks": [
    {
      "NetworkMode": "awsvpc",
      "IPv4Addresses": [
        "172.30.1.115"
      ]
    }
  ]
}
Under the Networks key you'll find IPv4Addresses, which holds the address of the ENI attached to your task.
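If you just want to grab it from a shell, a quick sketch using jq (assuming jq is available in the container):
curl -s ${ECS_CONTAINER_METADATA_URI} | jq -r '.Networks[0].IPv4Addresses[0]'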
Your application code can then look something like this (Python):
# e.g. in a Django settings.py, where ALLOWED_HOSTS is defined
import os
import requests
METADATA_URI = os.environ['ECS_CONTAINER_METADATA_URI']
container_metadata = requests.get(METADATA_URI).json()
ALLOWED_HOSTS.append(container_metadata['Networks'][0]['IPv4Addresses'][0])
// alternative (Node.js, public-ip package): only if you need the container's public IP rather than the ENI address
import * as publicIp from 'public-ip';
const publicIpAddress = await publicIp.v4(); // your container's public IP
I am trying to deploy the Vault Docker image to work with the Consul Docker image as its storage backend.
I have the following JSON config file for the Vault container:
{
  "listener": [{
    "tcp": {
      "address": "0.0.0.0:8200",
      "tls_disable": 1
    }
  }],
  "storage": {
    "consul": {
      "address": "127.0.0.1:8500",
      "path": "vault/"
    }
  },
  "max_lease_ttl": "10h",
  "default_lease_ttl": "10h",
  "ui": true
}
Running the Consul container:
docker run -d -p 8501:8500 -it consul
and then running the Vault container:
docker run -d -p 8200:8200 -v /root/vault:/vault --cap-add=IPC_LOCK vault server
Immediately after the Vault container comes up, it stops running, and when querying the logs I see the following error:
Error detecting api address: Get http://127.0.0.1:8500/v1/agent/self: dial tcp 127.0.0.1:8500: connect: connection refused
Error initializing core: missing API address, please set in configuration or via environment
Any ideas why I am getting this error, and whether I have a configuration problem?
Since you are running Docker, the 127.0.0.1 address you are pointing to is inside the Vault container, but Consul isn't listening there; it's listening on your Docker host's localhost.
So I would recommend that you create a link (--link consul:consul) when you start the Vault container, and set "address": "consul:8500" in the config.
Alternatively, change "address": "127.0.0.1:8500" to "address": "172.17.0.1:8501" to go through the port you published on the Docker host (-p 8501:8500). The IP is whatever is set on your docker0 interface. This is not as nice, though, since it's not official and it can change with configuration, so I recommend linking.
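A rough sketch of the linked setup, assuming the Consul container is started with --name consul and the Vault config uses "address": "consul:8500" (note that --link is a legacy Docker feature, but it matches the suggestion above):
docker run -d -p 8501:8500 --name consul consul
docker run -d -p 8200:8200 -v /root/vault:/vault --cap-add=IPC_LOCK --link consul:consul vault server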