Change Nomad bind port

In HashiCorp Nomad, is it possible to specify the bind port for a Server and Client (e.g., from 464{6,7,8} to 600{6,7,8})?
The addresses stanza does not allow port specification, and neither does the -bind switch.
The advertise stanza does not change the port on which Nomad binds.
The -bind switch allows specifying IP only:
> nomad agent -server -data-dir=/tmp/nomad -bind=0.0.0.0
==> No configuration files loaded
==> Starting Nomad agent...
==> Nomad agent configuration:
Advertise Addrs: HTTP: <HOSTIP>:4646; RPC: <HOSTIP>:4647; Serf: <HOSTIP>:4648
Bind Addrs: HTTP: 0.0.0.0:4646; RPC: 0.0.0.0:4647; Serf: 0.0.0.0:4648
Attempting to specify a port errors out:
> nomad agent -server -data-dir=/tmp/nomad -bind=0.0.0.0:6000
==> Failed to parse HTTP advertise address (, 0.0.0.0:6000, 4646, false): Error resolving bind address "0.0.0.0:6000": lookup 0.0.0.0:6000: no such host

Use the ports stanza to select ports for the agent, whether in server or client mode.
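For example, a minimal sketch of the relevant agent configuration, mapping 464{6,7,8} to 600{6,7,8} as in the question:
ports {
  http = 6006
  rpc  = 6007
  serf = 6008
}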

You should look at the HashiCorp website for how to launch Nomad with server.hcl, client.hcl, and nomad.hcl files:
echo 'server {
  enabled          = true
  bootstrap_expect = 1
}' > /etc/nomad.d/server.hcl
echo 'region     = "global"
datacenter = "dc1"
data_dir   = "/opt/nomad"
bind_addr  = "1.2.3.4" # put your address here (default is 0.0.0.0)

advertise {
  # Defaults to the first private IP address.
  http = "1.2.3.4"
  rpc  = "1.2.3.4" # put your address here
  serf = "1.2.3.4:5648" # non-default ports may be specified
}
' > /etc/nomad.d/nomad.hcl
and run this:
nomad agent -config /etc/nomad.d

Related

freeradius docker: Ignoring request to auth address when built through Jenkins

I am using the freeradius docker image for AAA server authentication. For integration tests I am using docker-compose, which contains freeradius and other services. When I build, it creates the containers, tests the authentication, and then stops the containers.
From one docker container I am sending a request to the freeradius docker container for authentication. This works fine on my local machine, but when I try to build through Jenkins, I get
Ignoring request to auth address * port 1812 bound to server default from unknown client 192.168.96.1 port 36096 proto udp
Below is my client.conf file:
client dockernet {
    ipaddr = x.x.0.0
    secret = testing123
    netmask = 24
    shortname = dockernet
    require_message_authenticator = no
}
client jenkins {
    ipaddr = 192.168.0.0
    secret = testing123
    netmask = 24
    shortname = jenkins
    require_message_authenticator = no
}
Your client subnet/netmask definition is incorrect.
192.168.0.0/24 will match addresses in the subnet 192.168.0.x (192.168.0.0 to 192.168.0.255), but the request is coming from 192.168.96.1.
Either change the jenkins client definition to 192.168.96.0 (leaving the netmask as 24), or use netmask = 16, which will include all addresses from 192.168.0.0 to 192.168.255.255.
I would recommend limiting the range to the exact IP, or as small a range as possible, and therefore suggest the former.
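A minimal sketch of the corrected jenkins client block, following the /24 suggestion above:
client jenkins {
    ipaddr = 192.168.96.0
    secret = testing123
    netmask = 24
    shortname = jenkins
    require_message_authenticator = no
}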

Docker container TCP communication in AWS ECS (EC2 launch type), NestJS

I have set up an ECS + EC2 launch type deployment with a separate task definition for each container, and the containers communicate with each other via TCP.
I set up service discovery and use the service discovery endpoint as the host for communication, but I get:
Error: listen EADDRNOTAVAIL: address not available xxx.31.x.100:3002
app.connectMicroservice({
  transport: Transport.TCP,
  options: {
    host: process.env.TCP_RECEIVE_HOST || 'localhost',
    port: data.TcpReceive.TCP_RECEIVE_PORT
  }
});
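EADDRNOTAVAIL means the process tried to bind a socket to an IP address that is not assigned to any local network interface, which is what happens if TCP_RECEIVE_HOST is set to another task's service discovery endpoint. A hedged sketch of the usual fix, reusing the variable names above: listen on all local interfaces, and use the discovery hostname only on the connecting side.
app.connectMicroservice({
  transport: Transport.TCP,
  options: {
    // Bind locally; binding to another task's IP raises EADDRNOTAVAIL.
    host: '0.0.0.0',
    port: data.TcpReceive.TCP_RECEIVE_PORT
  }
});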

Cannot access docker host from macOS

I am trying to access my host system from a docker container.
I have tried all of the following instead of 127.0.0.1 and localhost:
gateway.docker.internal,
docker.for.mac.host.internal,
host.docker.internal,
docker.for.mac.localhost,
but none seem to work.
If I run my docker run command with --net=host, I can indeed access localhost; however, none of my port mappings get exposed, and they are inaccessible from outside docker.
I am using Docker version 20.10.5, build 55c4c88
Some more info: I am running a piece of software called impervious (a layer on top of the bitcoin lightning network). It needs to connect to my local Polar lightning node on localhost:10001. Here is the config file the tool itself uses (see the lnd section):
# Server configurations
server:
  enabled: true # enable the GRPC/HTTP/websocket server
  grpc_addr: 0.0.0.0:8881 # SET FOR DOCKER
  http_addr: 0.0.0.0:8882 # SET FOR DOCKER

# Redis DB configurations
sqlite3:
  username: admin
  password: supersecretpassword # this will get moved to environment variable or generated dynamically

###### DO NOT EDIT THE BELOW SECTION #####
# Services
service_list:
  - service_type: federate
    active: true
    custom_record_number: 100000
    additional_service_data:
  - service_type: vpn
    active: true
    custom_record_number: 200000
    additional_service_data:
  - service_type: message
    active: true
    custom_record_number: 400000
    additional_service_data:
  - service_type: socket
    active: true
    custom_record_number: 500000
    additional_service_data:
  - service_type: sign
    active: true
    custom_record_number: 800000
    additional_service_data:
###### DO NOT EDIT THE ABOVE SECTION #####

# Lightning
lightning:
  lnd_node:
    ip: host.docker.internal
    port: 10001 # GRPC port of your LND node
    pub_key: 025287d7d6b3ffcfb0a7695b1989ec9a8dcc79688797ac05f886a0a352a43959ce # get your LND pubkey with "lncli getinfo"
    tls_cert: /app/lnd/tls.cert # SET FOR DOCKER
    admin_macaroon: /app/lnd/admin.macaroon # SET FOR DOCKER
  federate:
    ttl: 31560000 # Federation auto delete in seconds
    imp_id: YOUR_IMP_ID # plain text string of your IMP node name
  vpn:
    price: 100 # per hour
    server_ip: http://host.docker.internal # public IP of your VPN server
    server_port: 51820 # port you want to listen on
    subnet: 10.0.0.0/24 # subnet you want to give to your clients. .1 == your server IP.
    server_pub_key: asdfasdfasdf # get this from your WG public key file
    allowed_ips: 0.0.0.0/0 # what subnets clients can reach. Default is entire world.
    binary_path: /usr/bin/wg # where you installed the "wg" command.
    dns: 8.8.8.8 # set your preferred DNS server here.
  socket:
    server_ip: 1.1.1.1 # public IP of your socket server
I run impervious using the following docker command:
docker run -p8881:8881 -p8882:8882 \
  -v /Users/xxx/dev/btc/impervious/config/alice-config-docker.yml:/app/config/config.yml \
  -v /Users/xxx/.polar/networks/1/volumes/lnd/alice/tls.cert:/app/lnd/tls.cert \
  -v /Users/xxx/.polar/networks/1/volumes/lnd/alice/data/chain/bitcoin/regtest/admin.macaroon:/app/lnd/admin.macaroon \
  -it impant/imp-releases:v0.1.4
but it just hangs when it tries to connect to the node at host.docker.internal
Have you tried docker-mac-net-connect?
The problem is related to macOS. Unlike Docker on Linux, Docker for macOS does not expose container networks directly on the macOS host.
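If you want to try it, a sketch of the install steps via Homebrew (per the project's README at the time of writing; check the repo for current instructions):
brew install chipmk/tap/docker-mac-net-connect
sudo brew services start chipmk/tap/docker-mac-net-connect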
You can use host.docker.internal, which resolves to the localhost of the macOS host.
https://docs.docker.com/desktop/networking/#i-want-to-connect-from-a-container-to-a-service-on-the-host
The host has a changing IP address (or none if you have no network access). We recommend that you connect to the special DNS name host.docker.internal, which resolves to the internal IP address used by the host. This is for development purposes and does not work in a production environment outside of Docker Desktop.
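A quick way to check reachability from inside a container (assuming the host service listens on 10001, as with the Polar node above):
docker run --rm alpine sh -c 'apk add --no-cache netcat-openbsd >/dev/null && nc -zv host.docker.internal 10001'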
On a Mac running the desktop version of Docker, the daemon isn't running on the host machine directly; it runs inside a kind of virtual machine that includes a Linux kernel. The network of this virtual machine is different from the host machine's network, and a kind of VPN connection is used to connect from your Mac host to a running docker container.
When you run docker with the --net=host switch, you connect the container to the virtual machine's network instead of your host machine's network, as would happen on Linux.
So trying to connect to 127.0.0.1 or localhost doesn't reach the running container.
The solution to this issue is to expose the needed ports from the running container:
docker run -p 8080:8080
If you need to expose all ports from your container, you can use the -P switch.
For the opposite direction, connect from the container using the host.docker.internal URL.
More documentation about docker desktop for Mac networking

Local Consul join K8s Consul Mac

So I'm currently running the stable/consul chart from helm on my local Kubernetes cluster (running on Docker).
$ helm install -n wet-fish --namespace consul stable/consul
This creates two services
==> v1/Service
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
wet-fish-consul ClusterIP None <none> 8500/TCP,8400/TCP,8301/TCP,8301/UDP,8302/TCP,8302/UDP,8300/TCP,8600/TCP,8600/UDP 0s
wet-fish-consul-ui NodePort 10.110.229.223 <none> 8500:30276/TCP
So this means I can open localhost:30276 and see the consul ui.
Now I'm running on my local machine
$ consul agent -dev -config-dir=./consul.d -node=machine
$ consul join 127.0.0.1:30276
This just results in:
Error joining address '127.0.0.1:30276': Unexpected response code: 500 (1 error occurred:
* Failed to join 127.0.0.1: received invalid msgType (72), expected pushPullMsg (6) from=127.0.0.1:30276
)
Failed to join any nodes.
and
2020/01/17 15:17:35 [WARN] agent: (LAN) couldn't join: 0 Err: 1 error occurred:
* Failed to join 127.0.0.1: received invalid msgType (72), expected pushPullMsg (6) from=127.0.0.1:30276
2020/01/17 15:17:35 [ERR] http: Request PUT /v1/agent/join/127.0.0.1:30276, error: 1 error occurred:
* Failed to join 127.0.0.1: received invalid msgType (72), expected pushPullMsg (6) from=127.0.0.1:30276
from=127.0.0.1:59693
There must be a way to have a local consul agent running that can connect to the k8s consul server...
This is on a Mac, so networking isn't as good....
There may be two problems here. The first is that consul agent -dev starts the agent in dev mode. By default, dev mode starts both a server and an agent, which might be part of the reason behind the error.
The other problem could be due to localhost: the server running in Kubernetes will attempt to health check local agents. It needs to be able to ping the local agent, so even if you manage to join in the first step, it would probably fail the health checks.
I agree about networking on Mac; it does not make things easy. One thing you will probably have to do is set the advertise address for the local agent (non-kube). Docker for Mac has a hostname, docker.for.mac.localhost, which is a routable IP to the local machine from a container. When starting the local agent, if you set the advertise address to the IP value of that hostname, the Kubernetes Consul server should be able to route to the locally running agent.
Potential fix:
1. Ensure the local agent is starting in client mode (configure it manually, not with -dev).
2. Set the advertise address to an IP address which is routable from Kubernetes (docker.for.mac.localhost), as in the sketch below.
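A minimal sketch of both steps; the advertise IP here is an assumption, so substitute whatever docker.for.mac.localhost actually resolves to from inside a container:
consul agent -config-dir=./consul.d -node=machine \
  -advertise=192.168.65.2
Note also that a join target must point at the server's Serf LAN port (8301 by default); joining the UI NodePort, which maps to the HTTP port, produces exactly the "invalid msgType" error shown above.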
Give me a shout if that does not work for you. I have used a setup like this myself, and 9/10 it is networking between Docker and the local machine.
Kind regards,
Nic

How should I set up Traefik on ECS?

In Short
I've managed to run Traefik locally and on AWS ECS but now I'm wondering how should I setup some sort of load balancing to make my two services with random IPs available to the public.
My current setup on ECS
[Internet]
|
[Load balancer on port 443 + ALB Security group on 443]
|
[Target group on port 443 + Security group from *any* port]
|
[cluster]
|
[service1 container ports "0:5000"]
While this works, I'd now like to add another container, e.g. service2, also with random ports, e.g. 0:8000. And that's why I need something like Traefik.
What I did
Here's my TOML file:
[api]
address = ":8080"
[ecs]
clusters = ["my-cluster"]
watch = true
domain = "mydomain.com"
region = "eu-central-1"
accessKeyID = "AKIA..."
secretAccessKey = "..."
Also I've added the host entries in /etc/hosts:
127.0.0.1 service1.mydomain.com
127.0.0.1 service2.mydomain.com
And the relevant labels on the containers, and I can curl service1.mydomain.com/status and get a 200.
Now my last bit is just the following question:
How should I publish all this to the internet? AWS ALB? AWS Network LB? Network bridge/host/other?
AWS ALB vs AWS Network LB depends on who you want to handle SSL.
If you have a wildcard certificate and all your services are subdomains of the same domain, ALB may be a good choice.
If you want to use Let's Encrypt with traefik, Network LB may be a better choice, as sketched below.
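For the Let's Encrypt route, a hedged sketch of the extra Traefik v1 TOML this would need (the email, storage path, and entrypoint names are assumptions; adjust to your entrypoint setup):
[acme]
email = "you@mydomain.com"
storage = "acme.json"
entryPoint = "https"
[acme.httpChallenge]
entryPoint = "http"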
In both cases your setup will look something like this:
[Internet]
|
[LB]
|
[Target group]
|
[Traefik]
| |
[service1] [service2]
In both cases, the easiest way to get this is to make the traefik ECS service auto-register to the target group.
This can be done at service creation (in the network configuration section) and cannot be done later. Link to documentation
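For reference, attaching the target group at service creation with the AWS CLI looks roughly like this (the ARN, names, and port are placeholders):
aws ecs create-service \
  --cluster my-cluster \
  --service-name traefik \
  --task-definition traefik:1 \
  --desired-count 2 \
  --load-balancers targetGroupArn=arn:aws:elasticloadbalancing:eu-central-1:123456789012:targetgroup/traefik/abc123,containerName=traefik,containerPort=8080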
Screen of configuration console
