I'm looking for a way to get a hostname from Nagios/Icinga by searching for it by custom variable via cmd/status.cgi.
I have a custom variable with a unique specific ID on every host, and I have to get the hostname by searching on that ID. There is documentation for the CGI parameters, but I could not find the needed functionality: https://icinga.com/docs/icinga1/latest/en/cgiparams.html
UPD: I am using Python for the CGI requests. Maybe there is also a library to do that.
Does anyone know if this is possible?
For Nagios at least, this is possible. You can request the host details from objectjson.cgi for a hostgroup, and in your result.json() you will have the custom_variables for each of the hosts. With that, you can map an ID to the hostname.
Make your request to https://<your_url>/nagios/cgi-bin/objectjson.cgi?query=hostlist&details=true&hostgroup=<your_hostgroup>
{
    ...
    "data": {
        "hostlist": {
            "<host1>": {
                ...
                "custom_variables": {
                    <custom host variables dict>
                }
            },
            "<host2>": {
                ...
            }
        }
    }
}
Untested! Using Python's requests module:
hostlist = result.json().get('data').get('hostlist')
id_map = {hostlist.get(host).get('custom_variables').get('your_id_key'): host for host in hostlist.keys()}
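A fuller (still untested) sketch, assuming HTTP basic auth and placeholder values for the URL, credentials, and the custom-variable key:
import requests

# Ask objectjson.cgi for every host in the hostgroup, including its custom variables
result = requests.get(
    'https://<your_url>/nagios/cgi-bin/objectjson.cgi',
    params={'query': 'hostlist', 'details': 'true', 'hostgroup': '<your_hostgroup>'},
    auth=('<user>', '<password>'),  # assumption: basic auth; adjust to your setup
)
result.raise_for_status()

hostlist = result.json().get('data', {}).get('hostlist', {})

# Map the custom-variable ID to the hostname ('your_id_key' is a placeholder)
id_map = {attrs.get('custom_variables', {}).get('your_id_key'): host
          for host, attrs in hostlist.items()}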
Right now I'm deploying to Cloud Run and running
gcloud run deploy myapp --tag pr123 --no-traffic
I can then access the app via
https://pr123---myapp-jo5dg6hkf-ez.a.run.app
Now I would like to have a custom domain mapping going to this tag. I know how to point a custom domain to the service but I don't know how to point it to the tagged version of my service.
Can I add labels to the DomainMapping that would cause the mapping to go to this version of my Cloud Run service? Or is there a routeName, e.g. myapp#pr123, that would do the trick there?
In the end I would like to have
https://pr123.dev.mydomain.com
be the endpoint for this service.
With a custom domain mapping, you configure DNS to point to a service, not to a revision/tag of the service. So you can't do it this way.
The solution is to use a load balancer with a serverless NEG. The most important part is to define a URL mask that maps the tag and the service from the URL received by the load balancer.
I ended up building the load balancer with a serverless network endpoint group (as suggested). For further reference, here is my Terraform snippet to create it. The <tag> part is then the traffic tag you assign to your revision.
resource "google_compute_region_network_endpoint_group" "api_neg" {
name = "api-neg"
network_endpoint_type = "SERVERLESS"
region = "europe-west3"
cloud_run {
service = data.google_cloud_run_service.api_dev.name
url_mask = "<tag>.preview.mydomain.com"
}
}
I'm trying to deploy a simple test app in the cloud with DigitalOcean.
I created a new app with the Vue CLI (Vue 3).
Then I dockerized the app and exposed port 8080.
I configured nginx so that it routes traffic from port :80 to :8080 on the container.
Everything is OK, but when I try to visit the page I get the error "Invalid host header".
I searched on Google and everybody suggests creating a vue.config.js file with this code:
module.exports = {
  devServer: {
    disableHostCheck: true
  }
}
I tried this solution but nothing changed. How can I fix this error?
I also read that this kind of solution creates vulnerabilities; is there a way to fix it without this workaround?
Thank you in advance for the response
In your vue.config.js file you can try these settings:
const { defineConfig } = require('@vue/cli-service')

module.exports = defineConfig({
  transpileDependencies: true,
  devServer: {
    allowedHosts: "all"
  }
})
Found the solution!
The solutions mentioned above did not work for me.
I am not sure when the allowedHosts property was changed, but currently we are supposed to provide an array to it.
devServer: {
  allowedHosts: [
    'yourdomain.com'
  ]
}
Just look for the file vue.config.js, then replace yourdomain.com with your own domain.
An alternative to Oren Hahiashvili's answer when you don't know ahead of time what hosts will be accessing the devServer (e.g., when testing in multiple environments) is to set devServer.disableHostCheck in vue.config.js. For example,
module.exports = {
  devServer: {
    disableHostCheck: true
  }
};
Note this is less secure than Oren Hahiashvili's answer, so only use this when you don't know the hosts, and you still need to serve using devServer.
I define the URL for my backend service container in my docker-compose.yaml.
environment:
  PORT: 80
  VUE_APP_BACKEND_URL: "mm_backend:8080"
When the containers spin up, I inspect my frontend container and can verify that the env variable was set correctly.
However, when I attempt to use my frontend service to connect to my backend (to retrieve data), the network tab shows that VUE_APP_BACKEND_URL is undefined.
The environment variable is implemented and used as follows in my Vue.js code:
getOwners() {
  fetch(`${process.env.VUE_APP_BACKEND_URL}/owners`, defaultOptions)
    .then((response) => {
      return response.json();
    })
    .then((data) => {
      data.forEach((element) => {
        var entry = {
          value: element.id,
          text: `${element.display_name} (${element.name})`
        }
        this.owners.push(entry)
      })
    })
}
Any assistance is appreciated.
This won't work because process.env is not available at runtime in the browser: Vue CLI inlines VUE_APP_* variables into the bundle at build time, so a variable that is only set when the container starts never reaches the front end. In other words, docker-compose won't help much if you want to pass an env variable into your front end application. The simplest approach is to define the backend URL in one place in the code itself and use it to make API calls. If you are not happy doing this, there are already some good answers/solutions available on Stack Overflow for this same problem.
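For example, a minimal sketch of that single-place approach (the module path and the fallback URL are just placeholders):
// src/config.js -- hypothetical module that holds the backend URL in one place
// process.env.VUE_APP_BACKEND_URL is only substituted if it was set at build time;
// otherwise the hard-coded fallback is used.
export const backendUrl = process.env.VUE_APP_BACKEND_URL || 'http://mm_backend:8080';

// Usage elsewhere, e.g. in a component:
// import { backendUrl } from '@/config';
// fetch(`${backendUrl}/owners`, defaultOptions)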
I need to set up a configuration for many similar environments. Each will have a different hostname that follows a pattern, e.g. env1, env2, etc.
I can use a pool per environment and a single virtual server with an irule that selects a pool based on hostname.
What I'd prefer to do is dynamically generate and select the pool name based on the requested hostname rather than listing out every pool in the switch statement. It's easier to maintain and automatically handles new environments.
The code might look like:
when HTTP_REQUEST {
    pool [string tolower [HTTP::host]]
}
and each pool name matches the hostname.
Is this possible? Or is there a better method?
EDIT
I've expanded my hostname pool selection. I'm now trying to include the port number. The new rule looks like:
when HTTP_REQUEST {
    set lb_port "[LB::server port]"
    set hostname "[string tolower [getfield [HTTP::host] : 1]]"
    log local0.info "Pool name $hostname-$lb_port-pool"
    pool "$hostname-$lb_port-pool"
}
This is working, but I'm seeing no-such-pool errors in the logs because somehow a port 0 request is coming into the pool. It seems to be the first request, followed by the request with the legitimate port.
Wed Feb 17 20:39:14 EST 2016 info tmm tmm[6519] Rule /Common/one-auto-pool-select-by-hostname-port <HTTP_REQUEST>: Pool name my.example.com-80-pool
Wed Feb 17 20:39:14 EST 2016 err tmm1 tmm[6519] 01220001 TCL error: /Common/one-auto-pool-select-by-hostname-port <HTTP_REQUEST> - no such pool: my.example.com-0-pool (line 1) invoked from within "pool "$hostname-$lb_port-pool""
Wed Feb 17 20:39:14 EST 2016 info tmm1 tmm[6519] Rule /Common/one-auto-pool-select-by-hostname-port <HTTP_REQUEST>: Pool name my.example.com-0-pool
What is causing the port 0 request? And is there any workaround? e.g. could I test for port 0 and select a default port or ignore it?
ONE MORE EDIT
Rebuilt the virtual server, and now the error has gone. The rebuild of the VS was just to rename it though. I'm fairly sure I recreated the settings exactly the same.
Yes, you can specify the pool name in a string. What you have there would work as long as you have a pool with that same name. Though it doesn't show an example of doing it this way, you can also check out the pool wiki page on DevCentral for more information.
As an aside, in my environment I generally create pools with the suffix _pool to distinguish them from other objects when looking at config files. So in my iRules, I would do something like this (essentially the same thing):
when HTTP_REQUEST {
    pool "[string tolower [HTTP::host]]_pool"
}
The simple case mentioned by Michael works. I'd recommend removing the port value if present:
when HTTP_REQUEST {
    pool "pool_[string tolower [getfield [HTTP::host] : 1]]_[LB::server port]"
}
Keep in mind that clients might send a partial hostname. If the DNS search path is set to example.org then the client might hit shared/ which maps to shared.example.org, but the HTTP::host header will just have shared. Some API libraries may append the port number even if it's on the default port. Simple code might not send a Host header. Malicious code might send completely bogus Host headers. You could trap these cases with catch.
You can also use a datagroup to map hostnames to pools. This allows multiple hosts to use the same pool. Sample code:
when HTTP_REQUEST {
    set host [string tolower [getfield [HTTP::host] ":" 1]]
    if { $host == "" } {
        # if there's no Host header, pull from virtual server name
        # we use: pool_<virtualserver>_PROTOCOL
        set host [getfield [virtual name] _ 2]
    } elseif { not ($host contains ".") } {
        # if Host header does not contain a dot, assume example.org
        set host $host.example.org
    }
    set pool [class match -value $host[HTTP::uri] starts_with dg_shared.example.org]
    if { $pool ne "" } {
        set matched [class match -name $host[HTTP::uri] starts_with dg_shared.example.org]
        set log(matched) $matched
        set log(pool) $pool
        if { [catch { pool $pool }] } {
            set log(reason) "Failed to Connect to Pool"
            call hsllog log
            call errorpage 404 $log(reason) "https://[HTTP::host][HTTP::uri]" log
        }
    } else {
        call errorpage 404 "No Pool Found" "https://[HTTP::host][HTTP::uri]" log
    }
}

when SERVER_CONNECTED {
    if { !($pool ends_with "_HTTPS") } {
        SSL::disable serverside
    }
}
This allows host.example.org/path1 to be on a different pool than host.example.org or host.example.org/path2 by including separate entries in the datagroup. I didn't include the hsllog and errorpage procs here. They dump the log array as well as the other passed parameters.
We then disable serverside ssl for pools that don't end in _HTTPS.
Note: As with dynamically generated pool names, the BIG-IP UI does not look inside datagroups for pool references, so the interface will allow you to delete one of these pools thinking it's not in use.
We use BigIPReport to identify orphan pools:
https://devcentral.f5.com/s/articles/bigip-report
How do you get Cloud Foundry to assign a port? I am adding applications and I'd like to have a different port for each, but VCAP_APP_PORT is not set. VCAP_APP_HOST is set, but VCAP_APP_PORT is not.
Take a look at http://show-env.cloudfoundry.com/
It's a Node application I knocked together just to output the environment and the request headers when you call it; the code looks like this:
var http = require('http');
var util = require('util');

http.createServer(function (req, res) {
    res.writeHead(200, {'Content-Type': 'text/plain'});
    res.write(util.inspect(process.env));
    res.write("\n\n************\n\n");
    res.end(util.inspect(req.headers));
}).listen(3000);
You can see VCAP_APP_PORT in the output.
It would be handy to know which framework you are using; however, all these variables should be stored in the system environment, so it shouldn't really matter.
Cloud Foundry will automatically assign each application instance an IP address and port and these values are accessible in the VCAP_* variable as Dan describes. You don't get to tell Cloud Foundry which port you prefer. Each instance of your app may receive a different IP address and port, so you should always interrogate the environment to find out what they are if you need that information.
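For example, an untested sketch of a Node app that reads the assigned port from the environment instead of hard-coding one (the 3000 fallback is only for local runs):
var http = require('http');

// Cloud Foundry provides the assigned port via the environment; fall back to 3000 for local runs.
var port = process.env.VCAP_APP_PORT || process.env.PORT || 3000;

http.createServer(function (req, res) {
    res.writeHead(200, {'Content-Type': 'text/plain'});
    res.end('Listening on port ' + port);
}).listen(port);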