Cannot use std vmod in varnish 6.5.1 - docker

I am using the official varnish:6.5.1 Docker image and have this VCL:
vcl 4.0;
import std;

backend default {
    .host = std.getenv("PROXY_HOST");
    .port = std.getenv("PROXY_PORT");
}
.....
When I try to run the image (with docker-compose) it instantly fails with this error:
varnish_1 | Could not delete 'vcl_boot.1612728251.581028/vgc.sym': No such file or directory
varnish_1 | Error:
varnish_1 | Message from VCC-compiler:
varnish_1 | Expected CSTR got 'std'
varnish_1 | (program line 369), at
varnish_1 | ('/etc/varnish/default.vcl' Line 13 Pos 17)
varnish_1 | .host = std.getenv("PROXY_HOST");
Why is this failing? I would understand a failure to connect to the backend, but the VCL parse should be fine; the documentation for getenv in the std VMOD is very simple.
What am I missing here?
EDIT
backend default {
    .host = "${PROXY_HOST}";
    .port = "${PROXY_PORT}";
}
in combination with
#!/bin/bash
# Substitute ${VAR} placeholders in the VCL with the values from the environment,
# then start varnishd.
envs=`printenv`
for env in $envs
do
    # split NAME=VALUE on the first '='
    IFS== read name value <<< "$env"
    sed -i "s|\${${name}}|${value}|g" /etc/varnish/default.vcl
done
varnishd -f /etc/varnish/default.vcl
works, as per this post, but that hardly seems optimal.

Varnish Cache only supports static backends
Varnish Cache, the open source version of Varnish, doesn't support dynamic backends.
When the VCL file is loaded and compiled, the .host and .port values need to be strings, not expressions.
The error message also indicates this:
Expected CSTR got 'std'
The compiler says it expects a constant string (CSTR), meaning something that starts with a quote, but instead it found std.
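For comparison, this is the shape of backend definition the compiler will accept; the hostname and port below are just placeholder literals:
backend default {
    .host = "backend.example.com";
    .port = "8080";
}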
Dynamic backend support in Varnish Enterprise
Varnish Enterprise, the commercial version of Varnish, does support dynamic backends.
See https://docs.varnish-software.com/varnish-cache-plus/vmods/goto/ for more information.
Disclaimer: although Varnish Enterprise is the commercial version of Varnish Cache, you can still use it without upfront license payments if you use it on AWS, Azure or GCP.
Varnish Enterprise on AWS
Varnish Enterprise on Azure
Varnish Enterprise on GCP

Related

Ansible connection to docker engine on osx apple Silicon

I'm trying to connect to my local docker engine running on OSX (m1 chip) in order to create a dynamic inventory.
I've created a host file with the following config
I made sure that the docker_containers module is properly installed.
plugin: community.docker.docker_containers
docker_host: "unix://Users/ME/.docker/run/docker-cli-api.sock"
Then I run ansible-inventory --graph -i ./hosts/hosts-docker-local.yaml.
But I'm getting the following error:
[WARNING]: * Failed to parse /Users/ME/Projects/ansible-test/hosts/hosts-docker-local.yaml with auto plugin: inventory source '/Users/ME/Projects/ansible-test/hosts/hosts-docker-local.yaml' could not be
verified by inventory plugin 'community.docker.docker_containers'
[WARNING]: * Failed to parse /Users/ME/Projects/ansible-test/hosts/hosts-docker-local.yaml with yaml plugin: Plugin configuration YAML file, not YAML inventory
[WARNING]: * Failed to parse /Users/ME/Projects/ansible-test/hosts/hosts-docker-local.yaml with ini plugin: Invalid host pattern 'plugin:' supplied, ending in ':' is not allowed, this character is reserved to
provide a port.
[WARNING]: Unable to parse /Users/ME/Projects/ansible-test/hosts/hosts-docker-local.yaml as an inventory source
[WARNING]: No inventory was parsed, only implicit localhost is available
@all:
  |--@ungrouped:
I tried
ansible-doc -t inventory -l | grep docker
community.docker.docker_containers Ansible dynamic inv...
community.docker.docker_machine Docker Machine inve...
community.docker.docker_swarm Ansible dynamic inv...
but somehow if I do this
ansible localhost -i ./hosts/hosts-docker-local.yaml -m community.docker.docker_containers
It complains
localhost | FAILED! => {
"msg": "The module community.docker.docker_containers was not found in configured module paths"
}
Maybe something is wrong with my module path, or something weird with OSX? (I installed Ansible with brew.)
The inventory file must end in docker.yaml, as pointed out by @Zeitounator.
Uses a YAML configuration file that ends with docker.[yml|yaml].
https://docs.ansible.com/ansible/latest/collections/community/docker/docker_containers_inventory.html#synopsis
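Concretely, keeping the same contents but renaming the inventory file should be enough (the filename below is just an example):
# hosts/hosts-docker-local.docker.yaml
plugin: community.docker.docker_containers
docker_host: "unix://Users/ME/.docker/run/docker-cli-api.sock"
Running ansible-inventory --graph -i ./hosts/hosts-docker-local.docker.yaml should then let the auto plugin hand the file to community.docker.docker_containers.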

Citrix NetScaler CPX: provision content switching

Context
I am working on a POC for a client which involves a Citrix Netscaler. My entire demo is a docker-compose.yml with:
different DBMS
some web services
my monitoring solution (grafana, prometheus, telegraf)
I would like to use this image as a reverse proxy for the web services and monitor this service with prometheus.
Need
I would like to set things up so that no manual action is required to run the demo. In the context of nginx, I would simply mount the relevant conf file somewhere in /etc/nginx/conf.d. Using a Citrix NetScaler, I am not sure
whether it is even possible
how to proceed (the only doc I could find describes a very graphical/complicated process)
In a nutshell, I would like to be able to route http requests to the different web services by overriding some configuration file, like so:
netscaler:
  image: store/citrix/netscalercpx:12.0-56.20
  container_name: ws-netscaler
  ports:
    - 444:443
    - 81:80
  expose:
    - 161
  volumes:
    - ./netscaler/some.conf:/nsconfig/some.conf:ro # what I am trying to achieve
  environment:
    - EULA=yes
  cap_add:
    - NET_ADMIN
  ulimits:
    nproc: 1
About this specific image
It appears that all netscaler related files are here
root@61baa67a839f:/# ls /netscaler
cli_script.sh nitro ns_service_stop nscli_linux nsconmsg nsnetsvc nssslgen pitboss
docker_startup.sh ns_reboot nsaggregatord nsconfigaudit nslinuxtimer nsppe nstraceaggregator showtechsupport.pl
netscaler.conf ns_service_start nsapimgr nsconfigd nslped nssetup_linux nstracemergenclean.sh snmpd
and here
root@61baa67a839f:/# ls -R nsconfig
nsconfig:
dns monitors nsboot.conf snmpd.conf ssl
nsconfig/dns:
nsconfig/monitors:
nsconfig/ssl:
ns-root.cert ns-root.req ns-server.cert ns-server.req ns-sftrust-root.key ns-sftrust-root.srl ns-sftrust.der ns-sftrust.req
ns-root.key ns-root.srl ns-server.key ns-sftrust-root.cert ns-sftrust-root.req ns-sftrust.cert ns-sftrust.key ns-sftrust.sig
Based on nsboot.conf's content
root@61baa67a839f:/# cat /nsconfig/nsboot.conf
add route 0 0 172.18.0.1
set rnat 192.0.0.0 255.255.255.0 -natip 172.18.0.2
add ssl certkey ns-server-certificate -cert ns-server.cert -key ns-server.key
set tcpprofile nstcp_default_profile mss 1460
set ns hostname 61baa67a839f
and this documentation, I am assuming that this would be the place. Am I right in assuming so?
Edit
Overriding nsboot.conf did not work as expected, as this file is most probably written by entrypoint.sh; I ended up with multiple definitions. It seems that the correct way to do it is by injecting /etc/cpx.conf (source).
# /etc/cpx.conf
WS_ADDRESS=$(getent hosts some_web_service | awk '{ print $1 }')
add cs vserver some_ws HTTP $WS_ADDRESS 5000
But I can't access the resource through the netscaler (mainly because I do not understand NetScaler CLI yet)
$ curl http://localhost:5000/hello
Hello, World!%
$ curl http://localhost:81/some_ws/hello
<!DOCTYPE HTML PUBLIC "-//IETF//DTD HTML 2.0//EN">
<html><head>
<title>404 Not Found</title>
</head><body>
<h1>Not Found</h1>
<p>The requested URL /some_ws/hello was not found on this server.</p>
</body></html>

Starting Zabbix Server within docker replaces strings with nothing in config file

...or it totally ignores strings, like the name of a new DB for testing purposes.
First I tried to add roughly 250 more hosts to the ~250 already added, and the Zabbix server shut down. I restarted it, and in the docker logs I saw this:
6:20191014:091840.201 using configuration file: /etc/zabbix/zabbix_server.conf
6:20191014:091840.223 current database version (mandatory/optional): 04020000/04020001
6:20191014:091840.223 required mandatory version: 04020000
6:20191014:091840.484 __mem_malloc: skipped 7 asked 108424 skip_min 304 skip_max 12192
6:20191014:091840.484 [file:dbconfig.c,line:94] __zbx_mem_realloc(): out of memory (requested 108424 bytes)
6:20191014:091840.484 [file:dbconfig.c,line:94] __zbx_mem_realloc(): please increase CacheSize configuration parameter
6:20191014:091840.484 === memory statistics for configuration cache ===
The solution to this problem was to increase CacheSize in zabbix_server.conf. Okay, that's not a problem, so I pushed a new config to the Zabbix server and restarted it... and the server stopped right after starting again, with the logs showing the same problem. When I read the config inside the container, I saw that the lines I had changed to match my wishes were missing. The strings are deleted.
My config:
LogType=console
DBHost=postgres-server
DBName=zabbix_pwd
DBSchema=public
DBUser=zabbix
DBPassword=zabbix
DBPort=5432
StartPollers=5
StartIPMIPollers=5
StartPollersUnreachable=5
SNMPTrapperFile=/var/lib/zabbix/snmptraps/snmptraps.log
StartSNMPTrapper=1
CacheSize=512M
HistoryCacheSize=512M
HistoryIndexCacheSize=512M
TrendCacheSize=512m
ValueCacheSize=256M
AlertScriptsPath=/usr/lib/zabbix/alertscripts
ExternalScripts=/usr/lib/zabbix/externalscripts
FpingLocation=/usr/sbin/fping
Fping6Location=/usr/sbin/fping6
SSHKeyLocation=/var/lib/zabbix/ssh_keys
SSLCertLocation=/var/lib/zabbix/ssl/certs/
SSLKeyLocation=/var/lib/zabbix/ssl/keys/
SSLCALocation=/var/lib/zabbix/ssl/ssl_ca/
LoadModulePath=/var/lib/zabbix/modules/
And this is what I get after starting the Zabbix server:
LogType=console
DBHost=postgres-server
DBName=zabbix_pwd
DBSchema=public
DBUser=zabbix
DBPassword=zabbix
DBPort=5432
SNMPTrapperFile=/var/lib/zabbix/snmptraps/snmptraps.log
StartSNMPTrapper=1
AlertScriptsPath=/usr/lib/zabbix/alertscripts
ExternalScripts=/usr/lib/zabbix/externalscripts
FpingLocation=/usr/sbin/fping
Fping6Location=/usr/sbin/fping6
SSHKeyLocation=/var/lib/zabbix/ssh_keys
SSLCertLocation=/var/lib/zabbix/ssl/certs/
SSLKeyLocation=/var/lib/zabbix/ssl/keys/
SSLCALocation=/var/lib/zabbix/ssl/ssl_ca/
LoadModulePath=/var/lib/zabbix/modules/
Any suggestions on how to rule the world and not be captured by doctors?
With docker you need to pass the config parameters in the docker-compose.yml file, or in your docker run command using the -e flag.
For example, from my docker-compose file:
zabbix-server:
  image: zabbix/zabbix-server-pgsql:ubuntu-4.2.6
  environment:
    ZBX_MAXHOUSEKEEPERDELETE: 5000
    ZBX_STARTPOLLERS: 15
    ZBX_CACHESIZE: 8M
    ZBX_STARTDBSYNCERS: 4
    ZBX_HISTORYCACHESIZE: 16M
    ZBX_TRENDCACHESIZE: 4M
    ZBX_VALUECACHESIZE: 8M
    ZBX_LOGSLOWQUERIES: 3000
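The same parameters can be passed with docker run using -e. A minimal sketch reusing the cache sizes from the question (image tag is illustrative; database connection settings are omitted):
docker run -d --name zabbix-server \
  -e ZBX_CACHESIZE=512M \
  -e ZBX_HISTORYCACHESIZE=512M \
  -e ZBX_VALUECACHESIZE=256M \
  zabbix/zabbix-server-pgsql:ubuntu-4.2.6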
Another way to work with zabbix:
https://hub.docker.com/r/monitoringartist/zabbix-3.0-xxl/

Hyperledger Sawtooth - Preflight error while submitting transaction

I am trying to submit a transaction to Hyperledger Sawtooth v1.0.1 using JavaScript to a validator running on localhost. The code for the POST request is below:
request.post({
  url: constants.API_URL + '/batches',
  body: batchListBytes,
  headers: { 'Content-Type': 'application/octet-stream' }
}, (err, response) => {
  if (err) {
    console.log(err);
    return cb(err);
  }
  console.log(response.body);
  return cb(null, response.body);
});
The transaction gets processed when submitted from a backend Node.js application, but it returns an OPTIONS http://localhost:8080/batches 405 (Method Not Allowed) error when submitted from the client. These are the options that I have tried:
Inject Access-Control-Allow-* headers into the response using an extension: The response still gives the same error
Remove the custom header to bypass preflight request: This makes the validator throw an error as shown:
...
sawtooth-rest-api-default | KeyError: "Key not found: 'Content-Type'"
sawtooth-rest-api-default | [2018-03-15 08:07:37.670 ERROR web_protocol] Error handling request
sawtooth-rest-api-default | Traceback (most recent call last):
...
The unmodified POST request from the browser gets the following response headers from the validator:
HTTP/1.1 405 Method Not Allowed
Content-Type: text/plain; charset=utf-8
Allow: GET,HEAD,POST
Content-Length: 23
Date: Thu, 15 Mar 2018 08:42:01 GMT
Server: Python/3.5 aiohttp/2.3.2
So, I guess the OPTIONS method is not handled by the validator. A GET request for the state goes through fine when the CORS headers are added. This issue was also not present in Sawtooth v0.8.
I am using docker to start the validator, and the commands to start it are a slightly modified version of those given in the LinuxFoundationX: LFS171x course. The relevant commands are below:
bash -c \"\
sawadm keygen && \
sawtooth keygen my_key && \
sawset genesis -k /root/.sawtooth/keys/my_key.priv && \
sawadm genesis config-genesis.batch && \
sawtooth-validator -vv \
--endpoint tcp://validator:8800 \
--bind component:tcp://eth0:4004 \
--bind network:tcp://eth0:8800
Can someone please guide me as to how to solve this problem?
CORS issues are always the best.
What is CORS?
Your browser is trying to protect users from being directed to a page they think is the frontend for an API, but is actually fraudulent. Any time a web page tries to access an API on a different domain, that API needs to explicitly give the web page permission, or the browser will block the request. This is why you can query the API from Node.js (no browser), and why you can put the REST API address directly into your address bar (same domain). However, trying to go from localhost:3000 to localhost:8008, or from file://path/to/your/index.html to localhost:8008, is going to get blocked.
Why doesn't the Sawtooth REST API handle OPTIONS requests?
The Sawtooth REST API does not know the domain you are going to run your web page from, so it can't whitelist it explicitly. It is possible to whitelist all domains, but this obviously destroys any protection CORS might give you. Rather than try to weigh the costs and benefits of this approach for all Sawtooth users everywhere, the decision was made to make the REST API as lightweight and security agnostic as possible. Any developer using it would be expected to put it behind a proxy server, and they can make whatever security decisions they need on that proxy layer.
So how do you fix it?
You need to set up a proxy server that will put the REST API and your web page on the same domain. There is no quick configuration option for this. You will have to set up an actual server. Obviously there are lots of ways to do this. If you are already familiar with Node, you could serve the page from Node.js, and then have the Node server proxy the API calls. If you are already running all of the Sawtooth components with docker-compose though, it might be easier to use Docker and Apache.
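(If you do go the Node route, here is a minimal sketch, assuming Express and the http-proxy-middleware package are installed; the exact API varies a little between package versions, and the ports and public directory are illustrative.)
const express = require('express');
const { createProxyMiddleware } = require('http-proxy-middleware');

const app = express();
app.use(express.static('public'));   // serve your web app files
app.use('/api', createProxyMiddleware({
  target: 'http://localhost:8008',   // Sawtooth REST API
  changeOrigin: true,
  pathRewrite: { '^/api': '' },      // strip the /api prefix before forwarding
}));
app.listen(3000);                    // web page and API now share one origin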
Setting up an Apache Proxy with Docker
Create your Dockerfile
In the same directory as your web app create a text file called "Dockerfile" (no extension). Then make it look like this:
FROM httpd:2.4
RUN echo "\
LoadModule proxy_module modules/mod_proxy.so\n\
LoadModule proxy_http_module modules/mod_proxy_http.so\n\
ProxyPass /api http://rest-api:8008\n\
ProxyPassReverse /api http://rest-api:8008\n\
RequestHeader set X-Forwarded-Path \"/api\"\n\
" >>/usr/local/apache2/conf/httpd.conf
This is going to do a couple of things. First it will pull down the httpd image from Docker Hub, which is just a simple static server. Then we are using a bit of bash to add five lines to Apache's configuration file. These five lines import the proxy modules, tell Apache that we want to proxy http://rest-api:8008 to the /api route, and set the X-Forwarded-Path header so the REST API can properly build response URLs. Make sure that rest-api matches the actual name of the Sawtooth REST API service in your docker compose file.
Modify your docker compose file
Now, to the docker compose YAML file you are running Sawtooth through, you want to add a new property under the services key:
services:
  my-web-page:
    build: ./path/to/web/dir/
    image: my-web-page
    container_name: my-web-page
    volumes:
      - ./path/to/web/dir/public/:/usr/local/apache2/htdocs/
    expose:
      - 80
    ports:
      - '8000:80'
    depends_on:
      - rest-api
This will build your Dockerfile located at ./path/to/web/dir/Dockerfile (relative to the docker compose file), and run it with its default command, which is to start up Apache. Apache will serve whatever files are located in /usr/local/apache2/htdocs/, so we'll use volumes to link the path to your web files on your host machine (i.e. ./path/to/web/dir/public/), to that directory in the container. This is basically an alias, so if you update your web app later, you don't need to restart this docker container to see the changes. Finally, ports will take the server, which is at port 80 inside the container, and forward it out to localhost:8000.
Running it all
Now you should be able to run:
docker-compose -f path/to/your/compose-file.yaml up
And it will start up your Apache server along with the Sawtooth REST API and validator and any other services you defined. If you go to http://localhost:8000, you should see your web page, and if you go to http://localhost:8000/api/blocks, you should see a JSON representation of the blocks on chain. More importantly you should be able to make the request from your web app:
request.post({
  url: 'api/batches',
  body: batchListBytes,
  headers: { 'Content-Type': 'application/octet-stream' }
}, (err, response) => console.log(response));
Whew. Sorry for the long response, but I'm not sure if it is possible to solve CORS any faster. Hopefully this helps.
The transaction header should have details like the address of the block where it will be saved. Here is an example which I have used and which works fine for me:
String payload = "create,0001,BLockchain CPU,Black,5000";
logger.info("Sending payload as - "+ payload);
String payloadBytes = Utils.hash512(payload.getBytes()); // fix for invalid payload serialization
ByteString payloadByteString = ByteString.copyFrom(payload.getBytes());
String address = getAddress(IDEM, ITEM_ID); // get unique address for input and output
logger.info("Sending address as - "+ address);
TransactionHeader txnHeader = TransactionHeader.newBuilder().clearBatcherPublicKey()
.setBatcherPublicKey(publicKeyHex)
.setFamilyName(IDEM) // Idem Family
.setFamilyVersion(VER)
.addInputs(address)
.setNonce("1")
.addOutputs(address)
.setPayloadSha512(payloadBytes)
.setSignerPublicKey(publicKeyHex)
.build();
ByteString txnHeaderBytes = txnHeader.toByteString();
byte[] txnHeaderSignature = privateKey.signMessage(txnHeaderBytes.toString()).getBytes();
String value = Signing.sign(privateKey, txnHeader.toByteArray());
Transaction txn = Transaction.newBuilder().setHeader(txnHeaderBytes).setPayload(payloadByteString)
.setHeaderSignature(value).build();
BatchHeader batchHeader = BatchHeader.newBuilder().clearSignerPublicKey().setSignerPublicKey(publicKeyHex)
.addTransactionIds(txn.getHeaderSignature()).build();
ByteString batchHeaderBytes = batchHeader.toByteString();
byte[] batchHeaderSignature = privateKey.signMessage(batchHeaderBytes.toString()).getBytes();
String value_batch = Signing.sign(privateKey, batchHeader.toByteArray());
Batch batch = Batch.newBuilder()
.setHeader(batchHeaderBytes)
.setHeaderSignature(value_batch)
.setTrace(true)
.addTransactions(txn)
.build();
BatchList batchList = BatchList.newBuilder()
.addBatches(batch)
.build();
ByteString batchBytes = batchList.toByteString();
String serverResponse = Unirest.post("http://localhost:8008/batches")
.header("Content-Type", "application/octet-stream")
.body(batchBytes.toByteArray())
.asString()
.getBody();

Windows Etsy: Peer certificate cannot be authenticated with given CA certificates

In an effort to be OAuth'd with Etsy, I have tried countless solutions in C# to at least start the authentication process (i.e. get the login URL), e.g.
mashery.com, http://term.ie/oauth/example/client.php and question #8321034
but the response is always the same:
oauth_problem=signature_invalid&debug_sbs=GET&https%3A%2F%2Fopenapi.etsy.com%2Fv2%2Foauth%2Frequest_token&oauth_consumer_key%3D...my-consumer-key...%26oauth_nonce%3D2de91e1361d1906bbae04b15f42ab38d%26oauth_signature_method%3DHMAC-SHA1%26oauth_timestamp%3D1502362164%26oauth_version%3D1.0%26scope%3Dlistings_w%2520listings_r
and so I'm resorting to the dreaded world of PHP...
On my machine, I've installed the following (Windows 10):
XAMPP (xampp-win32-7.1.7-0-VC14-installer) with default options
JDK (jdk-8u144-windows-i586)
JRE (jre-8u144-windows-i586)
php_oauth.dll (php_oauth-2.0.2-7.1-ts-vc14-x86.zip), copied to C:\xampp\php\ext
cacert.pem (dated Jun 7 03:12:05 2017), copied to the following directories:
C:\xampp\perl\vendor\lib\Mozilla\CA
C:\xampp\phpMyAdmin\vendor\guzzle\guzzle\src\Guzzle\Http\Resources
Apache and Tomcat would not run to begin with from XAMPP because it said that ports 443 and 80 were being used/blocked and so I duly changed these to 444 and 122 in
C:\xampp\apache\conf\extra\httpd-ssl.conf
C:\xampp\apache\conf\httpd.conf
All good so far but when I run the following script in my browser (http://localhost:444/dashboard/etsy.php):
<?php
$base_uri = 'https://openapi.etsy.com';
$api_key = 'my-etsy-api-key';
$secret = 'my-etsy-api-secret';
$oauth = new OAuth($api_key, $secret, OAUTH_SIG_METHOD_HMACSHA1, OAUTH_AUTH_TYPE_URI);
$req_token = $oauth->getRequestToken($base_uri .= "/v2/oauth/request_token?scope=listings_w%20transactions_r", 'oob');
$login_url = $req_token['login_url'];
print "Please log in and allow access: $login_url \n\n";
$verifier = readline("Please enter verifier: ");
$verifier = trim($verifier);
$oauth->setToken($req_token['oauth_token'], $req_token['oauth_token_secret']);
$acc_token = $oauth->getAccessToken($base_uri .= "/v2/oauth/access_token", null, $verifier);
$oauth_token = $acc_token['oauth_token'];
$oauth_token_secret = $acc_token['oauth_token_secret'];
$oauth->setToken($oauth_token, $oauth_token_secret);
print "Token: $oauth_token \n\n";
print "Secret: $oauth_token_secret \n\n";
?>
I get the following error message:
Fatal error: Uncaught OAuthException: making the request failed (Peer
certificate cannot be authenticated with given CA certificates) in
C:\xampp\htdocs\dashboard\etsy.php:8 Stack trace: #0
C:\xampp\htdocs\dashboard\etsy.php(8):
OAuth->getRequestToken('https://openapi...', 'oob') #1 {main} thrown
in C:\xampp\htdocs\dashboard\etsy.php on line 8
I've tried running the script with each thread-safe, x86 version of OAuth (http://windows.php.net/downloads/pecl/releases), stopping and restarting Apache each time, but no luck.
I'm at my wits' end.
How do I resolve this peer certificate problem?
Simply disable the SSL check locally:
$oauth->disableSSLChecks()
OAuth uses cURL's SSL certificate verification by default. The simple way for a local Apache server is to disable it; alternatively, configure the CA certificate for cURL, which will also resolve the issue for OAuth.
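In the script from the question, that means calling it right after constructing the OAuth object (local development only):
$oauth = new OAuth($api_key, $secret, OAUTH_SIG_METHOD_HMACSHA1, OAUTH_AUTH_TYPE_URI);
$oauth->disableSSLChecks(); // skips peer certificate verification; do not use in production
$req_token = $oauth->getRequestToken($base_uri .= "/v2/oauth/request_token?scope=listings_w%20transactions_r", 'oob');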
As per the PHP documentation, you can also set the certificate path explicitly:
$oauth->setCAPath("F:\xampp\php\extras\ssl\cacert.pem");
print_r($oauth->getCAPath());
You can also set the request engine to cURL or PHP streams if SSL is already configured.
Official PHP documentation
