I tried to find answers to this in related posts, but I just can't find any.
I created a local plugin and configured it this way:
traefik.toml
[experimental.localPlugins.myplugin]
moduleName = "test.com/traefik/myplugin"
.traefik.yml
displayName: Do some stuff
type: middleware
import: test.com/traefik/myplugin
summary: do a lot of nice stuff
testData:
  Headers:
    Foo: Bar
I can see in the logs that the plugin is loaded:
Static configuration loaded {"global":{"checkNewVersion":true},"serversTransport":{"maxIdleConnsPerHost":200},"entryPoints":{"http":{"address":":80","transport":{"lifeCycle":{"graceTimeOut":"10s"},"respondingTimeouts":{"idleTimeout":"3m0s"}},"forwardedHeaders":{},"http":{},"udp":{"timeout":"3s"}},"traefik":{"address":":8080","transport":{"lifeCycle":{"graceTimeOut":"10s"},"respondingTimeouts":{"idleTimeout":"3m0s"}},"forwardedHeaders":{},"http":{},"udp":{"timeout":"3s"}}},"providers":{"providersThrottleDuration":"2s","docker":{"watch":true,"endpoint":"unix:///var/run/docker.sock","defaultRule":"Host(`{{ normalize .Name }}`)","exposedByDefault":true,"swarmModeRefreshSeconds":"15s"}},"api":{"insecure":true,"dashboard":true},"ping":{"entryPoint":"http","terminatingStatusCode":503},"log":{"level":"DEBUG","format":"common"},"accessLog":{"format":"json","filters":{},"fields":{"defaultMode":"keep","headers":{"defaultMode":"drop"}}},"pilot":{"dashboard":true},"experimental":{"localPlugins":{"myplugin":{"moduleName":"test.com/traefik/myplugin"}}}}}
But when I try to use it like this:
traefik.http.routers.myrouter.middlewares="testplugin-myplugin"
or
traefik.http.routers.myrouter.middlewares="myplugin"
I get an error like:
middleware "testplugin-mypluginn#docker" does not exist
I'm using the Traefik 2.5.2 Docker image. I tried a few combinations of static and dynamic values without luck. :( Am I missing some configuration?
You need to reference @file when the middleware is defined in a file. You can see it is looking for testplugin-myplugin@docker, which implies Traefik expects the middleware to be defined in the Docker provider's rules.
Try something like:
traefik.http.routers.myrouter.middlewares="myplugin#file"
I have an .env file which contains two sensitive items and two non-sensitive ones. I'm running SvelteKit 1.0 and using Netlify serverless functions, with a db-helper file which has
require('dotenv').config();
const dbName = process.env.MONGODB_DATABASE;
and similarly gets the other variables. However, this crashes with the error "Cannot find module 'dotenv'"!
I tried the following, with the same error:
const dbName = import.meta.env.MONGODB_DATABASE
I tried process.env['MONGODB_DATABASE'] and import.meta.env['MONGODB_DATABASE']. Failed.
I tried prefixing the env vars with VITE, using both process.env and import.meta.env, with and without the [' '] wrapper. Failed.
I read that you don't have to explicitly load dotenv, as Vite does this. Tried without. Failed.
Has anyone got a solution to this?
Environment variables should be accessed through these modules:
$env/dynamic/private
$env/dynamic/public
$env/static/private
$env/static/public
The public variants are restricted to variables prefixed with PUBLIC_ (this prefix can be configured).
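For example, a minimal sketch of a server-side helper using the static private module (the file path is illustrative; the variable name comes from the question):

// src/lib/server/db.js
import { MONGODB_DATABASE } from '$env/static/private';

export const dbName = MONGODB_DATABASE;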
There is some additional documentation for some adapters regarding the loading of these variables, e.g. for the Node adapter, but there is nothing specific for Netlify.
Looking at the Netlify docs, it seems you have to use its UI/tools or a Netlify config to load variables, rather than using an .env file:
With the Netlify CLI, use env:set to update a site environment variable, env:import to import from an updated .env file, and env:unset to delete a site environment variable and all of its contextual values.
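For example, to import the variables from an existing .env file into the site configuration:

netlify env:import .env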
I was using
require('dotenv').config()
which worked locally but gave a "Cannot find module 'dotenv'" error when deployed (to Netlify). I found that if I used "import" instead of "require", it worked:
import dotenv from 'dotenv';
dotenv.config();
So that's a solution. But H.B. correctly pointed out the new SvelteKit way env variables should be used. Thanks for that.
PS. The same require/import issue applies to jsonwebtoken, so use import * as jwt from 'jsonwebtoken' instead of const jwt = require('jsonwebtoken').
I am using Spring Boot 2.7.1 with native configuration, following the guide at the link below:
Spring native official doc
My problem is that when running bootBuildImage, the buildpack ["gcr.io/paketo-buildpacks/java-native-image:7.19.0"] tries to download the external dependency paketo-buildpacks/bellsoft-liberica from https://download.bell-sw.com/vm/22.3.0/bellsoft-liberica-vm-core-openjdk17.0.5+8-22.3.0+2-linux-amd64.tar.gz, which is not allowed by the company firewall.
I then read that you can configure dependency-mapping bindings for these dependencies within the required buildpack, at least using this pack CLI guide.
But when using the pack CLI directly, the Gradle bootBuildImage task becomes somewhat redundant, and I would have to use an external tool to build the native Docker container and image. I would like to use only bootBuildImage to map these dependency bindings.
I found this binding function in the Gradle bootBuildImage docs, but I am not sure what string it expects, or whether the path should be similar to the pack CLI config; I can't find any relevant info.
My bootBuildImage config:
bootBuildImage {
    builder = 'docker.io/paketobuildpacks/builder:tiny'
    runImage = 'docker.io/paketobuildpacks/run:tiny-cnb'
    buildpacks = ['gcr.io/paketo-buildpacks/java-native-image']
    binding("bindnings/bellsoft-jre-config:/platform/bindings/bellsoft-jre-config")
    environment = [
        "BP_NATIVE_IMAGE" : "true",
    ]
}
The dependency-mapping config contains 2 files:
The type file is created with:
echo "dependency-mapping" >> type
The sha256-named file for bellsoft-liberica, 3dea0f7a9312c738d22b5e399b6ce9abe13b45b2bc2c04346beb941a94e8a932, is created with:
echo "https://download.bell-sw.com/vm/22.3.0/bellsoft-liberica-vm-core-openjdk17.0.5+8-22.3.0+2-linux-amd64.tar.gz" >> 3dea0f7a9312c738d22b5e399b6ce9abe13b45b2bc2c04346beb941a94e8a932
And yes, I'm aware that this is the exact same URL; it is just there to test that the binding config is set up correctly. If it is, the build should instead fail with an untrusted-certificate error when downloading.
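For reference, the on-disk layout implied by the config above would look like this (a sketch; the local directory name must match what is passed to binding(...)):

bindnings/
└── bellsoft-jre-config/
    ├── type        (contains: dependency-mapping)
    └── 3dea0f7a9312c738d22b5e399b6ce9abe13b45b2bc2c04346beb941a94e8a932        (contains: the replacement download URL)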
Currently the build fails with:
Caused by: org.springframework.boot.buildpack.platform.docker.transport.DockerEngineException: Docker API call to 'localhost/v1.24/containers/create' failed with status code 400 "Bad Request"
at org.springframework.boot.buildpack.platform.docker.transport.HttpClientTransport.execute(HttpClientTransport.java:156)
at org.springframework.boot.buildpack.platform.docker.transport.HttpClientTransport.execute(HttpClientTransport.java:136)
at org.springframework.boot.buildpack.platform.docker.transport.HttpClientTransport.post(HttpClientTransport.java:108)
at org.springframework.boot.buildpack.platform.docker.DockerApi$ContainerApi.createContainer(DockerApi.java:340)
at org.springframework.boot.buildpack.platform.docker.DockerApi$ContainerApi.create(DockerApi.java:331)
at org.springframework.boot.buildpack.platform.build.Lifecycle.createContainer(Lifecycle.java:237)
at org.springframework.boot.buildpack.platform.build.Lifecycle.run(Lifecycle.java:217)
at org.springframework.boot.buildpack.platform.build.Lifecycle.execute(Lifecycle.java:151)
at org.springframework.boot.buildpack.platform.build.Builder.executeLifecycle(Builder.java:157)
at org.springframework.boot.buildpack.platform.build.Builder.build(Builder.java:115)
at org.springframework.boot.gradle.tasks.bundling.BootBuildImage.buildImage(BootBuildImage.java:521)
Which I assume is caused by an invalid binding config. But I can't find what it should be.
Paketo configuration (binding)
Dependency mapping bindings can be tricky. There are a number of things that have to be just right, or the buildpacks won't pick up the binding and won't map dependencies.
While there is talk of changing this in buildpacks to make swapping out dependencies easier, the short-term solution is to use binding-tool.
You can run bt dm -b paketo-buildpacks/bellsoft-liberica and it will download the dependencies for the specified buildpack and generate the binding files for you.
By default it downloads the dependencies and writes the bindings to $PWD/bindings, but you can change that. For example, I like to put my dependencies in my home directory so I can share them across apps: SERVICE_BINDING_ROOT=~/.bt/bindings bt dm ..., or export SERVICE_BINDING_ROOT=~/.bt/bindings (or whatever command you use to set an env variable in your shell).
Once you have the bindings created, you just need to point your app to them. How you set the property differs between Maven & Gradle, but the value of the property is the same. It should be <local-path>:<container-path>.
The local path should be the full or relative path to where you created the bindings with bt dm. The container path should almost always be /platform/bindings. This maps your full set of bindings locally to the full set of bindings that the buildpacks will consume. In other words, put all of your bindings into the same directory locally, map that to /platform/bindings and the buildpacks will see everything.
For example with Gradle: binding("bindings/:/platform/bindings").
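Applied to the Gradle config from the question, that would look roughly like this (a sketch, assuming the bindings were generated into ./bindings by bt dm):

bootBuildImage {
    builder = 'docker.io/paketobuildpacks/builder:tiny'
    runImage = 'docker.io/paketobuildpacks/run:tiny-cnb'
    buildpacks = ['gcr.io/paketo-buildpacks/java-native-image']
    // map the whole local bindings directory to the buildpacks' binding root
    binding('bindings/:/platform/bindings')
    environment = ['BP_NATIVE_IMAGE': 'true']
}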
You can adjust the container path by setting SERVICE_BINDING_ROOT in the container as well, but it doesn't offer a lot of advantage.
You can also set multiple entries for bindings, as long as the paths are unique. So you could set binding("/home/user/.bt/bindings/foo:/platform/bindings/foo") and also binding("bindings/bar:/platform/bindings/bar"). That would let you take bindings from two different local locations and map them into the /platform/bindings directory so both would be visible to buildpacks. This gives you more fine-grained control, but, as you can see, it becomes pretty verbose.
Details on configuring Maven and configuring Gradle for buildpacks can be found at those links.
I've attempted to enable Read-only user attributes in Keycloak as per the docs: https://www.keycloak.org/docs/latest/server_admin/
However the documented configuration does not actually prevent a user from changing their attributes.
I'm using Keycloak 15.0.0 with the regular Docker image from Docker Hub.
I made a .cli file and added it to my Docker image, built from:
FROM jboss/keycloak:15.0.0
ADD RESTRICT_USER_ATTRIBUTES.cli /opt/jboss/startup-scripts/
The contents of RESTRICT_USER_ATTRIBUTES.cli:
embed-server --server-config=standalone-ha.xml --std-out=echo
batch
/subsystem=keycloak-server/spi=userProfile/:add
/subsystem=keycloak-server/spi=userProfile/provider=legacy-user-profile/:add(properties={},enabled=true)
/subsystem=keycloak-server/spi=userProfile/provider=legacy-user-profile/:map-put(name=properties,key=read-only-attributes,value=[myUserAttribute])
run-batch
stop-embedded-server
The .cli file is processed according to the log. I can exec into the docker instance and check the configuration using jboss-cli.sh.
But the end user can freely edit myUserAttribute using Postman or another tool.
What am I doing wrong here?
I just had this issue, and it seems the documentation is out-of-date.
They changed the provider name, probably in 15.0.0.
Try changing your cli script to:
# ...
/subsystem=keycloak-server/spi=userProfile/:add
/subsystem=keycloak-server/spi=userProfile/provider=declarative-user-profile/:add(properties={},enabled=true)
/subsystem=keycloak-server/spi=userProfile/provider=declarative-user-profile/:map-put(name=properties,key=read-only-attributes,value=[myUserAttribute])
# ...
I am building a Dropwizard service which will connect to multiple data sources, including MySQL and Elasticsearch. All the MySQL settings can be defined in the YAML config file, which gets read in when running from the command line.
But what about settings for the other data sources that I will connect to myself, for example Elasticsearch? Where can I define those?
I thought I could add another command-line Command, which I tried, but I can only run a single command at a time, so I can't run both the 'server' command and my custom command 'custom', which takes my own config file for Elasticsearch.
How can I introduce settings, either individually or from a file, that are defined at run time (not hard-coded)?
Check out the Dropwizard Core documentation on adding custom configuration.
You'd create an ElasticSearchFactory class similar to the MessageQueueFactory in the example, reference it in your Configuration (which is in turn referenced in your Application), and then add the options you need to your main YAML configuration.
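A minimal sketch of what that could look like (class, property, and field names are illustrative):

import com.fasterxml.jackson.annotation.JsonProperty;
import io.dropwizard.Configuration;
import javax.validation.Valid;
import javax.validation.constraints.NotNull;

// Holds the Elasticsearch settings read from the main yaml config.
public class ElasticSearchFactory {

    private String host = "localhost";
    private int port = 9200;

    @JsonProperty
    public String getHost() { return host; }

    @JsonProperty
    public void setHost(String host) { this.host = host; }

    @JsonProperty
    public int getPort() { return port; }

    @JsonProperty
    public void setPort(int port) { this.port = port; }
}

// Referenced from the application's Configuration class:
public class MyServiceConfiguration extends Configuration {

    @Valid
    @NotNull
    private ElasticSearchFactory elasticSearch = new ElasticSearchFactory();

    @JsonProperty("elasticSearch")
    public ElasticSearchFactory getElasticSearchFactory() { return elasticSearch; }

    @JsonProperty("elasticSearch")
    public void setElasticSearchFactory(ElasticSearchFactory factory) { this.elasticSearch = factory; }
}

The corresponding section in the main yaml config would then be:

elasticSearch:
  host: localhost
  port: 9200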
I'm trying to migrate a website from php5-fpm to hhvm.
We use Docker for local dev environments, and we set things like MySQL and Redis details using environment variables that are created using Docker's --link.
With php5-fpm, it was easy to pass these variables into PHP by setting them in www.conf.
With HHVM 3.5.0, I can't seem to find the equivalent. I got close when I found EnvVariables in an example config on this page, but HHVM 3.5.0 now uses INI files for config, and I can't find where to set these variables there.
Any ideas?
Turns out they were there all along in $_ENV instead of $_SERVER. Must be fastcgi_params.
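For example (a sketch; MYSQL_PORT_3306_TCP_ADDR is an illustrative variable name of the kind Docker's --link creates):

<?php
// Docker-provided variables show up in $_ENV rather than $_SERVER
$mysqlHost = isset($_ENV['MYSQL_PORT_3306_TCP_ADDR'])
    ? $_ENV['MYSQL_PORT_3306_TCP_ADDR']
    : '127.0.0.1';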