I've been updating my company's projects to the latest versions of all the packages. When I try to run swagger (http://localhost:4000/swagger/?url=/swagger/swagger.json), it now redirects me to the Petstore.
After searching Google for an answer, it seems like I need to set the following parameter:
queryConfigEnabled=true
According to the Swagger documentation, I can set the parameter in the following ways:
Swagger UI accepts configuration parameters in four locations.
From lowest to highest precedence:
1. The swagger-config.yaml in the project root directory, if it exists, is baked into the application
2. configuration object passed as an argument to Swagger UI (SwaggerUI({ ... }))
3. configuration document fetched from a specified configUrl
4. configuration items passed as key/value pairs in the URL query string
I tried the following URL:
http://localhost:4000/swagger/?queryConfigEnabled=true&url=http://localhost:4000/swagger/swagger.json
but that still redirected me to the Petstore.
I then tried to create a swagger-config.yaml file in the project root directory with the following contents:
queryConfigEnabled: "true"
url: "/swagger/swagger.json"
dom_id: "#swagger-ui"
validatorUrl: "https://validator.swagger.io/validator"
That didn't work either. I even tried to copy the config file to the project /src folder to see if that made a difference.
I tried other things: using a swagger-config.json file instead, copying the config file to the app root in the Docker image, setting configUrl to point to the config file, and setting the Docker environment variable CONFIG_URL to point to the Swagger config file. None of these worked; I'm still being redirected to the Petstore.
As a last resort, I modified the dist/swagger-initializer.js file and added the queryConfigEnabled parameter.
That worked.
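For reference, the change was roughly the following (a sketch based on swagger-ui-dist's dist/swagger-initializer.js; the exact contents vary by swagger-ui version):

window.onload = function() {
  window.ui = SwaggerUIBundle({
    // added so that config parameters in the URL query string are honoured
    queryConfigEnabled: true,
    url: "/swagger/swagger.json",
    dom_id: '#swagger-ui',
    presets: [
      SwaggerUIBundle.presets.apis,
      SwaggerUIStandalonePreset
    ],
    layout: "StandaloneLayout"
  });
};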
Obviously, if I delete the /node_modules folder and re-run npm install, that change will go away.
What am I doing wrong? How do I fix this?
I am using Spring Boot 2.7.1 with native configuration, following the guide in the link below.
Spring native official doc
My problem is that when running bootBuildImage, the buildpack ["gcr.io/paketo-buildpacks/java-native-image:7.19.0"] tries to download the external dependency paketo-buildpacks/bellsoft-liberica from https://download.bell-sw.com/vm/22.3.0/bellsoft-liberica-vm-core-openjdk17.0.5+8-22.3.0+2-linux-amd64.tar.gz, which is not allowed by the company firewall.
I then found that you can configure dependency-mapping bindings for these dependencies within the required buildpack, at least using this pack CLI guide.
But when using pack CLI alone, the Gradle bootBuildImage task becomes somewhat irrelevant, and I would have to use an external tool to produce the native Docker image. I would like to use only bootBuildImage to map these dependency bindings.
I found the binding function in the Gradle bootBuildImage docs, but I am not sure what string it expects, or whether the path should be similar to the pack CLI config; I can't find any relevant info.
The bootBuildImage config is shown below:
bootBuildImage {
    builder = 'docker.io/paketobuildpacks/builder:tiny'
    runImage = 'docker.io/paketobuildpacks/run:tiny-cnb'
    buildpacks = ['gcr.io/paketo-buildpacks/java-native-image']
    binding("bindnings/bellsoft-jre-config:/platform/bindings/bellsoft-jre-config")
    environment = [
        "BP_NATIVE_IMAGE" : "true",
    ]
}
The dependency-mapping binding contains two files:
The type file, created with:
echo "dependency-mapping" >> type
The file named after the sha256 of the bellsoft-liberica artifact, created with:
echo "https://download.bell-sw.com/vm/22.3.0/bellsoft-liberica-vm-core-openjdk17.0.5+8-22.3.0+2-linux-amd64.tar.gz" >> 3dea0f7a9312c738d22b5e399b6ce9abe13b45b2bc2c04346beb941a94e8a932
And yes, I'm aware that this is the exact same URL, but that is just to test that the binding config is set up correctly: if it is, the download should instead fail on an untrusted certificate.
Currently the build fails with:
Caused by: org.springframework.boot.buildpack.platform.docker.transport.DockerEngineException: Docker API call to 'localhost/v1.24/containers/create' failed with status code 400 "Bad Request"
    at org.springframework.boot.buildpack.platform.docker.transport.HttpClientTransport.execute(HttpClientTransport.java:156)
    at org.springframework.boot.buildpack.platform.docker.transport.HttpClientTransport.execute(HttpClientTransport.java:136)
    at org.springframework.boot.buildpack.platform.docker.transport.HttpClientTransport.post(HttpClientTransport.java:108)
    at org.springframework.boot.buildpack.platform.docker.DockerApi$ContainerApi.createContainer(DockerApi.java:340)
    at org.springframework.boot.buildpack.platform.docker.DockerApi$ContainerApi.create(DockerApi.java:331)
    at org.springframework.boot.buildpack.platform.build.Lifecycle.createContainer(Lifecycle.java:237)
    at org.springframework.boot.buildpack.platform.build.Lifecycle.run(Lifecycle.java:217)
    at org.springframework.boot.buildpack.platform.build.Lifecycle.execute(Lifecycle.java:151)
    at org.springframework.boot.buildpack.platform.build.Builder.executeLifecycle(Builder.java:157)
    at org.springframework.boot.buildpack.platform.build.Builder.build(Builder.java:115)
    at org.springframework.boot.gradle.tasks.bundling.BootBuildImage.buildImage(BootBuildImage.java:521)
I assume this is caused by an invalid binding config, but I can't find what it should be.
Paketo configuration (binding)
Dependency mapping bindings can be tricky. There are a number of things that have to be just right, or the buildpacks won't pick up the binding and won't map dependencies.
While there are talks of how we can change this in buildpacks to make swapping out dependencies easier, the short-term solution is to use binding-tool.
You can run bt dm -b paketo-buildpacks/bellsoft-liberica and it will go download the dependencies from the specified buildpack and generate the binding files for you.
By default it downloads dependencies and writes the bindings to $PWD/bindings, but you can change that. For example, I like to put my dependencies in my home directory so I can share them across apps, e.g. SERVICE_BINDING_ROOT=~/.bt/bindings bt dm ..., or export SERVICE_BINDING_ROOT=~/.bt/bindings (or whatever command you use to set an environment variable in your shell).
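For reference, the layout it produces looks roughly like this (a sketch; the binding name and file names depend on the dependencies the buildpack declares):

bindings/
    bellsoft-liberica/
        type        contains the text "dependency-mapping"
        <sha256>    one file per dependency, named after its sha256, containing the replacement URI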
Once you have the bindings created, you just need to point your app to them. How you set the property differs between Maven & Gradle, but the value of the property is the same. It should be <local-path>:<container-path>.
The local path should be the full or relative path to where you created the bindings with bt dm. The container path should almost always be /platform/bindings. This maps your full set of bindings locally to the full set of bindings that the buildpacks will consume. In other words, put all of your bindings into the same directory locally, map that to /platform/bindings and the buildpacks will see everything.
For example with Gradle: binding("bindings/:/platform/bindings").
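Applied to the build script from the question, that might look roughly like this (a sketch; the local bindings directory is an assumption):

bootBuildImage {
    builder = 'docker.io/paketobuildpacks/builder:tiny'
    runImage = 'docker.io/paketobuildpacks/run:tiny-cnb'
    buildpacks = ['gcr.io/paketo-buildpacks/java-native-image']
    // map the whole local bindings directory into the container
    binding("${project.projectDir}/bindings:/platform/bindings")
    environment = [
        "BP_NATIVE_IMAGE" : "true",
    ]
}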
You can adjust the container path by setting SERVICE_BINDING_ROOT in the container as well, but it doesn't offer a lot of advantage.
You can also set multiple entries for bindings, so long as the paths are unique. So you could set binding("/home/user/.bt/bindings/foo:/platform/bindings/foo") and also binding("bindings/bar:/platform/bindings/bar"). That would let you take bindings from two different locations locally and map them into the /platform/bindings directory so both would be visible to buildpacks. This gives you more fine-grained control but as you can see becomes pretty verbose.
Details on configuring Maven and configuring Gradle for buildpacks can be found in the Spring Boot Maven and Gradle plugin documentation.
My goal is to put my telegraf config into source control. To do so, I have a repo in my user's home directory with the appropriate config file which has already been tested and proven working.
I have added the path to the new config file in the "default" environment variables file:
/etc/default/telegraf
like this:
TELEGRAF_CONFIG_PATH="/home/ubuntu/some_repo/telegraf.conf"
... as well as other required variables such as passwords.
However, when I attempt to run
telegraf --test
it says "No config file specified, and could not find one in $TELEGRAF_CONFIG_PATH", etc.
Further, if I force it by
telegraf --test --config /home/ubuntu/some_repo/telegraf.conf
Then the process fails because it is missing the other required variables.
Questions:
What am I doing wrong?
Is there not also a way of specifying a config directory too (I would like to break my file down into separate input files)?
Perhaps as an alternative to all of this... is there not a way of specifying additional configuration files to be included from within the default /etc/telegraf/telegraf.conf file? (I've been unable to find any mention of this in documentation).
What am I doing wrong?
Check which user:group owns /etc/default/telegraf. That file is better suited to running telegraf as a service via systemd. Additionally, if you run env, do you see the TELEGRAF_CONFIG_PATH variable? What about your other variables? If not, then you probably need to source the file first.
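For example (a sketch of the checks described above):

$ ls -l /etc/default/telegraf      # who owns the file?
$ env | grep TELEGRAF              # is TELEGRAF_CONFIG_PATH visible in your current shell?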
Is there not also a way of specifying a config directory too (I would like to break my file down into separate input files)?
Yes! Take a look at all the options of telegraf with telegraf --help and you will find:
--config-directory <directory> directory containing additional *.conf files
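For example, a split configuration might be loaded like this (the paths are only illustrative):

$ telegraf --config /etc/telegraf/telegraf.conf \
           --config-directory /etc/telegraf/telegraf.d \
           --test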
Perhaps as an alternative to all of this... is there not a way of specifying additional configuration files to be included from within the default /etc/telegraf/telegraf.conf file? (I've been unable to find any mention of this in documentation).
That is not the method I would suggest going down. Check out the config directory option above I mentioned.
Ok, after a LOT of trial and error, I figured everything out. For those facing similar issues, here is your shortcut to the answer:
Firstly, remember that after adding variables to the /etc/default/telegraf file, it must effectively be reloaded; on Ubuntu with systemd, that means restarting the service (e.g. sudo systemctl restart telegraf).
You can verify that the variables have been loaded successfully using this:
$ sudo strings /proc/<pid>/environ
where <pid> is the "Main PID" from the telegraf status output
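One way to do this in a single step, assuming telegraf runs under systemd:

$ sudo strings /proc/$(systemctl show -p MainPID --value telegraf)/environ | grep TELEGRAF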
Secondly, when testing (e.g. telegraf --test), you will ALSO have to load the same environment variables into your current shell (e.g. export var=value); this is the part that is not necessarily intuitive and isn't documented. The goal is that running
$ env
shows the same results as the previous command.
Hint: rather than exporting each variable manually, you can load the env file itself directly into your current shell.
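One common way to do that in bash (a sketch):

$ set -a                          # export every variable defined from here on
$ source /etc/default/telegraf    # loads TELEGRAF_CONFIG_PATH and the other variables
$ set +a
$ telegraf --test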
I'm new to Grails 3 and starting to port.
I have tried placing config values in application.groovy and application.yml within the plugin's conf dir, to no avail: reading the values from within a plugin service fails. Adding the values to the main application's application.groovy works.
What is the secret here? Previously I could load specific conf files via config.locations simply by naming them, which was nice and simple. I've seen some resolutions that involve setting up env vars with paths to config files, which I'd like to avoid; they then set up file URIs for dev and classpath URIs for the other environments that get war-packaged, and I don't really want to do any of that.
Do we no longer have the ability to place config within a plugin and have that automatically merged with the applications config?
The plugin may provide config settings in grails-app/conf/plugin.yml.
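For example, a minimal sketch (the setting name myPlugin.greeting is just an illustration):

# grails-app/conf/plugin.yml, inside the plugin project
myPlugin:
    greeting: Hello from the plugin

# it can then be read from a service in the plugin or the host app, e.g.
#   grailsApplication.config.getProperty('myPlugin.greeting')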
I have trouble connecting to sites with SSL, i.e. https; Bower can successfully download artifacts from the internet if the URL begins with http.
bower install downloads dependencies via https. Is there any way to make it download via http?
I had trouble with this too, and I couldn't find an elegant way to fix it. My workaround was:
Go to your global npm folder and find the "bower" folder (on Windows 7 that is C:\Users\<your user>\AppData\Roaming\npm\node_modules).
In that folder, find the default.js file located at node_modules\bower-config\lib\util\default.js.
Inside that file you will find a var defaults. Change the registry URL property from "https" to "http".
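After the change, the relevant part of that file looks roughly like this (a sketch; the real defaults object has many more properties and varies by bower-config version):

var defaults = {
    // was: 'https://bower.herokuapp.com'
    registry: 'http://bower.herokuapp.com'
};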
Yes, I know this shouldn't be done like this, but at least it helped me bypass the connection error.
Hope that helps!
You can change the registry used by Bower in the .bowerrc file. The default registry is: https://bower.herokuapp.com and is defined in node_modules/bower-config/lib/util/default.js (as described by Jean Manuel Arias in his answer).
To override for your project, add a value for the registry setting in .bowerrc. An example file might be:
{
    "directory": "<YOUR LIBRARY INSTALL DIRECTORY>",
    "registry": "http://bower.herokuapp.com"
}
In the above example, the default https registry is being overridden with the http version. A full list of the available .bowerrc settings can be found at: Bower Spec.
You can do a global override for the current user by creating a %USERPROFILE%\.bowerrc file (on Windows; on Linux it is ~/.bowerrc). Bower follows a search path similar to NPM's when applying settings (see npmrc settings). This is probably a better route, as it avoids cluttering your project with local settings.
I have in my main config something like:
grails.config.locations = ["file:grails-app/config/Jawr.groovy"].
When running the application with grails run-app, everything is OK.
But on deployment (creating the war archive) this no longer works, as the file Jawr.groovy is no longer kept on the filesystem (it exists only inside the war).
Do you have a solution for that? How do you include external files in the main Grails configuration?
Thanks.
Okay, a few things here.
First, because you don't have a leading slash on your config path, that is a path relative to who knows where. I played with this in Tomcat, and that path ends up being relative to the working directory you were in when starting the Tomcat server. If you start Tomcat, shut it down, change directories, then start it again, you are going to get two different config paths.
Second, the grails-app directory only exists within the source tree of your Grails project. The structure of an unpacked WAR file is more like the web-app folder of your Grails source tree, with folders like WEB-INF, META-INF, js, images, etc.
Third, you probably want to avoid putting your externalized config file inside the folder of your webapp. The next time you deploy your app, that configuration is going to get wiped away with the old version of the app. One of the points of the externalized config is so that you can redeploy without having to reconfigure.
A simple, but less than ideal, solution would be to use a static, fully qualified path, like /etc/yourApp/conf.groovy, then put that in the documentation. There is also a plug-in that handles this.
http://www.grails.org/plugin/external-config
I haven't used it, but the description makes it sound like it does sensible things.
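For reference, the static-path approach mentioned above might look like this in Config.groovy (the path is just an example):

grails.config.locations = ["file:/etc/yourApp/conf.groovy"]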
See this: https://stackoverflow.com/questions/6341117/is-it-possible-that-grails-xxconfig-groovy-as-a-script-no-compile
Then I put the config file into the shared/ folder and modified the following:
// Config.groovy
grails.config.locations = ["file:shared/TZLibConfig.groovy"]

// BuildConfig.groovy
grails.war.resources = { stagingDir, args ->
    copy(todir: "${stagingDir}/WEB-INF/shared") {
        fileset(dir: "shared", includes: "**")
    }
}
At my work, our team often uses a system property to store the path to the config file (often in the home folder of the user running the app, for privilege reasons). Then we manually copy the config file to that path.
To pick that property up (for example in the production environment), we use the following code in Config.groovy:
if (System.properties["${appName}.config.location"]) {
    grails.config.locations = ["file:" + System.properties["${appName}.config.location"]]
}
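You then pass the property when starting the app; for example, assuming the application name is myApp (hypothetical):

# during development
grails -DmyApp.config.location=/home/appuser/myApp-config.groovy run-app

# or for a deployed war, e.g. in Tomcat's setenv.sh
export JAVA_OPTS="$JAVA_OPTS -DmyApp.config.location=/home/appuser/myApp-config.groovy"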
This article suggests allowing the user to specify the location of the config file as an environment variable or as a Java system property, meaning you can simply specify it with -D on the command line. This can be used in addition to all the other methods.