I am trying to change the node name of MongooseIM to my local IP, something like 10.0.0.21. I changed the config in MongooseIM/rel/vars.config:
{node_name, "mongooseim@10.0.0.21"}
Is there anything else I need to change?
I am getting the following error when I change the node name:
=INFO REPORT==== 11-Mar-2016::17:11:05 ===
Can't set short node name!
Please check your configuration
escript: exception error: no match of right hand side value
{error,
{{shutdown,
{failed_to_start_child,net_kernel,
{'EXIT',nodistribution}}},
{child,undefined,net_sup_dynamic,
{erl_distribution,start_link,
[['mongooseim_maint_6589@10.0.XXX.XXX',
shortnames]]},
permanent,1000,supervisor,
[erl_distribution]}}}
From what you write I assume that you're changing the node name at build time and relying on the build machinery to generate the correct configuration files. This is OK.
In this light, you also have to modify rel/files/vm.args - find the line:
-sname {{node_name}}
and change it to:
-name {{node_name}}
Now the explanation. -name is used to run an Erlang node as a distributed node. This requires a DNS server to be set up. -sname also enables distributed mode, but is fine with just /etc/hosts entries - the host part of the node name, however, can't contain dots: host@localdomain is fine, but host@my.fictional.domain is not. The latter is your case, with the small difference that you use numbers instead of words.
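As a minimal illustration (the host alias and the /etc/hosts entry below are only an example, not something MongooseIM requires):
# /etc/hosts entry so the name resolves without DNS
10.0.0.21   mongooseim-host
# short name: the host part must not contain dots
-sname mongooseim@mongooseim-host
# long name: fully qualified host names or plain IP addresses are fine
-name mongooseim@10.0.0.21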
Please note that nodes using -name can't use Distributed Erlang communication with nodes using -sname.
For a production multi-node cluster you might consider generating the server release once, taking the generated config files, tweaking them to your needs and replicating as makes sense for the expected number of nodes. Then each time you deploy a new node you use the same generated release (which saves on build time), but add your configuration files customized for the relevant node.
How can I tell if the settings files associated with a Mosquitto instance have been properly applied?
I want to add a configuration file to the conf.d folder to override some settings in the default file, but I do not know how to check they have been applied correctly once the Broker is running.
i.e. change persistence to false (without editing the standard file).
Test it.
You can run mosquitto with verbose output enabled, which will generally give you feedback on what options were set, but don't just believe that.
To do that, stop running Mosquitto as a service (how you do this depends on your setup) and manually run it from the shell with the -v option. Be sure to point it at the correct configuration file with the -c option.
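For example, on a typical Linux install managed by systemd (adjust the service name and config path to your setup):
sudo systemctl stop mosquitto
mosquitto -v -c /etc/mosquitto/mosquitto.conf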
That's not enough to be sure that it's actually working properly. To do that you need to test it.
Options have consequences or we wouldn't use them.
If you configure Mosquitto to listen on a specific port, test it by trying to connect to that port.
If you configure Mosquitto to require secure connections on a port, test it by trying to connect to the port unsecured (this shouldn't work) and secured (this should work).
You should be able to devise relatively simple tests for any options you can set in the configuration file. If you care if it's actually working, don't just take it on faith; test it.
For extra credit you can bundle the tests up into a script so that you can run an entire test suite easily in the future and test your Mosquitto installation anytime you make changes to it.
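A minimal sketch of such a script, assuming the mosquitto_pub client is installed and that you have configured a plain listener on 1883 and a TLS listener on 8883 (ports, topic and CA file path are illustrative):
#!/bin/sh
# the plain listener should accept a publish
mosquitto_pub -h localhost -p 1883 -t test/smoke -m ok && echo "plain listener OK"
# the TLS listener should reject an unsecured connection...
mosquitto_pub -h localhost -p 8883 -t test/smoke -m ok && echo "WARNING: 8883 accepted an unsecured connection"
# ...and accept a secured one
mosquitto_pub -h localhost -p 8883 --cafile /etc/mosquitto/certs/ca.crt -t test/smoke -m ok && echo "TLS listener OK"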
Having duplicate configuration options with different values is a REALLY bad idea.
The behaviour of mosquitto is not defined in this case: which value should be honoured, the first one found or the last? When using the conf.d directory, what order will the files be loaded in?
Also, will you always remember, the next time you go looking, that you changed the value in a conf.d file?
If you want to change one of the defaults in the /etc/mosquitto/mosquitto.conf file then edit that file. (Any sensible package management system will notice the file has been changed and ask what to do at the point of upgrade)
The conf.d/ directory is intended for adding extra listeners.
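For example, an extra TLS listener could go in its own file (the port and certificate paths are illustrative):
# /etc/mosquitto/conf.d/tls-listener.conf
listener 8883
cafile /etc/mosquitto/certs/ca.crt
certfile /etc/mosquitto/certs/server.crt
keyfile /etc/mosquitto/certs/server.key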
Also be aware that there really isn't a default configuration file; you must always specify a configuration file with the -c command line option. The file at /etc/mosquitto/mosquitto.conf just happens to be the config file that is passed when mosquitto is started as a service installed via most Linux package managers. (The default Fedora install doesn't even set up the /etc/mosquitto/conf.d directory.)
I am using Spring Boot 2.7.1 with native configuration, following the guide linked below.
Spring native official doc
My problem is that when running bootBuildImage, the buildpack ["gcr.io/paketo-buildpacks/java-native-image:7.19.0"] tries to download the external dependency paketo-buildpacks/bellsoft-liberica from https://download.bell-sw.com/vm/22.3.0/bellsoft-liberica-vm-core-openjdk17.0.5+8-22.3.0+2-linux-amd64.tar.gz, which is not allowed by the company firewall.
I then found that you can configure dependency-mapping bindings towards these dependencies within the required buildpack, at least when using the pack CLI, per this guide.
But when using the pack CLI on its own, the Gradle bootBuildImage task becomes somewhat redundant, and I would have to use some external tool to assemble the native container image. I would like to use only bootBuildImage to map these dependency bindings.
I found this binding function in the Gradle bootBuildImage docs, but I am not sure what string it expects, or whether the path should be similar to the pack CLI config; I can't find any relevant info.
The bootBuildImage config currently looks like this:
bootBuildImage {
    builder = 'docker.io/paketobuildpacks/builder:tiny'
    runImage = 'docker.io/paketobuildpacks/run:tiny-cnb'
    buildpacks = ['gcr.io/paketo-buildpacks/java-native-image']
    binding("bindnings/bellsoft-jre-config:/platform/bindings/bellsoft-jre-config")
    environment = [
        "BP_NATIVE_IMAGE" : "true",
    ]
}
The dependency-mapping binding contains two files.
The type file was created with:
echo "dependency-mapping" >> type
The sha256 file for bellsoft-liberica, named 3dea0f7a9312c738d22b5e399b6ce9abe13b45b2bc2c04346beb941a94e8a932, was created with:
echo "https://download.bell-sw.com/vm/22.3.0/bellsoft-liberica-vm-core-openjdk17.0.5+8-22.3.0+2-linux-amd64.tar.gz" >> 3dea0f7a9312c738d22b5e399b6ce9abe13b45b2bc2c04346beb941a94e8a932
And yes, I'm aware that this is the exact same URL, but this is just to test that the binding config is set up correctly. If it is, the build should instead fail on an untrusted certificate when downloading.
Currently the build fails with:
Caused by: org.springframework.boot.buildpack.platform.docker.transport.DockerEngineException: Docker API call to 'localhost/v1.24/containers/create' failed with status code 400 "Bad Request"
at org.springframework.boot.buildpack.platform.docker.transport.HttpClientTransport.execute(HttpClientTransport.java:156)
at org.springframework.boot.buildpack.platform.docker.transport.HttpClientTransport.execute(HttpClientTransport.java:136)
at org.springframework.boot.buildpack.platform.docker.transport.HttpClientTransport.post(HttpClientTransport.java:108)
at org.springframework.boot.buildpack.platform.docker.DockerApi$ContainerApi.createContainer(DockerApi.java:340)
at org.springframework.boot.buildpack.platform.docker.DockerApi$ContainerApi.create(DockerApi.java:331)
at org.springframework.boot.buildpack.platform.build.Lifecycle.createContainer(Lifecycle.java:237)
at org.springframework.boot.buildpack.platform.build.Lifecycle.run(Lifecycle.java:217)
at org.springframework.boot.buildpack.platform.build.Lifecycle.execute(Lifecycle.java:151)
at org.springframework.boot.buildpack.platform.build.Builder.executeLifecycle(Builder.java:157)
at org.springframework.boot.buildpack.platform.build.Builder.build(Builder.java:115)
at org.springframework.boot.gradle.tasks.bundling.BootBuildImage.buildImage(BootBuildImage.java:521)
I assume this is caused by an invalid binding config, but I can't find out what it should be.
Paketo configuration (binding)
Dependency mapping bindings can be tricky. There are a number of things that have to be just right, or the buildpacks won't pick up the binding and won't map dependencies.
While there is talk of changing this in buildpacks to make swapping out dependencies easier, the short-term solution is to use binding-tool.
You can run bt dm -b paketo-buildpacks/bellsoft-liberica and it will go download the dependencies from the specified buildpack and generate the binding files for you.
It will by default download dependencies and write the bindings to $PWD/bindings but you can change that. For example, I like to put my dependencies in my home directory so I can share them across apps. Ex: SERVICE_BINDING_ROOT=~/.bt/bindings bt dm ..., or export SERVICE_BINDING_ROOT=~/.bt/bindings (or whatever command you run to set an env variable in your shell).
Once you have the bindings created, you just need to point your app to them. How you set the property differs between Maven & Gradle, but the value of the property is the same. It should be <local-path>:<container-path>.
The local path should be the full or relative path to where you created the bindings with bt dm. The container path should almost always be /platform/bindings. This maps your full set of bindings locally to the full set of bindings that the buildpacks will consume. In other words, put all of your bindings into the same directory locally, map that to /platform/bindings and the buildpacks will see everything.
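On disk, the layout ends up looking roughly like this (the binding name is illustrative; each hash-named file contains the replacement URI for that dependency):
bindings/
  bellsoft-liberica/
    type                      # contains the single line: dependency-mapping
    <sha256-of-dependency>    # contains the URI to fetch that dependency from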
For example with Gradle: binding("bindings/:/platform/bindings").
You can adjust the container path by setting SERVICE_BINDING_ROOT in the container as well, but it doesn't offer a lot of advantage.
You can also set multiple entries for bindings, so long as the paths are unique. So you could set binding("/home/user/.bt/bindings/foo:/platform/bindings/foo") and also binding("bindings/bar:/platform/bindings/bar"). That would let you take bindings from two different locations locally and map them into the /platform/bindings directory so both would be visible to buildpacks. This gives you more fine-grained control but as you can see becomes pretty verbose.
Details on configuring Maven and configuring Gradle for buildpacks can be found at those links.
This is a rebar3 release that compiled and released without any error. (There was this line: ===> Missing beam file elf_format <<"/usr/local/lib/erlang/lib/hipe-3.11.3/ebin/elf_format.beam">> - but I guess that's something else.)
There is a line in the error which says that another node is using the same name. Seeing this, I deleted all such projects with the same name and recompiled and re-released.
Also, when I use the start option no output is shown and localhost:8080 is not started. (I have been trying to follow the example at this link with rebar3: http://jordenlowe.com/title/Explain_like_I_am_5_-_Erlang_REST_Framework_Cowboy)
What is, or could be, the reason for this error?
The main error is: the name hello_erlang@you_host seems to be in use by another Erlang node, which means there is another running Erlang node with the same name. You can see the list of running Erlang nodes with the following Erlang Port Mapper Daemon (epmd) command:
epmd -names
You must stop or kill currently running node with the desired node name and then start the new one.
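For example (the node name, port and path below are illustrative):
epmd -names
epmd: up and running on port 4369 with data:
name hello_erlang at port 45321
# stop the old release cleanly if it is yours (typical rebar3/relx layout), or kill its beam process
_build/default/rel/hello_erlang/bin/hello_erlang stop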
Is there a way to specify the path to the schema in a cowboy app? Maybe it's possible to set it in my_app.app.src or in some config file?
The path to the mnesia directory has to be provided to the Erlang VM, through application configuration parameters, before the mnesia application is started. In the Mnesia tutorial this is done with the -Application par val VM argument syntax.
What you call a cowboy application is probably an Erlang OTP release (built by relx as per cowboy tutorial). The solutions, quickly described in Cowboy issue #595, are as follows.
The choice between solutions really depends on style as well as some constraints. Any sufficiently complex release would use a configuration file, so it would be a good choice. vm.args seems easier to deal with. Eventually, you might need to alter the start script (for example to run several nodes from a single deployment), and include some logic to define the mnesia directory.
Provide relx with a configuration file (sys_config option)
To do so, add the following term to relx.config as documented.
{sys_config, "./config/sys.config"}.
sys.config actually is a standard Erlang configuration file, also documented. Specifying mnesia dir is done by adding a section for mnesia application. If no other configuration is required, the file would be:
[{mnesia, [{dir, "/path/to/dir"}]}].
Get relx to pass arguments to the vm (vm_args option)
The vm.args file is actually passed to the VM through the -args_file option. This is a simple text file with arguments.
You would add the following term to relx.config as documented.
{vm_args, "./config/vm.args"}.
And put the following content in the vm.args file:
-mnesia dir foo
Write your own start script
relx actually creates a start script, which passes -config sys.config and -args_file vm.args to the VM as required. You could modify this script or roll your own to pass the -mnesia dir argument to the VM.
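Stripped down to the relevant flags, the line such a script runs would look something like this (the path is illustrative; on the command line the directory has to be given as a quoted Erlang string, as in the Mnesia documentation):
erl -config sys.config -args_file vm.args -mnesia dir '"/path/to/dir"'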
In case of vm.args, the plain text -mnesia dir foo is invalid; please use the format:
-mnesia {dir,foo}
We work in a team and run Fortify software on our machines locally. We all have our project code set up in different root directories, e.g. I have the project code at C:\work\development\, while a few of my colleagues have something like C:\Development\mainCodeLine\, etc. - i.e. the root folder where the project code resides differs. Initially only I was working with Fortify, but now many members of the team run it. We currently share the FPR file that is saved in the repository: we download it from the repository and run the SCA commands over the same file so as to retain details like hidden/suppressed issues. Over time we have observed the following:
The Unique Instance ID that gets generated is unique on a single machine only, i.e. the Unique Instance ID stays the same across scans on my machine, but changes when the scan is carried out on my team-mate's machine. Is there any way we can configure Fortify to keep it the same across multiple scans on multiple machines? Because of this we can't use the Unique Instance ID in the filter file.
If my team-mate and I run scans in parallel on two separate machines on the same code (only the project's root directory differs, as stated earlier), is there any way we can integrate these two reports?
There are indeed methods to combine scan results generated on different machines. I believe that the best way to accomplish this is to utilize the Fortify Software Security Center (SSC). Users conduct "fresh" scans each time, and when uploaded into a project in SSC, they will be merged - retaining any previous auditing information.
An alternative approach is to use the command line FPRUtility. (I don't have an install in front of me at the moment so the name might be slightly off - but it's in the bin directory along with sourceanalyzer and auditworkbench). The -h option should provide the info to get started merging FPRs.
Hope this helps.
If different IIDs are being generated because of a different overall root, that seems like a bug. SCA usually uses the canonical root, so it shouldn't make any difference where the code is placed.
Xelco52 had it partially correct, but if you want to merge when they have different IIDs, it's best to use FPRUtility with the -forceMigration option, such as:
FPRUtility -merge -project Results1.fpr -source Results2.fpr -f mergedResults.fpr -forceMigration
You should also be able to get this effect in AWB by setting com.fortify.model.ForceIIDMigration=true in Core/config/fortify.properties (and restarting AWB).
Look into using HP Fortify Software Security Center (SSC) if possible. This will allow users to upload scans to a central repository and merge results. This helps create a running history of scans and know who uploaded what.
Also this will allow your team to use a feature called "Collaborative Audit" that will let each developer pull the latest FPR down from Software Security Center (SSC) and into their IDE. Developers can then make changes and push back up to SSC where results are once again merged.
I don't think merge is the right approach. I would do it this way:
(1) Among all developers (user#), each on their own machine, establish a naming convention for the ProjectRoot (pointing to user#'s code base, i.e. /home/user#/mycode) and the WorkingDirectory (i.e. /local/sharebuild).
(2) Each user runs the following commands on their own machine:
(2a) CLEAN CACHE: ~/sourceanalyzer -b user# -Dcom.fortify.sca.ProjectRoot=/home/user#/mycode -Dcom.fortify.WorkingDirectory=/local/sharebuild/ -logfile /local/sharebuild/user#.sca.log -clean
(2b) TRANSLATE: ~/sourceanalyzer -b user# -64 -Xmx11000M -Xss24M -Dcom.fortify.sca.ProjectRoot=/home/user#/mycode/ -Dcom.fortify.WorkingDirectory=/local/sharebuild/ -logfile /local/sharebuild/user#.sca.log -source 1.5 -cp 'your_class_path' -extdirs 'your *.war file' '/home/user#/mycode/**/*'
(3) INTEGRATE ALL INTERMEDIATE CODE ON THE BUILD MACHINE: each user copies his entire /local/sharebuild/sca#.## to the centralized build machine, under the directory /local/sharebuild/sca#.##/build/ (you will find a subdirectory ./user# - each user's build id - which contains the whole intermediate code tree of .nst files).
(4) SCAN: on the build server, run the scan with the command:
~/sourceanalyzer -b user1 -b user2 -b user3 -b user# -64 -Xmx11000M -Xss24M -Dcom.fortify.sca.ProjectRoot=/home/user#/mycode/ -Dcom.fortify.WorkingDirectory=/local/sharebuild/ -logfile /local/sharebuild/scan.sca.log -scan -f build_all.fpr
Step 4 will pick up all the .nst (normalized syntax tree) files and perform the scan.
If each user mounts his portion of the code on the centralized machine in step 2a, then step 3 can be omitted.