Define network architecture manually using connection profile in Hyperledger Composer - hyperledger

I am trying to manually define the components of my network, i.e. 2 orderers and 2 peers. The default structure created by ./createComposerProfile.sh as demonstrated here has only 1 orderer and 1 peer.
I have tried editing the file at ~/.composer-connection-profiles/hlfv1/connection.json to no avail. It simply ignores my edits and still creates the default network.
How do I go about doing this?

The connection profile represents how you want to interact with a Hyperledger Fabric network; it doesn't define the network. Hyperledger Composer is designed to work with whatever Hyperledger Fabric network you define, and you then create connection profiles that represent that defined network.
If you want a 2 peer network, you need to create one yourself then build a connection profile that represents how you want to interact with that network.
See http://hyperledger-fabric.readthedocs.io/en/latest/build_network.html
for information about how to build fabric networks.

In answer to 'How do I go about linking the connection profile with my network', and referring back to your original problem: you need to create a separate subdirectory under ~/.composer-connection-profiles. I suspect you tried editing the connection.json in the hlfv1 subdirectory, and that file was recreated. Instead, create your own subdirectory first (at the same level, with the same permissions as the 'hlfv1' profile from the dev setup) and call it, say, hlfv1custom, then add your connection.json file there. As you're setting up locally, you can probably use the existing connection.json as the basis for your custom one, adding the information about your new Fabric setup (e.g. the additional peer(s)) into the new connection.json. Note that its "type" (in the connection profile) is still 'hlfv1'. See more information and an example of an HLFv1 connection.json here: https://hyperledger.github.io/composer/reference/connectionprofile.html
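For illustration, a two-orderer, two-peer profile might look like the sketch below, based on the connection profile reference linked above. All URLs, ports, the channel name, and the MSP ID are placeholders to be replaced with the values from your own Fabric setup:
{
  "name": "hlfv1custom",
  "type": "hlfv1",
  "orderers": [
    { "url": "grpc://localhost:7050" },
    { "url": "grpc://localhost:8050" }
  ],
  "ca": { "url": "http://localhost:7054", "name": "ca.org1.example.com" },
  "peers": [
    { "requestURL": "grpc://localhost:7051", "eventURL": "grpc://localhost:7053" },
    { "requestURL": "grpc://localhost:8051", "eventURL": "grpc://localhost:8053" }
  ],
  "keyValStore": "~/.composer-credentials",
  "channel": "composerchannel",
  "mspID": "Org1MSP",
  "timeout": 300
}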

Related

Add a URL path prefix to artifactory installation (Docker)

I'm running Artifactory CPP CE 7.7.3 and Traefik v2.2 using docker-compose. The service is only available at http://localhost/ui/. What I need is an option that allows me to add a URL path prefix (e.g. http://localhost/artifactory/ui).
My Setup
I used the setup process described in the Artifactory Docs.
My docker-compose.yaml is the official one extracted from jfrog-artifactory-cpp-ce-7.7.3-compose.tar.gz (./templates/docker-compose.yaml).
I'm using a reverse proxy (Traefik). For this, I've added the necessary Traefik configuration lines to the docker-compose file. Here is a small extract of what I've added:
[...]
labels:
  - "traefik.http.routers.artifactory.rule=Host(`localhost`) && PathPrefix(`/ui`)"
  - "traefik.http.routers.artifactory.middlewares=artifactory-stripprefix"
  - "traefik.http.middlewares.artifactory-stripprefix.stripprefix.prefixes=/"
  - "traefik.http.services.artifactory.loadbalancer.server.port=8082"
With this I managed to access Artifactory at http://localhost/ui/.
Problem:
I have multiple small services running on my server, each of which is accessible via http://localhost/<service-name>. This is very convenient and makes clear that the URL is related to that service on my production server.
Because of this, I want a URL like http://localhost/artifactory/ui/... instead of http://localhost/ui/...
I have struggled to set up Artifactory that way. I already managed to get a redirect from typing e.g. http://localhost/artifactory/ to http://localhost/ui/, but this is not what I want on my production server.
What I did
Went through the documentation in the hope of finding an option that I could just pass to Artifactory to add a prefix (not successful).
Spent two full days trying to configure Traefik to alter headers so that the response points to http://localhost/artifactory/ui/... (only partially successful; redirection didn't work afterwards).
Tried to find the configuration responsible for configuring Artifactory in $JFROG_HOME/artifactory/var/etc (not successful).
Is this even possible? Help is highly appreciated.
This example (even though it is not a Traefik example) gives you a direction for implementing it. Certain routes are already used within the product; you need to add a context on top of them to ensure everything comes in via the new context path.
https://jfrog.com/knowledge-base/how-to-remove-artifactory-from-the-context-url-in-artifactory-7/
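Translated into the Traefik labels from the question, the routing half of that approach might look like this sketch (the /artifactory prefix is an assumption):
labels:
  # Route /artifactory/* to Artifactory and strip the prefix before forwarding
  - "traefik.http.routers.artifactory.rule=Host(`localhost`) && PathPrefix(`/artifactory`)"
  - "traefik.http.routers.artifactory.middlewares=artifactory-stripprefix"
  - "traefik.http.middlewares.artifactory-stripprefix.stripprefix.prefixes=/artifactory"
  - "traefik.http.services.artifactory.loadbalancer.server.port=8082"
On its own this usually isn't enough, because the UI issues absolute redirects under /ui; the product-side context change described in the linked article is what keeps everything under the new path.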

Zabbix auto registration using metadata fails with a "cannot link template(s)" error message

I have a Linux host with a specific metadata value (linuxhosts), which I have set in zabbix_agentd.conf.
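For reference, the agent-side setting looks something like this (a sketch; the path is the usual default and may differ on your system):
# /etc/zabbix/zabbix_agentd.conf
HostMetadata=linuxhosts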
I also set an Action with an auto registration event source with the following configuration:
Conditions:
  Host metadata like linuxhosts
Operations:
  Add to host groups: SystemTestLinux
  Link to template: Linux system test template
The issue is that the host is not being linked to the "Linux system test" template.
Looking at zabbix_server.log, I see the following error:
cannot link template(s) "Linux system test" to host "xxxxx": conflicting item key "net.if.discovery" found
The template "Linux system test" is not linked to any other template, and I do not have any discovery rule enabled.
It is also important to note that I currently have a lot of Windows hosts that are linking fine to templates, the problem only occurs with Linux hosts.
The problem was solved with a workaround.
The issue is that Zabbix was unable to handle two very similar metadata strings that link each agent to its appropriate group and template.
For example, if you have one agent reporting "productionDev" and another agent reporting "productionDevOps", you might end up with the same issue that I had. To work around it, you need two conditions in each auto registration action:
like "productionDev"
notlike "productionDevOps"
This will make sure that your "productionDev" agents will join their appropriate groups and templates.

Enable multiple instances of neo4j on the same server, with the HTTP port disabled and HTTPS enabled on a different port for each instance

Is it possible to run multiple instances of Neo4j Enterprise Edition on the same server, with the HTTP port disabled on both instances and each instance configured with a different HTTPS port number? If so, please let me know the process.
Use the zip installation, not any of the package installations, and make as many copies of the installation directory as you need instances.
Modify neo4j.conf in each instance directory and make sure the following parameters are different in each:
dbms.connector.bolt.listen_address=:XXXX
dbms.connector.http.listen_address=:YYYY
dbms.connector.https.listen_address=:ZZZZ
Also make sure that you explicitly assign memory to each instance (or they'll fight for it); see the sketch below.
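Each instance's neo4j.conf might then contain something like this (a sketch; the connector-enabled flags and memory settings follow Neo4j 3.x naming as an assumption, and all port numbers and sizes are placeholders):
# Instance 1 (give instance 2 its own ports, e.g. :8687 and :8473)
dbms.connector.bolt.listen_address=:7687
# Disable HTTP entirely and serve only HTTPS
dbms.connector.http.enabled=false
dbms.connector.https.enabled=true
dbms.connector.https.listen_address=:7473
# Fixed memory budget so the instances don't fight over RAM
dbms.memory.heap.initial_size=2g
dbms.memory.heap.max_size=2g
dbms.memory.pagecache.size=1g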
Start each instance from its instance directory and you should have no conflicts.
Hope this helps.
Regards,
Tom

Proxy Auto Configuration

I have configured our local TFS proxy against the Active Directory site for our local office using the following syntax:
tf proxy /add http://MyProxy:8080 /default:site /Site:LocalOffice /name:MyProxy
When I run
tf proxy /configure
it correctly identifies my site, and sets up the correct proxy.
However, I'm seeing inconsistent behaviour during get operations.
My understanding is that when I run a get operation (either via tf get or through Visual Studio), it should automatically recognise that the site has a proxy, and configure it.
When I tried this on a VM that had never used a proxy, it seemed to work fine. However, on my own machine, I went into VS, removed the proxy settings, then closed the VS instance. Then I attempted a tf get from PowerShell, and found that it did not configure the proxy correctly (I confirmed this using tf proxy).
I'm expecting the proxy to be automatically configured for any user who is currently in our office, overriding any manual settings they have. Is there additional setup I need to do in order to do this?
Update
Based on the documentation here, I would expect it to set up the proxy on my machine when I ask for the code:
If you add a proxy record with the default set to site, the first time that a developer from within the specified Active Directory domain performs a get operation, Team Foundation Server will redirect that developer's request to the proxy that is specified by the record that is associated with the site.
However, this doesn't happen even if I clear out the proxy settings in VS (and untick the box) and perform a get after a reboot. I can understand it perhaps not overriding a setting I entered by hand, but I would expect it to configure itself when no setting is present at all.
You need to use the /default flag:
tf proxy /add http://MyProxy:8080 /default:site /Site:LocalOffice
A full description of how this works can be found on: http://blogs.msdn.com/b/deepakkhare/archive/2014/05/06/tfs-proxy-unsung-hero.aspx
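For completeness, the client-side check looks something like this (a sketch; tf proxy with no arguments displays the currently configured proxy, as used in the question above):
tf proxy /configure
tf proxy
tf get
The first command re-evaluates the proxy records for your site, the second confirms which proxy is now configured, and the get should then go through it.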

Network Service account does not accept local paths

I am creating a program that runs as a service and creates database backups (using pg_dump.exe) at certain points during the day. This program needs to be able to write the backup files to local drives AND mapped network drives.
At first, I was unable to write to network drives, but solved the problem by having the service log on as an administrator account. However, my boss wants the program to run without users having to key in a username and password for the account.
I tried to get around this by using the Network Service account (which does not need a password and always has the same name). Now my program will write to network drives, but not local drives! I tried using the regular C:\<directory name>\ path syntax as well as \\<computer name>\C$\<directory name>\ syntax and also \\<ip address>\C$\<directory name>\, none of which work.
Is there any way to get the Network Service account to access local drives?
Just give the account permission to access those files/directories and it should work. For accessing local files, you need to tweak the ACLs on the files and directories. For access via a network share, you have to change the file ACLs as well as the permissions on the network share.
File ACLs can be modified in the Explorer UI, or from the command line using the standard icacls.exe. For example, this command line will give the directory and all files underneath it Read, Write, and Delete permissions for Network Service:
icacls c:\MyDirectory /T /grant "NT AUTHORITY\Network Service":(R,W,D)
File share permissions are easier to modify from the UI, using the fsmgmt.msc tool.
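If you prefer to script the share side as well, a sketch (the share name is illustrative):
net share Backups=C:\MyDirectory /grant:"NT AUTHORITY\Network Service",CHANGE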
You will need to figure out the minimal set of permissions that needs to be applied. If you don't worry about security at all, you can grant full permissions, but that is almost always overkill and opens you up more if the service is ever compromised.
I worked around this problem by creating a new user at install time which I add to the Administrators group. This allows the service to write to local and network drives, without ever needing password/username info during the setup.
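A sketch of that install-time approach (the account name, service name, and password handling are illustrative):
rem Create a dedicated local account and make it an administrator
net user BackupSvc MyGeneratedPassword /add
net localgroup Administrators BackupSvc /add
rem Configure the service to run under that account
sc config MyBackupService obj= ".\BackupSvc" password= MyGeneratedPassword
Note that a full administrator account is broader than strictly necessary; the ACL approach above is the tighter option if you can enumerate the paths in advance.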
