Problem: Change the user password of account neo4j
Attempted Flow:
curl -H 'Content-Type: application/json' -X POST \
-d '{"password": "newPwd"}' \
-u neo4j:"oldPwd" \
http://IP-ADDR:7474/user/neo4j/password
Observation:
Only one instance reflects the changed user password.
Expected:
The password change should be applied to all instances of the Causal Cluster.
Version: Neo4j 3.5.1
Database type: Causal Cluster (3 instances)
Update: The Neo4j team replied in their chat:
Neo4j 4.x no longer has this limitation and users/roles are auto
propagated amongst members
Ref:
https://community.neo4j.com/t/causal-cluster-how-to-change-user-password-across-all-instances/31449
In 3.5, after changing the password on one member, you need to manually copy the neo4j_directory/data/dbms/auth file to all other cluster members. There is no need to shut down the members; the file is re-read automatically a few minutes after you copy it. (The issue is fixed in Neo4j 4.x.)
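If it helps, the manual copy step can be scripted along these lines. This is only a sketch: the member hostnames and NEO4J_HOME path are assumptions, and the leading `echo` makes each command a dry run.

```shell
#!/bin/sh
# Sketch: push the auth file updated on this member to the other
# Causal Cluster members. Hostnames and install path are assumptions.
NEO4J_HOME="/var/lib/neo4j"
MEMBERS="core2 core3"

for host in $MEMBERS; do
  # Dry run: remove the leading `echo` to actually copy the file.
  echo scp "$NEO4J_HOME/data/dbms/auth" "$host:$NEO4J_HOME/data/dbms/auth"
done
```

Each member re-reads the copied file on its own within a few minutes, so no restart is needed.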
Related
I'm trying to debug my app. When I hit the production Elasticsearch host through my Python app, results are returned. When I change it to localhost, it works when I hit it manually through the browser, but not through the app.
I'd like to log all queries hitting my Elasticsearch container. I've tried environment variables such as "DEBUG=TRUE" or "DEBUG=*", but no requests are logged (even when hitting it manually and getting results back).
Any idea how I'd do this?
Thanks
You can use the slow query log with a very low threshold. See https://www.elastic.co/guide/en/elasticsearch/reference/current/index-modules-slowlog.html for more details on this feature. For example:
index.search.slowlog.threshold.query.debug: 0s
Using the cluster or index settings API, you can change these settings while the cluster is running.
curl -XPUT "http://localhost:9200/_all/_settings" -H "content-type: application/json" -d'
{
"index.search.slowlog.threshold.query.debug": "0s"
}'
There are even more settings you can use to log and monitor index, fetch or search duration.
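Once you are done debugging, the same settings API can put the threshold back (setting it to null restores the default), and the captured queries end up in the per-node index search slow log file. A sketch, assuming a local node and a typical log location; the leading `echo` makes these dry runs:

```shell
# Sketch: revert the debug threshold when you are finished (null = default).
ES="http://localhost:9200"
RESET='{"index.search.slowlog.threshold.query.debug": null}'

# Dry run: remove the leading `echo` to actually send the request.
echo curl -XPUT "$ES/_all/_settings" -H 'content-type: application/json' -d "$RESET"

# The logged queries appear in the index search slow log file next to the
# other Elasticsearch logs (exact path depends on your install/container):
echo tail -f /var/log/elasticsearch/elasticsearch_index_search_slowlog.log
```

Leaving the threshold at 0s on a busy cluster logs every query, so reverting it after debugging avoids filling the disk.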
I'm using the Trans Union New Access system to run credit reports. I'm doing this on my Windows 7 64 bit development machine.
I have a Web Application (web forms) project that uses the system. In one button, I have the following code:
CreditReportRequestXML requestXMLSupplier = new CreditReportRequestXML();
requestXMLSupplier.RunPendingRequests();
This code calls a method in another project that I coded myself. The code constructs XML that is posted to Transunion. I get perfectly good responses.
I have another project that's a Windows service project. The relevant code in this project is:
CreditReportRequestXML requestXMLSupplier = new CreditReportRequestXML();
requestXMLSupplier.RunPendingRequests();
When the account that the service runs under is my account, this works just fine. The service is installed as a service and can be started and stopped with the Services console.
So far, so good.
Here's the bad. When I configure the service to run under the Network Service account, I get the following error:
Could not create SSL/TLS secure channel.
I've been trying to use winhttpcertcfg to fix the problem. I've tried -
winhttpcertcfg -g -c LOCAL_MACHINE\My -s ******** -a "Network Service"
(where ******** is the subject name of the certificate. I can see this name when I debug my service using ?clientCertificate.SubjectName.Name)
I've also tried
winhttpcertcfg -i certfile.p12 -c LOCAL_MACHINE\My -a "Network Service" -p pwforcert
(where certfile is the file name - note that the file I have is a p12 file, not a PFX file; pwforcert is the password I used to create the system client on the Trans Union site.)
The service fails with the message above after trying both commands. When I list accounts that should have access to the private key using
winhttpcertcfg -l -c LOCAL_MACHINE\My -s MyCertificate
the output shows the correct matching certificate, and says 'Additional accounts and groups with access to the private key include:', and lists NT AUTHORITY\NETWORK SERVICE as one of the accounts.
The reason I want to use Network Service to run the service is that my boss wants me to do this. I talked with our network guys today, and they don't have an account on our servers that has administrative privileges.
What am I missing? Or, is there some other way around this problem?
I wound up exporting one of the certificates to a PFX file, including the private key and all related certificates. This could only be done from one or two of the certificates on my machine. I then deleted all TU certificates and ran winhttpcertcfg -i filename.pfx -c LOCAL_MACHINE\My -a "NETWORK SERVICE" -p ****. That worked.
I create a Google Compute Engine instance with a service account:
gcloud --project my-proj compute instances create test1 \
--image-family "debian-9" --image-project "debian-cloud" \
--machine-type "g1-small" --network "default" --maintenance-policy "MIGRATE" \
--service-account "gke-build-robot@myproj-184015.iam.gserviceaccount.com" \
--scopes "https://www.googleapis.com/auth/cloud-platform" \
--tags "gitlab-runner" \
--boot-disk-size "10" --boot-disk-type "pd-standard" --boot-disk-device-name "$RESOURCE_NAME" \
--metadata register_token=mytoken,config_bucket=gitlab_config,runner_name=test1,gitlab_uri=myuri,runner_tags=backend \
--metadata-from-file "startup-script=startup-scripts/prepare-runner.sh"
I log in to the instance through SSH: gcloud compute --project "myproj" ssh --zone "europe-west1-b" "gitlab-shared-runner-pool"
After installing and configuring docker-machine, I try to create an instance:
docker-machine create --driver google --google-project myproj test2
Running pre-create checks...
(test2) Check that the project exists
(test2) Check if the instance already exists
Creating machine...
(test2) Generating SSH Key
(test2) Creating host...
(test2) Opening firewall ports
(test2) Creating instance
(test2) Waiting for Instance
Error creating machine: Error in driver during machine creation: Operation error: {EXTERNAL_RESOURCE_NOT_FOUND The resource '1045904521672-compute@developer.gserviceaccount.com' of type 'serviceAccount' was not found. []}
1045904521672-compute@developer.gserviceaccount.com is my default service account.
I don't understand why it is used, since the activated account is gke-build-robot@myproj-184015.iam.gserviceaccount.com:
gcloud config list
[core]
account = gke-build-robot@myproj-184015.iam.gserviceaccount.com
disable_usage_reporting = True
project = novaposhta-184015
Your active configuration is: [default]
gcloud auth list
Credentialed Accounts
ACTIVE ACCOUNT
* gke-build-robot@myproj-184015.iam.gserviceaccount.com
Can someone explain what I'm doing wrong?
There was a double problem.
First of all, docker-machine can't work with a specific service account, at least in versions 0.12 and 0.13: the Docker Machine Google driver only has a scopes parameter and can't take a specific account.
So the instance where docker-machine was installed works fine with the specified service account, but an instance created by docker-machine must have the default service account.
Second, I had disabled that default account during debugging, and I got this error as a result.
A similar issue (bosh-google-cpi-release issue 144) suggests the wrong service account is somehow being used:
This error message is unclear, particularly because the credentials which also need to be specified in the manifest may be associated with another account altogether.
The default service_account for the bosh-google-cpi-release is set to "default" if it is not proactively set by the bosh manifest, so this will happen anytime you use service_scopes instead of a service_account.
While you are not using bosh-google-cpi-release, the last sentence made me double-check the gcloud reference page, in particular gcloud compute instances create:
A service account is an identity attached to the instance. Its access tokens can be accessed through the instance metadata server and are used to authenticate applications on the instance.
The account can be either an email address or an alias corresponding to a service account. You can explicitly specify the Compute Engine default service account using the 'default' alias.
If not provided, the instance will get project's default service account.
It is as if your service account is either ignored or incorrect (and it falls back to the project's default one).
See "Creating and Enabling Service Accounts for Instances" to double-check its value:
Usually, the service account's email is derived from the service account ID, in the format:
[SERVICE-ACCOUNT-NAME]#[PROJECT_ID].iam.gserviceaccount.com
Or try setting the service scopes and the account explicitly first.
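To double-check which account an instance actually ends up with, you can pass the service account explicitly at creation time and then inspect the instance afterwards. A sketch using the values from the question (project, zone, and service-account email are taken from the post; the leading `echo` makes these dry runs):

```shell
# Sketch: create the instance with an explicit service account and scopes,
# then verify which service account the instance actually received.
PROJECT="myproj"
ZONE="europe-west1-b"
SA="gke-build-robot@myproj-184015.iam.gserviceaccount.com"

# Dry run: remove the leading `echo` to actually create the instance.
echo gcloud compute instances create test2 \
  --project "$PROJECT" --zone "$ZONE" \
  --service-account "$SA" \
  --scopes "https://www.googleapis.com/auth/cloud-platform"

# Inspect the serviceAccounts block of the created instance:
echo gcloud compute instances describe test2 \
  --project "$PROJECT" --zone "$ZONE" --format="yaml(serviceAccounts)"
```

If the describe output shows the `-compute@developer.gserviceaccount.com` default rather than the account you passed, the creating tool (here docker-machine) is the one falling back to the default.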
I am currently setting up a Nexus 3 OSS server. I've created an Ansible role that will set up the official nexus3 Docker container behind an nginx reverse proxy. My storage is set up separately, so my artifacts persist if the instance gets killed (say, for a base image update). I'd like to set up the Ansible role so I don't have to go into the Nexus GUI to configure LDAP and repositories every time I recreate the server. Is there a way to inject this kind of configuration into Nexus?
Nexus Repository Manager 3 includes a scripting API that you can use for this sort of work. Have a look at the documentation and the demo videos.
If you find anything we should expand the API on or need some help contact us on the mailing list or via live chat.
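As a rough illustration of how the scripting API can be driven from a provisioning script: a Groovy script is first published over REST and then executed by name. This is a hedged sketch, not a definitive recipe; the host, the admin credentials, and the trivial script body are placeholders, and the leading `echo` makes both calls dry runs.

```shell
# Sketch: publish and run a Groovy script via the Nexus 3 scripting API.
# URL and credentials are placeholders for your deployment.
NEXUS="http://localhost:8081"
AUTH="admin:admin123"

# Script payload: a name, the language, and the Groovy source to run server-side.
SCRIPT='{"name":"hello","type":"groovy","content":"log.info(\"provisioned\")"}'

# Dry run: remove the leading `echo` on both commands to actually call the API.
echo curl -u "$AUTH" -X POST "$NEXUS/service/rest/v1/script" \
  -H 'Content-Type: application/json' -d "$SCRIPT"
echo curl -u "$AUTH" -X POST "$NEXUS/service/rest/v1/script/hello/run" \
  -H 'Content-Type: text/plain'
```

From an Ansible role, the same two calls could be issued with the uri module, so LDAP and repository setup scripts run automatically on every rebuild.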
There is a pretty easy way to automate Nexus: use its REST API, which is documented with an API documentation tool called Swagger. You can browse it at http://localhost:8081/#admin/system/api or via:
System administration and configuration > System > API
There you can check the complete Nexus API documentation; for provisioning, you can create a script containing multiple curl calls to whichever APIs you need.
The generated API request will look like this:
# Add jenkins user
curl -X 'POST' \
"http://${NEXUS_URL}/service/rest/v1/security/users" \
-u "admin:admin123" \
-H 'accept: application/json' \
-H 'Content-Type: application/json' \
-d '{
"userId" : "jenkins",
"firstName" : "jenkins",
"lastName" : "jenkins",
"emailAddress" : "jenkins#domain.com",
"password" : "jenkins",
"status" : "active",
"roles" : [ "nx-admin" ]
}'
This, for example, will create a new jenkins user.
You can find more in the Nexus API documentation.
I've just resorted to creating proxy repositories in nexus2; I'll turn these into hosted repos later. The storage here is much more straightforward and accessible, and I've hosted it on a discrete persistent EBS. I'll use this for now and upgrade to 3.1 when that's released. Thanks anyway!
I am new to neo4j. I just followed the official neo4j manual to install two instances on one machine; my environment is Ubuntu 11.10. I successfully started the neo4j service and opened the website http://localhost:7474/webadmin/. But when I tried to run the "DELETE /db/data/cleandb/secret-key" command in its HTTP console, it returned error 401. Any idea about this?
Which version of neo4j are you using?
You have to configure two different ports for the two servers; I think you already did this.
The clean-db-addon doesn't come out of the box; you have to download it, copy it into the plugins directory, and adjust the neo4j-server.properties config file:
org.neo4j.server.thirdparty_jaxrs_classes=org.neo4j.server.extension.test.delete=/cleandb
org.neo4j.server.thirdparty.delete.key=<please change secret-key>
Then you can call it for each of your servers with:
curl -X DELETE http://localhost:<port>/cleandb/secret-key