Having a rather strange issue - imaged a hard drive that had an active (and working) install of gsutil with Python 2.7.10 on Windows 8.1, booted it back up on an identical machine, and now every time I try to run a command with gsutil, I get the error "Failure: invalid_grant".
Here's the full text of the error:
C:\Windows\system32>c:\Python27\python.exe c:\gsutil\gsutil ls gs://twinpreviews
INFO 1021 00:52:48.415000 multistore_file.py] Error decoding credential, skipping
Traceback (most recent call last):
  File "c:\gsutil\third_party\oauth2client\oauth2client\multistore_file.py", line 381, in _refresh_data_cache
    (key, credential) = self._decode_credential_from_json(cred_entry)
  File "c:\gsutil\third_party\oauth2client\oauth2client\multistore_file.py", line 400, in _decode_credential_from_json
    credential = Credentials.new_from_json(json.dumps(cred_entry['credential']))
  File "c:\gsutil\third_party\oauth2client\oauth2client\client.py", line 292, in new_from_json
    return from_json(s)
  File "c:\gsutil\third_party\gcs-oauth2-boto-plugin\gcs_oauth2_boto_plugin\oauth2_client.py", line 465, in from_json
    data['token_expiry'], EXPIRY_FORMAT)
TypeError: must be string, not None
Failure: invalid_grant.
I've attempted to re-run the configuration multiple times - note that this error only occurs with a service account; it does not occur if I use browser-based authentication.
I have also attempted to delete the .boto file in my home directory, along with the .gsutil folder, and then re-run gsutil config -e multiple times, to no avail.
I have also created a new set of credentials and have tried a new .json key file, but still no success.
Any suggestions? Hoping someone here has had the same issue and has found a solution. Thanks!
Solved the problem!
Turns out the system clock on the machine was off (a couple of days in the future), and apparently gsutil does not pay any attention to that when setting the access key expiration time.
Quite disappointing to find out that was the issue, but glad it is now solved.
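For anyone else who hits this: OAuth2 grants are time-sensitive, so a clock that is off by days can make the server reject otherwise valid credentials with invalid_grant. A quick way to resync the clock, assuming the Windows Time service is running, is from an elevated command prompt:
w32tm /resync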
I have Nextcloud installed and working fine in a Docker container, but I want fail2ban to monitor the log files for brute-force attempts. I know Nextcloud has its own protection baked in, but it just throttles the login attempts, and I would like to ban offenders outright (I also have this problem with other containers). The docker-compose file is set to create the nextcloud.log file at /mnt/nextcloud/log/nextcloud.log. I followed this guide to create the jail:
https://www.c-rieger.de/nextcloud-installation-guide-ubuntu/#c06
Fail2ban is running on the host machine; however, it fails to start with:
[447]: ERROR Failed during configuration: Have not found any log file for nextcloud jail
[447]: ERROR Async configuration of server failed
Thinking it was simply a permission issue, I chowned everything to root and tried to start it again, but the service still won't start. What am I doing wrong?
Thanks for the help!
The docker-compose file is set to create the nextcloud.log file at /mnt/nextcloud/log/nextcloud.log
Be sure this file really exists and that your jail.local has the correct logpath entry:
[nextcloud]
...
logpath = /mnt/nextcloud/log/nextcloud.log
You can also check the resulting config using a dump:
fail2ban-client -d | grep 'nextcloud.*logpath'
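If the container has not written anything yet, the log file may simply not exist, and fail2ban refuses to start a jail whose logpath matches no file. A quick check, with the path taken from the question (pre-creating an empty file is a common workaround so the jail can start):
ls -l /mnt/nextcloud/log/nextcloud.log
sudo touch /mnt/nextcloud/log/nextcloud.log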
But I'm still not sure the error message you provided was thrown by fail2ban, because its error messages look different; see https://github.com/fail2ban/fail2ban/commit/27947407bc7910f0f50972113218ebc73c4a22c7
It should be something like:
-have not found a log file for nextcloud log
+Have not found any log file for nextcloud jail
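Once the jail does start, you can confirm it is active and watching the right file with (jail name taken from the config above):
sudo fail2ban-client status nextcloud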
When I do :server connect with the username neo4j and password neo4j, I get Neo.ClientError.Security.Unauthorized: The client is unauthorized due to authentication failure.
I tried uncommenting the line dbms.security.auth_enabled=false in /etc/neo4j/neo4j.conf and restarting, but it still asks me to log in and still denies the login.
I can get in with /usr/bin/cypher-shell -u neo4j -p neo4j
I tried /usr/bin/neo4j-admin set-initial-password secret but it says command failed: The specified user 'neo4j' already exists.
I tried sudo rm /var/lib/neo4j/data/dbms/auth and restarting, but it gives the same result.
Ubuntu 16.04
Installed with sudo apt-get install neo4j=1:3.5.0
I had the same problem. I tried to set the initial password and it said The specified user 'neo4j' already exists. I thought I had set the initial password earlier via the command line, but it didn't take because there were special characters in the password string.
What ended up working for me was opening up the Neo4j Browser and it prompted me for a password. I entered 'neo4j' and then it gave me the option to set a new password through the browser. Once I did that, it worked.
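If you prefer the command line, the same reset can be done through cypher-shell (a sketch assuming Neo4j 3.5's built-in dbms.security.changePassword procedure and the default neo4j/neo4j credentials; substitute your own new password):
echo "CALL dbms.security.changePassword('MyNewPassword');" | /usr/bin/cypher-shell -u neo4j -p neo4j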
If you need to turn off auth_enabled to test something, remember to restart the server: sudo neo4j restart. It can also take a few minutes to restart, so make sure it's fully up and running first. (And then, of course, don't forget to turn auth_enabled back on again.)
It also took me a few tries to get the configuration correct in the conf file at /etc/neo4j/neo4j.conf. I set:
dbms.connectors.default_listen_address=0.0.0.0
dbms.connectors.default_advertised_address=your.webdomain.com
Also, this guide helped me with setting up a certificate for the Neo4j Browser endpoint: https://medium.com/neo4j/getting-certificates-for-neo4j-with-letsencrypt-a8d05c415bbd
I faced this issue with the initial setup and kept getting the same unauthorized message. The problem for me was that I was trying to access it in Firefox. I tried Chrome, and it worked and prompted me to change my password. I found one issue describing this:
< connecting to Neo4j browser through Firefox >
I disabled authentication by uncommenting this line in /etc/neo4j/neo4j.conf:
# To disable authentication, uncomment this line
dbms.security.auth_enabled=false
It worked for me. Make sure to comment it back out when you are done, for security purposes.
I cannot figure out why it's giving me this result.
Warning: Failed to create the file /Users/myname/.bash_profile: Permission
Warning: denied
curl: (23) Failed writing body (0 != 1968)
This happens when I enter the second step shown in the link below.
The instructions are provided here; they were given to me by the bootcamp I am learning to code with.
If someone could take the time out of their day to answer my question, I would greatly appreciate it.
Try prepending "sudo" to the beginning of the command.
If that does not work, attempt to edit the permissions of the directory using chmod.
Be careful with using 777, but if it is on your local device, it should not matter much.
Usage: sudo chmod XYZ /di/rec/tory
Replace XYZ with the desired permission settings.
Read this if you are in doubt.
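A narrower fix than 777: the warning means curl could not create .bash_profile inside your home directory, so taking back ownership of that directory is usually enough (path taken from the warning above; adjust the username to yours):
sudo chown $(whoami) /Users/myname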
I have the following line in my Inno Setup script:
SignTool=MySign cmd /c C:\SigningTools\signtool.exe sign /f C:\MyCert.pfx /p MyPassword $f
This works on my local machine.
I then commit my changes to our server, and Jenkins compiles and makes a build automatically. The Jenkins build does not work, and I get the following error.
Error on line 43 in C:\Windows\TEMP\fxbundler8328922406343131203\images\win-exe.image\MyProgram.iss: Value of [Setup] section directive "SignTool" is invalid.
Compile aborted.
I have no idea what the issue is; I have tried numerous things but can't seem to figure it out. I would also settle for learning some better ways to output error messages with Inno Setup.
I have verified that MySign exists in the server's compiler IDE (http://www.jrsoftware.org/ishelp/index.php?topic=setup_signtool)
I have tried numerous variations of having $q surround the file paths
I have verified that the file paths match on both machines
You need to define the SignTool in your call to the compiler via the /s switch.
Example: "/sMySign$q=sign_application.bat$q $f"
sign_application.bat receives the path of the file to sign as its first parameter and calls signtool.exe as you've already tried.
Take a look here: http://www.jrsoftware.org/ishelp/index.php?topic=setupcmdline
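For example, a Jenkins build step could pass the definition to the command-line compiler like this (a sketch: the ISCC path and script name are assumptions, and the script's SignTool directive would then be reduced to SignTool=MySign):
"C:\Program Files (x86)\Inno Setup 5\ISCC.exe" "/sMySign=$qsign_application.bat$q $f" MyProgram.iss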
Do not forget to configure Sign Tools in the Inno Setup Compiler IDE. I simply added the signtool $p string.
In my case, the certificate had expired.
I found the following article useful:
https://www.nextofwindows.com/how-to-check-a-pfx-certifications-expiry-date-on-windows
I opened a command prompt in the directory where my pfx file was and used this command to get details about the certificate:
certutil -dump "nameofcertfile.pfx"
Change nameofcertfile.pfx to your file name. You will probably be prompted for a password; enter the password you used in your script (MyPassword in the OP's script). You may also copy/paste it. In the dump, check the NotBefore and NotAfter dates to see whether the certificate is still valid.
NOTE: You will not see any characters being typed while entering or pasting the password, so don't be confused.
I'm running GSUTIL v3.42 from a Windows CMD script on Windows Server 2008 R2 using Python 2.7.6. Files to be uploaded arrive in an "outgoing" directory and are uploaded in parallel by GSUTIL to an "incoming" bucket. The script requests a listing of the "incoming" bucket after uploading has finished and then compares the files listed with those it attempted to upload, in order to detect any upload failures. Another separate script moves files from the "incoming" bucket to a "processed" bucket afterwards.
If I attempt to upload the identical file (same name/size/content/date etc.) a second time, it doesn't upload, although I get no errors and nothing in my logging to indicate failure. I am not using the "no clobber" option, so I would expect gsutil to just upload the file.
In the scenario below, assume that the file has been successfully uploaded and moved to the "processed" bucket already on that day. In case timings matter, the second upload is being attempted within half an hour of the first.
1. File A arrives in the "outgoing" directory.
2. I get a file listing of "outgoing" and write this to dirListing.txt.
3. I perform a GSUTIL upload using:
type dirListing.txt | python gsutil -m cp -I -L myGsutilLogFile.txt gs://myIncomingBucket
4. I then perform a GSUTIL listing:
python gsutil ls -l -h gs://myIncomingBucket > bucketListing.txt
5. I match dirListing.txt against bucketListing.txt to detect mismatches and hence upload failures.
On the second run, File A isn't being uploaded in step 3 and consequently isn't returned in step 4, causing a mismatch in step 5. [I've checked the content of all of the relevant files and it's definitely in dirListing.txt and not in bucketListing.txt]
I need the ability to re-process a file in case the separate script that moves the file from the "incoming" to the "processed" bucket fails for some reason or doesn't do what it should do. I have to upload in parallel because there are normally hundreds of files on each run.
Is what I've described above expected behaviour from GSUTIL? (I haven't seen anything in the documentation that suggests it is.) If so, is there any way of forcing GSUTIL to re-attempt the upload? Or am I missing something obvious? I have debug output from GSUTIL if that's necessary/useful.
From the above, it looks like you're uploading using "-L" to log to a manifest file. If you're using the same manifest file, and the file has already been uploaded once, then gsutil will not try to re-upload the file. From the docs on "-L" in "gsutil help cp":
If the log file already exists, gsutil will use the file as an
input to the copy process, and will also append log items to the
existing file. Files/objects that are marked in the existing log
file as having been successfully copied (or skipped) will be
ignored.
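One way to keep the failure-detection workflow while still allowing re-uploads is to start each run with a fresh manifest, for example (Windows CMD; file names taken from the question):
if exist myGsutilLogFile.txt del myGsutilLogFile.txt
type dirListing.txt | python gsutil -m cp -I -L myGsutilLogFile.txt gs://myIncomingBucket
If you need the upload history, rename the old manifest to a per-run name instead of deleting it.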