InfluxDB-version: 1.6.3
I've created a backup of a database called 'test.mydb' using the legacy backup format:
influxd backup -database <mydatabase> <path-to-backup>
The backup went fine but when I tried to restore:
sudo influxd restore -db "test.mydb" -newdb "test.mydb" -datadir /var/lib/influxdb/data /home/ubuntu/influxdb/test.mydb/
I got the error: backup tarfile name incorrect format.
After searching I think it is because of this code in influxdb/cmd/influxd/restore/restore.go:
// should get us ["db","rp", "00001", "00"]
pathParts := strings.Split(filepath.Base(tarFile), ".")
if len(pathParts) != 4 {
    return fmt.Errorf("backup tarfile name incorrect format")
}
It splits the backup file names on dots and requires exactly four parts, but because my database name itself contains a dot, my files split into five parts.
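The failing check is easy to reproduce in isolation. A minimal Python sketch (the retention-policy name autogen is just an illustrative placeholder):

```python
# Mimics the check in influxdb/cmd/influxd/restore/restore.go:
# the restorer expects exactly four dot-separated parts, i.e. db.rp.00001.00
def check_tarfile(name):
    parts = name.split(".")
    return len(parts) == 4

# A plain database name splits into the expected four parts:
print(check_tarfile("mydb.autogen.00001.00"))        # True

# A database name containing a dot yields five parts, so restore rejects it:
print(check_tarfile("test.mydb.autogen.00001.00"))   # False
```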
Are there any workarounds?
I did not find an optimal solution to this problem so I manually copied and pasted the data to InfluxDB.
Related
I have a substantial (~1 TB) directory that already has a backup on Google Archive Storage. For space reasons on the local machine I had to migrate the directory elsewhere, but now when I run the script that was synchronizing it to the cloud (using the new directory as source), it attempts to upload everything. I suspect the problem lies with the timestamps on the migrated files, because when I experiment with "-c" (CRC comparison) it works fine, but it is far too slow to be workable (even with the compiled CRC).
By manually inspecting the timestamps, they seem to have been copied across correctly (I used robocopy /mir for the migration), so which timestamp exactly is confusing gsutil?
I see a few ways out of this:
Finding a way to preserve original timestamps on copy (I still have the original folder, so that's an option)
Somehow convincing gsutil to only patch the timestamps of the cloud files or fall back to size-only
Bite the bullet and re-upload everything
Will appreciate any suggestions.
Command used for the migration:
robocopy SOURCE TARGET /mir /unilog+:robocopy.log /tee
Also tried:
robocopy SOURCE TARGET /mir /COPY:DAT /DCOPY:T /unilog+:robocopy.log /tee
Command used for the sync with Google:
gsutil -m rsync -r "source" "gs://MYBUCKET/target"
So it turns out that even when you try to sync the timestamps, they end up different:
>>> os.stat(r'file.copy')
nt.stat_result(st_mode=33206, ... st_size=1220431L, st_atime=1606987626L, st_mtime=1257521848L, st_ctime=1512570325L)
>>> os.stat(r'file.original')
nt.stat_result(st_mode=33206, ... st_size=1220431L, st_atime=1606987624L, st_mtime=1257521847L, st_ctime=1512570325L)
You can clearly see that mtime and atime are just fractionally off (later).
Trying to sync them:
>>> os.utime(r'file.copy', (1606987626, 1257521847))
>>> os.stat(r'file.copy')
nt.stat_result(st_mode=33206, ... st_size=1220431L, st_atime=1606987626L, st_mtime=1257521848L, st_ctime=1512570325L)
This results in mtime still being off, but if I go a bit further back in time:
>>> os.utime(r'file.copy', (1606987626, 1257521845))
>>> os.stat(r'file.copy')
nt.stat_result(st_mode=33206, ... st_size=1220431L, st_atime=1606987626L, st_mtime=1257521846L, st_ctime=1512570325L)
It changes, but is still not accurate.
However, now that the copy's mtime has been taken back in time, I can use the "-u" switch to ignore newer files in the destination:
gsutil -m rsync -u -r "source" "gs://MYBUCKET/target"
Script that fixes the timestamps for all files in the target:
import os

SOURCE = r'source'
TARGET = r'target'

file_count = 0
diff_count = 0

for root, dirs, files in os.walk(SOURCE):
    for name in files:
        file_count += 1
        source_filename = os.path.join(root, name)
        target_filename = source_filename.replace(SOURCE, TARGET)
        try:
            source_stat = os.stat(source_filename)
            target_stat = os.stat(target_filename)
        except WindowsError:
            continue
        delta = 0
        while source_stat.st_mtime < target_stat.st_mtime:
            diff_count += 1
            #print source_filename, source_stat
            #print target_filename, target_stat
            print 'patching', target_filename
            os.utime(target_filename, (source_stat.st_atime, source_stat.st_mtime - delta))
            target_stat = os.stat(target_filename)
            delta += 1

print file_count, diff_count
It's far from perfect, but running the command no longer results in everything trying to sync. Hopefully someone will find this useful; other solutions are still welcome.
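Before patching anything, it can help to see how many files are actually affected. This is a read-only variant of the same walk (Python 3, paths are placeholders; nothing is modified):

```python
import os

def count_newer_targets(source_root, target_root):
    """Walk the source tree and count files whose target copy has a newer mtime.

    A dry-run companion to the patch script above: it only reports how many
    files a timestamp-based sync would consider changed.
    """
    checked = newer = 0
    for root, dirs, files in os.walk(source_root):
        for name in files:
            source_file = os.path.join(root, name)
            target_file = os.path.join(
                target_root, os.path.relpath(source_file, source_root))
            try:
                source_stat = os.stat(source_file)
                target_stat = os.stat(target_file)
            except OSError:
                continue  # missing on one side; let rsync deal with it
            checked += 1
            if target_stat.st_mtime > source_stat.st_mtime:
                newer += 1
    return checked, newer
```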
How can we generate a Fortify report from the command line on Linux?
In the command, how can we include only certain folders or files for analysis, and how can we specify the location to store the report, etc.?
Please help.
Thanks,
Karthik
1. Step#1 (clean cache)
You need to plan the scan structure before starting:
scanid = 9999 (can be anything you like)
ProjectRoot = /local/proj/9999/
WorkingDirectory = /local/proj/9999/working
(this dir is huge; you need to "rm -rf ./working && mkdir ./working" before every scan, or byte code piles up underneath this dir and consumes your hard disk fast)
log = /local/proj/9999/working/sca.log
source='/local/proj/9999/source/src/**.*'
classpath='local/proj/9999/source/WEB-INF/lib/*.jar; /local/proj/9999/source/jars/**.*; /local/proj/9999/source/classes/**.*'
./sourceanalyzer -b 9999 -Dcom.fortify.sca.ProjectRoot=/local/proj/9999/ -Dcom.fortify.WorkingDirectory=/local/proj/9999/working -logfile /local/proj/9999/working/sca.log -clean
It is important to specify ProjectRoot; if you do not override this system default, everything is put under your home directory (~/.fortify).
The sca.log location is very important: if Fortify does not find this file, it cannot find the byte code to scan.
You can alter the ProjectRoot and WorkingDirectory once and for all if you are the only user (FORTIFY_HOME/Core/config/fortify_sca.properties).
In that case, your command line would be ./sourceanalyzer -b 9999 -clean
2. Step#2 (translate source code to byte code)
nohup ./sourceanalyzer -b 9999 -verbose -64 -Xmx8000M -Xss24M -XX:MaxPermSize=128M -XX:+CMSClassUnloadingEnabled -XX:+UseConcMarkSweepGC -XX:+UseParallelGC -Dcom.fortify.sca.ProjectRoot=/local/proj/9999/ -Dcom.fortify.WorkingDirectory=/local/proj/9999/working -logfile /local/proj/9999/working/sca.log -source 1.5 -classpath '/local/proj/9999/source/WEB-INF/lib/*.jar:/local/proj/9999/source/jars/**/*.jar:/local/proj/9999/source/classes/**/*.class' -extdirs '/local/proj/9999/source/wars/*.war' '/local/proj/9999/source/src/**/*' &
Always run it as a Unix background job (&); if your session to the server times out, the scan keeps working.
-classpath: put all your known classpath entries here for Fortify to resolve the function calls. If a function is not found, Fortify skips the translation of that source code, so that part will not be scanned later. You get a poor scan quality but the FPR looks good (few issues reported). It is important to have all dependency jars in place.
-extdirs: put all directories/files you don't want to be scanned here.
The last section, the files between quotes, is your source.
-64 means use 64-bit Java; if not specified, 32-bit is used and the max heap should be < 1.3 GB (-Xmx1200M is safe).
The -XX: options have the same meaning as when launching an application server; use them only to control the heap and garbage collection, i.e. to tweak performance.
-source is the Java version (1.5 to 1.8).
3. Step#3 (scan with rulepack, custom rules, filters, etc)
nohup ./sourceanalyzer -b 9999 -64 -Xmx8000M -Dcom.fortify.sca.ProjectRoot=/local/proj/9999 -Dcom.fortify.WorkingDirectory=/local/proj/9999/working -logfile /local/proj/9999/working/sca.log -scan -filter '/local/other/filter.txt' -rules '/local/other/custom/*.xml' -f '/local/proj/9999.fpr' &
-filter: the file name must be filter.txt; any rule GUID listed in this file will not be reported.
-rules: this is the custom rule file you wrote. The HP rulepack lives in the FORTIFY_HOME/Core/config/rules directory.
-scan: the keyword that tells the Fortify engine to scan the existing scan id. You can skip step #2 and only do step #3 if you did not change code and just want to play with different filters/custom rules.
4. Step#4 Generate PDF from the FPR file (if required)
./ReportGenerator -format pdf -f '/local/proj/9999.pdf' -source '/local/proj/9999.fpr'
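To keep the steps above straight, a small helper can assemble the three sourceanalyzer invocations for a given scan id. This is only a sketch of the commands already shown in this answer: the /local/proj/<scanid> layout is a convention of this answer, and most tuning flags are omitted for brevity.

```python
def fortify_commands(scan_id, project_root):
    """Build the clean / translate / scan command lines as argument lists.

    project_root is assumed to follow the /local/proj/<scanid> layout used
    in the steps above; adjust paths to your own setup.
    """
    working = f"{project_root}/working"
    base = [
        "./sourceanalyzer", "-b", str(scan_id),
        f"-Dcom.fortify.sca.ProjectRoot={project_root}",
        f"-Dcom.fortify.WorkingDirectory={working}",
        "-logfile", f"{working}/sca.log",
    ]
    # Step 1: clean the cache for this scan id
    clean = base + ["-clean"]
    # Step 2: translate source to byte code (classpath/source globs are placeholders)
    translate = base + [
        "-64", "-Xmx8000M", "-source", "1.5",
        "-classpath", f"{project_root}/source/WEB-INF/lib/*.jar",
        f"{project_root}/source/src/**/*",
    ]
    # Step 3: scan the translated build and write the FPR
    scan = base + ["-64", "-Xmx8000M", "-scan", "-f", f"{project_root}/{scan_id}.fpr"]
    return clean, translate, scan
```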
I have a MySQL4 database db4 (charset: latin1) which I want to copy to a MySQL5 database db5 (default charset: utf-8) using the following command:
mysqldump -u dbo4 --password="..." --default-character-set="latin1" db4 | mysql -S /tmp/mysql5.sock -u dbo5 --password="..." --default-character-set="latin1" db5
The values of the entries are copied correctly, but the German umlauts (äöü...) contained in the default values of some fields are afterwards shown as "?".
What is wrong with my copy command?
I simply want to keep everything as it was before (all data in the database stored as latin1).
I've worked extensively with ROOT, which has its own format for data files, but for various reasons we would like to switch to HDF5 files. Unfortunately we'd still require some way of translating files between the two formats. Does anyone know of any existing libraries which do this?
You might check out rootpy, which has a facility for converting ROOT files into HDF5 via PyTables: http://www.rootpy.org/commands/root2hdf5.html
If this issue is still of interest to you, recently there have been large improvements to rootpy's root2hdf5 script and the root_numpy package (which root2hdf5 uses to convert TTrees into NumPy structured arrays):
root2hdf5 -h
usage: root2hdf5 [-h] [-n ENTRIES] [-f] [--ext EXT] [-c {0,1,2,3,4,5,6,7,8,9}]
[-l {zlib,lzo,bzip2,blosc}] [--script SCRIPT] [-q]
files [files ...]
positional arguments:
files
optional arguments:
-h, --help show this help message and exit
-n ENTRIES, --entries ENTRIES
number of entries to read at once (default: 100000.0)
-f, --force overwrite existing output files (default: False)
--ext EXT output file extension (default: h5)
-c {0,1,2,3,4,5,6,7,8,9}, --complevel {0,1,2,3,4,5,6,7,8,9}
compression level (default: 5)
-l {zlib,lzo,bzip2,blosc}, --complib {zlib,lzo,bzip2,blosc}
compression algorithm (default: zlib)
--script SCRIPT Python script containing a function with the same name
that will be called on each tree and must return a tree or
list of trees that will be converted instead of the
original tree (default: None)
-q, --quiet suppress all warnings (default: False)
As of when I last checked (a few months ago), root2hdf5 had a limitation that it could not handle TBranches which were arrays. For this reason I wrote a bash script: root2hdf (sorry for the uncreative name).
It takes a ROOT file and the path to the TTree in the file as input arguments and generates source code & compiles to an executable which can be run on ROOT files, converting them into HDF5 datasets.
It also has a limitation that it cannot handle compound TBranch types, but I don't know that root2hdf5 does either.
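If you need to run root2hdf5 over many files, a small wrapper can assemble the command line from the flags shown in the help text above. This is just a sketch; the file names are placeholders and the command is returned rather than executed.

```python
import subprocess

def root2hdf5_cmd(files, complevel=5, complib="zlib", force=False):
    """Build a root2hdf5 invocation using flags from its --help output.

    Pass the result to subprocess.check_call() to actually run the
    conversion (requires root2hdf5 on PATH).
    """
    cmd = ["root2hdf5", "-c", str(complevel), "-l", complib]
    if force:
        cmd.append("-f")  # overwrite existing output files
    cmd.extend(files)
    return cmd

# Example (file names are placeholders):
# subprocess.check_call(root2hdf5_cmd(["run1.root", "run2.root"], force=True))
```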
I have a Rails site. I'd like, on mongrel restart, to write the current svn version into public/version.txt, so that I can then put this into a comment in the page header.
The problem is getting the current local version from svn; I'm a little confused.
If, for example, I do svn update on a file which hasn't been updated in a while, I get "At revision 4571.". However, if I do svn info, I get:
Path: .
URL: http://my.url/trunk
Repository Root: http://my.url/lesson_planner
Repository UUID: #########
Revision: 4570
Node Kind: directory
Schedule: normal
Last Changed Author: max
Last Changed Rev: 4570
Last Changed Date: 2009-11-30 17:14:52 +0000 (Mon, 30 Nov 2009)
Note this says revision 4570, one lower than the previous command.
Can anyone set me straight and show me how to simply get the current version number?
thanks, max
Subversion comes with a command for doing exactly this: SVNVERSION.EXE.
usage: svnversion [OPTIONS] [WC_PATH [TRAIL_URL]]
Produce a compact 'version number' for the working copy path
WC_PATH. TRAIL_URL is the trailing portion of the URL used to
determine if WC_PATH itself is switched (detection of switches
within WC_PATH does not rely on TRAIL_URL). The version number
is written to standard output. For example:
$ svnversion . /repos/svn/trunk
4168
The version number will be a single number if the working
copy is single revision, unmodified, not switched and with
an URL that matches the TRAIL_URL argument. If the working
copy is unusual the version number will be more complex:
4123:4168 mixed revision working copy
4168M modified working copy
4123S switched working copy
4123:4168MS mixed revision, modified, switched working copy
If invoked on a directory that is not a working copy, an
exported directory say, the program will output 'exported'.
If invoked without arguments WC_PATH will be the current directory.
Valid options:
-n [--no-newline] : do not output the trailing newline
-c [--committed] : last changed rather than current revisions
-h [--help] : display this help
--version : show version information
I use the following shell script snippet to create a header file svnversion.h which defines a few constant character strings I use in compiled code. You should be able to do something very similar:
#!/bin/sh -e
svnversion() {
svnrevision=`LC_ALL=C svn info | awk '/^Revision:/ {print $2}'`
svndate=`LC_ALL=C svn info | awk '/^Last Changed Date:/ {print $4,$5}'`
now=`date`
cat <<EOF > svnversion.h
// Do not edit! This file was autogenerated
// by $0
// on $now
//
// svnrevision and svndate are as reported by svn at that point in time,
// compiledate and compiletime are filled in by gcc at compilation
#include <stdlib.h>
static const char* svnrevision = "$svnrevision";
static const char* svndate = "$svndate";
static const char* compiletime = __TIME__;
static const char* compiledate = __DATE__;
EOF
}
test -f svnversion.h || svnversion
This assumes that you would remove the created header file to trigger the build of a fresh one.
If you just want to print latest revision of the repository, you can use something like this:
svn info <repository_url> -rHEAD | grep '^Revision: ' | awk '{print $2}'
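The same extraction can be done without grep/awk; here is a small Python sketch, using the svn info output shown in the question as sample input:

```python
def svn_revision(info_output):
    """Extract the working-copy revision from `svn info` text output.

    Returns the revision as an int, or None if no Revision line is found.
    """
    for line in info_output.splitlines():
        if line.startswith("Revision: "):
            return int(line.split(": ", 1)[1])
    return None

# Sample trimmed from the svn info output in the question:
sample = """Path: .
URL: http://my.url/trunk
Revision: 4570
Last Changed Rev: 4570"""

print(svn_revision(sample))  # 4570
```

The result could then be written to public/version.txt on restart.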
You can use capistrano for deployment, it creates REVISION file, which you can copy to public/version.txt
It seems that you are running svn info on the directory, but svn update on a specific file. If you update the directory to revision 4571, svn info should print:
Path: .
URL: http://my.url/trunk
Repository Root: http://my.url/lesson_planner
Repository UUID: #########
Revision: 4571
[...]
Last Changed Rev: 4571
Note that the "last changed revision" does not necessarily align with the latest revision of the repository.
Thanks to everyone who suggested capistrano and svn info.
We do actually use capistrano, and it does indeed create this REVISION file, which I guess I saw before but didn't pay attention to. As it happens, though, this isn't quite what I need, because it only gets updated on deploy, whereas sometimes we might sneakily update a couple of files and then restart rather than doing a full deploy.
I ended up creating my own file using svn info, grep and awk, as many people suggested here, and putting it in public. It is created on mongrel start, which is part of both the deploy and restart processes, so it gets done both times.
thanks all!