I have a backup procedure that uses kpartx to read from a partitioned LVM volume.
Every now and then the device cannot be unmapped.
Right now when I try to remove the mapping I get the following:
# kpartx -d /dev/loop7
read error, sector 0
read error, sector 1
read error, sector 29
I tried dmsetup clean loop7p1 but nothing changed.
How can I free the partition without rebooting the server?
thanks
You can use dmsetup remove_all to remove this mapping. You shouldn't need -f (force), but be aware that forcing may also remove mappings that are still in use.
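A minimal sketch of the recovery steps (the mapping name is taken from the question; run dmsetup ls first to see what actually exists on your system):
# dmsetup ls                # list the current device-mapper mappings
# dmsetup remove loop7p1    # try removing the single stale mapping first
# dmsetup remove_all        # last resort: removes all unused mappings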
I'm trying to dump a Jena database as triples.
There seems to be a command that sounds perfectly suited to the task: tdb2.tdbdump
jena@debian-clean:~$ ./apache-jena-3.8.0/bin/tdb2.tdbdump --help
tdbdump : Write a dataset to stdout (defaults to N-Quads)
Output control
--output=FMT Output in the given format, streaming if possible.
--formatted=FMT Output, using pretty printing (consumes memory)
--stream=FMT Output, using a streaming format
--compress Compress the output with gzip
Location
--loc=DIR Location (a directory)
--tdb= Assembler description file
Symbol definition
--set Set a configuration symbol to a value
--mem=FILE Execute on an in-memory TDB database (for testing)
--desc= Assembler description file
General
-v --verbose Verbose
-q --quiet Run with minimal output
--debug Output information for debugging
--help
--version Version information
--strict Operate in strict SPARQL mode (no extensions of any kind)
jena@debian-clean:~$
But I've not succeeded in getting it to write anything to STDOUT.
When I use the --loc parameter to point to a DB, a new copy of that DB appears in the subfolder Data-0001, but nothing appears on STDOUT.
When I try the --tdb parameter, and point it to a ttl file, I get a stack trace complaining about its formatting.
Google has turned up the Jena documentation telling me the command exists, and that's it. So any help appreciated.
"--loc" should be the same as used to create the database.
Suppose that's "DB2". For TDB2 (not TDB1) after the database is created, then "DB2/Data-0001" will already exist. Do not use this for --loc. Use "--loc DB2".
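For example, to dump a TDB2 database that was created at "DB2" (the path to the Jena install comes from the question; the output filename is illustrative):
./apache-jena-3.8.0/bin/tdb2.tdbdump --loc DB2 > dump.nq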
If it is a TDB1 database (the files are in the directory at "--loc", with no "Data-0001"), then use tdbdump instead. An empty database has no triples/quads in it, so you would get no output.
Fuseki currently (up to 3.16.0) has to be called with the same setup each time it is run, which is fragile regarding TDB1 vs TDB2. If you created the TDB2 database outside Fuseki and only use command-line args, you'll need "--tdb2" each time.
The next Fuseki release (3.17.0) detects the type of an existing database.
I'm trying to make a full memory dump of an ESP8266 running NodeMCU (with some .lua files) in order to make a copy of it. I'm using esptool.py for the dump, like this:
esptool.py.exe -p COM3 -b 230400 read_flash 0 0x200000 test.bin
When I look inside test.bin, I can see my Lua code, so those files are definitely there. But when I upload this .bin to another ESP, NodeMCU starts its filesystem format procedure and all the .lua files are deleted. The NodeMCU build itself seems to be fine.
Why does this happen if I make a full dump of the flash? Is there a register flag or something that tells NodeMCU to format the filesystem? How can I copy an ESP keeping all .lua files in place?
It seems NodeMCU was missing its init data at 0x3fc000.
So the best way to solve the issue was to make a complete memory dump:
esptool.py.exe -p COM3 -b 230400 read_flash 0 0x400000 test.bin in my case.
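A full clone then means reading all 4 MB (0x400000) from the source chip, which includes the init data near the end of flash, and writing it back to the target. A sketch (COM ports are illustrative; adjust to your setup):
esptool.py.exe -p COM3 -b 230400 read_flash 0 0x400000 full.bin
esptool.py.exe -p COM4 -b 230400 write_flash 0 full.bin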
I'd like to move my /var/lib/docker data to another place, and to make it safe, I'd like to use rsync.
But the data is stored in sparse files, and rsync does not seem to handle them properly.
What would be the right parameters for rsync?
-a properly preserves uid/gid and permissions
-S handles sparse files efficiently, but rsync never seems to finish
Without -S, rsync tries to copy more data than the original location can contain (100G on a 48G partition). With -S, I seem to be stuck forever after about 10G.
It seems that rsync -avXS is working like a charm.
Should you rsync /var/lib/docker to a remote server, be sure to tell rsync not to map the uids and gids between the two systems. Otherwise you could end up with wrong ownership of the files in your containers.
So this would create an exact copy:
rsync -avHXS --numeric-ids /var/lib/docker/. root@some.host.com:/var/lib/docker
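To sanity-check that sparseness was preserved after the copy, you can compare apparent and allocated sizes on both ends (a quick check using GNU du, not part of the original answer):
du -sh --apparent-size /var/lib/docker
du -sh /var/lib/docker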
My goal is to install and fully set up PostgreSQL by following a RailsCasts video.
P.S. I am on Mountain Lion 10.8.
$ brew install postgresql
seems okay.
$ initdb /usr/local/var/postgres
ok's ok's then...
FATAL: could not create shared memory segment: Cannot allocate memory
DETAIL: Failed system call was shmget(key=1, size=2072576, 03600).
HINT: This error usually means that PostgreSQL's request for a shared memory segment exceeded available memory or swap space, or exceeded your kernel's SHMALL parameter. You can either reduce the request size or reconfigure the kernel with larger SHMALL. To reduce the request size (currently 2072576 bytes), reduce PostgreSQL's shared memory usage, perhaps by reducing shared_buffers or max_connections.
So, like a good young SO grasshopper, I start googling and come to this SO post:
PostgreSQL installation error -- Cannot allocate memory
The suggested answer in that post led me to this answer: http://willbryant.net/software/mac_os_x/postgres_initdb_fatal_shared_memory_error_on_leopard
$ sudo sysctl -w kern.sysv.shmall=65536
Password:
kern.sysv.shmall: 1024 -> 65536
$ sudo sysctl -w kern.sysv.shmmax=16777216
kern.sysv.shmmax: 4194304 -> 16777216
Looks like everything worked so far, but in order to protect my changes from a reboot, I need to update my /etc/sysctl.conf file. The problem is that I can't find it!
How do I locate this file? From my peanut-sized understanding of computers, there is no file path that exists, and if it did, what comes before the /etc? It certainly is not on my desktop. All I get is "no such file exists", but I don't know how to find this file.
Embarrassing. I was trying to cd into my file. Just do $ cd /etc
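For the record, /etc/sysctl.conf may not exist yet on a fresh OS X install; in that case the persistent settings can be created like this (values taken from the sysctl commands above):
sudo sh -c 'echo kern.sysv.shmall=65536 >> /etc/sysctl.conf'
sudo sh -c 'echo kern.sysv.shmmax=16777216 >> /etc/sysctl.conf'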
When compiling a latex document with 15 or so packages and about five includes, pdflatex throws a "too many open files"-error. All includes are ended with \endinput. Any ideas what might cause the error?
The error seems to depend on how many packages are used (no surprise...); however, this is not the first time I use this many packages, while I've never encountered such an error before.
@axel_c: This is not about Linux. As you may or may not know, LaTeX is also available on Windows (which just happens to be what I'm using right now).
Try inserting
\let\mypdfximage\pdfximage
\def\pdfximage{\immediate\mypdfximage}
before \documentclass.
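As I understand it, this helps because \pdfximage normally defers reading each image until the page is shipped out, keeping the file handle open in the meantime; the \immediate makes pdfTeX embed each image as soon as it is referenced, so the handle is closed right away.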
See also these threads from the pdftex mailing list:
Error message: Too many open files.
Too many files open
Type
ulimit -n
to get the maximum number of open files. To change it to e.g. 2048, type
ulimit -S -n 2048
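So a session that bumps the soft limit before compiling might look like this (the limit value and the filename document.tex are illustrative):
ulimit -S -n 2048
pdflatex document.tex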
What is this command giving you?
$ ulimit -n
You might want to increase it by editing the /etc/security/limits.conf file.
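A limits.conf entry raising the per-user limit could look like this (the username and values are illustrative; the columns are domain, type, item, value):
youruser  soft  nofile  2048
youruser  hard  nofile  4096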
This could be caused by a low value in your 'max open file descriptors' kernel configuration. Assuming you're using Linux, you can run this command to find out current limit:
cat /proc/sys/fs/file-max
If the limit is low (say, 1024 or so, which is the default in some Linux distros), you could try raising it by editing /etc/sysctl.conf:
fs.file-max = 65536
Details may differ depending on your Linux distribution, but a quick Google search will let you fix it easily.
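After editing /etc/sysctl.conf, the new value can be applied without rebooting (standard sysctl behavior; run as root) and then verified:
sysctl -p
cat /proc/sys/fs/file-max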