Error running BLAST when database is on external hard drive - storage

I have two Macs: a Desktop with a 3 TB hard drive, and a laptop with a 512 GB SSD and an attached 8 TB USB-C external hard drive. I have the same BLAST database set up on both; for the laptop, the database is on the external hard drive. When I run a BLAST query on the desktop, it runs fine. However, when I run the exact same command on the laptop, I get the following error message.
Error memory mapping:/Volumes/LaCie/data/blastdb/all_proteins.FASTA.03.phr openedFilesCount=251 threadID=0
BLAST Database error: Cannot memory map /Volumes/LaCie/data/blastdb/all_proteins.FASTA.03.phr. Number of files opened: 251
I had no luck Googling this message, which is why I'm asking for help here! Is this error related to the BLAST database being on the external hard drive? Is there any way to fix this problem?
Thank you in advance for any help!
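One thing that may be worth checking, given the openedFilesCount=251 in the message: macOS caps the number of files a single process may hold open, and the default per-shell soft limit is commonly 256, which a multi-volume BLAST database can exhaust. The commands below are only a diagnostic sketch; 4096 is an arbitrary example value.
ulimit -n                 # show the current open-file limit for this shell
launchctl limit maxfiles  # show the system-wide soft/hard maxfiles limits
ulimit -n 4096            # raise the limit for this session, then re-run the BLAST command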

Related

Dell EMC Unity 300 Storage - System not initialized and no management IP address assigned

I have a Dell EMC Unity 300 shared storage array for my two servers.
The problem is that I cannot get the SP-A and SP-B IP addresses for the storage, which I need in order to create my LUNs.
I can only get the storage management IP address, which I have tried to ping from my server, but it returns "Destination Host Unreachable".
All cables are firmly plugged in.
Further investigation using the Dell SP LED status indicators showed the following from the SP fault LED:
I would like to solve this issue so that I can ping the storage IPs and hence log in to the storage.
Please help with your expertise, thoughts, possible solutions and ideas if you have seen this before.

Raspberry Pi 3 + Windows IoT Core crashes after some time

I'm developing a UWP app on a Raspberry Pi 3 with Windows IoT Core. After I deploy my app and use it for a couple of days, the OS crashes with "Your PC ran into a problem and needs to restart". It restarts a couple of times, but the same error appears on every boot.
I removed the SD card (Class 10, 64 GB), formatted it, and reinstalled everything. At first it was okay, but after some time the same error appeared.
I tried different OS builds, and that didn't work.
I tried an industrial power supply (5 V, 3 A), and that didn't work either.
My SD card is not one of the recommended ones, but do I really have to get a recommended SD card to use Windows IoT Core properly?
"Your PC ran into a problem and needs to restart" is a typical blue screen message seen on Windows systems from the last few years - laptops and desktops with far larger hard drives and no SD card. The error is not associated with a RAM or disk space shortage (operating systems running in graphical mode usually monitor and actively warn about either). In your case, it is showing at startup, when not much is running (taking up RAM), and you can check the amount of space used on the card with the PC.
The key stats for SD cards are size (you have plenty) and speed (clearly enough or you would have trouble installing/running anything after starting the Pi). The cause is something else, and finding out what will require getting a more detailed error message from Windows - "a problem" could mean anything. In my experience, blue screen errors have mostly involved having a wrong driver installed, sometimes a bad Windows update - but IoT Core has its own alternatives, like "bad system configuration". Look for the underscored string (e.g., BAD_SYSTEM_CONFIG_INFO) at the end of your blue screen message, as that is the first hint.
Unfortunately, most Windows BSoD documentation is for traditional PCs, so I cannot recommend specific troubleshooting tools and be sure that they will run on the Pi.
You can use the Windows Debugger (WinDbg) to debug the kernel and drivers on Windows IoT Core; it is a very powerful debugger that most Windows developers are familiar with. You can also refer to this topic on MSDN, which shows how to create a dump file when the app crashes. If possible, share your code so that we can reproduce the issue.
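As a rough sketch of what kernel debugging over the network can look like (the IP address, port, and key below are made-up placeholders; the bcdedit commands would be run on the device from an administrator PowerShell or SSH session, and hostip is the address of the development PC running WinDbg):
bcdedit /debug on
bcdedit /dbgsettings net hostip:192.168.1.10 port:50000 key:1.2.3.4
shutdown /r /t 0
Then on the development PC:
windbg -k net:port=50000,key=1.2.3.4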

How to add a database on a server so that it is accessible by all computers in the lab

I made a sample database for my students to learn SQL. I created it, added 30 entries, and saved it. I cannot realistically copy the same file to all 100 computers in my lab, so please tell me how to do this. I searched the net, but to no avail.
sql> tables
-----------------------------------
dhana
-----------------------------------
task completed in 0.57 seconds
I want to put the same database on 100 computers, but copying it by hand is too tedious: powering on each Windows XP computer, copying the file from the network, pasting it, and shutting the computer down again takes far too long.
Hmm. Refer to Computer Science for Class 11 with Python by Sumita Arora and you may find it.
You are not searching the net properly; questions like this are fairly easy to find on Google. What is your web browser?
You can use something like MySQL Workbench and access your server remotely, although I've never tried 100 simultaneous connections. Another option is to SSH from the clients into your server and use the database CLI. Of course, I assume you want one database and many clients, not many databases.
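As a minimal sketch of the one-database/many-clients setup, assuming a MySQL server (the database name labdb, the user student, the password, and the IP address are all placeholders; the server must also be configured to accept remote connections, e.g. via its bind-address setting):
-- on the server, as an administrative user:
CREATE DATABASE IF NOT EXISTS labdb;
CREATE USER 'student'@'%' IDENTIFIED BY 'ChangeMe123';
GRANT SELECT ON labdb.* TO 'student'@'%';
FLUSH PRIVILEGES;
Each lab computer then connects to that one database over the network instead of holding its own copy:
mysql -h 192.168.1.5 -u student -p labdb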

Hardware issue? zfs scrub always repairs

Hello Stackoverflowers,
I have a curious problem with an old Tyan server motherboard and ZFS.
In short, I can run zfs scrub every hour and it always repairs checksums, with no further errors.
I ran memtest86 all night long with no errors (16 GB ECC memory).
I ran smartctl -t long /dev/ada{0,1,2}, which showed no errors either.
But scrubbing keeps showing checksum errors.
Thanks for any clue
Xav
This means that either a) you're writing bad sectors to the disk, or b) you're reading bad sectors back. If it's a small number of sectors being corrected each time, my experience is that it's a bad controller or driver.
That is all assuming you don't get console errors.
Reasoning? Well ... if it's the drives, they generally are smart enough to report their errors (at least most of them). If it's the cables, generally you'll be getting checksum errors from the driver on your console. You've mostly eliminated memory, so... You're left with controllers and drivers.
Luckily with ZFS, you can "try" the drives in another machine without too much hassle, usually.
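One way to narrow this down is to watch the per-device error counters between scrubs (the pool name tank below is a placeholder): if the CKSUM column climbs on all disks at once, a shared controller or driver is more likely than a single bad drive.
zpool status -v tank   # READ/WRITE/CKSUM counters per device, plus any affected files
zpool clear tank       # reset the counters
zpool scrub tank       # scrub again, then compare the counters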
Thanks,
I suspect the controller too, since it didn't want to load its BIOS on the last reboot (two Adaptec 1210 cards).
Is ZFS smart enough to recognise the pool if I simply move the cables to the motherboard's controller?
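Generally yes: ZFS identifies pool members by the labels written on the disks themselves, not by the controller or device path, so an exported pool can usually be re-imported after recabling. A hedged sketch of the move (tank is again a placeholder pool name):
zpool export tank      # cleanly detach the pool before powering off
# power down, move the disks to the onboard SATA ports, boot
zpool import           # scans all devices and lists importable pools
zpool import tank      # import the pool under its new device paths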

FoxPro OleDbException: File Too Large

I am running a FoxPro OLEDB query with several joins over a fairly large dataset. However, despite only asking for "MAX" or "TOP 100" rows of data, I get the following error:
System.Data.OleDb.OleDbException (0x80004005): File
c:\users\appX\appdata\local\temp\4\00004y7t002o.tmp is too large.
[LOCAL]
OR
System.Data.OleDb.OleDbException (0x80004005): Error writing to file
c:\users\appX\appdata\local\temp\00002nuh0025.tmp. [REMOTE]
(I have tried the query both locally and remotely).
The OLEDB query seemingly creates and deletes a huge number of temp files, e.g.:
This would suggest my query is simply too large and will require several smaller queries/workarounds.
The question is: is this a known issue? Is there an official workaround? Would the FoxPro ODBC adapter have the same problem?
Basically 2GB is the upper limit for any file that Visual FoxPro has to deal with. None of those temp files are anywhere near that. Does the location they are being created in have enough disk space? Are there user disk quotas in effect?
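A quick way to check both of those on the machine running the query (the drive letter is just an example; point it at whichever volume holds the temp path from the error message):
fsutil volume diskfree c:   # free and total bytes on the volume
fsutil quota query c:       # shows whether disk quotas are enabled and any per-user limits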
