HDF5 error from MEEP on Ubuntu

I have installed MEEP and everything runs fine; however, when I try to run the tutorial code for the straight waveguide, I get the following message:
-----------
Initializing structure...
Working in 2D dimensions.
Computational cell is 16 x 8 x 0 with resolution 10
block, center = (0,0,0)
size (1e+20,1,1e+20)
axes (1,0,0), (0,1,0), (0,0,1)
dielectric constant epsilon diagonal = (12,12,12)
time for set_epsilon = 0.0840669 s
-----------
creating output file "./eps-000000.00.h5"...
HDF5-DIAG: Error detected in HDF5 (1.8.4) thread 3078944464:
#000: ../../../src/H5F.c line 1430 in H5Fcreate(): unable to create file
major: File accessability
minor: Unable to open file
#001: ../../../src/H5F.c line 1220 in H5F_open(): unable to open file
major: File accessability
minor: Unable to open file
#002: ../../../src/H5FD.c line 1079 in H5FD_open(): open failed
major: Virtual File Layer
minor: Unable to initialize object
#003: ../../../src/H5FDsec2.c line 365 in H5FD_sec2_open(): unable to open file
major: File accessability
minor: Unable to open file
#004: ../../../src/H5FDsec2.c line 365 in H5FD_sec2_open(): Permission denied
major: Internal error (too specific to document in detail)
minor: System error message
meep: error on line 450 of ../../../src/h5file.cpp: error opening HDF5 output file

It looks like a permission error. Do you have write permission in the current directory? Or does the file ./eps-000000.00.h5 already exist but cannot be overwritten?
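Both possibilities from the answer above can be checked up front. This is a hedged sketch (the filename is the one from the MEEP output; run it in the directory you launch MEEP from):

```python
import os

# Check the two failure modes suggested above: an unwritable working
# directory, or a leftover output file that cannot be overwritten.
fname = "eps-000000.00.h5"
cwd_writable = os.access(os.getcwd(), os.W_OK)
blocked = os.path.exists(fname) and not os.access(fname, os.W_OK)
print("current directory writable:", cwd_writable)
print("read-only leftover output file:", blocked)
```

If the directory is not writable, either `chmod`/`chown` it or run MEEP from a directory you own; if a stale read-only `eps-000000.00.h5` exists, delete it before re-running.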

Visual studio generating corrupt (or maybe empty) library file

This was all working a few days ago, and I'm not sure what changed.
I have a bunch of 32-bit library files which I am building on Windows 10 with VS2019. When I try to link against them, I get an error saying that the library file is corrupt. I've tried rebuilding, repairing Visual Studio, and rebooting.
1>CVTRES : fatal error CVT1107: 'C:\sandbox\debug\WinIPC.lib' is corrupt
1>LINK : fatal error LNK1123: failure during conversion to COFF: file invalid or corrupt
Looking at one particular library, it is 40 MB in size, which seems reasonable. dumpbin /all, however, shows this:
File Type: COFF OBJECT
FILE HEADER VALUES
0 machine (Unknown)
0 number of sections
0 time date stamp
0 file pointer to symbol table
0 number of symbols
0 size of optional header
0 characteristics
Summary
I have four library files that show this problem and several that don't. I don't see any difference in the way that they are built.
I do see from the dumpbin output that the libraries that don't work have file type "COFF OBJECT", while the ones that do work are "Library".
In the release build, Visual Studio does NOT complain about the file being corrupt, but it does complain that the symbols that should be in the library are unresolved, and dumpbin still shows that the file is empty.
Anyone have any suggestions?

HDF5 call 'file = H5Fopen(nuceos_table_name, H5F_ACC_RDONLY, H5P_DEFAULT)' returned error code -1

When I run an executable on Linux, I get an error I'm not able to explain. The executable is meant to read an .h5 file whose name is given by the nuceos_table_name variable. I have already checked the filename and it should be right, i.e. nuceos_table_name = "./BLH_new.h5"; the file is in the same folder I launch the command from ("srun <path-to-executable>"). The error is the following:
HDF5-DIAG: Error detected in HDF5 (1.12.0) thread 0:
#000: H5F.c line 793 in H5Fopen(): unable to open file
major: File accessibility
minor: Unable to open file
HDF5-DIAG: Error detected in HDF5 (1.12.0) thread 0:
HDF5-DIAG: Error detected in HDF5 (1.12.0) thread 0:
#000: H5F.c line 793 in H5Fopen(): unable to open file
major: File accessibility
minor: Unable to open file
HDF5-DIAG: Error detected in HDF5 (1.12.0) thread 0:
#000: H5F.c line 793 in H5Fopen(): unable to open file
major: File accessibility
minor: Unable to open file
#001: H5VLcallback.c line 3500 in H5VL_file_open(): open failed
major: Virtual Object Layer
minor: Can't open object
Thank you for your help, and sorry if I haven't been clear enough; I'm new to this, so I'm happy to explain further.
P.S. The code fragment which opens the file is the following:
void nuc_eos_C_ReadTable(const cGH* cctkGH)
{
  DECLARE_CCTK_PARAMETERS;
  using namespace nuc_eos;
  using namespace nuc_eos_private;

  CCTK_VInfo(CCTK_THORNSTRING, "*******************************");
  CCTK_VInfo(CCTK_THORNSTRING, "Reading nuc_eos table file:");
  CCTK_VInfo(CCTK_THORNSTRING, "%s", nuceos_table_name);
  CCTK_VInfo(CCTK_THORNSTRING, "*******************************");

  CCTK_INT my_reader_process = reader_process;
  if (my_reader_process < 0 || my_reader_process >= CCTK_nProcs(cctkGH))
  {
    CCTK_VWarn(CCTK_WARN_COMPLAIN, __LINE__, __FILE__, CCTK_THORNSTRING,
               "Requested IO process %d out of range. Reverting to process 0.",
               my_reader_process);
    my_reader_process = 0;
  }

  const int doIO = !read_table_on_single_process
                   || CCTK_MyProc(cctkGH) == my_reader_process;

  hid_t file;
  if (doIO && !file_is_readable(nuceos_table_name)) {
    CCTK_VError(__LINE__, __FILE__, CCTK_THORNSTRING,
                "Could not read nuceos_table_name %s \n",
                nuceos_table_name);
  }
  HDF5_DO_IO(file = H5Fopen(nuceos_table_name, H5F_ACC_RDONLY, H5P_DEFAULT));
  // Use these two defines to easily read in a lot of variables in the same way
  // The first reads in one variable of a given type completely
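Since the path is relative ("./BLH_new.h5"), a likely culprit is that the processes launched by srun do not start in the directory you launch from. A quick diagnostic is to run something like this sketch the same way the solver is launched (assuming Python is available on the cluster; the filename is the one from the question):

```python
import os

# Print where the process actually starts and whether the relative
# table path resolves from there.
table = "./BLH_new.h5"
wd = os.getcwd()
found = os.path.exists(table)
print("working directory:", wd)
print("resolved path:", os.path.abspath(table))
print("exists:", found, "readable:", os.access(table, os.R_OK))
```

If the working directory is not the one containing the table, either pass an absolute path in nuceos_table_name or set the job's working directory (e.g. srun's --chdir option) accordingly.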

Getting Error while reading a manual_test.csv file in Jupyter. (Data and warehouse mining, Machine Learning)

I am trying to run a cell in a Jupyter notebook and am getting a PermissionError (errno 13). The code is:
df_manual_testing = pd.concat([df_fake_manual_testing,df_true_manual_testing], axis = 0)
df_manual_testing.to_csv("manual_testing.csv")
The error which I am getting is:
---------------------------------------------------------------------------
PermissionError Traceback (most recent call last)
<ipython-input-16-f14a4d175882> in <module>
1 df_manual_testing = pd.concat([df_fake_manual_testing,df_true_manual_testing], axis = 0)
----> 2 df_manual_testing.to_csv("manual_testing.csv")
~\anaconda\lib\site-packages\pandas\core\generic.py in to_csv(self, path_or_buf, sep, na_rep, float_format, columns, header, index, index_label, mode, encoding, compression, quoting, quotechar, line_terminator, chunksize, date_format, doublequote, escapechar, decimal)
3202 decimal=decimal,
3203 )
-> 3204 formatter.save()
3205
3206 if path_or_buf is None:
~\anaconda\lib\site-packages\pandas\io\formats\csvs.py in save(self)
182 close = False
183 else:
--> 184 f, handles = get_handle(
185 self.path_or_buf,
186 self.mode,
~\anaconda\lib\site-packages\pandas\io\common.py in get_handle(path_or_buf, mode, encoding, compression, memory_map, is_text)
426 if encoding:
427 # Encoding
--> 428 f = open(path_or_buf, mode, encoding=encoding, newline="")
429 elif is_text:
430 # No explicit encoding
PermissionError: [Errno 13] Permission denied: 'manual_testing.csv'
I cannot figure out how to change the permissions on my file. I think I will need to change the owner from root to my user, but I am not sure exactly how to do this.
If you are not interested in writing the CSV file into the current working directory, you can simply specify the full path of a folder in which you are sure you have permission to write.
For example:
df_manual_testing.to_csv("C:/Users/../manual_testing.csv")
However, if you want to write to a particular folder, you can check from the terminal whether you have permission to write there using the command ls -lh. If necessary, you can change the permissions from the folder owner's account with the command chmod 777 myfolder (a more restrictive mode is safer).
If you need more information about file permissions, you can look at this reference.
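The workaround above can be sketched as follows, writing to a directory the process is guaranteed to own (a temp directory) instead of the working directory. This assumes pandas is installed; `df` stands in for the question's `df_manual_testing`:

```python
import os
import tempfile
import pandas as pd

# Stand-in for df_manual_testing from the question.
df = pd.DataFrame({"a": [1, 2], "b": [3, 4]})

# Write to the user's temp directory, which is always writable
# by the current process.
out_path = os.path.join(tempfile.gettempdir(), "manual_testing.csv")
df.to_csv(out_path, index=False)
print("wrote", out_path)
```

On Windows, Errno 13 on to_csv also commonly means the target CSV is open in another program (e.g. Excel), so closing it there is worth trying before changing any permissions.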

Error while loading graphlab.SFrame('home_data.gl/')

I am doing the Machine Learning course from Coursera by the University of Washington, in which I am using IPython's graphlab. During practice, when I execute the command below:
sales = graphlab.SFrame('home_data.gl/')
I am getting an error.
IOError Traceback (most recent call last)
<ipython-input-2-e6a249ea422b> in <module>()
----> 1 sales = graphlab.SFrame('home_data.gl/')
C:\Users\chinesh\Anaconda2\envs\gl-env\lib\site-packages\graphlab\data_structures\sframe.pyc in __init__(self, data, format, _proxy)
951 pass
952 else:
--> 953 raise ValueError('Unknown input type: ' + format)
954
955 sframe_size = -1
C:\Users\chinesh\Anaconda2\envs\gl-env\lib\site-packages\graphlab\cython\context.pyc in __exit__(self, exc_type, exc_value, traceback)
47 if not self.show_cython_trace:
48 # To hide cython trace, we re-raise from here
---> 49 raise exc_type(exc_value)
50 else:
51 # To show the full trace, we do nothing and let exception propagate
IOError: C:\Users\chinesh\home_data.gl not found.
Where can I find home_data.gl on my computer, or is the problem something else?
You need to have your .ipynb file and the data file in the same directory for the above to work. Alternatively, specify the full or relative path of the data file in the call:
sales = graphlab.SFrame('C:\FULL-PATH\home_data.gl/')
Here is a link to the course reading for how to arrange your directories for the course. https://www.coursera.org/learn/ml-foundations/supplement/IT04V/reading-where-should-my-files-go
Make sure to download the zip file to the same folder where you will work on this data. For example, I downloaded the zip file to Downloads and then opened my notebook from that folder.
Just follow the instructions and download the training data into the IPython working directory. Go to the terminal and run:
unzip home_data.gl.zip
You will see the extracted files in the directory home_data.gl. Now in IPython, run:
sales = graphlab.SFrame('home_data.gl/')
sales
which will display the data in tabular format.
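The directory check implied by the answers above can be sketched in a notebook cell before calling graphlab (the folder name `home_data.gl` is the one from the course materials):

```python
import os

# Confirm the notebook's working directory actually contains the
# unzipped home_data.gl folder before loading it with graphlab.
data_dir = "home_data.gl"
present = os.path.isdir(data_dir)
print("working directory:", os.getcwd())
print("home_data.gl present:", present)
```

If it prints False, either unzip home_data.gl.zip into the printed directory or pass the full path to graphlab.SFrame.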

PyBBIO Analog Input: Failure to Load ADC File

While running the PyBBIO examples phant_test.py and analog_test.py I received the following error (I believe 'could' is a typo meant to be 'could not'):
Traceback (most recent call last):
File "analog_test.py", line 47, in <module>
run(setup, loop)
File "/usr/lib/python2.7/site-packages/PyBBIO-0.9-py2.7-linux-armv7l.egg/bbio/bbio.py", line 63, in run
loop()
File "analog_test.py", line 37, in loop
val1 = analogRead(pot1)
File "/usr/lib/python2.7/site-packages/PyBBIO-0.9-py2.7-linux-armv7l.egg/bbio/platform/beaglebone/bone_3_8/adc.py", line 46, in analogRead
raise Exception('*Could load overlay for adc_pin: %s' % adc_pin)
Exception: *Could load overlay for adc_pin: ['/sys/devices/ocp.2/PyBBIO-AIN0.*/AIN0', 'PyBBIO-AIN0', 'P9.39']
I have tried restarting the BeagleBone (rev A6 running Angstrom with a 3.8 kernel, with no capes connected) to clear the /sys/devices/bone_capemgr.7/slots file, but that did not work. It seems PyBBIO is accessing the slots file and adding overlays because the slots file looks like this after the example program runs:
0: 54:PF---
1: 55:PF---
2: 56:PF---
3: 57:PF---
4: ff:P-O-L Override Board Name,00A0,Override Manuf,PyBBIO-ADC
5: ff:P-O-L Override Board Name,00A0,Override Manuf,PyBBIO-AIN0
Since there were some changes being made to the slots file, I checked which files the analog_read(adc_pin) function in PyBBIO's adc.py was retrieving. With some print statements I figured out that the root problem is that the /sys/devices/ocp.2/PyBBIO-AIN0.*/AIN0 file, which apparently stores the analog read values, does not exist. The glob.glob call returns an empty list, and ls /sys/devices/ocp.2/PyBBIO-AIN0.10/ shows modalias power subsystem uevent as the only contents.
Is there something wrong in the overlay file? Or could there be another program or problem that is preventing the BeagleBone from writing the AIN0 file that PyBBIO is trying to read? The Python code seems logically correct, but the overlay is working incorrectly or being blocked in some way.
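The check described above can be reproduced standalone (the sysfs path is the one from the questioner's BeagleBone; on other kernels or boards the ocp.* index differs):

```python
import glob

# PyBBIO globs for the AIN0 value file under the ocp device tree;
# an empty result means the ADC overlay never created it.
matches = glob.glob("/sys/devices/ocp.2/PyBBIO-AIN0.*/AIN0")
if matches:
    print("found:", matches[0])
else:
    print("AIN0 file missing - overlay did not load correctly")
```

An empty list here, combined with the overlay entries visible in the slots file, points at the overlay itself failing to create the AIN0 attribute rather than at the Python side.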