I am using XlsxReaderWriter (Objective-C) to insert data and generate a .xlsx file. I copy demo.xlsx from the bundle directory to the documents directory, insert data into it, and save it under a new name. demo.xlsx has read and write permissions for everyone. After generating the new file I give the user an option to export it to the Microsoft Excel app on their device. When I open the file in the Microsoft Excel app, it says it is a read-only file; my data up to column "T" of the sheet is not shown, and the sheet starts from column "U" with no data inside. The data shows fine in other third-party apps and in the document controller. Why is this happening?
The code I tried :
import Foundation

// Create the file with read/write permissions (0o666) for everyone,
// then verify that it is both writable and readable.
let fileManager = NSFileManager.defaultManager()
let directory = NSSearchPathForDirectoriesInDomains(.DocumentDirectory, .UserDomainMask, true)[0]
let path = "\(directory)/file.txt"
let attributes: [String: AnyObject] = [NSFilePosixPermissions: NSNumber(short: 0o666)]
let success = fileManager.createFileAtPath(path, contents: nil, attributes: attributes)
if success && fileManager.isWritableFileAtPath(path) && fileManager.isReadableFileAtPath(path) {
    NSLog("Worked!")
} else {
    NSLog("Failed!")
}
In my case the file already existed before I set the permissions, so I applied them to the existing file rather than through createFileAtPath.
The permissions were set correctly; I verified them in the documents directory. Even so, the Microsoft Excel app on iOS still says it is a "Read Only" file.
I'd guess the problem is not the file permissions, but something in the contents of the .xlsx file your app has generated that Microsoft Excel is unhappy about, which would also explain why it is not displaying all the data. A .xlsx file is just a zip file containing XML.
You can un-package it using unzip, and then format all the XML for easy reading using a tool like xmllint --format.
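For example (a minimal sketch; generated.xlsx stands in for whatever your app produced):

mkdir extracted && cd extracted
unzip ../generated.xlsx
# sheet data normally lives under xl/worksheets/
xmllint --format xl/worksheets/sheet1.xml | less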
If you produce a very simple xlsx from your own app (the simplest example that still reproduces the problem), then create a similar xlsx manually from scratch in Excel, you can do the unzip and format on both of them and then do a diff -r to find the differences between them.
An alternative approach is to load the file that's showing as 'read only' into desktop Microsoft Excel, make a trivial edit and save it; if that file then loads into the mobile app correctly, you can diff that file against yours.
There will probably be a reasonable number of differences, so you'll need to apply some brain power to figure out which ones are most likely to cause Excel to make it readonly. Aside from anything obvious like an attribute named 'readonly', I'd look for things like the problem file using older XML namespaces.
If you find a difference, you can quickly check whether it's the issue by hand editing, rezipping the xml (make sure your editor doesn't leave any backup files behind, as Excel tends to object to unexpected files in the zip), then loading it into Excel to see if it's still read-only.
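For the rezip step, something like this should work (a sketch; run from inside the extracted directory, and note the -x patterns only guard against top-level droppings like .DS_Store and editor backups):

cd extracted
# zip everything back up, skipping dotfiles and backup files
zip -r ../fixed.xlsx . -x '.*' '*~' '*.bak'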
Here's a script I use to compare the XML for two OOXML documents:
#!/bin/bash
set -e
first=$1
second=$2
# convert to absolute paths
first=$(cd "$(dirname "$first")"; pwd)/$(basename "$first")
second=$(cd "$(dirname "$second")"; pwd)/$(basename "$second")
WORKDIR=~/tmp.$$
mkdir -p "$WORKDIR"
cd "$WORKDIR"
mkdir 1
cd 1
unzip "$first"
cd ..
mkdir 2
cd 2
unzip "$second"
cd ..
# pretty-print every XML part so the diff lines up by element
for i in $(find . -name '*.xml' -o -name '*.vml' -o -name '*.rels'); do
    xmllint --format "$i" > "$i.new"
    mv -f "$i.new" "$i"
done
diff -U 5 -r 1 2 | cat -v
# or kaleidoscope for better diff display:
# ksdiff 1 2
echo "$WORKDIR"
rm -rf "$WORKDIR"
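Usage is just the two documents to compare, e.g. (diff-ooxml.sh is a placeholder for wherever you saved the script):

./diff-ooxml.sh generated.xlsx known-good.xlsx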
Tried my own code. Same thing. But it works perfectly in other apps. Seems more like a Microsoft Excel app issue. Give feedback to the Excel team, provide your file, and ask them to fix the bug if it is one; either way they will give you more information than here on SO.
Related
I have a container that I want to export as a .tar file. I have been using podman run with a tar --exclude=/dir1 --exclude=/dir2 … that outputs to a file located on a bind-mounted host dir. But recently this has been giving me some tar: .: file changed as we read it errors, which podman/docker export would avoid; besides, I assume the export is more efficient. So I'm trying to migrate to using the export, but the major obstacle is that I can't seem to find a way to exclude paths from the tar stream.
If possible, I'd like to avoid modifying a tar archive already saved on disk, and instead modify the stream before it gets saved to a file.
I've been banging my head for multiple hours, trying useless advice from ChatGPT, looking at cpio, and attempting to pipe the podman export into a tar --exclude … command. With the last I had some small success at one point, but couldn't make tar save the result to a particular file name.
Any suggestions?
(note: I do not make distinction between docker and podman here as their export command is completely the same, and it's useful for searchability)
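For what it's worth, here is a sketch of that piping approach with GNU tar's --delete mode swapped in for --exclude; on an uncompressed archive --delete can act as a filter, reading the archive from stdin and writing the filtered archive to stdout, and a plain redirection then solves the "save under a particular name" part (mycontainer, dir1 and dir2 are placeholders):

# member paths in an export stream have no leading slash;
# --delete on a directory name drops it and everything under it
podman export mycontainer | tar -f - --delete dir1 dir2 > container.tar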
Problem outline
I'm trying to get all the files from a URL: https://archive-gw-1.kat.ac.za/public/repository/10.48479/7epd-w356/data/basic_products/bucket_contents.html
which appears to be a listing of the contents of an S3 bucket with associated download links.
When I attempt to download all the files with the extension *.jpeg, I'm simply returned the directory structure leading down to a subdirectory, with no downloaded files.
Things I've tried
To do this I've tried all the variations of leading parameters for:
$ wget -r -np -A '*.jpeg' https://archive-gw-1.kat.ac.za/public/repository/10.48479/7epd-w356/data/basic_products/
...that I can think of, but none have actually downloaded the jpeg files.
If you provide the path to a specific file e.g.
$ wget https://archive-gw-1.kat.ac.za/public/repository/10.48479/7epd-w356/data/basic_products/Abell_133_hi.jpeg
...the files can be downloaded, which would suggest that I must be mishandling the wildcard aspect of the download surely?
Thoughts which could be wrong owing to limited knowledge of wget and website protocols
Unless the fact that the contents are held in a bucket_contents.html rather than an index.html is causing problems?
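One workaround sketch, since -r -A depends on the server exposing crawlable directory listings: fetch bucket_contents.html itself, extract the .jpeg links, and feed them back to wget. This assumes the hrefs in that page are absolute URLs; if they are relative, you'd prepend the base URL first:

wget -qO- https://archive-gw-1.kat.ac.za/public/repository/10.48479/7epd-w356/data/basic_products/bucket_contents.html \
  | grep -oE 'href="[^"]*\.jpeg"' \
  | sed -e 's/^href="//' -e 's/"$//' \
  | wget -i - -P basic_products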
The man page for tar uses the word "dump" and its forms several times. What does it mean? For example (manual page for tar 1.26):
"-h, --dereferencefollow symlinks; archive and dump the files they point to"
Many popular systems have a "trash can" or "recycle bin." I don't want the files dumped there, but it kind of sounds that way.
At present, I don't want tar to write or delete any file, except that I want tar to create or update a single tarball.
FYI, the man page for the tar installed on the system I am using at the moment is a lot shorter than what appears to be the current version. And the description of -h, --dereference there seems very different to me:
"When reading or writing a file to be archived, tar accesses the file that a symbolic link points to, rather than the symlink itself. See section Symbolic Links."
P.S. I could not get "block quote" to work properly in this post.
File system backups are also called dumps.
—@raymond-chen, quoting the GNU tar manual
I'm trying to get error logs from Parse Crash Reporting for my app. It is logging, but not showing symbolicated crash reports, and Parse asks me to upload the symbol files for my app. I searched and found that the symbol files need to be uploaded each time you create a new build.
This is the sample script from Parse:
export PATH=/usr/local/bin:$PATH
cd "<path_to_cloudcode_folder>"
parse symbols -p "${DWARF_DSYM_FOLDER_PATH}/${DWARF_DSYM_FILE_NAME}"
I want to make path_to_cloudcode_folder dynamic, because we're working remotely via git, so path_to_cloudcode_folder is different for each user.
How do I add a dynamic path there, so it will work everywhere without error?
P.S. I thought $SCRROOT would work, but it doesn't. It gives me the error:
No such file or directory.
What's wrong?
echo $SCRROOT
gives me the following folder path:
/Hagile/Workspace/Git/TestApp
The above path contains a folder, parse, which has 3 subfolders, i.e.:

Hagile/
  Workspace/
    Git/
      TestApp/
        parse/
          cloud/
          config/
          public/
This worked for me:
cd "${PROJECT_DIR}"/<path to cloud folder>/parse
My problem lay in the fact that I had spaces in my path, and this was tripping up the shell. Adding the double quotes fixed it for me.
See this more in-depth explanation.
@Julian's answer helped me get this working! I just needed to change it a little.
echo "-start-----------------------------"
export PATH=/usr/local/bin:$PATH
cd "${PROJECT_DIR}"/parse
parse symbols -p "${DWARF_DSYM_FOLDER_PATH}/${DWARF_DSYM_FILE_NAME}"
echo "-end-----------------------------"
I currently have 3 TB of data on a disk, with small to medium files in hundreds of folders.
I need to find certain text files which contain certain words (more than one word).
I've already tried grepping for them.
This works, in that it prints the path to every matching file.
But that is a long list, and I'm now looking for a workable way to copy them to another folder.
Any ideas?
Is there some way to put -exec cp -rv /destinationfolder in the syntax and have it copy all results to that folder?
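A sketch along exactly those -exec lines, requiring both words before copying (word1, word2 and /destinationfolder are placeholders; note that plain cp flattens everything into the one folder rather than preserving paths):

# each -exec acts as a test: cp only runs when both greps succeed
find . -type f \
  -exec grep -q 'word1' {} \; \
  -exec grep -q 'word2' {} \; \
  -exec cp -v {} /destinationfolder \;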
Yes, certainly there is a way.
You can pipe the grep output to a copy command and give it the required destination directory.
Here is an example:
find . -type f | xargs grep -l "textToSearch" | cpio -pdmV "$destination_path"
This will copy the matching files to the destination provided in the destination_path variable (-d creates the needed directories, -m preserves modification times).
The best part is that it copies the files while preserving their full relative paths.
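If any of the filenames contain spaces, a null-delimited variant is safer, assuming GNU grep and GNU cpio (/path/to/destfolder is a placeholder):

destination_path=/path/to/destfolder
# -print0, -0, -lZ and --null keep filenames intact through the pipe
find . -type f -print0 | xargs -0 grep -lZ "textToSearch" | cpio --null -pdmV "$destination_path"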