SELinux type getting set incorrectly for files uploaded via a Rails application - ruby-on-rails

So I have a web application running on CentOS 6.5.
The application is a Ruby/Rails app, but the images are served by Apache HTTPD.
The application folder is in a user home folder, but I've granted HTTPD the correct permissions and have enabled httpd_enable_homedirs within SELinux. All static images are working just fine.
The problem I am seeing is that when an end user uploads an image (a profile icon), the SELinux context for the file gets set to unconfined_u:object_r:user_tmp_t:s0 instead of unconfined_u:object_r:usr_t:s0.
If I manually run restorecon on the file, the context gets fixed and the image works. Any idea how I can make sure the file gets created with the correct context? I've looked into using restorecond, but it looks like it won't recursively check subdirectories, and the subdirectory structure is not predictable.
Any help is appreciated.

Most likely your application is moving ('mv') the object from /tmp or /var/tmp to the destination location.
By default, when an object is moved with 'mv', so is its security metadata. Thus the object ends up at the destination with old and inaccurate security metadata. Running 'restorecon' on the destination objects resets the contexts to what the policy thinks they should be.
There are various ways you can deal with this. Either allow your webapp to read the object with the inaccurate context, or tell your webapp to use 'mv' with the -Z option, or to use 'cp' instead. (The 'cp' command copies the object, and as a consequence the target object ends up with the appropriate security metadata, usually mostly inherited from the target's parent directory.)
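The difference is easy to see from a shell. A minimal sketch, assuming the app stages uploads in /tmp and serves them from a hypothetical /home/deploy/app/public/uploads directory:
mv /tmp/avatar.png /home/deploy/app/public/uploads/        # mv keeps the user_tmp_t label the file got in /tmp
cp /tmp/avatar.png /home/deploy/app/public/uploads/        # cp creates a new file that inherits the label of the destination directory
restorecon -v /home/deploy/app/public/uploads/avatar.png   # or relabel an already-moved file after the fact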

So apparently SELinux suppresses some error messages...
In order to debug this I had to run
semodule -DB
This rebuilds the local policy with the "don't audit" (dontaudit) rules disabled. Once those rules are disabled, the previously suppressed error messages show up in the audit log and you can add a new policy module using the regular:
sealert -a /var/log/audit/audit.log
Then find the audit2allow command for the error in question.
You can set your logging back to normal afterwards by running
semodule -B
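For the audit2allow step itself, the typical sequence that sealert suggests looks roughly like this (the module name here is made up for illustration):
grep denied /var/log/audit/audit.log | audit2allow -M railsupload   # turns the logged denials into railsupload.te / railsupload.pp
semodule -i railsupload.pp                                          # installs the generated policy module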

Related

How to enable caching in ArangoDB via Docker or arangojs?

I would like to enable caching in ArangoDB automatically when my app starts.
I'm using docker-compose to start the whole thing, but apparently there's no simple parameter to enable caching in the official ArangoDB image.
According to the docs, all the files in /docker-entrypoint-initdb.d/ are executed at container start. So I added a .js file with this code:
require('@arangodb/aql/cache').properties({mode: 'on'});
It is indeed executed but caching doesn't seem to be enabled (from what I see with arangosh within the container).
My app is a JS app using arangojs, so if I can do it this way, I'd be happy too.
Thanks!
According to the performance and server config docs, you can enable caching in several ways.
Your method of adding require("@arangodb/aql/cache").properties({ mode: "on" }); to a .js file in the /docker-entrypoint-initdb.d/ directory should work, but keep an eye on the logs. You may need to redirect log output to a different driver (journald, syslog, etc.) to see what's going on. Make sure to run the command via arangosh to see if it works.
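As a quick check, here is a sketch of how you might inspect the cache settings from inside the running container (the container name and password are placeholders):
docker exec -it arangodb_container arangosh --server.password secret \
  --javascript.execute-string 'print(require("@arangodb/aql/cache").properties());'   # prints the current cache properties, including the mode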
If that's a bust, you might want to see if there is a way to pass parameters at runtime (such as --query.cache-mode on). Unfortunately, I don't use Docker Compose, so I can't give you direct advice here, but try something like -e QUERY.CACHE-MODE=ON.
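For a quick test outside Compose, one way to try that startup flag is to append it to the container command. This is a rough sketch; the image name, password, and port mapping are just examples, and it assumes the image's entrypoint passes the extra arguments through to arangod:
docker run -d -p 8529:8529 -e ARANGO_ROOT_PASSWORD=secret arangodb \
  arangod --query.cache-mode on   # arguments after the image name are handed to the arangod process at startup
In docker-compose the equivalent would be to override the service's command with the same arangod invocation.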
If there isn't a way to pass params, then you could modify the config file: /etc/arangodb3/arangod.conf.
And don't forget about the REST API methods for system management. You can access the AQL configuration (view and alter) in the Web UI by clicking Support -> REST API -> AQL.
One thing to keep in mind - I'm not sure if the caching settings are global or tied to a specific database. View the configuration on multiple databases (including _system) to test the settings.

Creating a file that becomes executable chmod 777 githook file

I'm working in Rails on Ubuntu.
I'm triggering an initializer that writes a githook file into the .git directory. All good, the file gets created and the code is good, but when the hook triggers on commit I get:
hint: The '.git/hooks/commit-msg' hook was ignored because it's not set as executable.
hint: You can disable this warning with `git config advice.ignoredHook false`.
The file is created with executable rights.... I think?
new_file = File.new(file, File::CREAT|File::TRUNC|File::RDWR, 777)
However, I think I'm doing something wrong or missing a permission step?
Secondly, how do I access the variables in the commit from the githook? Is there good documentation somewhere?
You have a typo in your Ruby code. In general, Unix file modes are specified in octal, which is one of the last cases where people practically use octal in programming. However, you've specified a decimal value here.
As a result, the octal mode you've specified is 1411, which makes your file have no executable permissions for the user, no read permissions for group or other, and the sticky bit set, which is probably not what you wanted.
You can fix this by writing the mode as 0777:
new_file = File.new(file, File::CREAT|File::TRUNC|File::RDWR, 0777)
Note also that it is in general a security problem to write files with mode 777, since any user on the system can modify them. That means that any user who can access the directory in which this hook is written can modify it to execute arbitrary code whenever the hook is run (which, it looks like, is when git commit is run). A more appropriate mode might be 755, which prevents parties other than the user from modifying it.
The documentation for the commit-msg hook is in the githooks manual page. According to the documentation:
It takes a single parameter, the name of the file that holds the proposed commit log message. Exiting with a non-zero status causes the command to abort.
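To answer the second question with a small sketch (the check itself is made up for illustration), the commit-msg hook receives that file path as $1:
#!/bin/sh
# .git/hooks/commit-msg -- $1 is the path of the file holding the proposed commit message
msg_file="$1"
if ! grep -qE '[A-Z]+-[0-9]+' "$msg_file"; then   # hypothetical rule: the message must mention an issue key
  echo "Commit message must reference an issue key (e.g. ABC-123)" >&2
  exit 1                                          # a non-zero exit aborts the commit
fi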

How do I make a simple public read-only WebDAV server with SabreDAV?

I recently began looking into WebDAV, as I found it to be an option for letting me play a Blu-ray folder remotely - i.e. without requiring the viewer to download the whole 24 GB ISO first.
Add a WebDAV source in Kodi v18 to a Blu-ray folder - and it actually plays! Very awesome.
The server can also be mounted on Windows with
net use m: http://example.com/webdavfolder/
or in Linux with
sudo mount -t davfs http://example.com/webdavfolder/ /mnt/mywebdav
and should then (in theory) play with any software media player that supports Blu-ray Disc Java (BD-J), such as PowerDVD and VLC:
vlc bluray:///mnt/mywebdav --bluray-menu
PowerDVD.exe AUTOPLAY BD m:
(Unless of course time-out values have been set too low, which seems to be the case for VLC at the moment.)
Anyway, all this is great, except I can't figure out how to make my WebDAV server read-only. Currently anyone can delete files as they wish, and that's of course not optimal.
So far I've only experimented with SabreDAV, because AFAIK that's the only option I have if I want to keep using my existing webhost. I've been trying very minimal setups, because I've read that minimal setups should default to a read-only solution. That just doesn't seem to happen.
I initially used the setup from http://sabre.io/dav/gettingstarted/ and tried removing some lines. I also tried running chmod 0444 MainFolder -R on the webserver, and I can see that everything does get a read-only attribute. But it changes nothing; it's still possible to delete whatever I want. :-(
What am I missing?
Maybe I'm using the wrong technology for what I want to do? Is there some other/better way of offering a Blu-ray folder for remote viewing? (One that includes the whole experience - i.e. full Java menus etc).
I should probably mention that all of this is of course perfectly legal. It is my own Blu-ray project - not copyright material.
Also: Difficult to decide if this belongs on StackOverflow or SuperUser. I ended up posting it on StackOverflow because SabreDAV is about coding, and because there's no sabredav tag on SuperUser.
You have two options:
Create your own file/directory classes for sabre/dav that simply throw an error when trying to delete. You can basically start with a copy of Sabre\DAV\FS\Directory and Sabre\DAV\FS\File and change the methods that do writing.
Since you're considering just using Linux file permissions, the key thing you are missing is that 'deleting' is not controlled on the file or directory you're trying to delete. To delete a file or directory in Unix, all you need is write permission on the parent directory. However, I wouldn't recommend going this route, as doing this will just cause a weird error in sabre/dav, which might leave clients in a confused state: it would result in a 500 error, not the expected 403 error.

Granting "Local System" permissions the installation folder in Advanced Installer

I have an "Advanced Installer" project that I am trying to use to install my "TopShelf" windows service that I have built.
I found the spot in Advanced Installer to grant permissions to the installation folder, but I don't see a way to grant permissions to the "Local System" account.
Manually, this is done by going to the Security tab in the properties of the folder and adding a user with the same name as the computer name but ending with a $, for example MyNiceComputer$. (Oh, and you have to select the "Computers" option in the object types area.)
But there is no way to do all this in Advanced Installer. If I do make one like MyNiceComputer$, it just makes an empty entry in the security tab (no permissions on it, even though I set it up for full control).
Has anyone ever needed to do this with Advanced Installer?
Additional Details:
I am installing an app that runs as a Windows service. (It is a console app built with TopShelf.)
Our company policy is to install all our applications into a folder that looks like this:
C:\OurCompanyApps\MyApp
When I create the installer, it runs fine, but then when I start up the Windows service, I get the following error:
Windows could not start the MyApp service on Local Computer
Error 5: Access is denied.
But when I grant access to Local System (by giving Full Control rights to myNiceComputer$ on the MyApp folder), then this error goes away and the app runs fine. From what I read, this is because the application is running as Local System.
It seems odd that it needs Full Control, but it does not work without it. (As far as I can see, the contents of the folder are unaltered.)
Bogdan Mitrache seems to indicate that granting permissions to Local System is not possible via Advanced Installer. This is good to know (so I don't waste more time looking). I will probably ask my system admins for a dedicated system account to run my service as. Not ideal, but it will serve as a workaround.
So, in one of my less fine moments of debugging, I mixed up two different things.
There was also a file missing (my config file). I restored it and changed the permissions at the same time, but then I forgot to go back and verify which one was the actual fix. (I know, not good debugging.)
So, the Access is denied error was due to a missing file.

AttachFile locks attachment for deleting on Windows XP

I'd been working on a plugin when I discovered this. I can't say for sure whether this behavior happened before on my machine (it doesn't happen on our test server, a Linux box), but after attaching a file, I can't delete it until the server restarts. I can't delete it through the UI or by manually navigating to the server directory and trying to delete it from there.
Has anyone ever encountered this before? Could it be something environmental on my box??
Most probably it's a permission issue in that folder, which allows your JIRA user (the user under whose privileges the JIRA instance runs) to create files but not to delete them (or something even more fun) :) Try deleting the temp folder (where your uploaded attachments reside) and recreating it, adding your JIRA web user to the access list for that folder.
The workaround for deleting files when some other process is holding a lock on them, without having to terminate that process, is to use Unlocker. But be warned: when Unlocker unlocks a file, it does so in a way that does not notify the lock holder that the file has been unlocked by force. That means the lock holder still thinks it holds the lock on the open file, which it no longer does (the file handle is invalid), so some applications might crash due to the unexpected state of the supposedly open file. By the way, I've been using Unlocker since forever and it has rarely caused any crashes, but it's better to be warned.