User privileges for files on DAS drive connected to multiple servers - storage

If a DAS drive is mounted on several servers, can we still control the privileges of server-specific user accounts on the files/folders on the DAS drive?
For example: on both Server1 and Server2 the DAS drive has been mounted as /data. Now User1 on Server1 writes a file called file1 to /data.
Later, when User2 on Server2 looks up the permissions on file1, he sees some numeric user ID as the owner of the file when he runs "ls -al", and sometimes this prevents him from accessing/writing the file.
I am confused about how we can apply/ensure consistent permissions on files on the DAS, especially when it is being accessed in parallel by several users on several systems.

If you have a SAN block-level device shared between multiple servers, you need an application that knows how to handle that without causing problems.
This could be virtualization software like VMware or Hyper-V, clustering software, or a cluster-aware filesystem. With that in mind, if you have the right application on top of the SAN drive, you can control privileges.
It is also possible that you are asking about a NAS (network-attached storage), where there is already a filesystem on the shared storage. In this case, yes, you can control per-user privileges, depending on what features the NAS software supports.
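The numeric owner the asker sees shows up because Unix stores ownership as numeric UIDs/GIDs, and each server resolves those numbers against its own /etc/passwd. A minimal sketch of keeping them consistent, assuming you control account creation on both servers (the names and IDs below are hypothetical):

# Run on every server that mounts the shared storage, so that
# UID 5001 / GID 5001 mean the same account everywhere
sudo groupadd -g 5001 datausers
sudo useradd -u 5001 -g 5001 -m user1
# Verify the mapping matches across servers
id user1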

Related

Dask + SLURM over FTP mount (CurlFtpFS)

So I have a working DASK/SLURM cluster of 4 Raspberry Pis with a common NFS share, on which I can run Python jobs successfully.
However, I want to add some more ARM devices to my cluster that do not support NFS mounts (kernel module missing), so I wish to move to FUSE-based FTP mounts with CurlFtpFS.
I have set up the mounts successfully with an anonymous username and without any passwords, and the common FTP share can be seen by all the nodes (just as before when it was an NFS share).
I can still run SLURM jobs (since they do not use the share), but when I try to run a DASK job the master node times out, complaining that no worker nodes could be started.
I am not sure what exactly the problem is, since the share is open to anyone for read/write access (e.g. logs and DASK queue intermediate files).
Any ideas on how I can troubleshoot this?
I don't believe anyone has a cluster like yours!
At a guess, filesystem access via FUSE, FTP and the Pi is much slower than the OS expects, and you are seeing the effects of low-level timeouts, i.e., from Dask's point of view it appears that file reads are failing. Dask needs access to storage for configuration and sometimes temporary files. You would want to make sure that these locations are on local storage or turned off. However, if this is happening during import of modules, which you have on the shared drive by design, there may be no fixing it (Python loads many small files during import). Why not use rsync to move the files to the nodes?
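A minimal sketch of those two suggestions, assuming a scheduler at master:8786 and a node called pi-node1 (both names are hypothetical); the --local-directory flag keeps each worker's scratch space off the FUSE mount:

# Start each worker with its temporary/spill directory on local storage
dask-worker tcp://master:8786 --local-directory /tmp/dask-worker-space
# Copy the Python environment/modules to each node instead of importing
# them over the slow CurlFtpFS mount
rsync -az /shared/venv/ pi-node1:/home/pi/venv/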

Mount network share with nfs with username / password

I am trying to mount a NAS using nfs for an application.
The Storage team has exported it to the host server and I can access it at /nas/data.
I am using a containerized application, and this filesystem export to the host machine will be a security issue, as any container running on the host will be able to use the share. So this Linux-to-Linux mounting will not work for me.
So the only alternative solution I have is mounting this NAS folder during container startup with a username/password.
The command below works fine on a share supporting Unix/Windows; I can mount it on container startup:
mount -t cifs -osec=ntlmv2,domain=mydomain,username=svc_account,password=password,noserverino //nsnetworkshare.domain.company/share/folder /opt/testnas
I have been told that we should use the nfs option instead of cifs.
So I am just trying to find out whether using nfs or cifs will make any difference.
Specifying the nfs option gives the error below:
mount -t nfs -o nfsvers=3,domain=mydomain,username=svc_account,password=password,noserverino //nsnetworkshare.domain.company/share/folder /opt/testnas
mount.nfs: remote share not in 'host:dir' format
The command below doesn't seem to work either.
mount -t nfs -o nfsvers=3,domain=mydomain,username=svc_account,password=password,noserverino nsnetworkshare.domain.company:/share/folder /opt/testnas
mount.nfs: an incorrect mount option was specified
I couldn't find a mount -t nfs example with username/password, so I think we can't use mount -t nfs with credentials.
Please pour in ideas.
Thanks,
Vishnu
CIFS is a file sharing protocol. NFS is a volume sharing protocol. The difference between the two might not initially be obvious.
NFS is essentially a tiny step up from directly sharing /dev/sda1. The client actually receives a naked view of the shared subset of the filesystem, including (at least as of NFSv4) a description of which users can access which files. It is up to the client to actually manage the permissions of which user is allowed to access which files.
CIFS, on the other hand, manages users on the server side, and may provide a per-user view and access of files. In that respect, it is similar to FTP or WebDAV, but with the ability to read/write arbitrary subsets of a file, as well as a couple of other features related to locking.
This may sound like NFS is distinctly inferior to CIFS, but the two are actually meant for different purposes. NFS is most useful for external hard drives connected via Ethernet, and for virtual cloud storage. In such cases, the intention is to share the drive itself with a machine, but simply do it over Ethernet instead of SATA. For that use case, NFS offers greater simplicity and speed. A NAS, as you're using, is actually a perfect example of this. It isn't meant to manage access; it's meant not to be exposed to systems that shouldn't access it in the first place.
If you absolutely MUST use NFS, there are a couple of ways to secure it. NFSv4 has an optional security model based on Kerberos. Good luck using that. A better option is to not allow direct connection to the NFS service from the host, and instead require going through some secure tunnel, like SSH port forwarding. Then the security comes down to establishing the tunnel. However, either one of those requires cooperation from the host, which would probably not be possible in the case of your NAS.
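For reference, a hedged sketch of what those two approaches might look like on the client, reusing the hostname and paths from the question. The Kerberos variant assumes the server exports the share with sec=krb5 and the client already has a valid ticket; the tunnel variant assumes SSH access to some gateway host (user@gateway is hypothetical):

# NFSv4 with Kerberos security (requires server-side support)
mount -t nfs4 -o sec=krb5 nsnetworkshare.domain.company:/share/folder /opt/testnas
# Or: forward the NFS port over SSH and mount through the tunnel
ssh -f -N -L 20049:nsnetworkshare.domain.company:2049 user@gateway
mount -t nfs4 -o port=20049 localhost:/share/folder /opt/testnas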
Mind you, if you're already using CIFS and it's working well, and it's giving you good access control, there's no good reason to switch (although, you'd have to turn the NFS off for security). However, if you have a docker-styled host, it might be worthwhile to play with iptables (or the firewall of your choice) on the docker-host, to prevent the other containers from having access to the NAS in the first place. Rather than delegating security to the NAS, it should be done at the docker-host level.
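A minimal sketch of that firewall idea, assuming the NAS answers at 10.0.0.50 and only one container (bridge IP 172.17.0.2) should reach it; both addresses are hypothetical. Docker evaluates the DOCKER-USER chain for all forwarded container traffic, and because both rules are inserted at the top, the ACCEPT ends up before the DROP:

# Block all container traffic to the NAS by default
iptables -I DOCKER-USER -d 10.0.0.50 -j DROP
# Then allow just the one container that needs the share
iptables -I DOCKER-USER -s 172.17.0.2 -d 10.0.0.50 -j ACCEPT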
Well, I would say go with CIFS, as NFS is old and a few Linux/Unix distros have even stopped supporting it.
NFS is the "Network File System", used specifically by Unix and Linux operating systems. It allows transparent file sharing between servers and end-user machines like desktops and laptops. NFS uses a client-server model to allow users to view, read and write files on a remote system. A user can mount all or a portion of a file system via NFS.
CIFS is an abbreviation for "Common Internet File System", used by Windows operating systems for file sharing. CIFS also uses the client-server model, where a client makes a request to a server program to access a file; the server takes the requested action and returns a response. CIFS is an open-standard version of the Server Message Block (SMB) protocol developed and used by Microsoft, and it runs over TCP/IP.
If I had Linux <-> Linux I would choose NFS, but if it's Windows <-> Linux, CIFS would be the best option.

Best practice to automatically backup remotely hosted server

I am trying to set up a server for team note taking, and I am wondering what the best way is to back up its data (a.k.a. my notes) automatically.
Currently I plan to run the server in a docker image.
The docker image will be hosted by a hosting service (such as Google).
I found a free hosting service that fits my need, but it does not allow mounting volumes to a docker image.
Therefore, I think the only way for me to backup my data is to transfer them to some other cloud services.
However, this requires storing some sort of sensitive authentication data in my docker image, which apparently is not cool.
So:
Is it possible to transfer data from a docker image to a cloud service without taking the risk of leaking password/private key?
Is there any other way to backup my data?
I don't have to use docker as all I need is actually Node.js.
But the server must be hosted on some remote machines because I don't have the ability/time/money to host a machine on my own...
I use borg backup to back up our servers (including docker volumes) ... and it's saved the day many times due to failure and stupidity.
It transfers over SSH so comms are encrypted. The repositories it uses are also encrypted on disk so that makes all your data safe. It de-duplicates, snapshots, prunes, compresses ... the feature list is quite large.
After the first backup, subsequent backups are much faster because it only submits the changes since the previous backup.
You can also mount the snapshots as filesystems so you can hunt down the single file you deleted or just restore the whole lot. The mounts can also be done remotely.
I've configured ours to backup /home, /etc and the /var/lib/docker/volumes directories (among others).
We rent a few cheap storage VPSs and send the data up to them nightly. They're in different geographic locations with different hosting providers, you know, because we're paranoid.
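A minimal sketch of that kind of setup, assuming borg is installed on both ends and the repository on the storage VPS lives at ssh://backup-vps/repos/myserver (hostnames and paths are hypothetical):

# One-time: create an encrypted repository on the storage VPS
borg init --encryption=repokey ssh://backup-vps/repos/myserver
# Nightly (e.g. from cron): snapshot the interesting directories
borg create --compression lz4 --stats ssh://backup-vps/repos/myserver::{hostname}-{now} /home /etc /var/lib/docker/volumes
# Keep 7 daily and 4 weekly archives, drop the rest
borg prune --keep-daily 7 --keep-weekly 4 ssh://backup-vps/repos/myserver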
Besides docker swarm secrets, don't forget bind-mount strategies: you could have your data in a volume.
In that case, you can have a backup strategy done on the host (instead of the container at runtime), which would take that volume, compress it and save it elsewhere. See for instance this answer or this one.
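A minimal sketch of that host-side strategy, assuming a named volume called notes-data (the volume and archive names are hypothetical):

# Run a throwaway container that mounts the volume read-only and tars it
# into the current directory on the host
docker run --rm -v notes-data:/data:ro -v "$(pwd)":/backup alpine tar czf /backup/notes-data-$(date +%F).tar.gz -C /data .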

Shared mount point between virtual machines in vmware

We are currently running vmware esx server in our office network. Our vmware guest machines are running Ubuntu Server 11.04.
What we're looking for is a way to share a storage space accessible by guest machines by using a virtual disk. If one of the guest machines writes to the shared storage space, then all other guests would see the change.
I have read a post about creating a vmdk that gets mounted on the guests. But the post also mentions that none of the guests would see changes when one of the guests writes unless the disk is remounted. Is this correct?
Does anyone know how to set this up strictly via VMware? (Meaning not using a NAS guest machine configured with CIFS, SMB, NFS, etc. for sharing.)
You are asking for a "multi-writer" shared disk. VMware supports this (it's used to support MSCS clusters and VMware fault-tolerant VMs), but there are not a lot of filesystems or OSes that can take advantage of it -- I'm pretty confident none of the standard Ubuntu filesystems are capable of this. This setup is the virtual equivalent of plugging a single SCSI drive into two separate hosts ... the caching and consistency issues are not trivial for the hosts to use the shared storage without clobbering each other.
Here is a VMware KB article about enabling multi-writer mode:
http://kb.vmware.com/selfservice/microsites/search.do?language=en_US&cmd=displayKC&externalId=1034165
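For what it's worth, a hedged sketch of what the KB article's multi-writer setting looks like: with the VMs powered off, a line like the following is added to the .vmx file of every VM that shares the disk (the disk identifier scsi1:0 is hypothetical and must match the shared disk's SCSI node on each VM):

scsi1:0.sharing = "multi-writer"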
There are also implications for VM management (migration, backups, etc.) to be aware of; these get much more complicated with shared disks.
To make a drive writable by several Ubuntu machines, the easiest way is to add the VMware disk to ONE of the virtual machines and then use NFS to share that drive with the other machines. The only real limitation, relative to trying to share a VMware disk directly, is that if the virtual machine actually hosting the disk is down, the disk will be unavailable to the other virtual machines. However, an additional advantage of using NFS (apart from not corrupting the disk, as discussed in the other responses) is that the NFS-shared VMware disk can also be shared with other, non-virtual machines as well.
One of the (many) sets of instructions on how to set up NFS under Ubuntu is at: https://help.ubuntu.com/community/SettingUpNFSHowTo
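A minimal sketch of that NFS setup, assuming the VMware disk is mounted at /data on the owning VM and the other guests sit on 192.168.1.0/24 (paths, hostname and subnet are hypothetical):

# On the VM that owns the disk
sudo apt-get install nfs-kernel-server
echo "/data 192.168.1.0/24(rw,sync,no_subtree_check)" | sudo tee -a /etc/exports
sudo exportfs -ra
# On each of the other guests
sudo mount -t nfs vm-with-disk:/data /data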

How do you create a shared folder to the host using vSphere?

VMware Player and Workstation have the ability to easily create a shared folder directly to the host:
http://www.vmware.com/support/ws5/doc/ws_running_shared_folders.html
This feature seems to be missing or is moved in vSphere. How do you set it up in vSphere?
Thanks.
Actually, we can't have shared folders using ESXi. But we can work around it by creating a folder in the host datastore and copying files from/to it using the scp protocol. Of course, you need administrative privileges on the host for that.
This link explains how to set up SSH Server and Shell Access on ESXi:
http://pubs.vmware.com/vsphere-50/index.jsp?topic=%2Fcom.vmware.vcli.migration.doc_50%2Fcos_upgrade_technote.1.4.html
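A hedged sketch of that workaround, assuming SSH access is enabled on the host and the datastore is named datastore1 (the host name and folder are hypothetical):

# Copy a file from your machine into a folder on the ESXi datastore
scp ./notes.txt root@esxi-host:/vmfs/volumes/datastore1/shared/
# And pull it back down later
scp root@esxi-host:/vmfs/volumes/datastore1/shared/notes.txt .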
This feature doesn't make sense with vSphere, which is why you can't find it.
Workstation, Player and Server all run on top of a "host OS", while ESX (vSphere-managed) runs on bare metal. You're not supposed to have access to the native file system on the host - so there is no option to do so.