Mount network share with nfs with username / password - docker

I am trying to mount a NAS using nfs for an application.
The Storage team has exported it to the host server and I can access it at /nas/data.
I am using a containerized application, and exporting this file system to the host machine will be a security issue, as any container running on the host will be able to use the share. So this Linux-to-Linux mounting will not work for me.
So the only alternative solution I have is mounting this NAS folder during container startup with a username/password.
The below command works fine on a share supporting Unix/Windows. I can mount it on container startup:
mount -t cifs -osec=ntlmv2,domain=mydomain,username=svc_account,password=password,noserverino //nsnetworkshare.domain.company/share/folder /opt/testnas
I have been told that we should use nfs option instead of cifs.
So just trying to find out whether using nfs or cifs will make any difference.
Specifying nfs option gives below error.
mount -t nfs -o nfsvers=3,domain=mydomain,username=svc_account,password=password,noserverino //nsnetworkshare.domain.company/share/folder /opt/testnas
mount.nfs: remote share not in 'host:dir' format
The below command doesn't seem to work either.
mount -t nfs -o nfsvers=3,domain=mydomain,username=svc_account,password=password,noserverino nsnetworkshare.domain.company:/share/folder /opt/testnas
mount.nfs: an incorrect mount option was specified
I couldn't find a mount -t nfs option example with username/password. So I think we can't use mount -t nfs with credentials.
Please pour in ideas.
Thanks,
Vishnu

CIFS is a file sharing protocol. NFS is a volume sharing protocol. The difference between the two might not initially be obvious.
NFS is essentially a tiny step up from directly sharing /dev/sda1. The client actually receives a naked view of the shared subset of the filesystem, including (at least as of NFSv4) a description of which users can access which files. It is up to the client to actually manage the permissions of which user is allowed to access which files.
CIFS, on the other hand, manages users on the server side, and may provide a per-user view and access of files. In that respect, it is similar to FTP or WebDAV, but with the ability to read/write arbitrary subsets of a file, as well as a couple of other features related to locking.
This may sound like NFS is distinctively inferior to CIFS, but they are actually meant for a different purpose. NFS is most useful for external hard drives connected via Ethernet, and virtual cloud storage. In such cases, it is the intention to share the drive itself with a machine, but simply do it over Ethernet instead of SATA. For that use case, NFS offers greater simplicity and speed. A NAS, as you're using, is actually a perfect example of this. It isn't meant to manage access, it's meant to not be exposed to systems that shouldn't access it, in the first place.
If you absolutely MUST use NFS, there are a couple of ways to secure it. NFSv4 has an optional security model based on Kerberos. Good luck using that. A better option is to not allow direct connection to the NFS service from the host, and instead require going through some secure tunnel, like SSH port forwarding. Then the security comes down to establishing the tunnel. However, either one of those requires cooperation from the host, which would probably not be possible in the case of your NAS.
Mind you, if you're already using CIFS and it's working well, and it's giving you good access control, there's no good reason to switch (although, you'd have to turn the NFS off for security). However, if you have a docker-styled host, it might be worthwhile to play with iptables (or the firewall of your choice) on the docker-host, to prevent the other containers from having access to the NAS in the first place. Rather than delegating security to the NAS, it should be done at the docker-host level.
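One way to do the host-level restriction this answer recommends, as an added example rather than the answerer's own code. It assumes Docker's DOCKER-USER iptables chain (Docker 17.06+), that the application container has a fixed address (for example via --ip on a user-defined network), and IPv4 addresses for both; the addresses used below are constructed.

```shell
#!/bin/sh
# Restrict which container may reach the NAS, enforced on the Docker host.
# Docker evaluates the DOCKER-USER chain before its own FORWARD rules, so
# rules placed there are not rewritten when containers start or stop.
# IPT defaults to iptables; set IPT=echo to preview the rules (dry run).
IPT=${IPT:-iptables}

# Usage: restrict_nas_access NAS_IP ALLOWED_CONTAINER_IP [PORT]
# PORT defaults to 2049 (NFS); use 445 for CIFS/SMB.
restrict_nas_access() {
    nas_ip=$1
    app_ip=$2
    port=${3:-2049}
    if [ -z "$nas_ip" ] || [ -z "$app_ip" ]; then
        echo "usage: restrict_nas_access NAS_IP CONTAINER_IP [PORT]" >&2
        return 1
    fi
    # Accept only dotted-quad IPv4 literals (hypothetical, minimal check).
    case "$nas_ip$app_ip" in
        *[!0-9.]*) echo "expected IPv4 addresses" >&2; return 1 ;;
    esac
    # 1. Default: drop every forwarded packet from any container to the NAS port.
    "$IPT" -I DOCKER-USER -d "$nas_ip" -p tcp --dport "$port" -j DROP || return 1
    # 2. Exception, inserted above the DROP so it is evaluated first:
    #    only the application container may talk to the NAS.
    "$IPT" -I DOCKER-USER -s "$app_ip" -d "$nas_ip" -p tcp --dport "$port" -j ACCEPT
}
```

Reply traffic from the NAS back to the container has the NAS as source, not destination, so it never matches the DROP rule; list the result with `iptables -L DOCKER-USER -n --line-numbers`.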

Well, I would say go with CIFS, as NFS is old; a few Linux/Unix distros have even stopped supporting it.
NFS is the "Network File System", specifically used for Unix and Linux operating systems. It allows file communication transparently between servers and end-user machines like desktops and laptops. NFS uses the client-server methodology to allow a user to view, read and write files on a computer system. A user can mount all or a portion of a file system via NFS.
CIFS is an abbreviation for "Common Internet File System", used by Windows operating systems for file sharing. CIFS also uses the client-server methodology, where a client makes a request of a server program for accessing a file. The server takes the requested action and returns a response. CIFS is an open-standard version of the Server Message Block (SMB) protocol developed and used by Microsoft, and it uses the TCP/IP protocol.
If I have a Linux <-> Linux setup, I would choose NFS, but if it's Windows <-> Linux, CIFS would be the best option.

Related

Giving all containers in a Kubernetes service access to the same shared file system

New to Docker/K8s. I need to be able to mount all the containers (across all pods) on my K8s cluster to a shared file system, so that they can all read from and write to files on this shared file system. The file system needs to be something residing inside of -- or at the very least accessible to -- all containers in the K8s cluster.
As far as I can tell, I have two options:
I'm guessing K8s offers some type of persistent, durable block/volume storage facility? Maybe PV or PVC?
Maybe launch a Dockerized Samba container and give my other containers access to it somehow?
Does K8s offer this type of shared file system capability or do I need to do something like a Dockerized Samba?
NFS is a common solution to provide you the file sharing facilities. Here's a good explanation with example to begin with. Samba can be used if your file server is Windows based.
You are right you can use the File system in the backend with Access Mode ReadWriteMany.
ReadWriteMany will allow the containers to mount a single PVC and write to it.
You can also use the NFS system as suggested by gohm'c; for NFS you can set up GlusterFS or MinIO containers.
Read more about the access mode ReadWriteMany: https://kubernetes.io/docs/concepts/storage/persistent-volumes/#access-modes

securing local files/keys/tokens inside docker container

This is related to a Docker container which is deployed in a microk8s cluster. The container is deployed through k8s with a host volume mounted inside it. When the container runs, it generates a few keys and tokens to establish a secure tunnel with another container outside of this node. The container creates those keys inside the provided mount path. The keys and tokens which are generated are created as plain files (like public.key, private.key, .crt, .token etc.) under the mounted path inside the container. Also, the tokens are refreshed at some time interval.
Now I want to secure those tokens/keys which are generated after the container runs, so that they can't be accessed by outsiders to harm the system/application. Something like a vault store, but I want to maintain it inside the container, or outside the container on the host, in some encrypted way, so that whenever the container application wants the files, it can decrypt them from that path/location and use them.
Is there any way this can be achieved inside the Docker container system, based on an Ubuntu 18 host OS and k8s v1.18? Initially I thought of Linux keyrings or some GPG encryption mechanism, but I am not sure whether it can affect the container runtime performance or not. I am fine to implement any code in Python/C to encrypt/decrypt the files for the application inside the container, but the encryption mechanism should be FIPS compliant or an industry standard.
Also, is there any way we can encrypt the directory where those keys are generated and decrypt it when needed by the application, or some directory-level permission we can set so that it can't be read by other users, to make those files secure?
Thanks for reading this long post. However, I do not have a clear solution for this as of now. Any pointers and suggestions in this regard are much appreciated.
Thanks

How can I manipulate storage devices outside of Docker?

I'd like to spin up an Ubuntu image with certain tools like testdisk for disk recovery. How can I manage all detected volumes on the host machine with testdisk inside a Docker container?
The O'Reilly info worked for Windows with supposed limitations (inability to repartition). I'm assuming if you use Disk Management to see the disk number (0, 1, 2, etc.) it will correspond to the sd# you have to reference. Supposedly with Windows Server editions, you can use the device flag and specify a device class GUID to share inside Docker. But as previously mentioned, it isn't raw access but rather a shared device.

Docker container with filesystem encryption?

I have not tried Docker containers yet, but I'm evaluating them for the needs I have. For instance, I'd like to use a basic file system to read/write some data during runtime. I also want some kind of encryption for this. I can encrypt the bytes myself, but perhaps standard file system encryption is better/faster? The question is, do Docker containers have support for any encrypted file systems? (I will host this container on Linux and use .NET Core as the app framework.)
You can use volumes for that. The volume must reside on a filesystem encrypted with cryptsetup (dm-crypt). This filesystem can be file-backed.
Some pointers:
https://wiki.archlinux.org/index.php/Dm-crypt/Device_encryption
https://wiki.archlinux.org/index.php/ext4
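The file-backed dm-crypt volume this answer describes, written out as an added example (not the answerer's own code). It assumes cryptsetup and root on the host, that the passphrase is entered interactively, and that the container gets the decrypted mountpoint as a bind mount; paths and sizes below are constructed.

```shell
#!/bin/sh
# File-backed dm-crypt (LUKS) volume for a Docker bind mount.
# RUN is a prefix for the privileged commands: leave it empty to execute,
# or set RUN=echo to print the steps without touching the system.
RUN=${RUN:-}

# Usage: make_encrypted_volume IMAGE MAPPER_NAME MOUNTPOINT SIZE_MB
# Creates the backing file, formats it as LUKS, opens it, puts ext4 on the
# mapped device and mounts it. cryptsetup asks for the passphrase on the tty.
make_encrypted_volume() {
    img=$1; name=$2; mnt=$3; size_mb=$4
    [ -n "$img" ] && [ -n "$name" ] && [ -n "$mnt" ] && [ -n "$size_mb" ] || return 1
    # Backing file: only root may read the ciphertext image.
    $RUN dd if=/dev/zero of="$img" bs=1M count="$size_mb" status=none || return 1
    $RUN chmod 600 "$img" || return 1
    # -q skips the "are you sure" confirmation; the passphrase is still prompted.
    $RUN cryptsetup -q luksFormat "$img" || return 1
    $RUN cryptsetup open "$img" "$name" || return 1
    $RUN mkfs.ext4 -q "/dev/mapper/$name" || return 1
    $RUN mkdir -p "$mnt" || return 1
    $RUN mount "/dev/mapper/$name" "$mnt"
}

# Usage: close_encrypted_volume MAPPER_NAME MOUNTPOINT
# After unmounting and closing the mapping, only ciphertext remains on disk.
close_encrypted_volume() {
    $RUN umount "$2" && $RUN cryptsetup close "$1"
}

# Example (hypothetical image name): the container sees plaintext at /data,
# the host disk only ever holds the encrypted image.
#   make_encrypted_volume /var/lib/secure.img secure /mnt/secure 256
#   docker run --rm -v /mnt/secure:/data myimage
#   close_encrypted_volume secure /mnt/secure
```

Note that root on the host can read the mapped device while it is open, so this protects data at rest, not against a compromised host.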

etcd api on CoreOS - setting ip address configurations remotely

I am attempting to use etcd's remote API to configure a CoreOS box remotely with static values like IP address, DNS resolver address, gateway, etc.
In theory I should be able to fire something like:
curl -X PUT "http://xxx.xxx.xxx.xxx:4001/v2/keys/etcd/registry/???_/_state?prevExist=false" -d value=10.10.10.1
But I can't find a reference to the exact syntax to use.
etcd doesn't handle configuration of the host system. It is a distributed key/value store. It can certainly store configuration for applications and maybe even the host. But you need some other tool to pull the data from the store and transform it into configuration that the application or host recognizes. The application I use to do this inside Docker containers is confd (https://github.com/kelseyhightower/confd).
For configuration of the CoreOS host, you would generally be using Cloud-Config (https://coreos.com/docs/cluster-management/setup/cloudinit-cloud-config/) and writing unit files to deal with certain parts of the system, such as networking (https://coreos.com/docs/cluster-management/setup/network-config-with-networkd/). Hope this helps!