I have not tried Docker containers yet, but I'm evaluating them for my needs. For instance, I'd like to use a basic file system to read/write some data at runtime, and I want some kind of encryption for it. I can encrypt the bytes myself, but perhaps standard file system encryption is better/faster? The question is: do Docker containers have support for any encrypted file systems? (I will host this container on Linux and use .NET Core as the app framework.)
You can use volumes for that. The volume must reside on a filesystem encrypted with cryptsetup (dm-crypt). This filesystem can be file-backed.
Some pointers:
https://wiki.archlinux.org/index.php/Dm-crypt/Device_encryption
https://wiki.archlinux.org/index.php/ext4
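A minimal sketch of the file-backed approach (the file path, size, volume name, and image name are placeholders, and cryptsetup's interactive passphrase prompts are omitted):

# create a 1 GiB file, format it as a LUKS (dm-crypt) container, and put ext4 on it
fallocate -l 1G /var/lib/secure.img
cryptsetup luksFormat /var/lib/secure.img
cryptsetup open /var/lib/secure.img securevol
mkfs.ext4 /dev/mapper/securevol
mkdir -p /mnt/securevol
mount /dev/mapper/securevol /mnt/securevol
# bind-mount the decrypted filesystem into the container as a volume
docker run -v /mnt/securevol:/data my-dotnet-app

The data at rest in /var/lib/secure.img is encrypted; anything reading through /mnt/securevol (including the container) sees plain files, so the host still needs to control who can reach that mount point.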
This is related to a Docker container deployed in a microk8s cluster. The container is deployed through k8s with a host volume mounted inside it. When the container runs, it generates a few keys and tokens to establish a secure tunnel with another container outside this node. The container creates those keys inside the provided mount path as plain files (public.key, private.key, .crt, .token, etc.), and the tokens are refreshed at some interval.
Now I want to secure those tokens/keys generated after the container runs, so that they can't be accessed by outsiders to harm the system/application. Something like a vault store, but maintained inside the container or outside it on the host in some encrypted form, so that whenever the container application needs the files, it can decrypt them from that path/location and use them.
Is there any way this can be achieved inside the Docker container, on an Ubuntu 18 host OS with k8s v1.18? Initially I thought of Linux keyrings or some gpg encryption mechanism, but I am not sure whether that would affect the container's runtime performance. I am fine with implementing code in Python/C to encrypt/decrypt the files for the application inside the container, but the encryption mechanism should be FIPS compliant or an industry standard.
Alternatively, is there a way to encrypt the directory where those keys are generated and decrypt it when the application needs it? Or can we set directory-level permissions so those files can't be read by other users?
Thanks for reading this long post. I don't have a clear solution for this as of now; any pointers and suggestions are much appreciated.
Thanks
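One possible sketch, assuming the generated files live under /mnt/keys and the application runs as a dedicated user (appuser, the key-encryption-key file, and all paths here are placeholders, not anything your container already has): lock down the directory permissions and wrap the files with OpenSSL's AES-256, a FIPS-approved algorithm (running a FIPS-validated OpenSSL build is a separate concern).

# restrict the mount path so only the application user can read it
chown appuser:appuser /mnt/keys && chmod 700 /mnt/keys
# encrypt a generated key; the passphrase is read from a key file kept elsewhere
openssl enc -aes-256-cbc -pbkdf2 -salt -in /mnt/keys/private.key -out /mnt/keys/private.key.enc -pass file:/run/secrets/kek
shred -u /mnt/keys/private.key
# decrypt on demand when the application needs it
openssl enc -d -aes-256-cbc -pbkdf2 -in /mnt/keys/private.key.enc -out /tmp/private.key -pass file:/run/secrets/kek

The same two openssl calls can be driven from Python or C via a subprocess or a crypto library; the main design question is where the key-encryption key lives (a k8s Secret, the kernel keyring, or an external vault), since whoever can read it can decrypt the files.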
I'd like to spin up an Ubuntu image with certain tools like testdisk for disk recovery. How can I manage all detected volumes on the host machine with testdisk inside a Docker container?
The O'Reilly info worked for Windows, with its stated limitations (inability to repartition). I'm assuming that if you use Disk Management to see the disk number (0, 1, 2, etc.), it will correspond to the sd# you have to reference. Supposedly, with Windows Server editions, you can use the --device flag and specify a device class GUID to share inside Docker. But as previously mentioned, it isn't raw access but rather a shared device.
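On a Linux host the situation is simpler; a rough sketch (the /dev/sdb path is an assumed target disk, not something from the question):

# pass a single host disk into the container and run testdisk against it
docker run -it --rm --device /dev/sdb ubuntu bash -c "apt-get update && apt-get install -y testdisk && testdisk /dev/sdb"
# or, less isolated: expose all host devices so testdisk can scan for volumes itself
docker run -it --rm --privileged -v /dev:/dev ubuntu

The --privileged variant effectively gives the container full access to the host's devices, so it should only be used for one-off recovery work.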
I'm new to Docker and I'm trying to understand persistent storage in Docker.
In the section Manage Application data > Store data within containers > About storage drivers (https://docs.docker.com/storage/storagedriver/) it says:
"Storage drivers allow you to create data in the writable layer of your container. The files won't be persisted after the container is deleted, and both read and write speeds are lower than native file system performance."
But later, in the section Manage Application data > Store data within containers > Use the Device Mapper storage driver (https://docs.docker.com/storage/storagedriver/device-mapper-driver/), they use direct-lvm, which creates logical volumes that allow data to persist.
My question: does using a storage driver mean that
the container-generated data is ephemeral?
the container-generated data is ephemeral if we are using a logical volume on a loopback device?
the container-generated data is persistent if we are using a logical volume on a block device?
The storage driver configuration is essentially an install-time setting that's not really relevant once you've gotten it set up correctly. In particular if you run docker info and it says it's using an overlay2 driver I would recommend closing this particular browser tab and not changing anything.
Of the paragraph you quoted, the important thing to take away is that files you create inside a container, that aren't inside a volume directory, will be lost as soon as the container is deleted. It doesn't matter what underlying storage driver you're using. The performance differences between the container filesystem, named volumes, and bind-mounts almost never matter (except on MacOS hosts where bind mounts are very very slow).
The data the storage driver persists includes both the temporary container filesystems (they get persisted until the container is deleted) and the underlying image data. It does not include named Docker volumes or other bind-mounted host directories.
If you're using devicemapper, you might see if you can upgrade your host to a newer Linux distribution that can use the overlay2 driver. In particular that avoids the fixed space limit of the devicemapper driver. If you must use devicemapper, general wisdom has been that using a dedicated partition for it is better than using a file. As I said up front, though, this is essentially install-time configuration and has no bearing on your application or docker run commands.
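To make the ephemeral-vs-persistent distinction concrete, here is a quick sketch you can run with any storage driver (the volume and container names are arbitrary):

# write one file into a named volume and one into the container's writable layer
docker run --name demo -v mydata:/data alpine sh -c 'echo hello > /data/kept; echo hello > /root/lost'
docker rm demo                                          # the writable layer, including /root/lost, is gone
docker run --rm -v mydata:/data alpine cat /data/kept   # prints "hello" -- the volume survived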
I'm trying to deploy and run a docker image in a GCP VM instance.
I need it to access a certain Cloud Storage Bucket (read and write).
How do I mount a bucket inside the VM? How do I mount a bucket inside the Docker container running in my VM?
I've been reading the Google Cloud documentation for a while, but I'm still confused. All the guides show how to access a bucket from a local machine, not how to mount it to a VM.
https://cloud.google.com/storage/docs/quickstart-gsutil
I found something about FUSE, but it looks overly complicated just for mounting a single bucket into the VM filesystem.
Google Cloud Storage is an object storage API, not a filesystem. As a result, it isn't really designed to be "mounted" within a VM. It is designed to be highly durable and scalable to extraordinarily large objects (and large numbers of objects).
Though you can use gcsfuse to mount it as a filesystem, that method has pretty significant drawbacks. For example, operations that are cheap on a normal filesystem can translate into a surprisingly large (and billable) number of API operations.
Likewise, there are many surprising behaviors that are a result of the fact that it is an object store. For example, you can't edit objects -- they are immutable. To give the illusion of writing to the middle of an object, the object is, in effect, deleted and recreated whenever a call to close() or fsync() happens.
The best way to use GCS is to design your application to use the API (or the S3 compatible API) directly. That way the semantics are well understood by the application, and you can optimize for them to get better performance and control your costs. Thus, to access it from your docker container, ensure your container has a way to authenticate through GCS (either through the credentials on the instance, or by deploying a key for a service account with the necessary permissions to access the bucket), then have the application call the API directly.
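A rough sketch of the second option (the key path, image name, and bucket are placeholders): mount a service-account key into the container and let the Google Cloud client libraries pick it up via GOOGLE_APPLICATION_CREDENTIALS.

# mount a service-account key read-only and point the client libraries at it
docker run -v /home/me/sa-key.json:/secrets/sa.json:ro -e GOOGLE_APPLICATION_CREDENTIALS=/secrets/sa.json my-app-image
# if the VM's attached service account already has access to the bucket, the key and env var can be omitted

On a Compute Engine VM with a suitably scoped service account, the libraries fall back to the instance's credentials automatically, which avoids handling key files at all.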
Finally, if what you need is actually a filesystem, and not specifically GCS, Google Cloud does offer at least 2 other options if you need a large mountable filesystem that is designed for that specific use case:
Persistent Disk, which is the baseline filesystem that you get with a VM, but you can mount many of these devices on a single VM. However, you can't mount them read/write to multiple VMs at once -- if you need to mount to multiple VMs, the persistent disk must be read only for all instances they are mounted to.
Cloud Filestore is a managed service that provides an NFS server in front of a persistent disk. Thus, the filesystem can be mounted read/write and shared across many VMs. However it is significantly more expensive (as of this writing, about $0.20/GB/month vs $0.04/GB/month in us-central1) than PD, and there are minimum size requirements (1TB).
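If you go the Filestore route, mounting it on the VM (and bind-mounting that path into the container) is ordinary NFS; the server IP, share name, and image name below are placeholders:

sudo apt-get install -y nfs-common
sudo mkdir -p /mnt/filestore
sudo mount -t nfs 10.0.0.2:/vol1 /mnt/filestore
docker run -v /mnt/filestore:/data my-app-image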
Google Cloud Storage buckets cannot be mounted in Google Compute instances or containers without third-party software such as FUSE. Neither Linux nor Windows have built-in drivers for Cloud Storage.
Google Compute Engine VMs come with the Google Cloud SDK installed, so without mounting anything you can copy files in and out using its commands:
gsutil ls gs://
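For example (the bucket name and paths are placeholders):

gsutil cp gs://my-bucket/input.dat /tmp/input.dat
gsutil cp /tmp/output.dat gs://my-bucket/output.dat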
I am trying to mount a NAS using nfs for an application.
The Storage team has exported it to the host server and I can access it at /nas/data.
I am using a containerized application, and exporting this filesystem to the host machine is a security issue, since any container running on the host would be able to use the share. So this plain Linux-to-Linux mounting will not work for me.
The only alternative solution I have is to mount this NAS folder during container startup with a username/password.
The command below works fine on a share that supports Unix/Windows; I can mount it on container startup:
mount -t cifs -osec=ntlmv2,domain=mydomain,username=svc_account,password=password,noserverino //nsnetworkshare.domain.company/share/folder /opt/testnas
I have been told that we should use the nfs option instead of cifs, so I am just trying to find out whether using nfs or cifs makes any difference.
Specifying the nfs option gives the error below:
mount -t nfs -o nfsvers=3,domain=mydomain,username=svc_account,password=password,noserverino //nsnetworkshare.domain.company/share/folder /opt/testnas
mount.nfs: remote share not in 'host:dir' format
The command below doesn't seem to work either:
mount -t nfs -o nfsvers=3,domain=mydomain,username=svc_account,password=password,noserverino nsnetworkshare.domain.company:/share/folder /opt/testnas
mount.nfs: an incorrect mount option was specified
I couldn't find an example of mount -t nfs with a username/password, so I think we can't use mount -t nfs with credentials.
Please pour in ideas.
Thanks,
Vishnu
CIFS is a file sharing protocol. NFS is a volume sharing protocol. The difference between the two might not initially be obvious.
NFS is essentially a tiny step up from directly sharing /dev/sda1. The client actually receives a naked view of the shared subset of the filesystem, including (at least as of NFSv4) a description of which users can access which files. It is up to the client to actually manage the permissions of which user is allowed to access which files.
CIFS, on the other hand, manages users on the server side, and may provide a per-user view and access of files. In that respect, it is similar to FTP or WebDAV, but with the ability to read/write arbitrary subsets of a file, as well as a couple of other features related to locking.
This may sound like NFS is distinctly inferior to CIFS, but they are actually meant for different purposes. NFS is most useful for external hard drives connected via Ethernet, and for virtual cloud storage. In such cases, the intention is to share the drive itself with a machine, just over Ethernet instead of SATA. For that use case, NFS offers greater simplicity and speed. A NAS, as you're using, is actually a perfect example of this. It isn't meant to manage access; it's meant not to be exposed, in the first place, to systems that shouldn't access it.
If you absolutely MUST use NFS, there are a couple of ways to secure it. NFSv4 has an optional security model based on Kerberos. Good luck using that. A better option is to not allow direct connection to the NFS service from the host, and instead require going through some secure tunnel, like SSH port forwarding. Then the security comes down to establishing the tunnel. However, either one of those requires cooperation from the host, which would probably not be possible in the case of your NAS.
Mind you, if you're already using CIFS and it's working well, and it's giving you good access control, there's no good reason to switch (although, you'd have to turn the NFS off for security). However, if you have a docker-styled host, it might be worthwhile to play with iptables (or the firewall of your choice) on the docker-host, to prevent the other containers from having access to the NAS in the first place. Rather than delegating security to the NAS, it should be done at the docker-host level.
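As a rough sketch of that iptables idea (the NAS and container addresses are placeholders; DOCKER-USER is the chain Docker reserves for user rules on forwarded container traffic, assuming a Docker version recent enough to create it):

# drop all container traffic to the NAS, then allow only the one container that needs it
iptables -I DOCKER-USER -d 192.168.10.50 -j DROP
iptables -I DOCKER-USER -s 172.17.0.5 -d 192.168.10.50 -j ACCEPT

Pinning the allowed container to a known address (for example via a user-defined network with a static IP) keeps the ACCEPT rule stable across container restarts.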
Well, I would say go with CIFS, since NFS is old and a few Linux/Unix distros have even stopped supporting it.
NFS is the “Network File System”, used specifically by Unix and Linux operating systems. It allows files to be shared transparently between servers and end-user machines like desktops and laptops. NFS uses a client-server model to let users view, read, and write files on a remote system, and a user can mount all or a portion of a file system via NFS.
CIFS is the abbreviation for “Common Internet File System”, used by Windows operating systems for file sharing. CIFS also uses the client-server model: a client makes a request to a server program to access a file, and the server takes the requested action and returns a response. CIFS is an open, published version of the Server Message Block (SMB) protocol developed and used by Microsoft, and it runs over TCP/IP.
If it's Linux <-> Linux I would choose NFS, but if it's Windows <-> Linux, CIFS would be the best option.
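For completeness, a sketch of the syntax difference using the paths from the question: NFS takes no username/password options (access is controlled by the server's export rules and by UID mapping), while mount.cifs can read credentials from a file so the password stays off the command line.

mount -t nfs -o nfsvers=3 nsnetworkshare.domain.company:/share/folder /opt/testnas
mount -t cifs -o sec=ntlmv2,credentials=/root/.nas-creds,noserverino //nsnetworkshare.domain.company/share/folder /opt/testnas
# /root/.nas-creds is a placeholder file containing username=, password=, and domain= lines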