ESXi 4 lost the datastore? - esxi

I have an ESXi 4.1 host with some virtual machines. The host was using external storage via NFS and local storage on a SATA disk.
I moved all virtual machines from the NFS datastore to the SATA datastore. Then I tried to unmount the NFS datastore, but it failed with an error saying it was in use. But the datastore was empty.
So I used SSH access to unmount the NFS datastore:
~ # esxcfg-nas -l
nfs1 is /vmware from 192.168.2.131 mounted
~ # esxcfg-nas -d nfs1
NAS volume nfs1 deleted.
~ # esxcfg-nas -l
nfs1 is /vmware from 192.168.2.131 unmounted
But now the vSphere Client shows a big message:
The VMware ESX Server does not have persistent storage.
At Configuration -> Storage the list is empty; before removing the NFS datastore, both datastores (NFS and SATA) were listed there.
Still, everything seems to be working fine: all virtual machines continue running.
I tried Rescan All with no luck. If I try to add new storage, the SATA disk appears as available.
What can I do to restore the datastore? I'm afraid of doing anything and losing all my data on the SATA disk.
Any ideas?

It seems there are two very smart people who can downvote my question without sharing their thoughts.
For everyone else with the same problem, I've found the solution. When I try to refresh the datastores, the vSphere Client shows 'Complete', but /var/log/messages logs this:
Jun 13 11:32:34 Hostd: [2014-06-13 11:32:34.677 2C3E1B90 error 'FSVolumeProvider' opID=EB3B0782-00001239] RefreshVMFSVolumes: ProcessVmfs threw HostCtlException Error interacting with configuration file /etc/vmware/esx.conf
Jun 13 11:32:34 ker failed : Error interacting with configuration file /etc/vmware/esx.conf: I am being asked to delete a .LOCK file that I'm not sure is mine. This is a bad thing and I am going to fail.
[...]
Jun 13 11:32:35 ith configuration file /etc/vmware/esx.conf: I am being asked to delete a .LOCK file that I'm not sure is mine. This is a bad thing and I am going to fail. Lock should be released by (0)
To solve this, just run this over SSH:
# services.sh restart
and my SATA datastore came back with no problem.
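To double-check that the host sees the VMFS volume again after the restart, you can also list the mappings over SSH (these are the ESXi 4.x commands as I remember them, so treat them as a sketch rather than gospel):
~ # esxcfg-scsidevs -m
~ # vim-cmd hostsvc/storage/refresh
The first lists the VMFS volume to device mappings; the second asks hostd to refresh its storage view.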
Hope this helps somebody sometime.

Related

Firewall detected while mounting volume in docker

I am trying to mount a volume using the shared drive option on Docker Desktop, but each time I click on a drive to share it, I get a firewall-detected error.
My Docker version is 2.1.0.5.
My system: Windows 10.
I am on my office system and connected to the internet using a VPN. I have disconnected the VPN and tried connecting to the internet directly, but I still get the error. I don't have full access to modify settings on my laptop. I really need the mount option to share some files between the local machine and a container, and I am not able to do it. Could you please help me resolve this issue, or suggest any workaround I could try to mount my local files into a container without sharing a drive?
You want to upgrade to a 2.2.x.x release of Docker Desktop or newer. In that release they updated file sharing to remove the Samba-based mounts.
Users don’t have to expose the Samba port, and therefore do not experience issues related to IT firewall or drive-sharing policy.
There were a few issues in the first few releases, so be sure to use the latest patch.

How do I have write privileges for Mounted Drives, External or otherwise for docker?

I have been working a lot with Elasticsearch, and I have a big issue trying to expand my container's HDD space. I want to shift my volumes to an external HDD (NTFS or otherwise), but when I use docker-compose with something like:
volumes:
  - /Volumes/Elements/volume_folder/data03:/usr/share/elasticsearch/data
it seems that the container doesn't have write permissions. I confirmed on both Windows and Mac that this is the case, but I figured this is a common issue that Docker has already largely solved; I just haven't been able to find how.
How is this done? I have mounted (internal) drives on my Windows 10 machine that I wanted to use to store this data, as well as multiple external HDDs I wanted to use the same way.
I notice that I, as the current user, always have read/write/execute privileges on these devices, so I was wondering whether there is a way to have Docker run as the current user for the purposes of determining drive privileges.
The current issue is that a container falls outside the scope of the current user, and the external drive appears to be mounted with something akin to 775 permissions.
Can someone assist with this? I was looking on Stack Overflow and all the mounts were based on the host machine, NOT a different drive like this. I can easily set a volume anywhere on the machine, but when it comes to an external HDD, or H:/ or I:/, it seems to be a different story.
I was looking at this Stack Overflow question: Docker volume on external hard drive, and into what I can do with it. When I looked at preferences, I saw that /Volumes was shared. When I ran docker-compose up it said the filesystem is read-only (as previously stated). It is 755. Is there a way to run docker-compose as a particular user?
Edit: I noticed that docker-compose allows a user option, and since the mounted HDD is owned by me, I thought maybe I could pass my user into each container and it would access the drive correctly. I saw this article stating I could do this: https://medium.com/redbubble/running-a-docker-container-as-a-non-root-user-7d2e00f8ee15
I added a user to each service, like this: user: ${CURRENT_UID}
and then on the CLI I tried a couple of different options:
CURRENT_UID="$(whoami)" docker-compose up
CURRENT_UID="$(id -u):$(id -g)" docker-compose up
The top one failed because the user was not in passwd, but the bottom one gave me a "permission denied" error. I thought it might have worked, but it didn't.
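For reference, the single-container equivalent of what I was attempting uses docker run's --user flag (the Elasticsearch image tag here is just an example; the host path is the one from my compose file):
# run the container as my numeric UID:GID instead of the image's default user
docker run --user "$(id -u):$(id -g)" \
  -v /Volumes/Elements/volume_folder/data03:/usr/share/elasticsearch/data \
  docker.elastic.co/elasticsearch/elasticsearch:7.9.3
This avoids the "user not in passwd" failure because the UID/GID are numeric, but it still runs into the same permission question on the external drive.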

Mount network share with nfs with username / password

I am trying to mount a NAS using NFS for an application.
The Storage team has exported it to the host server and I can access it at /nas/data.
I am using a containerized application, and this file system export to the host machine is a security issue, as any container running on the host would be able to use the share. So this Linux-to-Linux mounting will not work for me.
So the only alternative solution I have is mounting this NAS folder during container startup with a username/password.
The command below works fine on a share that supports Unix/Windows; I can mount it on container startup:
mount -t cifs -osec=ntlmv2,domain=mydomain,username=svc_account,password=password,noserverino //nsnetworkshare.domain.company/share/folder /opt/testnas
I have been told that we should use the nfs option instead of cifs.
So I'm just trying to find out whether using nfs or cifs makes any difference.
Specifying the nfs option gives the error below:
mount -t nfs -o nfsvers=3,domain=mydomain,username=svc_account,password=password,noserverino //nsnetworkshare.domain.company/share/folder /opt/testnas
mount.nfs: remote share not in 'host:dir' format
The command below doesn't seem to work either:
mount -t nfs -o nfsvers=3,domain=mydomain,username=svc_account,password=password,noserverino nsnetworkshare.domain.company:/share/folder /opt/testnas
mount.nfs: an incorrect mount option was specified
I couldn't find any mount -t nfs example with a username/password, so I think we can't use mount -t nfs with credentials.
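For reference, a plain NFS mount takes a host:dir source and no credential options at all, which is roughly why both attempts above fail; something like this (same server and export as above) should mount as long as the export allows the client:
mount -t nfs -o nfsvers=3 nsnetworkshare.domain.company:/share/folder /opt/testnas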
Please pour in ideas.
Thanks,
Vishnu
CIFS is a file sharing protocol. NFS is a volume sharing protocol. The difference between the two might not initially be obvious.
NFS is essentially a tiny step up from directly sharing /dev/sda1. The client actually receives a naked view of the shared subset of the filesystem, including (at least as of NFSv4) a description of which users can access which files. It is up to the client to actually manage the permissions of which user is allowed to access which files.
CIFS, on the other hand, manages users on the server side, and may provide a per-user view and access of files. In that respect, it is similar to FTP or WebDAV, but with the ability to read/write arbitrary subsets of a file, as well as a couple of other features related to locking.
This may sound like NFS is distinctively inferior to CIFS, but they are actually meant for a different purpose. NFS is most useful for external hard drives connected via Ethernet, and virtual cloud storage. In such cases, it is the intention to share the drive itself with a machine, but simply do it over Ethernet instead of SATA. For that use case, NFS offers greater simplicity and speed. A NAS, as you're using, is actually a perfect example of this. It isn't meant to manage access, it's meant to not be exposed to systems that shouldn't access it, in the first place.
If you absolutely MUST use NFS, there are a couple of ways to secure it. NFSv4 has an optional security model based on Kerberos. Good luck using that. A better option is to not allow direct connection to the NFS service from the host, and instead require going through some secure tunnel, like SSH port forwarding. Then the security comes down to establishing the tunnel. However, either one of those requires cooperation from the host, which would probably not be possible in the case of your NAS.
Mind you, if you're already using CIFS and it's working well, and it's giving you good access control, there's no good reason to switch (although you'd have to turn NFS off for security). However, if you have a Docker-style host, it might be worthwhile to play with iptables (or the firewall of your choice) on the Docker host to prevent the other containers from having access to the NAS in the first place. Rather than delegating security to the NAS, it should be done at the Docker-host level.
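As a rough sketch of that idea (the NAS address 10.0.0.50 and the default docker0 subnet 172.17.0.0/16 are placeholders for your environment), a single rule on the Docker host cuts containers off from the NAS while leaving the host's own /nas/data mount untouched, since the host's own traffic doesn't pass through the FORWARD chain:
iptables -I FORWARD -s 172.17.0.0/16 -d 10.0.0.50 -j DROP
A single trusted container could then be allowed through by inserting an ACCEPT rule for its address ahead of the DROP.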
Well, I would say go with CIFS, as NFS is old; a few Linux/Unix distros have even stopped supporting it.
NFS is the "Network File System", used specifically by Unix and Linux operating systems. It allows files to be shared transparently between servers and end-user machines like desktops and laptops. NFS uses a client-server methodology that allows users to view, read, and write files on a remote system. A user can mount all or a portion of a file system via NFS.
CIFS is an abbreviation for "Common Internet File System", used by Windows operating systems for file sharing. CIFS also uses the client-server methodology, where a client makes a request to a server program to access a file; the server takes the requested action and returns a response. CIFS is an open-standard version of the Server Message Block (SMB) protocol developed and used by Microsoft, and it runs over TCP/IP.
For Linux <-> Linux I would choose NFS, but for Windows <-> Linux, CIFS would be the best option.
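To make that concrete, the two mount styles look roughly like this (server names, share paths and mount points are placeholders):
# Linux <-> Linux: NFS, access control handled by the export and the client
mount -t nfs fileserver:/export/data /mnt/data
# Windows <-> Linux: CIFS/SMB, credentials checked by the server
mount -t cifs //fileserver/data /mnt/data -o username=svc_account,password=secret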

Where are containers located in the host's file system?

I'm currently experimenting with Docker containers on Windows Server. I've created a number of containers, and I want to see where they are actually saved on the host's file system (like a .vhd file for Hyper-V). Is there a default location I can look in, or a way to find that out using the Docker CLI?
Other answers suggest the data might be stored in:
C:\Users\Public\Documents\Hyper-V\Virtual hard disks\MobyLinuxVM.vhdx
or since the Windows 10 Anniversary Update:
C:\ProgramData\docker\containers
You can find out by entering:
docker info
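If you only want the storage root rather than the full docker info dump, the --format flag should narrow it down (Go-template field name as found in recent Docker versions):
docker info --format '{{ .DockerRootDir }}'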
Credit to / More info:
https://stackoverflow.com/a/38419398/331637
https://stackoverflow.com/a/39971954/331637

Docker container behavior when used in production

I am currently reading up on Docker. From what I understand, a container based on an image saves only the changes. If I were to use this in a production setup, are those changes persisted as soon as applications running "inside" the container write them to disk, or does it have to be done manually?
My concern is - what if the host abruptly shuts down? Will all the changes be lost?
The theory is that there's no real difference between a Docker container and a classical VM or physical host in most situations.
If the host abruptly dies, you can lose recent data with a container just as with a physical host:
your application may not have actually issued the write that saves the data to disk,
the operating system may have decided to wait a bit before sending the data to the storage devices,
the filesystem may not have finished the write,
the data may not have been truly flushed to the physical storage device.
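Each of those buffers can be flushed deliberately: an application calls fsync() on its file descriptors, and from a shell you can force the same thing system-wide (nothing Docker-specific here):
# ask the kernel to flush dirty filesystem buffers to the storage devices
sync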
Now, by default Docker uses AUFS (a stackable filesystem), which works at the file level.
If you're writing to a file that already exists in the Docker image, AUFS will first copy this base file up to the writable layer (the container) before writing your change. This causes a delay that depends on the size of the original file. Interesting and more technical information here.
I guess that if a power cut happens while this original file is being copied and before your changes have been written, that would be one reason to see more data loss with a Docker container than with a "classical" host.
You can move your critical data to a Docker "volume", which is a regular filesystem on the host, bind-mounted into the container. This is the recommended way to deal with important data that you want to keep across container deployments.
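In docker run terms that is simply a -v host:container mapping (the host directory, container path and image name below are placeholders):
# bind-mount a host directory into the container; writes land directly on the host filesystem
docker run -v /srv/appdata:/var/lib/app myimage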
To mitigate the potential AUFS issue, you could tell Docker to use LVM thin-provisioned block devices instead of AUFS (wipe /var/lib/docker and start the daemon with docker -d -s devicemapper). However, I don't know whether this storage backend has received as much testing as the default AUFS one (it works OK for me, though).
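Spelled out, that switch looks roughly like this (old daemon syntax, as above; the service command depends on your init system, and wiping /var/lib/docker deletes all local images and containers, so back up first):
service docker stop
# removes every local image and container along with the AUFS layers
rm -rf /var/lib/docker
docker -d -s devicemapper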
