Windows NFS user mapping issue when working with NetApp clustered ONTAP

I have an issue with user mapping while using an NFS export from NetApp and trying to mount it on Windows using the NFS client feature.
I have a qtree exported from the NetApp via NFS, and I installed the NFS client on Windows (via Server Manager - Roles - File Server). I am able to mount, read and write as the anonymous user - the problem is in the user mapping.
I tried adding the two DWORD values AnonymousUid and AnonymousGid to the registry at HKLM\Software\Microsoft\ClientForNFS\CurrentVersion\Default, gave them the desired UID and GID in decimal form, and restarted the service.
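For reference, the tweak looks roughly like this from an elevated command prompt (1000 is only a placeholder for the UID/GID you actually want; you can also restart the service from services.msc instead of using nfsadmin):

    reg add "HKLM\Software\Microsoft\ClientForNFS\CurrentVersion\Default" /v AnonymousUid /t REG_DWORD /d 1000 /f
    reg add "HKLM\Software\Microsoft\ClientForNFS\CurrentVersion\Default" /v AnonymousGid /t REG_DWORD /d 1000 /f
    :: restart the Client for NFS service so the new values are picked up
    nfsadmin client stop
    nfsadmin client start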
This trick worked fine on an NFS export from my RHEL 6.3 box and also on a 7-Mode NetApp, but it doesn't seem to work the same here.
Somehow, when I ran showmount -e against the RHEL box or the 7-Mode system it showed the exports, but against the clustered ONTAP vserver it only shows / (and yet the mount command completes successfully when I type the full path).
Any suggestions?
Details:
Client is Windows Server 2008 R2 x64 (Services for NFS)
Server is NetApp clustered ONTAP 8.2.3p4
Edit:
I also tried user mapping on the server side, using vserver name-mapping win-unix at position 1.
And added a rule to the export policy:
Client match: 10.0.0.1 (I changed the IP for security reasons)
User to which anonymous users are mapped: 1000
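For context, the server-side part was roughly the following from the cluster shell (vs1, qtree_policy, DOMAIN\\\\winuser and unixuser are placeholders, not my real names, and the backslash escaping in the pattern depends on how you enter it):

    vserver name-mapping create -vserver vs1 -direction win-unix -position 1 -pattern DOMAIN\\\\winuser -replacement unixuser
    vserver export-policy rule create -vserver vs1 -policyname qtree_policy -clientmatch 10.0.0.1 -rorule any -rwrule any -anon 1000

The -anon value on the export rule is where the "mapped to 1000" above comes from; the name-mapping rule at position 1 handles the Windows-to-UNIX user translation.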
Any thoughts?

Well, apparently a good old reboot solved the problem (a reboot of the client). Actually, I tried to uninstall and reinstall Client for NFS, and Windows demanded a reboot before it could reinstall it.
After I reinstalled it, it just worked. I then tried to narrow down what exactly had fixed it.
It was the registry tweak: once I removed the DWORD values the mapping stopped, and it worked again when I recreated them. It also kept working after I changed the export policy to anon 65543.

Related

Cannot pull the project from Bitbucket (the project is with IP restrictions) while using Docker with WSL2 Ubuntu-20.04 Distro

I have a Symfony project that I run on my PC with symfony serve.
The project is on Bitbucket with IP restrictions; I can only work from home and nowhere else for security reasons, and all works just fine :).
I wanted to create a Docker image so that I can easily change machines and deploy it elsewhere.
So I created a Docker image, did the necessary configuration, and all seems good: I can open the project and work the same way as before. Docker uses the default WSL (WSL1), and I've noticed the application isn't running as fast as usual (outside Docker a page loads in about 3 seconds, while with Docker it takes at least 30 seconds).
I did some research and found out that I could use WSL2 with Docker, which provides better performance than the legacy Hyper-V backend, and I enabled integration for the Ubuntu-20.04 distro. The problem with WSL2 is that I am no longer able to pull my project inside WSL2 (from Ubuntu-20.04) because of the IP restrictions.
It is really strange that I cannot find any configuration for this, and I have no idea what I should do to change it. If I pull the project outside the WSL2 distro it works, and it works with the default WSL too, but not with WSL2.
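A quick way to check whether WSL2 traffic really leaves from a different address or path (a sketch, assuming curl is available in both places and using a public IP-echo service such as ifconfig.me) is to compare the public address seen from Windows and from the distro:

    # from a Windows prompt (curl.exe ships with recent Windows 10)
    curl.exe https://ifconfig.me
    # from inside the Ubuntu-20.04 WSL2 distro
    curl https://ifconfig.me
    # if the two addresses differ (e.g. a VPN/proxy only applies on the Windows side),
    # Bitbucket's IP allowlist will reject the pull coming from WSL2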
I removed the IP restrictions and the Docker image worked fine; I get the same speed as if I were outside Docker. The only problem is that I cannot use the IP restrictions this way!
Does anyone know how to fix this? I haven't been able to find any documentation for this issue.
I am using Windows 10 and Docker Desktop version 4.5.1 (74721).
Thanks a lot for any information.

How do I implement XAMPP and Docker side-by-side in Windows 10

I recently ran into the issue where I was working on two Laravel projects: one using Docker, the other using XAMPP. I started my Docker project earlier, so I gave it access to port 3306.
When I went to set up the XAMPP project, I edited all the DB settings in the proper places to use port 3308 so that it wouldn't collide with my DB Docker container. Problem was, now I couldn't connect to phpMyAdmin: I kept getting errors that the settings were incorrect. So what was the solution?
The solution was to reset all of my settings to 3306, docker-compose down my Docker project, and then restart the XAMPP services. Worked like a charm.
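In command form, the fix boiled down to something like this (the project path is only an example):

    # first put every DB setting (Laravel .env, phpMyAdmin config, XAMPP) back on port 3306
    cd path/to/the-docker-project    # wherever your docker-compose.yml lives (example path)
    docker-compose down              # stop the containers that were holding 3306
    # then restart Apache/MySQL from the XAMPP control panel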
So I'll note a couple things:
It seems like phpMyAdmin assumes it has access to 3306 even if you've changed your settings in config.inc.php.
Unrelated to this precise problem, I discovered that XAMPP's PHP version was different from what was installed on my Windows machine, which meant that I had two php.ini files. My php-cli was using C:/Program Files/PHP/php.ini, whereas XAMPP was using the XAMPP php.ini. While the XAMPP php.ini had the correct extensions uncommented, I needed to manually uncomment the appropriate extensions in the php-cli ini file. If you have XAMPP, go to the command line and use php --ini to check where your CLI ini file is located.
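For example (the path shown is only an illustration of what you might see):

    php --ini
    # in the output, the "Loaded Configuration File:" line (e.g. C:\Program Files\PHP\php.ini)
    # is the ini your CLI actually uses, which may not be the one under xampp\php\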
I suggest trying Devilbox:
The Devilbox is a modern and highly customisable dockerized PHP stack supporting full LAMP and MEAN, running on all major platforms. The main goal is to easily switch and combine any version required for local development. It supports an unlimited number of projects, for which vhosts, SSL certificates and DNS records are created automatically. Reverse proxies per project are supported to ensure listening servers such as NodeJS can also be reached. Email catch-all and popular development tools are at your service as well. Configuration is not necessary, as everything is already pre-set up.

How to change file permissions in localhost Windows 10 pro/docker/ddev container for Drupal site?

I have installed a Drupal 8.8 site using Composer on a Windows 10 Pro system, with Docker and ddev as the development environment.
The Drupal site seems to be functioning normally: I see no errors in the Drupal log, nor when I run ddev describe.
The only exception: Drupal gives me a warning that sites/default/settings.php needs to be write-protected. In the past I have done this on a live site using FileZilla, but this is a development-only site and it seems FileZilla does not apply permissions on local files--at least, when I right-click the file locally, I do not find a command for changing permissions.
I tried changing the write permissions with Windows 10 itself, but that did not seem to have any effect--I suspect those are different kinds of permissions as far as Windows is concerned.
I poked around online and saw something that made me think I could use phpMyAdmin to change permissions. I got caught up in that and struggled with it until getting some help here (How to access phpmyadmin on DDEV Windows 10 pro localhost with SSL record too long error), but it turns out you can't change file permissions with phpMyAdmin, apparently.
I tried to use the address that connected me to phpMyAdmin in my browser to connect with PuTTY, but PuTTY tells me the host does not exist.
So here is the help I am looking for: how can I change file permissions for sites/default/settings.php on a Windows 10 Pro localhost running a docker/ddev development environment for my Drupal site?
Thank you!
I assume you're talking about this warning?
First, you can ignore this warning completely. You're on a local development environment, and so you shouldn't have any concerns about the permissions of settings.php.
Unfortunately, in a Windows environment, you can't make simple permissions changes as Drupal 8 is suggesting that you do.
Note that settings.ddev.php explicitly sets the skip_permissions_hardening option ($settings['skip_permissions_hardening'] = TRUE;) to tell Drupal 8 not to try to change permissions on sites/default and sites/default/settings.php, because it's just a dev environment and because when Drupal does these things it just makes development harder.
However, to make most things easier on Windows (it doesn't solve that particular warning)...
Use nfs_mount_enabled
I see there are loads of problems with the new "official" Drupal 8.8.0 composer build on Windows. Most of them are due to the composer build making assumptions about the ability to set file times and ownership, but the docker mount used by default (CIFS) has everything owned by root, so the container can't change permissions (even though they're wide open).
I found that I could get past all of these things by using NFS to mount into the container, and you'll also find it improves performance quite a lot. Set up NFS by following the instructions at https://ddev.readthedocs.io/en/stable/users/performance/#windows-nfs-setup
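Roughly, once the Windows NFS setup from those instructions is in place, enabling it for a project is just a config flag plus a restart (a sketch; check .ddev/config.yaml for an existing nfs_mount_enabled line first):

    # in .ddev/config.yaml set (or change) this line:
    #   nfs_mount_enabled: true
    # then reload the project so the mount type switches to NFS:
    ddev restart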

Teradata & Continuous Integration

Status quo:
We are developing a project at the client side. There's an existing Teradata appliance on the DEV side and one on the production side.
On the DEV side there is more than one supplier, and every supplier has its own sub-database. The DBAs are not granted direct permissions; instead they call macros to create users and databases, grant rights, etc. There are no SYSDBA permissions on Teradata.
On the PRD side these macros don't exist. Every statement has to be run as is and has to be run automatically (packaged via RPM).
Therefore it is currently impossible to do a complete packaging and integration testing.
We have a Jenkins running which is doing several other tasks. The system is virtual, we're root and we already have an established packaging process.
What we need / ideas: an image of a plain Teradata database we can connect to (remote is OK) and run our DDL scripts against.
The idea is to start some kind of image (Docker, VMware, VirtualBox) which provides a small Teradata installation, run our DDLs against it and throw the result away at the end.
Docker would be the best case here, but I'm open to ideas. Is there some kind of trial Teradata (v15) which can be used for this?
I have looked into this (as I need to do the same) and here is what I have found:
You can actually run the VMware image in VirtualBox (which is what I will be doing).
Once I had the image running, I tarred up the file system at root (/), dumped it out, and was able to start it up under Docker.
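Roughly, that tar-and-import step can be sketched like this (paths and the image tag are placeholders, and the pseudo-filesystems have to be excluded):

    # on the running Teradata Express VM: archive the root filesystem
    tar --numeric-owner --exclude=/proc --exclude=/sys --exclude=/dev --exclude=/tmp -czf /tmp/rootfs.tar.gz /
    # copy rootfs.tar.gz to the Docker host, then turn it into an image
    docker import /tmp/rootfs.tar.gz teradata-express:raw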
However, Teradata Express also has a RAID1 setup (I think), which is the two vmdks PDISK0 and PDISK1 (SCSI sdb and sdc). I couldn't find a way to replicate this in Docker (without spending more time, and my time is up on this), so for now I think running in Docker is not an option, but if someone more familiar with Docker can find a way to virtualize the RAID1, I am happy to be corrected.

Can I use Docker like this ...?

My work laptop is running LinuxMint as the base OS, plus Virtualbox to run Windows 7 which is the actual work environment, usually plus an additional Virtualbox VM to run a different Windows installation in which I do my client project work (I have one VM per client, to avoid messing up my main OS).
But I'm wondering if it's feasible and beneficial to switch to using Docker for the client project stuff? That is, I'd like to keep LinuxMint (to preserve my sanity), and keep Windows ('cause I have to use some MS products), but then use Docker containers instead of that series of "client VMs"?
I'm not entirely clear on how containers are useful. Can I, for instance, have a container in which I've installed .NET and MS SQL; another container where I've installed Azure PowerShell; and a third container where I've installed Java and Eclipse -- and then decide which of these "sets" of software is available on the same common base OS (Windows, with VPN and Outlook and Notepad++)?
This post makes me think I'm asking for a solution from the wrong tool?
Or should I perhaps attack the root problem from a different angle, and ask the following over at Workplace.SE: How to work as a consultant without "cluttering up" one's (Windows) OS with more or less temporary installations of all sorts of software necessary for client projects?
AFAIK there is no Windows OS ready to be run INSIDE a Docker container locally yet, but they have been announced. See www.docker.com/microsoft and msdn windowscontainers
What you can do is run Linux OSs in Docker containers within Windows. But in your case you should run the Docker engine in your Mint Linux.
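For example, on the Mint host a disposable Linux environment is a one-liner (ubuntu:20.04 is just one possible base image):

    # start an interactive throwaway container; --rm deletes it when you exit
    docker run --rm -it ubuntu:20.04 bash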
Not really an answer, more like several comments -- though it's too long to fit within a comment.
First of all I would not run Mint, but that's off the question.
Then, it is probably worth taking a look at How is Docker different from a normal virtual machine?.
Also, as you linked, Docker does not aim (at all) to run several programs in one container. Indeed, their policy is CaaS: Containers as a Service, so basically one program per container. That said, you can probably run Wine within a container and run one application in each container (over Wine).
Have fun!
