I am providing a Docker container for my software that runs directly on the user's machine. The software is supposed to use a node-locked license bound to the MAC address of the host machine. FlexLM is used to validate the license.
The problem is that the Docker container does not, by default, have access to the host machine's MAC address. One has to either bind the container to the host machine's network using the --net argument or provide the MAC address explicitly using the --mac-address argument.
The catch is that one can pass any value to the --mac-address argument and the container will use that MAC address. This defeats the whole purpose of a node-locked license. How do I make sure that the container always gets the host machine's MAC address?
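For example (the image name below is a placeholder):

# host networking lets the container see the host's real MAC address
docker run --net=host my-licensed-app

# but --mac-address accepts any value, so the check is trivially spoofed
docker run --mac-address 02:42:ac:11:00:02 my-licensed-app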
Short Answer:"there is currently no good solution for nodelocking within a container. Everything is virtualized so there is nothing safe to bind to."
Suggestion: have you heard of Flexera's REST-based licensing API? It is also known as the Cloud Monetization API, or CMAPI.
This API was designed for cloud-to-cloud license checking. It does not require the SDK libraries; you can call it from any language that can make a REST call. It makes for a very lightweight client, but it requires back-end functionality (FlexNet Operations and the Cloud Licensing Service) to support it.
It's a great solution for applications deployed in Docker containers.
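To give a flavor of what "any language that can make a REST call" means, a checkout could be a single HTTP request, along the lines of the sketch below. Note that the endpoint, path, and payload here are invented for illustration and are not the real CMAPI contract; your account manager can point you at the actual API reference.

curl -X POST "https://your-flexnet-operations.example.com/api/checkout" \
  -H "Authorization: Bearer $DEVICE_TOKEN" \
  -H "Content-Type: application/json" \
  -d '{"feature": "my_feature", "count": 1}'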
Take a look at the FlexNet Licensing datasheet here:
https://www.flexerasoftware.com/resources.html?type=datasheet
Then contact your account manager for more information.
Source - Flexera Customer Community - https://community.flexera.com/t5/FlexNet-Publisher-Forum/Support-for-Docker-and-Kubernetes/m-p/111022
Related
I have a unique Docker issue. I am developing an application which needs to connect to multiple Docker containers. The gist is that this application will use the Docker SDK to spin up containers and connect to them as needed.
However, due to the nature of the application, we should assume that each one of these containers is compromised and unsafe. Therefore, I need to separate them from the host network (so they cannot access my devices and the WAN). I still have the constraint of needing to connect to them from my application.
It is a well-known problem that the macOS networking stack doesn't support connecting directly to a Docker network. Normally, I'd get around this by exposing the ports I need. However, that is not possible with my application, as I am using internal networks with Docker.
I'd like to accomplish something like the following. Imagine Container 2 and Container 3 are on their own private internal network. The host (which isn't a container) is controlling the Docker SDK and can query their internal IPs. Thus, it can easily connect to these machines without this network being exposed to the network of the host. Fortunately, this sort of setup works on Linux. However, I'd like to come up with a cross platform solution that works on macOS.
I had a similar situation. What I ended up doing was:
The app manages a dynamic container-to-port mapping (just a hash table).
When my app (on the host) wants to launch a container, it finds an unused port in a pre-defined range (e.g. 28000-29000).
Once it has a free port, it publishes the container's service port on that host port (e.g. -p 28003:80).
When my app needs to refer to a container, it uses localhost:<port> (e.g. localhost:28001).
It turns out to not be a lot of code, but if you go that route, make sure you encapsulate the way you refer to containers (i.e. don't hard-code the hostname and port; use a class that generates the string).
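A minimal shell sketch of the idea, assuming bash, the docker CLI, and lsof are available (the port range and image name are placeholders):

# scan the pre-defined range for a port nothing is listening on
for port in $(seq 28000 29000); do
  lsof -i tcp:"$port" >/dev/null 2>&1 || break
done

# publish the container's port 80 on the free host port
docker run -d --name "app_$port" -p "$port":80 my-internal-image
echo "container app_$port is reachable at localhost:$port"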
All that said, you should really do some testing with a VM deployment option before you rule it out as too slow.
I am trying to mount a NAS using nfs for an application.
The Storage team has exported it to the host server and I can access it at /nas/data.
I am using a containerized application, and this file-system export to the host machine is a security issue, as any container running on the host will be able to use the share. So this Linux-to-Linux mounting will not work for me.
The only alternative solution I have, then, is mounting this NAS folder during container startup with a username/password.
The command below works fine on a share supporting Unix/Windows; I can mount it on container startup:
mount -t cifs -osec=ntlmv2,domain=mydomain,username=svc_account,password=password,noserverino //nsnetworkshare.domain.company/share/folder /opt/testnas
I have been told that we should use nfs option instead of cifs.
So I am just trying to find out whether using nfs or cifs will make any difference.
Specifying the nfs option gives the error below:
mount -t nfs -o nfsvers=3,domain=mydomain,username=svc_account,password=password,noserverino //nsnetworkshare.domain.company/share/folder /opt/testnas
mount.nfs: remote share not in 'host:dir' format
The command below doesn't seem to work either:
mount -t nfs -o nfsvers=3,domain=mydomain,username=svc_account,password=password,noserverino nsnetworkshare.domain.company:/share/folder /opt/testnas
mount.nfs: an incorrect mount option was specified
I couldn't find an example of mount -t nfs with a username/password, so I think we can't use mount -t nfs with credentials.
Please pour in ideas.
Thanks,
Vishnu
CIFS is a file sharing protocol. NFS is a volume sharing protocol. The difference between the two might not initially be obvious.
NFS is essentially a tiny step up from directly sharing /dev/sda1. The client receives a naked view of the shared subset of the filesystem, including (at least as of NFSv4) a description of which users can access which files, and it is up to the client to actually enforce which user is allowed to access which files.
CIFS, on the other hand, manages users on the server side and can provide a per-user view of, and access to, files. In that respect it is similar to FTP or WebDAV, but with the ability to read/write arbitrary subsets of a file, as well as a couple of other features related to locking.
This may make NFS sound distinctly inferior to CIFS, but they are actually meant for different purposes. NFS is most useful for external hard drives connected via Ethernet and for virtual cloud storage. In such cases, the intention is to share the drive itself with a machine, just over Ethernet instead of SATA; for that use case, NFS offers greater simplicity and speed. A NAS, as you're using, is actually a perfect example of this. It isn't meant to manage access; it's meant to not be exposed to systems that shouldn't access it in the first place.
If you absolutely MUST use NFS, there are a couple of ways to secure it. NFSv4 has an optional security model based on Kerberos; good luck using that. A better option is to not allow direct connections to the NFS service from the host, and instead require going through some secure tunnel, like SSH port forwarding; the security then comes down to establishing the tunnel. However, either of those requires cooperation from the serving side, which would probably not be possible in the case of your NAS.
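A rough sketch of the tunnel idea, assuming NFSv4 (which multiplexes everything over port 2049, so a single forward suffices) and an SSH endpoint you control; the hostnames are placeholders:

# forward local port 2049 to the NFS server through the SSH endpoint
ssh -f -N -L 2049:nas.internal:2049 user@gateway.example.com

# mount through the tunnel; only this host's loopback can reach the share
mount -t nfs -o port=2049,vers=4 localhost:/share/folder /opt/testnas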
Mind you, if you're already using CIFS, it's working well, and it's giving you good access control, there's no good reason to switch (although you would want to turn NFS off for security). However, since you have a Docker-style host, it might be worthwhile to play with iptables (or the firewall of your choice) on the Docker host to prevent the other containers from having access to the NAS in the first place. Rather than delegating security to the NAS, it can be done at the Docker-host level.
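For example, something along these lines on the Docker host (the DOCKER-USER chain is where recent Docker versions expect user firewall rules; the NAS IP and the trusted container subnet are placeholders):

# drop all container traffic to the NAS by default
iptables -I DOCKER-USER -d 10.0.0.50 -j DROP

# then allow only the one trusted container network (this -I lands above the DROP)
iptables -I DOCKER-USER -s 172.20.0.0/16 -d 10.0.0.50 -j ACCEPT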
Well, I would say go with CIFS, as NFS is old; a few Linux/Unix distros have even dropped support for it.
NFS is the "Network File System", used specifically by Unix and Linux operating systems. It allows transparent file sharing between servers and end-user machines like desktops and laptops. NFS uses a client-server methodology to let a user view, read, and write files on a remote computer system, and a user can mount all or a portion of a file system via NFS.
CIFS is the abbreviation for "Common Internet File System", used by Windows operating systems for file sharing. CIFS also uses the client-server methodology, where a client makes a request to a server program to access a file, and the server takes the requested action and returns a response. CIFS is an open-standard version of the Server Message Block (SMB) protocol developed and used by Microsoft, and it uses the TCP/IP protocol.
If it's Linux <-> Linux I would choose NFS, but if it's Windows <-> Linux, CIFS would be the best option.
I am wondering if I can replace my virtual machine.
I usually use a Windows VM to get in, connect to my enterprise VPN, and do some work.
If containers are light enough, I would like to move to them from my VM.
Basically, I see containers being used like processes, not as something you log on to interactively.
Is this possible?
Regards,
No, stick with your VM. Complicated network setups ("launch a VPN within my container's network space") and desktop applications ("use my Web browser") are both bad matches for Docker, plus running any Docker command requires administrative access on the host, which you probably don't want just to access intranet content.
I am attempting to use etcd's remote API to configure a CoreOS box remotely with static values like the IP address, DNS resolver address, gateway, etc.
In theory I should be able to issue something like:
curl -X PUT "http://xxx.xxx.xxx.xxx:4001/v2/keys/etcd/registry/???_/_state?prevExist=false" -d value=10.10.10.1
But I can't find a reference to the exact syntax to use.
etcd doesn't handle configuration of the host system; it is a distributed key/value store. It can certainly store configuration for applications, and maybe even for the host, but you need some other tool to pull the data from the store and transform it into configuration that the application or host recognizes. The application I use to do this inside Docker containers is confd (https://github.com/kelseyhightower/confd).
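For reference, plain reads and writes against the v2 key/value API look like this (the host and key names are placeholders; port 4001 is etcd's legacy client port, as used in the question):

# store a value under a key, then read it back
curl -X PUT http://127.0.0.1:4001/v2/keys/network/gateway -d value=10.10.10.1
curl http://127.0.0.1:4001/v2/keys/network/gateway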
For configuration of the CoreOS host, you would generally use Cloud-Config (https://coreos.com/docs/cluster-management/setup/cloudinit-cloud-config/) and write unit files to deal with certain parts of the system, such as networking (https://coreos.com/docs/cluster-management/setup/network-config-with-networkd/). Hope this helps!
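As an illustrative sketch (the interface name and addresses are placeholders; see the linked docs for the authoritative format), a cloud-config that pins a static network configuration via a networkd unit might look like:

#cloud-config
coreos:
  units:
    - name: 10-static.network
      runtime: true
      content: |
        [Match]
        Name=eth0

        [Network]
        Address=10.10.10.1/24
        Gateway=10.10.10.254
        DNS=8.8.8.8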
I have an Apache/PHP/MySQL Bitnami install running in a VM on a Windows box. I can enter the IP of the VM into my host machine's browser and the site in progress comes up just fine.
However, I need to view the site in progress from another device/computer (mobile testing). How can this be done?
A bit late with this answer but I think it adds value...
Most VM software has what is commonly called a 'bridged' mode (it's actually called 'Bridged Networking' in VirtualBox; I don't recall what it's called in VMware). This in essence allows the VM to get its own independent IP on the LAN (just like a 'real' PC).
The downside is that it's not as secure (because the VM is totally exposed to the network), but the upside is that it's much quicker and easier to set up, because you don't need to muck around with port forwarding and there is no risk of ports conflicting with the host or other VMs.
For full details of networking options in VirtualBox see this page: http://www.turnkeylinux.org/docs/virtual-networking-explained
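If you prefer the command line, switching a powered-off VM's first adapter to bridged mode looks like this (the VM name and host adapter name are placeholders for your setup):

VBoxManage modifyvm "MyVM" --nic1 bridged --bridgeadapter1 eth0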
I don't know Bitnami, but virtualization software usually allows you to do port forwarding.
I use VirtualBox, and here is a good post describing how to set it up properly:
Virtualbox "port forward" from Guest to Host