Below is a snippet of the domain XML file I am using to create a SUSE Linux Enterprise Server guest with virsh. I want to know how I can retrieve this env value (TEST_KEY) from inside my guest VM after logging in.
<qemu:commandline>
<qemu:env name='MY_KEY' value='TEST_KEY'/>
</qemu:commandline>
You've misinterpreted what this XML configuration does, I'm afraid. It sets this environment variable when QEMU is executed on the host OS; it is not exposed inside the guest OS at all.
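If you want to verify where the variable actually ends up, you can inspect the environment of the running QEMU process on the host (as root). A rough sketch, assuming the libvirt domain is named sles15 (substitute your own domain name):
# find the QEMU process that libvirt started for the domain
pgrep -f 'guest=sles15'
# dump that process's environment and look for the variable
tr '\0' '\n' < /proc/<qemu_pid>/environ | grep MY_KEY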
How can I access or read a Windows environment variable in Kubernetes? I achieved this in a Docker Compose file.
How can I do the same in Kubernetes, given that I am unable to read the Windows environment variables?
Nothing in the standard Kubernetes ecosystem can be configured using host environment variables.
If you're using the core kubectl tool, the YAML files you'd feed into kubectl apply are self-contained manifests; they cannot depend on host files or environment variables. This can be wrapped in a second tool, Kustomize, which can apply some modifications, but that explicitly does not support host environment variables. Helm lets you build Kubernetes manifests using a templating language, but that also specifically does not use host environment variables.
You'd need to somehow inject the environment variable value into one of these deployment systems. With all three of these tools, you could include those in a file (a Kubernetes YAML manifest, a Kustomize overlay, a Helm values file) that could be checked into source control; you may also be able to retrieve these values from some sort of external storage. But just relaying host environment variables into a container isn't an option in Kubernetes.
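As a concrete illustration of that injection step, one common pattern is to render the value into a manifest yourself, or pass it to Helm explicitly at deploy time. A rough sketch with hypothetical file, chart, and variable names:
# substitute the host variable into a manifest template, then apply it
export MY_HOST_VAR=some-value
envsubst < deployment.yaml.tmpl | kubectl apply -f -
# or hand the value to Helm explicitly when installing the chart
helm install my-release ./my-chart --set config.myValue="$MY_HOST_VAR"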
I am providing a Docker container for my software that runs directly on the user's machine. The software is supposed to use a node-locked license bound to the MAC address of the host machine. FlexLM is used to validate the license.
The problem is that the Docker container does not, by default, have access to the host machine's MAC address. One has to either bind the container to the host's network using the --net argument or provide the MAC address explicitly using the --mac-address argument.
The trouble is that one can pass any value to --mac-address and the container will use that MAC address, which defeats the whole purpose of a node-locked license. How do I make sure that Docker always gets the host machine's real MAC address?
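For reference, the two options mentioned above look like this; a sketch with a placeholder image name and MAC address:
# share the host's network stack so the container sees the host's real interfaces
docker run --net=host my-licensed-app
# or hand the container an arbitrary MAC address (which is exactly the problem:
# any value can be passed here)
docker run --mac-address=02:42:ac:11:00:02 my-licensed-app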
Short answer: "there is currently no good solution for node-locking within a container. Everything is virtualized, so there is nothing safe to bind to."
Suggestion: Have you heard about Flexera's REST-based licensing API, also known as the Cloud Monetization API or CMAPI?
This API was designed for cloud-to-cloud license checking. It does not require the SDK libraries; you can call it from any language that can make a REST call. It makes for a very lightweight client, but it requires back-end functionality (FlexNet Operations and Cloud Licensing Service) to support it.
It's a great solution for applications deployed in a Docker container.
Take a look at the FlexNet Licensing datasheet here:
https://www.flexerasoftware.com/resources.html?type=datasheet
Then contact your account manager for more information.
Source - Flexera Customer Community - https://community.flexera.com/t5/FlexNet-Publisher-Forum/Support-for-Docker-and-Kubernetes/m-p/111022
Docker is a wonderful tool for running and deploying your application in a well-defined, controlled environment, and it is well supported by, for example, GitLab CI or MS Azure.
We would like to use it in the development phase as well, so that all developers have the same environment available. Of course, we want to keep the image as light as possible, and we do not want any IDE or other development tool inside it.
So the actual development takes place outside of Docker.
Running our (Python) application inside Docker is no problem, but debugging it is not trivial: I do not know of a way to attach a debugger to an application running inside Docker. In theory this should be possible, but how does one do it?
Additional info: we use Visual Studio Code, which does have a Docker plugin, but nothing of this sort is mentioned there.
It turns out that this is possible, following the same steps needed for remote debugging.
The IP address of the Docker container can be retrieved through:
docker inspect <container_id> | grep -i ip
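If you prefer to avoid the grep, docker inspect can also print just the address via a Go template:
docker inspect --format '{{range .NetworkSettings.Networks}}{{.IPAddress}}{{end}}' <container_id>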
Just be sure to add this at the beginning of your application:
import ptvsd

# Listen for a debugger on all interfaces, port 3000.
# secret=None means any client may attach without supplying a passphrase.
ptvsd.enable_attach(secret=None, address=('0.0.0.0', 3000))
# Block until a debugger actually attaches.
ptvsd.wait_for_attach()
'0.0.0.0' means listen on all interfaces.
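If the container's IP is not directly reachable from your host (for example with Docker Desktop), it may be easier to publish the debug port and attach to localhost instead; a sketch with a hypothetical image name:
docker run -p 3000:3000 my-python-app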
For VS Code, the last step consists of adapting the Python: Attach launch configuration, specifying the address and the remote and local roots for your script.
However, for some mysterious reason my breakpoints are ignored.
I'm trying to containerize a Web API project using the aspnet Docker image. My Web API needs to use a connection string with Data Source = <DB alias>. I can use cliconfg.exe to set the DB alias on Windows. However, I'm not able to set the DB alias in the aspnet Docker image, as it does not have the SQL client tools installed.
Is there a way I can install the SQL client tools in my Docker container so that I can deploy the Web API with a connection string that uses a DB alias?
I've had the same problem, especially because I use Linux. If you only want the alias to resolve as a host name for the MSSQL server, you can add a host alias to the container's /etc/hosts by adding this to the docker command line:
--add-host=myDbAlias:127.0.0.1
On Linux, you should just add hosts mappings for your aliases, and that's all.
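For example, the full command might look like this (the alias, target address, and image name are placeholders):
docker run --add-host=myDbAlias:127.0.0.1 my-webapi-image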
On Windows
Aliases should be placed in these branches of the Windows Registry on the client-side machine:
HKLM\Software\Microsoft\MSSQLServer\Client\ConnectTo
HKLM\Software\Wow6432Node\Microsoft\MSSQLServer\Client\ConnectTo
Inside, create a value named after your alias and set its data to "DBMSSOCN,YourRealServerName", where YourRealServerName is your real server name.
You could of course add your alias to the hosts file as well, but if you have no privilege to access system files, you can use the registry solution.
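If you prefer to script this rather than use the registry editor, a sketch with placeholder alias and server names:
reg add "HKLM\Software\Microsoft\MSSQLServer\Client\ConnectTo" /v MyDbAlias /t REG_SZ /d "DBMSSOCN,MyRealSqlServer"
reg add "HKLM\Software\Wow6432Node\Microsoft\MSSQLServer\Client\ConnectTo" /v MyDbAlias /t REG_SZ /d "DBMSSOCN,MyRealSqlServer"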
I am trying to host an application in Docker. The guide I am following says to use an 'env-file' in a particular way and lists all the parameters to include in it.
My question is: how do I edit the env-file in CentOS, and where will it be located?
Exporting the variables in a .sh file inside /etc/profile.d/ or in ~/.bash_profile would do the trick. Keep in mind that if you intend to use these environment variables in a service script, it might not work as you expect since service purges all environment variables except a few.
See https://unix.stackexchange.com/a/44378/148497.
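For instance, a file such as /etc/profile.d/myapp.sh (the file name and variables below are just examples) could contain:
# /etc/profile.d/myapp.sh -- sourced for login shells
export APP_DB_HOST=localhost
export APP_DB_PORT=5432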