I have some native Linux apps and some Wine apps that need to store configuration. I want to write the config data to ~/.config/some/path. But I'm not sure how to get to that directory from Wine.
I tried reading the Linux environment variables, but it doesn't work:
user@ubuntu:~$ echo $HOME
/home/user
user@ubuntu:~$ wine cmd
Microsoft Windows 10.0.17134
Z:\home\user\.wine>echo %HOME%
%HOME%
I also tried using ~, but that just creates a directory named ~ in the current working directory.
Any advice? How can I access the ~/.config directory from within a wine app?
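One approach to try (a sketch, not from the original thread): Wine maps the Unix filesystem root to the Z: drive by default, and winepath -w converts a Unix path to the corresponding Win32 path, so a launcher script can hand the config directory to the Windows app. myapp.exe and its --config option are hypothetical; pass the path however your application accepts it.
#!/bin/sh
# Create the Linux config directory, then start the Wine app with the
# equivalent Windows-style path (something like Z:\home\user\.config\some\path).
CONFIG_DIR="$HOME/.config/some/path"
mkdir -p "$CONFIG_DIR"
wine myapp.exe --config "$(winepath -w "$CONFIG_DIR")"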
Related
From a Windows Subsystem for Linux (v1) Alpine bash terminal, I would like to set an environment variable that gets passed into a Windows executable. Is there any way to do this?
example of what I was hoping would print "Hello, World!":
windows-10:~# export X=World
windows-10:~# cmd.exe /c 'echo Hello, %X%!'
Hello, %X%!
See answer from Philipe below.
Here is a copy of the pertinent info from https://learn.microsoft.com/en-us/windows/wsl/interop
Share environment variables between Windows and WSL
Available in Windows Insider builds 17063 and later.
Prior to 17063, the only Windows environment variable that WSL could access was PATH (so you could launch Win32 executables from under WSL).
Starting in 17063, WSL and Windows share WSLENV, a special environment variable created to bridge Windows and Linux distros running on WSL.
Properties of WSLENV:
It is shared; it exists in both Windows and WSL environments.
It is a list of environment variables to share between Windows and WSL.
It can format environment variables to work well in Windows and WSL.
There are four flags available in WSLENV to influence how that environment variable is translated.
WSLENV flags:
/p - translates the path between WSL/Linux style paths and Win32 paths.
/l - indicates the environment variable is a list of paths.
/u - indicates that this environment variable should only be included when running WSL from Win32.
/w - indicates that this environment variable should only be included when running Win32 from WSL.
Flags can be combined as needed.
Can you try this?
~$ export X=World
~$ export WSLENV=X/w
~$ cmd.exe /c 'echo Hello, %X%!'
Hello, World!
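For completeness, a sketch of the /p flag from the list above; MYDIR and the directory are made-up examples. WSLENV translates the value between Linux and Win32 path styles when the variable crosses over:
~$ export MYDIR=/mnt/c/Users/me/project
~$ export WSLENV=MYDIR/p
~$ cmd.exe /c 'echo %MYDIR%'
C:\Users\me\project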
How do I make a script to run another program after a file is created in a specific folder? Could be Windows or Linux. To be specific, I want to use rclone to move a file to a remote folder right after it is created. The program itself doesn't have a built-in monitoring function.
Could be Windows or Linux.
Linux: It's easy with inotifywait.
cd "a specific folder"
inotifywait -me close_write --format %f . | while read file
do rclone move "$file" …
done
Unfortunately this doesn't work with the Ubuntu Linux app from the Windows Store for files on mounted Windows filesystems.
I have an environment file named .env337_dev. I need to run this file to set the environment before running another command. How to run this file?
Inside the file, it contains several variables like this
export AB_HOME=/et/dev/abinitio/sit1/abinitio-V2 #/gcc3p32 # for 32-bit
export PATH=${AB_HOME}/bin:${PATH}
Apart from the . ./.env337_dev command, which will run the file and set the environment, is there any other way to run it?
Are you looking for the user-specific .bashrc (bash is the default shell on RHEL 6) or a system-wide /etc/profile.d/<something>.sh? For the first, you would edit $HOME/.bashrc and append a line like . .env337_dev (it is still run before any "regular" command, because .bashrc is Bash's standard personal initialization file). The second option suggests that you use an absolute path.
If this doesn't answer your question, a more specific question and/or more details would be very helpful.
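If you go with the first (per-user) option, here is a minimal sketch, assuming the file sits in your home directory:
# Append a source line to ~/.bashrc so every new interactive bash session
# loads the environment file, then load it into the current session as well.
echo '. "$HOME/.env337_dev"' >> ~/.bashrc
. "$HOME/.env337_dev"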
You tagged this ab-initio, so you should only be setting a very few environment variables, including:
export AB_HOME=<path-to-co>operating-system>
export PATH=$AB_HOME/bin:$PATH
If you are working with Ab Initio web applications:
export AB_APPLICATION_HUB=<path-to-application-hub>
export JAVA_HOME=<path-to-jdk>
export PATH=$JAVA_HOME/bin:$PATH
and specific settings for different applications, e.g.
export AB_MHUB_HOME=<path-to-metadata-hub-installation>
Typically you put those into the file .profile in your home directory, which shells evaluate at the start of a login session.
I run some installation scripts via docker, they change ~/.bashrc but then I need to source it to use installed commands in RUN instructions below.
I tried the obvious RUN . ~/.bashrc and got a /bin/sh: 13: /root/.bashrc: shopt: not found error.
I tried RUN . ~/.profile and got mesg: ttyname failed: Inappropriate ioctl for device
I do not want to use ENV instructions. The point of having external installation scripts is to use them in non-Docker environments, for example when running unit tests locally. ENV instructions would duplicate environment setup which is already done in installation scripts.
You should not try to set up shell dotfiles in Docker. Many typical code paths do not run them at all; for example:
# In a Dockerfile
CMD ["some", "command", "here"]
# From the command line
docker run myimage some command here
The Docker environment is, fundamentally, different from a standalone Linux system. In addition to shell dotfiles not being read, a "home directory" isn't really a Docker concept; and if you have a multi-part process, the standard on Docker is to run each part in a separate container, whereas on a standalone Linux system you could use the init system to keep all of the parts running together. If you're expecting things to work exactly the same with exactly the same installation scripts, a virtual machine would be a better technological match for what you're attempting.
("Inappropriate ioctl for device" also suggests that there are things in the dotfiles that strongly expect to be run from an actual terminal, which you don't necessarily have at docker build time.)
My generic advice here is:
If possible, install things in the "system" directories within the image and avoid needing custom environment variable settings. (Don't use a version manager like nvm or rvm; don't use a Python virtual environment.)
If you do have to set environment variables, ENV is the way to do it.
If you really can't do either of the above, you can set environment variables in an ENTRYPOINT script before launching the main process; but if it's important to you that variables show up in docker inspect or docker exec shells, they won't be set there.
(Also remember that each RUN command launches a new container with a totally new shell environment. You can RUN . .profile; foo, but the environment variable settings won't carry through to the next RUN line.)
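If you do end up needing the ENTRYPOINT approach, here is a minimal sketch; /opt/myapp/env.sh is a hypothetical stand-in for whatever your installation scripts actually write.
#!/bin/sh
# entrypoint.sh: load the environment produced by the installation scripts,
# then hand off to the container's main command (CMD or docker run arguments).
. /opt/myapp/env.sh
exec "$@"
And in the Dockerfile:
COPY entrypoint.sh /entrypoint.sh
RUN chmod +x /entrypoint.sh
ENTRYPOINT ["/entrypoint.sh"]
CMD ["some", "command", "here"]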
How can I persist environment variables for my Rails app hosted on Digital Ocean?
Locally I run a bash script with commands like export SERVICE_USERNAME='somekey'.
But when I try to manually run export SERVICE_USERNAME='somekey' while SSHing into the Ubuntu server, it only lasts for the session... which is not helpful for my app.
How can I persist environment variables on Digital Ocean? Is there a simple way?
You can either set your environment variables per user (in the user's ~/.profile or ~/.pam_environment files) or system-wide (in the /etc/environment file).
You can find more info in the Ubuntu documentation on environment variables.
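For the per-user route, a minimal sketch: append the variable to the deploy user's ~/.profile so it is set for future login (SSH) sessions.
echo 'export SERVICE_USERNAME="somekey"' >> ~/.profile
Log out and back in (or run . ~/.profile) for it to take effect in the current session.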
EDIT:
You could have your script do this (sudo doesn't apply to the >> redirection, and /etc/environment is read by PAM rather than a shell, so it takes plain KEY="value" lines without export):
echo 'SERVICE_USERNAME="somekey"' | sudo tee -a /etc/environment