ConEmu: pass env var to WSL bash terminal

I'm trying to get a task defined in ConEmu to run multiple instances of Ubuntu bash using the WSL layer of Windows 10.
I followed the examples to set up a task that splits the UI the way I want, and that part works great. My problem is that I'm trying to use environment variables to pass commands through to run after logging in, and I want different things to run in each pane.
Here is the task command I'm using:
set "STARTUP_CMD='gfp && make server' " & set "PATH=%ConEmuBaseDirShort%\wsl;%PATH%" & %ConEmuBaseDirShort%\conemu-cyg-64.exe --wsl -cur_console:p -cur_console:d:C:\xxx\yyy
On the Linux side I have code in my ~/.bash_aliases file that looks for the STARTUP_CMD env var and tries to execute it. I found code that can pull env vars from the Windows side, which is where the 'set' commands appear to be storing things. Problem is, Windows doesn't know what to do with these, and it tries to expand them when they are read, so it all blows up.
I had this working before, but had to wipe and rebuild my machine recently, and unfortunately didn't have the working command backed up anywhere.
I thought this was the recommended way to run bash with WSL, but I would rather have a way to send stuff directly to the Linux layer as env vars (or if someone has a better way to queue up different commands for each pane, I'm all for that too). Any help would be much appreciated.
Thanks!

Of course I find the answer right after posting the question... posting it here to help others who hit the same issue (or my future self, if I forget and have to wipe my machine again).
set "PATH=%ConEmuBaseDirShort%\wsl;%PATH%" & %ConEmuBaseDirShort%\conemu-cyg-64.exe --wsl -eSTARTUP_CMD="gfp && make server" -cur_console:p -cur_console:d:C:\xxx\yyy
You just have to prefix the env var you want with -e and pass it as a param to conemu-cyg. It goes through without any modification on the Windows side and you can read it just like any other env var on the Linux side.
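For reference, the receiving end can be a small hook in ~/.bash_aliases along these lines (a minimal sketch; the STARTUP_CMD name and example command come from the question, the guard and cleanup are my assumptions):

# Sketch: run whatever command the ConEmu task handed us, then clear it
if [ -n "$STARTUP_CMD" ]; then
    eval "$STARTUP_CMD"   # e.g. 'gfp && make server'
    unset STARTUP_CMD     # avoid re-running it in nested shells
fi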

My docker container keeps instantly closing when trying to run an image for bigcode-tools

I'm new to Docker, and I'm not quite sure how to deal with this situation.
So I'm trying to run a docker container in order to replicate some results from a research paper, specifically from here: https://github.com/danhper/bigcode-tools/blob/master/doc/tutorial.md
(image link: https://hub.docker.com/r/tuvistavie/bigcode-tools/).
I'm using a Windows machine, and every time I try to run the docker image (via docker run -p 80:80 tuvistavie/bigcode-tools), it instantly closes. I've tried running other images, such as getting-started, and that image doesn't close instantly.
I've looked at some other potential workarounds, like using -dit, but since the instructions require setting an alias/doskey for a docker run command, using the alias and chaining it with other commands multiple times ends up queuing containers, since the port is tied to the alias.
As in the instructions from the GitHub link, I'm trying to set an alias/doskey to make API calls to pull data, but I'm not getting any data, nor any errors, when performing the calls from the command prompt.
Sorry for the long question, and thank you for your time!
Going in order of the instructions:
0. I can run this; it added the image to my Docker Desktop.
1.
Since I'm using a Windows machine, I had to use 'set' instead of 'export'.
I'm not exactly sure what the $ is meant for in UNIX, and whether it has any significant meaning, but from my understanding the whole purpose is to create a directory named 'bigcode-workspace'.
Instead of 'alias', I needed to use doskey.
Since -dit prevented my image from instantly closing, I added that in as well, but I'm not 100% sure what it means. Running docker run (...) resulted in the docker image instantly closing.
When it came to using the doskey alias + another command, I've tried:
(doskey macro) (another command)
(doskey macro) ^& (another command)
(doskey macro) $T (another command)
This also seemed to be using a GitHub API call, so I also added --token=(github_token), but that didn't change anything either.
Because the later steps require expected data pulled from here, I am unable to progress any further.
It looks like this image is designed to be used as a command-line utility, so it should not run continuously; instead you invoke it via the docker-bigcode alias for each task.
$BIGCODE_WORKSPACE is an environment variable expansion here, so on a Windows machine it's %BIGCODE_WORKSPACE%. You might want to set this variable in Settings -> System -> About -> Advanced System Settings, because variables set with the SET command apply only to the current Command Prompt session. Or you can specify the path directly, without the environment variable.
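If you'd rather stay on the command line, setx is another way to make it persist (the path below is just a placeholder):

rem Current session only:
set BIGCODE_WORKSPACE=C:\bigcode-workspace

rem Persists for future sessions (takes effect in newly opened command prompts):
setx BIGCODE_WORKSPACE C:\bigcode-workspace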
As for the alias, I would just create a batch file with the following content:
docker run -p 6006:6006 -v %BIGCODE_WORKSPACE%:/bigcode-tools/workspace tuvistavie/bigcode-tools %*
This will run the specified command, appending the batch file's parameters at the end. You might need to add double quotes if the BIGCODE_WORKSPACE path contains spaces.
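Assuming you save it as something like docker-bigcode.bat somewhere on your PATH (the file name is just an example), it can then be called the way the tutorial uses the alias, with any arguments forwarded to the container via %*:

rem Everything after the batch file name is passed straight through
docker-bigcode <arguments from the tutorial> --token=(github_token)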

running pycharm interpreter using nvidia-docker2

I'm working on Ubuntu 20. I've installed Docker and nvidia-docker2. In PyCharm I've followed the JetBrains guide, but in the advanced steps it isn't consistent with what I see in my setup. I use PyCharm Professional 2022.2.
In this step:
In the run options I additionally put --runtime=nvidia and --gpus=all.
Step 4 finishes the same as in the guide (almost, but it doesn't seem to affect anything, more on that later), and in step 5 I manually put the path to the interpreter in the virtual environment I created using the Dockerfile.
That way I am able to run nvidia-smi and see the GPU correctly, but I don't see any of the packages I installed during the Dockerfile build.
There is another option to connect the interpreter a little differently, in which I do see the packages, but I can't run the nvidia-smi command, and torch.cuda.is_available() returns False.
The way is, instead of doing this as in the guide:
I press the little down arrow to the left of the Add Interpreter button and then click Show all:
after which I can press the + button:
and then I can choose Docker:
which will result in the difference in functionality mentioned above, and also in the path displayed (the first is the first remote interpreter, going top to bottom, and the second is the second, correspondingly):
Here, correspondingly, is the effect of the first and the second:
Here are the results of the interpreter run with the interpreter connected via the first method:
and here is the second:
Of the following code:
Here is the Dockerfile, if you want to take a look:
Has anyone configured this correctly and can help?
Thank you in advance.
P.S.: if I run the container from Services and enter its terminal, the nvidia-smi command works fine, importing torch works, and torch.cuda.is_available() returns True.
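For reference, the same sanity checks can be run from a plain shell against the image built from the Dockerfile (a sketch; the image name is a placeholder):

# Confirm GPU passthrough from the container
docker run --rm --gpus all my-pytorch-image nvidia-smi

# Confirm PyTorch sees CUDA
docker run --rm --gpus all my-pytorch-image python -c "import torch; print(torch.cuda.is_available())"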
P.S.2:
The thing that has worked for me for now is to change the Dockerfile to install torch directly with pip, without creating a conda environment.
Then I set the path to python2.7 and I can run the code, but not debug it.
For running, the result is as expected (the package list, as shown before, is still empty, but it works; I guess my IDE somehow cannot access the package list of the remote interpreter in that case, I don't know why):
But the debugger outputs the following error:
Any suggestions for the debugger issue also will be welcome, although it is a different issue.
Please update to 2022.2.1 as it looks like a known regression that has been fixed.
Let me know if it still does not work well.

docker install with tcp enabled 0.0.0.0

Wondering if anyone knows how to install Docker with TCP enabled? Something like below?
yum install docker --tcp-enabled --host 0.0.0.0
I understand I can go and manually change OPTIONS in /etc/sysconfig/docker.
I am trying to provision a server with a fresh Docker install through scripts, and I do not want to log onto the box and make these changes every time a new version comes out. I also understand I can just use a script with sed/awk to do this, but I'm wondering if there's an easier way, without having to maintain a script.
My preferred solution is to use /etc/docker/daemon.json. This will let you add options to just about any install.
Note that I don't believe this will unset options that were defined on the command line; it's designed to let you use both. Those command line options are defined by your startup script, which from your description is systemd on a RedHat/CentOS environment with environment variables injected from /etc/sysconfig/docker (you won't see this on other platforms like Debian). So if you need to remove an option, you'll still need to update your /etc/sysconfig/docker.
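As a sketch of what that could look like for the TCP listener asked about here (the port and keeping the local Unix socket are my assumptions), /etc/docker/daemon.json might contain:

{
    "hosts": ["unix:///var/run/docker.sock", "tcp://0.0.0.0:2375"]
}

One caveat: if the startup script already passes a host flag (-H) on the command line, the daemon may refuse to start when hosts is defined in both places, so that particular option may still need to come out of /etc/sysconfig/docker.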

Do I need to re-make and re-install couchdb everytime I want to test a change to the source?

I am trying to contribute more to the CouchDB code, but I really have no idea how it's done the right way.
I have cloned the source from apache git repository and built it with
./configure
make && sudo make install
Then I wanted to change a file from the source called couch_httpd_show.erl
Do I need to run make && sudo make install again for every change I make to the source code and want to see how it behaves?
I am sure there's a more practical way to do it, because this approach is a bit time- and patience-consuming, right?
Yes, there is a shortcut.
./configure
make dev
./utils/run
This builds and runs CouchDB entirely in the current directory. Instead of running as a background daemon, CouchDB will run in the foreground and output log messages to the terminal. It uses some local directories to store stuff: ./tmp/log for logs, ./tmp/lib for databases, and (if I remember correctly) ./etc/couch/local_dev.ini for configuration.
If you run this instead:
./utils/run -i
then you will also have an interactive Erlang prompt, which you can use to help debug.
When I work on CouchDB, I run this in the shell:
make dev && ./utils/run -i
After I change some code, I press ^C, up-arrow, return.
When I joined Couchio, I was responsible for production CouchDB deployments. I asked Chris Anderson for advice about something and he said, "Sorry, ask Jan. I've been just using utils/run for years!"
You can rebuild that one file and drop the output beam in place and restart.
erlc <file.erl>
and then copy the .beam file into place. To restart CouchDB, use either init:restart(). in the Erlang shell or POST /_restart to CouchDB.
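As a rough sketch of that one-file cycle (the include flags and install path are placeholders and will depend on your checkout and install prefix):

# Recompile just the one module (extra -I include paths may be needed)
erlc couch_httpd_show.erl

# Drop the .beam next to the installed ones (destination is a placeholder)
cp couch_httpd_show.beam /usr/local/lib/couchdb/erlang/lib/couch-*/ebin/

# Restart CouchDB over HTTP
curl -X POST http://127.0.0.1:5984/_restart -H "Content-Type: application/json"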
Although you might also want to consider running the command-line Erlang and JavaScript test suites to make sure you didn't break anything.

Ruby on Rails- how to run a bash script as root?

What I want to do is use 'button_to' & friends to start different scripts on a Linux server. Not all scripts will need to be root, but some will, since they'll be running "apt-get dist-upgrade" and such.
The PassengerDefaultUser is set to www-data in apache2.conf
I have already tried running scripts from the controller that do little things like writing to text files, etc, just so that I know that I am having Rails execute the script correctly. (in other words, I know how to run a script from the controller) But I cannot figure out how to run a script that requires root access. Can anyone give me a lead?
A note on security: thanks for all the warnings against hacking that were given. You don't need to lose any sleep, though, because A) the webapp is not accessible from the public internet, it will only be on private intranets, B) the app is password protected, and C) the user will not be able to supply custom input, only make selections from a form that will be passed as variables to the script. However, just because I say this does not mean I am disregarding your recommendations for security; I will be considering them very carefully in my design.
You should be using the setuid bit to achieve the same functionality without sudo, but you shouldn't be running Bash scripts. Setuid is much more secure than sudo. Using either sudo or setuid, you are running a program as root, but only setuid (with the help of certain languages) offers some added security measures.
Essentially you'll be using scripts that are temporarily allowed to run as the owner, instead of the user that invoked them. Ruby and Perl can detect when a script is run as a different user than the caller and enforce security measures to protect against unsafe calls. This is called taint mode. Bash does not run in taint mode at all.
Taint mode essentially works by declaring all input from an outside source unsafe for use when passed to a system call.
Setting it up:
Use chmod to set the script's permissions to 4755 and set its owner to root:
$ chmod 4755 script.rb
$ chown root script.rb
Then just run the script as you normally would. The setuid bit kicks in and runs the script as if it was run by root. This is the safest way to temporarily elevate privileges.
See Ruby's documentation on safe levels and taint to understand Ruby's sanitization requirements for protecting against tainted input causing harm, or the perlsec FAQ to learn how the same thing is done in Perl.
Again: if you're dead set on running scripts as root from an automated system, do not use Bash! Use Ruby or Perl instead. Taint mode forces you to take security seriously and can help you avoid many unnecessary problems down the line.
