I'm working on Ubuntu 20. I've installed Docker and nvidia-docker2. In PyCharm, I've followed the JetBrains guide, but the advanced steps aren't consistent with what I see in my setup. I use PyCharm Professional 2022.2.
In this step:
in the run options I additionally put --runtime=nvidia and --gpus=all.
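For reference, the same flags can be sanity-checked outside PyCharm with a plain docker run (the image name is a placeholder for the one built from my Dockerfile):

docker run --rm --runtime=nvidia --gpus=all <your-image> nvidia-smi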
Step 4 finishes almost the same as in the guide (the difference doesn't seem to affect anything; more on that later), and in step 5 I manually put the path to the interpreter in the virtual environment I created via the Dockerfile.
That way I am able to run nvidia-smi and see the GPU correctly, but I don't see any of the packages I installed during the Dockerfile build.
There is another, slightly different way to connect the interpreter, in which I do see the packages, but I can't run nvidia-smi and torch.cuda.is_available() returns False.
The way is instead of doing this as in the guide:
I press the little down arrow to the left of the Add Interpreter button and then click Show all:
After which I can press the + button:
and then I can choose Docker:
which results in the difference in functionality mentioned above, and also in the displayed path (the first is the first remote interpreter in top-to-bottom order, and the second is the second, correspondingly):
Here, of course, are the effects of the first and the second, correspondingly:
Here are the results of running with the interpreter connected via the first method:
and here is the second:
Of the following code:
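(The snippet itself was shown as an image; a minimal reconstruction of the kind of check involved, based on the description above:)

import torch
print(torch.cuda.is_available())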
Here is the Dockerfile if you want to take a look:
Has anyone configured this correctly and can help?
Thank you in advance.
P.S.: if I run the container from Services and enter its terminal, nvidia-smi works fine, importing torch works, and torch.cuda.is_available() returns True.
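For reference, the check I run in that terminal amounts to:

nvidia-smi
python -c "import torch; print(torch.cuda.is_available())"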
P.S.2:
What has worked for me for now is to change the Dockerfile to install torch directly with pip, without creating a conda environment.
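A minimal sketch of that change (the base image tag and torch version are assumptions; adapt them to your setup):

# assumed CUDA base image and torch build -- adapt to your setup
FROM nvidia/cuda:11.3.1-cudnn8-runtime-ubuntu20.04
RUN apt-get update && apt-get install -y python3 python3-pip
# install torch directly with pip instead of creating a conda environment
RUN pip3 install torch==1.12.1+cu113 --extra-index-url https://download.pytorch.org/whl/cu113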
Then I set the interpreter path to python2.7, and I can run the code, but not debug it.
For running, the result is as expected (the package list shown before is still empty, but it works; I guess my IDE somehow cannot access the remote interpreter's package list in that case, I don't know why):
But the debugger outputs the following error:
Any suggestions for the debugger issue will also be welcome, although it is a different issue.
Please update to 2022.2.1 as it looks like a known regression that has been fixed.
Let me know if it still does not work well.
Related
I'm new to Docker, and I'm not quite sure how to deal with this situation.
So I'm trying to run a docker container in order to replicate some results from a research paper, specifically from here: https://github.com/danhper/bigcode-tools/blob/master/doc/tutorial.md
(image link: https://hub.docker.com/r/tuvistavie/bigcode-tools/).
I'm using a Windows machine, and every time I try to run the Docker image (via docker run -p 80:80 tuvistavie/bigcode-tools), it instantly closes. I've tried running other images, such as getting-started, but that image doesn't close instantly.
I've looked at some other potential workarounds, like using -dit, but since the instructions require setting an alias/doskey for a docker run command, using the alias and chaining it with other commands multiple times ends up queuing docker containers, since the port is tied to the alias.
Like in the instructions from the GitHub link, I'm trying to set an alias/doskey to make API calls to pull data, but I am not getting any data, nor any errors, when performing the calls at the command prompt.
Sorry for the long question, and thank you for your time!
Going in order of the instructions:
0. I can run this; it added the image to my Docker Desktop.
1.
Since I'm using a Windows machine, I had to use 'set' instead of 'export'.
I'm not exactly sure what the $ is meant for in UNIX, and whether or not it has significant meaning, but from my understanding, the whole purpose is to create a directory named 'bigcode-workspace'
Instead of 'alias,' I needed to use doskey.
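For reference, the doskey macro I ended up with looks roughly like this ($* forwards any extra arguments to the container; the port comes from the tutorial):

doskey docker-bigcode=docker run -p 6006:6006 -v %BIGCODE_WORKSPACE%:/bigcode-tools/workspace tuvistavie/bigcode-tools $*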
Since -dit prevented my image from instantly closing, I added that in as well, but I'm not 100% sure what it means. Running plain docker run (...) resulted in the image instantly closing.
When it came to using the doskey alias + another command, I've tried:
(doskey macro) (another command)
(doskey macro) ^& (another command)
(doskey macro) $T (another command)
This also seemed to be using a GitHub API call, so I added --token=(github_token), but that didn't change anything either.
Because the later steps require the expected data pulled from here, I am unable to progress any further.
It looks like this image is designed to be used as a command-line utility, so it should not run continuously; instead, you run it via your docker-bigcode alias for each task.
$BIGCODE_WORKSPACE is an environment variable expansion here, so on a Windows machine it's %BIGCODE_WORKSPACE%. You might want to set this variable in Settings->System->About->Advanced System Settings, because variables set with the SET command apply only to the current command prompt session. Or you can specify the path directly, without an environment variable.
As for the alias, I would just create a batch file with the following content:
docker run -p 6006:6006 -v %BIGCODE_WORKSPACE%:/bigcode-tools/workspace tuvistavie/bigcode-tools %*
This will run the specified command, appending the batch file's parameters at the end. You might need to add double quotes if the BIGCODE_WORKSPACE path contains spaces.
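For example, assuming you save it as docker-bigcode.bat somewhere on your PATH (the file name is my suggestion, not from the tutorial):

set BIGCODE_WORKSPACE=C:\bigcode-workspace
docker-bigcode <arguments from the tutorial>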
I'm trying to get a task defined in ConEmu to run multiple instances of Ubuntu bash using the WSL layer of Windows 10.
I followed the examples to set up a task to split the UI the way I want, and that part works great. My problem is that I'm trying to use environment variables to pass through commands to run after logging in, and I want different things to run in each panel.
Here is the task command I'm using:
set "STARTUP_CMD='gfp && make server' " & set "PATH=%ConEmuBaseDirShort%\wsl;%PATH%" & %ConEmuBaseDirShort%\conemu-cyg-64.exe --wsl -cur_console:p -cur_console:d:C:\xxx\yyy
On the Linux side, I have code in my ~/.bash_aliases file that looks for the STARTUP_CMD env var and tries to execute it. I found code that can pull env vars from the Windows side, which is where the 'set' commands appear to be storing things. The problem is that Windows doesn't know what to do with these and tries to expand them when they are read, so it all blows up.
I had this working before, but had to wipe and rebuild my machine recently, and unfortunately didn't have the working command backed up anywhere.
I thought this was the recommended way to run bash with WSL, but I would rather have a way to send stuff directly to the Linux layer as env vars (or if someone has a better way to queue up different commands for each pane, I'm all for that too). Any help would be much appreciated.
Thanks!
Of course I find the answer right after posting the question... posting it here to help others that hit the same issue (or my future self, if I forget and have to wipe my machine again).
set "PATH=%ConEmuBaseDirShort%\wsl;%PATH%" & %ConEmuBaseDirShort%\conemu-cyg-64.exe --wsl -eSTARTUP_CMD="gfp && make server" -cur_console:p -cur_console:d:C:\xxx\yyy
You just have to prefix the env var you want with -e and pass it as a parameter to conemu-cyg. It goes through without any modification on the Windows side, and you can read it just like any other env var on the Linux side.
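For completeness, the consuming side in ~/.bash_aliases can be as simple as this sketch:

# execute the command handed over from the ConEmu task, if any
if [ -n "$STARTUP_CMD" ]; then
    eval "$STARTUP_CMD"
fi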
I'm using wkhtmltopdf with Node.js and followed the instructions for Windows installation (and added it to PATH after installation). When I start my app through bash, it works just fine: I manage to convert HTML to PDF.
But it doesn't work when I'm using Docker; it's as if it doesn't even exist. I'm assuming there is some other way to install it for Docker, or some way to add it to PATH inside Docker? Any other ideas? Hints?
And before you say it: I've been googling and looking at images and installation guides for Docker, and none helped. Got one that you know works?
Anyway, for all the others that find themselves in the same pickle: I was trying to use wkhtmltopdf inside a Docker container while wkhtmltopdf was only installed and executable in the host (Windows/Linux) environment, not in the actual Docker environment. After updating the Dockerfile to install wkhtmltopdf during the build, I also had to SET THE PATH. For a Linux Docker image, something like this:
cp wkhtmltox/bin/* /usr/local/bin/
That made everything work just as it should.
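For reference, a fuller sketch of what that Dockerfile section can look like on a Debian-based image (the release URL and version are assumptions; pick the build matching your base image):

# install wkhtmltopdf during the image build (version/URL assumed)
RUN apt-get update && apt-get install -y wget \
 && wget -q https://github.com/wkhtmltopdf/packaging/releases/download/0.12.6.1-2/wkhtmltox_0.12.6.1-2.bullseye_amd64.deb \
 && apt-get install -y ./wkhtmltox_0.12.6.1-2.bullseye_amd64.deb \
 && rm wkhtmltox_0.12.6.1-2.bullseye_amd64.deb
# the binaries land in /usr/local/bin, which is already on PATH in
# standard Linux images; extend PATH explicitly if yours differs
ENV PATH="/usr/local/bin:${PATH}"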
Hello, I'm trying to create the initial flash/build for IoT development following this tutorial: https://developer.android.com/things/hardware/imx7d.html#flashing_the_image
I'm sorry if my question is too broad; this is my first IoT attempt, but it seems to me like I have a wrong setup, because I'm constantly running into new errors.
I'm stuck at step 2.4, Execute the flash-all.sh. Running
sudo ./flash-all.sh
I got this in my logs:
./flash-all.sh: line 52: ./u-boot.imx: Permission denied
If I change permissions
chmod 777 u-boot.imx
I got
./flash-all.sh: line 52: ./u-boot.imx: cannot execute binary file:
Exec format error
I already solved several other issues which weren't described in the tutorial, including:
I have to run the script as sudo; otherwise I get
< waiting for any device >
I had to rewrite the fastboot command as $(which fastboot) inside flash-all.sh (same with flash and bootloader); otherwise the commands are unknown, even though I added them to PATH
I am using
Ubuntu 16.14,
Android Studio with SDK 26 installed,
Pico Pro Maker Kit with Pico i.MX7 Dual Development Board
What am I doing wrong?
I had to rewrite the fastboot command as $(which fastboot) inside flash-all.sh (same with flash and bootloader); otherwise the commands are unknown, even though I added them to PATH
This seems like it might be the root of the problem, as somehow the subsequent lines for each command are not being parsed as arguments for fastboot, but rather as their own executable commands.
You also shouldn't need to run the script with sudo. In fact, sudo may be the reason you can run which fastboot successfully in your own shell (which would indicate it's in your PATH) while the script cannot see it: sudo runs with a reset PATH.
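To illustrate the suspected failure mode (a guess, not the actual script contents; the partition name is made up): with the trailing backslash, the two lines below are one fastboot command; if that continuation is lost, the second line runs on its own, which reproduces your error:

fastboot flash bootloader \
./u-boot.imx
# without the backslash, the shell executes ./u-boot.imx itself:
# ./u-boot.imx: cannot execute binary file: Exec format error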
So I'm trying to run OpenAI Gym in a Docker container, but it looks like this:
Notice the Pong window has a weird render issue: it's repeating things and the colors are off. Here is Space Invaders:
NOTE FOR "NOT A PROGRAMMING ISSUE" PEOPLE: The solution involves the correct bash script code to call the right API methods to render the arrays of pixels correctly. Also only a graphics programmer is likely to "recognize the render glitch".
My setup is very simple.
- I'm on a local Ubuntu 16.04 install with an Nvidia GTX 1060 and a Core i7
- I installed the Nvidia runfile driver with --no-opengl-files (as per instructions from Nvidia and many places); see the command sketch after this list
- Specifically, I'm running the floydhub/pytorch Docker image
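A sketch of that driver install step (the runfile version is a placeholder):

sudo sh ./NVIDIA-Linux-x86_64-<version>.run --no-opengl-files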
Does anyone recognize the particular render glitch and what it could mean? It almost looks like a StackOverflow of a frame buffer! What can I do to track down the bug?
EDIT: I have eliminated all the extra dependencies I had been installing and am just doing simple x-forwarding according to the ROS GUI guide.
You can easily reproduce this as follows:
docker run -it --user=$(id -u) --env="DISPLAY" --workdir="/home/$USER" --volume="/tmp/.X11-unix:/tmp/.X11-unix:rw" floydhub/pytorch:0.1.11-gpu-py3.6 bash
Now in the image, type python and then the following:
import gym
gym.make('Pong-v0').render()
That should open up an X-forwarded window on your machine, but the display is corrupt (at least for me).
Above I actually used SpaceInvaders-v0.
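To keep the window open and animate frames rather than rendering a single one, a typical loop for the Gym API of that era (a sketch):

import gym

env = gym.make('Pong-v0')
env.reset()
for _ in range(200):
    env.render()                          # draw the current frame in the X window
    env.step(env.action_space.sample())   # advance the game with a random action
env.close()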