I'm brand new to Stack Overflow and the world of containers, so hopefully my questions aren't too silly.
So first I will say that I'm aware that there are other questions similar to the one I'm asking, but I've tried the solutions in all of the ones I've found and they haven't worked for me. If there is another question out there that does have the answer, I'm really sorry for double-asking!
So, background info: I've got a Raspberry Pi 3 running Raspbian, with Docker freshly installed. I'm able to pull images down from repositories with no real issues, but I can't run any of them; I always get the same error (the title of my question). Someone pointed out that it might be because the repositories mostly hold 64-bit images and I'm running a 32-bit machine, which I thought was the problem. But then I pulled a 32-bit Debian image (the first thing I could find that was 32-bit) and tried to do docker run with the image ID, and it still comes up with that error.
What else may cause that error? Or maybe it's the fact that I'm doing it on a Pi...? Open to anything!
Thanks in advance!
I have had similar issues when I tried to run Docker images on a Raspberry Pi. Most Docker images are built for the x86/x64 architecture; to run on a Raspberry Pi you need images packaged specifically for ARM. Hypriot (based on Debian) is one of the Raspberry Pi OS images built to run the latest Docker, and they also publish images specifically built for ARM; search for hypriot on Docker Hub. You may still be able to run those images with your current Docker installation, though I have not tried that.
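If it helps, here is a quick way to check what you are dealing with; arm32v7/debian is the 32-bit ARM variant of the official Debian image on Docker Hub, and the exact image name is just an example:

# On the Pi: check the host architecture (a Pi 3 on 32-bit Raspbian reports armv7l)
uname -m

# Check which architecture a pulled image was built for
docker inspect --format '{{.Architecture}}' debian

# Pull and run an image published for 32-bit ARM instead
docker run --rm arm32v7/debian uname -m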
I am facing an issue with Docker Desktop: it keeps crashing after 15-20 minutes.
Docker Desktop 4.3.2
Docker 20.10.11
Windows 10 PRO
reference link: https://github.com/docker/for-win/issues/12477
I ran into that problem too, but only when starting a relatively big image/container, and I didn't find many useful solutions in the community.
One tutorial I followed suggests adding a config file to limit the RAM that WSL 2 can use and then restarting the PC, and that seems to work.
Press Win+R, enter '%UserProfile%' to open your user directory in Windows, create a file named '.wslconfig', and edit it as follows:
[wsl2]
memory=8GB
swap=0
localhostForwarding=true
Adjust the 'memory' value to whatever suits your laptop; I used 4GB on mine. Then restart the PC.
I'm new to Docker and don't know whether this is the right solution, but it seems to work on my laptop :D Hope this helps someone who hits the same problem.
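As a small addition on my part (not part of the tutorial I followed): instead of rebooting the whole PC, you can usually just shut WSL down from PowerShell and restart Docker Desktop so the new .wslconfig is picked up:

# From PowerShell: stop all WSL 2 VMs so the new .wslconfig is re-read
wsl --shutdown
# Then start Docker Desktop again; the Vmmem process in Task Manager
# should now stay within the memory limit you configured.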
The one listed on https://hub.docker.com/_/tomcat is based on Debian. Where can I get a RHEL-based image? Or is there a way I can create it myself?
I am currently working on RHEL 7.6 and have Docker installed on my machine.
You have to build it yourself, because RHEL is proprietary and therefore underrepresented on Docker Hub. You could go for a CentOS version though, which is almost identical.
Note: RHEL would be considered an extremely unusual choice for a container OS. Are you sure you're doing the right thing? If this is a rule given to you by your employer then it's wrong and you should go fix that instead; it'll be easier than trying to build RHEL containers.
You could take a look at this as a starting point for ideas on how to build it yourself: https://github.com/sclorg/rhscl-dockerfiles/blob/master/centos7.python27/Dockerfile.rhel7
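To give a rough idea of the do-it-yourself route on CentOS, a minimal sketch might look like the lines below; the Tomcat version, download URL and JDK package name are placeholders you would need to check and adjust:

# Hypothetical sketch: Tomcat on a CentOS 7 base (adjust versions and URLs as needed)
FROM centos:7

# Install a JDK and curl from the distro repositories
RUN yum install -y java-1.8.0-openjdk-devel curl && yum clean all

# Download and unpack Tomcat (version and mirror URL are placeholders)
ENV CATALINA_HOME=/usr/local/tomcat
RUN curl -fsSL https://archive.apache.org/dist/tomcat/tomcat-9/v9.0.65/bin/apache-tomcat-9.0.65.tar.gz \
    | tar xz -C /usr/local && mv /usr/local/apache-tomcat-9.0.65 $CATALINA_HOME

EXPOSE 8080
CMD ["/usr/local/tomcat/bin/catalina.sh", "run"]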
I am looking for a way to set up or modify an existing Docker image for installing TensorFlow such that the SSE4, AVX, AVX2, and FMA instructions can be utilized for CPU speed-up. So far I have found how to install from source using Bazel (How to Compile Tensorflow... and CPU instructions not compiled...), but neither of these explains how to do it within Docker. So I think what I am looking for is what you need to add to an existing Docker image (one that installs without these options) so that you get a version of TensorFlow compiled with the CPU options enabled. The existing Docker images do not do this because they want the image to run on as many machines as possible.

I am using Ubuntu 14.04 on a Linux PC. I am new to Docker, but I have installed TensorFlow natively and have it working without the CPU warnings I get when I use the Docker images. I may not need this for speed, but I have seen posts claiming the speed-up can be significant. I searched for existing Docker images that do this and could not find anything. I need this to work with GPU, so it needs to be compatible with nvidia-docker.
I just found this Docker support for Bazel and it might provide an answer, but I do not understand it well enough to know for sure. I believe it is saying that you cannot build TensorFlow with Bazel inside a Dockerfile; you have to build a Dockerfile using Bazel. Is my understanding correct, and is this the only way to get a Docker image with TensorFlow compiled from source? If so, I could still use help on how to do it while keeping the other dependencies I would get from an existing TensorFlow Docker image.
Dockerfiles that build with CPU support can be found here.
Hope that helps! Spent many a late night here on Stack Overflow and Github Issues and stuff. Now it's my turn to give back! :)
The GPU stuff in particular is really hairy - especially when enabling the XLA/JIT/AOT stuff as well as the Graph Transform Tools.
Lots of hacks embedded in my Dockerfiles. Feel free to review and ask me questions!
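For a rough idea of the shape of such a build, here is a hedged sketch of a Dockerfile that rebuilds TensorFlow inside the devel image with the CPU extensions enabled; the configure step, Bazel flags and paths vary between TensorFlow releases, so treat it as a starting point rather than a working recipe:

# Hypothetical sketch: rebuild TensorFlow with CPU extensions inside the devel image
FROM tensorflow/tensorflow:nightly-devel

# The devel images keep a TensorFlow source checkout at /tensorflow
WORKDIR /tensorflow

# Run ./configure non-interactively, accepting defaults (prompts differ per release)
RUN yes "" | ./configure

# Build the pip package with the CPU instruction sets enabled, then install it
RUN bazel build -c opt --copt=-msse4.2 --copt=-mavx --copt=-mavx2 --copt=-mfma \
        //tensorflow/tools/pip_package:build_pip_package && \
    bazel-bin/tensorflow/tools/pip_package/build_pip_package /tmp/pip && \
    pip install --upgrade /tmp/pip/tensorflow-*.whl

For GPU you would start from the corresponding -gpu devel image, add --config=cuda to the Bazel invocation, and run the resulting image under nvidia-docker.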
The contributing guidelines mention building TensorFlow from source with Docker to run the unit tests:
Refer to the CPU-only developer Dockerfile and GPU developer Dockerfile for the required packages. Alternatively, use the said Docker images, e.g., tensorflow/tensorflow:nightly-devel and tensorflow/tensorflow:nightly-devel-gpu, for development to avoid installing the packages directly on your system.
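A hedged example of that workflow, mounting a local TensorFlow checkout into the devel container (the mount path and test target here are illustrative):

# Start a shell in the CPU development image with a local checkout mounted
# (use nvidia-docker and the -gpu tag instead if you need GPU support)
docker run -it -v $PWD:/mnt/tensorflow -w /mnt/tensorflow \
    tensorflow/tensorflow:nightly-devel bash

# Inside the container, run the unit tests (or a narrower target) with Bazel
bazel test //tensorflow/python/...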
I've been doing a lot of Raspberry Pi work, but that means I have to carry my Pi around (or SSH home), and well, the Pi isn't the fastest in the world. I've been using Docker for running things like Postgres, and was thinking it would be awesome to just download a Docker image of the ARM build of Debian Jessie and have everything function as if it were actually running on a real rPi. Even better if I could then somehow quickly mirror this to an SD card and throw it into a real rPi.
Has anyone explored this? Everything I'm finding is about running Docker on the rPi, not running Docker to emulate an rPi.
Based on the answers and comments to similar questions (such as this one on the Raspberry Pi Stack Exchange site), I think the short answer is "no", or at least not without a lot of effort.
Your problem is that, as mentioned in the comments, Docker doesn't do full-on virtualisation (that's kind of the point of it), so you can't take an ARM Raspbian Docker image and run it on an x86 VirtualBox host, which is what it sounds like you'd like to do.
The Docker image needs to be built for the same architecture as the host system; you get the same problem if you try to run x86 Docker images on a Raspberry Pi acting as a Docker host.
By way of a solution, what I'd suggest is running a Debian VM on your Mac. Raspbian is close enough to Debian that you'll have a fairly "Pi-like" environment to develop in, and you can copy your code to an SD card when you're done.
If you want an easy way to manage the configuration so that the number of cores, RAM, disk space etc matches your Pi, then Vagrant may be a good solution.
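For example, a minimal Vagrantfile sized roughly like a Pi 3 (4 cores, 1 GB of RAM) might look like this; the box name and numbers are illustrative:

# Hypothetical Vagrantfile: a Debian Jessie VM sized roughly like a Pi 3
Vagrant.configure("2") do |config|
  config.vm.box = "debian/jessie64"
  config.vm.provider "virtualbox" do |vb|
    vb.cpus   = 4
    vb.memory = 1024   # MB, roughly matching the Pi 3's 1 GB
  end
end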
My question is a little vague, but I have tried looking for the answer here and there and could not work out whether I can leverage Docker for my work. My requirements:
I usually try different versions of Java, Python and other software, like different versions of Eclipse, Linux packages and other tools. In the end this makes my Ubuntu installation a complete mess and sometimes completely broken. I then started using VMs, which solved most of the problem but made my PC very slow for frequent switching.
So my question: can I do my work using Docker without affecting my OS? Can I run GUI applications and install different packages without affecting the underlying OS?
Switch actively between different Docker containers and the underlying OS.
Clean/remove unused or broken Docker instances (containers?) etc. Any pointer to a similar use case or how-to would be helpful.
Thanks.
PS: if this doesn't fit on SO then please move it to wherever it fits best. Sorry for the non-programming question.
Can it be done?
Yes, there are examples of Docker images that run graphical applications, but running those containers can be a bit tricky. See for instance Can you run GUI apps in a docker container?
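For reference, the usual trick on a Linux host is to share the X11 socket with the container; a rough sketch, in which the image and application names are placeholders:

# Allow local containers to connect to your X server (this loosens X security)
xhost +local:

# Run a GUI app from a container against the host's display
docker run --rm -e DISPLAY=$DISPLAY \
    -v /tmp/.X11-unix:/tmp/.X11-unix \
    my-gui-image some-gui-app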
Is Docker the right tool for your problem?
Maybe a package manager such as Nix would be better suited, as graphical software installed with it won't have any of these issues. With Nix you can install many versions of a single piece of software side by side without interference.
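For example, you can drop into a temporary shell with a specific toolchain without touching the system-wide install; the package attribute names below come from nixpkgs and may differ between channel versions:

# Temporary shell with one toolchain, discarded when you exit
nix-shell -p openjdk11 python3

# Another shell with a different version, side by side, without touching the OS
nix-shell -p openjdk8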