I'm trying to share my setup on a Raspberry Pi 4 with my teammates so that they can all use the same setup I have now.
So far, on my Raspberry Pi, I have installed TensorFlow and the files for object detection, and I have also set up a web server with APM.
I heard that I can share this whole setup by making a Docker image of it, but I don't know how.
I've tried pulling the TensorFlow and APM images from Docker Hub, running them all in one container, and then sharing that container as an image, but then I realized I wouldn't be able to share the files I have for object detection.
Can anyone please explain how to make a Docker image of the entire setup on my Pi?
If you want to share your whole Raspberry Pi setup, you have to share an image of your SD card.
Such an image contains the whole filesystem and is normally a few gigabytes in size, so it's not practical to send a new one to your teammates for every little change.
There are different ways to create such an image; just google for "create image sd card".
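For example, on a Linux machine you could create and compress one with dd, as in this minimal sketch (the device name /dev/mmcblk0 and the file name are assumptions and depend on your system):
sudo dd if=/dev/mmcblk0 of=pi-backup.img bs=4M status=progress
gzip pi-backup.img   # compress before sharing; the result is still large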
A Docker image is similar, but contains much less data: Docker images do not include the whole operating system.
If you want to share your Docker image, your teammates have to set up their Pis first.
And you need a registry where you can upload your images (and your mates can download them). A good platform is https://hub.docker.com
It sounds to me like you mainly want to share code. For code sharing I recommend git; there are some good tutorials for https://github.com
My hint: set up one Pi with all the things you need (TensorFlow, git, ...).
You can share this SD card image with your mates once.
Store the code of the project on GitHub. When you update the code, your mates can get the updates with git pull; these changes are normally very small, not gigabytes for a whole image (see the sketch below).
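A minimal sketch of that git workflow, assuming a hypothetical repository https://github.com/your-team/object-detection and a main branch:
# on your Pi: publish the code once
git init
git add .
git commit -m "initial object detection setup"
git remote add origin https://github.com/your-team/object-detection.git
git push -u origin main

# on a teammate's Pi: clone once, then only pull the changes later
git clone https://github.com/your-team/object-detection.git
git pull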
If you train big, complex models and want to share them, then a Docker image is the best choice.
In this case, share a Raspberry Pi SD card image that already has Docker installed. Docker images are split into different layers; with a good design, these layers are small and easy to share.
To answer the questions in your comment:
You can install TensorFlow in a Docker image. There are good instructions on their site: https://www.tensorflow.org/install/docker
I would recommend a Dockerfile like this:
# latest stable release of the TensorFlow image
FROM tensorflow/tensorflow
# https://pip.pypa.io/en/stable/user_guide/#requirements-files
COPY requirements.txt requirements.txt
RUN pip install -r requirements.txt && rm -f requirements.txt
COPY src/ /app/
WORKDIR /app
ENTRYPOINT ["python"]
CMD ["-u", "yourscript.py"]
Every time you change your code, you can rebuild the image with docker build . -t <IMAGE-NAME>.
Then you have to push the image somewhere (like hub.docker.com): docker push <IMAGE-NAME>
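Putting it together, the sharing workflow could look roughly like this (a sketch; the image name yourname/pi-object-detection is a hypothetical placeholder for a repository under your Docker Hub account):
# on your Pi: build and publish
docker build . -t yourname/pi-object-detection
docker login
docker push yourname/pi-object-detection

# on a teammate's Pi: download and run
docker pull yourname/pi-object-detection
docker run --rm -it yourname/pi-object-detection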
Is there a simple way of converting my Docker image to a Cloud Foundry droplet?
What did not work:
docker save registry/myapp1 | gzip > myapp1.tgz
cf push myapp1 --droplet myapp1.tgz
LOG: 2021-02-13T12:36:28.80+0000 [APP/PROC/WEB/0] OUT Exit status 1
LOG: 2021-02-13T12:36:28.80+0000 [APP/PROC/WEB/0] ERR /tmp/lifecycle/launcher: no start command specified or detected in droplet
If you want to run your docker image on Cloud Foundry, simply run cf push -o <your/image>. Cloud Foundry can natively run docker images so long as your operations team has enabled that functionality (not a lot of reason to disable it) and you meet the requirements.
You can check whether Docker support is enabled by running cf feature-flags and looking for the line diego_docker enabled. If it says disabled, talk to your operations team about enabling it.
By doing this, you don't need to do any complicated conversion. The image is just run directly on Cloud Foundry.
This doesn't 100% answer your question, but it's what I would recommend if at all possible.
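For example, using the names from the question (a sketch; the feature-flag check is the one described above):
cf feature-flags                    # look for the line: diego_docker   enabled
cf push myapp1 -o registry/myapp1   # run the Docker image directly, no droplet conversion needed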
To try and answer your question, I don't think there's an easy way to make this conversion. The output of docker save is a bunch of layers. This is not the same as a droplet, which is an archive containing some specific folders (app bits + what's installed by your buildpacks). I suppose you could convert them, but there's not a clear path to doing this.
The way Cloud Foundry uses a droplet is different and more constrained than a Docker image. The droplet gets extracted into /home/vcap on top of an Ubuntu Bionic (cflinuxfs3) root filesystem, and the app is then run out of there. Thus your droplet can only contain files that will go into this one place in the filesystem.
For a Docker image, you can literally have a completely custom file system.
So given that difference, I don't think there's a generic way to take a random Docker image and convert it to a droplet. The best you could probably do is take some constrained set of Docker images, like those built from Ubuntu Bionic using certain patterns, extract the files necessary to run your app, stuff them into directories that will unpack on top of /home/vcap (i.e. something that resembles a droplet), tar-gzip it, and try to use that.
Starting with the output of docker save is probably a good idea. You'd then just need to extract the files you want from the tar archive of the layers (i.e. dig through each layer, which is itself another tar archive, and extract its files), then move them into a directory structure that resembles this:
./
./deps/
./profile.d/
./staging_info.yml
./tmp/
./logs/
./app/
where ./deps is typically where buildpacks install required dependencies, ./profile.d/ is where you can put scripts that will run before your app starts, and ./app is where your app (most of your files) will end up.
I'm not 100% sure the staging_info.yml is required, but it basically breaks down to {"detected_buildpack":"java","start_command":""}. You could fake detected_buildpack by setting it to anything; start_command is obviously the command used to run your app (you can override this later, though).
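A rough, untested sketch of that manual conversion, assuming the classic docker save layout (one layer.tar per layer directory); the image name, start command and file names are placeholders:
docker save registry/myapp1 > image.tar
mkdir image && tar -xf image.tar -C image
mkdir -p droplet/app droplet/deps droplet/profile.d droplet/tmp droplet/logs
# each layer is itself a tar archive; pull out only the files your app actually needs
for layer in image/*/layer.tar; do
  tar -xf "$layer" -C droplet/app
done
printf '{"detected_buildpack":"none","start_command":"./myapp"}' > droplet/staging_info.yml
tar -C droplet -czf myapp1-droplet.tgz .
cf push myapp1 --droplet myapp1-droplet.tgz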
I haven't tried doing this because cf push -o is much easier, but you could give it a shot if cf push -o isn't an option.
I don't really understand something basic about Docker, specifically whether I can build from multiple bases within the same Dockerfile. For example, I know that these two lines wouldn't work:
FROM X
FROM Y
(Well, it would build, but then it seems to only include the image from X in the final result.) Or perhaps I'm wrong and this is correct, but I still haven't seen any other Dockerfiles like this.
Why would I want to do this? For instance, if X and Y are images I found on DockerHub that I would like to build from. For a concrete example if I wanted ubuntu and I also wanted python:
FROM python:2
FROM ubuntu:latest
What is the best way to go about it? Am I just limited to one base? If I want the functionality from both, am I supposed to dig into the Dockerfiles until I find something common to both, and build the image myself by manually copying one Dockerfile's code all the way through its parent images until I reach the common base, then adding those lines to the other Dockerfile? I imagine this is not the correct way to do it, as it seems quite involved and not in line with the simplicity that Docker aims to provide.
For a concrete example if I wanted ubuntu and I also wanted python:
FROM python:2
FROM ubuntu:latest
Ubuntu is an OS, not Python, so what you need is a base image that has Python installed.
If you check the official Python images on Docker Hub, they are based on Debian (a close relative of Ubuntu), so with one image you get an OS plus Python. Then why bother with two FROM lines, which does not work anyway? As the image documentation puts it:
Some of these tags may have names like buster or stretch in them.
These are the suite code names for releases of Debian and indicate
which release the image is based on. If your image needs to install
any additional packages beyond what comes with the image, you'll
likely want to specify one of these explicitly to minimize breakage
when there are new releases of Debian.
So, for your question below:
What is the best way to go about it? Am I just limited to one base? If
I want the functionality from both am I supposed to go into the docker
files
Yes, limit yourself to one base image, for example:
python:3.7-stretch
So with this one base image you have both Python and the OS; you do not need a Dockerfile with two FROM lines.
Also, you do not need to maintain and build an image from scratch: use the official one and extend it as per your need.
For example
FROM python:3.7-stretch
RUN apt-get update && apt-get install -y vim
RUN pip install mathutils
Multiple FROM lines in a Dockerfile are used to create a multi-stage build. The result of a build will still be a single image, but you can run commands in one stage, and copy files to the final stage from earlier stages. This is useful if you have a full compile environment with all of the build tools to compile your application, but only want to ship the runtime environment with the image you deploy. Your first stage would be something like a full JDK with Maven or similar tools, while your final stage would be just your JRE and the JAR file copied from the first stage.
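A minimal sketch of such a multi-stage Dockerfile (the image tags, the JAR name app.jar, and the paths are assumptions):
# build stage: full JDK plus Maven, used only to compile
FROM maven:3.9-eclipse-temurin-17 AS build
WORKDIR /src
COPY . .
RUN mvn -q package

# final stage: only the JRE plus the JAR copied from the build stage
FROM eclipse-temurin:17-jre
WORKDIR /app
COPY --from=build /src/target/app.jar .
CMD ["java", "-jar", "app.jar"]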
I see that some containers are created FROM the official Apache Docker image while others are created from a Debian image with RUN apt-get install. What is the difference? What is the best practice here, and which one should I prefer?
This is really basic. The purposes of the two commands are different.
When you want to create an image of your own for a specific purpose, you go through two steps:
Find a suitable base image to start from, and there are a lot of images out there. That is where you use the FROM clause: to get a starting point.
Specialize the image for a more specific purpose. That is where you use RUN to install new things into the new image, and often also COPY to add scripts and configuration to it.
So in your case: if you want to control the installation of Apache, you start with a basic Debian image (FROM) and handle the installation of Apache yourself (RUN). Or, if you want to make it easy, you find an image where Apache is already there, ready to run.
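As an illustration, the two approaches could look roughly like this (two separate Dockerfiles shown together for comparison; the site directory and paths are assumptions):
# Dockerfile A: start from the official Apache image and only add your content
FROM httpd:2.4
COPY ./my-site/ /usr/local/apache2/htdocs/

# Dockerfile B: start from Debian and control the Apache installation yourself
FROM debian:bookworm
RUN apt-get update && apt-get install -y apache2 && rm -rf /var/lib/apt/lists/*
COPY ./my-site/ /var/www/html/
CMD ["apache2ctl", "-D", "FOREGROUND"]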
I'm new to containers. I have seen Docker do a really awesome job at virtualization. What is the point of using OS images like Ubuntu, CentOS, etc.? For example, if I need to run a MySQL server, I can pull that image and simply run it; I don't think I need the help of another OS image. Can anyone clarify this? Thanks.
These Linux-distribution images are useful as a base for further images. It’s common enough to build an Ubuntu-based image for some specific application, for example:
FROM ubuntu:18.04
RUN apt-get update && apt-get install ...
WORKDIR /app
COPY . ./
CMD ["./myapp"]
To the extent that you might need, say, a PostgreSQL client library, getting it via a standard distribution package manager is much more convenient than building it from source.
You’re right that there’s basically no point in directly running these images.
(You’re also right that you don’t need a distribution: if you have a statically-linked binary, you can build an image FROM scratch that contains only the application itself and not common niceties like the system C library or a shell. I’ve mostly only seen this for Go applications; it can be very hard to use and debug otherwise if you’re not confident in Docker.)
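As an illustration of the FROM scratch approach, a statically-linked Go program might be packaged roughly like this (a sketch; the binary name and build flags are assumptions):
# build stage: compile a static binary
FROM golang:1.22 AS build
WORKDIR /src
COPY . .
RUN CGO_ENABLED=0 go build -o /myapp .

# final stage: no distribution at all, just the binary
FROM scratch
COPY --from=build /myapp /myapp
ENTRYPOINT ["/myapp"]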
I was trying out Docker and I did the following:
Pulled an image called: docker/whalesay
Built another image with some minor changes.
Pushed it back under a different name to my public repository (I had to upload approximately the same size I had downloaded).
I then built another image with this public image as the starting point.
It had only a single additional command, but again I had to upload the entire image.
My question is: isn't Docker supposed to upload just the changes? I read that somewhere. It seems like I am making some stupid mistake; I can't believe that we have to upload the entire image every time after minor changes. Am I missing something?
This is the Dockerfile I am using to build the image fishsay:
FROM docker/whalesay:latest
RUN apt-get -y update && apt-get install -y fortunes
CMD /usr/games/fortune -a | cowsay
The whalesay image was ~180 MB, so when I push, shouldn't I only have to upload the changed layers?
Any change to a layer in your image requires that layer to be updated in the repository when you call docker push. The change could be as small and trivial as including a new package (e.g. vi) in your image. However, this causes new layers to be created that replace the existing layers, with different layer IDs from what is already in the registry. docker push uploads all newly created layers to the registry, excluding the base image's layers.
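You can see which layers your build added, and therefore what docker push has to upload, with docker history (a sketch using the fishsay example from the question; the yourname/ prefix is a placeholder for your Docker Hub account):
docker build -t yourname/fishsay .
docker history yourname/fishsay   # the top entries are the layers your Dockerfile added
docker push yourname/fishsay      # only layers not already present in the registry are uploaded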
I was also facing the same issue; here is what I found:
https://github.com/docker/docker/issues/18866#issuecomment-192770785
https://github.com/docker/docker/issues/14018
As mentioned in the above links, this feature is implemented in Docker Engine 1.10 / Registry 2.3.
And after emailing Docker support I got the following reply:
Hello,
Unfortunately, we don't have any timelines for when updates to the
Docker Hub will happen that we can share publicly. Sorry for any
trouble caused by this.
/Jeff