I'm using Docker Compose to run ChirpStack on my Windows 10 machine. I need to reinstall the operating system. How can I keep my working ChirpStack Docker setup without creating a new one from scratch?
If all the base images you're using come from public repositories and are not saved only on your machine, you only need to save your Docker configuration. Since you're using Docker Compose, you can just copy the docker-compose.yml file to an external storage medium and you're all set. Unless you have additional dependencies that exist only on your computer, that's all you need.
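If the stack also keeps state in named volumes (ChirpStack's PostgreSQL data, for instance), you can archive those too before wiping the machine. A minimal sketch, assuming a volume named chirpstack_postgresqldata (check docker volume ls for the real name on your system):

```shell
# Stop the stack first so the volume contents are consistent
docker-compose down

# Archive the volume into a tarball next to docker-compose.yml.
# "chirpstack_postgresqldata" is an assumed volume name.
ARCHIVE=postgresqldata.tgz
docker run --rm \
    -v chirpstack_postgresqldata:/data \
    -v "$PWD":/backup alpine \
    tar czf "/backup/$ARCHIVE" -C /data .

# After reinstalling: recreate the volume and restore the archive
docker volume create chirpstack_postgresqldata
docker run --rm \
    -v chirpstack_postgresqldata:/data \
    -v "$PWD":/backup alpine \
    tar xzf "/backup/$ARCHIVE" -C /data
```

Copy the tarball to external storage along with docker-compose.yml, and `docker-compose up -d` on the fresh install will pull the public images again.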
Related
I am using an Ubuntu 22.04 host that runs a Docker container in which I defined my build environment (compiler, toolchain, USB devices). I created a volume share so that I can access the git repo on my host from inside my container.
The problem is that when I compile a project and need to do something on my host with the build artifacts (e.g. upload a binary to a web portal), the files belong to the root user (the only user in my Docker environment). I therefore need to chmod specific files before I can access them on my host, which is annoying.
I tried to run the Docker image with a user name, but then VS Code is no longer able to install anything when it connects to the container.
Is there a way to have a non-root user active in my container while still allowing VS Code's Remote - Containers to install extensions on connecting? Or is there a better way to avoid chmodding all build results?
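For context, a common way to avoid root-owned artifacts in a bind mount is to run the container as the host user's UID/GID, which is presumably the variant that broke VS Code here. A sketch of that invocation (the image name builder:latest and the /work mount path are placeholders):

```shell
# Resolve the host user's UID:GID once
UIDGID="$(id -u):$(id -g)"

# Run the build as that user so artifacts written into the
# bind-mounted repo are owned by the host user, not root.
# "builder:latest" and /work are placeholder names.
docker run --rm --user "$UIDGID" \
    -v "$PWD:/work" -w /work \
    builder:latest make
```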
I am working on a Django project. On my old laptop (Linux) I have a directory containing my code and Docker containers. Now I want to move that working directory, along with the Docker containers, to a new laptop, keeping the environment intact.
I am new to Docker.
I researched and found that a Docker container can have database volumes linked to it.
So I followed these steps:

1. Copy the directory to the new machine.
2. Create a backup of the database volume (in my case, PostgreSQL).
3. Build images from the directory on the new machine.
4. Restore the database from the backup file.
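The volume backup and restore steps above can be sketched with Docker's own tooling; the volume name myproject_pgdata is a placeholder (check docker volume ls for the real one):

```shell
ARCHIVE=pgdata.tgz

# On the old laptop: archive the PostgreSQL volume into a tarball.
# "myproject_pgdata" is a placeholder volume name.
docker run --rm -v myproject_pgdata:/data -v "$PWD":/backup alpine \
    tar czf "/backup/$ARCHIVE" -C /data .

# On the new laptop: recreate the volume and unpack the tarball into it.
docker volume create myproject_pgdata
docker run --rm -v myproject_pgdata:/data -v "$PWD":/backup alpine \
    tar xzf "/backup/$ARCHIVE" -C /data
```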
I'm trying to configure a Go project with JetBrains Gogland and Docker Compose. I want to use the GOPATH and Go installation from the Docker container, i.e. use the container's Go installation for autocomplete etc. without installing Go on the local machine.
the project structure is:

project root
├── docker-compose.yml
├── back
│   ├── Dockerfile
│   ├── main.go
│   └── some other packages
└── front
    └── all the front files...
After that, I want to deploy my back folder to /go/src/app in the Docker container. The problem is that while developing I can't use autocomplete, since the project is not in my local GOPATH and the Go versions in the Docker container and on my local machine differ.
I already read this question but I still can't solve my issue.
At the moment this is not possible, nor do I see how it could become possible in the future. Mounting a volume in Docker means you "hide" the contents of that folder in the container and use the files from the host instead. As such, any time you mount a directory from your machine, the container's own files at that path won't be visible. This means you can't have Go installed in the container, then mount a folder over it and use that location for the Go sources. If you are thinking "I'll just mount things in another place and do some symlink magic / copy files around", that's a bad idea that leads nowhere.
Gogland supports remote debugging as of EAP 10, released a few weeks ago. This allows you to debug applications running in containers or on remote hosts. As such, you can install Go and keep the source code on your machine, but have the application run in a container.
I am exploring using Docker containers on a Raspberry Pi to help manage upgrades to my application and the Node.js versions it runs with.
I am wondering what the best way to build the containers would be. I could build them on the production machine, but it would be much more convenient to start with (say) the latest armhf Node.js image and build a new image with the application sources added (along with the npm modules and bower components the application needs) on my home desktop (Debian amd64), my laptop (OS X), or the Windows 7 machine I have available at work. I don't need to run the containers, just build them.
One slight niggle is that the code needs to be kept confidential, so I can't put the resulting containers in any public repository. Can I give the containers manageable names, and can I just copy them around between machines?
AFAIK containers are architecture agnostic. You should be able to modify them on a host with a different architecture, but you will be unable to enter them. Entering basically means executing a program (e.g. a shell) in the container's context; since the container's shell is not executable on your host, this won't work. Consequently, cross-compiling within the container is not an option either.
However, if you cross-compile on the outside, you should be able to add your executables to the image, move it over to your Pi, and run it there.
You can move Docker images without any public repository: either use a private registry, or use docker save IMAGE > image.tar to store an image in a tarball, move it to the Pi, and use docker load -i image.tar to restore it.
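A sketch of that save/copy/load round trip (the image name myapp:1.0 is a placeholder):

```shell
IMAGE=myapp:1.0          # placeholder image name
TARBALL=myapp-1.0.tar

# On the build machine: export the image with all its layers and tags.
docker save "$IMAGE" > "$TARBALL"

# ...copy the tarball to the Pi (scp, USB key, etc.)...

# On the Pi: import the tarball; the image keeps its name and tag,
# so "docker run myapp:1.0" works exactly as on the build machine.
docker load -i "$TARBALL"
```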
I have to install Docker on Windows 7 on a private network with no internet access.
I can download anything and bring it in by usb from another computer.
How do I install and use Docker?
Meaning: from installation (what to install and how to set it up) to creating the first image.
Most of the instructions I found use a proxy, and I can't use a proxy.
The installation itself involves copying docker-machine-Windows-x86_64.exe, renaming it to docker-machine.exe, and creating a VirtualBox machine with it.
The issue is that it will attempt to download boot2docker.iso (the Tiny Core-based Linux image that includes Docker pre-installed).
That means you need to copy that file onto your USB key first, from boot2docker/boot2docker/releases.
From issue 539:
docker-machine create mydocker --virtualbox-boot2docker-url=file:///Users/auser/Downloads/boot2docker.iso --driver=virtualbox
You will need a similar Docker setup on a machine with internet access in order to:

docker pull the images you want
docker save them
copy them onto the USB key
copy them onto your offline server, in C:\Users... (the only folder mounted in the boot2docker VM)
Then you need to open an SSH session:

docker-machine ssh default

Within that session, you can access the folder where the saved images were copied, and docker load them.
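Put together, the online and offline halves of that workflow look roughly like this (the image name is a placeholder, and the exact mount point of C:\Users inside the boot2docker VM may differ):

```shell
# --- On the machine WITH internet access ---
IMAGE=nginx:latest       # placeholder image name
docker pull "$IMAGE"
docker save "$IMAGE" > nginx.tar
# ...copy nginx.tar to the USB key, then into a folder under
# C:\Users on the offline Windows machine...

# --- On the offline Windows machine ---
docker-machine ssh default
# Inside the SSH session, C:\Users is typically visible as /c/Users,
# so load the image from there (path is an assumption):
docker load -i /c/Users/yourname/nginx.tar
```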