The Docker image of ROS is unusable?

So, I pulled the ros image from Docker Hub, using
docker pull ros
which got me the latest 'foxy' version.
I proceeded with the tutorial on starting a ROS Docker container. I could successfully start the container and connect to it. It's a short tutorial, nothing long or complicated.
The penultimate step in that tutorial asks for sourcing the setup.bash file, which I did and received no errors. (Actually, no output at all: neither success nor failure came up.)
source /opt/ros/<distro>/setup.bash
And after that, to taste the sweet fruit of my hard labour, I entered the final command (as mentioned in the tutorial),
rostopic list
which returned to my surprise,
rostopic command not found
I then proceeded to enter roscore, roscd, etc. at the terminal, and none of them worked. All of them were not found.
I did try to just run that setup script myself, from the terminal without using source, like:
$ /opt/ros/foxy/setup.bash
(after changing the permissions of course), which brought little change to the situation.
I looked at the Docker Hub page for ros and nothing helpful was to be found there. There are plenty of instructions about how to build my own Docker image for ROS, but that is not what I want to do right now, I guess.
I googled, and the hits on the first page were:
this (the original tutorial, which I am following anyway),
this (something general about Docker),
and this (about how to run a GUI with Docker - not there yet, frankly),
which raises the question: what good is the container if I have to install everything myself anyway by following their other tutorial?
Or do I not understand something here? If someone could shed some light on it, it would be much appreciated.

Your container has ROS2, not ROS1. Try
ros2 topic list
If you want the ROS1 version instead, try pulling and running a different image:
docker pull ros:noetic-robot
docker run -it ros:noetic-robot
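Once inside the container, a minimal ROS1 smoke test might look like this (the setup path assumes the noetic distro; the official images typically source it in their entrypoint already, so sourcing again is harmless):
source /opt/ros/noetic/setup.bash
roscore &         # start the ROS1 master in the background
rostopic list     # should now list /rosout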
Context
The tutorial you are following was written some time ago, when the default container used ROS1. The current latest container uses ROS2 (in your case, Foxy). ROS2 doesn't have the same command names: rostopic does not work, and there isn't even a master, so roscore would make no sense!
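For reference, a rough mapping between the old ROS1 commands and their ROS2 equivalents (distro names here are just examples):
# ROS1 (e.g. noetic):
source /opt/ros/noetic/setup.bash
rostopic list
rosnode list
# ROS2 (e.g. foxy):
source /opt/ros/foxy/setup.bash
ros2 topic list
ros2 node list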
The good news is, the tutorial page is a wiki, so I've already updated it to make it (at least slightly) clearer. If you have ideas for how to improve it, you could also make an account and do so.

Related

Run e2e tests with a simulation of k8s

We want to create e2e tests (integration tests) for our applications on k8s, and we want to use minikube, but it seems that there is no proper (maintained or official) Dockerfile for minikube; at least I didn't find any. In addition, I see k3s and am not sure which is better for running e2e tests on k8s.
I found this Dockerfile, but when I build it, it fails with an error:
https://aspenmesh.io/2018/01/building-istio-with-minikube-in-a-container-and-jenkins/
E: Invalid operation –no-install-recommends
Any idea?
Currently there's no official way to run minikube from within a container. Here's a two-month-old quote from one of minikube's contributors:
It is on the roadmap. For now, it is VM based.
If you decide to go with using a VM image containing minikube, there are some guides on how to do it out there. Here's one called "Using Minikube as part of your CI/CD flow".
Alternatively, there's a project called MicroK8s, backed by Canonical. In Kubernetes Podcast episode 39 from February, Dan Lorenc mentions this:
MicroK8s is really exciting. That's based on some new features of recent Ubuntu distributions to let you run a Kubernetes environment in an isolated fashion without using a virtual machine. So if you happen to be on one of those Ubuntu distributions and can take advantage of those features, then I would definitely recommend MicroK8s.
I don't think he's referring to running minikube in a container, though, but I am not fully sure: I'd enter an Ubuntu container, try to install microk8s as a package, and see what happens, roughly as sketched below.
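That experiment might look something like this (snapd usually does not run inside a plain container, so expect the install to fail; the failure itself answers the question):
docker run -it ubuntu:18.04
# inside the container:
apt-get update && apt-get install -y snapd
snap install microk8s --classic   # microk8s ships as a snap; likely fails without a running snapd daemon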
That said, unless there's a compelling reason you want to run Kubernetes from within a container and you are ready to spend the time going down that possible rabbit hole, I think these days running minikube, k3s, or microk8s from within a VM is the safest bet if you want to get up and running with a CI/CD pipeline relatively quickly.
As to the problem you encountered when building an image from this particular Dockerfile...
I found this Dockerfile, but when I build it, it fails with an error:
https://aspenmesh.io/2018/01/building-istio-with-minikube-in-a-container-and-jenkins/
E: Invalid operation –no-install-recommends
Any idea?
notice that:
--no-install-recommends install
and
–no-install-recommends install
are two completely different strings: the first starts with two ASCII hyphens (--), while the second starts with an en dash (–). So the error you get:
E: Invalid operation –no-install-recommends
is the result of copying the content of your Dockerfile from the blog post, where the double hyphen was typographically converted into an en dash. You should rather copy it from GitHub (you can even click the Raw button there to be 100% sure you copy totally plain text, without any additional formatting, changed encoding, etc.).
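If you have already copied the file, a quick way to find and fix the offending characters (assuming GNU grep/sed and a UTF-8 locale):
# list lines containing en or em dashes instead of ASCII hyphens:
grep -nP '[–—]' Dockerfile
# replace en dashes with a double hyphen, in place:
sed -i 's/–/--/g' Dockerfile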

Do the various caches in Docker on Mac get corrupted?

I am trying to troubleshoot a Dockerfile I found on the web. As it is failing in a weird way, I am wondering whether failed docker builds or docker runs from various subsets of that file or other files that I have been experimenting with might corrupt some part of Docker's own state.
In other words, would it possibly help to restart Docker itself, reboot the computer, or run some other Docker command, to eliminate that possibility?
Sometimes just rebooting things helps, and it's not wrong to try restarting Docker for Mac or doing a full reboot, but I can't think of a specific symptom it would fix, and it's not something I need to do routinely.
I've only really run into two classes of problems that sound like what you're describing.
If you have a Dockerfile step that consistently succeeds, but produces inconsistent results:
RUN curl http://may.not.exist.example.com/ || true
You can wind up in a situation where the underlying command failed or produced the wrong output, but the RUN step as a whole succeeded, so the bad layer was cached and later builds keep reusing it. docker build --no-cache will re-run the build ignoring the cache, and an extremely aggressive docker rmi sequence (deleting every build, current and past, of the image in question) will clean it up too.
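Concretely, that cleanup might look like this (the image name is a placeholder):
# rebuild from scratch, ignoring every cached layer:
docker build --no-cache -t myimage .
# or remove the image entirely, then prune dangling layers:
docker rmi myimage
docker image prune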
The other class of problem I've encountered involves some level of corruption in /var/lib/docker. This has very obvious symptoms, generally "file not found" or "failed mounting directory" type errors on a setup that you otherwise know works. I've encountered it more on native Linux than on Docker for Mac, probably because the DfM Linux installation is a little more controlled and optimized for Docker (it definitely isn't running a 3-year-old kernel with arbitrary vendor patches). On Linux you can work around this by stopping Docker, deleting everything in /var/lib/docker, and starting Docker again; in Docker for Mac, the preferences window has a "Reset" page with various destructive cleanup options, of which "Reset to factory defaults" is the closest equivalent.
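On Linux, that nuclear option might look like this (assuming a systemd-managed daemon; it deletes all images, containers, and volumes):
sudo systemctl stop docker
sudo rm -rf /var/lib/docker
sudo systemctl start docker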
I would first attempt using Docker's 'Diagnose and Feedback' option. This generally runs tests on the health of Docker and the Docker engine.
Docker Desktop also has options for various troubleshooting scenarios under 'Preferences' > 'Reset', which have helped me in the past.
A brief look through previous Docker release notes suggests it has certainly been possible in the past to corrupt the Docker Engine, and that such issues have been iteratively fixed since.

Docker image created from environment, pushed to a registry, pulled onto a server... now what?

I started to use Docker a few days ago, so I'm still a newbie in this domain; I deeply apologize if my questions seem obvious, because so far, most of them aren't obvious to me.
My goal is to create a custom image from a Rails application, push it up to Docker Hub, then pull it onto a server and simply make it run.
I used this doc to create my image, except that I chose to use MariaDB (works fine). So far, my project only contains a CRUD / scaffold that works nicely.
I then pushed it to a private repository on Docker Hub using this link. Again, no problem: the hub tells me the push went okay, and so does my console.
Then, I connected to a private server running Debian, pulled the project from the hub, and made sure it existed using docker images.
My main question is the following: what should I do next?
If I refer to the first link, I create the Rails project from a close-to-empty Gemfile, then synchronise the local files with the image. However, on my server, all I have is an empty directory. If I'm not stupid, redoing Docker's tutorial will "reset" my image.
This is where I'm currently lost: what should I do now? I don't believe that running docker run repo/my-image rails server is the right solution here.
Thank you in advance
You are doing well so far. Now think about the point of pushing the image to a private repository: you, and others who have access to the repo, should be able to pull the image and create containers from it.
The point where you got lost is exactly what you should do now, i.e. execute docker run.
redoing the Docker's tutorial will "reset" my image.
Docker is smart enough to download the image once and reuse it. Resetting will remove your locally downloaded images, but it won't remove them from the private repo.
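A minimal sketch of that step on the server (the repository, image name, and port are placeholders; a Rails app typically listens on 3000, and yours will also need to reach its MariaDB instance):
docker pull yourname/my-rails-app
docker run -d -p 3000:3000 --name rails-app yourname/my-rails-app
# follow the logs to watch the server come up:
docker logs -f rails-app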

Setting up a container from a user's GitHub source

Can be closed, not sure how to do it.
I am, to be quite frank, lost right now. The user who published his source on GitHub somehow failed to update the installation instructions when he released a new branch. Now, I am not dense, just uneducated when it comes to Docker. I would really appreciate a push in the right direction. If I am missing any information from this post, please allow me to provide it in the comments.
Current Setup
O/S - Debian 8 Minimal (Latest kernel)
Hardware - 1GB VPS (KVM)
Docker - Installed with Compose (# docker info)
I am attempting to set up this (https://github.com/pboehm/ddns/tree/docker_and_rework). First, I should clone this git repo to my working directory, let's say /home for example? I will run the following command:
git clone -b docker_and_rework https://github.com/pboehm/ddns.git
Which has successfully cloned the source files into /home/ddns/... (working dir)
Now I believe I am supposed to go ahead and build something, so I go into the following directory:
/home/ddns/docker
Inside is a docker-compose.yml file. I am not sure what this does, but by looking at it, it appears to contain a bunch of instructions which I can only presume have to do with actually building or deploying the whole container/image or magical thing, right? From here I go ahead and do the following:
docker-compose build
As we can see, I believe it's building the container, or image, or whatever it's called, you get my point. After a short while, that completes, and docker images shows everything that was built. Which is correct; I see all of the dependencies in there, but things like:
go version
It does not show as a command, so I presume I need to run it inside the container, maybe? If so, I don't have a clue how. I need to run 'ddns.go', which is inside /home/ddns; the execution command is:
ddns --soa_fqdn=dns.stealthy.pro --domain=d.stealthy.pro backend
I am also curious why the front-end web page is not showing. There should be a page like this:
http://ddns.pboehm.org/
But again, I believe there is some more to do; I just do not know what.
docker-compose build will only build the images.
You need to run the following; it will build and run them:
docker-compose up -d
The -d option runs containers in the background
To check if it's running after docker-compose up, run:
docker-compose ps
It will show what is running and what ports are exposed from the container.
Usually you can then access the services from your localhost.
If you want to have a look inside the container:
docker-compose exec SERVICE /bin/bash
Where SERVICE is the name of the service in docker-compose.yml
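Putting that together for this repo, a session might look like this (the service name depends on what docker-compose.yml defines; "web" here is an assumption):
cd /home/ddns/docker
docker-compose up -d
docker-compose ps
docker-compose exec web /bin/bash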
The instructions it runs that you probably care about are in the Dockerfile, which for that repo is in the docker/ddns/ directory. What you're missing is that the Dockerfile creates an image, which is a template from which instances (containers) are created. Every time you docker run, you create a new instance from the image. docker run docker_ddns go version will create a new instance of the image, run go version, output the result, then die. An image that starts a long-running process, as the docker_ddns-web image probably does, will keep running until something kills that process. The reason you can't see the web page is probably that you haven't run docker-compose up yet, which will create linked instances of all of the Docker images specified in the docker-compose.yml file. Hope this helps.
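To illustrate the image-versus-instance distinction with the names used above:
# each run creates a brand-new container from the same image:
docker run --rm docker_ddns go version   # prints the version, then the container exits
docker run --rm docker_ddns go version   # a second, completely independent container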

Re-running docker-compose in Windows says network configuration changed

I have docker-compose version 1.11.2 on Windows and am using a version 2.1 docker-compose.yml, but whenever I try to run something like docker-compose up or docker-compose run a subsequent time, I get an error that the network needs to be recreated because configuration options changed (even if I didn't change anything). I can run docker network rm to remove the network, but from other documentation and posts about docker-compose on Linux, it seems this should be unnecessary.
I can reproduce this reliably but can't really find any further information. Can anyone explain why I keep getting errors telling me to recreate the network (I'm using a transparent driver to download some things when building the image, but even the nat driver gives me a similar error), or at least how to work around it? One of my scenarios is to be able to use docker-compose run on one of the services a couple of times on the same machine as part of a cloud build/test.
It turns out this was a bug, and it was fixed in a subsequent update several weeks ago. I was told by one of the Docker developers that Windows 10 Creators Update was required as well.
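Until that fix, the manual workaround described in the question amounted to something like this (the network name follows Compose's default of <project>_default; myproject is a placeholder):
# remove the stale network, then bring the services back up:
docker network rm myproject_default
docker-compose up -d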
