Running a go container in Codenvy (Next-Generation Beta) - codenvy

Do you know of any documentation on how to run a Dockerfile in Codenvy (Next-Generation Beta)? I have tried to find out how you reference a Dockerfile, since there is no template for Docker.
I can install Go from the console all right, but it would be nice to just reference Docker Hub.

You can reference or author a custom Dockerfile in the dashboard. These docs explain more: https://eclipse-che.readme.io/docs/recipes
To get the built-in terminal, it's simplest to inherit from a Codenvy base image, but you can also tweak any Dockerfile to work with Codenvy. It's all covered on the same page.
As a note, we'll be redoing this section of our docs to make it clearer.
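A minimal sketch of such a custom recipe, assuming a hypothetical Codenvy/Che base image name and that Go is available from the image's package repositories (check the recipes docs above for the base images your instance actually provides):

```dockerfile
# Hypothetical example: inherit from a Codenvy base image to keep the
# built-in terminal working, then add the Go toolchain on top.
FROM codenvy/ubuntu_jdk8
RUN sudo apt-get update \
 && sudo apt-get install -y golang \
 && sudo rm -rf /var/lib/apt/lists/*
```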


Run e2e test with simulation of k8s

We want to create e2e (integration) tests for our applications on k8s, and we want to use minikube, but it seems there is no proper (maintained or official) Dockerfile for minikube; at least I didn't find any. In addition, I see k3s and am not sure which is better for running e2e tests on k8s.
I found this Dockerfile, but when I build it, it fails with an error:
https://aspenmesh.io/2018/01/building-istio-with-minikube-in-a-container-and-jenkins/
E: Invalid operation –no-install-recommends
Any idea?
Currently there's no official way to run minikube from within a container. Here's a two-month-old quote from one of minikube's contributors:
It is on the roadmap. For now, it is VM based.
If you decide to go with a VM image containing minikube, there are some guides out there on how to do it. Here's one called "Using Minikube as part of your CI/CD flow".
Alternatively, there's a project called MicroK8S backed by Canonical. In a Kubernetes Podcast ep. 39 from February, Dan Lorenc mentions this:
MicroK8s is really exciting. That's based on some new features of recent Ubuntu distributions to let you run a Kubernetes environment in an isolated fashion without using a virtual machine. So if you happen to be on one of those Ubuntu distributions and can take advantage of those features, then I would definitely recommend MicroK8s.
I don't think he's referring to running minikube in a container, though I'm not fully sure: I'd enter an Ubuntu container, try to install microk8s as a package, and see what happens.
That said, unless there's a compelling reason you want to run Kubernetes from within a container and you're ready to spend the time going down the possible rabbit hole, I think these days running minikube, k3s, or microk8s from within a VM is the safest bet if you want to get a CI/CD pipeline up and running relatively quickly.
As to the problem you encountered when building an image from this particular Dockerfile...
I found this Dockerfile, but when I build it, it fails with an error:
https://aspenmesh.io/2018/01/building-istio-with-minikube-in-a-container-and-jenkins/
E: Invalid operation –no-install-recommends
Any idea?
notice that:
--no-install-recommends install
and
–no-install-recommends install
are two completely different strings: the first begins with two ASCII hyphens, the second with a single en-dash character. So the error you get:
E: Invalid operation –no-install-recommends
is the result of copying the content of your Dockerfile from the blog post, where the page's typography turned the double hyphen into an en-dash. You should copy it from GitHub instead (you can even click the Raw button there to be 100% sure you copy totally plain text, without any additional formatting or changed encoding).
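You can see the difference at the byte level: the en-dash is a three-byte UTF-8 character, not two hyphens, which is why apt-get cannot parse it as an option.

```shell
# Two ASCII hyphens: bytes 2d 2d
printf '%s' '--' | od -An -tx1
# One en-dash: UTF-8 bytes e2 80 93 — apt-get sees an unknown operation
printf '%s' '–' | od -An -tx1
```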

What is the difference between docker FROM and RUN apt-get?

I see that some containers are created FROM the official Apache docker image while some others are created from a Debian image with RUN apt-get install. What is the difference? What is the best practice here, and which one should I prefer?
This is really basic. The purpose of the two commands is different.
When you want to create an image of your own for a specific purpose, you go through two steps:
Find a suitable base image to start from; there are a lot of images out there. That is where you use the FROM clause: to get a starting point.
Specialize the image for a more specific purpose. That is where you use RUN to install new things into the image, and often also COPY to add scripts and configuration to it.
So in your case: if you want to control the installation of Apache yourself, you start off with a basic Debian image (FROM) and drive the installation of Apache with RUN. Or, if you want to make it easy, you find an image where Apache is already there, ready to run.
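The two approaches side by side, as a sketch (image tags are illustrative):

```dockerfile
# Option 1: start from the official Apache image — Apache is already
# installed and configured; you only add your content.
FROM httpd:2.4
COPY ./public-html/ /usr/local/apache2/htdocs/
```

```dockerfile
# Option 2: start from a plain Debian image and control the Apache
# installation yourself with RUN.
FROM debian:stable
RUN apt-get update \
 && apt-get install -y --no-install-recommends apache2 \
 && rm -rf /var/lib/apt/lists/*
CMD ["apachectl", "-D", "FOREGROUND"]
```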

Extending an existing Docker Image on Docker Hub

I'm new to Docker and trying to get my head around extending existing Images.
I understand you can extend an existing Docker image using the FROM command in a Dockerfile (e.g. How to extend an existing docker image?), but my question is -- in general, how can I install additional software / packages without knowing what the base operating system is of the base image or which package manager is available?
Or am I thinking about this the wrong way?
The best practice is to run the base image you want to start FROM (e.g. with docker run -it <image> sh) and see what package managers are available (if any). Then you can write your Dockerfile with the correct software installation procedure.
Think of it the same way you'd add software to any computer: you'd either log into it yourself and poke around, or write an installation program that can handle all of the expected variations.
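Once you have a shell inside the image, a quick probe like the following (a sketch assuming a POSIX shell is present) tells you which package manager you are dealing with:

```shell
# Print the first common package manager found on this system,
# or "none" if no known manager is present.
detect_pkg_manager() {
  for pm in apt-get apk yum dnf zypper pacman; do
    if command -v "$pm" >/dev/null 2>&1; then
      echo "$pm"
      return 0
    fi
  done
  echo "none"
}
detect_pkg_manager
```

Run it inside the container to decide whether your Dockerfile should use apt-get, apk, or something else.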
In most cases, the source Dockerfile is provided and you can walk the chain backwards and gain a better understanding as you do.
For example, if we look at the official Redis image we see the information tab says
Supported tags and respective Dockerfile links
2.6.17, 2.6 (2.6/Dockerfile)
2.8.19, 2.8, 2, latest (2.8/Dockerfile)
So if you're interested in building off redis:latest you'd follow the second link and see that it in turn is built off of debian:wheezy.
Most user-created images will either include their Dockerfile on the hub page or from a link there.

Dynamically get docker version during image build

I'm working on a project that requires me to run Docker within Docker. Currently, I am just relying on the Docker client running within Docker and passing in an environment variable with the TCP address of the Docker daemon I want to communicate with.
The line in my Dockerfile that installs the client looks like this:
RUN curl -s https://get.docker.io/builds/Linux/x86_64/docker-latest -o /usr/local/bin/docker
However, the problem is that this will always download the latest docker version. Ideally, I will always have the Docker instance running this container on the latest version, but occasionally it may be a version behind (for example I haven't yet upgraded from 1.2 to 1.3). What I really want is a way to dynamically get the version of the Docker instance that's building this Dockerfile, and then pass that in to the URL to download the appropriate version of Docker. Is this at all possible? The only thing I can think of is to have an ENV command at the top of the Dockerfile, which I need to manually set, but ideally I was hoping that it could be set dynamically based on the actual version of the Docker instance.
While your question makes sense from an engineering point of view, it is at odds with the intention of the Dockerfile. If the build process depended on the environment, it would not be reproducible elsewhere. There is not a convenient way to achieve what you ask.
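For completeness, later Docker versions added build arguments (ARG did not exist in the 1.2/1.3 era this question describes), which allow a host-side wrapper to inject the version from outside the build. A sketch under that assumption:

```dockerfile
# Sketch: accept the desired client version as a build argument instead of
# hard-coding "latest". The caller passes it in, e.g.:
#   docker build --build-arg DOCKER_VERSION=<version> .
ARG DOCKER_VERSION=latest
RUN curl -s "https://get.docker.io/builds/Linux/x86_64/docker-${DOCKER_VERSION}" \
      -o /usr/local/bin/docker \
 && chmod +x /usr/local/bin/docker
```

Note that this makes the build environment-dependent, which is exactly the trade-off described above.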

How can i install my package automatically on docker container

I have to create three containers and have packages installed automatically on these containers. How can I do that? I also need to save the Dockerfile.
Thanks in advance for your help.
Basic knowledge
Read the official guideline https://docs.docker.com/reference/builder/ and write your own Dockerfile to automate the installation of your packages in your container.
Reuse existing Docker images
Docker is built on community as well; before you write your Dockerfile, you may check http://hub.docker.com to reuse an existing image or take existing images as examples.
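A minimal sketch of such a Dockerfile (the base image and package names are examples):

```dockerfile
# Build-time package installation: these commands run once when the image
# is built, so every container started from the image has the packages.
FROM debian:stable
RUN apt-get update \
 && apt-get install -y --no-install-recommends curl vim \
 && rm -rf /var/lib/apt/lists/*
```

Build it once with docker build -t my-image ., then start your three containers from that same image with docker run.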
