I have a C++ project with a large set of dependencies and a very long compilation time. My idea is to distribute only the project executable along with its dependencies using Docker. However, I am unable to figure out how to do this.
A common way I've seen this done is signed release packages with signature verification, with the packages hosted on something like GitHub releases or published as PPA packages for Ubuntu.
I suppose my question is part Docker, part build related.
How do I build and package this?
I am running Arch Linux with a newer kernel, and will be building the Docker image on Ubuntu LTS. Does the binary have any issues depending on which kernel it was built on?
Can I build locally on Arch and package that into a release, or do I need to set up some CI delivery via GitHub Actions?
Instead of using releases via GitHub/PPA, can I just cp a locally built binary into the image as a Dockerfile step (a rough sketch of what I mean is below)?
Thanks!
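To illustrate that last point, a rough Dockerfile sketch of copying a locally prebuilt binary into a runtime-only image (the binary name, paths and runtime packages are placeholders, not something from my actual project):
# Dockerfile sketch; binary name, paths and runtime packages are assumptions
FROM ubuntu:22.04
RUN apt-get update && apt-get install -y --no-install-recommends libstdc++6 ca-certificates \
    && rm -rf /var/lib/apt/lists/*
COPY ./build/myapp /usr/local/bin/myapp
ENTRYPOINT ["/usr/local/bin/myapp"]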
Is it possible to run bentoml build without importing the services.py file during the process?
I'm trying to put the bento build and containerize steps in our CI/CD server. Our model depends on some OS packages and some Python packages being installed. I thought I could run bentoml build to package the model code and binaries that are present, and leave the dependency specification to the containerize step.
To my surprise, the bentoml build process tried to import the service file during packaging, and the build failed since I didn't have the dependencies installed on my CI/CD machine.
Can I prevent this import while building/packaging the model? Maybe I should skip bentoml containerize, create my bento container by hand, and just execute bentoml serve inside it.
I feel that having to install the dependencies by hand doubles the effort of specifying them in bentofile.yaml and prevents the reproducibility of my environment.
This is not currently possible. The community is working on an environment management feature, such that an environment with the necessary dependencies will be automatically created during build.
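For reference, the dependency specification in question would normally live in bentofile.yaml along these lines (a rough sketch; the service path and package names are placeholders):
# bentofile.yaml sketch; service path and package names are assumptions
service: "service.py:svc"
include:
  - "*.py"
python:
  packages:
    - scikit-learn
    - pandas
docker:
  system_packages:
    - libgomp1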
I am just starting to learn about Docker. Is a Docker repository (like Docker Hub) useful? I see the Docker image as a package of source code and environment configuration (Dockerfile) for deploying my application. Well, if it's just a package, why can't I just share my source code with the Dockerfile (via GitHub, for example)? Then the user just downloads it all and runs docker build and docker run, and there is no need to push the Docker image to a repository.
There are two good reasons to prefer pushing an image somewhere:
1) As a downstream user, you can just docker run an image from a repository, without additional steps of checking it out or building it.
2) If you're using a compiled language (C, Java, Go, Rust, Haskell, ...) then the image will just contain the compiled artifacts and not the source code.
Think of this like any other software: for most open-source things you can download its source from the Internet and compile it yourself, or you can apt-get install or brew install a built package using a package manager.
By the same analogy, many open-source things are distributed primarily as source code, and people who aren't the primary developer package and redistribute binaries. In this context, that's the same as adding a Dockerfile to the root of your application's GitHub repository, but not publishing an image yourself. If you don't want to set up a Docker Hub account or CI automation to push built images, but still want to have your source code and instructions to build the image be public, that's a reasonable decision.
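As a concrete illustration of the difference (the repository and image names here are made-up placeholders):
# sharing only the source plus a Dockerfile: every user has to build it themselves
git clone https://github.com/example/app.git
cd app
docker build -t app .
docker run --rm app
# publishing a built image to a registry: users just pull and run it
docker run --rm example/app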
That is how it works. You need to put the configuration files in your code, i.e., the Dockerfile and docker-compose.yml.
I have a Visual Studio application which we would like to build and deploy with OpenShift, and I have already had success doing a manual build by pointing it at the GitHub repository holding the sources. As far as I can see this means that OpenShift uses s2i to process the repository contents and create the Docker image, which we can then deploy. This works!
I need to automate this, and I am not familiar with the .NET ecosystem, so at first I would like to replicate the current behavior. I had previously managed to build with a suitable SDK image, but that work was lost by accident, and Red Hat really wants s2i used, so that is what I am looking at now. If another approach is better, I am very open to that.
To my understanding this means that we need to locate a suitable Linux .NET Core SDK image to build it with. The manual primarily refers to Red Hat images (requiring a valid subscription) but also refers to registry.centos.org/dotnet/dotnet-21-centos7:latest, which I have then tried to use.
The full build command so far is:
s2i build --loglevel 4 https://github.com/.... --context-dir=TPCIP.Web --ref=develop registry.centos.org/dotnet/dotnet-21-centos7:latest tpcip
This correctly checks out the remote repository at the develop branch, but then fails with:
I0304 17:01:50.142784 17028 sti.go:711] ---> Installing application source...
I0304 17:01:50.154784 17028 sti.go:715] A compatible SDK version for global.json version: [2.1.300] from [/opt/app-root/global.json] was not found
I0304 17:01:50.154784 17028 sti.go:715] Did you mean to run dotnet SDK commands? Please install dotnet SDK from:
I0304 17:01:50.154784 17028 sti.go:715] https://go.microsoft.com/fwlink/?LinkID=798306&clcid=0x409
Not being familiar with this ecosystem, I do not know how to proceed from here. Suggestions?
EDIT: Poking inside I saw:
bash-4.2$ dotnet --list-sdks
2.1.503 [/opt/rh/rh-dotnet21/root/usr/lib64/dotnet/sdk]
bash-4.2$ exit
So it is too new?
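For what it's worth, the error message points at a version pin: the repository's global.json asks for SDK 2.1.300 while the image only ships 2.1.503, and as far as I understand the 2.x SDK resolver only rolls forward within the same feature band (2.1.3xx), so the newer SDK does not satisfy the pin. A global.json matching the SDK reported above might look like this (a sketch, assuming it is acceptable to build with that SDK):
{
  "sdk": {
    "version": "2.1.503"
  }
}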
Currently, I'm doing a POC of Docker on Tizen OS for the ARM architecture. I just want to know how to build our own image.
Downloaded the code from https://github.com/moby/moby.
Stuck with the to-dos below:
1) What's the build command on Linux, on the host PC which we are using for the build? (See the sketch after this list.)
2) Where do we make the changes to set the ARM cross-compiler path for the Docker build?
3) The Tizen kernel we are using is 4.4/armv7/32-bit; do we need to enable any specific configs? We got some idea here.
4) On a successful build, where will the Docker engine image reside in the build tree?
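Regarding 1) and 2), a rough sketch based on the standard Makefile targets in the moby/moby checkout (the cross-build variable and the exact output path vary between versions, so treat these as assumptions to verify):
# run from the root of the moby/moby checkout; the build itself runs inside a container
make binary
# the engine binaries end up under bundles/ for the host architecture
# cross-building for 32-bit ARMv7 (variable name and support depend on the moby version)
make DOCKER_CROSSPLATFORMS=linux/arm cross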
I have dockerized a Node.js app on GitHub. My Dockerfile is based on the official Node.js images. The official node repo supports multiple architectures (x86, amd64, arm) seamlessly. This means I can build the exact same Dockerfile on different machines, resulting in different images for the respective architecture.
So I am trying to offer the same architectures seamlessly for my app, too. But how?
My goal is to automate it as much as possible.
I know that in theory I need to create a Docker manifest, which acts as a Docker repo and redirects the end users' Docker clients to the images suitable for their architecture.
Docker Hub itself can monitor a GitHub repo and kick off an automated build. That would take care of the amd64 image. But what about the remaining architectures?
There is also the service called TravisCI, which I guess could take care of the ARM build with the help of QEMU.
Then both repos could be referenced statically by the manifest repo. But this still leaves a couple of architectures unfulfilled.
But using multiple services/ways of building the same app feels wrong. Does anyone know a better and more complete solution to this problem?
It's basically running the same Dockerfile through a couple of machines and recording the results in a manifest.
Starting with the Docker 18.02 CLI you can create multi-arch manifests and push them to Docker registries if you have enabled client-side experimental features. I was able to use VSTS and create a custom build task for multi-arch tags after the build. I followed this pattern:
docker manifest create --amend {multi-arch-tag} {os-specific-tag-1} {os-specific-tag-2}
docker manifest annotate {multi-arch-tag} {os-specific-tag-1} --os {os-1} --arch {arch-1}
docker manifest annotate {multi-arch-tag} {os-specific-tag-2} --os {os-2} --arch {arch-2}
docker manifest push --purge {multi-arch-tag}
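Filled in with made-up tags, the sequence could look like this (the image names and platforms are placeholders):
docker manifest create --amend myorg/myapp:1.0 myorg/myapp:1.0-amd64 myorg/myapp:1.0-arm32v7
docker manifest annotate myorg/myapp:1.0 myorg/myapp:1.0-amd64 --os linux --arch amd64
docker manifest annotate myorg/myapp:1.0 myorg/myapp:1.0-arm32v7 --os linux --arch arm
docker manifest push --purge myorg/myapp:1.0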
On a side note, I packaged the 18.02 Docker CLI for Windows and Linux in my custom VSTS task, so no install of Docker was required. The manifest command does not appear to need the Docker daemon to function correctly.