Building a custom Erlang Debian package from source files

I'm trying to create a Debian package from source files, but I'm stuck.
What I want to achieve:
Download Erlang source (https://erlang.org/download/otp_src_23.1.tar.gz)
Create the Debian package from it with configure option "--enable-fips"
Be able to install that package on other machines
Is there any simple solution for a case like this?
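One common approach (a sketch rather than an official recipe: it assumes checkinstall and the usual build dependencies are available on the build machine, and the package name/version below are only illustrative) is to configure the source with --enable-fips and let checkinstall wrap make install into a .deb:

```bash
# Fetch and unpack the Erlang/OTP source
wget https://erlang.org/download/otp_src_23.1.tar.gz
tar xzf otp_src_23.1.tar.gz
cd otp_src_23.1

# Typical build dependencies on Debian/Ubuntu (FIPS mode needs the OpenSSL headers)
sudo apt-get install build-essential libssl-dev libncurses-dev

# Configure with FIPS support and build
./configure --enable-fips
make -j"$(nproc)"

# Wrap "make install" into a .deb instead of installing directly;
# --install=no only produces the package, it does not install it locally
sudo checkinstall --pkgname=erlang-otp-fips --pkgversion=23.1 --default --install=no make install
```

The resulting .deb can then be copied to the other machines and installed with dpkg -i. For a fully policy-compliant package you could instead adapt Debian's own erlang source packaging, but checkinstall is usually the simplest route.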

Related

Distribute binaries using docker

I have a C++ project with a large set of dependencies and a very long compilation time. My idea is to distribute only the project executable, along with its dependencies, using Docker. However, I am unable to work out how to do this.
A common way I've seen this achieved is signed release packages with signature verification, deployed on something like GitHub, or as PPA packages for Ubuntu.
I suppose my question is part Docker, part build related.
How do I build and package this?
I am running Arch Linux with a newer kernel, and will be building the Docker image on Ubuntu LTS. Does it matter which kernel the binary was built on?
Can I build on my local Arch machine and package that for release, or do I need some CI delivery via GitHub Actions?
Instead of publishing releases via GitHub/PPA, can I just cp a locally built binary into the image as a Dockerfile action?
Thanks!
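One way to sidestep the host-distro and kernel questions entirely is to compile inside the image itself with a multi-stage build, so the binary is always produced in the same Ubuntu LTS environment it will run in. A rough sketch (the project layout, build commands and "myapp" binary name are placeholders):

```dockerfile
# Build stage: compile inside Ubuntu LTS so the toolchain and libraries match
# the runtime image, regardless of the distro/kernel of the machine running "docker build"
FROM ubuntu:22.04 AS build
RUN apt-get update && apt-get install -y --no-install-recommends build-essential cmake
WORKDIR /src
COPY . .
RUN cmake -B build -DCMAKE_BUILD_TYPE=Release && cmake --build build -j"$(nproc)"

# Runtime stage: ship only the compiled executable, not the sources or toolchain
FROM ubuntu:22.04
COPY --from=build /src/build/myapp /usr/local/bin/myapp
ENTRYPOINT ["/usr/local/bin/myapp"]
```

The kernel of the build host generally does not matter for user-space binaries; what matters is that the C/C++ runtime libraries (glibc, libstdc++) in the image are at least as new as the ones the binary was linked against, which the multi-stage approach guarantees by construction.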

BentoML: how to build a model without importing the services.py file?

Is it possible to run bentoml build without importing the services.py file during the process?
I'm trying to put the bento build and containerize steps on our CI/CD server. Our model depends on some OS packages and some Python packages being installed. I thought I could run bentoml build to package the model code and the binaries that are present, and leave the dependency specification to the containerize step.
To my surprise, the bentoml build process tried to import the service file during packaging, and the build failed since I didn't have the dependencies installed on my CI/CD machine.
Can I prevent this import while building/packaging the model? Maybe I should skip bentoml containerize, create my bento container by hand, and just execute bentoml serve inside it.
I feel that having to install the dependencies by hand duplicates the effort of specifying them in bentofile.yaml and hurts the reproducibility of my environment.
This is not currently possible. The community is working on an environment management feature, such that an environment with the necessary dependencies will be automatically created during build.

Do I need to share the Docker image if I can just share the Dockerfile along with the source code?

I am just starting to learn about Docker. Is a Docker registry (like Docker Hub) useful? I see the Docker image as a package of source code and environment configuration (the Dockerfile) for deploying my application. Well, if it's just a package, why can't I simply share my source code with the Dockerfile (via GitHub, for example)? Then the user just downloads it all and runs docker build and docker run, and there is no need to push the Docker image to a registry.
There are two good reasons to prefer pushing an image somewhere:
As a downstream user, you can just docker run an image from a repository, without additional steps of checking it out or building it.
If you're using a compiled language (C, Java, Go, Rust, Haskell, ...) then the image will just contain the compiled artifacts and not the source code.
Think of this like any other software: for most open-source things you can download its source from the Internet and compile it yourself, or you can apt-get install or brew install a built package using a package manager.
By the same analogy, many open-source things are distributed primarily as source code, and people who aren't the primary developer package and redistribute binaries. In this context, that's the same as adding a Dockerfile to the root of your application's GitHub repository, but not publishing an image yourself. If you don't want to set up a Docker Hub account or CI automation to push built images, but still want to have your source code and instructions to build the image be public, that's a reasonable decision.
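To make the difference concrete, here is roughly what each kind of consumer has to do (the image and repository names are purely illustrative):

```bash
# With a published image: one step, no toolchain or source checkout needed
docker run --rm example/myapp:latest

# With only source + Dockerfile: clone, build, then run
git clone https://github.com/example/myapp.git
cd myapp
docker build -t myapp .
docker run --rm myapp
```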
That is how it works. You put the configuration files in your repository alongside the code, i.e., the Dockerfile and docker-compose.yml.
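As a minimal illustration of what that looks like (the service name and port mapping are placeholders), the repository carries a Dockerfile plus a compose file that builds from it:

```yaml
# docker-compose.yml - builds the image from the Dockerfile in this repository,
# so a user only needs "docker compose up --build" after cloning
services:
  app:
    build: .
    ports:
      - "8080:8080"
```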

TFS 2015 Deploy website to multiple machines with a load balancer

I'm using TFS 2015 to build and deploy my websites.
I have multiple websites and I need to deploy them to multiple machines that are behind an NLB.
So the steps are:
1 - Stop NLB on machine 1
2 - Deploy files
3 - Start NLB on machine 1
4 - Repeat to all machines.
Is there a way of doing this without having to configure these steps for each machine?
Is it possible to have a machine group and apply the steps to each one?
Thanks
You need to use a custom task called Tokenizer in the release workflow. It tokenizes the variables in web.config, which can then be transformed; Tokenizer expects the initial values of the custom variables in a specific format.
To install the Tokenizer you first need node.js, with the npm package manager, installed on your machine. Follow this process to install and use it:
1 - Download and install node.js on your machine if it is not present. It also installs npm.
2 - Download Tokenizer from https://github.com/openalm/VSOtasks. It comes as a .zip file; unzip it.
3 - Open a command prompt and change directory to the "Tokenizer\x.x.x" folder inside the unzipped archive.
4 - From that folder, run npm install -g tfx-cli to install the command-line tool that can upload the Tokenizer task.
After using this you will be able to write environment-specific configuration files when deploying to different environments. For more detailed steps and tutorials, take a look at this blog from MSDN: Deploy to multiple environments with appropriate configurations
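Condensed into commands, the installation above looks roughly like this (the x.x.x folder name follows whatever version is in the downloaded archive, and tfx login will prompt for your collection URL and a personal access token):

```bash
# Install the cross-platform TFS/VSTS CLI (requires node.js/npm already installed)
npm install -g tfx-cli

# Authenticate against your TFS/VSTS collection
tfx login

# From the unzipped VSOtasks archive, upload the Tokenizer build/release task
cd Tokenizer/x.x.x
tfx build tasks upload --task-path .
```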
Update
For "rolling deploy", this can't be achieved for now. No this option and task in web base release management. You may have to apply the steps to each machine. If you really need this feature, you can add it in uservoice of VSTS, TFS admin and PM will kindly review your suggestion.

Portable build of ImageMagick

I'd like to build ImageMagick for use with CloudBees. Normally, you would use a package manager like apt, yum, or homebrew to install it. However, on CloudBees you don't have admin access or access to these tools.
I've tried including ImageMagick as part of my build process; however, it's linked against the directory it was built in, "/jenkins/somethingsomething", so at runtime it fails to find its libraries. The run environment is a separate machine, in a directory "/apps/
I've tried building it from source as part of the deploy process, but this causes the deployments to timeout.
Is there any way to build ImageMagick so that it looks in $MAGICK_HOME at runtime instead of binding to a specific, hard-coded path?
Thanks!
Chris
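For reference, one way people typically make ImageMagick relocatable is to build it into its own prefix and point MAGICK_HOME and the library path at that tree at runtime. A rough sketch (the /apps/imagemagick prefix is illustrative, and the --disable-installed switch should be checked against your version's ./configure --help):

```bash
# Build into a self-contained prefix instead of the Jenkins workspace path;
# --disable-installed builds an "uninstalled" layout that honours MAGICK_HOME
./configure --prefix=/apps/imagemagick --disable-installed
make && make install

# At runtime, on the target machine, point ImageMagick at that tree
export MAGICK_HOME=/apps/imagemagick
export PATH="$MAGICK_HOME/bin:$PATH"
export LD_LIBRARY_PATH="$MAGICK_HOME/lib:${LD_LIBRARY_PATH:-}"
```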
In the development environment, using Jenkins on DEV@cloud, you can try to use it through the "curl" command, for example. At runtime, however, you can only use it if you customize the stack you want to use.
CloudBees has created stacks for Tomcat, JBoss, Jetty and Glassfish. For example, the Tomcat 6 and Tomcat 7 stacks used by CloudBees at runtime are available on GitHub (here) on different branches.
More information about ClickStacks is available here, including how to customize your own stack, in the "Developing and using your own ClickStacks" section.
