Some general questions about the docker nodemcu-build process:
Is there a way to specify which modules are included in the build? (similar to the way the cloud build service works)
Is there a way to include a description that will appear when the resultant firmware is run?
Is SSL enabled?
The size of the bin file created by the docker nodemcu-build process (from dev branch source) is 405k. A recent build using the cloud service resulted in a bin file of 444k, and that cloud build only included the following modules: cjson, file, gpio, http, net, node, tmr, uart, wifi, ssl. Why is the docker build bin file, which supposedly contains all modules, smaller than the cloud build bin file that only contains 10 modules? (I am concerned that my local docker build is missing something, even though the build process completed without errors.)
You specify the modules to be built by uncommenting them in the /app/include/user_modules.h file in the source tree. The default build from the source tree is relatively minimal - not an "all modules" build.
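For example, the module section of /app/include/user_modules.h looks roughly like this (the exact module list varies between branches); a module is compiled in only when its line is uncommented:
#define LUA_USE_MODULES_FILE
#define LUA_USE_MODULES_GPIO
//#define LUA_USE_MODULES_HTTP   // commented out, so http is not built
#define LUA_USE_MODULES_NET
#define LUA_USE_MODULES_NODE
#define LUA_USE_MODULES_TMR
#define LUA_USE_MODULES_UART
#define LUA_USE_MODULES_WIFI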
The banner shown at connection time is the "Version" field. The nodemcu-build.com builds swap this out for custom text. It is defined in /app/include/user_version.h as the USER_VERSION define. You'll need to embed "\n" newlines in the string to get separate lines.
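As a rough sketch (the exact layout of user_version.h differs between branches), a custom banner could look like this:
// printed in the connection banner; "\n" splits it across lines
#define USER_VERSION "NodeMCU custom build\nbuilt locally with docker nodemcu-build\n"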
Yes, the Net module can include limited SSL support (TLS 1.1 only; TLS 1.2 in dev, per Marcel's comment below). You need to enable it in /app/include/user_config.h by defining CLIENT_SSL_ENABLE.
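In user_config.h that is a one-line change, roughly:
// enable (limited) TLS support for the net module
#define CLIENT_SSL_ENABLE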
The default module and config setup in user_modules.h / user_config.h is different from the defaults on nodemcu-build.com, so the two builds are not likely to be the same out of the box.
I have to share a Docker image (a Spring Boot application) with a collaborator who works outside my company, and I need to prevent access to the source code.
I was trying to share it as a .tar file that contains the jar inside the Docker image, but as far as I know this won't prevent source code access. The second option is to push the image to Docker Hub as a private repo and grant access only to him, but I think the source code can be accessed that way as well.
Are there any other solutions I can use for this situation?
Thanks.
It doesn't matter whether the image is on Docker Hub as a private image or shared as a tar file: in both cases, if the image can be pulled, it can be exported again using docker save on the machine that pulled it.
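For illustration (the image name and JAR path are placeholders), anyone who can pull the image can run something like:
docker pull registry.example.com/team/app:1.0
docker save registry.example.com/team/app:1.0 -o app-image.tar
# or copy just the JAR out of a (never started) container:
docker create --name tmp registry.example.com/team/app:1.0
docker cp tmp:/app.jar ./app.jar
docker rm tmp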
The most you can do is use a multi-stage build (if you are building the jar file with Docker as well), so that once the jar file is generated, the final image contains only the JRE and the JAR.
Something like this; the details will vary heavily with your implementation specifics:
# build stage: compile the application (the build command and paths are examples; adjust to your project)
FROM openjdk:11 AS build
WORKDIR /build
COPY . .
RUN ./mvnw -q package && cp target/*.jar app.jar

# runtime stage: ships only the JRE and the resulting JAR
FROM openjdk:11-jre
COPY --from=build /build/app.jar ./
CMD ["java", "-jar", "app.jar"]
This will not prevent a third party from extracting the JAR file from the Docker image and decompiling it, but it will prevent them from reading clean source code.
To make that harder as well, you will have to turn to a Java obfuscator; there are plenty of free and commercial products available for that purpose.
Only the things you COPY into a Docker image are present in it. I see a lot of Java-based Dockerfiles like
FROM openjdk:17
COPY target/app.jar /app.jar
CMD java -jar /app.jar
and these only contain some base Linux distribution, the JDK, and the jar file; they do not contain any application source since it is not COPYed in.
As #MarcSances notes in their answer, this is as secure as distributing the jar file any other way: it is straightforward to decompile, and you will get relatively readable results, but this is not "distributing the source" per se. (Compare with JavaScript, Python, PHP, or Ruby scripted applications, where the only way to run the application is to actually have its source code; and also compare with C++, Go, or Rust, where you have a standalone runnable binary which is even harder to decompile.)
Is it possible to run Bazel in a mode that executes all actions on the remote executor? Ideally I would like to limit all unnecessary network traffic, such as transferring *.o files that are only required to link the final binary.
It sure is. This is something I tend to keep as my default (as I often develop on cellular data).
If you add the following to your .bazelrc you can minimise the downloads.
build --remote_download_minimal
build --remote_download_outputs=minimal
There are a whole host of other flags that can be used to fine-tune what is downloaded from the remote executor and when. You can find them by searching through the Bazel command-line reference, e.g. with Ctrl-F and 'remote_download_'.
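For example, if you still want the final binaries of the targets you build locally, but not intermediate artifacts such as *.o files, --remote_download_toplevel is a useful middle ground:
# fetch only the outputs of top-level targets; intermediates stay remote
build --remote_download_toplevel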
NOTE: This does not eliminate downloads for targets that are tagged as 'local'.
I'm new to Docker and read some articles about it.
I have read many articles that say "use the same image for all environments (dev/stage/production)" and also that "the image for CI/CD and the image for deployment are different".
But I can't reconcile those two pieces of advice, and I also can't find Dockerfile examples for that.
Does that mean I have to make the two Docker images below?
(1) image for deployment
- application code and its dependencies
- there is no CMD
(2) image for CI/CD
- use (1) as base image
- add extra for CI/CD
I think your confusion comes from section 4:
The deployment images should contain:
- The application code in minified/compiled form plus its runtime dependencies.
Nothing else. Really nothing else.
The second category is the images used for the CI/CD systems or developers and might contain:
- The source code in its original form (i.e. unminified)
- Compilers/minifiers/transpilers
- (etc.)
While many developers see it as natural, I think it's not a great setup, and it exhibits antipattern No. 1: treating a container like a VM.
To my mind, during development the target container should not include compilers, test frameworks, etc. It should contain only the compiled code and the runtime for it, exactly like the container that goes to prod.
All these tools belong in a different container (let's call it the "utility" container), created specifically to make building and testing uniform and reproducible. That container has all the tools installed that one might need to build all the containers, or a wide subset of them (e.g. all Node and Python containers). You mount your source directories when invoking it, and it compiles / minifies / packages the code, generates gRPC stubs, runs the test suite, etc.
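A typical invocation could look like this (the image name and build command are placeholders):
docker run --rm \
  -u "$(id -u):$(id -g)" \
  -v "$PWD":/workspace -w /workspace \
  my-registry.example.com/build-tools:latest \
  ./gradlew build test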
You can use the same utility container locally and in CI/CD. Your build and test pipeline is independent of the OS (in our company developers run Windows, macOS, and Linux on their desktops, but for building a backend service it makes no difference). You never have to deal with a diverging zoo of compiler versions, test framework versions, eslint configurations, etc. between different "development" images.
Of course, you can run the same image with your compiled code differently in prod and in development: e.g. you can expose extra ports to attach a debugger, etc. But that is (light) configuration from outside the container, not a different build.
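For instance, assuming a Java service (the image name and ports are placeholders), the very same image can be started with an extra debug port only during development:
# production
docker run -d -p 8080:8080 registry.example.com/app:1.2.3
# development: same image, debugger enabled from the outside (Java 9+ jdwp syntax)
docker run -d -p 8080:8080 -p 5005:5005 \
  -e JAVA_TOOL_OPTIONS="-agentlib:jdwp=transport=dt_socket,server=y,suspend=n,address=*:5005" \
  registry.example.com/app:1.2.3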
So no, to my mind, you should use the same container in development, CI/CD, and prod. In one of the companies I worked for, all containers carried cryptographic signatures, and you could only promote to QA / staging / prod a container which was built from a particular commit and had passed tests, with the signature checked at each promotion. Leaving a compiler inside such a container would, of course, be a gaffe.
OCI containers are a convenient way to package a suitable toolchain for a project, so that development environments are consistent and new project members can start quickly by simply checking out the project and pulling the relevant containers.
Of course I am not talking about projects that simply need a C++ compiler or Node.JS. I am talking about projects that need specific compiler packages that don't work on anything newer than Fedora 22, projects with special tools that need to be installed manually into strange places, working on multiple projects whose tools are not co-installable, and so on. For this kind of thing it is easier to have a container than to follow twenty installation steps and then pray that the bits left over from the previous project don't break things for you.
However, starting a container with a compiler to build a project requires quite a few options on the docker (or podman) command line (a combined example follows the list). Besides the image name, usually:
- a mount of the project working directory
- the user id (because the container should access the mounted files as the user running it)
- if the tool needs access to some network resources, it might also need:
  - some credentials, via environment or otherwise
  - the ssh agent socket (mount and environment variable)
- if the build process involves building docker containers:
  - the docker socket (mount); buildah may work without special setup, though
- and if it is a graphical tool (e.g. an IDE):
  - the X socket mount and environment variable
  - --ipc host to make shared memory work
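Put together, an invocation might look roughly like this (the image name is a placeholder):
docker run --rm -it \
  -v "$PWD":/src -w /src \
  -u "$(id -u):$(id -g)" \
  -v "$SSH_AUTH_SOCK":/ssh-agent -e SSH_AUTH_SOCK=/ssh-agent \
  -v /var/run/docker.sock:/var/run/docker.sock \
  -e DISPLAY -v /tmp/.X11-unix:/tmp/.X11-unix \
  --ipc=host \
  registry.example.com/myproject/toolchain:latest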
And then it can get more complicated due to other factors. E.g. if the developers are in different departments and don't have access to the same docker repository, their images may be named differently, because docker does not support symbolic names for repositories (podman does, though).
Is there some standard(ish) way to handle these options or is everybody just using ad-hoc wrapper scripts?
I use the Visual Studio Code Remote - Containers extension to connect the source code to a Docker container that holds all the tools needed to build the code (e.g. npm modules, Ruby gems, eslint, Node.JS, Java). The container contains all the "tools" used to develop/build/test the source code.
Additionally, you can also put the VSCode extensions into the Docker image to help keep VSCode IDE tools portable as well.
https://code.visualstudio.com/docs/remote/containers#_managing-extensions
You can provide a Dockerfile in the source code for newcomers to build the Docker image themselves or attach VSCode to an existing Docker container.
If you need to run a server inside the Docker container for testing purposes, you can expose a port on the container via VSCode, and start hitting the server inside the container with a browser or cURL from the host machine.
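A minimal .devcontainer/devcontainer.json sketch (the extension ID and port are examples, and the schema has evolved across extension versions) could look like:
{
  "name": "my-project",
  "build": { "dockerfile": "Dockerfile" },
  "extensions": ["dbaeumer.vscode-eslint"],
  "forwardPorts": [3000]
}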
Be aware of the known limitations of the Visual Studio Code Remote - Containers extension. The one that impacts me the most is the beta support for Alpine Linux; I have often noticed that some of the popular Docker Hub images are based on Alpine.
I would like to set up log forwarding as part of a deployment process. The activity of the machines will be different but they will all log to specific places (notably /var/log).
Is it possible to configure fluentd so that it monitors a whole directory? (including the ability to pick up files that appear while it is active)
I know that in_tail can do this for a given, specified file but the documentation does not mention a whole directory.
There is an exact duplicate of this question from 2014, which points to the tail_ex plugin. Unfortunately its description mentions:
Deprecated: Fluentd has the features of this plugin since 0.10.45. So, the plugin is no longer maintained.
I still could not find the mentioned features.
This is absolutely possible using the wildcard support in Fluentd's in_tail plugin. In the path parameter you would specify a glob such as /var/log/*, and Fluentd will automatically skip files that are not readable.
Additionally, if new files appear in that directory, Fluentd will pick them up: it periodically rescans the path, controlled by the refresh_interval configuration item (https://docs.fluentd.org/v0.12/articles/in_tail#refreshinterval).
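A minimal source configuration could look like this (the tag, pos_file location, and parser are examples; older v0.12-era configs use format instead of a <parse> block):
<source>
  @type tail
  # the glob is re-evaluated every refresh_interval seconds
  path /var/log/*.log
  pos_file /var/log/td-agent/varlog.pos
  tag varlog.*
  refresh_interval 60
  read_from_head true
  <parse>
    # forward raw lines; swap in a real parser as needed
    @type none
  </parse>
</source>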
Some notes: if you use Treasure Data's packaged version of Fluentd, td-agent, then you need to ensure that the files you want to tail are readable by the td-agent user that is provisioned as part of that install.
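One common approach on Debian/Ubuntu systems, where much of /var/log is readable by the adm group, is to add that user to the group (this is an assumption about your distribution's log permissions):
# allow the td-agent user to read group-adm log files, then restart the agent
sudo usermod -aG adm td-agent
sudo systemctl restart td-agent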
Lastly, if you need to read these files securely, you may want to consider Treasure Data's Enterprise Fluentd offering.