I have several Cloud Run services running. I need to do some environment-specific things in code (also because a Docker container is built from the same sources). I searched around quite a bit to find out whether I can do the following at run-time:
Check if my code is running in a Cloud Run instance or not
Get other environment variables like service-name, project-name, deploy-time, awake-time, region-name, etc., for various reasons
This demo container code shows how to get that kind of information:
https://github.com/GoogleCloudPlatform/cloud-run-hello/blob/master/hello.go
Some things are available directly as environment variables, like service and revision:
service := os.Getenv("K_SERVICE")
revision := os.Getenv("K_REVISION")
The container contract docs page shows the full list
https://cloud.google.com/run/docs/container-contract
as well as information about the metadata server that can give you things like project-id or region.
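As a concrete illustration, here is a minimal Go sketch (not part of the linked demo) that combines both: it treats the presence of K_SERVICE as the "running on Cloud Run" check and queries the metadata server for the project ID and region. Note that the region endpoint returns a fully qualified path like projects/PROJECT_NUMBER/regions/REGION.

package main

import (
	"fmt"
	"io"
	"net/http"
	"os"
)

// metadata fetches one value from the Cloud Run metadata server.
func metadata(path string) (string, error) {
	req, err := http.NewRequest("GET", "http://metadata.google.internal/computeMetadata/v1/"+path, nil)
	if err != nil {
		return "", err
	}
	// The metadata server requires this header on every request.
	req.Header.Set("Metadata-Flavor", "Google")
	resp, err := http.DefaultClient.Do(req)
	if err != nil {
		return "", err
	}
	defer resp.Body.Close()
	b, err := io.ReadAll(resp.Body)
	return string(b), err
}

func main() {
	// K_SERVICE is only set by Cloud Run, so its absence is a reasonable
	// "not running on Cloud Run" signal.
	if os.Getenv("K_SERVICE") == "" {
		fmt.Println("not running on Cloud Run")
		return
	}
	project, _ := metadata("project/project-id")
	region, _ := metadata("instance/region") // e.g. projects/123/regions/us-central1
	fmt.Println(project, region)
}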
I'm trying to write a rule where the condition depends on the output of a script.sh. I have tried several approaches, but without success.
I searched the documentation but didn't find anything that helps. I tried several evt and proc fields, but none of them gave me the info I need.
This is the rule I'm using while trying to find a workaround:
- rule: FIM Custom rule
  desc: Testing rule
  condition: access_log_files and (evt.type=close)
  output: Test result (proc_name=%proc.name command=%proc.cmdline evt_type=%evt.type evt.args =%evt.args syslog_.facility_str=%syslog.facility.str syslog_message=%syslog.message)
  priority: WARNING
Please note that I'm running Falco in Docker with the latest image.
When I execute the command logger test on the Ubuntu host, I receive this message in the stdout of the Falco Docker container:
{"hostname":"dc95654c63c3","output":"01:21:29.759239580: Warning Test result (proc_name=python3 command=python3 /usr/lib/ubuntu-advantage/timer.py evt_type=close evt.args =res=0 syslog_.facility_str= syslog_message=)","priority":"Warning","rule":"FIM Custom rule","source":"syscall","tags":[],"time":"2022-12-17T01:21:29.759239580Z", "output_fields": {"evt.args":"res=0 ","evt.time":1671240089759239580,"evt.type":"close","proc.cmdline":"python3 /usr/lib/ubuntu-advantage/timer.py","proc.name":"python3","syslog.facility.str":null,"syslog.message":null}}
So please tell me what I can do.
Thanks
In order to feed Falco with external sources of events (those that are not kernel syscalls), you'd need to use a Falco plugin. There are plugins to obtain events from Kubernetes, AWS CloudTrail, or even from GitHub. However, there is no plugin that I know of to obtain information from the standard output of a program or from syslog.
Due to the nature of the Falco project, anyone in the community can contribute such a plugin, so I invite you to join the Falco Slack channel and ask around, or even write your own plugin.
I'm working with the following:
Docker for Windows v20.10.11
Docker running in Windows container mode
mcr.microsoft.com/windows:1903 base image
Proprietary application installed on top of this base image
Each year we create a Docker image with the latest version of our company's software. However this year's version behaves differently. Host machine installation runs fine. Containerized installation fails to run in certain situations. I can start the application as a simple EXE, for example using the Docker run command. The app will start and show up in "tasklist". However I can't start the app via the COM API, which is a critical requirement. The problem appears to be COM related. Normally we can create COM objects for our software just like for any other application. For example, IE returns a COM object just fine:
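For reference, the kind of call meant here looks like this in PowerShell (a minimal sketch using IE's ProgID; the same New-Object pattern applies to our application's ProgID):

# Create a COM Automation object by its registered ProgID.
$obj = New-Object -ComObject "InternetExplorer.Application"
$obj.GetType().FullName   # a successful call returns a System.__ComObject wrapper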
Creating these objects for our application works outside containers. However inside the container, our latest installation gives this error:
Access permissions appear to be OK. I tried a couple of tests to prove this. First, I can install other software like MS Word into a container and create COM objects for it:
Second, I tried retrieving and modifying the application's DACL in PowerShell.
Changing access masks or trustees can cause an Access Denied error:
This also appears to confirm that the access permissions were OK by default.
Next I made sure COM is aware of the application. This appears to be fine. I get the same result on host machine and container when running this PS script:
gci HKLM:\Software\Classes -ea 0 | ? { $_.PSChildName -match '^\w+\.\w+$' -and
    (gp "$($_.PSPath)\CLSID" -ea 0) } | ft PSChildName
The application shows up just like any other. The details show up fine when querying by AppID. LocalServer32 points to the correct EXE:
Some other things I tried:
Querying registry keys. There are 7 keys created when installing our software. These appear identical on host machine install and container install.
Even though permissions appear fine, I still tried logging into the container as alternate users. For example, "nt authority\system" is another virtual admin user. I also changed the password of the "builtin\administrator" user to enable logging in with that one. Lastly, I tried creating new users entirely and adding them to the Administrators group. All these attempts produced the same errors as "builtin\containeradministrator" (the default user).
A minor check was ensuring CMD.exe / PowerShell is running as x64:
Re-registering the DLLs associated with the installation using regsvr32.
Starting from different base images. https://learn.microsoft.com/en-us/virtualization/windowscontainers/manage-containers/container-base-images. The full Win Server base image behaves exactly the same way regarding errors. The smaller Win Server Core base image is even more problematic, as I can't even start the app's EXE manually using that base. Lastly I tried other tags of the full Windows base image such as 20H2 and 2004. Same result from those. Multiarch or x64 makes no difference.
Included the "Ogawa hack" which was historically needed to make MS Office apps function correctly with COM: https://stackoverflow.com/a/1680214/7991646. It could be necessary for other COM apps too, but didn't help with my specific installation.
Is there anything else I can do to diagnose or solve this COM issue?
There are several things to consider:
The Considerations for server-side Automation of Office article states the following:
Microsoft does not currently recommend, and does not support, Automation of Microsoft Office applications from any unattended, non-interactive client application or component (including ASP, ASP.NET, DCOM, and NT Services), because Office may exhibit unstable behavior and/or deadlock when Office is run in this environment.
If you are building a solution that runs in a server-side context, you should try to use components that have been made safe for unattended execution. Or, you should try to find alternatives that allow at least part of the code to run client-side. If you use an Office application from a server-side solution, the application will lack many of the necessary capabilities to run successfully. Additionally, you will be taking risks with the stability of your overall solution.
The When CoCreateInstance returns 0x80080005 (CO_E_SERVER_EXEC_FAILURE) page describes possible reasons.
If many COM+ applications run under different user accounts that are specified in the This User property, the computer cannot allocate memory to create a new desktop heap for the new user. Therefore, the process cannot start. See Error when you start many COM+ applications: Error code 80080005 -- server execution failed for more information.
Finally, you may find a similar thread here helpful, see Server execution failed (Exception from HRESULT: 0x80080005 (CO_E_SERVER_EXEC_FAILURE)).
I would like to enable caching in ArangoDB automatically when my app starts.
I'm using docker-compose to start the whole thing, but apparently there's no simple parameter to enable caching in the official ArangoDB image.
According to the doc, all the files in /docker-entrypoint-initdb.d/ are executed at container start. So I added a js file with that code:
require('@arangodb/aql/cache').properties({mode: 'on'});
It is indeed executed but caching doesn't seem to be enabled (from what I see with arangosh within the container).
My app is a JS app using arangojs, so if I can do it this way, I'd be happy too.
Thanks!
According to the performance and server config docs, you can enable caching in several ways.
Your method of adding require("@arangodb/aql/cache").properties({ mode: "on" }); to a .js file in the /docker-entrypoint-initdb.d/ directory should work, but keep an eye on the logs. You may need to redirect log output with a different driver (journald, syslog, etc.) to see what's going on. Make sure to run the command via arangosh to see if it works.
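For example, to check from inside the container whether the mode actually changed (a sketch; it assumes the root password is available in ARANGO_ROOT_PASSWORD):

arangosh --server.password "$ARANGO_ROOT_PASSWORD" \
  --javascript.execute-string 'print(require("@arangodb/aql/cache").properties());'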
If that's a bust, you might want to see if there is a way to pass parameters at runtime (such as --query.cache-mode on). Unfortunately, I don't use Docker Compose, so I can't give you direct advice here, but try something like -e QUERY.CACHE-MODE=ON
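One alternative that sidesteps the env-var question entirely is overriding the container command so that arangod starts with the option already set. This is an untested sketch; the service name and password are placeholders:

services:
  arangodb:
    image: arangodb
    environment:
      - ARANGO_ROOT_PASSWORD=example
    command: arangod --query.cache-mode on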
If there isn't a way to pass params, then you could modify the config file: /etc/arangodb3/arangod.conf.
And don't forget about the REST API methods for system management. You can access AQL configuration (view and alter) in the Web UI by clicking on the Support -> Rest API -> AQL.
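For instance, the AQL query cache has its own endpoint, which you can hit directly (port and credentials below are assumptions):

# Read the current AQL query cache settings:
curl -u root:password http://localhost:8529/_api/query-cache/properties

# Turn the cache on:
curl -u root:password -X PUT http://localhost:8529/_api/query-cache/properties \
  -d '{"mode": "on"}'

Since your app uses arangojs, the same endpoint should be reachable from it with something like db.route('_api/query-cache/properties').put({ mode: 'on' }).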
One thing to keep in mind - I'm not sure if the caching settings are global or tied to a specific database. View the configuration on multiple databases (including _system) to test the settings.
JupyterHub has various authentication methods, and the one I am using is the PAMAuthenticator, which basically means you log into the JupyterHub with your Linux userid and password.
However, environment variables that I create like this (or, for that matter, those set in my .bashrc) before running JupyterHub do not get set within the user's JupyterLab session. As you can see, they're available in the console, with or without the pipenv, and within Python itself via os.getenv().
However in JupyterHub's spawned JupyterLab for my user (me):
This environment variable myname is not available even if I export it in a bash session from within JupyterLab as follows:
Now the documentation says I can customize user environments using a Docker container for each user, but this seems unnecessarily heavyweight. Is there an easier way of doing this?
If not, what is the easiest way to do this via Docker?
In the jupyterhub_config.py file, you may want to add the environment variables which you need using the c.Spawner.env_keep variable
c.Spawner.env_keep = ['PATH', 'PYTHONPATH', 'CONDA_ROOT', 'CONDA_DEFAULT_ENV', 'VIRTUAL_ENV', 'LANG', 'LC_ALL', 'JUPYTERHUB_SINGLEUSER_APP']
Additional information on all the different configurations are available at https://jupyterhub.readthedocs.io/en/stable/reference/config-reference.html
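For the specific variable from the question, a minimal jupyterhub_config.py sketch (the value is a placeholder):

# Pass through a variable that was exported in the shell before starting JupyterHub:
c.Spawner.env_keep = ['myname']

# Or set it explicitly for every spawned single-user server:
c.Spawner.environment = {'myname': 'some-value'}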
Unfortunately, unlike a single-user Jupyter Notebook/Lab, JupyterHub is meant for a multi-user environment, and customization and security settings are not a cut-and-dried area. It provides some default settings and a ton of ways to customize usage, but only a handful of examples. You need to dig into the documents, check for similarities to your use case, and make adjustments by trial and error.
Fortunately, besides the configuration files used to configure JupyterHub and Jupyter notebook servers, namely jupyter_notebook_config.py and jupyterhub_config.py, we can use environment-reading packages per user. This flexibility comes from the use of a programming-language kernel.
But this requires being able to install new packages, having them already installed, or asking admins to install them on the current kernel.
Here is one way to use customized environment variables in the current workspace.
Create a new file and give it a clear name that shows it is an environment file. You can have as many different files as you need. Most production setups use the .env name, but Jupyter will not list dotfiles in its file view, so avoid that. Also, be careful about quotes: sometimes you need them, and sometimes you get errors, depending on which library you use and where you use the values.
test.env:
NAME="My Name"
TEST=This is test 42
Install and use your preferred environment-file reader, then read from the file(s) you want. You can use pip install in the notebook when needed; just use it cautiously.
test.ipynb
# package already installed, so installation commented out
# %pip install python-environ
import environ

# Read the env file; the ENV_PATH variable overrides the default of test.env if set.
env = environ.Env()
env.read_env(env.str('ENV_PATH', 'test.env'))

NAME = env("NAME")
TEST = env("TEST")
print(NAME, " : ", TEST)
If you are an admin of the hub, beware that some libraries may break your restrictions, so keep an eye on what permissions you give your users. If you use custom Docker images, though, there should be no leakage, as they are already designed to be isolated from your system.
I'm running Artifactory CPP CE 7.7.3 and Traefik v2.2 using docker-compose. The service is only available via http://localhost/ui/. What I need is an option that allows adding a URL path prefix (e.g. http://localhost/artifactory/ui).
My Setup
I used the setup process described in the Artifactory docs.
My docker.compose.yaml is the official extracted from the jfrog-artifactory-cpp-ce-7.7.3-compose.tar.gz: ./templates/docker-compose.yaml.
I'm using a reverse proxy (Traefik). For this, I've added the necessary Traefik configuration lines to the docker-compose file. Here is a small extract of what I've added:
[...]
labels:
  - "traefik.http.routers.artifactory.rule=Host(`localhost`) && PathPrefix(`/ui`)"
  - "traefik.http.routers.artifactory.middlewares=artifactory-stripprefix"
  - "traefik.http.middlewares.artifactory-stripprefix.stripprefix.prefixes=/"
  - "traefik.http.services.artifactory.loadbalancer.server.port=8082"
With this I managed to access artifactory over http://localhost/ui/.
Problem:
I have multiple small services running on my server, each of which is accessible via http://localhost/<service-name>. This is very convenient and makes clear that the URL is related to that service on my production server.
Because of this, I want a URL like http://localhost/artifactory/ui/... instead of http://localhost/ui/...
I struggled to set up Artifactory that way. I already managed to get a redirect from e.g. http://localhost/artifactory/ to http://localhost/ui/, but this is not what I want on my production server.
What I did
Went through the documentation in hope of finding an option that I can just pass to Artifactory to add a prefix (not successful).
Spent two full days trying to configure Traefik to alter headers so that the responses point to http://localhost/artifactory/ui/... (only partially successful; redirection didn't work afterwards).
Tried to find the configuration responsible for configuring Artifactory in $JFROG_HOME/artifactory/var/etc (not successful).
Is this even possible? Help is highly appreciated.
This example (even though it is not a Traefik example) gives you a direction for implementing it. Certain routes are already used within the product; you need to add a context path on top of them to ensure everything comes through the new context path.
https://jfrog.com/knowledge-base/how-to-remove-artifactory-from-the-context-url-in-artifactory-7/
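On the Traefik side, the routing half of such a setup might look like the labels below. This is an untested sketch: it only covers routing, and per the linked article Artifactory itself must also be configured for the new context path, otherwise its UI will keep generating absolute links to /ui.

labels:
  - "traefik.http.routers.artifactory.rule=Host(`localhost`) && PathPrefix(`/artifactory`)"
  - "traefik.http.routers.artifactory.middlewares=artifactory-stripprefix"
  - "traefik.http.middlewares.artifactory-stripprefix.stripprefix.prefixes=/artifactory"
  - "traefik.http.services.artifactory.loadbalancer.server.port=8082"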