Passing a variable value from Jenkins to a Node app

Not sure how best to phrase this question, but I'd like to put it here and hear some suggestions.
So far, I use a "DB_LINK" variable in my config.json file that holds the Mongo database URL. My Node app uses this variable to connect to Mongo. But this DB_LINK also gets checked into git along with config.json, which we don't want, because we don't want to check passwords into git.
In my local development, I use a local.json file that has all these configs and is not checked into git (it has a .gitignore entry). So this works fine in my local dev environment. The challenge is that when Jenkins tries to push the code to TEST, it has to run the test cases before deployment happens, and the DB_LINK value is needed at that point. So this is when I need the DB_LINK variable to be passed in from Jenkins.
Here is what I did so far:
In the Jenkins configuration, under 'predefined parameters', I added DB_LINK=myMongoLink to the parameters list.
But this value is not being handed over to my Node app. Any suggestions on how to achieve this?

OK, I figured this out. Before the change, the command I passed from Jenkins to run my test cases was:
npm run test
But now it is:
DBLink=myDB npm run test
With this, the DBLink variable is handed over to the Node app and the test cases can run. Before this change,
I used to pass this DBLink=myDB value from a config file.
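For completeness, a minimal sketch of how the app side can pick this up (the file and key names here are illustrative, not from the original post): read the environment variable first, and fall back to the git-ignored local.json during local development.

// config.js — hedged sketch: prefer the DBLink env var set by Jenkins,
// fall back to the git-ignored local.json for local development.
const fs = require('fs');

let dbLink = process.env.DBLink;
if (!dbLink && fs.existsSync('./local.json')) {
  dbLink = JSON.parse(fs.readFileSync('./local.json', 'utf8')).DB_LINK;
}

module.exports = { dbLink };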


PyCharm: Unit testing directory setup with remote interpreter

For years I've been running a Docker container on my local machine and using it as a remote Python interpreter via SSH in PyCharm. This works great for running my code (though 2022.2.1 brought a lot of new bugs that have slowly been ironed out). I'm now on 2022.2.3.
However, I'm having issues running unit tests. In the past (i.e. before version 2022.2.1), I could simply right click my tests directory (a direct child of my main project directory) and click Run Python tests in test... and it would all work as expected.
Now, though, when I click this, I receive an error message about "No such file or directory."
I've tried everything I can think of: I've set up the path mappings in my Python test run config to exactly match those in my (working) Python run config, and have tried every combination of directory and subdirectory in the mappings and working directory, but I always receive an error about either an empty test suite (no tests found) or the directory needing to "be in the project."
It seems like no matter what I do, PyCharm is trying to create a temp directory somewhere, or is trying to read from some temp directory that I never specified, because I see errors like this:
AssertionError: /tmp/pycharm_project_405/docker/tests: No such file or directory
Yet I never created, specified, or requested a temp directory of any sort, let alone one named /tmp/pycharm_project_405/; this is a mystery to me.
PyCharm with an SSH interpreter is rapidly becoming unusable for me and my team because we cannot figure out how to set this up. Can anybody please offer some guidance on what we need to do?
Thank you all so very much!
I tried:
Changing run config for Python tests to match the working directory and path mapping of Python run configs (which work)
Directly specifying the path to the tests from the container's perspective
Setting up run config templates
Specifying one directory up/down from the actual tests
Expected:
Unit tests to be found and run as they were in previous versions of PyCharm
Answer
Create a run config for testing
In the testing run config, set Target: to Custom
Set the correct remote interpreter
Set Working directory to the test folder
Set TWO path mappings: 1) Map the code directory (in my case, the parent directory of the tests folder) and 2) Map the test directory itself
Voila!!!
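For illustration (these paths are hypothetical, not from the original answer): if the project lives at /home/me/myproject locally and is mounted at /opt/project in the container, the two mappings would be:

/home/me/myproject -> /opt/project
/home/me/myproject/tests -> /opt/project/tests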

How to force .env variables update in a nuxt project?

Hi!
I wonder if anyone knows of a way to force an update of the .env file.
Originally, every time I modified my .env variables the changes took effect right away, but then I started using the following build configuration:
build: {
  hardSource: true,
  cache: true,
  parallel: true,
}
And ever since I started using those experimental features, the .env variables do not seem to get updated after I update one value in my .env file.
In my project, I develop the API on one machine and the front end on another machine (just for convenience), so the first machine sometimes has the IP address 192.168.100.100 and sometimes 192.168.100.101, etc.
My project uses these environment variables (in the .env file):
API_URL=http://192.168.100.100:4100
BASE_URL=http://localhost:4200
So, when the local IP address of the first machine changes, I have to update the .env file.
The problem now is that even after killing the app, deleting the .nuxt folder, and running npm run dev, I still see the API requests going to the previous IP address.
Solutions?
I have thought of disabling the cache and hardSource options, but they are really helpful to me and the IP changes are not that frequent. Still, once in a while I have to update some other variable, so that's not a solution for me.
I have also thought of disabling DHCP on the other machine and assigning it a fixed local IP address. That is not ideal, although I think I will do it for now, hoping that in the future I learn a better way of updating the environment variables (because sooner or later I will need to update another variable that has nothing to do with the IP address).
I'd like to know if there is a way to force the .env variables to be updated in a nuxt project with hardSource, cache and parallel set to true.
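One note, as an assumption on my part rather than something confirmed above: hard-source-webpack-plugin keeps its cache outside .nuxt (by default under node_modules/.cache), so deleting .nuxt alone will not invalidate it. A minimal sketch of a fuller reset:

# remove both the Nuxt build directory and the webpack/hard-source caches
rm -rf .nuxt node_modules/.cache
npm run dev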

Jenkins Configuration as Code - migrate configuration from another instance

I'm moving a Jenkins instance from a 'traditional' setup to JCasC.
We already have quite a complex setup, and I am wondering if there is a way to migrate the current configuration somehow, without having to go through every setting and hand-code it in YAML?
By the way, I'm not sure about installing the JCasC plugin on prod just to see the configuration... is that a bad idea?
Thanks!
Make sure you have read the Getting Started and other docs.
Create a new instance (you can do this on your desktop/laptop) and copy over all the configuration (config.xml, *.xml, secrets, and keys), but NO jobs, into the new instance. See what to back up.
Add an entry to start in quiet mode by calling Jenkins.instance.doQuietDown(), as sketched below.
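A minimal sketch of that startup entry, assuming a Groovy init script in $JENKINS_HOME/init.groovy.d/ (the file name is illustrative):

// $JENKINS_HOME/init.groovy.d/quiet.groovy
// Put Jenkins into quiet mode on startup so no builds run while migrating.
import jenkins.model.Jenkins
Jenkins.instance.doQuietDown()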
You might want/need to change the URL and port config too, depending on the host. Mind you, you can do all of this on your desktop/laptop.
Copy the plugins as well, and add the Configuration as Code plugin.
Start up the new instance, export the CasC config, and review it. The export is a starting point, so fill in any missing gaps, etc. For example, since you did not copy jobs over, folders and views will not be created. Some plugin configs are also not yet implemented.
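The export is available from the UI (Manage Jenkins -> Configuration as Code), or, as a hedged sketch, via the plugin's HTTP export endpoint (host and credentials below are placeholders):

# dump the running configuration to jenkins.yaml for review
curl -X POST -u admin:API_TOKEN http://localhost:8080/configuration-as-code/export -o jenkins.yaml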
Stop Jenkins, delete all the config (except the secret key). Put the JCasC config in place and start up. Compare the new configs to what you backed up/copied over. Repeat until they match. Now you should have a config that matches Prod.
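When putting the JCasC config in place, note that the plugin looks for jenkins.yaml in $JENKINS_HOME by default; to keep it elsewhere, the location can be set explicitly (the path below is illustrative):

# point Jenkins at the JCasC YAML before starting it
export CASC_JENKINS_CONFIG=/var/jenkins_home/casc/jenkins.yaml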
Now you can install JCasC in Prod. The plugin does nothing until configured. But do a similar backup/compare of the Prod config before and after configuring it, just in case something changed in the intervening period.
Of course, you now need to make sure any changes are only done via JCasC and not the UI, or you are out of sync again. See the blog and JEP.

.env file flagged as not being an object when deploying through DevOps CD pipeline

I have just created a .env file to separate my environment variables from my main docker-compose file. I can run this on my local machine fine with no errors or issues, but when I try to run it through my CD pipeline I get the following error:
[error]Top level object in 'C:\BuildAgent_work\r38\a\"Myproject Name"\drop\ .env' needs to be an object not 'class 'str'.
I first thought this was because I had set up my build/CI process wrong, but I have played around with it and have had no luck.
I have also done some research online to find others with the same problem, but none of it relates to DevOps in any way, so it has been unhelpful.
I am not sure how to reproduce this problem, but if anyone has ideas, I can try to provide some of my code if needed.
Edit:
Here is a snippet of my .env file:
ContainerInfrastructure_Version=6.7.93-beta.1
ContainerInfrastructureCore_Version=6.7.41-beta.1
AuthenticationWebService_Version=6.7.52-beta.1
CRM_Version=6.7.52-beta.1
Expected result:
Deploys successfully
What I'm getting during the docker-compose task:
[error]Top level object in 'C:\BuildAgent_work\r38\a\Goldpine.ReleaseManagement\drop.env' needs to be an object not 'class 'str'.
OK, so I figured it out. I'm not sure how to explain this briefly, but I'll do my best.
The problem was within DevOps itself, not my code. It turns out a .env file only works/gets picked up if you run the docker-compose command from within the directory where the docker-compose.yml file exists.
In DevOps, the command was not being run from within the downloaded artifact directory; instead, a path to the compose file was being passed with the -f flag.
So, long story short: if you use a .env file, you need to set the working directory within the CD pipeline task to your artifact folder for docker-compose to be able to see the .env file correctly.
I hope this is clear enough; if not, just let me know and I'll try to change it accordingly :)
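As an illustration (paths are hypothetical), the difference comes down to the current working directory, because docker-compose resolves .env relative to where it runs rather than relative to the -f path:

# fails to find .env: run from elsewhere, compose file referenced via -f
docker-compose -f /agent/work/drop/docker-compose.yml up -d

# works: run from inside the artifact folder, so .env sits in the cwd
cd /agent/work/drop
docker-compose up -d

Depending on the compose version, the --project-directory or --env-file flags can also point it at the right folder without changing directories.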

Finding the lib directory during common test

My question is: how should my Erlang app reliably find a binary in the priv directory, not only in production when installed properly, but also during common test?
I realised today, when I added a travis-ci configuration to an old Erlang app and pushed it to GitHub, that the process by which it works locally for me is a little more fragile than I thought. The travis-ci build failed because it, not unreasonably, checked out my repo into a directory named after the repo, which is of the form erlang-APP. Locally, though, my app is in a directory called APP-VSN.
The result is that a call to code:lib_dir(APP) returns a correct result during the common test run locally, but if I rename my current directory to erlang-APP instead of APP-VSN (or just APP, which also works), my local build fails, just like it does for travis-ci, because code:lib_dir(APP) returns {error,bad_name}. The behaviour is as though .. is added to the library path for rebar ct.
Renaming my GitHub repo from erlang-APP to APP resolves the travis-ci build failure... but knowing the build tests only pass depending on the name of the directory the repo is checked out into doesn't sit right with me.
One way could be to use a soft link (either in the repo under version control, or created when initializing the tests), and make your Erlang code path go via the link. E.g., "./APP" -> ".", or "./lib/APP" -> "..".
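A hedged sketch of that idea in a common test suite ("myapp" is a placeholder app name, and it assumes the suite runs with the repo checkout as the current directory):

%% myapp_SUITE.erl
-module(myapp_SUITE).
-export([all/0, init_per_suite/1, end_per_suite/1]).

all() -> [].

init_per_suite(Config) ->
    %% "./myapp" -> ".", so the code path gains a properly named app dir
    %% and code:lib_dir(myapp) stops returning {error,bad_name}.
    ok = file:make_symlink(".", "myapp"),
    true = code:add_patha("myapp/ebin"),
    Config.

end_per_suite(_Config) ->
    ok = file:delete("myapp").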
