Docker failing to see updated fixtures CSV in rspec test directory - ruby-on-rails

This one is quite strange.
I am running a very typical Docker container that holds a Rails API. Inside this API, I have an endpoint which takes an upload of a CSV and does some things and stuff.
Here is the exact flow:
vim spec/fixtures/bid_update.csv
# fill it with some data
# now we call the spec that uses this fixture
docker-compose run --rm web bundle exec rspec spec/requests/bids_spec.rb
# and now the csv is loaded and I can see it as plaintext
However, after creating this, I decided to change the content of the CSV, adding a column and a corresponding value to each row.
Now, however, when I run the spec again after saving, it still has the old version of the CSV: the one originally used at the breakpoint in the spec.
cat'ing the CSV on the host clearly shows the new content.
Restarting the VM does nothing. The only solution I've found is to docker-machine rm dev and build a new machine (my main one for this is called dev).
I am entirely perplexed as to what could cause this or a simple means to fix it (building with all those images takes a while).
Ideas? Inform me I'm an idiot and I just had to press 0 for an operator and they would have fixed it?
Any help appreciated :)

I think it could be an issue with how VirtualBox shares folders with your environment. More information here: https://github.com/mitchellh/vagrant/issues/351#issuecomment-1339640
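If it really is the vboxsf cache, it may help to first pin down which layer is serving the stale copy before tearing the whole machine down. A rough diagnostic, assuming the docker-machine VM is called dev and the project lives under a path that VirtualBox shares into the VM at the same location (e.g. /Users on a Mac):

cat spec/fixtures/bid_update.csv                                 # on the host: should show the new content
docker-machine ssh dev cat "$PWD/spec/fixtures/bid_update.csv"   # the same file as the VM sees it through vboxsf
docker-compose run --rm web cat spec/fixtures/bid_update.csv     # the same file as the container sees it

Whichever layer still shows the old data is where the caching is happening.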

Related

Problem understanding how to, if at all possible, run my docker file (.tar)

I received a .tar Docker file from a friend who told me that it should contain all the dependencies for a program I've been struggling to get working, and that all I need to do is "run" the Docker file. The Docker file is in .tar format and is around 3.1 GB. The program this file was set up to run is called OpenSimRT. The GitHub link to the project is as follows:
https://github.com/mitkof6/OpenSimRT
The Google Drive link to the Docker file is as follows:
https://drive.google.com/file/d/1M-5RnnBKGzaoSB4MCktzsceU4tWCCr3j/view?usp=sharing
This program has many dependencies; some big ones to note are that it runs on Ubuntu 18.04 and OpenSim 4.1.
I'm not a computer scientist by any means, so I've been struggling even to learn Docker basics like loading and running an image. However, I desperately need this program to work. If you have any steps or advice on how to run this .tar, I'd greatly appreciate it. Alternatively, if you are able to find a way to get OpenSimRT up and running and can post those steps, I'd be more than happy with that solution as well.
I've tried the commands "docker run" and "docker load" followed by their respective tags, file paths, args, etc. However, even when I fix various issues I always get stuck on a missing /var/lib/docker/tmp/docker-import-...(random numbers) file. The numbers change every so often while trying to solve the issue, but eventually I always end up getting some variation of this error: Error response from daemon: open /var/lib/docker/tmp/docker-import-3640220538/bin/json: no such file or directory.
PS: I have already extracted the .tar and there is no install guide/instructions, .exe, or installer application inside. As a result I'm not sure how to get the program installed and running.
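For what it's worth, the /var/lib/docker/tmp/docker-import-... path in that error suggests the daemon is treating the archive as a raw filesystem tarball rather than an image produced by docker save, which can happen when the tar has been extracted and re-packed. A sequence worth trying on the original, un-extracted .tar (the filename here is a placeholder):

docker load -i opensimrt-image.tar     # load the saved image; do not extract the tar first
docker images                          # note the REPOSITORY and TAG of whatever was just loaded
docker run -it REPOSITORY:TAG bash     # start an interactive shell in it (assumes the image has bash)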

PyCharm: Unit testing directory setup with remote interpreter

For years I've been running a Docker container on my local machine and using it as a remote Python interpreter via SSH in PyCharm. This works great for running my code (though 2022.2.1 brought a lot of new bugs that are slowly being ironed out)! I'm now on 2022.2.3.
However, I'm having issues running unit tests. In the past (i.e. before version 2022.2.1), I could simply right click my tests directory (a direct child of my main project directory) and click Run Python tests in test... and it would all work as expected.
Now, though, when I click this, I receive an error message about "No such file or directory."
I've tried everything I can think of: I've set up my path mappings in the Python test run config to exactly match those shown in my Python run config, and have tried every combination of directory and subdirectory in the mappings and working directory, but I always receive an error about either having an empty test suite (no tests found) or that the directory "must be in the project."
It seems like no matter what I do, PyCharm is trying to create a temp directory somewhere, or is trying to read from some temp directory that I never specified, because I see errors like this:
AssertionError: /tmp/pycharm_project_405/docker/tests: No such file or directory
Yet I never created, specified, or requested a temp directory of any sort, let alone one named /tmp/pycharm_project_405/; this is a mystery to me.
PyCharm with an SSH interpreter is rapidly becoming unusable for me and my team because we cannot figure out how to set this up. Can anybody please offer some guidance on what we need to do?
Thank you all so very much!
I tried:
Changing run config for Python tests to match the working directory and path mapping of Python run configs (which work)
Directly specifying the path to the tests from the container's perspective
Setting up run config templates
Specifying one directory up/down from the actual tests
Expected:
Unit tests to be found and run as they were in previous versions of PyCharm
Answer
Create a run config for testing
In the testing run config, set Target: to Custom
Set the correct remote interpreter
Set Working directory to the test folder
Set TWO path mappings (see the sketch below): 1) Map the code directory (in my case, the parent directory of the tests folder) and 2) Map the test directory itself
Voila!!!
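For anyone following along, the two mappings end up looking something like this (both the local and remote paths below are placeholders, not my real layout):

Local path: /Users/me/myproject          Remote path: /opt/project
Local path: /Users/me/myproject/tests    Remote path: /opt/project/tests

with the Working directory in the same run config pointed at the remote tests folder (/opt/project/tests in this sketch).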

How do I use this config.yml file to run a web scraper that someone else built?

My end goal: I want to fetch data from a retail site on an hourly schedule to see if a specific product is back in stock or not.
I tried using XPath in Python to scrape the site myself, but I'm not too familiar with it, and why reinvent the wheel if someone has already built a scraper? In this case, Diggernaut has a GitHub repo.
https://github.com/Diggernaut/configs/tree/master/bananarepublic.gap.com
I'm using the above GitHub repo to try and run a pre-existing web scraper on the Banana Republic retail site. All that's included in the folder is a config.yml file. I don't even know where to start to try and run it... I am not familiar with using .yml files at all and barely know my way around a terminal (I can do basic "ls" and "cd" and "brew install"; otherwise, no idea).
Help! I have Docker and Git installed (not that I know how to use Docker). I have a Mac running version 10.13.6 (High Sierra).
I'm not sure why you're looking at using Docker for this, as the config.yml is designed for use on Diggernaut.com and not as part of a Docker container deployment. In fact, there is no Docker container for Diggernaut as far as I can see.
On the main GitHub config page for Diggernaut they list the following instructions:
All configs can be used with the Diggernaut service to retrieve product information.
You need to create a free account at Diggernaut
Log in to your account
Create a project with any name and description you want
Get into your new project by clicking it and create a new digger with any name
Then you will see 3 options suggested to you; you need to use the one where you will use the meta-language
The config editor will open and you can simply copy and paste the config code and click on the save button.
Switch the digger's mode from Debug to Active and then run your digger.
Wait for completion.
Download data.
Schedule your runs if required.

.env file flagged as not being an object when deploying through DevOps CD pipeline

I have just created a .env file to separate my environment variables from my main docker-compose file. I can run this on my local machine fine with no errors or issues, but when I try to run it through my CD pipeline I get the following error.
[error]Top level object in 'C:\BuildAgent_work\r38\a\"Myproject Name"\drop\ .env' needs to be an object not 'class 'str'.
I first thought this was because I had set up my build/CI process wrong but I have played around with it and have had no luck.
I have also done some research online to find others with the same problem, but none of it relates to DevOps in any way, so it has been unhelpful.
I am not sure how to reproduce this problem, but if anyone knows, I can try to provide some of my code if needed.
Edit:
Here is a snippet of my .env file. Check comment below for my thoughts
ContainerInfrastructure_Version=6.7.93-beta.1
ContainerInfrastructureCore_Version=6.7.41-beta.1
AuthenticationWebService_Version=6.7.52-beta.1
CRM_Version=6.7.52-beta.1
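For context, those values only matter because the docker-compose.yml references them through variable substitution, along these lines (the service and image names here are made up, not my actual compose file):

version: '3'
services:
  crm:
    image: myregistry.azurecr.io/crm:${CRM_Version}
  authenticationwebservice:
    image: myregistry.azurecr.io/authenticationwebservice:${AuthenticationWebService_Version}

docker-compose resolves the ${...} values from the .env file it finds in the directory it is run from, which is what ends up mattering below.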
Expected result:
Deploys successfully
What I'm getting during the docker-compose task:
[error]Top level object in 'C:\BuildAgent_work\r38\a\Goldpine.ReleaseManagement\drop.env' needs to be an object not 'class 'str'.
Ok so I figured it out. I'm not sure how to explain this briefly but I'll do my best.
So the problem was within DevOps itself, not my code. It turns out a .env file only works/gets picked up if you run the docker-compose command from within the directory where the docker-compose.yml (and the .env) file lives.
When it ran in DevOps, the command was not being run from within the downloaded artefact directory; instead the compose file was being pointed at via a path with the -f flag.
So, long story short, if you use a .env file you need to set the working directory within the CD pipeline to your artefact folder for it to be able to see the .env file correctly.
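In practice that just means the release step should cd into (or have its working directory set to) the folder that contains both files before composing; the path below is illustrative:

cd /path/to/drop            # the extracted artefact folder holding docker-compose.yml and .env
docker-compose up -d        # .env is now picked up from the current directory automatically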
I hope this is clear enough; if not, just let me know and I'll try to change it accordingly :)

What is wrong with this rails template?

I am attempting to take a whack at creating my first Rails application template and I am running into a slight issue with the copy_file method.
First, some background... Apparently the Ruby OpenSSL package does not ship with a CA store, so any attempt to connect to an HTTPS service will fail out of the box. The way around this (for Rails 3 apps) is to add the line OpenSSL::SSL::VERIFY_PEER = OpenSSL::SSL::VERIFY_NONE to the top of your config/environment.rb file. I need to do this on the fly in my template so I can install jQuery.
So I have that all figured out, my general thought is to:
Make a backup of my config/environment.rb file.
Prepend the data to the original
Run the jquery:install --ui task
Restore the original config/environment.rb file.
See my template Gist; lines 25..34 are the relevant section.
So all of that works until step #4, which fails with Error: Could not find "env.orig" in any of your source paths on line #31.
This is VERY perplexing to me because line #28 works and I can see the env.orig file on disk, so why won't the reverse work?
What am I doing wrong?
Update 1:
After looking at the Thor source (thor/actions.rb) it became clear that Thor uses different paths (not your current project path) for the source and destination. Furthermore, my copy was actually not working: it was copying the ERB template file, not the already generated file.
After a breather it occurred to me to use the right tool for the job, so now I have run 'cp environment.rb environment.~' and run 'mv environment.~ environment.rb', which works just fine. I am fairly certain this would not work on a Windows box without the Unix tools installed, but I can live with that. Does anyone have a better way?
See my update above for the why, but the solution was to use the right tool for the job: run 'cp environment.rb environment.~' and run 'mv environment.~ environment.rb', which works just fine. I am fairly certain this would not work on a Windows box without the Unix tools installed, but I can live with that.
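A possibly more portable alternative (an untested sketch, not what my template currently does) would be to lean on Ruby's own FileUtils instead of shelling out to cp/mv. The paths assume the template is operating from the application root, so adjust them if you are inside an inside 'config' block:

require 'fileutils'

# back up the generated config/environment.rb before patching it
FileUtils.cp 'config/environment.rb', 'config/environment.rb.orig'

# prepend the OpenSSL workaround so HTTPS calls succeed while the template runs
original = File.read('config/environment.rb')
File.open('config/environment.rb', 'w') do |f|
  f.write "OpenSSL::SSL::VERIFY_PEER = OpenSSL::SSL::VERIFY_NONE\n"
  f.write original
end

generate 'jquery:install', '--ui'   # or however your template invokes the jQuery generator

# restore the untouched environment.rb
FileUtils.mv 'config/environment.rb.orig', 'config/environment.rb'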
