I'm trying to run a Lambda function with sharp installed. Sharp has instructions for installing it on AWS Lambda, but they are specific to npm. Specifically, they require me to run this:
npm install --arch=x64 --platform=linux --libc=glibc sharp
However, pnpm doesn't explicitly support the --arch and --platform options.
I'm developing on a Mac but am using Seed.run for CI/CD.
So far I really like pnpm and Seed and I don't want to change either one. Is there a way to get pnpm to install the right library?
You can pass options that pnpm doesn't recognize by prefixing them with config.:
pnpm install --config.arch=x64 --config.platform=linux --config.libc=glibc sharp
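If you don't want to pass the flags every time, the same settings can also live in a project-level .npmrc, since pnpm reads npm-style config files (a sketch; I haven't verified that every pnpm version honors these keys):
arch=x64
platform=linux
libc=glibc
After that, a plain pnpm install sharp should pick them up.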
I am trying to use a JavaScript generator and yield in k6.
When I try to run the script I get this error:
SyntaxError: ...yield is a reserved word
Is it possible to use yield in k6?
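A minimal script along these lines is enough to trigger it (a hypothetical repro, not my real script):
export default function () {
  function* gen() {
    yield 1;
  }
  const g = gen();
  console.log(g.next().value);
}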
Unfortunately this is not natively supported in the JavaScript VM used by k6 (goja). According to this comment, generators might eventually be supported, but there are no current plans for it.
That said, you can work around this by using the template-es6 project to transform your script to an ES5 variant with Babel, which can polyfill support for generators.
First clone the template-es6 Git repo locally.
Install all dependencies with yarn install or npm install.
Add @babel/plugin-transform-runtime to the plugins list in the .babelrc. It should look like this:
{
  "presets": [
    [
      "@babel/preset-env",
      {
        "useBuiltIns": "usage",
        "corejs": 3
      }
    ]
  ],
  "plugins": [
    "@babel/plugin-transform-runtime"
  ]
}
Install the plugin with yarn add -D @babel/plugin-transform-runtime or npm install --save-dev @babel/plugin-transform-runtime.
Modify the main.js script and install whatever other dependencies you need.
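For example, main.js with a generator might look like this (an illustrative sketch; Babel compiles the generator down to ES5 so goja can run it):
import { sleep } from 'k6';

export default function () {
  // A simple generator; the Babel transform plus the core-js polyfill make it ES5-safe
  function* counter() {
    let i = 0;
    while (true) {
      yield i++;
    }
  }
  const gen = counter();
  console.log(gen.next().value); // 0
  console.log(gen.next().value); // 1
  sleep(1);
}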
Run npm run-script webpack to bundle everything.
Finally, run the script with k6: k6 run --compatibility-mode=base build/app.bundle.js.
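Put together, the whole flow is roughly as follows (the clone URL is an assumption; use whatever the template-es6 README gives):
git clone https://github.com/k6io/template-es6.git
cd template-es6
npm install
npm install --save-dev @babel/plugin-transform-runtime
npm run-script webpack
k6 run --compatibility-mode=base build/app.bundle.js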
You can also run it without --compatibility-mode=base, but since it's already transformed to an ES5 script you can avoid the extra transformation done by k6, which improves performance and memory usage.
Yeah, this is not as straightforward as we'd like it to be, but it should be familiar to JavaScript developers, and we hope to improve support for these features in the future.
I am trying to set up a docker-compose architecture for local development and production, and I can't figure out when in a container's life is the best time to install library dependencies. At the same time I am not sure whether these should be placed in the container or in an external volume.
All my code is mounted in external volumes, so that changes are immediately taken into account without rebuilding the containers, but I am not sure about libraries that need to be installed by pip (I am running a Python backend) and npm/yarn (for the webpack front-end).
Placing requirements.txt and package.json into the containers and running pip install and yarn install in the container build process means that I have to rebuild the container any time dependencies change - that is too much overhead.
Putting them in an external volume and running pip install and yarn install as part of the command of each container when it is started seems to solve the issue.
The build process of each container then contains only platform dependencies (e.g. installing Python, webpack or other platform tools), while libraries are installed after the container starts (with the CMD directive).
Is this the correct approach? I have seen a lot of examples doing exactly the opposite and running npm install in the build process of the container - but I don't see any advantage to that. Am I missing something?
Installing dependencies is usually part of the build process. Mounting code is a good trick when developing in order to get changes reflected directly.
Concerning adding requirements.txt or package.json: installing dependencies takes time, and to keep it fast you need to take advantage of Docker layer caching. In particular, you want to avoid cache invalidation.
For pip I suggest the following during the development phase: for dependencies that you are unlikely to change, install them in a separate RUN instruction. Your Dockerfile will look something like:
FROM ..
RUN pip install package1 package2 package3 ...
ADD requirements.txt requirements.txt
RUN pip install -r requirements.txt
...
Keep only dependencies that might be changed in requirements.txt. Once you are done developing, add the packages back to the requirements.txt and build using the requirements file.
A similar approach would be adding two requirements files, and at the end combining them.
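A sketch of that two-file variant (the file names are illustrative):
FROM python:3
# Rarely-changing packages: this layer stays cached across builds
ADD requirements-base.txt requirements-base.txt
RUN pip install -r requirements-base.txt
# Frequently-changing packages: only these layers are rebuilt
ADD requirements-dev.txt requirements-dev.txt
RUN pip install -r requirements-dev.txt
# Once development settles, merge both files back into requirements.txt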
I have a requirement that before an application runs, some part of it needs to read an environment variable. For this I have the following Dockerfile:
FROM nodesource/jessie:0.12.7
# install gettext for envsubst
RUN apt-get update
RUN apt-get install -y gettext-base
# cache package.json and node_modules to speed up builds
ADD package.json package.json
RUN npm install
# Add source files
ADD src src
# Substitute value for backend endpoint env var
RUN envsubst < src/js/envapp.js > src/js/app.js
ADD node_modules node_modules
EXPOSE 8000
CMD ["npm","start"]
The above envsubst line reads (should read) an env variable $MYENV and substitutes it. But when I open the file app.js, it's empty.
I checked that the environment variable exists in the container and it does. Any reason its value is not read and substituted?
I also tried the same command in the container and it works. It only does not work when I run the image.
This is likely because $MYENV is not available to envsubst when the image is built (RUN executes at build time, not when the container starts).
Each RUN command runs in its own shell.
From the Docker documentation:
RUN (the command is run in a shell - /bin/sh -c - shell form)
You need to source your profile as well. For example, if the $MYENV environment variable is set in the .bashrc file, you can modify your Dockerfile like this:
RUN source ~/.bashrc && envsubst < src/js/envapp.js > src/js/app.js
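Alternatively (my own sketch, not part of the original setup): pass the value in at build time with ARG/ENV so it exists when the RUN line executes:
ARG MYENV
ENV MYENV=$MYENV
RUN envsubst < src/js/envapp.js > src/js/app.js
and build with docker build --build-arg MYENV=yourvalue .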
I encountered the same issue, and after much research and fishing through the internet, I managed to find a few workarounds. Below I'll list them and the identifiable risks at the time of this answer.
Solutions:
1.) apt-get install -y gettext - it's a standard GNU localization library; one of the tools it includes is envsubst, and I can confirm that it works in the docker ubuntu:latest image and should work in every flavored version.
2.) npm install envsub - depending on the use case, this approach is better suited to Node-based projects.
3.) the envsubst CLI project - in my opinion it seems a bit overkill to download a custom CLI from a random stranger, but it's also another option.
Risk:
apt-get install -y gettext:
1.) gettext - this approach would NOT be ideal for VMs, as with any package library it requires maintenance and updates as time passes. However, this isn't necessary for Docker, because once a container is initialized and up and running we can create a bash script to add the package, substitute the env vars and then remove the package (see the sketch after this list).
2.) It's a bad idea for VMs because it can be used to execute arbitrary code.
npm install envsub:
1.) envsub - it needs package updates, and this approach wouldn't be ideal if you're dealing with a different stack and not using Node.js.
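A sketch of the add-substitute-remove idea from gettext point 1.) above (the paths are illustrative):
apt-get update && apt-get install -y gettext-base
envsubst < /app/config.template > /app/config
apt-get purge -y gettext-base && apt-get autoremove -y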
NOTE:
There's also a PHP version for those developing a PHP application, and it seems to work with PHP's CLI if you need a custom environment.
Resources:
GetText package library info: https://www.gnu.org/software/gettext/
GetText Risk - https://ubuntu.com/security/notices/USN-3815-2
PHP-GetText - apt-get install -y php-gettext
Custom envsubst CLI: https://github.com/a8m/envsubst
I suggest that since you are using Node, you use the npm envsub module.
This module is well tested and is developed with docker in mind.
It avoids the need for relying on other dependencies when you already have the full Node arsenal at your fingertips.
envsub is described as
envsub is envsubst for NodeJS
NodeJS global CLI module providing file-level environment variable substitution via Handlebars
I am the author of the package. I think you will enjoy it.
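A quick usage sketch (double-check the exact flags against the envsub README; the file-to-file form below follows its documented usage):
npx envsub src/js/envapp.js src/js/app.js
with Handlebars-style placeholders such as {{ MYENV }} in the template file, per the description above.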
I had some issues with envsubst in Docker.
For some reason envsubst doesn't work when I try to write the output to the same file. For example, this is not working:
RUN envsubst < file.conf > file.conf
But when I tried to use a temp file the issue disappeared:
RUN envsubst < file.conf > file.conf.temp && cp -f file.conf.temp file.conf
(This is shell behavior rather than an envsubst bug: the > redirection truncates file.conf before envsubst gets to read it, so the input is empty.)
What is the correct way to install python3-gi on Travis-CI using the .travis.yml file?
The past recommendation was to use Python 3.2 (Travis-ci & Gobject introspection), but I would prefer testing against more recent versions.
I did try a few sensible combinations of commands, but my knowledge of the Travis-CI environment is very basic:
This, for example, fails both with and without system_site_packages: true:
before_install:
  - sudo apt-get install -qq python3-gi
virtualenv:
  system_site_packages: true
Two examples of repositories that have this working (as far as I can tell):
https://github.com/ignatenkobrain/gnome-news (CircleCI)
https://github.com/devassistant/devassistant (Travis-CI)
In order to use a newer version you would either have to build it yourself or use a container system like Docker.
gnome-news has an example of a PyGObject project using CircleCI (which is another free alternative to Travis-CI). They are using Fedora Rawhide in Docker, which has the latest versions of the entire GNOME stack.
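For Travis-CI itself, a hedged sketch of that Docker route in .travis.yml (the image, package name and test command are assumptions; on Fedora the bindings are packaged as python3-gobject):
sudo: required
services:
  - docker
script:
  - docker run -v "$PWD":/src fedora:rawhide /bin/sh -c "dnf install -y python3-gobject && cd /src && python3 setup.py test"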
I followed the documentation to run the tests of a Pylons project:
http://pylonshq.com/docs/en/0.9.7/i18n/#testing-the-application
But when I run:
nosetests --with-pylons test.ini
It reports an error:
E:\pylons\helloworld>nosetests --with-pylons test.ini
Usage: nosetests-script.py [options]
nosetests-script.py: error: no such option: --with-pylons
Why doesn't nosetests know about --with-pylons, and how do I fix it?
If you are using Pylons 1.0.1, the nose plugin is not registered by Pylons itself any more.
A workaround is to add this to the entry_points section of your own project's setup.py:
[nose.plugins]
pylons = pylons.test:PylonsPlugin
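Wired into setup.py, that looks something like this (the surrounding setup() call is your project's existing one; only entry_points is new):
from setuptools import setup

setup(
    name='helloworld',
    # ... your existing metadata ...
    entry_points="""
    [nose.plugins]
    pylons = pylons.test:PylonsPlugin
    """,
)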
This error happens in cases where nose cannot find the installed Pylons.
This can happen if nose is installed system-wide (for example, via apt-get install python-nose) but Pylons is installed in a virtual environment. In that case you can either:
Install Pylons system-wide, which would pollute your global environment and defeat the purpose of having a virtual environment, or
Install nose in the virtual environment (easy_install -U nose while the virtual environment is activated)
If you've installed the latest version of Pylons using pip, version 1.0.1rc1 is installed, and nose is not able to find the Pylons plugin.
To fix this, downgrade to Pylons 1.0.
pip uninstall pylons
pip install pylons==1.0
I had the same problem and found the solution here
I never used --with-pylons. When I am in the project's directory, nosetests does the job without any parameters.
I'm on Linux, with the proper virtualenv activated. Maybe it's different on Windows.