I am trying to install swagger-php on Ubuntu, following http://blog.nbostech.com/2016/01/integrating-swagger-ui-for-php-application/
In that link they ask me to run the following command: php composer.phar require zircote/swagger-php
When I run "php composer.phar require zircote/swagger-php" in my terminal it says "Could not open input file: composer.phar". Due to this error I am unable to proceed with the swagger-php installation from the above link.
I am in need of support.
1) Install Composer for your PHP.
2) Use the command mentioned in the tutorial (php composer.phar require zircote/swagger-php) if Composer was installed locally.
3) Use "composer require zircote/swagger-php" if Composer is installed globally.
4) Consult the Composer documentation for the proper syntax if you use something other than Linux/macOS.
The above commands have to be executed inside the PHP project that will be annotated.
To install zircote/swagger-php you first need to have PHP installed on your system correctly (obviously). To verify that, run the following command in a terminal:
php -v
If you don't have PHP installed, run this:
sudo apt install php
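Composer also expects an unzipper to be available when it extracts packages. If a later composer require complains about missing zip support, this usually fixes it (package names assumed for Ubuntu):
sudo apt install unzip php-zip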
Then, to download Composer, run this set of commands (note: the SHA-384 hash below is tied to a specific installer release; the current value is published at https://getcomposer.org/download/):
php -r "copy('https://getcomposer.org/installer', 'composer-setup.php');"
php -r "if (hash_file('sha384', 'composer-setup.php') === '48e3236262b34d30969dca3c37281b3b4bbe3221bda826ac6a9a62d6444cdb0dcd0615698a5cbe587c3f0fe57a54d8f5') { echo 'Installer verified'; } else { echo 'Installer corrupt'; unlink('composer-setup.php'); } echo PHP_EOL;"
php composer-setup.php
php -r "unlink('composer-setup.php');"
Then make Composer available globally. This way you can run composer from any directory on your system and there's no need to run php composer.phar:
sudo mv composer.phar /usr/local/bin/composer
Then finally, navigate to your project folder and simply do
composer require zircote/swagger-php
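To confirm the package is in place you can list it, and, depending on which major version of swagger-php you got, generate a spec from your annotated sources with the bundled CLI (vendor/bin/swagger in 2.x, vendor/bin/openapi in 3.x and later; the src/ path is just an example):
composer show zircote/swagger-php
./vendor/bin/openapi --output swagger.json src/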
I have the below Dockerfile:
FROM mcr.microsoft.com/dotnet/core/sdk:3.1 AS installer-env
COPY . /src/dotnet-function-app
RUN cd /src/dotnet-function-app && \
    mkdir -p /home/site/wwwroot && \
    dotnet publish *.csproj --output /home/site/wwwroot

FROM mcr.microsoft.com/azure-functions/dotnet:4
ENV AzureWebJobsScriptRoot=/home/site/wwwroot \
    AzureFunctionsJobHost__Logging__Console__IsEnabled=true
#ODBCINI=/etc/odbc.in \
#ODBCSYSINI=/etc/odbcinst.ini \
#SIMBASPARKINI=/opt/simba/spark/lib/64/simba.sparkodbc.ini

WORKDIR ./home/site/wwwroot
COPY --from=installer-env /home/site/wwwroot /home/site/wwwroot

RUN apt update && apt install -y apt-utils odbcinst1debian2 libodbc1 odbcinst vim unixodbc unixodbc-dev freetds-dev curl tdsodbc unzip libsasl2-modules-gssapi-mit
RUN curl -sL https://databricks.com/wp-content/uploads/drivers-2020/SimbaSparkODBC-2.6.16.1019-Debian-64bit.zip -o databricksOdbc.zip && unzip databricksOdbc.zip
RUN dpkg -i SimbaSparkODBC-2.6.16.1019-Debian-64bit/simbaspark_2.6.16.1019-2_amd64.deb
RUN export ODBCINI=/etc/odbc.ini ODBCSYSINI=/etc/odbcinst.ini SIMBASPARKINI=/opt/simba/spark/lib/64/simba.sparkodbc.ini
The purpose of containerizing this Azure Function app is to enable using the Databricks ODBC driver to connect to an Azure Databricks instance and Delta Lake. I have read in another Stack Overflow thread that there is no other way to install custom drivers if an App Service is not containerized, so I assumed it would work the same for a containerized Function app.
Unfortunately I get an exception:
ERROR [01000] [unixODBC][Driver Manager]Can't open lib 'Simba Spark ODBC Driver' : file not found
or
Dependency unixODBC with minimum version 2.3.1 is required. Unable to load shared library 'libodbc.so.2' or one of its dependencies. In order to help diagnose loading problems, consider setting the LD_DEBUG environment variable: liblibodbc.so.2: cannot open shared object file: No such file or directory
even though I point to the drivers in this line:
RUN export ODBCINI=/etc/odbc.ini ODBCSYSINI=/etc/odbcinst.ini SIMBASPARKINI=/opt/simba/spark/lib/64/simba.sparkodbc.ini
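(Worth noting: an export in a RUN instruction only lives for that single build step, so these variables no longer exist in later layers or at runtime. If that is the culprit, the usual Dockerfile approach would be an ENV instruction, sketched here with the same paths:)
ENV ODBCINI=/etc/odbc.ini \
    ODBCSYSINI=/etc/odbcinst.ini \
    SIMBASPARKINI=/opt/simba/spark/lib/64/simba.sparkodbc.ini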
It looks like /home/site/wwwroot cannot access the folders above. Interestingly, I also tried to copy the contents of /etc to /home/site/wwwroot/bin, so I could set the environment variables to point at that folder, but the copy does not work:
WORKDIR /etc
COPY . /home/site/wwwroot/bin
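(One caveat: COPY takes its source from the build context on the host, never from the image's own filesystem, so WORKDIR /etc has no effect on what gets copied. Copying files that already exist inside the image would need a RUN step instead, e.g.:)
RUN cp -r /etc/. /home/site/wwwroot/bin/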
Generally, I pass the connection details for the Databricks instance in the connection string, but I also tried to point to the /etc files with the below command:
RUN gawk -i inplace '{ print } ENDFILE { print "[ODBC Drivers]" }' /etc/odbcinst.ini
but I get an error during the build:
gawk: inplace:59: warning: inplace::begin: Cannot stat '/etc/odbcinst.ini' (No such file or directory)
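(For what it's worth, the first error usually means that unixODBC has no driver registered under the name Simba Spark ODBC Driver. Registration would normally be an extra RUN step appending an entry to /etc/odbcinst.ini; the library path and file name below are assumptions based on the driver's default install location, so verify them inside the image:)
RUN printf '[Simba Spark ODBC Driver]\nDriver=/opt/simba/spark/lib/64/libsparkodbc_sb64.so\n' >> /etc/odbcinst.ini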
I have deployed JMeter in Kubernetes using
https://github.com/kubernauts/jmeter-kubernetes
But I am facing difficulties integrating Selenium WebDriver with JMeter. I am able to install the Selenium packages within the Docker image using
RUN cd /jmeter/apache-jmeter-$JMETER_VERSION/ && wget -q -O /tmp/jpgc-webdriver-3.3.zip https://jmeter-plugins.org/files/packages/jpgc-webdriver-3.3.zip && unzip -n /tmp/jpgc-webdriver-3.3.zip && rm /tmp/jpgc-webdriver-3.3.zip
But how do I install ChromeDriver within Docker? There is no official JMeter documentation on this and I am new to JMeter. I would really appreciate it if anyone could guide me on this.
There is no official documentation for JMeter on this because JMeter doesn't support Selenium; you should look into the official documentation for ChromeDriver and Docker instead.
Given you were able to come up with a RUN directive to download and unpack the WebDriver Sampler plugin, you should be able to do the same with ChromeDriver, like:
RUN wget -q -O /tmp/chromedriver.zip https://chromedriver.storage.googleapis.com/87.0.4280.20/chromedriver_linux64.zip && unzip /tmp/chromedriver.zip && rm /tmp/chromedriver.zip && mv chromedriver /tmp
You will also need to change the ENTRYPOINT to set the webdriver.chrome.driver JMeter system property, like -Dwebdriver.chrome.driver=/tmp/chromedriver
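For example, assuming the image launches JMeter directly through its entrypoint (the exact entrypoint of the kubernauts image may differ, so treat this as a sketch):
ENTRYPOINT ["jmeter", "-Dwebdriver.chrome.driver=/tmp/chromedriver"]
Alternatively, the property can be appended to the jmeter command line inside whatever launch script the image already uses.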
I want to run my unit tests against the latest versions of PHP and Node, which means I need both installed into one image for it to work with Bitbucket Pipelines.
What I've been doing until now is to pick one or the other as my base and then manually install the other. E.g., I've started with php:5.6-fpm as my base and then installed Node:
# Dockerfile
FROM php:5.6-fpm
RUN docker-php-ext-install bcmath
RUN curl -sL https://deb.nodesource.com/setup_6.x | bash -
RUN apt-get install -y git mercurial unzip nodejs
RUN npm set progress=false
RUN php -r "copy('https://getcomposer.org/installer', 'composer-setup.php');"
RUN php -r "if (hash_file('SHA384', 'composer-setup.php') === 'e115a8dc7871f15d853148a7fbac7da27d6c0030b848d9b3dc09e2a0388afed865e6a3d6b3c0fad45c48e2b5fc1196ae') { echo 'Installer verified'; } else { echo 'Installer corrupt'; unlink('composer-setup.php'); } echo PHP_EOL;"
RUN php composer-setup.php --install-dir=/usr/local/bin --filename=composer
RUN php -r "unlink('composer-setup.php');"
Is there any way to utilize both PHP and Node for my image, and then install some stuff on top of that (e.g. Composer and Yarn)?
You could create an image and commit it locally on your machine with:
docker commit <container-id> image-name:tagname
from: https://docs.docker.com/engine/reference/commandline/commit/
then later use this image in a new Dockerfile using FROM image-name:tagname
Everything you add to the new Dockerfile will be stacked on top of the image you created with PHP and Node.js.
Sometimes you can create several layers of images that build on each other with different processes and functions. A very good reference is: https://hub.docker.com/u/million12/
So you can create a base image with PHP, then another from the PHP image that has Node.js installed on it, and then another with Composer, as sketched below.
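For instance (all image names here are purely illustrative), the chain could look like this, each image built and tagged with docker build -t <name> . and then referenced by the next FROM:
# Dockerfile for the base layer, tagged my-php-base
FROM php:5.6-fpm
RUN docker-php-ext-install bcmath

# Dockerfile for the next layer, tagged my-php-node, stacking Node.js on top
FROM my-php-base
RUN curl -sL https://deb.nodesource.com/setup_6.x | bash - && apt-get install -y nodejs

# Dockerfile for the final layer, adding Composer and Yarn
FROM my-php-node
RUN curl -sS https://getcomposer.org/installer | php -- --install-dir=/usr/local/bin --filename=composer
RUN npm install -g yarn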
If you wish to share your images beyond your local machine, you should register on Docker Hub and push to it, or to an alternative service like quay.io.
Hope that answered your question.
I am continuing on the road to learning Docker and how to deal with images and containers. I am working on an image to be used by me at work. This is the Dockerfile:
FROM ubuntu:trusty
...
RUN curl -sS https://getcomposer.org/installer | php -- --install-dir=/usr/local/bin --filename=composer && \
chown www-data /usr/local/bin/composer && composer --version
RUN composer global require sebastian/phpcpd && \
composer global require phpmd/phpmd && \
composer global require squizlabs/php_codesniffer
...
It works, but each time I build the image I see the annoying warning about Composer being run as the root user, which is not bad at all, but I would like to change this behavior just for fun :) and learning.
I would like to create and keep (for other usage, maybe for other tasks as well) a user docker-dev for running this task. This user should have a home directory, and Composer should install libraries globally but under this user's home directory.
Right now Composer gets installed under /root/.composer and every library is installed there. I would like to turn it into /home/docker-dev/.composer and have the libraries installed there.
I have read a lot of docs about this topic:
http://www.projectatomic.io/docs/docker-image-author-guidance/
https://github.com/airdock-io/docker-base/wiki/How-Managing-user-in-docker-container
http://blog.dscpl.com.au/2015/12/overriding-user-docker-containers-run-as.html
Running app inside Docker as non-root user
http://www.yegor256.com/2014/08/29/docker-non-root.html
How to run Docker commands as non-root user in Docker in Docker?
...and many more, but this is a lot of information and it is confusing.
Can anybody help me make those changes in my Dockerfile, so that the user is created and the Composer libraries are installed under its home directory?
Note: I have removed the irrelevant part from the Dockerfile so the post is not too long.
Update
After the solution provided this is what I have tried without success:
# This still runs as root because we need to move composer to /usr/local/bin
RUN curl -sS https://getcomposer.org/installer | php -- --install-dir=/usr/local/bin --filename=composer && \
chown www-data /usr/local/bin/composer && composer --version
# I am adding a docker-dev user
RUN useradd docker-dev
# Switch to RUN the next command as the recently created user
USER docker-dev
# Run the command in verbose mode (just for debug)
RUN composer global --verbose require sebastian/phpcpd && \
composer global --verbose require phpmd/phpmd && \
composer global --verbose require squizlabs/php_codesniffer
And this is the output from console:
Step 8 : RUN composer global --verbose require sebastian/phpcpd && composer global --verbose require phpmd/phpmd && composer global --verbose require pdepend/pdepend && composer global --verbose require squizlabs/php_codesniffer && composer global --verbose require phpunit/phpunit
---> Running in 0f203e1760a4
[ErrorException]
chdir(): No such file or directory (errno 2)
Exception trace:
() at phar:///usr/local/bin/composer/src/Composer/Command/GlobalCommand.php:74
Composer\Util\ErrorHandler::handle() at n/a:n/a
chdir() at phar:///usr/local/bin/composer/src/Composer/Command/GlobalCommand.php:74
Composer\Command\GlobalCommand->run() at phar:///usr/local/bin/composer/vendor/symfony/console/Application.php:847
Symfony\Component\Console\Application->doRunCommand() at phar:///usr/local/bin/composer/vendor/symfony/console/Application.php:192
Symfony\Component\Console\Application->doRun() at phar:///usr/local/bin/composer/src/Composer/Console/Application.php:231
Composer\Console\Application->doRun() at phar:///usr/local/bin/composer/vendor/symfony/console/Application.php:123
Symfony\Component\Console\Application->run() at phar:///usr/local/bin/composer/src/Composer/Console/Application.php:104
Composer\Console\Application->run() at phar:///usr/local/bin/composer/bin/composer:43
require() at /usr/local/bin/composer:24
global <command-name> [<args>]...
I am not sure if the problem is with permissions. What do you think?
In your Dockerfile, use a RUN command to create the user:
RUN useradd userToRunComposer
Then use the USER command in your Dockerfile after creating it:
USER userToRunComposer
RUN curl -sS https://getcomposer.org/instal...
RUN composer global require se...
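One detail worth checking, since it matches the chdir error in the question's update: useradd does not create a home directory unless you pass -m, and composer global needs the user's home to exist. A sketch that covers both (setting HOME explicitly is just a precaution; the passwd entry usually suffices):
RUN useradd -m userToRunComposer
USER userToRunComposer
ENV HOME /home/userToRunComposer
RUN composer global require squizlabs/php_codesniffer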
You could also take a different approach by creating the user inside the container and then committing the image:
docker exec -ti <my container name> /bin/bash
useradd userToRunComposer
And then do a docker commit <my container id> <myimagename> to avoid having to create the user every single time.
See this question.
Problem
My wercker build exits with Failed step: setup environment - Command exited with exit code: 1 when I switch user in my Docker image. I'm running wercker dev from the command line. The Dockerfile builds fine with Docker itself on the command line, as well as on Docker Hub, and I can run it fine. It's just when I use it with wercker that the error occurs.
For example in my Dockerfile is the following code:
# Adding user
RUN adduser --disabled-password --gecos '' dockworker && adduser dockworker sudo && echo '%sudo ALL=(ALL) NOPASSWD:ALL' >> /etc/sudoers
RUN mkdir -p /home/dockworker && chown -R dockworker:dockworker /home/dockworker
USER dockworker # Line the build seems to break on
When I comment this line out, it seems to pass. The problem with this, for me, is the following: I'd like to switch to another user, since I'm trying to install nvm (for gulp and bower). Generally I prefer not to install these as root, therefore I add a user for this.
Workaround?
However, when I do install nvm as root in my Dockerfile (so just removing the user-related lines in the code block above completely):
ENV NODE_VERSION 0.12.7
ENV NVM_DIR /usr/local/nvm
# NVM
RUN curl https://raw.githubusercontent.com/creationix/nvm/v0.25.4/install.sh | NVM_DIR=/usr/local/nvm bash
#install the specified node version and set it as the default one, install the global npm packages
RUN . /usr/local/nvm/nvm.sh && nvm install $NODE_VERSION && nvm alias default $NODE_VERSION && npm install -g bower && npm install -g gulp
Then it does get past the setup environment stage, but during the steps it errors out that nvm and npm are not found. The step in the wercker.yml:
box:
  id: francobolli/docker-ubuntu-14.04-php-5.6
  tag: latest
  env:
    NVM_DIR: /usr/local/nvm
dev:
  steps:
    - script:
        name: gulp styles and javascript
        code: |
          npm install
          bower install --allow-root
          gulp --env=production
I don't really understand this. When I run both docker images from the commandline (so with wercker removed from the context completely) I can execute nvm and npm just fine, but when I'm running it through wercker, it seems the .bashrc file is not being executed. When I cat ~/.bashrc during the steps, I can see:
export NVM_DIR="/usr/local/nvm"
[ -s "$NVM_DIR/nvm.sh" ] && . "$NVM_DIR/nvm.sh" # This loads nvm
Workaround!
When I enter this in a step, it is executed and I can npm install without a problem, so it seems this line is never executed through .bashrc:
...
- script:
    name: gulp styles and javascript
    code: |
      [ -s "$NVM_DIR/nvm.sh" ] && . "$NVM_DIR/nvm.sh" # It works when I put it here, but it's also in ~/.bashrc, which doesn't seem to get executed
      npm install
...
Note: If I source ~/.bashrc in the wercker step instead, it does not work.
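A likely explanation for that note: the stock Debian/Ubuntu ~/.bashrc starts with an interactivity guard along these lines, so sourcing it from a non-interactive wercker step returns immediately and never reaches the nvm lines:
case $- in
    *i*) ;;
      *) return;;
esac
Given that, a more robust alternative is to bake the node bin directory into PATH with an ENV instruction in the Dockerfile, so no shell initialization is needed at all. The exact layout depends on the nvm version (older releases use $NVM_DIR/v<version>/bin, newer ones $NVM_DIR/versions/node/v<version>/bin), so check with ls $NVM_DIR first:
ENV PATH $NVM_DIR/v$NODE_VERSION/bin:$PATH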
Question
So my question is: what am I doing wrong that I cannot switch user in the wercker build? And even if I could, would I have the same problem as when running nvm as root, namely that nvm and npm CAN be found when a Docker container is instantiated from the command line but CAN'T be found when running it through wercker? What's the best solution?
I'd rather not add commands to the wercker.yml if it can be resolved through proper user configuration or proper nvm configuration. Sorry if I'm missing something very obvious.
This has nothing to do with Docker configuration, but with how Wercker handles Docker boxes. From the documentation:
Using Sudo
The sudo command is no longer supported in wercker v2 and effectively does nothing when used.
And for deployment:
Please note that if you update a project to make use of Docker (Ewok version) and this project has autodeployment, this deploy will most likely fail. We will update our documentation in the future on how to deploy these containers.
However, I did get it to build (and deploy) with the solution (temporary workaround?) displayed in the original question.