Different php.ini files by directory or alias in lighttpd + FastCGI

Take 2 domains: www.domain.com and sub.domain.com. Both are hosted on the same server, at /home/www and /home/sub respectively, and each uses a different php.ini file via the vhost configuration in lighttpd.
fastcgi.server = ( ".php" =>
    ((
        "bin-path" => "/usr/bin/php5-cgi -c /home/www/php.ini"
    ))
)

$HTTP["host"] == "sub.domain.com" {
    fastcgi.server = ( ".php" =>
        ((
            "bin-path" => "/usr/bin/php5-cgi -c /home/sub/php.ini"
        ))
    )
}
Is it possible to have www.domain.com/sub serve the content of sub.domain.com while still using the appropriate php.ini?
The major difference between the php.ini files we are using is the include_path. Is there an alternative way to alter this through the server config, by directory or alias? Or within a single php.ini?
The motivation for this is that we only have an SSL certificate for the main www domain, but wish to serve sub content via SSL on the primary domain path.
We are using lighttpd on Debian.

I don't know if it's still valid, but I followed this a few years ago... sadly I don't use lighttpd where I work right now, so I can't verify that it still works:
http://www.cyberciti.biz/tips/custom-phpini-file-for-each-domain-user.html
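In case that link goes stale: one way to attack this directly in lighttpd is to spawn a second PHP backend inside a URL conditional, so requests under /sub use the sub tree's php.ini. A sketch, untested, assuming mod_alias is enabled and the paths from the question (the socket path is a placeholder):

```
$HTTP["url"] =~ "^/sub/" {
    # Map /sub/ on the main host to the sub site's document root
    alias.url = ( "/sub/" => "/home/sub/" )
    # Separate PHP backend started with the sub site's php.ini
    fastcgi.server = ( ".php" =>
        ((
            "socket"   => "/tmp/php-sub.socket",
            "bin-path" => "/usr/bin/php5-cgi -c /home/sub/php.ini"
        ))
    )
}
```

Since the only real difference is include_path, another option is a single shared php.ini plus a per-tree override, e.g. a set_include_path() call in a file loaded via auto_prepend_file in each document root.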

Related

Error retrieving credentials from the instance profile metadata server. Failed to connect to 169.254.169.254 port 80: No route to host

I am trying to create a sub-domain using Route 53 with the aws-php-sdk, but I am getting this error again and again:
[2017-06-16 12:17:00] local.ERROR: Aws\Exception\CredentialsException: Error retrieving credentials from the instance profile metadata server.
(cURL error 7: Failed to connect to 169.254.169.254 port 80: No route to host (see http://curl.haxx.se/libcurl/c/libcurl-errors.html)) in /var/www/html/test/vendor/aws/aws-sdk-php/src/Credentials/InstanceProfileProvider.php:79
I am using aws-sdk-php version: 3.29
"aws/aws-sdk-php": "^3.29"
Here is my code:
use Aws\Route53\Route53Client;

$client = Route53Client::factory(array(
    'region'  => 'us-east-1',
    'version' => '2013-04-01',
    'credentials ' => array(
        'key'    => 'AWS_KEY',
        'secret' => 'AWS_SECRET_KEY'
    )
));
$result = $client->changeResourceRecordSets(array(
    // HostedZoneId is required
    'HostedZoneId' => 'ROUTER_53_HOSTED_ZONE_ID',
    // ChangeBatch is required
    'ChangeBatch' => array(
        // Changes is required
        'Changes' => array(
            array(
                // Action is required
                'Action' => 'CREATE',
                // ResourceRecordSet is required
                'ResourceRecordSet' => array(
                    // Name is required
                    'Name' => 'test2.xyz.co.in.',
                    // Type is required
                    'Type' => 'A',
                    'TTL' => 600,
                    "AliasTarget" => array(
                        "HostedZoneId" => "LOAD_BALANCER_ZONE_ID",
                        "DNSName" => "LOAD_BALANCER_DOMAIN_NAME",
                        "EvaluateTargetHealth" => false
                    ),
                ),
            ),
        ),
    ),
));
Any help would be appreciated. Thanks in advance.
This question is very old but I want to drop an answer in case someone has a similar issue.
The AWS PHP SDK needs credentials to communicate with AWS. The credentials are an access key ID and a secret access key.
As highlighted in AWS documentation
If you do not provide credentials to a client object at the time of its instantiation, the SDK will attempt to find credentials in your environment.
According to your logs, the SDK is not using the keys you provided; it is falling back to looking up credentials in your environment (such as ~/.aws/credentials) and then the instance profile metadata service.
Either make sure you have the access key and secret key in your environment:
$ less ~/.aws/credentials
[default]
aws_access_key_id = key
aws_secret_access_key = secret
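The SDK also reads the standard AWS environment variables, so exporting them is an equivalent way to satisfy the fallback chain (the values here are placeholders):

```shell
# Standard variable names the AWS SDKs look for
export AWS_ACCESS_KEY_ID=key
export AWS_SECRET_ACCESS_KEY=secret
```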
Or
Clear the configuration cache, in case stale values were cached, to force the use of the explicit credentials declared when instantiating your client:
php artisan config:cache
Also refer to this documentation on how to properly set up a client:
https://docs.aws.amazon.com/aws-sdk-php/v3/guide/guide/credentials.html
If you use
php artisan config:cache
make sure you don't use the env() helper to access environment variables anywhere except in the config files (config/*). In particular, avoid the env() helper in your Blade templates: once the above command has run, env() returns null.
Instead, access env values through a config file. If a dedicated config file under the config folder is not available for that vendor package or service, config/services.php is a good place to point to env values.
The php artisan config:cache command will speed up your app, since the env variables are cached, so it is recommended in a production environment.
Refer to Laravel's Configuration Caching documentation for more details.
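To make that concrete, here is a minimal sketch (the 'route53' service name is hypothetical): expose the env values once in config/services.php, then read them anywhere with config(), which keeps working after config:cache.

```php
<?php
// config/services.php -- hypothetical 'route53' entry
return [
    'route53' => [
        'key'    => env('AWS_KEY'),
        'secret' => env('AWS_SECRET_KEY'),
    ],
];
```

Elsewhere in the app, call config('services.route53.key') instead of env('AWS_KEY').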

Django 1.6 not loading CSS files

I'm using Django 1.6 with the Apache web server on Windows.
I'm not able to load the CSS files when accessing the Django admin panel; once I log in, the CSS still doesn't load:
href="/static/admin/css/base.css"
PROJECT_ROOT = 'D:/DjangoProjects/firstproject/firstproject'
STATIC_ROOT = os.path.join(PROJECT_ROOT,'static')
STATIC_URL = '/static/'
STATICFILES_DIRS = (
# empty
)
Am I missing anything?
Run
python manage.py collectstatic
to add the admin files to static.
You should set a STATIC_URL different from the static folder name (not required, but recommended):
STATIC_URL = '/public/'
STATIC_ROOT = os.path.join(PROJECT_ROOT,'static')
Create an alias in the virtual host (/etc/apache2/sites-available/your_site.conf) for the STATIC_ROOT:
Alias /public /var/www/public_html/your_project/static
<Directory /var/www/public_html/your_project/static>
Order allow,deny
Allow from all
</Directory>
Restart Apache.

Difficulty sourcing Tcl files from SharePoint

I have Tcl byte code on SharePoint, with a URL like
https://share.abc.com/sites/abc/test.tcl
I want to source this file from another Tcl file residing on my machine.
I don't want to copy the file from SharePoint.
Can anyone help me out here?
The source command only reads from the filesystem, but that can be a virtual filesystem. Thus, you can use the tclvfs package to make it so that HTTP sites can be mounted within the process, and then you can read from that.
# Add in HTTPS support
package require http
package require tls
::http::register https 443 ::tls::socket
# Mount the site; the vfs::urltype package won't work as it doesn't support https
package require vfs::http
# Double quotes only because of Stack Overflow highlighting sucking
vfs::http::Mount "https://share.abc.com/" /https.share.abc.com
# Load and evaluate the file
source /https.share.abc.com/sites/abc/test.tcl
This all assumes that you don't need any username/password credentials. If you do, you need to set them as part of the mount:
vfs::http::Mount "https://theuser:thepassword@share.abc.com/" /https.share.abc.com
Note that this currently requires that you're using HTTP Basic Auth (over HTTPS). That's sufficiently secure for almost any reasonable use.
This is quite a large stack of stuff. You can do it in rather less if you are willing to do some more of the work yourself:
package require base64
package require http
package require tls
::http::register https 443 ::tls::socket
proc source_https {url username password} {
set auth "Basic [base64::encode ${username}:${password}]"
set headers [list Authorization $auth]
set tok [http::geturl $url -headers $headers]
if {[http::ncode $tok] != 200} {
# Cheap and nasty version...
set msg [http::code $tok]
http::cleanup $tok
error "Problem with fetch: $msg"
}
set script [http::data $tok]
http::cleanup $tok
# These next two commands are effectively what [source] does (apart from I/O)
info script $url
uplevel 1 $script
}
source_https "https://share.abc.com/sites/abc/test.tcl" AzureDiamond hunter2

Capistrano & X-Sendfile

I'm trying to make X-Sendfile work with Capistrano for serving my heavy attachments. I found that X-Sendfile does not work with symlinks. How can I handle the files inside a folder symlinked by Capistrano, then?
my web server is apache2 + passenger
in my production.rb:
config.action_dispatch.x_sendfile_header = "X-Sendfile"
in my controller action:
filename = File.join([Rails.root, "private/videos", @lesson.link_video1 + ".mp4"])
response.headers["X-Sendfile"] = filename
send_file filename, :disposition => :inline, :stream => true, :x_sendfile => true
render nothing: true
my filesystem structure (where a "->" stands for "symlink" and indentation means subfolder):
/var/www/myproject
    releases/
        ....
    current/ -> /var/www/myproject/releases/xxxxxxxxxxxx
        app/
        public/
        private/
            videos/ -> /home/ftp_user/videos
my apache config
XSendFile on
XSendFilePath /    # also tried /home/ftp_user/videos
My application is able to serve small files, but with big ones it gives a NoMemoryError (failed to allocate memory).
I think it's not using x-sendfile, because the behavior is the same if I don't use it.
Here are the response headers of the file i'm trying to serve
Accept-Ranges:bytes
Cache-Control:private
Connection:Keep-Alive
Content-Disposition:inline
Content-Range:bytes 0-1265/980720989
Content-Transfer-Encoding:binary
Content-Type:video/mp4
Date:Sat, 01 Mar 2014 13:24:19 GMT
ETag:"70b7da582d090774f6e42d4e44ae3ba5"
Keep-Alive:timeout=5, max=97
Server:Apache/2.4.6 (Ubuntu)
Status:200 OK
Transfer-Encoding:chunked
X-Content-Type-Options:nosniff
X-Frame-Options:SAMEORIGIN
X-Powered-By:Phusion Passenger 4.0.37
X-Request-Id:22ff0a30-c2fa-43fe-87c6-b9a5e7da12f2
X-Runtime:0.008150
X-UA-Compatible:chrome=1
X-XSS-Protection:1; mode=block
I really don't know how to debug it, if it's a x-sendfile issue or if I'm trying to do something impossible for the symlinks problem
EDIT:
Following the suggested answer in the accepted one, it "magically" started working!
I created a capistrano task this way:
task :storage_links do
  on roles(:web), in: :sequence, wait: 2 do
    # create the symbolic links to the resources
    within "/var/www/my_application/current/private" do
      execute :ln, "-nFs", "/home/ftp_user/videos"
    end
  end
end
I didn't manage to run it after finalize_update, so I run it after the restart, by hand.
And I corrected my Apache configuration this way:
XSendFilePath /var/www/my_application
(before, I was pointing X-Sendfile to the FTP folder)
X-Sendfile no longer appears in my response headers and I now get a 206 Partial Content, but everything seems to work and Apache is serving files the right way (including very heavy files).
I know this can be a security issue, but I will try to point it to the last release of my application, because pointing it to the current symlink is not working.
Maybe I found a solution. How did you create your symlinks?
If you just did ln -s, it may not be enough.
Here they suggest using ln -nFs, so that it recognizes that you are linking in a directory.

Lighttpd server configuration to run CGI binaries using FastCGI

I have a lighttpd web server running on an embedded Linux box. Lighttpd has FastCGI enabled for PHP. How do I edit my lighttpd.conf so that it runs CGI binaries using FastCGI? Also, my Linux box doesn't have a cgi-bin folder in its document root.
Here is the excerpt from lighttpd.conf that enables FastCGI and the PHP configuration. Note that cgi.assign is commented out.
server.modules = (
#   "mod_rewrite",
    "mod_redirect",
#   "mod_alias",
    "mod_access",
#   "mod_trigger_b4_dl",
#   "mod_auth",
#   "mod_status",
#   "mod_setenv",
    "mod_fastcgi",
#   "mod_proxy",
#
and
## read fastcgi.txt for more info
## for PHP don't forget to set cgi.fix_pathinfo = 1 in the php.ini
fastcgi.server = ( ".php" =>
    ( "localhost" =>
        (
            "socket"   => "/tmp/php-fastcgi.socket",
            "bin-path" => "/bin/php-cgi -c /etc/php.ini"
        )
    )
)

#### CGI module
#cgi.assign = ( ".pl"  => "/usr/bin/perl",
#               ".cgi" => "/usr/bin/perl" )
#
You need to compile PHP with the CGI option. Basically it's a standalone PHP binary.
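As for the CGI binaries themselves: FastCGI is a different protocol from plain CGI, so an ordinary CGI executable cannot be served through mod_fastcgi unless it is built against a FastCGI library and given its own fastcgi.server entry. For plain CGI binaries, the usual approach is to enable mod_cgi alongside mod_fastcgi and uncomment/adapt the cgi.assign block. A minimal sketch, assuming your executables use a .cgi extension (the extension is an assumption):

```
server.modules += ( "mod_cgi" )

# An empty handler string tells lighttpd to execute the matched file itself,
# so the .cgi binaries must have their executable bit set.
cgi.assign = ( ".cgi" => "" )
```

No cgi-bin folder is required; cgi.assign matches by extension anywhere under the document root.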
