How do I simulate a bad connection to AWS S3 using toxiproxy?

What I'm trying to accomplish:
I have a Ruby on Rails app which uses CarrierWave to store data via the fog-aws adapter.
I'm trying to simulate poor communication with AWS S3
What I've done:
I have a sample Ruby script which connects to AWS and enumerates objects:
require 'aws-sdk-s3'
params = {region: 'us-west-2', access_key_id: 'key', secret_access_key: 'secret'}
s3 = Aws::S3::Client.new(params)
puts s3.list_objects({bucket: 'bucketname', prefix: '', max_keys: 1})
This works - so the credentials are fine.
I now want to toxify this using toxiproxy (a chaos-engineering tool that can randomly break or degrade connections).
Added an http proxy to the params: {region: 'us-west-2', access_key_id: 'key', secret_access_key: 'secret', http_proxy: 'http://localhost:7890'}
Created a toxiproxy definition, trying every upstream I could think of:
toxiproxy-cli create --listen localhost:7890 -u bucketname.s3.amazonaws.com:443 test-aws
toxiproxy-cli create --listen localhost:7890 -u s3-r-w.us-west-2.amazonaws.com:443 test-aws
I then execute the above-mentioned script, and all I see is: lib/ruby/2.6.0/net/protocol.rb:225:in 'rbuf_fill': end of file reached (Seahorse::Client::NetworkingError)
I can't figure out the magic parameters required to make everything connect.
So questions:
Can what I'm trying to accomplish work using toxiproxy?
If not, what is recommended?

I figured it out. toxiproxy by itself is not enough to intercept and forward the connection -- it is a plain TCP proxy, not an HTTP proxy -- so I also needed tinyproxy.
This is what I did:
Setup tinyproxy as an http proxy on port 8888
Setup toxiproxy on port 7890 to forward to port 8888
Configured the AWS client to connect to an http_proxy on port 7890 -- this lets toxiproxy mess with the connection, while tinyproxy consumes the HTTP CONNECT and tunnels on to AWS.
Example code:
require 'aws-sdk-s3'
s3 = Aws::S3::Client.new(region: 'us-west-2', access_key_id: 'key', secret_access_key: 'secret', http_proxy: 'http://localhost:7890')
puts s3.list_objects({bucket: 'bucketname', prefix: '', max_keys: 1})
Example tinyproxy configuration:
Port 8888
Listen 127.0.0.1
Timeout 600
Allow 127.0.0.1
Example toxiproxy configuration:
toxiproxy-cli create --listen 127.0.0.1:7890 -u 127.0.0.1:8888 test-aws
toxiproxy-cli toxic add --type reset_peer -a timeout=25 test-aws
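With that plumbing in place, you can also drive the toxics from Ruby in a test. A minimal sketch, assuming Shopify's toxiproxy gem is installed, the toxiproxy server is running, and the test-aws proxy above already exists (the latency value is arbitrary):
require 'toxiproxy'
require 'aws-sdk-s3'

s3 = Aws::S3::Client.new(region: 'us-west-2', access_key_id: 'key',
                         secret_access_key: 'secret',
                         http_proxy: 'http://localhost:7890')

# Add 2s of latency to every response through the proxy, but only
# for the duration of the block; the toxic is removed afterwards.
Toxiproxy['test-aws'].downstream(:latency, latency: 2000).apply do
  puts s3.list_objects(bucket: 'bucketname', prefix: '', max_keys: 1)
end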

Related

Rails CQL cannot connect to AWS Keyspaces (AWS Cassandra)

I am trying to connect from a Ruby on Rails application to AWS Keyspaces (AWS Cassandra), but I cannot manage to do it. I use the cequel gem and generated config/cequel.yml, which contains something similar to the following:
development:
  host: "CONTACT_POINT"
  username: "USER"
  password: "PASS"
  port: 9142
  keyspace: key_development
  max_retries: 3
  retry_delay: 0.5
  newrelic: true
  ssl: true
  server_cert: 'config/certs/AmazonRootCA1.pem'
  replication:
    class: NetworkTopologyStrategy
    datacenter1: 3
    datacenter2: 2
  durable_writes: false
(The credentials were used in another app, where they work as expected.)
When I try to run:
rake cequel:keyspace:create
I get the following error:
Cassandra::Errors::NoHostsAvailable: All attempted hosts failed: x.xxx.xxx.xxx (Cassandra::Errors::ServerError: Internal Server Error)
Set the dc (datacenter) to us-east-1 and drop the replication definition.
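For reference, a minimal sketch of the resulting cequel.yml under that advice, trimmed to the relevant keys. The datacenter key is an assumption based on the answer (Keyspaces names its datacenter after the region), so verify the exact option name against the cequel/cassandra-driver documentation:
development:
  host: "CONTACT_POINT"
  username: "USER"
  password: "PASS"
  port: 9142
  keyspace: key_development
  ssl: true
  server_cert: 'config/certs/AmazonRootCA1.pem'
  datacenter: us-east-1  # assumed option name; Keyspaces uses the region as the DC
  # replication block removed -- Keyspaces manages replication itself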

Why is elasticserach-rails suddenly raising Faraday::ConnectionFailed (execution expired)?

I'm using Elasticsearch in a Rails app via the elasticsearch-model and elasticsearch-rails gems.
Everything was previously working fine, but after some updates I am now getting a Connection Failed error whenever I attempt to interact with the remote cluster (AWS Elasticsearch).
> MyModel.__elasticsearch__.create_index! force: true
=> Faraday::ConnectionFailed (execution expired)
I'm struggling to work out what is causing this connection error. After searching for similar issues, I've adjusted timeouts and tried various combinations of http, https and naked URLs, but with no success.
What is a sensible way to debug this connection error?
My Elasticsearch client is initialized like this:
#initializers/elasticsearch.rb
require 'faraday_middleware'
require 'faraday_middleware/aws_sigv4'

credentials = Aws::Credentials.new(
  ENV.fetch('AWS_ACCESS_KEY_ID'),
  ENV.fetch('AWS_SECRET_ACCESS_KEY')
)

config = {
  url: ENV.fetch('AWS_ELASTICSEARCH_URL'),
  retry_on_failure: true,
  transport_options: {
    request: { timeout: 10 }
  }
}

client = Elasticsearch::Client.new(config) do |f|
  f.request :aws_sigv4, credentials: credentials, service: 'es', region: ENV.fetch('AWS_ELASTICSEARCH_REGION')
end

Elasticsearch::Model.client = client
It turns out that there were two parts to this issue.
First, the Elasticsearch::Client, as configured above, was using the default ES port 9200. My ES is hosted on AWS, which appears to not expose this port.
After fixing this, I ran into the second issue (which I suspect is more specific to this app). I started getting a Faraday::ConnectionFailed (end of file) error. I don't know what caused this, but configuring the client with host and scheme fixed it.
My final config is as follows:
#initializers/elasticsearch.rb
# ...

config = {
  host: ENV.fetch('AWS_ELASTICSEARCH_URL'),
  port: 443,
  scheme: "https",
  retry_on_failure: true,
  transport_options: {
    request: { timeout: 10 }
  }
}

client = Elasticsearch::Client.new(config) do |f|
  # ...
N.B. AWS_ELASTICSEARCH_URL must contain the host only, without a protocol (e.g. your-domain.us-east-1.es.amazonaws.com rather than an https:// URL).
This is because of a version issue.
Use gem 'elasticsearch-model', '~> 5'
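A minimal Gemfile sketch of that pin. The matching elasticsearch-rails line is an assumption, following the usual advice to keep both gems on the same major version as the cluster:
# Gemfile
gem 'elasticsearch-model', '~> 5'
gem 'elasticsearch-rails', '~> 5'  # assumed: keep both gems on the same major version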

How to configure ejabberd with Oauth support

I've tried to follow the steps here to configure ejabberd OAuth but failed. ejabberd.yml looks like this:
-
  port: 5280
  module: ejabberd_http
  request_handlers:
    "/websocket": ejabberd_http_ws
    "/log": mod_log_http
    # OAuth support:
    "/oauth": ejabberd_oauth
    # ReST API:
    "/api": mod_http_api
    ## "/pub/archive": mod_http_fileserver
  web_admin: true
  http_bind: true
  ## register: true
  captcha: true
Note: I've restarted ejabberd.
URL that I used (this is the page where I entered User, Server and Password) : http://mytestsite.com:5280/oauth/authorization_token?response_type=token&client_id=Client1&redirect_uri=http://mytestsite.com&scope=user_get_roster+sasl_auth
I've been redirected to https://mytestsite.com/?error=access_denied&state=&gws_rd=ssl
According to the tutorial, once I enabled /oauth and /api in the .yml file, the following URL should redirect me to http://mytestsite.com/?access_token=RHIT8DoudzOctdzBhYL9bYvXz28xQ4Oj&token_type=bearer&expires_in=3600&scope=user_get_roster+sasl_auth&state=
You must define the oauth_access parameter in the ejabberd.yml config file; otherwise, no one can create an OAuth token.
We will update the documentation to make it more accurate on that part.
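A minimal sketch of that setting at the top level of ejabberd.yml, assuming you want any user to be able to create tokens (you can point it at an access rule instead to restrict it):
# ejabberd.yml (top level)
oauth_access: all  # any local user may create an OAuth token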

Gitlab LDAP (Active Directory) Authentication without Server Side Access

I am using GitLab Omnibus 7.10.0 on RHEL 6.6. I have enabled LDAP using the following configuration:
gitlab_rails['ldap_enabled'] = true
gitlab_rails['ldap_servers'] = YAML.load <<-'EOS' # remember to close this block with 'EOS' below
main: # 'main' is the GitLab 'provider ID' of this LDAP server
  label: 'FOO COM Active Directory (LDAP)'
  host: 'ad.server.foo.com'
  port: 3268
  uid: 'someuser'
  method: 'plain' # "tls" or "ssl" or "plain"
  bind_dn: 'CN=My Whole. Name,OU=Some Users,DC=ad,DC=server,DC=foo,DC=com'
  password: 'thepassword'
  active_directory: true
  allow_username_or_email_login: false
  block_auto_created_users: false
  base: 'DC=ad,DC=server,DC=foo,DC=com'
  user_filter: ''
  # ## EE only
  # group_base: ''
  # admin_group: ''
  # sync_ssh_keys: false
  #
  # secondary: # NOT FILLED OUT
EOS
My problem is that I can't get users to authenticate via LDAP. I'm not sure whether the configuration is wrong or I need to do something on the server side (to which I have no direct access). When I run
gitlab-rake gitlab:ldap:check RAILS_ENV=production
I get this
Checking LDAP ...
LDAP users with access to your GitLab server (only showing the first 100 results)
Server: ldapmain
Checking LDAP ... Finished
I can search for individual users using Java with this account (my personal account) or another account for a different application, but I can't get AD working with GitLab. I got the bind_dn "My Whole. Name" by running this command on a Windows box:
gpresult -r
I have also tried a bind_dn of:
uid=myADaccountname,OU=Some Users,DC=ad,DC=server,DC=foo,DC=com
and
myADaccountname@ad.server.foo.com
but I still have the same problem.
For Active Directory, the uid should be:
uid: 'sAMAccountName'
GitLab should connect using the user specified in the bind_dn, with the given password.
Since GitLab 9.5.1, the uid now requires array brackets ([]).
See this issue: https://gitlab.com/gitlab-org/gitlab-ce/issues/37120
This might just be a bug which will be fixed.
I had to update the value for Active Directory from the answer above to:
uid: ['sAMAccountName']
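Put together, a minimal sketch of the relevant gitlab.rb block with that fix applied (values taken from the question; trim or extend as needed):
gitlab_rails['ldap_enabled'] = true
gitlab_rails['ldap_servers'] = YAML.load <<-'EOS'
main:
  label: 'FOO COM Active Directory (LDAP)'
  host: 'ad.server.foo.com'
  port: 3268
  uid: ['sAMAccountName'] # GitLab 9.5.1+ expects an array here
  method: 'plain'
  bind_dn: 'CN=My Whole. Name,OU=Some Users,DC=ad,DC=server,DC=foo,DC=com'
  password: 'thepassword'
  active_directory: true
  base: 'DC=ad,DC=server,DC=foo,DC=com'
EOS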

GitLab LDAP secondary strategy

I'm using GitLab CE Omnibus package (gitlab_7.7.2-omnibus.5.4.2.ci-1_amd64) on a clean Debian (debian-7.8.0-amd64) installation.
I followed the installation process on https://about.gitlab.com/downloads/ and everything works fine.
I modified /etc/gitlab/gitlab.rb to use a single LDAP server for authentication,
which also worked as expected.
But when I tried to use a secondary LDAP connection, "gitlab-ctl reconfigure" gave me this output:
---- Begin output of /opt/gitlab/bin/gitlab-rake cache:clear ----
STDOUT:
STDERR: rake aborted!
Devise::OmniAuth::StrategyNotFound: Could not find a strategy with name `Ldapsecondary'. Please ensure it is required or explicitly set it using the :strategy_class option .
Tasks: TOP => cache:clear => environment
(See full trace by running task with --trace)
---- End output of /opt/gitlab/bin/gitlab-rake cache:clear ----
So, the problem is that I can use the LDAP connection 'main' but I cannot use the connection 'secondary'.
Is there any possibility to use two different LDAP connections in the CE edition at once?
I'm new to Ruby [on Rails]. I found something in /opt/gitlab/embedded/service/gitlab-rails/lib/gitlab/ldap/config.rb but I'm not able to debug anything.
Here are my settings in /etc/gitlab/gitlab.rb
gitlab_rails['ldap_enabled'] = true
gitlab_rails['ldap_servers'] = YAML.load <<-EOS # remember to close this block with 'EOS' below
main: # 'main' is the GitLab 'provider ID' of this LDAP server
  label: 'First Company'
  host: '192.168.100.1'
  port: 389
  uid: 'sAMAccountName'
  method: 'tls' # "tls" or "ssl" or "plain"
  bind_dn: 'debian@firstcompany.local'
  password: 'Passw0rd'
  active_directory: true
  allow_username_or_email_login: false
  base: 'dc=firstcompany,dc=local'
  user_filter: '(&(objectClass=person)(objectClass=user)(!(userAccountControl:1.2.840.113556.1.4.803:=2)))'
  ## EE only
  group_base: ''
  admin_group: ''
  sync_ssh_keys: false
secondary: # 'secondary' is the GitLab 'provider ID' of second LDAP server
  label: 'Second Company'
  host: '192.168.200.1'
  port: 389
  uid: 'sAMAccountName'
  method: 'tls' # "tls" or "ssl" or "plain"
  bind_dn: 'debian@secondcompany.local'
  password: 'Passw0rd'
  active_directory: true
  allow_username_or_email_login: false
  base: 'dc=secondcompany,dc=local'
  user_filter: '(&(objectClass=person)(objectClass=user)(!(userAccountControl:1.2.840.113556.1.4.803:=2)))'
  ## EE only
  group_base: ''
  admin_group: ''
  sync_ssh_keys: false
EOS
Thank you very much!
Multiple LDAP servers are an EE feature, so setting the config in CE won't do anything. You can see the feature in the GitLab documentation.
With GitLab 14.7 (January 2022, seven years later), this is now possible! (for hosted instances)
LDAP failover support
You can now specify multiple hosts (using hosts) in your GitLab LDAP configuration.
GitLab will use the first reachable host. This ensures continuity of access to GitLab should one of your LDAP hosts become unresponsive.
Thanks to Mathieu Parent for the contribution!
See Documentation and Issue.
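A minimal sketch of what that might look like in gitlab.rb. The exact shape of the hosts entries is an assumption here (host/port pairs), so check the linked documentation before relying on it:
gitlab_rails['ldap_servers'] = YAML.load <<-EOS
main:
  label: 'LDAP'
  # hosts (GitLab 14.7+): GitLab uses the first reachable host, giving failover
  hosts:
    - ['ldap-primary.example.com', 636]
    - ['ldap-backup.example.com', 636]
  port: 636
  uid: 'sAMAccountName'
  method: 'ssl'
  bind_dn: 'CN=bind user,DC=example,DC=com'
  password: 'secret'
  base: 'dc=example,dc=com'
EOS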
