How to pass ZAP session files to a dockerized ZAP scanner?

How do I properly pass session files (.session, .session.data, .session.properties, .session.script, and context) to the following command before the scan is executed?
docker run --rm -v $(pwd):/zap/wrk/:rw -t owasp/zap2docker-stable zap-baseline.py \
-t https://www.example.com -r testreport.html

Why do you want to?
It is (probably) possible, but it won't do anything useful. The ZAP baseline scan spiders the target you specify, so supplying an existing ZAP session file won't help you. Unless I'm missing something ;)

Use the core/action/loadSession/ API endpoint. Something like this:
from zapv2 import ZAPv2 as zap
import time

apikey = 'apikey12345'  # Change this to match your setup
z = zap(apikey=apikey, proxies={'http': 'http://127.0.0.1:9999', 'https': 'http://127.0.0.1:9999'})
time.sleep(5)
print('start..')
z.core.load_session('/root/Download/zaptmp/test.session')  # Obviously this needs to be your session path
sites = z.core.sites
# Check that the session loaded... I'm printing, you could check count not zero, whatever
print('Listing sites in loaded session:')
for site in sites:
    print(site)
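If you do want a dockerized ZAP to see your session files, one approach (a sketch, assuming the session files sit in the current directory and the API key and port from the snippet above) is to mount the directory and start ZAP in daemon mode, then point the script at it:
# Mount the session files into /zap/wrk and start ZAP as a daemon on port 9999
docker run --rm -v $(pwd):/zap/wrk/:rw -p 9999:9999 -t owasp/zap2docker-stable \
    zap.sh -daemon -host 0.0.0.0 -port 9999 -config api.key=apikey12345 \
    -config api.addrs.addr.name=.* -config api.addrs.addr.regex=true
# The script would then load '/zap/wrk/test.session' (the path inside the container)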

Right way to use secret flag in docker buildkit

I am struggling with the same problem mentioned by Gavin in this question, specifically with the new docker build secret information.
What is the right way to use that feature?
Looking around on the internet I only found some variations of the same example from the docker documentation mentioned above, which prints the secret at build time. Maybe I didn't fully understand the example, so please help me.
If there is no way to get the secret at build time and use it in another part of a Dockerfile (e.g. an ARG variable or RUN command), when and how can that new feature be used to truly keep my secret safe and still do the work?
My goal is to use this new feature at build time and also keep my secret information safe in case someone gets my image file and runs a history on it.
For example, if I have a Dockerfile like this:
FROM influxdb:2.0
ENV DOCKER_INFLUXDB_INIT_MODE=setup
ENV DOCKER_INFLUXDB_INIT_USERNAME=admin
ENV DOCKER_INFLUXDB_INIT_PASSWORD="mypassword"
How can I use that new feature mentioned in the docker documentation to set my variable (DOCKER_INFLUXDB_INIT_PASSWORD), for example, in a way that it will not be logged in the image history?
Thanks in advance
How can I use that new feature mentioned in docker documentation to set my variable (DOCKER_INFLUXDB_INIT_PASSWORD), for example, in a way that it will not be logged in the image history?
It depends on whether you need the secret only at build time, or whether you actually want to use it at runtime. The latter situation is probably more common, and there's already a canonical solution:
If you want to keep DOCKER_INFLUXDB_INIT_PASSWORD out of your image history, just don't set it during the build process. Require it to be set at runtime, e.g., via the -e VAR=VALUE argument to docker run (or the --env-file option):
docker run -e DOCKER_INFLUXDB_INIT_PASSWORD=mypassword myimage
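The --env-file variant looks like this (the file name is just an example):
# Put the secret in a file that never enters the image
echo 'DOCKER_INFLUXDB_INIT_PASSWORD=mypassword' > influx.env
docker run --env-file influx.env myimage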
You could have an ENTRYPOINT script that checks for the variable at runtime and exits with a helpful error message if it's not set. The official Docker images for things like MySQL and PostgreSQL work this way.
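A minimal sketch of such an entrypoint script (the file name and error message are illustrative; you would wire it up with ENTRYPOINT ["/entrypoint.sh"] in the Dockerfile):
#!/bin/sh
# entrypoint.sh: refuse to start unless the password was provided at runtime
if [ -z "$DOCKER_INFLUXDB_INIT_PASSWORD" ]; then
    echo "error: DOCKER_INFLUXDB_INIT_PASSWORD must be set (use docker run -e ...)" >&2
    exit 1
fi
# Hand off to the real command
exec "$@"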
In contrast, a build secret is meant for secrets that are only required at build time. For example, let's say your build process needs to do something like this:
RUN curl -o /data/mydataset -u username:password \
https://example.com/dataset
You'd like to share your Dockerfile and associated sources with other people, but you don't want to share your username and password. This is where build secrets come in. You would instead stick your authentication information in a file, and modify your Dockerfile to read that information from a secret:
RUN --mount=type=secret,id=mysecret \
curl -o /data/mydataset -u $(cat /run/secrets/mysecret) \
https://example.com/dataset
In this example, once we've copied the remote file into the image, we're done with the secret: we don't need it in order to run a container from the image; it was only required during the build process.
NB: The above assumes that you're providing the secret at build time as described in the documentation, so your build command might look something like:
DOCKER_BUILDKIT=1 docker build --secret id=mysecret,src=mysecret.txt -t myimage .

Updating IOS images via SCP in bash with expect

Good day. I am attempting to create/run a script that will allow me to send an updated IOS from a server to my network devices. The following code works when I put in a manual IP address right before the ":flash" command.
#!/usr/bin/expect
set IOSroot "/xxxxx/xxx/c3750e-universalk9-mz.150-2.SE10a.bin"
set pw xxxxxxxxxxxxxxxxxxx
spawn scp $IOSroot 1.1.1.1:flash:/c3750e-universalk9-mz.150-2.SE10a.bin
expect "TACACS Password:"
send "$pw\r"
interact
The code there works great and as expected. The issue arises when I try to use a file called "ioshost" with a list of IP's and use that within this script to get some automation. I have tried various things to get this to work. Some of them are as follows:
Setting variables:
IPHosts=$(cat ioshost)
set IPHost 'cat ioshost'
Along with trying to use the read/do command...
while read line; do
spawn scp $IOSroot $line:flash:/c3750e-universalk9-mz.150-2.SE10a.bin
done < ioshost
None of these seem to work and I am looking for guidance. Please note, I understand that setting a password is not best practice, but setting RSA keys as mentioned in other articles is not allowed, so I am forced to do it this way.
Thank you for your time.
You can use one Expect script and one Bash script.
First update your Expect script a bit:
#!/usr/bin/expect
set IOSroot "/xxxxx/xxx/c3750e-universalk9-mz.150-2.SE10a.bin"
set pw xxxxxxxxxxxxxxxxxxx
spawn scp $IOSroot [lindex $argv 0]:flash:/c3750e-universalk9-mz.150-2.SE10a.bin
#                  ^^^^^^^^^^^^^^^^
expect "TACACS Password:"
send "$pw\r"
interact
Then write a simple Bash for loop:
for host in $(<ioshost); do
expect /your/script.exp $host
done
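If ioshost might contain blank lines or stray whitespace, a while read loop is a bit more robust (same assumptions about the script path):
while IFS= read -r host; do
    # skip empty lines, pass each address to the Expect script
    [ -n "$host" ] && expect /your/script.exp "$host"
done < ioshost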

Docker. How to resume downloading image when interrupted?

How can I resume a pull when disconnected? The pull process always starts from the beginning every time I run docker pull some-image again after disconnecting. My connection is so unstable that even downloading just a 100MB image takes very long and almost always fails. So it is almost impossible for me to pull a bigger image. How can I resume the pull process?
Update:
The pull process will now automatically resume based on which layers have already been downloaded. This was implemented with https://github.com/moby/moby/pull/18353.
Old:
There is no resume feature yet. However, there are discussions around this feature being implemented with docker's download manager.
Docker's code isn't as up to date as the moby development repository on GitHub. People have been having issues relating to this for several years. I had tried to manually apply several patches which aren't in the upstream yet, and none worked decently.
The github repository for moby (docker's development repo) has a script called download-frozen-image-v2.sh. This script uses bash, curl, and other things like JSON interpreters via the command line. It will retrieve a docker token, and then download all of the layers to a local directory. You can then use 'docker load' to insert them into your local docker installation.
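Usage is roughly like this (the image name is just an example; the script itself prints the exact load command when it finishes):
# Download all layers of the image into ./out ...
./download-frozen-image-v2.sh ./out ubuntu:16.04
# ... then load them into the local docker installation
tar -cC ./out . | docker load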
It does not do well with resume though. There was a comment in the script noting that 'curl -C' isn't working. I tracked down and fixed this problem. I made a modification which first retrieves a ".headers" file (that request has always returned a 302 while I've been monitoring), and then retrieves the final URL using curl (+ resume support) into the layer tar file. It also has to loop in the calling function which retrieves a valid token, which unfortunately only lasts about 30 minutes.
It will loop this process until it receives a 416 stating that there is no resume possible since its ranges have been fulfilled. It also verifies the size against a curl header retrieval. I have been able to retrieve all the images I needed using this modified script. Docker has many more layers relating to retrieval, and has remote control processes (the Docker client) which make it more difficult to control, and they viewed this issue as only affecting some people on bad connections.
I hope this script can help you as much as it has helped me:
Changes:
The fetch_blob function uses a temporary file for its first connection. It then retrieves the 30x HTTP redirect from it. It attempts a header retrieval on the final URL and checks whether the local copy has the full file. Otherwise, it begins a resumed curl operation. The calling function, which passes it a valid token, has a loop surrounding the token retrieval and fetch_blob, which ensures the full file is obtained.
The only other variation is a bandwidth limit variable which can be set at the top, or via a "BW:10" command line parameter. I needed this to keep my connection viable for other operations.
Click here for the modified script.
In the future it would be nice if docker's internal client performed resuming properly. Increasing the token's validity period would help tremendously.
Brief view of the changed code:
# loop until FULL_FILE is set in fetch_blob.. this is for bad/slow connections
while [ "$FULL_FILE" != "1" ]; do
    local token="$(curl -fsSL "$authBase/token?service=$authService&scope=repository:$image:pull" | jq --raw-output '.token')"
    fetch_blob "$token" "$image" "$layerDigest" "$dir/$layerTar" --progress
    sleep 1
done
Another section from fetch_blob:
while :; do
    # if the file already exists.. we will be resuming..
    if [ -f "$targetFile" ]; then
        # getting current size of file we are resuming
        CUR=`stat --printf="%s" "$targetFile"`
        # use curl to get headers to find content-length of the full file
        LEN=`curl -I -fL "${curlArgs[@]}" "$blobRedirect" | grep content-length | cut -d" " -f2`
        # if we already have the entire file... lets stop curl from erroring with 416
        if [ "$CUR" == "${LEN//[!0-9]/}" ]; then
            FULL_FILE=1
            break
        fi
    fi
    HTTP_CODE=`curl -w %{http_code} -C - --tr-encoding --compressed --progress-bar -fL "${curlArgs[@]}" "$blobRedirect" -o "$targetFile"`
    if [ "$HTTP_CODE" == "403" ]; then
        # token expired so the server stopped allowing us to resume, lets return without setting FULL_FILE and itll restart this func w new token
        FULL_FILE=0
        break
    fi
    if [ "$HTTP_CODE" == "416" ]; then
        FULL_FILE=1
        break
    fi
    sleep 1
done
Try this:
ps -ef | grep docker
Get the PIDs of all the docker pull commands and do a kill -9 on them. Once killed, re-issue the docker pull <image>:<tag> command.
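For example, as a one-liner (assuming pkill is available and nothing else running matches the pattern):
# kill every process whose command line contains "docker pull"
pkill -9 -f "docker pull"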
This worked for me!

Web service URL for ComIbmWSRequestNode

Any reason why executing this shows the web service URL for ComIbmSOAPRequestNode but not for ComIbmWSRequestNode?
mqsireportproperties MYBROKER -o AllMessageFlows -e default -r
I'm using MB6.1 and I have a requirement to extract all hardcoded URLs of HTTP nodes that vary between two environments.
You could raise a requirement here to try to get the property added:
http://www.ibm.com/developerworks/rfe/?PROD_ID=532
It is almost certainly just an oversight, but given that 6.1 is almost out of support, you may find that an RFE will not be accepted, or even if it is, it will be targeted at a future release.
The embedded EG level listener outputs its URI registrations with the following command:
mqsireportproperties MYBROKER -e <egName> -o HTTP(S)Connector -r
So if you could temporarily switch the http nodes to use the EG level listener using:
mqsichangeproperties MYBROKER -e <egName> -o ExecutionGroup -n httpNodesUseEmbeddedListener -v true
then mqsireportproperties will give you what you need.
Otherwise I think the best bet is writing a small CMP app that examines the deployed flows, looking for SOAP and HTTP input nodes, and extracts the property that way.

Lost logout functionality for Grails app using Spring Security

I have a grails app that moved to a new subnet with a change to the DNS. As a result, the logout functionality stopped working. When I inspect the network using Chrome, I get this message under request headers: CAUTION: Provisional headers are shown. This means the request to retrieve that resource was never made, so the headers being shown are not the real thing.
The logout function is executing this action
package edu.example.performanceevaluations

import org.codehaus.groovy.grails.plugins.springsecurity.SpringSecurityUtils

class LogoutController {
    def index = {
        // Put any pre-logout code here
        redirect uri: SpringSecurityUtils.securityConfig.logout.filterProcessesUrl // '/j_spring_security_logout'
    }
}
I would greatly appreciate a pointer in the right direction.
As suggested by that link, run chrome://net-internals and see if you get anywhere.
If you are still lost, I would suggest two-way debugging. If you have Linux, find something related to your traffic and run something like tcpdump, or if that's too complex, install and run ngrep -W byline -d any port 8080 -q and look for the pattern to see what is going on.
With ngrep/tcpdump, look for that old IP or subnet in the entire traffic and see if anything is still trying to get through (this is all best done on the grails app server, of course; the port is possibly 8080 or any other clear-text port that your app may be running on).
Look for your IP in the apache logs: does it hit the actual server when you log out, etc.?
Has the application been restarted since the subnet change? It could have cached the old endpoint in the running Java process:
pgrep java|awk '{print "netstat -plant | grep "$1}'|/bin/sh
or
pgrep java|awk '{print " lsof -p "$1" |grep -i listen"}'|/bin/sh
I personally think something somewhere needs to be restarted, since it's hooking on to a cache of something.
Also check the hosts files of any machines involved to ensure nothing still has the previous subnet physically configured in there.
