read checksum from artifactory given the artifact url - ruby-on-rails

I want to read the checksum from Artifactory, given the artifact URL, and append it to an attribute.
I tried to look for examples, but people hardcode the checksum value, like below.
If I hardcode the value, I will have to update it every time there is a new artifact. I do not want to do that.
Please let me know if there is any way to get this value from Artifactory.
I have code that computes the checksum in my Chef recipe using Digest. I will compare the checksum from Artifactory with the checksum I computed in the recipe.
remote_file '/tmp/testfile' do # hypothetical resource name; the original snippet omitted this opening line
  source 'http://www.example.com/tempfiles/testfile'
  mode '0755'
  checksum '3a7dac00b1' # A SHA256 (or portion thereof) of the file.
end
To compare the computed checksum with the expected one, I have seen people hardcode the expected value. Instead, I want to read it from Artifactory through Chef, e.g.:
computed_checksum = Digest::SHA2.file(temp.path).hexdigest
artifactory_checksum = # read from Artifactory?
raise 'Checksum mismatch' if artifactory_checksum != computed_checksum

Maven-style repositories (including Artifactory) publish a checksum file right next to each artifact, so you can fetch the expected value instead of hardcoding it:
require 'open-uri'
require 'tempfile'
require 'digest'

jar_file = 'https://repo1.maven.org/maven2/mysql/mysql-connector-java/8.0.17/mysql-connector-java-8.0.17.jar'

# Download the artifact and compute its SHA-1 locally
temp = Tempfile.new
temp << URI.open(jar_file).read
temp.flush
actual_sha1 = Digest::SHA1.file(temp.path).hexdigest

# Fetch the checksum published alongside the artifact
sha1_file = 'https://repo1.maven.org/maven2/mysql/mysql-connector-java/8.0.17/mysql-connector-java-8.0.17.jar.sha1'
expected_sha1 = URI.open(sha1_file).read.strip

p actual_sha1 == expected_sha1
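If the artifact is not in a Maven layout, Artifactory also exposes checksums through its REST API: a GET on api/storage/<repo>/<path> returns the file info as JSON, including md5, sha1, and sha256 checksums. A minimal sketch (the host, repository, and artifact path are hypothetical):
require 'net/http'
require 'json'
require 'digest'
require 'uri'

# Hypothetical coordinates (replace with your own)
base = 'https://artifactory.example.com/artifactory'
repo = 'libs-release-local'
path = 'com/example/app/1.0/app-1.0.jar'

# Artifactory's storage API returns file info, including checksums
info = JSON.parse(Net::HTTP.get(URI("#{base}/api/storage/#{repo}/#{path}")))
artifactory_sha256 = info.fetch('checksums').fetch('sha256')

# Compare against the locally computed value, e.g. inside a Chef recipe
computed = Digest::SHA2.file('/tmp/app-1.0.jar').hexdigest
raise 'Checksum mismatch!' unless computed == artifactory_sha256
This way the expected checksum always tracks whatever is currently deployed in Artifactory, with nothing hardcoded in the recipe.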

Read rule argument value from config file

Consider the following Bazel rule invocation in a WORKSPACE file:
container_pull(
    name = "release-base",
    registry = "mydockernet:9443",
    repository = "release-base",
    digest = "sha256:...",
    tag = "1.8.2",
)
The problem is that the tag value 1.8.2 lives in a YAML config file, and we want to respect the DRY principle (read the value from the config file instead of duplicating it in Bazel files). Is there a way to handle this?
It's not YAML, but you can define values in another .bzl file and load them into your WORKSPACE:
load("//:common.bzl", "MYVERSION")

container_pull(
    name = "release-base",
    registry = "mydockernet:9443",
    repository = "release-base",
    digest = "sha256:...",
    tag = MYVERSION,
)
then in common.bzl (the value must be a string, since tag expects one):
MYVERSION = "1.8.2"

How to get a bitmap image in ruby?

The Google Vision API requires a bitmap sent as an argument. I am trying to convert a PNG from a URL into a bitmap to pass to the Google API:
require "google/cloud/vision"
PROJECT_ID = Rails.application.secrets["project_id"]
KEY_FILE = "#{Rails.root}/#{Rails.application.secrets["key_file"]}"
google_vision = Google::Cloud::Vision.new project: PROJECT_ID, keyfile: KEY_FILE
img = open("https://www.google.com/images/branding/googlelogo/2x/googlelogo_color_272x92dp.png").read
image = google_vision.image img
ArgumentError: string contains null byte
This is the relevant source code from the gem:
def self.from_source source, vision = nil
  if source.respond_to?(:read) && source.respond_to?(:rewind)
    return from_io(source, vision)
  end
  # Convert Storage::File objects to the URL
  source = source.to_gs_url if source.respond_to? :to_gs_url
  # Everything should be a string from now on
  source = String source
  # Create an Image from a HTTP/HTTPS URL or Google Storage URL.
  return from_url(source, vision) if url? source
  # Create an image from a file on the filesystem
  if File.file? source
    unless File.readable? source
      fail ArgumentError, "Cannot read #{source}"
    end
    return from_io(File.open(source, "rb"), vision)
  end
  fail ArgumentError, "Unable to convert #{source} to an Image"
end
https://github.com/GoogleCloudPlatform/google-cloud-ruby
Why is it telling me string contains null byte? How can I get a bitmap in ruby?
According to the documentation (which, to be fair, is not exactly easy to find without digging into the source code), Google::Cloud::Vision#image doesn't want the raw image bytes, it wants a path or URL of some sort:
Use Vision::Project#image to create images for the Cloud Vision service.
You can provide a file path:
[...]
Or any publicly-accessible image HTTP/HTTPS URL:
[...]
Or, you can initialize the image with a Google Cloud Storage URI:
So you'd want to say something like:
image = google_vision.image "https://www.google.com/images/branding/googlelogo/2x/googlelogo_color_272x92dp.png"
instead of reading the image data yourself.
Instead of using write, use IO.copy_stream: it streams the download straight to the file system instead of reading the whole file into memory and then writing it:
require 'open-uri'
require 'tempfile'
uri = URI("https://www.google.com/images/branding/googlelogo/2x/googlelogo_color_272x92dp.png")
tmp_img = Tempfile.new(uri.path.split('/').last)
IO.copy_stream(uri.open, tmp_img)
Note that you don't need to set the 'r:BINARY' flag, as the bytes are streamed directly without any text decoding.
You can then use the file like this:
require "google/cloud/vision"
# Use fetch as it raises an error if the key is not present
PROJECT_ID = Rails.application.secrets.fetch("project_id")
# Rails.root is a Pathname object so use `.join` to construct paths
KEY_FILE = Rails.root.join(Rails.application.secrets.fetch("key_file"))
google_vision = Google::Cloud::Vision.new(
  project: PROJECT_ID,
  keyfile: KEY_FILE
)
image = google_vision.image(File.absolute_path(tmp_img))
When you are done, clean up by calling tmp_img.unlink.
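A small sketch of the cleanup (reusing the tmp_img and google_vision objects from above):
begin
  image = google_vision.image(File.absolute_path(tmp_img))
  # ... use the image ...
ensure
  # Close and delete the temp file even if something above raises
  tmp_img.close
  tmp_img.unlink
end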
Remember to read things in binary format:
open("https://www.google.com/..._272x92dp.png",'r:BINARY').read
If you forget this, Ruby may treat the data as UTF-8 text, which causes all sorts of problems.
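A quick illustration of the difference (the file name is hypothetical):
data = File.read("logo.png")     # default external encoding, usually UTF-8
data.valid_encoding?             # => false for most binary data

data = File.binread("logo.png")  # binary mode
data.encoding                    # => #<Encoding:ASCII-8BIT>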

MiniMagick can't write decoded Base64 image

I'm having trouble calling image.write with MiniMagick on a decoded base64 image in Rails. Every line seems to be working properly except for image.write. The code below is in my Rails API ImageController, which my React frontend is hitting through a POST request with the encoded image.
def create
  uploaded_io = params["image_io"]["base64"] # base64 string + metadata
  metadata = uploaded_io.split(',/')[0] + "," # "data:image/jpeg;base64,"
  filetype = metadata.split("/")[1].split("base64")[0][0...-1] # "jpeg"
  base64_string = uploaded_io[metadata.size..-1] # base64 string w/o metadata
  blob = Base64.decode64(base64_string)
  image = MiniMagick::Image.read(blob)
  image.write `#{Time.new.to_i}.#{filetype}`
  storage = Google::Cloud::Storage.new(
    project_id: ENV['GOOGLE_CLOUD_PROJECT'],
    credentials: JSON.parse(File.read('config/google_cloud_credentials.json'))
  )
  bucket = storage.bucket "auto-stock-189103.appspot.com"
  bucket.create_file image, `test/#{Time.new.to_i}.jpg`
end
I added comments to the first few lines in the code describing their value. base64_string was too long to comment, so here is its value:
"/9j/4QAYRXhpZgAASUkqAAgAAAAAAAAAAAAAAP/sABFEdWNreQABAAQAAABkAAD/4QMtaHR0cDovL25zLmFkb2JlLmNvbS94YXAvMS4wLwA8P3hwYWNrZXQgYmVnaW49Iu+7vyIgaWQ9Ilc1TTBNcENlaGlIenJlU3pOVGN6a2M5ZCI/PiA8eDp4bXBtZXRhIHhtbG5zOng9ImFkb2JlOm5zOm1ldGEvIiB4OnhtcHRrPSJBZG9iZSBYTVAgQ29yZSA1LjMtYzAxMSA2Ni4xNDU2NjEsIDIwMTIvMDIvMDYtMTQ6NTY6MjcgICAgICAgICI+IDxyZGY6UkRGIHhtbG5zOnJkZj0iaHR0cDovL3d3dy53My5vcmcvMTk5OS8wMi8yMi1yZGYtc3ludGF4LW5zIyI+IDxyZGY6RGVzY3JpcHRpb24gcmRmOmFib3V0PSIiIHhtbG5zOnhtcD0iaHR0cDovL25zLmFkb2JlLmNvbS94YXAvMS4wLyIgeG1sbnM6eG1wTU09Imh0dHA6Ly9ucy5hZG9iZS5jb20veGFwLzEuMC9tbS8iIHhtbG5zOnN0UmVmPSJodHRwOi8vbnMuYWRvYmUuY29tL3hhcC8xLjAvc1R5cGUvUmVzb3VyY2VSZWYjIiB4bXA6Q3JlYXRvclRvb2w9IkFkb2JlIFBob3Rvc2hvcCBDUzYgKE1hY2ludG9zaCkiIHhtcE1NOkluc3RhbmNlSUQ9InhtcC5paWQ6MTNDQjQyRjlEQTAxMTFFN0E4N0VBNzdGODEwREFGMTYiIHhtcE1NOkRvY3VtZW50SUQ9InhtcC5kaWQ6MTNDQjQyRkFEQTAxMTFFN0E4N0VBNzdGODEwREFGMTYiPiA8eG1wTU06RGVyaXZlZEZyb20gc3RSZWY6aW5zdGFuY2VJRD0ieG1wLmlpZDoxM0NCNDJGN0RBMDExMUU3QTg3RUE3N0Y4MTBEQUYxNiIgc3RSZWY6ZG9jdW1lbnRJRD0ieG1wLmRpZDoxM0NCNDJGOERBMDExMUU3QTg3RUE3N0Y4MTBEQUYxNiIvPiA8L3JkZjpEZXNjcmlwdGlvbj4gPC9yZGY6UkRGPiA8L3g6eG1wbWV0YT4gPD94cGFja2V0IGVuZD0iciI/Pv/uAA5BZG9iZQBkwAAAAAH/2wCEAAEBAQEBAQEBAQEBAQEBAQEBAQEBAQEBAQEBAQEBAQEBAQEBAQEBAQEBAQECAgICAgICAgICAgMDAwMDAwMDAwMBAQEBAQEBAgEBAgICAQICAwMDAwMDAwMDAwMDAwMDAwMDAwMDAwMDAwMDAwMDAwMDAwMDAwMDAwMDAwMDAwMDA//AABEIAAoACgMBEQACEQEDEQH/xABNAAEBAAAAAAAAAAAAAAAAAAAACQEBAQEAAAAAAAAAAAAAAAAAAAkKEAEAAAAAAAAAAAAAAAAAAAAAEQEAAAAAAAAAAAAAAAAAAAAA/9oADAMBAAIRAxEAPwCMKU7f4AAA/9k="
Decoding this string renders the correct image (a red square), but when I run the image.write line it returns the following error:
bin/rails: No such file or directory - 1513397345.jpeg
*** NoMethodError Exception: undefined method `write' for nil:NilClass
nil
Here's the return value of image = MiniMagick::Image.read(blob) for reference:
#<MiniMagick::Image:0x00007f9ba76ba1f8 #path="/var/folders/pf/xhvv11092_j08hw47q6rt9z80000gn/T/mini_magick20171215-26353-l2lcyu", #tempfile=#<Tempfile:/var/folders/pf/xhvv11092_j08hw47q6rt9z80000gn/T/mini_magick20171215-26353-l2lcyu (closed)>, #info=#<MiniMagick::Image::Info:0x00007f9ba76ba1d0 #path="/var/folders/pf/xhvv11092_j08hw47q6rt9z80000gn/T/mini_magick20171215-26353-l2lcyu", #info={}>>
Ultimately, my goal is to upload the image to Google Cloud so please let me know if there's a better way to go about this. I'm following this answer from a similar question, which is why I have it structured this way.
I think your problem is that you're using backticks where you mean to use double quotes:
image.write `#{Time.new.to_i}.#{filetype}`
# ----------^----------------------------^
Backticks will attempt to execute their contents in the shell. You don't have an executable file named 1513397345.jpeg (which is what #{Time.new.to_i}.#{filetype} evaluates to) so you get an error.
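A quick illustration of the difference:
`date`   # runs the shell command date and returns its output
"date"   # just the four-character string "date"

# With backticks, Ruby tried to execute 1513397345.jpeg as a shell
# command, which is why the shell reported "No such file or directory".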
You just want to use plain old double quotes to get the string interpolation you're expecting:
image.write "#{Time.new.to_i}.#{filetype}"
and again a few lines below that:
bucket.create_file image, "test/#{Time.new.to_i}.jpg"
Furthermore, you probably want to store that filename in a variable, because Time.new.to_i isn't guaranteed to be the same in both invocations:
name = "#{Time.new.to_i}.#{filetype}"
image.write name
# ...
bucket.create_file image, name
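One more caveat (not from the original answer): google-cloud-storage's bucket.create_file expects a file path or an IO, not a MiniMagick::Image object, so you most likely want to upload the file you just wrote. A hedged sketch:
name = "#{Time.new.to_i}.#{filetype}"
image.write(name)                         # write the decoded image to disk
bucket.create_file(name, "test/#{name}")  # upload by path
File.delete(name)                         # clean up the local copy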

How to write to tmp file or stream an image object up to s3 in ruby on rails

The code below resizes my image. But I am not sure how to write it out to a temp file or blob so I can upload it to s3.
origImage = MiniMagick::Image.open(myPhoto.tempfile.path)
origImage.resize "200x200"
thumbKey = "tiny-#{key}"
obj = bucket.objects[thumbKey].write(:file => origImage.write("tiny.jpg"))
I can upload the original file to s3 just fine with the following:
obj = bucket.objects[key]
obj.write(:file => myPhoto.tempfile)
I think I want to create a temp file, read the image file into it and upload that:
thumbFile = Tempfile.new('temp')
thumbFile.write(origImage.read)
obj = bucket.objects[thumbKey].write(:file => thumbFile)
but the origImage class doesn't have a read command.
UPDATE: I was reading the source code and found this about the write command:
# Writes the temporary file out to either a file location (by passing in a String) or by
# passing in a Stream that you can #write(chunk) to repeatedly
#
# @param output_to [IOStream, String] Some kind of stream object that needs to be read or a file path as a String
# @return [IOStream, Boolean] If you pass in a file location [String] then you get a success boolean. If it's a stream, you get it back.
# Writes the temporary image that we are using for processing to the output path
And the s3 api docs say you can stream the content using a code block like:
obj.write do |buffer, bytes|
  # writing fewer than the requested number of bytes to the buffer
  # will cause write to stop yielding to the block
end
How do I change my code so
origImage.write(s3stream here)
http://docs.aws.amazon.com/AWSRubySDK/latest/AWS/S3/S3Object.html
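One way to wire the two APIs together (a sketch, not tested; aws-sdk v1's S3Object#write takes a :content_length option when used with the block form):
# Write the resized image to a tempfile, then stream that file to S3
# in chunks, instead of loading the whole thing into memory at once.
tmp = Tempfile.new(['thumb', '.jpg'])
origImage.write(tmp.path)

File.open(tmp.path, 'rb') do |file|
  bucket.objects[thumbKey].write(:content_length => file.size) do |buffer, bytes|
    buffer.write(file.read(bytes))
  end
end
tmp.unlink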
UPDATE 2
This code successfully uploads the thumbnail to s3, but I would still love to know how to stream it up, which I think would be much more efficient.
# resize image and upload a thumbnail
smallImage = MiniMagick::Image.open(myPhoto.tempfile.path)
smallImage.resize "200x200"
thumbKey = "tiny-#{key}"
newFile = Tempfile.new("tempimage")
smallImage.write(newFile.path)
obj = bucket.objects[thumbKey]
obj.write(:file => newFile)
Have you tried smallImage.to_blob? The code below is copied from https://github.com/probablycorey/mini_magick/blob/master/lib/mini_magick.rb:
# Gives you raw image data back
# @return [String] binary string
def to_blob
  f = File.new @path
  f.binmode
  f.read
ensure
  f.close if f
end
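That means the thumbnail can go up without an intermediate tempfile; a sketch using the aws-sdk v1 API from the question:
smallImage = MiniMagick::Image.open(myPhoto.tempfile.path)
smallImage.resize "200x200"

# to_blob returns the processed image as a binary string, which
# S3Object#write accepts directly as the object data.
bucket.objects["tiny-#{key}"].write(smallImage.to_blob)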
Have you looked into the Paperclip gem? It offers direct integration with S3 and works great.

Jenkins Continuous Integration with Amazon S3 - Everything is uploading to the root?

I'm running Jenkins and I have it successfully working with my GitHub account, but I can't get it working correctly with Amazon S3.
I installed the S3 plugin, and when I run a build it successfully uploads to the S3 bucket I specify, but all of the uploaded files end up in the root of the bucket. I have a bunch of folders (such as /css and /js), but all of the files in those folders from GitHub end up in the root of my S3 bucket.
Is it possible to get the S3 plugin to upload and retain the folder structure?
It doesn't look like this is possible. Instead, I'm using s3cmd to do this. You must first install it on your server, and then in one of the bash scripts within a Jenkins job you can use:
s3cmd sync -r -P $WORKSPACE/ s3://YOUR_BUCKET_NAME
That will copy all of the files to your S3 account maintaining the folder structure. The -P keeps read permissions for everyone (needed if you're using your bucket as a web server). This is a great solution using the sync feature, because it compares all your local files against the S3 bucket and only copies files that have changed (by comparing file sizes and checksums).
I have never worked with the S3 plugin for Jenkins (though now that I know it exists, I might give it a try), but looking at the code, it seems you can only do what you want using a workaround.
Here's what the actual plugin code does (taken from GitHub); I removed the parts of the code that are not relevant, for readability:
class hudson.plugins.s3.S3Profile, method upload:
final Destination dest = new Destination(bucketName, filePath.getName());
getClient().putObject(dest.bucketName, dest.objectName, filePath.read(), metadata);
Now if you take a look into hudson.FilePath.getName()'s JavaDoc:
Gets just the file name portion without directories.
Now, take a look into the hudson.plugins.s3.Destination's constructor:
public Destination(final String userBucketName, final String fileName) {
    if (userBucketName == null || fileName == null)
        throw new IllegalArgumentException("Not defined for null parameters: " + userBucketName + "," + fileName);
    final String[] bucketNameArray = userBucketName.split("/", 2);
    bucketName = bucketNameArray[0];
    if (bucketNameArray.length > 1) {
        objectName = bucketNameArray[1] + "/" + fileName;
    } else {
        objectName = fileName;
    }
}
The Destination class JavaDoc says:
The convention implemented here is that a / in a bucket name is used to construct a structure in the object name. That is, a put of file.txt to bucket name of "mybucket/v1" will cause the object "v1/file.txt" to be created in the mybucket.
Conclusion: the filePath.getName() call strips off any prefix from the file (S3 does not have directories, only key prefixes; see this and this thread for more info). If you really need to put your files into a "folder" (i.e. give them a prefix containing a slash), I suggest appending that prefix to the end of your bucket name, as described in the Destination class JavaDoc.
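For example (hypothetical names): configuring the destination bucket as mybucket/assets/css makes the plugin upload style.css as the object assets/css/style.css inside mybucket.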
Yes, this is possible. It looks like you'll need a separate instance of the S3 plugin for each folder destination, however.
"Source" is the file you're uploading.
"Destination bucket" is where you place your path.
Using Jenkins 1.532.2 and S3 Publisher Plug-In 0.5, the job configuration screen rejects additional S3 publish entries. There would also be a significant maintenance benefit for us if the plugin recreated the workspace directory structure, as we'll have many directories to create.
Set up your Git plugin.
Set up your Bash script.
Everything in the folder you mark with "*" will go to the bucket.
