Alchemy CMS - Cloudinary - Image crop - ruby-on-rails

In Alchemy CMS, how can I use Cloudinary to get an image in the size that I want?
I need this for:
- a specific image (one image could be 400x300 and another could be 200x200)
- all the images of the same element

How can I do it?
In the element definition, in elements.yml, I can use the settings property:
- name: content_block
  contents:
    - name: title_text
      type: EssenceText
      default: :title_text_sample
    - name: picture
      type: EssencePicture
      settings:
        size: 400x300
        crop: true
    - name: multi_line_text
      type: EssenceRichtext
but this is the same for all contents, and I think that this way the resizing is done by the Alchemy server and not by Cloudinary.

Is there a way to edit the URLs manually? If so you can change the size of the images on the fly. For example, http://res.cloudinary.com/demo/image/upload/w_200,h_200/sample
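If the Rails side uses the cloudinary gem, the same kind of transformation URL can also be built in Ruby rather than edited by hand. A minimal sketch, assuming the gem is configured for your account ('sample' is just a placeholder public ID):

require 'cloudinary'

# Build a delivery URL with on-the-fly resizing done by Cloudinary,
# not by your Rails server. Add crop: :fill (or another crop mode) if needed.
url = Cloudinary::Utils.cloudinary_url('sample', width: 200, height: 200)
# => something like ".../image/upload/h_200,w_200/sample"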

Related

Rails API create QR code and store image in active_storage

I am working on a Rails 6 API where I need to generate a QR code image and save that image to S3 using active_storage.
I am using the rqrcode gem for this, which gives me an SVG string. How can I convert the SVG string into an image and store that image on S3 using active_storage?
Following is my code
Controller
qrcode = RQRCode::QRCode.new("#{@order.id}")
svg = qrcode.as_svg(
  offset: 0,
  color: '000',
  shape_rendering: 'crispEdges',
  module_size: 6,
  standalone: true
)
I should also generate the png image using
png = qrcode.as_png(
  bit_depth: 1,
  border_modules: 4,
  color_mode: ChunkyPNG::COLOR_GRAYSCALE,
  color: 'black',
  file: nil,
  fill: 'white',
  module_px_size: 6,
  resize_exactly_to: false,
  resize_gte_to: false,
  size: 120
)
But while storing I got the following error
ArgumentError (Could not find or build blob: expected attachable, got <ChunkyPNG::Image 120x120 [
Looking for a way to generate the image from the svg string or png file.
Thanks.
note: frontend is react-native
I was able to achieve this by following the answer here.
In short, after you generate the png, you can attach it to your model like so:
@model.qr_code.attach(io: StringIO.new(png.to_s), filename: "filename.png")
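Putting the pieces together, a minimal sketch of the whole flow, assuming the model declares a hypothetical has_one_attached :qr_code and Active Storage is configured with the S3 (amazon) service:

require 'rqrcode'
require 'stringio'

qrcode = RQRCode::QRCode.new(@order.id.to_s)
png = qrcode.as_png(size: 240)          # returns a ChunkyPNG::Image

# Attach the raw PNG bytes (not the ChunkyPNG object) to avoid the
# "expected attachable" ArgumentError.
@order.qr_code.attach(
  io: StringIO.new(png.to_s),
  filename: "order-#{@order.id}.png",
  content_type: 'image/png'
)

With the amazon service configured, the attached blob is uploaded to S3 automatically.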

Rails application - How to get TinyMCE to save a pasted image locally

I am running into a unique edge case with my TinyMCE user experience.
I want to be able to
COPY IMAGE (Right click, copy image on any image on the internet)
PASTE IMAGE (CTRL + V) into TinyMCE editor
and have it save a local copy of this image and serve that.
The problem is that a user can paste an image served from an S3 bucket whose URL is only authenticated for a certain amount of time, so days later the image will not show.
I have looked at the TinyMCE - File Image Upload documentation to no avail.
I have also looked into the TinyMCE Paste Plugin, the TinyMCE Local Upload Demo, TinyMCE Docs - Upload Images, and the dated gem TinyMCE-Rails-ImageUpload.
Ultimately, I have a feeling that a custom handler for Paste Preprocess will need to be used.
My tinymce.yml configuration follows:
menubar: false
statusbar: false
branding: false
toolbar:
- styleselect | bold italic underline strikethrough | indent outdent | blockquote | image | link | codesample | bullist numlist | table | code | undo redo
plugins:
- link
- codesample
- image
- lists
- code
- table
images_upload_url: "/tinymce_assets"
automatic_uploads: true
relative_urls: false
remove_script_host: false
convert_urls: true
table_responsive_width: true
I feel like this type of problem should be common and there should be a simple solution that I am not seeing. However, if that is not at all possible, would the solution be to create a custom JS function that intercepts the paste call, checks whether the image is coming from an external URL, and then decides to create a local copy of the image and serve that URL?
Thank you, and any help would be appreciated.
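For what it's worth, here is a minimal sketch of the server side of images_upload_url: "/tinymce_assets", assuming Active Storage on Rails 6.1+; the controller name and route are illustrative, not part of the original setup:

# Hypothetical controller behind POST /tinymce_assets.
class TinymceAssetsController < ApplicationController
  def create
    uploaded = params[:file]                # TinyMCE posts the image as "file"
    blob = ActiveStorage::Blob.create_and_upload!(
      io: uploaded,
      filename: uploaded.original_filename,
      content_type: uploaded.content_type
    )
    # TinyMCE expects a JSON body with a "location" key pointing at the stored copy.
    render json: { location: rails_blob_url(blob) }
  end
end

This only covers images TinyMCE treats as local blobs; an image pasted as a remote <img> URL would likely still need to be intercepted (for example in paste_preprocess) and fetched server-side before it reaches this endpoint.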

Rails: Download images from S3, resize and upload back to S3

In my Rails 4 application I have a large number of images stored on S3 using Paperclip. An image URL looks like http://s3.amazonaws.com/bucketname/files/images/000/000/012/small/image.jpg?1366900621.
Given the following attachment class:
- How can I download images from S3 and store them locally?
- How can I resize a locally stored image?
- How can I upload the resized image to another S3 bucket without Paperclip (at a path like s3/newbucket/images/{:id}/{imagesize.jpg})?
Attachment class:
class Image < ActiveRecord::Base
  has_attached_file :file, styles: { thumbnail: '320x320', icon: '64x64', original: '1080x1080' }
  validates_attachment :file, presence: true, content_type: { content_type: /\Aimage\/.*\Z/ }
end
The basic advice would be not to resize images on the fly, as this may take a while and your users may experience huge response times during the operation. If you have a predefined set of styles, it would be wise to generate them in advance and just return them when required.
Well, here is what you could do if there is no other option.
require 'net/http'

def download_from_s3(url_to_s3, filename)
  uri = URI(url_to_s3)
  response = Net::HTTP.get_response(uri)
  File.open(filename, 'wb') { |f| f.write(response.body) }
end
Here we basically downloaded an image located at a given URL and saved it as a file locally. Resizing may be done in a couple of different ways (it depends on whether you want to serve the downloaded file as a Paperclip attachment).
The most common approach here would be to use ImageMagick and its convert command-line tool. Here is an example of resizing an image to a width of 30:
convert -strip -geometry 30 -quality 100 -sharpen 1 '/photos/aws_images/000/000/015/original/index.jpg' '/photos/aws_images/000/000/015/original/S_30_WIDTH__q_100__index.jpg' 2>&1 > /dev/null
You can find documentation for convert here; it's suitable not only for image resizing, but also for converting between image formats, blurring, cropping and much more. You could also be interested in the Attachment-on-the-Fly gem, which seems a little bit outdated, but has some insights into how to resize images using convert.
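From Ruby, the convert command shown above can be wrapped in a small helper; this is just a sketch (the method name is illustrative, and ImageMagick must be installed and on the PATH):

require 'shellwords'

# Resize a local file to the given width using ImageMagick's convert,
# with the same flags as the command above.
def resize_image(source_path, target_path, width)
  cmd = ['convert', '-strip', '-geometry', width.to_s, '-quality', '100',
         '-sharpen', '1', source_path, target_path].shelljoin
  system(cmd) || raise("convert failed for #{source_path}")
end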
The last step is to upload the resized image to some S3 bucket. I assume that you've already got the aws-sdk gem and an AWS::S3 instance (the docs can be found here).
def upload_to_s3(bucket_name, key, file)
  s3 = AWS::S3.new(:access_key_id => 'YOUR_ACCESS_KEY_ID', :secret_access_key => 'YOUR_SECRET_ACCESS_KEY')
  bucket = s3.buckets[bucket_name]
  obj = bucket.objects[key]
  obj.write(File.open(file, 'rb'), :acl => :public_read)
end
So, here you obtain an AWS::S3 object to communicate with the S3 server, provide your bucket name and desired key, and upload the image with an option that makes it visible to everybody on the web. Note that there are lots of additional upload options (including file encryption, access permissions, metadata and much more).
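Tying the three steps together could look roughly like this (the resize_image helper is the illustrative wrapper sketched above; paths, bucket name and key are placeholders):

# Illustrative glue code; adjust paths, bucket and key to your setup.
image = Image.find(12)
download_from_s3(image.file.url(:original), 'tmp/original.jpg')
resize_image('tmp/original.jpg', 'tmp/resized.jpg', 320)
upload_to_s3('newbucket', "images/#{image.id}/320_resized.jpg", 'tmp/resized.jpg')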

python-fu select copy paste

I'm a newbie in python-fu (my second day), so my question may seem naive: I'd like to select a rectangular portion of "r400r.png", rotate it 90 degrees, and save my selection in "r400ra.png".
So far, I have tried something along these lines:
for fv in range(400, 401):
    fn = 'r%sr.png' % fv
    img = pdb.gimp_file_load('/path/' + fn, fn)
    drw = pdb.gimp_image_get_active_layer(img)
    img1 = pdb.gimp_image_new(1024, 1568, 0)
    lyr = pdb.gimp_layer_new(img1, 1024, 1568, 0, 'ly1', 0, 0)
    pdb.gimp_rect_select(img, 10, 200, 1422, 1024, 2, 0, 0)
    drw = pdb.gimp_rotate(drw, 0, 1.570796327)
    pdb.script_fu_selection_to_image(img1, drw)
    f0 = fn[:5] + 'a' + fn[5:]
    pdb.gimp_file_save(drw, '/path/' + f0, f0)
The "lyr" layer is there because my understanding is that it is a must, although it's not clear to me why. The "for" loop eventually should bulk process a bunch of files; for testing it is restricted to one file only. I get an error where I try o execute "script_fu_selection_to_image".
Can you point me, please, in the right direction?
Thanks,
SxN
The PDB calls to do that are better in this order:
# import your image:
img=pdb.gimp_file_load('/path/'+fn,fn)
#make the selection
pdb.gimp_rect_select(img,10,200,1422,1024,2,0,0)
# copy
pdb.gimp_edit_copy(img.layers[0])
# (no need to "get_active_layer" - if
# your image is a flat PNG or JPG, it only has one layer,
# which is accessible as img.layers[0])
# create a new image from the copied area:
new_img = pdb.gimp_edit_paste_as_new()
#rotate the newly created image:
pdb.gimp_image_rotate(new_img, ...)
#export the resulting image:
pdb.gimp_file_save(new_img, ...)
#delete the loaded image and the created image:
# (as the objects being destroyed on the Python side
# do not erase them from the GIMP app, where they
# stay consuming memory)
pdb.gimp_image_delete(new_img)
pdb.gimp_image_delete(img)

How to use MapFish print module for GeoServer-GeoWebCache layer?

I am in the process of developing a webGIS application using GeoServer (2.1.1), GeoWebCache (1.2.6), OpenLayers (2.11) and GeoExt. All my layers are served as WMS through GeoWebCache. A sample definition for any layer is as follows:
var My_Layer = new OpenLayers.Layer.WMS("My_Layer",
    "http://my-ip + my-port/geoserver/gwc/service/wms",
    { layers: 'layer-name', transparent: "true", format: "image/png",
      tileSize: new OpenLayers.Size(256, 256),
      tilesOrigin: map.maxExtent.left + ',' + map.maxExtent.bottom },
    { isBaseLayer: false, visibility: false });
Everything was working fine up to this point. But when I moved a bit further ahead and tried implementing the MapFish printing module, the output PDF is blank! I am getting the following error message:
java.io.IOException: Error (status=400) while reading the image
from........
I have searched a lot. According to this, one option is to access my layers as a TMS layer. But I want a GeoServer WMS map layer, not a static image layer.
Another option, found here, is using OpenLayers.Control.ExportMap().
But that restricts the use of different scales, since my data extent is too big. If, at a specific scale, the user wants to print the entire map area (maybe on A0 paper), which is not fully visible in the OpenLayers div, this cannot serve the purpose.
So the question is: how can I accomplish this without using a TMS or GeoWebCache layer?
Edit #1:
Sorry I am late, as I was out of office. Following is my config.yaml file. I feel there is no error in it; it can print my WMS layers coming directly from GeoServer.
dpis: [75, 150, 300]
outputFormats:
  - pdf
scales:
  - 10000
  - 25000
  - 50000
  - 100000
hosts:
  - !localMatch
    dummy: true
  - !ipMatch
    ip: www.camptocamp.org
  - !dnsMatch
    host: labs.metacarta.com
    port: 80
  - !dnsMatch
    host: terraservice.net
    port: 80
  - !dnsMatch
    host: sigma.openplans.org
  - !dnsMatch
    host: demo.mapfish.org
layouts:
  A4 portrait:
    metaData:
      title: 'Arunava TopoMap PDF'
      author: 'Arunava print module'
      subject: 'Map layout'
      keywords: 'map,print'
      creator: 'Arunava'
    mainPage:
      pageSize: A4
      rotation: true
      items:
        - !text
          text: '${mapTitle} ${now MM.dd.yyyy}'
          fontSize: 20
          spacingAfter: 30
        - !map
          spacingAfter: 30
          width: 440
          height: 600
        - !scalebar
          type: bar
          maxSize: 100
          barBgColor: white
          fontSize: 8
          align: right
        - !text
          font: Helvetica
          fontSize: 9
          align: right
          text: '1:${scale}'
      footer: *commonFooter
  A2 portrait:
    metaData:
      title: 'Arunava TopoMap PDF'
      author: 'Arunava print module'
      subject: 'Map layout'
      keywords: 'map,print'
      creator: 'Arunava'
    mainPage:
      pageSize: A2
      rotation: true
      items:
        - !text
          text: '${mapTitle} ${now MM.dd.yyyy}'
          fontSize: 20
          spacingAfter: 30
        - !map
          spacingAfter: 30
          width: 880
          height: 1200
        - !scalebar
          type: bar
          maxSize: 100
          barBgColor: white
          fontSize: 8
          align: right
        - !text
          font: Helvetica
          fontSize: 9
          align: right
          text: '1:${scale}'
      footer: *commonFooter
Without further debugging, the 400 error is too vague to be of much help. From experience, I can tell you I've seen an issue before where the GeoWebCache server doesn't like serving the WMS layer you are requesting. MapFish tries to do weird things with different tile sizes (and you eventually get a 10% threshold error). Does your log show the image it was requesting? Can you go to that tile in your browser to see what the server actually says? This is how I eventually exposed my issues.
For easier debugging, I've also created a separate MapFish log to make it easier to find my MapFish issues. Use the GeoServer admin screen to figure out which logging profile you are using, then in that log4j.properties file, add a separate file appender for MapFish and direct all org.mapfish activity to it. This makes debugging much easier.
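For example, in a log4j 1.x properties file the extra appender could look roughly like this (the appender name and log path are placeholders, not taken from any particular GeoServer profile):

# Hypothetical file appender dedicated to MapFish output.
log4j.appender.mapfish=org.apache.log4j.FileAppender
log4j.appender.mapfish.File=logs/mapfish.log
log4j.appender.mapfish.layout=org.apache.log4j.PatternLayout
log4j.appender.mapfish.layout.ConversionPattern=%d %p [%c] - %m%n
# Route all org.mapfish activity to it.
log4j.logger.org.mapfish=DEBUG, mapfish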
And FINALLY, my own personal crusade: in your config.yaml, don't use outputFormats: [pdf];
instead, use formats: ['pdf'].
Even though all the docs describe outputFormat (and that's what is required in the client "spec"), the actual server config uses the 'formats' variable. I've submitted a patch to make this clearer in the docs, but until then, let this note be a guide. If you want to get image output, this is key.
