Where to host a small GPU machine learning API cheaply? [closed]

I am trying to build some new web APIs that would strongly benefit from GPU processing for completed/trained machine learning models, and I would like to make this a publicly available endpoint. However, I don't know of a realistic place to host a hobby machine learning API. If there is a better approach (i.e. only using the GPU while processing an API request, which would be infrequent), I'm open to that too.
AWS Pricing https://aws.amazon.com/ec2/pricing/on-demand/
The cheapest I can see is $0.50/hr, which is roughly $360 a month.
Google Cloud Pricing https://cloud.google.com/compute/gpus-pricing
Cheapest I can see is $180 monthly
Vast AI pricing https://vast.ai/console/create/
The cheapest I've found is $0.077/hr, which is about $56 a month.
And I found this quora post https://www.quora.com/Which-cloud-hosting-provides-GPU-servers-at-the-lowest-cost which pointed me to https://www.paperspace.com/pricing which is an $8/mo solution but I'm not sure if this is actually server hosting.
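
For what it's worth, the "only use the GPU while processing a request" idea is workable: keep the model on the CPU between requests and move it to the GPU per call. A minimal sketch, assuming a trained PyTorch model served with Flask (the file name, route, and input format are hypothetical):

# Hypothetical sketch: a tiny Flask endpoint that parks a trained PyTorch
# model on the CPU and only moves it to the GPU while serving a request.
import torch
from flask import Flask, jsonify, request

app = Flask(__name__)
model = torch.load("model.pt", map_location="cpu")  # trained model, kept on CPU
model.eval()

@app.route("/predict", methods=["POST"])
def predict():
    device = "cuda" if torch.cuda.is_available() else "cpu"
    x = torch.tensor(request.get_json()["inputs"])
    with torch.no_grad():
        y = model.to(device)(x.to(device))
    model.to("cpu")  # free the GPU between infrequent requests
    if device == "cuda":
        torch.cuda.empty_cache()
    return jsonify(y.cpu().tolist())

Note that on a rented GPU instance you pay for the hours the machine is up regardless of utilization, so the real savings come from pairing something like this with spot/preemptible pricing or a provider that bills per second of actual use.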

Check VPS Smart. I think it's one of the best choices for small projects; pricing starts at $45.00.

There are several possibilities:
Google Colab: free, and includes a Jupyter notebook environment with a nice user interface. It's integrated with Google Drive and GitHub, and you can collaborate on it. Both GPUs and TPUs are available.
Google Cloud: more powerful and customizable than Colab. GPUs and TPUs are available.
Preemptible instances on Google Cloud: preemptible VMs offer the same machine types and options as regular compute instances and last up to 24 hours. They are fine for most learning tasks and can reduce your Compute Engine costs by up to 80% compared with regular instances.
AWS EC2: even if the configuration is not the easiest, spot instance pricing offers a way to save up to 90% compared with on-demand prices.
Update:
Vast.ai: a GPU sharing economy with a marketplace.
Paperspace has a free offering for hobbyists too, with 5 GB of persistent storage and a 6-hour running time. Other plans are available ($8/month for 200 GB of persistent storage). Hourly compute pricing is listed there as well ($0.50-$2.90/hour for most GPUs).

If you are planning to host your ML project's API, then I'd recommend going with a platform that has Kubernetes (K8s) serving capability, so that it can automatically scale up/down for you on demand as load hits your API.
Otherwise you'd end up scrambling for resources online. I think OVHcloud and Alibaba Cloud provide managed Kubernetes offerings and could potentially be less costly for your requirement.
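For example, on any managed Kubernetes you could wrap the model server in a Deployment and let a HorizontalPodAutoscaler add or remove replicas with load; something like kubectl autoscale deployment ml-api --cpu-percent=80 --min=1 --max=4 (the deployment name is hypothetical) gives you basic CPU-based autoscaling with no extra tooling.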
And if your sole purpose is to train ML models then you can explore services such as:
Q Blocks - decentralized GPU computing for ML; they claim up to 80% lower cost
Paperspace - a GPU-optimized platform for ML
Google Colab - free, but there are a lot of limitations

You can check Jarvislabs.ai; we have GPUs starting at $0.49/hour for an RTX 5000. Since you may have only occasional load, you could also automate starting and stopping instances through a simple API and reduce your cost further.
Disclaimer: I am the founder of the startup.

Related

Smart Contracts (Hyperledger vs Eth) [closed]

A few quick questions on smart contracts:
How does a Hyperledger smart contract (chaincode) stack up against Ethereum's?
Hyperledger - Golang: expressibility? performance? security?
Ethereum - Solidity: expressibility? performance? security?
How do you secure smart contracts to ensure the code is not tampered with?
How can both parties trust and trace the results of the smart contract? Is there any audit/traceability capability?
Also, in a decentralised ideal world, whose legal liability is it if a bug and/or buffer overflow in the smart contract results in losses?
Are there any performance benchmarks? E.g., 2,000 complicated smart contracts executed within a span of 10 seconds?
How does one enable/restrict access to these smart contracts? I.e., only Alice and Bob can see the contract, not John.
I suspect that it is still fairly rare for someone to have spent a lot of time developing smart contracts on Ethereum and Hyperledger Fabric. Couple that with the fact that anyone who has such experience is probably up to their eyeballs in work right now :-)
I worked on Go chaincode for about a year, building an IoT-oriented platform for smart contracts that has been temporarily suspended while I worked on JavaScript smart contracts through the Hyperledger Composer this year. I don't have direct expertise on Ethereum and Solidity, but I will do my best to answer what I can.
Do note, though, that Ethereum is based on cryptocurrencies and mining, and a lot of the activity is centered around the public, permissionless network. That is, it is not designed for secure business networks; for those, you have to take a version of the Ethereum code base and hack it. This is not the same thing at all as working with Fabric, which is designed from the ground up to be used for secure business transactions.
A few quick questions on smart contracts:
How does a Hyperledger smart contract (chaincode) stack up against Ethereum's?
Ethereum, like Fabric, has multiple smart contract languages. Ethereum's are Solidity (a JavaScript-like language), Serpent (a Python-like language), and LLL (a Lisp-like language). The big difference is that Fabric's contract languages are the actual general-purpose languages those imitate, so your skills are portable in both directions.
Hyperledger - Golang: expressibility? performance? security?
Golang looks a lot like C but is more expressive, with concepts like channels, receivers, and so on. The performance is pretty extreme.
I also favour the Hyperledger Composer infrastructure, which uses interpreted JavaScript code and a powerful business network modelling language. This is worth exploring as it is evolving fast. A lot of security headaches are solved with minimal fuss using their access control language in permissions.acl.
Ethereum - Solidity: expressibility? performance? security?
Not sure about expressibility of any of their languages, but presumably you can do common contract stuff. Performance, though, is limited by definition to the block cadence of the Ethereum network, which is limited by the speed of mining. Bitcoin commits blocks about every 10 minutes. Ethereum is faster, but there will be a limit.
Regarding security of these two -- Fabric is permissioned and is generally expected to run on a private network, in backoffice(s) or on a cloud. Thus, it can be architected and engineered for as much physical security as you desire and / or can afford. Ethereum is likely the same when deployed privately, but not when deployed into an exchange that is meant to be public a la Bitcoin.
There are attack vectors of course, but presuming that you keep your chaincode in private repositories then again you can get as much security as you can afford.
How do you secure smart contracts to ensure the code is not tampered with?
You have to secure your network and repositories. For example, if you are running on a single Kubernetes cluster for a small blockchain, then you secure the cluster. If you are running on a large collaboration with multiple separate back offices running the HSBN (IBM's Fabric-based High Security Business Network) on Z systems, then you will secure the physical hardware and the internetworks. The chaincode has few to zero attack vectors if you spend enough money. (I'm using cost also as a synonym for effort by the way). Presumably, a private Ethereum deployment will have similar characteristics but again it is conceived as a crypto-currency engine and is natively permissionless.
how can both parties trust and trace the results of the smart contract? any audit/traceability capability?
Fabric has a historian that tracks every transaction and world state change (and I mean all of them ever). You can write complex SQL-like queries to gather and analyze such data. It is extremely powerful.
When I search for similar info for Ethereum, I get article after article discussing the historical price of Ethereum's currency. These are different worlds.
Also, in a decentralised ideal world, whose legal liability is it if a bug and/or buffer overflow in the smart contract results in losses?
With Fabric, someone will be responsible for implementing smart contracts as codified business rules, and there is little logical difference between that and any existing financial system that was implemented either internally or using contracts. The dynamics of liability will be the same.
With Ethereum, I have no idea. There is a funky crypto angle to be aware of and if you try to implement a business network a la Fabric you are probably stepping into territory for which Ethereum takes no responsibility. This is not all that different from Fabric I suppose. But there is a difference in original purpose and that might make a difference when it comes to legal arguments (as in the "what were you thinking?" defense.) That is all pure speculation :-)
Are there any performance benchmarks? E.g., 2,000 complicated smart contracts executed within a span of 10 seconds?
I ran some load tests (Poisson traffic into a Go smart contract on a 4-node v0.6 Fabric on Bluemix) for months at an average of about 23,000 transactions per hour with full history retention in world state. It ran fine. Hyperledger v1 has been engineered to perform considerably better than v0.6; however, it is more complex to use, so it will require serious system engineering to eke out its best performance (and what is new about that?).
How does one enable/restrict access to these smart contracts? I.e., only Alice and Bob can see the contract, not John.
Take a look at the ACL language in Hyperledger Composer and you will see that there is a rather sophisticated view of participant restrictions.
UPDATE: That link is busted. The new one is https://hyperledger.github.io/composer/latest/reference/acl_language.html
There is also research going on with Go libraries for ACL concepts, but I don't know when such might appear.
Anyway, I hope some of this was useful.

Experiences OrientDB vs Neo4j [closed]

I am looking for the right Graph DB for a project. I tested Neo4j and really liked it. But the AGPL licensing issues put me off a little (you can read about it here).
I then found a couple of articles claiming that OrientDB is actually much faster, but they aren't really up to date. You can find them here and here. And it is licensed under Apache 2, which is good.
So I just want to ask the great people of stackoverflow for your opinion.
Scalability is important and OrientDB claims to be better at that (here)
Licensing should be open
I have a complex model of vertices/edges and need to retrieve relationships up to 3 levels deep
The mixture of document-graph that OrientDB offers seems to be a benefit
Thanks for your feedback!
Note: I am on the OrientDB team, my opinion is definitely slanted. I am also replying in a decidedly casual tone.
On your points:
1) On the topic of clustered deployment, it's currently not even a comparison. Neo4j uses master-slave replication; they state themselves that it is generally only suited to deployments with a single-digit number of nodes, and the entire graph must fit on one machine. Hear it from them directly: http://www.infoq.com/interviews/ian-robinson-neo4j?utm_source=infoq&utm_medium=videos_homepage&utm_campaign=videos_row1
OrientDB has full multi-master replication (every node can accept reads and writes), can shard data, intelligently distributes data using clusters, and automates distributed queries and transactions. Our CEO recently did an excellent webinar for Hazelcast showing our setup in this area: http://hazelcast.com/resources/orientdb-hazelcast-memory-distributed-graph-database/
2) Apache 2.0 is our community license, and it is extremely liberal. You can even embed the OrientDB Community Edition at no cost. The (A)GPL worries some people that their closed-source code will be polluted; this may or may not be a real threat, but it's sometimes hard to determine. Our community license is very feature-rich, including full distributed multi-master replication and sharding.
3) Traversing relationships is kind of the point of graph databases, so either Neo4j or OrientDB will suit you just fine here... go 2000 levels deep and it will still be performant (see the traversal sketch at the end of this answer).
4) The document-graph capabilities are great, but you knew I would say that. The product we've built is a production-grade system designed to be a full-on database, not a side database used as a supplement to an RDBMS or other datastore.
I am coming off strong here, but I have good reason. Over the past 3 weeks, a full team of developers at a world-leading tech company has been testing OrientDB against Neo4j. For their very demanding use case, we were the better choice for a variety of reasons. Money was not the issue; we earned the business with our technology.
Take it for what it's worth, I've stated my bias up front. From my experience, once you work with OrientDB, there will be no looking back. Let us know if you need any help!
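On point 3, here is roughly what a depth-limited traversal looks like from application code. This is a hedged sketch using the official Neo4j Python driver (the label, property, and credentials are hypothetical); OrientDB's SQL dialect has an equivalent TRAVERSE ... WHILE $depth <= 3 construct.

# Hypothetical sketch: fetch everything reachable within 3 hops of a user.
# The User label, id property, and credentials are made up for illustration.
from neo4j import GraphDatabase

driver = GraphDatabase.driver("bolt://localhost:7687", auth=("neo4j", "secret"))

def related_up_to_three_levels(user_id):
    with driver.session() as session:
        result = session.run(
            "MATCH (u:User {id: $id})-[*1..3]->(v) RETURN DISTINCT v",
            id=user_id,
        )
        return [record["v"] for record in result]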

Suggestions for the best Rails collaborative development stack? [closed]

Looking for some suggestions from the community for development stacks for collaborative environments. Could you share what you have and what has worked for you or your team?
The following is probably too verbose for some; it's a rambling sketch of my particular scenario, since I'm working with a fledgling dev group. So if you read it all, 1UP for you; otherwise, please feel free to just share your thoughts on the first question and what's worked for your team.
I have a situation where I and a couple of other developers are working together, and I'd like to set up the "best" dev environment possible for Ruby on Rails development. At the moment I use Git and some of the usually accepted best practices for development, but the other guys are new and not terribly familiar with the shell, Git, etc. They come from a PHP and monolithic environment.
I do have a central Linux server that has hitherto been used for LAMP-based dev for them. I can retool it into anything I'd like, as I'm quite adept and experienced at Unix system and network administration.
Could someone please suggest what may work well in this scenario? Again, ultimately we need to do collaborative development that has the lowest learning curve. I'll be the only one deploying to Heroku until I feel comfortable with their experience.
I would like to put something together that can get us all up to speed quickly, in a matter of a day versus a longer learning curve, and then allow them to grow into the shell and so forth over the next couple of weeks.
What I was thinking of is a shared SMB (mixed Windows and Mac workstations) and SFTP unified projects folder with either Apache virtual hosts for each project or Thin/Rack. I'd continue to use my methods, but this could give them the flexibility to grow into it and be able to restart httpd or Thin as needed.
Am I on the proverbial right track, or has someone seen a better alternative? A lot of things have crossed my mind, such as Gitorious (since we'll have a lot of small projects needing to be tracked and an enormous GitHub account is not feasible), Heroku, OpenShift, and a lot of other things, but I have enough uncertainty that I'd like to get some input from the community on the right mix for great collaborative agile development.
I have an answer, but I think you have conflicting requirements: lowest learning curve vs. low/free cost.
You say that GitHub is not feasible, but it does offer unparalleled features for novice users. They can see commits on a website instead of on the command line, can even edit files right in the browser (since yesterday; it uses Ace), and can gain insight into the branching/merging process.
Another paid option is http://cloud9ide.com/ which is also web-based.
I use my own development server as well, but only for experienced people who need no hand-holding. If I were to let everyone on there, the amount of support would consume my entire day.
It is my opinion that for Rails development people should adopt the best practices of the field. Look at it this way: at least you won't burden them with learning Subversion or --eek-- CVS. Just seeing the commits on GitHub and being able to discuss puzzling pieces of code right there is worth the money.

Building a community photography site, where can I store my photos online? [closed]

I am in the process of laying down the requirements for a photography community site. An important feature to investigate would be allowing more photos per account than rival sites in my country. What are the possibilities out there?
Should I go for something like Amazon S3, or is there anything that offers more image-related features? I am mostly interested in a low price per GB (storage and transfer out).
I used to work for a social networking website that hosts billions of images, and we evaluated S3. The conclusion was that it is too expensive for heavy-traffic sites. The storage itself is pretty cheap, but the costs for accessing the content on S3 add up quickly. That makes S3 more suitable for applications like online backups. In my view, cost is the main con.
On the other hand, this is only a concern once your site gets large. The biggest advantages of S3 are that you don't have to worry about scalability and that it's pretty easy to set up and then forget about it because it just works. Many medium sized services use S3 with great success.
The solution we went for is an array of dedicated servers that host the images and also run webservers (don't use Apache, use webservers optimized for static content such as lighttpd or nginx), and in front of those, use a CDN (content delivery network, such as akamai or panther express). You will typically get high hit rates (depending on the access patterns of your site), so the end users will get most files directly out of the CDN and not cause any load on your servers (except for the first time a file is accessed). Thus you might be fine with just one server and a mirror for a while. As you scale, the challenges become how to distribute your images across the farm, how to manage redundancy etc.
I assume that time-to-market also plays a role. In that respect, a good strategy might be to start with S3 and be up and running quickly; later on you can still migrate to a more sophisticated solution. In that case, make sure management keeps this in mind: non-tech people tend to believe that once a feature works, you never have to touch it again. And be aware that migrating a lot of data takes time. When we changed our photo architecture, the copy jobs ran for months.
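
If you do start with S3, the upload path is only a few lines. Here is a minimal sketch using boto3 (the bucket name and key scheme are made up for illustration):

# Hypothetical sketch: store an uploaded photo in S3 and hand back a URL.
import boto3

s3 = boto3.client("s3")

def store_photo(local_path, user_id, photo_id):
    key = "photos/{}/{}.jpg".format(user_id, photo_id)
    s3.upload_file(
        local_path,
        "my-photo-site",  # hypothetical bucket name
        key,
        ExtraArgs={
            "ContentType": "image/jpeg",
            "CacheControl": "max-age=31536000",  # let a CDN cache aggressively
        },
    )
    # For private content, return a time-limited presigned URL; public content
    # can be served through a CDN or static domain in front of the bucket instead.
    return s3.generate_presigned_url(
        "get_object", Params={"Bucket": "my-photo-site", "Key": key}, ExpiresIn=3600
    )

Putting a CDN (or at least aggressive Cache-Control headers) in front of the bucket addresses the access-cost problem mentioned above, since most reads then never hit S3.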
How about Flickr/Picasa integration? Users would use their own Flickr/Picasa accounts to store their photos and use the features on your site. In that case you pay nothing for photo storage :P
I myself would rather have a single photo-storage account than an individual account for each site.

Where is Erlang used and why? [closed]

I would like to know a list of the most common applications/websites/solutions where Erlang is used, successfully or not.
Explaining why it is used in a specific solution instead of other programming languages would be very much appreciated, too.
Listing BAD Erlang case studies (cases in which Erlang is misused) would be interesting as well.
From Programming Erlang:
Many companies are using Erlang in their production systems:
• Amazon uses Erlang to implement SimpleDB, providing database services as a part of the Amazon Elastic Compute Cloud (EC2).
• Yahoo! uses it in its social bookmarking service, Delicious, which has more than 5 million users and 150 million bookmarked URLs.
• Facebook uses Erlang to power the backend of its chat service, handling more than 100 million active users.
• WhatsApp uses Erlang to run messaging servers, achieving up to 2 million connected users per server.
• T-Mobile uses Erlang in its SMS and authentication systems.
• Motorola is using Erlang in call processing products in the public-safety industry.
• Ericsson uses Erlang in its support nodes, used in GPRS and 3G mobile networks worldwide.
The most popular open source Erlang applications include the following:
• The 3D subdivision modeler Wings 3D, used to model and texture polygon meshes.
• The Ejabberd system, which provides an Extensible Messaging and Presence Protocol (XMPP) based instant messaging (IM) application server.
• The CouchDB "schema-less" document-oriented database, providing scalability across multicore and multiserver clusters.
• The MochiWeb library that provides support for building lightweight HTTP servers. It is used to power services such as MochiBot and MochiAds, which serve dynamically generated content to millions of viewers daily.
• RabbitMQ, an AMQP messaging protocol implementation. AMQP is an emerging standard for high-performance enterprise messaging.
ejabberd is one of the most well-known Erlang applications and the one I learned Erlang with.
I think it's one of the most interesting projects for learning Erlang, because it really builds on Erlang's strengths. (However, some will argue that it's not OTP; don't worry, there's still a trove of great code inside...)
Why ?
An XMPP server (like ejabberd) can be seen as a high-level router, routing messages between end users. Of course there are other features, but this is the most important aspect of an instant messaging server. It has to route many messages simultaneously and handle a lot of TCP/IP connections.
So we have 2 features:
handle many connections
route messages given some aspects of the message
These are examples of where Erlang shines.
handle many connections
It is very easy to build scalable non-blocking TCP/IP servers with Erlang. In fact, it was designed to solve this problem.
And given that it can spawn hundreds of thousands of processes (not threads; it's a share-nothing approach, which is simpler to design), ejabberd is designed as a set of Erlang processes (which can be distributed over several servers):
client connection process
router process
chatroom process
server to server processes
All of them exchanging messages.
route messages given some aspects of the message
Another much-loved feature of Erlang is pattern matching. It is used throughout the language.
For instance, consider the following:
access(moderator, _Config)   -> rw;    % moderators can always read/write
access(participant, _Config) -> rw;    % participants too
access(visitor, #config{type="public"})    -> r;   % visitors only read public rooms
access(visitor, #config{type="public_rw"}) -> rw;  % ...unless the room is public_rw
access(_User, _Config)       -> none.  % anything else gets no access
Those are 5 clauses of the same access function. Erlang selects the first clause whose patterns match the arguments it receives (#config here is a record type with a type field); for example, access(visitor, Config) returns r when Config is a public room.
That makes business rules much easier to express, and much clearer, than chaining if/else or switch/case.
To wrap up
Writing scalable servers is the whole point of Erlang; everything is designed to make this easy. To the two previous features, I'd add:
hot code upgrade
Mnesia, a distributed database (included in the base distribution)
MochiWeb, on which most Erlang HTTP servers are built
binary support (decoding and encoding binary protocols is as easy as it gets)
a great community with great open source projects (ejabberd and CouchDB, but also Webmachine, Riak, and a slew of libraries that are very easy to embed)
Fewer LOCs
There is also this article from Richard Jones, who rewrote an application from C++ to Erlang: 75% fewer lines in Erlang.
The list of the most common applications for Erlang has been covered (CouchDB, ejabberd, RabbitMQ, etc.), but I would like to contribute the following.
The reason why it is used in these applications comes from the core strength of Erlang: managing application availability.
Erlang was built from the ground up for the telco environment, which requires systems to meet at least five-nines availability (99.999% yearly uptime). That figure doesn't leave much room for downtime during a year! Primarily for this reason, Erlang comes loaded with the following features (non-exhaustive):
Horizontal scalability (ability to distribute jobs across machine boundaries easily through seamless intra & inter machine communications). The built-in database (Mnesia) is also distributed by nature.
Vertical scalability (ability to distribute jobs across processing resources on the same machine): SMP is handled natively.
Code Hot-Swapping: the ability to update/upgrade code live during operations
Asynchronous: the real world is async so Erlang was built to account for this basic nature. One feature that contributes to this requirement: Erlang's "free" processes (>32000 can run concurrently).
Supervision: many different strategies for process supervision with restart strategies, thresholds etc. Helps recover from corner-cases/overloading more easily whilst still maintaining traces of the problems for later trouble-shooting, post-mortem analysis etc.
Resource Management: scheduling strategies, resource monitoring etc. Note that the default process scheduler operates with O(1) scaling.
Live debugging: the ability to "log" into live nodes at will helps trouble-shooting activities. Debugging can be undertaken live with full access to any process' running state. Also the built-in error reporting tools are very useful (but sometimes somewhat awkward to use).
Of course I could talk about its functional roots but this aspect is somewhat orthogonal to the main goal (high availability). The main component of the functional nature which contributes generously to the target goal is, IMO: "share nothing". This characteristic helps contain "side effects" and reduce the need for costly synchronization mechanisms.
I guess all these characteristics help make the case for using Erlang in business-critical applications.
One thing Erlang isn't really good at: processing big blocks of data.
We built a betting exchange (aka prediction market) using Erlang. We chose Erlang over some of the more traditional financial languages (C++, Java etc) because of the built-in concurrency. Markets function very similarly to telephony exchanges. Our CTO gave a talk on our use of Erlang at CTO talk.
We also use CouchDB and RabbitMQ as part of our stack.
Erlang comes from Ericsson, and is used within some of their telecoms systems.
Outside telecoms, CouchDb (a document-oriented database) is possibly the best known Erlang application so far.
Why Erlang ? From the overview (worth reading in full):
The document, view, security and replication models, the special purpose query language, the efficient and robust disk layout and the concurrent and reliable nature of the Erlang platform are all carefully integrated for a reliable and efficient system.
I came across this in the process of writing up a report: Erlang in Acoustic Ray Tracing.
It's an experience report on a research group's attempt to use Erlang for acoustic ray tracing. They found that while the program was easier to write and less buggy, it scaled worse and performed 10x slower than a comparable C program. So one spot where it may not be well suited is CPU-intensive scenarios.
Do note, though, that the people who wrote the paper were first learning Erlang, and may not have known the proper development procedures for CPU-intensive Erlang.
Apparently, Yahoo used Erlang to make something it calls Harvester. Article about it here: http://www.ddj.com/architect/220600332
What is Erlang good for?
http://beebole.com/en/blog/erlang/why-erlang/
http://www.aquabu.com/2008/2/15/erlang-pragmatic-studio-day-3-notes
http://www.reddit.com/r/programming/comments/9q0lr/erlang_and_highfrequency_trading/
(jerf's answer)
It's important to realize that Erlang's four parts -- the language itself, the VMs (BEAM, HiPE), the standard libs (plus modules on GitHub, CEAN, etc.), and the development environment -- are being steadily updated, expanded, and improved. For example, I remember reading that floating-point performance improved when Wings 3D's author realized it needed to improve (I can't find a source for this). And this guy just wrote about it:
http://marian-dan.com/wordpress/?p=324
A couple of years ago, Tim Bray's Wide Finder publicity and all the folks starting to build web app frameworks and HTTP servers led (at least in part) to improved regex and binary handling. And there's all the work integrating HiPE and SMP, the Dialyzer project, multiple unit-testing and build libs springing up, etc.
So its sweet spot is expanding. The difficult thing is that the official docs can't keep up very well, and the volume of the mailing list and the Erlang blogosphere is growing quickly.
We are using Erlang to provide the back-end muscle power for our really real-time browser-based multi-player game Pixza. We don't use Flash or any other third-party plugins, though the game is real-time multi-player. We use pure JS and COMET techniques instead. And Erlang supports the "really realtimeliness" of Pixza.
I work for Wooga, a social games company, and we use Erlang for some of our game backends (basically HTTP APIs for millions of daily users) and auxiliary services like the iOS push notification provider, payments, etc.
I think it really shines in network-related tasks, and it makes it fairly straightforward to structure and implement simple and complex network services alike. Distribution, fault tolerance, and performance are easy to achieve because Erlang has some of the key ingredients built in, and they have been used for a long time in critical production infrastructure. So it's not like "the new hip technology thing 0.0.2 alpha".
I know that other game companies use Erlang as well. You should be able to find presentations on slideshare about that.
Erlang draws its strength from being a functional language with no shared memory. Hence, IMO, Erlang won't be suitable for applications that require in-place memory manipulation; image editing, for example.
