I have the following spreadsheet:
Cost Analysis Google Sheet
I am trying to think of the best way to analyze the cost impact on a Variant based on the different components of the product. I am having a very tough time coming up with creative ways to do cost trade-off analysis in Google Sheets.
Basically, I am trying to find methods within Sheets to help me visualize the added value of certain components for a Variant.
I know that this is difficult to do without any domain knowledge of the application, but I am hoping that someone has some more general ideas for how to do some reporting and visualization of data like this!
Thanks so much!
You have already added conditional formatting, which I consider great for visual identification in tables, given that I don't think this model would be improved by a graph. First, I would recommend changing the conditional formatting to a color scale (gradient) and having the green extreme be the most negative value in the Diff columns. Second, if you want the simplest visualization possible, you can build a rank list.
This would work like a dashboard, presenting the variants with the information you want. Here is an example, which returns the PN column of the row with the lowest Diff value among the Variant 3 rows: =INDEX(G3:I16, MATCH(SMALL(I3:I16, 1), I3:I16, 0), 1) (the final 0 in MATCH forces an exact match, so the Diff column does not need to be sorted).
You can then vary the rank (the second argument of SMALL) and the column offset (the last argument of INDEX) to build a list of the N best variants with the columns you want.
Hope that helps with the visualization. For advice on how to organize the data, I believe a Google Sheets forum is less appropriate :)
Related
I have a list of items with data connected to them, and I need to find the best combination of 3 items while meeting multiple criteria based on that data.
Here is an example sheet - https://docs.google.com/spreadsheets/d/1-5R0OfJWsjnCUJ9mGKvYuphDRaXeoJZoTsiEdoXi9ag/edit?usp=sharing
I do not know if a spreadsheet is the best tool to solve your optimization problem.
You may want to take a look at linear programming and see if another tool would serve you better.
If you absolutely want to do this using Google Sheets, see if one of the existing solver add-ons would work for you. Choose Add-ons > Get add-ons and search for solver to get started.
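If you do end up modeling it as a linear program outside of Sheets, here is a minimal Python sketch using PuLP. The items, scores, and the budget criterion are made up, since I can't see your actual criteria; the structure (binary pick variables, a "pick exactly 3" constraint, plus one constraint per criterion) is the part that carries over.

```python
# Minimal sketch with PuLP (pip install pulp). Item names, scores, costs and
# the budget are invented -- substitute your own columns and criteria.
from pulp import LpProblem, LpVariable, LpMaximize, lpSum, LpBinary, LpStatus, value

items = {
    # name: (score_to_maximize, cost)
    "A": (9.0, 40),
    "B": (7.5, 25),
    "C": (6.0, 10),
    "D": (8.0, 35),
    "E": (5.5, 15),
}
budget = 70  # example criterion: total cost must stay within this

prob = LpProblem("best_combination_of_3", LpMaximize)
pick = {name: LpVariable(f"pick_{name}", cat=LpBinary) for name in items}

# Objective: maximize the total score of the chosen items.
prob += lpSum(items[n][0] * pick[n] for n in items)

# Exactly three items must be chosen.
prob += lpSum(pick.values()) == 3

# Example additional criterion: total cost within budget.
prob += lpSum(items[n][1] * pick[n] for n in items) <= budget

prob.solve()
print(LpStatus[prob.status])
print([n for n in items if value(pick[n]) > 0.5])
```

PuLP bundles the open-source CBC solver, so the sketch should run as-is after pip install pulp.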
I'm working with a large data set that really warrants a graph DB. My goal is to visualize and identify trends in the data set in order to make decisions.
I'm currently using Neo4j and I really like the tool; however, the nodes returned are capped at 300. This number is only a fraction of my data and doesn't really allow me to gain the insight I've been looking for, even with queries that filter out portions of it. Additionally, I'd really like to add node weights and color nodes per condition, which isn't possible using just Neo4j.
Has anybody found a solution to this problem? I'd imagine there may be some client-side libraries designed for these sorts of problems. Alternatively, I wouldn't be opposed to switching to some other graph DB better suited to solving them.
I would suggest using Neo4j Bloom. It will give you a better visualization of your Neo4j data.
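As far as I know, the 300-node cap is just Neo4j Browser's display limit (adjustable in the Browser settings); the drivers return the full result set. So another option, closer to the client-side libraries mentioned in the question, is to pull the data with the official Python driver and render it yourself, which also lets you set node sizes and colors per condition. A minimal sketch, with placeholder connection details and a hypothetical weight property:

```python
# Minimal sketch: pull relationships via the official neo4j Python driver and
# draw them client-side with networkx/matplotlib. The URI, credentials and the
# "weight" property are placeholders -- adjust to your own model.
import networkx as nx
import matplotlib.pyplot as plt
from neo4j import GraphDatabase

driver = GraphDatabase.driver("bolt://localhost:7687", auth=("neo4j", "password"))

query = """
MATCH (a)-[r]->(b)
RETURN id(a) AS src, id(b) AS dst, coalesce(r.weight, 1.0) AS weight
LIMIT 10000
"""

G = nx.DiGraph()
with driver.session() as session:
    for record in session.run(query):
        G.add_edge(record["src"], record["dst"], weight=record["weight"])
driver.close()

# Size nodes by degree and color them by a simple condition (here: degree > 5).
sizes = [50 + 20 * G.degree(n) for n in G.nodes]
colors = ["red" if G.degree(n) > 5 else "steelblue" for n in G.nodes]
nx.draw_spring(G, node_size=sizes, node_color=colors, with_labels=False)
plt.show()
```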
I am trying to use collaborative filtering to recommend items to the user based on their past purchases. I have created a user vector representing their usage, and an item vector (A) whose values are populated as the probability of B given A. The objective is to somewhat capture, in the item vector representation, which items are sold together. Now I need to find the time when these recommendations should be presented. As the items I am recommending are of periodic use, timing is very important.
So I am trying to explore constraint-based recommendations to make my recommendations time-sensitive. The approach I am considering is to create a time-sensitive constraint based on the last date of purchase and the average consumption rate. But the problem is that creating constraints at the user level will become computationally difficult.
I need your suggestions on this approach, or on any better way to implement it. All I want is to develop a recommendation engine using customers' usage data for items that are consumed and need to be purchased again. I need to output a list of recommendations as well as the timing for presenting each recommendation to the user.
Thanks
The way I see it, there are two basic options you can pursue here. On the one hand, the temporal features can be incorporated as additional information, turning this into a kind of hybrid recommendation. The Python package "lightfm" is a good example.
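For the first option, a minimal lightfm sketch might look like the following; the users, items, and recency buckets are invented for illustration, and in practice you would derive the recency features from the last purchase date per user:

```python
# Minimal sketch of a hybrid lightfm model (pip install lightfm) where purchase
# recency is bucketed into categorical user features. Users, items and buckets
# are invented for illustration.
import numpy as np
from lightfm import LightFM
from lightfm.data import Dataset

users = ["u1", "u2", "u3"]
items = ["shampoo", "detergent", "coffee"]
recency_buckets = ["recency:0-15d", "recency:16-30d", "recency:30d+"]

dataset = Dataset()
dataset.fit(users, items, user_features=recency_buckets)

# (user, item) purchase interactions.
interactions, _ = dataset.build_interactions([
    ("u1", "shampoo"), ("u1", "coffee"),
    ("u2", "detergent"),
    ("u3", "coffee"),
])

# Each user carries the recency bucket of their last purchase as a feature.
user_features = dataset.build_user_features([
    ("u1", ["recency:0-15d"]),
    ("u2", ["recency:16-30d"]),
    ("u3", ["recency:30d+"]),
])

model = LightFM(loss="warp", no_components=16)
model.fit(interactions, user_features=user_features, epochs=20)

# Score every item for u1; higher scores mean stronger recommendations.
u1_internal = dataset.mapping()[0]["u1"]
scores = model.predict(u1_internal, np.arange(len(items)), user_features=user_features)
print(sorted(zip(items, scores), key=lambda t: -t[1]))
```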
On the other hand, the problem can also be modeled as a time-series problem. A well-known paper dealing with next-basket recommendation is "A Dynamic Recurrent Model for Next Basket Recommendation". Here too, there are already implementations on GitHub.
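The timing side of your question (last purchase date plus average consumption rate) can also be prototyped cheaply before you worry about per-user computational cost; a rough pandas sketch with made-up data:

```python
# Rough sketch: estimate the next purchase date per (user, item) from the
# average inter-purchase interval. Column names and data are made up.
import pandas as pd

purchases = pd.DataFrame({
    "user": ["u1", "u1", "u1", "u2", "u2"],
    "item": ["shampoo", "shampoo", "shampoo", "coffee", "coffee"],
    "date": pd.to_datetime(
        ["2024-01-01", "2024-02-01", "2024-03-03", "2024-01-10", "2024-01-25"]),
})

purchases = purchases.sort_values("date")
grouped = purchases.groupby(["user", "item"])["date"]

schedule = pd.DataFrame({
    "last_purchase": grouped.max(),
    "avg_interval": grouped.apply(lambda s: s.diff().mean()),
})
schedule["recommend_on"] = schedule["last_purchase"] + schedule["avg_interval"]
print(schedule)
```

The average inter-purchase interval per (user, item) gives a simple due date; you would then only surface a recommendation once today's date passes recommend_on.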
I am working in a big company and we have a lot of JIRA projects. I would like to have a dashboard or some other way to know whether the projects that exist in JIRA are actually used, e.g. whether there are any issues in them (I don't need to see the issues, just to have a number).
Can I do it without accessing the database? Do I need a plugin, or is there a built-in way to get the info? :)
thanks a lot
best regards
Adrien k
You can easily do this with the built-in Two Dimensional Filter Statistics gadget:
first, search for all issues in your JIRA instance. There may be an easier way to do this, but you can certainly use JQL like project=ABC or project != ABC.
save the search as a filter
go to a dashboard and add a new Two Dimensional Filter Statistics gadget. Select your newly-saved filter, select "Project" for one axis, and something with a small number of values (like Issue Type) for the other axis. You'll also need to adjust "Number of Results" to exceed the number of issue types in your system.
save the gadget
Note that the Projects gadget also provides somewhat-similar information with fewer configuration requirements, but as far as I know, it doesn't show the numeric issue totals unless you hover the mouse pointer over the bars.
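If you would rather script the count than build a dashboard, the same numbers are also available from JIRA's REST API (still no database access or plugin needed); a minimal sketch with a placeholder base URL and credentials:

```python
# Minimal sketch: count issues per project via JIRA's REST API. The base URL
# and credentials are placeholders; maxResults=0 asks only for the total.
import requests

BASE = "https://jira.example.com"
AUTH = ("user", "password")  # or an API token, depending on your setup

projects = requests.get(f"{BASE}/rest/api/2/project", auth=AUTH).json()
for project in projects:
    key = project["key"]
    search = requests.get(
        f"{BASE}/rest/api/2/search",
        params={"jql": f'project = "{key}"', "maxResults": 0},
        auth=AUTH,
    ).json()
    print(key, search["total"])
```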
The company I work for makes a plugin that can do that - Structure
As an example, you can build a structure containing all issues in the available projects, group them by project, and add a column showing the number of sub-items (issues) in each group (project).
You can also add a structure to a dashboard/Confluence page.
On a large JIRA instance it may be a bit on the expensive side to use it just for that alone, though...
I need to explain the practical problems that might be encountered when transforming their transactional (and other) data from their diverse sources into the Data Warehouse. According to my knowledge, this is about cleansing and scrubbing data. If anyone knows about any practical problems, please help me. Thanks for your help.
That's a broad topic, but I'll offer a few good starting points.
For starters, think about history. If a transaction updates some data point, do you need to apply that retroactively, or do you need to remember what the value was at any given point in time? For example, suppose you have a monthly report of customers by city, and one of your customers moves. How should the DW reflect that?
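One common answer to the "customer moves" case is a Type 2 slowly changing dimension (Kimball's term): keep every version of the row with validity dates instead of overwriting it. A rough sketch with invented column names:

```python
# Rough sketch of a Type 2 slowly changing dimension for the "customer moves"
# example: every change appends a new row with validity dates instead of
# overwriting the old one. Column names are invented.
import pandas as pd

dim_customer = pd.DataFrame([
    {"customer_id": 42, "city": "Boston", "valid_from": "2023-01-01",
     "valid_to": "9999-12-31", "is_current": True},
])

def apply_move(dim, customer_id, new_city, move_date):
    """Close the current row and append a new one for the new city."""
    current = (dim["customer_id"] == customer_id) & dim["is_current"]
    dim.loc[current, ["valid_to", "is_current"]] = [move_date, False]
    new_row = {"customer_id": customer_id, "city": new_city,
               "valid_from": move_date, "valid_to": "9999-12-31",
               "is_current": True}
    return pd.concat([dim, pd.DataFrame([new_row])], ignore_index=True)

dim_customer = apply_move(dim_customer, 42, "Denver", "2024-06-01")
print(dim_customer)
# Monthly reports join on valid_from/valid_to, so past months still show Boston.
```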
Think about data acceptance. Is every input row a good input? For example, if you're dealing with web data, there are crawlers and spammers that you might not want to count the same as you count user traffic.
Think about data synchronization. Do all your inputs use the same keys? Do you know how to translate between them? Does Team A mean the same thing by "cust_id" as Team B does? A project glossary is very helpful here.
Think about localization. Are your inputs all in the same time zone? Do they all use the same calendar system? Do you need to handle Unicode?
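For the time-zone point in particular, a common approach is to normalize everything to UTC at load time; a small sketch with made-up timestamps:

```python
# Small sketch: normalize timestamps from two sources (one in US/Eastern,
# one in Europe/Berlin) to UTC before loading. Timestamps are made up.
import pandas as pd

eastern = pd.Series(pd.to_datetime(["2024-03-01 09:00", "2024-03-01 17:30"]))
berlin = pd.Series(pd.to_datetime(["2024-03-01 15:00", "2024-03-01 23:30"]))

utc = pd.concat([
    eastern.dt.tz_localize("US/Eastern").dt.tz_convert("UTC"),
    berlin.dt.tz_localize("Europe/Berlin").dt.tz_convert("UTC"),
], ignore_index=True)
print(utc)
```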
Think about reporting. Are the data you're capturing able to answer the questions people will ask of the DW? If not, how can you capture data that can?
Think about presentation. Should you be showing customers the same data you're using for internal reporting? Does finance need to see a different slice of the data than marketing?
This really only scratches the surface of the issues that come up on a major DW project. I would refer you to Ralph Kimball's assorted books on Data Warehousing for a more in-depth discussion of problems and solutions. Hope this helps you get started.
You give the answer in your question.
According to my knowledge this is about cleansing and scrubbing data.
And you are correct. Cleansing data means that you have a company-wide list of clean element attributes, and a mapping that changes the unclean elements into clean elements.
Processing the data against the clean element attributes is a piece of cake compared to creating the company-wide list of clean element attributes.
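To make that concrete, the technical half really can be as small as a lookup table applied during the load; the values below are invented:

```python
# Tiny sketch of the technical half of cleansing: a company-wide mapping of
# unclean values to agreed clean values, applied during the load.
clean_city = {
    "NYC": "New York",
    "N.Y.C.": "New York",
    "new york": "New York",
    "SF": "San Francisco",
    "San Fran": "San Francisco",
}

def cleanse(value: str) -> str:
    """Map an unclean element to its clean attribute; flag unknowns for review."""
    return clean_city.get(value.strip(), f"UNMAPPED:{value.strip()}")

rows = ["NYC", "San Fran", "new york", "Boston"]
print([cleanse(r) for r in rows])  # Boston comes back flagged as UNMAPPED
```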
You have to get people from different departments to agree on what data to warehouse, and to agree on what each element means. This is a difficult sociological problem. It's not a terribly hard technical problem.
Good luck getting your company-wide list of clean element attributes.