|
I am not sure whether this is what you want, but I really think you need to learn some cost/loss functions to decide how to manage the resources, to minimize the cost and maximize the outputs. If I had paid more attention in class, I would have learnt how the maximization or cost-reduction algorithms work, but... Cost function - Wikipedia.
The following resources will help you understand the overall algorithm, so you can then write your own:
Cost Function - Coursera.org
Mathematical optimization - Wikipedia
The sh*t I complain about
It's like there ain't a cloud in the sky and it's raining out - Eminem
~! Firewall !~
|
|
|
|
|
Thanks Afzaal, the cost function is certainly what I'd like to end up with, but when I look into the articles it quickly gets to a level of mathematical notation that I'm not comfortable with, despite being an engineering graduate (a long time ago, though).
I think in this case I'm going to have to approximate something by working out each product's weekly average, then phase-shifting them by a number of days each and seeing if I can derive a function from this somehow. It won't be optimal, but then real-world production usually isn't.
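For what it's worth, that phase-shifting idea can be sketched without any heavy notation. Everything below is invented for illustration: the "cost" is simply the peak daily load over a repeating week, and the per-product shifts are brute-forced.

```python
from itertools import product

def peak_load(loads, shifts):
    """Total daily load over a 7-day week after rotating each
    product's profile by its shift; the cost is the peak day."""
    week = [0.0] * 7
    for load, s in zip(loads, shifts):
        for day, amount in enumerate(load):
            week[(day + s) % 7] += amount
    return max(week)

def best_shifts(loads):
    """Brute-force the per-product phase shifts (0-6 days) that
    minimise the weekly peak load."""
    best = None
    for shifts in product(range(7), repeat=len(loads)):
        cost = peak_load(loads, shifts)
        if best is None or cost < best[0]:
            best = (cost, shifts)
    return best

# Two made-up products whose unshifted peaks coincide on day 0:
a = [10, 0, 0, 0, 0, 0, 0]
b = [10, 0, 0, 0, 0, 0, 0]
cost, shifts = best_shifts([a, b])
```

Brute force is only viable for a handful of products (7^n combinations), but it gives a baseline to compare any proper optimization against.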
The links reference some fascinating material though, it's certainly an area I'd like to explore in more mathematical detail at some point.
Andy B
|
|
|
|
|
Or, on the other hand, you can consider looking for a library that lets you do this. I do not know of any, but a quick Google search will easily yield a few results.
|
|
|
|
|
|
Thanks Gerry but in this instance I'll be programming in a non-Microsoft environment, so making API calls will not be possible, or at least not easily. I have used the Excel solver function in the past for optimization exercises and it is very effective.
|
|
|
|
|
Don't get me wrong. I believe Codility proposes interesting problems. This tool might even be a good way of testing candidates' algorithmic skills (not programming skills). My frustration is elsewhere: it's hard, even when it pretends otherwise.
Problems fall into one of three categories: "painless", "respectable", and "ambitious".
My experience, however, is that few of the painless ones actually are painless. Most of them require a respectable amount of time to solve. A lot of the problems conceal some trick that you need to somehow discover, and you cannot get any hints. Solutions on the internet will instantly spoil the whole exercise instead of giving you hints progressively.
Now, even once you have the algorithm right, you are far from done, as all you have so far isn't worth a penny. The implementation might actually take you more time than finding the algorithm itself. Because: indexes. You misplaced one increment, you forgot one, or whatever. The smallest mistake will quickly drop your score to 0%; you failed, goodbye. You had the right idea, though; it's a shame.
And the worst part is: Google won't find anybody who has had, or is having, a similar experience. I'm alone, and it doesn't feel good.
|
|
|
|
|
One wrong "increment" can destroy a $$$ industrial process; or kill a patient.
Better on paper, than in the real world.
"(I) am amazed to see myself here rather than there ... now rather than then".
― Blaise Pascal
|
|
|
|
|
.. or using the wrong units can prang some very expensive hardware a long long way from home.
Software rusts. Simon Stephenson, ca 1994. So does this signature. me, 2012
|
|
|
|
|
Of course, but that's why you have more than a couple of hours to think about things, and you have discussions with your peers, and peer reviewing, ...
But that wasn't exactly my point.
|
|
|
|
|
Background:
I developed a 2-D simulation that allows for the training of surgical technologists in the setup of surgical trays. The student is able to pick a tool from a toolbox, drag it onto the surface of the tray, and orient it the way it needs to be for the surgery. The grading used a nearest-neighbor algorithm on data from a tray set up in the graphical editor by an instructor or subject-matter expert. I have available the coordinates of each image as well as its height, width, and orientation. It works okay for that, because we had a set of rules governing how far a tool can be from a right answer and still be right.
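For readers, a minimal sketch of that kind of tolerance-based grading (the tool names, coordinates, and tolerances below are invented; the real application matches against the expert's tray data):

```python
import math

def grade_nearest(placed, ideal, max_dist, max_angle):
    """placed/ideal: dict mapping tool name -> (x, y, angle_degrees).
    A tool counts as correct if it sits within max_dist of its ideal
    spot and within max_angle of its ideal orientation."""
    score = 0
    for tool, (x, y, a) in placed.items():
        ix, iy, ia = ideal[tool]
        close = math.hypot(x - ix, y - iy) <= max_dist
        # wrap the angle difference into [-180, 180) before comparing
        aligned = abs((a - ia + 180) % 360 - 180) <= max_angle
        if close and aligned:
            score += 1
    return score / len(ideal)
```

This is exactly the "set of rules" style of grading: each tool is judged against its own ideal position, independently of the other tools.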
Problem:
Now, I have to do a similar application for another purpose. The catch is that the grading is more "fuzzy". They don't want grading by the exact location of the user's answer as compared to the right answer. They just want the objects graded based on the approximate location of each object with regard to the other objects on the tray, and they should be in the correct order. I have tried to figure this out but have come up short. I can always find a case that would throw the whole thing off. I was wondering if anyone here has any ideas?
Thanks in advance,
Joe
|
|
|
|
|
You said: "based on order".
There needs to be an "ideal order". Once that's established, you can determine what constitutes a "variance" from this "ideal", how it's measured, etc.
"(I) am amazed to see myself here rather than there ... now rather than then".
― Blaise Pascal
|
|
|
|
|
The definition of a "right" answer seems to be reasonably simple: are they in the right order, and do they come within a minimum distance of another instrument?
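That "right order plus minimum spacing" rule is straightforward to sketch. The names and numbers below are invented, and ordering is taken left to right along x, which is an assumption:

```python
import math

def grade_tray(placed, ideal_order, min_gap):
    """placed: dict mapping instrument name -> (x, y) on the student's tray.
    ideal_order: instrument names left to right, as set by the expert.
    Pass if the instruments appear in the ideal left-to-right order
    and no two neighbours sit closer together than min_gap."""
    student_order = sorted(placed, key=lambda name: placed[name][0])
    if student_order != list(ideal_order):
        return False
    for a, b in zip(ideal_order, ideal_order[1:]):
        (x1, y1), (x2, y2) = placed[a], placed[b]
        if math.hypot(x2 - x1, y2 - y1) < min_gap:
            return False
    return True
```

Note this grades only relative placement, never absolute coordinates, which matches the "fuzzy" requirement: sliding the whole arrangement across the tray changes nothing.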
Never underestimate the power of human stupidity
RAH
|
|
|
|
|
In the following problem, which algorithm should be considered?
Given a finite set of variable-height bricks, how can I find the subset that reaches at least a given height, while minimizing the total height of the chosen bricks?
In other words, which bricks should I use to reach a given height while minimizing the cost of material?
Thanks
|
|
|
|
|
|
Yes, I thought about that algorithm for my purpose.
But the two objectives are different: in the knapsack problem you maximize a value subject to a cost limit, and the value and the cost are two different quantities.
In this case the value and the cost are one and the same, and you have to minimize that value subject to the constraint that it is at least a given quantity.
Maybe I can apply the same algorithm, but I cannot see how to reduce my problem to the knapsack one.
|
|
|
|
|
It may be overkill but you can easily write it as an integer linear program:
minimize h.x
st.
h.x >= minheight
for all i: x[i] is boolean
Where h are the heights of the bricks, x are boolean decision variables deciding for each brick whether to take it or not, minheight is the minimum height that must be reached, and . is the dot product between two vectors.
With just a little coding effort, you can make solvers like GLPK or Gurobi solve that.
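Before wiring up a real solver, the same 0-1 program can be sanity-checked by brute force on small instances (exponential in the number of bricks, so this is only for testing; the heights and target below are made up):

```python
from itertools import product

def min_height_subset(h, minheight):
    """Solve  min h.x  s.t.  h.x >= minheight,  x[i] in {0, 1}
    by enumerating all 2^n boolean assignments."""
    best = None
    for x in product((0, 1), repeat=len(h)):
        total = sum(hi * xi for hi, xi in zip(h, x))
        if total >= minheight and (best is None or total < best[0]):
            best = (total, x)
    return best  # None when no subset reaches minheight

# bricks of height 5, 4, 3, 2; must reach at least 6
total, x = min_height_subset([5, 4, 3, 2], 6)
```

Agreement between this enumerator and the ILP solver on random small inputs is a cheap way to catch modelling mistakes before scaling up.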
Of course it can be solved with DP, but that will be at least an O(height × #bricks) time algorithm, where height is the final height; realistically you'd have to go a bit higher, up to some guessed upper bound.
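A sketch of that DP, assuming the brick heights are integers: since the objective and the constraint are the same sum, the problem reduces to subset-sum reachability, and target + tallest brick is a safe upper bound (a minimal solution can never exceed it, because dropping any brick from it must fall below the target).

```python
def min_reachable_height(heights, target):
    """Smallest achievable subset sum that is >= target, or None.
    O(bound * len(heights)) time, bound <= target + max(heights)."""
    bound = target + max(heights)
    reachable = [False] * (bound + 1)
    reachable[0] = True
    for h in heights:
        # iterate downwards so each brick is used at most once (0/1)
        for s in range(bound, h - 1, -1):
            if reachable[s - h]:
                reachable[s] = True
    return next((s for s in range(target, bound + 1) if reachable[s]), None)
```

To recover which bricks to use, you would additionally store a predecessor per reachable sum; the sketch only returns the optimal total height.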
|
|
|
|
|
Correct.
Thanks for the pointer to GLPK and Gurobi.
|
|
|
|
|
Hello!
Can you mathematicians help me? I've been stuck for weeks trying to understand this (least squares conformal mapping).
In the file uvedit_parametrizer.c (in the Blender source code), there is this part of the code:
/* angle based lscm formulation */
ratio = (sina3 == 0.0f) ? 1.0f : sina2 / sina3;
cosine = cosf(a1) * ratio;
sine = sina1 * ratio;
EIG_linear_solver_matrix_add(context, row, 2 * v1->u.id, cosine - 1.0f);
EIG_linear_solver_matrix_add(context, row, 2 * v1->u.id + 1, -sine);
EIG_linear_solver_matrix_add(context, row, 2 * v2->u.id, -cosine);
EIG_linear_solver_matrix_add(context, row, 2 * v2->u.id + 1, sine);
EIG_linear_solver_matrix_add(context, row, 2 * v3->u.id, 1.0);
row++;
EIG_linear_solver_matrix_add(context, row, 2 * v1->u.id, sine);
EIG_linear_solver_matrix_add(context, row, 2 * v1->u.id + 1, cosine - 1.0f);
EIG_linear_solver_matrix_add(context, row, 2 * v2->u.id, -sine);
EIG_linear_solver_matrix_add(context, row, 2 * v2->u.id + 1, -cosine);
EIG_linear_solver_matrix_add(context, row, 2 * v3->u.id + 1, 1.0);
row++;
This prepares a sparse matrix. What kind of equation is being solved? Thanks!
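One way to read those two rows (my interpretation, worth double-checking against the LSCM literature): they are the real and imaginary parts of the per-triangle conformality constraint (v3 - v1) = c * (v2 - v1), with c = ratio * e^(i*a1), which the solver then satisfies in the least-squares sense over all triangles. A quick numerical check with made-up angles and UV positions:

```python
import math

# Made-up interior angles of one triangle (they sum to pi).
a1, a2, a3 = 0.7, 1.1, math.pi - 1.8
sina1, sina2, sina3 = math.sin(a1), math.sin(a2), math.sin(a3)
ratio = 1.0 if sina3 == 0.0 else sina2 / sina3
cosine = math.cos(a1) * ratio
sine = sina1 * ratio
c = complex(cosine, sine)          # ratio * e^(i*a1)

# Made-up UV positions for the three vertices.
v1, v2, v3 = complex(0.2, 0.1), complex(1.3, 0.4), complex(0.9, 1.5)

# The residual the least-squares solve drives toward zero:
residual = (v3 - v1) - c * (v2 - v1)

# Row 1 of the snippet = its real part, row 2 = its imaginary part,
# using exactly the coefficients passed to EIG_linear_solver_matrix_add:
row1 = ((cosine - 1.0) * v1.real - sine * v1.imag
        - cosine * v2.real + sine * v2.imag + 1.0 * v3.real)
row2 = (sine * v1.real + (cosine - 1.0) * v1.imag
        - sine * v2.real - cosine * v2.imag + 1.0 * v3.imag)
```

Geometrically, the constraint says each triangle's third UV vertex must be the first edge rotated by angle a1 and scaled by sin(a2)/sin(a3), i.e. the triangle keeps its shape up to a similarity transform.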
|
|
|
|
|
Find the optimum size of a rectangular box to pack all the small rectangular boxes into.
|
|
|
|
|
Hi Andrew, and thank you very much for your library!
I have a little question for you concerning the BlobCounter method: what is its principle?
Thanks in advance,
Vincent
|
|
|
|
|
You should post your question as a comment to the article it refers to. It's unlikely the author of the article will come here and answer, as there will not be any notification of your question.
selfish adj. Defines someone who does not think of me.
|
|
|
|
|
I would like to share some interesting results of playing with the N-Queens problem solver. The following plots represent the distribution of the number of solutions depending on the arrangement of a subset of queens. The distribution is built by iterating over the possible permutations of such a subset and counting the number of all solutions containing the current permutation (i.e. by solving the N-Queens Completion Problem for each permutation).
In this particular case, the subset consists of three queens occupying the first three adjacent columns, and only the permutations without overlaps (where no queen attacks another) are enumerated. The subset length affects the resolution of the plot but not the general nature of the distribution.
plots_img
plotly_link
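For anyone who wants to reproduce the distribution, a small sketch (N = 8 here, first three columns fixed; every solution contributes to exactly one prefix, so the counts must total the 92 known 8-queens solutions):

```python
from itertools import product

def count_completions(n, prefix):
    """Number of n-queens solutions whose first len(prefix)
    columns hold queens at the given rows (one queen per column)."""
    def ok(rows, col, row):
        return all(r != row and abs(r - row) != col - c
                   for c, r in enumerate(rows))

    def solve(rows):
        col = len(rows)
        if col == n:
            return 1
        return sum(solve(rows + [r]) for r in range(n) if ok(rows, col, r))

    rows = []
    for col, r in enumerate(prefix):
        if not ok(rows, col, r):
            return 0            # attacking prefix: no completions
        rows.append(r)
    return solve(rows)

# distribution over all non-attacking 3-queen prefixes, N = 8
n, k = 8, 3
counts = {}
for p in product(range(n), repeat=k):
    c = count_completions(n, list(p))
    if c:
        counts[p] = c
```

The plain backtracker is fine at this scale; for larger N you would memoize or prune more aggressively.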
|
|
|
|
|
If you want to offer this sort of information then you should write a proper article. The forums are more for technical questions.
|
|
|
|
|
OK! I just thought that the amount of information I have at the moment is not enough for a new article.
|
|
|
|
|
You can always post it as a Tip if there is not enough for a full article. But either way, it will be more accessible there than in the forums.
|
|
|
|